The security of public key cryptography relies on computers not being able to generate anywhere near 2^256 guesses in any reasonable length of time. The obvious implication of a computer this powerful is that Bitcoin and all other cryptocurrencies would be hacked immediately. But what other, less obvious destruction could a computer with this capability cause? What would immediately tumble if the power of this computer were directed at it?
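For scale, here is a rough back-of-the-envelope comparison (a sketch in Python; the 10^18 guesses/second figure for a generous present-day classical attacker is an illustrative assumption, not a measurement):

```python
# Compare a 2^256 keyspace against a fast classical attacker
# and against the hypothetical alien machine from the question.
KEYSPACE = 2 ** 256
SECONDS_PER_YEAR = 365 * 24 * 3600

# Assumption: a very generous 10^18 guesses/second for today's hardware.
classical_rate = 10 ** 18
years_classical = KEYSPACE / (classical_rate * SECONDS_PER_YEAR)

# The alien machine from the question: 2^256 guesses per second.
alien_seconds = KEYSPACE // (2 ** 256)

print(f"classical attacker: ~{years_classical:.1e} years")  # ~3.7e+51 years
print(f"alien machine: {alien_seconds} second")             # 1 second
```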
-
Do they bring 8 billion of them and hand 'em out for free to everyone, or do they bring 1 and drop it off at Fort Meade? (I know you're thinking somewhere in between, but where in between? If they give us the schematics but only a handful of companies can build them, and the biggest / most-oppressive governments can lean on those companies, the answer will be different from "anyone can build it in their backyard with a hammer and a couple screws.") – Sep 02 '16 at 06:05
-
Recommended reading: Amount of simple operations that is safely out of reach for all humanity? on [security.se], as well as If we had a “perfectly efficient” computer and all the energy in the Milky-way available, what number could it count to? on [physics.se]. You may also be interested in When you start talking about numbers as small as 2⁻¹²², you have to start looking more closely at the things you thought were zero. – user Sep 02 '16 at 06:50
-
I clarified the question's title a little, because I felt that the "/s" at the end was easy to miss. (I certainly missed it at first, and that makes for a very different question.) – user Sep 02 '16 at 06:53
-
Could this not simply be defeated by increasing the size of our keys up to 64,000 bits? Or 640,000 bits? Or would this be problematic (even when you have a computer that can find your primes of this size)? – Jeff Sep 02 '16 at 07:37
-
@Jeff: generally speaking with the crypto primitives we use, God can make a rock so heavy that He can't lift it. So if everyone has an alien computer then the playing field is back to normal, the defender has a huge advantage over the attacker, and you just choose "big enough" keys, hash algorithms, etc. Might take a while to update protocols of course. I think actually that it's not even necessary for everyone to have alien computers, and that current PCs have the clock cycles to use "big enough" keys already. High-traffic secure websites would need more hardware, though. – Steve Jessop Sep 02 '16 at 08:27
-
@SteveJessop A desktop system might have the brute force power to do something like that meaningfully, but last I looked, actual desktop systems represent a shrinking share of the personal computing universe. Now we are looking more and more at tablets, laptops and smartphones, all of which are computationally constrained in ways that desktop systems aren't. – user Sep 02 '16 at 08:58
-
Also, the point of encryption is to transform a large secret (the plaintext) into a small secret (the key). If the key is on the same order of magnitude size as the initial secret, a lot of assumptions get turned upside down. The extreme case of this is the one-time pad, which is often misunderstood but when used correctly is provably secure in terms of confidentiality of the data. Textbook OTPs are, however, potentially undetectably malleable, which is usually a property we don't want in a cipher, and which needs to be mitigated somehow. – user Sep 02 '16 at 09:03
-
@MichaelKjörling: same applies for tablets, laptops and smartphones. They all have the power to be the client in a system that uses, say, 1024 bit symmetric keys. Which is enough to make this 2^256 operations per second alien computer look like a chump. Obviously they'd be slower than current protocols with smaller keys, but slower by one or maybe two orders of magnitude, not slower by the age of the universe. But TLS servers that currently are running hot don't have an order of magnitude to spare, so will fall over and need to be replaced with more or bigger servers :-) – Steve Jessop Sep 02 '16 at 10:50
-
RFID and the like might be in trouble, mind. I don't know how much power and CPU cycles they have to spare at the moment. And you might want a bigger battery on your phone. Maybe the main effect of this alien computer, given enough time for the dust to settle and protocols to be enhanced, will be mass suicide of Apple designers because they have to put several mm back on the thickness of their devices ;-) – Steve Jessop Sep 02 '16 at 10:54
-
@MichaelKjörling The key to the brute force argument is that there is an asymmetry between encryption/decryption and breaking. Doubling your key size doubles the amount of work required for encryption/decryption, but squares the amount of work that must be done for breaking. When it comes to smartphones, it's not actually their computing power that matters but the difference in computing power between that and the cracker's computer. Smartphones are weak compared to NSA supercomputers (or this alien's computer). However, there are limits on how powerful the adversary's computer can get.. – Cort Ammon Sep 02 '16 at 14:57
-
.. and they hit that limit far faster than most modern devices hit the limit for encryption/decryption. That being said, it is worth noting that most smartphones come with specialized hardware to do AES 256-bit encryption rather than relying on their anemic general purpose processors to do the job. They save a ton of power by using hardware to do that, and that means longer battery life. If AES got broken, we might have to use the CPU to do encryption for a while, until the next standard gets put in hardware. – Cort Ammon Sep 02 '16 at 14:58
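The asymmetry in that comment is easy to demonstrate with a toy cost model (a sketch; "defender's work grows linearly with key length" is a deliberate simplification, not a benchmark of any real cipher):

```python
# Toy model: the defender's work grows roughly linearly with key
# length, while the brute-force attacker's work grows exponentially.
def defender_cost(bits):
    return bits            # simplification: work ~ key length

def attacker_cost(bits):
    return 2 ** bits       # exhaustive search of the keyspace

# Doubling the key size from 128 to 256 bits doubles the defender's
# work but *squares* the attacker's: 2^256 == (2^128)^2.
assert defender_cost(256) == 2 * defender_cost(128)
assert attacker_cost(256) == attacker_cost(128) ** 2
```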
-
In the long run, we simply beef up our encryption. Now it would require 2^256 guesses on average to break, but only take slightly longer to validate for the appropriate algorithm and key. – Devsman Sep 02 '16 at 15:20
-
Careful! With a computer capable of performing enough calculations per second (At one point I estimated between 10^101 and 10^303 ops/second) you can actually simulate the entire universe, rendering the whole question of decryption moot. – Michael Sep 02 '16 at 17:33
-
@Devsman On average, using brute force to find a given, random 256-bit value, such as a brute force key recovery attack on a 256-bit key, will take 2^255 time. Only in the absolute worst case will it take 2^256 time, and if you luck out, it will take 2^0 time (if you pick the correct key at your first attempt at guessing). In most practical cases it should take somewhere between 2^254 and 2^256 time, simply because the odds that you check the correct key early in the process are rather small. Note that 2^254 is larger than the sum of all values 2^0 through 2^253. – user Sep 03 '16 at 09:52
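That expected-value argument can be checked empirically on a scaled-down keyspace (a sketch; a 16-bit key stands in for a 256-bit one so it runs in milliseconds):

```python
import random

# Average number of guesses for a sequential brute-force search of a
# random 16-bit key, over many trials: it converges to half the space,
# mirroring the 2^255 expected cost for a 256-bit key.
BITS = 16
SPACE = 2 ** BITS
TRIALS = 5000

total = 0
for _ in range(TRIALS):
    secret = random.randrange(SPACE)
    total += secret + 1   # a 0..SPACE-1 scan finds the key on guess secret+1

average = total / TRIALS
print(f"average guesses: {average:.0f} of {SPACE}")  # close to SPACE / 2
```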
-
@Michael Arguably no. If there is true randomness in the universe then no computer will ever be able to truly simulate it, just some of the possible evolution paths. – Bakuriu Sep 03 '16 at 10:52
-
shouldn't there be a hard-science tag? – njzk2 Sep 03 '16 at 16:48
-
Update your Bitcoin client, we went to 2^512 difficulty keys – MolbOrg Sep 04 '16 at 18:46
-
It is silly to presume any half-decent security system will let you try until you get it, I'd say 5 tries and bye bye forever. Also, there is the matter of input - regardless of how fast your hypothetical system is, it will be bottlenecked by the input capacity of the presumably vastly inferior target. So pretty much it wouldn't be doing you any significant good, regardless of how fast it is. On the upside - it may be capable of running Crysis in software ;) – dtech Sep 05 '16 at 17:53
-
Major operating systems and web APIs develop still more layers of abstraction until the point where spreadsheets are barely usable is once again achieved? – dmckee --- ex-moderator kitten Sep 06 '16 at 00:56
-
@ddriver "It is silly to presume any half-decent security system will let you try until you get it". Hackers steal the hash of the passwords, and download them to their system, now all bad tries protections are worthless. You are cracking offline with no connection to the host computer so they don't know you have failed 1 billion trillion trillion times. – cybernard Sep 06 '16 at 03:29
-
Is this a custom computer like an ASIC that can basically only do password brute forcing? Or a general PC that can run basically any software? Like Windows or Linux. – cybernard Sep 06 '16 at 03:39
-
@cybernard - you are talking very poor security here, the kind that has become the "norm" for corporations which apparently don't give a damn about leaking user information. I assume a decent amount of security won't even have the sensitive data on a computer that's connected to the internet, but would use some hardware interface which is reprogrammed on a regular basis and only allows for "appropriate" usage of that data, and certainly not bulk download. Also, note that despite the very frequent user data breaches, there are barely any breaches when it comes to THEIR sensitive information. – dtech Sep 06 '16 at 10:30
-
@MichaelKjörling's link; Thomas Pornin's answer says the sun converted to energy could power max 2^225 calculations - and the sun is half used up so assume max 2^225 powered so far; Earth gets ~0.000000045% of its energy output. Does that imply the 4 billion year evolution of life happened in << 2^194 'calculations' - and this alien computer could simulate evolution/life on Earth from scratch to present day about ... four quintillion times every second? – TessellatingHeckler Sep 06 '16 at 17:45
-
@ddriver Your bank is connected to the internet, it's all over from there. Most users re-use their passwords, even on their bank account. Download all data breaches, crack all the passwords, and try the username and password at each bank and etc. Hackers break safe guards all the time as a matter of routine practice. Use my pool of 100,000 bots so the ip address keeps changing to stay ahead of detection. – cybernard Sep 06 '16 at 21:24
-
@cybernard - ok, granted if you keep trying random keys and eventually get "password" then you could make a valid guess this is my password. But if one key gives you a "S08D9dsj1Sd" and another key gives you a "ls8Hksodkk87" how do you know which is the right password to try and gain access rather than get locked out for incorrect tries? And the IP doesn't really matter, as you have like 5 tries per login, regardless of the IP. The account is locked and the user is contacted on the phone to investigate the issue. You may hack "the idiots" but that won't gain you much... – dtech Sep 06 '16 at 23:28
-
@cybernard - your understanding is based on all those leaks you hear about on the news, but those are all products of atrociously poor security measures. Unhackable security is quite doable, to the degree you could only gain access to the data if you have physical access, have the machine operator in your hands to torture and his family to threaten to kill them. The industry simply doesn't care enough about user's data to go through such lengths, and they know regardless of how bad it looks, it is not like the user can leave and go somewhere better, it is all the same. – dtech Sep 06 '16 at 23:36
18 Answers
The destructive power of this device would be immense. If you defeated the safeguards on it, it would become the single most powerful bomb ever envisioned.
Doing irreversible calculations, as described here, takes energy. It turns out there's a bare-bones minimum amount of energy required to set 1 bit, based on the entropic content of that data and the temperature. A computer doing irreversible operations will naturally warm up to the temperature of its heatsink, and the coldest heat sink we can get is 3K, the temperature of the background radiation of the universe. You can try to cool it lower than that, but you end up burning more energy than you save.
As a result, a minimum of $2.87 \times 10^{-23}$ J/bit of energy is wasted every time we flip a bit in an irreversible computer. If we had a reversible computer, this limit would not apply; but with reversible computing, the number of calculations is no longer the meaningful unit of measure, so it would not fit your question.
It turns out that just to run a counter from 0 to $2^{256}$ takes a lot of energy.* A lot of energy. In fact, using that bare bones minimum energy per bit-flip, it will consume 3/4 of the energy in the known galaxy. That's just to run the counter, not even doing any calculations.
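The per-bit figure can be reproduced directly from Landauer's principle (a sketch; modeling the counter as roughly 2^257 total bit flips is my assumption — incrementing through 2^N values flips bit i about 2^(N-i) times, which sums to about 2^(N+1)):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 3.0              # cosmic microwave background temperature, K

# Landauer limit: minimum energy dissipated per irreversible bit flip.
e_bit = k_B * T * math.log(2)
print(f"energy per bit flip: {e_bit:.2e} J")    # ~2.87e-23 J

# Counting from 0 to 2^256 costs roughly 2^257 bit flips in total.
total_energy = 2 ** 257 * e_bit
print(f"total energy: {total_energy:.2e} J")    # ~6.6e+54 J
```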
So, given a device with 3/4 of the energy of a galaxy, I think we'd want to respect its integrity. The destructive power of this computer would be unimaginable if it were simply disassembled and turned back into usable energy.
* As pointed out in the comments, counting like this is a reversible operation. In this case, I am assuming the counter is implemented using irreversible logic like that found in a modern ALU. This counter is my surrogate for the general-purpose calculations that we could have been doing, such as calculating SHA-1 checksums. This operation is within a factor of a thousand of the most trivial irreversible operation possible (erasure of an unknown bit).
-
Comments are not for extended discussion; this conversation has been moved to chat. – Serban Tanasa Sep 02 '16 at 14:52
-
Why are you assuming that said operations are irreversible? You can emulate an irreversible computer with a reversible one. It could "easily" be "do 1 irreversible operation to set state, do 2^192 reversible operations, do 1 irreversible operation to copy out the relevant state, undo said 2^192 operations, repeat", at which point it's "only" ~2^64 irreversible operations per second. – TLW Sep 02 '16 at 15:14
-
OMG.. 75% of the galaxy's energy, engulfed in a FIZZBUZZ loop. We're doomed. – Mathieu Guindon Sep 02 '16 at 15:23
-
@TLW You can, in theory, convert any irreversible computation of a finite length into a reversible computation. However, the general case solution involves saving off the old state of every bit that might have been erased. This can take exponential amounts of space. Your 2^192 reversible operations may also require 2^192 bits of memory to reverse! This limit is far more demanding than the energy limits of irreversible computing on any reasonable scale. In theory, if you were to do an algorithm which takes 2^256 operations, you could make it reversible using O(2^256) bits of memory. – Cort Ammon Sep 02 '16 at 15:38
-
If your reversible computer is memory bound (i.e. not an infinite computer provided by a deity), there is a really good chance that there is no series of 2^192 reversible operations which lead to a meaningful state to irreversibly measure off. This is not always true (Shor's algorithm being the most famous example), but it is the general case. – Cort Ammon Sep 02 '16 at 15:40
-
Could you explain this answer in layman's terms? My head hurts... In particular the irreversible operations concept and how that affects energy. – Matthew Peters Sep 02 '16 at 16:47
-
"If you defeated the safeguards on it, it would become the single most powerful bomb ever envisioned." - I did some back-of-the-envelope calculations, and it would take a fraction of a second for the energy used to have enough mass to create a black hole larger than the Earth. So the safeguards would have to include very powerful antigravity tech. – Rob Watts Sep 02 '16 at 17:03
-
@MatthewPeters The basic idea behind an irreversible operation is that some information must be "lost" along the way... typically in the form of thermal heating. For example, "round to the nearest power of 10" is irreversible because you don't have enough information in the rounded outputs to figure out what the original number was. A solution to make it reversible is to output both "round to the nearest power of 10" and "save off the original input," but that requires extra storage, and in this question, it'd take a lot of it – Cort Ammon Sep 02 '16 at 17:20
-
As it turns out, there's a connection between energy and information (see the link), so if you know how many bits of data your calculation erases, you can determine the absolute bare minimum amount of thermal energy required to do that calculation. Realistic computers consume far more than that minimum, but as we see, even the minimum is a pretty gargantuan amount of energy. – Cort Ammon Sep 02 '16 at 17:21
-
I don't think this holds. The physics model I used to permit 2^256 in a reasonable amount of time inherently protected from the energy disaster as well. – Joshua Sep 02 '16 at 17:27
-
@Joshua The question does say that the aliens "import" this computer. It doesn't fully define what "import" actually means. If the computer indeed followed the laws I used in my answer, "import" might not mean "brought to earth' as much as "parked in orbit around the Large Magellanic Cloud." That would be a much safer distance for a galaxy devouring computer, don't you think? =) – Cort Ammon Sep 02 '16 at 17:41
-
Question does not have hard-science or even science-based tags, so I believe this answer totally misses what the question is asking. The processing power of the computer is given, so add any technology which makes it possible (for example, doing calculations in a self-energized pocket universe optimized for the purpose, just sending program and data in, and getting results out). – hyde Sep 02 '16 at 19:33
-
@hyde I felt the science based answer gave useful insight. If you assume magic instead of science, the answer is a lot more boring. The transient cases are all dependent on the exact implementation of the interface and how many computers the aliens bring and how much we know about them. The steady state case is "nobody cares, because we just bump it up to 512-bit encryption and win the arms race." Far less entertaining. – Cort Ammon Sep 02 '16 at 20:48
-
@CortAmmon And some of us already bumped up our crypto. Better bring a few extra universes to consume on the computation you'll need for that... – Michael Hampton Sep 02 '16 at 22:58
-
I guess it's safe to say no alien life in our galaxy possesses this technology YET. – NuWin Sep 02 '16 at 23:16
-
@CortAmmon The answer does make a useful point about realism. Going further down that road, it could be better if it was expanded to speculate how long we could run it with our current energy sources, and what could we do then. It'd mean that any calculation we can pay for in energy could be done in a blink of an eye. – hyde Sep 03 '16 at 04:16
-
@CortAmmon a computer "parked in orbit around the Large Magellanic Cloud" (which is a galaxy, BTW, and we normally don't speak about orbiting them) would be over 150 000 light years away, and no use for humans. What we would notice is Large Magellanic Cloud slowly collapsing into a single point by an unknown force (which might be tricky from the momentum conservation standpoint, but w/e). – John Dvorak Sep 03 '16 at 09:50
-
The Landauer bound has been avoided by using reservoirs other than energy, and a new publication seems to refute it with a measurement of 5% of the supposed limit. – JDługosz Sep 03 '16 at 10:36
-
I think there is an easy fix. As TLW said, each time there is enough bits to be erased, clone the useful part of the program state, and undo all the computations to reset the other part of memory to zeros, to free the memory. But make a stack of those clones, and a counter of how many times this is done. In the kth time revert it to the state after the (k&(k-1))-th time. This way the memory usage would have a factor of log(time), and the time would be squared. But the 2^256 reversible operations per second computer is still quite powerful. – user23013 Sep 03 '16 at 12:12
-
And for uses such as brute forcing an encryption key, you could simply reverse the computations after trying each key. A naive and maybe not optimal way of collecting the answer is, for each bit in the key, assume this bit is 0 and use a counter to check whether there is a key that works. – user23013 Sep 03 '16 at 12:22
-
@user23013 We actually do checkpointing algorithms like that in real software. However, to use it in encryption, you would need the encryption algorithm to happen to be weak to that particular logarithmic set of checkpoints. There's no guarantee that such a breakdown exists for an algorithm. The worst case is when you know you took the wrong path between 2^254 and 2^254 and need to backtrack, but you don't know where you need to backtrack. At that point, you still have 2^254 states to check, and you have to use irreversible logic to create all of them. – Cort Ammon Sep 03 '16 at 14:36
-
@JDługosz The papers discussing avoiding the Landauer limit recognize that there is currently no known way to generate a reservoir of a non-energy conserved value which takes less energy than you'd use by just using the energy to power the computer directly. It makes the spin reservoir they used in the paper into a giant battery. They even suggested how Laplace's Daemon could use such a reservoir to generate work from an ideal gas in two chambers! It's cool stuff, though. Certainly a clever way to approach things. – Cort Ammon Sep 03 '16 at 14:39
-
@CortAmmon What do you mean? Neither of my two algorithms had the concept of backtracking. – user23013 Sep 03 '16 at 15:25
-
The earlier reports talk about non-energy reservoirs such as spin, but this paper directly measured a mechanical implementation and came up with numbers far lower than the L.B. There's no mention of spin or anything in the new report I linked to. – JDługosz Sep 03 '16 at 17:05
-
@JDługosz Ahh, I clicked through to the next article and then to that paper's article rather than clicking on that article's paper! Totally different setups! It looks like this paper uses MEMS constructions and non-equilibrium thermodynamics. That's actually quite the fascinating approach. Non-equilibrium thermodynamics is a strange place where you can get unexpected things like objects which are colder than absolute 0, yet act hotter than the sun. Landauer's principle is soundly based in equilibrium thermodynamics, so it does seem reasonable that this approach might truly sidestep it. – Cort Ammon Sep 03 '16 at 21:20
-
@CortAmmon can you explain reversible vs irreversible in this context? Google returns confusing results. – NuWin Sep 04 '16 at 15:56
-
@NuWin From my conversations with JDlogosz, it's a bit more nuanced than I have made it appear, but the general idea is that a circuit is reversible if it has enough information to be returned to its state before the previous calculation without adding additional energy. An irreversible one is a circuit which cannot return to its previous state because it "forgot" information from its inputs. The classic example of irreversibility is an OR gate because if I say "event A or event B occurred," which is the output of the OR gate, you do not have enough information to figure out if event A occurred – Cort Ammon Sep 04 '16 at 16:58
-
event B occurred, or both. When you "forget" information, the reset process must consume energy, per Landauer's principle. – Cort Ammon Sep 04 '16 at 16:59
-
This assumes a binary system. Since it is alien technology, I guess it has the possibility to be something different than binary. – Mixxiphoid Sep 05 '16 at 06:37
-
But...the OP didn't ask if the computer could destroy the world (and probably the solar system, and possibly a few nearby star systems as well) if you tampered with it? Or at least, I don't think this was their intent when they asked "what other less obvious destruction could a computer with this capability provide?". I think they probably meant things like an immediate end to security/privacy for most communications (at least until everyone switches to 512-bit encryption). The answer should at least touch upon those aspects that the OP was probably referring to. – aroth Sep 05 '16 at 13:40
-
@aroth I can see you do not wish to upvote my answer =) The OP did mention that they recognized that cryptography could be hurt, but the question sentence in the post was "But what other less obvious destruction could a computer with this capability provide?" The first step to answering that is to capture just how extreme of a machine we are talking about. If we wanted to, the next step could be to start discussing architecture, IO, maximum memory storage, etc. – Cort Ammon Sep 05 '16 at 15:39
-
Since it is an alien device, power may be a non-issue: it will either have an infinite battery, or be so power-efficient that power is a non-issue. – cybernard Sep 06 '16 at 03:14
-
@cybernard My answer does assume that the alien device follows the current known laws of physics. If you bring in magical devices (by Arthur C. Clarke's definition), strange concepts like infinite battery become possible. However, at that point the reality is that our entire existence as a species is now utterly dwarfed by the existence of this device, so some of the things which might be destroyed by it are not only cryptography and bitcoins, but in fact human society as we know it. In such a case, the question is less about what the computer could do, and more what the aliens will choose – Cort Ammon Sep 06 '16 at 03:41
-
@Mixxiphoid "This assumes a binary system." You misunderstood the word "information". We normally use the word "entropy" instead of "information", so we assume the computer adheres to Physics, specifically the Second Law of Thermodynamics. – Aron Sep 06 '16 at 04:41
-
@cybernard The calculations that CortAmmon made are assuming 100% (Carnot) efficiency. We aren't debating whether the aliens have infinite power source. We are in fact inferring that the computer does in fact have infinite power. The consequence being that we could use that infinite power supply as a huge universe destroying bomb. – Aron Sep 06 '16 at 04:46
-
If we are speculating a search system able to brute-force search a 2^256 space then it seems a bit pointless to assume it is bounded by our current technology as regards energy for bit storage and switching. I'm therefore not persuaded by an energy argument. – Stilez Sep 06 '16 at 08:38
-
The energy talked of here is not "current technology" but just fundamental physics. It mentions in the answer that current technology is vastly less efficient than the limit used, but the limit used is simply a limit set by physics itself, not by any specific technology. – Erik Sep 06 '16 at 09:31
-
@Aron (Carnot) efficiency is nothing to aliens capable of building this machine. They have long ago cast that aside as nonsense. Compared to them all the rules we know are woefully incomplete and/or wrong. Using technology that makes quantum computing look like stone knives and bear skins. It probably has the equivalent Carnot efficiency of 1000000%. – cybernard Sep 06 '16 at 21:17
-
@cybernard I addressed that issue in my answer. This answer assumes that the aliens are using the kind of computer whose capacity limits are in terms of operations per second. If you open the door to quantum computing, we measure quantum computers fundamentally differently such that the OP's units of measure would not make sense. If we open the door further to "magic" computing like you describe, then there is no grounding with which to describe the effects. They are simply whatever the author feels they should be. – Cort Ammon Sep 06 '16 at 21:54
-
Why would it be a bomb? Considering that this is energy TAKEN not created, what if said computer was 100% efficient with its energy use? – Sep 14 '16 at 19:28
-
@tuskiomi My answer is based on the assumption that the computer contains its own power supply. If we go with your assumption, that the energy is taken from nearby, the result is even more devastating: every star in the Milky Way, including our own, is consumed, winking out of existence. The sky goes dark, and we freeze. This is the end of humanity, and life as we know it. BUT, we finally know what data Snowden's been keeping in reserve. – Cort Ammon Sep 15 '16 at 01:38
-
Why are all of your answers so entertaining to read through... Although: assuming the computer is capable of transferring even very small amounts of information from one part of the processor to another (which is a reasonable assumption for a functional computer)... Why couldn't it just funnel any excess energy from one irreversible operation into the input energy used to carry out the next irreversible operation, only expending additional energy as necessary? – trevorKirkby Jun 03 '18 at 04:07
-
@someone-or-other I've had my good days with some of these answers. As for the irreversible operations, they result in the generation of heat, not usable energy. There's some clever things you can do to use that heat, like Matroska brains, but the limits from Landauer's Principle are hard to circumvent. You could go to quantum computing, which is reversible, but that can quickly need a lot of space rather than energy. – Cort Ammon Jun 03 '18 at 05:17
As far as decryption is concerned: the encryption systems currently in use have key sizes that make them absolutely impossible to crack using known technology. Those key sizes would fall if you had 2^256 operations per second available. So what would you do? Increase the key size. RSA with 1024-bit keys is close to uncrackable today. Not completely out there, but very hard. RSA with 4096-bit keys would be uncrackable by the alien computer.
It would be a bit harder to use with our native hardware, but not that hard. The same goes for symmetric keys; you would have to rearrange your algorithms a bit, but use a 512-bit key where today 256 bits are considered total overkill, and you are fine.
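The "a bit harder but not that hard" claim is easy to sanity-check on ordinary hardware (a sketch; the random odd modulus is a stand-in for a real RSA key, used only to time the core modular-exponentiation operation, not to do real cryptography):

```python
import random
import time

# Time one modular exponentiation (the core RSA operation) at the
# key sizes discussed: larger keys cost the defender more, but only
# polynomially more -- nothing like the attacker's exponential cost.
for bits in (1024, 4096):
    n = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # stand-in odd modulus
    base = random.getrandbits(bits) % n
    exponent = random.getrandbits(bits)
    start = time.perf_counter()
    pow(base, exponent, n)
    elapsed = time.perf_counter() - start
    print(f"{bits}-bit modexp: {elapsed * 1000:.1f} ms")
```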
-
4,096-bit RSA only gets you an approximate 109-142 bits work factor, depending on which authority you subscribe to. Give https://www.keylength.com/ a try (you want factoring modulus size for RSA). Now, there is no direct equivalence between work factor and time required, but it is good enough to be useful. So to a first order approximation, 4,096-bit RSA, against an adversary capable of 2^256 calculations per second, might hold up for a few seconds, but I can't see it faring better. Remember, as you say we can envision key recovery attacks on 1,024-bit RSA, but not even 128-bit AES. – user Sep 04 '16 at 19:49
It would make most of our current encryption systems obsolete. However, it would also make new systems possible. These new systems will be unbreakable until the next set of aliens arrive.
Net result: Many old secrets will be revealed. But new secrets would still be secret.
There will be a transition period before we adjust. History shows us that criminals adjust faster than business and law enforcement. That could be chaotic, for a while. But then things will settle down.
Passwords will become a thing of the past. Any password a human can remember, these computers can break.
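That claim is straightforward to quantify (a sketch; the 95-character printable-ASCII alphabet and the password lengths are illustrative assumptions):

```python
import math

# Entropy of a random password over 95 printable-ASCII characters,
# compared against a machine making 2^256 guesses per second.
ALPHABET = 95
budget_bits = 256    # the alien machine's one-second guess budget

for length in (10, 20, 38):
    bits = length * math.log2(ALPHABET)
    print(f"{length} chars: ~{bits:.0f} bits")   # 10 -> ~66, 38 -> ~250

# Even a random 38-character password has under 256 bits of entropy,
# so the machine exhausts its whole keyspace within a single second;
# only at 39 characters does it finally pass the one-second budget.
assert 38 * math.log2(ALPHABET) < budget_bits < 39 * math.log2(ALPHABET)
```

A memorable human password is far shorter and far less random than these worst cases, so it falls effectively instantly.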
I think biometric recognition would have to replace it. Today that doesn't work too well, but with better computers we can do a better job of it. You might have to both look into a camera and speak into a microphone to identify yourself. Maybe other sensors can be used too, like smell sensors, or signature recognition (with writing speed and pen pressure added to the data). With enough computing power, the possibilities are endless.
However, one thing is certain. Computer programs will be written that are complex enough that even these computers will seem slow to their users.
-
No, it will make asymmetric encryption obsolete. It will do nothing to symmetric encryption, and a 10-letter password will still be as weak/strong as before. It's very nice that such a computer could try so many passwords in a second, but that's quite irrelevant when it will spend years waiting for the I/O :) – Luaan Sep 02 '16 at 11:55
-
... and biometrics are not a panacea. A password that can never be changed and that you are constantly expressing in public is quickly useless. – Eric Towers Sep 02 '16 at 13:04
-
Biometrics are poor security, as @EricTowers rightly points out. Worse is when your biometrics do change, and you can't get into your stuff any more. Voice authorization when you have a cold? Fingerprint scan after you burn your finger? Breath analyzer after you eat some really hot curry? "I'm sorry, Dave, I'm afraid I can't do that." – Mar Sep 02 '16 at 19:11
-
@MartinCarney Even worse than that, biometrics are terrible for encryption since they are so "fuzzy". They are sort of passable for authentication, but not as keys/passwords for encryption. This is why iPhones can't be unlocked with a fingerprint after reboot: http://security.stackexchange.com/a/134393/20035 – Patrick M Sep 03 '16 at 04:48
-
This will not make asymmetric encryption obsolete. The computer is fast but still polynomially bound. Increasing the key size for future encryption would be sufficient. – Sep 03 '16 at 10:12
-
Quite a few biometric measures would be useless too. I don't know how many distinctive retinal or fingerprint patterns there are, but it might be possible to brute-force search the space. Especially since quite a few biometric traits vary even for a single person, so there has to be quite a wide space of biological traits encoded to the same biometric value – Stilez Sep 04 '16 at 19:32
-
@Luaan "It will spend years waiting for the I/O". No, it's alien tech; they have crazy fast I/O and it won't be a problem. Even now we can get 2–4 GB/s with an M.2 SSD. An alien computer capable of 2^256 operations per second will have I/O to spare. In order to achieve 2^256 it will need many times that in overall speed, as encryption/decryption takes more than a single CPU instruction. – cybernard Sep 06 '16 at 03:21
-
@cybernard That's all nice and fine, but it's not going to help you one bit when the server you're connected to is linked with a 1 Gbps cable :) Unless you suggest that the aliens also replaced all our networks with their magi-technology, as well as all the servers and all the other computers and all the paper... I/O is a two-way affair - if the device you're connecting to doesn't have the bandwidth, you're just as screwed as if your computer doesn't :) – Luaan Sep 06 '16 at 08:02
-
@Stilez Well, the last time I worked with a fingerprint scanner, the pattern was something like 160 bytes, so even if we assume it has 100% perfect entropy, it only gives about 1280 bits. And they have nowhere near 100% perfect entropy - as Martin and Patrick noted, the scanner must be very forgiving, since the match is never even close to perfect. On most scanners, you can pick on a scale of "easy to match" versus "chance of mismatch". On the highest security settings, it may routinely take multiple tries to get through, even right after you made the pattern. On the lowest, the mismatch rate can be as high as one in a hundred. – Luaan Sep 06 '16 at 08:05
-
@Luaan - DNA or neuronal connections as a biometric? Those are probably not too far away (decades at most?) and potentially a huge bit-count equivalent. So maybe only current, not future, biometrics would be affected – Stilez Sep 06 '16 at 08:29
-
@Luaan "it's not going to help you one bit when the server you're connected to is linked with a 1 Gbps cable" Hackers steal password hashes by the millions; every week there is a new data breach. So we take all the hackers' raw dumps and crack the passwords from those. Then we log in to the accounts, steal all the money, and change the passwords. – cybernard Sep 06 '16 at 21:08
-
It's not really true about human's ability to remember passwords. Many people already routinely use passwords they can't or don't want to remember and it's not a problem for them. If these passwords get larger (but not larger than, say, 1 MiB per password) then nothing will change drastically — HDDs are still cheap enough. – Display Name Aug 31 '17 at 07:03
I'm not sure we would even understand the limits on what this computer could do.
However, I know one thing it would not be able to do quickly: simulate a monkey typing out a copy of Shakespeare's Hamlet by random typing. As Wikipedia remarks:
However, for physically meaningful numbers of monkeys typing for physically meaningful lengths of time the results are reversed. If there were as many monkeys as there are atoms in the observable universe typing extremely fast for trillions of times the life of the universe, the probability of the monkeys replicating even a single page of Shakespeare is unfathomably minute,
People rarely have an intuitive understanding of the difference between really big numbers and the infinite. 2^256 is a really big number (OK, not so much when compared to, say, Graham's Number). But infinity is completely different.
The reason for the comparison to the infinite is that this example is often phrased in terms of an infinite number of monkeys. With infinite monkeys you get Hamlet, Macbeth, etc., including translations into every language, as well as everything else that can be typed, without mistakes, in the time it takes to type it.
Really big, as in 26^130,000 for Shakespeare, is so far beyond 2^256 that the computer will not dent the problem before the heat death of the universe. There are many computer algorithms that act more like the Shakespeare problem in terms of needed computation time than you might expect intuitively. Just because an algorithm is known does not always make the problem solvable.
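A quick sanity check of the arithmetic above; 130,000 letters is the answer's own rough figure, and ~4.3e17 seconds is the approximate age of the universe:

```python
import math

# The answer's rough model: Shakespeare as ~130,000 letters from a 26-letter alphabet.
search_space_bits = 130_000 * math.log2(26)   # ~611,000 bits of search space

# Even at 2^256 tries per second, the achievable exponent barely moves:
alien_bits_per_sec = 256
seconds_in_universe_age = 4.3e17
budget_bits = alien_bits_per_sec + math.log2(seconds_in_universe_age)  # ~315

print(f"search space: 2^{search_space_bits:,.0f}")
print(f"tries possible in one universe-age: 2^{budget_bits:.0f}")
# The gap is ~610,000 bits of exponent -- the computer doesn't dent the problem.
```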
-
Seems like a bit of a non-answer to me. It also seems like the answer conflates a lay person's inability to contemplate a trillion monkeys with a mathematician's, cryptographer's, or physicist's ability to compare large numbers or determine the significance of such a number. Which begs the question, IMO. – djechlin Sep 02 '16 at 06:23
-
@djechlin I think he's trying to say that trying every possible combination to decrypt something could take longer than a second for this device, so its high computational power is not capable of infinite wonders – Sarfaraaz Sep 02 '16 at 11:50
-
The concept of infinity has nothing to do with the question that was asked. – Frostfyre Sep 03 '16 at 13:00
-
One monkey for each atom in the universe seems like too low a number of monkeys. What if you had one monkey for each Planck Volume that fits within the volume of the observable universe? That should yield a much more satisfactory number of monkeys. – aroth Sep 05 '16 at 13:53
Most of the existing answers are completely ignoring physics. Assuming you want to compute anything, you need data, and the Bekenstein bound puts a lower bound on the physical size of any device that has any hope of representing a given amount of data. Combined with whatever size you get, the speed of light then gives an upper bound on the propagation of data within the system. 2^256 is such a huge number that even if your data size were just a few bits, you could not reach anywhere near that computation speed. So your computer simply does not exist.
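A rough way to quantify this, in the same spirit, is Bremermann's limit of roughly 1.36e50 operations per second per kilogram of mass-energy; all figures below are order-of-magnitude only:

```python
# Rough physical sanity check using Bremermann's limit
# (~1.36e50 operations per second per kilogram of mass-energy).
BREMERMANN_OPS_PER_SEC_PER_KG = 1.36e50
EARTH_MASS_KG = 5.97e24

ops_needed = 2**256  # ~1.16e77 operations per second
mass_kg = ops_needed / BREMERMANN_OPS_PER_SEC_PER_KG
print(f"mass required: {mass_kg:.2e} kg ≈ {mass_kg / EARTH_MASS_KG:.0f} Earths")
```

That is on the order of 140 Earth masses computing flat-out, before even considering how signals would propagate across a body that large.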
-
There's no hard-science tag. Coming up with plausible ways for such a computer to exist doesn't take much imagination (an artificial pocket universe, or a way to locally alter our current "fundamental" constants, or just a boring alien solid black box with input and output, or...). – hyde Sep 02 '16 at 19:51
-
Indeed. This would be a good answer to "how could a computer be designed such that it can perform 2^256 calculations per second?", but in this case, the OP posits that the computer exists and asks about the effects of such a computer on our world. That's not to say this is invalid or incorrect; just that it's an answer to a different question! – user Sep 04 '16 at 19:54
Practically: It would make any sort of super computer superfluous and it would break all and every encryption currently employed. It would NOT allow password guessing or breaking into a remote computer (not a mathematical problem - you can block access after x attempts and there is no way around this) but forget any sort of digital signature. Forget HTTPS.
-
"all and every encryption currently employed" -- although interestingly it wouldn't break all current crypto primitives in every way. A brute-force SHA-512 pre-image attack is still way out of its league. Given sufficient memory it could find SHA-512 collisions, though, so that's still not great since it's still a break. Not sure what that much memory would look like ;-) – Steve Jessop Sep 02 '16 at 08:09
-
As emphasis: "It would make any sort of super computer superfluous" - this would also change quite some economic landscape. Those computers are EXPENSIVE. Renting out 10% of the computation time of that machine would likely handle all current supercomputers 100 times over... and that means business changes for a lot of companies involved. – TomTom Sep 02 '16 at 11:17
New physics. Attempting a 2^256 calculation with terrestrial timescales and energies is utterly impossible. Cort Ammon's answer is spot on about what happens if you try to build such a computer under our current physical understanding.
Whatever befalls our current internet from the arrival of such a computer is nothing compared to the power that would be unlocked by dissecting it to learn the new physics and rip the secret out of it. I see no non-magical case in which a computer compact enough to be delivered to Earth, yet capable of this, does not unlock for us either warp drive or time travel.
Assuming the physics actually worked (see Cort Ammon's answer about the amount of energy):
Nearly every mathematical problem would be solvable by brute force. And for the ones that wouldn't, the required numbers grow extremely large very quickly.
Chess, for instance, only has about 2^155 positions. So it could evaluate a board to the finish faster than you could decide on a move.
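The arithmetic behind that claim, as a sketch; 2^155 is the answer's own estimate of the position count:

```python
# Time to enumerate ~2^155 chess positions at 2^256 evaluations per second.
chess_positions = 2**155
evals_per_sec = 2**256
seconds = chess_positions / evals_per_sec  # exactly 2^-101 seconds, ~4e-31 s
print(f"{seconds:.3e} s")  # a minuscule fraction of a second -- effectively instant
```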
-
I think you mean "computational problem", not "mathematical problem". As a mathematician, let me tell you that no amount of brute force is going to solve most of today's math problems. – Sep 02 '16 at 08:13
-
Take a look at the Wikipedia page for Graham's number. 2^256 is infinitesimal compared to Graham's number. Also, there are problems that are undecidable, meaning that no finite amount of processing power could solve them (even if you could do Graham's number calculations per second). – Rob Watts Sep 02 '16 at 16:47
-
So we strike chess off the olympics list and are left with old card games. I would say yeah, take it. – MolbOrg Sep 04 '16 at 18:54
-
@NajibIdrissi In fact I would pose Riemann as a counterexample. Just because you can find solutions to the Zeta function really quickly and verify that they are in fact on the critical line does not mean that you can say that ALL roots of the Zeta function lie on the critical line. – Aron Sep 06 '16 at 04:49
It would mean that WPA2 wouldn't be safe anymore. Keys could be brute-forced in no time. Router manufacturers would be required to develop and deploy new, secure WiFi encryption schemes in their new models, and until their wide adoption (which could take years), everyone and their dog would use their neighbour's WiFi.
-
@CamilStaps: This already happened once (more gradually), with WEP, but it's probably still not totally gone from every home, even years after its encryption was broken wide open (~1 minute of traffic capture, and 3 seconds of CPU on a Pentium-M from 2005). Adoption by people who care about security wouldn't take as long this time, but "use your neighbour's wifi" is not a significant problem for most people. Still, there would be electronic break-ins at places with bad IT people. – Peter Cordes Sep 04 '16 at 14:54
-
@CamilStaps Yup, there were only all those leaked naughty photos and everything, nothing that would make people think about security. There was even a Microsoft paper that analyzed the cost-benefits of educating computer users in security - their conclusion was pretty clear: the investment is too big to pay for the losses. So they focused on making systems secure by default instead, which already helped a lot more than education ever did, even though there's still a way to go. The old-school security approach is kind of like the abstinence approach to sex ed :) – Luaan Sep 06 '16 at 08:13
This would allow the aliens to simulate other possible Worlds, simulate life appearing, intelligent life forms appearing, and eventually a civilization appearing. They can then build a real world copy of that civilization for their own use.
A potential problem for us is then that the outcome of such a simulation might be that they happen to generate our civilization by chance. If they happen to generate a virtual copy of you and decide to copy you as well, you may wake up in an alien World instead of your own bed.
Connectapocalypse
We could now create a server to host mind diving and connect the whole human race to it via their spinal cords (sound familiar?)
Unlock human potential
That many calculations per second would mainly be beneficial for running many calculations at once, like the brain does. It could help us understand the brain, if not outright emulate it.
Fast decryption through brute force
There are supposedly many black sites out there that hold the encrypted form of passwords obtained through unscrupulous means, and having this machine would allow them to recover a likely password almost instantly instead of brute-force attempting passwords for months at a time. Usually it's quicker to run a few computed guesses based on other information, such as where the data was obtained, when it was obtained, how much of it there is, etc.
-
Re "host mind diving" - Just because we have the processing power doesn't mean that we magically have the software. – TLW Sep 02 '16 at 15:15
-
@PatrickM Pretty sure we have the hardware; it would just take an insane amount of RAM and hard drives. – Sarfaraaz Sep 05 '16 at 05:58
-
@PatrickM Although now that I think about it, you won't need that much, since you always have the live copy of the human mind on hand and the PC can process it, so you won't need that much HDD and RAM, just a very high bandwidth transfer medium. We'll probably have to connect several layers of circuits to the spinal cord – Sarfaraaz Sep 05 '16 at 11:50
-
@Sarfaraaz I wasn't talking about RAM or disk space since I was assuming absurd amounts of RAM came "free" with the faster computer. I meant that we don't currently have the hardware needed to make a copy of everything inside of the human brain, or to tap into the spine to totally simulate a virtual environment. – Patrick M Sep 05 '16 at 15:58
-
@PatrickM But I disagree. I believe that with even a short glimpse of being able to match data from stimulus (let's say a week), we'd have figured out what we need to do to transfer that stimulus from the brain into a virtual environment. Transferring that thought into another mind safely would be much more difficult – Sarfaraaz Sep 06 '16 at 05:50
In terms of cryptography it would indeed mean the end of all current forms of crypto and the mechanisms which rely on them (I've seen RSA, WPA2 and others mentioned in the other answers here, but really all of our current algorithms rely on the same fundamental theory). However, we are already looking at post-quantum cryptography and designing theoretical algorithms to be 'quantum-hard', in anticipation of our research into quantum computers turning up serious results in the foreseeable future. A quantum computer would have a similar effect to one capable of vast numbers of computations, as by nature it is able to check every value simultaneously (in theory - I believe the prototypes have to limit the range of possible states, but the idea holds). It is believed to be possible to create an algorithm which doesn't rely on a problem that such a machine makes cheap (in current crypto, that problem being the hardness of factoring products of large primes).
I'm no mathematician but my field is IT security. If you're interested in how we might alter our systems to deal with an issue like this then check out the New Hope algorithm. You can find a paper about it here
-
"In terms of cryptography it would indeed mean the end to all current forms of crypto" Eh, no. You cannot brute-force a One-Time-Pad. Or, to be exact: you will be able to brute-force it with this contraption, but you would not know the true answer among all the false ones. Want me to prove it to you? Ok... here goes: the clear-text, the key and the cipher-text are unsigned 8-bit integers. The method of encryption and decryption is bit-wise exclusive OR (XOR) between the text and the key. The cipher-text is '0'. What is the content of the clear-text and the key? – MichaelK Sep 02 '16 at 10:45
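The comment's challenge can be demonstrated mechanically: brute-forcing a one-time pad recovers every possible plaintext, so the attacker learns nothing.

```python
# With a one-time pad (XOR), a brute-force of the key space "finds" every
# possible plaintext, so exhaustive search reveals nothing.
ciphertext = 0  # the 8-bit ciphertext from the comment

candidates = set()
for key in range(256):            # exhaust the whole 8-bit key space
    plaintext = ciphertext ^ key  # XOR decryption
    candidates.add(plaintext)

# Every one of the 256 possible plaintexts appears as a candidate:
assert candidates == set(range(256))
print(f"{len(candidates)} equally plausible plaintexts")
```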
-
No, you're right about that. But I believe one-time pads are not practical in computing due to the difficulty of securely exchanging keys. It would have been more accurate to say "all digital crypto" – Brae Sep 03 '16 at 09:29
-
This is incorrect. The computer is fast but still polynomially bound. Increasing the key size for future encryption would be sufficient. – Sep 03 '16 at 10:09
-
Quantum computing does not significantly change the playing field for symmetric cryptography. Grover's algorithm can effectively halve the work factor for a key recovery attack for a given key length, but that just means that we need 256-bit keys for 128-bit effective security, or 512-bit keys for 256-bit effective security, which is quite manageable, entirely beside the open question of whether a quantum computer large enough can be built. Shor's algorithm for integer factorization is a much bigger deal for public-key cryptography, and is mainly what "post-quantum cryptography" is about. – user Sep 04 '16 at 20:00
-
@MichaelKjörling And of course, since we've known about the threat for quite some time, we do have algorithms that work well even against quantum computers. It's just that the cost isn't quite worth it yet - until we get quantum computers big and cheap enough, factorization-based asymmetric cryptography is here to stay. That might change in the next five years, or twenty, or half a century; no point in wasting so much work switching everything before it's really necessary. Unless you're doing top-secret work, where you probably don't use asymmetric crypto in the first place :D – Luaan Sep 06 '16 at 08:10
Unfortunately 2^256 is a very big number; for all practical purposes it's close to infinite. Remember that 1 googol, 10^100, is literally more than everything, for example more than all photons and atoms in the observable universe. 2^256, or roughly 1e77, is almost nothing compared to a googol, but still close enough to be over the top. It is for example more than the number of atoms in the galaxy (perhaps 4e11 stars * 1e57 atoms/star). This makes the question less interesting than, say, 2^100 flops. But ok, let's suppose that we have essentially unlimited computing power, adequate storage with it and that it is 100% reliable:
Update: The first thing to note would be that it's likely that this computer would be a post-singularity super intelligence of its own. It will not only be alive but it will be the equivalent of a god to us; all speculation about it is futile because its ways will be unfathomable. Still, in human terms it'd be an interesting question what its motivation would be to talk to us or even help us, and in which ways it would choose to help us. These are similar questions as people around the world ask about their respective gods. I'm tempted to say that this machine is — next to the spaghetti monster — a candidate for the god of the SE crowd (except perhaps the Judaism SE), in as far as it is a surface for the projection of our speculations and hopes of redemption. (This paragraph was inspired by my Marvin quote in the comments section.)
The rest of the discussion is based on the however incongruent assumption that the computer will behave like a contemporary computer, just faster.
Not only cryptography but all computationally intensive tasks would be almost infinitely accelerated, if the device is (remotely) accessible to the general public.
- Essentially all cloud storage will be transferred to this machine. The only reason to have local or regional computers is speed of access: akamai won't go out of business.
- All simulations which today are performed on supercomputers or expensive workstation clusters will be performed on it. The interesting thing is that with better simulations, less true insight is necessary (take chess as an example).
- Weather: One can simply brute-force simulate the whole atmosphere, molecule by molecule. Don't get me wrong -- weather is a chaotic system and simply is not predictable in the long run. But forecasts will improve dramatically. I suppose that the computational complexity of weather forecasts is exponential; every doubling of the computing power may buy the forecasters perhaps 6 hours more prediction time. But we probably talk 20 or 40 or 80 doublings, depending on how much of the computing power we want to devote to weather forecasts.
- Brain: A human brain apparently has < 1e11 neurons. We could not only brute-force simulate a brain but all brains. (I'm not sure whether we'll have consciousness; that may need some qualitative insights. But the progress will be immense.)
- Physics: One could, for example, simply brute-force simulate a whole star; the sun has apparently only 1e57 atoms in it. Which gives you an idea of the size of the machine, by the way, if it has storage to match its computational power and each bit is an atom: as big as 1e20 suns, or 1e9 galaxies.
It is clear that the machine's capacity is sufficient to simulate all of the physical reality in our galaxy to a degree of precision (the atomic level) which will make it very close to the actual thing. My guess is that we would start uploading simulations of ourselves fairly soon (some decades or at most a couple hundred years). Elon Musk's simulacrum would run around in it and speculate about being in a simulation.
That we'll immediately be able to make CGI look like real life is just a footnote; but one which will change the entertainment business and may have implications in the courtroom, because genuine evidence could not be told from forgeries.
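A quick check of the answer's storage estimate, taking its own assumptions (one atom per bit, ~1e57 atoms per sun, ~4e11 stars per galaxy):

```python
# If the machine stores ~2^256 bits with one atom per bit, how big is it?
bits = 2**256          # ~1.16e77
atoms_per_sun = 1e57   # figure from the answer
stars_per_galaxy = 4e11

suns = bits / atoms_per_sun         # ~1.2e20 suns, matching the answer
galaxies = suns / stars_per_galaxy  # a few hundred million galaxies' worth of stars
print(f"{suns:.1e} suns, or {galaxies:.1e} galaxies' worth of stars")
```

The result agrees with the answer's "1e20 suns" and lands within an order of magnitude of its "1e9 galaxies".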
-
@JDługosz It seems a reasonable assumption. The OP didn't say it has only little storage either. The huge number of computations can logically only be achieved in parallel, thus necessitating adequate storage; if you try to perform 2^256 = 1e77 computations serially, and each one only takes the Planck time, it's still 1e33 seconds, which is 1e25 or so years, or many billion times the age of the universe. (Either way, 2^256 is too big, but still.) – Peter - Reinstate Monica Sep 05 '16 at 12:15
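The comment's serial-computation figure can be verified directly (Planck time ≈ 5.39e-44 s); the result lands within an order of magnitude of the numbers quoted:

```python
# 2^256 operations performed one at a time, each taking a single Planck time.
PLANCK_TIME_S = 5.39e-44
SECONDS_PER_YEAR = 3.15e7

total_s = 2**256 * PLANCK_TIME_S    # ~6e33 seconds
years = total_s / SECONDS_PER_YEAR  # ~2e26 years -- many billion universe-ages
print(f"{total_s:.1e} s ≈ {years:.1e} years")
```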
-
@JDługosz For all practical purposes, such a computer would be the universe, which is running perfectly concurrently. – Peter - Reinstate Monica Sep 05 '16 at 12:22
-
I was supposing it’s like a GP-GPU: more bark than byte. It might only have enough (fast) ram to set up the problem like checking AES, and in fact relying on functional programming concepts to avoid storing results per se. It can iterate on the same data using parallel universes or timelike curves, but doesn’t have storage on the same scale. – JDługosz Sep 05 '16 at 14:27
-
so it doesn’t compute in parallel the way we understand it, with separate local registers for each unit; maybe use closed timelike curves to go sequential, or something beyond quantum computing but using superposition of states where one gets realized. – JDługosz Sep 05 '16 at 14:31
-
The time and space values apply with the assumption of it being classical. Even foreseeable quantum computers are not using the same rules. Hilbert space is unreasonably large, quoth some famous physicist. Basically you ruled out it being a classical computer, if we accept the narrative premise that it exists. – JDługosz Sep 05 '16 at 14:35
-
@JDługosz You got a point there -- its technology would be sufficiently advanced to be indistinguishable from magic ;-). – Peter - Reinstate Monica Sep 05 '16 at 16:48
-
Also don't confuse number of particles with number of connections or interactions. A lot of examples above are of the form "has < 2^256 particles so we can simulate it" but you're proposing simulating all interactions (for atoms/stars) and connections (brain) - that's still vastly greater than 2^256 – Stilez Sep 06 '16 at 08:32
-
@Stilez While you are right that the number of connections e.g. in a brain is an order of magnitude or two larger than the number of neurons the alien computer is still vastly o.p.: "Here I am, brain the size of a planet, and they ask me to take you to the bridge. Call that job satisfaction, 'cause I don't. " – Peter - Reinstate Monica Sep 06 '16 at 09:11
Late to the party here.
But a lot of people talked about the physics challenges of a computer like this. People have already brought up that you would need antigravity to prevent the energy contained from destroying the planet, and that you would need a very good system for dealing with excess heat. I'd also like to explore for a moment the spatial requirements of a computer that can do 2^256 operations per second.
To my knowledge, the current most powerful supercomputer is Sunway TaihuLight, which can hit speeds around 100 PFLOPS (about 2^53 operations a second). That is probably a decent order-of-magnitude estimate for the maximum density of FLOPS that can be achieved by transistor-based computing.
For anything resembling transistor-based computation, a computer capable of 2^256 operations per second would be a lot bigger than the planet.
Even if you just gave humans a client to this massive computer and kept the actual computer somewhere up in space... Such a large computer would be unable to function. Because it would be so big that to send data from one part of the computer to another in a timely fashion, you'd need to send signals at speeds faster than the speed of light.
And you could only get the computer back to a manageable size by making something 2^203 times more compact than a transistor. For reference, a quark is only somewhere in the ballpark of 2^32 times smaller than a transistor.
Probably the only way to get around this is to use more than 3 dimensions of space to build your computer. That allows all of the bits to be closer together, so that data transfer is no longer impossible.
In N dimensions, for a computer made of transistors that are close enough to reasonably communicate data, we can estimate that its computing power would be very roughly (2^(53/3))^N operations per second.
If we want that number to equal 2^256, N ≈ 14.49, but since you can't exactly have a fraction of a dimension, we round up to 15.
So basically, you would want about 15 spatial dimensions to house a computer capable of 2^256 operations per second.
I figure it is probably hard to put an upper limit on the capabilities of any technology that can leverage 15 spatial dimensions.
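The dimension estimate can be reproduced in a couple of lines, taking the answer's assumption that ~2^53 ops/sec is the 3-dimensional ceiling, i.e. ~53/3 bits of exponent per dimension:

```python
import math

# If ~2^53 ops/sec is the practical ceiling for a 3-dimensional transistor
# computer, each dimension contributes ~53/3 bits of exponent, so an
# N-dimensional one manages roughly 2^(N * 53/3) ops/sec.
bits_per_dimension = 53 / 3
n = 256 / bits_per_dimension
print(f"N = {n:.2f} -> round up to {math.ceil(n)} spatial dimensions")
```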
Nothing will really happen.
Most secure systems have two-factor login: your bank gives you a token, and you need both the token and the password to get in. The token changes every time, so you have to make the right guess on the first try.
Let's say you could make a qualified guess with the right algorithm; you would still only have a limited number of attempts before the account is locked down, because the intrusion countermeasures kick in.
Then the next step would be to try and kick in the door. Well, unless you are able to pick out the data and move it physically to your device, the bandwidth of the unit it is stored on (a high-end PCIe hard drive manages about 250,000 operations per second) would be a major slowdown. Doing this over the internet would be even worse; assuming you had the bandwidth, you would just make one HUGE DDOS attack on the world.
My conclusion is: unless you have the data directly on your device (and it could take months to transfer; because of this, Google even FedExed NASA's website to themselves), it would be useless; and if you have access to the raw data, well, you don't need a fast computer to break into it.
It would be like having an F1 car on the Faroe Islands.
(More about fast cracking computers and its problems here)
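To put numbers on the I/O bottleneck this answer describes, here is a sketch with illustrative figures; the 1 PB dataset size is an arbitrary assumption, and the drive speed is the 2–4 GB/s M.2 figure mentioned in the comments elsewhere on this page:

```python
# Compute is free at 2^256 ops/sec; moving the data to the computer is not.
DATASET_BYTES = 1e15           # 1 PB of captured data -- an arbitrary example
NVME_BYTES_PER_SEC = 2.5e9     # ~2.5 GB/s local NVMe drive (assumed)
GIGABIT_BYTES_PER_SEC = 1.25e8 # a 1 Gbps network link

print(f"local NVMe: {DATASET_BYTES / NVME_BYTES_PER_SEC / 3600:.0f} hours")
print(f"1 Gbps link: {DATASET_BYTES / GIGABIT_BYTES_PER_SEC / 86400:.0f} days")
```

Days to months of pure transfer time, no matter how fast the processor on the other end is.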
-
"Every time the token changes so you have to make the right guess at the first try" -- although if you told this super-computer the pseudo-random algorithm used by the token-generator (often the algorithm is published to avoid security through obscurity), and let it observe some number of sequential outputs (because you've owned SSL/TLS), then it conceivably could crack the key and predict the next output (so you can log in). Depends on the size of the internal state used by the token generator, and I don't know what's typical. – Steve Jessop Sep 02 '16 at 08:14
-
1... and of course in order to do anything to SSL you have to actually put yourself in the path of the communication. A really big computer doesn't much help achieve that. You need a screwdriver and access to an internet backbone, or you need to limit yourself to attacking people in the same coffee shop as you are. – Steve Jessop Sep 02 '16 at 08:16
-
@SteveJessop Read the last link I posted; there are computers doing 350 billion guesses per second (just a micro-fraction of what moby wants) and the bandwidth problems I described still occur. – Magic-Mouse Sep 02 '16 at 08:54
-
I agree with this answer for an average person owning one of these. But if the NSA has one of these, they can get at all the encrypted data they capture from Internet backbones. Unless all their efforts to subvert crypto implementations and introduce backdoors for themselves is a smoke-screen to cover up a significant break in RSA or something, they can't currently decrypt encrypted traffic they capture. (Or maybe they can break RSA, but it takes too much CPU time to use it on backbone traffic.) – Peter Cordes Sep 02 '16 at 10:39
-
Bandwidth is irrelevant if you only have to make one "guess". My point is that for a typical token generator, and given this alien computing power, snooping enough past values via broken SSL lets you predict all future values because it's just a PRNG. Therefore you don't need to make lots of guesses against the server. It's a different attack from the one you're talking about. Basically, alien computing power turns "something you have" (the token generator) into "something you know" (the internal state of the token generator), so it's no longer 2-factor. – Steve Jessop Sep 02 '16 at 10:40
-
@PeterCordes the backbone does not work that way; it is like a road network: you might go fast on the highway but you still have to slow down in the suburbs. The chain is no stronger than its weakest link, and just because the backbone is fast does not mean you can utilise all of it. – Magic-Mouse Sep 02 '16 at 10:44
-
@SteveJessop your response is like saying: why crack the system when it is easier to just make the guy log in at gunpoint? Bandwidth and calculation power are irrelevant if you already have the password. – Magic-Mouse Sep 02 '16 at 10:45
-
No, my response is like saying, "this is what difference the alien computer makes, it enables an attack that otherwise doesn't exist". Or rather, it might enable it: I don't know the parameters of a typical token generator handed out by a bank. – Steve Jessop Sep 02 '16 at 10:46
-
No, you are assuming it has the programs to calculate the algorithms (someone has to write those programs). But let's assume someone did; it would be more complicated to write those "cracking programs" than to just make the original in the first place. Kind of like making a highway between New York and Los Angeles - the wrong way around the earth. – Magic-Mouse Sep 02 '16 at 10:49
-
@Magic-Mouse: Have you heard of the NSA, or Edward Snowden? They really do read straight from the Internet's big pipes, and capture a large fraction of the total traffic. They filter it on the fly (with hardware they co-locate with Internet backbone providers), discarding most and keeping all the emails and other plain text, so the actual bandwidth required to get it back to their mainframes is reasonable. – Peter Cordes Sep 02 '16 at 10:51
-
It is only interceptable information they get, only a fraction of the total. It is known that illegal organisations use online games, messages hidden in images, and ProtonMail to communicate. Besides, the backbone is not one cable; it is a massive network of giant cables containing cables. Just because you have heard of the NSA or Snowden does not mean it is the whole world. – Magic-Mouse Sep 02 '16 at 11:11
-
1An adversary that can do a key recovery attack against a 128-bit to 256-bit cipher in on the order of seconds realistically has no need to go through the normal login page. They can just wait for you to initiate a session, and then quite likely fairly easily hijack it. – user Sep 04 '16 at 20:04
-
Ok, let's make this easier, @MichaelKjörling: with the computing power you have today, you should be able to calculate any number between 1 and 1000 within a millionth of a second. I'm thinking of a number; you get 3 attempts. Go ahead and calculate it! – Magic-Mouse Sep 06 '16 at 10:15
It would be the end of RSA. Some evil genius would make a boatload of money, and we would go back to the age of trading goods instead of electronic currency.
-
4This is incorrect. The computer is fast but still polynomially bound. Increasing the key size for future encryption would be sufficient. – Sep 03 '16 at 10:10
-
It is true that RSA can survive such strong computers. The problem, though, is that the currently used prime factors are not big enough to counter this computer, so initially you could break most encrypted messages. By the time similar computers are used to create sufficiently strong encryption again, the damage is already done. – Sep 05 '16 at 08:19
-
No. It only takes polynomially more time to generate a larger key, but the effort needed to bruteforce it grows exponentially. – Sep 05 '16 at 08:22
For this answer, I am assuming the aliens' tech makes us look like cavemen, and discussion of the laws of physics is moot because we don't understand anything.
Suppose the computer were practical (similar to a regular PC), not the size of a galaxy, and not using a solar system's worth of energy to power it. If we could duplicate it, we could make it bigger, better, and stronger, and defeat most encryption for a long time.
If it were general-purpose enough, like a PC, it would end all illness: the human genome would be fully sequenced instantly, all drug candidates could be run through every permutation basically instantly, and instead of trial and error we would just brute-force every molecule combination against the genome of every illness.
If you could keep it a secret, you could gather a large set of everyone's secrets and use them for good or evil: blackmail people into doing whatever you want, or their secrets will be revealed.
As soon as people knew, all encryption key lengths would be increased by orders of magnitude, nullifying your advantage. A 512-bit key would take 2^256 seconds to crack, which is not happening in many lifetimes. 256 bits would go to 1024 or 2048 bits or whatever, 4096-bit encryption to maybe 65536-bit encryption, and it would be impossible to crack again. It would take several weeks for critical systems to be updated, but after that your device would be worthless for this purpose. Logins to all vulnerable systems would probably just be disabled until the new bit lengths were deployed. There are devices that will never be updated, and you could harvest their data until people buy version 2 of the product, which is no longer vulnerable.
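The arithmetic above can be sketched in a few lines (assuming, as a toy figure, that the alien machine tries 2^256 keys per second; worst-case exhaustion of an n-bit key space then takes 2^n / 2^256 seconds):

```python
# Toy estimate: worst-case time to brute-force an n-bit key on a machine
# that tries 2**256 keys per second (hypothetical alien computer).
GUESSES_PER_SECOND = 2**256

def crack_seconds(key_bits: int) -> int:
    """Seconds to exhaust a key space of 2**key_bits keys."""
    return 2**key_bits // GUESSES_PER_SECOND

print(crack_seconds(256))  # -> 1: today's key sizes fall in about a second
print(crack_seconds(512))  # -> 2**256 seconds, far beyond the age of the universe
```

Doubling the key size squares the attacker's workload, which is why bumping 256-bit keys to 512 bits (and so on) restores the defender's advantage even against this machine.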
The additional bits will slow things down for a while, but the hardware will catch up and everything will be fine.
If the device is like one of our ASICs, dedicated only to brute-forcing passwords, it will quickly become useless once all systems are updated to longer keys and stronger encryption.
Brute-force any tech. Want some nanotech? Just tell this computer to simulate possibilities using an evolutionary algorithm (+ quantum theory) until perfect nanobots are developed. The computer could design almost any tech that fits our understanding of physics in an instant.
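The "evolutionary algorithm" idea above, stripped to a minimal sketch: keep a population of candidate designs, score them, keep the best, and breed mutated offspring. The bit-string genome and fitness target here are illustrative stand-ins for whatever the alien computer would actually simulate, not a real design problem:

```python
import random

random.seed(0)
TARGET = [1] * 32  # the "perfect design", a toy stand-in

def fitness(genome):
    # How many bits match the target design.
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=50, generations=200, mutation_rate=0.02):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break  # perfect design found
        survivors = pop[: pop_size // 2]      # selection: keep the best half
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(len(TARGET))
            child = a[:cut] + b[cut:]         # single-point crossover
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]          # per-bit mutation
            children.append(child)
        pop = survivors + children
    return pop[0]

best = evolve()
print(fitness(best))  # should be at or near 32, a (near-)perfect match
```

On the alien computer the loop above could be run with astronomically large populations and generation counts, which is the answer's point: search replaces engineering.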
-
1See comment above - in a lot of cases this won't help, as it confuses the number of particles or cells with the number of connections or interactions. – Stilez Sep 06 '16 at 08:35