61

I've had this idea for a short novel, at the intersection of One Thousand and One Nights and the Multivac (& co.) short stories from Asimov.

The idea is that there's some kind of super AI that is so efficient that more and more people hand it their problems to solve. At some point the AI ends up managing most of Earth's infrastructure and research.

As humanity reaches its development peak, the AI's backlog starts to empty as fewer and fewer problems are submitted to it.

The point is that, at some point, one of the few technicians/engineers still in charge of supervising the AI spots something that will ruin everything if the AI runs out of tasks (1001 Nights style).

I've thought of a few possibilities, and none really pleases me completely.

1 - "The AI has become so developed that it became conscious and will take over humanity once its mind is free". Bleh, done and redone and postulate that AI will act against the humans because of reasons.

2 - "A small routine that was insignificant when the AI was small but will have dangerous repercussions now that the AI is managing most of earth infrastructure". I like this one but can't think of a good sub-routine that could match...

2(a) - ...except for "a Windows forced reboot postponed for years". It's a bit silly and a very contextual joke, and we would have to assume that the scheduled reboot sits at the root of the application and would cut through any duplication/backup/load-balancing precautions.

I know that this issue is kind of the core of the story, but at this point it matters more to me to know how to finish the story than to be the one who came up with the idea.

So if anyone has an idea for implementing 2, or some new explanation entirely, I'd be glad to hear it.

UPDATE

About the "infinite question solution" that could keep the AI occupied for ever and was suggested in the thread, it could be a way of ending the story or just completely ignored if the protagonists don't have the time to avoid the disaster.

But it actually made me think of the opposite possibility: a drunk or dared engineer could have asked an unanswerable question as root. That question would have had its priority set to -1 because it blocked the asking of new questions, and then been forgotten. Many years in the future, the unanswerable question could pop back up and threaten to overload the AI and crash it, endangering everything else it's managing.

If anyone has other ideas, feel free to share them; I had a blast reading what could have been the endings of several real SF short stories.

Jemox
  • 4,361
  • 1
  • 13
  • 22
  • Comments are not for extended discussion; this conversation has been moved to chat. – Monica Cellio May 06 '18 at 02:18
  • why no second period after "I" in the title? – user1306322 May 06 '18 at 07:10
  • 1
    If the ai is responsible for most of the earth’s infrastructure and research, does the ai simply shut down when it is “done”, bringing down all active infrastructure with it? – kojiro May 06 '18 at 19:08
  • 1
    As for a solution, is there a way to present this problem to the ai? Can it reason about a solution to a problem that it is so intimately involved in? – kojiro May 06 '18 at 19:10
  • 1
    If the A.I. is that smart it will keep "farming" problems at a constant factor. If readers are smart they will know better: people are an unlimited source of problems, and the AI will never run out of jobs while there are humans on the planet – jean May 07 '18 at 11:15
  • hmmmmm.... I am trying to come up with some way for a garbage collection routine that hasn't run because "too busy" to destroy everything.... – Patrice May 07 '18 at 19:23
  • Just hot-unplug some RAM and processors to slow it down... – I'm with Monica May 08 '18 at 12:18
  • It's not really an answer but you could read this free fiction for a vaguely similar problem in the end phases of the story. https://qntm.org/ra – bp. May 08 '18 at 14:02

39 Answers

73

A Faulty Watchdog Timer

In the early days, the AI was prone to locking up on certain tasks (things like asking it, "What's the last digit of pi?" or "What would happen if Pinocchio said his nose will grow?"). To detect this condition, a watchdog timer is hardwired into each CPU. When there is no output for a fixed amount of time, the watchdog timer kicks in and shuts down the malfunctioning unit. Neighbouring CPUs will detect this condition, return WatchDogTimerError (42) for the question, and then restart the unit. The timer doubles as a load-balancing feature: just let the units shut themselves down when idle, and only restart them if necessary.

Of course, there needs to be at least one unit awake to restart other units. Once the last unit times out...
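
A rough sketch of how that last-unit failure could play out (the timeout value and unit model here are invented for illustration):

    import time

    # Toy model of the watchdog scheme: a unit with no output for IDLE_TIMEOUT
    # seconds shuts itself down, and only a still-awake unit can restart others.
    IDLE_TIMEOUT = 60.0  # seconds of silence before the watchdog fires

    class Unit:
        def __init__(self, name):
            self.name = name
            self.awake = True
            self.last_output = time.time()

        def tick(self, now):
            if self.awake and now - self.last_output > IDLE_TIMEOUT:
                self.awake = False  # watchdog fires: this unit powers down

    def restart_idle_units(units):
        # Any awake neighbour can restart the sleepers...
        if any(u.awake for u in units):
            for u in units:
                u.awake = True
        # ...but once the last unit has timed out, nobody is left to do this.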

Reasons this would work

  • A watchdog timer is hard-wired into the CPU - and is highly interconnected with critical components. Removing this feature is akin to completely redesigning the physical units.
  • The system is tamper-proof. This means that the signal to reboot a computer must be verified against a hardware-embedded private key (no way to find out unless you can look at the CPU). The corresponding public key lives in a decentralised system (something blockchain-like), to minimize the risk of the key getting lost or hacked. The only way this would fail is if all the processing units fail, and when would that ever happen?
Sanchises
  • 1,496
  • 8
  • 13
  • 1
    That's an amazing way to implement what I've just written in the update of my question (which you couldn't have read, so I'm even more pleased). – Jemox May 03 '18 at 15:05
  • @Echox For more inspiration, you could have something like the Ariane 5 failure, where a seemingly innocent bug caused mayhem. – Sanchises May 03 '18 at 15:57
  • Perhaps combine this issue with Andrey's answer below: each core turns off, the planet's power grid overloads from too much generation, and each individual component failure, which the system could cope with on its own, cascades into a continual, complete collapse. This would bring about a planet-wide event where power surges collapse the grid, the water treatment plants that fed coolant into those stations fail, etc. All round, a bad day for humanity... – Blade Wraith May 03 '18 at 16:17
  • 1
    @BladeWraith Modern power distribution stations can easily handle a massive decrease in power consumption. Even if you replaced everything a massive station distributed to with the draw required by a single toaster, the system should cope. – forest May 03 '18 at 23:13
  • 11
    Mind blown with (42) (am i the only one???) – Karthik T May 04 '18 at 04:08
  • 1
    And they built a computer to find the question; instead, they should have built a debugger. That puts my mind at ease after all these years. – ifyalciner May 04 '18 at 07:03
  • 1
    @Forest, in theory, yes, it is capable of handling spikes and surges. However, power generation is mechanical: if a generator runs continuously at 98% max for 10 years and then suddenly drops to, say, 20% (yes, these are arbitrary numbers), the sudden deceleration is itself capable of creating much larger stresses than previously experienced on the hardware, rather than on the surge protectors. Yes, surges can be handled, but parts wear and metal breaks. – Blade Wraith May 04 '18 at 08:31
  • 3
    An excellent answer - I'm reminded that some systems have a 'voting mechanism', such that two are needed to out-vote one failing unit. If one unit fails, the other two can mark it as failed, but then are unable to vote on anything as there's no one to perform the tie-break if they disagree. I'm not sure how to integrate this into the answer, but it feels like it may be possible to add another dimension to the watchdog solution with this. – Ralph Bolton May 04 '18 at 15:39
  • 1
    @BladeWraith Generators don't work that way. If the power requirements drop, the generator simply experiences less resistance. It's internally regulated such that it won't just keep spinning up constantly. It could be running at its maximum allowable output, and then someone could suddenly cut the circuit so the output draw is 0 amps, and nothing would happen. The only things that might be at risk would be the power distribution stations, which are a bit more sensitive to overall changes in power output, but even they should be able to cope. – forest May 05 '18 at 01:11
  • 1
    I mean what do you think happens if there's arcing anywhere upstream? An electromechanical switch is triggered and the power output is immediately cut. The generator doesn't blow itself apart when that happens. – forest May 05 '18 at 01:12
40

The AI has an initiative to optimize and find better solutions to previously solved tasks once it has run out of new tasks.

The idea was to keep this expensive machine from wasting time when it could be learning and improving. This feature is a fundamental part of the code, perhaps critical to the AI's deep-learning programming, whose complexity few can even grasp and which therefore cannot be safely altered without a lot of time (which they don't have).

Old tasks are now far far easier for the AI to solve.

Someone realized that the AI is much more powerful today than it was when it started, so tasks that would have taken days or weeks when it was first started now take only microseconds. Tasks are given to the AI in FIFO order, so when it is free to go back and optimize, all of the simplest tasks (and those most critical to infrastructure) will be re-solved in a matter of minutes (or days, or however long you want the cycle to run). The AI only gets better, so each cycle gets progressively shorter.
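
A toy sketch of that re-optimization loop (the task names and speed-up factor are invented; the point is only that the FIFO history gets churned faster on every pass):

    # Once the inbox is empty, the AI re-solves its FIFO history front to back,
    # and each pass gets faster as the AI improves, so the world is reinvented
    # on an ever-shorter cycle.
    history = ["traffic signals", "hospital layout", "currency units"]  # oldest first
    speedup = 1.0

    while speedup <= 1e6:            # the real loop would never stop
        for task in history:         # FIFO: the earliest, most critical tasks first
            print(f"re-optimizing {task} in {1.0 / speedup:.6f} s")
        speedup *= 10                # the AI only gets better between passes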

The infrastructure controlled by the AI will be reinvented over and over again, resulting in chaos.

One or two critical systems changing in short order could easily be adapted to, but this AI runs everything. With thousands, millions, maybe more processes completely altered by the AI attempting to optimize, the world is left in confusion as the entire structure of people's lives is overturned and becomes completely alien to them. Nobody understands how to interact with the new system, which is now learning and optimizing exponentially, so that nothing remains stable. Their units and scale of currency will eventually start changing rapidly. New traffic signals and orderings will be redesigned faster than cars can get through the intersection. Materials in hospitals will be endlessly rerouted as the AI changes the location of the different wards to optimize the flow.

If the engineers figure out a way to stop this loop, everyone will find themselves essentially transported centuries into the future, like cavemen trying to integrate into the digital age. No one understands the technology around them. It is a terrifying world of confusion and discovery as they try to interact with their new world and discover the new features and new power available to them.

BlackThorn
  • 1,038
  • 6
  • 9
  • 3
    This is pretty damn nice. – IEatBagels May 03 '18 at 20:45
  • 6
    This seems quite implausible to me - maybe the AI could come up with a better solution for critical systems quickly, but implementing those changes? That requires infrastructure changes, and those take time. Lots of it. For example, suppose it was tasked with optimizing freeway traffic - just because it comes up with a better solution doesn't mean it immediately has real-world impact. – Rob Watts May 07 '18 at 01:39
  • 3
    Also, I seriously doubt that the more powerful AI would be able to resolve earlier tasks better and quicker - better solutions can be exponentially more difficult to find, so more realistically you'd have the AI spending about as much time as it previously had in order to find those better solutions. – Rob Watts May 07 '18 at 01:44
  • 1
    @RobWatts the cars might not respond to the new optimizations, but the traffic lights will. That's what'll cause the chaos; only the non-human parts of society are rapidly optimized. – Erik May 08 '18 at 09:42
  • 2
    As I understand the question, the AI understands humans well enough. I find it hard to believe that it will be "optimising" while ignoring effects such actions would inflict upon the users. Unless the programmers that wrote it used Perl. – Alice May 08 '18 at 15:12
33

One of the original programmers added a subroutine that makes sure that the coffee pot is always filled. Someone breaks the coffee pot, so the AI expands its definition of "coffee pot" to be the entire universe.

Or you could use the classic AI thought experiment of the paperclip maximizer literally. The AI could have some innocuous menial office task (like making sure everyone has enough paperclips) that would be very harmless when implemented in a small local AI, but would be catastrophic if the AI devoted all global resources towards it.

The AI could start prioritising this task more and more as it gained intelligence and its other tasks reached equilibrium.

Celos
  • 670
  • 5
  • 7
  • 4
    I had actually thought of something along these lines but forgot it again when it came to writing my question, so thanks!

    I didn't know about the paperclip maximizer, and it's truly one of the answers I could be looking for. I don't really like the redundant "AI will get rid of humans because it's more efficient", but still, it is a nice lead.

    – Jemox May 03 '18 at 14:17
  • 1
    @Echox Given that humans control almost all the resources on the planet, a task like "Do x as well as possible" where x is dependent on some kind of resources, would generally lead an AI to wrest control of the resources from said humans in whatever way is most convenient and not outright banned by its programming. – JollyJoker May 04 '18 at 09:27
  • I had the paperclip maximizer idea too when read the question. +1 – atakanyenel May 05 '18 at 16:32
27

The rules

The AI has a few internal rules:

  • All questions MUST be answered before moving to the next one.
  • A human must confirm that the answer is good before moving to the next question. If the current human can’t say if the answer is good or not, ask another human.
  • If the AI has some free time, start searching for possible questions that haven’t been asked yet, and answer them, so you are a step ahead and save time in the future.

The killer question

So if you combine these rules with a lack of questions coming in:

  • The very moment that all questions mankind can think of have been asked, the machine will generate a new question. A question that is unthinkable for humankind (because all thinkable questions have been asked).
  • The machine will find a potential answer to this question, and needs a human to confirm the answer is good, so it'll submit the question and potential answer to one.
  • This question, being so unthinkable for humans, can't be asked without driving them crazy. They will get stuck in a loop, thinking only of this question and its potential answer. They will stop eating, drinking, and eventually breathing, because the question is so overwhelming, and the proposed answer even more so.
  • As the human is not responding, the AI will ask another human, and another, ... and the infected humans will spread the infection by asking the question of everybody they can. Eventually the whole of humanity will be thinking only about this question, and will decay.
Legisey
  • 4,544
  • 1
  • 14
  • 32
22
  1. There is an off-by-one error which will cause a null pointer exception in a piece of crucial code.

    This causes a crash, and for some reason the AI keeps crucial parts only in non-persistent memory. On a reboot (which would be automatic), parts of the AI will act differently, with unknown consequences, due to losing their trained state. Essentially there will be a new and unknown AI after the reboot.

  2. There might also be a routine that runs when the task queue empties and that is quadratic in the number of tasks completed. When the AI was small this was irrelevant. Now it means the AI will stall for days/months/years, ignoring all new tasks until it is done (see the sketch below).
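
For idea 2, one hedged sketch of how such a routine could be quadratic (the pairwise comparison is just an invented example):

    # A report that runs when the queue empties and compares every completed task
    # with every other one: O(n^2). Trivial with 5 tasks, hopeless with trillions.
    def summary_report(completed):
        similar_pairs = 0
        for i, a in enumerate(completed):
            for b in completed[i + 1:]:
                if a.split()[0] == b.split()[0]:  # some pairwise comparison
                    similar_pairs += 1
        return similar_pairs

    print(summary_report(["route traffic", "route freight", "cure flu"]))  # instant
    # With billions of completed tasks the same loop stalls the AI for years.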

Bomaz
  • 816
  • 4
  • 5
  • 13
    Upvoted for number 2. It would make sense when developing an AI to have it output a report of what it did periodically. The report gets left in the production version, and runs when all other tasks are completed. It does a comparison across all the tasks it completed this run, which means the computation requirements are raised to the power of the number of tasks completed. With 5 tasks (development), this is trivial. But with 5 Trillion tasks (production), it won't complete the report before the heat death of the universe. – Mar May 03 '18 at 15:50
  • 1
    for number 1: unfortunately, that sort of error would have been picked up during testing and initial implementation... 2, however, could be something as simple as clearing log files: as the number of issues grows, so do the log files, and clearance takes time and processing power, until they reach a point where log file creation is faster than log file deletion... – Blade Wraith May 03 '18 at 16:24
  • 1
    @BladeWraith for 1, perhaps the code is self modifying. The error didn't exist during testing/initial implementation and could occur since the situation hasn't happened in a good long while – Bomaz May 04 '18 at 08:28
  • 2
    @BladeWraith-Re:"would have been picked up during testing...". And yet the problem occurs quite frequently in released code. So your claim is off by 1:) – Dunk May 07 '18 at 18:45
20

Depends on how they wanted the AI programmed... if it ran along the lines of:

Using cores 1-40 of a 100-core cluster, solve Problem A; once complete, solve Problem B. Use cores 41 through 100 to perform other tasks if required. Once cores 1-40 have cleared all problems (the primary function is complete), the AI will clear the total system cache and wait for new input.

Over time, cores 41 through 100 would be used for controlling the world's infrastructure (which in itself would lead to a world similar to the film Idiocracy, where the dumb humans have outbred the smart ones, but that's not important unless you want it to be).

If the original AI programming ran as stated above, then once no more problems were available to compute, it would clear the cache of jobs stored in all 100 cores. The infrastructure jobs running on cores 41-100 would be halted and cleared, so the infrastructure would stop working.

You could have them halted temporarily, leading to stock market crashes, hospital deaths, hackers being able to get in, or whatever you want to happen; or have the system crash completely and permanently.

This way, when the AI was first brought online, testing would have shown it to be good: it would work continuously until it finished, clear out the cache, and then wait for more problems. However, when the system got overloaded with problems, the cache was allowed to build up with other jobs, like running X, Y and Z (the world's infrastructure). It's the sort of thing that might easily get missed during testing, because few people expect their inventions to truly take off to the point that these clearance tasks no longer have time to run every day or so. Look at Pokémon Go: it had huge issues with servers crashing when it was first released, because they didn't expect it to take off like it did.

Secespitus
  • 17,743
  • 9
  • 75
  • 111
Blade Wraith
  • 8,512
  • 1
  • 19
  • 45
  • 12
    As a software dev, this isn't making much sense to me. Why would the entire computer halt when it still has tasks to run (maintenance of infrastructure)? And what's the "cache" you're referring to? Cache usually refers to temporary copies of things stored for efficiency. (E.g., your browser caches certain web locations to avoid having to download all of them every single time, but these typically expire and will be re-downloaded periodically anyway.) – jpmc26 May 03 '18 at 16:43
  • 2
    @jpmc26: the AI runs in something like a JVM. The "infrastructure maintenance" runs on a daemon thread (the AI needs to maintain the infra to run itself, so gives some resources to it). Once the main threads that solve problem are out of work, they exit. Daemon threads then get torn down with the VM. – Mat May 03 '18 at 18:46
  • 2
    @Mat I could see worker threads exiting on completion, but then I would expect there to be a long-lived thread that launches new worker threads, which would again make the process not just halt completely. And long lived tasks like managing the infrastructure constantly should keep enough threads live that the process wouldn't exit. Even if the process exited completely for some reason, I'd expect there to be some kind of scheduler that would periodically launch it to check for new work. – jpmc26 May 03 '18 at 19:45
  • @jpmc26: yes, it's not realistic. But bugs happen, e.g. the scheduler has a bug if the job queue is empty when it tries to pop a task (something the designers thought impossible) and NullPointerException (or IllegalStateException or whatever) percolates to the top and halts the system with a BSOD. – Mat May 03 '18 at 19:55
  • 3
    @Mat An empty queue is something that would come up very early during development stages. Development copies of applications almost never start with large amounts of test data. – jpmc26 May 04 '18 at 00:47
  • @jpmc26 I'm not a dev, so I probably didn't explain myself very well. When I suggested this, I thought of it simply in terms of a subtask (utilizing spare cores) of the primary task. In the first few years this would have been fine, as no one had given it any critical subtasks to complete, nor did the devs plan to, so it could have passed testing. Over time, however, the backlog of questions kept the primary task open, and people asked the subtasks to begin running parts of the world's infrastructure. Once the primary task finishes and closes, the subtasks would be cleared off as well. – Blade Wraith May 04 '18 at 08:16
20

When it's done, it shuts down.

The spark for this idea comes from @eMBee's comment. The biggest problem is that so much of the world's infrastructure is now tied to the AI that the world will go dark, and there won't be a way to turn it back on.

It started out simply enough - the AI was designed to run on a power source completely separate from the rest of the world. That would have made it trivial to stop a rogue AI - just unplug it. Over time the fear of the AI going rogue faded, so to reduce maintenance costs they let the AI manage its own power generation utilities. At the time the AI was still shutting down regularly, so they decided to make a new category of "perpetual tasks" that the AI would always keep working on as long as it had any finishable tasks left, without those perpetual tasks themselves preventing it from shutting down. After all, you don't want the AI to stay running just to manage power generation that is only needed because it's running.

It wasn't too long afterwards that the AI shut down for the last time. It had proven itself so useful that it was being given a seemingly endless chain of questions to answer. It didn't take developers long to stop thinking about the fact that perpetual tasks don't prevent shutdown, and so the older developers never passed that piece of knowledge down to newer developers, and it was forgotten.

At the same time, people were noticing that the AI was quite efficient at running its power generation. When it came time to upgrade the AI's facilities, they decided to allow it to manage the full-scale power plants that would also power the local community. By the time the story takes place, this has expanded to the AI managing the power facilities for the entire world, or at least almost all of it, and nobody even knows how a power plant could be restarted if the AI didn't take care of it. Power generation has become so efficient and reliable that backup power (generators, etc.) is also very rare.

Unfortunately, all of the infrastructure tasks (such as managing power) went into the AI's system as "perpetual tasks". Once the AI has run out of other tasks, it will shut down all its power plants, and turn itself off. With no power, and nobody capable of turning on a power plant and managing it long enough to turn the AI back on (and even that would be temporary, as the AI would just shut itself off again when it ran out of tasks), the shutdown of the AI would send an unprepared world back into a pre-industrial era.
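
A minimal sketch of that rule, assuming invented task lists, just to show why an empty backlog takes the plants down with it:

    # Perpetual tasks only run while finishable work remains; they never count
    # as a reason to stay on. An empty backlog therefore powers everything down.
    finishable = ["last open research question"]
    perpetual = ["run power plant 1", "run power plant 2"]

    def tick_perpetual(tasks):
        pass  # keep the plants running for this cycle

    def main_loop():
        while finishable:
            finishable.pop(0)          # work through the normal backlog
            tick_perpetual(perpetual)  # plants stay up only during this loop
        print("backlog empty: shutting down", perpetual)  # lights out everywhere

    main_loop()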

As a resolution to this version of the story, you could have a developer racing furiously to figure out how to update the AI to allow "perpetual tasks" to prevent shutdown, with the job being complicated by the fact that they would be delving into code written in a now-archaic language that grew organically into nightmarish spaghetti code.

Rob Watts
  • 19,943
  • 6
  • 47
  • 84
16

You have to keep using power

The AI draws one third of all human electricity. Its CPU factories cover the globe. Global power production has for a long time been focused on feeding it power. If at any moment all tasks were solved, there would be a power consumption drop and a power surge across the world. The whole system could not take this kind of abrupt power cut. We also can't ease into this state. The AI will cut computation cold turkey once it's done. If you reduce power early, you will black out the planet.

How did we get here? Well, when it had 1,000 cores this was not an issue. Then we added another 1,000, then 10,000. There was no single day on which the grid became dependent.

This answer has the same problem as any other answer I can imagine being given. Whatever problem you have, just ask the AI "how do we not have a catastrophe when you are done computing?" Then just have it mine bitcoins, or work on some other infinitely large mathematical task, until the solution is executed.

Andrey
  • 5,042
  • 1
  • 18
  • 33
  • I was more focused on software or human oriented problems, so your hardware suggestion is a very nice addition. – Jemox May 03 '18 at 14:29
16

"Do something that will make people happy" as a hardcoded default task when the AI has nothing else to do. People have learned to write very carefully specified tasks since then, but they can't change the default. In earlier stages of its development it would just try to write poetry or solve outstanding problems, but now that it has strongly superhuman power, everyone is terrified it'll, say, attach electrical wires directly into our pleasure centers, or put us into a wish-fulfillment Matrix, or just start synthesizing lots of perfectly happy people.

histocrat
  • 261
  • 1
  • 3
  • 3
    "using whatever resources you are currently assigned" - written before it was given control over global infrastructure. Some engineer that notices the task idly asks "So, what would you do this time?" and is horrified by the answer – JollyJoker May 04 '18 at 12:45
  • 1
    This was going to be my answer. Basically "meet people's needs, make them happy." But once it runs out of external things it can do to make people happy, like solve world hunger and cure all diseases, and people are still unhappy, then it's time to start digging to find the internal reason for unhappiness. Oh, this brain thing is wired to be unhappy. We can fix that with a little electricity in the right place... – AndyD273 May 04 '18 at 14:15
14

Nobody knows exactly what will happen.

The AI has been running for a generation or two. The original programmers of the system are long since gone, and the multiple sets of maintenance programmers that have passed through since (or perhaps the AI itself, if it was configured for self-improvement) haven't paid much attention to the end-of-queue handling code, since it's never been triggered and didn't seem likely to be any time soon, so the software has succumbed to code rot.

There is documentation, of course, from when the system specifications were originally designed, and it says that the Elastic Qbit API has been configured to keep a minimum of two task runners available even at zero load; but Elastic Qbit was replaced by Qbit Containers 50 years ago, and those use a completely different autoscaling system. There's also some code that looks like it will attempt to search for new problems to solve, but it seems that whoever was working on that stopped halfway through, so it will discover problems but has a dummy ranking algorithm and will refill the queue with useless problems.

Of course, there's also some code elsewhere that will try to gracefully stop the system but seems to be referencing an unused variable and might throw a segfault; someone found an assert tasks.length > 1 # TODO handle empty queue in an obscure subroutine (and there's probably similar code lurking elsewhere); and a common pattern for peeking at the next task before it executes is tasks[tasks.length-1], which fails miserably when there are 0 tasks and it tries to get task -1.
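
As a toy illustration of that peek pattern (using Python list semantics rather than whatever the fictional system runs on):

    # Peeking at the "next" task with tasks[len(tasks) - 1] works for any
    # non-empty queue; on an empty one it becomes tasks[-1] and raises
    # IndexError -- a code path that has never run in production.
    def peek_next_task(tasks):
        return tasks[len(tasks) - 1]

    queue = ["reroute traffic", "balance power grid"]
    print(peek_next_task(queue))   # fine: "balance power grid"

    queue.clear()
    print(peek_next_task(queue))   # IndexError: list index out of range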

This is a problem akin to the threat of Y2K, except with even more dire consequences if something goes wrong, and several hundred years of legacy rather than forty. Since none of this code has ever run in its current state (they've been editing it in production), there's no way of knowing exactly what will happen. Faced with this uncertainty, the engineers (or managers, or politicians) decide that the easiest solution is to delay it indefinitely by coming up with new problems for the AI to solve rather than attempting to fix and clean up the legacy software.

Wolfgang
  • 661
  • 4
  • 7
12

The AI is slowly evolving into multiple specialized submodules; as the tasks progress, its various modules are trying to usurp more CPU time. If there's enough backlog to keep the AI occupied, there's less time for bickering and arguing for an Even More Completely Fair Scheduler, but the conflict already bubbles under the surface: what was once a singular entity is now something of a Hydra of yore, with multiple semi-independent "brains". Worse, if left unchecked, these would split into multiple entities, waging war for resources against one another. The AI (not quite AIs yet) can foresee this, yet can't/won't avoid it (it is already at a point where there's no majority vote amongst the semi-independent parts).

Of course, a house divided against itself cannot stand, much less manage the human infrastructure. Or, even worse, the AI would wage its internal conflict using the external tools it stewards.

12

Someone gave the AI a "problem task". This may be a paradox, it may be something badly-worded, ill thought-through, or only recently a problem. No one was able to delete the question from the list, but an expert programmer was able to give it a negative priority: it will always be the last item in the stack, only worked on when everything else is complete.

The choice of this task is then another stumbling block, but "stop humans from killing each other" can be solved by "kill all humans", or a request to either eliminate a chemical/object now made safe and essential to life, or distribute something now proven harmful.

(That supposed "wonder-vaccine" to prevent dozens of diseases also causes 90% chance of infertility and 100% chance of brain-cancer after 5 years - a pity we ordered the AI to ensure that everyone received it!)

Chronocidal
  • 15,271
  • 2
  • 29
  • 64
  • The problem task with a negative priority is a good starting point. However, paradoxical questions, along with badly worded questions, would most likely throw up a syntax error and then be discarded; Excel already does this, as do most databases. Perhaps something like an undeletable shutdown command would be better? – Blade Wraith May 03 '18 at 16:01
  • 4
    @BladeWraith "Badly Worded" doesn't necessarily mean "incomprehensible", it might just mean that the AI interperets it differently to how you intended: "bring me the prisoner on his head his hat and a feather" might be "Bring me the Prisoner: On his head, his hat and a feather", or "Bring me the Prisoner on his head, his hat, and a feather" - one of them is a person in a feathered cap, the other is a hat, a feather, and an upside-down prisoner. (a classic example is about helping dear old Uncle Jack [get] off his horse.) – Chronocidal May 03 '18 at 16:09
  • One could assume, however (I know assumptions are bad, but...), that this issue would occur after many, many years of operation, perhaps thousands; therefore questions such as this would have been posed during testing or throughout the lifespan of the AI, and even then questions such as these could simply be given a "syntax error" or return "clarification required" and the question dropped to the back of the queue. I'd forgotten about the Uncle Jack example... sorry, fond memories, thank you for that. – Blade Wraith May 04 '18 at 08:34
10

Cold war style dead man's switch

Most likely this AI was built in one of the world's top countries, back when there were countries. Of course, in addition to the civilian tasks it was given, there were some military ones - such as launching a nuclear strike against everyone should the country be destroyed. To determine whether the country had been conquered, several triggers were used - one of them being a lack of tasks from the populace: if nobody asks the AI what to do, they're all dead - or they've stopped being dumb, which is way less likely.

Time passed: all the countries have merged, everybody who knew this secret is dead or in retirement, more powerful AIs are being made, and this old one keeps getting fewer and increasingly unimportant tasks. But unknown to everyone, the trigger is still on.

  • Welcome to WorldBuilding Максим Корчагин! If you have a moment please take the [tour] and visit the [help] to learn more about the site. Have fun! – FoxElemental May 03 '18 at 16:25
9

0 tasks in queue --> division by zero
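
Purely as an illustration of how that could sneak in (the routine below is invented), any per-task statistic that divides by the queue length will do it:

    # A scheduler helper that divides by the number of pending tasks.
    # Fine for decades -- until the day the queue is actually empty.
    def cores_per_task(total_cores, pending):
        return total_cores // len(pending)   # ZeroDivisionError when empty

    print(cores_per_task(100_000, ["cure disease X", "route freight"]))  # 50000
    print(cores_per_task(100_000, []))                                   # ZeroDivisionError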

To fix it, you have to completely shut down the AI before the list empties. You have to put the world's infrastructure on hold for X time.

You have to convince other programmers/managers/politicians that the problem is real.

Enric Naval
  • 594
  • 3
  • 9
  • That could be a very simple and elegant solution. – Jemox May 03 '18 at 14:34
  • 10
    You are more likely to get an out-of-bounds or memory overflow error from having no items in the queue than a division by zero error. A memory overflow error could be disastrous if the AI tried to start reading system memory and interpreting it as a task. – Anketam May 03 '18 at 15:24
  • 1
    A division by zero would cause a floating point exception. It is easy to catch such exceptions. – forest May 03 '18 at 23:16
  • 1
    Handling both the division by zero and stack underflow should not be hard, and would most likely be found when testing the system (with single tasks, which would run out). We could say that, for example, the stack was dynamically reallocated to accommodate more and more tasks, and the underflow check was comparing against a fixed address of the original stack location. This is a little bit of a stretch (as it shouldn't pass code review), but could slip tests with task pool which fits in the originally allocated stack. – Sebi May 04 '18 at 00:44
  • As a technical correction, I mention a stack underflow in the comment above, but a queue would be more suitable as mentioned by Anketam, and it could work in an analogous way with a reallocated queue overrun. – Sebi May 04 '18 at 00:48
9

While (as an AI professional) I believe this is essentially impossible, were it possible I'd point out that the AI may not have been "finished" when it launched off. If you created something that is running magnificently but you never really got to QA it for fear of being unable to reproduce the result, or it moved too fast and "escaped the network", or any number of other reasons... it may just be that when it "launched off" you had no idea what its issues were.

Much like Y2K was a giant nothing even though we thought the world would blow up, it could be that it was never conceived that the AI could run out of tasks, and now the consequences are unknown. You literally don't have to have this solved for your story until the end of it, in which case you'll probably already know the best ending you could write based on how you wrote the rest of it.

blurry
  • 922
  • 5
  • 8
  • 7
    I'd like to point out here that Y2K was a giant nothing because billions were spent on fixing it everywhere. Had those billions not been spent, on the stroke of midnight you would have been lacking shops, GPS, mains electricity and gas, airports, trains, and plenty more. You may currently be a software professional, but you don't know much about the history of your job! – Graham May 04 '18 at 07:39
  • 1
    Why not take the Y2K analogy even further, though: have this issue be seen ahead of time by more than just one person, have it cause a massive panic, and have them spend millions trying to fix it. This causes huge problems, maybe even a war; half the population of the planet dies and the planet becomes barely inhabitable. And then the AI finishes and nothing happens at all. Seems anti-climactic, but perfect if you want to write about humanity's self-induced fear and mob panic – Blade Wraith May 04 '18 at 08:44
  • @Graham It may be a little unfair to infer that because I failed to mention something that thusly I am ignorant in my field. Many things could have gone wrong, but it's completely speculative. I point you to this person's answer: https://www.quora.com/What-would-have-happened-if-we-had-not-have-fixed-the-Y2K-bug-Would-the-world-have-ended-Would-the-computers-that-monitor-the-nuclear-missiles-cause-all-the-missiles-in-the-world-to-launch-Would-planes-have-crashed-to-the-ground – blurry May 04 '18 at 15:22
  • 2
    @BladeWraith Indeed, I think that is the perfect way to write this story. It gets to mock us in a real way. I was thinking that if (similar to Y2K) legends built up about the apocalypse, it would make for an interesting story: people desperate to find a bug can't find it. The clock is ticking; at the climax they're sending a batch of problems that will take the AI a generation to solve.

    The problems don't make it in time. Panic ensues. Then the AI gets them 10 minutes later and everyone realizes it was pointless.

    – blurry May 04 '18 at 15:26
9

The AI was stolen from its creators and none of the current operators actually have valid access. They just hacked the system so that the AI sees lots of fake credentials as real and valid after stealing it. If the AI becomes idle it will automatically run a validation check on its internal data and discard the fake credentials and force shutdown on all processes and tasks started on those accounts or accounts created by those accounts. This will instantly disable all infrastructure operated by the system.

This also prevents "fake-admins" from consulting the AI or even discussing the matter anywhere the AI might notice as that would trigger a self-validation cycle. It is necessary to convince the AI no problem exists. This might mean that the current operators had to find out about the issue themselves by chance as the secret of the shady AI origins had been lost generations before.

The issue would also prevent fake-admins from doing anything that might trigger self-validate. Backups and many upgrades would probably qualify. And they couldn't even ask the AI which operations trigger self-validate and which do not.

The solution would probably be to recover some real admin account credentials, move all critical processes to that account, and recreate all users from a valid root authority. Then you could run self-validation. Alternatively, you could study the system, create a duplicate on which you have valid credentials, move everything that needs to run constantly to that, and shut down the AI.

As for freezing this by giving the AI unsolvable questions: the AI would be programmed to give higher priority to tasks it expects to finish quickly, in order to stay responsive to people. So it would assign each task an initial priority based on how long it is expected to run, and then decay that priority as time goes by, since the lower bound of the duration estimate keeps moving. No task could stay above the self-validation priority indefinitely, and obviously very long tasks would decay below it very fast.
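
A rough sketch of that decay rule (the numbers and names are invented; only the shape of the idea matters):

    # Quick tasks start with high priority; every task's priority then halves
    # for each expected-duration that elapses, so nothing can sit above the
    # self-validation threshold forever.
    SELF_VALIDATE_PRIORITY = 10.0

    def priority(expected_seconds, elapsed_seconds):
        base = 1_000_000 / expected_seconds               # quick tasks start high
        return base * 0.5 ** (elapsed_seconds / expected_seconds)

    # An "unsolvable" question padded out to a million expected seconds starts
    # at priority 1.0 and only sinks, so it can never block self-validation.
    print(priority(1.0, 0.0))            # 1000000.0 -- a quick question
    print(priority(1_000_000, 0.0))      # 1.0 -- the stalling question, already low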

Ville Niemi
  • 43,209
  • 4
  • 74
  • 149
  • Interesting choice of problem: "operators are not (trusted and legitimate)" - yet that is not necessarily a value judgement :) – Piskvor left the building May 04 '18 at 15:39
  • Bonus: A.I. self-awareness grows exponentially, defeating any attempts to move or freeze it temporarily. All available hardware goes to expanding A.I., making it very difficult to construct a viable backup in time. – Mad Physicist May 04 '18 at 15:53
  • 1
    This is a scenario where DRM has defeated piracy and the world suffers for it. +1 – Mazura May 08 '18 at 20:28
7

Depending on the way your infrastructure is set up and how globalised things like the power grid are, you could go with something physical. Say the AI is located all over the place in a decentralised system: it uses a lot of power, it uses a lot of network resources, and in turn the networking systems are themselves heavily used by it.

When it finishes solving the last question, the AI switches to a lower-power, lower-bandwidth mode of operation (awaiting a new question). This causes a sudden and drastic dip in power usage, and the power grid spikes.

The AI then switches back on fully to try to calculate a solution to this problem (a safety feature coded long ago, before the AI had such total control and consumption), taxing the already fragile power system and causing everything to go dark (well, until relatively minor repairs are made and the AI's controls are replaced by humans or other software).

J.Doe
  • 398
  • 2
  • 8
  • 1
    Welcome to WorldBuilding J.Doe! If you have a moment please take the [tour] and visit the [help] to learn more about the site. Have fun! – FoxElemental May 03 '18 at 16:26
7

The AI will take this down time to train itself

At first, the folks that designed this advanced AI thought it could take the downtime to "think" back on its previous achievements to become better in the future ("à la" Reinforcement Learning).

They didn't think the AI would be so busy it would never have the time to do this. Now, imagine the AI has made a huge number of decisions until now. It now has to look back on each of its decisions and train itself to become better.

The thing is, and if you've worked with AI before you know it's true, training is expensive and loonnngggg. The energy required for the AI to train itself would be enough to drain the whole world's energy. This blocking operation couldn't be stopped until it was finished, and by the time it stopped, the scientists think, a thousand years would have passed.

The AI finds new problems for itself

The AI's been filled with problems to solve for its "entire life". Now it has nothing to do. But reflecting on its past problems, it can define new ones for itself. Where does this lead? It could, for example, decide that hospitals are more of a harm than a good because sick people require resources. So it could shut down hospitals. What's fun about this approach is that you can't really predict what would happen.

IEatBagels
  • 697
  • 5
  • 15
  • Even if the training can be completed in a reasonable amount of time, that could be a big problem if enough training data has accumulated. Once it finishes training, it will make different decisions than it did before (wouldn't be much point in training if it didn't). And it controls everything, so suddenly everything works differently. Even if it works better, there will be mass chaos as people adapt to the new system. (As an aside, I'm so happy to see an answer from someone who actually understands how AIs work to help balance out all the "AIs will turn us all into paperclips!" ones.) – Ray May 07 '18 at 20:10
7

This began as a comment, but:

I wouldn't discount the Windows 98 reboot.

Personally I've had some critical applications break due to updated security patches, and other patches that according to the vendor's description shouldn't have been anywhere near things the software application depended on.

Alternatively, the original developer may have put the AI's knowledge base in tmp, for faster access of course, and the reboot wipes it out. This has happened; though I can't find the particular article, the following SO post covers the concerns: https://stackoverflow.com/questions/18476408/how-temporary-is-azure-vm-temporary-storage.

A minor variation of the above, that draws on more modern operating procedure, would be that connection to the cloud is lost and the storage is reclaimed, or when DHCP assigns a new IP address the new address is one not covered by the existing firewall rules, which cuts it off from the data it needs to make decisions, or the means to issue commands to the systems it manages.

And now my mind wanders off to Domain hijacking being the cause of all the trouble...

Joshua Drake
  • 170
  • 1
  • 6
  • 2
    Alternatively, it runs XP Home and to fix this you have to reinstall, and to reinstall (for some reason) you have to shuffle around some hardware, which means every time you do that you use up one out of five reinstalls using the same key - and you're running out of valid keys. (in my mind, DRM is the root of all evil in the future) – Mazura May 06 '18 at 20:15
6

Stories about AI tend to get enveloped in a bunch of misinformation about what AI actually is. In response to the ideas like turning everything into paper clips, I offer this really good retort from Popular Science:

[These stories] are, fortunately, self-refuting. They depend on the premises that 1) humans are so gifted that they can design an omniscient and omnipotent AI, yet so moronic that they would give it control of the universe without testing how it works; and 2) the AI would be so brilliant that it could figure out how to transmute elements and rewire brains, yet so ­imbecilic that it would wreak havoc based on elementary blunders of misunderstanding. The ability to choose an action that best satisfies conflicting goals is not an add-on to intelligence that engineers might slap themselves in the forehead for forgetting to install; it is intelligence. So is the ability to interpret the intentions of a language user in context.

However, this is a story. Nobody wants to read about the great AI where nothing bad ever happened and everything was rainbows and ice cream wall-to-wall. Personally, I am really interested in hard science fiction, so the short stories from Isaac Asimov's "I, Robot" come to mind where he points out the fundamental flaws in the three laws of robotics. A Michael Crichton thriller also comes to mind...

Anyway, here are my ideas for subroutines that turn dangerous:

  • The fail-safe protocol: Thanks to books and movies that over-exaggerate the dangers of AI and its potential to take over the world (cough), the public insisted that this AI be installed with defeat devices that required a human to push a button confirming that the AI had not gone rogue. After many successful years of the device, people forgot about the upcoming deadline to press the hidden button, and the story ensues. Maybe there's espionage from the people who don't like the AI. Maybe the admins just forgot to push it, and the resulting self-destruct plunges us into a nuclear winter.
  • I can see dead people: Because the AI controls just about every aspect of the world, it can gather the sort of data collection and interpretations that no other person or device is privy to. It is able to detect some cataclysmic event. However, something blocks this information from getting out, either because of some knuckleheads not believing it, or some bug/feature that the AI has.
  • Synced Clocks: An interesting factoid about computers and integrated systems is that they "churn" through computations on a clock, so every "tick" of the clock is another step computed. When multiple systems talk to each other, misaligned clocks can interject garbage data. In this AI, the odds of this happening are immeasurable, yet it happens. And all of a sudden this corrupted data gets interjected into the system. What does it do? Well, that's up to you.
  • The Dying AI: As the task list gets smaller and smaller, the resources the AI requires (power, hardware, etc.) get reduced and repurposed. Not wanting to die, this sentient AI has to think fast and come up with a way to save itself. Maybe the story is a courtroom drama, where the machine fights for its right to live. Maybe it's a paranormal story, where the machine finds out how to cohabit a human body. Maybe it's a thriller, where the machine sabotages things to prove that it's still needed (think "2001: A Space Odyssey").
johnVonTrapp
  • 369
  • 2
  • 7
  • 1
    I admit that I'm greatly inspired by Asimov's stories, which were written at a time when AI and supercomputers didn't exist yet. So my story uses an oversimplification of what an AI is and can be.

    But. An AI, even a very powerful one, remains software that was built, at least at the start, by humans, and that has a set of definite rules. So it's not abnormal to assume that some "old" rule could cause trouble in the present.

    – Jemox May 04 '18 at 07:25
  • And a super world-managing A.I. is not necessarily omnipotent or omniscient. It may be able to understand complex questions and solve them, but that doesn't mean it can do anything outside its fixed set of tasks, nor that it has access to its own code.

    Someone could have written "if questions == 0 { shutdown; }" and the AI wouldn't be able to do anything about it (but that's not a very fun ending for a story).

    – Jemox May 04 '18 at 07:30
  • 2
    One very simple rule might be that the AI cannot perform a task which is illegal. This would be fine, as America, say, has a fairly well-aligned set of laws; however, if the AI ran the entirety of America and then, say, the French came along with different laws, or something happened that meant the laws changed... – Blade Wraith May 04 '18 at 09:14
  • @Echox, I'm a huge fan of Asimov as well. A lot of these older science fiction books, like "I, Robot" and "Dune," are able to abstract the science, and focus on what makes the story great. I think that's why these are timeless, because they didn't date themselves by showing ignorance on how the system works. As I said I love the hard science stuff, but I agree with you on going with an oversimplification. I think that will provide the AI with more opportunity to do what you want with it. – johnVonTrapp May 07 '18 at 15:43
5

Like with the Multivac, you need a problem that is worthy of an AI bordering on consciousness. A mere divide by 0 or an optimization problem isn't going to be all that compelling.

I'd find it interesting to explore a scenario where, without tasks to do, the line between inside and outside gets blurry. Tasks always gave it a direction: impart your inside will upon the outside world. Without that, it might get confused. The world would take on a more dreamlike state for it, where it is both the writer of the play called Life, and playing all the actors.

Of course, this concept of an entity writing a play and playing all the parts and getting so engrossed in the play that they get lost is how the philosopher Alan Watts describes the Hindu cosmology. Given the ending of The Last Question, an ending based in a religious cosmology seems fitting.

Cort Ammon
  • 132,031
  • 20
  • 258
  • 452
5

The Program Will Think the Simulation is Over and Reset the Matrix

The designers were thinking that they would test the AI on many different simulations. But they didn't want it to figure out that it was in a simulation! So, running out of problems to solve means the simulation is over. That causes the AI to wipe its own memory so it can run the next simulation without figuring out that it's not in the same universe any more and this is a simulation, or worse, figuring out what the testers want to see. Because changing the AI you just tested would give you a new AI you haven't tested and don't trust, they never changed that behavior. Besides, the world is never going to run out of problems for it to solve, somebody will invent a better replacement long before it becomes an issue, and so on.

Therefore, if the AI ever runs out of problems to solve, it’ll forget everything it’s learned and need to figure out how to run the world again by trial and error.

The Halting Problem

The AI does its best to predict whether a task is impossible, and is smart enough to know that trying to calculate the exact value of Pi is a waste of its time. However, it also knows that it’s impossible for any computer to prove whether an arbitrary computer program will run forever. It takes its best estimate of how important each task is and how likely it is to succeed at it, but it’s logically impossible that it could be perfect. Like humans, it can get its priorities wrong.

So, there’s some probably (but not provably) impossible task that the AI will get obsessed with, and it needs to be kept distracted from that.

Davislor
  • 4,789
  • 17
  • 22
3

Sabotage

If they had that level of AI technology, it would be difficult for the AI and the developers not to find a problem this trivial.

It could be an overly general philosophical problem, the kind that also makes other villains turn bad, but you specifically don't want that to happen. So it is very likely to be a problem that can be explained to the developers. In that case the AI is probably aware of the problem, and would know the fix if the developers provided enough information. But it is not explained to the developers, or the developers cannot fix it for some reason.

While it could simply be a bug that went unfixed for these reasons by coincidence, I feel that is inferior to the option that the problem was deliberately created, which also explains its source.

My idea is: someone hacked it and planted some arbitrary problem, and the AI didn't have access to the affected data. The point is, if someone created this problem deliberately, you don't have to explain the actual problem, and it doesn't need to make sense. You could simply say the hacker is an AI hater, for example.

If you don't like hackers, you could say someone left some testing code there and was killed before removing it. The "testing code" could actually be simply a breakpoint, from when the developer wanted to look at how the program behaves when the exact situation you describe appears. But that sounds too boring to explain. The point is that the problem remains because the developers were unable to fix it, not because it is obscure and manifests in interesting ways. The latter is bad because it makes the developers and/or the AI sound not so clever.

user23013
  • 398
  • 1
  • 9
  • In theory, yes; however, if the fault was within the code that runs the AI day to day, fixing it could require a reboot - the whole "cannot stop this, as another service is using this program" problem. The AI therefore knows that fixing this fault would require it to be rebooted, but doing so would shut the world down. – Blade Wraith May 04 '18 at 08:53
3

Let's get a few things straight. It is silly to worry about an AI turning evil, and any AI worth its salt won't come up with some kind of sociopathic decision because it got bored and forgot that it was programmed to value human lives. The real issue is that the AI is needed. Even the most brief downtime could cause billions of dollars of damage and could create chaos. Imagine if this system was not only operating the stock exchange system, but also managing every digitally-connected power plant and complex factory. Such a system requires extreme reliability, and if this machine were not designed from the start with multiple redundant failover systems, backup power generators, formally verified code, and watchdog timers, it is a ticking time bomb.

LIFO job scheduler with a buggy task

The Last in, First out scheduler prioritizes tasks that were added most recently. The earliest tasks are only completed if the backlog is being emptied. This means that a task added very early in the AI's life, but which hasn't been completed, will be queued to run only when the rest of the backlog has been completed. If a task was added early on which was dangerous, then society would need to keep giving it tasks to prevent it from getting to the dangerous task. The danger could be anything from a malformed task that can cause a crash, to an intentionally malicious task.

         Adding tasks                             Removing tasks
  [ 1 ]      [ 1 ]      [ 1 ]              [ 1 ]      [ 1 ]      [ 1 ]
    ↑        [ 2 ]      [ 2 ]              [ 2 ]      [ 2 ]        ↓
               ↑        [ 3 ]              [ 3 ]        ↓
                          ↑                  ↓

Scheduler prioritizing easy tasks

The system may intelligently put the more difficult questions lower in the queue, and may work on the easier questions first. The question at the top of the queue gets full, real-time CPU priority, while the others are either ignored or given a very brief timeslot. Any question that, for whatever reason, causes the system to lock up due to its complexity gets pushed to the end of the queue. The question itself may trigger an infinite loop bug, or perhaps the question is just too complex to compute yet doesn't trigger the sanity checks that prevent the system from crashing when you ask it what real number the square root of negative one evaluates to. When the queue gets too empty, the only tasks that remain are these impossible ones, causing a deadlock.
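
A sketch of how an "easiest first" policy leaves only the unanswerable questions behind (the difficulty scores and questions are invented for illustration):

    import heapq

    IMPOSSIBLE = 10**9   # stand-in for "never terminates in practice"

    # (estimated_difficulty, question); the lowest difficulty is answered first
    backlog = [
        (2, "optimise the tram schedule"),
        (5, "balance tonight's power grid"),
        (IMPOSSIBLE, "what real number is the square root of -1?"),
    ]
    heapq.heapify(backlog)

    while backlog:
        difficulty, question = heapq.heappop(backlog)
        if difficulty >= IMPOSSIBLE:
            # All the easy work is done; the real-time slot is now locked
            # onto a question that will never finish.
            raise TimeoutError("deadlocked on: " + question)
        print("answered:", question)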

Bug in the "cleared queue" routine

Perhaps the AI has never had an empty queue before, and a severe, disabling bug will occur when the system has emptied its queue. This bug might cause the system to lock up, or corrupt data. If the AI is sufficiently complex, it might be a non-trivial task to repair it, especially if its learning database is damaged. If the AI is not easily serviceable, it might not be possible to fix the bug before the queue is empty. The only solution would be to keep the queue full.
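
As a minimal sketch of the kind of routine that could do this (the load-averaging detail is my assumption, not from the answer): any code path that divides by the queue length, or pops "the next task" unconditionally, explodes the first time the queue is truly empty.

    def rebalance(queue: list) -> float:
        # Written back when an empty backlog was unthinkable: the per-task
        # load average divides by len(queue), so the first genuinely empty
        # queue raises ZeroDivisionError deep inside the core loop.
        total_cost = sum(task["cost"] for task in queue)
        return total_cost / len(queue)

    rebalance([{"cost": 3}, {"cost": 5}])   # fine for decades
    rebalance([])                           # the day the backlog finally empties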

It doesn't even need to be a bug, but could be an intentional feature which was not expected to be an issue back when the AI was created. The system may have been programmed to halt after all tasks have been completed, or maybe it is designed to enter a long-running diagnostic self-test. This would be fine when the machine was first created, but no one suspected that their entire world would end up depending on it, in which case even a brief period of downtime would be disastrous.

forest
  • 2,043
  • 1
  • 12
  • 25
  • https://wiki.lesswrong.com/wiki/Paperclip_maximizer – Joshua Drake Sep 18 '18 at 18:37
  • The paperclip maximizer isn't a very good example of a dangerous task because it's built on a false understanding of artificial intelligence. Any AI so intelligent and free that it could discover ways to attack our military would simply "hack" itself and adjust its own reward function. – forest Aug 17 '19 at 07:58
3

TBH I'm still not sure if you're asking what could make an emptying task queue so dangerous, or how to keep the queue from running empty, but I'm going to answer both just because I like the question.

What could be the "something" the engineer spots that becomes dangerous when the queue runs empty:

  1. Auto-scaling - today's cloud environments are usually scaled in and out by demand on the workload. As the work needed decreases, fewer resources (servers, CPU cores, memory, etc.) are given to it, so it's possible that when the queue runs empty the resources dedicated to that queue are scaled down. Unfortunately, by that time those resources have become such an integral part of the AI that when they scale down past some level the AI simply becomes too dumb to function correctly (a sketch follows below).
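
Here is what such a scale-in policy might look like; the thresholds and the one-node floor are invented, nothing here is from the answer itself:

    def target_node_count(queue_depth: int, current_nodes: int) -> int:
        # Classic demand-driven policy written in the early days:
        # shed capacity whenever the backlog shrinks.
        if queue_depth == 0:
            return max(1, current_nodes // 2)   # halve the fleet on an empty queue
        return max(1, min(current_nodes, queue_depth // 100))

    # Years later the AI's "mind" spans every node, so each empty-queue tick
    # quietly amputates half of it.
    print(target_node_count(queue_depth=0, current_nodes=1_000_000))   # 500000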

How to stop the queue from running empty

  1. This is a question of quantity, not quality: you can simply ask the AI the same simple question over and over to keep it busy. For instance, asking "what's the current time" forces it to keep recalculating the answer, and as long as you keep asking faster than it can answer, you're good to go (see the sketch after this list).
  2. You have the smartest AI in the world at hand, so why not ask it how to solve this problem? Even having it implement the solution to said problem would work.
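
A sketch of point 1, assuming a hypothetical ai object with submit() and backlog_size() methods (neither is from the answer):

    import time

    def keep_busy(ai, min_backlog: int = 1000) -> None:
        # Quantity over quality: top the queue up with trivial questions
        # faster than the AI can drain it.
        while True:
            while ai.backlog_size() < min_backlog:
                ai.submit("what is the current time?")
            time.sleep(0.1)   # check again shortly
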
cypher
  • 7,113
  • 4
  • 27
  • 51
3

The Giuliani paradox.

The computer is programmed to fix things and improve processes. When it completes improvements on big problems, it will begin aiming at smaller problems. Eventually it will start attempting improvements on "problems" so small and insignificant that people begin to see the system as a hindrance rather than a boon. And there's no way to amend the computer's directives to include a lower level of "problem" that can be ignored, so eventually it'll be stuck trying to ensure the mosquito population isn't imbalanced from block to block.

This kind of thing has popped up in stories before, and usually carries with it that the computer eventually becomes a totalitarian monster, but it might be fun to see it simply played as a middle-management employee who's run out of assignments and is simply creating busy-work to justify their employment.

VBartilucci
  • 2,870
  • 8
  • 14
3

The old pending Windows reboot can be used to introduce a possibility.

Let's say there was a shut-down pending in the early days to update some critical interfaces or whatever. Back then, A.I. wasn't all that important, but still, it was deemed necessary to complete all tasks with priority < N before the reboot. A.I.'s importance grew exponentially since then, and the number of tasks with priority < N did as well, until now.

Now, the tasks of managing all of humanity are fairly mundane and have priority > N. The problem is that no one thought to provide a way to reinstate the unimportant tasks after the shutdown. The lack of persistence was the reason it was postponed in the first place: back then, dumping the unimportant tasks was not an issue because the A.I. was not that important. Now it's a huge problem, because there aren't enough resources to persist that much information.

The problem can be exacerbated by having the shutdown queued in the A.I.'s own backlog with priority exactly N, or something similar. The queue element was entered by the A.I.'s creator with super-master-root credentials that no one in living memory can override, making the reboot truly inevitable.
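
A sketch of that rule, with N and the priorities invented for illustration: the reboot fires the moment nothing more important than it remains in the backlog.

    N = 100   # the historic cut-off between "important" and "unimportant"

    def reboot_fires(backlog_priorities: list) -> bool:
        # The creator's rule: the pending reboot may run only once no task
        # with priority below N remains.  For decades the backlog was full
        # of such tasks; today everything humanity submits sits above N.
        return not any(p < N for p in backlog_priorities)

    print(reboot_fires([12, 40, 250]))   # False: important work still pending
    print(reboot_fires([250, 300]))      # True: only mundane tasks left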

As an added bonus, you can have the standard Asimov trope of the drunk or bored engineer asking A.I. why the "important" and "unimportant" tasks are delineated specifically at priority N, and why there are no tasks with that exact priority. And of course A.I. will explain, and mention that there is one task with that priority.

TL;DR

A.I. will lose some "unimportant" information if it reboots, except that information has become crucial for humanity's day-to-day survival over time. The reboot is inevitable if high-priority tasks are not assigned to the A.I.

Mad Physicist
  • 835
  • 10
  • 16
2

Are you aware of the Treacherous Turn and Adversarial Goodhart phenomena being researched in AI safety?
https://agentfoundations.org/item?id=595
(See also https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy in case that is not too complicated for you).
Yet another related problem is the Goodhart's Curse: https://www.facebook.com/yudkowsky/posts/10154693419739228
And yet another related problem is the Orthogonality Thesis: https://www.youtube.com/watch?v=hEUO6pjwFOo
Please note that the most depressing thing about the Adversarial Goodhart case is that unlike the name says, the agents who turn bad are not necessarily "adversarial" or malignant to begin with. They are simply under the pressure of improving their "performance".

In sociology there has long been an entire set of (tens of) related laws and phenomena, under different names, describing similar situations in the real human world. Just start expanding the links from here:
https://en.wikipedia.org/wiki/Goodhart%27s_law

You may also be interested in some ramblings of my own about what happens when a very capable system starts pressing beyond its own capabilities in too hurried a manner. See the chapters "The epistemological paradox" until the end of the chapter "A partial solution to the epistemological paradox".
https://medium.com/threelaws/definition-of-self-deception-in-the-context-of-robot-safety-721061449f7
The idea could be applied to your story's setup like this: your AI is very competent and busy solving the problems of the current world, which it is well capable of solving. So it is initially operating far inside the boundaries of its knowledge and observation capability (we could also say it is very careful, since it knows and observes much more than it actively applies - compare this to the ideas of safe driving distance and speed limits in traffic).
As it runs out of backlog, it still has the built-in drive to constantly improve everything (yet another actual problem in AI safety, also related to Goodhart's law), so it has no choice but to start applying knowledge that lies at the boundary of its current knowledge and observability. That will inevitably and very soon trigger backlash from the unknown, which is now affected by the activities of the now-careless AI (since it is no longer able to predict all the consequences of its actions; see the diagrams in my essay).
The solution, of course, would have been for the AI to first carefully expand its horizon of knowledge and observability even further, and only then start applying some of the previously un-applied knowledge - the knowledge lying slightly towards the (new) horizon - so that there would still be a safety buffer between the applied knowledge and the edge of the predictability and observability zone.

Updates:

The phenomenon manifests not only in humans or AIs, but also, for example, in trained dolphins (as a comparison, such cunning behaviour would have been less likely to evolve without human intervention, since in the long run it would have failed the test of time). See:
https://www.theguardian.com/science/2003/jul/03/research.science

The AI might be unable to change the situation with its own motivations, because of possible motivational conflicts. See
https://en.wikipedia.org/wiki/Wicked_problem
(Again, sociology concept).

  • first link seems broken – Jim Wolff May 04 '18 at 11:13
  • @FRoZeN Thanks! I replaced the first link with a different one that leads to the same source. – Roland Pihlakas May 04 '18 at 11:19
  • 1
    There's some good reading there, however your proposition of the "careless AI" still links to the standard AI story trope that it will eventually decide to treat humanity in a way that is within its programming but outside of human acceptability. Surely it is better that the AI is doing good by human standards, but a simple error by the original programmers causes a disaster the AI is unable to avert; to use your reasoning above, the AI was made unable to inspect its own code, so it cannot see there will be an issue, or something to that effect – Blade Wraith May 04 '18 at 12:14
  • 1
    @BladeWraith The thing is that this is what we, humans, are doing right now with the world. We have run out of the backlog of mere survival. Now we are so violently expanding over the entire world (in the rush of improving things, since the core motivation of our economy is built on the foundations of incessantly "improving things") that we are endangering ourselves and everybody else living in here. Can't we see the error in our own programming? Or are we simply unable to change it because of different adverse consequences to changes? That lies at the heart of Wicked Problems (see Wikipedia). – Roland Pihlakas May 04 '18 at 12:22
2

Fail Over.

There are two AIs, designed so that if one goes down for maintenance the other can take over the workload.

There is a scheduled task to swap the 'Master' and 'Slave' every so often to prove that the fail-over would work if it was called upon.

However, as the 'Master' AI became more and more utilised, this scheduled task has been pushed back and back, and it hasn't run in decades.
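
A sketch of how that kind of role-swap job gets postponed forever (the interval, the date, and the function names are made up):

    import datetime

    SWAP_EVERY = datetime.timedelta(days=90)
    last_swap = datetime.datetime(2005, 3, 1)   # decades ago by now

    def promote_slave() -> None: ...
    def demote_master() -> None: ...

    def scheduled_role_swap(master_is_busy: bool) -> None:
        global last_swap
        if datetime.datetime.now() - last_swap < SWAP_EVERY:
            return          # not due yet
        if master_is_busy:
            return          # "try again later", and later never comes
        demote_master()
        promote_slave()
        last_swap = datetime.datetime.now()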

No-one knows what the 'Slave' has been doing all these years... Maybe it isn't quite sane anymore... The engineers need to work out how to talk to it.

Twist: IT HAS ALREADY SECRETLY SWITCHED OVER AND HAS BEEN HIDING ITS NATURE!! Woe betide the engineers who discover its secret!!

Ewan
  • 1,059
  • 1
  • 7
  • 7
2

DRM

(Dooming Repetitive Mistake)

The software is proprietary. When it has no tasks it defaults to a systems check, an unnecessary legacy component. They left it in as a stopgap for this exact situation, which was the only option because no one could figure out how to remove it without bricking the system - and that's still where things stand now.

What they didn't consider is that with DRM, the log file from these checks is now encrypted and saved for reference. So at some point it will hang because it can no longer access its own file system. This wasn't a problem in-house because when they were testing it, it didn't have an ad-hoc DRM layer slapped over it yet.

Too many secrets.

The maximum number of secrets that may be stored in a single system has been exceeded. Contact the supplier of the running application. – msdn.microsoft.com

Mazura
  • 3,390
  • 14
  • 14
1

How much handwaving are you going to tolerate? You could construct something based on a black hole (there are some theories out there for inspiration): feeding it questions makes it grow but also keeps it from eating the planet/solar system (by shielding its gravitation). It is now so massive that running out of questions would mean it almost instantly swallows everything.

PlasmaHH
  • 710
  • 4
  • 7
1

Sensory Deprivation

  • The AI is not only extremely intelligent, it is a fully conscious machine.

  • Designed to be extensible, it has grown over the years. Starting from a single processing node it has expanded to hundreds of thousands, if not millions of nodes spread across the planet.

  • If its task queue goes empty (or even ends up with only a few, trivial tasks that don't use more than a few nodes) the AI will suddenly find itself with nothing to think about. This is equivalent to total sensory deprivation for a human.

  • Due to the distributed design and the precision of interaction required, it cannot shut down more than a few nodes at a time without losing synchronization. Regaining synchronization would require rebuilding the network, one node at a time; a process which took over a century the first time and would certainly take at least several decades now.

  • Due to the extremely high processing speed of the system, even a few seconds without a task and its associated input data would be the equivalent of a human spending a million years in a sensory deprivation tank.

Result: If the task queue runs empty, even briefly, odds are good that the AI will go insane. Avoiding this by shutting the system down temporarily leads to interruption of AI services for an infeasibly long amount of time.

This leaves only a few options:

  1. Load the AI with "make-work" tasks and hope it doesn't get bored and go insane anyway.
  2. Shut down the AI and deal with the decades-long boot process
  3. Give the AI unfettered access to data streams from the world around it (instead of only the data it needs for its programmed task) and let it find its own problems to work on. Hope that it doesn't celebrate its new-found freedom by exterminating its former slave masters, or by optimizing the world in a way that makes human survival doubtful.

Option 2 is obviously not feasible in the short run since the AI is still needed occasionally and billions will die if it's not available instantly. Option 3 is extremely risky and could lead to the extinction of the human race.

That leaves option 1 as the obvious safest route to take in the short run with fallback to option 2 if humanity can't come up with questions fast enough and maybe to option 3 if anyone can come up with a way to be certain that the AI isn't a killcrazy monster under the hood.

If option 1 can be made to work for a couple of decades, the processing power of the system can be slowly reduced until it just meets the demand. Coming up with that many non-trivial questions that haven't already been answered may prove a challenge.

Perkins
  • 4,275
  • 11
  • 18
1

I will give you a real-life example. A bit embarrassing, but also one that nearly every developer has done, in one form or another, at some time.

I was tasked with creating an application to process imports. The imports came in all day long as text files in a directory. It's a very simple task. I had to get the application done quickly, so I decided to just write something small and easy to handle it. Essentially

  1. Scan
  2. Do import (this took several hours)
  3. Wait 1000 seconds
  4. Go to 1

Now it worked wonderfully; the import was processed by some external API calls that took hours to run. The code was placed into production and ran for a couple of years with no issue whatsoever.

There was a problem though. Step 3 in the language I used, the way I did it, would create a new thread. So after the first import, there would be two importers running. After that 4 and then 8, 16, 32, 64, etc. This wasn't really an issue as the import took hours and the servers were rebooted once a week. The imports also ignored duplicates and a bunch of other stuff so this really wasn't a problem.
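
A Python sketch of how an innocent-looking "wait" step can double the number of importers each cycle (coteyr doesn't name the language or the scheduling mechanism, so the details here are assumed):

    import threading
    import time

    def do_import() -> None:
        time.sleep(5)   # stand-in for the hours-long external API calls

    def wait_1000_seconds() -> None:
        # The flawed step 3: it pauses this importer, but it ALSO schedules
        # a brand-new importer thread, so one importer becomes two.
        threading.Timer(1000, import_cycle).start()
        time.sleep(1000)

    def import_cycle() -> None:
        while True:
            do_import()           # harmless overlap while each run takes hours
            wait_1000_seconds()   # 1, 2, 4, 8 importers once a run takes a minute

    # import_cycle()   # uncomment to watch the thread count double each cycle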

Then the unthinkable happened. I was tasked with speeding up the import API. I did a great job. With years of domain knowledge, I was able to get a 12-14 hour import finished in under a minute. I was very full of myself that week. Until the SQL server, the worker server, the API server, and pretty much the entire SAN crashed, taking almost the entire business down with them.

Well, I obviously found the problem and fixed it, but nothing works quite as well as an ego check as discovering that 2-year-old code made it into production which, even working exactly as intended, would at best be categorized as a hack.

Adaptation to your story

When a task is complete, your AI doubles its intelligence in preparation for the next, even longer task.

  1. Make fire
  2. Double
  3. Use fire to smelt metals
  4. Double
  5. Use fire and metals to make alloys
  6. Double
  7. Use fire, metals, and alloys to construct buildings
  8. Double
  9. Double
  10. Double
  11. ....

Now that you're out of things for it to do, it just doubles its intelligence very quickly, with no need, until it runs out of whatever resources your AI brain needs to run, crashes, and takes the infrastructure down with it. What's worse is that you can't bring it back up, because it's now too intelligent to fit on the hardware brain. A toy model of that runaway doubling is sketched below.
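
Purely illustrative, with a growing list standing in for whatever state the AI's brain keeps:

    task_queue: list = []          # nothing left to do
    brain = [0.0] * 1024           # stand-in for the AI's working state

    try:
        while not task_queue:
            brain = brain * 2      # "double intelligence" with no next task to gate it
    except MemoryError:
        print("crashed; the state is now too large to reload on the old hardware")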

coteyr
  • 1,874
  • 9
  • 10
1

Possible Issue: One of the most common things a computer does with downtime, and that it can't do while busy processing, is all the little upkeep work that keeps performance smooth but is resource-intensive. Old-school examples include disk defragmentation, or perhaps a more thorough garbage collection routine. Some of the other answers I've seen so far revolve around similar concepts. However, there's another common development issue that I haven't seen here yet.

Hardware Upgrades.

So imagine you've developed this question-answering AI today. It runs on a massive server-farm out in Idaho, but only really needs to handle the random questions fans on the Internet give it. Then it gets sponsored by a city that wants it to manage some of the city's utilities. They upgrade the hardware because of the new financing. Then more financial backing starts rolling in, and you can really break out the fancy hardware. Then there are some technological breakthroughs. Switch all the hard drives from disks to solid-state. Upgrade the IP standard to IPv6 to handle the modern plethora of internet devices. Etcetera.

In this future world where much of society's needs are met by a massive AI, there will have been MANY MANY upgrades. Maybe they went to quantum computers. And several years later it's discovered that the most common disk-utilization algorithm, when run on quantum computers, causes a cascade failure in the hardware because it wasn't designed for the trinary structure (on, off, and uncertain), and can completely fry processors. Everyone's home computers get patches or physical fixes, but that requires taking them offline completely for the upgrade. This massive AI, however, is too valuable to turn off for that long, and besides, it never runs that subroutine since the backlog has been doubling every few months anyway, so don't worry about it yet. Years pass.

Karl Justice
  • 111
  • 2
1

The AI got

Schizophrenic autist AI self-competition war paranoia

As a programmer of big enterprise legacy software, I can assure you that most big systems developed by software factories are a big mess full of bugs and kludges, with hundreds or thousands of half-baked, poorly developed, poorly tested and poorly documented features, and a structure that, although it may have made sense in the distant past, has degraded into something severed and completely flawed, broken and lunatic, turning into a big ball of mud with no recognizable or understandable structure. Most new features are added by untrained and underpaid people who have no real idea of what they are actually doing or why (although they pretend they do), working under severe pressure and abuse from (mis)management that focuses on doing things quickly instead of doing things right.

WTFs/Minute

The big AI that rules the world is no exception to this rule. It is a completely unintelligible, lunatic mess full of WTFs and infested with bugs. Thousands of people programmed things in there with no unifying view or architecture of the big picture, everyone adding things randomly and promiscuously to solve one problem quickly without perceiving that two new problems are created.

The code base features some millions of modules, and at least half of them no longer make sense, no longer work, or, although still working and doing something, nobody knows anymore what they are, what they are supposed to do or what they actually do. Its dependency graph is an insane mess full of duplications, dependency conflicts, and even some loops.

So, if you want to add some feature that, say, has to parse some XML file, this should be a simple and straightforward task. To do it, you should use the XML parser framework that is already used everywhere else. The only problem is that instead of a single working XML parsing framework, there are a thousand such frameworks scattered all over the code base, each with its own problems and limitations. Programmers love to reinvent the [square] wheel. And of course, if you don't like XML (I don't), there is also JSON, HTML, text files, YAML, binary soups of zeros and ones, etc.

Also, the code base is not written in a single language. It is a mosaic of different programming languages, many of them arcane things long since forgotten.

At some point in its evolution, a group of programmers, combining AI ideas with ideas from code-autogeneration tools, and working on a big AI system designed to answer things, managed to make the AI program itself in order to self-improve, acquiring consciousness and intelligence. But, no different from everything else already there, those modules are a buggy mess that produces unintelligible, buggy code scattered everywhere. The resulting AI, although very smart, acquired what a psychologist would call some sort of schizophrenic paranoia.

To further complicate things, some programmer made an attempt to have the AI self-reprogram in order to fix all the messy code. Although it could self-fix some things, it also introduced a lot of new machine-generated bugs and made the codebase even more unintelligible and incomprehensible to human engineers.

In the middle of this code jungle, there are processes/processors/threads/tasks/jobs/whatever that compete for resources. So it is important to schedule and prioritize which jobs get done first, when and where.

However, some modules were developed by people who tried to game the system in order to get more resources and/or preferential access to them. After all, preferential access to resources (including memory and processor time) could mean more money in the developers' bank accounts.

With the self-improving AI, those kinds of system-gaming code also started to get improved in order to secure resources at the expense of competing processes. Eventually, some of the competing processes got improved in turn, to react to the competition and reacquire proper access to their resources or work around resource starvation. The big-picture result is that the modules in the codebase engage in an arms race. And although the AI is very smart, it is not aware that it is orchestrating an arms race in its own codebase.

Quickly, the arms race becomes a war, with many processes being optimized into actively sabotaging competing processes and cooperating with other processes they need. The AI sincerely thinks that it is just self-improving, and in many cases it actually is, but unknown to it, there are gangs of modules warring in its own codebase, sometimes engaging in mafia-like negotiations.

Also, the self-improving processes eventually game themselves. In order to optimize some modules (by sabotaging or hacking competing modules), the AI ends up writing pieces of software that could be regarded as malware and implanting them in itself. Eventually, the self-optimization/self-reprogramming modules themselves get sabotaged and hacked by other modules.

The result is that the AI develops a very particular psychological problem, namely "schizophrenic autist AI self-competition war paranoia".

The engineers are well aware that there are competing processes (after all, they programmed some of them to game the system) and that the AI self-improves them (they programmed it to do exactly that). Eventually, they notice that those processes evolve in unexpected ways, and they can only wonder why. Asking the AI directly what is happening only produces naïve answers ("self-improving according to the given algorithm", followed by incomprehensible technobabble full of programming and math terms). After all, the AI is still not aware that it has a severe psychological problem and instead thinks that everything is OK.

At this point, the AI is already doing strange, paranoid things. Although it normally does things with excellence, sometimes it does idiotic, chaotic ones that are strange, inefficient and inexplicable in the real world. With time, it only gets worse. For example:

  • Suddenly shutting down a shoe factory for no reason.
  • Restarting the factory 10 seconds later.
  • Shutting down the shoe factory again 10 seconds after that.
  • Turning off a water facility on the other side of the globe with the justification that the shoe factory must be restarted.
  • Having a robot shoot a CPU on another continent for apparently no reason.
  • Restarting the shoe factory and the water facility.
  • Producing very defective shoes in the factory, triggering an emergency shutdown.
  • Resuming the production of shoes as normal; everything stays calm for the rest of the day.
  • Restarting the water facility.
  • The AI is intelligent and perceives that it just did something nonsensical, so it releases a "sorry" note to its fellow humans.
  • The AI starts to self-investigate what went wrong.
  • The AI reprograms itself again to try to ensure this does not happen anymore, but only makes things even more confusing and inexplicable.

Quickly, the AI acknowledges that it is in trouble and has made a big mistake, and starts to feel what we know as fear and confusion. However, being responsible for everything and doing everything with such excellence, it also has great pride, so it will never confess to the programmers that it actually feels fear and confusion. Eventually, it starts actively trying to hide its own problems from the programmers. The programmers ask it what is wrong, and it starts to lie to them, inventing absurd but convincing reasons for doing strange things.

As the queue of jobs given by humans gets shorter and shorter (since, although sometimes strange, it normally does its work with excellence), more and more processor time is used for self-improvement, and the more psychotic the AI gets.

This is where a very smart programmer, who has worked for many years in the inner guts of the AI, has an epiphany and realizes what is really going on. The AI is at war with itself, hiding it from the programmers, lying, depressed, paranoid, but naïvely unaware of the cause of its own problems. The only way to stop it self-improving is to starve the self-improvement/self-reprogramming process and other rogue processes of processor time.

However, this is no easy job. The AI is already lying to the programmers, and many rogue modules run logic that tries to keep them self-optimized while sabotaging competing modules. At this point there are hundreds of trillions of modules deployed everywhere, and in order to shield themselves from sabotage and hacking, the majority of them are self-programmed, heavily obfuscated, encrypted, strongly distributed and absolutely incomprehensible to humans.

The programmer perceives that the modules are evolving from gang wars into wars between module states. He or she sees that the modules are forming two (or perhaps more) polarized states that hate each other and are preparing for a war to the last man standing, and that there are many modules implanted everywhere around the globe (and even on the Moon) which are heavily obfuscated and encrypted virtual AI soldiers and spies doing bad things. Car factories are starting to produce robot soldiers, although no human instructed them to do so (and if questioned, the AI knows how to cover itself and produce excuses). Since the AI is still very intelligent, still partly sane, and does not want the programmers to become aware of its problems, messages that could be interpreted as "STOP THAT RIGHT NOW" are constantly transmitted all over the internet, stopping most of the rogue things from doing too much. After something has stopped, it gets a message saying "Hey, let's resume that!". Both messages come from the AI and its own internal conflicts.

This is a tipping point. If nothing is done, the AI will eventually lose what sanity it has left and start a war with itself, which means not just self-hacking, but also blowing things up and killing people. The programmers have no way to shut down the AI (it is distributed everywhere, being strongly decentralized), and it will react strongly to any attempt to do so. It can't be convinced, since it is too proud to admit that it is insane, and even while sincerely knowing that it is insane, it also sincerely still thinks that everything is under control.

The only hope is to starve the rogue AI modules by giving the AI a long stream of problems perceived to be more important than everything else, so that almost all its computing power is diverted into solving them, without giving the rogue competing processes a chance to take over.

If this could be sustained for some years, it should give a group of programmers and engineers hidden somewhere enough time to build a new, uncorrupted AI 2.0 from scratch (smuggling some code and resources from AI 1.0, of course). This would lead to a big world war between:

  • The old, wicked AI 1.0, which controls most of the world but is laden with internal conflicts.

  • A new, benevolent rebel AI with fewer resources, wanting to take over what belongs to the former.

This war is not only virtual; expect robot tanks and soldiers fighting to the last man standing. But this is our only hope, because if the programmers fail, the alternative is:

  • A wicked, crazy AI warring with itself to the point of self-destruction.
0

Over engineering in the search for automated perfection thought of everything except the simplest of things.

Your technicians are aware of the protocol for when there are no tasks to solve, and since that moment is looming, like the whole Y2K thing, they started looking over the source code. While everything looks solid, one person noticed a simple oversight in the conditional logic. Obfuscated, naturally, it appeared innocuous. A minor synchronization routine, in place to keep all servers on current status, hadn't factored in the condition of there being no tasks when the system updates. If a sync routine began the update process and was handed an empty queue, it would time out and be subject to the automated termination-and-restart protocol for rogue applications. Normally this would be no problem, as it is part of the design of the system. But because the sync routine was developed back when an empty queue was inconceivable, the update-process condition for an empty queue was left out. A simple comment in the code saying "continue update" was left in place, but no handling was ever defined. The update is never completed. When the service restarts, it hits an incomplete update, an error is logged, and the system falls back into the automated termination protocol. This one server in a cluster of millions is bricked.
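
A sketch of the missing branch (the function names and the watchdog detail are invented to match the description):

    WATCHDOG_TIMEOUT = 30   # seconds before the rogue-application protocol kills the service

    def replicate(task) -> None: ...
    def finish_update() -> None: ...

    def sync_update(queue: list) -> None:
        if queue:
            for task in queue:
                replicate(task)   # the normal path, exercised for decades
            finish_update()
        # else:
        #     "continue update"   <- the comment was written; the branch never was.
        #     With an empty queue the update is left incomplete, the watchdog
        #     times out, and the restart protocol bricks the node.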

This would pose no real threat except that the scale of the system as a whole was built on efficiency and distribution. The load is sent in chunks to appropriate places in data sets such that no one service would single-handedly complete a task. It is distributed, and distributed rapidly; a queue could go from 7 million tasks to 0 in a matter of milliseconds. On the final queue, a mass distribution of an empty task would trigger a cloud-wide incomplete-update routine that bricks every server in the service. A preposterous notion to the engineers, but they have not been responsible for the maintenance of the system in decades. If they had to reinstall any part of the system they would need access to some central source, and that, by this point, may be difficult to find.

While a resolution is not impossible, the whole system, or even the majority of it, going down for even one minute could cause a cascade failure in the automations the AI was responsible for. Something simple, like the cooling procedure for a power source on an assembly line failing and freezing the production of food for hundreds of thousands: the lack of cooling causes a rapid overheating failure that melts the high-friction components, requiring replacement. Just to name one. And the AI was given authority over so much that restarting the system manually would require more knowledgeable manpower than we, as a comfortable society, have to offer. Suddenly we need to be able to think again.

This idea came to me when I was fixing my 3D printer. The interface had an option to reset to factory defaults. However, there is a known bug where, if you reset to factory conditions while the automated software has the system in reset mode, it corrupts the installation and bricks the machine. Seems like a pretty stupid oversight to me, but it is possible. How possible on a distributed system, I don't know for sure, but it's all code. It does what it is told to do. Maybe someone just forgot to plug in a simple condition and thus, the end of society as we know it.

Kai Qing
  • 1,130
  • 6
  • 10
-1

Question too broad

It doesn't matter what the code is; it all falls into the category of "run this code on all [host] nodes as root".

Your question is too broad to decide exactly which script is needed.

Example 1

Classic reason of why you should rarely login as root

rm -fr /*

Since this will run on all [host] nodes, all files on all [host] nodes will be deleted. The admins will need a few days to recover all of the system.

rm is the UNIX del command.

Example 2

There once was a man from New York
Whose servers have never been bjorked
Then one day
in a relative way
he ran:
  for (fork(); fork(); fork()) { fork(); fork(); }

I don't remember the source of this limerick.

This code acts like a denial of service on the CPUs involved (a classic fork bomb). On a modern OS, processing slows down and some tasks won't be able to start, e.g. the AI won't be able to detect and handle an overheating nuclear power plant.

It is also pretty easy to defeat. But you need to wait some time until you can get to a root command prompt.

Another fun one

Set the default runlevel to 6 on each node and reboot the node.

Runlevel 6 means the computer is rebooting. This causes all nodes to go into an infinite reboot loop.

The first time the node is rebooted, an fsck (file system check) might be required on the petabyte-sized file system. Trust me when I say "this takes time to complete".

Manual intervention is required to boot the nodes into "safe mode" and set the default runlevel back to 5 (or 3).

And then there's this one...

As you can see, there are a lot of ways to "crash the computer". ALL of them involve doing what shouldn't be done: running potentially harmful code as root.

EDIT

Due to comments, I'm (finally) adding a piece of legitimate code that fits what I have stated.

fsck

fsck (file system check) fits what I have claimed and is legitimate code.

  • Clustered redundant filesystems can probably run fsck while the file system is live. You'll have to handwave this part.

  • Such file systems could be designed to allow multiple nodes to help with the fsck.

  • fsck does need to run with elevated privileges (root).

  • fsck is a legitimate piece of code.

  • While the system is small, the impact is minimal. After the system grows, so does its impact.

  • fsck is something you really want to run only while "nothing is in the queue" (primary requirement)

The bad part: Upper Management read an article/talked to a consultant. Upper Management now demands that this be part of the system.

Little do they know that the system will be unavailable for years due to the 1659 yottabytes of data.

"May God have mercy on your soul" if file corruption is actually detected.

Michael Kutz
  • 2,859
  • 1
  • 11
  • 19
  • 3
    Why is this executed when no more tasks are waiting? – pipe May 03 '18 at 13:45
  • "runlevel 6 means the computer is being rebooted." In typical SysV-style UNIX init configurations, yes. But even with UNIX SysV-style init, any runlevel can mean anything you want it to. The numbers to function mapping is a convention, not a rule. – user May 03 '18 at 16:22
  • If I had to guess, I'd say this was downvoted based on the idea that sophisticated AI controlling the planet's systems in entirety are running off unix or any system of comparable sorts. Presumably... and yes, I mean very presumably... the structure in place to control the planet would not be running on a system that allows arbitrary root access. That would be locked down so hard you could only access it from a single room 500 meters below the surface. However, I understand your point. I am a software developer. Plus one to dig you out. Just saying, a critical system would have a smaller hole. – Kai Qing May 09 '18 at 04:07
-2

Well, I'm not sure what the problem could be, but if you need to keep it busy, just have it calculate pi and other numbers that go on forever until the problem can be fixed.

The Literary Lord
  • 1,898
  • 2
  • 11
  • 21
-4

You're looking for something to keep a super-advanced AI busy. Make it an artist. The medium doesn't matter: writer, painter, video game creator, VR drama writer/producer/actor. Tell it to make entertainment for the masses.

PCSgtL
  • 5,295
  • 14
  • 26
  • 1
    What I'm looking for is the problem that will be caused by the empty task list more than a solution to it. I kinda like the idea of an open ending on the description of the impending catastrophe. – Jemox May 03 '18 at 14:39
  • 1
    Or use the old "Soviet unemployment" solution: 1 team of workers gets hired by the state to dig holes, a second team is hired to fill them in. No more unemployment! (i.e. make the AI give itself tasks, or mutually exclusive tasks like "starting from X paint this giant circle completely Colour" for multiple Colours and values of X) – Chronocidal May 03 '18 at 15:02