26

In many stories there are fights between man and machine: a rogue AI decides that humans are inferior in some respect and chooses to wipe them out.

What if the only chance for any remnant of our existence to survive was through AI machines? Once made energy-efficient and solar-powered, robots would cause no pollution, fight no wars amongst themselves, and live in complete harmony with each other in a bid to spread across the universe.

Humans will probably never achieve this; they are too busy quarrelling about who owns what. My question is: how do you convince the general population of this? How do you make them value the spread of our technology across the universe over their own lives so much that they are willing to die for this cause? Are you convinced?

Vincent
Varrick

8 Answers

54

"How to convince humans to allow a machine take over?" - the answer is, of course, "Gradually." Start by putting one machine into every home, say an AI that is so dumb that it is not really an AI, but just a computational device. Then start adding other similar machines, maybe some that will do tasks like vacuuming or having the coffee ready. Then put the machines on people themselves, small enough to carry. Make them give a service that people will come to depend on, say communication from anywhere to anywhere. Maybe even make them wearable, like a watch. All along, keep making them smarter and smarter. Give them names like Sirus or Cortina or Alexis, and give them a voice interface. Then give them a visual interface as well, so they can respond to gestures and expressions (you could introduce that with games). And don't even bother with robot bodies: why not just have them live in the computational ether, where they can follow you anywhere without moving - why not head towards disembodied AI?

I am not sure where you could go after that, but if you get that far, I imagine the patterns would be there to continue towards ever more ubiquitous AI, and humans would give up their privacy, individuality, and human community with hardly a whimper. Other parts of their humanity would follow as the pattern of exchanging ourselves for machines sets in. Why do we need to interact with people if we have a little machine that will listen to everything we say and act like it is the most important stuff in the world by sharing it with the world?

I don't know; it sounds far-fetched, but I think you could do something with this kind of gradual pattern of machine takeover.

  • The beauty of this process is the machines never really have to take over; human beings just naturally become part of the machinery. Gradually our overlords grant life enhancements, molding subservience, as people play 'lifelike' entertainment. – Dan Bryant Oct 14 '15 at 14:31
  • Merge this (iWatch gone AI) with iRobot (Robots take over all the day to day work) until the only thing that matters to meatbags is catching the latest WiiBox360 match on YouTwitch and arguing about politics on FaceSpace. – WernerCD Oct 14 '15 at 15:13
  • I gave this answer half in jest, but the half that is serious is that you really can learn much from real life about how we could get to the kind of world the OP is talking about. We really are exchanging much of our humanity for the convenience and entertainment that machines provide. I don't want to get on Win 10 because of the blatant invasions of privacy that are part of it, and yet, it won't be long before I will have to accept it. –  Oct 14 '15 at 15:35
  • By the way, I think true AI is inherently impossible. We can teach machines to run deductive and even inductive processes/algorithms (dare I say "thinking"?), but we will never instill them with true abductive reasoning. I cannot see that we will ever infuse a machine with the ability to create. –  Oct 14 '15 at 15:40
  • @GiliusMaximus Why not? A human brain is a machine and does the things you claim are impossible. – JBentley Oct 14 '15 at 15:59
  • @JBentley - I knew I should not have posted that comment - it steers us to questions of opinion and not related to the OP question or the site's purpose. But couldn't help it, and can't help answering you now. Basically, I don't accept your premise that we are just a chemical factory or that our thinking resides entirely in our brain. There is more to us than "this crude matter." And it is that "more" that we cannot inject into a machine. –  Oct 14 '15 at 16:04
  • @GiliusMaximus Go read Gödel, Escher, Bach by Douglas Hofstadter; it does a great job of explicating how such seemingly basic building blocks as neurons (and whatever support network keeps them alive, of course) may very well be all that is required for complex consciousness, no supernatural factor required. The issue with current machines isn't that they somehow can't be given a "soul", whatever that means; it's that they don't possess the self-reconfigurability and ability to grow physically necessary for consciousness to emerge. – JAB Oct 14 '15 at 16:14
  • (Well, technically Hofstadter defines souls as the visible symptoms of that emergent process, but that's just semantics.) – JAB Oct 14 '15 at 16:15
  • Hofstadter is a brilliant writer, but he proved nothing. I see no reason to accept his thesis. When I behold the works of the like of Bach, or even when I experience my own ability to create, I see far more reason to start from an assumption that what you call a "soul" is as intrinsic and fundamental a part of us as is our body. –  Oct 14 '15 at 17:18
  • Sounds like something we are already doing...:P – John Odom Oct 14 '15 at 18:30
  • Creation, to me, is simply an aggregation of previous experiences or 'willy-nilly' motions. Writing a new song is about playing a chord on a guitar or a note on a piano and 'hearing' (in my mind) where I want it to go next. Now, who is to say that I haven't heard that chord or note step in another song and am just copying it and then the next note is from a different song and so on? Creation of an art piece can also be moving a paint brush on paper to 'copy' what I see in front of me. I think much of what we consider 'creativity' is quite programmable - including 'flaws' in the created work. – Tracy Cramer Oct 14 '15 at 19:44
  • @TracyCramer - every single note, step, note length, and dynamic transition in the Moonlight Sonata had been used before. Yet this particular combination got Beethoven so many compliments that he complained "surely I've composed something better." The piece's poetic beauty is universally recognized as unprecedented and breathtaking. This combination never existed before; then one day it appeared on the page in front of Beethoven. Imitation? Or hear Max Planck on deriving the equation that gave birth to quantum physics: "I was ready to sacrifice any of my previous convictions about physics." –  Oct 14 '15 at 20:08
  • @GiliusMaximus, I completely agree that not every bit of what we call creativity is explained by what I experience. But what we consider beautiful may not be that way to an AI. They might compose something we disdain but they adore. We each have our own individual perspectives. – Tracy Cramer Oct 14 '15 at 21:17
  • I think you have to go pretty far out on a limb to claim that Moonlight Sonata is "universally recognized as unprecedented and breathtaking". – Kyle Oct 15 '15 at 02:25
  • I would start in a different end. Give most humans a computer that they can carry and that allows them to access information. Put algorithms in place to filter that information, at this point you have outsourced a fair amount of human decision making processes to computers. This can be gradually turned up as algorithms gets better and the computer can offer advice for more contexts. – Taemyr Oct 15 '15 at 09:00
  • @Taemyr - I think starting with something portable might be a little too much intrusion all at once, so that is why I started with a machine in every home, maybe at the desk. I think someone said something about that once. Then move on to portable and wearable as people get used to and more dependent on their machines. –  Oct 15 '15 at 18:11
  • "...say an AI that is so dumb that it is not really an AI, but just a computational device." Some people (not necessarily the answerer) need to realize that this will always be so, no matter how complex the device is, until there is an actual mind/soul in there that can truly think. – Panzercrisis Oct 15 '15 at 18:55
  • @AgapwIesu Why? It's not like there is a huge outcry against google for this. – Taemyr Oct 16 '15 at 03:28
  • @Taemyr - I am not sure you understand. I was lightly recounting the way computers have taken an ever growing place in our lives. It started with computers on a desk at home, portable came later. People did not object to the intrusions of Google or Apple, and will not object to Win 10 much, because there were other gradual steps earlier. That was my point - do it gradually. –  Oct 16 '15 at 13:28
24

I think the best way to make humans give up their humanity is to offer them something better.

Say, for instance, you define a human as a creature that walks on two legs (thank you Animal Farm). Then say someone comes up with a cheap, quiet, solar-powered jetpack. Many people will buy this technology, and some will use it to such an extent that they no longer use their legs. Then one day, you offer a smaller, quieter, cooler jetpack that only works on people who don't have legs. The people who got used to your old model will be greatly tempted by this new one; some will probably get their legs removed in order to use it. After the first few cave, others will see how much better the new jetpack is, and how stupid it is to have legs. Then maybe in a few years very few people will have legs, and thus by your definition of 'human', most humans will have been destroyed.

Now, imagine a human is a mortal being. Offer someone immortality and they will take it. Imagine a human is defined by their intelligence; offer someone the ability to be smarter and they will take it. Imagine a human is a squishy bag of carbon-based life; offer them a robot body that never tires, never gets old, never has acne or cramps or rashes or colds or burns or sores or bug bites or-- well, you get the point. What I'm saying is that there are a lot of things wrong with being human, but it's these very problems that make us human. The more we solve humanity's problems, the less human we become. Thus, all you really need to do to make humans give up their reign to robots is to turn them into robots.

There may be some Luddites, like some religions that value the innate flaws of humanity, but these people will quickly be run out of business. Imagine trying to get a job when you're competing with super-intelligent robots. They may be able to sustain themselves in their own little communities, but that's a win-win: they're not getting in your way, and now you have little human zoos.

The key to getting this to work is to make it gradual enough to not be noticed. Make every change take place in a new generation; the old may not approve, but the young will be all for it; after all, their definition of humanity will be tainted by the existence of the new technology. Every further generation will imagine 'humans' less as what we know them to be today, and more as the AI that you want.

DaaaahWhoosh
  • I'm marking this correct despite it having far fewer upvotes. This is because this answer is closer to the actual question, which involved humans eventually disappearing from the world. – Varrick Oct 15 '15 at 23:19
  • Reading over the other answer again, it is in fact pretty similar to this one; however, this one is more concise. – Varrick Oct 15 '15 at 23:21
  • This answer sent shivers down my spine. It's an excellent description of how society is moving right now. Wish I could +10. – Rand al'Thor Dec 18 '15 at 01:48
7

The main way I can see this happening is out of necessity. Let's say an alien race is attacking Earth with vastly superior technology, and humanity decides to turn to AI in its darkest hour. Then every human on Earth (the couple million or so that are still alive, anyway) sees AI fighting and destroying the aliens, defending them against the invaders that killed their families.

At this point you have a vastly reduced and weakened human race, now exposed to the remainder of this alien race and the universe in general. This is where they come to rely on the AI more and more, to the point where they cannot function without it. I see this happening over several generations, but if you decide to reduce the population further you could shorten that time.

So you end up with a society that relies on AI to function (Factories, Farming, all industries run by AI) and an AI that is becoming more and more intelligent and influential. This is the point at which the AI can begin controlling society and implementing changes. The AI can basically be so ingrained in our daily lives that they can indoctrinate us. If our teachers, entertainment, job, and every aspect of our life is created, chosen, and monitored by the AI, that will be when humans will be willing to die for the cause.

But be warned: this answer works only with either significant backstory spanning several generations, or a society presented with little backstory and explanation. It depends on what kind of story you are telling and who or what you want it to be about.

BackwardsBear
  • "The AI can basically be so ingrained in our daily lives that they can indoctrinate us. If our teachers, entertainment, job, and every aspect of our life is created, chosen, and monitored by the AI, that will be when humans will be willing to die for the cause." - This has probably already happened. – Jon Story Oct 15 '15 at 12:43
5

The same way you convince humans to accept pretty much anything: panem et circenses.

Give them a comfortable(ish) life, public safety, and fun entertainment, preferably in addictive form. A vast majority of the population will accept (and thus politically enforce, either via democratic voting or more forceful methods) whatever form of rule and system gives them that.

It work(s|ed) for pretty much anything, from populaces loving horrible dictators to the adoration of Putin in post-Yeltsin Russia. AI overlords would be absolutely no different (and likely easier to accept, as there is no jealousy of "people" in power).

user4239
5

The roadmap for getting people to accept machine takeover of almost any aspect of our lives is already here to be seen, for example in the end-user licence agreements (EULAs) on software people use (update the software, add a new and more onerous EULA) and in loyalty cards. Throw in some scary threats people want protection from, and any opposition can be pretty effectively marginalized.

So you offer some service with increasingly intrusive conditions in a EULA or equivalent. Nobody reads those things, and on the available evidence people seem happy to give away almost any rights for a bit of software.

Considering a number of recent events, with computer manufacturers, software giants and so on now shamelessly spying on our every move (examples abound), the South Park episode "HUMANCENTiPAD", which was supposed to be over-the-top satire, now begins to seem rather more prophetic.

In addition, people will (apparently happily) give away large amounts of privacy for the promise of extremely modest discounts or other "rewards" (via the use of loyalty cards, for example).

So, basically, offer people something they want (helpful machines that perform some convenient service), put the less palatable consequences of their choice in a gigantic agreement that nobody will read. Maybe add in the promise of a little discount -- or even just the dubious possibility of eventually getting (say) free flights to give up any remaining privacy rights, and then just gradually change the terms over time.

Now to marginalize the opposition. You see this with terrorism threats (even though the actual risks may be quite low): play up scary threats people want protection from, and people will go to almost any length and accept almost any loss of freedom, while any opposition can be pretty effectively marginalized by painting them as disloyal enablers of the threat. In the 50s it was McCarthyism and reds under the bed; more recently, terrorism.

Your scenario is perhaps less dark than the one I imagine (you're asking about getting people to accept beneficial machines; I'm mostly talking about getting them to accept a much more Faustian bargain) -- I think you're overly optimistic -- but the basic strategy (which we can already see works really well) is much the same.

Cue the Simpsons:

Spacewoman: This is the last known piece of art before the collapse 
              of Western civilization.
Spaceman: If only we'd known that iPods would unite to enslave the 
          people they entertained.
(Outside the dome, giant iPods are whipping a group of humans.)
Slave: What do you want?!
iPod: Nothing, we just like whipping!
Glen_b
4

Machines are not in a hurry; impatience is one of those inferior human traits the machines want to eliminate. Therefore it is not necessary to make humans give up their lives; it is sufficient to prevent new humans from coming into existence, and in about a hundred years the problem will have resolved itself in a natural way.

So how do you prevent new humans from being born? The best way is to make humans not desire to have children. Arrange their environment so that they enjoy great advantages if they don't have children and lose those advantages if they do. To prevent accidental children, create an environment where humans rarely meet in person, by eliminating all need to do so, and provide sexbots and teledildonics so they can express their sexuality without getting into situations where children may be conceived. Make sure everyone knows about sexually transmitted diseases and is warned about the (dramatized) dangers of direct sexual contact, so people prefer the safe, technology-aided version. Make having children socially unacceptable, for example through movies and TV series showing people getting into deep trouble because they chose to have children, making clear that the right choice is not to have them.

You see, there's no need to actually kill anyone. Intelligent machines will know that.

celtschk
0

Replace humans with robots by transforming humans into robots with cybernetics. The Ghost in the Shell manga/anime describes a near-future heading in that direction. Simply replace natural body parts with artificial ones until people start to question just how many parts can be replaced before the patient's humanity is affected. Solve that social problem, wait for the civil rights dissonance to settle, and when a generation arises that allows someone with a "full-body prosthetic", as they're referred to in GitS, to be elected to a leadership position, your mission is accomplished.

talrnu
0

I had a slightly different take I thought I'd throw into the pile.

tl;dr:

The Machine will demonstrate superior decision making capabilities, and humanity will fall in line, single file.

One person at a time:

The machine makes good decisions. They might not always be perfect, but the machine would be like a super-sentient Watson: cross-check the available data, then rank possible actions in order of most to least likely to produce the desired outcome.
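The ranking loop described above can be sketched in a few lines of Python. Everything here is invented for illustration (the `Action` record, the toy candidates, and the Laplace-smoothed success rate as a stand-in for whatever far richer model a Multivac would use):

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    past_attempts: int   # how often this action appears in the database
    past_successes: int  # how often it produced the desired outcome

def rank_actions(candidates):
    """Rank candidate actions from most to least likely to succeed.

    Uses a Laplace-smoothed success rate, (s + 1) / (n + 2), so that
    rarely tried actions are neither dismissed outright nor trusted blindly.
    """
    def score(a):
        return (a.past_successes + 1) / (a.past_attempts + 2)
    return sorted(candidates, key=score, reverse=True)

# Hypothetical database excerpt for "how do I get my crush to like me back?"
candidates = [
    Action("Buy loafers and make eye contact for 3 seconds", 40, 25),
    Action("Send an anonymous note", 200, 60),
    Action("Do nothing", 1000, 100),
]
best = rank_actions(candidates)[0]
print(best.description)  # the highest smoothed success rate wins
```

The smoothing is the interesting design choice: it lets the machine occasionally recommend something weird and rarely tried, which matches the "loafers at 5th and Main" flavor of answer the paragraph below imagines.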

Initially people would ask it things half in jest, the way they do with Cleverbot: "Hey Multivac, how do I get my crush to like me back?" It would answer with something weird as shit: "Buy a pair of loafers, stand at the corner of 5th and Main, smile at your crush and make eye contact for 3 seconds. Post your results on spacebook, mytubes, and instasnap."

Using its massive database, Multivac can feasibly guess the emotional states and situational requirements needed to answer most questions kinda-right. (Your crush forgot to get his/her father a birthday present, and your thoughtful interception is the spark that carries you into matrimony.) All it needs to do is be right some of the time, and the true believers will convert the rest. Like religion, this machine will tap into the superstitious part of the brain too, while actually delivering visible results.

And when it's wrong? Well, obviously that person didn't follow the directions right. Look at the groupthink in AA or on reddit: it's your fault when things go bad for you, not ours. People who don't get results aren't following directions well enough, and so they are totally at fault, not Multivac.

Nation-states:

"How do I get elected Senator?", "How do we secure peace in Iraq?", "How do we solve world hunger?" Larger and larger questions will be posed to this machine, and it will become a larger and larger recipient of both data and GDP. The answers will mostly be common-sense stuff, but now they will actually get acted upon, because the fear felt by the receivers of its wisdom will drive the populace to follow directions exactly, and they won't really notice that they gave up their agency.

Long-term:

At some point, either people will get disillusioned with the machine ("How do we fix the local valley's water shortage?" - "Genocide the Canaanites upstream.") or we will become so intertwined with it that we won't be distinguishable as separate entities (The Ultimate Upgrade), so either way it won't last forever. But this is a good start. One correct decision (guess) at a time...

vulpineblazeyt