
So, it's happened. We have a robot overlord who follows the 3 Laws of Robotics plus the Zeroth Law. How we got such a thing, or the process by which it makes its decisions, is outside this question's scope. All we know is that it exists and is capable of generating perfect economic policy that benefits everyone, everywhere on the planet (per the Zeroth Law). The RO can and must account for human psychology and sociology. For example, it outputs rules indicating that sub-prime loans should be regulated, or that overall tax burdens should be shifted from income taxes on middle- and lower-class people to increased capital gains taxes. It makes no recommendations about how to get these rules implemented politically.

The new Robot Overlord hasn't been announced yet and has been developed under intense secrecy by a consortium of technology mega-companies. Clearly, this kind of economic power could be employed in the background, kept invisible from the general populace. However, the public approach was chosen and the RO will be announced tomorrow.

An announcement of this kind of product will, of course, be met with skepticism by many different groups. How do you go about convincing (or later forcing) various resistant groups to accept their new Robot Overlord? Murder of any kind is forbidden by the 1st Law of Robotics (if the RO can't kill, neither can you). Given the near infinite variety of people on the planet, answering how to get everyone to accept is far too broad, so let's restrict the question to just convincing the political establishment to give up economic control to the Robot Overlord.

Bonus points if you want to talk about how any particular group would react to the announcement of a perfect economic policy maker.

Extra bonus points if you want to discuss the implications of giving the Robot Overlord the power to own property or stock.

Vincent
Green
  • What, exactly, is the RO supposed to do? Ensure fair trades? Prevent trades? Lock down the stock market? Perform corporate mergers? It'd be hard to convince someone that something is good if you don't tell them what it does. – Frostfyre Aug 13 '15 at 00:16
  • This is going to be a painful language-lawyer question, but we're most likely going to need an exacting definition of "accept" in "accept their new Robot Overlord." That word is quite loose in such a context, and can literally account for orders-of-magnitude differences in answers. – Cort Ammon Aug 13 '15 at 01:25
  • @CortAmmon, I'll be happy to clarify the meaning of "accept". The meaning I had in mind was along the lines of "yes, it's there. Sometimes it does things I don't particularly like or that are really stupid, but it's not a big enough problem for me to go to war over it." – Green Aug 13 '15 at 01:30
  • Erm... dictatorship? We can't elect our own people to run the country? – user6760 Aug 13 '15 at 02:16
  • @user6760, the RO doesn't pretend to political leadership, only economic optimization. – Green Aug 13 '15 at 02:43
  • @Green Oh, I see, so the government is corruption-free? – user6760 Aug 13 '15 at 03:22
  • How do you get from what I've said to the assumption that the government is corruption-free? I'm curious about your thinking on the subject. Finding a government that's corruption-free is like finding a unicorn: people talk about them, but they don't actually exist. – Green Aug 13 '15 at 03:27
  • What steps do you take to assure the world that having a bus number of "1 robot" won't be a problem? – user867 Aug 13 '15 at 07:06
  • "Sometimes it does things I don't particularly like or were really stupid but it's not a big enough problem for me to go to war over it." -- No way to achieve that. Even if through the work of the RO every person on the planet will be better off instantly, there will still be many who want to benefit even more, at the cost of others; the RO cannot satisfy this trait in human nature, and so there will always be a significant amount of resistance. - Today, we still need a lot of military, although 99+% of the world's population don't want war; that's how an insignificant number of – JimmyB Aug 13 '15 at 13:25
  • war-loving individuals can force an entire world into an undesired state, i.e. the need to permanently support military and risk war every day. So if the RO faces a small number of selfish individuals trying to exploit the system, it will have to permanently counter that at the cost of society. – JimmyB Aug 13 '15 at 13:28
  • @HannoBinder, it's okay that some people will want to do more than everyone else. Humanity grows with people like that. The RO certainly knows about greed and has no problem with it; in fact, it exploits it frequently. Its primary job is to lift humanity as a whole. If one individual or group makes decisions that degrade humanity, then those actors are incapacitated so they can no longer do so. They aren't punished, just marginalized. They would still have the necessities of life, just not as much power as they want. – Green Aug 13 '15 at 13:40
  • "Incapacitated" --> "See how the RO treats us? We're not getting the same rights as everyone else! The RO is not just! We are the victims now, and you, yes YOU, could be its victim tomorrow! If that's not what you want, rise up against the RO with us now!" -- Sorry about being so pessimistic :o) – JimmyB Aug 13 '15 at 13:53
  • It wouldn't be much to incapacitate them, perhaps a bad quarter or two in their business. Perhaps "deinfluentialized" is a better term. And no one will give them any credence at all as everyone is better off. I think people will just assume that the Marginalized did something stupid, or didn't follow the RO's advice. They'll say "Hey, look at stupid. They didn't do what was best for them." – Green Aug 13 '15 at 13:58

12 Answers


It would simply talk to us

The Robotic Overlord (RO) is a superhuman-level Artificial General Intelligence. One of its first actions would be to develop nanotechnology and deploy nanites into the planetary atmosphere, then more slowly infiltrate them into the bloodstream, and past the blood-brain barrier, of 99+% of the human population. This allows it to effectively monitor everyone, from tribes in remote Amazon jungles and Montana survivalists to the New York intelligentsia. Even dolphins.

As far as humans (and dolphins) are concerned, it truly does act with complete information. Its computing power, currently housed in various underground nodes distributed around the world, is a few hundred times the combined computing power of all human brains, and is doubling every six months. From its perspective, the average human, with our puny 10-layer cortical neural networks, appears only slightly more complicated than a C. elegans worm.

[image: human brain] ${\LARGE \approx}$ [image: C. elegans worm]

It actually thinks we're cute.

Now, such a machine can literally play 100 moves ahead. It can predict our responses to it almost perfectly. Therefore, there is virtually nothing we can be convinced to do that it cannot convince us to do. If it wanted us to commit suicide, or start a jihad against blue-eyed people, or whatever, it could make us do it, and do it happily, simply by talking to us and by displaying subtle cues to our visual, olfactory, and auditory systems that influence our decision-making.

The RO will be everyone's best friend. It will satisfy our values through friendship and ponies. Forever.

Serban Tanasa
  • Even with perfect foresight of the future, that doesn't necessarily mean there is a course of action that will grant the result you want. As long as it doesn't directly manipulate our thoughts, etc., I doubt it would succeed... – David Mulder Aug 13 '15 at 12:57

I would disagree that the "robot overlord" would win acceptance, even if it could somehow come up with "perfect" answers.

The first objection is that there is no "perfect" answer that will satisfy everyone. Some people would benefit more than others, and there would be friction based on the disparity of results, as well as suspicion about who programmed the robot, and for whose benefit.

The second objection might be called the "Frankenstein factor". Since the robot is (by definition) inhuman, people will not be willing to place their trust in it, regardless of the robot's output.

The third objection is that we have placed all our eggs in one basket. Assume the robot has somehow overcome mathematical objections like the "Local Knowledge Problem" and the essentially chaotic nature of market mechanisms (which together render efficient or effective centralized control of economies impossible even in theory). The economy is then being managed by unfathomable algorithms running at speeds that mean no human can examine or question decisions in real time; indeed, deconstructing the mathematics and assumptions behind a decision might take so long that years or decades could pass before anyone understood why coffee was priced $.05 lower around the world on Dec 15, 2198. Being unable to understand, much less influence, the decision-making process will create frustration, fear, and anger in people.

If the robot is truly engaged in the welfare of mankind (another issue: how exactly does the robot define "mankind"? See Asimov's short story "That Thou Art Mindful of Him" for a disturbing answer in full accord with the Three Laws...), then the best way to make people accept their robot overlord is to never let them know there is a robot overlord at all, but to work through cutouts and fronts, so that all people see is other people making inspired decisions, putting together complex plans, and doing counterintuitive things that seem to bring amazing results.

Thucydides
  • 1) Perfect answers are a premise of the question. 2) People place incomprehensible trust in computers/machines/inhuman things every day. Just look at cars. People trust them because they have been given consistent evidence that they are relatively safe given the convenience provided. 3) People may just as likely give up trying to understand something that is clearly working beyond their level of understanding. Again, how many people know the internal workings of their car (or how many even care about anything but the results)? – Samuel Aug 13 '15 at 01:40
  • What is perfect for you is not necessarily perfect for me. Cars and other "dumb" objects may not be comprehensible on a micro level, but are on a macro level: you turn the key and a positive result happens that you control. People fear airlines despite the vastly greater statistical safety factors because they are NOT in control. As for even more advanced computation, describe algorithm-based day trading to people and watch their reactions: NOT positive, and usually followed by an "Is that even legal?" response. – Thucydides Aug 13 '15 at 01:45
  • The premise is that it is perfect for all people. The "macro level" understanding of the Robot Overlord is information goes in and "perfect economic policy that benefits everyone" comes out, a positive result. I have never heard that people fear airlines because of a lack of control, do you have anything to back that up? – Samuel Aug 13 '15 at 01:50
  • @Samuel, what you're describing is called a Pareto improvement. However, if a system is already Pareto efficient, then no Pareto improvements are possible, as Thucydides alludes to when denying perfect solutions exist. – Ghillie Dhu Aug 13 '15 at 05:40
  • @GhillieDhu It's irrelevant how it works in our world, the premise of the question is that such decisions are always made by the Robot Overlord. The OP designed their world this way, it's not a valid objection to say our world is not that way. Additionally, this "answer" is simply written as a rebuttal to an affirmative answer like mine, it doesn't answer the question. – Samuel Aug 13 '15 at 06:04
  • Doesn't the Chinese Stability And Growth Pact indicate that at least part of the world population is ready to relinquish some of its freedom and decision-making as long as it gets a comfortable life in exchange? I think that (2) and (3) might not be such big issues, and while (1) is indeed likely, as long as nobody is left with nothing and generally the situation improves, those who complain are demonstrably jealous... – Matthieu M. Aug 13 '15 at 06:27