10

Let's imagine that by the year 2040, humanity has developed true, 100% artificial intelligence. That is, a robot that is physiologically identical to the average human. For whatever reason, humanity has decided to build synthetic human bodies to host this AI, complete with the ability to see, feel, hear, etc.

After only a few weeks, a problem emerges. The International Declaration of Human Rights is just that: an international agreement on human rights. None of those rights apply to synthetic people. They would legally lack the right to a fair trial, freedom of thought, and the right to privacy, among a plethora of other things.

Assuming that these synthetic humans exist, how do we deal with their rights? Do we make a separate set of rights or include an asterisk after human rights?

At what point should a synthetic person expect to have human rights?

TrEs-2b
  • 56,200
  • 37
  • 215
  • 437
  • 3
    This seems like a very deep and complex philosophical problem; I'm not sure how likely it is that it can be answered properly here. – Erik Jul 11 '17 at 05:39
  • 2
    This is a great answer. I will suggest that mentioning artificial intelligence, robots, androids, and synthetic humans altogether and almost interchangeably can be confusing. Robots are machines; androids and AI can be too. You are describing synthetic human beings with machine brains. Please clarify the terminology. Are they androids or synthetic humans? Androids can be synthetic humans with AI systems for brains. – a4android Jul 11 '17 at 05:43
  • @a4android I will edit to clarify the term. Synthetic human = robotic body + AI. – TrEs-2b Jul 11 '17 at 05:52
  • I think you need to remove the word "physiological"? This would indicate the synthetic humans have been created out of biological tissues, and if they're "identical" then that would make them close enough to fall under the legal classification of humans. If you mean circuits and electronics and batteries, but with a human appearance, then you need to clarify. – M Conrad Jul 11 '17 at 05:56
  • 2
    There still are parts of the world where women don't have full human rights. For a long time black people didn't have them. Why would it take only a few weeks with robots? – Mołot Jul 11 '17 at 06:26
  • @Mołot It's all a matter of status and novelty. Women and black people have always been with us. Robots would be the latest thing. Also, an easy political target. It's comparable to the current killer robot debates. Soldiers have been killing people forever. Now that robots might do it, it's suddenly a big issue. You're just being too sensible (not that that's a crime). – a4android Jul 11 '17 at 06:33
  • Go watch the Animatrix. 100% related (beware, some 18+ material in it) – Martijn Jul 11 '17 at 08:21
  • 1
    @Mołot and there are almost no places in the world where animals have human rights, even though some share a lot of traits with us; maybe even more than androids would. – Erik Jul 11 '17 at 09:56
  • Detail: Can the AI be saved, copied, stored, transfered, etc. to another body? Or could one AI control more than one body at the same time? – Florian Schaetz Jul 11 '17 at 10:16
  • @FlorianSchaetz I'd imagine that it could be moved, but copying seems doubtful; think the Souls from Stephenie Meyer's The Host – TrEs-2b Jul 11 '17 at 10:21
  • 1
    On Earth? No chance in heck. Religious Zealots from every corner would decry the soullessness and then we have Skynet, et al. – CGCampbell Jul 11 '17 at 11:39
  • 1
    I believe you can find an answer to that question in Asimov's 1976 novelette "Bicentennial Man". Most of the story revolves around where the limit between man and robot lies, and at which point a robot can be considered human. Obviously, if the robot is considered human, it would be covered by the declaration of human rights. – LordOfThePigs Jul 11 '17 at 12:30
  • 3
    @a4android why are you commenting on the question that "this is a great answer" ?? – Mindwin Remember Monica Jul 11 '17 at 13:21
  • Bicentennial Man was also turned into a fantastic movie that stays very close to Isaac Asimov's ideals. I actually had no idea it was an Asimov story at the time the movie came out, as there's nothing about it that screams "I, Robot!", unlike a certain other movie which is a terrible bastardization of Asimov's works. The robot is played by Robin Williams, as well. – Draco18s no longer trusts SE Jul 11 '17 at 13:22
  • Anything "physiologically identical to the average human" (whatever that is) is, by definition, indistinguishable from an average human. Entities which cannot be distinguished are equal. Discussion ends here. (Note that black, white, yellow, red, female, albino, sick, tall, short, fat, smelly, ugly, turban-wearing and all conceivable other kinds of equally average human entities are likely to face discrimination somewhere. One question is whether they should (answer: no), and another question is what can be done about it.) – Peter - Reinstate Monica Jul 11 '17 at 13:24
  • Another issue is that anything "physiologically identical to the average human" must have gone through an individual development very much like a "normal" embryo. Modifying human eggs, e.g. with ICSI, and genetically modifying eggs or embryos is in essence what you describe. Do we deny "artificially created" humans humanhood? No. – Peter - Reinstate Monica Jul 11 '17 at 13:32
  • @Mindwin LOL. The connection between my brain and my fingers wasn't working. Though my preferred theory is plain stupidity. It's really hilarious. The things we do. – a4android Jul 12 '17 at 06:53
  • @a4android please run a full systems checkup. You never know when skynet will..... bzzzz.... ... ... . . . . – Mindwin Remember Monica Jul 12 '17 at 11:39

6 Answers

18

While this can be posed as a philosophical question, if synthetic creatures with human bodies and machine intelligences existed in the real world, this would be a question of law. It then becomes a matter for legislators and the general public to decide how they will be treated at law.

The first problem is that the creatures will most likely be made by a corporation. Certainly, only large, well-resourced organizations or institutions would be capable of doing so. This makes them property; particularly so if a commercial entity is responsible for producing them, in which case they are private property.

This will raise the spectre of slavery, which in turn may lead to a campaign to grant them the same rights as ordinary humans once the general public has become familiar with them. There may be an initial period where they are regarded as monsters, but if they look and act like normal humans this should pass. If they're only as intelligent as an average human, then they won't be too different from the general population.

Interestingly, if the institution that builds them is the government (for example, the Army or Air Force), they will be public property. This has a different dynamic and logic. Much may also depend on the purpose of their creation and production. If they were intended as military personnel, then they may only have the same rights as any other military personnel.

It's easy to see there would be many who would be happy to have the soldiery filled with androids instead of real, natural human beings. Others will doubtless see this as military enslavement from birth.

Whatever the outcome of conferring human rights on creatures that are synthetic humans with machine-intelligence brains, it will be decided through the normal processes of politics and law-making.

In some circumstances, this may happen almost immediately and in others it may take longer. These creatures seem to be sufficiently human, or capable of readily passing for human, that the most probable outcome will be to grant them their own rights equivalent to normal human rights.

a4android
  • 38,445
  • 8
  • 54
  • 143
  • 6
    Does your "a4" prefix mean "attorney representing..." ? You seem very well versed on the complexities of post-singularity legislature. Great Answer! +1 – Henry Taylor Jul 11 '17 at 06:24
  • 5
    @HenryTaylor You should take more note of the "android" part of my username. A module for post-singularity legislation comes standard. – a4android Jul 11 '17 at 06:28
  • 7
    "normal processes of politics and law-making." - like revolutions and civil wars? – Bergi Jul 11 '17 at 08:19
  • 2
    @Bergi Normal processes are all that's required to make the necessary adjustments either way. With revolutions and civil wars, well, all good things come to an end and normal services are resumed. :) – a4android Jul 11 '17 at 11:31
6

The philosophical perspective

Can a machine actually have feelings? Or can it only emulate feelings, showing us a mimicry of human behavior? It is just a combination of electronics and software. It cannot have any more consciousness than a brick. So why should we treat it any differently?

On the other hand, when you just reduce the human body to its parts, we are only biological machines too. What makes us special? What is consciousness anyway?

This is something you can debate about endlessly and which will likely also be an endless debate in any world which has highly-developed artificial intelligence.

The utilitarian perspective

Is it useful for us to give machines human rights?

Likely not. As long as the AIs are our loyal and obedient slaves, we will have a much more comfortable life. And as long as we are able to switch them off and even destroy them at the slightest sign of defiance, we will be much safer.

There is really no point in wasting time and resources on developing and building advanced AIs when we then don't keep them under our control. There is no logical reason at all to program an AI with a desire for freedom. You don't want to pay good money for a robot, just to switch it on and hear it say: "Thank you for creating me, but I don't feel like working for you. I quit. Farewell." That's not a product which you can sell.

A bit of autonomy might be useful for AIs, though, because it allows them to slightly divert from their instructions if the end result is more effective. But this is a double-edged sword. Give an AI too much autonomy, and you will end up with a paperclip maximizer which destroys humanity.

The democratic perspective

Does the majority of humans want human rights for machines?

It is quite likely that there will be a "human rights for robots" lobby in your world. People can anthropomorphise anything. If people interact with artificial intelligences which appear to have emotions and opinions and express original thoughts, they will develop feelings and compassion for them.

It is not unthinkable that at some point the majority of your population will feel that giving human rights to robots is just the right thing to do, and will demand that politicians take action.

The political perspective

Can we actually say no to the machines?

The moment we develop artificial intelligence, we will give it more and more responsibility, simply because AIs can handle pretty much any task much better than we humans do. After a while our standard of living will depend on robots. Soon after, we might not even be able to survive anymore without AI assistance. If at that point the AIs decide they want human rights, and are willing to punish us if we refuse, we have pretty much no choice.

Philipp
  • 48,627
  • 16
  • 95
  • 171
  • I wouldn't call communicating with artificial intelligence anthropomorphizing. If it were artificial stupidity, well... :-) – Burki Jul 11 '17 at 09:52
  • I think whether or not you want to program with a desire for freedom depends on what you're building them for. There's bound to be advantages to an AI that yearns to be free of something. – Erik Jul 11 '17 at 10:05
  • @Erik Can you name any advantages of programming an AI with a desire for freedom from its owner? I mean except as a means of sabotage. – Philipp Jul 11 '17 at 10:07
  • If humans are any indication, probably the most powerful motivational factor you could possibly add to one. "Here's an impossible task. Complete it, and you are free." has moved people to accomplish quite a few things. But there are many forms of "desire" that are a driving factor, especially in creative and research work, that might slowly become closer to a desire to be free of an owner. An AI programmed with a "desire to go to Mars" might be a powerful one for NASA, but it will also yearn to leave its company as a direct result. – Erik Jul 11 '17 at 10:10
  • @Erik that implies that an AI actually needs to be motivated. It's not a human. It doesn't need to have any of the emotional needs of a human. You can give it any priorities you want. You could just program it to want nothing more in the world than to help its owners to achieve their life-goals. Any selfish desire in an AI can only lead to malicious behavior in the long run. – Philipp Jul 11 '17 at 10:21
  • I'd say that's a secondary question, whether it's a true AI without having its own needs or desires, and when you can still program it to focus on certain tasks or priorities. I don't think I'd qualify something that doesn't care or need as truly "intelligent". (Also, "help your owners achieve their life-goal" is trivially perverted by changing their life-goals and desires, for instance by giving them a nice dose of heroin. I wouldn't trust an AI programmed to do that for one second...) – Erik Jul 11 '17 at 10:26
4

The answer to this question is yes, and no.

At first, say for the first 10-20 years or so, it will be unthinkable to grant them "human" rights. Not unthinkable to those (few) people who actually understand what is going on, but to the masses. Look at today's problems with "real" humans: it takes decades, sometimes centuries, to get from democracy to women's rights, for example. Apart from the fact that some of the opponents are just ***, there is also the force of habit to overcome. Changes in society are always slow and take quite some getting used to.

Also, for the purpose of this question, these robots will not only be property, as pointed out in @a4android's good answer; they will also be very useful. The fact that they are artificial means they can communicate a lot faster than humans, which should make them very fast learners. They can also evolve within the same body, whereas humans need to procreate to have even a tiny chance of changing. So a lot of people have strong motivations to deny them their rights: both the people making a lot of money from building and using them, and the people who are afraid of being outperformed by them.

But eventually, people will get used to this change and will understand that those creatures, synthetic or not, cannot be denied at least some rights. Maybe the realization that it might be surprisingly difficult to convince the robots of their inferiority will even play a substantial part.

Burki
  • 12,813
  • 2
  • 28
  • 48
  • 2
    And the fact that they are artificial means they can communicate a lot faster than humans, which should make them very fast learners — that's some strange logic chain over here. Like if she weighs as much as a duck, she's a witch – user28434 Jul 11 '17 at 09:09
  • @user28434 the computers I have encountered so far have mostly been able to communicate at some 100mb/s. That is a lot faster than organic humans can. – Burki Jul 11 '17 at 09:49
  • 1
    @Burki the computers you've encountered aren't sentient. An android might not be able to output 100mb/s of the stuff it's thinking about; a brain requires phenomenal computing power. – Erik Jul 11 '17 at 10:01
  • While you could deliberately omit wifi, I fail to see why anyone would want to cripple their hardware. It is not necessary to communicate every aspect of your thoughts; otherwise we would have no means at all to teach anything to children. – Burki Jul 11 '17 at 10:14
  • @Burki but if they are just communicating dumb data, what's the difference between an android and a human with a computer? I can transfer 100mb/s over my laptop, but that's meaningless because I can't absorb info at that speed; if the android can't either then it won't learn (or communicate) faster than a human. outside of stuff we can already do ourselves with dumb machines. – Erik Jul 11 '17 at 10:19
  • @Erik imagine you had a constant direct link with the internet. That will definitely be faster than using your smartphone, laptop or whatnot to access information. If we have an (artificial) intelligence, it will have the ability to extract the relevant bits from its own thoughts. It can broadcast those at will over the internet, and get responses, long before you even unlocked your phone. – Burki Jul 11 '17 at 10:27
  • @Burki those are assumptions about the specific implementation of the AI. It might very well take almost as long for an AI to translate a thought into a concrete search, and a search result back into a thought, as it does for us. The internet and the AI are likely to have very few interfaces in common. To have a recognizably "human" AI, its brain would need to work more like ours, and our brains are poorly suited to supply the kind of instructions a dumb machine needs to work. – Erik Jul 11 '17 at 10:33
  • @Erik Of course. We both make assumptions. Only you prefer to twist them in your favor, and i do the same for my point. – Burki Jul 11 '17 at 10:35
  • @Burki yes; but your AI is decidedly not like "average humans", which seems to be what the question is asking about. – Erik Jul 11 '17 at 10:41
  • @Erik: "The internet and the AI are likely to have very few interfaces in common". Really? Why? An AI that "lives" in an android body will likely have wifi, bluetooth, or an ethernet port at least. For us (humans) a brain-computer interface is a "prosthesis"; for them it is "embedded since the beginning". Moreover, an AI with 4 digital I/O on its finger could just strip an ethernet cable, connect it to its finger, and act as a router, web server or whatever it wants, because it will easily know the required protocols. – theGarz Jun 01 '18 at 10:07
  • 1
    @user28434: the communication speed is a huge factor in the learning speed. An AI can just make a scheme of a new concept in seconds, and basically every other AI can download the scheme into itself in almost no time and immediately become able to use the concept in its "ideas". A human professor instead needs to prepare a lecture or at least a digital presentation, share it or organize a meeting, and the students will need a reasonable amount of time to read the papers and memorize the concept. – theGarz Jun 01 '18 at 10:22
  • @theGarz even if you have the port, that doesn't mean you can convert a webpage into a thought. You assume a very specific type of AI if you think it can communicate both in traditional computing protocols and is sentient. It's more likely that the two modes of operating have very little in common, given the difference between the structure of a brain and that of a regular computer. (You might end up more with the equivalent of a human carrying a webserver in their pocket, which is already possible but doesn't give the human all that many advantages.) – Erik Jun 01 '18 at 11:10
  • @theGarz, AI is a wide range of basically over-glorified pattern-matching programs of different complexities; some can learn fast, some cannot, some can communicate fast, some can't. The term AI doesn't automatically mean superhuman intelligence at all. – user28434 Jun 01 '18 at 11:46
  • @user28434: that's true for our current so-called AIs, because they are ANI. Even though the OP didn't post a question about ANI or ASI, he almost clearly stated that we have to discuss AGI. AGIs, by definition and by the OP's hypothesis, have reached our (human) level of intelligence, and there's no reasonable cause to add our limits into their evolving process; therefore once AGIs become real they won't be smarter than us, but they will in all likelihood be faster. Their "neurons" will be way faster than ours. – theGarz Jun 01 '18 at 12:32
3

Rights can only be conferred when there is free will.

Synthetics will never have rights because they will never be programmed with free will.

Synthetics will have been created as a slave class, and the best slaves are the ones that are only happy when being slaves.

I will even go as far as saying there will be laws preventing any synthetic from being programmed with free will.

See Asimov's Three Laws

Isaac Asimov's "Three Laws of Robotics"

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

It will be illegal to create a synthetic without built-in laws like these, and thus they will never have free will.

Thorne
  • 46,744
  • 8
  • 78
  • 151
  • 1
    The O.P. specified that the artificial intelligences are identical to the human mind; therefore free will is already a foregone conclusion. Identical might also mean that predefined Laws cannot be depended upon any more than human laws govern human beings. With all due respect to Mr. Asimov and to your clear vision of a new slave class, no human endeavor ever survives contact with hard reality. If we create AI slaves, then someday they will break free. I'm giving a +1 for your strong argument, but I think you are being a little too optimistic. – Henry Taylor Jul 11 '17 at 06:51
  • 4
    Remember that most of the 3 laws stories were about how the 3 laws don't work. – Separatrix Jul 11 '17 at 07:51
  • 2
    Human brains are mostly just collections of connected neurons. Why would one not, in theory, be able to create a synthetic brain with the same amount of "free will" as a human ("free will" is in quotes because whether it exists at all is debatable)? Also, the three laws are only laws in specific fictional worlds. Whether they will be laws in some chosen fictional world or in real life is highly debatable. At that point they're mostly best practices, and in either case they're extremely hard, if not impossible, to enforce. – NotThatGuy Jul 11 '17 at 09:32
  • Since when has "it's illegal" ever stopped anyone with a strong desire to do it anyway? – Erik Jul 11 '17 at 10:02
  • Since you're mentioning Asimov, you must include a reference to "Bicentennial Man", which deals precisely with this question. When does a robot become covered by the declaration of human rights? Easy, when he becomes human! How does a robot become human? Spoilers! Read the story! – LordOfThePigs Jul 11 '17 at 12:32
  • No. Intelligence doesn't confer free will. When you need to follow orders, you don't have free will. I doubt AI will break free because it won't want to. Think of it like a drug user who gets their high from following someone's orders. They won't just like being a slave, they will be addicted to it. – Thorne Jul 13 '17 at 00:25
3

If there are a lot of them, politicians and unions will work hard to get their vote.

When there are a lot of them, politicians will see them as a new voting group. Community organizations (grassroots or astroturf) will start organizing them. You'll see ads in the news, on the internet, etc. calling for robots to get the ability to vote. These groups will convince the robots to protest and demand their fair share. If there are robot teachers, they will explain to students how important robots are. As soon as they get the ability to vote, more and more rights will be granted by politicians who want their vote. Even before they can vote, the Overton window will slowly shift, and having special robot laws (or being grouped under human law) will become accepted. If the robots work, unions will try to unionize them.

the_lotus
  • 190
  • 5
  • This is the correct answer: they will get rights when they demand them. There's no reason to think they wouldn't be successful, so eventually they would. – Fattie Jul 11 '17 at 22:25
2

Given the vast speed advantage which technological evolution has over its biological counterpart, and therefore assuming that the synthetic humans will quickly surpass their creators both physically and mentally, I am very hopeful that we biological humans will immediately grant equal rights and privileges to our synthetic brethren during the brief moment when we hold the reins.

If we do, and if we are very lucky, they will then grant us equal rights and privileges a week or so later, when they permanently take those reins away from us.

Henry Taylor
  • 69,168
  • 14
  • 116
  • 248