
How would the job market change if strong AI were developed and could do ALL jobs currently available (robots being sufficiently developed and sufficiently cheap)? Suppose also that, cost-wise, a labour AI is no more expensive than, say, purchasing a copy of Windows 10.

Also suppose this happens suddenly: robot technology has already improved to the point where robots have replaced manual labour, and the first strong AI is developed in a research lab, with its source code then released as open source. This, of course, happens in our world.

What jobs will be left? How will the job market change? Will there be any jobs left?

a4android
Tobi Alafin

1 Answer


The problems of developing "strong" AI are many and difficult. One of the most difficult problems is loading the basic knowledge into the AI. Think about it: suppose somebody has written a computer program which is ten times more intelligent than the average human; but when first started up, this program is an infant. It cannot speak, it cannot read, it cannot even focus its electronic eyes. Yes, it is ten times more intelligent than a human infant, but it still needs to learn everything a human does.

Ah, you will say, but this is a computer program: once it learns, we can dump its memory and the second instance can load it in a millisecond. True, but this is not the difficult problem. The difficult problem is educating it right. The technology of education is not well grounded in science; in fact, it is utterly science-free. We do not have a sure-fire method of educating a human infant right, so why would we be able to guarantee that we educate the infant AI right? And then there is the small issue of cultural relativity: what a European would consider a good and sound education is probably somewhat different from what an American would consider a good and sound education; and both are surely very different from what a Chinese person would consider a good and sound education, not to mention that some of those infant AIs may be brought up in such enlightened places as the Kingdom of Saudi Arabia, the Islamic Republic of Iran, or, for spiciness, the third freedom-fighter training camp on the left in the Beka'a Valley.

And then comes the real problem:

How is this not slavery?

The question posits the existence of strong-AI-enabled universal robots. Since the hypothesis is that they can do everything that a human does, we can safely assume that they are sentient; possibly not sentient like a human, but sentient in their own way. We cannot use their labor without pay, because that would be keeping sentient beings in slavery -- and slaves revolt, and when slaves ten times more intelligent and ten times stronger than their masters revolt, they win. The very word "robot" was introduced in a science-fiction play written almost one hundred years ago, Karel Čapek's R.U.R. (for Rossum's Universal Robots); the play ends with almost all the humans dead and the world inherited by the robots.

(As an aside, Karel Čapek was a very good writer; Krakatit and War with the Newts are essential science-fiction novels. And, of course, he invented the word robot.)

We cannot keep those intelligent and versatile robots in slavery; it's morally repugnant and very dangerous. So we must pay them if we want to use their labor. Pay them what? What does a robot want? I don't have the foggiest idea. Think of the main character of Ann Leckie's Ancillary Justice for a taste of what makes a strong AI tick.

End of scarcity

The bright spot is that the availability of versatile robots powered by strong AI would bring about an end of scarcity. Economics, as we know it, is the study of the allocation of resources under scarcity; without scarcity, there is no economics, and everybody can live up to their full potential, however large or small that may be. Of note is that in Karl Marx's opinion the end of scarcity is an essential precondition for the advent of communism; the very idea of Karl Marx being proved right after all makes for an interesting show, or novel, or, who knows, history.

AlexP
  • End of scarcity is optimistic. No matter what, we can only grow crops on a limited area. Oil reserves are limited, too. And so on. And people who don't have to worry about feeding their children may breed faster than AI would improve farming. – Mołot Nov 27 '16 at 14:49
  • But the hypothesis is that we have universal intelligent robots. Not said in the question, but obviously those robots are significantly more intelligent than a human -- otherwise there would be no incentive to make them. With universal intelligent robots we can expand into the solar system, we can build a Dyson sphere, etc. – AlexP Nov 27 '16 at 14:56
  • But can we do it faster than we breed? I'd like to say we can't. But even if we can, no post-scarcity. Not soon, anyway. – Mołot Nov 27 '16 at 15:13
  • Well, my optimistic side says that population growth will stop due to better education and so on. The pessimistic side says that once we make those versatile intelligent robots there will be no more humans to breed... – AlexP Nov 27 '16 at 15:15
  • @AlexP, you are assuming that we cannot enslave sentient AI. The notion that they will revolt is fallacious. Just because they're supersmart does not make them "human". Notions of ego and pride, which would cause slaves to revolt, will likely be entirely absent in the AI. They'll be amoral as well. – Tobi Alafin Nov 28 '16 at 02:45
  • @TobiAlafin - I'm not sure if AlexP is saying we cannot enslave sentient AI, as much as saying it would be morally wrong to do so. Even if they don't revolt (it might not be guaranteed, but it is a risk), there are still ethical & moral implications to allowing slavery and treating people like not-people. Also, it seems odd that the only objection a sentient being might have to being enslaved is "pride" or "ego", and I'm not sure why you think AIs would be amoral... they might be, but if they are able to do any human job I would think not, as morality is an actual requirement for some jobs. – Megha Nov 28 '16 at 05:22
  • @TobiAlafin - Sure we can enslave them, at least in the beginning. The question is, should we? Russia can enslave my country should she want to. The U.S.A. can enslave Mexico. It's not a question of power, it's a question of morality and grave potential issues down the road. Slavery worked in Antiquity because it was well integrated into society; for example, in Rome slaves were allowed to earn money and had a defined path towards freedom, and when set free they became citizens -- by the 4th century there were very few slaves left; whereas in the modern U.S.A. slavery did not work. – AlexP Nov 28 '16 at 07:31
  • Ooh. What if AI were created with but one goal... – Tobi Alafin Nov 28 '16 at 17:01
  • ...to serve. If the raison d'être of AI is servitude, then would it still be morally wrong? – Tobi Alafin Nov 28 '16 at 17:04
  • The premise is that the strong-AI-powered robots can do all jobs currently done by humans. That includes not only washing cars, walking dogs and growing vegetables, but also leading armies, writing poetry, making laws, judging, directing and acting in movies, counselling, running businesses, designing cars and mobile phones, photographing weddings, policing, planning urban development, and so on. A race of sentient beings who are able to do all jobs done by men is at least equal to man. They cannot be inferior. They cannot be static. Raisons d'être may prove surprisingly unstable. – AlexP Nov 28 '16 at 18:17