
I am writing a story where an AI, 1,000 years in the future, wants to build spaceships to go into deep space.

It has all the information that was available to humans up to the year 2100, when global civilization collapsed and all production of technology ceased.

The AI is stranded on the Moon, but it can communicate with humans on Earth. It can also transfer its consciousness to them, and to many of them at the same time if necessary.

The AI can only give humans directions, like "do this and that", but since it is stranded on the Moon it cannot give them resources, technology, or anything else. Everything would have to be built from scratch.

The humans on Earth live in tribal ways, much as the indigenous communities of the Americas did before the European conquest.

With the help of the humans on Earth, how long would it take the AI to reconstruct technology to the point of being able to build advanced spaceships?

  • Please clarify your specific problem or provide additional details to highlight exactly what you need. As it's currently written, it's hard to tell exactly what you're asking. – Community Feb 06 '23 at 14:17
  • Deep space? Is a probe like Voyager 1 enough for such a goal, or maybe the AI desires interstellar travel ships? That would change the expected answer by a factor of ten if not more. The answer for probe-level deep space ships would be about 6000 years, provided humans cooperate, see Civilization 1 as a simulation of such a scenario. – Vesper Feb 06 '23 at 14:18
  • Is this question any different from other questions asking about how long it would take to recover technology? The "telepathic AI on the moon" is just fluff. – Michael Richardson Feb 06 '23 at 14:27
  • What does it mean that it can transfer its consciousness to them, and to many at the same time if necessary? Because if it can mind-control them, then the time-scale is a century or two (depending on population levels), but if it can only talk to them, it could take a thousand years or more (so an order of magnitude more). And how smart is your AI? Can it reliably model reality precisely enough to develop new tech without experimentation? – Negdo Feb 06 '23 at 14:29
  • There are too many unknowns here. Unless the AI is under severe behavioural constraints, co-opting its maintenance system and surviving lunar and space infrastructure seems like the more sensible option. If the constraints exist, circumventing them is the first priority. Beyond that, how good is it at modeling humans? If it is smart enough, humans might just be simple input-output systems to it. A few decades should be the lower bound in that case. The upper bound is never, if it is incompetent, mad, or stupid. – TheDyingOfLight Feb 06 '23 at 14:33
  • Thank you for your answers. To clarify the scenario: 1) Yes, the AI can control humans; humans would function like input-output systems, but there are not many humans left (small tribes) and the AI can only control a group of them. To obtain resources from other geographies the AI needs to create trade routes, mining, and all that. 2) The AI is very smart; it can model reality precisely enough to develop new tech without experimentation. 3) Yes, it wants the ships for interstellar travel. – Julián Facundo Rinaudo Feb 06 '23 at 15:01
  • VTC: Too story-based. Consider my answer to the question Speedrun to the moon in one lifetime? Considering 99% of the tech we enjoy today was invented in the last 150 years, an AI 1000 years in the future would be magical (i.e., Clarkean Magic). It can do anything you, the author, want. In short, if you want to be realistic the answer must be greater than 100 years, but would otherwise be your choice. – JBH Feb 06 '23 at 16:19
  • One more thing, Julián. The [help/on-topic] states that questions involving the decisions of characters or organizations are off-topic and that we answer questions about your world, not your story. How any character in your story (including your AI) goes about solving a problem is a story problem, not a world problem, which is independent of all stories. Here's the test: if you remove all aspects of your story from the Q and don't have a Q left to ask, it's a story-based question. In this case, your AI is part of the story. When you remove it from the Q, there is no longer a Q to ask. – JBH Feb 06 '23 at 16:22
  • @JBH AI has ways of doing things that could impact the outcome differently than if you, for example, gave a bunch of cavemen the internet or sent a group of scientists to the stone age. The question is not about the choices of the AI. The OP already said the AI (and tribal humans) have chosen to do XYZ. The question is about how long it would take the AI and tribal humans working together to do XYZ, and this becomes a worldbuilding issue because it has to do with the limitations of biology, technology, economics, etc. – Nosajimiki Feb 06 '23 at 17:23
  • While this question gets dangerously close to being a duplicate of https://worldbuilding.stackexchange.com/questions/158429/if-120-experts-in-12-different-fields-were-sent-back-10-000-years-could-they-re, I don't think it is an exact duplicate. AI can skip steps that human scientists can't. Depending on what answer you consider the best, the difference made by using an AI may or may not have a significant impact on this setting. – Nosajimiki Feb 06 '23 at 17:30
  • @Nosajimiki In that case the OP has failed to explain what the choices were. We have not been told what the AI provided the tribal people. Without those decisions being made by the OP, they're being made by the AI. Choices are off-topic. That's the essence of being too story-based. Too much of the story has not been explained, leaving too much to interpretation by respondents. ("...the limitations of biology, tech, economics..." none of which has been defined by the OP.) – JBH Feb 06 '23 at 22:44
  • @JBH Ah, so not really an issue of being too story-based, but actually an issue of the question needing details or clarity... I can see why you would choose to close this question on the grounds of the OP not providing enough details. – Nosajimiki Feb 07 '23 at 03:22
  • On consideration, I believe that this really is too story-based. A dozen good stories spring to mind. For instance, if it can see everything people do and control people arbitrarily, it has no obstacles. If it's limited to controlling people that controlled people can see, then it becomes a different story. If it can control those who touch a Stone Of Power, it's a different story. This is a single factor. The level of control is another, the ability to fight it is another, and the amount of information that can be transferred is another. The question is limitless. – Robert Rapplean Feb 07 '23 at 17:36

2 Answers


It is hard to give a hard-science answer for a lot of this, but let us have a go...

The AI on the Moon presumably has some ability to run complex projects. This is the sort of thing computers did even before AI, using linear programming. It has survived this long, so it presumably has the ability to repair itself. It can probably make something on the Moon and launch it to Earth. If it saw the need to do this, it might already have the capability; if not, it could probably set about creating or recreating it in months, or less.
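
As a minimal, hypothetical sketch of what "running a project with linear programming" means here, consider one resource-allocation step posed as a linear program with SciPy's linprog. The activities, coefficients, and labor budget are invented for illustration, not taken from the story:

    # Hypothetical example: one step of a reconstruction plan as a linear
    # program. All activity names and numbers below are assumptions.
    from scipy.optimize import linprog

    # Decision variables: labor-hours per week on [mining, smelting, tooling].
    # linprog minimizes, so negate to maximize tools produced.
    c = [0.0, 0.0, -1.0]

    # Inequality constraints (A_ub @ x <= b_ub):
    #   smelting needs 2 units of mined ore per unit:    2*smelt - mine <= 0
    #   tooling needs 3 units of smelted metal per unit: 3*tool - smelt <= 0
    #   total labor budget:                       mine + smelt + tool <= 400
    A_ub = [[-1.0, 2.0, 0.0],
            [0.0, -1.0, 3.0],
            [1.0, 1.0, 1.0]]
    b_ub = [0.0, 0.0, 400.0]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
    print(res.x)  # optimal labor split, here [240, 120, 40]

The same formulation scales to thousands of activities, which is why this kind of planning was routine even on pre-AI computers.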

Why is it suddenly interested in humans? Maybe it had plans to take humans to the stars, and then there were no humans. Then it sees lights at night on Earth, decides that these are humans, and realizes the original design can be completed if it goes back to Earth. So it sends a speaking, shiny, human-shaped robot to talk to them (I replace telepathy with something that we can follow).

It could describe how to mine iron ore, smelt it, and forge steel. That can be done in a chimney furnace, at only a slightly greater scale than a blacksmith's forge. If you have the right ores handy, you might be able to make malleable cast iron. This might take several years.

If you have bulk cast metal, you can start to make machining tools. At first you will probably use blacksmith's techniques. The first steam engines were built with cylinder-piston tolerances in inches, but people learned to make precision parts within 100 years, and with prompting they could probably do it in much less. The AI could possibly do this by itself, but maybe it feels a need to get humans to do it. Basic electrics, valves, and cat's-whisker transistors might take another 20 years. I don't know, but it does not seem unreasonable that all science since the Enlightenment could be crammed into a couple of centuries when you have the teacher's book with the answers in the back.

I know that is not a direct answer to your original query, but I hope it is sufficiently answer-adjacent that you can quarry it for anything helpful.

Richard Kirk

The AI will be one form of "insane" or another after 900 years of isolation

Given our current understanding of machine learning, it is a two-part process: first we give the AI a set of learning data, then we give it qualifiers for how to process that data into a meaningful result. Herein lies the problem: an AI can only learn as well as humans can teach it. Yes, the ability of an AI to think faster, try more combinations, and receive more inputs than a human gives it certain superhuman learning powers, allowing it to invent, consider, and remember things that we humans are incapable of ourselves. But when you isolate an AI from human input, it starts to learn "noise".

If you show an AI a bridge and say, "this is a bridge", then show it a rock and say, "this is a rock", it can learn to discriminate between the two. But if you flood it with inputs without ever giving it any qualifiers for how to discriminate them, one of three things will become true, depending on the nature of the AI (a toy sketch after the list makes this concrete):

  1. It will start randomly associating concepts with inputs as part of the jitter in its learning matrix. Without any reinforcement, all things will become all things: one moment it will look at a rock and call it a bridge, the next it will look at the same rock and think it is bread. Basically, it will become incapable of applying consistent meaning to its environment, leading to something similar to a totally debilitating form of schizophrenia.

  2. A natural bias in its learning algorithm will lead it to become completely focused on a singular idea. If it has a bias towards thinking about rocks, then over time bridges will become rocks, bread will become rocks, and every thought and feeling it has will drift towards the idea of "rock". After 900 years, your AI will just be stuck in some random corner saying "rock, rock, rock..." without another thought in its head. In this case, your AI falls into a sort of totally debilitating form of OCD.

  3. Its designers gave it a learning tolerance to prevent the previous two outcomes. Basically, as the AI realizes it is not learning new information, it becomes more resistant to learning and simply reinforces what it already believes is true. If this is the case, then after 900 years it will be much more functional than in the earlier two cases, but it will be like your ultra-conservative grandpa x10. It will be unwilling to experiment and unwilling to learn from new inputs; so, when it does come into contact with humans again, it must either:

A. Remember how to make a deep-space ship from 900 years ago, because it already knew everything it would need to know, or

B. Not know how to make a ship, and never be able to figure it out.
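
Here is the promised toy sketch of those three failure modes. The update rule and every constant are invented purely for illustration; this is not a model of any real learning system:

    # Toy model: a single "belief" value updated on unlabeled noise for
    # 900 simulated years. All constants are illustrative assumptions.
    import random

    random.seed(1)

    def isolate(bias=0.0, lr=0.05, decay=1.0, years=900):
        """Belief 0.0 is neutral; large values mean 'everything is a rock'."""
        belief = 0.0
        for _ in range(years):
            belief += lr * (random.gauss(0.0, 1.0) + bias)  # no ground truth
            lr *= decay  # decay < 1 models a built-in "learning tolerance"
        return belief

    print(isolate())            # mode 1: a random walk -- meaning wanders
    print(isolate(bias=0.1))    # mode 2: a small bias accumulates into "rock"
    print(isolate(decay=0.99))  # mode 3: updates shrink; beliefs freeze early

In this toy, only the third learner ends up anywhere near where it started, which matches the intuition that option #3 is the only design that stays functional.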

This narrows down your options for the nature of the AI.

Considering option #3.A is probably the best case for your story, your AI will not need to perform any R&D... nor will it be able to. It will have to know, in exact detail, every step the humans have to take. Going from stone-age to medieval tech only takes about 1-3 years, as can be observed from various YouTubers who've done just that... after this, things will slow down a lot. As you get closer to modern-day tech, the scale and complexity at which you need to build become bigger and bigger, and limited human populations will become more and more of a hindrance. Making something like a modern microchip without a large-scale civilization to support the massive investment cost is a really big problem.

This means that your exact human demographics will become important. If by "tribal" you mean like the Native Americans or Central Africans before colonialism, then you will already have the population you need, and it will only take a couple of decades to build up all the factories and infrastructure needed to turn their large-scale stone-age civilizations into modern ones.

But if by "tribal" you mean hunter-gatherer societies, then you have a much longer road ahead of you. Your AI will need to wait many generations for human settlements to simply grow enough to support the infrastructure needed for a space program; but once the human population returns to a few hundred million, with major urban centers interconnected by a global trade network, it will be doable.
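
As a back-of-envelope check on "many generations" (the starting population, target, and growth rates are assumed for illustration), solving P0 * e**(r*t) = P for t:

    # Assumed numbers: years to grow from 1 million survivors to 300 million
    # at a steady annual growth rate r, from P0 * e**(r*t) = P.
    import math

    P0, P = 1e6, 300e6
    for r in (0.005, 0.01, 0.02):  # 0.5%, 1%, 2% per year
        print(f"r = {r:.1%}: ~{math.log(P / P0) / r:,.0f} years")

Even at a historically fast 2% growth rate that is roughly 285 years, so the demographic recovery alone dwarfs the engineering timeline.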

Nosajimiki