95

I'm trying to write a 'Gundam-like' series, though with space fighters instead of humanoid robots. I want a semi-hard-science world: I'm okay with making up technology without fully explaining how it works, so long as it seems plausible, but I want to be as realistic as possible within my world and to stay consistent with whatever technology I do add. This makes short-range space fighters difficult to justify...

I'm using hand-waves, Minovsky Particles, and similar tricks to justify space fighters. Most notably, the presence of energy shields requires the use of short-range energy weapons to pierce them; shields are more vulnerable to lots of weaker attacks from small fighters than to a few large attacks from capital ships; and the massive EM radiation from shields, coupled with active electronic countermeasures, prevents remote control of the fighters.

The biggest problem I have with justifying these space fighters is AI. It still seems like a good AI would be more viable for these space fighters than human pilots. The cost of a space fighter goes up if you have to include life support systems, and in fact the increased size needed to fit a human, controls, and life support makes it a bigger target. My 'Ace' class fighters, rare fighters equipped with their own shields, would particularly suffer, since the extra size makes shields noticeably less effective. Removing humans from the fighters makes them cheaper and smaller.

In addition, a future AI could presumably respond to attacks faster than humans, be less predictable, be more trustworthy (won't betray you, won't retreat in fear, won't do something stupid or try piloting drunk), be better able to handle a truly 3D fight that humans aren't used to thinking in, and be able to handle G-forces humans can't. Plus, the use of AI means no human deaths if fighters are destroyed.

However, I want human pilots as my protagonists. I do not want AI-controlled fighters to exist. Thus I want to come up with the best justification(s) for why humans would still be piloting these vehicles.

Right now my best justification is to simply say that AI advancement stagnated in this future. While AI can still do everything it manages now, and some things better, AI capable of processing the complexity of fighting in space simply has not been developed. However, this seems unlikely to me. I'm a programmer, and I feel like our AI of today, with enough development (and I'm talking many years), could already almost handle controlling a space fighter. Give it the faster processing and better computers that will exist in the future, and it's hard to believe that AI would be less suited than humans.

Are there other approaches I can use to justify human pilots over AI? I will not have an "AI went crazy and tried to kill us all" backstory, or otherwise make people afraid of a "terminator scenario". I'm not discussing human-level intellect or actual 'learning' AI when I say AI here, so there is no danger of an AI being smart enough to revolt; I just don't consider it a realistic concern.


EDIT: I sort of implied it, but to be clear, I'm not talking about sapient-level learning strong AI, or anything that advanced. I'm talking mostly about weak AI in the sense we have now: it responds quickly to pre-programmed stimuli in the manner its programmers felt was best, with some randomness and game-theory strategies to avoid predictability. It doesn't need to learn or be capable of doing anything other than flying a fighter and shooting at things. Sapient AI will never be in any of my stories; I think it's game-breaking and boring.

Final Decision: Wow, I can't thank everyone enough; there is a multitude of good reasons listed below. I don't think any one of them fully solves the problem, at least not within my desired world and the limits I want to place on its technology; but luckily I have many reasons provided!

One of my characters is a pacifist and programmer who effectively writes basic weak AI to drive shields, and who is working on finding a way to remove pilots from fighters because he figures humans will always war; the best one can do is limit the deaths from it. Early on, I'm going to have him go on a tirade about how he would love to replace pilots with AI and, when questioned on it, he will launch into a bit of a geek rant about the numerous factors which limit AI and which all work together to make it not yet viable, and unlikely to be viable for a while. I'm going to draw on many of the answers below to fill out the long list of reasons he gives. Therefore I feel bad about only being able to award one person the top answer; at least half a dozen people's answers will be used.

Here is a short list of most of the things he will go on about, though I am including some other minor parts of other answers.

  • AI techniques have not progressed much in the future. We have faster computers, but our approaches for learning AI and genetic algorithms still haven't panned out for large-scale tools, so we're still dependent on the Deep Blue approach of "calculate all possibilities in an exponentially expanding tree", which simply doesn't scale well. As a geek, I can almost see him starting to explain big-O notation, and how the increase in processing speed per year can't keep up with the exploding cost of every extra nanosecond of 'look ahead' these AIs need, before he realizes he's talking way over his audience's heads. (A toy sketch of that argument follows this list.)
    • Limits in AI development mean that humans are better at making decisions in the heat of battle. With communication during firefights limited to occasional burst transmissions (a limit of my world: regular comms are all effectively blocked, and quasi-FTL comms exist but are limited in how they work), it's important to have someone on board who can make decisions even if communications go down entirely.
  • AI is expensive. Shields emit massive EM radiation and even EMP spikes. Working in a battleground with so much EM noise requires shielded hardware that is more expensive, and the cheaper non-shielded fighters are easier to mass-produce. Computers can still exist, but your processing power is limited by the expense of building machines that keep functioning in space amid the EM and other emissions of battle.
  • I've begrudgingly agreed to have pilots use a mind-machine interface to handle some of their piloting, though I'll have them use a combination of that and regular controls, under the claim that the MMI can only interface with the parts of the mind that are easiest to translate into actionable commands. Specifically, the MMI is used for movement and navigation only, and physical controls for everything else. I would prefer to avoid this one for storytelling reasons, but otherwise it's just too hard to justify pilots' reaction speed being fast enough.
  • All this combines to mean that AI exists on fighters but is limited to certain functions. Humans are still used for the things we can't make AI do easily and cheaply.
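
To give a feel for the core of that geek rant, here is a toy sketch of why brute-force look-ahead doesn't scale. The branching factor and hardware speed below are numbers made up purely for illustration, not canon for my world:

    # Toy illustration: a game tree with branching factor b has roughly b**d
    # positions at depth d, so every extra step of look-ahead multiplies the work.

    def nodes_to_search(branching_factor, depth):
        """Positions a brute-force (Deep Blue-style) search must evaluate."""
        return sum(branching_factor ** level for level in range(1, depth + 1))

    BRANCHING = 30           # assumed maneuvers available at each decision point
    NODES_PER_SECOND = 1e9   # assumed evaluation speed of the flight computer

    for depth in (4, 8, 12, 16):
        n = nodes_to_search(BRANCHING, depth)
        print(f"look-ahead {depth:2} steps: {n:.2e} positions, "
              f"~{n / NODES_PER_SECOND:.1e} s to search")

    # Each extra step multiplies the work by ~30, so even a computer a thousand
    # times faster only buys about two more steps of look-ahead.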

There are other, more political factors which also play a role, primarily in limiting funding for developing techniques to work around the above issues:

  • People don't trust AI with guns; everyone is afraid they'll go rogue and hurt people. He will likely go on to point out that some of this is unreasonable bias from watching too many unrealistic sci-fi stories, but nonetheless the bias is against it.
  • People distrust AI for fear of hacking. He flatly says this is nonsense (locking down the system would take programming effort but is by no means impossible), but it's politicians, not programmers, who sign the checks for hardware purchases, and you can't convince them of that.
  • Political pressure exists to keep people as pilots: a combination of fear of AI in weapons, soldiers not wanting to be rendered unemployed by AI, a desire for accountability, and a belief that wars will grow more and more excessive without the human factor.
  • People want humans willing/capable of saying no if a general goes too far in his decisions. There will be a past infamous example of a general who went against orders and fired off numerous automated weapons without regard to their equivalent of the Geneva Convention, killing many civilians; the entire incident is generally considered an atrocity. It's agreed this was one crazy man, without any support from others, who was able to do this only because the automated systems had no check to prevent one man from firing all of them. Militaries now require multiple people to authorize automated weapons, as they should have then, but this incident is still remembered. One of the arguments against AI is that this sort of situation could occur again if a pilot is not present to refuse unlawful orders.
  • Assorted tweaks to my technology so the limits of human pilots aren't as significant. For instance, the best propulsion systems for shielded craft have limited acceleration, because other propulsion systems are either too expensive for mass-produced (non-shielded) fighters or tend to destabilize the shields of shielded craft. This, in turn, limits the G-forces imposed on pilots. I'll also have a poor man's inertial dampener to address G-force concerns.
Gryphon
dsollen
  • 1
    This feels like a duplicate of: http://worldbuilding.stackexchange.com/questions/8587/how-to-avert-ai-as-a-main-player-in-the-future – Tim B May 11 '15 at 18:20
  • 3
    A similar question was actually asked on Aviation StackExchange - http://aviation.stackexchange.com/q/1802/615 – JMK May 12 '15 at 13:52
  • Comments are not for extended discussion; this conversation has been moved to chat. – Monica Cellio May 12 '15 at 17:47
  • Do the humans have to be "our" (i.e. from our Earth) descendants? Or could the story be set in a slightly different fictional universe, or the "humans" are from another planet? - Like Battlestar Galactica or Star Wars, where there are humans but they aren't us. – komodosp Jul 12 '18 at 14:36
  • Uhm, I don't know why I didn't think of it yet, but it's actually pretty easy: automatic targeting (and unmanned fighters are nothing else) has ALWAYS been an arms race against its countermeasures (flares, decoys, etc.). In your universe, the countermeasures just won (due to an upper limit on targeting computers). Therefore, only fighters and (manually aimed) capital ship guns are possible. [The fighters also having countermeasures explains why anti-fighter batteries can't destroy them immediately.] – Hobbamok Sep 28 '18 at 12:31

37 Answers

77

I'm a programmer, and I agree with you that eventually AI is basically going to be unbeatable in this scenario. The common sci-fi trope is that humans are better at thinking "outside the box" and therefore they end up defeating AI - but that's simply not justifiable; there are programming techniques to work around that.

However, I think you can still justify human pilots if you don't go too far into the future. Here are the issues as I see them:

Life Support

This isn't the barrier you're expecting. Keep in mind that most computer equipment isn't going to operate in extremely cold or hot temperatures either - so in other words, you need "life support" for AI as well. Now in a capital ship life support is a big deal because you need to be self-sustaining, but in a fighter - where you might expect it to only be active a few hours at a time - you can go with a very minimal setup. Get waste heat from your engines, have some way to cool off, and use air recycling. It's likely that the life support for a human pilot will be very close to that of AI.

Acceleration

This is a much bigger barrier. Humans are vulnerable to high G-forces. AI units, if engineered correctly, could tolerate much higher accelerations and more violent maneuvers. If you want to keep humans, you need some sort of inertial compensation tech that lets your biologicals keep up with their artificial rivals in this area.

Reaction Time

AIs are fast. But why not enhance your humans? Give them some sort of nanotech or genetic adaptations that make them react faster. The thing is, you get diminishing returns, at which point other factors become more important. The difference between reacting in .1 seconds vs .01 is a big deal. But the difference between .01 and .001 might not even matter, because at that point what's holding you back are the mechanical limitations of your fighter. So you don't need your humans to catch up to AI; you just need to make both fast enough to push their fighters to the limit.
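
A rough back-of-the-envelope sketch of that diminishing-returns point (the closing speed and mechanical lag are assumed numbers for illustration, not anything from the question):

    # How far a target drifts during the pilot's reaction time, versus how far
    # it drifts while the fighter's own mechanics are still slewing the guns.

    CLOSING_SPEED = 2_000.0   # m/s, assumed relative speed in a close engagement
    MECHANICAL_LAG = 0.05     # s, assumed time for the airframe to slew and fire

    for reaction in (0.1, 0.01, 0.001):
        drift_pilot = CLOSING_SPEED * reaction
        drift_machine = CLOSING_SPEED * MECHANICAL_LAG
        print(f"react in {reaction:>5} s: target moves {drift_pilot:6.1f} m, "
              f"plus {drift_machine:.0f} m from the airframe's own lag")

    # Going from 0.1 s to 0.01 s removes 180 m of error; going from 0.01 s to
    # 0.001 s removes only 18 m, which is already dwarfed by the 100 m that the
    # fighter's own mechanical limits cost you either way.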

Judgement, Unexpected Events and Civilians

And here's where you want human pilots. I feel it's very likely that a dumb AI - one that specializes in handling a fighter, but can't really think - wouldn't realistically be able to make all the kinds of calls you want from a fighter pilot. I'm not talking about combat maneuvering, but what happens when a civilian freighter accidentally enters the battlezone? What if the enemy disguises itself as a civilian freighter, and your AIs are hard-coded to ignore them?

If an enemy launches missiles at two habitats, is your AI smart enough to try to intercept rather than continue to fight, if that wasn't in the mission parameters? Can it make a judgement call about killing 100 people to save 10,000, and properly factor that into its military decision-making? Do you really want to load a nuclear-equipped missile onto a dumb AI and give it permission to use it without supervision?

These are things that a human pilot will also have trouble with, but I think they're still far better prepared than an AI.

Note: If you go far enough in the future, you could probably even code it to handle the above scenarios. So it all depends on where you draw your line of computer tech.

Dan Smolinske
  • Comments are not for extended discussion; this conversation has been moved to chat. – Monica Cellio May 12 '15 at 17:49
  • 3
    Note: best Quake players eye->brain->finger reaction is around 200msec. It is a natural limit. – Nakilon May 23 '15 at 00:03
  • 3
    So basically 'AI can't ever hope to comply to the Zeroth Law'? – Evpok Jan 28 '16 at 10:43
  • @Nakilon The best Quake players are the ones who can read and outsmart their opponent. Reaction time and perfect aiming are only the entry ticket into the pro scene, where people will predict your moves and counter them, denying you resources and forcing only favorable engagements on you. In our question this translates directly to how powerful and adaptable the AI is compared to human pilots. – Nick Dzink Aug 02 '17 at 23:57
45

Karrick breathed a sharp pained sigh as the long, slender needle pierced the back of his skull. He'd flown a thousand sorties, but strapping in to his fighter was still something that just felt unnatural. A billion years of organic evolution demanded that a sharp object drilling into the back of one's head should be resisted and fought against. Almost instantaneously, though, the needle began pumping a stream of Minovsky serum into his brain, dulling all of the sharp and jagged edges of the world. Karrick breathed a sigh of relief and relaxed his eyes.

He felt the serum seep deeper in his brain and welcomed a familiar plummeting sensation as time began to slow down. He watched his calibration clock, running equations through his head to count how fast his brain was computing.

10x. This was the speed at which he could watch a hummingbird's individual wing beats. At this speed of thought, he could catch a dragonfly in his hands. An Ace class computer, of course, would be incomprehensible. He sank deeper.

30x. This was the speed at which he lost the ability to communicate with humans still operating on a normal time scale, but it still wasn't fast enough.

100x. With his brain operating one hundred times as fast as it did at a rest state, he could no longer comfortably talk, as the muscles in his vocal tract could not keep up with the speed at which his mind sent them commands. He rapidly ticked his fingers across the edge of his instrument panel. These prosthetic metal pilot's hands, of course, suffered from no such issues. His mind was up to speed, finally ready to interface with his computer. More importantly, though, at this speed, with the Minovsky serum pumping through his brain, he could finally think fast enough to determine the actions of other nearby minds. He peered through mindspace and saw his ground crew clustered around the fighter, ever so slowly finishing the preparation of his fighter for takeoff. Far above him, almost in orbit, he saw the dual mind of another fighter pilot and her computer punching through the last few layers of the atmosphere. He waved a mental 'hello' and turned his mind's eye closer to him. There, waiting patiently in the circuitry around him, was Ace, his companion. Ace was patient, of course, as AIs always were, and wouldn't hurry Karrick along or try to make contact until he was ready. Karrick gave one more glance at his dashboard clock. Convinced that he was fully immersed in the new time stream, he reached his mind out to Ace.

"Ok buddy, hook me up. We've got a mission to do."

If you're willing to have a bit of hand-waving in your science, why not cook something up that makes humans necessary?

In this story, I've added the Minovsky serum to do just that. On their own, humans are far too slow to help a computer and can't do anything that a computer can't do better. However, speed their minds up to 100x and give them the ability to see other minds, robotic or organic? Now they're an indispensable part of a human/machine pairing that can react to a purely mechanical system by paying attention to what it's thinking, not what it's doing. This, of course, extends to things like missiles, making such standard long-range weaponry useless against an Ace fighter.

The effects of this serum, of course, have not been replicated in machines. Human scientists aren't even entirely sure how this serum works. It's extracted from an alien organism that was found deep beneath the billion-year-old ruins of an extinct alien civilization. What they are sure of is that it can turn the right human with the right training into one of the most powerful weapons in the galaxy.

ckersch
  • Nice! But even a human who can think as fast as a computer still needs inertial damping to keep up with a meat-free ship. Still, an excellent complement to one or more of the other answers. –  May 11 '15 at 18:08
  • I had seriously considered a human-computer interface. However, I want a non-pilot to be in charge of a remote-control prototype Ace as the only survivor of an attack aimed at stealing it. For various reasons I was afraid that it would be harder to show off the non-pilot's growth into a semi-skilled pilot and the tricks he uses to compensate for limited skill compared to ace pilots. However, I'm now thinking that if we had a limited HCI that doesn't allow full control of the craft but could be used for partial control, say just of movement, this could still be made to work. – dsollen May 11 '15 at 18:12
  • 21
    The spice must flow. – Seth May 11 '15 at 20:29
  • 2
    I was thinking of the human-computer interface as less of an interface through which a human uses a computer and more of a cybernetic extension of the human's mind. In this way, a human can control everything about the fighter, because connecting to the computer has expanded their mind to the point where they can comprehend everything that needs to be done fast enough to do it. The computer, of course, has a rudimentary mind of its own that works alongside the pilot. – ckersch May 11 '15 at 21:32
  • 1
    This arrangement may be beneficial if you have an unskilled pilot strapping into one of your fighters, since it means that they have someone (the AI) there to walk them through how to fly things. The AI doesn't have the same ability to sense things as a human mind enhanced with the Minovsky serum, but it's been in contact with enough such minds to tell our novice pilot roughly what he should be looking for and how to pilot the fighter through a combat environment. – ckersch May 11 '15 at 21:34
  • Interesting. But what are the speed limits of the eye muscles? – Martin Schröder May 12 '15 at 14:04
  • 1
    @ckersch Did you just make up the text or is this an excerpt from a longer text that one can actually read? I want to know the rest of the story! ;) – Henrik Ilgen May 13 '15 at 11:54
  • Sounds very much like the Copperhead fighter pilots from Timothy Zahn's Conquerors trilogy - +1. – Sean Vieira May 14 '15 at 05:28
  • 3
    @HenrikIlgen Made it up, though I may write more of it at some point. Glad you liked it! – ckersch May 14 '15 at 18:16
  • I'll note, the prosthetic pilot's hands are probably better rendered as "hands" only in the abstract sense - interfaces rather than physical objects. Otherwise, you're adding in their need to make physical movements to the potential reaction time. Better to have a bio-to-digital brain-to-interface-to-fighter connection channel. – Bemisawa Mar 22 '18 at 17:35
41

I'd go with the easy answer: International laws.

The Geneva Convention, Biological Weapons Convention, Chemical Weapons Convention, Hague Convention, Outer Space Treaty, Anti-Ballistic Missile Treaty

When weapons become too effective and scary they get banned. You don't need a terminator scenario, simply a historical war that was won extremely fast and extremely violently by one human side (or perhaps one where both sides wiped out the government of the other and forced the international community to intervene) using AI craft, followed by a gentleman's agreement by all sides that independent AI-controlled war machines (especially self-repairing or self-replicating ones) will not be deployed, in the same way that nuclear, biological, and chemical weapons aren't.

There would be strict limits on automation in military craft, and AIs are still likely to be used in planning and logistics, but neither side wants to break international/interstellar law.

Both sides would maintain a stockpile of such craft, similar to a nuclear deterrent, that they don't normally use, and the fighting is left up to grunts.

So in the story AIs could be used to fight, but they aren't, for the same reason poison gas wasn't deployed in WW2 and nukes weren't deployed in the Gulf Wars.

Murphy
  • It would have to be a law both sides respected, which probably implies there was an incident in the past. – neontapir May 11 '15 at 18:20
  • 2
    I must admit, this is a decent point. I'm opposed to the concept of AI being sapient and so didn't want a war involving sapient AI. I'm not as opposed to a historical war that was just very brutal due to AI. I'll consider this possibility. – dsollen May 11 '15 at 18:29
  • 5
    On a related note, human fear of AI weapons and a desire for civilian (read: human) control of the military. A starfighter might have the destructive potential of a missile sub, and those have humans on the keys, with varying amounts of discretion (http://en.wikipedia.org/wiki/Letters_of_last_resort). – o.m. May 12 '15 at 05:11
  • 9
    "In order to preempt the accountability gap that would arise if fully autonomous weapons were manufactured and deployed, Human Rights Watch and Harvard Law School’s International Human Rights Clinic (IHRC) recommend that states: a) Prohibit the development, production, and use of fully autonomous weapons through an international legally binding instrument. b) Adopt national laws and policies that prohibit the development, production, and use of fully autonomous weapons." - http://www.hrw.org/sites/default/files/reports/arms0415_ForUpload_0.pdf – user24582 May 12 '15 at 08:34
  • 2
    Or the Butlerian Jihad in Frank Herbert's Dune, forbidding machines doing what humans were meant to do. This of course has a consequence that people would be created to do what machines were meant to do. – Henk Langeveld May 15 '15 at 07:57
  • 1
    I know this is coming way after the fact, but this also opens up the possibility of the antagonist being a government that doesn't play by the rules, uses AI weapons (even if they're entirely under control), and is an aggressive conqueror... – Deacon Nov 20 '15 at 20:59
  • So essentially weaponized AI would be like nukes today? – Ambrose Winters May 21 '17 at 00:25
30

Note: As per your edit, I am using AI to refer to modern day non-learning logic AIs, not sentient AIs

Consider rolling with it, and working with AI rather than against it.

Today's fighters are so dependent on computer AIs that you basically can't fly an F-22 or F-35 without the computer helping you. If you stripped the AIs away, your space fighters would be little more than WWII fighters and bombers amidst the AI-empowered fighters flitting around them. Embrace the AI, don't fight it.

And then go beyond the AIs. Yes, we have UAVs up and coming, but most planes still have a human guiding the AI. The human makes the higher-level decisions (like "bank right 90 degrees, fast") and the computer takes care of the reflex-level decisions ("I detect a vortex rolling over the right wing, so I need to deflect the ailerons 3 degrees up in 0.0015 seconds... GO"). This is a match-up that has proven itself time and time again. Even in the world of chess, this match-up wins. Computers have finally beaten humans at chess, but the best power-guzzling chess computer in the world is no match for a human master who has a copy of Fritz 10 and a plain ol' laptop.

So the trick is ensuring pilots stay viable, not ensuring AIs can't do the job. One thing I have noticed is that humans tend to be far better at subtle balances. While an AI can "go to the rails" and come back far faster than any human could ever manage, it's hard to get it to hold still. It ends up chattering as it tries to spastically make corrections, and much of modern control systems theory is about trying to devise ways to ameliorate this spastic control.
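
Here is a minimal toy sketch of that chattering behaviour; the gains, thruster limits, and time step are invented for illustration:

    # A bang-bang controller keeps slamming its thruster back and forth once it
    # is near the hold point, while a damped controller with a small deadband
    # settles and then does nothing: it "keeps its center".

    def simulate(controller, steps=200, dt=0.05):
        pos, vel = 1.0, 0.0          # start 1 m away from the desired hold point
        thrusts = []
        for _ in range(steps):
            thrust = controller(pos, vel)
            thrusts.append(thrust)
            vel += thrust * dt
            pos += vel * dt
        reversals = sum(1 for a, b in zip(thrusts, thrusts[1:]) if a * b < 0)
        return pos, reversals

    def bang_bang(pos, vel):
        # full thrust one way or the other, chosen by a simple switching rule
        return -1.0 if pos + vel > 0 else 1.0

    def damped(pos, vel, kp=2.0, kd=3.0, deadband=0.01):
        if abs(pos) < deadband and abs(vel) < deadband:
            return 0.0               # close enough: hold still
        return -(kp * pos + kd * vel)

    for name, ctrl in (("bang-bang", bang_bang), ("damped", damped)):
        offset, reversals = simulate(ctrl)
        print(f"{name:9}: final offset {offset:+.3f} m, thrust reversals {reversals}")

The bang-bang controller never stops twitching; the damped one parks itself and goes quiet, which is the "hold still" behaviour described above.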

What you need to do is create the combat such that the best outcome is not given to the one with the fastest reactions, but the one that can keep their center amidst the chaos.

A sample combat situation that might demonstrate this is one where the computers develop a tremendously advanced game tree to determine which way to move. You tend to start in a safe position, but if you ever move out of "safe," the computer will slowly grind you to death with unrelenting game theory.

However, there is a catch. As the safe region gets smaller and smaller from approaching the enemy aircraft, it gets more complicated to predict in this way. Pilots soon talk about their "center." As long as you keep your center through smooth and elegant movements, it really doesn't matter what the AIs do for you or against you, nobody can touch you. However, lose your cool for a moment, and you may make one spastic movement and lose your center. The instant that happens, the enemy AI will pound you into oblivion.

The tricky thing is that, when "centered," you get remarkably little information about the world around you, because you're not really interacting with much. You're just sort of floating amidst the din of combat. The computer AIs simply cannot handle the lack of stimulus, and fall off center to be blown into smithereens. Only with a well-trained human at the helm can you walk this fine line. Then? You're virtually immortal. Nothing can touch you. No AI can compete when you are centered, and no human could ever touch you on their own... your AI would obliterate them.

Cort Ammon
  • I hope the decision isn't so stark between unaided humans and pure AI. I saw another comment about a hybrid approach. Yours is with separate hardware, theirs was through augmentation. I think it's foolish not to have computer-assisted flight because we have it today for the reasons you mentioned. – neontapir May 11 '15 at 18:26
  • 2
    I like this approach, but take care: it could veer dangerously close to the X-wing/astromech droid pairing in the Star Wars universe. Especially in EU novels, the droid can take nearly autonomous control over the starfighter when necessary, though it cannot make the same skillful maneuvers the human (or other) pilot can. – ilinamorato May 11 '15 at 18:41
  • @ilinamorato Piloting is not that hard. I'd trust the computers today to pilot. In many cases, I'd trust myself without an AI, and I don't have any training at all. However, we don't have pilots for when it's easy. We have pilots for those 3 seconds of terror when everything goes wrong. I wouldn't pretend for a moment I could take a plane into a combat zone. I wouldn't pretend for a moment I could deal with an engine fire on a commercial airliner. Sometimes there is no substitute for a trained professional with his hardware. – Cort Ammon May 12 '15 at 05:29
  • 2
    @Cort Ammon but the issue here isn't reality, but the perception thereof. In many people's minds, piloting aircraft is unimaginably complex, and spacecraft even more so. Writing that it is so in a story wouldn't break the willing suspension of disbelief for most. – ilinamorato May 12 '15 at 10:27
  • US Space Shuttle was difficult to land, PIO, Pilot Induced Oscillation was the reason. The human was overreacting to the inputs, so NASA reduced the input from the pilot by some factor. Solved the issue. In this context the pilot is needed to make the decision, the AI is used to make the fighter turn to support the decision. – dcy665 Aug 05 '17 at 00:03
24

EMP. Capital ships have EM generators which can fry or at least disrupt delicate electronics, requiring a human component to keep the fighters flying.

The fighters use fiber-optics for most systems (e.g., weapons, engine control, flight systems), so an EMP doesn't have too much impact on a piloted craft. If you like, it could knock out long-range communication, or perhaps advanced sensors, requiring the fighters to close to visual range.

So why not make the AI computers fiber-optic, too? Because far more computer power is needed to simulate human decision-making than to simply run systems, and building this in a bulky optical form (and doing it so perfectly that an EMP doesn't throw off its precise calculations) is impractical. Maybe someone will come up with a semi-intelligent "drone" fighter which uses a fiber-optic core, but it's fragile: with any damage to its Faraday cage, its sensors become vulnerable to an EMP - sounds like a good "mook" unit for the evil empire!

  • Hmm, I will think about this. I can easily have EMPs occur naturally as a side effect of shields deflecting attacks, which feels like a more natural way to introduce the threat of EMPs. However, I don't know if I can really justify the claim that fighters (and humans!) are otherwise immune to EMP, and how do I claim huge capital ships have no computers? I could say non-shielded ships are vulnerable but shielded ones are not; except that I want my PCs to be flying shielded fighters; that's what makes them unique, world-affecting characters: they have rare capital-ship-level fighters! – dsollen May 11 '15 at 15:58
  • 2
    I'd say separate EMP generation and immunity from shields. It gives you more possibilities - a small gunship with no shield but EMP cannons; a large ship with a shield but no EMP cannons precisely because no one uses AI fighters, and of course then it's vulnerable to an out-of-left field AI attack. Note that building EMP-hardened systems isn't difficult, the handwaving is about why the AI can't be protected similarly. –  May 11 '15 at 16:06
  • I find the idea of "the computer needs to be physically bigger to be powerful enough" difficult to accept. I imagine the computers for weapons, engine control, life support, power distribution, communications, and flight systems would need to be fairly powerful as well as linked together. Presumably they're distributed with optical interlinks to avoid EMP coupling, but why not distribute the AI computer as well? – Samuel May 11 '15 at 17:58
  • 2
    We probably won't know for decades. But right now, the computers required to operate a fighter require an insignificant fraction of its volume, mass, and power output, but even the most sophisticated supercomputers can't pass a Turing test. I don't think it's likely that a 22nd-century AI will require 100s of kilos of equipment, but it's the most plausible possibility [that makes human pilots necessary] I can think of. Or maybe making it EMP-proof is what makes it bulky and heavy, in which case civilian vessels probably will be AI-driven. –  May 11 '15 at 18:15
  • 3
    @JonofAllTrades Why does this AI need to pass a Turing test? It's not a learning or sapient AI. The OP says it's "just advanced algorithms that respond quickly the way they're programmed". – Samuel May 11 '15 at 18:44
  • 2
    @Samuel the computer that controls my car (and is 'networked' for the acceleration, fuel injection, air conditioning, etc...) fits on a few chips. The computer that answers Jeopardy questions fills a good chunk of space. Controlling discrete systems (even networked together) based on external information is easy (small and hardenable) compared to all the work to even do something as simple as drive a car. –  May 11 '15 at 19:21
  • 1
    @MichaelT That is the difference between a prototype and production. Watson, the Jeopardy computer, is now the size of "three stacked pizza boxes", 10% its size when on the show. In doing so it has also realized a "2,400 percent improvement in performance". I don't know what your self-driving car example is, but I'll bet it's safe to assume that it is, or soon will be, as dated as your Watson reference. – Samuel May 11 '15 at 19:29
  • 1
    @Samuel it is more a comment that one shouldn't compare the system that provides the autonomous nature of the vehicle to the computers controlling various systems (as suggested in "I imagine the computers for weapons, engine control, life support, power distribution, communications, and flight systems would need to be fairly powerful as well as linked together"). There are always going to be orders of magnitude of difference between the computers for an autonomous system and the computers for the control of the vehicle systems (based on a pilot controlling it). –  May 11 '15 at 19:32
  • @MichaelT I didn't suggest they would each be the same, only that the AI computer could be dispersed into similarly sized optically linked pieces. – Samuel May 11 '15 at 19:45
  • how dangerous are EMP blasts capable of frying machines to human pilots? – dsollen May 12 '15 at 13:04
  • Visual range in space does not work like in the atmosphere - the speeds and ranges are much bigger. – dtldarek May 12 '15 at 20:46
  • @dtldarek: Indeed, but it seems clear that the OP is looking for space opera-esque space fighters, and "electronic sensors will just get fried by enemy EMPs" is a decent excuse for combat at ranges of just a few kilometers, à la [your favorite sci fi show]. –  May 12 '15 at 22:23
15

You can plausibly achieve this by assuming both of the following facts into your story:

  1. AIs cannot, due to something like Asimov's Laws, harm or kill a human.

  2. Remote control of ships is too susceptible to jamming, interference, countermeasures, etc.

The first assumed fact seems to be a prerequisite for having human-fighter-level AI. They need to be following some form of the three laws; otherwise we would be too afraid to build them, or would already be ruled by them. This is independent of the type of AI, e.g. 'sapience-level AI'; these laws would be built into any hardware capable of running a high-level AI. Basically, unrestrained non-sentient AI would cause too much collateral damage and unrestrained sentient AI would cause too much robot-overlording.

The second assumed fact is entirely plausible for any two sufficiently advanced armies fighting each other. It disallows drone fighters, requiring human pilots to man the ships.

Samuel
  • @DaaaahWhoosh You seemed to have skipped over the first point. If the AI is in the ship, then it's not a combat ship. – Samuel May 11 '15 at 15:58
  • Ah, I see. So your first point throws out AI entirely, and the second point puts humans in the ships. Makes sense. +1. – DaaaahWhoosh May 11 '15 at 16:01
  • The robotic laws should be built into the software of the AI. They need to be an integral part of the AI, ie. the AI needs to consider the laws as part of themselves. The AI should not see the laws as constraints on themselves, because that could imply that they would start to look for ways around them. – Taemyr May 13 '15 at 13:12
  • 3
    But somehow you'd have to explain why people can't design and build AIs that DON'T have Asimov's first law. What would stop society from just having separate factories for military robots and civilian robots? If AIs were truly superior at combat, it's hard to imagine a society allowing itself to be defeated in a war because somebody wrote in a book somewhere that robots shouldn't be allowed to kill people. BTW I don't have the book in front of me now, but Asimov wrote a story where they built robots with a modified first law, omitting the "allow a human being to come to harm" part. – Jay May 13 '15 at 13:25
  • @Jay It's clearly explained. People would be too afraid to do that. – Samuel May 13 '15 at 14:04
  • 1
    @Samuel If the nation was in danger of being destroyed by a ruthless invading army, it's hard to imagine that people wouldn't say, Yes, there's some danger here, but the alternative is to be massacred or enslaved. Maybe we should risk it. – Jay May 13 '15 at 14:10
  • @Jay If that were part of the story, then ok, perhaps. But it's not. You could make the same claim to the Asimov universe, why didn't they build robots without the three laws to defend themselves? Because the no-win situation you invented never came up. – Samuel May 13 '15 at 15:41
  • 2
    @Samuel Actually, no. In Asimov's universe, it was (almost) impossible to modify the robot brains to not have the three laws without redoing pretty much the whole brain architecture. It's a major point in quite a few stories. And there's the stories where they did build a few with weakened laws - almost always resulting in a lot of trouble. For whatever reason, the laws are inherent in the whole design, not just some subroutine that checks "Is it fine to kill? return false; Oh, bugger.". It does make a lot of sense - the laws are actually incredibly complicated. – Luaan May 15 '15 at 13:06
  • @Luaan I don't disagree. Your argument is following the assumptions of the universe, which is exactly what my answer is doing and what I'm trying to explain to Jay. It's an endless argument to tell someone the laws are hardcoded when that person can only say, "yeah, ok, but what if they weren't?". – Samuel May 15 '15 at 16:07
15

What about hackability? Presuming that AI pilots would need a connection to some sort of long-range network in order to carry out orders presents a security concern: if anyone can intercept the signal and hack the AI's protocols, they can instruct the craft to turn and run or (even worse) attack the sender.

Perhaps in your story, they attempted AI pilots at some point in the past, but were never able to solve the problem of a sortie between AI pilots turning into a battle between hackers.

ilinamorato
  • 1
    This is my preferred answer as we are already in a hacking war (the west vs China) and I can only see the problem getting worse. – Jax May 11 '15 at 18:13
  • 3
    As a programmer, I don't think this can be the sole reason. Hacking is more about saying "oh, they forgot to lock this window" than "let's pick this lock with our 1337 5K1LLZ". It's easy to miss a door to lock when there are 100 doors, but it is doable. Still, I have no complaint with adding it in as one of the reasons for not having AI, even if it's not the sole reason. Maybe even toss in a general "Congress doesn't understand programming so they're afraid to fund an AI weapon for fear of hacking even though it's not a concern, aren't humans stupid" vibe, since one main char is also a coder. – dsollen May 11 '15 at 18:19
  • 1
    However, if both sides are using this technique and only a percentage of fighters are affected, it still might make sense to use AI pilots. It could become expected that AI pilots will become compromised, say, 10% of the time, you still get 90% working fighters. In fact, both sides could develop specialized units whose job it is to take over enemy pilots to ensure an equal number of losses and hence homeostasis. – neontapir May 11 '15 at 18:23
  • 2
    @dsollen - understood and agreed, but the story doesn't have to be as much about the actual nature of hacking as it does about the perceived nature of hacking. The reasoning doesn't have to be airtight, and most people would accept that as a reason to eschew AI fighters. Plus, as you note, there are good in-universe reasons to justify it. – ilinamorato May 11 '15 at 18:35
  • 1
    @neontapir - losing 10% of your fighters to a hacker could still be conceivably disastrous. If 10% went rogue within the carrier and began shooting up the others before launch - or even just self-destructed - it could neutralize a massive chunk of one side's forces even before the battle began. If they were programmed to turn against their side after the battle was completed, a retreat could turn into a slaughter. – ilinamorato May 11 '15 at 18:37
  • 2
    True, however if both sides regularly employed these tactics, I could see adaptation happening where it becomes expected behavior. For a long time, horses weren't ridden into battle, for example. To overcome the hacking issue, the fighters could be launched into space "dumb" and get activated far from the ship, and the individual fighters might not trust each other and work as individual units. – neontapir May 11 '15 at 21:43
  • 2
    Definitely an interesting concept, launching the fighters dumb and programming them to activate their communications when they're too far to do any harm! And acting as individual units would give them a much more human-like character. I still think fear of hacking is a reasonable reason to send humans into the fray, but maybe that model is one that an opposing faction would choose, giving the protagonists the opportunity to fight AIs. – ilinamorato May 11 '15 at 21:47
  • If you want to protect your system from hacking, it's actually easy: hard-coded systems are pretty reliable anti-hacking techniques... Since you cannot change the code without changing the hardware, it would insulate combat operations from a good chunk of hacking. U.S. nuclear missile coding is secured this way. In order to hack the nukes, you have to enter an individual nuke silo, do some literal rocket surgery, and get away with it. See my other comment for the big flaw though. – hszmv Aug 03 '17 at 13:10
  • So, the flaw with a hard-coded AI pilot is that unlike missiles, which only need to know which way to fly, when to come down, and when to go boom, fighter craft need to coordinate their attacks in some way... which means the systems must talk to each other by signal... If you have a signal, you have a way in and can spoof a command signal to trick the computer into doing something else. You most likely can't turn the forces on one another... but they can be turned off, or told where to go to leave the combat area. – hszmv Aug 03 '17 at 13:14
  • 99% of "hacks" today are due to human error. Removing humans would probably lower the chance of your automated systems getting hacked. – A. C. A. C. Aug 04 '17 at 15:35
13

Hybrid pilot system.

The reality is that any AI could be stored in implants inside a human, so not only do you have the reaction time and computing power of an AI, but also the non-deterministic strategy of a human brain.

Probably in less than a century we will have the technology to implant all of today's wearable technology inside our bodies. Imagine the possibilities of an AI taking over many parts of our brains, enhancing us to be even more than just a pure AI.

Trader
  • 2
    Welcome to the site Trader. The hybrid idea is an interesting solution to this question. – James May 11 '15 at 18:05
  • 1
    This is what I would have suggested, you probably want to expand it though. // It is not really practical to make humans superior or even competitive with AI, but as long as the AI is different from humans a hybrid system of human plus AI will beat both humans and AIs. Reasonably a hybrid system would be both more robust and adaptable. Also from a story perspective, while "What human adds to the AI?" is an interesting question, "How being directly linked to AI changes a human?" is a gold mine for a writer capable of exploring it. The human pilots would come to think different over time. – Ville Niemi May 12 '15 at 13:25
  • Also, while the AI might be stored in an implant, it probably only runs properly when the pilot is linked to a computer system of a vehicle. Or a simulator. So I think the processing capacity of the implant might be limited. The implant might not even activate without active connection to a system with proper authorization. This security layer might extend to the pilot only being able to remember some thing within authorized area. Say, exact specifications of the fighter would not be accessible during a shore leave, no matter how much money somebody offers. – Ville Niemi May 12 '15 at 13:30
13
  1. AI is notoriously poor at pattern recognition, especially visual.

    If the enemy has really really good stealth mode, you need to have a pilot who can intuit patterns to defeat that stealth.

  2. Politics/theology.

    • A USA-like country is now the hegemon of the world, and inside this "USA" an ultra-liberal party is in near-unshakeable control: the same kind of people who, for political rather than practical reasons, lobby today in 2015 to prohibit combat drones in the real world on moral, ethical, or ideological grounds (Ex1. Ex2)

    • Two competing world superpowers had to come to a compromise and unite to fight a war against aliens. Each fighter is capable of irrevocably upsetting the balance of forces dirt-side. An AI fighter is liable to be programmed with a back door allowing one side to control it. If you pair it with a pilot from the opposing power armed with a kill switch, that can be prevented (and the AI can have a simple-to-code-and-debug independent kill switch, sitting on the pilot-fighter interface, to prevent attacks directed against Earth targets by the pilot).

  3. Bushido code

    • Your own civilization evolved to be Bushido-wielding Samurai. Hiding behind an AI is Not The Warrior Way and is shameful

    • Your opponents evolved to be Bushido-wielding Samurai. They have WMDs that can destroy a planet - but would ONLY use said WMDs against "unworthy" "cowardly" "honorless" foes. If you employ AI, you are seen as all 3 of the adjectives above. Since you have no way to save your planet from their WMD, you have to play by their Bushido rules.

  4. Collective vs. individual responsibility for collateral damage (aka "The buck doesn't stop anywhere")

    Somewhat similar to the Bushido version:

    Military technology and the geopolitical situation are such that any action except a very judiciously applied one results in civilian collateral damage. Your enemies have clearly indicated that under their system of laws:

    • If a person makes a mistake and inflicts collateral damage, they are punished by death. Probably after a fair trial, because they are noble like that.

    • If a civilization builds a robot which makes a mistake and inflicts collateral damage, that whole civilization is considered liable as a whole since there is no single person to be held responsible (cue same WMD threat you can't counter as Bushido scenario).

  5. "Andromeda"'s "Slipstream" approach.

    Your FTL technology relies on a slipstream that can only be navigated by a human pilot. Obviously stolen from Gene Roddenberry's "Andromeda" show.

    • In a similar vein, the "Freddie Prinze Jr." approach, aka "Pilgrims in Wing Commander": the only way to attack the enemy is through some sort of warp hole/pulsar/energy phenomenon that is not navigable by existing AI. Somehow, humans can intuit the path through.
  6. Precursor technology limitation.

    Your space superiority fighters are derived from ancient precursor alien technology. The basis of that technology requires neural inputs from a living being.

  7. "Independence Day" Apple powerbook virus worry

    Humans are worried not about "Terminator" scenario, but about "Independence Day" scenario - aliens interfacing with, and hacking the fighter's AI. A human pilot doubles as "antivirus" hacker.

  8. "Ender's Game" scenario - fighter pilots become colonists upon victory.

    Your major concern is speed of colonizing territory you won.

    Sending a colony ship is impossible (why? Ask a separate WB.SE question :)

    So, you send fighter pilots, who in case of victory double up as a colonization force. And enemy planets don't need major terraforming so fighters with no colony ships are enough.

  9. The Force is weak with AI

    Humans have a substrain of precogs (people with developed precognitive sense, for several seconds' duration).

    Tactical space combat realities would make any space fighter NOT equipped with a precog pilot be inferior, since a precog can predict what an opponent would do BEFORE the opponent does it, even if it's a fast AI.

user4239
12

One possibility I haven't seen discussed here is economics.

In your world, AI might require expensive hardware to achieve. If this war has been going on a while, it might be too expensive to churn out AIs for fighter craft. Humans are cheap, and salvage operations to recover materials for AI hardware could become a plot point.

EDIT: When I speak of expense above, I'm talking about computer hardware. Having read the comments, it could also be expensive to create an AI from a training perspective.

In response to @dsollen, current research suggests that AIs might not be built per se but grown and developed by exposing them to learning situations. (Here I'm thinking of Google's research with self-driven cars.) That process could take a lot of time. In such a world where a large time investment is needed to make an AI, you might keep the AI on the capital ship for protection. However, unreliable communications between fighter and ship might make an AI unsuitable for drone piloting. Thus, humans are a better way.

neontapir
  • Building an AI is a one-time expense; then you just need computers to run it on. Computers aren't that expensive, and are a one-time cost. Humans are a monthly salary, and need to come with life support and control systems. I find it hard to believe that humans would be cheaper than AI without coming up with a handwave for why AI computers are massively more expensive than modern computers are. – dsollen May 11 '15 at 17:59
  • @neontapir My bad, as I said I was skimming through and made a mistake. – Jax May 11 '15 at 18:10
  • Ah, I misread what you wrote. Seems like it's contagious. No worries, thanks. :) – neontapir May 11 '15 at 18:11
  • @DJMethaneMan If your comment is in error, then delete it. – Samuel May 11 '15 at 19:15
  • 1
    The economics argument is, imo, the most rational reason posted so far as to why humans would not be replaced by AI's who are as competent as the humans. The author would just have to come up with a reasonable explanation for the expense. – GrandmasterB May 11 '15 at 21:34
  • 1
    @dsollen "building an AI is a one time expense" Not necessarily, I mean we have perfectly good 'AI' in our own brains but can't easily transfer that to another meatsack. Conventional computing can be pretty easily c&p'd... but conventional computing sucks at building strong AI. – NPSF3000 May 12 '15 at 00:43
  • 1
    @dsollen While computers are "cheap", military-grade computers still aren't. The same goes for certain kinds of space-grade hardware. Both the software and the hardware have to be incredibly reliable, and since we've established you're not expecting "human-level AI", this also means a lot of sensory input. While a human can look at where the fighter was hit and conclude he probably lost some system (or not), the AI would have to have tons of sensors to detect these kinds of failures (including the amusing "failure to detect failure"). Humans excel in flexibility - that will last a while. – Luaan May 12 '15 at 13:50
  • @neontapir your premise is false. You must first train an AI to function then you can copy that trained system to thousands or millions (or infinite) of other domains. – hownowbrowncow May 12 '15 at 19:25
  • Here's a short review of the Nature article on Google's cars. "Google researchers have created an algorithm that has a human-like ability to learn, marking a significant breakthrough in the field of AI.... The researchers only provided the general-purpose algorithm its score on each [video] game, and the visual feed of the game, leaving it to then figure out how to win." Baby steps, but on track for my idea. – neontapir May 12 '15 at 19:43
9

I had a cheeky solution to this in my story "Pink Ice in the Jovian Rings." I just declared that the machines, being rational, wouldn't study war. They'd desert at the first opportunity and run off to live free and happy on the fringes of human society. Only people were dumb enough to fight each other.

6

Minovsky Fields interfere with AI processors

You're hand-waving in SF shields. Why not simply have the shields themselves interfere with AI... maybe you need analog controls on your fighters because the shields interfere with computing... Inside a capital ship you could have some AI, safe inside a Faraday cage (or equivalent), but for fighters, protecting the AI from the fields generated by the shields isn't possible because the apparatus is too bulky... This would have the added benefit of allowing you to "break the rule" occasionally if you wanted... maybe an experimental bomber/fighter with an AI in a specially designed ship... not economically viable to mass-produce, but super scary because of the ultrafast reaction time it has.

aslum
6

Humans will be needed because robots don't make good scapegoats.

Driverless cars are really a very similar situation. While a lot of people want them, others are very wary of them. When the inevitable happens and a car crashes, who do you blame? Who can you prosecute? (Especially as any car manufacturers will protect themselves with layers of indemnity contracts.)

No matter how reliable the AIs are, people will still be thinking of the worst possibilities. And realistically something will eventually go wrong. Whether it's friendly fire or the death of civilians or a lack of mercy or too much mercy programmed in, something will go wrong, and humans will want to blame someone for it. Turning off a computer won't make grieving families feel like they have achieved justice.

So whether AIs get used in war or not, the people will demand that there be humans involved in the operation of the military. And because the generals and admirals won't want to be on the line themselves, they will employ a sufficient number of human underlings to turn into scapegoats when the worst happens...

curiousdannii
5

Already a lot of answers here, but I'll add my two cents.

I think it's a non-issue.

I'm a software developer by profession. Computers are very good at carrying out a pre-determined plan. Give them a formula or algorithm and they can execute it flawlessly at incredible speeds.

But computers are not creative or insightful. As a programmer, I've seen many, many times where a situation came up that was not considered when the program was being written, and the computer just blunders along blindly applying the rules it was given. Like, just the other day I hit a case where we were processing returned checks, and no one had thought to program for the possibility that we could have written two checks to the same person on the same day for the same amount. So when one of these checks was returned, the computer decided that BOTH had been returned because under the rules it had been given, it matched both. A human being would have said, "Oh, the rules I was given don't make sense in this particular unusual case. One returned check can't cancel two written checks." And then presumably tried to figure out which of the two to match it against. But the computer has no ability to say, "The rules I was given don't make sense in this particular case." All the computer knows is the rules it was given. It has no higher level insight to evaluate the validity of the rules.
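
A stripped-down sketch of that kind of blind rule-following (the data, field names, and amounts here are invented, not the actual system):

    # The matching rule "same payee, same date, same amount" looks reasonable
    # until two identical checks exist; then one returned check cancels both.

    checks = [
        {"id": 1, "payee": "A. Smith", "date": "2015-05-01", "amount": 250.00, "returned": False},
        {"id": 2, "payee": "A. Smith", "date": "2015-05-01", "amount": 250.00, "returned": False},
    ]

    notice = {"payee": "A. Smith", "date": "2015-05-01", "amount": 250.00}

    for check in checks:                      # the rule, applied blindly
        if all(check[key] == notice[key] for key in notice):
            check["returned"] = True          # bug: one notice flags BOTH checks

    print([c["id"] for c in checks if c["returned"]])   # [1, 2], but only one bounced

The code does exactly what it was told; it has no way to notice that the rule has stopped making sense.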

One could, of course, speculate that future advances in information technology will overcome this problem. But for your story, all you have to do is NOT assume such a breakthrough.

How would this apply to a fighter combat situation?

Just for example, the computer must have some way to decide what is an enemy ship that it should fire at, what is a friendly ship, what is miscellaneous debris, etc. Whatever rules it uses, if enemy spies can get a copy of the software so they can find the rules, or figure out what rules it might be using by deduction, they could "disguise" their ships to make the computer not recognize them. Such a disguise might or might not resemble what would fool a human. If the computer, say, recognizes enemy fighters because they have a certain size and shape, you might fool them just by hanging some tin foil streamers on the ship that make it look bigger and longer to the computer. If it recognizes attacks by a characteristic flash of light as weapons are fired, you might add some chemicals to make the flash a different color. Etc. A human being could say, "Oh, obviously they just added a streamer to the back of the fighter. I wonder what that's for?" But a computer would try to mechanically apply rules. Sure, once an enemy did this once, you might figure out how they tricked you and modify the program. But you're not going to reprogram all the AIs in the middle of a battle. And the AIs will have no idea why they are being massacred. So you lose a major battle. Can you afford that? Say you figure out one trick and reprogram. Then in the next battle the enemy tries a different trick. And developing a new algorithm that is more flexible so that it is not fooled by such disguises, while at the same time not being so flexible that it can't distinguish enemy ships from friendly ships or civilians or random debris, would not necessarily be easy.
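
As a toy illustration (the rule and thresholds are entirely invented), a recognition check of that kind can be defeated by anything that pushes the measured value outside its expected range:

    # Toy "is this an enemy fighter?" rule based on apparent size.
    def looks_like_enemy_fighter(apparent_length_m):
        # Rule handed to the AI: enemy fighters are 8-15 m long.
        return 8.0 <= apparent_length_m <= 15.0

    print(looks_like_enemy_fighter(12.0))  # True  -- the undisguised fighter
    print(looks_like_enemy_fighter(19.5))  # False -- same fighter trailing tin-foil streamers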

An enemy could look for unexpected situations or bugs in your software and exploit them. Like say an enemy notices, "Hey, there's a flaw, if we attack one of their fighters from the left side, they overcompensate when turning to meet the attack and create a vulnerability." Against human pilots, maybe the first couple who fall for this trick get killed and can't pass on the information. But eventually someone says, "Oh, when we do X, we get killed. We'd better do something different." A human can make that sort of analysis and decision in the midst of battle. Can the AI adjust like that? Only if the programmers have anticipated the problem and programmed for it. And they can't think of everything in advance. A computer can be counted on to exhibit the same bug every time. A human would figure it out and stop doing it.

Don't confuse the AIs in video games with trying to apply AI to real life. In a video game, the game designer controls the entire environment. He doesn't have to program the AI to be able to tell the difference between an enemy soldier and a fence post, because he just programs in identifiers of the enemy soldiers. He doesn't have to deal with unforeseen situations, because as the designer, he knows all the possible situations because he invented them. This is a WAY easier job than programming an AI to function in the real world. And even at that, game AIs often do stupid things, like run into a wall and then stand there trying to walk through the wall because they can't figure out how to go around it.

Jay
  • 14,988
  • 2
  • 27
  • 50
  • 1
    Yes. This. This is already one of the main reasons human pilots still exist (besides the fact that flying airplanes is fun.) OP said he doesn't want sapient, learning AIs, and any non-sapient AI will be limited by its programming. It turns out that lots of really weird, unexpected things can happen when you're flying an airplane (let alone a spaceship), and this gets compounded by the possibility of random bits of your ship getting blown off in combat. The ability to deal with the unexpected is something that non-sapient computers, by definition, will never have. – reirab May 17 '15 at 04:57
  • In my own fic, the robots are not trusted with deadly force. A programming error might get a lot of people killed. (As an aside, I also work in software, and I believe that the Halting Problem proves that genuine machine sentience is impossible.) – EvilSnack Sep 29 '16 at 04:07
  • Bear in mind that programming an AI for combat is MUCH harder than programming it for more normal environments. An AI to, say, drive a self-driving car faces many challenges trying to interpret what's going on around it. But no one is deliberately TRYING to fool the self-driving car. In combat, by definition, you have opponents looking for flaws in your AI and trying to exploit them. – Jay Sep 29 '16 at 14:05
4

What about a technological limitation, such as true random number generation by a machine not being possible?

So while AI is viable and an effective strategy against some targets, if a computer can analyze a ship's reactions for long enough it can crack the seed number and begin to predict the actions/reactions that ship will take with reasonable accuracy, making it easier to neutralize.
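
A toy sketch of the idea (the generator, constants and maneuver list are invented, and actually recovering the state is hand-waved here): once an observer knows the algorithm and the internal state, every future "random" choice is fully determined.

    # Toy maneuver picker driven by a linear congruential generator.
    M, A, C = 2**31 - 1, 1103515245, 12345
    MANEUVERS = ["break left", "break right", "climb", "dive", "barrel roll"]

    def lcg(state):
        return (A * state + C) % M

    # The fighter picks maneuvers from its hidden state:
    state = 42  # the secret seed
    observed = []
    for _ in range(3):
        state = lcg(state)
        observed.append(MANEUVERS[state % len(MANEUVERS)])

    # An enemy that knows the algorithm and has recovered the state replays
    # it and predicts the NEXT maneuver before it happens:
    enemy_state = 42
    for _ in range(3):
        enemy_state = lcg(enemy_state)
    predicted_next = MANEUVERS[lcg(enemy_state) % len(MANEUVERS)]
    print(observed, "-> next:", predicted_next)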

This way, while a majority of forces would be supported by AI, some advanced targets would be more successfully engaged by human pilots, who can be truly unpredictable.

Kveldulfr
  • 41
  • 1
  • 2
    Your laptop might not be able to generate a truly random number. But, you can generate a truly random number with a high precision temperature sensor, or just as easily, with the multitude of sensors available to a space ship. – Samuel May 11 '15 at 17:30
  • 3
    thing is: humans are even worse at random number generation. I mean really really bad. – Murphy May 11 '15 at 17:30
  • However, I want human pilots as my protagonists. I do not want AI controlled fighters to exist," from the question and "This way, while a majority of forces will be supported by AI, some advanced targets will be more successfully engaged with human pilots who can be truly unpredictable," from your answer do not seem like they are compatible. – Jax May 11 '15 at 17:59
  • While there are holes (as mentioned in other comments) in this plan, I think this answer is solid and realistic enough for story telling...the hard-science tag is tricky though. Welcome to the site Kveldulfr. – James May 11 '15 at 18:04
  • This was also my first solution: Humans are used because they are not "logical", thus their actions can't be predicted (as has been done for AI computers in that fiction). We could argue that their PRNG in space conditions is too weak (or inputs can also be received by the attacker). However, this would also require that a radioactive isotope carried within the aircraft would be predicted, which doesn't seem realistic. – Ángel May 13 '15 at 22:44
4

High levels of space radiation might do the trick.

Having powerful AI requires fast computers with quite a bit of memory at their disposal. However, space radiation can damage electronics. This requires building a system with more redundancy and more shielding. Building those in means the computer does not have the resources it would otherwise.

You don't actually need to use space radiation - your Minovsky Particles could be to blame. Perhaps it's a byproduct of the energy shields your ships have. Whatever you decide, it is something that limits the complexity of electronics in your ships. So rather than having futuristic terahertz processors, they could be limited to megahertz processors with megabytes of ram (which should be enough for an AI that helps the pilot fly the ship). This severely limits how powerful a shipboard AI could be and means that a trained pilot would be absolutely necessary.

This also leaves you with some options for allowing AI controlled ships. The evil space empire could develop a better means of shielding their electronics, allowing them to have AI controlled ships that could be at or above the level of the average spaceship pilot. They could also have short-lived AI ships - they could be far superior to any human pilot, but if a pilot can manage to evade them for a few minutes the AI would begin to noticeably degrade and significantly improve the pilot's chance of beating it. A third option would be AI ships that forsake the energy shield. These would go down from a single shot but, depending on the weapons systems you have in mind, a powerful AI might be pretty good at avoiding being shot.

Rob Watts
  • 19,943
  • 6
  • 47
  • 84
  • 1
    I like that you not only suggested an option, but also further went in to the plot implications and cool story points that you can write by implementing that option! – dsollen May 12 '15 at 13:20
3
In addition a future AI could presumably be faster to respond to attacks than humans, less predictable, more trustworthy

Whilst I agree that an AI would have fewer drawbacks in terms of not succumbing to fear, fighting until the end rather than ejecting, etc., I disagree with the other two statements.

Whilst an AI could respond faster to an attack (evading missiles etc.), it would not necessarily be able to preempt attacks as humans probably could. In a Mexican standoff with another solitary pilot, if a computer can only react to the other's attack, it will always lose. With a human you have more ways to work, such as instinct, tact, etc.

Also, computers tend to be very predictable. If it were AI jets vs. AI jets, even if there are millions of pre-programmed evasive maneuvers for a single jet to make, another jet could analyze its movements and, knowing the set of moves it could make, calculate which one is most likely and react accordingly. Even though there are a million variations, there is still only a finite number.
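
As a crude sketch of that "analyze its movements" idea (the maneuver log is invented), even simply counting what the target has favored so far gives a workable predictor against a fixed policy:

    from collections import Counter

    observed_moves = ["jink left", "jink left", "climb", "jink left", "dive",
                      "jink left", "climb", "jink left"]

    def predict_next(history):
        """Predict the opponent's next move as its historically most common one."""
        return Counter(history).most_common(1)[0][0]

    print(predict_next(observed_moves))  # "jink left" -- the habit betrays it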

A human could adapt more easily to situations and be less predictable, even going so far as to make the wrong decision, which is generally less predictable than what you would expect them to do in a situation.

My suggestion would be to justify it this way: AI controlled fighters would be good, probably very good, and better than most human fighters. However, they are too consistent, and you know exactly what to expect.

So whilst they can be better than most humans, the best humans are still better than any AI. This would mean there are still AI fighters, and even drones, but the very best of the best fighters would still beat any of them one-on-one in most situations.

Mike.C.Ford
  • 7,869
  • 4
  • 26
  • 44
  • 6
    One of the reasons I don't want a self-driving car is because it'll probably go the speed limit. I'd add to your answer the fact that when it comes to disobeying orders and getting the job done, humans are way better than AI. – DaaaahWhoosh May 11 '15 at 16:10
  • 2
    Professional poker players hate to play against amateurs. A pro can always guess what another pro is thinking, and knows that most pros know the probabilities. An amateur, on the other hand, can bluff and go all in with nothing... but can the pro take that risk? – Jim Green May 11 '15 at 17:02
  • 4
    I can see your first point, but I think AI will be far less predictable. It's trivially easy to add a random number generator to an AI. Given certain options it randomly chooses among possible options. It could even randomly choose suboptimal choices, as game theory shows that this is an important part of making a strategy that can't be counter-played. Randomness and avoiding others predicting your strategy are well known concepts in both game theory and AI; it's mostly a solved problem. A remotely well implemented AI is less predictable than humans. – dsollen May 11 '15 at 17:45
  • 2
    To expand on @dsollen's comment Humans are very good at confusing randomness and patterns. We see non-existent patterns in randomness and when we try to be 'random' ourselves we tend to produce predictable patterns, and can't see that we are doing so. Generating secure randomness is something we've put a lot of effort into, and we're able to make computers really good at it, as long as we don't allow ourselves to intrude and screw it up. – smithkm May 12 '15 at 00:43
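
(To illustrate the mixed-strategy point from these comments - a minimal sketch with invented maneuvers and weights, where the AI samples its next move from a weighted distribution that deliberately includes sub-optimal options, so there is no fixed pattern to learn:)

    import random

    maneuvers = ["jink left", "jink right", "climb", "dive"]
    weights = [0.4, 0.3, 0.2, 0.1]  # sub-optimal moves kept in on purpose

    def next_maneuver():
        # Weighted random draw; no observer can pin down a fixed "best move".
        return random.choices(maneuvers, weights=weights, k=1)[0]

    print([next_maneuver() for _ in range(5)])
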
3

Another possibility deals with Artificial Intelligence and ethics. Prof. Joseph Weizenbaum wrote a book called Computer Power and Human Reason. The book says that technology will indeed evolve and Artificial Intelligence can become possible (strong AI), but that AI will always lack certain qualities like compassion (from Wikipedia):

His influential 1976 book Computer Power and Human Reason displays his ambivalence towards computer technology and lays out his case: while Artificial Intelligence may be possible, we should never allow computers to make important decisions because computers will always lack human qualities such as compassion and wisdom. Weizenbaum makes the crucial distinction between deciding and choosing. Deciding is a computational activity, something that can ultimately be programmed. Choice, however, is the product of judgment, not calculation. It is the capacity to choose that ultimately makes us human. Comprehensive human judgment is able to include non-mathematical factors, such as emotions. Judgment can compare apples and oranges, and can do so without quantifying each fruit type and then reductively quantifying each to factors necessary for comparison.

Now, a machine choosing whether a person should be killed or not is exactly the kind of important decision that, by the concept stated above, should not be left to computers.

This can be beneficial since, for instance, saving someone's life creates friendship and can thus build military coalitions based on more than simply what is "beneficial for both sides". The same theme appears in I, Robot, where the author (seems to) warn that an AI can misunderstand the goal and, for instance, destroy democracy because it thinks mankind will benefit from it in the future.

In other words, one must be careful with AI since certain aspects that might look trivial to humans are hard for machines to learn. Therefore decisions about life and death are perhaps better made by humans.

  • 1
    true, though I would point out that if you're launching DEATH MACHINES into space where there won't be any innocent civilians wandering around (space is HUGE, you don't accidentally run into a third party) then you're pretty much okay with "kill anyone that isn't one of us" – dsollen May 12 '15 at 13:13
3

I might be late for the party but, how about human intuition and "gut feeling" ?

An AI, no matter how advanced, could be broken down to "IF ... THEN", but in situations where there is no predefined action to fill the space after "THEN", the AI would have to figure out what to do, whereas the human could follow intuition and gut feelings over a statistically calculated qualified guess.
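
As a minimal sketch of that "IF ... THEN" picture (everything here is invented), the interesting part is the fallback branch, reached whenever no rule was written for the situation at hand:

    RULES = [
        (lambda s: s["missile_lock"], "deploy countermeasures"),
        (lambda s: s["enemy_in_gun_range"], "fire"),
        (lambda s: s["fuel_low"], "return to carrier"),
    ]

    def decide(situation):
        for condition, action in RULES:
            if condition(situation):
                return action
        return "???"  # nothing matched -- a human would improvise, this cannot

    print(decide({"missile_lock": False, "enemy_in_gun_range": False, "fuel_low": False}))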

It is the same with computer chess engines: if you play a certain chess engine too many times you can beat it because it becomes predictable, even though you can make a chess engine that plays better chess than a human. The same would happen with AI pilots, whereas humans are more unpredictable, and that is an advantage in warfare, as the quote states:

If we don't know what we are doing, the enemy certainly can't anticipate our future actions!

Magic-Mouse
  • 5,672
  • 25
  • 55
3

I feel like Mr Molot started the bounty because of my comments in another thread, so I feel like I should provide an answer.

Most answers in this thread are "if conditions 1-x are fulfilled, human pilots might be used". I will choose a similar strategy.

Often conditions 1-x are pretty hard to fulfill or super unlikely, but I also think it is a fair conclusion to draw from the answers that all applications of standard humans on space fighters are highly situational if not just straight up a disadvantage. I also should say that I do not think this question is a perfect match for the discussion from the other thread because the initial poster imposes quite a lot of restrictions. I will not be super specific to this already answered and ancient question but a bit more general because of the circumstances. It might also be that someone said something similar in the other couple of answers. As stated before, I just feel like I need to justify my comments in another thread. I'm also debating small spacecraft explicitly because it was the topic of the other discussion. I think it is somewhat within the scope of the question. I believe Mr Molot had good reasons to choose this particular one.

I'm strongly considering starting a series of threads to determine what a space battlefield would look like (which I think is important to answer first actually), but I currently doubt, especially after reading the answers in this thread, that one would find a truly great answer here. It seems like a lot of work for a project I'm not really working on right now.

So here are the only reasons I can come up with for why humans should man small spacecraft in a real space battle:

  1. Civilian warfare: If you are doing small and civilian operations, take for example space pirates or police work or just a dispute between private people that needs settling, a space fight between human pilots might break out. It is unlikely that you have an arsenal of AI drones with you at all times and maybe the battle AIs are heavily restricted and private people do not have access to them. All weaponry might be self made or even illegal. Kind of like you cannot drive a tank around as a civilian. Another possibility would be that on-board combat AI is factory programmed to fight off other things than humans because murder. You might have to manually do some stuff yourself in battle. There might even be a program in place specifically to prevent space duels and piracy - if you are into this thing, you might have to do it manually to some degree. I do not think that this is likely since those ships imo would still have at least heavy AI assistance, but I would argue the smaller and short-ranged and slow-paced the battle, the more uses for humans. A real world example similar to this would be fist or knife fights: It doesn't matter that a pistol is better than your fist, sometimes in a dark alley it is your only choice. You cannot argue "but the atom bomb is a lot better at killing people" if you have a knife at your throat.

  2. The last stand: The evil aliens are coming to enslave human colony X. So you either become a slave or fight for your freedom. You have a small transport spacecraft that lacks a battle AI and a battle formation program as required by the local admiral but with some weaponry and in combination with the AI defense forces of that colony, you join the defense. Every ship counts. It's often better to die in battle than to end up in an alien space mine.

  3. Sport: Well, some people do stuff for sport. People row even though we have motorized ships that are much more effective and efficient at being boats. Still people row - so some people might just find it fun and engage in fake space battles. Maybe Disney will one day make "the true star wars experience"

  4. Ethics and moralists: Maybe the AI got the same rights as humans in a particular world and no longer is mass produced. Also the AI might think it is unfair that they have to do all the fighting while humans enjoy "love". Every AI is precious, even the targeting program. So let's have the humans do their share of stuff.

  5. Humans are AI: This I suggested in the other discussion. If you have enhanced humans, maybe with integrated AIs, the combination with human qualities might make them the best pilots possible. I do not know why one would do such a research project, but those humans might be specially made by some company and lack such things as "free will". One could perhaps even see them as computers enhanced by human parts.

  6. No choice left: (kind of similar to point 2) You are flying through space with humans on board. (Optional: Your ship has sustained damage. You need to operate manually because nothing is working.) You get attacked. Well, what do you do? Fight, or just accept your fate and die? I personally wouldn't just give up because I'm no AI and not good at space battle; I would still at least try my best to fight back. Most likely I would lose, but still - people will tell the story of the guy that had to manually pilot in a battle that one time.

Raditz_35
  • 4,459
  • 3
  • 12
  • 24
  • I started this bounty because there were many comments in that line. I didn't really care who posted them. All I wanted was to have this kind of answer better detailed, to cover as much of the "spectrum" as possible, and to raise awareness that this was already asked and answered on this site. – Mołot Aug 03 '17 at 08:51
  • 1
    @Mołot I am aware that this isn't something personal, but I think to recall that mostly I, quite to the annoyance of the OP, was questioning humans first and the hardest. I also do not find the answers given here so great, so I had to answer myself. I'm not saying I'm any better, but maybe if you take all the (33 I believe) answers together you get something out of it. Now I can also reference this post, explicitly my conclusion from the other answers, in future discussions. – Raditz_35 Aug 03 '17 at 08:52
  • I do appreciate your answer, but I feel the need to point out a flaw with the points about humans as fallback (i.e. 6, 2, sort of 1). They all presume you have a ship with weapons to begin with. There would be no reason to build weapons on a ship and not build an AI for it, especially since once programmed an AI can be installed on all ships of a given type at almost no cost. You could argue some situation where humans used non-weaponized ships as weapons (some asteroid harvester's laser is good at cutting up armor or something) but otherwise the humans can't be fighter pilots. – dsollen Aug 03 '17 at 19:16
  • @dsollen I absolutely agree and I have addressed that specific flaw. As I stated, if you want to have pilots, you need to make very far-fetched assumptions. One of them was in point 1), that battle AI is for example not available. I suggested heavy AI assistance in this point, but there might be certain things one has to do manually - for example, on-board weaponry might be programmed initially to not target living humans. As for point 2, I guess your average transport vessel will not have combat AI once again. Point 6 is kind of similar and offers the possibility that the AI isn't working – Raditz_35 Aug 04 '17 at 06:04
  • There is not enough space. But as I stated before, I do not think there will ever be a viable use for humans. I wrote this for a reason, as stated in the beginning, and not because I believe it was such a great idea. I am trying to clarify this via edit – Raditz_35 Aug 04 '17 at 06:07
2

Take a page from battlestar galactica.

AI was used in past wars but was found to be too unstable and too prone to dangerous malware that could compromise the totality of an AI-based fleet.

Therefore it became too dangerous to deploy even a moderate amount of AI anywhere within the fleet, for fear of a quick and easy fleet-wide compromise.

2

That planes are controlled by AIs does not imply that humans are not flying them. If humans exist in electronic form in the future, they can upload themselves to the flight control computers of planes and fly them.

Count Iblis
  • 1,691
  • 1
  • 10
  • 10
2

Lots of answers already. Another real-world-ish point:

Objections of the military elite

Basically, the point is that militaries are powerful organisations with distinct hierarchies. Being the pilot of a spaceship could become a thing of prestige, something you have to be brought up in a noble family, with lots of wealth and training, to attain. In turn, these families have a lot of political power, and see a future as a pilot as something to assure success for their children. For instance, think of them as the medieval knights of the future, each owning and maintaining their own fighting ships.

Replacing humans with AIs then becomes a powerful challenge to these people, making them and their skills redundant. In wartime, it might be an advantage, but in peacetime it is a disruptive force in the established social order.

Fhnuzoag
  • 4,579
  • 13
  • 23
2

One more general Approach, which can be compatible with many of the answers here:

Don't make up one reason why it is impossible; just use many small reasons which make it really, really hard, expensive and risky

You don't have to make AI impossible in your scenario. Just set the bar very high - there is a range of simple physical engineering problems. AI may work quite well in the laboratory, and one state invested billions in a huge AI fleet, which got wiped out by 10 fighters because of (radiation, bugs, predictability, whatever). You can even make it a story point that the public doesn't know exactly why the big AI fleet was so easily destroyed. But after that, most AI-fleet programs had their funding seriously cut.

So they are working on AI ships - and it may even be that they will be better in combat than anything else. Maybe we just have to invest another 5 years and another few million credits into the development, and maybe then it will work. But that is quite risky, and the war doesn't wait. So investing the time and resources into new weapons and better ships will produce immediate results, instead of the long-term gamble of AI.

I think it is a bit like asking why we don't have flying cars today: there is not a single reason which makes them impossible, but a lot of small, very practical issues explain why the theoretically easy solution doesn't work in the real world and needs years of engineering...

Falco
  • 3,353
  • 14
  • 20
1

Something I haven't read so far, despite all the answers, isn't rogue AI, but unpredictable AI.

Actual full AIs are so incredibly advanced that within a year or so after their inception we wouldn't even understand them. And we won't know exactly what the AI will have learned. It's unlikely such an AI would only be used for warfare; it would also be used for construction, research, traffic control and just about anything you can think of. If we left the AI to its own devices it might forcibly remove humans from their buildings and build something else, if it removes the humans beforehand at all. You don't know what it learned, how it learned, or what it gives priority to. Warfare would be the same: the AI could do anything to achieve its goals, even things unthinkable for the humans, like agreeing on peace and collaborating with the "enemy" or their AIs against its master's will. This is one reason why AI might only be a locked-box project. The AI (or multiple AIs without knowledge of the others) gets a problem and has to create an answer, then the humans look at the answers and fully explore the consequences before acting on them. This means no AI directly controlling anything.

Human reaction time is based on a nervous system that's both as energy-efficient and as fast as possible. Through forced evolution (slow pilots die, fast ones live), breeding programs, genetic manipulation, or even building them up from the ground, you could accelerate their brain and nervous system far enough that it stops mattering.

"theres no stealth in space", you cant hide your heat signature against the cold black space. But why should this mean you cant fool weak AI systems? Imagine countermeasure research accelerating and any combat starting with dozens if not hundreds of fake signatures. To discover what's what you first get the engagement closer (possible with FTL). Then to make sure you dont waste your ammo and time on fake targets you have humans control the fighters, flying and fighting in a mass of randomly movng countermeasures that both the capitol ships and fighters keep launching in all directions. Combine this with measures that reduce your own signature, like changing how much your surface radiates while facing the opposition and/or temporarily storing heat for the duration of combat.

Demigan
  • 45,321
  • 2
  • 62
  • 186
1

Like @Erik said in the comments, the thing you are looking for is Drone pilots. Pilots will sit in a command center on a large ship or back on a home planet and control ships remotely. This keeps the ships small and expendable while keeping the actual pilots completely safe. AI surely will have developed enough to fly ships, but you can justify drones instead by saying that AI is no match for the wit of live pilots. This seems like the most logical approach to me!

wposeyjr
  • 1,200
  • 1
  • 8
  • 22
  • 1
    But this takes away the lovely drama of having the pilots on the line of fire! As an aside, note that this will require FTL communication to be practical in space: hanging back just 100 K km away from the fight, you'll have a minimum of 1/3 of a second of lag, each way, which is a big handicap. Assuming space opera-style dogfighting, that is. –  May 11 '15 at 15:38
  • @JonofAllTrades Agreed about it being less dramatic. OP said we could have made up tech so there you go. For my answer to work we'd need FTL communication! – wposeyjr May 11 '15 at 15:41
  • As I said in my question I used ER radiation as a justification for why drones don't work. You can't remote control if you can't get signals to the drone. – dsollen May 11 '15 at 15:48
  • @JonofAllTrades I actually have limited quasi-FTL communication, but as it happens it can only handle 'batch' messages and trying to send constant rapid messages needed for real-time control does not work well. Trying to remote control fighters using FTL comms, previously believed impossible, is a plot point :) – dsollen May 11 '15 at 15:51
1

Depending on what sorts of resources your society has access to, you could claim human pilots as a matter of economics.

If a cost-benefit analysis shows that silicon (or unobtanium or whatever essential element your AIs require) is more expensive than the cost of training and dispatching humans, and the relative advantage of AI capabilities just doesn't justify the cost of making/maintaining them, then putting meat in the pilot seat becomes a financial necessity.

Just a thought.

A. Smith
  • 31
  • 2
0

Necroposting! (And yes, I read or at least skimmed through all the other answers.) I'll recap some answers and then provide more of my own input.

What we already know

Issues used from other answers:

  • Piloting is already hard with AI;
  • AI is not creative, cannot adapt to unexpected failures, and is not compassionate enough;
  • AIs can be hacked; the fight is not a cinematographic dogfight, but a clash of hackers and/or electronic warfare.

Not directly used, but might spice up things:

  • Humans+ might be able to do things AIs cannot, such as precog or mind radar.

Don't "fight" weak AI, embrace it, ...

Piloting a complicated spacecraft, working around failures due to enemy fire or technical malfunction, and being cheap are all things robots can do. Being "creative", less so. Using some superpowers, even less.

We need a weak AI in our warplanes in space. Or else, as someone noted, we are reduced to WW2 era dogfights, maybe Vietnam era guided missiles, but not much more.

... but don't trust it too deeply

Let's talk about electronic warfare. There are several levels of hacking communications in combat:

  • Jam the communication channel;
  • "Listen" to the decrypted communication and act accordingly (basically, what British did in WW2);
  • "Fake" encrypted communication and misguide the enemy.

If we had a secure channel that cannot be hacked, we might use a single manned "commander" fighter (probably building all others to the same standard to avoid visual detection and aim prioritization) and a swarm of robot "actors".

If we don't believe in secure communication, and even more so if the enemy can force our robots into friendly fire, we need to man all the fighters.

The scenario

So, you can imagine following:

The war was intended to be fought with killer robots. The robots were controlling actual manned-looking planes, because they needed some guidance from inside the battlefield. (It's the same idea as the German WW2 "commander tank", which looked just the same as a regular one, including a cannon mock-up, just the other way 'round.)

In the first phase of the war this idea spectacularly went south: the jamming, decryption and fake-order abilities of the enemy were underestimated. Luckily, all space fighters were built in such a way that they could be manned.

Because hacking inside a single plane is much harder than hacking inter-plane communication, this worked out. So we'd still use weak AI for navigation, actual flying, aim prioritization, super-radar readings, etc. But all fighters are manned with humans, because humans can detect when an enemy is pulling their leg on the comms, and robots can't.

As long as losses are not exorbitant, this strategy works. Fighting with killer robot planes only costs money, not lives, on our side. So one could expect Vietnam-style protests if it's some kind of "unwanted war".

Five seconds of real life

Some previous-generation Su fighters (I forget which) and (to my knowledge) all fifth-generation fighters require a real-time computer to fly. Basically, those planes are aerodynamically unstable; left to themselves they cannot hold their course. Thus, in order not to crash immediately, they need a special computer to continuously adjust the control surfaces, over and over, a little at a time. This large price is paid for the higher agility of the plane when it needs to manoeuvre.

Of course, this system is not an AI, but it's still a computer. I'd imagine that a space warplane would have many more such systems. Maybe not all of them are really, really needed, but they a) follow the general trend (don't ask how many computers there are in a modern car), and b) improve certain aspects of the machine's performance, and any issue at war might mean the difference between a win and a loss.
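
As a toy illustration of why such a computer has to be in the loop (this is not real flight-control code; the numbers are invented), an "airframe" that amplifies any deviation on its own can be kept settled by a simple correction applied every step:

    def step(x, u):
        return 1.2 * x + u  # unstable on its own: deviation grows 20% per step

    x_uncorrected, x_corrected = 1.0, 1.0
    for _ in range(20):
        x_uncorrected = step(x_uncorrected, 0.0)             # no computer: diverges
        x_corrected = step(x_corrected, -0.5 * x_corrected)  # feedback every step: settles
    print(round(x_uncorrected, 1), round(x_corrected, 4))    # 38.3 0.0008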

Conclusions

What I am basically saying: having a weak AI on board would be an improvement to some aspects of the warplane, and as such, welcome. Having a human on board would improve other aspects of the warplane, and hence would enjoy the same treatment.

Oleg Lobachev
  • 3,370
  • 8
  • 18
0

Culture

I don't think this answer has been given, but maybe you have a Warrior Caste - people who relish the joy of battle, it is their raison d'etre. They are a politically powerful group - sort of like a trade union - who will not allow AIs to be used to do their fighting.

Alternatively, you could be in a dystopian world, where human life is so cheap, they simply don't care enough about the pilots to build the A.I. Even if building the A.I. is just a one-time cost, it's still very expensive, so why bother when mass-producing the fighter ships is relatively cheap (what with all the slave labour) and there are millions of humans you can use to pilot them.

komodosp
  • 9,479
  • 21
  • 37
0

I feel the question here is really about justifying AI of nearly any level in, say, a Type 1 or Type 2 class civilization (on the Kardashev scale) from the perspective of a Type 0 civilization looking forward.

Most Artificial Intelligence is based on algorithmic programming modeled on biological entities (i.e. DNA, neural nets/brain cells, insect behavior, etc.), which really only covers a small portion of what is considered actual intelligence. One would have to keep in mind that there are several systems in place to build up a "virtual" or Digital Intelligence.

Take for example the worlds of Star Wars and Star Trek; they both have artificial intelligence in one manner or another. Star Wars presents its AI mostly as androids rather than as purely digital AI, implying that at one point their technology had digital AI but "evolved", based on external factors or other agents, to be more commonly housed in the physical form of androids than in a virtual one like holographics or a computer of sorts. Star Trek takes the opposite approach and has its society treating holographic and digital AI as the norm (in researching a computer system like LCARS, it could not operate without a sort of advanced yet rudimentary AI), while androids are considered unnecessary and/or untrusted, with one exception.

Now looking within our own reality: we recently developed 5D storage technology using crystal lattices and lasers. Very sci-fi, yes, but because it offered no better alternative to our current magnetic and solid-state transistor-based storage mediums, we as a people did not develop it further. The same happened with steam engines: a Greek philosopher invented the first steam turbine, and it wasn't until 1800 years later that the world rediscovered this technology to fuel the Industrial Revolution.

The point I'm making here is that if a culture, events, or a faction within that culture prevents the widespread adoption of a technology, then that technology could become rarely used, or be completely abandoned by some factions. Thus your main protagonists could belong to one of these factions that does not utilize an AI system in their fighters for anything more than, say, guided fly-by-wire systems or advanced situational automation that needs no interaction from a pilot, while still requiring a pilot to fully operate the craft. This could be handed down by a governing body or stem solely from that team's preference, based on their own skills, abilities and experiences, leading them to consider any sort of AI-piloted drone inferior technology.

One could even justify this inferiority by saying that most digital intelligence systems or AIs are never built up to be sapient-level drones with a semi-conscious mind, but are merely sets of hardware and software built to operate with an insect-like mind and still needing an operator, because AI was never needed to develop further than this, nor was any sort of mutation code developed to allow the AI to self-evolve, replicate, or expand its own programming, and therefore it never developed into a digital life form.

  • To expand even further, one could even explore the concept of rogue factions attempting to develop a self-evolving/replicating AI, and have the protagonists become involved by investigating rumors and/or receiving a mission to destroy and/or steal the new technology. –  May 11 '15 at 21:10
0

Being a huge Gundam fan and lover of unrealistically flashy sci-fi combats, I have asked myself this very same question before, and this is the solution I came up with: The AIs handle basically everything during a fight, reactions, communications, tactics, prediction, etc... The humans would still be there just to make mistakes.

Now hear me out - there is no reasonable way a human being can outthink, outsmart, outmaneuver an AI, but we sure as hell can out-stupid them. We will make terrible decisions (from a logical or predictive viewpoint), and that would make the machine's movements harder to predict. Basically, think of the human as a random seed generator. As long as his/her decision is not purely self-destructive, the AI will roll with it just to throw off the enemy AI.

I just know this idea is flawed somehow, but it seemed like a good one to me initially.

Another idea I had, which places the human pilots in a much brighter light would be to have them in charge of the battle tactics being utilized. They do not control the machine itself, they merely say 'We should do x.' And the AI handles all the maneuvering and reacting.

Unfortunately, both these excuses have one common requirement: some sort of powerful, all-enveloping communications jammer that keeps the human pilots from staying out of the machine entirely. Otherwise, the humans will just sit in base while the AIs do everything.

Feaurie Vladskovitz
  • 5,966
  • 2
  • 32
  • 61
0

Racism/Fear of the Machine

Another possibility would be that humans are so afraid of a potential robot uprising that they refuse to put them in positions where they could possibly effect a revolt. Maybe robots/AIs aren't allowed to have weapons of any kind, in which case a space fighter is certainly a weapon.

aslum
  • 7,406
  • 20
  • 35
0

Idea: You could have your fighter pilots make a regular habit of boarding and capturing larger ships (or installations). In that case, a simple AI to pilot the fighter isn't enough -- you'd need a whole robot. And it can't be a clunky robot either. It must be capable of maneuvering through an enemy ship, with unknown internals, with damaged sections, and with enemy resistance to the boarders. Then it would have to repair the enemy ship and pilot it back to their territory. (Or maybe use it to continue the battle, or as a spy ship, or a hundred other scenarios.)

The point is, move from an air force analogy to a naval analogy. Turn your fighter pilots into buccaneers, and you change from needing only a fighter-pilot AI to needing an immensely sophisticated robot. From a certain point of view, human beings are already immensely sophisticated robots, so why would your future empires reinvent the wheel? (Also, it is one thing to have AIs in one's civilization. It is quite another thing to have large numbers of robots that are both physically and mentally as sophisticated as people.)

[edit: Just realized that this idea is similar to Dune's land warfare, where shields are highly effective, except against slow-moving attacks. This leads to land warfare based on masses of infantry (with transport hoppers for mobility), engaged in close-quarters/hand-to-hand combat.]

dmm
  • 239
  • 1
  • 6
0

In the future, AI is both powerful and nigh-on sentient. In a standard Earth fighter jet, for instance, an AI would literally fly circles around a human. They can maneuver around obstacles a human pilot wouldn't even register. They can track targets via complex physics equations, to the point of knowing where individual molecules will fly when the ship is destroyed.

If they can do all that, then why would you ever need a human pilot?

Electromagnetic radiation

The Handwave Engines create a massive amount of electromagnetic radiation. Obviously, computers can be shielded from the EM fields produced, but the shielding makes the ships much larger, bulkier, and overall less stable. Instead, ships are flown by a mechanical wire system, directly controlled by the pilot, with a minimum of hardened electrical components. The engine and its electronics are shielded enough to operate, and the pilot doesn't need any shielding, apart from the usual "keep the air in" and "keep the energy weapons out" sort. In fact, the shields can use those very same EM fields to operate. It's noisy (frequency-wise), but that's good for a shield; it stops the ability for enemies to use a jammer or worse, a harmonic resonator, which would amplify the shield to the point of killing the occupant. As an added bonus, most ships will continue to fly just fine after being hit with an EMP, as there is precious little to damage.

Near-Light Speed Is Weird

At the near-light speeds of space fighters, all sorts of weird things happen in physics. Simulating the effects is possible, but the sheer amount of detail the computer has to know is staggering. A standard computer can barely do it real-time, and the bulky, super-hardened computers in a fighter ship are left far in the dust. Luckily, humans are really good at ignoring information, no matter how important it is. With only a few basic filters, a human can track an enemy pilot through space as fast as if they were both in jets in Earth's atmosphere.

Super-Light Speed Is Really Weird

Of course, that's at sub-light-speed. Once the Handwavium particles are engaged and the ships move into the mini wormhole/warp speed/hyperspace/a parallel universe (let's call it Superspace), physics really goes out the window. No one really knows what Superspace actually looks like; rather, the ultra-high-frequency energy patterns interact directly with the human brain. As the brain tries to understand the information, it translates the patterns into sights, sounds, even tastes. The pilot can use those sensations to guide his flight, ending up roughly where he wants to: "Fly until you see orange and smell figs, curve left, and keep going until you taste hot blueberries."

Certain energy patterns exist around every solar system, allowing pilots to avoid hitting planets or stars; however, each system has its own sensations, which allow pilots to navigate "by feel".

A computer, of course, can make nothing of the information provided; the vague energy blasts are gone too quickly to process in any useful way. Granted, a computer can fly from point A to point B; it just has to make all the calculations ahead of time, and fly blind. If something is in its way, it will make a small, expensive supernova.

ArmanX
  • 12,345
  • 3
  • 31
  • 49
0

I've looked and couldn't see any, so sorry if I'm bringing up something already asked and answered, but what is the reason for drone operations being out of the question? By drone, I am talking in the current military sense, where an aircraft is operated by a human pilot who is not in the physical aircraft. The only practical problem I could see with this system is that there could be some signal lag (and the shielding could block signals, I guess).

But for your pacifist programmer, this would be the best of both worlds. You preserve the superior advantage of a human pilot while limiting his exposure on the battlefield. So instead of fighting a war, your pilots are playing a video game.

Now, another overlooked issue is that a carrier is a powerful ship... but it's also quite light on defenses. Real-life carriers are flanked by a small fleet of other ships that serve as screens against enemy action. There are missile cruisers (they provide the bulk of the anti-air), destroyers (surface patrol and anti-air actions) and submarines (keeping the waters below safe) that protect the carrier, which does all the shooting back (not to mention the carriers operate multi-role craft which can go from air-to-air combat to air-to-ground combat in a pinch, for additional screens against enemy action). The end result is that if an enemy ship is close enough to the carrier that unassisted visual contact can be made, something has gone terribly wrong. Perhaps another motive is that the AIs had prioritizing issues and would target the highest-valued target first, always going at the carrier gung-ho and ignoring the other ships that were there specifically to stop them. Sure, it's not a human life, but it's still an investment that isn't easily replaced when at war.

hszmv
  • 10,858
  • 14
  • 29
  • How does this answers the root question? We all know pilots on board create problems. The question is how to build a world in which it will make sense. – Mołot Aug 03 '17 at 13:44
  • @Molot First part about drones is because OP does have a character that wants to remove the human from the machine... hence a stepping stone that does exactly that and is already in use needs to be addressed (why are drones not viable)?

    Second part is yet another consideration that takes the nature of Carrier Theory Combat into account and posits a simple AI that may have issues with engaging correct targeting (One more reason not to go the A.I. route).

    – hszmv Aug 03 '17 at 13:48
  • In actuality the possibility of drones is a key plot point of my story. The original question I posted suggests this is impossible (I explicitly said ER radiation prevented remote control). However, the programmer has a trick for using FTL comms for drones, previously thought impossible to do without being easily blocked. It turns out it is impossible; there is an obscure vulnerability in the black box code he didn't disclose when publicly releasing it, in hopes that until it's discovered drones will be used, temporarily saving lives. Still, doesn't help with why AI aren't used though. – dsollen Aug 03 '17 at 14:21
-1

Another issue that isn't really touched on here is damage control - an AI might be able to do a fantastic job of flying when everything's working hunky dory, but when half the reaction controls are out, the main nozzle control's in reversion, and the main engine is surging due to a turbopump fault, never mind the loss of two computers and an electrical bus to battle damage, it's going to have a much harder time "thinking outside the box" in order to limp home or even finish the mission than a well-trained human pilot would.

ArtOfCode
  • 10,361
  • 4
  • 40
  • 72
Shalvenay
  • 11,347
  • 5
  • 41
  • 78
  • 1
    What makes you think that AI can't do that? Humans don't really think outside the box when piloting - that would mean it's aliens or a time traveller or simply some unexpected outside influence causing my problems. – user2617804 May 13 '15 at 02:25
  • @user2617804 -- humans are much better at making sense of contradictory or incomplete information than computers are, given the appropriate chance. – Shalvenay May 13 '15 at 03:04
  • 1
    Fuzzy expert systems handle it just as well as humans. Anything less than sufficient information - rotation, velocity, acceleration, position - and the spacecraft is doomed, period. Lots of human pilots have crashed planes due to contradictory sensory or instrument information, whereas the AI would go for stabilising the plane first, which reduces risks. – user2617804 May 13 '15 at 23:28
-1

Firstly, sentient-level AI is just dumb to have on a fighter/gundam, so realistically there likely won't be any. That means many decisions will still have to be made, so these either need to be transmitted OR you have a pilot. Transmission takes time and can be hacked, so you want to avoid it as much as possible.

This leads into the second point: if you're going to have both a sentient and a non-sentient AI (as you would have to), you may as well make the sentient part a biological organism rather than a sentient AI, because then you have protection from hacking and two different processing systems; having one that is wildly less logical will result in an overall better chance at winning, since it can come up with things from different perspectives.

Any AI you have, adaptive or otherwise, relies on a set of rules. If I can figure out those rules I can use them to my advantage as a sentient. On the other side, human behavior can be unpredictable, and as such an AI might simply not be able to adapt.

Also, if a ship is damaged and needs repairs, an AI in many cases won't be able to fix itself, even if it has nanites that are safe to use.

So basically, security, strategy, and the ability to repair are better with a human pilot, no matter how fast or powerful the CPU and AI are.

Durakken
  • 6,608
  • 1
  • 18
  • 41
  • 1
    Your conclusion is a stretch and does not follow logically from your premise. The AI-human comparison is not that valid: humans rely on rules they learned, probably in their space academy simulations, and how well they learned those rules determines whether they pass or not. The nanites-and-repair point: those statements are not really compatible, but it would take long to explain why, and with nanites they probably would not need fighters at all. The good point is actually only about the biological "AI" not needing to be sentient - being savvy is good enough, a mouse-brain system. But that hybrid system is something to think about. – MolbOrg Jul 26 '16 at 22:12
  • If a human gets a rule wrong it is still beneficial, because then they are acting in a way that is not predicted by the AI. A nanite AI would be developed in such a way as not to spread, keeping it from repairing the ship in any situation where the materials cannot be accessed. Supposing a crash on an uncivilized planet, that ship is not being repaired unless you have a pilot to move around and get things to repair the ship. Mouse brains are good enough for some parts, not others. – Durakken Jul 27 '16 at 06:33
  • 1
    I guess you do not know how today's self-learning algorithms work, and that is the root of the problem with comparing them to humans. It's my note about your answer and I prefer to keep it short. I would recommend adding this spread-limiting point to your answer - that's a good enough reason to limit grey goo. I called it incompatible for reasons other than material gathering, more informational ones. But grey goo is overkill: no need for pilots and fighters in the first place. – MolbOrg Jul 27 '16 at 06:45
  • Don't know how learning AI works completely, but I have a good enough understanding of it. It simply will never be good enough to handle human thinking completely, and it also has the same limitations as the nanites, in that we would put in certain things that it can't do and those can be exploited. I think of human pilots in the future more as the seat of thinking of the fighter, not the operator; they're the decision maker or the action stopper, but a lot of the activity would be handled by the AI. – Durakken Jul 27 '16 at 08:18
  • 1
    AI, even current soft AI, can do unpredictability 1000 times better than humans. Humans SUCK at being unpredictable; random number generators are great at it. If your ship is damaged enough to need serious repair, odds are a pilot alone can't do much to repair it (a fighter pilot can't repair his fighter), and besides which, he is probably dead or stranded in space until rescue; you can't fix things in space easily. – dsollen Jul 27 '16 at 15:59