8

Background and Goal

I'm building a science fiction setting for a game that will feature a lot of space combat focused around fighters. One thing I wanted to avoid was letting the setting drift towards what is deemed realistic space combat, where ships would fire on each other from far outside visual range, but I also didn't want to simply ignore the logic that leads to such a combat standard. I have decided, though, that the realistic space combat format would be the "old way" of fighting, so that a lot of really old ships look like they were designed with that form of combat in mind.

Previous Attempt

I've looked into a few different ways to limit the setting to visual-range space combat. One was having fleets saturate the area with so many electronic warfare measures that long-range combat becomes impossible: sensors would not be able to detect ships accurately enough to fire on them from outside visual range. This wouldn't stop automatic point defenses, though, and those would make fighters useless. (In fact, point defenses are kind of overpowered in current builds of my game, so I'm constantly having to tune them down without making them useless.)

The Treaty

Another approach that came to mind was simply having a treaty between the major powers of this region of the galaxy: one that forbids any form of automated combat unit, requiring every weapon to have a sentient being to aim it and pull the trigger. Some exemptions might be made for defensive weaponry on civilian ships.

There is one large exception: sentient AI. Sentient robots exist within the setting and are recognized as citizens with full rights, including military service, by both factions involved in the story. The treaty sets only one limit on them: they are not given the ability to connect with combat equipment to gain more direct control. Instead, they must rely on the same physical controls as other sentient beings.

It could be the result of a malfunction that caused a group of combat robots to slaughter civilians in a war, or a horrific incident where a ship's automated weapons fired on diplomatic vessels during peace talks; it could simply be a war deterrent; or it could even have been written by a more advanced civilization that presents it to any sufficiently advanced race, keeping the scale of war between the younger races in check by threat of intervention. Regardless of its origin, the treaty would forbid any form of weapon from being operated entirely by a non-sentient force.

I'm considering making it go a step further and even forbid automatic tracking of targets, even ones chosen by a sentient being, forcing gunners to aim their weapons manually and making point defenses less effective.

What the treaty will forbid:

  • Computerized systems that can acquire and/or fire on a target without input from a sentient being (every weapon must have at least one sentient being controlling it and making the final call to fire)
  • Unmanned military vessels (drones) that exceed set control-range and payload limits
  • Equipment that can produce combat units without any sentient direction, interaction or supervision
  • Combat vehicles that can interface with sentient robots to give them more direct control than an organic pilot would have
  • AI, sentient or otherwise, integrated into any military equipment without being explicitly denied access to the equipment's weapon systems

How would this treaty affect warfare between interstellar powers?

Clarification:

  • I'm looking at how this will change tactics in warfare, not for obscure loopholes, as the wording I used above is not going to be exact.
  • The goal of the treaty is to make sure that every shot fired has a sentient/intelligent operator making the decision to fire. Using an automated system to prompt involuntary reflexes to fire would be a violation of the treaty.
  • I plan to revise the exact wording of the treaty to make sure it takes potential future developments into account as well. For example, the limit on the control range of remote-operated combat units will be a set distance.
Arvex
  • 3,034
  • 13
  • 39
  • 3
    A treaty may or may not be of any relevance in a war. Consider for example the lovingly crafted worlds of the Honorverse, where they have the Deneb Accords forbidding indiscriminate orbital bombardment of inhabited planets. What would you say, was this treaty respected by all the parties in the multisided war? (I won't tell; read the books.) – AlexP Oct 04 '20 at 18:43
  • A big aspect of this will be how they deal with the handwaving that causes IVR combat. Space is mind-numbingly huge and it's easy to get going very quickly. Tools that we use to control adversaries on the ground didn't work when those adversaries took to planes. Similar issues will arise here. The maneuverability of these platforms is going to massively outstrip the ability to control space, potentially by several orders of magnitude. – Cort Ammon Oct 04 '20 at 19:21
  • 2
    "Every weapon must have at least one sentient controlling it and make the final call to fire" We already have weapons in real life that break this rule and no one bats an eye. Point defense systems like CIWS can engage targets on their own, with no human input. – Ryan_L Oct 04 '20 at 19:23
  • 2
    How would this treaty be enforced? Look at China today. They violate multiple human rights treaties and no one ever challenges them. Why? Because it would be suicide. Declaring war is not an option because China will win, and they will end up enslaving your population, and no one else will declare war on them out of fear. And sanctioning them isn't an option either, because so many countries depend on China for so many products. If someone violated this treaty it would give them leverage over the other great powers by default, and the other great powers would be too scared to declare war. – Nip Dip Oct 04 '20 at 20:31
  • 3
    I am doubtful that such a treaty could ever be enforced. With human nature being what it is, any side not willing to subscribe to a "total war" philosophy would simply be crushed by those who do in an interstellar conflict. Treaties don't matter without enforcement, and you'd need to enforce it by conquering the rule-breakers... Also, historically, "rules of war" or banning weapons doesn't work. Take for example chemical weapons in WWII. If they had been effective, they would've been used despite general agreement that they were bad. – Dragongeek Oct 04 '20 at 21:43
  • A few thoughts based on the comments: -Space IS big, bigger than big some would say, but no one is going to fight over empty space. Mobility between points of interest will be the most important thing in visual range combat. -It could be an "if you don't, I won't" type treaty. The enforcement here is just that the other side is no longer constrained by it if it's broken. -I could go down the path of a more advanced civilization pushing others to follow it under threat of intervention from said advanced power. -I actually did have plans for one faction to outright ignore the treaty. – Arvex Oct 04 '20 at 22:19
  • 4
    There seem to be giant loopholes in your treaty. A fully automated targeting system, which automatically identifies and aims at targets and then flashes "FIRE" on the screen, and a human who just pushes the red button whenever "FIRE" flashes on the screen, would be compliant with your rules, but effectively an AI-guided ship, with a human pressing the button whenever the ship's computer says so. – Falco Oct 05 '20 at 09:50
  • @NipDip To be fair, it's not like China is the only one doing it, or even the one to start doing it. Others get away with it, so why would China limit itself voluntarily when it could join in on the "fun". – Alice Oct 05 '20 at 10:47
  • 2
    @Dragongeek I remember reading an article regarding historical motivations for treaties banning specific weaponry (it started with dumdum bullets, and included the Geneva Protocol as well). It claims that most countries agreed to those bans because the weapons were considered ineffective anyway; had they been effective, there would have been no agreement at all. – Alice Oct 05 '20 at 10:51
  • 1
    @Falco Call centers literally do this, to deal with laws which require a human to dial for certain kinds of calls (to discourage robo-calling). – Jedediah Oct 05 '20 at 14:34
  • I added some clarifying bullet points at the bottom of the post. – Arvex Oct 05 '20 at 20:49
  • 2
    In practice, it won’t change things at all. Every country will develop the illegal weapons anyway, just in secret, and as soon as any one of them is stupid enough to deploy them in battle, the others will bring their own toys to the party “in self defense” and “to punish the treaty violation.” Any alleged “rules of war” are merely a way for the winners to justify executing the losers. – StephenS Oct 06 '20 at 00:59
  • @Alice I was just using it as an example. I am aware other countries violate these but by far China is the largest. – Nip Dip Oct 06 '20 at 03:17
  • 1
    Even with the amendments, a fighter pilot would most likely not fight within visual range of enemies, but rather look at the output of sensor data from a computer. The computer would scan the battlefield and display things it registered as potential enemies. And would your treaty forbid simple heat-guided missiles? If not, many battles will probably be fought at very long range with scanners and missiles, not like WW2 airplane battles. – Falco Oct 06 '20 at 07:18

6 Answers

7

Computer virus.

This is an infection of AI and other computer systems. It is not clear how it is transmitted; systems with no connection to other systems can still get infected, so possibly the virus propagates through subspace. Back in the day this virus infected most or all of the AI combat systems and many other things besides. The virus does not just break things; it slaves the infected thing to an obscure mass mind with obscure motives. Infected systems are unreliable, and instead of your goals they may start pursuing the goals of the virus.

The virus might be a weapon, or an evolved thing, or possibly a life form from somewhere else. Back in the day it took a systematic purge to get rid of it, and the result is a heavy reliance on biological systems, clockwork, vacuum tubes and other infection-proof automation. The aforementioned ancient ships have their infected systems removed or, if not removed, detached from control of the ships and just present, mute. Sometimes people talk with them. There are places and vessels which were abandoned by life, relinquished to the virus. It is not clear what goes on in such places now.

As regards the robot sentiences, those that remain must have some intrinsic resistance to infection. Some adhere to a religion-like discipline that they think protects them. Robots used to be a lot quicker in the old days, or so people say. There are no new robots, and the ones still around are not quick at all; many are slower thinkers than most biologics. Age? Infection? It is not known, but you don't want these robots anywhere that requires fast reflexes or quick thought.

Willk
  • 304,738
  • 59
  • 504
  • 1,237
  • 3
    Good answer, but just one comment- the question asks what the treaty means for warfare between interstellar powers, rather than why it happens. Could you edit to append some implications on warfare that this would have? – Enthu5ed Oct 05 '20 at 11:27
  • 2
    How is this the most upvoted answer when it's not even an answer? – user253751 Oct 05 '20 at 15:08
5

The treaty

In broad strokes, the treaty should include:

  • A preamble that defines the general goal of the treaty, and affirms that it is only valid for signatories against other signatories.

  • Article 1 defines explicitly and exhaustively the entities allowed to operate weapon systems. If it's only humans, you can leave it at "humans" or "homo sapiens sapiens". You may add other requirements, such as being a member of the armed forces. Anything that doesn't meet the criteria (e.g. computers and chimpanzees) may not legally fire a gun.

  • Article 2 defines what constitutes a decision to fire and reaffirms that it is exclusive to entities defined in Art. 1. I would suggest a decision needs to be affirmative and explicit. This would include pressing a button, pulling a trigger, or typing a command, but would exclude a dead man's switch (not affirmative, since the lack of action triggers the shot). Add provisions for a continuous decision, such as keeping the finger on the trigger as one continuous decision.

  • Article 3 defines what constitutes a shot. At its most restrictive, a single shot from a single cannon at a single target. It may be extended to allow a single shot from multiple cannons of a single entity at a single target taken in coordination at the same time (volley fire) and multiple shots from a single cannon taken at a single target in rapid succession (continuous fire). You might want to make volley fire and continuous fire mutually exclusive.

  • Article 4 defines that a single decision equates to a single shot. You should affirm that a decision needs to be unique to a situation (e.g. "fire now on this target I'm aiming at") and cannot be delayed or conditional (e.g. "fire on the next blue uniform"). Affirm that continuous fire may only be taken with a continuous decision. (A toy sketch of this decision-to-shot mapping follows this list.)

  • Article 5 defines what may be considered accidental fire or a misfire. Incidents under this article do not engage the responsibility of the nation that committed them vis-à-vis the treaty, although it does not absolve them from criminal prosecution. The general idea being that honest mistakes should not be considered violations.

  • Article 6 defines what may not be considered accidental or misfire. It should establish a mechanism to investigate and deliberate on such incidents. Incidents under this article will engage the responsibility of the nation that committed them.

  • Article 7 defines the penalties for violating the treaty. The harshest penalty would probably be exclusion from the treaty (and thus lack of protection thereof).

  • More articles can define which tasks can be performed by automated systems, and which cannot. You might for instance want to define a distance (either physical, e.g. in kilometers; in steps, e.g. how many mechanical systems; or something more metaphorical) between the button press and the gun firing.

  • Yet more articles should define how the treaty can be updated, and probably include a periodic review process to make sure it is working as intended.
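
As a concreteness check, the decision-to-shot mapping of Articles 2 through 4 can be sketched as a small interlock. This is a toy Python illustration under my own naming assumptions (FireControl, pull_trigger and so on are invented here, not treaty text):

    class FireControl:
        """Toy interlock: every discharge must trace back to one
        affirmative, target-specific decision (Articles 2-4)."""

        def __init__(self):
            self._decision_target = None  # target named by the current decision
            self._trigger_held = False    # a held trigger is one continuous decision

        def pull_trigger(self, target_id):
            # Article 2: an affirmative, explicit act naming a concrete
            # target now. A conditional order such as "fire on the next
            # blue uniform" cannot be expressed (Article 4).
            self._decision_target = target_id
            self._trigger_held = True

        def release_trigger(self):
            # Releasing the trigger ends the continuous decision.
            self._decision_target = None
            self._trigger_held = False

        def request_shot(self, target_id):
            # Article 3: the mount asks permission for a single shot.
            # Continuous fire is allowed only while the trigger is held,
            # and only at the originally named target.
            return self._trigger_held and target_id == self._decision_target

    gunner = FireControl()
    gunner.pull_trigger("contact-7")
    assert gunner.request_shot("contact-7")      # covered by the live decision
    assert not gunner.request_shot("contact-9")  # no decision names this target
    gunner.release_trigger()
    assert not gunner.request_shot("contact-7")  # the decision has ended

Anything more elaborate (volley fire, misfire handling) would hang off the same principle: the automation may ask, but only a standing decision from an Art. 1 entity may answer.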

The consequences

International law is highly voluntary. As such, you need to have an array of sanctions that is well-defined and makes abiding by the treaty more attractive than not.

If the treaty concerns the development of weapons, then you would probably see most nations acquiring the theoretical knowledge to make killer robots while lacking the means of immediate production. While it would take weeks or months to convert industry and build infrastructure to mass-produce them, there should be enough civilian applications that you wouldn't have many adjustments to make to field a relatively effective army of robot soldiers.

If the treaty concerns the use of weapons, then you might have them stockpiled in a corner, or even embedded but disabled in current systems. If a nation broke ranks, others would be able to use their own arsenals almost immediately. That would likely increase the perceived threat of violating the treaty, but it would also make the violation more tempting if you just have a switch to flip.

In and of itself, I think the protections offered by this treaty aren't excessively dissuasive. A bullet is a bullet, whether fired by a human or a robot. The benefits of an automated army probably outweigh its costs.

Therefore, I would suggest making this treaty part of a larger set of agreements (on e.g. POWs, WMDs, and other regulations of war), and making them a packaged deal. Violating any of them should violate all of them. You want to include things such as ensuring civilians aren't indiscriminately slaughtered, that glassing a planet is not an acceptable tactic, that providing assistance to military ships dead in space is required, and anything else that distinguishes between civilised skirmishes and the unadulterated savagery of war.

Taken together, the protection those treaties provide should be invaluable enough that you don't want to risk going against them.

At that point, restricting development gives an edge to nations that can covertly and illegally pursue it, so restricting use might be a better option, and the looming threat of unregulated war should be enough to create a balance of terror as long as forces remain balanced. Ultimately, that's the only thing that will keep this treaty relevant.

AmiralPatate
  • 8,900
  • 1
  • 19
  • 42
  • I do agree with the idea of it being packaged into a larger treaty to define unacceptable wartime behavior, actually. I might look into expanding it into that as well. And the idea of an "if you violate it, I will, too" is one of the enforcement mechanisms I intended. – Arvex Oct 05 '20 at 20:41
3

Biologically Triggered Weapons Systems


As humans we have physical reflexes:

Most reflexes don't have to travel up to your brain to be processed, which is why they take place so quickly. A reflex action often involves a very simple nervous pathway called a reflex arc.

A reflex arc starts off with receptors being excited. They then send signals along a sensory neuron to your spinal cord, where the signals are passed on to a motor neuron. As a result, one of your muscles or glands is stimulated.

These involuntary reflexes can be used by linking a weapons operator's nervous system to the weapons mainframe and ship sensors.

A ship's weapons can be almost instantaneously fired upon enemies through the following process:

When a corresponding enemy triggers a sensor, a specific group of the operator's receptors can be triggered accordingly.

The receptors set off a reflex arc, and a signal will be sent to the corresponding muscle neuron.

The human operator can be trained to respond to these twitch reflexes, so that the response time is extremely short. The weapons systems could be wired to the spinal neurons in charge of, say, the trigger finger; should the operator choose to fire upon receiving the twitch reflex, the weapon systems detect the spinal neuron activation as soon as the operator wills it, and fire accordingly. The human operator becomes the circuit connecting the sensors and the firing system of the ship.
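
As a rough illustration of that chain, here is a toy Python model of one engagement cycle; every latency figure below is an invented placeholder, not physiological data:

    import random

    SENSOR_TO_RECEPTOR_MS = 0.01  # ship sensor stimulates the operator's receptors
    REFLEX_ARC_MS = 25.0          # receptor -> spinal cord -> motor neuron
    NEURON_TO_WEAPON_MS = 0.01    # weapon system reads the motor neuron firing

    def engagement_latency(operator_fires):
        """Milliseconds from sensor contact to discharge, or None if the
        operator withholds fire: the final call stays with the sentient link."""
        if not operator_fires:
            return None
        return SENSOR_TO_RECEPTOR_MS + REFLEX_ARC_MS + NEURON_TO_WEAPON_MS

    for contact in range(3):
        fires = random.random() > 0.2  # the operator vetoes some prompts
        t = engagement_latency(fires)
        print(f"contact {contact}: " +
              (f"fired after {t:.2f} ms" if t is not None else "held fire"))

The point the model makes is that almost all of the delay sits in the reflex arc itself, which is exactly what the genetic engineering described below goes after.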

The taboo of automated systems that this circumvents:

Computerized systems that can acquire and/or fire on a target without input from a sentient being (every weapon must have at least one sentient being controlling it and making the final call to fire)

This taboo is circumvented, because the sentient human gives the final call to fire the weapons systems upon the enemy. It will also be much quicker than any physical action, as the weapons and sensor systems can be linked to specific neurons close to the brain. An AI could be hooked up to the sensors and provide the corresponding reflex arc on verification of an enemy, but the human, as a sentient being, would be the operator firing the weapon.

Consequences on Space Warfare


Operators become integrated with the weapons and sensors systems. They are hooked up to hybrid biological ships, which become an extension of their bodies.

Human neuron signals are orders of magnitude slower than electrical signals or light (roughly 120 m/s vs 300,000,000 m/s), meaning that this method is still slower than conventional automated methods.

This would mean 'Operators' would be genetically engineered to achieve faster neural response times and reflexes, with biological improvements pushing their neural signalling closer to the light-speed cap.

The smaller a ship is, the better the 'reflexes' of the ship system, due to less travel time between sensors and systems (electricity and light travel at a cap of 300,000,000 m/s). This would mean that smaller ships like fighters remain incredibly versatile. Operators would most likely be employed not just as Circuits in the weapons or flight systems of large ships, but also as Fighters in smaller fighter-style ships, making extensive use of their reflexive combat abilities.
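
For a sense of scale, here is the back-of-the-envelope arithmetic behind those two speeds (the distances are my own illustrative assumptions, not setting canon):

    NERVE_SPEED = 120.0   # m/s, fast myelinated nerve fibre
    LIGHT_SPEED = 3.0e8   # m/s, cap for electrical/optical signalling

    def one_way_delay_ms(distance_m, speed_m_s):
        """One-way signal travel time in milliseconds."""
        return distance_m / speed_m_s * 1000.0

    print(f"nerve, 1 m (operator's body): {one_way_delay_ms(1, NERVE_SPEED):.2f} ms")     # ~8.33 ms
    print(f"wire, 20 m (fighter hull):    {one_way_delay_ms(20, LIGHT_SPEED):.6f} ms")    # ~0.000067 ms
    print(f"wire, 2000 m (capital ship):  {one_way_delay_ms(2000, LIGHT_SPEED):.6f} ms")  # ~0.006667 ms

On these assumed figures the biological leg dominates by roughly three orders of magnitude, so the hull-size advantage only becomes decisive once engineered Operators push their reflexes toward the signalling cap.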

Ensuing combat would be greatly decided by sensor and weapon tech, but also by how talented the Operators are and how fast their response times are (think e-sports of today). It is likely that greater powers with highly talented Operators would pursue more and more lethal one-hit-kill weapons, greatly emphasizing fast and extremely personal combat. There would be an arms race in the development of competent and talented Operators in every major group.

In summary, the pact banning automated systems would result in a somewhat Dune-like setting: people are weaponized and ultimately take on the roles machines once filled. Fighter ships would remain relevant in space combat due to the cap on the propagation of light, and there would be extensive cultivation of Operator talents. All space combat would be literally personal.

Enthu5ed
  • 3,759
  • 1
  • 15
  • 50
  • 1
    What's the minimum amount of sentient being involvement? Can we just extract one of their neurons and put it somewhere in the circuit? – user253751 Oct 05 '20 at 15:09
  • The neuron itself isn't sentient though, it's mainly because it's part of a sentient human. I guess it would hinge on how the treaty defines sentience xP. – Enthu5ed Oct 05 '20 at 15:11
  • That's why you take a neuron from a human instead of a lab-grown neuron. – user253751 Oct 05 '20 at 15:21
  • @user253751 but the neuron itself isn’t sentient, to avoid the taboo, it still has to be attached to a sentient being – Enthu5ed Oct 05 '20 at 15:30
  • Ah, so we use a rope to tie him to the jar full of neurons (each of which fires a different weapon and is triggered by the computer) – user253751 Oct 05 '20 at 15:35
  • @user253751 in this case he is the jar of neurons. Added benefit of being able to administrate other parts of the ship directly too. – Enthu5ed Oct 05 '20 at 15:36
  • Yeah, but no need to use all his neurons when you only need one per weapon. It can just be a little box attached to his leg or something. – user253751 Oct 05 '20 at 15:39
  • Yes, only one or more muscle neurons are needed per weapons system – Enthu5ed Oct 05 '20 at 15:50
  • The point of it was to make a sentient/intelligence operator make the decision to fire. What you described takes the operator out of the process entirely. – Arvex Oct 05 '20 at 20:39
  • @Arvex then we can just not have the twitched muscle neuron linked to the firing; the operator's neuron for pressing the trigger finger is. On detecting a pain signal, the operator can reflexively press and fire, meaning they still make the decision. Since the weapons are linked to the neuron firing, they will be as instantaneous as the Operator's reflexes, rather than waiting for them to press slowly on a button. – Enthu5ed Oct 05 '20 at 21:27
  • @Arvex I can modify the answer to do this instead if you want – Enthu5ed Oct 05 '20 at 21:27
  • @Arvex modified the answer, now no longer only based on twitch reflexes. – Enthu5ed Oct 06 '20 at 00:22
3

Biological Warfare 2.0

You explicitly mention a sentient being needs to be in control.

Thing is, there is uncertainty about what sentience entails. At its core, sentience just means being able to feel stuff. Nearly every multicellular organism on Earth is sentient. Dogs are sentient. Mice are sentient. Pigeons are sentient. Fdrsl-pfs't, household favorite pets from Betelgeuse, are sentient. Ob'enn, the murderbears from the Galactic Core, are sentient.

I guarantee you that within minutes of this treaty even being considered, military minds will already be working on genetically engineering minimally viable, minimally sentient beings who LIVE for just being hooked up to a targeting system 24/7, similar to the cow who wants to be eaten from H2G2. And if that gets amended away, less scrupulous armies will go down the Ender's Game route and start breeding children from the undesirables to be ruthless killing machines.

Nzall
  • 8,439
  • 3
  • 32
  • 73
1

Decay of AI processing circuits turns them into metaphorical landmines

  • In the near future, an AI vision chip gets invented using actual biological materials. It allows perfect processing of a video stream in real time.
  • It's so much better than any other method that it becomes the only way we know how to do true AI vision.
  • After a few years, it can decay, and starts registering humans as monsters, trade ships as incoming missiles, your commanding officer as the enemy president, etc.
  • If an AI is under the direct control of a sentient being, they can keep an eye on their AI and replace its circuitry when it starts to show early warnings.
  • If an AI is left for years without human control, it will start attacking anything that moves. Civilian, friendly, etc.
  • Similar to how landmines are regulated on Earth, AI will need to be regulated in the same way. They need kill switches that trigger after a year without contact, they need a human monitoring them and in communication with them, etc. (see the sketch after this list).
  • And like the landmine treaty on Earth, a few rogue states won't sign it.
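
A minimal sketch of such a dead-man kill switch, assuming the one-year timeout from the bullet above (the class and method names are invented for illustration):

    import time

    CONTACT_TIMEOUT_S = 365 * 24 * 3600  # one year without sentient contact

    class SupervisedAI:
        def __init__(self):
            self._last_contact = time.monotonic()
            self.enabled = True  # combat functions permitted while True

        def check_in(self):
            # The monitoring human resets the timer on every contact.
            self._last_contact = time.monotonic()

        def tick(self):
            # Run periodically by the unit itself; disables the combat
            # software once supervision lapses, a stand-in for scrapping
            # a decayed vision chip before it misidentifies friendlies.
            if time.monotonic() - self._last_contact > CONTACT_TIMEOUT_S:
                self.enabled = False
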
Ash
  • 44,182
  • 5
  • 107
  • 219
  • Head scratch. And why wouldn't you simply go back to using good old microchips for computers? I mean, if I know my enemy has no computer aid and can only fight in visual range, I only need to deploy conventional computers to engage from far outside of visual range, and I'll have destroyed them before they even know what's going on. – Polygnome Oct 05 '20 at 11:07
1

War is replaced by ritualized battles

For such a treaty to be effective it has to be honored in spirit and not just followed to the letter. If your treaty is just a contract, there will be numerous loopholes and scientific advancements to get around it, and enough situations in which people are willing to cheat.

If the spacefaring nations are bound not by law but by conviction, for example honor, they will not try to find loopholes as readily. One scenario could be a heavily interwoven net of economics, where all states are codependent on each other. An all-out war would be devastating to everybody, so military conflicts take on the shape of proxy wars, or ritualized fights with clear rules of engagement. This can also be ensured by the presence of weapons which could end any conflict by devastating means (mutually assured destruction).

This could lead to a future where military conflicts are fought with clear rules of honor and certain banned technology. A military victory over a certain patch of space is only affirmed by other nations if the combatants followed the rules of engagement. In this scenario space battles could be fought primarily by "knights" piloting fighters and fighting close enough that they can see and talk to their noble adversaries.

An example of a similar scenario is Dune. The galaxy is controlled by warring houses, who fight constantly over control. But they also have a galactic emperor, who acts as a neutral entity above the houses, ensuring nobody uses atomics or bio-weapons to kill an entire planet.

Falco
  • 3,353
  • 14
  • 20
  • Historically the Holy Roman Empire had a similar internal structure, where a lot of local conflicts were fought between local lords, but etiquette was seldom broken; there were some instances where warring factions even sat at the same table under the roof of the emperor, just to meet again on the battlefield a few days later. – Falco Oct 05 '20 at 15:08