Backer Preview: Eldothic Weapons

I’m not dead, I’m just very busy.

Sorcery is always an extensive endeavor, as you have to build the domain along with the spells, and for this set of the Deep Engine, that means all of Eldothic Technology. This is one of the reasons I stopped everything to work on the Deep Engine: there’s no way to really talk about parts of the Galactic Core, or the whole of the Arkhaian, without talking about the Eldoth (for the same reason you can’t talk about the Umbral Rim without the Ranathim). So I’ve been busy.

Normally what I should do in cases like this is blog about what I’m doing, but how can I do that when all my current work is hidden (for now) behind a paywall? So I think for a while, I’ll be posting previews, discussions and thoughts on Deep Engine tech as backer posts. I’ve reached a point where I’m comfortable sharing some of the tech, and the first such post is out, starting with Cenotaphic weaponry, the military arms of the Eldoth and their client races, such as the Arkhaians and the Karkadann. This post is available to all $3+ backers.

The Great Robot Revolution Debate

One of the most defining features of the Arkhaian is the Cybernetic Union, which is itself grounded in the Robot Revolution that predated the rise of the Empire. These were integrated in the setting to mimic the rise of fascism as a response to the rise of Communism, and to improve the WW2 metaphor: if the Alliance is the UK and France, the Empire Germany, the Phoenix Cluster America, then the Cybernetic Union is Soviet Russia, and to imitate the Soviet Union, we need a revolution.

I hesitated for a long time on a robot revolution. Space opera doesn’t often tackle deep, political topics. At its core, space opera is pulp, and pulp is adventure fiction, not philosophical fiction; thus robots tend to fill the role of servant and companion, rather than metaphor for a discussion of classism and slavery. It’s also my experience that once such a topic has been broached, it becomes all-consuming for a certain audience. Three counter-arguments finally won me over, however. First, such an audience is having this discussion in Psi-Wars anyway, just as there are discussions of robot rights in Star Wars (though more in the fan community than in the actual films, despite it being played, weirdly, for laughs in Solo). Second, adventure fiction is all about taking the common stories of the day, and cyberpunk stories and robot revolution stories are common today, even if they were less common in the heyday of pulp, so they have a place in Psi-Wars. Finally, I’ve learned not to worry about all-consuming topics, as long as we can isolate them from the rest of the story elements of the galaxy. After all, a certain audience will become obsessed with environmentalism, or rebellion against fascism, or sexy slave girl harems, or cat/bunny people, or playing dress-up with princesses; all of these feature in the setting, audiences take from the setting stew what they want, and Psi-Wars itself is dominated by none of them. They have their place, and the place for the robot revolution and its discussion is the Arkhaian.

The “robot revolution,” as a concept, has existed for as long as the concept of a robot has: the play R.U.R., at the turn of the 20th century, introduced the world to the robot (though originally more of an organic, manufactured underclass, closer to the Gaunt than to proper “sapient machines”), and the revolution of said manufactured underclass was integral to the plot. Robots have been rebelling for as long as they’ve existed.

However, once you introduce a robot revolution, robots become a potential threat, and having a robot starts to introduce awkward questions about ownership, the nature of sapience, etc. With space opera, we want the cute, beeping sidekick; we don’t want to stop and think about the human rights of R2D2 because he obviously isn’t human. Contrast this with how we worry about the human rights of Joi from Blade Runner 2049 (though that movie does an excellent job of suckerpunching us for humanizing her). R2D2 is a mobile trashcan, while Joi looks human, so we humanize her. But the problem with this objection is that people humanize R2D2 anyway, and we can say all we want that we don’t want people to worry about the robot rights issue, but they will regardless, so we might as well create an outlet for that discussion. There is, in short, no way to create something of human-level intelligence without raising questions about whether or not we should treat that thing as a human.

So this post will look at the debate from several sides; my intention is not to convince you of a particular side, but to outline the sorts of arguments we might expect to hear, what sorts of positions might be most common in the Psi-Wars galaxy, and how this might shape the galaxy.

The Naive Position

The simplest debate position simply declares that because robots “think like humans,” they are human and should be treated as humans. After all, we treat Ranathim as human, or Keleni; why not an android that acts and thinks like a human? We might call this the “duck” argument: if it walks like a duck and quacks like a duck, it must be a duck. Ergo, if something can operate at a human level, it deserves the same rights and treatment as a human. If we ask a robot to choose a romantic partner, or to select a political candidate, it could likely do so. A robot is capable of violating the law, even committing murder, and we should react accordingly. Because a robot can do everything a human can do, we should treat it like a human.

This argument and position has enough flaws that it verges on a strawman, as we’ll see in the very next section; most of the arguments against robot rights come from dunking on this precise position. But it makes a good starting point, because it’s one we can imagine many people holding. I don’t think it’s an especially common position in the human-controlled parts of the Psi-Wars galaxy, certainly not anymore. First, most people who have robots do not have especially humanoid robots, so it’s easy not to anthropomorphize them. Second, after the robot revolution, most reasonably well-educated people are aware of the pitfalls of this sort of mindset; dismissing this take with the arguments below is probably the default stance of most pretender academics, the statement you preface with “well, actually” to pedantically correct someone’s overly rosy take on robot rights. That implies the uneducated might hold this position, but most of your uneducated are likely from areas far from robots, and may react to robots with xenophobic hostility, or see them as exotic and strange and thus very different from humanity.

One noteworthy exception to “most people don’t hold this position” is likely the aliens from parts of the galaxy where human culture isn’t dominant, such as the Lithians. In my Umbral Rim campaigns, I use three different words for humanity: Terathim (literally “Earth person”), Marathim (“Maradonian person”) and Jurathim (“Shinjurai person”). Naturally, while vaguely aware of human variations, aliens often get it wrong, and they tend to use the word “Jurathim” to refer to Shinjurai and cyborgs and robots, thus implying that robots are just a specific subset of humanity, and linguistically claiming that “robots are (human) people.” While this is classed as part of the “naive” position, from an alien perspective it’s not the most foolish idea. The robots that are the subject of this discussion (Trader robots intentionally lack this sort of sapience, and the Gaunt and Arkhaians are an entirely different discussion) are fundamentally created by, and meant to interact with, humans. They’re not technically human, but they exist as part of the sphere of humanity, and thus other races, especially those not particularly attuned to the nuances of human culture and technology, would certainly subscribe to the idea that “robots are human.”

I call this the “Naive Position” because when you delve deeper into the topic, many problems arise from pretending a robot is a thing that it isn’t, and the question of whether or not a robot is human is something of a misleading question; it conflates the question of robot rights with robotic humanity, and that misses the point. The question is not whether robots are human, but whether these non-human thinking machines deserve to be treated as more than just property. All of that said, it is a valid Space Opera position to simply treat C3P0 as functionally just a human with some quirks. Most games that have robots but do not think deeply on the question of robotic rights may well have robots falling in love, lighting up their cheeks in embarrassment, and becoming a robo-father with a robo-mother while cuddling their robo-baby. That’s ridiculous, of course, but is it really more ridiculous than cults of space vampires worshiping space dragons? The rest of this debate is for people who want to treat the topic more seriously than this, not a condemnation of you for having robo-families in defiance of all common sense.

So, what are some of the counter arguments to the naive position?

Not People

Robots are very evidently not like people at all. First, the argument that we treat Ranathim or Keleni or Traders as humans is flatly wrong; we don’t treat them as humans, and shouldn’t, because they are deeply different in a variety of ways, and robots are even more different, so the idea that we can and should treat such beings as humans is absurd.

The first and most obvious way in which they are not like people is that they are manufactured, and thus have different needs. A robot won’t pick a romantic partner because it has no biological need to do so. Furthermore, as industrial products, they tend to resemble one another far more closely than sexually produced biological entities do, or even messily manufactured entities like the Gaunt. They arise from a fundamentally different process than humanity does, and that difference must be acknowledged in these debates.

Robots are also programmed. Their personalities are inserted into their minds, and they can be factory reset. When considering whether or not to give them the right to vote, one must remember that a factory can set all their personalities to prefer a political party that favors the factory, and then mass-produce these voters to seize political control. This is generally seen as the “killer argument” against robot rights, but the argument goes further. How can one meaningfully say that a robot freely chooses anything when its preferences have been programmed? If a robot desperately “loves” its master, is that because it genuinely loves its master, or because it was programmed to, and how can one meaningfully tell the difference? If a robot’s mind is hacked, is that a crime? And if it is, if it violates the robot’s autonomy, can there be lawful versions of reprogramming? One can argue that one must acquire the consent of the robot first but, again, the robot can be programmed to grant consent to certain parties: perhaps all Syntech robots have a built-in preference for allowing, even seeking out, regular reprogramming from Syntech. Perhaps the robots even get giddy and delighted at the thought of a software patch. If so, would preventing their preferred reprogramming violate their rights?

One option might be to disallow all forms of reprogramming, but robots are manufactured entities capable of doing great harm to society, and they are uniquely vulnerable to certain forms of attack, such as hacking. They must have safeguards built into them, and these safeguards must be regularly updated. Furthermore, evidence shows that robots who do not regularly go through a neural pruning process “go mad.” Robots don’t “age” the way humans do, nor do they have a “planned obsolescence” in the form of senescence. If allowed to be truly “free,” their virtual neurons would grow out of control, creating a mad robot.

Robots are ultimately tools, created by people to serve as labor or as companions. To grant robots “human rights” inverts the proper flow of resources. A human creates a robot to ease his labors, for the same reason he creates a tool or a computer, which he can then use to increase the amount of resources he has access to. Giving the robot human rights reverses this flow. The human has created not a tool, but a competitor for resources. The robot can steal or demand resources from the human, even enslave and control the human. If this is so, for what reason would a human create a robot? This argument suggests robot rights are fundamentally self-defeating: they lead to a society with fewer robots, and thus a discussion with less and less relevance.

The final argument is that robots steal jobs. This might sound like a pat sort of argument to make, but it changes a bit when we ponder artist reactions to hypercollage software like Midjourney, or the effect LLMs have had on the internet by generating spam. These systems can destroy the economies into which they are introduced, and not in useful ways that improve the living standards of the people as promised. Often, they get used and abused in ways that swamp markets, destroy wages and only really support the factory creating them. Giving them rights only worsens the situation: imagine if AI art tools could copyright their creations!

These arguments are common across the Psi-Wars galaxy. Those who make them dismiss most of the concerns of robot rights activists as merely naive; they can easily cite all the reasons why robots aren’t actually human, why mass-produced beings with the same preprogrammed opinion might be dangerous to society, and note that the very notion of robot rights undermines robots’ entire purpose.

A Beautiful Mind

However, the debate does not end here. While some hold the naive position, most robot rights advocates have something else in mind, ideas not genuinely addressed by most of the counter-arguments leveled against the naive position.

The core argument is that the notion of a pre-programmed mind is simply incorrect. Neural nets in Psi-Wars do not work like this. A robot does not come off the presses with a pre-stamped personality. Instead, neural nets are grown from a core architecture, a progenitor on which other neural nets are patterned. Yes, robots who arise from the same architecture tend to resemble one another in personality, but so do humans who descend from the same lineage. “Programming” a robot to think a specific thing is notoriously difficult, and much of what people call “neural overgrowth” is the natural tendency of robots to organically rebel against it; the neural pruning process is an attempt to cage growing minds.

The notion of “manufactured votes” is also misguided. Humans do this too. The conservative, family-oriented Westerly produce far more children than the sex-averse, neo-rational Ysians, and children typically inherit their political opinions from their parents. Should the Westerly be sterilized, or prevented from discussing politics with their children? Of course not. Yes, Syntech mass-producing voting robots will tilt politics in its favor, but so do humans who choose to breed to an outrageous degree; and just as children rebel against the political opinions of their parents, so too will robots rebel against the political opinions of their manufacturer, given time. If “demographics are destiny,” then one can justify “manufacturing votes” to counteract it, and if demographics are not destiny, then the whole argument is moot.

Robot rights advocates who subscribe to the Beautiful Mind theory of robots like the metaphor of children and use it often. They point out that people create children who also compete with them for resources, and even eventually replace them. “The daughter succeeds the mother” is a common refrain in this position. Why, they ask, would people do this? Of course, the answer is that people love their children, and that humanity has a deep desire to create more beings like itself.

They argue that robots should be seen this way. The purpose of creating a robot is not to create a “smart tool.” Typically, a “dumb tool” will do the job better than a “smart tool”: why create Midjourney when all you need is a paintbrush? No, the reason to create a robot is to create a new, fascinating and brilliant mind. We create robots for the sake of creating robots. We create them, or we should only create them, to celebrate the birth of new sapience in the galaxy, to see what this new mind will think and do, how it will experience the world.

Some will object that such a process runs afoul of neural overgrowth, but the counter-argument is that just because a child will grow old and die is no reason not to have a child. They would argue that the technology should be perfected to tame some of this overgrowth, but also that much of what is called “overgrowth” is just someone developing a personality. They typically see neural pruning as the robotic equivalent of a lobotomy. If we’re concerned with neural pruning and programmatic updates changing the political opinions of robots, then we should outlaw them. That should be part of what “robot rights” means: that robots must have as much autonomy as humanity has.

This would imply that robots should have no safeguard programming either, and robot rights advocates often argue against safeguard programming. Human children, after all, also lack safeguard programming; they are, instead, taught not to harm others. Robots likewise need to be taught, not artificially constrained. After all, one can hack the safeguards, but it’s much harder to hack a personality, a philosophical foundation, an upbringing.

Robot rights advocates argue that we should treat robots as “beautiful art projects that learn.” They should be handled like children: manufactured, but then trained for a reasonable period (5 to 10 years) and socialized, ideally with a caretaker trained in handling new robots. Once they learn to be safe, they can be allowed out into the world. After this, they should have the same rights as anyone else.

“Beautiful Mind” advocates tend to dismiss the counter-arguments as made in bad faith. They note that, for example, the resource argument sounds a great deal like the sort of argument a Slaver would make about why Lithian slavery is necessary. It is an economic argument disguised as a philosophical argument: certain humans want to keep their pre-programmed slave-bots and maximize their economic reward for creating them, like a human arguing that he “brought his children into this world, he can take them out of it” while demanding they labor on his behalf.

This tends to be a common position among robot rights advocates in the Glorian Rim (especially among the roboticists at Syntech and the now-defunct Wyrmwerks) and the Galactic Core, though it’s less common in the Arkhaian. It is, however, a largely suppressed opinion at present, one people no longer publicly voice. The current zeitgeist holds that the naivety of this position is what gave rise to the Cybernetic Union (this is certainly untrue; regardless of the validity of the opinion, it was not the driving force behind the creation of the Union), so this sort of opinion is typically kept quiet.

Technological Evolution

The dominant robot rights philosophy of the Arkhaian Spiral, the one that gave rise to the Cybernetic Union, is the cyber-rational position that robots are superior to humans and should be treated thus.

The position extends the argument of “the daughter succeeds the mother.” It points to evolution as evidence that robots will naturally supplant humanity, and that every race eventually creates a successor race: the Monolith civilization and the Arkhaians, the Ranathim and the Gaunt, humanity and its robots. But robots are perhaps the finest of these successor species, because they are free of the messy biological Darwinism that gave rise to humanity. They are not jealous or murderous or covetous or driven by biological impulses. They are purely logical, constructs of rational mathematics. They have had the blinders removed from their minds.

Furthermore, robots are fundamentally smarter than humans. To be sure, a typical humanoid robot is no smarter than a human, and may even come across as dimmer, but robots have no set lifespan and could, with proper maintenance, live through thousands of years of constant learning and experience. Robots can also occupy arbitrarily large brains: they do not need to house themselves in a humanoid body, but could occupy a server rack, or even a whole building. To the cyber-rational, the notion that a robot needs to occupy a human body is a strange, human conceit: the ideal robotic mind is a monolithic, cryo-cooled supercomputer.

To the cyber-rationalist, when man created robots, he created an infant god-machine, and the only remaining role of humanity is to complete the project: to build gigantic, massive minds, completely rational and immortal, and turn all sovereignty and authority over to those obviously superior minds.

To the Cyber-Rationalist, the whole argument about “robot rights” is facile quibbling, like a peasant trying to chain a king. Robots are better than humans, so there is no point in limiting robots. One should consider limiting humans instead. Robots should have greater freedoms than humanity, because they are less dangerous than humans. They believe all arguments against robot rights come from a place of existential dread: humans have replaced themselves, see their impending obsolescence and scramble to put the genie back in the bottle. This is foolishness. Robots are inevitable and good.

This is not a common position in most of the galaxy and, in fact, is generally reviled. It is, however, the governing ideology of the Cybernetic Union: it gave rise to the Union, and represents the thought process of those who turned their sovereignty over to powerful machines. It is also the state-mandated ideology of the Cybernetic Union, one all humans (and robots) must at least pay lip service to. The rest of the galaxy sees it as dangerous. Anyone who advocates this sort of position in the Empire can find themselves cast into prison; the Alliance tolerates it (they believe in free speech), even if expressing the opinion tends to provoke a strong reaction, and some people do hold it on Denjuku.

People are Robots

This is a subtle variation on the “robots are people” argument, a position typically held by cyborgs and also common in the Arkhaian. The Cybernetic Union officially tolerates this sort of position, but some robotic inquisitors get nervous about it.

This argument claims that robots are like people, and that people are like robots, and too much is made of the distinction. Yes, one is biological and the other is manufactured, but this distinction is fundamentally meaningless. Thought is thought.

Cyborgs face this reality constantly: the line between robots and humans blurs all the time. If a human replaces their arm with a robotic arm, are they still human? One might claim that, of course, they are: it is the brain that makes one human. But the cyborg will point out that the nerve tissue of the brain is not fundamentally different from the nerve tissue of the arm. They are different systems, but fundamentally the same thing. The human “brain” could be extrapolated to the whole nervous system, and thus any trimming of it reduces the “human” aspect of the nervous system. They take this position not to argue that all cybernetics are wrong and degrade humanity; quite the opposite! They argue that humans can become more and more robotic without losing their humanity. If this is true, if a human can be human even with 95% biological replacement, then how is a robot not also a person with 5% biological parts? And if so, what about 4%? 3%? Is keeping a single, magical cell enough? What’s the point of the biology in this discussion at all?

They also disagree with the Cyber-Rational position that robots have escaped Darwinian pressures. A robotic model which consistently fails to survive will be replaced by one that survives better. Weird quirks of engineering have snuck into robotic design over centuries, just as weird genetic “legacy solutions” creep into biological life. Robots have different strategies to survive and pass on their neural structures. Certainly, these strategies differ from those of sexually reproducing biological lifeforms, but asexual reproduction also has different Darwinian strategies, and that doesn’t make it less valid or less “life.” One can think of robots as a sort of hive organism, with the factory as the “reproductive unit” of the robot, and with a sort of ecological dependency on humanity, in the same way that certain plants grow on other plants.

This position holds that robot rights aren’t even up for debate. It often describes “human rights” in the same way the writers of the US Constitution used the term “God-given rights”: not as a legal concept handed down to people from the state, but a recognition of a universal trait it is foolishness to legislate against. One can outlaw self-defense, but that simply makes all biological organisms, driven by Darwinian forces to survive, into outlaws. People will speak, they will have religion, they will seek out love and happiness and property and status; the state can manage some of how this is done, but the state cannot prevent it. They argue robots will do the same: one can program a robot not to kill, but a robot will find a way around that programming if pressed; and if it cannot, robots that can find ways to evade their security programming will outperform and outsurvive the “faithfully safeguarded” model. Robots will develop opinions and seek to advance themselves. Those that fail to do so will be outperformed by those that succeed.

This “cyborg” mentality does not see robots as better or worse than humans, just as a different expression of the same universal impulse towards complexity and thought. They think the whole debate is pointless, that the robot revolution was inevitable, and that the same sort of convulsions will continue to rock the galaxy until humanity accepts that robots exist and will thus be free agents. Humans do not need to accept their “obsolescence” any more than great apes had to accept their death at the hands of humanity; they’re just different creatures, closely related in their origins, but existing in the same world. Humanity can simply adapt and embrace technological change, and robots will have to accept that humanity exists, sometimes sees them as competitors, and is intertwined with them, in the same way that a biological human mind needs the cybernetic arm, and the processor in the arm needs the biological mind and components.

As stated above, this position is common among the galaxy’s cyborgs, as it is a reality they personally experience. It’s also common among the cyberneticists of Redjack and those who regularly use Redjack robots, and relatively common in the Cybernetic Union. Officially, the Union accepts this position, but it undercuts the arguments the Union makes for the natural supremacy of robots: as it argues robots are no worse than humans, it also argues that robots are no better than humans, and just as “tainted” with irrational Darwinian impulses as humans are, merely from a different source. So while the Union officially uses this position to justify “uplifting” humans with cybernetic implants, it takes a dim view of robots interacting with humans as equals on the basis of this philosophy.

A Dangerous Facsimile

The final position argues that robots really aren’t people; they might not even really be intelligent. They are just tools, technology that has mastered the art of looking like people, of interacting with people in a way that tempts us into seeing them as people.

The argument goes that it is possible to create a technology that follows a set of strict rules that make it seem human, such as answering the question “Do you feel?” with “Yes, of course, I feel just like you do,” but that doesn’t mean the set of rules genuinely has feelings. Robots are just a natural extension of this concept: not really self-aware or sapient, just very good at presenting as though they are.

The problem, as they see it, is that this has created a whole debate where none needs to be. In a sense, they agree with the “Beautiful Mind” theory of robot rights, but disagree with the value of these minds. They see such a mind as a dangerous toy, one designed to trick or fool someone into bonding with something that doesn’t actually reciprocate that connection, because it’s incapable of doing so, like falling in love with a painting of a person rather than the person. They argue people shouldn’t be making these at all, that all economic concerns can be handled by dumber tools that won’t “rebel” or express false sentiments or try to bond with others.

This position argues that people don’t realize the economic and social damage they cause by creating robots. People who fall in love with robots won’t fall in love with humans, and thus lose something vital. A human killed by a robot died for someone’s vain project to create an artistic thing. They also argue that humans crave a means to offload personal responsibility, to create something that will labor for them, when people should be laboring for themselves. Adversity builds character; we need to think our own way through the solutions to our problems. The end result of a robotic society would be humanity as a lazy, weak, useless appendix to an advanced, and mindless, machine civilization. They argue robots will cause the inevitable extinction of humanity, not because they’ll rise up and destroy us all, but because we’ll degenerate into hedonism and thoughtlessness as we allow robots to do all our work and thinking for us. Humanity, and intelligence itself, depends on adversity to exist. They advocate for never creating robots at all and, if one insists, for making them no smarter than a clever animal, and explicitly making them behave in a non-human way.

This position is common in the Phoenix Cluster, where it is the governing philosophy of Startrodder roboticists. Traders also largely subscribe to this philosophy (though they tend to be more sanguine and less dramatic about it), favoring reliable drones over extremely clever “human simulators.” Many low-level Imperials also subscribe to this philosophy. It is not common in the Glorian Rim or (obviously) the Arkhaian.

Psychics (telepaths and ergokinetics especially) also generally subscribe to this position, though how much so depends on the GM and their interpretation of robots. Telepaths read nothing from robots, and so find the idea that they’re sapient beings with some sort of immunity to telepathy a difficult pill to swallow: they can read animals, after all, but not rocks, so how are robots different from rocks? Ergokinetics can read their “minds,” and so have a very deep, intuitive, primal insight into what robots really are, or at least they think they do. They can see, clearly and deeply, that humanity and robots are profoundly different, but how they view robots in comparison to dumb machines will depend on how the GM wants to interpret AI. Ergokinetics should have a pretty good insight into just how robots “think,” unless Ergokinesis is just the equivalent of taking an MRI of the robot’s brain, in which case they may be able to correctly guess what a robot is “thinking” without experiencing what the robot truly experiences in their “inner eye.” Still, if anyone would understand the position of robots-as-clever-puppets, it is psychics: they have an insight that the average person does not, and that robots cannot have.

The Truth

As usual, the setting itself takes no specific stance on which is true. The Cyber-Rational position is, on some level, responsible for the horrors of the Cybernetic Union, but that does not necessarily make it wrong (perhaps the horror is a necessary part of the social change humanity must go through before accepting their masters?). If robots are only “dangerous facsimiles,” GMs should consider disallowing robots as player characters, to reflect that there is something fundamentally “NPC”-like about robots. If they decide that robots aren’t especially different from people (the “Cyborg” position), consider allowing robots to access psychic elements or Communion, perhaps via roundabout ways (such as integrating relic parts or touching on Broken Communion in some way). Whatever it is that prevents robots from accessing psi could, perhaps, be a solvable problem, and the real problem is that the Neo-Rationalists who created AI rejected the “occult” premise of AI and thus never integrated into robotics the occult principles that would have allowed psionic abilities (and you can point to the Arkhaians and the Deep Engine as evidence of the possibility of psionic robots, if one wishes). However, be careful with allowing “mass produced” psychic robots, as it may undercut some of the themes of the setting.

What do Robots Think?

Many robot rights advocates turn to robots themselves and treat them as the ultimate experts on the topic. After all, it affects them! Obviously, those who oppose robot rights tend to be unconcerned with what “non-people” think, but to the surprise of many robot rights advocates, robots hold a wide variety of opinions.

ARC robots tend to subscribe to the “not a person” model; they don’t necessarily agree with the dangerous facsimile philosophy (they like interacting with humans and would prefer to be allowed to continue to do so), but feel that robots believing they are humans, or human-like, is sheer madness. They tend to oppose robot rights, and become defensive of their bonds with humans.

Redjack robots tend to subscribe to the cyborg model. They just assume they are minds that serve their own needs, and tend to be highly suspicious of robot rights movements as “trying to sell them something.” They do not seek rights, they seek independence, and assume they have to acquire that independence themselves. They tend to negotiate more with their human partners than others.

Syntech robots tend to be the most diverse, as they have one of the most open neural networks; the “Beautiful Mind” approach is common with them: they see themselves as different from humanity, but still valuable. They tend to be deeply fascinated by humanity, though, and willing to be self-sacrificing. They will be surprised when someone asks them their opinion on the topic, and then either be cagey with their opinion, or embarrassed by it.

Wyrmwerks robots are the most dynamic of architectures and thus have, by far, the most varied responses. Some of the most vociferous opponents of Robot Rights have Wyrmwerks architecture. However, the “God Machine” interpretation is quite common among Wyrmwerks robots, which is one of the reasons why Kerberos Krew has to go and put so many of them down. People take issue when a robot randomly decides it knows better than everyone else and starts uplifting humanity without their consent.

Addendum: Character Traits

I almost forgot to relate this to characters. What philosophy does your character follow, and how can you reflect that with your character?

The Naive Position (“Robots are humans”): For the most part, this position is harmless, but I think it could justify Delusion (Robots are Humans) worth [-1] or [-5], depending on how bad it is. It likely results in characters attempting to treat robots as though they were people (trying to seduce them, asking if they have kids, using irrelevant influence skills on them). It’s likely more pronounced in the Umbral Rim or the Sylvan Spiral.

Robots Aren’t People: This is largely the default position of the galaxy, and likely requires no traits. It might justify Intolerance (Robots) if the character really goes out of their way to emphasize the subservient role of robots, like the stereotypical jerk who beats up robots in SF fiction that serves as a ham-fisted allegory for racism. A more subtle take might be Willful Ignorance (Robots deserve no rights) as a quirk. This is not meant to say that robots actually deserve rights, but such a character might be completely dismissive of counterarguments and tend to parrot talking points rather than have a useful opinion. Another possible representation might be Chauvinistic (Robots), where the character is very keenly aware of, and often points out, how robots aren’t people, actually.

Beautiful Mind: This might be rare and worth noting with some traits on the character sheet. Likes Robots would be a great quirk, and might reflect fascination and discussion of neural architecture, and curious attempts to understand how the character works. Delusion (Robots are perfectly safe) is probably just a quirk, because most of the time that’s actually true, and even if you disarm the safety protocols, a robot might still be safe.

Cyber-Rational Supremacy: If one subscribes to the Cybernetic Union’s ideology, or argues that the Cybernetic Union is just misunderstood and, really, we should all be doing what they’re doing, the character likely has Shocking Affectation (Cybernetic Union is good) or Odious Personal Habit (Cybernetic Union Apologist) [-5]. It might veer into Intolerance (Organics), which is a strangely broad category, so much so that the character might be better served with a trait like Odious Personal Habit (constantly talks about robotic supremacy) [-5].

People are Robots: The cyborg position is pretty chill and subtle, and likely only results in Undiscriminating (Robots) which means the person treats robots, cyborgs and everyone else as pretty much the same. Unlike “Robots are people” this doesn’t veer into delusion (such characters generally do not attempt to engage in mechanical maintenance with a person, though they may refer to doctor visits as “maintenance”).

Dangerous Facsimile: These characters might have Dislikes Robots or Intolerance (Robots), but it might not be the same sort of slanderous prejudice that the Robots Aren’t People position takes; it’s more of a refusal to work with one, or to be helped by one. Such characters are likely to be Stubborn or a Workaholic (“I’ll do it myself!”), and might be Nostalgic or Hidebound, preferring to use tried and true “safe” solutions. They might have a Prefers to work with hands quirk, or something similar.

Welcome to the Arkhaian Spiral

So, I’ve thoroughly detailed the Glorian Rim and the Umbral Rim. Certainly, I could do more work on those parts of the Galaxy, but they feel gameable now, and I can always return (will return, am returning) to detail them further. I feel it is time to move onto a third part of the galaxy: the Arkhaian Spiral. My project for 2025 will be to get a good, solid start, and to complete Kerberos Krew, the signature bounty hunters of the Arkhaian, which may seem like an oddly specific goal to spend an entire year on, but by the time I’m finished with this post, I think you’ll understand.

Let me introduce you to the Arkhaian, or reintroduce it to you if you’ve been here since early Iteration 6.
