Harper Voyager Science Fair: Real Science of Military Sci-Fi

Today, three acclaimed military sci-fi authors join us for the Harper Voyager Science Fair (#hvsciencefair) to share the real science behind military science fiction!

Neuralink Features

By William H. Keith, aka Ian Douglas (author of Dark Mind)

The works of Jules Verne, Arthur C. Clarke, and other SF luminaries notwithstanding, science fiction does not predict the future, and was never meant to do so. As a means of social commentary or criticism, as a way to explore our relationship with our own technology, as a glimpse into what it means to be human, or simply as a rollicking good read, science fiction is unsurpassed as a literary instrument. But it doesn’t tell us what tomorrow will be like.

At least… not usually.

Case in point: until… oh, probably the mid-1970s or so, with a very few arguable exceptions, not a single science fiction writer predicted the advent of desktop computing, personal computers, laptops, or smartphones, technologies that have utterly and completely transformed our world. Before that, computers in SF tended to be planetary computer banks or, at best, the room-sized mainframe running a spaceship. No one predicted the internet, or wrote about its impact on social networks, access to information, or communications, or dared to suggest such revolutionary concepts as crowdsourcing or citizen science.

Nor did anyone have the temerity to suggest that twelve men would walk on the face of the moon in the space of a mere three and a half years between 1969 and 1972… but that for the next fifty years human spaceflight would be relegated to low Earth orbit!

Or, for that matter, that by 2006, 27 percent of Americans age 18 to 29 would doubt that the moon landings even took place.

Still, sometimes the writers of speculative fiction get lucky. One of the more prescient of SF writers, Robert Heinlein, described water beds and waldos in his short story “Waldo” (1942), the post-war nuclear standoff in “Solution Unsatisfactory” (1941), and mobile phones in the juvenile novel Between Planets (1951), among other impressive feats of crystal-gazing.

I must say that it feels a bit odd to have entered such exalted company.

In 1992 I began developing a military-SF series called Warstrider. Pretty standard military-SF fare: the Japanese have grabbed the high ground of space, valiant rebels are fighting for their independence, and intelligent nanotech oil-slicks called xenophobes are making things difficult for all concerned.

But in the process, I invented some future technology which I variously labeled “neuralinks,” “cephalinks,” and “circuitry implants.” The idea was that micro-electronics could be “nano-chelated” directly into the sulci of the human brain—the sulci are the folds and fissures in the brain’s surface that dramatically increase its surface area—and literally grow electronic components connected directly to the central nervous system.

Why? Well, having a computer attached to your brain allows you to interface directly with the technology around you. You can download data directly into your conscious mind, reading it in open windows appearing inside your head, or you can order a meal or open an automatic door with a thought. Telepathy becomes possible when you can communicate directly with anyone else with the same hardware over a radio channel, and an in-head coprocessor lets you perform complex calculations… well… in your head. With a thought, you can link with a teleoperated robot at the bottom of the sea or on the hellish surface of Venus; in a real sense you no longer pilot a spacecraft, but are the spacecraft. You can even upgrade your memory by adding some in-head RAM.

Neuralink technology was so useful in my writing that I kept using it with only slight changes in terminology in all of my later Ian Douglas mil-SF series: the Heritage, Legacy, and Inheritance trilogies; Star Corpsman; Andromedan Dark; and of course Star Carrier. I’ve been able to explore a number of facets of this fictional technology… things like making back-ups of yourself in case things go seriously wrong, or being able to enter virtual realities generated by an artificial intelligence. How humans relate to their technology is important—the very essence, in my opinion, of science fiction. My future Marines, for instance, have to go through a difficult period during recruit training when their civilian neural circuitry is removed. They must be able to survive, fight, and prevail even when technology fails them, so they train without the gadgets before receiving new, military-issue hardware.

Imagine taking a teenager’s smartphone away from him when he enters boot camp and you’ll get an idea of how traumatic this might be.

But now, twenty-five years after I first wrote about neuralinks and thoughtclicking in-head icons, it seems that real-world science and technology are about to catch up with me. The Wall Street Journal has just reported that Elon Musk is backing a new venture, a company with the goal of creating a brain-computer interface (BCI). The company is called—where have I heard this before?—Neuralink.

It’s enough to make me wonder what mil-SF books Mr. Musk has been reading lately.

The technology is still in its earliest phase, no more than a gleam in the developer’s eye. The goal, however, is to create devices that can be implanted within the human brain, allowing, among other wonders, humans to interface with computers, enhance their memory, and merge their minds with software. These devices are elegantly referred to as “neural lace.”

In other words, I seem to have anticipated, by a quarter of a century, the coming technologies that promise to remake our species.

Why is the CEO of SpaceX and Tesla delving into what until now was purely the province of science fiction? It turns out there’s solid reasoning behind the initiative. Not content with creating cheap access to space and practical electric cars, Mr. Musk is out to save the human race from extinction.

Consider, for a moment, the Technological Singularity, another SF concept that I’ve been visiting, a lot, in my writing. The term was popularized by SF writer and computer scientist Vernor Vinge, who suggested that the advent of artificial intelligence could lead to what Bletchley Park cryptologist I. J. Good called an “intelligence explosion.” In this scenario, the first genuine AIs capable of improving their own software might engage in runaway cycles of enhancement and upgrades, with each successive iteration in machine intelligence appearing more and more quickly. The result would be super-AI far surpassing human intelligence, staggering advances in technological growth, and vast, rapid, and incomprehensible changes in human civilization.

Where would such an explosion leave the human species? “In the dust” is one answer. If humans don’t do something to keep up, their electronic offspring will self-evolve into literally godlike super-minds who might very well set out to redesign the planet according to their incomprehensible agendas without stopping to discuss the decision with us. After all… do we discuss the right-of-way of a new superhighway with the ants that happen to be in its path?

And unlike biological evolution, machine intelligence likely would advance exponentially and very, very swiftly. Given the speed of electronic operations, artificial intelligence could jump from desktop PC to Godhead in seconds, or less.
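
If you want to see the shape of that runaway in miniature, a few lines of toy Python will do it. This is strictly an illustration of the compounding logic Good and Vinge describe, not a model of any real system, and every number in it is invented:

    # Toy model of an "intelligence explosion." Each generation is a bit
    # smarter than the last, and a smarter designer finishes its successor
    # proportionally faster, so the wait between upgrades keeps shrinking.
    def intelligence_explosion(generations=10, start_interval_days=365.0,
                               gain_per_generation=1.5):
        capability = 1.0
        interval = start_interval_days
        elapsed = 0.0
        for gen in range(1, generations + 1):
            elapsed += interval                  # time spent building this generation
            capability *= gain_per_generation    # the new AI is smarter...
            interval /= gain_per_generation      # ...and designs the next one faster
            print(f"gen {gen:2d}: capability x{capability:6.1f}, "
                  f"arrived after {elapsed:6.1f} days")

    intelligence_explosion()

With these made-up numbers, capability climbs without bound while the total elapsed time crowds up against a ceiling of about three years: the runaway in slow motion.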

In Elon Musk’s view, our only hope is to join ourselves to our own technology before things get out of hand. “Over time I think we will probably see a closer merger of biological intelligence and digital intelligence,” Musk recently told the World Government Summit in Dubai.

If we don’t find a way to merge with our machines and evolve to godhood with them, humans might well find themselves both obsolete and irrelevant. If we’re lucky our new masters might decide to keep us around as slow but sometimes amusing pets. Maybe….

Once in a while, it seems, SF writers are lucky and are granted a glimpse of the future. My glimpse included neuralinks and in-head circuitry creating a tightly knit symbiosis between humans and their machines—one possible post-human future.

Given how rapidly we are moving toward true AI, however, I wonder if Musk’s Neuralink will be in time.

A few recent science articles discuss the future of neuralink technologies:
Elon Musk thinks humans need to become cyborgs or risk irrelevance
How Close Are We to Melding Mind and Machine?
Facebook Finally Released Details on Their Top Secret Brain-Computer Interface
Humans and Technology Are Becoming One, and It’s Changing Everything


The Science of the Sim War

By Henry V. O’Neil (author of Live Echoes)

I’ve always had great admiration for science fiction writers who could knowledgeably weave real science into their stories. It’s fun to find out just how much of their fiction is actually science, and then to watch some of it become reality as the years go by. With that said, I’ve also tried to follow the advice of the outlaw Josey Wales, who admonished us all to recognize our limitations. As my West Point transcript so glaringly demonstrates, science isn’t my forte. So in writing my military science fiction Sim War series, I’ve stayed much closer to the fiction than the science.

Of course that didn’t free me from the obligation of doing as much research as I could, and so here are a few items from my Sim War series with interesting roots in scientific theory and reality.

Background: At the start of the first book, Glory Main, the Sim War has been raging across our galaxy’s star systems for forty years. The human race had settled several habitable planets before the faster-than-light mode of travel known as the Step brought us into contact with the Sims. The Sims get their name from their similarity to humans, in that they’re physically almost identical to us. Their technology lags behind humanity’s, but they make up for this with numbers, organization, and ferocity. The two sides are fighting for habitable planets, which somewhat limits the war because it’s not a good idea to wreck the object of the conflict.

The science of the Sims: The Sims are humanoid, but there are major differences between the races. The Sims can’t reproduce, can’t eat what humans eat, can’t make any of the sounds of human speech (their language sounds like bird talk), and they die if they remain in contact with us for very long. All of these limitations suggest they’re a designer enemy, created to oppose us and then die out. So the biggest question of the war revolves around who—or what—is making them.

The human alliance’s propaganda line strongly rejects the idea that the Sims are human in any way, but the prevailing wisdom suggests they’ve been made with altered human DNA. Although the dates of the Sim War are never specified, the conflict takes place in a relatively distant future. As we’ve already created test-tube babies and appear to be on our way to designer babies, the notion of generating altered humans in the future rests on a pretty solid foundation. The Sims’ mysterious creators are assumed to be highly advanced (at least in regard to mass-producing a humanoid race) and continue tinkering with the DNA to make larger, stronger, and faster Sims.

The science of Sim technology: Although the enemy’s tech isn’t on par with humanity’s, it improves by sporadic leaps that contain some major surprises. In Glory Main, a human assault force encounters one of those surprises in the form of a sophisticated new munition. Delivered as artillery shells, the munition turns dry ground into deep mud. The tanks and personnel carriers of the assault force get bogged down in the morass, and their crews are forced to abandon their sinking vehicles. The ground later returns to solid dirt, creating a surreal landscape with empty armored vehicles jutting out at odd angles.

I’ve scoured the Internet for anything resembling this innovation, and have so far come up empty. The inspiration for the mud munition comes from the military concept of temporary area denial, which is jargon for rendering a piece of ground unusable for a limited period of time. Current examples of such technology are non-persistent chemical agents that eventually disperse, and minefields that deactivate or blow themselves up. The idea is to prevent your opponent from traversing that territory while retaining the option to cross it yourself later on.

In my mind, the mud munition functioned either by attracting the available moisture in the ground (which could also explain its limited duration) or by changing the dirt itself on an elemental level. The second possibility runs into difficulty when the munition is used on different planets, because the composition of the soil could vary dramatically. This complication comes into play in the second book, Orphan Brigade, when the process runs away with itself. During a large battle, concentrations of the munition cause a chain reaction that first creates a giant, expanding bog and later ruins the ecology of the entire planet. On a more hopeful note, an agriculturally-oriented group of humans later adjusts the mud munitions to create a seed capsule that can flourish in arid terrain.

The science of human troop delivery: The Step allows human fleets to suddenly appear in orbit around a contested planet, but the soldiers still need to reach the surface. A wide variety of shuttle craft perform much of this task, but moving tanks and personnel carriers that way can take up a lot of room. In the Sim War series, the answer involves an energy tunnel I dubbed a cofferdam.

In our world, a cofferdam is a watertight wall assembled in a body of water, its inside pumped dry to permit construction work such as building a bridge support. The space equivalent in the Sim War is cut through a planet’s atmosphere, from low orbit to the ground, to reduce the effects of atmospheric friction. Pressurized assault vehicles are dropped into this energy tunnel, using hull-mounted thrusters to keep them level and arrest their descent before touchdown. Enormous personnel rings, resembling wagon wheels, deliver large numbers of troops in this fashion as well.

The inspiration for this energy tunnel comes from an existing engineering concept intended to reduce the effects of atmospheric friction when launching objects into space. Although still just an idea, the Vactrain (vacuum tube train) would combine maglev technology with partially evacuated tunnels, and some proposals extend the approach to space launch. Those partially evacuated tunnels reduce air resistance, and that’s where I got the inspiration for the cofferdams in the Sim War series.
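
As a rough back-of-the-envelope illustration of why pulling the air out matters: aerodynamic drag scales linearly with air density, so evacuating most of a column of atmosphere slashes both the drag and the heating that comes with it. The vehicle numbers below are invented placeholders, not figures from the books:

    # Standard drag equation: F = 1/2 * rho * v^2 * Cd * A (newtons).
    def drag_force(air_density, speed, drag_coeff=1.0, frontal_area=10.0):
        return 0.5 * air_density * speed**2 * drag_coeff * frontal_area

    speed = 2000.0                               # m/s, a made-up descent speed
    open_air = drag_force(1.225, speed)          # sea-level density, ~1.225 kg/m^3
    evacuated = drag_force(0.01225, speed)       # corridor pumped down to 1% density

    print(f"drag in open air:          {open_air/1000:8.0f} kN")
    print(f"drag in a 1%-density tube: {evacuated/1000:8.0f} kN")

Cut the density by ninety-nine percent and the drag drops by the same factor, which is the whole point of the tube.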

The science of human infantry technology: The main character in the series is an infantry lieutenant, and he and his fellow soldiers are greatly aided by specialized goggles and helmets.

The goggles are part of a face-hugging frame that allows the wearers to slide the lenses up when the naked eye is preferred. They combine eye protection with night vision, range estimation, and a sight reticle for whichever weapon the user is holding. They also provide imagery from orbital, aerial, and drone surveillance, as well as maps, operational graphics, and locations of other units. Using navigational technology similar to our GPS systems, the users can literally see the boundaries of a preset route (including changes in direction) as lightly glowing walls to left and right inside the goggle field of vision.
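
For the curious, here is one way those glowing walls could be computed (this is my own sketch, not anything spelled out in the books): find the wearer’s cross-track distance from the current leg of the preset route, then compare it with the corridor width. In flat local coordinates it only takes a few lines of Python:

    import math

    def cross_track_distance(pos, leg_start, leg_end):
        """Signed lateral offset (m) of pos from the line leg_start -> leg_end.
        Positive means right of the direction of travel, negative means left."""
        dx, dy = leg_end[0] - leg_start[0], leg_end[1] - leg_start[1]
        px, py = pos[0] - leg_start[0], pos[1] - leg_start[1]
        return (px * dy - py * dx) / math.hypot(dx, dy)

    corridor_width = 20.0                        # invented corridor width, meters
    offset = cross_track_distance(pos=(12.0, 3.0),
                                  leg_start=(0.0, 0.0), leg_end=(100.0, 0.0))
    inside = abs(offset) <= corridor_width / 2
    print(f"lateral offset {offset:+.1f} m; inside the corridor: {inside}")

The goggles would simply render a glowing wall wherever that offset approaches half the corridor width.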

Most of these technologies are available already, so the goggles simply combine them. However, it’s easy to become over-reliant on such useful gear, as is demonstrated in the third book, Dire Steps. A cut-off infantry company starts to run low on goggle batteries, but fortunately one platoon has trained extensively for such an eventuality. The platoon sergeant’s much-resented “Goggle Appreciation Nights” pay off in a major way when those troops have to function in thick jungle without the electronic aid of the goggles.

Helmets are also more sophisticated in the story, in that they provide constant radio communication while protecting the wearers’ hearing. Doughnut-shaped “dampers” snug down when very loud incoming sound waves (such as those from an explosive detonation) are detected. The “snugging” sensation can also be used as a signaling device, to silently get someone’s attention or to indicate that an important message is on the way.

The foundation for this technology already exists, in the form of electronic earplugs containing microphones that detect and block high-decibel sound waves. The helmets worn by the humans in the Sim War are even more developed than that, as they can detect incoming sound waves that are moving at great speed. So when an explosive goes off nearby, the snugging sensation protects the users’ hearing while also warning them to take cover.
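
A toy version of that damper logic might look like the sketch below. The 140 dB trigger is the commonly cited peak level for hazardous impulse noise; everything else, including the release threshold, is a placeholder of my own:

    DANGER_DB = 140.0      # commonly cited peak level for hazardous impulse noise
    RELEASE_DB = 100.0     # made-up level below which the dampers relax again

    def damper_snugged(sound_level_db, currently_snugged):
        """Decide whether the dampers should be clamped down for this sample."""
        if sound_level_db >= DANGER_DB:
            return True                          # clamp down immediately
        if currently_snugged and sound_level_db > RELEASE_DB:
            return True                          # stay snugged until it quiets down
        return False

    # Simulated stream of sound-pressure readings (dB) around a nearby blast.
    samples = [65, 70, 68, 155, 148, 120, 95, 70]
    snugged = False
    for db in samples:
        snugged = damper_snugged(db, snugged)
        print(f"{db:3d} dB -> dampers {'SNUGGED' if snugged else 'open'}")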

Much, much more: In addition to the aforementioned goodies, the Sim War series also touches on organic replacement and regeneration of missing limbs; an active form of barbed wire, attracted by the electrical field generated by human bodies, that ensnares the victim and then glows with a bright light to attract enemy fire; and full-body armored suits worn by an elite, all-female unit known as the Banshees.

Earlier I mentioned a faster-than-light mode of travel that the humans call the Step. Although I didn’t go far into specifics, the story does mention the generation of a “threshold” through which the ship passes—suggesting scientific principles similar to the theory of a wormhole. I had some fun with the language on that one, in that one character mentions that this process was originally known as a Transgression because transgress means “to step across”. Human propagandists changed Transgression to Step because a transgression suggests we were doing something wrong when we first encountered the Sims.

The final books of the series, CHOP Line and Live Echoes, reveal the extent of the wrong things we were doing before and after the war began. Hopefully I haven’t gone too far wrong myself, in explaining the scientific basis for some parts of this tale. If I have, I’d like to refer you to my aforementioned grades in the science-related courses on my West Point transcript. As with my current writing, my answers on Chemistry and Physics exams were always more fiction than science.


Computers Are Our Friends

By Elizabeth Bonesteel (author of Remnants of Trust)

Computers in science fiction tend to go one of two ways: Siri on steroids, or Skynet.

In other words, they’re either fast, reliable, inerrant background noise, or evil AI villains trying to destroy us.

And isn’t that humanity in a nutshell? We want the new, the easy, the advanced. We want the everyday drudgery of our lives to be taken away by the Technology Fairy. Yet at the same time, we’re hard-wired to be wary of what we don’t understand.

Computers, by their nature, tickle that wariness: they can sound human, they can act human, they can make us react to them as if they are human, all while doing complex things for us at incredible speeds. It makes perfect sense that this would be frightening on some animal level, that at the same time as we’re attracted to their convenience, we fear what might come of their artificial minds. One of the most brilliant thinkers of our age, Stephen Hawking, is on record warning about the dangers of AI learning to evolve without us.

Due respect to Professor Hawking, who is far more brilliant than I am, but: I believe we’ve got some distance to go before we need to worry about Skynet. (If I’m wrong, you guys can sacrifice me to the machines first.)

I’ve been writing software since I was 14 years old. I’ve seen, in that time, tremendous evolution in the computer industry, but there are a couple of things that haven’t changed:

  1. Computers are fundamentally stupid.

This is really the crux of Hawking’s argument, and the most compelling piece of it: computers are basically tireless, adroit, well-trained dogs. They will do exactly what they are told to do, for exactly as long as they’re told to do it. If you begin with a sophisticated AI interface, it’s not hard to imagine the many and varied ways it could go wrong, especially since there’s a flawed human programmer at the root.

But we are, as it happens, pretty far away from a sophisticated AI interface. This is not the same as adding “human-feeling” interactions to our software—Siri, Alexa, Rachel from cardholder services—which are, at their base, far more Chinese Room than Turing. They are all fast data-retrieval systems (well, maybe not Rachel, or she’d know better than to keep calling me) with user-friendly voice interfaces. They are designed to fool us, or at least make us feel comfortable. They are not designed to be independent intellects.

And such data-crunching programs may indeed become powerful enough someday to figure out where the nuclear codes are stored, but without access, we’re back to humanity being the basic problem.
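
To make the Chinese Room point concrete, here is a deliberately dumb toy assistant. It matches keywords against canned answers and wraps them in friendly phrasing, and there is no understanding anywhere in it, just lookup and string formatting. (It’s purely illustrative; I’m not describing any real product’s internals.)

    CANNED_ANSWERS = {
        "weather": "it looks clear for the next few hours",
        "time": "it's a little after three",
        "music": "playing something you liked last week",
    }

    def assistant(utterance):
        """Fast retrieval with a friendly voice; no independent intellect."""
        for keyword, answer in CANNED_ANSWERS.items():
            if keyword in utterance.lower():
                return f"Sure! As far as I can tell, {answer}."
        return "Sorry, I didn't catch that. Could you rephrase?"

    print(assistant("Hey, what's the weather like?"))
    print(assistant("Do you ever wonder what it's like to be alive?"))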

Which brings me to:

  2. Programmers are too ego-driven to agree on a universal interface.

In science fiction, whether computers are the bad guys or not, there’s an underlying assumption that everybody’s running the same software: that data is exchanged in well-known formats, that everybody has the right programs to open every chunk of data, that all the systems are free of bugs and talk to each other without any glitches at all.

Those of you who have written software can stop laughing now.

It’s unfair of me to attribute the interface problem to egos, because that’s not really what it is. (Well, not usually. Because sometimes there’s that One Programmer who is so certain that their way is the Absolute Best and Most Intuitive Way, and they’ve mysteriously convinced the CEO, and the project gets delayed for six months and crashes and burns on release, and everybody but the self-declared company Neo gets blamed for it.) It’s not ego, but progress. (Also unfettered capitalism, but that’s a different post.)

Computers, by being the perfect intellectual blank slates, are incredibly versatile. There are a thousand ways to solve a problem, and an exponentially huge number of ways to solve combinations of problems. No matter how elegant an interface you’ve got, there’s almost certainly a better way to write it. And why wouldn’t you change it if you could? Why wouldn’t you take the experience of your users and your fellow programmers—not to mention the developing abilities of the underlying hardware—and build something better? So what if the folks who wrote to your interface two years ago can’t take advantage of the new stuff—they’ll have to update their own product at some point, right? They can rewrite to your new interface then.
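
Here is what that looks like in miniature. Everything in this sketch is made up (no real library is being quoted), but the failure mode will be familiar to anyone who has ever written against somebody else’s version 1:

    # Version 1 of a library, as shipped two years ago:
    #     def plot_course(origin, destination): ...

    # Version 2, after somebody built a better way (a new required argument):
    def plot_course(origin, destination, fuel_budget):
        """The new, improved interface; callers must now supply a fuel budget."""
        return {"waypoints": [origin, destination], "fuel": fuel_budget}

    # An old client, written against version 1 and never updated:
    def old_autopilot():
        return plot_course("Mars Dock 3", "Ceres Relay")   # breaks under version 2

    try:
        old_autopilot()
    except TypeError as err:
        print(f"old client broke on the new interface: {err}")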

See how it happens? Innovation moves software forward, but it also keeps it fragile. This is why I’m not losing sleep over Professor Hawking’s vision of Skynet.

Of course, I’m not sure you’d get much interesting fiction out of a realistic portrayal of the computers of the future. You’d end up with some poor guy stuck in a mining ship out by an asteroid, his critical software update interrupted mid-download by a solar flare or a black hole or an invasion of alien ants, now stuck in a loop where every time he tries to power up his mining drill or call back to the mother ship for assistance the whole engine system reboots. It’d be sort of like Groundhog Day, except he’d have nobody to help him but Neo, who’s still busy telling the CEO that it’ll work his way if all the foolish end-users would just do as they’re told and upgrade properly.

On the other hand, that might be kind of awesome.
