Thursday, June 30, 2016

How to Write a Realistic Atompunk Timeline

Guest post by Mark Appleton.

Atompunk is a subgenre of science fiction. The simplest way to describe it is that atompunk is to Asimov and the 1950s as steampunk is to Verne and the 19th century: a retrofuturism inspired by the dreams and fears of the early Cold War. Like steampunk, atompunk is not so much about content as it is about style: chrome, jumpsuits, fins, rockets, flying cars, monsters mutated by atomic testing. The world is powered by atomic energy, computers are the size of houses, and the moon is this year's hot vacation destination.

Unfortunately, atompunk hasn't enjoyed as much success (yet) as its steam-powered cousin.   The quintessential modern exemplar is the Fallout series, and the Venture Brothers cartoons sometimes venture into atompunk territory, but for other examples we mostly have to go back to the '50s and '60s: the novels of Asimov and Heinlein, the Tom Swift Jr. young adult books, and the original Star Trek series.

The common belief is that atompunk is, in retrospect, implausible. Space travel and atomic energy are a lot harder, and a lot more expensive, than they seemed in 1950s pulp science fiction, and by this point it's pretty clear that Mars is not home to green-skinned space babes eager to meet some Intrepid Explorers. Technology gave us iPods and the internet instead of moon colonies and atomic cars.

That perception is not totally wrong – radiation causes cancer, not gigantism – but it's not totally right, either. Something close to the classical atompunk world is not impossible – it's not a likely outcome of the last seventy years of history, but it's not impossible. And I'm here to tell you how it could have happened, at least in fiction: how to write a technically realistic atompunk timeline. I'm going to tell you about atomic-powered airplanes, atomic excavation, cheap atomic power, and more about radiation than you ever wanted to know.

Buckle in, folks.

Aesthetics

Let's start with the easy stuff: the look. Like I said, a lot of atompunk is about style: a car with giant chrome fins and a bubble canopy is pretty atompunk even if it burns gasoline instead of uranium. And that sort of thing is at the whim of fashions that could easily have run in an atompunk direction. Dress everybody up in jumpsuits and goggles, make everything out of aluminum, and periodically pause to stare towards the rising sun with your chin jutting out: that will take you a long way towards your goal.

Also, consider language.   You can capture a lot of style by using deliberately archaic word choices from the 1950s.   A computer is a “logic”.   A satellite is a “sputnik”.   A nuclear-powered submarine or ship is an “A-sub” or “A-ship.”   And don't ever say nuclear: always say “atomic.”   Proper attention to language – making your story sound like it comes from the 1950s – will help a lot to make your reader feel like it's from the 1950s, or at least the future as the 1950s imagined it.

Radiation

But like I said, that's the easy stuff. Now let's talk about the hard stuff, and let's begin in the obvious place: radiation. The conventional wisdom is that the discovery of radiation – and the fact that it's bad for you – is why the 747 isn't powered by an atomic engine. This conventional wisdom is not exactly wrong, but the full truth is a lot more complicated than most people think.

Most of what we know about how radiation affects human health comes from studying the victims of the Hiroshima and Nagasaki bombings. Because the vast majority of each bomb's radiation was released all at once from a single point, we can calculate how much radiation each victim was exposed to based on where they were when the bombs went off. By comparing those exposures to the victims' medical histories, we get a pretty good idea of what happens to someone exposed to a lot of radiation all at once.

For this article, I'm going to use millisieverts (mSv) as the measure of radiation dose. In the data from the bombing victims, we begin to see people suffering radiation sickness above 1,000 mSv of exposure, and we see an increase in cancer risk above 100 mSv, at a rate of about 1% per 100 mSv. So someone exposed to 300 mSv has a 3% increase in cancer risk, 600 mSv means 6%, and so on. Since roughly the mid-'70s, we have assumed that we can extrapolate the risk downwards: 10 mSv of exposure means a 0.1% increase in cancer risk, and so forth. We have also assumed that the rate of exposure makes no difference to cancer risk: absorbing 100 mSv in a second is the same as absorbing it over a year. This rests on the simple assumption that cancer risk is caused by DNA damage, and that the amount of damage is directly proportional to the amount of ionizing radiation you absorb. This is called the Linear No-Threshold hypothesis, or LNT.

But we don't actually know that LNT is correct. Below 100 mSv of exposure, we cannot distinguish an increase in cancer risk from the random fluctuations found in any statistical data. And we know the body does have mechanisms for repairing DNA damage. There's an alternative hypothesis, usually called Linear Threshold or LT, which holds that below some threshold dose the body can handle the damage without any increase in cancer risk. Where that threshold lies depends on who you ask, but it's usually estimated at somewhere between 10 mSv and 50 mSv. And the longer the dose is spread out in time, the higher the threshold.

The reason this matters is that, in a radiation accident like Chernobyl or Fukushima, the vast majority of victims are exposed to very small doses of radiation spread out over a very long period of time. In a reactor accident, very few people will receive a dose of more than 50 mSv, and it will be delivered over weeks or months. Even under the LNT hypothesis, the individual risk is very low – but many thousands or millions of people will be exposed to that risk. If you expose a million people to a 10 mSv dose, and the LNT hypothesis is correct, then 1,000,000 × 0.1% = 1,000 additional people will get cancer, and about half of them will die of it. That's why atomic technology is regulated so tightly – the individual risks are very small, but the collective risks can be very large.
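
If you want to play with those numbers yourself, here's a quick back-of-the-envelope sketch in Python. The 1%-per-100-mSv slope is the figure quoted above; the 10 mSv repair threshold in the LT version is just one value from the 10–50 mSv range mentioned earlier, and the whole thing is an illustration of the arithmetic, not a real dosimetry model.

    # Back-of-the-envelope excess-cancer arithmetic for the two hypotheses.
    # Illustrative only: the slope and threshold are the rough figures
    # quoted in the text, not a real dosimetry model.

    LNT_SLOPE = 0.01 / 100.0   # ~1% added lifetime cancer risk per 100 mSv

    def excess_cancers_lnt(people, dose_msv):
        """Expected excess cancers if risk stays linear all the way to zero dose."""
        return people * dose_msv * LNT_SLOPE

    def excess_cancers_lt(people, dose_msv, threshold_msv=10.0):
        """Same slope, but no added risk below a repair threshold (one simple
        way to model the LT hypothesis; the threshold value is a guess)."""
        if dose_msv <= threshold_msv:
            return 0.0
        return people * (dose_msv - threshold_msv) * LNT_SLOPE

    population, dose = 1_000_000, 10.0            # a million people at 10 mSv each
    print(excess_cancers_lnt(population, dose))   # 1000.0 extra cancers under LNT
    print(excess_cancers_lt(population, dose))    # 0.0 -- below the threshold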

But this assumes LNT is correct. If the LT hypothesis is right, and you expose a million people to 10 mSv of radiation, then the result will be: absolutely nothing.   But, right now, there's really no way to tell which hypothesis is correct.   There are no reliable studies of how many people died as a result of Chernobyl or Fukushima based on actual death statistics – the numbers thrown around are based on estimating radiation doses, and then using your choice of hypothesis to translate those doses into numbers of deaths.   Except for a small number of people at Chernobyl who worked in the reactor zone itself and were exposed to massive radiation doses, the variations in cancer incidence are so small that epidemiologists cannot distinguish them from random noise.   That doesn't mean that Fukushima or Chernobyl didn't kill people, just that – except for those few at Chernobyl – we can't prove that they did.   The only real hope of definitively resolving the controversy is by developing a better understanding of cellular biology, of how DNA damage and repair works in the body, and that is not going to happen in the near future.

In the meantime, Western regulatory agencies are sticking with the LNT hypothesis out of an abundance of caution.   And, in the real world, I think that is the right decision.   If we regulate based on LNT, and we're wrong, then we've wasted a lot of money.   If we regulate based on LT, and we're wrong, then people die.

But we're not talking about the real world. We're talking about fiction. In fiction, we can make whatever assumptions we want. And if we assume the LT hypothesis is true, then a reactor meltdown is just an industrial accident – it's not a good thing, but it's no worse than any other accident involving toxic chemicals. And this is the key to making an atompunk world possible.

And a lot of the wild atomic technology imagined in the '50s really is viable if we make that assumption.   Electricity from fission really can be cheap.   Atomic-powered airplanes and spaceships really can be safe and practical.   We really can dig canals with hydrogen bombs.   Some things still aren't feasible, and there are still question marks attached to some of these ideas that have nothing to do with safety or economics.   But a lot of seemingly crazy things suddenly become good ideas.

The A-Plane

The first of these technologies I'd like to talk about is the atomic-powered airplane, which is even cooler than it sounds. I'm going to call it the A-plane, by analogy to the A-bomb, because I think it sounds cooler that way. Even in our own world, the Aircraft Nuclear Propulsion (ANP) program was an enormous undertaking. It lasted fifteen years, from 1946 to 1961, and cost the equivalent of about $20 billion in today's money. The goal was to build an atomic-powered bomber able to remain aloft for weeks.

Today, at first glance, the idea seems insane. What if the thing crashes? But this is where the distinction between LNT and LT comes in. Under the LT hypothesis, if an A-plane crashes, only the immediate area around the crash site has to worry about radioactive contamination. Again, under the LT hypothesis, a radiation accident is just an industrial accident like any other: not good, but not a major catastrophe. And because the atomic energy community of the '50s was tacitly or explicitly operating under the LT hypothesis, it was willing to pursue the concept at all.

The Air Force and Atomic Energy Commission studied two approaches: the direct cycle, in which air from the jet intake passes directly through the reactor, and the indirect cycle, in which heat from the reactor is carried by molten metal or pressurized gas to a heat exchanger in the jet. The direct cycle had the advantage of relative simplicity, and came closest to flying – several prototype direct-cycle turbojets were static-tested at the National Reactor Testing Station in Idaho. The indirect cycle had the potential for higher performance: a sodium- or gas-cooled reactor could be smaller than an air-cooled reactor, and therefore have a higher thrust/weight ratio.

And the thrust/weight ratio was very important. One of the big problems with the ANP program was that, even if you count the fuel in a conventional turbojet as part of the “engine”, an atomic engine is usually going to weigh more than a conventional engine and produce less thrust. By the end of the program, ANP knew how to build an atomic-powered airplane, but that airplane would have been big, expensive, and slow – not what you'd want penetrating Soviet airspace. A big part of the problem was that the Air Force could never make up its mind whether it really wanted the plane – it was perpetually ramping up and cutting back funding, shifting into crash-priority mode and then scaling back down to feasibility studies.

But while the A-plane wouldn't make a good bomber, there are other roles in which it could be very effective. For example, an A-plane could be a very effective missile carrier: it could remain airborne for long periods – and therefore be immune to a Soviet first strike – and launch its payload from outside Soviet airspace. After the initial exchange, it could also perform follow-up reconnaissance and strikes, once the fuel supplies of conventional aircraft had been exhausted. ANP engineers did suggest this idea during the program, under the name CAMAL – a rather awkward acronym allegedly standing for “Continuous Airborne Alert Missile Launcher and Low-Level Penetration Airplane”. But the Pentagon wasn't interested.

Let's suppose that the Pentagon provides reliable support for the A-plane instead of our own timeline's oscillation.   This could be accomplished by just changing Eisenhower's Secretary of Defense – part of the problem for the program historically was that Secretary Quarles thought the whole idea was absurd.   The really interesting part is not what could be built in the 1950s, but where this would push airplane technology in the '60s and '70s.

The thrust/weight ratio of an atomic engine improves – a lot – as the weight of the airplane increases. A heavier airplane needs a more powerful engine. The power of an atomic engine is proportional to its volume, but the weight of that engine is dominated by the weight of its radiation shielding, which is proportional to its surface area. And the ratio of surface area to volume decreases as the volume increases – so the thrust/weight ratio improves as the engine gets bigger. For a big enough plane, the thrust/weight ratio of an atomic engine is better than that of a conventional engine.
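
If you'd like to see that cube-square argument in numbers, here's a toy Python sketch. Every constant in it is an arbitrary placeholder I made up – these are not real ANP engineering figures – the point is only that the shielding term grows with surface area while the power grows with volume, so the ratio keeps improving as the core gets bigger.

    # Toy cube-square illustration of why a bigger atomic engine gets a
    # better thrust/weight ratio.  All constants are arbitrary placeholders,
    # not real ANP engineering figures; only the scaling trend matters.

    import math

    POWER_PER_M3  = 5.0   # thrust-equivalent power per cubic metre of core
    CORE_PER_M3   = 1.0   # weight of the core itself per cubic metre
    SHIELD_PER_M2 = 3.0   # shield weight per square metre of core surface

    def thrust_to_weight(radius_m):
        volume  = (4.0 / 3.0) * math.pi * radius_m ** 3
        surface = 4.0 * math.pi * radius_m ** 2
        thrust  = POWER_PER_M3 * volume                       # power ~ volume
        weight  = CORE_PER_M3 * volume + SHIELD_PER_M2 * surface
        return thrust / weight

    for r in (0.5, 1.0, 2.0, 4.0, 8.0):
        print(f"core radius {r:3.1f} m -> thrust/weight {thrust_to_weight(r):.2f}")
    # The ratio climbs as the reactor grows, because the shield term only
    # grows with surface area while the thrust grows with volume.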

How big?   That depends a lot on the specifics of your reactor.   But we're typically talking about a plane north of a million pounds takeoff weight, and the bigger the better.   NASA studies in the '60s and '70s envisioned civilian A-planes of ten to twenty million pounds takeoff weight.   The heaviest plane that has ever flown in our world, the Antonov An-225 Mriya, has a maximum takeoff weight of about 1.4 million pounds.   So these are enormous aircraft – but for us, that's an asset, not a liability.   Imagine a monster flying wing, thirty times the size of the 747 – big enough to carry a Saturn-V rocket – slowly cruising past the sunset.   Lockheed, at one point, floated the idea of an atomic-powered flying aircraft carrier based on this concept.   How much cooler can you get?

And it's not just about style. We can't accurately estimate the cost of machines so distant from anything we've built historically – airplanes big enough that we'd have to rebuild most of the nation's airports to handle them. But the NASA studies claimed that, in the ten-to-twenty-million-pound range, these monster machines could transport cargo at a cost competitive not just with conventional aircraft, but with trucks. Even if they don't reach that mark, they could mean a world where vastly more of our cargo travels by air – where our entire economy is oriented around air transport instead of highways.

But where things get really interesting is when we start looking at what we could do with an atomic turbojet on the ground.

Too Cheap to Meter

The original “too cheap to meter” comment was actually talking about fusion, not fission, but it's too iconic a comment to pass up.

Historically, the cost of atomic energy has been, at best, disappointing. Even in places where atomic energy is competitive with coal and gas, it's only competitive. It was supposed to mark a new industrial revolution, in which cheap energy would make all manner of new goods and services possible. Maybe we can't get quite that far, but we can get a lot closer than we did.

To begin with, let's talk a little bit about the history of atomic energy. The vast majority of atomic reactors in the West evolved from the reactors developed to power submarines, called Light Water Reactors, or LWRs. These reactors use solid fuel elements made of enriched uranium oxide ceramic, cooled and moderated by pressurized water, with the heat turned into electricity in a steam turbine. These designs were chosen for two reasons: first, because they're (relatively) simple, and second, because they could piggy-back off of the enormous amount of research that had already been done on these designs for submarines.

But, in retrospect, these weren't necessarily the best kind of reactor we could have chosen for making electricity. There are a lot of ways to split an atom; I once tried to make a list of all of the seriously proposed types, and stopped after I hit forty. To illustrate, let's break our reactors down by fuel type, coolant, and moderator – a vast over-simplification if ever there was one:


Fuel: Uranium Oxide Ceramic, Solid Metal Alloy, Carbide or Nitride Ceramic, Molten Metal Alloys, Molten Fluoride Salts, Molten Chloride Salts, Uranium Hexafluoride Gas, Sulfate Salts Dissolved in Water, or Uranium Plasma

Coolant: Water, Heavy Water, Carbon Dioxide, Helium, Molten Sodium, Molten Lead, Hydrocarbons, Molten Salts, or Molten Sulfur

Moderator: Water, Heavy Water, Graphite, Metallic Beryllium, Beryllia Ceramic, or Unmoderated

We could also break them down by how we turn the heat into electricity: whether we use a steam turbine, a gas turbine, or something more exotic, like direct energy conversion or a reciprocating engine. We could also break them down by whether we run on plain uranium or on a breeding cycle like uranium/plutonium or thorium/uranium. And a lot of those entries could be split up further – there's a big difference, for example, between the highly dilute uranium fuel alloy of Brookhaven's Liquid Metal Fueled Reactor concept and the concentrated plutonium alloy of the Los Alamos Molten Plutonium Reactor Experiment. There are many, many ways to split an atom.
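
Just to give a sense of how big that design space is, here's a quick enumeration of the categories above plus the power-conversion and fuel-cycle axes. Crossing the lists blindly overcounts enormously – most of the combinations are physical nonsense – but it makes the point that “how should we split the atom?” had an awful lot of possible answers.

    # Naive size of the reactor design space sketched above.  Most of these
    # combinations make no engineering sense, which is why the list of
    # *seriously* proposed types only runs to a few dozen.

    from itertools import product

    fuels = ["uranium oxide ceramic", "solid metal alloy",
             "carbide or nitride ceramic", "molten metal alloy",
             "molten fluoride salt", "molten chloride salt",
             "uranium hexafluoride gas", "aqueous sulfate solution",
             "uranium plasma"]
    coolants = ["water", "heavy water", "carbon dioxide", "helium",
                "molten sodium", "molten lead", "hydrocarbons",
                "molten salt", "molten sulfur"]
    moderators = ["water", "heavy water", "graphite", "metallic beryllium",
                  "beryllia ceramic", "unmoderated"]
    converters = ["steam turbine", "gas turbine", "direct conversion",
                  "reciprocating engine"]
    fuel_cycles = ["once-through uranium", "uranium/plutonium breeder",
                   "thorium/uranium breeder"]

    designs = list(product(fuels, coolants, moderators, converters, fuel_cycles))
    print(len(designs))   # 5832 combinations before any engineering judgement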

A lot of these proposals can be discarded because, in retrospect, they were clearly bad ideas. But many of them might have been good ideas – possibly better ideas than the LWRs we ended up building. The trouble is that it takes a lot of money to turn a reactor concept into a working prototype, and even more money to turn a prototype into a design that can be rolled out commercially. And that money just wasn't there for most of these proposals – as I mentioned, a big part of why the LWR became the dominant reactor type was that a huge amount of money had already been spent by the Navy. For the follow-up to the LWR, the US Atomic Energy Commission chose the liquid-metal-cooled fast breeder reactor (LMFBR), and the cost of trying to build a commercial LMFBR prototype led to the cancellation of its other reactor research programs. But the LMFBR was a massive flop – far too expensive to compete, and with questionable safety characteristics (the coolant is flammable). By the time Congress finally killed the LMFBR in the '80s, research into the other alternatives was long dead in the United States; it has only recently begun to tentatively revive.

But we were just talking about the A-plane. If A-planes fly, then a second class of reactors is going to get that same kind of development subsidy, this time from the Air Force. A direct-cycle atomic engine could be used as the basis for a gas-cooled power reactor, and in the long run such a machine could have significant advantages over a light water reactor. Power from these reactors probably wouldn't be “too cheap to meter”, but, between lighter radiation regulations and a better base reactor technology, it could be much cheaper than the power we get from LWRs today – cheap enough to out-compete coal and gas.

And that has its own implications, most of which are beyond the scope of this essay.   To mention a few, though: cheap electricity means cheap plastic trinkets are replaced with cheap aluminum trinkets.   Large-scale desalination is feasible as a solution to water shortages.   Carbon emissions are radically lower – we'd still burn gasoline in our cars, but we would see natural gas drastically decline, and coal perhaps eliminated altogether.   Cheap, clean energy would have enormous, and to some extent unforeseeable, implications.

The Atomic Shovel

Like the atomic-powered airplane, Project Plowshare was a massive program in its day that is now largely forgotten. Running from roughly 1958 to 1975, its aim was to find peaceful uses for atomic explosives. Plowshare looked at a lot of possibilities – it actually set off more bombs for peaceful scientific research than for any other purpose, a fact even most historians of the program overlook – but it is mostly known, then and now, for its plans for “geographical engineering”: earth-moving on a massive scale. Edward Teller, the project's foremost backer, joked that “if your mountain is not in the right place, drop us a card.” Plowshare engineers hoped to use atomic explosives to excavate harbors, mountain passes, mines, and, most famously, canals – for much of the project's life, its primary justification was digging a new, sea-level “Pan-Atomic” Canal to replace the too-narrow Panama Canal.

Frankly, even in a world where radiation isn't a big deal, it's not at all clear that atomic geographical engineering is practical.   There are two main issues.   The first is economic: even in a world using the Linear Threshold hypothesis, an area where atomic excavation is being used is going to have to be evacuated for about two years until the initial radiation dies down.   Plus, ground shock means that any buildings in the area will suffer serious structural damage.   These two facts mean that you can't use atomic excavation anywhere near where lots of people live – but if not many people live there, what's your new canal for?   There are a few places where it might make sense – such as excavating harbors for mines in remote locations – but it's unclear if there are enough such places to justify the cost of developing the technology in the first place.

But let's assume this issue can be overcome. The Russians apparently overcame it in our own world: their equivalent of Plowshare continued until they ended nuclear testing in 1990, and they seem to have found it economically useful, though they mostly used it for less glamorous tasks like deep seismic sounding, excavating underground storage tanks, and stimulating oil and gas production. So it's not impossible that this could make economic sense.

A more subtle problem, but in the long run a more dangerous one, is proliferation. If the United States is using atomic devices for excavation, it becomes much more difficult to tell other countries that they can't have them. Even in our own timeline this was a serious problem – India, for example, initially insisted that its first atomic test was for peaceful purposes, and the Nuclear Non-Proliferation Treaty includes a clause promising that the weapons states will make the benefits of “peaceful nuclear explosives” available to non-nuclear states on request.

But states aren't the only problem.   We're talking about building and detonating dozens – perhaps hundreds – of atomic explosives per year, and shipping them all over the world for industrial use.   I would be concerned about one of them disappearing, and then reappearing inside a crowded city.   Now, the NATO militaries – and probably the Russians – did keep large numbers of tactical nuclear weapons in relatively insecure facilities for decades without losing any (that we know of), but it seems like pushing one's luck.

Project Orion

I'm going to keep this section brief, because Project Orion is one of the few historical atompunk concepts that is fairly well-known.   For those who haven't heard of it, Orion would have used atomic bombs to launch spacecraft.

An Orion spaceship would be built around a gigantic metal plate – the “pusher plate.” Atomic bombs would be dropped through a hole in the plate and detonated behind it, and the shock from each explosion would push the ship forward.

On paper, such a ship is an extremely efficient way both to launch material from Earth into space and to move it around once it gets there. In fact, in a purely technical sense, it is by far the best space drive that could be built with present technology, by almost any measure. A single Orion launch could lift thousands of tons of cargo into orbit at a low cost per kilogram and, once in orbit, carry it anywhere in the solar system. A world using Orion drives would have had a manned expedition to Mars by now, at the very least.
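
To see where those performance claims come from, here's the standard rocket equation with some ballpark specific impulses plugged in. The 3,000-second figure for an Orion-style pulse drive is my own illustrative assumption – published estimates vary widely with the bomb design – but even a rough number shows why the payload fractions look so different.

    # Tsiolkovsky rocket equation: what fraction of the liftoff mass must be
    # propellant to reach roughly low Earth orbit.  The specific impulses
    # below are ballpark assumptions for illustration, not figures from the
    # historical studies.

    import math

    G0 = 9.81           # m/s^2
    DELTA_V = 9400.0    # m/s, rough surface-to-orbit requirement with losses

    def propellant_fraction(isp_s):
        """Propellant mass / initial mass needed for DELTA_V (ideal rocket)."""
        exhaust_velocity = isp_s * G0
        return 1.0 - math.exp(-DELTA_V / exhaust_velocity)

    for name, isp in [("chemical rocket (H2/O2)", 450),
                      ("nuclear thermal rocket", 900),
                      ("Orion pulse drive (assumed)", 3000)]:
        print(f"{name:28s} Isp {isp:5d} s -> "
              f"{propellant_fraction(isp):.0%} of liftoff mass is propellant")

At chemical specific impulses, nearly ninety percent of the vehicle on the pad has to be propellant; at Orion-class numbers, most of the liftoff mass can be structure and cargo – which is where the “thousands of tons to orbit” claims come from.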

In our world, Orion was killed by the Partial Test-Ban Treaty of 1963, which effectively prohibits atomic detonations in the atmosphere or in space. The PTBT was driven primarily by concern over the health effects of atomic testing, so in a world operating under the LT hypothesis it would probably never be signed – and it is quite possible that Orion would fly.

That said, I would like to point out that there are still a number of unanswered questions about Orion's technical feasibility – particularly whether the energy from the explosions can be properly directed, whether the pusher plate would hold up to repeated detonations, and just how much all of this would cost. Still, compared to the other assumptions we've made, these are relatively small, and the technical hurdles for Orion are no worse than for any other proposed way of getting into space for dollars per kilogram.

What We Can't Do

So what parts of the atompunk dream aren't possible, at least not in the 20th century?

Well, atomic-powered cars are definitely Not Happening. It might just barely be theoretically possible to build a reactor small enough to power a car – maybe – but it definitely couldn't be built at a price anyone could afford, even a billionaire. And you'd have to fuel it with weapons-grade fuel, which the government would presumably frown on. So that's not going to happen.

Atomic-powered locomotives are technologically possible, barely, but they're unlikely to make economic sense even under the best possible assumptions. Miniaturized reactors small enough to run a train could probably be built, but it's unlikely they could compete on cost with building the reactor by the side of the track and electrifying the line.

The traditional nuclear thermal rocket probably isn't practical either.   A nuclear thermal rocket (NTR) uses a nuclear reactor to heat a fluid – usually hydrogen – which is then used as exhaust.   Unlike an Orion, which looks like a gigantic plate on top of an explosion, an NTR would look like a “classical” rocket like the Saturn-V.   Like the A-plane, NTRs were the focus of a major AEC/NASA research effort in the '60s that is now largely forgotten.   However, Orion drives outperform nuclear thermal rockets on every metric.   If you take Orion off the table, it is theoretically possible to build a nuclear thermal rocket with a thrust/weight ratio high enough for Earth launch, but it requires either liquid oxygen augmentation – which eliminates almost all of the efficiency advantage – or materials that don't exist today, and probably won't exist in this half of the 21st century.   The most we might see would be a nuclear-powered upper stage with a conventional lower stage, like the TIMBERWIND proposal in the 1980s, but it's tough to make that make sense economically.   Even once you reach orbit, nuclear thermal rockets might have a niche, but it won't be a big one – nuclear- or solar-powered ion drives can do most of the same things, but better.   So, all in all, it's tough to see them finding a role to play.   Which is a shame, because I really prefer the aesthetics of the nuclear rocket to the Orion – a gleaming, needle-nosed rocket is such a classic atompunk motif.
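
One rough way to see the thrust-versus-efficiency squeeze behind that oxygen-augmentation point: for a fixed reactor power, the jet power 0.5 × (mass flow) × (exhaust velocity)² is fixed, so pushing more propellant mass through the engine – say, by injecting oxygen along with the hydrogen – buys thrust only by throwing away exhaust velocity. The sketch below ignores the chemical energy of actually burning that oxygen and every real-engine loss, and the 1 GW power level is a placeholder; it's only meant to show the direction of the tradeoff.

    # Fixed-power tradeoff: if the reactor delivers a constant jet power
    # P = 0.5 * mdot * ve**2, then raising the mass flow raises thrust as
    # sqrt(mdot) while exhaust velocity (and so specific impulse) falls as
    # 1/sqrt(mdot).  Ignores the chemical energy of burning injected oxygen
    # and all real losses; the 1 GW power level is a placeholder.

    import math

    G0 = 9.81
    JET_POWER_W = 1.0e9   # assumed reactor power going into the exhaust stream

    def thrust_and_isp(mdot_kg_s):
        ve = math.sqrt(2.0 * JET_POWER_W / mdot_kg_s)   # exhaust velocity, m/s
        return mdot_kg_s * ve, ve / G0                  # thrust (N), Isp (s)

    for mdot in (10, 40, 160, 640):
        thrust_n, isp_s = thrust_and_isp(mdot)
        print(f"mass flow {mdot:4d} kg/s -> thrust {thrust_n / 1000:6.0f} kN, "
              f"Isp {isp_s:5.0f} s")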

Concluding Remarks

I hope that this essay has sparked some ideas.   Before I leave you, though, I'd like to call back to something I mentioned at the start.

The Linear Threshold hypothesis really might be true. We don't know, and we can't know right now, but scientists continue to work on understanding how ionizing radiation affects human cells. Someday we may finally figure it out – we may prove one of these dueling hypotheses correct. And the right answer might turn out to be the Linear Threshold hypothesis.

We may wake up one morning, turn on the internet, skim through the science news – and discover that quietly, subtly, the world has changed.   That a lot of regulations we thought were necessary really weren't.   That atomic energy really can be cheap, and that Orion ships really can be built without killing people.   That we really can have the world our parents dreamed of.

This isn't just alternate history.   It's a possible future.

So keep an eye on the science news.   Because you never know.

* * *

Mark J. Appleton blogs on atompunk history at Atomic Skies.
