Musk. Hawking. Gates. The tech visionaries have redoubled their warnings about how we could destroy ourselves with technology. But they’re not talking about deliberately pushing buttons to trigger annihilation—Kennedy and Khrushchev could have done that back in ’62. They mean accidentally killing ourselves with something that we invent for our own benefit, that later acquires a mind of its own and a shirty attitude.
But there’s no need to wait for the next big existential threat. We’re already designing a whole new world of ways to accidentally off ourselves, and no one seems worried about it.
As biochemists concoct new life-extending medications, calls to Poison Control from people who swallowed the wrong pills, or the wrong number of the right pills, have recently doubled. We put a smartphone in every hand, and now more than 1,000 distraction-related crashes happen on our roads every day, a number that keeps climbing. Kids and pets succumb to heatstroke inside cars that are more environmentally sealed than ever; we're on track to set a new record for hot-car deaths in 2017. Falling off ladders? Even those numbers are rising, and if you're wondering how that could possibly be related to technology, well, read on.
It’s a little embarrassing to have to admit that accidental deaths are increasing in a world that our forebears made safer for hundreds of years. Our grandparents saw the invention of the automobile, the blender, the bulldozer, and the radial arm saw—and they made them all safer. Is what we’re inventing today really more dangerous than that stuff? We may not have to worry about sentient A.I. any time soon, but our innovations are quietly outpacing our ability to figure out how to not get wiped out by them.
We rely on instincts—our common sense—to tell us what’s dangerous and what’s safe. Before technology came along, hazards were mostly self-explanatory. Bears, snakes, sharp sticks, cliffs—it’s hard to get any of this stuff wrong. And our instincts became exquisitely tuned to them across millennia. But the hazards we face today can be more subtle, harder to recognize, and even counter-intuitive.
We instinctively take a step back when we see something large, but technology is turning our fear of size on its head. Technology progresses by packing more and more power into smaller and smaller packages. One errant wave of a laser pointer and you could bring down an airliner. The number of people who visit emergency rooms following physical interactions with television sets is rising. It's not hard to see why. In the old days, TVs weighed a ton, and they sat in the corner, mostly unbothered. Now TVs are light and portable. They invite us to pick them up, set them on stuff, or hang them on the wall. And they're shaped like giant guillotine blades. Put the TV in a phone and it's more dangerous still. The lighter the TV, the less we fear it, and the higher the body count.
Invisible hazards lurk in the logic and code used to implement our technology. We place blind trust in complex systems that reveal little about how they work. Our medication prescriptions pass through computer systems accessed by professionals who majored in something other than computer science. Patient safety advocates loudly remind us that we are potential victims of human and machine error and that we need to think and ask questions before we swallow pills. But not many people are heeding their advice, and medication errors run rampant.
More insidious still, we create technology that assumes we have superhuman cognitive abilities, and consumers seem willing to play along. Put a phone, a latte, and a steering wheel in front of us at 80 mph, and we're proud multitaskers. Sure, other people on phones are dangerous, but we can smoothly switch our attention and notice when anything scary pops up, right? A psychologist in a gorilla suit debunked that idea years ago, but we pound our chests in defiance. We imagine ourselves able to accurately assess risks in complex situations after watching a news story about a 15-pound flying sausage crashing through the roof of someone's house. We even think we're good at seeing oncoming trains while wearing earbuds.
The hazards of yesteryear gave us immediate feedback when we screwed up. When we misjudged a bear, the bear instantly let us know. But technology can place the consequences of our missteps at a distance. Delayed reactions, complex chain reactions, hidden reactions—these are all part of how technology works. But adapted to a world of instant feedback, we cruise through the day on autopilot, seldom stopping to consider what could go wrong later down the road. Worse still, technology is even hacking our feedback system. We get a blast of dopamine when we check our phone behind the wheel. Edith Harbaugh, whose company, LaunchDarkly, specializes in the controlled release of new technology into the wild, pointed out that we’re creating a dangerously lopsided system of actions and rewards: “We’re not given snow cones every time we do something safe.”
Technology is even reshaping our safety culture. My grandfather, a craftsman, taught me how to use tools. If I used a ladder wrong, I was quickly corrected. But then technology moved more than half of all grandparents into office jobs. Today people are firing up tools again in record numbers, because DIY is in fact cool and arguably good for the soul. But we're getting hurt more often while we do it.
Few stop to realize how much we have come to rely on the work of concerned, dedicated consumer product designers and the lawyers who incessantly sue them. But today we’re inventing stuff faster than we can design safety features. A town in Germany just installed sidewalk traffic lights to alert phone-immersed pedestrians. But what happens when wearables and VR kits come along and we’re looking up and through a device rather than down at it?
Even the government stepped in on our behalf to create safety standards. But technology may now be outpacing our ability to establish new standards. There's a scene in Anchorman 2 in which Will Ferrell flips on the cruise control and leaves the driver's seat of his motorhome. When Paul Rudd points out that cruise control only handles the speed, not the steering, chaos ensues. But that confusion isn't just a Hollywood comedy routine. Kelly Funkhouser at the University of Utah recently inventoried the names that car manufacturers give to their speed control, lane-keeping, and blind-spot monitoring functions. She found that manufacturers use similar names for different combinations of these functions. Hop in the wrong rental car, push a button, whip out your phone, and boom … you're Anchorman.
So if you're really worried about summoning the demon, relax. It's already too late. The age of life- and limb-threatening technology is upon us. But our ancestral tendencies may be preventing us from changing our ways, or even from seeing why we should. In the middle of Silicon Valley, where some of the smartest people live and work, I spotted one driver doing a video call on one phone while texting on another. Meanwhile, the numbers keep rising, and accidents are now the fourth leading cause of death in the U.S. (For the record, bear-related fatalities remain relatively constant.)
There's no magic Silicon Valley gadget that will fix this problem. Instead, we need to reconsider the very way we think about being careful: to learn the nuances of the more complex world we must now navigate, and the limits of what our creative, analytical, sometimes fallible minds can and can't get away with. The modern world is breaking everything we know about staying safe. If we harbor any hope of squaring off against the rise of the machines, we're first going to need to learn to survive the things we already have.
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.