Three Commandments for Technology Optimists

Technology can be a powerful force for good, but its impacts are unpredictable. On the 25th anniversary of WIRED, technologists should temper their enthusiasm with proper caution.

The impacts of new technologies are unpredictable: Inventors hyperbolize a revolutionary technology when it emerges, but it’s impossible for society to anticipate its long-term effects. Because new technologies are so powerful and incomprehensible, responsible technologists must practice what the poet John Keats called negative capability: “capable of being in uncertainties, Mysteries, doubts, without any irritable reaching after fact,” simultaneously cultivating caution and enthusiasm.

I’ve been thinking about technology enthusiasm for as long as WIRED has been published. Long an editor of competing magazines, I now help build life sciences companies, mostly in health and agriculture. Witness and participant, I’ve become (to my slight surprise) a person of a fixed, familiar ideology, one of those blithe bastards who think technology can solve big problems, grow wealth, and enlarge human possibilities. All I lack is the fleece vest.

I’m not an absolute fool about technology: a determinist or militant naif. I know that most technologies are contingent, neither necessary nor impossible, and that use makes a particular technology good or bad, according to circumstance and effect. I don’t forget Clay Shirky’s rueful dictum that “it’s not a revolution if nobody loses,” and I concede that the losers in any technologically wrought social transformation are often those with the least to lose. But I believe that any broadly adopted technology satisfies some profound human need. We are technology-making apes who evolve through our material culture; everywhere, people fly like birds, speed like cheetahs, and live as long as lobsters, but only because of our technologies. I’m confident that smart, generous policies can ameliorate technological unemployment and other displacements.


More religiously, while I recognize that technological solutions create new problems, I have faith those problems will find yet more solutions, in an ascending spiral of frustration and release—the greatest show on Earth that will never end, until we do.

Science, unlike technology, is an absolute good, and learning about the world is a kind of categorical imperative: an unconditional moral obligation that is its own justification. Those who expand human thought are especially heroic, because they replace obscurity with the truth, which, however shocking, is always salutary.

But science is only directly useful insofar as it leads to new technologies. In my new life, I often ask myself: With the time I have left, what novel technologies should I pursue? Which should I reject? Not long ago, the partners at my firm considered a technology that might prevent disease. But we chose to let someone else commercialize it, because its expansive powers and potential liability confounded us. Was our choice admirable or cowardly?

These are not easy questions, not least because there is no consensus—and surprisingly little systematic writing—about what technology is and how it develops. The best general book on the subject, Brian Arthur’s The Nature of Technology: What It Is and How It Evolves (2009), distinguishes between the singular use of the word technology, as a means to fulfill a human purpose (for instance, a speech-recognition algorithm or filtration process), and its generic use, as an assemblage of practices and components (technological “domains,” like electronics or biotechnology). Arthur, an economist at the Santa Fe Institute who refined models of increasing returns, writes, “A technology is more than a mere means. It is … an orchestration of phenomena to our use.”

If technology is functional and its value instrumental, then it follows that not all singular applications of technological domains are equal. Nuclear fission can power a plant or detonate a bomb. The Haber-Bosch process, which converts atmospheric nitrogen to ammonia by a reaction with hydrogen, was used to manufacture munitions in Germany during World War I, but half the world’s population now depends on food grown with nitrogen fertilizers. (Fritz Haber, who was awarded the 1918 Nobel Prize in Chemistry for coinventing the process, was a conflicted technologist—the father of chemical warfare in World War I. His wife, also a chemist, killed herself in protest, in 1915.) What’s more, designs possess a moral direction, even if technologies can be put to different uses. You can hammer a nail with a pistol butt, although that’s not what it’s for; a spade can kill a man, but it’s better for digging. Therefore, the first commandment for technologists is: Design technologies to swell happiness. A corollary: Do not create technologies that might increase suffering and oppression, unless you’re very sure the technology will be properly regulated.

However, the regulation of new technologies presents a special problem. The future is unknowable, and any really revolutionary technology transforms what it means to be human and may threaten our survival or the survival of the species with whom we share the planet. Haber’s fertilizers fed the world’s people, but also fed algae in the sea: Fertilizer runoffs have created algae blooms, which poison fish. The problem of unpredictable effects is especially acute with some energy and all geoengineering technologies; with biotechnologies such as gene drives that can force a genetic modification through an entire population in a few generations; and with artificial eggs and sperm that might allow parents to augment their offspring with heritable traits.

One tool to regulate future technologies is the precautionary principle, which in its strongest form warns technologists to “first do no harm.” It’s an alluringly simple rule. But in an influential paper on the principle, the Harvard jurist Cass Sunstein cautions, “Taken in [its] strong form, the precautionary principle should be rejected … because it leads in no directions at all. The principle is literally paralyzing—forbidding inaction, stringent regulation, and everything in between.” A weaker version, adopted by the nations that attended the Earth Summit in Rio in 1992, stipulates, “Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.” The threshold for plausible harm is left worryingly undefined in most weak versions of the principle. Nonetheless, the weaker version suggests a second commandment for technologists: In regulating new technologies, balance costs and benefits, and work with your fellow citizens, your nation’s lawmakers, and the world’s diplomats to enact reasonable laws that limit the potential damage of a new technology, as further evidence is forthcoming. It’s good that Facebook invented a global social network, but the company must now cooperate with regulators to limit how malefactors can hack our heads, maddening populations and hijacking elections.

A final commandment helps technologists choose which technologies to pursue. In a complicated fashion, new technologies are not only the “orchestration of phenomena to our use” but also tools of scientific inquiry. Brian Arthur notes, “Science not only uses technology, it builds itself from technology.” High-throughput screening speeds drug discovery, but it also provides new understanding of cancer genomics. Deep learning may one day permit driverless cars, but it will also untangle the mysteries of brain development. Thus, the third commandment for technologists: The best technologies have utility but also provide fresh scientific insights. Prioritize those.

On my desk at work, I have a replica of the skull of La Ferrassie 1, the most complete Neanderthal skeleton ever found. The original belonged to an adult male who lived 50,000 to 70,000 years ago. He walked as upright as you or me, and had you met him on a Paleolithic hillside in what is now the Vézère Valley in France, he would have seemed hauntingly strange: obviously human but stockier, broad-nosed, and beetle-browed. In ways we can only dimly guess, his manners would have been strange too. Surely, he could talk after a fashion, because he possessed the anatomy for speech and shared with us a gene, FOXP2, necessary for the development of language. But the archeological record tells us that he was also different from Homo sapiens. Around 70,000 years ago something switched on in the heads of modern humans—either a genetic mutation or a social adaptation; we don’t know what—that allowed us to design new stone tools that Neanderthals only clumsily imitated, as well as make cave art, flutes, wine, and, eventually, all the rest: the vault of King’s College Chapel, Cambridge; Darwin collecting his irrefutable facts; a cure for cancer; the mission to Mars.


Originally published in WIRED.
