‘It’s Cool’ Isn’t Enough: The Need for Responsible Scientific Innovation
The device on which you are reading this article exists thanks to a groundbreaking experiment conducted in 1969, when researchers in the United States connected two computers roughly 560 kilometers apart, creating the first link in what would evolve into the internet. That foundation was transformed again in 1989, when English scientist Tim Berners-Lee invented the World Wide Web and offered it to the public at no cost.
Unlike those earlier advances, today’s scientific innovation often seems to have lost its spirit of openness, with research increasingly shifting toward privatization. A significant turning point was the 1980 US Bayh-Dole Act, which allowed universities and businesses to patent inventions arising from publicly funded research. As a result, intellectual property has increasingly concentrated in the hands of a wealthy few, and billionaire tech moguls now act as gatekeepers of discoveries in fields ranging from healthcare and energy to the rapidly advancing realm of artificial intelligence (AI).
Meanwhile, tech leaders comfortably invoke what philosopher Heather Douglas describes as the “old social contract for science,” which minimizes accountability for their actions. This contract rests on three main assumptions: that basic science (like splitting the atom) is separate from applied research (like creating an atomic bomb); that scientists engaged in basic research bear no responsibility for its outcomes; and that public funding should support basic science without regard to its industrial applications.
This old contract, tacitly in force since the Industrial Revolution, has primarily served the scientific community. It allowed figures like J. Robert Oppenheimer to race toward nuclear weapons with a clear conscience. But the viewpoint is also naive: it permits capitalists to exploit scientific advances while obscuring the social harm those advances can cause.
Tech entrepreneurs like Jeff Bezos and Elon Musk promote themselves as innovators, distancing themselves from criticism of their business practices. Bezos has expressed a desire to be seen as “inventor Jeff Bezos,” while Musk cultivates a persona as a modern-day scientific pioneer undertaking revolutionary projects. Others, like OpenAI CEO Sam Altman, wield the old social contract more directly, arguing that regulation should trail scientific advances and that oversight should be minimal, akin to “weapons inspectors” for AI.
How committed these entrepreneurs are to even such oversight is questionable, especially given Altman’s past threat to halt OpenAI’s operations in Europe over regulatory hurdles. “Scientists cannot operate in a responsibility-free manner,” Douglas asserts, advocating a new framework that emphasizes accountability.
Douglas proposes accountability mechanisms tied to what she calls “responsibility floors,” the most fundamental being to “not make the world worse.” Such standards give us a way to assess technologies like the generative AI behind tools such as ChatGPT. While these systems may offer benefits in niche areas like cancer screening, Douglas warns that their predominant uses, from deepfake pornography to misleading political advertisements, can be harmful, with the risks often overshadowing the benefits.
Regulating generative AI matters in its own right, but it also serves as a preparatory exercise for addressing the potential existential threat posed by artificial general intelligence (AGI), which would go beyond merely imitating human intelligence and might eventually surpass it. Geoffrey Hinton, dubbed the “godfather of AI,” suggests that AGI could arrive within a couple of decades, and he stresses that amid these advances, scientists deserve protection from themselves.
Hinton has acknowledged that during his years at Google he dismissed ethical concerns, once paraphrasing Oppenheimer’s dictum that when you see something that is “technically sweet,” you go ahead and do it and worry about the implications later. His change of heart reflects how much the scientific community’s approach to responsibility needs to change.
As Douglas puts it, “The old contract really undermined public trust in science.” The public may find it challenging to trust a scientific community that appears indifferent to the societal impacts of its innovations. There is an urgent need for a new social contract in scientific research that prioritizes collective well-being over unchecked innovation.