OpenAI's Altman and Ethereum's Buterin Present Differing Perspectives on AI Development
This week, two prominent figures in technology laid out competing visions for the future of artificial intelligence, highlighting a widening divide between the push for rapid innovation and calls for stronger safety measures.
In a recent blog post, OpenAI CEO Sam Altman said the company has tripled its user base, with ChatGPT now exceeding 300 million weekly active users, growth he framed as part of OpenAI's push to achieve artificial general intelligence (AGI) in the near term.
"We are now confident we know how to build AGI as we have traditionally understood it," Altman wrote, suggesting that AI agents could join the workforce as soon as 2025 and materially boost companies' productivity.
Beyond AI agents and AGI, Altman indicated that the company is beginning to turn its attention to "superintelligence." He gave no firm timeline for reaching either milestone, and OpenAI did not immediately respond to a request for comment.
Earlier the same day, Ethereum co-founder Vitalik Buterin offered a contrasting view, proposing that blockchain technology be used to build safety mechanisms for advanced AI systems, including a "soft pause" capability that could temporarily restrict industrial-scale AI operations if warning signs emerge.
Using Blockchain for AI Safety
Buterin's proposal centers on "d/acc," short for decentralized/defensive acceleration. Unlike accelerationist approaches that pursue rapid growth at all costs, d/acc favors a more deliberate path that prioritizes safety and human agency.
Reflecting on how d/acc has developed, Buterin argued the philosophy could provide a measured framework for approaching AGI and superintelligence, built on existing blockchain tools such as zero-knowledge proofs.
Under Buterin's vision, major AI systems would need weekly approval from three international bodies to keep running. Crucially, any such directive would apply to every covered system at once, so no single operator could be singled out.
"The signatures would be device-independent, potentially even requiring proof that they were posted on a blockchain, providing an all-or-nothing approach," Buterin explained. Such a system would function like a master switch, allowing all approved AI systems to operate or none at all. This effectively prevents selective enforcement actions.
Buterin remarked, "Until such a critical moment happens, merely having the capability to soft-pause would cause little harm to developers," describing this safety mechanism as insurance against possible disasters.
Meanwhile, OpenAI's growth from 100 million to 300 million weekly users in just two years underscores how quickly AI adoption is accelerating.
Altman acknowledged the difficulty of building "an entire company, almost from scratch, around this new technology." The contrasting proposals from Altman and Buterin underscore an ongoing industry debate over how AI development should be managed and controlled.
Even advocates of such a global control system acknowledge that implementing it would require exceptional collaboration among leading AI developers, governments, and the cryptocurrency community.
As Buterin put it, "A year of 'wartime mode' can easily be worth a hundred years of work under conditions of complacency." If AI capabilities must be limited, he argued, it is better to impose those limits on everyone equally than to let one entity dominate the rest.