In a groundbreaking speech delivered to the U.N. Security Council, Anthropic’s co-founder, Jack Clark, issued a potent warning about the uncharted territory of artificial intelligence (AI). Clark, formerly of OpenAI, the creator of ChatGPT, expressed concerns over the safety of AI systems and questioned the ability of big tech companies to ensure this safety.
According to Clark, the rapid expansion and growing complexity of AI systems have made them both unpredictable and open to misuse. In his virtual presentation, he traced the journey of AI from its early days to 2023, by which time AI systems had surpassed military pilots in simulations, enhanced nuclear fusion reactors, and significantly boosted production line efficiency.
Clark’s statement could not be timelier or more relevant. AI applications are growing dramatically, but not without potentially catastrophic risks. The same AI that advances revolutionary science could see its insights into biology exploited to create biological weapons.
AI SAFETY UPDATE: The handful of Big Tech companies leading the race to commercialize AI can't be trusted to guarantee the safety of systems we don't yet understand, according to an AI company executive. https://t.co/twfcVZYwAx
— NEWSMAX (@NEWSMAX) July 19, 2023
Clark highlighted an uncomfortable reality: the core players developing these AI systems are primarily big tech companies. With their extensive resources, the tech giants hold the reins of AI development. Yet Clark doubts their capacity to guarantee the safety of the systems they pioneer. Without reliable standards for evaluating these AI systems, he suggests, the door is open to “regulatory capture” that could jeopardize global security.
Conservatives need to understand the implications of Clark’s warning. It implies a need for stronger regulation and oversight, not just domestically but on a global scale. Regulatory measures are not meant to hinder innovation but to protect societies from potential misuse and exploitation.
Citing AI proposals from the United States, China, and the European Union that emphasize the importance of safety testing and evaluation, Clark insisted that proper assessment and measurement are vital to any practical regulatory approach. Such measures would hold companies accountable and allow them to gain the public’s trust.
This skepticism toward big tech should not be mistaken for an attack on AI. On the contrary, AI has immense potential and is an invaluable tool for many parts of society, including science, industry, and defense. But like any tool, left unchecked it has the potential to cause harm. For this reason, Clark and others argue that appropriate oversight is crucial.
His words resonated with U.N. Secretary-General António Guterres, who echoed the need for global standards to maximize the benefits of AI while mitigating its risks. Professor Zeng Yi, a director at the Chinese Academy of Sciences, emphasized the crucial role of the U.N. in establishing a framework to maintain global peace and security amid AI development.
All of this underscores the need for a global, unified approach to the opportunities and challenges posed by AI. As Clark said, “Any sensible approach to regulation will start with having the ability to evaluate an AI system for a given capability or flaw.” We hope his words spur governments and the U.N. to act swiftly and decisively.