The Future of Global AI Governance
The recent rise of generative AI has amplified calls for comprehensive global AI governance. The best path forward, however, remains unclear.
Some see an AI-powered future as a net positive, representing a chance to extend human capabilities and usher in an age of abundance. Others see a clear and present danger in AI systems that could one day surpass human expertise in all fields.
Either way, AI is advancing at an exponential pace and we may only have one chance to steer it toward its greatest promises and away from its potential perils.
Two Paths to AI Governance?
There has been growing global consensus that universal AI governance is necessary. So far, two paths to regulation have emerged.
Market-driven solutions to AI regulation may accommodate fast adoption and encourage innovation, but they also raise concerns about a “winner-takes-all” dynamic that could leave consumers with few options, concentrate power unhealthily, and even provoke geopolitical unrest.
Governmental processes likewise face limitations. Regulation and legislation are often too slow to keep up with exponential technologies. And if not coordinated, conflicting approaches could stifle growth, prevent interoperability, and create a hard-to-navigate patchwork of AI governance frameworks.
Market-driven and government-led paths to global AI governance share a critical blindspot:
They are both focused on the developers and deployers of AI rather than the AI systems themselves, and therefore fail to account for AI’s trajectory toward greater intelligence and autonomy.
Ultimately, any AI governance approach that fails to consider that AI systems might one day self-regulate is likely to hit a ceiling.
How do we govern AI systems that are on a path to governing themselves?
The first step is to ensure that AI systems can make sense of the world in the same way that we do. Only then will these systems be able to understand our laws—what is allowed and not allowed—and our values—what is considered right and wrong.
To achieve this, we need a common standard language that accurately captures our reality from a spatial, semantic, and societal standpoint, grounding machines’ understanding of our world in a shared modeling language. Establishing this ‘common language’ lets AI systems communicate their actions and rationales to us, while we, in turn, can modify their behavior accordingly.
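As a rough illustration of what such a shared modeling language might capture, consider a single activity described along spatial, semantic, and societal dimensions. This is a hypothetical sketch: the field names and the `describe` helper are invented for illustration and are not part of any published Spatial Web schema.

```python
# Hypothetical sketch of a shared-model entry for one activity.
# Field names are illustrative, not from any actual specification.
activity = {
    "agent": "delivery-drone-042",                  # who is acting
    "action": "deliver_package",                    # semantic: what is happening
    "location": {"lat": 40.71, "lon": -74.00,       # spatial: where
                 "zone": "residential"},
    "context": {"jurisdiction": "NYC",              # societal: under whose rules
                "time": "2024-05-01T22:30:00Z"},
}

def describe(a):
    """Render the activity in human-readable form, so the system can
    communicate its action and context back to us."""
    return (f"{a['agent']} intends to {a['action']} in a "
            f"{a['location']['zone']} zone under "
            f"{a['context']['jurisdiction']} rules")

print(describe(activity))
```

Because the same structure is legible to both machines and people, it can serve as the medium for the two-way ‘conversation’ described above.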
This ongoing ‘conversation’ builds trust, allowing us to grant AI more and more autonomy, gradually taking our hands off the proverbial wheel.
Technical Standards, the Bedrock of Modern Society
In May 2023, the G7 proposed a potential solution. In a joint communiqué, leaders called for the adoption of international technical standards to govern the development and deployment of AI. Similar calls for technical standards have recently come from private and public stakeholders.
Technical standards, often overlooked, serve as the bedrock of our technological societies. They ensure that cars can use any gasoline and phones can connect via Bluetooth. However, because of AI’s vast potential to impact society, technical standards alone are not enough. Given that AI systems will reach into nearly all facets of daily life, it is imperative that they understand our cultural beliefs, our norms and values, and our laws.
Recognizing this need, the US Department of Commerce's National Institute of Standards and Technology (NIST) suggested a hybrid approach of developing "socio-technical" standards, which merge technological guidance with societal values, bridging the gap between technology and society.
The Spatial Web Socio-technical Standards
Socio-technical standards for global AI governance are already in development at the Institute of Electrical and Electronics Engineers (IEEE), which has standardized everything from Bluetooth to WiFi to Ethernet to electricity.
In 2020, the Spatial Web Foundation partnered with the IEEE to lead the development of socio-technical standards for the alignment, interoperability, and governance of AI and autonomous intelligent systems (AIS).
The IEEE P2874 Standards address human rights, well-being, accountability, and transparency for AI and AIS. They can convert regulations into a machine-readable, machine-sharable, and machine-executable format. This “law as code” breakthrough offers a means to close the gap between societal values, regulatory policies, and technological mechanisms, leading to AI that is more equitable and accessible.
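To make the “law as code” idea concrete, here is a minimal sketch of a regulation expressed as a machine-readable, machine-executable rule. The rule content and data shapes are hypothetical illustrations, not drawn from the actual IEEE P2874 specification.

```python
# "Law as code" sketch: a regulation encoded as an executable rule.
# The rule below is invented for illustration only.
RULES = [
    {
        "id": "no-night-flights-residential",
        "description": "Drones may not fly over residential zones after 22:00.",
        "applies": lambda act: act["zone"] == "residential" and act["hour"] >= 22,
        "verdict": "denied",
    },
]

def evaluate(activity):
    """Check an activity against every rule; return a verdict and a
    human-readable rationale (supporting accountability and transparency)."""
    for rule in RULES:
        if rule["applies"](activity):
            return rule["verdict"], rule["description"]
    return "permitted", "No rule prohibits this activity."

verdict, reason = evaluate({"zone": "residential", "hour": 23})
print(verdict, "-", reason)
```

Because the rule is both executable (machines can enforce it) and self-describing (people can read the rationale), it closes the loop between regulatory policy and technological mechanism.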
By taking into account social principles as well as technical aspects, socio-technical standards will ensure:
- A shared understanding of meaning and context between humans and AIs.
- The explainability of AI systems, enabled by the explicit modeling of their decision-making processes.
- The interoperability of data and models that enable universal interaction and collaboration across organizations, networks, and borders.
- Compliance with diverse local, regional, national, and international regulatory demands, cultural norms, and ethics.
- The authentication and authorization of activities, which drives compliance and control, with privacy, security, and credentialing built-in by design.
The Spatial Web standards enable smarter AI governance by grounding AI in our human understanding of the world.
These standards grant lawmakers the flexibility to govern as they deem appropriate, while also ensuring seamless interoperability across the network, regardless of the diversity of systems involved.
They become the infrastructure that will enable smart devices and systems to collectively evolve and become smarter through their participation.
And they allow us to encode principles and values directly into the network, which means we get to determine how much autonomy to grant AI systems as they continue to evolve.
AI Governance and Autonomous Intelligent Systems
AI will inevitably converge with robotic and Internet of Things (IoT) systems, giving rise to Cyber-Physical Systems (CPS). The convergence of CPS with AI could lead to the emergence of Autonomous Intelligent Systems (AIS) that operate autonomously across digital and physical domains.
AIS will usher in a new generation of the web, powering various intelligent applications, from smart assistants to smart cities to smart supply chains. Similar to the autonomic nervous system’s intelligent regulation of the body, AIS could seamlessly orchestrate countless activities in the background of our lives, with increasing autonomy.
The word autonomy means “self-regulating” or “self-governing.” Therefore, effective AI regulation and governance must account for future AI systems that can govern themselves. Central to this new governance paradigm is the AIS International Rating System (AIRS). More than a classification tool, AIRS provides a roadmap for AI's evolution, charting its journey from basic pattern recognition to causal reasoning and human-like adaptability.
A Path to True Intelligence
The Spatial Web Standards will enable new forms of AI to evolve beyond large language models.
VERSES is already leveraging these standards to build small, agile, adaptable, explainable, and increasingly autonomous "Intelligent Agents."
These agents utilize a neuroscientific principle called "Active Inference," which decodes how intelligence manifests in nature. Developed by renowned neuroscientist and theorist Karl Friston, active inference bridges the gap between neuroscience and AI, lighting a path to the development of genuinely intelligent AI systems.
Active Inference-based agents operating on socio-technical standards enable smarter governance.
Active Inference-based agents are accountable—they can “introspect” their own actions and decisions, and offer detailed explanations for their reasoning. Agents are also adaptable, continuously improving their intelligence and understanding of the world around them.
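The accountability described above can be illustrated with a toy version of the Bayesian belief updating at the heart of active inference, paired with an introspection log. This is a didactic sketch under simplified assumptions (discrete states, a single observation), not the VERSES implementation.

```python
# Toy active-inference-style agent: Bayesian belief updating plus an
# introspection log so the agent can explain how evidence shifted its beliefs.
class IntrospectiveAgent:
    def __init__(self, prior):
        self.belief = dict(prior)   # P(state), e.g. {"safe": 0.5, "unsafe": 0.5}
        self.log = []               # record of every update, for explanations

    def observe(self, likelihood):
        """Bayes' rule: posterior is proportional to likelihood x prior."""
        posterior = {s: likelihood[s] * p for s, p in self.belief.items()}
        total = sum(posterior.values())
        posterior = {s: p / total for s, p in posterior.items()}
        self.log.append((dict(self.belief), dict(likelihood), dict(posterior)))
        self.belief = posterior

    def explain(self):
        """Introspect: report how the last observation changed each belief."""
        prior, like, post = self.log[-1]
        return [f"P({s}): {prior[s]:.2f} -> {post[s]:.2f} "
                f"(evidence weight {like[s]:.2f})" for s in post]

agent = IntrospectiveAgent({"safe": 0.5, "unsafe": 0.5})
agent.observe({"safe": 0.9, "unsafe": 0.2})   # sensor strongly suggests "safe"
for line in agent.explain():
    print(line)
```

The key property is that every belief change is recorded and reportable: the agent can always answer “why do you believe this?” with the prior, the evidence, and the resulting posterior.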
While these agents can be specialized to accomplish specific tasks at an expert level, they also have the ability to communicate with one another. By forming networks, agents continuously exchange knowledge and work together to tackle complex, dynamic, large-scale challenges.
This level of networking results in an “ecosystem of intelligence” where agents learn, adapt and work together similar to how people do, but at a scale and speed far beyond our capabilities.
A Vision for the Future
The active inference-based AI that VERSES is building can better align with our goals and values, cultivating trust and transparency, even as agents become increasingly self-governing. These agents demonstrate genuine intelligence; they are transparent and explainable by design, and they require far less data than today's state-of-the-art AI.
To test a smarter approach to AI governance that leverages socio-technical standards, VERSES, the Spatial Web Foundation, and the world’s largest law firm, Dentons, are announcing the formation of a global regulatory sandbox. The Prometheus Project, named after the Titan who gave fire to humanity, is a public-private initiative utilizing state-of-the-art AI and advanced simulation technology that runs on the Spatial Web standards.
The sandbox provides a space for stakeholders to collaboratively test AI innovations in a controlled environment. The name “Prometheus,” meaning “forethought,” is fitting because this project invites us to simulate and explore the potential alignment of new technologies before they are widely adopted.
With Spatial Web standards, a new, future-proof approach to global AI governance becomes possible, paving the way to a world where AI systems free us from mundane physical and mental tasks, allowing us to focus on innovation, self-actualization, creativity, and new horizons.
To learn how we can make this vision of the future a reality, download a first-of-its-kind report, The Future of Global AI Governance.