This paper examines the current landscape of AI regulations, highlighting the divergent approaches taken across jurisdictions, and proposes an alternative contextual,
coherent, and commensurable (3C) framework. The EU, Canada, South Korea, and
Brazil follow a horizontal or lateral approach that postulates the homogeneity
of AI systems, seeks to identify common causes of harm, and demands uniform
human interventions. In contrast, the U.K., Israel, Switzerland, Japan, and
China have pursued a context-specific or modular approach, tailoring
regulations to the specific use cases of AI systems. The U.S. is reevaluating
its strategy, with growing support for controlling existential risks associated
with AI. Addressing this fragmentation of AI regulations is crucial to ensuring the interoperability of AI systems. The EU AI Act's present degree of proportionality, granularity, and foreseeability is insufficient to garner consensus. The context-specific approach holds greater promise but requires further development in its details, coherency, and commensurability. To strike a
balance, this paper proposes a hybrid 3C framework. To ensure contextuality,
the framework categorizes AI systems into distinct types based on their usage and interaction with humans: autonomous, allocative, punitive, cognitive, and
generative AI. To ensure coherency, each category is assigned specific
regulatory objectives: safety for autonomous AI; fairness and explainability
for allocative AI; accuracy and explainability for punitive AI; accuracy,
robustness, and privacy for cognitive AI; and the mitigation of infringement
and misuse for generative AI. To ensure commensurability, the framework
promotes the adoption of international industry standards that convert
principles into quantifiable metrics. In doing so, the framework is expected to
foster international collaboration and standardization without imposing
excessive compliance costs.