Why the most powerful AI systems must be paused

A clear-eyed case for Nigerian citizens, researchers, and decision-makers.

The Risks in Three Pillars

Existential Risk

Leading researchers, including Geoffrey Hinton and Yoshua Bengio, have warned that advanced AI could cause severe, potentially irreversible harm if capabilities outpace governance. The 2023 Statement on AI Risk, signed by hundreds of experts, stated that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Loss of Control

Frontier systems are improving faster than independent auditing, safety testing, and legal oversight. Competition between labs and states creates pressure to deploy first and patch later. That race dynamic increases the probability of misuse, accidents, and governance failure.

Nigeria's Exposure

Nigeria is deeply exposed through finance, telecoms, media, education, and public services. AI-enabled misinformation can target elections and social cohesion at scale. Without strong local safeguards, Nigeria could become dependent on external systems built for different legal and democratic realities.

Our Proposal

We call for a temporary pause on training the most powerful general AI systems until developers and governments can provide credible, verifiable assurances of safety.

  • Not a ban on all AI.
  • Not anti-technology.
  • A call for democratic oversight and enforceable safeguards.

Read the global proposal.

Nigeria in Context

Nigeria has become one of Africa's most influential digital economies, with rapid growth in fintech, telecoms, digital public services, and online media. That growth creates opportunity, but it also expands the attack surface for AI-related harm. If highly capable models are rolled out without enforceable safety standards, disruption will not be theoretical. It will hit sectors where millions of livelihoods are already fragile and where regulatory capacity is still catching up to existing digital risks.

In practical terms, frontier AI could intensify disinformation in election cycles, reduce transparency in automated decision systems, and accelerate labor displacement in white-collar work before retraining systems are in place. Sectors such as customer operations, legal services, financial analysis, media production, and even parts of healthcare administration are likely to see fast productivity shifts that may not be matched by worker protections or social safety policy. A pause on the most powerful systems creates policy time for Nigeria to design clear standards on auditability, accountability, and public-interest oversight.

Nigeria is not starting from zero. NITDA and other policy bodies are already working on national AI strategy, and this momentum should be deepened with stronger safety language, independent oversight, and democratic consultation. Across Africa, countries like Kenya and South Africa are increasingly active in AI governance dialogue. Nigeria should lead that continental conversation rather than inherit rules written elsewhere. The goal is not to reject innovation. The goal is to ensure innovation is aligned with public safety, democratic legitimacy, and long-term national resilience.

See the Nigeria AI Policy Tracker.

Common Counter-Arguments

"AI will create jobs"

AI can create new industries, but that does not cancel short-term displacement risk. Without a transition plan, many workers may lose bargaining power and income before new opportunities become accessible. A safety pause gives workforce policy, retraining pathways, and labor protections time to catch up.

"Regulate later"

Regulation that arrives after high-risk deployment is often cleanup, not prevention. Once unsafe systems are widely integrated into critical infrastructure, rollback becomes politically and economically difficult. Prevention requires rules before deployment, not after harm.

"This is a Western concern"

African countries are major consumers of AI systems developed elsewhere, often with limited ability to audit training data, risk controls, or governance assumptions. That makes this a sovereignty issue, not a foreign debate. Nigeria has strong reasons to shape global standards from an African perspective.

"Nigeria has bigger problems"

Nigeria has multiple urgent priorities, and that is exactly why AI safety matters. Poorly governed AI can worsen fraud, inequality, misinformation, and institutional mistrust across existing policy challenges. Responsible AI governance should be integrated into development strategy, not treated as a separate luxury issue.