State-level tech offices (Lagos, FCT, Kano)
Status: Under review
Current stance: State-level innovation offices are active, but frontier AI risk governance language is uneven across jurisdictions.
Latest action: City and state digital programs continue to expand; dedicated AI safety guardrails vary by jurisdiction.
Our ask: Develop a shared state-level AI governance template with risk reporting, procurement standards, and public accountability requirements.
State-level institutions can pilot practical safeguards quickly and inform national-level policy design.
Presidency / Executive Office
Status: No public stance
Current stance: No explicit public executive directive currently addresses a temporary frontier AI pause.
Latest action: Digital economy messaging remains high-level, with no consolidated executive AI safety posture published.
Our ask: Issue a national executive statement endorsing precautionary frontier AI governance and multilateral engagement.
Executive leadership can set urgency and align institutions around a coherent national AI safety strategy.
National Assembly
Status: Active engagement
Current stance: Legislative attention to digital governance is increasing, with room to formalize frontier AI oversight.
Latest action: Committee-level digital policy discussions continue, but dedicated frontier AI safety provisions are not yet comprehensive.
Our ask: Establish a formal parliamentary inquiry into frontier AI risk and national preparedness, including public hearings.
The National Assembly is a critical channel for durable legal safeguards and democratic oversight of high-risk AI systems.
FMCIDE (Federal Ministry of Communications, Innovation and Digital Economy)
Status: Under review
Current stance: AI modernization is discussed within broader innovation and digital economy priorities.
Latest action: Ongoing ecosystem engagement on digital transformation, with opportunities to include stronger frontier AI safety language.
Our ask: Coordinate a cross-ministerial frontier AI safety consultation with civil society, academia, and labor stakeholders.
FMCIDE can serve as a coordination hub for policy alignment between innovation goals and public-safety safeguards.
NCC (Nigerian Communications Commission)
Status: No public stance
Current stance: The NCC has not published an articulated public position specific to frontier AI pause measures.
Latest action: Telecom and digital communications policy updates continue, but no dedicated frontier AI safety framework has been issued as of this update.
Our ask: Publish a telecom-focused AI safety and misinformation resilience framework aligned with national AI governance efforts.
NCC has policy relevance through its oversight of digital communications and its coordination role in countering misinformation, making it a natural partner for national AI governance efforts.
NITDA (National Information Technology Development Agency)
Status: Under review
Current stance: NITDA has acknowledged the need for AI governance and has signaled policy development interest.
Latest action: Consultation and strategy activities indicate momentum, but frontier AI-specific safety controls are still developing.
Our ask: Publish a precautionary frontier AI position that prioritizes independent safety testing, transparency, and democratic oversight.
NITDA is central to national AI governance and is positioned to anchor stronger safety-first standards for high-risk systems.