Kenya is poised to add yet another layer to its already dense digital regulatory space, without clearly settling the question of who is actually in charge.
- The proposed Artificial Intelligence Bill, 2026 (Senate Bill No. 4), seeks to create a powerful Office of the Artificial Intelligence Commissioner, effectively introducing a new regulator alongside those already enforcing data protection and cybercrime laws.
- In a nod to innovation, the bill suggests regulatory sandboxes, controlled settings where firms can trial AI systems under the Commissioner’s watch.
- Tabled by Nominated Senator Karen Nyamu, the bill forms part of a wider policy drive following Kenya’s National AI Strategy (2025–2030), which aims to position the country as a regional AI hub.
At its core, the legislation is meant to define how AI is developed and deployed in Kenya, with an emphasis on safety, transparency, accountability, and innovation.
Central to the framework is the proposed Commissioner, an independent office appointed through a process involving the Public Service Commission, the President, and Parliament. This office would enforce compliance, track industry trends, audit AI systems, oversee risk assessments and sandboxes, and handle complaints over issues such as algorithmic bias or discrimination.
However, these sweeping powers risk overlapping with existing regulators like the Office of the Data Protection Commissioner and agencies enforcing cybercrime laws, leaving businesses potentially juggling multiple, and possibly conflicting, compliance demands.
The bill adopts a risk-based classification model, closely resembling the European Union’s approach, dividing AI systems into four tiers: unacceptable, high, limited, and minimal risk. Systems deemed “unacceptable” would be banned outright, while high-risk applications, particularly in sectors like healthcare, finance, education, and security, would face strict regulatory scrutiny.
Developers of high-risk systems would be required to conduct human rights impact assessments, keep detailed operational records for at least five years, and ensure transparency in how their systems function.
Yet Europe’s own experience with similar rules has been far from smooth. Regulators there have faced criticism over vague definitions of key terms like “AI system” and “harm,” leaving companies uncertain about how to comply. Kenya appears to be borrowing that structure without supplying the missing clarity, raising the likelihood that similar confusion will arrive even earlier in its AI journey.
The proposed law also centralises enforcement authority without clearly defining institutional boundaries or technical standards. Unlike the EU, which has gradually built supporting guidelines and frameworks, Kenya’s approach introduces broad powers and criminal penalties upfront, potentially creating a system where obligations are wide-ranging but interpretation is left largely to the regulator.
Among its provisions, the bill requires companies to obtain explicit user consent before deploying synthetic media such as deepfakes and to disclose when users are interacting with AI. Firms using AI in employment-related contexts would need to assess workforce impacts and implement reskilling or mitigation measures.
It also mandates human oversight, ensuring that critical AI-driven decisions can be reviewed or overridden. Non-compliance could attract penalties of up to KSh 5 million or prison terms of up to two years. Company directors and senior officers may also face personal liability unless they can prove due diligence.
An advisory committee comprising government, industry, and civil society representatives would be established to guide responses to emerging risks and technological changes.
Kenya’s move comes as AI usage gains traction locally. Around 8% of Kenyans report using AI tools, higher than in neighbouring Uganda and Rwanda, though still behind South Africa and Egypt. Engagement with generative AI platforms like ChatGPT is particularly strong, with over 40% of internet users aged 16 and above reporting usage.
Still, the proposed framework could prove burdensome for young, homegrown AI firms. Compliance requirements such as audits, documentation, and continuous oversight come with costs that large corporations can absorb far more easily than startups.
There is also the matter of expertise. Effective AI regulation demands specialised skills in areas like data science and algorithmic auditing, capabilities that even advanced economies are still developing. For Kenya, building such capacity may prove to be a rather steep climb.