UK AI Minister Liz Kendall Admits Not Using AI at Work

Key Points

  • Liz Kendall, the minister responsible for the UK's artificial intelligence strategy, has admitted she does not use AI in her professional duties.
  • Her only cited use of the technology was personal: asking a consumer AI tool to compare product ingredients in an attempt to identify the cause of a flare-up of eczema.
  • The admission sits uneasily alongside the government's stated ambitions: global leadership on AI safety, AI-driven economic growth, and the transformation of public services.
  • Experts attribute the gap less to personal choice than to systemic hurdles: data security risks, the absence of approved tools, the danger of AI "hallucinations", and unresolved questions of accountability.

LONDON – In a statement that has sent ripples through the UK's technology and policy circles, Liz Kendall, the minister responsible for steering the nation's artificial intelligence strategy, has admitted she does not use AI in her professional duties. The disclosure, made during a public discussion, highlights a significant and potentially troubling gap between the government's ambitious AI agenda and its own practical adoption of the technology.

While Kendall has championed the UK's ambition to become a "global AI superpower," her admission that the technology has not yet found a place in her own ministerial workflow raises critical questions about the real-world readiness and internal strategy of the very government promoting its widespread use.

The comment underscores a stark reality: advocating for a technological revolution is one thing, but implementing it within the complex, security-conscious machinery of government is another challenge entirely.


The Personal vs. The Professional

The minister's candid remarks came in the form of a personal anecdote. When discussing her own interactions with AI, she revealed a use case far removed from matters of state, policy, or economic strategy.

"I got AI to go through the ingredients of all the products," Kendall explained, referring to a personal health issue. "Because you know there's so many of them really, to identify was there one that was common between the three, and to suggest something I could put on to stop this eczema that had come up."

While this demonstrates an individual's familiarity with consumer-facing AI tools, her subsequent confirmation that she does not use them for her work as Secretary of State for Science, Innovation and Technology has become the central focus for industry observers.

  • Key Distinction: The minister's use of a publicly available AI for a personal health query is a world away from using vetted, secure, enterprise-grade AI for sensitive government work. This divide is at the heart of the current debate.

A Nation's AI Aspirations

The minister's statement is particularly striking when set against the UK's official and highly publicised AI strategy. The government has invested significant political and financial capital in positioning the country as a leader in the field.

This strategy rests on several key pillars, which now face renewed scrutiny in light of the minister's comments.

  • Global Leadership: The UK hosted the world's first AI Safety Summit at Bletchley Park, aiming to establish a global consensus on managing the risks of advanced AI. The government's goal is to be the international arbiter of safe and ethical AI development.
  • Economic Growth: Downing Street has repeatedly linked AI adoption to future economic prosperity, productivity gains, and international competitiveness. The push for businesses to integrate AI is a core component of its growth agenda.
  • Public Sector Transformation: A key promise of the AI revolution is its potential to make public services, from the NHS to HMRC, more efficient and effective. The government has allocated funding for pilot projects and research into public sector AI applications.

The Governance Gap: Security and Practical Hurdles

While the admission may seem jarring, experts point to a range of deeply rooted institutional reasons why a minister might be unable—or unwilling—to use current AI tools for official business. These challenges expose the chasm between high-level policy and on-the-ground implementation.

The lack of use is likely less a matter of personal choice and more a symptom of a systemic "governance gap" within the public sector.

  • Data Security and Sovereignty: Inputting sensitive ministerial information, policy drafts, or classified intelligence into commercial AI models (like ChatGPT or Google's Gemini) would represent an unacceptable security risk. Most of these models are operated by foreign companies on servers outside of UK jurisdiction.
  • Lack of Approved Tools: There is currently no "government-approved" generative AI platform for widespread use by civil servants or ministers. The procurement, testing, and security clearance process for such a tool would be lengthy and complex.
  • Accuracy and Hallucinations: AI models are known to "hallucinate" or generate plausible but incorrect information. For a minister drafting policy, relying on such a tool for factual data or legal analysis would be professionally irresponsible without rigorous human verification.
  • Accountability and Audit Trails: Government work requires clear accountability. It is currently unclear how the use of an AI's output in a decision-making process would be logged, audited, or challenged, creating a significant legal and ethical grey area.

Implications and Next Steps

The incident, though stemming from an off-the-cuff remark, serves as a critical inflection point. It moves the national conversation from the theoretical potential of AI to the pragmatic, and often difficult, realities of its deployment. For the financial and business communities, the implications are significant.

The government's ability to drive private sector adoption is intrinsically linked to its own credibility and perceived competence. If the public sector cannot navigate the hurdles to its own AI usage, its authority in guiding the private sector is diminished.

The path forward will require a clear and concerted effort from the government to close its own internal adoption gap.

  • A Call for Clarity: There is now immense pressure on the Cabinet Office and the Department for Science, Innovation and Technology (DSIT) to issue formal guidelines on the permissible use of AI for civil servants and ministers, clarifying the boundaries and providing a roadmap for safe adoption.
  • Accelerating Sovereign AI: The episode may accelerate efforts to develop or procure a secure, UK-hosted "sovereign AI" for government use, ensuring data remains within national control and models are vetted for security and accuracy.
  • A Wake-Up Call: Ultimately, Minister Kendall's admission serves as an important, if unintentional, wake-up call. It highlights that the journey to becoming an "AI superpower" requires not just grand strategy and international summits, but also the meticulous, unglamorous work of building the internal infrastructure, protocols, and trust necessary to put the technology to work.