
Truth, trust and transformation — 4 key takeaways from Zaizi’s AI in government event
Zaizi recently held an exclusive breakfast event in Westminster on AI and misinformation for the national security and defence community.
‘Uncovering Digital Shadows: How can we build trust in the age of AI?’ addressed pressing issues surrounding the risks and opportunities of AI, especially in the public sector.
The event brought together the public sector, SMEs, and academics to share insights and collaborate.
Moderated by Zaizi’s director of strategy and innovation Chad Bond, the expert panel included Rupert Small, Founder and CEO of Egregorious.ai; Dave Sully, Founder and CEO of Advai; and Alex Burton, Co-founder at Alchem Technologies.
Here are some key takeaways from the discussion.
1. Disinformation, bias, and learning from adversaries
The World Economic Forum ranks disinformation as a top global risk. Adversaries already use AI for disinformation campaigns, manipulating public opinion, and disrupting critical infrastructure.
The panel discussed how, unlike regulated entities, these ‘bad actors’ are unburdened by assurance processes. They can easily deploy bots to spread disinformation and manipulate information supply chains.

There are risks of AI systems perpetuating biases or being manipulated by hostile actors to create new ones. This includes bias in detection systems, inherent limitations in large language models (LLMs), and the risks of humans relying too heavily on AI outputs. Model poisoning and supply chain vulnerabilities add further layers of complexity.
These biases can perpetuate and amplify existing inequalities, particularly in public services. The panel highlighted that governments must demand greater transparency from AI vendors, requiring evidence of rigorous testing, bias mitigation strategies, and security protocols.
Maintaining human control and oversight is crucial to prevent unintended consequences and ensure ethical use. But humans also remain the biggest vulnerability — uniquely susceptible to deception and manipulation.
Over-reliance on AI, without critical thinking, can lead to flawed decision-making, so investing in public education and awareness campaigns is essential.
2. Can AI deliver transformative value for government?
The AI marketplace is awash with hype, making it difficult to separate fact from fiction. Big venture capital investments in generative AI, and the marketing spend that follows them, exacerbate the problem.
The hype can create a disconnect between what’s possible with AI and whether it’s ready for deployment in complex public sector environments.

Underpinning many of these challenges is a global shortage of people with a genuine understanding of how AI actually works.
There’s clearly a need within the public sector for transformative technology. For that to happen, government must develop a better understanding of AI so that it can solve specific problems. The panel underscored the urgency of addressing this skills deficit through targeted education, training, and recruitment initiatives, recognising that AI expertise is now of national strategic importance.
The complex nature of public sector challenges, which often involve political oversight, diverse stakeholders, and entrenched legacy systems, makes this trickier.
3. How government needs to approach AI
There was talk about embracing a “design to deploy” mentality. As well as focusing on pilot projects and proofs of concept (POCs), government needs to think about scalability from the outset and plan the route to full deployment.
Focusing just on pilots can stall projects. It’s important to think about the broader impact from the start.
Government also needs to collaborate with SMEs and recognise that expertise in AI often resides outside traditional incumbents. Bridging the gap between government, academia, and the private sector (particularly SMEs) is crucial. SMEs are key to driving innovation, productivity, and efficiency in the public sector (think Netflix vs Blockbuster).

The public sector faces challenges in procuring AI solutions effectively. Streamlined procurement processes and a willingness to embrace smaller companies can unlock the full potential of innovation in the public sector.
Ukraine is a good example of this approach in action. Despite the war, Ukraine’s thriving digital startup culture offers valuable lessons on how to move quickly and innovate.
4. Reflections from Chad
“I had the honour and pleasure to moderate the engaging panel discussion. The themes discussed made me reflect on the adoption of open-source technologies in government, which has been a gradual but significant shift.
“The adoption was hampered by barriers like legacy systems, cultural resistance, and a lack of in-house expertise. While open source was recognised as a viable and secure alternative, its implementation often required significant organisational change and investment in skills development. A similar but more pressing pattern is emerging with AI.
“AI presents both unprecedented opportunities and significant risks.

“The UK government must learn from its experience with open source, addressing barriers to adoption and fostering a culture of collaboration.
“For AI, the focus must shift from hype to reality. This requires a comprehensive approach, prioritising ethical considerations, robust safeguards, and a clear understanding of the technology’s limitations. By learning from adversaries, cutting through the hype, and building in-house expertise, the UK can harness AI’s transformative potential while mitigating its risks.
“The urgency is clear. As Ukraine’s experience demonstrates, a strategic, proactive approach to digital transformation and AI adoption is crucial for national security and economic competitiveness. The UK must act to build a future where AI serves the public good, ethically and responsibly.
“Our session was invaluable in discussing how we might crack some of these themes together — digital government is hard; together we’ll succeed.”
There will be more sessions in the future. If you’d like to be involved, please get in touch.