Mar 2025
This article was originally published as the first edition of Decoding, our monthly briefing on the latest trends in government technology.
Governments worldwide are integrating AI into public services to enhance efficiency and responsiveness. However, this digital shift also sparks important ethical debates: from ensuring data privacy and addressing bias in AI systems to managing the implications of replacing human roles with automated processes. Countries are taking varied approaches: the US is focusing on restructuring, the UK on investment, and France on sustainability. As AI technologies evolve, governments face the challenge of harnessing their potential benefits without compromising ethical standards or public trust.
In this edition, we explore how initiatives like Denmark’s 'AI Kompetence Pagten' and the EU’s AI Act are driving digital transformation, upskilling the workforce, and improving citizen engagement. Recent global benchmarks, such as the Government AI Readiness Index 2024, underline the critical need for strategic vision, robust governance, and data infrastructure in public sector AI adoption.
The EU AI Act entered into force on 1 August 2024, establishing a legal framework that balances technological advancement with citizens' fundamental rights. The Act categorises AI applications into four risk levels: unacceptable, high, limited, and minimal. In February 2025, the first measures entered into application: prohibitions and AI literacy obligations.
This means that AI systems posing unacceptable risks are now banned, and organisations in the European market must ensure that employees using and deploying AI systems possess adequate AI literacy. The implementation of the EU AI Act also means that companies developing or deploying AI systems must comply with new risk-based regulations, ensuring transparency, accountability, and human oversight. This applies especially to high-risk applications in sectors such as healthcare, finance, and critical infrastructure.
To assist stakeholders, the EU Commission is promoting the AI Pact, which is structured around two pillars: gathering and exchanging knowledge within the AI Pact network, and facilitating voluntary company pledges to start applying the Act's requirements ahead of the legal deadlines.
The AI Act is part of a broader package of policy measures to support the development of trustworthy AI, including the AI Innovation Package, the launch of AI Factories and the Coordinated Plan on AI.
At the AI Action Summit in Paris, the EU launched the InvestAI initiative, aiming to mobilise €200 billion for AI investments. This includes a €20 billion European fund dedicated to developing AI infrastructure, such as AI Factories, to foster collaborative AI model development and position Europe as a global leader in AI technology.
In a major shift in AI policy under President Trump, the US government has revoked Joe Biden's AI executive order and signalled a potential shutdown of the AI Safety Institute amid rapid AI advancements. Meanwhile, the National Institute of Standards and Technology (NIST) faces deep budget cuts that could mean up to 500 layoffs. Eliminating the AI Safety Institute would reduce regulatory oversight in the US, leading to a more laissez-faire approach to AI development. European companies integrating US AI models will need to exercise greater caution to ensure compliance with the AI Act's requirements for transparency, accountability, and safety.
The UK Government has partnered with the AI startup Anthropic to explore using its chatbot 'Claude' to enhance citizen interactions. Claude is already used by the European Parliament to make its archives more easily accessible, streamlining document search. Additionally, Prime Minister Keir Starmer has launched a comprehensive AI action plan with a multibillion-pound investment strategy to expand AI computing capacity twentyfold by 2030, applying AI across sectors while ensuring the responsible use of anonymised public data with strong privacy protections.
The Swedish government has endorsed a comprehensive AI roadmap developed by a national commission, with key initiatives including a €1.5 billion investment over the next five years to accelerate AI innovation and the establishment of a task force under the Prime Minister’s Office to expedite critical AI reforms.
Germany faces cultural hesitation towards AI adoption: only 12% of companies used AI in 2023, up from 11% in 2021. Experts caution that failing to adopt AI-driven business models may cause Germany to lag in global competitiveness. AI is also a politically divisive topic. In January 2025, the German parliament backed stricter immigration measures with the support of the far-right AfD, igniting debates on AI-driven surveillance and data privacy. At the same time, Germany continues to play a pivotal role in enforcing the EU AI Act, striving to ensure AI complies with ethical principles and fundamental rights. With Friedrich Merz set to become chancellor, Germany stands at a crucial juncture - navigating the balance between AI innovation and regulatory oversight to maintain its position in the shifting digital economy.
At the AI Action Summit in Paris, France introduced 'Current AI', a foundation with an initial endowment of $400 million. This initiative, supported by multiple governments, philanthropic organisations, and private companies, aims to develop AI as a public good, focusing on creating high-quality public datasets, open-source AI tools, and frameworks for public accountability in AI systems. Another announcement during the Summit was the 'Coalition for Environmentally Sustainable Artificial Intelligence', established by the UN Environment Programme (UNEP) and the International Telecommunication Union (ITU). The coalition brings together over 100 partners, including 37 tech companies, 11 countries, and five international organisations, to address the environmental impacts of AI technologies.
Prime Minister Giorgia Meloni announced a $40 billion investment plan from the United Arab Emirates, targeting sectors including AI, data centres, space research, renewable energy, and rare earths, aiming to bolster Italy's digital transformation. In January, Italy also announced agreements worth around $10 billion with Saudi Arabia, signalling a strategic push towards innovation.
South Korea plans to secure 10,000 high-performance GPUs in 2025 to bolster its national AI computing infrastructure. This initiative reflects the country's commitment to enhancing its AI capabilities amid increasing global competition. The government plans to collaborate with the private sector, aiming for early deployment of the improved AI computing centre.
A new report from the Ministry of Digitalisation shows that 28% of Danish companies used artificial intelligence in 2024, nearly doubling last year’s figure and far exceeding the European average of 14%. Larger firms (250+ employees) are at the forefront, while even smaller companies (10-49 employees) show significant progress. However, while Denmark is leading in general AI adoption, we fall behind in generative AI (GenAI) compared to our Nordic neighbours - and the Nordics are falling behind relative to the rest of Europe and the world.
→ Read the full report on GenAI Complacency here.
Denmark is taking bold steps to prepare its workforce for an AI-driven future. The 'AI Kompetence Pagten' (AI Competence Pact) is a public-private partnership aiming to upskill 1 million people in AI. The initiative unites companies, public authorities, educational institutions, and organisations to build a digitally proficient workforce. Key aspects of the initiative include:
The Danish Agency for Digital Government's 'Vejledning om AI-færdigheder' (AI Literacy Guide) provides practical guidance for Danish companies and public authorities on how to fulfil Article 4 of the EU AI Act, which requires that all organisations deploying AI ensure their employees possess sufficient AI literacy.
Launched in February 2025, the AI assistant Børge is designed to support editors in rewriting content for the platforms borger.dk and lifeindenmark.borger.dk.
Børge provides editors with suggestions for new texts that align with the internal writing guidelines for borger.dk. The assistant was developed to support editors across approximately 40 Danish authorities responsible for content on 1,200 pages. The solution not only increases efficiency for employees but also enhances the user experience, as the AI-generated texts follow established guidelines that improve readability.
Employees have described Børge as "a good, helping hand during a busy workday": the AI assistant's output does not replace the editor's work but rather provides feedback and suggestions they can use as they refine the texts.
Børge exemplifies how the public sector can upskill its workforce by integrating GenAI in a secure, citizen-centric way - positioning GenAI as a collaborative tool rather than a replacement for human expertise.
With the AI Act now in effect, striking the right balance between innovation and regulation is more important than ever. We asked Danish MEP Morten Løkkegaard to share his perspectives on the opportunities and challenges AI presents for public services in the EU.
What is the biggest opportunity for AI to improve public services in the EU?
AI holds immense potential to revolutionise the way we engage with public services - making them more efficient, accessible, and citizen-centric is a must. The biggest opportunity for me lies in optimising administrative processes, from automating repetitive tasks to personalising services, which will free up resources and improve citizens' experience with the public sector.
What is the greatest risk or challenge in integrating AI into the public sector?
The greatest challenge is ensuring that AI deployment respects fundamental European values such as transparency, privacy, and fairness. Public trust in AI systems is crucial, especially when it comes to decisions affecting citizens' lives. Right now we see a rising number of AI applications that do not take these factors into account, which makes it all the more important that we in Europe keep our values in mind when developing new innovations.
From your perspective, what is the most pressing regulatory challenge for AI in the public sector today?
The biggest regulatory challenge is balancing innovation with the protection of citizens' rights. The AI Act is a crucial step in setting global standards, but we must ensure that it does not create unnecessary red tape or slow down the adoption of beneficial technologies, especially in the public sector, where efficiency gains are desperately needed. We need to change the way we look at "risk" and weigh it more heavily against benefits and efficiency.
How can public-private partnerships (PPP) drive AI innovation while maintaining the necessary regulatory oversight?
PPPs can accelerate AI innovation by bringing together the best of both worlds: cutting-edge technology from the private sector and the public sector's commitment to serving the common good. These partnerships must be built on trust, with clear frameworks for data sharing, accountability, and compliance with ethical standards. The EU should act as a convener, fostering cross-border collaborations between businesses, governments, and research institutions.
What advice would you offer to public sector leaders as they adopt AI technologies to ensure transparency, accountability, and inclusivity?
Public sector leaders should adopt a “trust-by-design” approach, embedding transparency, accountability, and inclusivity into AI systems from the outset. This means involving citizens in decision-making, conducting regular algorithmic audits, and ensuring that AI complements human decision-making rather than replacing it. Building trust will be key to unlocking AI’s full potential for society.