The EU AI Act rollout presents any company doing business in the EU with tough decisions and an urgent need to establish a robust risk management framework.
Just this month, the European Union Artificial Intelligence Act (EU AI Act) reached yet another major milestone in its rollout: Article 5, covering prohibited AI practices and unacceptable uses of AI, is now in force.
It’s not just companies based in the EU that need to prove their systems comply with Article 5 – or indeed, any other aspect of the EU AI Act. One of the most comprehensive AI regulations to emerge worldwide, the Act applies extraterritorially: any company doing business in the EU must comply, regardless of where it is based.
This presents multinationals with tough decisions. Should they withdraw from the EU entirely, on the basis that it has become a high-compliance market? Should they restrict the use of AI in their products and services within EU markets? Or should they adopt the EU AI Act as a global standard, potentially incurring substantial costs and operational drag?
Clearly, none of these approaches is optimal. Ideally, regulations should align with global frameworks to avoid fragmentation between jurisdictions. Without that alignment, companies are forced to allocate valuable resources to administrative compliance, arguably at the expense of other areas of concern, such as proactive cybersecurity measures.
Many laws, after all, aim to strengthen the security of organizations, and that is to be welcomed. However, their proliferation and specificity can drain company resources, increasing costs and creating vulnerabilities.
Navigating a safe path
For now, companies must navigate this less-than-ideal state of regulatory affairs, and do so at a time when AI technology is evolving rapidly – and typically faster than laws and mandates can be put into place.
Doing so will involve striking the right balance between innovation and compliance, while actively participating in the debate between the private and public sectors around global AI standards.
Companies’ direct experience of walking this innovation/compliance tightrope will be of great value to these discussions. That engagement should be led by public affairs teams with first-hand experience of following legislative developments, collaborating effectively with policymakers and advocating for regulatory harmonization to optimize compliance investments.
In the absence of a global framework, and for however long that situation persists, interoperability between the different regional outposts of multinationals will be crucial. Achieving harmonization, at least internally between those outposts, will help a business develop technological solutions responsibly – solutions that can be put to work in different parts of the world and, eventually, adopted on a global scale.
With an eye on internal efforts, it will be all the more essential to prioritize operational efficiency and process rationalization, focusing on automation, risk-based compliance and close cooperation between legal, IT management and security teams. This approach has the potential to turn constraints into opportunities, and help build a future where innovation and security go hand in hand.
In security terms, managers will face increasing challenges related to regulatory complexity, juggling compliance and operational safety, and protecting critical systems while respecting new and changing rules. Their role will be central in the implementation of an ethical and secure innovation policy, building bridges between various internal services to promote a comprehensive and coherent approach.
Challenging times ahead
The overall challenge that multinational organizations face in 2025 is to ensure that AI governance is aligned with both regulatory requirements and strategic objectives. This requires a robust and confident approach to risk management – one that can weather the storm when companies are inevitably forced to reconcile diametrically opposed requirements. That takes rigor, but it also demands flexibility and consistency to allow for efficient resource management.
In the absence of this kind of approach, imbalances will persist and represent a significant burden on organizations, which run the risk of being less compliant, less secure or less able to benefit from innovation – or, indeed, all three.
Organizations may also find themselves woefully unprepared for new regulations coming down the line. Work related to the EU AI Act, for example, has only just begun. While Article 5 is now in force, the next phase of the rollout will see the application of ‘codes of practice’ for general-purpose AI systems, such as large language models. Enforcement of those rules, and the associated obligations for AI providers, will commence in August.
On one point, the EU is very clear: the penalties for non-compliance with Article 5 will be stiff. Violations will be subject to administrative fines of up to €35m or up to 7% of total worldwide annual turnover for the preceding financial year, whichever is higher. For a company with €1bn in annual turnover, for example, that ceiling would be €70m.
In this context, organizations must prepare now for a rolling program of regulatory change during 2025 and beyond. They must keep clear inventories of their AI tools and technologies, work to improve the AI literacy of employees, and put in place the risk management foundation discussed here. Only by building this kind of resilience can they hope to navigate the regulatory minefield successfully and emerge on the other side as stronger, more innovative businesses.
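What might such an inventory look like in practice? The sketch below is a minimal, hypothetical illustration in Python: one record per AI system, tagged with a simplified version of the Act’s risk tiers, so legal, IT and security teams can filter and review systems as the rules evolve. The field names and the `RiskTier` categories are assumptions made for illustration, not a schema prescribed by the Act.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

# Simplified EU AI Act risk tiers (the Act's full categorization is more nuanced)
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited under Article 5"
    HIGH = "high-risk use case"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk"

@dataclass
class AISystemRecord:
    """One inventory entry per AI system or model in use (hypothetical schema)."""
    name: str
    vendor: str            # or "internal" for in-house models
    purpose: str           # the intended use drives the risk classification
    risk_tier: RiskTier
    owner: str             # accountable business owner
    last_reviewed: date
    notes: str = ""

# Example inventory: a list of records that compliance, legal and
# security teams can review and re-classify as obligations change.
inventory = [
    AISystemRecord(
        name="support-chatbot",
        vendor="ExampleVendor",  # hypothetical
        purpose="Customer support triage",
        risk_tier=RiskTier.LIMITED,
        owner="Customer Operations",
        last_reviewed=date(2025, 2, 1),
    ),
]

def systems_in_tier(records, tier):
    """Filter the inventory by risk tier for targeted review."""
    return [r for r in records if r.risk_tier is tier]

for record in systems_in_tier(inventory, RiskTier.LIMITED):
    print(f"{record.name}: {record.purpose} ({record.risk_tier.value})")
```

Even a lightweight structure like this gives an organization a single place to answer the questions regulators will ask: what AI is in use, for what purpose, who owns it, and when it was last reviewed.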