ANS CEO Richard Thompson looks at the EU AI Act to identify the potential stumbling blocks it poses for businesses.
The EU AI Act, enacted on 1 August 2024, marks a significant milestone in the regulation of AI technologies.
This comprehensive legislation aims to create a structured environment for businesses to harness the potential of AI while mitigating associated risks. The Act introduces a risk-based approach to AI regulation, with stricter rules for high-risk applications and general requirements for all AI systems.
As businesses prepare for the compliance deadline of 2 August 2026, many are grappling with the implications of this new regulatory landscape.
The Act’s impact extends beyond EU borders, affecting companies worldwide that offer AI-powered products or services within the EU market.
Yet this far-reaching legislation has sparked debate about how innovation and regulation can be balanced in the rapidly evolving field of AI.
To find out more, we spoke with Richard Thompson, CEO at digital transformation company ANS, about the implications of the EU AI Act for businesses.
RICHARD THOMPSON BIO
A strategic leader with a creative and ambitious growth mindset, Richard is passionate about developing talent within a high-performance culture and has a consistent record of executing growth and transformation agendas across both wholesale and direct channels to market.
Richard emphasises the importance of striking the right balance between regulation and innovation: “With the law set to come into force on August 1st 2024, and with compliance deadlines set for August 2nd 2026, the new AI legislation provides a structured environment for all businesses to unlock the potential of AI technologies while mitigating potential risks. However, it’s key that this legislation doesn’t slow down industry growth or hold back the possibilities for businesses developing advanced AI technologies. Striking the right balance between regulation and innovation is as important as ever to drive business growth across the UK in a safe and responsible way”.
This perspective highlights the dual nature of the AI Act. While it aims to protect consumers and ensure ethical AI development, there are concerns about its potential to stifle innovation. The challenge for businesses lies in navigating these new regulations while continuing to push the boundaries of AI technology.
Challenges and compliance
One of the primary hurdles businesses face is ensuring compliance with the Act’s complex requirements. Richard notes, “One of the main challenges that businesses are likely to be facing will be ensuring compliance with the complex and varying requirements of the AI Act, especially for high-risk AI applications. However, these regulations will be a force for good and can really support businesses in embracing AI and exploring the art of the possible in a responsible way”.
To address these challenges, Richard advises proactive measures: “Businesses should invest in understanding these regulations and adapting their AI strategies to meet these standards. Early preparation can minimise the risks of regulatory constraints slowing down innovation”.
Richard offers practical strategies for businesses to ensure compliance with the new AI regulations: “Businesses should start by carrying out a thorough audit of their current AI practices and identifying areas that might need adjustment under the new regulations. This involves staying up to date about the specific requirements for different AI applications and investing in compliance training for their teams”.
He further suggests, “Partnering with experts in AI and regulatory compliance can provide valuable guidance and support. Also, adopting best practices in data management and security is key, as these are foundational elements of many AI applications. By prioritising transparency and ethical considerations in their AI development processes, businesses can not only comply with regulations but also spark innovation and enable company growth”.
The future of AI in business operations
Looking ahead, Richard sees the increasing regulation of AI as a positive development for businesses. He states, “The growing body of AI regulation signals a shift in how businesses will operate with AI.”
These regulations set crucial standards for transparency, accountability, and ethical usage, which are key for building trust with customers. As consumers become more aware of AI’s capabilities and potential risks, these regulations reassure them that businesses are driving forward their innovative AI capabilities responsibly.
While acknowledging the investment required to adhere to these regulations, Richard remains optimistic about their long-term impact: “Adhering to these regulations requires significant, but worthwhile, investment from businesses. But increasing regulation of AI ultimately creates a safer and more trustworthy environment for innovation.”
He adds, “By fostering a culture of accountability and ethical use, businesses can leverage AI more effectively and sustainably. So in the long run, this regulatory framework will lead businesses on a positive path, enabling them to unlock their true AI potential”.
As the EU AI Act comes into force, businesses across sectors must adapt to this new regulatory landscape. While challenges lie ahead, the Act also presents opportunities for companies to differentiate themselves through responsible AI practices and build trust with their customers.
By embracing these regulations and investing in compliance, businesses can position themselves at the forefront of ethical AI innovation, driving growth and sustainability in the AI-powered future.