EU AI Act: How contractors, developers and deep-fake creators are affected

It closes very soon, but the UK government’s call for views on the cyber-security of AI is actually a little behind the curve, insofar as the EU AI Act came into force on Thursday just gone -- August 1st 2024, writes Maryam Rana of Gerrish Legal.

Now in force, the EU AI Act…

The European Union this month took a big step forward in regulating the rapidly evolving world of Artificial Intelligence (AI) with the introduction of this act.

As AI technologies become more embedded in everything from healthcare to finance, the need for strong regulatory frameworks has never been more critical.

That’s a reality which the UK government’s own AI call for views all but acknowledges too, ahead of its closing this Friday (August 9th).

EU AI Act: what’s the objective, and how does it work?

In the EU’s case, the aim of the AI Act is to create a solid legal foundation for the ethical and safe use of AI systems -- while also promoting innovation and keeping Europe competitive in the global market.

The EU AI Act uses a risk-based approach, categorising AI applications based on their potential impact on individual rights and societal values.

By setting strict regulations for high-risk AI systems and emphasising transparency and accountability, the act seeks to mitigate the dangers posed by AI technology.

For businesses and developers, understanding the ‘ins and outs’ of the EU AI Act is crucial for both staying compliant and making smart business decisions.

Preparing for EU AI Act compliance

So in addition to the GDPR, companies and contractors now need to prepare for the EU AI Act.

Introduced with effect from Thursday August 1st, the act aims to create a legal framework that promotes safety, transparency, and responsibility in AI.

To do this, the act divides AI systems into different risk levels -- from “minimal” to “unacceptable,” and sets regulatory requirements accordingly.

Isn’t it too soon to regulate AI?

While some commentators say it might be too soon to regulate AI, the consensus is that the rapid adoption of generative AI in 2024 means a proactive approach is necessary now, to ensure ethical and responsible use.

So the goal of lawmakers (both in the UK and EU) appears to be to address risks before they become widespread, thereby fostering trust and safety in AI technology.

Three key points of the EU AI Act

1. High-Risk AI Systems: These systems must meet stringent requirements, such as thorough risk assessments, data governance controls, and human oversight.

2. Transparency Obligations: AI systems that interact with people, generate content, or make decisions must disclose their nature and provide clear explanations.

3. Prohibited Practices: Certain harmful AI practices will be explicitly banned.
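For developers wanting a quick mental model of this tiered structure, here is a minimal Python sketch. The tier names follow the act’s broad categories, but the example use-cases and the one-line obligations attached to them are our own simplified assumptions for illustration -- they are not legal classifications and should not be relied on as such.

```python
# Illustrative sketch of the EU AI Act's risk-based tiers.
# The use-case and obligation strings below are simplified
# assumptions for illustration -- not legal classifications.

RISK_TIERS = {
    "unacceptable": {
        "example": "social scoring by public authorities",
        "obligation": "banned outright",
    },
    "high": {
        "example": "CV-screening tool used in hiring",
        "obligation": "risk assessment, data governance, human oversight",
    },
    "limited": {
        "example": "customer-service chatbot",
        "obligation": "transparency: disclose that users are interacting with AI",
    },
    "minimal": {
        "example": "spam filter",
        "obligation": "no specific obligations",
    },
}

def obligations_for(tier: str) -> str:
    """Return the illustrative obligation attached to a risk tier."""
    if tier not in RISK_TIERS:
        raise ValueError(f"Unknown risk tier: {tier}")
    return RISK_TIERS[tier]["obligation"]
```

The point of the sketch is simply that obligations scale with risk: the higher the tier an AI system falls into, the heavier the compliance burden attached to it.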

Where might business groan at the EU AI Act?

Looking at the EU AI Act, one of the most challenging provisions for businesses might be the requirement for high-risk AI systems to undergo rigorous testing and certification.

This provision could increase development costs and time, making it harder for smaller companies to compete and prolonging time-to-market.

EU AI Act: Who it applies to…

Crucially, the AI Act applies to any entity using AI in their operations or developing AI solutions, extending beyond the EU, much like the GDPR. This means any company offering AI-related services could be affected, potentially meaning your business or client may need to reconsider intended activities, especially in relation to “high-risk AI systems.”

Be aware, the EU AI Act’s requirements related to transparency, data governance, and human oversight could be particularly burdensome, necessitating detailed risk assessments and strict compliance.

This reflects the fact that while AI offers significant benefits, it also presents challenges. We can therefore assert that contractors, freelancers or consultants working in tech on behalf of organisations need to be well-versed in both the AI Act and GDPR, to effectively protect data privacy.

What the EU AI Act means, from employment management systems to start-ups

Let’s take a specific example. Under the act, AI systems used in the area of “employment management” will need to meet specified transparency and accountability standards. They will also need to be registered in an EU database.

This could benefit workers and the self-employed, by providing more oversight and protection against biases and discriminatory practices in AI-driven hiring and management processes.

However, start-up businesses under the act might face delays from having to navigate new regulatory procedures, conduct thorough risk assessments, and ensure compliance. Any of those could potentially slow down the release of AI models to the public.

Experts are behind the act’s risk categorisations – but you can contribute too

Interestingly for IT contractors, it is experts in AI, data science, and ethics who have defined the act’s risk categories, and these categories can be reviewed and updated as AI technology evolves.

We’re glad about this. It keeps the regulations dynamic and responsive to new threats and industry insights. Furthermore, stakeholders can even submit feedback to regulators on high-risk areas not covered by the legislation. Hopefully this will ensure the framework governing Artificial Intelligence remains flexible and relevant as AI matures and evolves.

EU AI Act: Enforcement, penalties and bans

But beware because it won’t be a toothless regime!

National regulatory bodies within the EU will enforce the EU AI Act.

To ensure organisations take their new AI requirements seriously, penalties for non-compliance could include hefty fines, similar to those under the GDPR.

Since Thursday’s commencement of the EU AI Act, Amnesty International has reiterated its belief that the use of Remote Biometric Identification (RBI) systems, such as public facial recognition, should be banned by the act -- but it is not.

Our stance is that while we acknowledge Amnesty International’s concern, we would argue that the act’s provisions for strict oversight and accountability in the use of real-time facial recognition strike a fair balance between privacy and security -- while allowing the beneficial uses of the technology.

What does the EU AI Act mean for IT contractors’ CVs, pages and portfolios?

And given that the EU AI Act is now in force, we recommend contractors consider updating their CVs, website services pages or product portfolios to assert compliance with the act, especially where there is any chance of high-risk AI systems being provided or involved.

Stating this compliance conveys an understanding and adherence to regulatory standards, potentially making you more reassuring and attractive to prospective engagers.

Potentially pleasing if you are such an IT contractor, the act prohibits deploying high-risk AI systems without comprehensive risk evaluations and certifications beforehand. While it may sound odd to welcome a ‘prohibition’ of anything, this could ultimately improve the quality and safety of the AI systems we’re all increasingly using and will likely come to rely on. IT contractors who will only entertain the highest standards of ethics and practice should therefore be among the act’s supporters.

EU AI Act: deep-fakes and Big Tech impact

Something else IT contractors might like about the EU AI Act is its aim of curbing the misuse of ‘deep-fakes’ -- already prominent for the wrong reasons in the contractor job application space -- by requiring creators to disclose that the content is artificially generated.

Although the act primarily applies to the EU, its influence extends to the UK and other regions through cross-border data flows and business operations. And the ‘disclosure requirements’ for deep-fakes need to be explicit and prominent; a small line of text at the end of a deep-fake video isn't enough.

Rather, the disclosure should be clearly visible so viewers know the content is artificially created.
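To make the ‘prominent, not buried’ point concrete, the hypothetical helper below prepends a disclosure banner to a piece of AI-generated text rather than tacking a small line on at the end. The function name and the banner wording are our own illustrative assumptions -- the act does not prescribe any particular wording or implementation.

```python
# Hypothetical helper illustrating a prominent AI-generation disclosure.
# The banner wording and placement are illustrative assumptions only,
# not statutory text or a prescribed format.

DISCLOSURE = "NOTICE: This content was artificially generated using AI."

def with_prominent_disclosure(content: str) -> str:
    """Place the disclosure banner *before* the content, framed so it
    is the first thing a reader sees, rather than a buried footer."""
    rule = "=" * len(DISCLOSURE)
    return f"{rule}\n{DISCLOSURE}\n{rule}\n\n{content}"
```

The same principle applies to video or imagery: a disclosure shown up-front and legibly, rather than a brief line of small text at the end.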

Developers are the other obvious contingent to face ramifications under the act, even if their intentions rarely stray into the murky waters inhabited by deep-fake creators. In particular, we recommend developers consider the EU AI Act’s provisions from the very beginning, to ensure compliance from the outset and thereby help design AI systems that meet regulatory requirements without needing costly revisions later.

EU AI Act: The future

As to the future of the act, we acknowledge that while the EU AI Act imposes some constraints, the legislation might not fully address the dominance of large tech companies in the AI space -- a complaint being vocally made by some. With many ‘small tech’ players among our clientele, our legal advisory believes further measures might therefore be necessary to ensure a level playing field and prevent tech monopolies from stifling innovation and competition.

Monday 5th Aug 2024

Written by Maryam Rana

Maryam is a focused and hard-working individual who is keen to embrace the legal profession. Having recently completed her LLB (Hons) Law with Human Rights undergraduate degree, she will commence her LLM Legal Practice Course in September.
