Artificial Intelligence (AI) has long since proven its value for modern-day operations. It can be applied to a wide range of problems and has shown itself to be particularly good at discovering underlying patterns in data, something that can be difficult for humans. As a result, it has become ubiquitous in companies, which rely on it to make their businesses more efficient and to better serve their customers.
However, the unchecked use of AI can pose dangers and unintended societal consequences, ranging from discriminatory behavior towards certain groups of individuals (see here) to high-impact scandals that had a catastrophic financial impact on people and led a government to resign (see here).
In order to mitigate AI’s inherent risks, the European Union has launched a new initiative to regulate the area. Its risk-based approach aims to increase trust, transparency, and accountability in the use of AI algorithms, ensuring that they are fair and that they meet certain requirements before being put into service. Some key elements of the draft legislation are discussed below.
What does it mean for businesses using AI systems?
The EU’s first-ever framework on artificial intelligence will hold accountable not only the developers of such systems but also the companies using them. Human oversight, transparency, data quality, risk management, and ongoing monitoring are only a few of the requirements that might be needed for an AI system to be put into operation. It is the risk level of the potential application that will determine the specific requirements it needs to meet.
According to the proposal, most AI systems would be classified as low or limited risk, meaning the list of requirements imposed on them would be shorter. Some systems, however, will fall into the “unacceptable” or “high-risk” categories and will face increased scrutiny and oversight before being granted approval.
An assessment of each AI system would be needed in order to fully understand its impacts on society, its risk category, and, thus, its compliance requirements.
In any case, the fines for non-compliance can be steep, reaching up to 30 million euros or 6% of the company’s total worldwide annual revenue, whichever is higher, according to article 71 of the legislation’s current version (link to the proposal). This should not come as a surprise to businesses, as it is in line with the fines already established by the GDPR.
Will companies operating outside of the EU be forced to comply with this legislation?
Yes, the current version stipulates in article 2 that the regulation applies to “(...) operators placing on the market or putting into service AI systems in the Union, irrespective of whether those operators are established within the Union or in a third country; (...)” (Note: text reflects amendments to the draft legislation as of April/2022. See the full content for additional details and exemptions. Link to initial version and amendments.).
What are some of the main criticisms?
In its current state, some of the proposed requirements lack essential elements such as feasibility and clarity. For instance, requiring companies and developers to ensure that the data used in AI systems are “(...) free of errors and complete (...)” may be impossible to achieve in many cases, given the complexities of data collection and management.
Tech companies such as IBM and Facebook have also voiced concerns that the current draft of the legislation fails to address how it would work in practice, given the multiple players and factors involved in the accountability required for such systems. Other issues raised range from the definition of the reasonable efforts companies must take to ensure compliance to the very definition of what constitutes an AI system. IBM’s and Facebook’s feedback can be seen here and here, respectively.
What is the current state of the EU’s AI-Act and when will it be enforceable?
Currently, the regulation is still being drafted, and it will take time to finalize. Once that phase is completed and it becomes law, there will still be a transition period for businesses across the EU to adapt to its provisions.
At the moment, it is difficult to lay out a timeframe, as many aspects of the legislation still require further debate among EU lawmakers before it moves closer to its final version. This could take time.
How can you prepare?
Even though there are still plenty of details to be ironed out, one thing is certain: companies will need an even more robust data strategy, one that encompasses all aspects of data management and utilization. The amount of data companies collect is constantly increasing, and so are the issues and obligations that accompany it. As businesses increasingly rely on data for their decision-making, they will also face greater obligations and oversight in its use.
Company-wide data initiatives can take time to plan and implement. For that reason, it may be wise for companies that have been postponing such initiatives to start considering these topics soon.
The EU is merely the first major global player aiming to impose standards on AI (and data, for that matter), but similar regulations are likely to begin taking shape around the world soon.
More information
If you want to continue discussions about the EU regulation and AI and how you can start preparing your organization, do not hesitate to contact me.
Sources: