Silvia is Counsel and Co-Head of the IP/IT Practice Group at TGS Baltic. She is a member of the dispute resolution practice group and has represented clients in several civil matters. In addition, Silvia specialises in IP and has advised clients on many copyright issues and trademark disputes.
In May 2024, the EU Council approved the AI Act, a piece of legislation to regulate artificial intelligence across the bloc. On 1 August 2024, the act officially entered into force, with its provisions becoming applicable over the following six to 36 months.
What does the AI Act mean for fintechs, financial institutions, and other organisations developing or deploying AI for the finance sector?
I am Counsel and Co-Head of the IP/IT Practice group at the law firm TGS Baltic. In this guide, I cover:

- How the AI Act affects AI-supported decision-making in finance
- The penalties for non-compliance
- The regulatory sandboxes the Act introduces
- The key actions to take to ensure compliance
I also wrote an article on how the EU’s AI Act affects you if you operate a startup. Read it here: Is my Startup regulated by the AI Act?
The AI Act was set up by the EU to standardise regulation regarding AI across the trading bloc. As a single market, the EU wants to ensure legal uniformity among all member states when it comes to AI. That’s because competitiveness across the bloc may suffer if one state has more relaxed regulation than another.
To this end, the act formalised the rules, norms, and best practices that AI systems have to follow when they are provided, deployed, or placed on the market in the European Union. It covers how AI can be used in specific products, in businesses’ internal operations, and in decision-making.
Of course, the assumption here is that AI does need regulation. While the technology has great potential to help businesses, users, and economies in general, legislators are aware that it also has the potential for harm. With the AI Act, the world’s first AI regulation of this calibre, the EU wants to ensure that the technology develops in a way that maintains the rule of law and democracy, as well as the basic rights of each individual.
That’s why the act outlines four levels of risk posed by AI:

- Unacceptable risk: AI practices that are banned outright, such as social scoring
- High risk: AI systems subject to strict obligations, such as those used in credit scoring or critical infrastructure
- Limited risk: systems subject to transparency obligations, such as chatbots that must disclose they are AI
- Minimal risk: everything else, which remains largely unregulated
So, who will be most affected by this new legislation? In general, every individual in the bloc will be affected, as the Act is designed to protect human rights and to prohibit systems that could infringe those rights.
But more directly, it will most affect those companies whose products use AI, whether for internal or consumer use. This includes both startups and bigger corporations, particularly those in the financial services industry.
The AI Act will mostly affect industries that are already highly regulated, such as healthcare, critical infrastructure, energy, and financial services. Many AI use cases in these sectors fall into the “high-risk” category.
It means that if you’re using—or planning to use—AI in finance, it’s really important that you know how to comply.
If you’re a startup, check out our first article on this topic. In this current article, we’ll cover three specific aspects of the AI Act that you should be aware of, whether you’re a fintech or an established financial institution.
One of the main use cases of AI in the financial industry is in supporting decision-making. This could be:

- Assessing a customer’s creditworthiness or establishing their credit score
- Approving or rejecting loan applications
- Risk assessment and pricing in life and health insurance
In all of these cases, the use of AI will be subject to some specific rules. Most importantly, AI cannot be the sole decision-maker in these assessments.
The Act is explicit, in fact, that these decisions will still need some human oversight. According to article 14(1) of the Act, “high-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use.”
What’s more, the AI cannot use social scoring methods to make decisions about people either. This means it can’t make decisions based on social background, race, or other factors that could be discriminatory.
Another important aspect to be aware of is that, in any decision supported by AI, the end user needs to know that AI was involved. This disclaimer needs to be added to the result. This applies in particular where the AI-assisted decision concerns natural persons rather than businesses.
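To make these requirements concrete, here is a minimal sketch of what an AI-assisted credit decision flow that respects them might look like. It is written in Python, and every name in it (assess_credit, PROHIBITED_FEATURES, and so on) is an illustrative assumption rather than anything prescribed by the Act:

```python
from dataclasses import dataclass

# Attributes the model must never see: basing decisions on them would
# amount to social scoring / discriminatory decisioning under the Act.
PROHIBITED_FEATURES = {"race", "ethnicity", "religion", "social_background"}

AI_DISCLOSURE = (
    "This assessment was produced with the support of an AI system "
    "and was reviewed by a human decision-maker."
)

@dataclass
class Decision:
    approved: bool
    rationale: str
    disclosure: str       # the end user is told that AI was involved
    human_reviewed: bool  # article 14: effective human oversight

def assess_credit(applicant: dict, model_score_fn, human_review_fn) -> Decision:
    """Hypothetical AI-assisted creditworthiness check."""
    # 1. Strip prohibited attributes before scoring.
    features = {k: v for k, v in applicant.items()
                if k not in PROHIBITED_FEATURES}

    # 2. The model only recommends; it is never the sole decision-maker.
    recommendation = model_score_fn(features)

    # 3. A natural person reviews, and can override, the recommendation.
    final = human_review_fn(recommendation, features)

    # 4. The result carries a disclosure that AI supported the decision.
    return Decision(
        approved=final["approved"],
        rationale=final["rationale"],
        disclosure=AI_DISCLOSURE,
        human_reviewed=True,
    )
```

The design point is that the model output is only ever a recommendation: a natural person makes the final call, and the disclosure travels with the result.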
Of course, one of the main reasons why you should ensure your startup or financial institution complies with the AI Act is that there are severe consequences for not doing so. Violations will be subject to whichever is higher, either:

- A fine of up to 35 million euros, or
- 7% of total worldwide annual turnover for the preceding financial year
The sum depends on which provisions have been violated. For instance, if you’re using a tool defined by the Act as an “unacceptable” risk, you’ll face the highest fines. If you’re using an AI chatbot but not declaring to a consumer that they are talking to AI, the fine will be lower, up to 15 million euros or 3% of annual global turnover for the preceding financial year, whichever is higher.
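As a worked example, the “whichever is higher” rule is simply the maximum of two amounts. Here is a quick sketch in Python, using the transparency tier mentioned above (the function name and the example turnover figure are illustrative):

```python
def fine_ceiling(fixed_cap_eur: float, turnover_share: float,
                 annual_turnover_eur: float) -> float:
    """Maximum fine under the AI Act: the higher of a fixed cap or a
    share of total worldwide annual turnover for the preceding year."""
    return max(fixed_cap_eur, turnover_share * annual_turnover_eur)

# Undeclared AI chatbot (up to 15 million euros or 3% of turnover)
# for a firm with 1 billion euros in global annual turnover:
print(fine_ceiling(15_000_000, 0.03, 1_000_000_000))  # 30000000.0, i.e. 30 million euros
```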
What’s crucial to know is that these fines will apply even if you haven’t developed the AI tool yourself. The AI Act regulates the use of AI across the bloc, and it is more lenient about innovation and development that happens before a system is placed on the market. So, it’s important that any AI tools you use, whether internally or with external users, comply with the law too.
While the AI Act was set up to regulate AI, the EU wants to balance this regulation with support for innovation. To do this, every member state will be required to establish AI regulatory sandboxes, or to collaborate with other member states so that local startups can access regional ones.
Sandboxes are tools that allow businesses to explore and experiment with new and innovative products or services under a regulator’s supervision. Under article 57(1) of the AI Act, “Member States shall ensure that their competent authorities establish at least one AI regulatory sandbox at national level, which shall be operational by 2 August 2026. That sandbox may also be established jointly with the competent authorities of other Member States.”
This is good news for startups interested in developing AI products, as they can use these sandboxes to access:

- Regulatory guidance and supervision while they develop and test
- A controlled environment in which to experiment before going to market
- Early evidence of compliance that they can show to investors and authorities
If you’re a startup, it’s important to be aware that the Act is not relevant to you while your product is still in the testing phase. The regulation only applies once you place your product on the market or put it into service.
So, there are provisions in the Act to ensure that innovation continues. But it’s worth noting that if you’re testing a product that isn’t compliant, investors may be less interested. They’ll want to see that you meet all the relevant legal requirements before they invest.
Whether you’re a startup or an established financial corporate, the AI Act will change the way you use this technology. So, what can you do to ensure you’re on the right side of the law?
Here are four key actions to take going forward:

1. Familiarise yourself with the Act itself
2. Know the compliance timeline
3. Take advantage of regulatory sandboxes
4. Get legal and compliance support
A good place to start is the full text of the AI Act on EUR-Lex, while the European Commission’s website has several short articles and links. Plus, the AI Act Explorer by the Future of Life Institute offers a great high-level overview of the Act.
Of course, it can sometimes be difficult to understand the legal language of the Act itself. But there are simpler guides out there that can help. Plus, you can always reach out to us at Tenity or TGS Baltic if you have any questions.
By November 2024, for instance, member states will have identified the bodies responsible for protecting fundamental rights in relation to AI. These will be an important point of reference in your home country. And from February 2025, the restrictions on certain AI practices, such as social scoring, will apply.
Any AI product already on the market by August 2025 will then need to be compliant with the Act in its entirety by August 2027.
If you’re a startup, you should make the most of this support. You’ll get access to advice and best practices, and you’ll receive useful insights into your products.
Check what sandboxes are available in your own country. You’ll also likely be able to apply to programmes in other member states across the bloc.
By bringing the AI Act into force, the EU has become the first territory to regulate AI to this extent. The motivating factor has been to protect the rule of law and the basic principles on which the EU was founded, while maintaining room for innovation. It’s important that the EU finds the right balance, particularly given the growth of AI industries in China and the US.
As the law is rolled out, we’ll see better oversight of the industry. Each member state will have a responsible agency in addition to the EU AI Office, sandboxes will encourage new products, and we’ll see what impact the regulation has over the next few years. Until this oversight is in place, though, there are unlikely to be any big changes.
Critically, while the EU may be the first place to thoroughly regulate AI, it’s not going to be the last. It’s a developing technology, the potential of which we haven’t yet fully seen—and legislative bodies around the world are likely to respond.
Already, Colorado has become the first US state to set up a regulatory framework for AI, while Japan is considering a similar move. I expect other governments will do the same, with all eyes on the California AI Transparency Act bill.
In this post, we’ve explored some of the key aspects of the AI Act to be aware of, whether you’re a startup or an established player in the finance sector. Going forward, your first step should be to familiarise yourself with the Act’s provisions and get the legal and compliance support you need.
Get in touch with us at TGS Baltic or at Tenity to find out more, and to stay in the loop on future developments in AI and fintech.