
The EU’s AI Act: An introduction for fintechs and financial institutions

October 29, 2024

Silvia Urgas

Counsel, Senior Associate, and Co-Head of the IP/IT Practice Group at TGS Baltic. Silvia is a member of the dispute resolution practice group and has represented clients in several civil matters. In addition, Silvia specialises in IP and has advised clients on many copyright issues and trademark disputes.

In May 2024, the EU Council approved the AI Act, a piece of legislation to regulate artificial intelligence across the bloc. Since 1 August 2024, the Act has officially been law, with its provisions taking effect in stages over the following six to 36 months.

What does the AI Act mean for fintechs, financial institutions, and other organisations developing or deploying AI for the finance sector? 

I am a Counsel and Co-Head of the IP/IT Practice group at law firm TGS Baltic. In this guide, I cover why the AI Act was established, what it means if you work in financial services, and what you should do to comply and stay up to date.

I also wrote an article on how the EU’s AI Act affects you if you operate a startup. Read it here: Is my Startup regulated by the AI Act?

Why was the AI Act established?

The AI Act was set up by the EU to standardise regulation regarding AI across the trading bloc. As a single market, the EU wants to ensure legal uniformity among all member states when it comes to AI. That’s because competitiveness across the bloc may suffer if one state has more relaxed regulation than another.

To this end, the Act formalised the various rules, norms, and best practices that AI systems have to follow when they are provided, deployed, or placed on the market in the European Union. It covers how AI can be used in specific products, in businesses’ internal operations, and in decision-making.

Of course, the assumption here is that AI does need regulation. While the technology has great potential to help businesses, users, and economies in general, legislators are aware that it also has potential for harm. With the AI Act—the world’s first AI regulation of this caliber—the EU wants to ensure that the technology develops in a way that maintains the rule of law and democracy, as well as the basic rights of each individual. 

That’s why the act outlines four levels of risk posed by AI: 

  1. Unacceptable risk refers to those use cases that directly affect people’s rights. These applications—such as social scoring—will be outlawed. 
  2. High risk refers to uses in industries where the impact may be highest, such as in critical infrastructure, law enforcement, or finance. 
  3. Limited risk involves AI use cases such as chatbots and deep fakes, which will be subject to transparency obligations. 
  4. Minimal risk refers to applications such as games and spam filters. These uses will have more relaxed regulation.
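
To keep these four tiers straight, here is a rough, purely illustrative sketch in Python that maps each tier to the example use cases and headline obligations described above. It is a memory aid under the simplifications made in this article, not a legal classification tool: the Act’s annexes, not these example lists, define the real scope.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers outlined by the AI Act (simplified)."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Illustrative examples only; the Act's annexes define the actual scope.
EXAMPLE_USE_CASES = {
    RiskTier.UNACCEPTABLE: ["social scoring of individuals"],
    RiskTier.HIGH: ["creditworthiness assessment", "critical infrastructure", "law enforcement"],
    RiskTier.LIMITED: ["customer-facing chatbots", "deep fakes"],
    RiskTier.MINIMAL: ["AI-enabled games", "spam filters"],
}

HEADLINE_OBLIGATION = {
    RiskTier.UNACCEPTABLE: "prohibited outright",
    RiskTier.HIGH: "strict obligations, e.g. human oversight and conformity assessment",
    RiskTier.LIMITED: "transparency: tell users they are interacting with AI",
    RiskTier.MINIMAL: "more relaxed regulation",
}

for tier in RiskTier:
    examples = ", ".join(EXAMPLE_USE_CASES[tier])
    print(f"{tier.value}: {HEADLINE_OBLIGATION[tier]} ({examples})")
```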

So, who will be most affected by this new legislation? In general, every individual in the bloc will be affected, as the Act is designed to protect human rights and prohibit any systems that could impact those rights. 

But more directly, it will most affect those companies whose products use AI, whether for internal or consumer use. This includes both startups and bigger corporations, particularly those in the financial services industry.

What to know about the AI Act if you work in financial services

The AI Act will mostly affect industries that are already highly regulated, such as healthcare, critical infrastructure, energy, and financial services. These are all included in the “high risk” category. 

It means that if you’re using—or planning to use—AI in finance, it’s really important that you know how to comply. 

If you’re a startup, check out our first article on this topic. In this current article, we’ll cover three specific aspects of the AI Act that you should be aware of, whether you’re a fintech or an established financial institution. 

1. Under the AI Act, any AI decisioning will still require human oversight

One of the main use cases of AI in the financial industry is in supporting decision-making. This could be:

  • Assessing mortgage customers’ eligibility for a loan.
  • Running background checks on applicants for particular financial products.
  • Screening candidates applying for a role.

In all of these cases, the use of AI will be subject to some specific rules. Most importantly, AI cannot be the sole decision-maker in these assessments. 

The Act is explicit, in fact, that these decisions will still need some human oversight. According to article 14(1) of the Act, “high-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use.”

What’s more, the AI cannot use social scoring methods to make decisions about people either. This means it can’t make decisions based on social background, race, or other factors that could be discriminatory. 

Another important aspect to be aware of is that, in any decisioning supported by AI, the end user needs to know that AI was involved. This disclosure needs to be added to the result. It is particularly important in cases where the AI-assisted decision is about natural persons rather than businesses. A simple sketch of what such a flow could look like follows below.
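
Here is a minimal, purely illustrative sketch in Python of a decisioning flow along these lines: an AI model only recommends, a natural person makes the final call, and the outcome sent to the applicant discloses that AI was involved. All names (score_application, human_review, and so on) are hypothetical; this is not legal advice or a reference implementation of the Act’s requirements.

```python
from dataclasses import dataclass


@dataclass
class AiRecommendation:
    applicant_id: str
    recommendation: str  # e.g. "approve" or "decline"
    rationale: str       # plain-language explanation shown to the human reviewer


@dataclass
class FinalDecision:
    applicant_id: str
    outcome: str
    ai_assisted: bool
    notice: str


def score_application(applicant_id: str) -> AiRecommendation:
    """Hypothetical call to a credit-scoring model; the AI only recommends."""
    return AiRecommendation(
        applicant_id=applicant_id,
        recommendation="approve",
        rationale="Income and repayment history meet policy thresholds.",
    )


def human_review(rec: AiRecommendation, reviewer_approves: bool) -> FinalDecision:
    """A natural person makes the final call (cf. article 14: effective human oversight)."""
    outcome = rec.recommendation if reviewer_approves else "referred for manual assessment"
    return FinalDecision(
        applicant_id=rec.applicant_id,
        outcome=outcome,
        ai_assisted=True,
        # Disclose AI involvement to the end user in the result itself.
        notice="This decision was made with the support of an AI system "
               "and reviewed by a member of our team.",
    )


if __name__ == "__main__":
    rec = score_application("A-1042")
    decision = human_review(rec, reviewer_approves=True)
    print(decision.notice, "Outcome:", decision.outcome)
```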

2. The consequences for non-compliance are considerable, even if the AI you use was developed by a third party

Of course, one of the main reasons why you should ensure your startup or financial institution complies with the AI Act is that there are severe consequences for not doing so. Violations can attract fines of up to whichever is higher:

  • A maximum penalty of 35 million euros, or
  • 7% of your annual global turnover for the preceding financial year

The sum depends on which provisions have been violated. For instance, if you’re using a tool defined by the Act as an “unacceptable” risk, you’ll face the highest fines. If you’re using an AI chatbot but not declaring to a consumer that they are talking to AI, the fine will be lower, up to 15 million euros or 3% of annual global turnover for the preceding financial year, whichever is higher. 
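
To see how the “whichever is higher” rule works in practice, here is a small, illustrative calculation in Python. The figures are the Act’s stated maximums as described above; the actual fine in any individual case is set by the supervising authority.

```python
def max_fine(annual_global_turnover_eur: float, prohibited_practice: bool) -> float:
    """Upper bound of the fine: a fixed cap or a share of turnover, whichever is higher."""
    if prohibited_practice:
        # Prohibited ("unacceptable risk") practices: up to EUR 35m or 7% of turnover.
        return max(35_000_000, 0.07 * annual_global_turnover_eur)
    # Lower tier, e.g. breaching chatbot transparency obligations: up to EUR 15m or 3%.
    return max(15_000_000, 0.03 * annual_global_turnover_eur)


# A firm with EUR 2bn global turnover: 7% (EUR 140m) exceeds the EUR 35m cap.
print(max_fine(2_000_000_000, prohibited_practice=True))   # 140000000.0
print(max_fine(2_000_000_000, prohibited_practice=False))  # 60000000.0
```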

What’s crucial to know is that these fines can apply even if you haven’t developed the AI tool yourself. The AI Act regulates the use of AI across the bloc, and is more lenient about innovation and development that take place before a system is placed on the market. So, it’s important that any AI tools you use—whether internally or with external users—comply with the law too. 

3. You’ll be able to access sandboxes that support AI innovation

While the AI Act was set up to regulate AI, the EU wants to balance this regulation with support for innovation. And to do this, every member state will be required to open AI sandboxes or collaborate with other member states to allow local startups access to such regional sandboxes. 

Sandboxes are tools allowing businesses to explore and experiment with new and innovative products or services under a regulator’s supervision. Under article 57(1) of the AI Act, “Member States shall ensure that their competent authorities establish at least one AI regulatory sandbox at national level, which shall be operational by 2 August 2026. That sandbox may also be established jointly with the competent authorities of other Member States.”

This is good news for startups interested in developing AI products, as they can use these sandboxes to access:

  • Advice on regulation and compliance
  • Risk assessments of their products
  • New opportunities to develop products in a controlled space. 

If you’re a startup, it’s important to be aware that the Act is not relevant to you while you’re still in the testing phase of your product. The regulation only applies once you place your product on the market or put it into service. 

So, there are provisions in the Act to ensure that innovation continues. But it’s worth noting that if you’re testing a product that isn’t compliant, investors may be less interested. They’ll want to see that you meet all the relevant legal requirements before they invest.

What should startups and corporates do to comply and stay up-to-date on the AI Act?

Whether you’re a startup or an established financial corporate, the AI Act will change the way you use this technology. So, what can you do to ensure you’re on the right side of the law? 

Here are four key actions to take going forward:

  1. Ensure you’re aware of the Act’s key provisions as soon as possible. The most important elements of the Act that you need to be familiar with are the levels of risk and the use cases that fall into them. This will determine the kinds of response you need to make.

    For instance, if you’re just using AI in chatbots targeted to customers, you’ll need to ensure transparency. However, you won’t need to worry about the more rigorous compliance considerations, such as those regarding conformity assessment procedures. 


A good place to start is the full text of the AI Act on EUR-Lex, while the European Commission’s website has several short articles and links. Plus, the AI Act Explorer by the Future of Life Institute offers a great high-level overview of the Act.

Of course, it can sometimes be difficult to understand the legal language of the Act itself. But there are simpler guides out there that can help. Plus, you can always reach out to us at Tenity or TGS Baltic if you have any questions. 

  2. If you’re using a high-risk AI system or using AI in a high-risk field, speak to an expert. If you feel like your product is relevant to the higher-risk uses outlined in the Act, it’s worth getting legal advice on how to proceed.

    As a corporate, you’ll likely have an internal compliance expert who will be aware of the requirements already. But if you’re a startup or founder, speak with a compliance auditor, a lawyer, or an AI expert for advice.

    For high-risk uses, the stakes can be high—so it’s worth getting it right.

  3. Be aware of the timeline for the Act’s rollout. While the AI Act came into force on 1 August 2024, there’s an implementation period of up to 36 months. This means that you have time to ensure you’re compliant.


By November 2024, for instance, member states will have identified the national authorities responsible for protecting fundamental rights in relation to AI. These will be an important point of reference in your home country. And from February 2025, restrictions on some AI tools will apply—for instance, those which use social scoring. 

Any AI product already on the market by August 2025 will then need to be compliant with the Act in its entirety by August 2027. 

  4. Apply for a sandbox if you’re a startup working in AI. Remember that each member state will set up an agency to support you in becoming compliant. They’ll also create access to sandboxes, which provide regulatory support if you’re developing AI products. 


If you’re a startup, you should make the most of this support. You’ll get access to advice and best practices, and you’ll receive useful insights into your products.

Check which sandboxes are available in your own country. But you’ll likely be able to apply to other programmes across the bloc too. 

What’s the future of AI regulation in Europe and beyond?

By bringing the AI Act into force, the EU has become the first territory to regulate AI to this extent. The motivating factor has been to protect the rule of law and the basic principles on which the EU was founded, while maintaining innovation. It’s important that the EU finds the right balance, particularly given the growth of AI industries in China and the US. 

As the law is rolled out, we’ll see better oversight of the industry. Each member state will have a responsible agency in addition to the EU AI Office, sandboxes will encourage new products, and we’ll see what impact the regulation has over the next few years. Until this oversight is in place, though, there are unlikely to be any big changes. 

Critically, while the EU may be the first place to thoroughly regulate AI, it’s not going to be the last. It’s a developing technology, the potential of which we haven’t yet fully seen—and legislative bodies around the world are likely to respond. 

Already, Colorado has become the first US state to set up a regulatory framework for AI, while Japan is also considering a similar move. I expect other governments will do the same, with all eyes on the California AI Transparency Act bill.

AI will change fintech—and you need to stay in the loop

In this post, we’ve explored some of the key aspects of the AI Act to be aware of, whether you’re a startup or an established player in the finance sector. Going forward, your first step should be to familiarise yourself with the Act’s provisions and get the legal and compliance support you need.

Get in touch with us at TGS Baltic or at Tenity to find out more—and to stay in the loop on future developments in AI and fintech.