Stuart Tarmy
AI has emerged as one of the key innovation drivers in financial services, empowering companies to harness enormous amounts of real-time data, optimize internal operations, and enhance decision-making. But the race to adopt AI has largely happened in an unregulated space, a modern-day “Wild West” in which companies have operated with few legal limits. That era is coming to an end.
Governments globally are pursuing legal frameworks to make AI systems more transparent and accountable. In the U.S., regulatory efforts are still in their infancy, with most oversight occurring at the state level or through industry guidelines. However, as financial services are inherently global, firms will need to prepare for an increasingly complex set of rules spanning multiple markets.
The EU AI Act: The First Steps in AI Regulation
The EU AI Act is the first comprehensive regulation of AI. It provides the initial underpinning for AI regulation in other countries, much as the EU’s General Data Protection Regulation (GDPR), which took effect in 2018, laid the foundation for U.S. privacy laws such as the California Consumer Privacy Act (CCPA), which took effect in 2020. The Act uses a risk-based approach, applying differing degrees of oversight based on the potential harm an AI system could cause. High-risk applications, such as those that use AI for consumer credit scoring or mortgage lending, will bear heavy compliance burdens requiring transparency, explainability, and accountability.
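To make the tiering concrete, the sketch below shows how a firm might tag its own AI use cases against a simplified version of the Act’s risk tiers. The tier names mirror the Act’s structure, but the use cases, their assignments, and the obligation lists are illustrative assumptions, not legal guidance.

```python
# Illustrative only: a simplified mapping of internal AI use cases to EU AI Act
# risk tiers. The tier names mirror the Act's structure; the assignments and
# obligation lists below are assumptions for this sketch, not legal classifications.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"   # e.g., social scoring by public authorities
    HIGH = "high-risk"            # e.g., creditworthiness assessment of consumers
    LIMITED = "limited-risk"      # transparency duties, e.g., customer chatbots
    MINIMAL = "minimal-risk"      # e.g., internal spam filtering


# Hypothetical internal inventory of AI use cases and their assumed tiers.
USE_CASE_TIERS = {
    "consumer_credit_scoring": RiskTier.HIGH,
    "mortgage_underwriting": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "internal_document_search": RiskTier.MINIMAL,
}


def compliance_obligations(use_case: str) -> list[str]:
    """Return a rough list of obligations implied by the use case's assumed tier."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    if tier is RiskTier.UNACCEPTABLE:
        return ["do not deploy"]
    if tier is RiskTier.HIGH:
        return ["risk management system", "data governance", "technical documentation",
                "logging", "human oversight", "conformity assessment"]
    if tier is RiskTier.LIMITED:
        return ["disclose AI involvement to users"]
    return ["voluntary codes of conduct"]


if __name__ == "__main__":
    for name in USE_CASE_TIERS:
        print(name, "->", compliance_obligations(name))
```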
The EU AI Act has significant implications for U.S. financial firms: if they do any business in the EU, they must follow the Act’s requirements. Non-compliance is punishable by fines of up to €35 million or 7% of total worldwide turnover, whichever is higher, which demonstrates how seriously regulators take the responsible use of AI.
The U.S. Regulatory Landscape for AI
There are currently no federal-level AI regulations in the U.S.; federal rules remain a work in progress. However, cognizant of the EU AI Act, a number of states have adopted or are developing their own laws concerning AI, and more than 40 state bills regulating AI have been introduced. Not surprisingly, California is among the leaders, advancing AI legislation in the areas of transparency, privacy, entertainment, election integrity, and government accountability. Here are three examples that illustrate the scope of the legislation under consideration:
- Assembly Bill 2655: Defending Democracy from Deepfake Deception Act: This requires large online platforms to identify and block the publication of materially deceptive content related to California elections during specified periods before and after an election.
- Assembly Bill 1836: Use of Likeness: Digital Replica Act: This establishes a cause of action for beneficiaries of deceased celebrities to recover damages for the unauthorized use of an AI-created digital replica of the celebrity in audiovisual works or sound recordings.
- Assembly Bill 2013: Generative AI: Training Data Transparency Act: This bill mandates that developers of generative AI (GenAI) systems publish a “high-level summary” of the datasets used to develop and train those systems.
At the federal level, guidelines from the National Institute of Standards and Technology (NIST), such as its AI Risk Management Framework, have laid the groundwork for how AI should be responsibly developed and used. They are not formal laws, but they will likely influence future regulatory efforts. As that happens, financial institutions that operate across state lines will face a patchwork of AI regulations they’ll need to comply with.
What U.S. Firms Need to Do Now for AI Compliance
U.S. financial institutions with a global footprint will need to devote considerable effort to preparing for compliance with international regulations such as the EU AI Act. The Act requires AI systems to be explainable, so companies need to be able to show how AI-based decisions — like approving or denying a loan — are made. This will require comprehensive documentation, systems auditing, and risk assessments to categorize the AI models and ensure they meet regulatory requirements.
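As a rough illustration of what “showing how a decision was made” can look like in practice, the sketch below scores a loan application with a simple additive model and stores each feature’s contribution alongside the outcome, creating an auditable record. The feature names, weights, threshold, and model version are hypothetical placeholders, not a description of any particular firm’s system.

```python
# Minimal sketch of an auditable, explainable loan decision, assuming a simple
# additive (logistic) scoring model. Feature names, weights, and the approval
# threshold are hypothetical placeholders.
import json
import math
from datetime import datetime, timezone

WEIGHTS = {"credit_score": 0.012, "debt_to_income": -4.0, "years_employed": 0.15}
INTERCEPT = -7.5
APPROVAL_THRESHOLD = 0.6


def decide_loan(applicant_id: str, features: dict) -> dict:
    """Score an application and return a decision record with per-feature contributions."""
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    score = INTERCEPT + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-score))
    return {
        "applicant_id": applicant_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": features,
        "feature_contributions": contributions,   # what pushed the score up or down
        "approval_probability": round(probability, 4),
        "decision": "approved" if probability >= APPROVAL_THRESHOLD else "denied",
        "model_version": "demo-0.1",              # ties the decision to documented model artifacts
    }


if __name__ == "__main__":
    record = decide_loan("A-1001", {"credit_score": 640, "debt_to_income": 0.42, "years_employed": 3})
    print(json.dumps(record, indent=2))
```

Keeping a model version identifier on every record is one way to link individual decisions back to the documentation, audits, and risk assessments the Act expects.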
U.S. financial institutions must also prepare to address changing regulatory requirements in the U.S. in response to the EU AI Act. Organizations need to establish AI governance frameworks to curb any potential risks related to bias, data privacy, and intellectual property issues. Compliance officers and CIOs will have to align internal policies to ensure AI tools are legally compliant and ethically sound.
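A governance framework ultimately needs a concrete home, for example an internal registry in which every AI system is recorded with its risk tier, accountable owner, and review status before deployment. The sketch below suggests one possible shape for such a record; the field names and review categories are assumptions, not a prescribed standard.

```python
# One possible shape for an internal AI model registry entry. Field names and
# review categories are illustrative assumptions, not a regulatory template.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ModelRegistryEntry:
    model_id: str
    business_use: str                      # e.g., "consumer credit scoring"
    risk_tier: str                         # e.g., "high-risk" under the firm's taxonomy
    owner: str                             # accountable business owner
    bias_review_date: date | None = None
    privacy_review_date: date | None = None
    third_party_vendor: str | None = None
    open_findings: list[str] = field(default_factory=list)

    def ready_for_production(self) -> bool:
        """A model is deployable only after bias and privacy reviews with no open findings."""
        return (self.bias_review_date is not None
                and self.privacy_review_date is not None
                and not self.open_findings)


if __name__ == "__main__":
    entry = ModelRegistryEntry(
        model_id="credit-scoring-v3",
        business_use="consumer credit scoring",
        risk_tier="high-risk",
        owner="retail-lending",
        bias_review_date=date(2025, 1, 15),
        open_findings=["privacy impact assessment pending"],
    )
    print("Ready for production?", entry.ready_for_production())
```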
This extends to third-party vendors as well. Many financial institutions depend on external providers for AI-powered services such as banking, fraud detection, and customer analytics. These firms will need to assess vendor compliance and ensure that their partners are held to the same regulatory standards. With legal exposure a top concern, choosing AI vendors with compliance certifications will be critical.
Preparing Employees and Customers for AI Regulation
Beyond the technical requirements, there’s also a human element. Employees who interact with AI-based systems need to be informed about the new regulations and their consequences. Just as cybersecurity laws required organizations to implement security training, AI regulations should drive similar efforts to ensure compliance.
There’s also the matter of customer transparency. Financial services firms must explain the reasoning behind AI-based decisions so customers understand when AI is involved in loan approvals, fraud detection, or credit assessments. They may need to update user agreements, disclosure policies, and customer service protocols accordingly.
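One lightweight way to support that transparency is to generate the customer-facing notice directly from the same decision record used for internal audit, so the explanation given to the customer cannot drift from what the system actually did. The sketch below assumes a decision record shaped like the hypothetical loan example earlier.

```python
# Illustrative sketch: turn an internal decision record (shaped like the
# hypothetical loan example above) into a plain-language customer notice.

def customer_notice(record: dict) -> str:
    """Build a short disclosure explaining that AI was used and what drove the outcome."""
    # Rank features by the magnitude of their contribution to the score.
    ranked = sorted(record["feature_contributions"].items(),
                    key=lambda kv: abs(kv[1]), reverse=True)
    top_factors = ", ".join(name.replace("_", " ") for name, _ in ranked[:2])
    return (
        f"Your application {record['applicant_id']} was {record['decision']}.\n"
        "An automated system was used to assist in this decision.\n"
        f"The factors that most influenced the outcome were: {top_factors}.\n"
        "You may request a review of this decision by a member of our team."
    )


if __name__ == "__main__":
    sample_record = {
        "applicant_id": "A-1001",
        "decision": "denied",
        "feature_contributions": {"credit_score": 7.68, "debt_to_income": -1.68, "years_employed": 0.45},
    }
    print(customer_notice(sample_record))
```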
Now Is the Time to Get Ahead
Financial firms can’t wait for comprehensive AI regulations, whether from the U.S. or abroad. As federal and state-level AI policies evolve, now is the time to create compliance roadmaps, assess AI vendors, and implement monitoring tools. At the same time, firms must stay aligned with global standards, including the EU AI Act, to continue leveraging real-time technologies to power AI-driven financial services while ensuring compliance and avoiding large fines. Transparent, trustworthy, and responsible AI practices will help firms mitigate legal and financial risks. Those who act sooner will have the edge in the next era of AI-driven finance.