From turbo-charging document review in compliance to identifying fraud in real time, one of the key benefits of advanced artificial intelligence (AI) in financial services is managing operational risk quickly and cost-effectively. Yet firms in regulated industries also know that implementing and adopting next-generation AI tools like large language models (LLMs) comes with its own set of risk and compliance concerns.

In the year since the transformative launch of ChatGPT, there has been no shortage of scenarios pointing to generative AI’s potential vulnerabilities: chatbots provoked into revealing confidential consumer data, or AI-powered virtual assistants producing biased lending decisions.

Firms must set a strategic direction for AI to capture value and stay ahead of competitors, but it’s just as important for financial institutions and others in regulated industries to get ahead of the risks.

Private LLMs, tailored models built on open-source LLMs and trained on company data, are one possible solution. Unlike public models trained on generic data from across the web, private chat models can be designed from the outset in strict compliance with industry regulations, and even with anticipated regulations in mind. I spoke to Thomas Barton, Blankfactor’s VP of AI, to find out more about what leaders in regulated industries should know, from upcoming AI regulations to de-risking generative AI innovation with private models.

Navigating the future of AI regulations

According to NVIDIA’s “State of AI in Financial Services: 2023 Trends,” 72% of survey participants said they were working to address their AI technologies’ explainability and trustworthiness concerns by building risk management and governance frameworks. Just 26% of the previous year’s respondents were doing the same, a strong signal of the growth in AI development and a maturing approach to implementation.

However, financial institutions (FIs) should expand their scope to ensure they’re addressing all core areas of risk. While it’s uncertain exactly how regulators will treat AI software in the near future, AI experts have identified its greatest challenges and risks.

The major concerns map across four key areas:

1. Data privacy & protection: Depending on the application, FIs looking to develop proprietary AI models may leverage consumer data — and will need to establish robust compliance frameworks to protect personally identifiable information (PII) and align with data privacy regulations.

2. Fairness & bias: AI models may reinforce existing biases if they’re trained on biased data. FIs such as banks will need to ensure that these systems do not replicate discriminatory bias in lending, credit risk decisioning, and other offerings. 

3. Transparency & explainability: Some forms of AI models, such as deep learning, pose more challenges for explainability. Because regulators will want to understand how AI makes decisions, firms must prioritize the development of explainable AI. 

4. Cybersecurity: AI can be vulnerable to cyber threats, such as hacker manipulation. AI systems must be robustly secured against these threats to protect customer and company data. 

Firms can anticipate that these four areas will require special consideration in any AI strategy and development initiatives. 

U.S. and U.K. policies taking aim at AI regulation

Regulators are already moving on AI. Major tech players from Google to OpenAI are working with the U.S. Congress on regulatory guardrails to ensure the safe and effective development of AI, a sign that regulation will have a lasting impact on AI adoption in the enterprise. (Built In)

At the intersection of financial services and AI development, several existing regulations in the U.S. protect consumers and can currently limit the use and potential of AI-powered software. As FIs know, these include the Equal Credit Opportunity Act, the Fair Credit Reporting Act, and the Truth in Lending Act, among many others.

But new data- and AI-specific developments in the regulatory landscape will take further aim at curbing risks to consumers, including:

  • The EU’s AI Act: Imposes strict requirements on “high-risk” AI applications like those used in finance. Fines for violations are up to 6% of global turnover.
  • AI Bill of Rights: The U.S. has yet to adopt federal-level policy regarding AI. But through the White House Office of Science and Technology Policy (OSTP), the Biden Administration developed a set of recommendations for the ethical development of AI known as the AI Bill of Rights. (Built In)
  • Automated decision-making regulations: The EU’s GDPR restricts the use of solely automated decision-making for decisions with legal or similarly significant effects. In some U.S. states, such as California, consumers can also opt out of services that employ automated decisioning.

Companies will need to understand the existing patchwork of legislation and plan for more comprehensive regulations to come, using the four core areas of risk above to anticipate future developments.

How private LLMs ensure compliance by design

Generative AI tools and large language models like ChatGPT can turbocharge operations across the business in financial services. But given the risks public LLMs pose for enterprises in highly regulated industries, firms should turn to future-proof AI solutions that prioritize compliance and regulatory guardrails.

Private language models are a solution. Trained on a company’s proprietary data to execute specific tasks — from AI-powered customer support to predictive analytics in trading — private AI offers the promise of enhanced productivity, efficiency, and cost optimization while mitigating the risks of public LLMs. Developing private LLMs allows companies to retain control of their data, ensuring it remains self-contained within their secure environment.
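To make “self-contained within their secure environment” concrete, here is a minimal, hedged sketch of fine-tuning an open-source model on proprietary text with the Hugging Face transformers and datasets libraries. The model name, file path, and hyperparameters are illustrative placeholders, not a description of any specific production stack:

```python
# Minimal sketch: fine-tune an open-source LLM on proprietary documents
# that never leave your own infrastructure. Model name, file path, and
# hyperparameters below are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL_NAME = "mistralai/Mistral-7B-Instruct-v0.2"  # any permissively licensed open-source LLM

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Proprietary training data stays in your secure environment, e.g. a local
# JSONL file with one {"text": "..."} record per line.
dataset = load_dataset("json", data_files="internal_documents.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="private-llm", per_device_train_batch_size=1,
                           num_train_epochs=1, logging_steps=10),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice, teams often add parameter-efficient fine-tuning (such as LoRA) and run training on dedicated, access-controlled hardware, but the key point is that both the model weights and the training data remain under the firm’s control.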

But to harness the benefits of private chat models, companies should carefully plan their AI development strategy. This includes: 

  • Identifying high-impact use cases aligned with their business goals.
  • Selecting open-source language models fit for their needs.
  • Collecting relevant proprietary data sets for model training.
  • Establishing ongoing governance throughout the AI lifecycle.

Developing secure private LLMs with Blankfactor’s AI Labs

Ensuring secure innovation with AI technologies is crucial throughout the entire AIOps lifecycle and beyond. But companies often lack the skills and resources to execute, from data engineers to strategists who can map an AI strategy to their business goals.

With Blankfactor’s AI Labs, companies can unlock that expertise. We’ve worked with global financial services leaders to develop advanced machine learning (ML) and AI systems that leverage company data securely, including automating fraud and manipulation detection in capital markets. Our data engineers and AI experts deliver secure AIOps and end-to-end advisory, from ideation and feasibility assessments through development to training your teams for long-term success with private LLMs.

De-risking development throughout the AIOps lifecycle

Our AI Labs approach to development prioritizes security, from secure LLM platform selection to secure data ingestion, storage, and processing. We also ensure that your customers’ personal data remains anonymous, masking personally identifiable information (PII) in compliance with domain-specific regulatory requirements, such as those related to Know Your Customer (KYC), Anti-Money Laundering (AML), and the Truth in Lending Act.
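As a simplified illustration of what PII masking can look like before any text reaches a model, here is a minimal sketch in Python. The regex patterns and placeholder labels are assumptions for illustration; production masking typically relies on validated, domain-specific detection of names, account numbers, and KYC documents:

```python
import re

# Illustrative patterns only; real pipelines use validated, domain-specific PII detection.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_pii(text: str) -> str:
    """Replace detected PII with typed placeholders before the text is ingested."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(mask_pii("Customer 123-45-6789 emailed jane.doe@example.com about card 4111 1111 1111 1111"))
# -> Customer [SSN] emailed [EMAIL] about card [CARD]
```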

The testing and validation phase is just as crucial to ensuring that a private model’s performance aligns with regulatory requirements and produces accurate, explainable results. Our teams identify security flaws and fine-tune the AI model, then deploy it on secure servers. Access is controlled to ensure that only authorized individuals can interact with the system.
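That access control can be as simple as gating the model endpoint behind an authorization check. The sketch below, using FastAPI, is a hypothetical illustration: the header name, token store, and completion logic are placeholders, not a description of a specific deployment:

```python
# Minimal sketch: gate a privately hosted model endpoint behind an API-key check.
from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()

# Placeholder token store; in practice this would be backed by a secrets manager or IdP.
AUTHORIZED_TOKENS = {"analyst-team-token", "compliance-team-token"}

def require_token(x_api_key: str = Header(...)) -> str:
    """Reject any request whose X-API-Key header is not an authorized token."""
    if x_api_key not in AUTHORIZED_TOKENS:
        raise HTTPException(status_code=403, detail="Not authorized to query this model")
    return x_api_key

@app.post("/v1/completions")
def complete(payload: dict, token: str = Depends(require_token)):
    # Call the privately hosted model here and audit-log the token, prompt,
    # and response for ongoing governance reviews.
    return {"completion": "..."}
```

Combined with network isolation and audit logging, a gate like this keeps interaction with the model limited to authorized, traceable users.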

We then work with your teams to establish frameworks for ongoing governance and compliance, ensuring that the AI model continues to adhere to security and regulatory standards for the long term.

A framework for navigating the regulatory future in your AI strategy

FIs face challenges in developing private large language models and innovative AI systems that accelerate business outcomes while balancing the demands of transparency and risk management. Before launching AI initiatives, it’s imperative to develop an AI strategy that aligns with regulatory requirements and risk management best practices.

Our experts can guide you through a framework for adopting and optimizing artificial intelligence systems in step with the regulatory future of your industry. We advise that firms:

1. Evaluate current and future dependencies on AI and automation in operations.

2. Develop comprehensive AI policies that encompass dataset integrity, accuracy, transparency, and risk mitigation to ensure fairness and protect against biased outcomes.

3. Establish a dedicated function to formulate and oversee AI policies throughout the entire organization.

4. Gain a deep understanding of the inner workings of your AI systems and be ready for the explainability expectations of regulators. 

5. Conduct a thorough risk assessment and cost-benefit analysis for both internal and customer-facing AI systems, integrating risk controls.

6. Implement ongoing governance frameworks for AI inspired by well-established systems within your domain.

Unlock the power of private LLMs

Understanding the potential of AI models is just the beginning — but implementing AI systems that can transform your organization requires careful analysis and risk-mitigation planning. 

Our experts know how to balance LLM security with the transformative potential of generative AI for financial services and other regulated industries. With AI Labs, you get the expert AI talent and advisory you need to develop an actionable roadmap and execute game-changing AI technology solutions. Contact us today for a 60-minute strategy session.