Are you ready to tackle the cyber security challenges that come with AI in finance?

Let’s face it: AI is revolutionising the finance sector, and has been for years. But with new tech come new risks and responsibilities.

In this article, we’ll take a look at managing these AI-specific cyber security risks for financial services.

This article draws on recommendations from the U.S. Department of the Treasury (USDT) report on managing AI-specific cyber security risks. The recommendations reflect what many financial institutions are already implementing, along with additional measures proposed by the USDT.

While this article specifically addresses AI-related cyber security risks for the financial sector, many of the recommendations are relevant to the wider business landscape.

1. Integrate AI Risk Management into Enterprise Systems

First things first: You need to integrate AI risk management into your existing systems.

It’s not as complicated as it sounds.

“Operational risk is inherent in all banking products, activities, processes, and systems.”

Those are words from the Basel Committee on Banking Supervision (BCBS). In other words, there will always be risk associated with financial services, and AI systems within financial institutions are no different.

The BCBS describes a three lines of defence approach to risk management, assigning control and risk management responsibilities across the organisation. Let’s break it down in terms of AI risk management:

Business Line (first line of defence)

    • The first line is the frontline staff and managers who directly engage with AI systems.
    • They’re responsible for identifying and managing risks in their daily operations.
    • They’re given basic risk control functions because they can act quickly.

Corporate Risk Management (second line of defence)

    • Consists of risk management and compliance functions that support and oversee the business line.
    • They provide the frameworks, tools, and support needed for the first line to manage risks effectively.
    • They take information from the first line and incorporate it into AI policies and procedures based on risk assessment criteria.
    • They also facilitate communication and decision-making about AI risks to ensure that management is aware and involved.

Auditing Risk Controls (third line of defence)

    • The internal audit function provides an independent evaluation of risk management practices to give assurance to senior management. It reports directly to executives or the board.
    • It ensures that the first and second lines are functioning properly and that risk management controls are effective.
    • It should assess the organisation at least annually.
A pyramid outlining the BCBS three lines of defence model.

If your business does not have such a structured approach to risk management, the National Institute of Standards and Technology Risk Management Framework (NIST RMF) suggests a principles-based approach:

    1. Senior leadership sets clear goals and risk tolerance.
    2. Align AI technology with these goals.
    3. Ensure that AI risk management is a continual requirement over the AI system’s lifespan.
    4. Ensure accountability among AI teams, users, and the wider organisation.

2. Create a Risk Management Framework

There are plenty of risk management frameworks out there, including the NIST RMF mentioned above.

However, some financial institutions have decided to create their own frameworks that leverage these established standards.

Whichever you pick, you want to make sure that it fits your organisation.

The key takeaway here is to simply have a framework in place. Having a structured AI risk management framework will help you:

    • Identify potential risks related to your use of AI systems.
    • Tailor said risk identification to the specific ways that you intend to use AI and your institutional goals.

This ensures that AI risks are assessed and managed in a way that is relevant to your business’s unique needs and objectives.

3. Integrate AI Risk Management Across Departments

AI risks don’t just affect a single department. You’ll need a team effort to manage them across various departments.

In some cases, financial institutions have appointed an AI lead to handle AI risk, or handed responsibility to an existing official, such as the CTO or the CISO.

In other cases, dedicated AI centres of excellence have been created to address the specific risks and opportunities presented by AI.

And in some instances, the role has been taken on by the board of directors.

Regardless of the setup, your business is encouraged to integrate AI plans into its enterprise risk management functions and connect them with other aspects of the business to address the complex risks that AI poses.

Focus on integrating key areas such as model risk, technology risk, cyber security risk, and third-party risk management.

    • Model Risk: Ensure that AI models are accurate and reliable, addressing potential biases and errors.
    • Technology Risk: Manage risks related to the technology infrastructure supporting AI, ensuring secure and reliable hardware, software, and networks.
    • Cyber Security Risk: Protect AI systems from cyber threats by implementing robust security measures to prevent unauthorised access and data breaches.
    • Third-Party Risk Management: Oversee risks from external partners involved in the AI process, ensuring compliance with security and risk management standards.

By focusing on these key areas, your business can effectively manage the diverse and complex risks associated with AI systems.

4. Vet Your Vendors

Vendor risk management is a huge part of any business operation.

And this is especially true when it comes to AI systems.

As a financial institution, you must extend your third-party due diligence to include services offering AI systems, as well as those relying on AI systems.

Here are key considerations and questions you should address when vetting vendors:

    • AI Tech Integration: Understand how the vendor integrates AI technology into their products and services.
    • Data Privacy Practices: Ensure the vendor has robust data privacy practices in place to protect sensitive information.
    • Data Retention Policies: Verify the vendor’s policies on how long data is retained and how it is securely disposed of when no longer needed.
    • AI Model Validation: Assess how the vendor validates their AI models to ensure accuracy and reliability.
    • Model Maintenance: Evaluate the processes the vendor uses to maintain and update their AI models.
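
One way to operationalise this checklist is to make each question an explicit, auditable item. Here is a minimal, hypothetical sketch in Python; the class and field names are our own, not from the USDT report, and most risk teams would track this in their existing GRC tooling rather than code:

```python
from dataclasses import dataclass, field

@dataclass
class AIVendorAssessment:
    """Tracks the due-diligence questions above for a single vendor."""
    vendor: str
    ai_integration_reviewed: bool = False     # how AI is built into the product
    data_privacy_confirmed: bool = False      # robust privacy practices in place
    data_retention_verified: bool = False     # retention and disposal policies
    model_validation_assessed: bool = False   # accuracy and reliability checks
    model_maintenance_evaluated: bool = False # update and maintenance processes
    open_questions: list[str] = field(default_factory=list)

    def cleared(self) -> bool:
        """A vendor clears due diligence only when every item is resolved."""
        return not self.open_questions and all([
            self.ai_integration_reviewed,
            self.data_privacy_confirmed,
            self.data_retention_verified,
            self.model_validation_assessed,
            self.model_maintenance_evaluated,
        ])
```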

Additional Vendor Management Questions

Sub-Vendors and Dependencies

Ask vendors if they rely on other vendors for data or models, and if so, how they manage and account for these dependencies.

Notification of Changes

Ensure vendors notify you of any changes or updates to products or services that use AI systems.

Disclosure of AI Use

Request vendors disclose the scope of AI system use in their products and services, and notify you of any material changes.

Model & Data Lifecycles

Ask vendors to describe the lifecycle of their models and data, especially when an AI system is significant to their product or service.

An infographic showing the AI system lifecycle: design, development, validation, implementation, use, ongoing monitoring, updating, and retirement.

Customer Impact

Inquire about the impact AI systems could have on your customers and how that impact can be communicated to them effectively.

Security Practices

Require vendors to describe their implemented security practices, including patch management and vulnerability assessment processes for the infrastructure hosting the AI system.

Underlying Third-Party Models

Request information about any underlying third-party models incorporated into the vendor’s AI systems.

5. Strengthen Authentication Measures

With the rise of criminal use of AI, traditional identity-based solutions, including biometrics like voice or video recognition and soft biometrics such as keystrokes or other behavioural patterns, face significant challenges. These methods, once considered top-tier authentication measures, are now vulnerable to sophisticated AI-driven attacks.

Real-World Examples of AI-Driven Fraud

AI-driven fraud is no longer hypothetical. Criminals have used AI voice cloning to impersonate executives over the phone, and in one widely reported 2024 case, a finance worker was tricked into transferring roughly US$25 million after a video call with deepfaked colleagues.

Modern Authentication Strategies

To combat these advanced threats, many financial institutions are developing and adopting new ways to verify customer identities. Here are some recommended strategies:

Out-of-Band Identity Tokens

Use digital credentials sent through a separate channel (e.g., an authentication code sent via an app) to confirm customer identity and secure access to financial accounts.

Hardware-Based Devices

Implement FIDO-compliant devices or other hardware-based solutions that provide a higher level of security than traditional methods.

App-Based Passkeys

Use authentication apps that generate one-time passcodes or provide password-less login options to enhance security.
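
To make this concrete, here is a minimal sketch of the one-time-passcode mechanism behind most authenticator apps (TOTP, RFC 6238, built on the HOTP construction from RFC 4226), using only the Python standard library. The function names and parameters are illustrative; a production deployment would use a vetted authentication library rather than hand-rolled crypto code:

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp_verify(secret_b32: str, submitted: str,
                interval: int = 30, window: int = 1) -> bool:
    """Check a submitted code against the current 30-second time step,
    allowing +/- `window` steps of clock drift between client and server."""
    key = base64.b32decode(secret_b32, casefold=True)
    step = int(time.time()) // interval
    return any(
        hmac.compare_digest(hotp(key, step + drift), submitted)
        for drift in range(-window, window + 1)
    )
```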

Enhanced Security Measures

Do not disable additional security features like geolocation tracking or device fingerprinting. These measures provide extra layers of protection by verifying the location and device used for authentication.

Further Recommendations from the NIST

    • Avoid SMS for Multi-Factor Authentication: Due to security risks, NIST advises against using SMS for multi-factor authentication.
    • Adopt More Secure Methods: Implement hardware-based devices, app-based passkeys, and other password-less solutions. Although these methods may incur higher costs, they offer better security.

6. Assess AI Tools & Risk Tolerance

When implementing AI systems, especially generative AI (GenAI), you must carefully consider your risk tolerance.

It’s important to determine the level of risk associated with the function of a new or existing AI system. Understand how these risks align with your institution’s overall risk tolerance.

And while IT departments are always under pressure to adopt the latest AI technologies, such as GenAI, it’s crucial to evaluate all solutions and vendors based on their capabilities, applicability, and limitations. Don’t just jump on the next AI release because it looks cool.

In some cases, GenAI tech may simply not align with your institution’s risk tolerances, particularly if a higher level of explainability and transparency is required.

GenAI’s capabilities might not meet the standards needed for certain higher-risk applications.

7. Apply Cyber Security Best Practices to AI Systems

When in doubt, it’s a good idea to use current cyber security best practices to secure AI systems.

At the very least, AI systems should be subjected to the same level of cyber security as any other IT system used by your organisation.

This will ensure a consistent and high standard of protection across all technologies.

The USDT uses the following example: If a data loss prevention policy prohibits entering customer information, business documentation, or files into search tools or external systems, then the same rules ought to be applied to GenAI tools.
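
As a hypothetical illustration of extending that policy to GenAI, the sketch below screens prompts against simple data-loss patterns before anything leaves the organisation. The names and regular expressions are placeholders, not a production DLP implementation:

```python
import re

# Illustrative patterns only. A real DLP policy would draw on your
# organisation's own data classifiers and inventory, not three regexes.
BLOCKED_PATTERNS = {
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "account_number": re.compile(r"\b\d{8,12}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the blocked data types detected in a GenAI prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(prompt)]

def submit_to_genai(prompt: str) -> None:
    """Apply the DLP check before any prompt leaves the organisation."""
    violations = screen_prompt(prompt)
    if violations:
        raise PermissionError(
            f"Prompt blocked by DLP policy: {', '.join(violations)}")
    # ...forward the screened prompt to the approved GenAI service here...
```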

It’s also important to clearly communicate security thresholds and requirements to vendors before the delivery of AI products. Ensure that vendors adhere to your security standards.

If developing AI software in-house, use secure design considerations from the outset. Implement security features during the design phase to reduce risks.

Finally, pay attention to the data used to train AI models. Ensure that training data is secure, free from biases, and complies with privacy regulations.

Conclusion

AI presents substantial opportunities and risks.

Risk is inherent to many businesses, not just finance.

By understanding and addressing these risks with risk management frameworks, thorough vendor vetting, strong authentication measures, and established cyber security practices, your business can harness the benefits of AI while protecting its operations and customers.
