Maintaining data privacy compliance when using AI in finance


Top of mind for nearly all corporate leaders, artificial intelligence (AI) use cases are gaining traction across industries. While AI offers a wide range of benefits and has surprised users with its results, those in heavily regulated industries – such as financial services – are raising serious questions about the security, data validity and ethics of this technology, especially when it comes to data privacy.

Whether a financial institution aims to use AI to improve contract management, deliver a better customer experience, strengthen fraud detection, or otherwise, there are regulations governing how data is ingested and maintained – and setting the right parameters is critical.

“Finance, legal, IT and operations teams should evaluate appropriate data privacy rules when considering their integration of AI in order to remain compliant and avoid hot water with customers, stakeholders, or regulatory bodies,” says Avisort Associate General Counsel Colby Manganon. She adds, “Financial institutions must also ensure that AI integration is backed by robust information security frameworks and data processing policies to protect customer data.”

The Use of AI in Finance

As banks, investment firms and other financial institutions build out their technology stacks to improve efficiency, many have begun bolstering their technology with artificial intelligence-backed solutions that improve their back-end operations.

Some banks have started using OpenAI’s GPT-4 chatbots to allow advisors to research and pull data. A leading payment processing company is leveraging AI to better distinguish real fraud from false positives and avoid unnecessary card declines. Another major financial institution is currently using AI to digitally coordinate with internal stakeholders to create customized contracts and approve specific terms. AI also creates many opportunities to improve revenue-impacting operations, such as speeding up customer services like loan processing or onboarding.

Benefits aside, legal teams within financial institutions are aware of the concerns surrounding the privacy and security of their customer, stakeholder and organizational data when AI is involved.

Concerns Over Artificial Intelligence (AI)

While AI-supported technology can be very useful in day-to-day operations, there are concerns about the nuances of data ingestion at the organizational level and the large-scale training of the underlying models. The questions become more specific when examining generative AI models that have gone through both pre-training and fine-tuning processes.

Third-party generative AI tools with minimal regulation, such as ChatGPT, have already prompted pauses while governing bodies work to get answers about the potential legal violations the use of these technologies could bring forth.

Italy recently banned the platform, and other European countries have raised flags about how AI-related data ingestion squares with GDPR rules. State laws such as the California Consumer Privacy Act also govern the storage, correction and deletion of personal data.

On top of regulatory concerns, many financial institutions are wary of using public third-party AI chatbots for fear that their proprietary data could be leaked. Organizations such as JPMorgan Chase, Wells Fargo and Goldman Sachs Group have banned the use of ChatGPT for business communications while they “evaluate safe and effective methods of using [these] technologies.”

Is all of this to say that AI should be avoided in order to protect the data used within financial institutions? No. It means legal teams and enterprises will need to scrutinize individual programs carefully to ensure they meet regulatory standards for data privacy.

Ensuring Data Privacy Compliance When Using Generative AI

Protecting your enterprise when using AI requires a deep understanding of specific providers and the parameters they use to build their technology. Manganon explains, “When starting the sourcing process, leaders of financial institutions – or any enterprise – should determine the specific data they plan to input into the AI model, as it plays an integral role in choosing the right platform for your enterprise.”

When investigating potential enterprise-grade solutions, inquire about the specifics of the provider’s AI model, data privacy and security structures, and the security measures currently in place to mitigate risk. Helpful questions may include:

  • What data training practices are used for the AI model?
  • How is my enterprise’s confidential data and IP protected?
  • What security frameworks and practices are in place?
  • Is the provider using a custom proprietary AI model or a bolt-on model from a third party?
  • If they have a third-party bolt-on provider, what is their data retention policy?
  • Will our enterprise’s sensitive data be used to train broad public AI models?
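Beyond vetting providers, teams can also limit their exposure by minimizing what sensitive data ever reaches a third-party model. As a minimal illustrative sketch – the regex patterns and placeholder labels below are assumptions for demonstration, not a substitute for a vetted PII-detection tool – confidential fields can be redacted from a prompt before it leaves the organization:

```python
import re

# Hypothetical patterns for common PII found in financial text.
# A production system would rely on a vetted PII-detection library,
# not ad hoc regexes like these.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace each PII match with a labeled placeholder before the
    text is sent to an external AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Customer jane.doe@example.com, SSN 123-45-6789, disputes a charge."
print(redact(prompt))
# → Customer [EMAIL REDACTED], SSN [SSN REDACTED], disputes a charge.
```

Even with redaction in place, the vendor questions above still matter: masking reduces what a provider sees, but only the provider’s own retention and training policies determine what happens to the data that does get through.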

“By being careful in their evaluation of AI-backed solutions, leaders of financial institutions can reap the full benefits of AI while remaining compliant with regulatory requirements and minimizing the risk of data compromise,” Manganon says.

As artificial intelligence (AI) continues to gain ground in the enterprise technology landscape, legal teams within financial institutions will be responsible for meeting data privacy standards while enabling the business to improve operations. With an enterprise mindset and due diligence, these professionals will lead their organizations to better business results and greater competitive advantage.

The views and opinions expressed here are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.
