Sep 22, 2023

The Fintech 5 (plus 3) with Adam Broun — Advisory Board Member at Fintech Sandbox

The Fintech 5 is a series of blog posts consisting of questions and answers designed to help you get to know the people behind Fintech Sandbox and our Data Access Residency better.


Adam Broun is a member of the Advisory Board of Fintech Sandbox as well as Chairman of Secondmind, a startup using machine learning to improve the process of designing automobiles. 

Previously, he was CEO of Kensho Technologies, an AI company that continues to build solutions to uncover insights in unstructured data, enabling critical workflows and empowering businesses to make decisions with conviction. Kensho was an early participant in our Data Access Residency, and S&P Global acquired it in 2018 for $550 million. Prior to that, he was CIO, Front Office, and global head of IT Strategy at Credit Suisse, and a partner in Deloitte Consulting’s financial services practice.

The use of AI in financial services isn’t new; AI startups have been coming to Fintech Sandbox for access to training data since our very beginning. But recently, generative AI has been garnering significant attention. Generative AI is a specific subset of AI and machine learning capabilities distinguished by its ability to create new data, images, text, or other types of content. Given his experience, we thought Adam would be a good person to ask a few questions.

Question #1: Adam, Kensho was one of the very first startups accepted into the Fintech Sandbox Data Access Residency when we were starting out in 2015. What did Kensho gain from participation in the program?

When Kensho joined Fintech Sandbox in Boston, it was still at a very early stage. But we were able to work with several data vendors to accelerate the acquisition and evaluation of data, which helped us launch the first version of the platform quickly and gave us connections we could use to negotiate more strategic deals as the company evolved.

#2: Why have you chosen to remain involved with Fintech Sandbox?

I never stopped being involved with Fintech Sandbox. I love being part of the Boston and broader fintech ecosystem, partly out of my own curiosity, but mostly because it’s an opportunity to help emerging companies transform the financial services industry.

#3: In your opinion, which is the most compelling use case for generative AI in financial services?

I think there are lots of compelling use cases for generative AI in financial services. Some obvious ones include internal productivity measures like code generation and code review, as well as customer-facing applications like chatbots. But the more interesting applications probably come when generative AI is used for more specific purposes, such as helping analysts ask and answer questions about the world. This was always part of Kensho’s original mission, but the emergence of Large Language Model (LLM) technologies has supercharged the ability of startups and institutions to realize that vision and create “digital assistants” that can help financial analysts make sense of the world around them, ask and answer intelligent questions, and make decisions faster and with more certainty than before.

#4: Are small and midsized financial services firms going to be able to build proprietary generative AI capabilities in-house or will they be better off partnering with AI startups?

It’s interesting, because the technology to create generative AI capabilities is incredibly accessible: almost anyone with a basic coding background and a little bit of time and curiosity can leverage the existing tools to create some amazing solutions. The complexity arises instead from the availability of high-quality training data specific to the use case; from ensuring that the answers provided are true and not hallucinated; from integrating the technology into the workflow of those who will benefit from it; and from guarding against unintended vulnerabilities or side effects that arise from the inherent complexity of these tools and the unpredictability of their behavior.
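As a rough illustration of how accessible the tooling is, here is a minimal sketch of a hosted-LLM call using the OpenAI Python SDK (one example provider among several; the model name and prompt are placeholders, and an API key is assumed to be set in the environment):

```python
# A minimal sketch of how little code a basic generative AI call requires.
# Assumes the openai package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment
response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Summarize the key risks in this earnings call transcript: ...",
    }],
)
print(response.choices[0].message.content)
```

The hard parts Adam describes, of course, are everything around these few lines: the data feeding the prompt, the validation of the answer, and the workflow integration.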

#5: Given the need for financial institutions to be able to explain their decisions, can generative AI be used to “make” credit decisions or fulfill regulatory requirements related to, say, KYC/AML? Or is its opacity an insurmountable hurdle?

In situations like KYC/AML or credit decisions, generative AI is probably not the right technology to sit at the core of those processes. Partly that’s because of explainability, as you say, but it’s also about predictability: given the same facts, you want the technology to produce the same results every time. Generative AI by its nature includes a degree of randomness that’s highly undesirable in these cases. But there may well be a role for generative AI on either side of those processes, for example to help explain or create a narrative around a decision given a set of facts, which can help a salesperson or customer service agent as they interact with a customer. Or you could imagine a generative AI application assisting a customer directly in assembling the information required for an onboarding or credit-decisioning process.
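To make the predictability point concrete, here is a toy sketch (the token distribution is invented for illustration) of why generative decoding is non-deterministic: greedy decoding returns the same output every run, while temperature-based sampling, the usual mode for generative models, does not:

```python
# Toy illustration of deterministic vs. stochastic decoding.
import random

# Hypothetical next-token distribution a model might produce for one prompt.
next_token_probs = {"approve": 0.5, "review": 0.3, "decline": 0.2}

def sample_token(probs, temperature):
    """Greedy decoding (temperature == 0) vs. stochastic sampling."""
    if temperature == 0:
        return max(probs, key=probs.get)  # argmax: the same answer every run
    # Sharpen or flatten the distribution by temperature, then sample.
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(weights.values())
    return random.choices(list(weights),
                          weights=[w / total for w in weights.values()])[0]

print([sample_token(next_token_probs, 0) for _ in range(5)])    # deterministic
print([sample_token(next_token_probs, 1.0) for _ in range(5)])  # varies run to run
```

For a credit decision, the second line of output is exactly the behavior you cannot accept: identical facts producing different answers on different runs.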

#6: What are the biggest risks arising from the use of generative AI in financial services? What do you worry about?

I think there are a few risks that institutions need to be paying attention to. Perhaps the most obvious is the tendency of large language models to hallucinate, i.e., to be confident but wrong about the statements they make, which is clearly unacceptable if a business decision is going to be based on them. LLMs used in this type of application will not only have to be trained on extremely high-quality data, but will also have to be adapted to provide traceability of every assertion they make back to trusted source data, and that’s going to require some heavy-duty integration from the user interface back to large trusted data sources. A second area of risk arises from the provenance of the data used to train the models. Institutions need to be extraordinarily careful that the data they use is (1) permissible for use in training and (2) not so confidential that privacy or confidentiality would be compromised if the model could be prompted to regurgitate it.
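One common shape for that traceability is a retrieval-grounded pattern, in which answers are built only from vetted documents and every answer carries its sources. The sketch below is a toy version of that idea; all document IDs, texts, and function names are hypothetical:

```python
# Toy retrieval-grounded answering with per-answer source citations.
TRUSTED_SOURCES = {
    "10-K-2023-ACME": "ACME Corp reported revenue of $1.2B in fiscal 2023.",
    "PR-2023-07-ACME": "ACME Corp announced a new CFO in July 2023.",
}

def retrieve(question: str) -> list[str]:
    # Toy keyword match; a real system would query a search index or
    # vector store built only over vetted, trusted data.
    terms = {w.strip("?.,").lower() for w in question.split()}
    return [doc_id for doc_id, text in TRUSTED_SOURCES.items()
            if terms & {w.strip("?.,").lower() for w in text.split()}]

def answer_with_citations(question: str) -> dict:
    sources = retrieve(question)
    if not sources:
        # Refuse rather than hallucinate when no trusted source supports an answer.
        return {"answer": "No supported answer found.", "sources": []}
    context = " ".join(TRUSTED_SOURCES[s] for s in sources)
    # An LLM call would go here, constrained to answer only from `context`;
    # this stub just echoes the grounding so the trace is visible.
    return {"answer": f"Grounded answer based on: {context}", "sources": sources}

print(answer_with_citations("What revenue did ACME report?"))
```

The design point is the refusal branch: a system that can say “no supported answer” is the difference between traceable assertions and confident hallucination.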

#7: How do we make sure historical biases are kept out of the data used to train large language models and away from the processes and algorithms used in generating AI responses?

I don’t believe you can! Given the enormous amount of training data required for these models, you’re ingesting a huge amount of historical information which is inherently going to reflect whatever biases, conscious or unconscious, were prevalent at the time. That can show up in quantitative training data, such as credit-decisioning data, or in the type of language used in textual reports and other documents. I think the best you can do is be very thoughtful about where the technology is going to be used, and do your best to apply corrections to the training data or fine-tune the outputs to correct for those biases going forward.

#8: Criminals will make use of generative AI as well. What are the implications for FSIs?

Anytime new technology is deployed, it creates new attack vectors for bad actors. Generative AI creates all kinds of opportunities like this, including bots that can impersonate customers to fool institutions, or pretend to be institutions to fool customers, at scale and very cheaply. Given that audio and video can be generated as well as text, it’s not hard to imagine how extremely sophisticated attacks could be constructed. Another area for institutions to be concerned about is adversarial attacks on their generative models. A recent paper, for example, shows how adding carefully selected strings to a prompt can cause a publicly available LLM to reply with information that it’s been explicitly told not to reveal. It’s not hard to see how this could be extended: with the right adversarial prompt, a bank’s LLM might respond with inappropriate personal information. Institutions are going to have to be extraordinarily vigilant about these new attack vectors, and will probably need to create additional layers of surveillance before they can allow these sorts of technologies in customer-facing applications.
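One hypothetical shape such a surveillance layer could take is an output filter that screens model responses for apparent personal data before they reach a customer, regardless of whatever adversarial prompt produced them. The patterns below are illustrative only, not a complete defense:

```python
# A minimal sketch of an output-surveillance layer for a customer-facing LLM.
import re

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US-SSN-like strings
    re.compile(r"\b\d{13,16}\b"),          # card-number-like digit runs
]

def screen_model_output(text: str) -> str:
    # Withhold responses that appear to leak personal data; a production
    # system would layer many more checks (and likely a second model).
    if any(p.search(text) for p in PII_PATTERNS):
        return "[response withheld: possible personal data detected]"
    return text

print(screen_model_output("Your balance is available in the app."))
print(screen_model_output("The customer's SSN is 123-45-6789."))
```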

