May 06, 2024

The Fintech 5 with Michelle Bonat — Chief AI Officer at AI Squared

The Fintech 5 is a series of blog posts consisting of questions and answers designed to help you get to know the people in the Fintech Sandbox community.

Michelle Bonat, CAIO at AI Squared, merges finance and technology expertise to spearhead AI initiatives. With an MBA from Kellogg, she founded a fintech startup, led AI innovation at Chase Bank, and patented her technology. At JPMC, she served as AI CTO, driving transformative projects. Her career also includes time as a software executive at Oracle and product leadership roles at Ariba and several startups. Passionate about diversity, she mentors, judges, and teaches coding to underrepresented groups. A sought-after speaker, she addresses AI, tech, and innovation. Bonat’s leadership at AI Squared reflects her commitment to solving impactful problems with AI and data, delivering cutting-edge solutions for enterprises.

Michelle Bonat

Question #1:  Michelle, what is an AI Playbook and why is it important?

Recently, I created an AI Playbook that can be used by any organization. An excerpt of this playbook is below. You can find more details here.

A Step-by-Step Guide to Creating your Organization’s AI Playbook: Boat or Moat?

This AI playbook helps you determine if your AI strategy should be geared towards more of a boat (advancing your position) or a moat (defending your position).

An AI playbook is a strategic document that outlines an organization’s approach to implementing and leveraging artificial intelligence (AI) technologies. It includes a series of steps, best practices, and guidelines for integrating AI into various aspects of the business.

Here are some key components that should be included in an organization’s AI playbook:

  1. Business Objectives and Use Cases: Define AI objectives, identify use cases, prioritize based on business needs, develop a rating system, revisit monthly, evaluate quarterly.
  2. Data Strategy: Establish a comprehensive data strategy covering collection, governance, quality, storage, privacy, and security. Ensure ethical compliance and quality management.
  3. Ethical, Regulatory, and Governance Considerations: Establish ethical AI guidelines, conduct risk assessments, engage stakeholders, ensure regulatory compliance, and establish model governance procedures.
  4. AI Development Lifecycle: Define your AI lifecycle: ideation, data prep, model dev, testing, deployment, monitoring. Specify team roles.
  5. Model Selection and Evaluation: Define criteria for AI model selection, evaluation metrics, and validation procedures. Specify rules for use of external services like ChatGPT.
  6. Implementation Roadmap: Implement AI in phases with timelines, milestones, and resource allocation. Pilot, scale, iterate based on feedback. AI relies on circular development cycles.
  7. Ethical and Responsible AI Principles: Embed ethical AI principles: fairness, transparency, accountability, bias mitigation in development and deployment. Enforce these principles.
  8. Risk Management and Compliance: Identify AI risks: legal, regulatory, reputational, operational. Develop mitigation strategies, ensure compliance.
  9. Security and Privacy Measures: Utilize secure AI: safeguard data, systems from cyber threats, breaches. Use privacy-preserving techniques, encryption.
  10. IP Protection: Define your IP strategy: Copyrights, Trademarks, Patents, Open Source. Determine an offensive or defensive patent approach. Choose license options.
  11. Cross-Functional Collaboration: Encourage cross-departmental collaboration: data scientists, engineers, analysts, legal, stakeholders. Define team structures and communication.
  12. Training and Capacity Building: Offer AI training: workshops, resources, certifications. Empower staff with skills for effective AI use.
  13. Continuous Improvement and Optimization: Establish continuous improvement for AI: feedback loops, performance monitoring, refinement based on real-world usage. Ensure auditable processes.
  14. Documentation and Knowledge Sharing: Share AI best practices, lessons, case studies. Create repositories for code and models. Consider an internal data, feature, and model sharing marketplace.
  15. Stakeholder Communication and Engagement: Engage stakeholders: employees, customers, partners, regulators, media. Promote transparency, trust. Educate on AI processes. Hold monthly updates.

By incorporating these components into a comprehensive AI playbook, your organization can establish a structured and disciplined approach to AI governance and maximize the value and impact of your AI investments while mitigating risks and ensuring ethical and responsible AI deployment. Completing this playbook also makes your optimal approach, “Boat” vs. “Moat,” clearer. This playbook is a living document that grows with your organization.

#2.  How do we keep historical biases out of generative AI?

Thank you for this question! Keeping historical biases out of generative AI is a complex challenge that requires a multi-faceted approach. Here are some strategies:

  1. Diverse Training Data: Ensure that the training data used to train the AI model is diverse and representative of different demographics, cultures, and perspectives. Use training data that reflects your customers. This can help mitigate biases that may arise from a narrow or skewed dataset.
  2. Bias Detection and Mitigation: Implement techniques to detect and mitigate biases in the training data and model outputs. This may involve using fairness-aware learning algorithms and techniques such as adversarial debiasing or counterfactual data augmentation.
  3. Human Oversight and Evaluation: Incorporate human oversight and evaluation throughout the development process to identify and address biases. This can involve expert review, bias audits, and user testing to assess the fairness and inclusivity of the AI system. Consider red teaming, a method of testing AI models to identify vulnerabilities and prevent harmful behavior.
  4. Transparency and Accountability: Promote transparency in the AI development process by documenting data sources, model architectures, and decision-making processes. Establish clear accountability mechanisms to address instances of bias and ensure responsible AI deployment.
  5. Bias Impact Assessment: Conduct thorough impact assessments to understand how biases in the AI system may affect different groups and communities. Take proactive measures to mitigate potential harms and ensure equitable outcomes.
  6. Continuous Monitoring and Updating: Implement systems for continuous monitoring and updating of AI models to identify and address biases that may emerge over time. This may involve collecting feedback from users and stakeholders and retraining the model with updated data.
  7. Ethical Guidelines and Standards: Adhere to ethical guidelines and standards for AI development, such as those outlined in frameworks like the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems or the Principles for AI developed by organizations like the Partnership on AI. Get familiar with the recently passed EU AI Act. While its scope is currently Europe, we expect it to be adopted globally at some point, much as GDPR got its start in Europe.

By incorporating these strategies into the development and deployment of generative AI systems, we can work towards minimizing historical biases and promoting fairness, diversity, and inclusivity in AI applications.
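As one concrete illustration of the bias-detection strategy above, a team can start by simply measuring outcome rates across demographic groups before deploying a model. The sketch below is a minimal, hypothetical example in plain Python (the group labels and loan-decision data are invented for illustration, not drawn from any real system) of a demographic parity check:

```python
# Minimal sketch of a demographic parity check on labeled decisions.
# The data and field names ("group", "approved") are purely illustrative.

def demographic_parity_gap(records, group_key="group", outcome_key="approved"):
    """Return (gap, rates): the max difference in positive-outcome
    rates across groups, plus the per-group rates themselves."""
    totals, positives = {}, {}
    for row in records:
        g = row[group_key]
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if row[outcome_key] else 0)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan decisions labeled by demographic group.
data = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]
gap, rates = demographic_parity_gap(data)
# Group A is approved at 2/3, group B at 1/3, so the gap is about 0.33.
```

A large gap is not proof of bias by itself, but it flags where deeper review, bias audits, or debiasing techniques are warranted.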

#3.  What are the biggest risks arising out of generative AI in financial services? What do you worry about?

Generative AI in financial services holds great promise for tasks such as fraud detection, risk assessment, portfolio optimization, and customer service. These are “classic” financial services use cases that have already benefited from AI, and organizations are now incorporating GenAI into these critical processes as well. However, there are numerous potential risks associated with the use of GenAI in organizations:

  1. Data Privacy and Security: Generative AI models trained on financial data may inadvertently expose sensitive information about individuals or organizations, leading to privacy breaches and security vulnerabilities. Using external foundation models and services like ChatGPT may exacerbate this. Make sure your organization establishes rules for usage of these external services.
  2. Algorithmic Bias: Biases present in the training data used to train generative AI models can lead to biased outcomes in decision-making processes, such as loan approvals or investment recommendations, potentially perpetuating or exacerbating existing inequalities. Particularly in finance, which is highly regulated, we need to pay attention to the origins of the training data.
  3. Model Robustness and Reliability: Generative AI models may produce outputs that are not sufficiently robust or reliable for critical financial decision-making, leading to errors or unexpected behaviors that could have significant financial consequences. This could create end user frustration and inaccurate or harmful results.
  4. Adversarial Attacks: Generative AI models may be vulnerable to adversarial attacks, where malicious actors manipulate input data to produce undesirable outcomes, such as generating fake transactions or bypassing fraud detection systems.
  5. Regulatory Compliance: The use of generative AI in financial services may raise regulatory concerns related to transparency, accountability, and compliance with laws and regulations governing financial transactions, data protection, and consumer rights. Imagine talking to a regulator and explaining that the output given to a customer or employee may be different every time.
  6. Systemic Risk: If widely adopted, generative AI models could introduce new sources of systemic risk to financial markets, such as amplifying market volatility or creating unforeseen correlations between assets.
  7. Ethical Considerations: The deployment of generative AI in financial services raises ethical questions about fairness, accountability, and the potential impact on individuals and society, particularly in terms of financial inclusion, access to credit, and the distribution of economic opportunities.

To mitigate these risks, financial institutions should implement robust governance frameworks, adopt best practices for data management and model validation, invest in cybersecurity measures, prioritize fairness and transparency in AI development, and engage with regulators and stakeholders to address regulatory and ethical concerns. Beyond this, ongoing research and collaboration between industry, academia, and policymakers are essential to address emerging challenges and ensure the responsible and ethical use of generative AI in financial services. I’m happy to report that this collaboration is already underway.

#4.  If you could change one thing about the fintech ecosystem, what would it be?

If I could change one thing about the fintech ecosystem, it would be to systematically and permanently enhance financial inclusion on a global scale. Despite significant advancements in financial technology, there are still millions of people worldwide who lack access to basic financial services such as banking, credit, and insurance. This lack of access perpetuates economic inequality and limits opportunities for individuals and communities to thrive. While I was at JPMorgan Chase, I was impressed with the strides the company took to work on this, yet an enormous amount remains to be done.

To address this, I would focus on:

  1. Developing Solutions for Under-Served Populations that are economically viable for the ecosystem: Encouraging fintech innovation that specifically targets under-served populations, such as the unbanked and underbanked, by providing affordable and accessible financial products and services tailored to their needs. This shouldn’t be a charity; it should function as a business.
  2. Improving Financial Literacy and Education: Investing in financial literacy programs and initiatives to empower individuals with the knowledge and skills to make informed financial decisions and effectively utilize fintech tools and services.
  3. Easing Regulatory Barriers while Maintaining Protections: Working with regulators and policymakers to create a supportive regulatory environment that fosters innovation while ensuring consumer protection and mitigating risks associated with fintech solutions.
  4. Promoting Collaboration and Partnerships: Encouraging collaboration and partnerships between fintech companies, traditional financial institutions, governments, NGOs, and other stakeholders to leverage their respective strengths and resources in advancing financial inclusion efforts.
  5. Harnessing Technology for Social Impact: Leveraging emerging technologies such as blockchain, artificial intelligence, and mobile connectivity to develop innovative solutions that address specific barriers to financial inclusion, such as access to credit, identity verification, and remittance services.

By prioritizing financial inclusion within the fintech ecosystem, we can work towards creating a more inclusive and equitable financial system that empowers individuals and promotes economic prosperity for all.

#5.  What fintech problem or solution are you focused on or most interested in right now?

I’m fascinated by a few opportunities in fintech and the broader ecosystem.

  • How enterprises can leverage AI safely, securely, and in a way that both leverages their own data and respects (and reflects) their own customers. I expect the quality and efficiency tradeoffs we’re all making to improve. For example, should I use ChatGPT for my enterprise to quickly spin up a GenAI system, even if it may take my data? Should I use a foundation model to accelerate an AI experiment even if it may be biased and probably trained on data that does not reflect my customers? Instead of LLMs (Large Language Models), think about SLMs (Small Language Models).
  • Empowering companies around AI regulation. In March of 2024, European lawmakers passed the first major regulatory act around AI (Read about it here). This EU AI Act is expected to take effect this summer for Europe. It is the world’s first comprehensive legal framework for regulating artificial intelligence, providing ground rules for how AI is used. The act aims to create a safe, ethical, and transparent framework for developing, marketing, and using AI in the EU, while also fostering innovation and investment, improving governance and enforcement, and creating a single EU market for AI. Much as GDPR, which also began in Europe, drove global convergence on data protection and reshaped how US privacy laws protect consumers, this legislation is expected to influence AI regulation in the US. This is an opportunity for entrepreneurs to assist with this regulatory lift.
  • How the fintech ecosystem can be more efficient and equitable with funding ideas. Fintechs and startups in general spend a lot of time connecting with funders, which is time spent they could be working on their business.

#6.  What is the best career or life advice you have received?

One of the best pieces of advice I’ve received is to embrace lifelong learning. In both career and life, the world is constantly evolving, and new opportunities and challenges arise. By committing to continuous learning and personal development, you not only stay relevant in your field but also open yourself up to new possibilities and growth opportunities.

I’m fortunate to be a Kellogg graduate, a school which embraces lifelong learning for their alumni. Recently I participated in the Kellogg Global Leaders Summit for alumni where we gathered for a few days in Miami to share learnings, and I was proud to give back and speak to this community about AI.

For me, this also translates to how I like to build workplace teams. I optimize hiring for people that embrace a lifelong learning philosophy. The AI techniques they know today may pass in popularity, but people that have a hunger to learn and experiment will always have an edge in fast moving tech arenas.

###
