Ethics In Financial Services Philanthropic Planning Insights
Ethics Through the Lens of Philanthropic Planning
In a new continuing education (CE) opportunity available on Knowledge Hub+, Azish Filabi, JD, MA, managing director of the American College Cary M. Maguire Center for Ethics in Financial Services, sits down with Jennifer Lehman, PhD, JD, CFP®, CAP®, director of The College’s Chartered Advisor in Philanthropy® (CAP®) Program and assistant professor of philanthropy, to discuss the ethical considerations advisors and other financial professionals must weigh when offering philanthropic planning services.
Filabi kicks off the discussion by reflecting on her professional history, stating that she has always worked to ensure organizations have the right governance structures and tools in place so they can consider ethics in their own decision-making as well as the impact they’re having on society.
She goes on to discuss the work performed at the Center for Ethics in Financial Services, stating the importance of the group’s research mission and outreach. In reference to this research, Filabi explains the purpose as “learning about the challenges that leaders and individuals are facing with respect to ethics so that [we] can reflect back on the work that we do.” By completing this research, Filabi believes the Center for Ethics will be able to provide the industry with valuable lessons on ethical concerns in the field.
Trust in Financial Services
One of these key lessons focuses on the topic of trust in financial services. Filabi shares that “Everyone I talk to highlighted trust as being a key factor in effective work that we do because it's essentially the glue that brings it all together. Some people went as far as to say that they're not in the business of selling financial products. They're selling trust because people have to trust us as professionals to be able to have their money in our good hands.”
Lehman ties this back to the mission of The American College of Financial Services as a whole, stating a goal of providing applied financial knowledge and education, promoting lifelong learning, and advocating for ethical standards to benefit society. As Lehman points out, philanthropy is a key part of the profession tied to social impact.
Filabi weighs in on this, providing a description of ethics in the industry. She emphasizes the importance of doing no harm and acting in accordance with legal requirements while navigating opportunities. However, she points out that this is a more simplistic view of ethics. When providing her perspective, she states, “We at the center like to think about ethics, not only about the compliance and legal challenges that people face in their day to day, but about the gap between these minimum standards that are expected of us and the day to day challenges that people face in their work…what is the standard that clients expect from you so that they can trust you that might not already be codified in the law?”
How Do We View Ethics in the Context of Philanthropy?
Filabi continues by tying this to the field of philanthropy, discussing concepts such as conflicts of interest, duty of care, and loyalty. She admits this to be a challenging balancing act that also requires financial professionals to consider social impact as part of the equation.
Filabi contends that social impact has become especially critical for the philanthropic sector in recent years. She supports this assertion by stating, “Government budgets are really crunched, and so that means that the philanthropic sector is playing a huge role in addressing some of the business and (societal) challenges that we face in the economy, and I think that should be part of an understanding of ethical duties and obligations as we think about social impact.”
Lehman and Filabi go on to discuss several additional topics relating to ethics in the philanthropic sector, including the Donor Bill of Rights, what an organization should do if a donor’s values don’t align with its own, and key items to consider when weighing the ethical implications of our choices. The full discussion is available exclusively on Knowledge Hub+!
To access this learning opportunity and other valuable CE, visit Knowledge Hub+.
More From The College:
- Gain philanthropic and legacy planning knowledge with our CAP® Program.
- Learn about the American College Center for Philanthropy and Social Impact.
- Join the waitlist to be notified when enrollment opens for the TPCP™ Program.
- Learn about the American College Cary M. Maguire Center for Ethics in Financial Services.
Drivers of Trust in Consumer Financial Services
The article uses the Center for Ethics’ Trust in Financial Services Study (2021 Consumer Survey) to explore the drivers of trust in consumer financial services. By contextualizing the Center’s research within existing academic research, the study highlights how both corporate reputation and a consumer’s personal values play a critical role in establishing and maintaining trust in the financial services sector.
The Importance of Building Trust
The research, based on responses from nearly 1,700 U.S. consumers, examines trust levels associated with seven types of financial service providers, including national banks, credit unions, and online-only financial institutions. One of the key findings is the stark contrast in how trust is built between "familiar non-customers" and "customers." For familiar non-customers – respondents who don’t have a relationship with a firm but are familiar with the services provided – trust tends to be influenced by external indicators such as reviews, third-party recommendations, and the overall reputation of the institution.
This dynamic is especially important for digital-only providers, which are newer to financial services; for such firms, trust is often built through indirect experiences. In contrast, for customers who already have established relationships with a provider, trust is more deeply rooted in personal interactions. These customers value shared ethics, protection of their interests, and personalized services, particularly from institutions like credit unions, national banks, and investment firms.
Values Associated with Trust
The study underscores the need for financial institutions to differentiate their trust-building strategies for these two groups. For institutions aiming to attract familiar non-customers, focusing on reputation management and enhancing their public image is critical. By prioritizing transparency, aligning operations with core values, and offering tailored customer experiences, financial service providers can strengthen trust with clients. Conversely, when maintaining existing customer relationships, reinforcing trust through personalized, value-aligned services is key. In addition, these customers consider whether firms are actively protecting their interests.
These findings offer valuable insights for financial institutions looking to navigate the competitive and increasingly digital marketplace. Moreover, the research offers practical guidance for building, maintaining, and repairing trust differentiated by the type of financial entity and the type of customer in the relationship.
More From The College
For further details on the research findings, you can access the full report in the Financial Planning Review.
Pattit, J. M., & Pattit, K. G. (2024). An empirical exploration of the drivers of trust in consumer financial services. Financial Planning Review, e1190.
For more information on the Center’s research on trust's role in financial services, get our full report.
AI Governance in Life Insurance
The afternoon panel on unfair discrimination in insurance underwriting featured Azish Filabi, JD, MA, managing director of the Center for Ethics in Financial Services, and Sophia Duffy, JD, CPA, AEP®, associate professor of business planning at The American College of Financial Services, discussing the ethical and governance challenges of artificial intelligence (AI) in the life insurance industry.
The panel highlighted the ethical and regulatory challenges of AI in the life insurance industry, drawing insights from a 2022 academic paper with the National Association of Insurance Commissioners (NAIC), "AI-Enabled Underwriting Brings New Challenges for Life Insurance: Policy and Regulatory Considerations," and a 2021 white paper, "AI Ethics and Life Insurance: Balancing Innovation with Access."
The panelists emphasized that AI differs from traditional algorithms because complex machine learning systems can obscure the decision-making rationales in underwriting, which creates new legal and ethical challenges. Moreover, once AI systems are embedded within a process, their operations become difficult to disentangle. The opacity of these systems, often referred to as "black box" systems, poses significant technical challenges, necessitating increased technical literacy and education. The proprietary nature of many AI systems adds another layer of complexity. This opacity and complexity make it difficult to ensure that these systems comply with anti-discrimination laws, particularly those that prohibit discrimination based on legally protected characteristics, like race.
AI systems can inadvertently result in unfair discrimination by using data sources that have a historical bias or serve as proxies for protected characteristics, the panelists shared. This can lead to outcomes that are not just unfair, but also potentially illegal. However, determining who is responsible for these decisions is not straightforward. The chain of data ownership involves big data aggregators, algorithm developers, and insurers/lenders. While insurers are ultimately accountable for their products, they may lack the technical expertise to fully understand the intricacies of the AI systems they use. This creates a disconnect where insurers may not have the ability to shape or even fully comprehend the systems they deploy.
Another issue presented was the difficulty in defining and measuring proxy discrimination when it comes to AI-enabled underwriting. Insurers are permitted to use an underwriting factor if it’s related to actual or reasonably anticipated experience, but there’s no clear-cut standard for how effective that factor needs to be. This ambiguity means each insurer’s justification for using a particular factor can be unique, making regulation even more challenging.
Ensuring insurers' systems align with regulations while integrating various external consumer data points is crucial. A major concern is that consumers may remain unaware of which data is used, such as credit scores, credit history, and social media data, raising questions about fairness and the ability to correct inaccuracies. The use of irrelevant or incorrect data can lead to mistakes that become embedded early in the data chain. Such embedded mistakes can be particularly pernicious in complex AI systems that use proxy factors to render decisions, where a single mistaken input can quietly produce an incorrect result.
To mitigate these risks, researchers at The College recommend a three-part framework: establishing national standards to set boundaries for acceptable design and behavior, implementing a certification system to verify that systems are developed in accordance with these standards, and conducting periodic audits of system outputs to ensure ongoing compliance.
Developing nationally accepted standards would involve the creation of guidelines to ensure AI systems adhere to best practices in system design and actuarial principles. This process requires collaborative research and careful consideration of who should define these standards. Key areas to address include: behavioral validity, or ensuring that data accurately reflects the behavior of interest; actuarial significance, assessing how inputs contribute to risk evaluation; and social welfare outcomes, defining a financially inclusive marketplace.
As the panel discussion ended, the conversation turned to the importance of testing for unfair discrimination in AI-enabled underwriting. Emerging rules suggest both objective and subjective approaches. For instance, an objective method might involve a 5% threshold for evaluating disparate impact on race, while a subjective approach would permit insurers to develop their own AI testing methodologies.
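The objective approach described above can be sketched in a few lines of code. This is an illustration only: the 5% figure mirrors the threshold mentioned in the discussion rather than any codified standard, and the group outcomes below are hypothetical.

```python
# Illustrative sketch of an objective disparate-impact test. The 5%
# threshold and the approval data are hypothetical examples, not a
# regulatory standard.

def approval_rate(decisions):
    """Share of approvals; decisions is a list of booleans."""
    return sum(decisions) / len(decisions)

def flags_disparate_impact(group_a, group_b, threshold=0.05):
    """True if the absolute approval-rate gap exceeds the threshold."""
    gap = abs(approval_rate(group_a) - approval_rate(group_b))
    return gap > threshold

# Hypothetical outcomes: 80% approval in group A vs. 70% in group B,
# a 10-point gap that exceeds the 5% threshold.
group_a = [True] * 8 + [False] * 2
group_b = [True] * 7 + [False] * 3
print(flags_disparate_impact(group_a, group_b))  # True
```

A subjective approach, by contrast, would leave the choice of metric, threshold, and methodology to each insurer, which is precisely why the questions of authority and uniformity below remain open.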
Critical questions remain. Should there be a unified approach to testing for unfair discrimination resulting from insurance underwriting? Who should have the authority to determine this approach? And how transparent should insurers be with consumers about data usage and privacy rights? These considerations are essential as we navigate the complexities of AI-enabled underwriting and strive for a fair and equitable system.
The future of insurance underwriting is undoubtedly tied to AI, and regulators and industry can together make sure that future is fair and equitable. We hope our study sparks a necessary conversation within the industry and among regulators.
To learn more about AI in financial services, you can explore further with research from the Center for Ethics in Financial Services.
Unpacking Fairness in Insurance
Panelists included Lisa A. Schilling, FSA, EA, FCA, MAAA, Director of Practice Research, Society of Actuaries Research Institute, and Peggy Tsai, Chief Data Officer, BigID. The session underscored the challenges posed by AI, emphasizing the importance of strong governance, transparency, and ongoing process enhancements to maintain fairness in data practices and ensure equitable outcomes in insurance.
Fairness in insurance products and processes has been a long-time hallmark of good management for successful insurance companies. Regulations require that companies’ processes and practices not unfairly discriminate against consumers. This issue has come to the forefront in the industry recently amid advances in artificial intelligence (AI). Panelists underscored that AI and advanced analytics have heightened both the positive potential and negative implications of existing insurance practices. The discussion emphasized the need for a nuanced approach to fairness that addresses the complexities introduced by these technologies.
A pivotal theme was the significance of data quality and governance in ensuring fairness. Highlighting the inherent biases that can emerge during data collection, panelists stressed the ongoing recalibration and transparency necessary in model outputs to mitigate these biases effectively. Robust stewardship practices should prioritize data integrity before model building and decision-making. Ensuring accurate risk classification aligned with expected claims values can serve as a fundamental aspect of actuarial fairness.
The panel then examined the challenges posed by data proxies and synthetic data in insurance models. Synthetic data is data produced by machines, sometimes to represent human behaviors. Data proxies similarly rely on machine-driven analysis to represent real-world behavior. Concerns were raised about the accuracy and representativeness of these proxies, particularly in reflecting real-world demographics. The difficulty of removing synthetic data once integrated into models underscored the importance of rigorous validation and transparency throughout the modeling process, including at the beginning of development. A critical aspect of the discussion addressed the use of proxies for race and ethnicity in insurance, highlighting the ethical and regulatory implications. Panelists stressed the necessity of rigorous data management and model validation processes to ensure compliance and fairness in risk assessment practices.
The discussion concluded with a consensus on the imperative for continuous monitoring, recalibration, and transparent communication in insurance practices. Balancing data-driven decision-making with fairness and objectivity remains a paramount challenge, requiring ongoing efforts to align technological advancements with ethical standards.
To learn more about AI in financial services, you can explore further with research from the Center for Ethics in Financial Services.
Insights and Highlights: Self-Regulatory Approaches to AI Governance
The panelists emphasized that good model development practices, irrespective of regulatory requirements, lead to better performance and predictability in tech investments. Companies implementing self-governance ahead of regulations often perform better by integrating risk management with economic considerations. The NIST framework, “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence,” addresses both technical and social impacts of AI, ensuring comprehensive governance.
Mandated by Congress in 2021, NIST developed a risk-based framework for managing AI models and practices. This flexible resource aids organizations in governing, mapping, measuring, and managing bias in AI. By focusing on governance, policies, procedures, and organizational culture, organizations can take a comprehensive approach to this challenge. Through such a proactive approach to governance, the framework aims to help organizations promote trustworthy AI practices, including model validity, reliability, security, resilience, explainability, accountability, transparency, privacy, fairness, and bias mitigation.
The panel also discussed the relationship between federal and state initiatives and the role of self-regulation in AI governance. One panelist mentioned the AI Executive Order's contribution to defining real risks and the ongoing work on an AI risk management profile for generative AI. Another stressed the need for clear documentation and repeatable practices to provide assurance to partners.
The conversation also covered the challenges of accountability within organizations, highlighting the need for a cultural shift towards responsible AI use. The panel emphasized the importance of integrating AI risk management with broader enterprise risk management frameworks and adopting a shared responsibility model with third-party vendors.
Looking forward, one panelist predicted that AI risk management would become a distinct job category, with an increased focus on the societal impacts of AI. Another anticipated a progressive impact on software quality control driven by AI, leading to more regulated software development practices.
In summary, the panel highlighted that, given the evolving regulatory landscape, there is a need for clear and transparent AI governance practices, as well as for interdisciplinary collaboration and a cultural shift toward responsible AI use.
To learn more about AI in financial services, you can explore further with research from the Center for Ethics in Financial Services.
Optimizing Your Practice’s Relationship With AI
With the growth of AI in the workplace, it may be time to consider how you can best implement AI tools to automate busywork and turn your attention to more valuable tasks that directly support your clients. Fortunately, our new practice management workbook is here to assist!
Learn more about common AI topics like the risks associated with AI, how much time you can save, and the best way to get started. We’ll also provide a closer look into more specific topics such as evaluating the prompts you provide to your AI tools and which activities in your workplace are best to delegate to AI.
Increase your knowledge on these trending topics and more as you propel your practice into the modern era of advising with help from our informative practice management workbook.
Insights and Highlights: AI Regulation Panel Key Takeaways
The panelists discussed the regulation of artificial intelligence (AI) and its impact on the insurance industry. They highlighted the industry's demand for AI guidance and frameworks, which has prompted regulators to issue new rules and guidance on how AI can be safely integrated into insurance processes.
In recent months, state insurance regulators have responded to the widespread adoption of AI in multiple ways. The National Association of Insurance Commissioners (NAIC) finalized its model bulletin on the use of AI systems by insurers, which establishes a blueprint that state regulators can use to address the topic in their jurisdictions. In addition, on January 17, 2024, the New York State Department of Financial Services (NYDFS) issued a proposed insurance circular letter emphasizing the use of artificial intelligence systems (AIS) and external consumer data and information sources (ECDIS) in insurance underwriting and pricing. This proposal aims to enforce compliance with existing laws and regulations while promoting transparency, fairness, and governance to address potential discrimination and bias risks. The Colorado Division of Insurance has also addressed the risks of ECDIS in life insurance underwriting, and proposed new rules to test algorithms for outcomes that may be unfairly discriminatory.
Summit participants discussed challenges such as underrepresented markets in the insurance space and the need for compliance tools to evolve for novel risks, with an emphasis on balancing innovation while safeguarding consumer interests and ethical considerations. The evolving regulatory framework aims to incorporate risk management and transparency principles consistent with the new technology used, with stakeholder engagement seen as crucial in supporting informed and effective regulation.
The panelists highlighted approaches for navigating the ethical risks posed by algorithms developed by unregulated third-party vendors, acknowledging challenges associated with proxy factors that may lead to unfair outcomes. Regulatory efforts to establish approaches to auditing models could improve risk management processes. Ongoing efforts were discussed to refine regulatory frameworks addressing ethical issues, with an emphasis on risk management, transparency, and new methodologies for outcome testing.
Audience questions reflected industry concerns about regulatory readiness and governance in AI implementation. Proactive engagement from industry stakeholders was encouraged. The dialogue underscored the complexity of integrating AI into decision-making processes and emphasized the ongoing need for human oversight to ensure good outcomes, consistent with law. Overall, the discussion highlighted collaborative efforts between regulators and industry stakeholders in navigating the evolving landscape of AI regulation in financial services.
To learn more about AI in financial services, you can explore further with research from the Center for Ethics in Financial Services.
Insights and Highlights: AI Ethics in Financial Services Summit
Held as an immersive and educational meeting of financial experts spanning diverse corporate roles, the summit featured panel-led discussions exploring the pivotal concept of trust in AI within the financial services industry. Key discussions encompassed:
- The ethical risks of AI in finance: From fairness concerns to biased algorithms, including potential pitfalls and how to mitigate them.
- Restoring trust through responsible AI: Best practices for developing, deploying, and governing AI ethically and transparently.
- The future of AI in insurance underwriting: Insights into the latest regulatory updates and ethical considerations in this critical area.
During our productive roundtable discussion, issues regarding transparency, mitigating bias, and the necessity for standardized practices were highlighted. The dialogue underscored the significance of employing data ethically to cultivate consumer trust. There was consensus on the importance of collaborative efforts to develop trustworthy AI solutions that ensure fair and responsible practices within the insurance industry in particular.
Panel topics included:
- AI Regulation Update
  - Jillian Froment, Executive Vice President and General Counsel, American Council of Life Insurers (ACLI)
  - Kaitlin Asrow, Executive Deputy Superintendent, Research and Innovation Division, New York Department of Financial Services
  - Stephanie Schmelz, Deputy Director, Federal Insurance Office, U.S. Department of the Treasury
- Self-Regulatory Approaches to AI Governance
  - Moderator: Sophia Duffy, JD, CPA, AEP®, Associate Professor of Business Planning, The American College of Financial Services
  - Anthony Habayeb, Co-founder & CEO, Monitaur
  - Reva Schwartz, Research Scientist, National Institute of Standards and Technology
- Fireside Chat
  - Arezu Moghadam, Ph.D., Managing Director and Global Head of Data Science, J.P. Morgan Asset Management
  - Marty Edelman, Senior of Counsel, Paul Hastings
- Unpacking “Fairness” in Insurance
  - Moderator: Azish Filabi, JD, MA, Associate Professor of Business Ethics, Executive Director, The American College Cary M. Maguire Center for Ethics in Financial Services
  - Lisa A. Schilling, FSA, EA, FCA, MAAA, Director of Practice Research, Society of Actuaries Research Institute
  - Peggy Tsai, Chief Data Officer, BigID
- Case Study - AI Governance in Life Insurance
  - Azish Filabi, JD, MA, Associate Professor of Business Ethics, Executive Director, The American College Cary M. Maguire Center for Ethics in Financial Services
  - Sophia Duffy, JD, CPA, AEP®, Associate Professor of Business Planning, The American College of Financial Services
The summit provided valuable insights into the evolving landscape of AI regulation and ethics, emphasizing the importance of collaboration, transparency, and responsible AI practices.
Stay tuned for forthcoming insights highlighting specific discussion topics from our esteemed panelists, including regulators, researchers, and industry leaders.
To learn more about AI in financial services, you can explore further with research from the Center for Ethics in Financial Services.
Five Key Questions for Ensuring Responsible AI in Financial Services
Azish Filabi, JD, MA, Executive Director of the American College Cary M. Maguire Center for Ethics in Financial Services, and Neeraja Rasmussen, Founder and CEO of Spyglaz and an advisory council member of the American College Center for Women in Financial Services, explore the vital link between responsible AI and financial progress in a recent article published by Financial Advisor Magazine. They shed light on key principles and questions guiding financial firms as they navigate this dynamic landscape and shape the trajectory of responsible AI in the industry.
As the financial services landscape undergoes a profound shift due to the integration of AI, cultivating responsible and trustworthy AI systems has become paramount. Filabi and Rasmussen delve into the intersection of AI and financial services, emphasizing the imperatives of unbiased, fair, and dependable AI within the context of high-stakes decision-making in the financial industry. The article unveils five pivotal questions that serve as a strategic guide for financial leaders, addressing crucial features of trustworthy AI that range from establishing rigorous data and algorithm audit procedures to integrating responsible AI principles throughout the technology development process.
Critical questioning is also a strategic necessity for industry leaders navigating the complex landscape of AI adoption. This dialogue ensures a commitment to developing AI technologies aligned with the highest standards of ethical responsibility and integrity. The National Institute of Standards and Technology (NIST)’s efforts to standardize trustworthy AI terminology and the inclusion of trustworthy AI principles in the DARPA AI Forward initiative further exemplify the comprehensive approach to addressing the complexities of this transformative field. As financial institutions grapple with the challenges of AI integration, these insightful questions are invaluable guidelines for discussions with both internal teams and AI vendors.
To explore these insights and other strategic guidance on fostering responsible AI, read the article published by Financial Advisor Magazine.
To learn more about artificial intelligence in financial services, you can explore further research findings from the Center for Ethics in Financial Services.