AI decision framework for investment firms aims to keep ‘humans in the loop’

CFA Institute's senior head of Research says professionals should take heed of ethical risks in development and implementation

Last month, the CFA Institute released a framework to help investment professionals navigate ethical decisions in the development of artificial intelligence in their organizations.

The framework was released alongside a new research paper titled “Ethics and Artificial Intelligence in Investment Management: A Framework for Professionals,” which combines fundamental ethical principles with the applicability of professional standards within the CFA Institute Code of Ethics and Standards of Professional Conduct.

“We're seeing increasing interest and new use cases of artificial intelligence in the investment process,” Rhodri Preece, CFA and senior head of Research at the CFA Institute and author of the study, told Wealth Professional. “More and more firms are starting to look at where and how best can they deploy AI tools.”


In a collaborative study conducted with the Hong Kong Institute of Monetary and Financial Research in 2021, the CFA Institute found roughly half of asset management firms in the Asia Pacific region had no AI or big data applications in production, while a third were in the early phases of adoption. That profile, Preece says, provides a fair qualitative indicator that the global asset management industry is in the formative stages of AI implementation.

“We felt it was important to look at how can we help investment teams, firms, and professionals think about the span of ethical issues when it comes to building, testing and ultimately deploying AI tools in the investment process,” Preece says.

Citing another study in partnership with Coalition Greenwich, he says 54% of institutional investors saw transparency of algorithms as a major area of risk. Another 53% were concerned about intellectual property rights, and 49% cited operational risks, which could arise when AI models evolve outside their initial parameters or start working in a way that’s not in the client’s best interest.

“As an investment professional, you need to make sure that you understand the qualities and properties of your data, and that you're respecting the applicable data protection laws,” Preece says, highlighting the importance of data privacy when considering alternative data sources.

Beyond the question of data privacy, users also have to think about the possibility of bias being introduced into AI algorithms through the data they’re fed. As users validate and cleanse the data to be plugged into a model, Preece says, they should be mindful of the limitations of the information they use, the sampling techniques they employ, and the possibility of biases being imported from the categories or groups within a population they sample from.
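A simple, purely illustrative check along these lines (a hypothetical sketch, not part of the CFA Institute framework) is to compare how groups are represented in a training sample against their known shares of the underlying population, and flag any group whose sample share deviates beyond a tolerance:

```python
from collections import Counter

def representation_gaps(sample_groups, population_shares, tolerance=0.05):
    """Flag groups whose share of the sample deviates from the
    population share by more than `tolerance` (absolute difference)."""
    counts = Counter(sample_groups)
    total = len(sample_groups)
    gaps = {}
    for group, pop_share in population_shares.items():
        sample_share = counts.get(group, 0) / total
        if abs(sample_share - pop_share) > tolerance:
            gaps[group] = (sample_share, pop_share)
    return gaps

# Hypothetical example: small-cap issuers are underrepresented in a
# sample relative to their assumed share of the investable universe.
sample = ["large_cap"] * 80 + ["small_cap"] * 20
population = {"large_cap": 0.6, "small_cap": 0.4}
print(representation_gaps(sample, population))
# → {'large_cap': (0.8, 0.6), 'small_cap': (0.2, 0.4)}
```

The group labels, shares, and tolerance here are invented for illustration; in practice the relevant groups and acceptable deviations would come from the firm’s own data-governance policy.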

“Machines are very good at processing reports, performing tasks, and understanding the properties of vast quantities of data that are beyond the comprehension of a human,” Preece says. “But they don’t possess fundamental ethical attributes that people have, like client loyalty and respect.”


The CFA Institute’s framework also highlights the issue of model interpretability, emphasizing the need for users to understand how a machine arrives at a certain result. On a related note, it says users have to ensure the accuracy of the model by training and evaluating it on a sample data set before applying it to real-world data.
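The evaluation step can be sketched in miniature (a hypothetical toy example, not drawn from the framework itself): hold out a portion of the data the model never sees during training, then measure performance only on that held-out set before any live deployment.

```python
import random

# Hypothetical data: (signal, label) pairs, where label 1 means the
# security outperformed. Labels here are noise-free for illustration.
def make_data(n=200, seed=0):
    rng = random.Random(seed)
    return [(s, 1 if s > 0.5 else 0) for s in (rng.random() for _ in range(n))]

def train_test_split(rows, test_fraction=0.25, seed=42):
    """Hold out a fraction of the data so the model is evaluated
    on observations it never saw during training."""
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

def fit_threshold(train):
    """Toy 'model': place a decision threshold midway between the
    highest negative and lowest positive signal seen in training."""
    positives = [s for s, y in train if y == 1]
    negatives = [s for s, y in train if y == 0]
    return (min(positives) + max(negatives)) / 2

def accuracy(threshold, rows):
    correct = sum(1 for s, y in rows if (1 if s > threshold else 0) == y)
    return correct / len(rows)

data = make_data()
train, test = train_test_split(data)
model = fit_threshold(train)
print(f"held-out accuracy: {accuracy(model, test):.2f}")
```

Real models and datasets are far more complex, but the principle is the same: the accuracy figure that justifies going live should come from data the model was not trained on.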

“From an accountability standpoint, there should also be a robust governance structure around the deployment of these technologies,” Preece says. “Are you making sure there are appropriate checks and balances, that there are thorough reviews before a model is put into a live environment? And are you considering ethical conflicts as part of that governance and oversight mechanism?”

While some investment firms are developing their own AI algorithms, others are relying on third-party vendors and off-the-shelf solutions. For firms that outsource their AI, Preece says the due diligence process should ensure they uphold the same fundamental ethical principles and obligations to protect their clients’ interests.

“It’s important for investment professionals to bring a philosophical lens to the whole process of building models so that they best serve the client,” he says. “It's this combination of having humans in the loop, human intelligence combined with artificial intelligence, that we feel will deliver the best outcome for the clients when you're using AI in the investment process.”