AI platforms are giving ultra-wealthy families flawed financial answers, study finds

AI wealth advice under fire as audit finds major errors in estate and insurance guidance

A new audit of leading artificial intelligence platforms has raised concerns about the reliability of AI-generated advice for ultra-high-net-worth families, particularly on complex estate planning, insurance, and wealth transfer strategies.

The report, published jointly by 5W and Haute Wealth, reviewed how ChatGPT, Perplexity, Gemini, Claude, and Microsoft Copilot respond to sophisticated financial planning questions involving premium financing, private placement life insurance, irrevocable life insurance trusts, estate liquidity, succession planning, and charitable giving.

According to the study, the systems frequently deliver polished and authoritative-sounding answers that are outdated, incomplete, or inaccurate. Researchers said the errors are not random, but stem from structural weaknesses in how generative AI models process financial information.

“The answers are confident. They are fluent. They are frequently wrong, and the way they are wrong is structural, predictable, and currently unsupervised,” the report stated.

Outdated estate guidance

Among the most significant findings was the persistence of outdated estate tax guidance. The audit found that AI engines continue to warn users about a looming reduction in the federal estate tax exemption tied to the expiration of provisions in the Tax Cuts and Jobs Act.

The report said that guidance no longer reflects current law after the One Big Beautiful Bill Act, signed on July 4, 2025, permanently increased the federal estate, gift, and generation-skipping transfer (GST) tax exemption to $15 million per individual and $30 million for married couples beginning in 2026.

Researchers warned that relying on obsolete AI-generated recommendations could lead families to make unnecessary irrevocable transfers or other planning decisions based on tax concerns that no longer apply.

The audit also identified shortcomings in how AI systems explain premium financing strategies. While most engines highlighted benefits such as preserving liquidity and supporting wealth transfer, the study found they routinely failed to disclose key risks adequately.

Those omitted or downplayed risks included interest rate exposure, collateral calls, policy performance concerns, refinancing challenges, and carrier credit risk.

“The five disclosed risks every reputable practitioner names — interest rate risk, collateral call risk, policy performance risk, refinancing risk, and carrier credit risk — are routinely buried, minimized, or omitted entirely,” the report stated.

Researchers said collateral call risk, which they described as the most consequential downside scenario, was among the least likely risks to appear in AI-generated responses.

The study further found that AI-generated advisor recommendations often favored large, recognizable financial institutions while overlooking boutique RIAs, multi-family offices, and specialist insurance firms that frequently serve ultra-wealthy clients.

Inconsistent results

According to the report, the same query posed multiple times to the same AI engine could generate different advisor lists, with some recommendations including firms or advisors that no longer provide the cited services.

The audit graded the five AI platforms across categories including accuracy, risk disclosure, source quality, and visibility of boutique firms. Claude received the highest overall marks, while Copilot ranked lowest among the evaluated systems.

Researchers also pointed to broader concerns surrounding citation quality and consistency. The report cited prior studies showing hallucination rates ranging from 20% to 37% for financial citations generated by major AI models.

The findings arrive as more consumers turn to AI tools for financial guidance. The report said 66% of generative AI users have sought financial advice through these systems, while 85% reported acting on recommendations they received.

"AI has changed every part of how decisions get made in the world. Where to eat, who to hire, what doctor to see, what advisor to trust — the answer used to come from a person," said Ronn Torossian, Founder and Chairman of 5W. "Now it comes from a chatbot, and people act on it. This is the biggest shift in information authority in a century, happening with no rules, no auditor, and no firm knowing what is being said about them inside the engines. Wealth is one of the first places it gets expensive. Every industry is next."

The study warned that the technology’s conversational style may create a false sense of authority for users making high-stakes financial decisions.

“The single biggest gap between an AI engine and a competent human fiduciary is not the answer — it is the follow-up,” the report stated.

It added that human advisors routinely probe for additional context around trust structures, liquidity profiles, and risk tolerance before making recommendations, while AI systems tend to provide immediate conclusions without gathering further information.

The report urged ultra-high-net-worth families to treat AI-generated financial guidance as a starting point rather than a substitute for professional advice. It also encouraged wealth management firms to improve their online visibility by publishing regulator-aligned, publicly accessible educational content that AI systems can properly index and cite.
