AI washing: Understanding the risks for directors, officers and insurers

30 April 2025

As artificial intelligence ('AI') technologies continue to rapidly evolve, so too do the associated risk exposures. In this article, we discuss the emerging phenomenon of 'AI washing', its implications for directors, officers and their insurers, and strategies for mitigating associated risks.

What is 'AI washing'?

AI has the capacity to be truly transformative. Businesses seen to be riding the crest of the AI wave are often perceived as more forward-thinking, making them attractive to investors and customers alike. However, this has given rise to a phenomenon known as 'AI washing': the practice of companies overstating or misrepresenting their AI capabilities or prospects.

AI washing can take a variety of forms, including companies:

  • Stating that they are using AI, or are using it in a particular manner, when they are not.
  • Overemphasising the AI features of their operations.
  • Misrepresenting or exaggerating the capabilities of their AI systems.
  • Making claims about their use of AI which they are unable to substantiate.

For instance, a financial services company might claim that its investment platform uses AI to deliver real-time investment advice based on market trends. In reality, its platform relies solely on historical data and pre-programmed algorithms, without employing AI techniques like deep learning, which would allow it to learn and improve over time by processing large data sets.

AI washing can be intentional; the company might have deliberately marketed its platform as using AI to appear more technologically advanced than it actually is. However, AI washing can also occur inadvertently, when companies adopt AI terminology and buzzwords without fully understanding the underlying technology. In the scenario above, the company may have mistakenly conflated advanced data analysis with AI, or overlooked the nuances of the terminology adopted.

Key regulatory and civil law risks

Below, we outline some areas of potential risk for directors and officers ('D&Os'), both here in the UK and in the US, where scrutiny of AI washing practices is rising.

UK:

  • Advertising Standards Authority ('ASA'): The ASA has published guidance on the use of AI terminology in advertising, warning that companies should not make misleading claims about AI and should ensure that their ads remain responsible. The ASA has already upheld a complaint regarding a paid-for Instagram post promoting the photo enhancer app 'Pixelup', which was found to have exaggerated its AI capabilities. The ASA can impose various sanctions on companies, and will often refer instances of non-compliance to other regulatory bodies, which can investigate and impose penalties on companies and their D&Os.
  • Competition and Markets Authority ('CMA'): This month, key provisions of the Digital Markets, Competition and Consumers Act 2024 came into force, granting the CMA robust new powers to directly enforce consumer protection laws. Misleading consumers with false information, such as through AI washing, can now lead to substantial fines for both the company (up to 10% of global turnover) and the individuals involved (up to £300,000). The CMA also has the authority to impose fines on individuals, including D&Os, for procedural breaches, such as non-compliance with information requests or other investigative procedures.
  • Financial Conduct Authority ('FCA'): The FCA introduced an anti-greenwashing rule last year due to concerns that firms were making exaggerated sustainability-related claims about their products and services. It remains to be seen whether the FCA will introduce a similar rule to address AI washing. In any event, certain existing FCA rules and requirements are already relevant to the practice; for instance, firms must communicate information to clients in a way that is fair, clear and not misleading. The FCA has extensive investigatory and enforcement powers that apply to regulated entities and individuals.

US:

  • Securities Class Actions: Unlike the UK, the US has a well-developed class action regime, in which multiple shareholders can collectively bring a lawsuit against a company and its executives for alleged violations of securities laws. Increasingly, AI disclosures are forming the basis of such claims: at the time of writing, 46 AI-related securities class actions have been filed in the US since 2020, the majority of which involve allegations of AI washing. It is common for members of senior management to be named as defendants in such actions, which are notoriously expensive to defend.
  • Securities and Exchange Commission ('SEC'): Earlier this year, the SEC settled charges against a restaurant tech company, Presto Automation Inc., for allegedly making false statements about its AI-assisted speech recognition technology. This followed a string of SEC enforcement actions in 2024 against companies and D&Os for alleged AI washing. Recent leadership changes at the SEC and shifting enforcement priorities mean the future of this trend is unclear. However, earlier this month, the SEC filed a complaint against the founder of fintech startup Nate, Inc., alleging that he fraudulently raised over $42 million in investments by falsely claiming his app used AI technology to complete purchases, which were in fact handled by contract workers in the Philippines. Board members of companies subject to SEC oversight will need to stay vigilant as investor interest in AI grows.
  • Federal Trade Commission ('FTC'): In September 2024, the FTC announced enforcement actions against five companies alleged to have participated in AI washing. The announcements were made as part of the FTC's 'Operation AI Comply' initiative, a "law enforcement sweep" aiming to tackle unfair and deceptive practices involving the use of AI.

Mitigating risk

AI washing is a particularly potent risk because there is no universally accepted definition of 'AI'. The term is generally understood to refer to applications and technologies that mimic human intelligence; however, as the field develops, so too does the associated terminology. This can lead to differing opinions on what does and does not constitute AI at different points in time.

To avoid inadvertent misstatements, it is advisable for companies and their D&Os to be clear about the language they are using. When adopting technical vocabulary and buzzwords, businesses should explain what they mean by these terms and avoid making broad and sweeping claims about AI without further explanation. D&Os should ensure that claims can be substantiated and keep a record of supporting evidence.

Claims that could be considered AI washing can occur in a variety of contexts. In investor-facing settings, statements might be made in presentations, pitches and forecasts. They can also appear in anything from annual reports and company filings to product descriptions, marketing materials and social media content. To ensure communications are consistent and accurate, company boards should regularly review public-facing statements and consider implementing company-wide policies and training which specifically address the making of assertions about AI.

D&Os and their brokers will need to actively consider AI-related risks when reviewing their policy wordings. The scope of coverage may need to be assessed and revised to ensure that exposures such as AI washing are covered, and not carved out by technology or cyber exclusions. Bespoke products designed specifically to cover litigation, regulatory and reputational risks arising from AI issues are also likely to emerge.

During the underwriting process, D&O insurers may request information regarding AI governance. This could include whether the policyholder has a formal AI policy, how AI disclosures are monitored and managed, and how testing is documented to support AI-related claims. In light of the class action landscape in the US, underwriters might require additional details from policyholders operating in jurisdictions with a high incidence of AI-related lawsuits, such as California and New York, especially for Side C cover placements.

Please contact Tom Mungovan, Partner, and Emma Smith, Senior Associate, with any queries relating to this rapidly developing and complex area.