The risk of companies over-egging their use of artificial intelligence (AI) could become a major issue for the industry.
A new report from consultants EY sets out how companies could put themselves in difficult positions with regulators, customers, and shareholders if they exaggerate their AI capabilities, whether inadvertently or deliberately.
The report broke down the issue into two categories:
AI wishing – where management overstates its use of AI because it hopes or believes, but cannot verify, that the organisation is using AI in that way.
AI washing – where the organisation deliberately makes inaccurate representations of how it’s using AI.
Examples of AI washing include overselling of capabilities, vague terminology, and misleading marketing.
The issue is gaining traction across society as more companies claim to be more technologically advanced than they are in pursuit of market clout.
Over the summer, the BBC reported on the issue, naming Amazon as a company criticised for possibly embellishing the AI capabilities of its supermarkets. The stores, whose technological prowess was heavily promoted at launch, allow customers to scan on entry and have their accounts debited online without paying in store or going near a checkout.
“While it’s tempting to lean into and ride the wave of the hype around AI’s potential, a strong culture of integrity ensures that breaches of trust relating to the organisation’s use of AI are the exception rather than the norm,” said EY’s report. “This is increasingly important, given the rise in enforcement action against such breaches.”
Regulators in the US are already moving on this issue at both state and federal level. Colorado this year became the first state to pass AI-focused legislation for the insurance industry, with an act titled “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems” that focuses on issues such as truthfulness in claims, as well as broader AI-related concerns such as bias and discrimination in systems.
Earlier this year, the SEC charged two investment advisers, Delphia Inc. and Global Predictions Inc., with making false and misleading statements about their purported use of AI. The firms agreed to settle the SEC’s charges and pay $400,000 in total civil penalties.
Such regulatory clampdowns could therefore become a significant problem for fund operators that manage insurers’ money, opening them up to wider criticism and a host of legal issues.
Elsewhere, the European Union’s AI Act will also have specific language aimed at AI washing. The EU has already made forays into tackling greenwashing, so there could be a precedent for its actions. The AI Act is built around a risk-based approach to regulating AI systems and addresses risks related to safety, security, and fundamental rights. It will set out guidelines for the development, deployment, and marketing of AI, ensuring compliance and transparency across the board.
Some critics have pointed to AI washing as being similar to greenwashing in scope. “AI washing is going to be one of the most important issues in the AI sector, given its wide use as a buzz word for investment and the general lack of understanding regarding what it truly is,” said James Burnie, partner at gunnercooke, in a paper this year.
How to avoid AI washing?
Several government and regulatory organisations have already developed AI frameworks to address the problem, helping fund operators keep their claims about AI capabilities fair and honest.
The Canadian Securities Administrators (CSA) released a notice at the beginning of November that specifically provided guidance on AI washing.
The CSA noted there should be a reasonable basis for AI disclosure by issuers and overly promotional disclosure should be avoided. “Instead, a factual, balanced discussion of the benefits and risks of AI systems should be provided,” said law firm BLG’s analysis of the guidance.
Examples of AI washing named by the CSA included claims that a company utilises the “most advanced AI technology” or is “modernising the business processes and disrupting their specific industry”.
Other examples included claims that a company’s business is a “global leader in AI”, discussion of the issuer’s acquisition and development of AI technology, and “overly inflating the importance/usage of AI within the company”.
The wider industry will continue to suffer negative headlines and suspicion if it does not regulate its own use of AI-related language. EY said there must be a culture of honesty if customers and investors are to fully believe such claims, rather than fostering an attitude of mistrust.
“Given all of the concepts and components that organisations must consider to not only implement AI but also instil confidence in it, organisations must develop a cohesive integrity-first approach to AI,” said EY in its report. “Ad hoc efforts to chase risks and challenges after the fact will not suffice.”