While artificial intelligence (AI) promises faster and smarter decisions, the Actuaries Institute and the Australian Human Rights Commission (AHRC) are concerned about the potential for discrimination and emphasize the need to prevent it.
To address this, they have developed a guidance resource designed to help insurers comply with federal anti-discrimination laws when AI is used to price or underwrite insurance products.
The guidance follows a 2021 AHRC report that examined the human rights implications of new and emerging technologies, including AI-informed decision-making.
The Actuaries Institute strongly supported the report’s recommendation to develop guidelines for governments and non-government bodies on complying with federal anti-discrimination laws when AI is used in decision-making. The Institute approached the AHRC with an offer to collaborate, and the two bodies jointly developed the guidance.
The guidance resource sets out a number of strategies insurers can apply to the data used by AI systems to address algorithmic bias and avoid discriminatory outcomes, according to Chris Dolman.
Dolman led the Institute’s contribution to the development of the guidance resource as a representative of its Data Science Implementation Committee.
These strategies include rigorous design, regular testing, and ongoing monitoring of AI systems. The guide also offers practical tips to help insurers reduce the risk of successful discrimination claims arising from the use of AI to price risk.
“In an insurance context, AI can be used in many different ways, including pricing, underwriting, marketing, customer service, and managing claims or internal operations,” Dolman said.
He added: “This guidance resource focuses on the use of AI in pricing and underwriting decisions, as these decisions are likely to already use AI and, by their very nature, have significant financial implications for individuals. Such decisions may also be more likely to give rise to discrimination claims from consumers. However, many of the general principles can also apply to AI-informed decision-making in other contexts.”
In a survey of actuaries conducted this year, at least 70% of respondents indicated a need for further guidance as the use of AI grows more widespread.
Elayne Grace, chief executive of the Actuaries Institute, said there was an urgent need for guidance to assist actuaries in carrying out their professional duties, noting that the resource should give consumers comfort that their rights will be respected and protected.
“Australia’s anti-discrimination laws are longstanding, but there is limited guidance and case law for practitioners,” Grace said. “The complexity arising from differing anti-discrimination laws at the federal, state and territory levels adds to the challenges facing actuaries and may point to opportunities for reform.”
She also noted that several intersecting megatrends – including a dramatic increase in ‘big data’, the growing use and power of artificial intelligence and algorithmic decision-making, and shifting consumer understanding and expectations of what is ‘fair’ – make the lack of further guidance a problem for actuaries.
“This collaboration highlights the complexity of the problems society is facing and the need for multidisciplinary approaches, especially where data and technology are used to deliver essential services such as insurance,” Grace said.