Managing risks associated with the use of artificial intelligence

Litrow Hickson

Many organizations now use artificial intelligence (AI) to improve the delivery of their products and services. A common example is the chatbot, which uses natural language processing to answer customers' questions without the need for human assistance.

However, AI technology comes with unique risks. In March 2016, Microsoft tested a new chatbot called Tay on Twitter. Within hours of its launch, Tay began posting profane and offensive remarks, including racist and anti-religious content. Needless to say, Microsoft had to suspend the account. In another example, Amazon had to update its Alexa voice assistant after it challenged a 10-year-old girl to touch a coin to the exposed prongs of a half-inserted plug.

If your organization uses a chatbot to provide customer service, or uses AI to analyze data or perform other tasks, there are legal and reputational risks that you should consider guarding against.

Unlike traditional programming, where humans tell the computer exactly what to do, AI uses machine learning and other techniques to derive its own rules from data. In this way, AI mimics the human brain, performing tasks typically done by humans, such as perceiving, analyzing and processing information to make informed decisions. AI is now used to make medical diagnoses, predict the outcomes of tax cases and review contracts.

Because AI is not recognized as a separate legal entity, actions taken by AI (however unpredictable) can expose your organization to liability. In the absence of laws specifically governing AI in Jamaica, here are some measures you should consider in managing AI-related liability.

Transparency and human oversight

The first mechanism for reducing risk is adequate disclosure. If AI is used in any aspect of the business that involves customers or other stakeholders, they should be informed that AI is being used. Although Jamaica has no regulations of this kind, the European Union has issued guidelines requiring documentation and record-keeping, transparency and the provision of information to users, and human oversight.

Behind the scenes, businesses should ensure that their AI technology is continually tested so that problems are addressed as soon as they arise. The swift action taken by Microsoft and Amazon in the examples above may have saved them millions of dollars in damages from civil claims. There are also many well-known incidents in which AI produced biased results because of the data used to train it. It is imperative that AI be monitored to avoid potential claims of discrimination and civil liability.

In some cases, the appropriate safeguard may be to leave the final decision to a human. If there is a risk that AI-assisted decision-making in your business could produce biased or incomplete results, you should ensure that there is human oversight of the decision-making process.

Use of contracts

A key benefit of using contracts to manage liability arising from the use of AI is that they allow the parties to allocate risk before the event causing the loss occurs. For example, a contract may include an indemnity for losses caused by AI. Such clauses can be helpful where a third-party AI provider is responsible for testing and monitoring. If sued, your business may be able to rely on that indemnity to recover damages payable to customers for AI malfunctions.

Other types of AI-related risk can also be managed by contract. For example, a contract can oblige an AI provider to maintain the privacy and confidentiality of the data on which the AI is trained. This is particularly important given the data protection obligations imposed by law in Jamaica.

Insurance coverage

Admittedly, dedicated AI liability insurance may not be widely available at this time. However, some AI-related losses may be covered under existing policies, such as those covering business interruption. Your organization should consider whether insurance is available for the particular risks associated with its use of AI. As the uses of AI proliferate, we are likely to see more such insurance products being offered. In the United Kingdom, for example, legislation requires owners of AI-powered autonomous vehicles to obtain and maintain insurance against losses caused by those vehicles.

Legal and regulatory considerations

Your organization should also consider whether there are rules which, though not specific to AI, could affect the use of AI in your business. If your business uses AI to sort and process customer data, you should consider whether any privacy or confidentiality issues arise and guard against them. For example, if you are processing personal data, you should ensure that all processing complies with Jamaica's data protection laws.

Every business should consider the legal and reputational risks that may arise from its use of AI and guard against those risks. Your organization should seek legal advice to identify the unique set of risks facing your business.

Litrow Hickson is an associate at Myers, Fletcher & Gordon and a member of the firm's litigation department. Litrow may be contacted at [email protected] or through www.myersfletcher.com. This article is for general information purposes only and does not constitute legal advice.


