Trusting Artificial Intelligence? Still a work in progress, survey shows


Our reliance on AI-driven results seems to grow every day, both professionally and personally. But are we ready to fully trust those results? Are we sure the data fed into these systems is accurate? Are decision models and algorithms kept up to date? Are they free of bias? Are humans kept in the loop?

The answers to these questions are still up in the air, according to a recent survey of 7,502 companies worldwide, commissioned by IBM and conducted by Morning Consult.

The use of AI continues to grow. Today, 35% of companies are using AI in their business, up from 31% a year ago, and another 42% are exploring it. Respondents report benefits such as cost savings and efficiencies (54%), improved IT or network performance (53%), and better customer experiences (48%).

Trust is a priority, but many organizations haven’t taken enough steps to ensure AI is trustworthy, the survey also shows. Eighty-five percent of respondents agree that consumers are more likely to choose a company that is transparent about how its AI models are built, managed, and used. Additionally, 84% say that “being able to explain how their AI arrives at different decisions is important to their business.”

Maintaining brand integrity and customer trust is the most commonly cited reason for pursuing trustworthy AI, named by 56% of managers. Another 50% point to meeting external regulatory and compliance obligations, 48% cite the ability to govern data and AI throughout the lifecycle, and a further 48% want the ability to monitor data and AI across the lifecycle.

A majority of respondents say they lag behind in many of the efforts needed to build trust, from finding the right skills to proactively avoiding bias. Most organizations have not taken key steps to ensure their AI is trustworthy and accountable, such as reducing bias (74%), tracking performance variations and model drift (68%), and ensuring they can explain AI-based decisions (61%).

“A significant challenge is that the field of applied AI ethics is still relatively new, and most companies cite a lack of skills and training,” the survey authors say. The main barriers to building greater trust in AI are:

  • Lack of skills and training to develop and manage trustworthy AI (63%)
  • AI governance and management tools that do not work across all environments (60%)
  • Lack of an AI strategy (59%)
  • AI results that are not explainable (57%)
  • Lack of company guidelines for developing trustworthy and ethical AI (57%)
  • AI vendors that do not include explainability features (57%)
  • Lack of regulatory guidance from government or industry (56%)
  • Building models on data that contains inherent bias (56%)

The good news, the survey authors say, is that the further along a company is in deploying AI, the more likely it is to value trustworthiness. IT professionals at enterprises currently deploying AI are 17% more likely to say their organization values AI explainability than those just exploring AI.

The survey also shows that most of the activities associated with AI trust focus on protecting data privacy.
