Australians lack trust in artificial intelligence: research
Trust is an issue when it comes to artificial intelligence (AI), according to a University of Queensland study that found 72% of people don’t trust the technology, with Australians leading the pack.
Trust experts from the UQ Business School, Professor Nicole Gillespie, Dr Steve Lockey and Dr Caitlin Curtis, led the study in partnership with KPMG, surveying more than 6000 people in Australia, the US, Canada, Germany and the UK to unearth attitudes about AI.
Professor Gillespie said trust in AI was low across the five countries, with one nation particularly concerned about its effect on employment.
“Australians are especially mistrusting of AI when it comes to its impact on jobs, with 61% believing AI will eliminate more jobs than it creates, versus 47% overall,” she said.
The research identified critical areas needed to build trust and acceptance of AI, including strengthening current regulations and laws, increasing understanding of AI, and embedding the principles of trustworthy AI in practice. The survey also revealed that people believe most organisations use AI for financial reasons — to cut labour costs rather than to benefit society.
It found that while people are comfortable with AI for task automation, only one in five believe it will create more jobs than it eliminates.
One positive finding was that people have more confidence in universities and research institutions than in other organisations to develop, use and govern AI in the public’s best interests.
Professor Gillespie said the research showed that distrust came from low awareness and understanding of when and how AI technology was used across all five countries.
“For example, our study found while 76% of people report using social media, 59% were unaware that social media uses AI,” she said.
Professor Gillespie said despite the gap in understanding, 95% of those surveyed across all countries expected organisations to uphold ethical principles of AI.
“For people to embrace AI more openly, organisations must build trust with ethical AI practices, including increased data privacy, human oversight, transparency, fairness and accountability,” she said. “Putting in place mechanisms that reassure the community that AI is being developed and used responsibly, such as AI ethical review boards and openly discussing how AI technologies impact the community, is vital in building trust.”
Professor Gillespie is the KPMG Chair of Organisational Trust and is currently integrating the study’s findings on building trustworthy AI into the new UQ Master of Business Analytics program. The full research report is available online.