Ethical, trust, and skill barriers slow generative AI progress in EMEA

76% of consumers in EMEA believe AI will have a significant impact over the next five years, yet 47% question the value it will bring and 41% are worried about its applications.

This is according to research from enterprise analytics AI firm Alteryx.

Since the release of ChatGPT by OpenAI in November 2022, there has been significant buzz about the transformative potential of generative AI, with many considering it one of the most revolutionary technologies of our time. 

With a significant 79% of organisations reporting that generative AI contributes positively to business, it is evident that a gap needs to be addressed to demonstrate AI’s value to consumers both in their personal and professional lives. According to the ‘Market Research: Attitudes and Adoption of Generative AI’ report, which surveyed 690 IT business leaders and 1,100 members of the general public in EMEA, key issues of trust, ethics and skills are prevalent, potentially impeding the successful deployment and broader acceptance of generative AI.

The impact of misinformation, inaccuracies, and AI hallucinations

AI hallucinations – where AI generates incorrect or illogical outputs – are a significant concern, and trusting what generative AI produces is a substantial issue for both business leaders and consumers. Over a third of the public are anxious about AI’s potential to generate fake news (36%) and its misuse by hackers (42%), while half of business leaders report that their organisations are grappling with misinformation produced by generative AI.

Moreover, the reliability of information provided by generative AI has been questioned. Among the general public, half found the data they received from AI to be inaccurate, and 38% perceived it as outdated. On the business front, concerns include generative AI infringing on copyright or intellectual property rights (40%) and producing unexpected or unintended outputs (36%).

A critical trust issue for businesses (62%) and the public (74%) revolves around AI hallucinations. For businesses, the challenge involves applying generative AI to appropriate use cases, supported by the right technology and safety measures, to mitigate these concerns. Close to half of the consumers (45%) are advocating for regulatory measures on AI usage.

Ethical concerns and risks persist in the use of generative AI

In addition to these challenges, there are strong and similar sentiments on ethical concerns and the risks associated with generative AI among both business leaders and consumers. More than half of the general public (53%) oppose the use of generative AI in making ethical decisions. Meanwhile, 41% of business respondents are concerned about its application in critical decision-making areas. There are distinctions in the specific areas where its use is discouraged; consumers notably oppose its use in politics (46%), and businesses are cautious about its deployment in healthcare (40%).

These concerns find some validation in the research findings, which highlight worrying gaps in organisational practices. Only a third of leaders confirmed that their businesses ensure the data used to train generative AI is diverse and unbiased. Furthermore, only 36% have set ethical guidelines, and 52% have established data privacy and security policies for generative AI applications.

This lack of emphasis on data integrity and ethical considerations puts firms at risk. 63% of business leaders cite ethics as their major concern with generative AI, closely followed by data-related issues (62%). This scenario emphasises the importance of better governance to create confidence and mitigate risks related to how employees use generative AI in the workplace. 

The rise of generative AI skills and the need for enhanced data literacy

As generative AI evolves, establishing relevant skill sets and enhancing data literacy will be key to realising its full potential. Consumers are increasingly using generative AI technologies in various scenarios, including information retrieval, email communication, and skill acquisition. Business leaders report using generative AI for data analysis, cybersecurity, and customer support, yet despite the reported success of pilot projects, several challenges remain, including security problems, data privacy issues, and output quality and reliability.

Trevor Schulze, Alteryx’s CIO, emphasised the necessity for both enterprises and the general public to fully understand the value of AI and address common concerns as they navigate the early stages of generative AI adoption.

He noted that addressing trust issues, ethical concerns, skills shortages, fears of privacy invasion, and algorithmic bias are critical tasks. Schulze underlined the necessity for enterprises to expedite their data journey, adopt robust governance, and allow non-technical individuals to access and analyse data safely and reliably, addressing privacy and bias concerns in order to genuinely profit from this ‘game-changing’ technology.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
