August 28, 2023

Conversica Survey Highlights Need for Responsible AI Adoption in the Corporate World

In Brief

The survey finds a significant gap in AI ethics prioritization between companies already using AI and those planning to adopt it.

Business leaders with existing AI implementations demonstrate a deeper understanding of AI concerns.

The survey emphasizes the need for proactive development of comprehensive guidelines for ethical AI use before adopting AI solutions.

AI companies are racing to provide the best generative AI solutions for enterprises, reflecting a growing organizational appetite for artificial intelligence to streamline workflows and enhance productivity. But concerns about the ethical implications of AI and corporate responsibility have surfaced.

A recent survey by AI-solutions provider Conversica delves into the perspectives of business leaders in the United States regarding the responsible use of AI. The findings offer insight into the challenges and priorities associated with AI ethics.

Disparity in AI Ethics Prioritization

The survey’s results reveal a stark disparity in the prioritization of ethical AI practices between companies that have already integrated AI and those still in the planning phase. Only one in 20 businesses planning to integrate AI in the coming year has already established guidelines for its responsible use.

Of the 500 companies surveyed, 42% have embraced AI technology, and these companies acknowledge the importance of well-defined guidelines for the responsible use of AI. This awareness stems from first-hand experience with challenges such as a lack of transparency, misinformation, and inaccurate training data.

In contrast to respondents who have not yet integrated AI, those who have already adopted AI exhibited a more comprehensive understanding of AI-related issues.

Within this group, 21% expressed concern about false information, compared with 17% of the overall participant pool. Similarly, 20% worried about the precision of data models, versus 16% overall.

Furthermore, 22% of those using established AI services cited a lack of transparency as a concern, compared with only 16% of the general group.

Jim Kaskade, CEO of Conversica, explains that this difference in perception stems primarily from the understanding gained through actual AI implementation. In the absence of comprehensive government regulations, organizations are taking the initiative to establish their own ethical frameworks to guide AI deployment.

“The U.S. government hasn’t yet established specific regulations for how companies sell and employ artificial intelligence, so it’s critical that those planning to implement AI-powered products have their own guardrails in place,” Kaskade told Metaverse Post in an interview.

The Effect of the AI Knowledge Gap

The survey also highlights a wide knowledge gap among leaders at companies adopting AI, with a significant number of respondents admitting they are unfamiliar with their organization’s AI ethics policies.

Over one-fifth (22%) of respondents from companies currently using AI indicated that they are somewhat or very unfamiliar with the safety measures provided by their AI service providers.

This knowledge gap could hinder informed decision-making and expose businesses to unforeseen risks.

As the pace of AI adoption accelerates, Conversica emphasized the importance of bridging this gap.

Kaskade suggests that businesses invest in comprehensive training programs and diverse interdisciplinary teams to ensure a thorough understanding of AI ethics. He also proposes that leaders formulate policies governing the ethical use of AI and communicate them openly to the entire company.

He further suggested that businesses adopt a Responsible AI policy framework and refine it progressively over time.

“Be flexible. Be ready to do more. As this technology evolves at a fast pace, the rules will need to change, and AI technology will become more and more enterprise-ready,” he added.

Challenges in Implementing AI Ethics Policies

Despite 73% of respondents agreeing on the importance of ethical guidelines for AI, the survey showed that only 6% have actually implemented AI ethics policies. This raises questions about the factors hindering policy implementation, even as the significance of such policies is recognized.

The dynamic nature of AI technology and the lack of standardized frameworks could be contributing to this challenge.
Kaskade said that the pressure for enterprises to remain competitive by adopting AI may lead some companies to prioritize deployment over policy development.

“Our analysis of the data is that even though people are aware of the potential challenges with AI, they seem to be creating these policies on the go, working in response to issues they experience and opportunities they identify for improvement,” he said. “However, there’s great risk in working this way—the specific solution an organization adopts can have a big impact on what kinds of safeguards are necessary. Creating these policies before adopting AI-powered products is the ideal.”

“Until trusted sources of AI policy produce simple, easy-to-adopt, and tested frameworks, the 6% will continue to be the reality.”

When asked what matters most for making well-informed decisions about AI within their organizations, the most common concern, cited by 43% of participants, was a lack of resources related to data security and transparency.

Another significant challenge is identifying a provider whose ethical standards align with those of the company, a concern voiced by 40% of respondents.

In contrast, only 19% said they struggled to understand AI-related jargon, which may suggest a growing familiarity with AI topics.

Interestingly, this figure was substantially reduced to 10% among respondents from organizations that have already embraced AI, possibly indicating a higher level of proficiency in AI-related concepts and terminology among their leaders.

Navigating Challenges for Responsible AI Integration

The survey findings also emphasize the challenges businesses face when integrating AI responsibly. Data security and alignment with ethical standards emerged as the top concerns. Kaskade offered practical steps to navigate these challenges:

  • Develop in-house AI policies to mitigate potential risks.
  • Thoroughly evaluate AI providers and seek detailed information for well-informed decisions. Look for solutions that employ multiple models and proactively address potential bias or false information.
  • Stay updated on existing and upcoming AI regulations. Establish guardrails that comply with the law and protect both the company and its end consumers.
  • Ensure transparent disclosure of AI usage and include human oversight to minimize risks.

Responsible AI Tool Usage and Guidelines

The survey also explores companies’ approaches to popular AI tools like ChatGPT, highlighting that 56% of respondents either already have usage rules in place or are considering a usage policy. This reflects a growing awareness of the potential risks associated with AI tool usage.

When asked about the factors that might drive companies to implement such rules, Kaskade explained: “As business leaders educate themselves more about the challenges associated with popular AI tools – the media is publishing articles about this all the time – it’s natural that they don’t want their companies to be exposed to any type of risk.” 

Kaskade pointed out that it is not entirely clear how safe one’s information is with ChatGPT and Bard. Moreover, data models can produce imprecise or biased content, shaped by the text corpus available on the web.

“I envision companies leveraging their own brand-specific datasets to train their own “private AI models” to ensure the system understands and can cater to their own unique needs, as well as represent the organization with approved content only,” Kaskade added, describing how he sees these usage guidelines shaping responsible AI use within organizations.

“It’s NOT much different than the days of public vs. private cloud. It will be public vs. private large language models.”

The Ban on Certain AI Tools

According to the survey, 7% of respondents are either banning or considering banning one or more popular AI tools. 

Among respondents whose companies had integrated their own AI-powered solutions, only 2% signaled existing or potential bans, pointing to an emerging contrast between companies comfortable with AI and those less so.

However, while some firms seem at ease with AI, this doesn’t automatically translate to unrestricted employee access to AI tools. 

“When individual employees are using publicly-available tools, it’s much harder for the organization to keep track of important details like the models and datasets being leveraged, safeguards for user data or accuracy of output, etc,” Kaskade told Metaverse Post. 

Similarly, 20% of respondents said their companies allow employees unrestricted use of AI tools. That figure dropped to 11% among companies incorporating AI-powered services, suggesting a balanced view in which AI tools add value but require oversight.

“Often, companies and industries that already leverage such tools are more inclined to recognize the importance of establishing limitations on their usage, even though they’re also more inclined to understand the value that they provide,” Kaskade added. 

A Future of Responsible AI Development

The survey results underscore the importance of conscientious AI integration guided by clear ethical principles. Conversica stressed that both externally sourced and internally developed AI solutions need to satisfy fundamental criteria.

This is especially critical for generative AI, which engages directly with external individuals such as customers, prospects, or the general public.

