May 22, 2024
Generative AI

Rapid Uptake of Generative AI by Companies Raises Concerns for Privacy and Security, Reveals Study

A recent study from the University of the Sunshine Coast has shed light on the rapid adoption of generative Artificial Intelligence (AI) by Australian companies, warning that it poses significant risks to the privacy and security of the public, employees, customers, and other stakeholders. The research, published in the journal AI and Ethics, cautions that the swift integration of AI technologies is exposing businesses to a wide array of potential consequences.

One of the main concerns highlighted in the study is the heightened risk of mass data breaches that could compromise third-party information. The authors also warn of business failures stemming from manipulated or poisoned AI models, whether the tampering is intentional or accidental.
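Data poisoning of the kind the study warns about can be surprisingly cheap. As a purely illustrative toy (not an example from the paper), the sketch below trains a naive word-vote classifier and shows how an attacker who can append a handful of mislabeled records to the training data can flip what the model learns about a word:

```python
from collections import Counter

def train(examples):
    """Learn, for each word, the label it most often appears with."""
    counts = {}
    for text, label in examples:
        for word in text.split():
            counts.setdefault(word, Counter())[label] += 1
    return {word: c.most_common(1)[0][0] for word, c in counts.items()}

def classify(model, text):
    """Majority vote over the learned per-word labels."""
    votes = Counter(model[w] for w in text.split() if w in model)
    return votes.most_common(1)[0][0] if votes else None

clean = [("refund approved", "positive"), ("service failed", "negative")]

# An attacker quietly appends a few mislabeled records:
poisoned = clean + [("refund denied", "negative")] * 3

print(classify(train(clean), "refund"))     # learned from clean data
print(classify(train(poisoned), "refund"))  # learned from poisoned data
```

With the clean data the model associates "refund" with positive outcomes; after the injected records it flips to negative. Production models are far more robust than this toy, but the underlying risk, that untrusted training data shapes downstream decisions, is the same one the study describes.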

Dr. Declan Humphreys, a Lecturer in Cyber Security at UniSC, pointed to the ethical and moral dilemmas raised by the corporate rush to embrace generative AI solutions such as OpenAI's ChatGPT, Microsoft's Copilot, or Google's Gemini. These language-based applications are trained on vast quantities of real-world data and can produce content that closely resembles human-generated output.

The study pointed out that the adoption of generative AI is not limited to tech companies: call centres, supply chain management firms, investment funds, sales organizations, and human resources departments are also embracing the technology.

Despite the ongoing discussions surrounding the potential impact of AI on job security and the risks of bias, many organizations are seemingly overlooking the cyber security implications associated with AI implementation.

Dr. Humphreys emphasized that companies caught up in the excitement of integrating AI into their operations may inadvertently expose themselves to vulnerabilities by either excessively relying on or unquestioningly trusting AI systems.

The research paper, a collaboration between UniSC experts in cyber security, computer science, and AI, including Dr. Dennis Desmond, Dr. Abigail Koay, and Dr. Erica Mealy, highlighted a prevalent trend: companies are developing their own AI models or engaging third-party providers without fully considering the hacking risks involved.

Potential hacking scenarios outlined in the study include unauthorized access to user data fed into the AI models, tampering with the responses generated by the model, or altering the way it processes information. These actions could result in data breaches or negatively impact strategic business decisions.
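One practical mitigation for the first scenario, unauthorized access to user data fed into AI models, is to redact obvious personal identifiers before any text leaves the organization for a third-party service. The helper below is a minimal, hypothetical sketch (the function name and patterns are assumptions, not from the study); a real deployment would use a vetted PII-detection library with a far broader rule set:

```python
import re

# Illustrative patterns only; real PII detection needs many more rules.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tags
    before the text is sent to an external generative AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_pii("Contact jane@example.com or +61 412 345 678"))
# → Contact [EMAIL] or [PHONE]
```

Redaction of this kind does not remove the breach risk, since the third party still receives the surrounding text, but it narrows what an attacker can extract if the model or its logs are compromised.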

Dr. Humphreys stressed the importance of prioritizing privacy and security measures for businesses looking to integrate artificial intelligence solutions in the years ahead. He cautioned that the rapid adoption of generative AI appears to be outpacing the industry’s comprehension of the technology and its associated ethical and cyber security challenges.

To mitigate these risks, companies will need to establish robust governance structures and regulatory frameworks aimed at safeguarding employees, sensitive data, and the broader public from the potential pitfalls of unchecked AI implementation.

*Note:
1. Source: Coherent Market Insights, Public sources, Desk research
2. We have leveraged AI tools to mine information and compile it