It’s impossible to ignore the impact of generative AI since it burst onto the scene. Some people have jumped on the technology as a workplace transformation heralding a new age when they’ll never have to face the drudgery of writing an email or report ever again.
For others, it's the beginning of a new wave of technology that looks set to bring untold benefits to every business sector, from logistics to the development of new life-saving drugs.
But the initial luster of this game-changing technology — hailed as a significant step forward in personal productivity — is also raising some concerns, not least in terms of data privacy and security.
Earlier this year, electronics giant Samsung banned the use of generative AI tools after reports that Samsung employees had accidentally shared confidential information while using ChatGPT for help at work.
In an email sent to staff and widely reported at the time, the Korean company said: “Interest in generative AI platforms such as ChatGPT has been growing internally and externally. While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI.”
Samsung is not alone. A number of companies — and some countries — have banned generative AI. And it’s easy to understand why.
In effect, using tools such as ChatGPT and other large language models (LLMs) opens the door to unmonitored shadow IT: devices, software and services outside the ownership or control of the IT organization.
And the problem is simple. Whether it's an employee experimenting with AI or a company-wide initiative, once proprietary data is exposed to AI, there is no way to reverse it. Make no mistake: AI holds incredible promise. But without proper guardrails, it poses significant risks for businesses and organizations.
According to a recent KPMG survey, executives expect generative AI to have an enormous impact on business, but most say they are unprepared for immediate adoption. And top of the list of concerns are cyber security (81%) and data privacy (78%).
That's why security leaders need to strike a balance between enabling transformative innovation through AI and maintaining compliance with data privacy regulations. The best way to do this is to implement Zero Trust security controls, so enterprises can safely and securely use the latest generative AI tools without putting intellectual property and customer data at risk.
Zero Trust security is a methodology that requires strict identity verification for every person and device trying to access resources across the network. Unlike a traditional ‘castle and moat’ approach, a Zero Trust architecture trusts no one and nothing.
And it is this approach that is essential for any organization looking to use AI. Why? Because verifying every user and device before they touch sensitive resources means generative AI tools can be adopted without handing over intellectual property or customer data.
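To make the idea concrete, here is a minimal sketch of a Zero Trust access decision in Python. Every name in it (the policy table, the request fields, the resource name) is a made-up illustration, not any vendor's API: the point is simply that each request is evaluated on identity and device posture, with a deny-by-default policy, rather than trusted because it comes from inside the network.

```python
# Illustrative Zero Trust access check: deny by default, and grant only
# when identity, device posture, and resource policy all pass.
# All names here are hypothetical examples, not a real product's API.
from dataclasses import dataclass


@dataclass
class AccessRequest:
    user: str
    mfa_verified: bool     # strict identity verification per request
    device_managed: bool   # device must be under IT control
    resource: str


# Per-resource policy: which users may reach which internal tools.
POLICY = {
    "internal-llm-gateway": {"alice", "bob"},
}


def authorize(req: AccessRequest) -> bool:
    """Return True only if every check passes; network location is never trusted."""
    if not req.mfa_verified:
        return False
    if not req.device_managed:
        return False
    allowed = POLICY.get(req.resource, set())  # unknown resources: empty set, so denied
    return req.user in allowed


print(authorize(AccessRequest("alice", True, True, "internal-llm-gateway")))    # True
print(authorize(AccessRequest("mallory", True, False, "internal-llm-gateway"))) # False
```

The design choice worth noting is the default: an unknown resource or unlisted user falls through to a denial, which is the opposite of the castle-and-moat model where anything inside the perimeter is trusted.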
Organizations using generative AI need controls robust enough to stop sensitive data from leaking into these tools. Understanding how many employees are experimenting with AI services, and what they're using them for, is critical. Giving system administrators oversight and control of this activity, so they can pull the plug at any time, will help keep your organization's data safe.
Adopting a Data Loss Prevention (DLP) service helps close the human gap in how employees share data. More granular rules can even allow select users to experiment with projects containing sensitive data, while stronger limits apply to the majority of teams and employees.
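The granular-rules idea above can be sketched in a few lines of Python. The patterns, user names and allowlist below are invented for illustration, and a real DLP product would be far more sophisticated, but the shape is the same: scan outbound prompts for sensitive markers, and let a small, cleared group through while blocking everyone else.

```python
# Illustrative DLP-style check on prompts sent to a generative AI tool.
# Patterns and user tiers are hypothetical examples, not a real product's rules.
import re

# Simple patterns that suggest sensitive content.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-like numbers
    re.compile(r"(?i)\bconfidential\b"),   # explicit classification marker
]

# Granular rule: only a small set of users may work with sensitive projects.
SENSITIVE_ALLOWED = {"research-lead"}


def allow_prompt(user: str, prompt: str) -> bool:
    """Block prompts containing sensitive markers unless the user is cleared."""
    flagged = any(p.search(prompt) for p in SENSITIVE_PATTERNS)
    if flagged and user not in SENSITIVE_ALLOWED:
        return False
    return True


print(allow_prompt("intern", "Summarize this CONFIDENTIAL roadmap"))        # False
print(allow_prompt("research-lead", "Summarize this CONFIDENTIAL roadmap")) # True
print(allow_prompt("intern", "Draft a polite meeting reminder"))            # True
```

In practice the pattern list and the allowlist would live in central policy, not code, but the split between "what counts as sensitive" and "who is cleared to handle it" is exactly the granularity the paragraph above describes.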
In other words, if organizations are to use AI in all its guises, they need to improve their security and adopt a Zero Trust approach. And while it's important to highlight the issue, there is no need to sensationalize concerns around a technology that has the potential to offer so much.
After all, with every transformative step forward in technology, from mobile phones to cloud computing, there are new security threats that rise to the surface. And each time, the industry has responded to tighten security, protocol and processes. The same will happen with AI.
This article is part of a series on the latest trends and topics impacting today’s technology decision-makers.
This article was originally produced for The AI Journal