Ok, we’re doing it. We’re going to talk about the explosion of artificial intelligence (AI) tools and how to address them from a cyber security point of view within your organization.
Within the past year, AI tools such as Midjourney, Stable Diffusion, DALL·E, ChatGPT, Jasper, LLaMA, Rewind, and others have gone from niche to mainstream to near ubiquity. Like me, you have probably marveled at the imaginative use cases for research, creativity, and productivity. And if you’ve used any of the tools yourself, you know it’s hypnotic to watch them interpret prompts, generate responses, and, with only a few user actions, refine and add depth to their outputs.
The speed and ease of these tools for users of all skill levels are a real breakthrough. Personally, I have found the outputs from large language models to be less compelling than their visual counterparts, but across the board the interactive, “generative” process is remarkable. Compared with existing open resources such as Google, StackOverflow, and Wikipedia, generative AI tools represent an incredible leap forward: they deliver use-based outcomes that build on each interaction, rather than merely providing general information. With exceptional speed and agility, this interactivity outperforms traditional search, retrieval, and synthesis methods, effectively bringing information to the forefront.
Take, for instance, the image below, which I “imagined” with Midjourney using the prompt: “An ancient civilization wall painting where the hieroglyphics appear to show a civilization using a computer and artificial intelligence, lighting from a fire.”
Image source: generated with Midjourney, imagined by me
Within our collective organizations, there are incredible opportunities for using AI tools. We can use them to generate computer code, develop user documentation and consumer-facing content, ease the burdens of customer service, or produce more useful knowledge bases for new-hire onboarding and cross-organizational knowledge sharing. These and other use cases could ultimately generate billions of dollars in new commerce and business value.
Just for fun, I asked ChatGPT to “Please write a corporate policy with checklists for the Chief Information Security Officer to validate current security risks with AI tools, and please provide a one-page summary as a cover sheet for the Board of Directors and the executive leadership team” and here is the response that I received.
If you’re like me, you are likely nervous about the obvious security and legal issues these tools present, in addition to being impressed by their awesome potential. As leaders, we should be asking, “Am I in sync with the entirety of my organization about the potential risks and opportunities presented by AI technologies?” And if so, how do I ensure that our use of these tools does not cause serious harm?
When I talk with my peers in cyber security, there is a 50/50 split between those who are outright blocking access to these technologies and those who are embracing them. I’ve seen this movie before: outright blocking sidesteps a security risk rather than addressing it. The result for security leaders is that we find ourselves out of step with our business and our colleagues, who invariably find ways to work around us. Now is the time to address this division. Rather than acting as Luddites, we should grapple with the reality that people in our organizations are already using these tools, and through leadership we can permit that use in a manner that reduces potential risk.
Where should we start? Setting a few key guidelines for our organizations can help reduce risk without stifling the potential that AI tools offer.
1. Keep IP out of open systems. It might seem obvious, but we cannot feed highly regulated information, controlled data, or source code into an open AI model, or one outside of our control. Similarly, we must avoid inputting customer data, employee information, or other private data into open AI tools. Unfortunately, we can find examples of such actions every single day. (See, for example, the code leak incident at Samsung.) The issue is that most generative AI tools do not guarantee that this information will stay private. In fact, they are explicit that they use inputs to conduct research and ultimately improve future results. So, at the very least, organizations might need to update confidentiality policies and train employees to help ensure sensitive information stays out of these AI tools.
2. Ensure veracity to avoid liability. If you’ve ever interacted with an AI tool, you might have noticed that answers to prompts are often presented without context, and that those answers are not always accurate. Some answers are not based on current information: ChatGPT, for example, uses information collected only up until September 2021.
Image source: ChatGPT
Of course, we all know the current Prime Minister is Rishi Sunak, and Liz Truss was Boris Johnson’s successor. This response and its underlying limitation do not diminish the power of these tools, but they do help us understand the current risks more clearly. Consider the example of attorneys who used AI-generated content in court filings, only to discover that the content referenced completely fictitious cases.
3. Prepare for “offensive AI.” As is the case with many new technologies, cyber attackers have been quick to exploit AI tools for criminal aims. How are attackers using AI for evil? They might find an image of you or a recording of your voice on the Internet, and then create a “deepfake” using AI tools. They could use the fake version of you to embezzle company funds or phish colleagues, leaving fraudulent voicemails that ask for login credentials. AI tools could also be used to generate malicious code that rapidly learns and improves its ability to carry out its goals.
Combatting the malicious use of AI might require… AI. Security tools that incorporate AI and machine learning (ML) capabilities can offer some good defenses against the speed and intelligence of offensive AI attacks. Additional security capabilities can help organizations monitor the use of AI tools, restrict access to particular tools, or limit the ability to upload sensitive information.
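To make that last point concrete, here is a minimal sketch, in Python, of the kind of pre-submission check an organization might place in front of external AI tools to flag sensitive content before it leaves the building. The patterns and function names here are hypothetical illustrations, not a Cloudflare feature or a production-ready data loss prevention engine.

```python
import re

# Hypothetical patterns for data that should never leave the organization.
# A real deployment would rely on a proper DLP engine and organization-specific rules.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API key / secret": re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*\S+"),
    "US SSN-like number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal marker": re.compile(r"(?i)\b(confidential|internal only)\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the reasons a prompt should be blocked; an empty list means it looks safe."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Summarize this: contact jane.doe@example.com, api_key = sk-12345"
    findings = check_prompt(prompt)
    if findings:
        print("Blocked before sending to the AI tool:", ", ".join(findings))
    else:
        print("Prompt passed basic checks; sending to the AI tool.")
```

In practice, a check like this would live inside an existing secure web gateway or DLP control rather than a standalone script, but the principle is the same: inspect what is about to be shared, and block or redact it before it reaches an open AI model.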
So, how do you protect against the risks that AI tools might present? The first step is to recognize that AI is already here, and it’s here to stay. The generative AI tools available today show the tremendous potential of AI for helping businesses enhance efficiency, boost productivity, and even spark creativity. Still, as cyber security leaders, we must be cognizant of the potential security challenges these tools can present. By setting the right guidelines, augmenting internal training, and, in some cases, implementing new security solutions, you can reduce the likelihood that these powerful AI tools will become a liability.
Cloudflare has taken a Zero Trust approach to enabling and securing its network and employees relative to AI. Learn more about how Cloudflare equips organizations to do the same.
This article is part of a series on the latest trends and topics impacting today’s technology decision-makers.
Oren Falkowitz — @orenfalkowitz
Security Officer, Cloudflare
After reading this article, you will be able to understand:
How AI tools present new organizational security challenges
Where your organization falls in the AI revolution
3 ways to reduce risk without stifling the potential that AI tools offer