theNet by CLOUDFLARE

How to protect your company in a ChatGPT world

Generative AI tools like ChatGPT and GitHub Copilot have the potential to be as game-changing as the Internet, smartphones, and cloud computing, and their emergence will undoubtedly open up new possibilities and challenges for companies. The swift and sweeping advancement of AI has raised the stakes for those looking to leverage this technology responsibly while also preparing for the potential impact of AI adoption by cyber criminals. Because AI can write code that helps identify and exploit vulnerable systems, generate hyper-personalized phishing emails, and even mimic executives’ voices to authorize fraudulent transactions, it is crucial that organizations reevaluate their risk calculus around AI and consider both defensive and offensive strategies against these threats.

Here are three key strategies that IT and security executives should consider when evaluating their cyber security posture in the age of AI:

1. Fight AI with AI

Some people fear the worst with AI and imagine a future where an all-knowing thinking machine becomes a superweapon threatening humanity in an “AI arms race.” That description is quite hyperbolic, and the reality of AI is far less ominous.

Bad actors will undoubtedly use AI tools for nefarious purposes. But existing AI tools are generally limited to basic coding, and they have safeguards in place to prevent writing truly malicious code.

On the bright side, AI has the potential to augment the abilities of cyber security defense teams, which is especially valuable given the field’s shortage of skilled professionals. With AI tools, entry-level analysts can get help with routine duties, and security engineers can extend their coding and scripting capabilities.

The key to success is investing in AI tools and training to up-level expertise rather than matching every offensive AI threat with an AI countermeasure.

2. Safeguard email

Most cyber attacks begin in our inboxes. Bad actors send fraudulent emails, leveraging phishing and social engineering tactics to harvest credentials that will let them into an organization. Recent advancements in AI will make these emails increasingly sophisticated and realistic, and integrating AI-powered chatbots into social engineering toolkits will broaden the scope and reach of these attacks.

Cyber security professionals must acknowledge the potential for AI-powered phishing and social engineering attacks and educate users on detecting and responding to them. It is imperative to continue to train users to identify phishing attacks, provide a platform to rapidly report suspicious activities, and enlist their assistance in the overall cyber defense strategy.

However, human error is inevitable, and we must also protect users with technical defenses. Unfortunately, the basic filtering tools in leading email services are often not enough. Companies should seek out advanced email security tools that comprehensively block attacks across different vectors, even when messages arrive from trusted senders or domains.
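To make the gap between basic filtering and layered analysis concrete, here is a minimal sketch (Python, standard library only) of the kind of heuristics an advanced email security layer might combine. The phrases, header checks, and the ourcompany.example domain are illustrative assumptions; real products add sender reputation, link analysis, and machine-learning content scoring on top of signals like these.

```python
# A minimal sketch of layered email heuristics, not a real product's logic.
import email
from email import policy

SUSPICIOUS_PHRASES = ("urgent wire transfer", "verify your password", "gift cards")

def score_message(raw_bytes: bytes) -> int:
    """Return a crude risk score for a raw RFC 5322 message."""
    msg = email.message_from_bytes(raw_bytes, policy=policy.default)
    score = 0

    # Count upstream authentication failures recorded by the receiving server.
    auth_results = (msg.get("Authentication-Results") or "").lower()
    score += 2 * sum(1 for check in ("spf=fail", "dkim=fail", "dmarc=fail")
                     if check in auth_results)

    # Flag an executive-sounding display name on an external address
    # ("ourcompany.example" is a hypothetical internal domain).
    from_header = (msg.get("From") or "").lower()
    if "chief" in from_header and "@ourcompany.example" not in from_header:
        score += 1

    # Flag common social-engineering phrases in the plain-text body.
    body = msg.get_body(preferencelist=("plain",))
    text = body.get_content().lower() if body else ""
    score += sum(1 for phrase in SUSPICIOUS_PHRASES if phrase in text)

    return score  # e.g., quarantine anything scoring 3 or higher
```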

3. Defend the data

While phishing and credential harvesting are often the first steps in any attack, they are not the whole picture. When considering the risks posed by AI to company data and applications, it is important to acknowledge the multi-faceted nature of potential attacks.

Organizations should move beyond protecting networks with a traditional castle-and-moat perimeter and focus instead on where data lives and how users and applications access it. For many companies, this means adopting a Zero Trust architecture with a secure access service edge (SASE) solution fortified with phishing-resistant multi-factor authentication (MFA).
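As a conceptual illustration of that shift, the sketch below evaluates identity, MFA strength, device posture, and the requested resource on every request rather than trusting anything inside a network perimeter. All user names, groups, and policies here are hypothetical; in practice this evaluation is delegated to a SASE platform’s policy engine.

```python
# Conceptual Zero Trust check: every request is evaluated on its own merits.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    mfa_method: str         # e.g., "webauthn", "totp", "sms"
    device_compliant: bool  # posture signal from device management
    resource: str

# Phishing-resistant MFA means hardware-bound credentials, not shareable codes.
PHISHING_RESISTANT = {"webauthn", "fido2"}

# Hypothetical per-resource policy: which groups may reach which app.
POLICY = {"payroll-app": {"finance"}, "wiki": {"finance", "engineering"}}
GROUPS = {"alice": {"finance"}, "bob": {"engineering"}}

def allow(req: AccessRequest) -> bool:
    """Every request is evaluated; there is no 'inside the moat'."""
    if req.mfa_method not in PHISHING_RESISTANT:
        return False
    if not req.device_compliant:
        return False
    allowed_groups = POLICY.get(req.resource, set())
    return bool(GROUPS.get(req.user, set()) & allowed_groups)

# allow(AccessRequest("alice", "webauthn", True, "payroll-app"))  -> True
# allow(AccessRequest("bob", "totp", True, "wiki"))               -> False
```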

Internet-facing applications and APIs are vulnerable to many types of attacks, including those carried out by bots and other AI-driven tools. These apps should have protections like encryption, a web application firewall (WAF), input validation, and rate limiting to mitigate both today’s bots and future AI-driven attacks.
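Rate limiting is the most mechanical of those protections and easy to illustrate. Below is a minimal token-bucket sketch in Python; the rates shown are arbitrary, and in production this logic typically runs at the edge in a WAF or API gateway rather than in application code.

```python
# A minimal token-bucket rate limiter, for illustration only.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec    # tokens refilled per second
        self.capacity = burst       # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; refuse the request otherwise."""
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per client (keyed by API token or IP) throttles scripted abuse:
# bucket = TokenBucket(rate_per_sec=5, burst=10)
# if not bucket.allow():
#     ...  # respond with HTTP 429 Too Many Requests
```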

As companies embrace AI tools, they must ensure their users are not misusing or leaking company data. Some companies, like JPMorgan Chase, have decided to restrict employees from using ChatGPT, but at a minimum, companies should implement acceptable use policies, technical controls, and data loss prevention (DLP) measures to mitigate these risks.
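As a minimal illustration of the DLP piece, the sketch below shows a pattern-based check that could sit in a forward proxy between users and an external AI service. The patterns are illustrative assumptions; real DLP engines use validated detectors and exact-match dictionaries of known-sensitive data.

```python
# A minimal DLP-style scan of outbound text, for illustration only.
import re

PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_marker": re.compile(r"(?i)\bcompany confidential\b"),
}

def scan_outbound(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in outbound text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

prompt = "Summarize this memo. COMPANY CONFIDENTIAL: Q3 acquisition plan..."
hits = scan_outbound(prompt)
if hits:
    # Block or redact before the prompt ever leaves the company network.
    print(f"Blocked outbound prompt; matched: {hits}")
```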


The path forward

It is essential for cyber security professionals to remain curious and experiment with AI tools to gain a better understanding of their potential uses for both good and malicious purposes. To protect against attacks, organizations should look for ways to amplify their own capabilities, whether by using ChatGPT or by exploring cyber security tools and platforms that draw on extensive training data and leverage threat intelligence across various defense dimensions.

The bottom line is that businesses must implement comprehensive security measures that evolve with the changing world. While AI has emerged as a potential threat, the technology can lead to powerful benefits as well; we just need to know how to use it safely.

This article is part of a series on the latest trends and topics impacting today’s technology decision-makers.

This article was originally produced for Security Boulevard.



Key takeaways

After reading this article, you will be able to understand:

  • The impact that AI adoption by cyber criminals can have

  • How to evaluate the risk calculus around AI, considering both defensive and offensive strategies

  • 3 key considerations for evaluating security posture in the age of AI
