Third-party applications have always presented some level of security and privacy risk. Attackers recognize that software vulnerabilities provide inroads to sensitive data. But heavy reliance on SaaS offerings and the growing number of integrations between apps have amplified the problem: Attackers now have more vulnerabilities to exploit.
The Salesloft Drift breach in August 2025, which affected hundreds of organizations, demonstrates the serious privacy risks posed by third-party SaaS applications and their integrations.
In that particular attack, cybercriminals compromised the Salesloft Drift sales engagement platform. Attackers stole OAuth tokens, enabling them to access integrated Salesforce instances used by organizations. Organizations whose Salesforce instances were affected by the breach rapidly shut down the attackers’ access, but not before attackers exfiltrated data from those organizations’ Salesforce instances, including the text fields of customer support cases. While we have intelligence to indicate that hundreds of organizations were impacted, only a few have publicly disclosed the impact.
In that Salesloft Drift incident, attackers exfiltrated customer data, which was a harm in itself. But the attackers could also use that information to do more harm: Attackers often use stolen information as part of social engineering schemes. In addition, the Salesloft Drift attackers stole digital credentials and tokens that could allow them to access other integrated applications.
These vulnerabilities put a wide variety of sensitive data at greater risk. While the Salesloft Drift breach led to the theft of customer contact and support information, breaches could expose even more sensitive data, such as trade secrets, corporate financial data, or healthcare information. The 2025 breach of Discord’s third-party customer support vendor, for example, led to the theft of Discord customer names, email addresses, credit card numbers, uploaded images of government IDs, and more.
Beyond mounting an immediate response to third-party breaches, organizations must also address longer-term repercussions. They can face regulatory fines, lawsuits, and the erosion of customer trust, all of which can have a direct impact on revenue.
As privacy leaders, how can we better address the risks posed by third-party applications and software integrations? Implementing a few best practices enables you to reduce vulnerabilities and speed resolutions if and when these attacks occur.
Privacy leaders know the potential consequences of data breaches all too well. Your organization might be subject to regulatory investigations and, potentially, fines from the Federal Trade Commission (FTC), the Department of Health and Human Services (for HIPAA violations), or state attorneys general in the United States, or from Data Protection Authorities (DPAs) in any number of countries around the world.
Just as important, data breaches can impact your organization’s reputation, your customers’ trust, and your bottom line. When data is exposed, customers often look for remedies through breach-of-contract provisions or service credits. In the worst cases, they lose trust and take their business elsewhere.
Attacks on third-party applications and integrations heighten this risk. It’s one thing to put in place a number of protections for your own systems, but ensuring that your service providers provide equivalent protections is harder. This is in part why regulations like the EU’s Digital Operational Resilience Act (DORA) require covered entities to conduct substantial due diligence into their service providers — and their service providers’ supply chains. A number of EU DPA and US FTC enforcement actions have also taken the position that entities were responsible even where the breach occurred on a service provider’s system.
Four best practices go a long way toward reducing third-party privacy risks and helping mitigate breach damage. These practices are not one-time events but continuous efforts to evaluate vendors, fine-tune plans, stay current on notification requirements, and adapt to change.
1. Don’t inherently trust all SaaS providers.
Work with your colleagues in IT and security to conduct thorough due diligence of vendors before adopting new SaaS applications. Ensure vendors have taken steps to earn certifications, protect data security, and maintain sufficient data hygiene. Most importantly, make sure your team understands how to configure apps correctly.
Certifications: Vendors should hold the certifications and attestations relevant to your field, such as SOC 2 and ISO 27701, plus PCI DSS 4.0 compliance, HIPAA attestations, or FedRAMP authorization where applicable.
Data use and security: Find out how vendors will use and protect your data. You might decide not to work with certain vendors if they can’t meet your requirements. For example, you might require assurances that they will not use your data to train an AI model they are building. You might also insist that your data will be stored separately and encrypted distinctly from other organizations’ data.
Data hygiene: Make sure vendors collect only the minimum amount of sensitive data necessary to accomplish their purpose. Data minimization is a core principle of data privacy regulations such as the General Data Protection Regulation (GDPR). Vendors should also commit to retaining only the data that is necessary, for only as long as necessary, to comply with regulations. Limiting data collection and retention minimizes exposure in the event of a breach. For example, some of the data exfiltrated in the Discord breach — such as driver’s license images — should have been replaced with a token and then deleted.
Configurations: Ensure your IT team understands how new SaaS apps are configured. In particular, examine how they are integrated with other applications. And evaluate the granularity of permissions: Attackers should not have easy access to multiple systems if they compromise a single account or app.
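The permissions review above can be made concrete with a simple least-privilege check. The sketch below flags integrations whose granted OAuth scopes exceed a per-app allowlist; the scope names and the allowlist itself are illustrative placeholders, not taken from any specific vendor's API.

```python
# Illustrative least-privilege audit for app integrations.
# Scope names and the ALLOWED mapping are hypothetical examples.
ALLOWED: dict[str, set[str]] = {
    "chat-widget": {"conversations.read"},
    "sales-sync": {"contacts.read", "contacts.write"},
}

def excessive_scopes(app: str, granted: set[str]) -> set[str]:
    """Return any granted scopes beyond what the app actually needs."""
    return granted - ALLOWED.get(app, set())

# A chat widget that can also read every CRM record should be flagged:
flagged = excessive_scopes("chat-widget", {"conversations.read", "crm.read_all"})
print(flagged)  # → {'crm.read_all'}
```

Running a check like this periodically, rather than only at onboarding, catches scope creep as integrations are reconfigured over time.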
2. Work with security and other teams to develop a response plan.
As long as cybercriminals see vulnerabilities in SaaS applications, they will attack. Have a team and plan in place to respond quickly.
Crisis team: As I’ve written before, privacy and security teams should be working together as part of a privacy-first security program. That program should include a crisis team with not only IT, cybersecurity, operations, and privacy staff but also legal and communications staff. These partnerships should be in place before a crisis hits, so you are not scrambling to design roles and assign responsibilities.
Response playbook: It’s critical to have a documented response playbook in place so all team members know exactly what to do when an event occurs. That playbook should include:
Fact-finding: Establish the “who, what, where, and when” of the breach. These processes should include: verifying that an incident actually occurred; documenting an incident timeline; determining the point of entry (e.g., whether it was caused by a third-party vulnerability); and identifying affected systems and data. The theft of trade secrets requires a different response than the loss of customers’ personal information. Similarly, a large data breach demands a different response than a small one.
Containment and recovery: Define procedures for isolating affected systems, revoking compromised credentials, and preventing further data loss. Security teams can then address the root cause of the breach and restore data, if needed. Test these phases of the plan in advance. If an incident occurs, you should be able to run the play you designed.
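The credential-revocation step in the containment phase above might be scripted roughly as follows. The `revoke` callable and the token inventory are placeholders for whatever identity provider or SaaS admin API your environment actually exposes; the point is that revocation should be a rehearsed, auditable bulk operation, not a manual scramble.

```python
from datetime import datetime, timezone
from typing import Callable

def contain(tokens: list[dict], revoke: Callable[[str], None]) -> list[str]:
    """Revoke every token issued to a compromised integration and
    return an audit log of what was revoked, with UTC timestamps."""
    audit = []
    for t in tokens:
        # Placeholder: call your IdP / vendor admin API here.
        revoke(t["id"])
        ts = datetime.now(timezone.utc).isoformat()
        audit.append(f"{ts} revoked {t['id']}")
    return audit

# Usage with a stub revoke function, as you might do in a tabletop exercise:
log = contain([{"id": "tok-123"}, {"id": "tok-456"}], revoke=lambda _id: None)
print(len(log))  # → 2
```

Testing the script against a stub, as shown, is exactly the kind of advance rehearsal the playbook calls for.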
3. Understand your notification obligations.
In the event of a breach that exposes customer data, you’ll need to contact customers in accordance with the myriad laws that apply to your company: Depending on where customers are located, you might have to adhere to multiple state laws in the United States, each with different notification requirements. If you have customers in the EU and you control personal data, the GDPR requires that you notify affected individuals when the breach is likely to result in a high risk to them, such as identity theft, financial loss, discrimination, or another form of harm.
Public companies also need to notify the SEC if there has been a material breach. If you are a B2B company, you could have specific notification requirements built into customer contracts. For example, you might have an obligation to notify within 24 to 72 hours of a breach.
When breaches are due to third-party applications, you’ll need to understand the notification obligations of each party. Those obligations depend on which organization is controlling and processing data.
Meeting all these notification requirements is a complex undertaking. To reduce that complexity, you could set a policy that follows the most stringent rules for all customers — for example, notifying everyone within 24 hours even if there is only a remote possibility that they could suffer harm from the breach. Alternatively, you could attempt to navigate the notification complexities and handle notifications differently on a jurisdiction-by-jurisdiction basis. Both approaches carry benefits and drawbacks — and each could have an impact on customer trust.
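The “most stringent rules” policy described above reduces to a simple calculation: take the tightest notification window across every statute and contract that applies to a given breach. The hour values below are illustrative placeholders only, not legal guidance; real values must come from counsel and from the actual contracts and regulations involved.

```python
# Illustrative notification windows in hours. These are hypothetical
# examples; actual deadlines vary by statute, contract, and breach facts.
WINDOWS: dict[str, int] = {
    "gdpr_supervisory_authority": 72,
    "us_state_x": 72,
    "customer_contract_acme": 24,
}

def strictest_deadline(applicable: list[str]) -> int:
    """Return the tightest notification window among the applicable rules."""
    return min(WINDOWS[rule] for rule in applicable)

print(strictest_deadline(["gdpr_supervisory_authority", "customer_contract_acme"]))
# → 24
```

Even if you ultimately notify on a jurisdiction-by-jurisdiction basis, computing the strictest deadline up front tells the crisis team how much time it actually has.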
What if you notify a customer that is unlikely to suffer any harm from a breach? Will that customer trust you less — or more? What if you decide not to notify a particular customer of a breach because there is no legal requirement to do so?
In addition to thinking about the legal requirements, put yourself in your customers’ shoes. Think about what they would want to know and when — and how your notification decision could impact your relationship with those customers.
4. Be adaptive.
SaaS applications, software integrations, criminal tactics, and privacy regulations are always evolving. Even organizations that plan extensively should be ready for the unexpected. What’s important is your ability to respond rapidly, then learn from incidents and adapt your strategies accordingly.
Attackers continue to target SaaS apps as a way to access sensitive customer and business data. While it might be impossible to completely stop attacks, you can minimize the likelihood of breaches and reduce the damage to your company, partners, and customers.
Cloudflare is helping to secure SaaS applications and their integrations. After the Salesloft Drift incident, we announced that we are working on solutions that consolidate SaaS connections via a single proxy to improve detection of and response to potential compromise.
As a cloud services provider ourselves, we continuously work to strengthen the security of our services, monitor our third-party service providers, and maintain the confidence and trust of our customers. We are also committed to being transparent about incidents, as we were after the Salesloft Drift breach, and we are always learning and adapting in response. At the same time, we are committed to a wide array of efforts — such as issuing transparency reports, posting warrant canaries, developing privacy-enhancing technologies, and establishing standards — that are all aimed at helping build a better, more private and secure Internet.
Meanwhile, our connectivity cloud helps you address security risks and comply with a full range of privacy regulations while controlling complexity. You can establish robust, consistent security across all of your apps and environments, and implement requisite controls to meet a full range of privacy laws and standards — all from a single, unified platform.
This article is part of a series on the latest trends and topics impacting today’s technology decision-makers.
Learn why a unified platform is critical to streamlining compliance with a wide range of regulations in the How a connectivity cloud streamlines security compliance white paper.
Get the white paper!
Emily Hancock — @emilyhancock
Chief Privacy Officer, Cloudflare
After reading this article you will be able to understand:
How third-party SaaS apps introduce new privacy risks
4 best practices for reducing third-party risks
Why privacy and security teams must collaborate on risk reduction and incident response