Website security guide: A 10-step checklist

Follow this website security checklist of 10 key measures organizations should take to authenticate and authorize users, encrypt web traffic, mitigate third-party risks, block DDoS attacks and bots, and more.

Learning Objectives

After reading this article you will be able to:

  • Identify different ways to secure websites
  • Understand key ways to authenticate and authorize web users
  • Explain some of the most common website attacks

The importance of website security

Website security is critical for all organizations that rely on web applications as a source of revenue, efficiency, and customer insights. Organizations with websites that intake and store sensitive data, or provide critical infrastructure and services, are particularly susceptible to attacks that vary in complexity, scale, and origin.

Web application security as a discipline is broad and ever-evolving, given that the Internet threat landscape and regulatory environment are constantly changing. For example, this checklist focuses on how to protect websites, but protecting APIs and AI-enabled apps (which websites increasingly incorporate) is also becoming essential for large enterprises.

However, public-facing websites of all sizes and across all industries can benefit from 'table stakes' measures: technical controls, plus access control and user management. To that end, this website security guide covers the following 10 recommendations:

1) Secure accounts with strong authentication

Recommendation: Use 2FA rather than password-only authentication

Just like an airline must verify a passenger’s identity with a valid ID before allowing them to board a plane, organizations must also verify who is logging in to the digital systems powering their web applications.

The process of preventing unauthorized access (by ensuring individuals are who they claim to be) is called authentication. Authentication verifies identity by checking specific characteristics, or "factors," against a digital record.

The following are the most common authentication factors:

  • Something the person knows: This checks for a piece of secret knowledge that only the real person should have, such as a username-password combination, security questions, or PIN codes.
  • Something the person has: This checks if the person possesses an item they were issued or are known to have (similar to needing a physical key to open a house’s front door). In digital systems, authentication checks for a soft token (such as a mobile-generated code) or a hard token (such as a small physical device that connects via USB or Bluetooth) before permitting access.
  • Something the person is: This assesses a person's inherent physical qualities through biometrics; for instance, by verifying a thumbprint or through facial recognition.

The problem with the first factor is that passwords can often be guessed or stolen by attackers. With the prevalence of phishing, on-path attacks, brute force attacks, and password reuse, it has become easier for attackers to collect stolen login credentials.

For this reason, organizations should implement two-factor authentication (2FA) for their accounts. 2FA requires (at least) two separate forms of authentication — which is more effective than just requiring one. While 2FA is not impossible for attackers to crack, it is significantly more difficult and expensive to compromise than password-only authentication.
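
As an illustration of the second factor, here is a minimal sketch of server-side verification of a time-based one-time password (TOTP), the kind generated by an authenticator app. It assumes the third-party pyotp library; the enrollment flow around it is simplified.

```python
# A minimal TOTP verification sketch, assuming the pyotp library.
import pyotp

# In practice, the secret is generated once when the user enrolls in 2FA
# and stored server-side; this freshly generated value is illustrative.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

def verify_second_factor(submitted_code: str) -> bool:
    """Check the code from the user's authenticator app (factor two),
    after their password (factor one) has already been validated."""
    return totp.verify(submitted_code)
```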

2) Enforce role-based permissions

Recommendation: Grant role-based permissions only to authorized users

Just because someone’s identity is verified, however, does not mean they should have control over everything. Authorization determines what an authenticated user can see and do (i.e., their permissions).

For example, a “super administrator” may be the only one authorized to edit all settings and pages, whereas a “read-only” user might only be able to view the site’s analytics and nothing else.

As organizations expand, so does the number of roles on their web teams: there may be front-end developers, back-end developers, security analysts, reporting analysts, web designers, content editors, and many others. Therefore, it is important to regularly audit and update role-based permissions.
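
The core of role-based access control can be small. The following sketch maps hypothetical roles to permission sets and checks them at request time; the role names and permissions are illustrative, not a standard.

```python
# A minimal role-based permission check; roles and permissions are examples.
ROLE_PERMISSIONS = {
    "super_admin": {"edit_settings", "edit_pages", "view_analytics"},
    "content_editor": {"edit_pages"},
    "read_only": {"view_analytics"},
}

def is_authorized(role: str, permission: str) -> bool:
    """Return True only if the role explicitly grants the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# A "read_only" user can view analytics but cannot edit pages.
assert is_authorized("read_only", "view_analytics")
assert not is_authorized("read_only", "edit_pages")
```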

3) Encrypt web traffic with SSL/TLS

Recommendation: Establish connections with auto-managed SSL/TLS

Any website that collects and transmits sensitive data, such as login credentials, contact information, credit card information, or health information, needs HTTPS. HTTPS prevents a website’s information from being broadcast in a way that is easily read by anyone snooping on the network.

HTTPS works through a protocol called Transport Layer Security (TLS) — previous versions of the protocol were known as Secure Sockets Layer (SSL).

Look for a service that offers auto-managed SSL/TLS certificates, which enable websites and applications to establish secure connections.

TLS is the communications backbone of privacy and data security. It allows users to browse the Internet privately, without exposing their credit card information or other personal and sensitive information.

With SSL/TLS, a client (such as a browser) can verify the authenticity and integrity of the server it is connecting with, and use encryption to exchange information. This, in turn, helps prevent on-path attacks and meet certain data compliance requirements.

There are other benefits, too: modern TLS versions help minimize handshake latency to speed up webpage load times, and search engines tend to rank websites that fail to use encryption lower.

Keep in mind that each SSL/TLS certificate has a fixed expiration date, and the validity periods of these certificates have shortened over time. If a certificate is expired, clients, such as the visitor’s browser, will determine that a secure connection cannot be established, resulting in warnings or errors. Missed certificate renewals can also lower a website’s search engine rankings, but certain services can handle auto-renewal.
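
One way to monitor expiration is to check a certificate directly. Below is a sketch using only the Python standard library; example.com is a placeholder hostname.

```python
# Check when a site's TLS certificate expires (standard library only).
import socket
import ssl
from datetime import datetime

hostname = "example.com"  # placeholder: replace with the site to check
context = ssl.create_default_context()

with socket.create_connection((hostname, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()

# 'notAfter' uses a fixed format such as 'Jun  1 12:00:00 2025 GMT'.
expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
print(f"{hostname} certificate expires in {(expires - datetime.now()).days} days")
```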

4) Encrypt DNS traffic over HTTPS or TLS

Recommendation: Keep user browsing secure and private with DNS encryption

A website’s content does not technically live at a domain like www.example.com, but rather at a unique IP address like 192.0.2.1. The process of converting a human-readable domain into a machine-friendly IP address is known as a Domain Name System (DNS) lookup; DNS records are the Internet’s instructions for which IP address is associated with a particular domain.

However, by default, DNS queries and responses are sent in plaintext over UDP, which means they can be read by networks, ISPs, and anyone else monitoring transmissions. This has significant implications for security and privacy: if DNS queries are not private, it becomes easier for governments to censor the Internet and for attackers to track users’ online behavior.

Use a free DNS resolver to encrypt DNS traffic with one of these options:

  • DNS over TLS, or DoT, is a standard for encrypting DNS queries to keep them secure and private. It gives network administrators the ability to monitor and block DNS queries, which is important for identifying and stopping malicious traffic.
  • DNS over HTTPS, or DoH, is an alternative to DoT. With DoH, DNS queries and responses are encrypted, but they are sent over HTTPS (typically using HTTP/2) instead of directly over UDP. This gives network administrators less visibility, but provides users with more privacy.
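
To make this concrete, the sketch below performs a DoH lookup against Cloudflare’s public resolver using its JSON API; it assumes the third-party requests library, and example.com is a placeholder domain.

```python
# A DNS-over-HTTPS lookup via Cloudflare's public resolver JSON API.
import requests

response = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=10,
)
# Each entry in "Answer" includes the record name, type, TTL, and data.
for record in response.json().get("Answer", []):
    print(record["name"], record["data"])
```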

5) Integrate DNS security

Recommendation: Address certain DNS system limitations with purpose-built DNS security

The DNS system itself was not designed with security in mind and contains several design limitations. For example, it does not automatically guarantee where DNS records come from, and it accepts any address given to it, no questions asked. Therefore, DNS servers can be vulnerable to domain spoofing, DoS (Denial of Service) attacks, and more.

The DNS Security Extensions (DNSSEC) help address some of the design flaws of DNS. DNSSEC creates a secure domain name system by adding cryptographic signatures to existing DNS records. By checking the associated signature, organizations can verify that a requested DNS record comes from its authoritative name server and is not a forged record.

Some DNS resolvers already integrate DNSSEC. Also, look for a DNS resolver that can provide features such as content filtering (which can block sites known to distribute malware and spam) and botnet protection (which blocks communication with known botnets). Many of these secured DNS resolvers are free to use, and can be activated by changing a single router setting.
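
Building on the DoH sketch above, validation status can be observed directly: Cloudflare’s JSON API returns an "AD" (authenticated data) flag that is true only when the answer was validated with DNSSEC. Again, the requests library is assumed.

```python
# Check whether a DNS answer was DNSSEC-validated via the resolver's AD flag.
import requests

response = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "cloudflare.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=10,
)
print("DNSSEC-validated:", response.json().get("AD", False))
```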

6) Hide the origin IP address

Recommendation: Make it more difficult for attackers to find your server

If attackers were to find the origin IP of an organization’s server (which is where the actual web application resources are hosted), they may be able to send traffic or attacks directly to it.

Depending on the DNS resolver already in place, the following steps can also help hide the origin IP:

  • Do not host a mail service on the same server as the web resource being protected, since emails sent to non-existent addresses get bounced back to the attacker — revealing the mail server IP.
  • Ensure that the web server does not connect to arbitrary addresses provided by users.
  • Since DNS records are publicly visible, rotate origin IPs so that previously published records no longer point to the current server.
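
A simple ongoing check is to confirm that public DNS no longer resolves to the origin. The sketch below uses the standard library; the domain and origin IP are placeholders.

```python
# Warn if a domain's public DNS record still points directly at the origin.
import socket

domain = "www.example.com"       # placeholder domain
known_origin_ip = "192.0.2.1"    # placeholder for the real origin IP

resolved_ips = {info[4][0] for info in socket.getaddrinfo(domain, 443)}
if known_origin_ip in resolved_ips:
    print(f"Warning: {domain} resolves directly to the origin IP")
else:
    print(f"{domain} resolves to: {sorted(resolved_ips)}")
```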

7) Prevent DDoS attacks

Recommendation: Implement always-on DDoS mitigation plus rate limiting

At their worst, distributed denial-of-service (DDoS) attacks can knock a website or entire network offline for extended periods of time.

DDoS attacks occur when a large number of computers or devices, usually controlled by a single attacker, attempt to access a website or online service all at once. These malicious attacks are intended to take resources offline and make them unavailable.

Application-layer DDoS attacks remain the most common attack type against web applications, and they continue to grow in size, frequency, and sophistication.

Look for the following essential DDoS prevention tools:

  • Always-on DDoS mitigation: Look for a scalable, “always-on” DDoS defense with the following capabilities:
    • Automatic absorption of malicious traffic as close as possible to the attack origin (which reduces end-user latency and organizational downtime)
    • Unmetered, unlimited DDoS attack mitigation (which avoids extra charges from spikes in attack traffic)
    • Centralized, autonomous protections against all DDoS attack types (including application- and network-layer attacks)
  • Rate limiting: Rate limiting is a strategy for limiting network traffic. It essentially puts a cap on how often someone can repeat an action within a certain timeframe (for instance, when botnets attempt to DDoS a web application). This is comparable to a police officer who pulls over a driver for exceeding the road’s speed limit. There are two kinds of rate limiting (a minimal sketch follows this list):
    • Standard IP-based rate limiting, which protects unauthenticated endpoints, limits the number of requests from specific IP addresses, and handles abuse from repeat offenders
    • Advanced rate limiting, which also protects APIs from abuse, mitigates volumetric attacks from authenticated API sessions, and provides more customization

A comprehensive DDoS threat defense also hinges on multiple methods that may vary depending on an organization’s size, network architecture, and other factors. Learn more about how to prevent DDoS attacks.
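
As referenced above, here is a minimal sketch of sliding-window, IP-based rate limiting; the window size and request cap are illustrative values, not recommendations.

```python
# A minimal sliding-window rate limiter keyed by client IP.
import time
from collections import defaultdict

WINDOW_SECONDS = 60   # illustrative window
MAX_REQUESTS = 100    # illustrative cap per window

_request_log: dict[str, list[float]] = defaultdict(list)

def allow_request(client_ip: str) -> bool:
    """Return True if this IP is still under the cap for the current window."""
    now = time.monotonic()
    cutoff = now - WINDOW_SECONDS
    # Drop timestamps that have aged out of the window.
    _request_log[client_ip] = [t for t in _request_log[client_ip] if t > cutoff]
    if len(_request_log[client_ip]) >= MAX_REQUESTS:
        return False  # over the cap: reject (e.g., respond with HTTP 429)
    _request_log[client_ip].append(now)
    return True
```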

8) Manage third-party scripts and cookie usage

Recommendation: Look for tools specifically to address client-side risks

In web development, “client side” refers to everything in a web application that is displayed or takes place on the client (the end user’s device). This includes what the website user sees, such as text, images, and the rest of the UI, along with any actions that an application performs within the user’s browser.

The majority of client-side events require loading JavaScript and other third-party code to the web visitor’s browser. But attackers look to compromise those dependencies (for example, with Magecart-style attacks). This leaves visitors vulnerable to malware, credit card data theft, crypto mining, and more.

Cookies also come with client-side risks. For example, an attacker can exploit cookies to expose website visitors to cookie tampering, which can ultimately lead to account takeover or payment fraud. However, website administrators, developers, and compliance team members often do not even know what cookies their website uses.

To reduce the risk from third-party scripts and cookies, implement a service that:

  • Automatically discovers and manages third-party script risks; and
  • Provides full visibility into first-party cookies being used by websites.
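
One browser-level control that complements such a service is a Content-Security-Policy header, which restricts where scripts may be loaded from. The sketch below assumes the Flask framework, and trusted-cdn.example.com is a placeholder for a vetted third party.

```python
# Set a Content-Security-Policy header that limits script sources.
from flask import Flask

app = Flask(__name__)

@app.after_request
def set_csp(response):
    # Only allow scripts from this site and one vetted third-party host.
    response.headers["Content-Security-Policy"] = (
        "script-src 'self' https://trusted-cdn.example.com"
    )
    return response
```
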
9) Block bots and other invalid traffic

Recommendation: Proactively identify and mitigate malicious bot traffic

Some bots are “good” and perform a needed service, such as authorized search engine crawlers. But other bots are disruptive and harmful when left unchecked.

Organizations that sell physical goods or services online are particularly vulnerable to bot traffic. Too much of it can lead to:

  • Performance impact: Heavy bot traffic can put a load on web servers, slowing or denying service to legitimate users
  • Operational disruptions: Bots can scrape or download content from a website, rapidly spread spam content, or hoard a business’ online inventory
  • Data theft and account takeovers: Bots can steal credit card data and login credentials, and take over accounts

Look for a bot management service that:

  • Accurately identifies bots at scale by applying behavioral analysis, machine learning, and fingerprinting to a vast volume of traffic
  • Allows good bots, such as those belonging to search engines, to keep reaching the site while still blocking malicious traffic (one common verification approach is sketched after this list)
  • Integrates easily with other web application security and performance services
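
One widely used way to confirm that a “good” bot is genuine is forward-confirmed reverse DNS, which search engines such as Google document for their crawlers. The sketch below uses the standard library.

```python
# Verify a claimed Googlebot with forward-confirmed reverse DNS.
import socket

def is_verified_googlebot(client_ip: str) -> bool:
    """Reverse-resolve the IP, check the domain, then forward-confirm."""
    try:
        hostname, _, _ = socket.gethostbyaddr(client_ip)
    except socket.herror:
        return False
    if not hostname.endswith((".googlebot.com", ".google.com")):
        return False
    # Forward-confirm: the hostname must resolve back to the same IP.
    return client_ip in {info[4][0] for info in socket.getaddrinfo(hostname, None)}
```
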
10) Track and analyze web traffic and security metrics

Recommendation: Improve web security with data-driven decisions

Analytics and logs with actionable data are important for improving web performance and security on an ongoing basis.

For example, logs and application security dashboards can provide insights into:

  • Potential threats in HTTP traffic, so that errors affecting end users can be identified and debugged
  • Attack variations and their malicious payloads (for example, injection attacks vs. remote code execution attacks), so that systems can be ‘tuned’ and hardened accordingly
  • DNS query traffic, and the geographical distribution of queries over time, to spot anomalous traffic

Visibility into web traffic analytics is a key component of continuous risk assessment. With it, organizations can make more informed decisions about how to improve their application performance and where to boost their security investments.
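
Even without a dedicated dashboard, basic signals can be pulled from server logs. The sketch below counts repeated client-error responses per IP in a combined-format access log; the file path and reporting cutoff are illustrative.

```python
# Surface IPs generating repeated 4xx responses from an access log.
from collections import Counter

error_ips = Counter()

with open("access.log") as log:  # placeholder path
    for line in log:
        parts = line.split()
        if len(parts) < 9:
            continue  # skip malformed lines
        client_ip, status = parts[0], parts[8]
        # Repeated 4xx responses from one IP can indicate probing.
        if status.startswith("4"):
            error_ips[client_ip] += 1

for ip, count in error_ips.most_common(5):
    print(f"{ip}: {count} client-error responses")
```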

How does Cloudflare help secure websites?

Cloudflare’s connectivity cloud simplifies web application security and delivery, with a full suite of integrated services that connect and protect organizations’ web applications and APIs.

These services include DDoS protection, an industry-leading web application firewall (WAF), bot management, client-side security, an API gateway, a free public DNS resolver, free SSL/TLS certificates, comprehensive web performance and security analytics, and much more.

Discover the services that fit your website’s needs at www.cloudflare.com/plans.