The reality of modern DDoS attacks

DDoS attacks are no longer the blunt instruments they once were. In 2025 and 2026, attacks routinely combine multiple vectors, shift patterns mid-attack, and use botnets sophisticated enough to mimic legitimate traffic. The barrier to launching an attack has dropped to nearly zero thanks to DDoS-for-hire services that sell attack capacity for as little as $20 per hour. Meanwhile, the cost of being on the receiving end has only grown.

Defending against DDoS requires thinking beyond a single product or technique. It demands a layered strategy that spans preparation before an attack ever happens, rapid detection when one begins, automated response to minimize impact, and structured recovery to strengthen your defenses for next time.

This guide walks through each phase with concrete, implementable advice.

Phase 1: Preparation

The best DDoS defense starts long before the first malicious packet arrives. Preparation is about reducing your attack surface, building capacity, and establishing the processes that enable rapid response.

Know your baseline traffic

You cannot identify abnormal traffic if you do not know what normal looks like. Before anything else, establish baseline measurements for your infrastructure. What is the normal packets-per-second rate for each server? What protocols and ports carry your legitimate traffic? What does your traffic pattern look like over a 24-hour cycle, a weekly cycle, and during seasonal peaks?
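
If you want to see what this involves under the hood, here is a minimal sketch of per-interface baselining on Linux. It reads the kernel's packet counters from /sys/class/net and keeps one exponentially weighted average per hour of day to capture daily cycles; the interface name and smoothing factor are illustrative placeholders, not a Flowtriq implementation:

```python
import time
from collections import defaultdict
from pathlib import Path

IFACE = "eth0"  # placeholder: your public-facing interface
RX_PACKETS = Path(f"/sys/class/net/{IFACE}/statistics/rx_packets")

def sample_pps(interval: float = 1.0) -> float:
    """Measure received packets per second over one interval."""
    before = int(RX_PACKETS.read_text())
    time.sleep(interval)
    after = int(RX_PACKETS.read_text())
    return (after - before) / interval

# One slow-moving average per hour of day, so the baseline reflects
# daily traffic cycles instead of a single global mean.
baseline: dict[int, float] = defaultdict(float)
ALPHA = 0.05  # illustrative smoothing factor; lower = steadier baseline

while True:
    hour = time.localtime().tm_hour
    pps = sample_pps()
    if baseline[hour] == 0:
        baseline[hour] = pps  # seed with the first sample for this hour
    baseline[hour] = (1 - ALPHA) * baseline[hour] + ALPHA * pps
    print(f"hour={hour:02d} pps={pps:,.0f} baseline={baseline[hour]:,.0f}")
```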

Flowtriq's dynamic baseline engine handles this automatically. Deploy the agent on your servers and it begins learning traffic patterns immediately. Within hours, it has established per-node baselines that account for time-of-day variations and organic growth. This baseline data becomes the foundation for accurate detection without the false positive noise that plagues static threshold systems.

Reduce your attack surface

Every open port, every publicly exposed service, and every unnecessary network path is a potential attack vector. Audit your infrastructure and close everything that does not need to be open.

  • Close unnecessary ports. If a server only needs to serve HTTP/HTTPS, there is no reason to have DNS, NTP, SNMP, or other UDP services exposed. Each of these can be used as amplification vectors.
  • Use rate limiting. Set connection rate limits on services that face the public internet. This will not stop a large DDoS attack, but it limits the damage from smaller floods and application-layer attacks. (A token-bucket sketch follows this list.)
  • Disable IP-directed broadcasts. This prevents your infrastructure from being used as an amplifier in Smurf attacks.
  • Implement BCP38 filtering. If you operate your own network, ensure you are filtering packets with spoofed source addresses. This is good internet citizenship and reduces your exposure to reflection attacks.
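
To make the rate-limiting point concrete, here is a minimal token-bucket sketch in Python. Production deployments enforce limits in the kernel or at the load balancer rather than in application code, and the per-IP rate and burst values below are placeholders:

```python
import time

class TokenBucket:
    """Allow `rate` connections per second with bursts up to `burst`."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate          # tokens refilled per second
        self.capacity = burst     # maximum bucket size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Placeholder limits: 100 new connections/second per IP, bursts of 200.
limiters: dict[str, TokenBucket] = {}

def accept_connection(src_ip: str) -> bool:
    bucket = limiters.setdefault(src_ip, TokenBucket(rate=100, burst=200))
    return bucket.allow()
```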

Build capacity headroom

Infrastructure that runs at 90% capacity during normal operations has no room to absorb even a small attack. Maintain headroom in your bandwidth, CPU, memory, and connection table capacity. A good target is operating at no more than 60-70% capacity during peak normal traffic; on a 10 Gbps uplink, that means keeping peak legitimate traffic under 6-7 Gbps. That headroom gives you breathing room to absorb minor attacks while your mitigation systems activate.

Establish your response playbook

Document your DDoS response procedures before you need them. Who gets notified? What are the escalation triggers? Which mitigation actions are automated and which require manual approval? What is the communication plan for customers and stakeholders?

A playbook that exists only in the heads of your senior engineers is a playbook that fails at 3 AM when those engineers are not available. Write it down, store it somewhere accessible, and practice it.
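
One way to keep a playbook out of people's heads is to make it data as well as prose. The structure below is hypothetical (every name, threshold, and action is a placeholder to adapt), but storing it this way means your tooling and your humans read the same source of truth:

```python
# Hypothetical playbook structure -- not a Flowtriq or industry-standard
# format; adapt the names and thresholds to your organization.
PLAYBOOK = {
    "notify": ["oncall-network", "ops-room"],
    "escalation_triggers": {
        "attack_exceeds_gbps": 10,    # go beyond on-server filtering
        "alert_unacked_minutes": 5,   # page the secondary responder
    },
    "automated_actions": ["on_server_filtering", "flowspec_push"],
    "requires_manual_approval": ["cloud_scrubbing", "blackhole_route"],
    "communications": {
        "status_page_within_minutes": 15,
        "customer_email_when": "customer-visible impact exceeds 30 minutes",
    },
}
```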

Phase 2: Detection

Fast detection is the single most important factor in DDoS defense. Detecting an attack in one second rather than sixty can be the difference between a minor blip and a major outage. During that detection gap, attack traffic flows unimpeded, connection tables fill up, and your customers experience degraded service.

The detection speed hierarchy

Not all detection methods are created equal when it comes to speed:

  • Kernel-level packet monitoring (sub-second): Agents on each server that monitor packet rates at the kernel level. This is the fastest possible detection because there is zero sampling and zero export delay. Flowtriq detects anomalies within one second of onset. (A minimal sketch of this approach follows the list.)
  • Inline appliances (1-5 seconds): Hardware devices that inspect every packet passing through them. Fast, but limited to the network segments where they are deployed.
  • Flow-based analysis (30-120 seconds): NetFlow/sFlow collectors that sample and export traffic data. The sampling rate and export interval introduce inherent delay.
  • Synthetic monitoring (60-300 seconds): Checking service availability from external probes. By the time availability monitoring detects a problem, the attack has been running for minutes.
  • Customer reports (5-60 minutes): Support tickets about "site is down." This is not a detection method. This is a failure mode.
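
To make the top of that hierarchy concrete, here is a minimal sketch of a per-second kernel-counter check, reusing the baseline idea from Phase 1. The interface, baseline constant, and 3x multiplier are all illustrative; a real system compares against its learned, time-of-day baseline rather than a constant:

```python
import time
from pathlib import Path

RX_PACKETS = Path("/sys/class/net/eth0/statistics/rx_packets")  # placeholder
BASELINE_PPS = 50_000  # stand-in for the learned baseline for this hour
MULTIPLIER = 3         # illustrative threshold: flag traffic at 3x baseline

last = int(RX_PACKETS.read_text())
while True:
    time.sleep(1)
    now = int(RX_PACKETS.read_text())
    pps = now - last
    last = now
    if pps > BASELINE_PPS * MULTIPLIER:
        # Fires within one second of onset: we read kernel counters
        # directly, so there is no sampling or export interval to wait on.
        print(f"ANOMALY: {pps:,} pps against baseline {BASELINE_PPS:,}")
```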

Attack classification matters

Knowing that you are under attack is step one. Knowing what kind of attack it is determines your response. Different attack types require fundamentally different mitigation strategies.

Volumetric attacks (UDP floods, amplification attacks) overwhelm bandwidth. Mitigation requires upstream filtering or traffic diversion to a scrubbing center. On-server filtering is insufficient because the traffic still saturates your links.

Protocol attacks (SYN floods, fragmented packets) exhaust connection tables and state on firewalls, load balancers, and servers. Mitigation involves SYN cookies, connection rate limiting, and stateless packet filtering.

Application-layer attacks (HTTP floods, slowloris) target specific services with requests that look legitimate. Mitigation requires application awareness, behavioral analysis, and challenge-response mechanisms.
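
A toy example of why classification drives mitigation: even a few coarse traffic features can separate these three families. The feature names and thresholds below are made up for illustration; this is not Flowtriq's eight-category classifier:

```python
def classify(udp_share: float, syn_to_synack_ratio: float,
             http_rps_vs_baseline: float) -> str:
    """Coarse attack-family classification from traffic features.

    All thresholds are illustrative placeholders.
    """
    if udp_share > 0.8:
        # Mostly UDP at high volume: likely a flood or amplification.
        return "volumetric"         # -> upstream filtering / scrubbing
    if syn_to_synack_ratio > 5.0:
        # Many SYNs never complete the handshake: state exhaustion.
        return "protocol"           # -> SYN cookies, stateless filtering
    if http_rps_vs_baseline > 10.0:
        # Well-formed requests far above baseline: application-layer flood.
        return "application-layer"  # -> behavioral analysis, challenges
    return "no-attack"

print(classify(udp_share=0.95, syn_to_synack_ratio=1.0,
               http_rps_vs_baseline=1.2))  # -> volumetric
```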

Flowtriq classifies attacks automatically into eight categories the moment detection triggers. This classification drives the auto-mitigation response, applying the right countermeasure for the specific attack type rather than using a one-size-fits-all approach.

Multi-channel alerting

Detection is useless if the right people do not find out immediately. Your alerting system needs to reach your team wherever they are, through whatever channel they actually monitor.

Flowtriq supports Discord, Slack, PagerDuty, OpsGenie, email, SMS, Telegram, Datadog, and custom webhooks. Configure multiple channels so that alerts reach both your on-call engineer and your broader operations team. Set up escalation policies so that if the primary responder does not acknowledge within a defined window, the alert escalates to the next person.
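
For the custom-webhook end of that list, here is a minimal delivery sketch with failover between channels. The endpoint URLs are placeholders, and real services differ in payload shape (Slack expects a "text" field, Discord a "content" field), so a production sender formats per channel:

```python
import json
import urllib.request

# Placeholder endpoints -- substitute your real webhook URLs.
CHANNELS = [
    "https://hooks.example.com/primary-oncall",
    "https://hooks.example.com/ops-room",
]

def send_alert(message: str) -> bool:
    """Post an alert to each channel in order until one accepts it."""
    payload = json.dumps({"text": message}).encode()
    for url in CHANNELS:
        req = urllib.request.Request(
            url, data=payload, headers={"Content-Type": "application/json"}
        )
        try:
            with urllib.request.urlopen(req, timeout=5):
                return True       # delivered: stop here
        except OSError:
            continue              # channel failed: try the next one
    return False                  # all channels failed: escalate another way

send_alert("DDoS incident: SYN flood on web-3, auto-mitigation active")
```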

Phase 3: Response

Response is where preparation and detection pay off. The goal is to mitigate the attack with minimal impact on legitimate traffic and minimal manual intervention.

Automated first response

The first 30 seconds of an attack are critical. If your response depends on a human seeing an alert, assessing the situation, and manually deploying countermeasures, you have already lost minutes. Automated mitigation covers that gap.

Flowtriq's auto-mitigation engine deploys countermeasures within seconds of detection. For protocol-specific attacks, it generates targeted iptables or nftables rules that drop attack traffic while passing legitimate traffic. For volumetric attacks exceeding on-server capacity, it can trigger BGP FlowSpec rules on your upstream routers or activate cloud scrubbing services.

The key to effective auto-mitigation is precision. Broad rules (like blocking an entire protocol or IP range) create collateral damage. Specific rules (like rate-limiting SYN packets from identified source ranges on a specific port) minimize impact on legitimate users. Flowtriq's attack classification enables this precision by identifying the attack pattern before generating mitigation rules.
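
As a sketch of what that precision looks like in practice, here is one way classification output could be turned into a targeted nftables rule. It assumes an existing inet filter table with an input chain, and the source range, port, and rate are placeholders from a hypothetical classification result:

```python
import subprocess

def mitigate_syn_flood(src_range: str, dport: int,
                       max_rate: str = "100/second") -> None:
    """Rate-limit SYNs from an identified source range on one port.

    Assumes an `inet filter` table with an `input` chain already exists;
    requires root and the nft binary.
    """
    rule = [
        "nft", "add", "rule", "inet", "filter", "input",
        "ip", "saddr", src_range,
        "tcp", "dport", str(dport),
        "tcp", "flags", "syn",
        "limit", "rate", "over", max_rate, "drop",
    ]
    subprocess.run(rule, check=True)

# Placeholder values from a hypothetical classification result.
mitigate_syn_flood("203.0.113.0/24", 443)
```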

Escalation tiers

Not every attack requires the same response. A well-designed mitigation strategy uses tiered escalation:

  1. Tier 1 - On-server filtering: Kernel-level packet filtering handles attacks up to the server's processing capacity. Zero additional cost, minimal latency impact, fully automatic.
  2. Tier 2 - Network-level filtering: BGP FlowSpec pushes filtering rules to upstream routers, handling attacks that exceed individual server capacity but stay within your network's total bandwidth.
  3. Tier 3 - Cloud scrubbing: For volumetric attacks that threaten to saturate your upstream links, divert traffic through a cloud scrubbing service. This handles multi-hundred-gigabit attacks but introduces some latency and cost.
  4. Tier 4 - Black hole routing: RTBH as an absolute last resort to protect the rest of your infrastructure when all other options are exhausted.

Each tier should activate automatically based on predefined thresholds, with the option for manual override. The goal is to use the least disruptive mitigation that effectively handles the attack.
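
The tier selection itself reduces to a small piece of threshold logic. A minimal sketch, with capacity figures invented for a hypothetical network; your values come from your own capacity planning:

```python
# Illustrative capacity figures for a hypothetical deployment.
SERVER_CAPACITY_GBPS = 2      # what one node can filter on-server
NETWORK_CAPACITY_GBPS = 40    # total upstream bandwidth

def select_tier(attack_gbps: float, all_tiers_exhausted: bool = False) -> str:
    """Pick the least disruptive tier that can absorb the attack."""
    if all_tiers_exhausted:
        return "tier4-blackhole"    # RTBH: absolute last resort
    if attack_gbps <= SERVER_CAPACITY_GBPS:
        return "tier1-on-server"    # kernel-level filtering
    if attack_gbps <= NETWORK_CAPACITY_GBPS * 0.8:
        return "tier2-flowspec"     # push rules to upstream routers
    return "tier3-scrubbing"        # divert through cloud scrubbing

assert select_tier(1.0) == "tier1-on-server"
assert select_tier(20.0) == "tier2-flowspec"
assert select_tier(120.0) == "tier3-scrubbing"
```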

PCAP forensics during the attack

While mitigation is active, capture packet-level forensic data. This serves two purposes: verifying that your mitigation rules are correct (are you actually filtering the attack traffic and not legitimate traffic?) and providing evidence for post-incident analysis.

Flowtriq automatically captures PCAP data when an incident triggers. These captures include the initial attack packets, the full attack pattern, and traffic samples throughout the incident. The built-in AI analysis examines the PCAP to identify attack tools, botnet signatures, and patterns that can improve your defenses.
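
If part of your stack does not capture automatically, a bounded capture can be kicked off by the incident handler. Here is a sketch using tcpdump's rotation flags so a long incident cannot fill the disk; the interface and paths are placeholders:

```python
import subprocess

def start_incident_capture(iface: str = "eth0",
                           prefix: str = "/var/tmp/incident"):
    """Capture five 60-second pcap files, then stop.

    -G 60 rotates the output file every 60 seconds; -W 5 ends the capture
    after five rotations, so disk usage stays bounded during the incident.
    """
    return subprocess.Popen([
        "tcpdump", "-i", iface,
        "-s", "256",                # snap length: headers plus some payload
        "-G", "60", "-W", "5",
        "-w", f"{prefix}-%s.pcap",  # %s expands to an epoch timestamp
    ])

proc = start_incident_capture()
# ... when the incident resolves early:
# proc.terminate()
```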

Phase 4: Recovery and improvement

When the attack subsides, the work is not over. Recovery and improvement close the loop and make your defenses stronger for the next attack.

Post-incident analysis

Review every significant DDoS incident with your team. The questions to answer:

  • How quickly was the attack detected? Could detection have been faster?
  • Was the attack classification correct? Did the auto-mitigation deploy the right countermeasures?
  • What was the customer impact duration? How much of it elapsed before mitigation activated, and how much was mitigation deployment time?
  • Were there IOC (Indicator of Compromise) patterns in the attack that match known botnets or attack tools?
  • Did the communication plan work? Were customers informed promptly?

Flowtriq's incident detail pages provide the data for this analysis. Every incident includes a timeline of detection, classification, mitigation actions, and resolution. PCAP captures give you packet-level evidence. IOC pattern matching identifies signatures of known attack tools and botnets such as Mirai and LOIC.

Update your defenses

Every attack teaches you something. Use post-incident analysis to update your mitigation rules, refine your escalation thresholds, and close any gaps the attack revealed. If the attack used a vector you were not monitoring, add monitoring for it. If your auto-mitigation was too slow to escalate, lower the escalation threshold.

Test and practice

Regular testing validates that your defenses actually work. Run tabletop exercises with your team to walk through attack scenarios. Use controlled traffic generation to test that your detection and auto-mitigation activate correctly. Verify that your alert channels are working and reaching the right people.
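
For the controlled traffic generation, even a simple sender is enough to confirm the detection-to-alert path fires end to end. A sketch that sends a modest UDP burst; point it only at a host you own, and treat the address, port, and rate as placeholders:

```python
import socket
import time

# Placeholder: a test host you control, on an unused UDP port.
TARGET = ("192.0.2.10", 9999)

def send_test_burst(pps: int = 5000, seconds: int = 10) -> None:
    """Send a modest UDP burst to verify detection and alerting fire."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"x" * 512
    delay = 1.0 / pps
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        sock.sendto(payload, TARGET)
        time.sleep(delay)  # sleep granularity caps the real rate; fine for a test

send_test_burst()
```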

Common defense mistakes to avoid

Even organizations with solid DDoS defense strategies make these recurring mistakes:

  • Relying solely on upstream provider protection. Your ISP or hosting provider likely offers some DDoS protection, but it is typically threshold-based, slow to activate, and provides no per-server visibility. It is a useful layer but not a complete defense.
  • Setting static thresholds and forgetting them. Traffic patterns change over time. Thresholds set six months ago may be wildly inaccurate today. Dynamic baselines eliminate this problem entirely.
  • Ignoring application-layer attacks. Many organizations focus exclusively on volumetric protection and have no defense against HTTP floods, API abuse, or other application-layer attacks that fly under volumetric thresholds.
  • No forensic evidence. Without PCAP captures and detailed incident logs, you cannot do meaningful post-incident analysis. You are defending blind against the next attack.
  • Manual-only response. If every attack requires a human to wake up, log in, assess, and manually deploy countermeasures, your mean time to mitigation will always be measured in minutes, not seconds.

Building a defense stack

The most effective DDoS defense is not a single product but a layered stack. Here is what a modern defense stack looks like:

  1. Per-server detection and classification: Flowtriq agents on every server, providing 1-second detection and automatic attack classification.
  2. Automated on-server mitigation: Flowtriq auto-mitigation with iptables/nftables rules, handling the majority of attacks without external help.
  3. Network-level filtering: BGP FlowSpec integration with your routers for attacks exceeding single-server capacity.
  4. Cloud scrubbing: A scrubbing service on standby for volumetric attacks that exceed your network capacity.
  5. Multi-channel alerting: Notifications reaching your team within seconds through the channels they actually monitor.
  6. PCAP forensics and IOC matching: Evidence capture and threat intelligence for every incident.

Flowtriq provides layers 1, 2, 5, and 6 out of the box, with integration points for layers 3 and 4. This gives you a comprehensive detection and response platform that works alongside your existing network infrastructure and cloud scrubbing services.

Build your DDoS defense stack

Flowtriq provides 1-second detection, automatic attack classification, auto-mitigation, PCAP forensics, and multi-channel alerting. The foundation your defense stack needs. $9.99/node/month.

Start your free 7-day trial →