The structural gap nobody wants to talk about
Every DDoS response playbook starts with detection. You notice the attack, you triage it, you classify the vector, you pick a mitigation strategy, and you push an upstream rule to your transit provider. The playbook looks reasonable on paper. The problem is the timestamps.
NETSCOUT's 2024 Threat Intelligence Report found that 70% of DDoS attacks last less than 15 minutes. That is the attack duration from first packet to last. Meanwhile, security operations research consistently puts manual DDoS response workflows at a minimum of 15 to 30 minutes under best-case conditions: alert triage, vector classification, strategy selection, and upstream provider coordination. Under realistic conditions, with on-call handoffs, provider ticket queues, and multi-hop escalation, 45 to 60 minutes is common.
The structural gap is plain: if 70% of attacks are over before a human can push the first upstream rule, the manual playbook is not a response to most DDoS attacks. It is a post-mortem. The damage was done, the attack ended on its own, and you are reconstructing what happened after the fact.
Why short attacks are a deliberate strategy: The DDoS-for-hire economy has normalized short, intense bursts as a tactic. Brief high-volume attacks saturate uplinks, exhaust state tables, and interrupt sessions without leaving enough time for providers to respond. Attackers know that a 10-minute flood causes service degradation, SLA violations, and customer complaints even if it "resolves on its own." The attack goal is disruption, not sustained blackout. Manual response models were designed for the era of sustained multi-hour campaigns. They were never designed for sub-15-minute hit-and-run attacks.
What happens to infrastructure during each unmitigated minute
To make the cost of detection lag concrete, consider what is actually happening to a server under a volumetric attack at each minute mark. The Lorikeet Security incident on March 27, 2026, is a useful reference: a 48.3 Gbps multi-vector attack combining NTP amplification and a SYN flood. Here is what each phase looks like with and without automated detection.
T+0s to T+5s: ramp phase
NTP amplification attacks do not arrive at maximum volume instantly. Reflected packets from the amplifier pool begin returning to the target over several seconds as the attacker's spoofed requests propagate across the reflector network. During this 0 to 5 second window, the packet rate (PPS) is rising sharply but hasn't yet saturated the uplink. This is the optimal detection window: if you can identify the attack here, you can get mitigation in place before peak volume arrives.
With sub-second detection, Flowtriq fires at T+0.9s during Lorikeet's attack, before the NTP amplification reaches anywhere near its 39 Gbps peak. With flow-sampled detection (NetFlow at 1:1000 sampling, 30-second export intervals), no alarm fires yet: the sampling window hasn't closed.
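Flowtriq's detector internals are not public, but the principle of per-second, counter-based detection is simple enough to sketch. The following Python is a minimal illustration under assumed names and thresholds, not the product's implementation: it polls a Linux interface counter once per second and flags a packet-rate spike.

```python
"""Minimal sketch of per-second packet-rate detection (Linux).

Illustrative only: Flowtriq's actual detector is not public. The
interface name and threshold are assumptions for the example.
"""
import time

IFACE = "eth0"            # assumed interface name
PPS_THRESHOLD = 500_000   # assumed per-second packet-rate threshold

def rx_packets(iface: str) -> int:
    """Return the cumulative received-packet counter from /proc/net/dev."""
    with open("/proc/net/dev") as f:
        for line in f:
            name, _, rest = line.partition(":")
            if name.strip() == iface:
                # After the colon: rx_bytes, rx_packets, errs, drop, ...
                return int(rest.split()[1])
    raise ValueError(f"interface {iface!r} not found")

def watch() -> None:
    prev = rx_packets(IFACE)
    while True:
        time.sleep(1.0)
        cur = rx_packets(IFACE)
        pps = cur - prev
        prev = cur
        if pps > PPS_THRESHOLD:
            print(f"ALERT: {pps:,} pps on {IFACE}, above threshold")
            # A real agent would classify the vector and trigger
            # automated mitigation here instead of printing.

if __name__ == "__main__":
    watch()
```

The point of the sketch is the cadence: a detection decision is available every second, against a 30-to-60-second floor for sampled flow export.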
T+15s to T+60s: peak volume arrives
By 15 to 60 seconds into the attack, full amplification volume is typically reached. At 48 Gbps, a standard 1 Gbps uplink is oversubscribed 48 to 1. Even a 10 Gbps uplink is effectively full. At this point, legitimate traffic is being dropped not by the attacker's packets but by the uplink itself: your router's queue is exhausted and anything that doesn't fit gets discarded. This is symmetric damage: the attack traffic and your users' traffic are equally affected.
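A back-of-envelope calculation shows how total that symmetric damage is. The attack and uplink figures below come from the incident; the legitimate-traffic load is an assumption for illustration.

```python
# Back-of-envelope math for the peak phase. Assumes tail drop at the
# congested queue, so attack and legitimate packets are discarded at
# the same rate. The legitimate load is an assumed figure.
attack_gbps = 48.0   # peak attack volume from the Lorikeet incident
uplink_gbps = 1.0    # a standard 1 Gbps uplink
legit_gbps = 0.4     # assumed normal legitimate traffic

offered = attack_gbps + legit_gbps
delivered = min(1.0, uplink_gbps / offered)

print(f"Oversubscription: {offered / uplink_gbps:.1f}x")
print(f"Traffic delivered: {delivered:.1%} of what was offered")
print(f"Legitimate traffic surviving: {legit_gbps * delivered * 1000:.0f} Mbps")
```

Under tail drop, roughly 98% of every flow, hostile or legitimate, is discarded, and a 400 Mbps legitimate load is squeezed to single-digit Mbps.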
With sub-second detection, Flowtriq's on-node iptables rules (T+8s) and BGP FlowSpec rules (T+11s) are already in place before peak volume. The uplink never saturates. With flow-sampled detection, the first alert fires somewhere between T+30s and T+60s at the earliest. A human is now reading the alert, not yet acting on it.
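What do those on-node rules look like? Flowtriq's rule set is not published, so treat the following as a generic sketch of kernel-level mitigation for this attack's two vectors, expressed as the kind of commands an automated responder might push (root privileges required):

```python
"""Illustrative on-node mitigation for an NTP amplification + SYN flood.

NOT Flowtriq's actual rule set: a sketch of the kind of kernel-level
filters an automated responder might install. Must run as root.
"""
import subprocess

def block_ntp_reflection() -> None:
    """Drop reflected NTP responses (UDP source port 123).

    A host that runs its own NTP client would need a narrower match,
    e.g. on packet size, to avoid dropping its legitimate replies.
    """
    subprocess.run(
        ["iptables", "-I", "INPUT", "1",      # insert at top of chain
         "-p", "udp", "--sport", "123", "-j", "DROP"],
        check=True,
    )

def limit_syn_rate() -> None:
    """Blunt the SYN flood: accept a bounded SYN rate, drop the rest.

    Kernel SYN cookies (net.ipv4.tcp_syncookies=1) protect whatever
    still gets through to the listen queue.
    """
    subprocess.run(
        ["iptables", "-I", "INPUT", "1", "-p", "tcp", "--syn",
         "-m", "limit", "--limit", "500/second", "--limit-burst", "1000",
         "-j", "ACCEPT"],
        check=True,
    )
    subprocess.run(
        ["iptables", "-A", "INPUT", "-p", "tcp", "--syn", "-j", "DROP"],
        check=True,
    )

if __name__ == "__main__":
    block_ntp_reflection()
    limit_syn_rate()
```

The BGP FlowSpec half of the response expresses the same match-and-drop logic as a route announcement, so the transit provider discards matching traffic before it ever reaches the uplink.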
T+1min to T+5min: application processes fail
By one minute in, application-layer consequences are accumulating. TCP connection state tables are being exhausted by the SYN flood. Web server worker pools are filling with half-open connections. Database connection pools time out waiting for responses that can't arrive because the network queue is saturated. Depending on application architecture, this is where you start seeing error rates spike, timeouts cascade, and monitoring dashboards go red.
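You can watch that exhaustion happen directly. As a rough illustration (the warning threshold is an assumption), this snippet counts half-open connections in the kernel's TCP state table:

```python
"""Sketch: count half-open (SYN_RECV) connections on a Linux host.

A quick view of SYN-flood pressure on the kernel's TCP state table.
The warning threshold is an illustrative assumption.
"""

SYN_RECV = "03"  # TCP state code used in /proc/net/tcp

def half_open_count() -> int:
    count = 0
    for path in ("/proc/net/tcp", "/proc/net/tcp6"):
        try:
            with open(path) as f:
                next(f)  # skip the header row
                for line in f:
                    if line.split()[3] == SYN_RECV:
                        count += 1
        except FileNotFoundError:
            pass  # e.g. IPv6 disabled
    return count

if __name__ == "__main__":
    n = half_open_count()
    print(f"{n} half-open connections")
    if n > 1024:  # assumed backlog ceiling for the example
        print("WARNING: state table under SYN-flood pressure")
```

On a host under SYN flood, this count climbs toward the listen backlog within seconds unless SYN cookies or an upstream filter intervene.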
With sub-second detection and automated mitigation, the application layer never sees the attack. It was stopped at the network and kernel levels. With manual response in progress, a human analyst is now classifying the vector, possibly escalating to a senior engineer, and preparing to contact the upstream provider. No mitigation rule exists yet.
T+5min to T+15min: customer impact becomes visible and irreversible
By five minutes of unmitigated attack traffic, the customer-facing impact is significant. Users are receiving timeouts, connection errors, or partial page loads. For any business with an SLA, the clock is ticking. For a live event like Lorikeet's, participants have been unable to use the training platform for five minutes. Support tickets are being submitted. Social channels are starting to light up.
This is also where the manual response process is typically reaching its first useful action: the upstream provider has been contacted, or a BGP blackhole has been requested. Even optimistically, the mitigation rule won't be in place for another 5 to 10 minutes. By then, 10 to 20 minutes of damage has accumulated.
The irreversibility problem: Unlike many infrastructure failures, the damage from an unmitigated DDoS attack cannot be undone retroactively. A live training session that went down for 10 minutes cannot have those 10 minutes restored. An e-commerce site that was unreachable during a flash sale cannot recover the abandoned carts. An API that timed out for a customer cannot un-create the bad experience. Manual response doesn't just mean slower protection; it means accepting that the damage will happen and acting after the fact.
Side-by-side: automated vs manual at each time marker
| Time marker | Sub-second detection (automated) | Manual response (flow-sampled) |
|---|---|---|
| T+0.9s | Attack detected, incident created, Slack alert fired | No data yet (sampling window not closed) |
| T+8s | On-node iptables rules active, app processes protected | Still no alert |
| T+11s | BGP FlowSpec active at transit edge, uplink recovered | Still no alert |
| T+30–60s | Mitigation complete, users unaffected | First flow export received, alert may fire |
| T+5min | Attack ongoing upstream, nodes clean throughout | Engineer triaging alert, provider not yet contacted |
| T+15min | 38-min attack still fully absorbed upstream | Provider contacted, rule requested but not yet active |
| T+20–30min | Continues absorbing, zero impact throughout | Mitigation rule possibly active now |
For 70% of attacks, the manual response column never reaches "mitigation active" before the attack ends on its own. The entire response cycle plays out against an attack that is already over.
Lorikeet: the counter-example that proves the rule
The Lorikeet Security incident on March 27, 2026, is instructive precisely because it was not a sub-15-minute attack. The campaign ran for 38 minutes, well within the window where manual response theoretically could have helped. A skilled engineer, working quickly, could plausibly have classified the multi-vector attack and pushed an upstream rule within 20 to 30 minutes of the attack starting.
But "could have helped" is doing a lot of work there. During those 20 to 30 unmitigated minutes, 240 participants in a live cybersecurity training session would have lost service. The event's SLA had a 15-consecutive-minute breach clause. Manual response would have triggered the SLA violation before the first rule was in place.
Instead, Flowtriq detected the attack in 0.9 seconds, pushed on-node iptables rules at T+8s, and had BGP FlowSpec active at the transit provider edge at T+11s. The full mitigation stack was in place within 15 seconds of detection, well before any flow-sampled system would have received its first data export. Not one participant noticed the attack. The SLA was not triggered. The 38-minute campaign ran entirely against upstream mitigation while the training session continued uninterrupted.
"The Flowtriq alert landed in our Slack before I'd even registered anything was wrong on the dashboard. By the time I pulled up the incident view, the on-node rules had already fired, BGP FlowSpec was pushed to our upstream, and cloud scrubbing was routing traffic through an additional layer. Full mitigation stack active in under 15 seconds from detection. Not one participant noticed anything happened."
Ryan Wilke, CEO & Founder, Lorikeet Security
Why sub-second detection is not optional for modern DDoS defense
The 70% figure from NETSCOUT isn't an argument that short attacks are harmless. It's an argument that mitigation is a race: a response only counts if it lands before the attack ends, and that inequality is fixed unless you change one of the two variables. You cannot change how long attackers run their campaigns. You can change how fast you detect and respond.
Sub-second detection changes the math entirely. When detection fires at T+0.9s and upstream mitigation is in place at T+11s, the entire 70% of attacks that last less than 15 minutes are contained before they can cause customer-facing impact. The 30% that last longer are contained before they saturate the uplink. In both cases, the response completes before the attack causes damage, not after.
Flow-sampled detection running on 30 to 60 second export intervals cannot close this gap. The arithmetic of the sampling window means the earliest possible alert is 30 to 60 seconds after the first packet, and that's before any human decision-making latency is added. For short attacks, the alert arrives after the attack ends. For longer attacks, it arrives after minutes of damage have accumulated.
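A toy calculation makes the race concrete. Only the 70%-under-15-minutes split below comes from NETSCOUT; the finer duration buckets are invented for the example, but the comparison holds for any plausible distribution.

```python
# Mitigation is a race: an attack is contained only if the response
# lands while it is still running. Only the 70%-under-15-minutes split
# is NETSCOUT's; the bucket breakdown is assumed for illustration.
duration_share = {5: 0.40, 15: 0.30, 60: 0.20, 240: 0.10}  # minutes -> share

def contained_share(response_minutes: float) -> float:
    """Fraction of attacks still running when mitigation lands."""
    return sum(share for duration, share in duration_share.items()
               if duration > response_minutes)

for label, response_minutes in [("15 seconds", 0.25), ("30 minutes", 30.0)]:
    print(f"response time of {label}: mitigation lands in time for "
          f"{contained_share(response_minutes):.0%} of attacks")
```

With a 15-second response, every bucket is contained. With a 30-minute manual cycle, only the minority of long campaigns ever sees an active mitigation rule.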
Per-second, per-node detection changes the detection window from minutes to under one second. That is the only change that makes automated response to short attacks possible, and the only approach that gives you a real chance of protecting infrastructure during the 70% of attacks that end before manual response even starts.
What automated detection requires: Per-second kernel-level monitoring on every node (not flow sampling at the network edge), multi-vector classification that identifies the attack type within the same detection cycle, pre-configured automated mitigation rules that fire without human decision-making, and BGP FlowSpec or cloud scrubbing integration that can push upstream rules programmatically. Each component is necessary. A fast detection system without automated upstream mitigation still requires a human to push the rule. An automated system built on sampled NetFlow still has the 30 to 60 second detection gap. The components work together or not at all.
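To make the "together or not at all" point concrete, here is the shape of that loop as a skeleton. Every name is illustrative rather than a real API, and the bodies are stubs, because the point is the control flow: detection, classification, local mitigation, and upstream propagation in one machine-speed path with no human decision in it.

```python
"""Skeleton of the automated response loop described above.

Every name is illustrative, not a real API; the bodies are stubs
because the point is the control flow, with no human in the path.
"""
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    vector: str   # e.g. "ntp_amplification" or "syn_flood"
    pps: int      # packet rate that tripped the threshold
    dest_ip: str  # the targeted address

def detect() -> Optional[Detection]:
    """Per-second, kernel-level counters (see the detector sketch above)."""
    ...

def apply_local_rules(detection: Detection) -> None:
    """Pre-configured iptables/nftables rules for the classified vector."""
    ...

def push_upstream(detection: Detection) -> None:
    """Announce a BGP FlowSpec route (or divert to cloud scrubbing) so
    matching traffic is dropped at the transit edge, not on the uplink."""
    ...

def respond_once() -> None:
    detection = detect()
    if detection is None:
        return                       # nothing anomalous this second
    apply_local_rules(detection)     # protects the node within seconds
    push_upstream(detection)         # recovers the uplink shortly after
```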
What this means for your infrastructure
If your current DDoS detection relies on NetFlow or sFlow sampling with export intervals measured in tens of seconds, you are structurally unable to respond to the majority of DDoS attacks before they cause damage. This is not a criticism of your engineering team or your security practices. It is a description of what the technology is capable of. Flow-sampled detection was designed for network-wide visibility and trend analysis, not for sub-second incident response.
The question worth asking is not "do we have DDoS detection?" but "at what point in the attack timeline does our detection fire, and does automated mitigation follow immediately?" If the honest answer to that question involves minutes rather than seconds, the 70% figure from NETSCOUT describes exactly what you are experiencing: attacks that resolve before you can respond, leaving you with log analysis and post-mortems instead of prevention.
The good news is that this is a solvable problem. Per-server agent-based detection, deployed in minutes per node, changes the detection window from 30 to 60 seconds to under one second. Combined with pre-configured automated mitigation and BGP FlowSpec integration, the response time drops from 15 to 30 minutes to under 15 seconds. The Lorikeet incident documented exactly that gap, against a real 48 Gbps attack, during a live event with 240 people depending on the infrastructure holding up.
Stop relying on detection that fires after the attack ends
Flowtriq deploys in under two minutes per node. Sub-second per-server detection, automatic mitigation, BGP FlowSpec, and instant Slack alerts. Starting at $9.99/node/month.
Start Free Trial →
No credit card required · 7-day free trial