At a glance:

Scenario A, per-server kernel detection: <1s
Scenario B, flow-sampled detection window: 30–60s
Scenario B, time to first upstream rule: 15–30m
Scenario A, Lorikeet BGP FlowSpec active: 11s

Two scenarios, one attack

The same volumetric DDoS attack plays out very differently depending on when your detection fires. To make the comparison concrete, this walkthrough uses the March 27, 2026 Lorikeet Security incident as the reference attack: 48.3 Gbps peak, 1.1M PPS, multi-vector (NTP amplification plus SYN flood), 38-minute duration.

Scenario A is what actually happened: per-server kernel-level detection reading every packet at the OS layer, sub-second detection window, automated mitigation rules, and BGP FlowSpec integration. This is how Flowtriq works.

Scenario B represents a flow-sampled detection approach: NetFlow or sFlow at 1:1000 sampling with 30 to 60 second export intervals, flow collector analysis, alert generation, and then manual human response to classify and mitigate. This is how many organizations currently operate. No specific product is named; this describes the architecture, not a vendor.
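The gap between these two architectures can be expressed as simple arithmetic. Below is a minimal time-to-first-alert model in Python; every constant is an assumption drawn from the scenario descriptions above, not a measurement.

```python
# Back-of-the-envelope time-to-first-alert for both architectures.
# All constants are assumptions taken from the scenario descriptions.

A_ALERT_S = 3.0              # Scenario A: Slack alert at T+3s

B_EXPORT_INTERVAL_S = 30.0   # Scenario B: flow export interval (low end of 30-60s)
B_ANALYSIS_S = 15.0          # assumed collector analysis + threshold check
B_ALERT_QUEUE_S = 15.0       # assumed alerting/paging delay

def scenario_b_first_alert(attack_offset_s: float) -> float:
    """Earliest alert time when the attack starts attack_offset_s into an export window."""
    remaining_window = B_EXPORT_INTERVAL_S - (attack_offset_s % B_EXPORT_INTERVAL_S)
    return remaining_window + B_ANALYSIS_S + B_ALERT_QUEUE_S

print(f"Scenario A: first alert at T+{A_ALERT_S:.0f}s")
print(f"Scenario B: first alert at T+{scenario_b_first_alert(0.0):.0f}s "
      f"(attack starts as an export window opens)")
```

Even under generous assumptions, the flow-sampled pipeline cannot alert before its export interval closes. Everything that follows in this walkthrough is downstream of that structural delay.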

T+0 to T+1s: the ramp phase

Scenario A: sub-second detection

T+0s: First attack packets arrive. ftagent begins detecting PPS deviation from baseline in real time.
T+0.9s: Attack detected. Incident created. Classification begins in parallel for both vectors. Slack alert queued.
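Flowtriq does not publish ftagent's exact detection logic, but the "PPS deviation from baseline" idea can be sketched with a common technique: an exponentially weighted moving average baseline plus a deviation threshold. The class and parameter values below are illustrative, not Flowtriq's.

```python
class PpsBaselineDetector:
    """Toy per-second PPS anomaly detector: EWMA baseline + multiplier threshold.

    Illustrative only; the real ftagent logic is kernel-level and not shown here.
    """

    def __init__(self, alpha: float = 0.05, threshold_multiplier: float = 8.0):
        self.alpha = alpha              # EWMA smoothing factor
        self.threshold = threshold_multiplier  # alarm when pps > baseline * threshold
        self.baseline = None            # learned normal PPS

    def observe(self, pps: float) -> bool:
        """Feed one per-second PPS sample; return True if it looks like an attack."""
        if self.baseline is None:
            self.baseline = pps
            return False
        if pps > self.baseline * self.threshold:
            return True                 # anomalous sample: do not poison the baseline
        # Normal traffic: fold the sample into the baseline.
        self.baseline = (1 - self.alpha) * self.baseline + self.alpha * pps
        return False

detector = PpsBaselineDetector()
for pps in [12_000, 11_500, 12_400, 11_900, 1_100_000]:  # last sample: attack ramp
    if detector.observe(pps):
        print(f"anomaly: {pps:,} pps vs baseline ~{detector.baseline:,.0f} pps")
```

Because this runs against every packet counter update rather than waiting for an export window, the deviation is visible within the first second of the ramp.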

Scenario B: flow-sampled detection

T+0s: First attack packets arrive. 1:1000 sampling means approximately 1 in 1,000 packets is recorded in the flow export.
T+0s to T+30s: No alert. Flow export interval has not elapsed. Collector has no data.
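The sampling math explains the silence, and also why the first export understates the attack. A quick sanity check, with the ramp rate assumed from the incident's peak numbers:

```python
# What a 1:1000 sampler reports at its first export, 30s into the ramp.
# The rates here are assumptions based on the incident numbers in this post.
SAMPLE_RATE = 1 / 1000
EXPORT_INTERVAL_S = 30
RAMP_AVG_PPS = 500_000      # assumed average rate over the ramp window
PEAK_PPS = 1_100_000        # instantaneous rate by the time the export fires

records = RAMP_AVG_PPS * EXPORT_INTERVAL_S * SAMPLE_RATE
collector_estimate = records / SAMPLE_RATE / EXPORT_INTERVAL_S

print(f"sampled records in the export: {records:,.0f}")
print(f"collector's PPS estimate: {collector_estimate:,.0f}")
print(f"actual rate at export time: {PEAK_PPS:,} "
      f"({PEAK_PPS / collector_estimate:.1f}x the estimate)")
```

The export averages over the whole interval, so an escalating attack always looks smaller to the collector than it is on the wire at the moment the alert logic runs.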

During this window, NTP amplification traffic is ramping toward its 39 Gbps peak as reflected packets return from 2,140 open NTP servers worldwide. The attack is escalating, and Scenario B has no awareness that anything is happening.

T+3s to T+30s: first mitigation window

Scenario A

T+3s: Slack alert delivered. On-call engineer notified with classification, PPS readings, and dashboard link.
T+8s: On-node iptables rules auto-fire. App processes protected. Attack traffic dropped at kernel.
T+11s: BGP FlowSpec Rule 1 pushed to transit edge. Cloud scrubbing activated. Uplink recovering.
T+19s: BGP FlowSpec Rule 2 pushed for SYN component. SYN rate drops to <200/s within 4 seconds.
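The incident write-up does not reproduce the two FlowSpec rules, but their plausible shape follows from the vectors. The sketch below expresses them as plain data: the match components correspond to RFC 8955 types (protocol, ports, TCP flags) and the traffic-rate action, while the concrete prefix, ports, and rate values are assumptions, not the incident's real rules.

```python
# Illustrative FlowSpec rules for the two vectors. Match components follow
# RFC 8955; the prefix, ports, and rates here are invented for illustration.

RULE_1_NTP_AMPLIFICATION = {
    "match": {
        "protocol": "udp",
        "source-port": 123,                      # reflected NTP responses
        "destination-prefix": "203.0.113.0/24",  # example target range (TEST-NET-3)
    },
    "action": {"traffic-rate-bytes-per-sec": 0}, # discard at the transit edge
}

RULE_2_SYN_FLOOD = {
    "match": {
        "protocol": "tcp",
        "tcp-flags": "syn and not ack",          # bare SYNs only
        "destination-prefix": "203.0.113.0/24",
        "destination-port": 443,                 # assumed flooded service port
    },
    # Rate-limit rather than discard so legitimate handshakes still get through.
    "action": {"traffic-rate-bytes-per-sec": 1_000_000},
}
```

Note that the two rules target different layers of the attack: Rule 1 removes the bandwidth vector outright, while Rule 2 throttles the state-exhaustion vector without cutting off legitimate TCP connections.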

Scenario B

T+30s: First flow export received by collector. Analysis begins on sampled data.
T+45–60s: Alert may fire (if volume exceeds threshold in sampled data). Alert enters on-call queue.
This entire window: Attack running at full 48 Gbps. Uplink saturated. Legitimate traffic dropped. Application errors accumulating.

In Scenario A, the full mitigation stack is active 19 seconds into the attack. In Scenario B, 45 to 60 seconds have passed and the first alert may be reaching a human. The uplink has been saturated for approximately one minute in Scenario B. Application connection state tables are filling. TCP connection queues on the SYN-flooded ports are near exhaustion.

T+1min to T+5min: application impact accumulates

By one minute in, Scenario A users have experienced nothing. The mitigation stack has been running for 41 seconds. All legitimate participant traffic passes normally through the uplink because attack traffic is being discarded at the transit edge before it can compete for bandwidth.

In Scenario B, an engineer has now received the alert and is triaging it. This involves opening the monitoring dashboard, confirming the alert is real (not a false positive), identifying which nodes are affected, and beginning to classify the attack vector. The classification step is non-trivial for a multi-vector attack. An NTP amplification + SYN flood combination produces traffic signatures that overlap: high PPS, high bandwidth, multiple source IPs, mixed protocols. Without per-packet classification, the engineer is working from sampled flow data that may show only a partial picture of the attack at 1:1000 sampling.
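To see why this is slow, consider what flow-based triage actually does: bucket the sampled records by protocol, port, and flags, and look for dominant signatures. A toy version follows; the record format is invented for illustration (real NetFlow v9/IPFIX records carry many more fields).

```python
from collections import Counter

# Toy sampled flow records: (protocol, src_port, dst_port, tcp_flags).
# Format invented for illustration.
sampled = [
    ("udp", 123, 41022, None),   # NTP reflection: source port 123
    ("udp", 123, 55810, None),
    ("tcp", 51234, 443, "S"),    # bare SYN
    ("udp", 123, 39004, None),
    ("tcp", 49152, 443, "S"),
]

signatures = Counter()
for proto, sport, dport, flags in sampled:
    if proto == "udp" and sport == 123:
        signatures["ntp-amplification"] += 1
    elif proto == "tcp" and flags == "S":
        signatures["syn-flood"] += 1
    else:
        signatures["unclassified"] += 1

# At 1:1000 sampling each bucket is built from a thin slice of the traffic,
# so several export cycles may pass before these counts are trustworthy.
print(signatures.most_common())
```

The logic itself is trivial; the cost is that each bucket fills a thousand times slower than the traffic it represents, which is why confident classification takes minutes rather than seconds.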

By T+2 to T+3 minutes, the engineer may have enough information to begin deciding on a mitigation approach. They need to determine: Is this volumetric (uplink saturation) or state exhaustion (connection table), or both? What FlowSpec rules would address it without blocking legitimate traffic? Does the transit provider support FlowSpec, or do they need to request RTBH? Who is the right contact at the upstream provider?

The ISP coordination lag: Even after a human engineer has classified the attack and decided on an upstream mitigation approach, pushing a FlowSpec rule or requesting a BGP blackhole requires coordination with the transit provider. The fastest path is a BGP session pre-configured for FlowSpec, where a rule can be pushed programmatically. Without that pre-configuration, the process involves a support ticket, a NOC engineer on the provider's side, and propagation time. Under best-case conditions (pre-configured BGP session, immediate human action), the fastest realistic manual upstream rule push is approximately 5 to 10 minutes from the decision point. That means 7 to 13 minutes from first alert, or 8 to 14 minutes total from attack start.

T+5min to T+15min: SLA clock running

At five minutes, Scenario A users are still unaffected; the full mitigation stack has been active for 4 minutes and 41 seconds. The attack is ongoing at the transit edge and cloud scrubbing layer, but the Lorikeet nodes and their users see only clean traffic.

At five minutes in Scenario B, visible service degradation is well established. Users are experiencing timeouts, slow page loads, or complete inaccessibility depending on how thoroughly the uplink is saturated. If the target has an SLA with customers, the clock is running. For Lorikeet's actual SLA (15-minute breach clause), the SLA violation would already be approximately a third of the way to the trigger point. Support tickets are being filed. The incident is now externally visible.

For many attacks, this is the zone where the 70% figure becomes directly relevant. NETSCOUT's data shows 70% of attacks last fewer than 15 minutes. An attack that started at T+0 and ends at T+10 means a Scenario B operator is still in the process of coordinating upstream mitigation when the attack ends on its own. The attack caused its full damage, and the response cycle will complete into an already-resolved incident.

Complete comparison table

Timeline marker | Scenario A: sub-second detection | Scenario B: flow-sampled detection
T+0.9s | Attack detected, incident created, alert queued | No data. Sampling window not elapsed.
T+3s | Slack alert delivered, engineer notified | No alert. Attack at ~15 Gbps and climbing.
T+8s | On-node iptables auto-mitigation active | No alert. Attack near peak volume.
T+11s | BGP FlowSpec + cloud scrubbing active. Uplink recovering. | No alert. Uplink fully saturated.
T+30–60s | All mitigations active. Users unaffected throughout. | First flow export. Alert may fire. Triage begins.
T+2–3min | Attack running at transit edge and scrubbing. Zero node impact. | Attack classified. Mitigation strategy being selected.
T+5min | Nodes fully clean. Event running normally. | ISP being contacted. No upstream rule yet. Visible service degradation ongoing.
T+10–15min | Attack ongoing at transit, all clean downstream. | Upstream rule possibly active. 10+ min of damage already done.
T+15min (SLA trigger) | SLA not triggered. Session running continuously. | SLA may have already been breached. Mitigation just active or nearly active.

The compounding cost of link saturation

A saturated uplink is not just an inconvenience. The consequences compound over time in ways that are not immediately obvious from monitoring dashboards.

ISP coordination lag: When you request a BGP blackhole or FlowSpec rule from a transit provider, you are entering their change management process. Even with an emergency NOC hotline, a human on their side must verify the request, apply the rule, and confirm propagation. During this coordination window, your link remains saturated. The coordination lag is additive to your detection delay.

TCP state table exhaustion persists after link recovery: A SYN flood fills the kernel's TCP connection state table with half-open connections. Each entry persists until the kernel exhausts its SYN-ACK retransmissions, which takes roughly a minute under the Linux defaults (five retries with exponential backoff, configurable via net.ipv4.tcp_synack_retries). Even after the SYN flood traffic stops arriving, the state table can remain exhausted for up to a minute, continuing to reject legitimate new connections. Recovery from SYN flood damage is not instantaneous even after mitigation is in place.
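That persistence window falls directly out of the kernel's retransmission schedule, which is easy to compute (assuming the standard 1-second initial retransmission timeout with exponential doubling):

```python
# Lifetime of a half-open connection under Linux's SYN-ACK retransmission
# schedule: initial RTO of 1s, doubled on each retry. Retry count comes from
# net.ipv4.tcp_synack_retries (default 5).
def half_open_lifetime(synack_retries: int = 5, initial_rto_s: float = 1.0) -> float:
    total, rto = 0.0, initial_rto_s
    for _ in range(synack_retries + 1):  # initial send plus each retry waits one RTO
        total += rto
        rto *= 2
    return total

print(f"default (5 retries): ~{half_open_lifetime():.0f}s")   # ~63s
print(f"tuned   (2 retries): ~{half_open_lifetime(2):.0f}s")  # ~7s
```

This is also why lowering tcp_synack_retries and enabling SYN cookies (net.ipv4.tcp_syncookies) are common hardening steps: they shrink or sidestep the window during which stale half-open entries keep rejecting legitimate connections.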

Application process recovery time: Web server worker processes, database connection pools, and application queues that filled or failed during link saturation do not instantly recover when traffic returns to normal. Process pools may need to cycle. Connection pools may need to drain and reconnect. Monitoring systems may be in alarm states that require manual acknowledgment. The "all clear" rarely happens the moment the attack traffic stops.

SLA exposure accumulates: SLA breach provisions are typically measured against continuous outage duration, not intermittent degradation. A 38-minute attack with manual response might trigger a breach clause even if mitigation is eventually successful, simply because the degraded service period exceeded the clause threshold before mitigation completed.

The Lorikeet baseline: Scenario A in practice

The Lorikeet Security incident documents Scenario A with real numbers. At T+0.9s, both vectors were detected. At T+8s, on-node iptables rules were active. At T+11s, BGP FlowSpec and cloud scrubbing were operational. The 240 participants never experienced a service interruption. The 38-minute attack ran entirely against upstream mitigation. The SLA was not triggered.

The key point is not that Flowtriq is fast. The key point is that the outcome was determined by the detection and mitigation timeline, not by the attack's duration or volume. A 48 Gbps multi-vector attack that would cause serious damage under Scenario B caused zero customer impact under Scenario A. The infrastructure was the same. The attack was the same. The difference was entirely in when detection fired and what happened automatically as a result.

Move from Scenario B to Scenario A in under two minutes

Flowtriq installs in minutes per node. Sub-second kernel-level detection, automatic mitigation, BGP FlowSpec, and Slack alerts. Starting at $9.99/node/month.

Start Free Trial →

No credit card required  ·  7-day free trial

