What DDoS Detection Actually Means
DDoS detection is the process of identifying when incoming traffic transitions from normal operations to an active attack. Unlike prevention (which blocks traffic before it arrives) or mitigation (which removes malicious traffic during an attack), detection is the recognition layer. It answers a single question: is this traffic pattern an attack, or is it legitimate?
Effective detection requires three capabilities working together. First, you need visibility into the traffic reaching your infrastructure, whether through packet-level monitoring, kernel counter analysis, or flow data. Second, you need a model of what "normal" looks like for your specific workload, because a game server and a SaaS API have vastly different traffic profiles. Third, you need a decision engine that can compare current traffic against that model and raise an alert when the deviation is statistically significant.
Detection speed matters enormously. A system that identifies an attack in one second gives your team (or your automated mitigation) a 59-second head start over a system that takes a minute. In practice, per-second detection is the minimum bar for modern DDoS defense, because a volumetric attack can ramp from nothing to a volume that saturates a 1 Gbps link in under five seconds.
How Traffic Baselines Work
A traffic baseline is a statistical model of your normal network behavior. It captures the typical packets per second (PPS), bandwidth, connection rates, and protocol distribution for a given time window. Without a baseline, any detection system is guessing: it has no way to distinguish a legitimate traffic spike from an attack.
Simple baselines use fixed windows, for example, the average PPS over the last 24 hours. More sophisticated baselines account for time-of-day patterns (your traffic at 3 AM is different from traffic at 3 PM), day-of-week cycles (weekday vs weekend), and seasonal trends (Black Friday, game launch days, marketing campaigns). The best baselines learn these patterns automatically by tracking rolling statistics over hours, days, and weeks.
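To make this concrete, here is a minimal sketch of a time-of-day-aware rolling baseline in Python. The hour-of-week bucketing, the EWMA smoothing factor, and the class name are illustrative choices, not a prescription for any particular tool:

```python
import math

class RollingBaseline:
    """Per-metric baseline: one EWMA mean/variance per hour-of-week bucket.

    168 buckets (24 hours x 7 days) is an illustrative granularity; finer
    buckets adapt faster to cycles but need more warm-up data.
    """

    def __init__(self, alpha: float = 0.05):
        self.alpha = alpha              # EWMA smoothing factor
        self.mean = [0.0] * 168         # one slot per hour of the week
        self.var = [0.0] * 168
        self.seen = [0] * 168           # samples observed per slot

    @staticmethod
    def slot(ts) -> int:
        return ts.weekday() * 24 + ts.hour

    def update(self, ts, value: float) -> None:
        s = self.slot(ts)
        if self.seen[s] == 0:
            self.mean[s] = value        # seed the slot on its first sample
        delta = value - self.mean[s]
        self.mean[s] += self.alpha * delta
        # EWMA of squared deviation approximates a rolling variance
        self.var[s] = (1 - self.alpha) * (self.var[s] + self.alpha * delta * delta)
        self.seen[s] += 1

    def zscore(self, ts, value: float) -> float:
        s = self.slot(ts)
        std = math.sqrt(self.var[s]) or 1.0   # avoid divide-by-zero early on
        return (value - self.mean[s]) / std
```

Each metric (inbound PPS, UDP PPS, new connections per second) would get its own instance, so a deviation is always judged against the right historical slot.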
The key metrics for a useful baseline include inbound PPS (packets per second), outbound PPS, bandwidth utilization (bits per second), new connections per second, SYN-to-ACK ratios, and protocol-level breakdowns (TCP vs UDP vs ICMP). Tracking these at the per-protocol level is critical because many attacks target a single protocol. A UDP flood may push your total PPS above baseline while TCP traffic remains perfectly normal. Per-protocol baselines catch this immediately.
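The per-protocol numbers themselves can come straight from the kernel. Below is a rough sketch that derives per-second rates from the cumulative counters Linux exposes in /proc/net/snmp; the three metrics selected at the end are just examples:

```python
import time

def read_proto_counters(path: str = "/proc/net/snmp") -> dict:
    """Parse cumulative per-protocol counters from /proc/net/snmp.

    Each protocol appears as a header line ("Tcp: InSegs OutSegs ...")
    followed by a matching value line; zip them into {"Tcp:InSegs": 123, ...}.
    """
    counters = {}
    with open(path) as f:
        lines = f.readlines()
    for header, values in zip(lines[::2], lines[1::2]):
        proto, names = header.split(":", 1)
        _, nums = values.split(":", 1)
        for name, num in zip(names.split(), nums.split()):
            counters[f"{proto}:{name}"] = int(num)
    return counters

def per_second_rates(interval: float = 1.0) -> dict:
    """Sample twice and return per-second deltas for a few key metrics."""
    before = read_proto_counters()
    time.sleep(interval)
    after = read_proto_counters()
    keys = ["Tcp:InSegs", "Udp:InDatagrams", "Icmp:InMsgs"]
    return {k: (after[k] - before[k]) / interval for k in keys}
```

On an idle box, per_second_rates() typically shows a few dozen packets per second per protocol; under a UDP flood, Udp:InDatagrams jumps by orders of magnitude while Tcp:InSegs stays flat, which is exactly the per-protocol separation described above.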
Anomaly Detection Methods
Anomaly detection compares current traffic against the established baseline and flags deviations that exceed a threshold. The simplest approach uses static thresholds: if PPS exceeds 50,000, trigger an alert. This works in controlled environments but generates constant false positives when traffic patterns change, which they always do.
Statistical anomaly detection improves on this by calculating standard deviations from the rolling baseline. If traffic exceeds 3 standard deviations above the expected value for this time of day, that is a statistically significant anomaly. This approach adapts to traffic growth and daily cycles naturally. Machine learning models take it further by identifying multi-dimensional anomalies, for example, detecting when PPS is normal but the source IP entropy has spiked dramatically, indicating a botnet with distributed sources.
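Combining the two sketches above, the decision step reduces to a z-score comparison. One non-obvious design choice, shown below, is whether to keep learning during an anomaly; skipping updates prevents a sustained attack from dragging the baseline upward:

```python
from datetime import datetime

SIGMA_THRESHOLD = 3.0   # the conventional starting point; tune per workload

def check_anomaly(baseline: RollingBaseline, value: float, ts=None) -> bool:
    """Return True when value is a 3-sigma anomaly for this time slot."""
    ts = ts or datetime.now()
    anomalous = baseline.zscore(ts, value) > SIGMA_THRESHOLD
    if not anomalous:
        # Only learn from normal samples, so an ongoing attack cannot
        # teach the baseline that attack-level traffic is normal.
        baseline.update(ts, value)
    return anomalous
```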
Protocol-specific signatures add another detection layer. SYN floods produce a measurable gap between incoming SYN packets and the ACKs that complete the handshake. UDP amplification attacks show a sudden surge of large UDP packets arriving from well-known amplification ports (DNS on 53, NTP on 123, memcached on 11211). These signatures can trigger detection even when total traffic volume stays within baseline, catching low-and-slow attacks that pure volume-based detection misses.
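Signature checks of this kind are easy to express as pure functions over observed rates. In the sketch below, the caller is assumed to supply per-second SYN/ACK rates and a per-source-port UDP breakdown (from packet sampling or kernel counters), and every threshold is illustrative:

```python
AMPLIFICATION_PORTS = {53, 123, 11211}   # DNS, NTP, memcached

def syn_flood_signature(syn_rate: float, ack_rate: float,
                        min_rate: float = 1000.0, ratio: float = 3.0) -> bool:
    """Flag when inbound SYNs far outpace the ACKs that complete handshakes.

    The min_rate floor keeps quiet servers from tripping on tiny samples.
    """
    return syn_rate > min_rate and syn_rate > ratio * max(ack_rate, 1.0)

def amplification_signature(udp_rates_by_src_port: dict,
                            min_rate: float = 500.0) -> set:
    """Return amplification source ports whose inbound rate looks abnormal."""
    return {port for port, rate in udp_rates_by_src_port.items()
            if port in AMPLIFICATION_PORTS and rate > min_rate}
```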
The Role of Real-Time Alerting
Detection without alerting is monitoring. Alerting turns detection into action. A real-time alerting system must deliver notifications within seconds of detection, include enough context for the responder to act (attack type, target, current PPS, affected services), and route alerts to the right channel based on severity.
Modern alerting architectures support multiple channels: Slack and Discord for team visibility, PagerDuty and OpsGenie for on-call escalation, email for documentation, SMS for critical alerts, and webhooks for triggering automated responses. The alert should include the classified attack type, the affected IP or service, current vs baseline traffic rates, and a direct link to the incident details. Severity-based routing prevents alert fatigue: a 10% deviation above baseline goes to Slack, while a 500% spike pages the on-call engineer.
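A minimal routing sketch using only the Python standard library: the webhook URLs are placeholders (real Slack and PagerDuty integrations each have their own payload formats), and the 500% cutoff mirrors the example above:

```python
import json
import urllib.request

# Hypothetical endpoints; substitute your real Slack/PagerDuty integrations.
WEBHOOKS = {
    "info":     "https://example.com/hooks/slack-channel",
    "critical": "https://example.com/hooks/pagerduty-escalation",
}

def severity(deviation_pct: float) -> str:
    """Map baseline deviation to a severity tier (cutoff is illustrative)."""
    return "critical" if deviation_pct >= 500 else "info"

def send_alert(attack_type: str, target: str, current_pps: float,
               baseline_pps: float, incident_url: str) -> None:
    """Build the context a responder needs and POST it to the right channel."""
    deviation = 100.0 * (current_pps - baseline_pps) / max(baseline_pps, 1.0)
    payload = {
        "attack_type": attack_type,       # e.g. "SYN flood"
        "target": target,
        "current_pps": current_pps,
        "baseline_pps": baseline_pps,
        "deviation_pct": round(deviation, 1),
        "incident_url": incident_url,
    }
    req = urllib.request.Request(
        WEBHOOKS[severity(deviation)],
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)   # fire-and-forget; real systems retry
```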
Common Attack Vectors
DDoS attacks fall into three primary categories, each requiring different detection approaches. Volumetric attacks (UDP floods, DNS amplification, NTP reflection) aim to saturate bandwidth. They are the easiest to detect because they produce obvious spikes in PPS and BPS metrics, but they can overwhelm your link before detection even completes if your monitoring is too slow.
Protocol attacks (SYN floods, ACK floods, fragmented packet attacks) exploit weaknesses in the TCP/IP stack. They may not saturate your bandwidth but will exhaust server resources like the TCP SYN queue, the conntrack table, or CPU cycles spent processing malformed packets. Detecting these requires monitoring kernel-level counters such as TcpExtSyncookiesSent, nf_conntrack_count, and connection state distributions.
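Both resources are directly observable from procfs on Linux. The sketch below reads conntrack table pressure and the SyncookiesSent counter; the paths are standard but only present when the relevant modules are loaded, and the parsing mirrors the /proc/net/snmp helper shown earlier:

```python
def read_int(path: str) -> int:
    with open(path) as f:
        return int(f.read())

def conntrack_pressure() -> float:
    """Fraction of the conntrack table in use; near 1.0 means the kernel
    will start dropping new connections."""
    count = read_int("/proc/sys/net/netfilter/nf_conntrack_count")
    limit = read_int("/proc/sys/net/netfilter/nf_conntrack_max")
    return count / limit

def syncookies_sent(prev: int) -> tuple:
    """Return (current, delta) for SyncookiesSent from /proc/net/netstat.

    A sudden jump means the SYN backlog overflowed and the kernel fell
    back to syncookies: a strong SYN-flood indicator.
    """
    with open("/proc/net/netstat") as f:
        lines = f.readlines()
    for header, values in zip(lines[::2], lines[1::2]):
        if header.startswith("TcpExt:"):
            names = header.split()[1:]
            nums = [int(n) for n in values.split()[1:]]
            current = dict(zip(names, nums))["SyncookiesSent"]
            return current, current - prev
    raise RuntimeError("TcpExt line not found")
```

Alerting when conntrack_pressure() crosses, say, 0.9, or when SyncookiesSent moves between one-second samples, catches protocol attacks that volume metrics alone would miss.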
Application-layer attacks (HTTP floods, slowloris, DNS query floods) target services at Layer 7. They use legitimate-looking requests to overwhelm application logic, database connections, or API rate limits. These are the hardest to detect because each individual request appears normal. Detection relies on request-rate anomalies, response time degradation, and behavioral analysis of request patterns.
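Per-source request-rate tracking is the usual first line here. Below is a bare-bones sliding-window sketch; the window size and per-IP limit are illustrative, and real deployments layer on global rates, response-time tracking, and behavioral scoring:

```python
import time
from collections import defaultdict, deque

WINDOW = 10.0       # seconds of history kept per source IP
PER_IP_LIMIT = 50   # requests allowed per window; illustrative only

_requests_by_ip = defaultdict(deque)

def observe_request(ip, now=None):
    """Record one request; return True if this IP exceeds the window limit."""
    now = time.monotonic() if now is None else now
    q = _requests_by_ip[ip]
    q.append(now)
    while q and now - q[0] > WINDOW:   # evict timestamps outside the window
        q.popleft()
    return len(q) > PER_IP_LIMIT
```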
How Flowtriq Handles Detection
Flowtriq takes a per-node approach to DDoS detection. A lightweight agent runs on each server and reads kernel-level counters from /proc/net/snmp, /proc/net/netstat, and the conntrack subsystem every second. This gives it packet-level accuracy without the overhead of deep packet inspection or the latency of flow-based sampling.
The agent maintains rolling baselines per protocol and per metric, adapting automatically to each node's traffic profile. When current traffic deviates beyond a statistically significant threshold, Flowtriq classifies the attack vector (SYN flood, UDP amplification, ICMP flood, mixed), captures a PCAP sample for forensic evidence, and dispatches alerts to your configured channels. The entire pipeline from anomaly detection to alert delivery typically completes in under two seconds. Because the detection happens at the kernel counter level, it works identically on bare metal, virtual machines, cloud instances, and containers without requiring changes to your network architecture.
Start detecting DDoS attacks in under 2 seconds
Flowtriq monitors kernel-level counters every second, classifies attack vectors automatically, and sends alerts to Slack, PagerDuty, or any webhook. $9.99/node/month with a free 7-day trial.
Start your free trial →