The Problem with Static Thresholds
The simplest DDoS detection approach is a static threshold: if packets per second exceed 50,000, fire an alert. This is easy to implement and easy to understand. It is also wrong for most real-world deployments. The problem is that traffic is not static. A game server might idle at 2,000 PPS overnight and spike to 40,000 PPS when a new patch drops. An ecommerce site runs at 5,000 PPS on Tuesday morning and 80,000 PPS during a flash sale. A static threshold set for normal conditions triggers false positives during every legitimate traffic spike. A threshold set high enough to avoid false positives during spikes will miss small but genuine attacks during quiet periods.
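To make the failure mode concrete, here is essentially the entire "detection system" a static threshold amounts to. The numbers mirror the examples above; nothing here comes from a real product:

```python
STATIC_THRESHOLD_PPS = 50_000

def static_alert(current_pps: float) -> bool:
    """Fire whenever PPS crosses one fixed cutoff, regardless of context."""
    return current_pps > STATIC_THRESHOLD_PPS

# A 40,000 PPS attack during quiet overnight hours (baseline 2,000 PPS): missed.
assert static_alert(40_000) is False

# An 80,000 PPS flash sale on a site that normally runs at 5,000 PPS: false alarm.
assert static_alert(80_000) is True
```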
In practice, teams that use static thresholds end up doing one of two things: they set the threshold high and miss attacks, or they set it low and drown in false alerts until they start ignoring them entirely. Both outcomes are worse than having no detection at all, because they create a false sense of security. The team believes they have DDoS monitoring in place, but the monitoring is either blind or crying wolf so often that real alerts get lost in the noise.
How Traffic Patterns Change
Network traffic follows predictable cycles at multiple timescales. The most obvious is the daily cycle: traffic rises in the morning as users come online, peaks during business hours or evening entertainment hours, and drops overnight. This pattern can produce a 5x to 10x difference between the daily minimum and maximum PPS on the same server.
Weekly patterns add another layer. B2B SaaS platforms see higher traffic on weekdays. Gaming servers peak on Friday evenings and weekends. Streaming services have distinct weekday vs weekend profiles. Beyond weekly cycles, there are seasonal patterns (holiday shopping, back-to-school, tax season), event-driven spikes (product launches, marketing campaigns, viral social media posts), and long-term growth trends as your user base expands.
Any detection system that does not account for these patterns will either generate false positives during predictable high-traffic periods or fail to detect attacks during predictable low-traffic periods. The threshold that works at 3 PM on a Thursday is wrong for 3 AM on a Sunday. A truly effective detection system must internalize these rhythms and adjust its sensitivity automatically.
How Adaptive Baselines Learn Normal Behavior
An adaptive baseline builds a statistical model of your traffic by observing it over time. The simplest version tracks a rolling mean and standard deviation for each metric (PPS, BPS, connection rate) over a sliding window. A more sophisticated version segments the data by hour of day and day of week, so it compares Monday 2 PM traffic against previous Monday 2 PM traffic rather than against a global average.
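A minimal sketch of the segmented version, assuming one aggregated sample is recorded per occurrence of each (weekday, hour) slot. The class name and window size are our own, purely illustrative:

```python
from collections import defaultdict, deque
from datetime import datetime
from statistics import mean, stdev

class SegmentedBaseline:
    """Rolling per-(weekday, hour) statistics, so Monday 2 PM traffic is
    compared against previous Mondays at 2 PM, not a global average."""

    def __init__(self, window: int = 8):
        # Keep the last `window` observations per slot: with one aggregated
        # sample per hour, that is roughly eight weeks of Mondays-at-2-PM.
        self.samples = defaultdict(lambda: deque(maxlen=window))

    def record(self, ts: datetime, pps: float) -> None:
        self.samples[(ts.weekday(), ts.hour)].append(pps)

    def expected(self, ts: datetime) -> tuple[float, float]:
        """Mean and standard deviation for the current time slot."""
        slot = self.samples[(ts.weekday(), ts.hour)]
        if len(slot) < 2:
            return (slot[0] if slot else 0.0, 0.0)
        return (mean(slot), stdev(slot))
```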
The learning process works continuously. Every second, the system records the current PPS value and updates its rolling statistics. Over the first few hours, the baseline is rough, representing only the data it has seen. After 24 hours, it captures a full daily cycle. After a week, it has weekday vs weekend patterns. After a month, it has seen enough variance to build tight confidence intervals. Good baseline systems use exponential moving averages or similar decay functions so that recent data has more influence than old data, allowing the baseline to adapt to gradual traffic growth without manual recalibration.
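The decay idea is easy to express with an exponentially weighted mean and variance. This is a generic textbook recurrence, not any vendor's implementation; `alpha` controls how quickly old traffic is forgotten:

```python
class EwmaBaseline:
    """Exponentially weighted mean/variance: recent samples outweigh old
    ones, so the baseline tracks gradual traffic growth on its own."""

    def __init__(self, alpha: float = 0.001):
        self.alpha = alpha   # smaller alpha = longer memory
        self.mean = 0.0
        self.var = 0.0
        self.seen = False

    def update(self, value: float) -> None:
        if not self.seen:
            self.mean, self.seen = value, True
            return
        # Standard incremental EWMA recurrences for mean and variance.
        delta = value - self.mean
        self.mean += self.alpha * delta
        self.var = (1 - self.alpha) * (self.var + self.alpha * delta * delta)
```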
The key insight is that a baseline is not a single number. It is a distribution. At any given moment, the system knows not just the expected PPS but the expected range of PPS, which lets it calculate exactly how unusual the current traffic is in statistical terms.
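Once you have that distribution, "how unusual is the current traffic" reduces to a z-score. For example:

```python
import math

def anomaly_sigma(current: float, mu: float, sigma: float) -> float:
    """How many standard deviations the current value sits above expectation."""
    if sigma == 0.0:
        return math.inf if current > mu else 0.0
    return (current - mu) / sigma

# 60,000 PPS against an expected 40,000 +/- 5,000 is a 4-sigma event:
print(anomaly_sigma(60_000, 40_000, 5_000))  # 4.0
```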
Techniques for Reducing False Positives
Adaptive baselines are the foundation of false-positive reduction, but several additional techniques work alongside them. Per-protocol baselining tracks TCP, UDP, and ICMP independently. A legitimate traffic spike is almost always protocol-specific (more HTTP requests means more TCP). An attack that only affects UDP while TCP remains normal is a much stronger signal than a spike in total PPS.
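A sketch of the idea, with made-up numbers and per-protocol (mean, stdev) pairs standing in for a real baseline store:

```python
def protocol_anomalies(
    counters: dict[str, float],                 # current PPS per protocol
    baselines: dict[str, tuple[float, float]],  # per-protocol (mean, stdev)
    threshold: float = 3.0,
) -> list[str]:
    """Return protocols running anomalously hot against their own history."""
    flagged = []
    for proto, pps in counters.items():
        mu, sigma = baselines.get(proto, (0.0, 0.0))
        if sigma > 0 and (pps - mu) / sigma > threshold:
            flagged.append(proto)
    return flagged

# A UDP flood is flagged precisely even though TCP sits on its baseline:
print(protocol_anomalies(
    {"tcp": 21_000, "udp": 55_000, "icmp": 300},
    {"tcp": (20_000, 2_000), "udp": (5_000, 1_000), "icmp": (250, 50)},
))  # ['udp']
```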
Multi-metric correlation reduces false positives by requiring multiple signals to agree before raising an alert. A PPS spike alone might be a legitimate traffic surge. A PPS spike combined with a sudden increase in source IP entropy (many new IPs appearing simultaneously) and a shift in packet size distribution (identical-size packets from all sources) is almost certainly an attack. Requiring two or three correlated anomalies before alerting dramatically reduces false positives while maintaining high detection rates.
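Source IP entropy is one of the cheaper correlating signals to compute. A sketch follows; the two-of-three rule is illustrative, not a standard:

```python
import math
from collections import Counter

def source_ip_entropy(src_ips: list[str]) -> float:
    """Shannon entropy of the source-IP distribution; a sudden jump means
    many previously unseen sources appearing at once."""
    if not src_ips:
        return 0.0
    counts = Counter(src_ips)
    total = len(src_ips)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def should_alert(pps_anomaly: bool, entropy_jump: bool,
                 pkt_size_shift: bool, minimum: int = 2) -> bool:
    """Escalate only when at least `minimum` independent signals agree."""
    return sum((pps_anomaly, entropy_jump, pkt_size_shift)) >= minimum

print(should_alert(True, False, False))  # False: a lone PPS spike stays quiet
print(should_alert(True, True, True))    # True: correlated anomalies escalate
```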
Maintenance windows and known-event annotations allow the system to temporarily widen its thresholds during planned high-traffic events. If you know a product launch is happening at 10 AM, you can mark that window in advance so the baseline system does not alert on the expected spike. Similarly, warm-up periods after deployment or configuration changes prevent alerts from firing while traffic settles into a new pattern.
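Mechanically, an annotation is just a time range the alerting path checks before firing. A minimal version, with an invented window format and dates:

```python
from datetime import datetime

# (start, end, reason) annotations entered ahead of known events.
MAINTENANCE_WINDOWS = [
    (datetime(2025, 6, 3, 10, 0), datetime(2025, 6, 3, 14, 0), "product launch"),
]

def alerts_suppressed(now: datetime) -> bool:
    """True while a planned event window is active, so expected spikes are
    muted (or, in a richer version, thresholds are temporarily widened)."""
    return any(start <= now < end for start, end, _ in MAINTENANCE_WINDOWS)
```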
Balancing Sensitivity and Accuracy
Every detection system navigates a tradeoff between sensitivity (catching real attacks) and specificity (avoiding false alerts). A system tuned for maximum sensitivity will catch every attack but will also fire on legitimate traffic spikes, infrastructure changes, and normal variance. A system tuned for maximum specificity will rarely raise a false alarm but will miss subtle attacks, slow ramps, and low-volume protocol exploits.
The right balance depends on your operational context. A financial services platform with strict uptime SLAs may prefer higher sensitivity and accept some false positives, because missing a real attack has severe consequences. A game hosting provider with fluctuating traffic may prefer higher specificity and accept slightly slower detection, because alert fatigue will cause the team to ignore real incidents. The best approach is tiered alerting: low-confidence detections go to a dashboard or low-priority channel, medium-confidence detections go to the team's Slack, and high-confidence detections (multiple correlated signals, strong protocol signatures) page the on-call engineer.
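In code, tiered alerting is little more than a routing function over the detection confidence. The tier names and cutoffs below are placeholders to tune, not recommendations:

```python
def route_alert(sigma: float, correlated_signals: int) -> str:
    """Map detection confidence to a notification tier."""
    if sigma >= 3.0 and correlated_signals >= 2:
        return "page-oncall"   # high confidence: wake someone up
    if sigma >= 3.0 or correlated_signals >= 2:
        return "team-slack"    # medium confidence: visible but not urgent
    return "dashboard"         # low confidence: recorded for review
```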
How Flowtriq Implements Adaptive Baselines
Flowtriq's detection engine maintains per-node, per-protocol rolling baselines that update every second. Each metric is tracked with time-of-day and day-of-week segmentation, so the system naturally accounts for daily and weekly traffic cycles. Detection triggers when the current value exceeds the expected value by a configurable number of standard deviations, typically 3 sigma for high-confidence alerts and 2 sigma for early warnings.
When an anomaly is detected, Flowtriq correlates multiple metrics before escalating. A PPS spike in isolation produces a low-confidence alert. A PPS spike combined with abnormal source diversity and protocol-specific signatures (SYN-to-ACK ratio imbalance, identical UDP packet sizes) produces a high-confidence alert with automatic attack classification. Maintenance windows can be configured to suppress alerts during known events, and the baseline system automatically excludes attack traffic from its learning data so that a past attack does not inflate future baselines.
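That last point, keeping attack traffic out of the learning data, matters more than it looks: a baseline that ingests a flood will start to expect floods. The sketch below is not Flowtriq's code, just the shape of the guard, reusing the EwmaBaseline class from earlier:

```python
import math

def learn_if_normal(baseline: "EwmaBaseline", value: float,
                    sigma_cutoff: float = 3.0) -> None:
    """Feed a sample into the baseline only when it looks like normal
    traffic, so a past attack never inflates future expectations."""
    sigma = math.sqrt(baseline.var)
    if sigma > 0 and abs(value - baseline.mean) / sigma > sigma_cutoff:
        return  # anomalous sample: detect against it, but don't learn from it
    baseline.update(value)
```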
Stop chasing false positives
Flowtriq's adaptive baselines learn your traffic patterns automatically and alert only when it matters. $9.99/node/month with a free 7-day trial.
Start your free trial →