Three Approaches to Traffic Visibility
Every DDoS detection system depends on one fundamental capability: seeing network traffic. The differences between detection tools come down to how they see that traffic, how much of it they actually observe, and how quickly they can act on what they see. There are three dominant approaches in production today: NetFlow/IPFIX, sFlow, and host-based packet inspection.
Each approach makes a different set of trade-offs. NetFlow aggregates flows at the router. sFlow samples individual packets at the switch. Host-based inspection reads every packet that reaches a server. The right choice depends on your network architecture, your detection speed requirements, and what you plan to do with the data once you have it.
This post breaks down each approach with concrete numbers so you can evaluate which one fits your environment, or whether you need a combination of all three.
NetFlow and IPFIX: Flow Records from the Router
NetFlow is Cisco's protocol for exporting summarized traffic data from routers and switches. Rather than forwarding raw packets, the router groups packets into flows (defined by a 5-tuple: source IP, destination IP, source port, destination port, and protocol) and exports a record for each flow containing byte counts, packet counts, timestamps, and TCP flags.
The protocol has gone through several versions. NetFlow v5 is the most widely deployed and uses a fixed record format. NetFlow v9 introduced templates, allowing variable record structures. IPFIX (sometimes called NetFlow v10) is the IETF-standardized version of v9 and is the current standard for new deployments. Juniper's equivalent is called J-Flow, and Huawei exports NetStream, but they all follow the same general pattern.
In most production deployments, NetFlow operates with packet sampling. The router does not inspect every packet. Instead, it samples at a configurable rate, typically between 1:1000 and 1:10000. At 1:4096 sampling on a 100 Gbps link, the router examines roughly one out of every 4096 packets and extrapolates the flow statistics. Export intervals are usually set to 60 seconds, meaning the collector receives a batch of flow records once per minute.
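The extrapolation step is simple arithmetic: sampled counters are scaled up by the sampling ratio. A minimal sketch of that math (the ratio and function names here are illustrative, not any exporter's actual data model):

```python
# Sketch: extrapolating flow statistics from sampled NetFlow counters.
# SAMPLING_RATIO matches the 1:4096 example above; real routers make the
# ratio configurable per interface.

SAMPLING_RATIO = 4096

def extrapolate(sampled_packets: int, sampled_bytes: int) -> tuple[int, int]:
    """Scale sampled counters up to an estimate of the true totals."""
    return sampled_packets * SAMPLING_RATIO, sampled_bytes * SAMPLING_RATIO

# A single sampled 1,500-byte packet stands in for roughly 4,096 packets
# and ~6 MB of traffic on the wire.
est_pkts, est_bytes = extrapolate(1, 1500)
```

The scaling is unbiased on average, but as the next sections show, the variance at low packet rates is what breaks detection.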
Where NetFlow works well
- Network-wide visibility: Because flow data is exported from routers, you see all traffic traversing the network, not just what reaches a specific endpoint. This is invaluable for detecting attacks targeting infrastructure you do not control directly.
- No endpoint agents: NetFlow requires zero software on servers. You configure the router to export flows, point them at a collector, and you are done. For large networks with thousands of servers, this is a significant operational advantage.
- Mature ecosystem: Tools like Kentik, Arbor Sightline (now Netscout), and FastNetMon have been ingesting NetFlow data for over a decade. The collector and analysis ecosystem is well understood and battle-tested.
- Capacity planning: Flow data is excellent for understanding traffic patterns, top talkers, and bandwidth utilization over time. Many teams use NetFlow primarily for capacity planning and treat DDoS detection as a secondary benefit.
Where NetFlow falls short
- Sampling blindness: At 1:4096 sampling, a 10,000 PPS attack produces roughly 2.4 sampled packets per second. That is barely distinguishable from noise. Short-duration attacks (under 60 seconds) and low-rate attacks (under 50,000 PPS) frequently go undetected because the sampled data does not contain enough signal.
- Export delay: With a 60-second export interval, detection cannot happen faster than one minute after the attack begins. In practice, most NetFlow-based detection systems require 2 to 3 export intervals to confirm an anomaly, putting detection at 2 to 3 minutes behind reality.
- No payload visibility: Flow records contain header metadata only. You cannot inspect packet payloads, identify specific attack signatures, or distinguish between a UDP flood and a DNS amplification attack based on flow data alone.
- Router CPU cost: Enabling NetFlow on high-throughput interfaces adds CPU overhead to the router. On older hardware or at very high traffic rates, this can become a bottleneck that forces higher sampling rates, further degrading detection accuracy.
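The sampling-blindness problem can be quantified. Treating sampling as a Poisson process, you can compute how many samples a given attack is expected to produce, and how likely the collector is to see any at all. A rough sketch (the attack rates and durations below are illustrative):

```python
import math

def expected_samples(pps: float, duration_s: float, ratio: int) -> float:
    """Expected number of sampled packets from an attack under 1:ratio sampling."""
    return pps * duration_s / ratio

def prob_at_least(k: int, lam: float) -> float:
    """P(X >= k) for X ~ Poisson(lam): chance the collector sees k or more samples."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))

# A 10,000 PPS attack lasting 30 seconds at 1:4096 sampling:
big = expected_samples(10_000, 30, 4096)     # ~73 expected samples: detectable

# A 1,000 PPS attack lasting 10 seconds at the same ratio:
small = expected_samples(1_000, 10, 4096)    # ~2.4 expected samples: noise
```

At ~2.4 expected samples, there is still a ~9% chance the collector sees nothing at all, and a handful of samples is indistinguishable from background jitter.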
sFlow: Sampled Packets from the Switch
sFlow takes a fundamentally different approach from NetFlow. Instead of aggregating packets into flow records on the device, sFlow randomly samples individual packets and exports the first 128 bytes of each sampled packet (the header) to an external collector. The collector, not the switch, is responsible for building flow tables and computing statistics.
Developed by InMon Corporation, sFlow is an open standard (version 4 was documented in RFC 3176; the current version 5 specification is maintained by sFlow.org) supported by a wide range of switch vendors including Arista, Dell, HP/Aruba, Juniper, and most white-box switches running Cumulus Linux or SONiC. Sampling rates typically range from 1:1000 to 1:4096, configured per interface. In addition to packet samples, sFlow exports interface counter data (bytes in, bytes out, errors, discards) at a configurable polling interval, usually every 10 to 30 seconds.
Where sFlow works well
- Vendor agnostic: sFlow works on a broader range of hardware than NetFlow, including many lower-cost switches. If your network includes a mix of vendors, sFlow is often the common denominator.
- Packet header access: Because sFlow exports the first 128 bytes of sampled packets, the collector can inspect layer 3 and layer 4 headers directly. This enables richer classification than flow records alone. You can identify DNS amplification by looking at the source port (53) and packet size, or detect NTP monlist attacks by inspecting the NTP mode field.
- Faster export: sFlow exports sampled packets as they are captured, rather than batching them on a timer. This means data reaches the collector in near real-time, reducing the export delay that plagues NetFlow deployments.
- Lower switch overhead: The sampling and export logic is typically implemented in the switch ASIC, not in software. This means sFlow has minimal impact on switch forwarding performance, even at high sampling rates.
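The header-access point above is worth making concrete: because the collector receives raw packet headers, it can apply simple L3/L4 heuristics to each sample. This sketch assumes an untagged Ethernet/IPv4/UDP frame; the 512-byte size threshold is an illustrative cutoff, not a standard value:

```python
import struct

def looks_like_dns_amplification(sample: bytes, frame_len: int) -> bool:
    """Heuristic: UDP source port 53 plus a large frame suggests a DNS
    amplification response. Assumes an untagged Ethernet + IPv4 frame."""
    if len(sample) < 14 + 20 + 8:            # Ethernet + min IPv4 + UDP
        return False
    ethertype = struct.unpack("!H", sample[12:14])[0]
    if ethertype != 0x0800:                  # not IPv4
        return False
    ihl = (sample[14] & 0x0F) * 4            # IPv4 header length in bytes
    if sample[14 + 9] != 17:                 # IP protocol field: 17 = UDP
        return False
    src_port = struct.unpack("!H", sample[14 + ihl:14 + ihl + 2])[0]
    return src_port == 53 and frame_len > 512
```

A real collector would handle VLAN tags, IPv6, and IP options, but even this crude check is more classification than a NetFlow record permits.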
Where sFlow falls short
- Still sampled: Like NetFlow, sFlow is a sampling technology. At 1:2048 sampling, a 5,000 PPS attack generates roughly 2 to 3 sampled packets per second. Low-rate application-layer attacks, slowloris-style connection exhaustion, and sub-threshold volumetric attacks remain invisible.
- Statistical accuracy at low rates: The accuracy of sFlow-derived statistics degrades significantly at low traffic volumes. A flow that generates 100 packets per second might produce zero sampled packets in a given interval, or it might produce 3. The variance is too high for reliable detection at these rates.
- Collector infrastructure: Because sFlow pushes raw packet samples to the collector, the collector must do all the heavy lifting: reassembling flows, computing statistics, and running detection algorithms. At high traffic rates, the collector itself can become a bottleneck.
- Limited payload depth: While 128 bytes of header is more than NetFlow provides, it is still not a full packet capture. You cannot inspect HTTP request bodies, TLS handshake details, or application-layer payloads beyond what fits in the first 128 bytes.
Host-Based Packet Inspection: Every Packet, Every Server
Host-based packet inspection takes the opposite approach from flow sampling. Instead of observing traffic at a network device with a sampling ratio, you run a lightweight agent on each server that reads kernel-level traffic counters and optionally captures full packets. The agent sees every packet that reaches the server's network interface, with zero sampling loss.
On Linux, the kernel exposes real-time traffic statistics through /proc/net/dev (interface byte and packet counters) and /proc/net/snmp (protocol-level statistics including TCP, UDP, and ICMP breakdowns). An agent can read these counters every second (or more frequently) and compute PPS, BPS, protocol ratios, and connection rates with perfect accuracy for that host.
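A minimal sketch of that counter-reading loop: parse /proc/net/dev and diff two successive snapshots to get BPS and PPS. The parsing follows the file's standard layout (two header lines, then one line per interface); the agent loop itself is only sketched in the comment:

```python
def parse_proc_net_dev(text: str) -> dict[str, tuple[int, int]]:
    """Return {interface: (rx_bytes, rx_packets)} from /proc/net/dev content.
    The first two fields after the interface name are rx bytes and rx packets."""
    stats = {}
    for line in text.splitlines()[2:]:       # skip the two header lines
        iface, _, rest = line.partition(":")
        fields = rest.split()
        stats[iface.strip()] = (int(fields[0]), int(fields[1]))
    return stats

def rates(prev: dict, curr: dict, interval_s: float = 1.0) -> dict:
    """Per-interface (BPS, PPS) computed from two successive snapshots."""
    return {i: ((curr[i][0] - prev[i][0]) / interval_s,
                (curr[i][1] - prev[i][1]) / interval_s)
            for i in curr if i in prev}

# In a real agent loop, roughly:
#   prev = parse_proc_net_dev(open("/proc/net/dev").read())
#   time.sleep(1)
#   curr = parse_proc_net_dev(open("/proc/net/dev").read())
#   bps_pps = rates(prev, curr)
```

Because the kernel maintains these counters anyway, reading them once per second costs essentially nothing.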
When an anomaly is detected, the agent can trigger a full packet capture using the kernel's built-in packet capture facilities. This provides complete PCAP data for forensic analysis, attack classification, and evidence collection, all without requiring any changes to upstream routers or switches.
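In practice, triggering a capture often means shelling out to tcpdump or using libpcap bindings. A sketch that assembles a bounded tcpdump invocation (the flags are standard tcpdump options; the default limits and the output path in the comment are illustrative):

```python
def build_capture_cmd(iface: str, out_path: str,
                      max_packets: int = 10_000, snaplen: int = 256) -> list[str]:
    """Assemble a bounded tcpdump command: -c caps the packet count, -s limits
    bytes captured per packet, -w writes a PCAP file, -n skips DNS lookups."""
    return ["tcpdump", "-i", iface, "-c", str(max_packets),
            "-s", str(snaplen), "-w", out_path, "-n"]

# An agent would launch this on anomaly, e.g.:
#   subprocess.Popen(build_capture_cmd("eth0", "/var/tmp/attack.pcap"))
```

Bounding the capture with -c and -s matters: during a volumetric flood, an unbounded capture can fill a disk in minutes.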
Where host-based inspection works well
- Zero sampling loss: Every packet is accounted for. A 500 PPS attack targeting a specific service is just as visible as a 5 million PPS volumetric flood. There is no statistical uncertainty about traffic volumes.
- Sub-second detection: Because the agent reads counters every second (or faster), detection latency is measured in seconds, not minutes. An attack that starts at T=0 can trigger an alert by T=2 or T=3.
- Deep classification: With access to full packet data, the agent can classify attacks by type (SYN flood, UDP amplification, DNS water torture, HTTP flood), identify the amplification protocol, extract source IPs, and provide actionable forensic data within seconds of detection.
- PCAP forensics: Automatic packet capture during attacks gives you the raw evidence needed for ISP abuse reports, law enforcement coordination, and post-incident analysis. Flow data cannot provide this level of detail.
- No router changes: Deploying an agent on a server requires no cooperation from the network team. There are no router configurations to modify, no flow export settings to tune, and no collector infrastructure to maintain.
Where host-based inspection falls short
- Agent deployment: You need to install and maintain an agent on every server you want to monitor. For large environments, this means integrating with configuration management tools (Ansible, Puppet, Chef) and maintaining the agent lifecycle across your fleet.
- Blind to upstream drops: If your upstream provider or a transit router drops traffic before it reaches the server, the agent will not see it. You might be under a 100 Gbps attack but only see 10 Gbps at the server because the other 90 Gbps is being absorbed upstream.
- Per-server scope: Each agent has visibility into its own server only. Building a network-wide view requires aggregating data from all agents in a central dashboard. The agent alone cannot tell you about traffic patterns between two other hosts in your network.
Side-by-Side Comparison
The following table summarizes the key differences across the three approaches:
| | NetFlow/IPFIX | sFlow | Host-Based |
|---|---|---|---|
| Detection latency | 1 to 3 minutes | 10 to 30 seconds | 1 to 3 seconds |
| Sampling | 1:1000 to 1:10000 | 1:1000 to 1:4096 | None (every packet) |
| Classification depth | 5-tuple metadata | L3/L4 headers (128 bytes) | Full packet + payload |
| Deployment | Router config + collector | Switch config + collector | Agent per server |
| PCAP capability | No | Partial (sampled headers) | Full capture on demand |
| Network-wide view | Yes (native) | Yes (native) | Aggregated from agents |
| Best for | ISP/enterprise backbone | Multi-vendor environments | Per-server protection |
When to Use NetFlow
NetFlow is the right choice when you need network-wide visibility across a large infrastructure and you already have flow-capable routers in place. ISPs, large enterprises, and organizations with dedicated network operations centers (NOCs) benefit most from NetFlow because they need to see traffic patterns across the entire network, not just at individual endpoints.
If your primary concern is volumetric attacks above 1 Gbps and your acceptable detection window is 2 to 3 minutes, NetFlow provides that coverage without any endpoint agents. Pair it with a mature collector like Kentik or FastNetMon and you have a proven detection pipeline. Just understand that attacks under 50,000 PPS and attacks shorter than 60 seconds will likely slip through the sampling gap.
When to Use sFlow
sFlow makes the most sense in multi-vendor environments where NetFlow support is inconsistent or unavailable. If your network includes a mix of Arista, Dell, and white-box switches, sFlow is likely the only flow protocol supported across all of them. The packet header access also gives sFlow an edge when you need basic protocol identification without deploying endpoint agents.
sFlow's faster export cycle (near real-time vs. 60-second batches) makes it better suited for environments where 10 to 30 second detection latency is acceptable. Combined with the counter polling data, sFlow can provide a reasonable picture of network health with less lag than NetFlow.
When to Use Host-Based Inspection
Host-based inspection is the right fit when sub-second detection, attack classification, and PCAP forensics are requirements rather than nice-to-haves. If you are protecting specific high-value servers (game servers, API endpoints, financial services infrastructure) and need to detect and classify attacks within seconds, host-based agents deliver capabilities that flow sampling cannot match.
The per-server deployment model also makes host-based inspection the easiest approach for teams that do not control the network infrastructure. If you are running servers in a colocation facility or on bare-metal cloud instances, you can deploy an agent without filing a ticket with the network team or waiting for router configuration changes.
The Hybrid Approach
In practice, the strongest detection posture combines multiple approaches. Use NetFlow or sFlow for network-wide baseline visibility and capacity planning. Layer host-based agents on critical servers for sub-second detection, deep classification, and PCAP forensics. The network-level flow data tells you what is happening across the infrastructure. The host-based agents tell you exactly what is hitting each server and what type of attack it is.
This layered model also provides redundancy. If an attack is large enough to overwhelm a server before the agent can report, the flow data from upstream routers still captures the event. If an attack is too small for flow sampling to detect, the host agent catches it at the endpoint. Each layer covers the other's blind spots.
A common production pattern: sFlow on the switch layer for network-wide visibility, host-based agents on every server facing the internet, and NetFlow on the border routers for ISP-level situational awareness. Three layers, three detection speeds, zero gaps.
How Flowtriq Fits
Flowtriq takes the host-based approach. The agent reads kernel counters from /proc/net/dev and /proc/net/snmp every second, computes dynamic baselines using exponentially weighted moving averages, and fires alerts within 1 to 3 seconds of anomaly detection. There is no sampling. Every packet that reaches the server is accounted for.
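To illustrate the general idea of EWMA baselining (a generic sketch, not Flowtriq's actual implementation; the smoothing factor and threshold multiplier are hypothetical defaults):

```python
class EwmaDetector:
    """Flag a sample as anomalous when it exceeds the moving baseline by a
    multiplier. alpha and threshold here are illustrative defaults."""

    def __init__(self, alpha: float = 0.1, threshold: float = 3.0):
        self.alpha = alpha
        self.threshold = threshold
        self.baseline = None

    def update(self, pps: float) -> bool:
        if self.baseline is None:
            self.baseline = pps              # seed the baseline
            return False
        anomalous = pps > self.baseline * self.threshold
        if not anomalous:                    # only learn from normal traffic
            self.baseline = (1 - self.alpha) * self.baseline + self.alpha * pps
        return anomalous
```

The key property is that the baseline adapts to gradual traffic growth while a sudden multiple-of-baseline spike still trips the threshold within one sample interval.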
When an attack is detected, Flowtriq automatically captures PCAPs for forensic analysis and classifies the attack type (SYN flood, UDP amplification, DNS water torture, ICMP flood, and more) based on full packet data. The dashboard aggregates data from all agents across your infrastructure, providing the network-wide view that individual agents lack on their own.
If you are already running NetFlow or sFlow on your network, Flowtriq complements that investment. The flow data gives you the macro view. Flowtriq gives you the per-server micro view with zero sampling loss and sub-second detection. Together, they eliminate the blind spots that either approach has on its own.
Try it yourself: Flowtriq deploys in under 2 minutes per server with a single install command. No router changes, no collector infrastructure, no sampling to tune. Start your 7-day free trial and see every packet that hits your servers, classified and baselined in real time.