Start with what you are protecting and who needs to know

Before configuring any alert, define two things: the list of IPs and services you are protecting, and the list of people who need to be notified when something happens. Alert systems fail most often not because the technology is wrong, but because there was no clear mapping between "this event happened" and "this person needs to act."

For a hosting business, the notification matrix typically looks like:

  • NOC team or on-call engineer: Needs real-time alerts for every detected attack, including those that auto-mitigate. Needs escalation alerts when mitigation fails or attack volume exceeds capacity.
  • Customer (affected server owner): Needs to know their server is under attack, when mitigation is active, and when it ends. Does not need technical details about attack vectors unless they request them.
  • Management or account team: Needs a summary when an SLA-impacting incident occurs. Does not need per-attack alerts for minor events.

Configuring detection thresholds

Thresholds define when an alert fires. Too high and you miss real attacks. Too low and you drown in false positives. The right thresholds are calibrated to your actual traffic baselines, not arbitrary numbers from a documentation page.

Step 1: Establish baselines

Run your detection tool in monitoring-only mode for 7-14 days without alerting. Collect per-IP traffic statistics: average packets per second, peak packets per second, and the 99th percentile of normal traffic for each protected IP. These become your baselines.
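The baseline summary in Step 1 can be sketched in a few lines. This is an illustrative sketch, not a Flowtriq feature: the nearest-rank percentile method, the function name, and the sample values are all our own assumptions.

```python
import statistics

def compute_baseline(pps_samples):
    """Summarize a 7-14 day window of per-IP packets-per-second samples.

    Returns the average, peak, and 99th percentile of normal traffic,
    which become the baselines the thresholds are derived from.
    """
    ordered = sorted(pps_samples)
    # 99th percentile via the nearest-rank method.
    idx = max(0, int(round(0.99 * len(ordered))) - 1)
    return {
        "avg_pps": statistics.mean(ordered),
        "peak_pps": ordered[-1],
        "p99_pps": ordered[idx],
    }

# Hypothetical 1-minute samples for one protected IP.
samples = [1200, 1500, 1100, 900, 1800, 2500, 1300, 1000, 1700, 2100]
baseline = compute_baseline(samples)
```

In practice you would feed this one list per protected IP, sampled at whatever interval your flow collector exports.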

Step 2: Set primary thresholds

For hosted servers, typical starting thresholds:

  • Packets per second: Alert at 10x the 99th percentile baseline or 50 Kpps, whichever is lower, so no threshold drifts above 50 Kpps
  • Bits per second: Alert at 5x the 99th percentile baseline or 500 Mbps, whichever is lower
  • SYN packets: Alert at 5,000 SYN/s sustained for more than 5 seconds

These are starting points. Adjust based on your hardware capacity and customer SLA requirements.
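Derived from a measured baseline, the starting thresholds above reduce to one expression per metric. A minimal sketch, assuming the "whichever is lower" rule applies to both the pps and bps metrics (the function name and return keys are ours):

```python
def primary_thresholds(p99_pps, p99_bps):
    """Starting alert thresholds from a measured p99 baseline.

    50 Kpps and 500 Mbps act as ceilings: the lower of
    (multiplier x baseline) and the fixed value wins, so every
    server alerts by the fixed value at the latest.
    """
    return {
        "pps": min(10 * p99_pps, 50_000),        # 10x p99, at most 50 Kpps
        "bps": min(5 * p99_bps, 500_000_000),    # 5x p99, at most 500 Mbps
        "syn_per_s": 5_000,                      # fire only if sustained >5 s
    }
```

For a server with a quiet baseline of 1 Kpps / 10 Mbps, this yields alert points of 10 Kpps and 50 Mbps; a busy server with a 100 Kpps baseline is held at the 50 Kpps ceiling.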

Step 3: Set escalation thresholds

A separate, higher threshold triggers escalation beyond automated response:

  • Traffic exceeds 80% of link capacity (requires upstream intervention)
  • Automated mitigation has been active for 5+ minutes without traffic returning to baseline (attack is persisting or evolving)
  • Multiple IPs on the same node are under attack simultaneously (indicating a targeted campaign, not random scanning)

Configuring Slack/Discord webhooks

Most detection tools support webhook notifications via POST requests to a URL. For Slack, create an incoming webhook in your Slack app settings. For Discord, right-click any channel, select Edit Channel, then Integrations, then Webhooks.

A good alert message format for Slack/Discord:

:warning: DDoS Attack Detected
Server: web01.customer47.com (192.0.2.47)
Vector: UDP Amplification (NTP)
Volume: 2.4 Gbps / 1.2 Mpps
Status: Auto-mitigation ACTIVE
Started: 14:32:18 UTC
Dashboard: https://app.flowtriq.com/incidents/abc123

Include a direct link to the incident dashboard. The person seeing the alert should be able to click one link to get full details and PCAP data.
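Slack incoming webhooks expect a JSON body with a `text` field, while Discord webhooks expect `content`; otherwise the POST is identical. A stdlib-only sketch of building and sending the message above (the webhook URL and all field values are placeholders):

```python
import json
import urllib.request

def build_alert_text(server, ip, vector, gbps, mpps, started_utc, incident_url):
    """Format the alert message template shown above."""
    return (
        f":warning: DDoS Attack Detected\n"
        f"Server: {server} ({ip})\n"
        f"Vector: {vector}\n"
        f"Volume: {gbps} Gbps / {mpps} Mpps\n"
        f"Status: Auto-mitigation ACTIVE\n"
        f"Started: {started_utc} UTC\n"
        f"Dashboard: {incident_url}"
    )

def post_webhook(url, text, discord=False):
    """POST the alert; Slack reads 'text', Discord reads 'content'."""
    payload = {"content": text} if discord else {"text": text}
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status
```

Keeping the message builder separate from the sender makes the format testable without hitting a real webhook.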

Configuring email notifications

Email alerts serve a different purpose than real-time webhook notifications. Configure email for:

  • Customer notification: When an attack starts on their IP, send an email immediately. Include the server name, attack type, and a note that automated mitigation is active.
  • Incident closure: When the attack ends and mitigation is withdrawn, send a closure email with attack duration, peak volume, and a brief summary.
  • Weekly digest: A summary of all detected attacks in the past week, for customers who want visibility without per-incident emails.
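The customer notification email can be sketched with Python's standard `smtplib` and `email.message`; the sender address, relay host, and wording are placeholders to adapt to your environment:

```python
import smtplib
from email.message import EmailMessage

def build_attack_email(customer_addr, server, attack_type):
    """Immediate customer notification: server name, attack type,
    and a note that automated mitigation is active."""
    msg = EmailMessage()
    msg["Subject"] = f"[DDoS Alert] Attack detected on {server} - mitigation active"
    msg["From"] = "noc@example-host.com"   # placeholder sender
    msg["To"] = customer_addr
    msg.set_content(
        f"An attack of type {attack_type} was detected against {server}.\n"
        "Automated mitigation is active and your server remains reachable.\n"
        "You will receive a closure email when the attack ends."
    )
    return msg

def send_email(msg, relay="localhost"):
    """Hand the message to a local or internal SMTP relay."""
    with smtplib.SMTP(relay) as s:
        s.send_message(msg)
```

The closure and weekly-digest emails follow the same pattern with different subjects and bodies.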

Configuring PagerDuty escalation

For hosting businesses with formal SLA obligations, PagerDuty (or OpsGenie) handles on-call rotation and escalation. The integration is straightforward: Flowtriq and most detection tools can POST to PagerDuty's Events API. Configure escalation rules:

  1. Escalation trigger fires (e.g., attack exceeding link capacity or mitigation failure)
  2. PagerDuty sends push notification and email to on-call engineer
  3. If not acknowledged in 10 minutes, phone call to on-call engineer
  4. If not acknowledged in 20 minutes, phone call to backup engineer

Only use PagerDuty for Tier 3 events (link saturation, mitigation failure). Every other alert goes to Slack. A phone call at 3 AM for a minor DDoS that auto-mitigated in 2 seconds destroys team morale and alert credibility.
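A PagerDuty trigger via the v2 Events API (`https://events.pagerduty.com/v2/enqueue`) is a single JSON POST; the routing key comes from the service integration you create in PagerDuty. A minimal sketch, to be called only from the Tier 3 path:

```python
import json
import urllib.request

PAGERDUTY_EVENTS_URL = "https://events.pagerduty.com/v2/enqueue"

def build_pd_event(routing_key, summary, source):
    """Events API v2 trigger payload. Reserve this for Tier 3 events
    (link saturation, mitigation failure) per the guidance above."""
    return {
        "routing_key": routing_key,
        "event_action": "trigger",
        "payload": {
            "summary": summary,      # e.g. "Link saturation on node7"
            "source": source,        # the affected node or IP
            "severity": "critical",
        },
    }

def send_pd_event(event):
    req = urllib.request.Request(
        PAGERDUTY_EVENTS_URL,
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status  # the Events API returns 202 on acceptance
```

OpsGenie's Alert API follows the same POST-a-JSON-payload shape with different field names.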

Testing your alert configuration

Run a test before relying on your alert setup in production:

  1. Temporarily lower a threshold to a value you can trigger with a controlled traffic test (ping flood from a test machine, or a packet generator tool).
  2. Generate traffic that exceeds the test threshold.
  3. Verify that all notification channels fire within 5 seconds.
  4. Verify that the "attack ended" notification fires when you stop the test traffic.
  5. Restore production thresholds.

Do this test quarterly. Alert routing configurations break when email addresses change, Slack webhook URLs are rotated, or PagerDuty on-call schedules are updated and nobody updates the DDoS alert integration.
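One way to make the quarterly check repeatable is a small self-test harness that pushes a clearly labeled test message through every configured channel and reports which ones fail. The `send_fn` callables are assumed wrappers around whatever senders you use (webhook POST, SMTP, PagerDuty):

```python
def run_channel_selftest(channels):
    """Fire a harmless test alert through every channel.

    channels: dict mapping channel name -> callable(text) that raises
    on failure. Returns a per-channel status report.
    """
    results = {}
    for name, send_fn in channels.items():
        try:
            send_fn("[TEST] DDoS alert channel verification - please ignore")
            results[name] = "ok"
        except Exception as exc:
            # Rotated webhook URLs, stale addresses, etc. surface here.
            results[name] = f"failed: {exc}"
    return results
```

Scheduling this from cron and alerting on any "failed" entry catches the rotated-webhook and stale-address breakage described above.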

Common configuration mistakes

  • Not segmenting alerts by customer tier. A low-margin shared hosting customer and a high-value dedicated server customer should not trigger the same escalation path. Configure customer tier labels in your detection tool and use them to route alerts appropriately.
  • Alert sent only when attack starts, not when it ends. Customers and NOC staff need closure. "Attack ended at 14:47, server fully restored" is as important as the initial alert.
  • No documentation of threshold rationale. Six months from now, nobody will remember why the threshold is 50 Kpps for customer X and 200 Kpps for customer Y. Document the reasoning (customer tier, hardware capacity, SLA requirements) in the detection tool or a wiki entry.
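Tier-aware routing from the first mistake above can be as simple as a lookup table consulted before dispatch; the tier labels here are hypothetical examples, not names from any particular tool:

```python
ROUTING = {
    # Hypothetical customer tier labels -> notification channels.
    "shared":    ["slack"],
    "dedicated": ["slack", "email"],
    "sla":       ["slack", "email", "pagerduty"],
}

def channels_for(customer_tier, is_tier3_event):
    """Pick channels by tier; pages go out only for Tier 3 events."""
    chans = list(ROUTING.get(customer_tier, ["slack"]))
    if not is_tier3_event and "pagerduty" in chans:
        chans.remove("pagerduty")  # no 3 AM calls for auto-mitigated noise
    return chans
```

A comment next to each tier entry is also a natural place to record the threshold rationale the third bullet asks for.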

Detect DDoS attacks in under 1 second

Deploy Flowtriq on your infrastructure and get real-time detection, auto-mitigation, and instant alerts. $9.99/node/month.
