Smart DDoS Mitigation

Baskerville defends against DDoS attacks at the application layer (Layer 7) by analyzing raw weblogs. Every request is grouped into a user session using cookies and IPs, creating a clear picture of how each visitor interacts with the site. Traffic is then separated into human vs. automated flows using signals such as user agent, TLS fingerprints, and network characteristics. On top of this, Baskerville trains website-specific machine learning models on the fly.
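The grouping step can be pictured as keying requests by IP and session cookie. The sketch below is a minimal illustration only; the record fields (`ip`, `session_cookie`) are assumed names, not Baskerville's actual log schema.

```python
from collections import defaultdict

def group_into_sessions(log_records):
    """Group parsed weblog records into per-visitor sessions.

    Each record is assumed to be a dict with an "ip" field and, when the
    client sends one, a "session_cookie" field (illustrative field names).
    """
    sessions = defaultdict(list)
    for record in log_records:
        # Prefer the session cookie when present; fall back to the IP
        # address so cookieless clients still form a session.
        key = (record["ip"], record.get("session_cookie") or "no-cookie")
        sessions[key].append(record)
    return sessions
```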

These models — based on algorithms such as Isolation Forest — learn each site’s unique traffic profile using both statistical features (request rates, timing, payload size) and text-based features (URL structures and browsing sequences).
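As a rough picture of how such a model is used, the sketch below trains a scikit-learn Isolation Forest on hypothetical per-session statistical features. The feature choice, scaling, and thresholds here are illustrative assumptions, not Baskerville's production setup.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical "normal" sessions: ~0.5 req/s, ~2 s between requests, ~3 KB payloads.
X_train = np.column_stack([
    rng.normal(0.5, 0.1, 500),     # request rate (req/s)
    rng.normal(2.0, 0.3, 500),     # mean inter-request gap (s)
    rng.normal(3000, 400, 500),    # mean payload size (bytes)
])

model = IsolationForest(contamination=0.05, random_state=42)
model.fit(X_train)

# predict() returns +1 for sessions that match the learned profile and -1
# for outliers; an aggressive, fast, tiny-payload session stands out.
print(model.predict([[45.0, 0.02, 120]]))   # -> [-1]
```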

To push detection further, Baskerville also leverages embeddings generated by large language models (LLMs) to represent URLs and navigation paths in a semantic space. These embeddings are then fed into a neural-network autoencoder, which learns to reconstruct normal traffic patterns and flags deviations as anomalies. This combination of traditional anomaly detection and deep-learning reconstruction allows Baskerville to capture more subtle bot behaviors while adapting to site-specific traffic.
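The idea can be sketched as a small reconstruction model over path embeddings: train on embeddings of benign navigation, then treat high reconstruction error as an anomaly signal. The 384-dimensional input, layer sizes, and training loop below are assumptions for illustration, not the actual architecture.

```python
import torch
from torch import nn

class PathAutoencoder(nn.Module):
    """Fully connected autoencoder over URL/navigation-path embeddings."""
    def __init__(self, dim=384, bottleneck=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, bottleneck))
        self.decoder = nn.Sequential(nn.Linear(bottleneck, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = PathAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder for embeddings of benign navigation paths.
normal_embeddings = torch.randn(1024, 384)
for _ in range(10):                          # a few illustrative epochs
    optimizer.zero_grad()
    loss = loss_fn(model(normal_embeddings), normal_embeddings)
    loss.backward()
    optimizer.step()

def anomaly_score(embedding):
    # High reconstruction error means the traffic pattern was not seen
    # during training and the session is a candidate anomaly.
    with torch.no_grad():
        return loss_fn(model(embedding), embedding).item()
```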

Operators retain full control over detection aggressiveness. Through the configuration dashboard, the sensitivity of each model can be fine-tuned to reduce false positives and balance user experience with security.
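Conceptually, tuning sensitivity means moving the per-model thresholds that decide when a session counts as anomalous. The configuration keys and default values below are hypothetical, shown only to make the trade-off concrete.

```python
# Hypothetical per-site sensitivity settings; keys and defaults are illustrative.
detection_config = {
    "isolation_forest": {"threshold": -0.05},   # lower = more aggressive
    "autoencoder":      {"threshold": 0.12},    # max allowed reconstruction error
    "challenge_on_anomaly": True,
}

def is_anomalous(if_score, ae_error, cfg=detection_config):
    # A session is flagged when either model crosses its tuned threshold,
    # so loosening a threshold trades detection recall for fewer false positives.
    return (if_score < cfg["isolation_forest"]["threshold"]
            or ae_error > cfg["autoencoder"]["threshold"])
```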

To catch bots hiding behind real browsers, Baskerville also collects browser environment metrics (viewport, language, timezone, canvas, WebGL, memory, CPU cores, driver info). Similar or repeated environments across many sessions reveal automated traffic even when bots execute JavaScript like humans.
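A simple way to picture this signal is to hash each session's environment metrics into a fingerprint and look for fingerprints shared by an unusually large number of sessions. The field names and the threshold below are assumptions for illustration.

```python
import hashlib
from collections import Counter

def fingerprint(env):
    """Hash browser-environment metrics into a stable fingerprint.

    The field names are illustrative; any stable subset of viewport,
    language, timezone, canvas/WebGL, memory, and core counts would do.
    """
    raw = "|".join(str(env.get(k, "")) for k in (
        "viewport", "language", "timezone", "canvas_hash",
        "webgl_renderer", "device_memory", "cpu_cores",
    ))
    return hashlib.sha256(raw.encode()).hexdigest()

def suspicious_fingerprints(session_envs, min_sessions=50):
    # Fingerprints reused across many distinct sessions suggest a bot farm
    # running identical headless-browser environments.
    counts = Counter(fingerprint(env) for env in session_envs)
    return {fp for fp, n in counts.items() if n >= min_sessions}
```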

Suspicious sessions are given smart challenges (JavaScript puzzles, CAPTCHAs, honeypots). Automated traffic that fails is blocked or rate-limited, while genuine human users continue seamlessly.
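The escalation logic can be summarized as mapping a session's anomaly score and challenge history to an action. The thresholds and field names in this sketch are invented for illustration; the real decision pipeline is richer.

```python
def choose_action(session):
    """Map a scored session to a response; thresholds are illustrative."""
    if session.get("failed_challenge"):
        return "block"                      # or rate-limit, depending on policy
    if session["anomaly_score"] > 0.9:
        return "captcha"                    # high-confidence automation
    if session["anomaly_score"] > 0.6:
        return "js_challenge"               # cheap, invisible to most humans
    return "allow"
```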

Behind the scenes, Baskerville uses a Kafka-based streaming pipeline to process massive traffic volumes in real time. All traffic data and classification results are stored in a Postgres database, while real-time metrics — including challenge rate, bot rate, AI bot rate, and fingerprinting signals — are published to Prometheus and visualized in Grafana dashboards, giving operators full transparency and control.
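A minimal version of that flow is a Kafka consumer that reads classification results and exposes them as Prometheus gauges for Grafana to chart. The topic name, message fields, and metric names below are assumptions; only the libraries (kafka-python, prometheus_client) are standard.

```python
import json
from kafka import KafkaConsumer                  # kafka-python
from prometheus_client import Gauge, start_http_server

# Metric and topic names are illustrative, not Baskerville's actual schema.
bot_rate = Gauge("baskerville_bot_rate", "Share of sessions classified as bots")
challenge_rate = Gauge("baskerville_challenge_rate", "Share of sessions challenged")

start_http_server(9100)                          # scrape endpoint for Prometheus

consumer = KafkaConsumer(
    "session.classifications",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode()),
)
for message in consumer:
    result = message.value
    bot_rate.set(result["bot_rate"])
    challenge_rate.set(result["challenge_rate"])
```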

Bot Management

Automated traffic has been steadily growing in recent years, and on many sites now makes up nearly 50% of all requests. Baskerville gives you full visibility into this traffic by classifying bots into categories such as:

  • Search engine crawlers (Google, Bing, etc.)
  • Scrapers harvesting your content
  • AI bots consuming data for model training
  • Malicious bots launching attacks or probing weaknesses

With this insight, you can apply the right policy for each type: set custom rate limits, allow verified crawlers, or block AI bots completely.
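One way to express such policies is a small per-category table consulted on every classified session. The category names, actions, and limits below are placeholders meant only to show the shape of the idea.

```python
# Hypothetical per-category policies; names and limits are illustrative.
bot_policies = {
    "search_engine": {"action": "allow",      "rate_limit_rpm": 300},
    "scraper":       {"action": "rate_limit", "rate_limit_rpm": 30},
    "ai_bot":        {"action": "block"},
    "malicious":     {"action": "block"},
}

def apply_policy(category, requests_per_minute):
    policy = bot_policies.get(category, {"action": "challenge"})
    if policy["action"] == "block":
        return "block"
    if requests_per_minute > policy.get("rate_limit_rpm", float("inf")):
        return "rate_limit"
    return "allow"
```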

At the same time, legitimate, verified search engine bots are allowed to crawl your site — as long as they behave properly and do not overload resources.
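Verification of a self-declared crawler is commonly done with a reverse DNS lookup followed by a forward-confirming lookup, the method the major search engines publish themselves. Whether Baskerville implements it exactly this way is an assumption; the sketch below shows the general technique.

```python
import socket

def is_verified_crawler(ip, allowed_suffixes=(".googlebot.com", ".google.com", ".search.msn.com")):
    """Verify a self-declared search engine bot via reverse + forward DNS."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)            # reverse lookup
        if not hostname.endswith(allowed_suffixes):
            return False
        # Forward-confirm: the hostname must resolve back to the same IP.
        return ip in socket.gethostbyname_ex(hostname)[2]
    except (socket.herror, socket.gaierror):
        return False
```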

In addition, Baskerville introduces advanced CAPTCHA-like challenges specifically designed to expose AI-driven bots. These challenges analyze fine-grained behavioural cues such as mouse movements, touch interactions, and browser environment signals. While AI bots can execute scripts, they still fail to fully replicate the natural patterns of a human user behind the screen. These special challenge pages can be selectively served to suspicious IPs or always enforced on sensitive areas such as login pages. For extreme situations, Baskerville provides an “Under Attack” emergency mode, which automatically escalates all protective measures to keep your website online and responsive during a major attack.
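As a very rough illustration of the kind of behavioural cue involved, the heuristic below checks whether a recorded pointer trace has the curvature and timing jitter that human movement usually shows. It is a toy sketch, not Baskerville's challenge logic, and the thresholds are invented.

```python
import math

def mouse_path_looks_human(points):
    """Crude heuristic over recorded pointer positions [(x, y, t), ...]."""
    if len(points) < 5:
        return False
    # Synthetic traces are often perfectly straight: path length ~= direct distance.
    path_len = sum(math.dist(points[i][:2], points[i + 1][:2]) for i in range(len(points) - 1))
    direct = math.dist(points[0][:2], points[-1][:2])
    curved_enough = path_len > 1.05 * max(direct, 1e-6)
    # Perfectly regular timing between samples is another automation signal.
    gaps = [points[i + 1][2] - points[i][2] for i in range(len(points) - 1)]
    jitter = max(gaps) - min(gaps)
    return curved_enough and jitter > 0.005
```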

AI Firewall

Baskerville’s AI Firewall detects anomalous traffic and automatically issues challenges, while generating firewall policies that keep attackers at bay. Today, it creates rules that can be refined and extended through the dashboard, giving operators full control. Our architecture is built to go further: as the recommendation engine matures, those same insights will power fully scoped, adaptive policies that rate-limit or block abusive behaviours, from scrapers and malicious bots to suspicious ASNs, with minimal human intervention. Every event and outcome is written to Postgres and streamed via Kafka, with metrics ready for Prometheus and Grafana dashboards.
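The rule-generation step can be pictured as turning an anomaly event into a draft rule that an operator reviews in the dashboard. The event fields and rule schema below are illustrative assumptions, not the actual rule format.

```python
from datetime import datetime, timedelta, timezone

def propose_rule(anomaly_event):
    """Turn an anomaly event into a draft firewall rule for operator review."""
    return {
        "match": {"ip": anomaly_event["ip"], "asn": anomaly_event.get("asn")},
        "action": "rate_limit" if anomaly_event["score"] < 0.9 else "block",
        "limit_rpm": 20,
        "expires_at": (datetime.now(timezone.utc) + timedelta(hours=1)).isoformat(),
        "reason": anomaly_event.get("reason", "anomalous session score"),
    }
```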

Modern AI bots don’t just scrape content — they behave like browsers, running JavaScript, solving CAPTCHAs, and mimicking human actions. Baskerville counters them with an AI Firewall:

  • Honeypot traps: hidden links, fake forms, and decoy APIs that attract automated crawlers (a minimal sketch follows this list).
  • AI-generated bait pages: realistic but meaningless content that forces AI bots to waste CPU and memory parsing it, draining their resources.
  • Environment fingerprinting: detection of repeated browser environments (timezone, WebGL, canvas, memory, cores, drivers) that reveal automated traffic behind headless browsers.
  • Adaptive response: dynamic challenges and soft-blocking strategies that stop botnets without harming real visitors.
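As referenced in the honeypot item above, a decoy endpoint can be as simple as a route no legitimate page links to visibly: anything requesting it is recorded and subsequently throttled. This Flask sketch uses an invented path and an in-memory set purely for illustration.

```python
from flask import Flask, abort, request

app = Flask(__name__)
trapped_ips = set()

# A decoy path that only crawlers following hidden links or probing common
# locations will ever request; the route name is illustrative.
@app.route("/wp-admin/backup.zip")
def honeypot():
    trapped_ips.add(request.remote_addr)
    abort(404)          # look unremarkable while recording the visitor

@app.before_request
def drop_trapped_clients():
    # Soft-block anything that has already fallen into the trap.
    if request.remote_addr in trapped_ips and request.path != "/wp-admin/backup.zip":
        abort(429)
```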

The result is an active, adaptive defense that uses AI against AI — turning attackers’ strengths into weaknesses.