In today’s cyber‑security landscape, every hour brings new CVEs, breach alerts, and exploit posts. Manually filtering that noise is not only tedious—it’s risky.
What I Built
A 24‑hour batch pipeline using:
| Component | Role |
| --- | --- |
| n8n | Schedules RSS/API pulls (Hacker News, MITRE CVE, internal breach feeds) and orchestrates the workflow. |
| Ollama | Runs a local LLM (e.g., llama3) to extract key entities (CVE IDs, affected products, exploit techniques) from article text; see the sketch after this table. |
| Rule Engine | Cross‑checks extracted CVEs against our asset inventory and flags unpatched systems. |
| Alerting | Posts concise messages to Slack/Teams/Discord/Telegram. |
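To make the Ollama step concrete, here is a minimal sketch of the extraction call. The endpoint and request shape follow Ollama's standard generate API; the prompt wording, the JSON keys, and the sample article are illustrative assumptions, not the exact production prompt. In n8n, the same request lives in an HTTP Request node.

```python
# Minimal sketch of the LLM extraction step, assuming a local Ollama
# server on its default port with a llama3 model already pulled.
import json
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

# Hypothetical prompt; the real pipeline's prompt may differ.
PROMPT_TEMPLATE = """Extract security entities from the article below.
Respond with JSON only, using these keys:
"cve_ids" (list of CVE identifiers), "products" (affected products),
"techniques" (exploit techniques mentioned).

Article:
{article}
"""

def extract_entities(article_text: str) -> dict:
    """Ask the local model for structured entities from one article."""
    resp = requests.post(
        OLLAMA_URL,
        json={
            "model": "llama3",
            "prompt": PROMPT_TEMPLATE.format(article=article_text),
            "format": "json",   # ask Ollama to constrain output to valid JSON
            "stream": False,    # return one complete response object
        },
        timeout=120,
    )
    resp.raise_for_status()
    # Ollama wraps the model's output in a "response" string field.
    return json.loads(resp.json()["response"])

if __name__ == "__main__":
    sample = "Researchers published a PoC for CVE-2024-3400 affecting PAN-OS..."
    print(extract_entities(sample))
```

Running locally means the raw article text never leaves the box, which is what keeps the compliance story simple.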
How It Works (Daily)
1. Collect feeds every 24 hours.
2. Parse with the LLM → extract CVEs & exploit details.
3. Correlate against inventory; generate tickets/alerts if a patch is missing or a public exploit demo exists (a sketch of this step follows the list).
4. Publish findings to Confluence for team visibility.
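The correlation and alerting steps reduce to a small amount of logic. The sketch below is a simplified stand-in for the rule engine: the inventory shape, the product-to-patched-CVEs mapping, and the webhook URL are all illustrative assumptions, and it consumes the `extract_entities` output sketched above.

```python
# Simplified correlation + alerting sketch. The real rule engine reads a
# live asset inventory; this hard-coded dict is a hypothetical stand-in.
import requests

SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder URL

# Hypothetical inventory: product name -> set of CVE IDs already patched.
ASSET_INVENTORY = {
    "PAN-OS": {"CVE-2024-0012"},
    "OpenSSH": set(),
}

def correlate(entities: dict) -> list[str]:
    """Return one alert line per CVE that hits an asset we have not patched."""
    alerts = []
    for product in entities.get("products", []):
        patched = ASSET_INVENTORY.get(product)
        if patched is None:
            continue  # product is not in our estate, ignore
        for cve in entities.get("cve_ids", []):
            if cve not in patched:
                alerts.append(f"{cve} affects {product} and is unpatched")
    return alerts

def post_alert(lines: list[str]) -> None:
    """Slack incoming webhooks accept a simple {'text': ...} payload."""
    if lines:
        requests.post(SLACK_WEBHOOK, json={"text": "\n".join(lines)}, timeout=30)

if __name__ == "__main__":
    demo = {"cve_ids": ["CVE-2024-3400"], "products": ["PAN-OS"]}
    for line in correlate(demo):
        print(line)  # swap for post_alert(...) once the webhook URL is real
```

Keeping the correlation rule this dumb is deliberate: the LLM handles the fuzzy extraction, and the deterministic check stays auditable.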
Results (Metrics)
– Response time to a new CVE: ~24 hrs → under 30 min.
– False-positive rate: 18% → 3%.
– Missed breach notifications: ~5/month → 0.
– Engineering time spent on feed triage: 10 hrs/week → 6 hrs/week.
Takeaways
1. Keep it simple: A single daily run can deliver the same insights as a real‑time pipeline, but with lower overhead.
2. Local LLMs work: Ollama gives you NLP power without cloud costs or compliance headaches.
3. Automate triage, not everything: Let humans focus on mitigation; let bots handle ingestion and correlation.
If you’re drowning in security feeds, consider a daily n8n + Ollama workflow. It turns chaos into control—and frees up your team to do what they do best: protect the organization.
RSS Feed:

CVE researcher + Bug Bounty:

Breach monitoring haveibeenpwned: https://haveibeenpwned.com