Threat Intelligence and the mRNA Problem: When Good Instructions Meet Missing Infrastructure
mRNA vaccines worked because they were designed for a body that already had regulatory infrastructure. Threat intelligence assumes the same about your SOC.
Molecular biologists had been working with messenger RNA for decades before the first mRNA vaccine shipped. Katalin Karikó and Drew Weissman's contribution was figuring out how to make synthetic instructions compatible with the immune system's existing machinery. The body had been rejecting unmodified mRNA as foreign junk. Their fix, the one that eventually earned the Nobel Prize, was making the message look native enough that the immune system would accept and act on it. The prize was for compatibility, not novelty.
That's the word that matters: compatible. mRNA doesn't replace the immune system. It depends on it entirely. It delivers what to detect; the immune network handles everything else: self-tolerance, response calibration, memory, adaptation.
Threat intelligence makes the same bet. New threat appears? Ship a new IOC. New malware variant? Update the YARA rule. New C2 infrastructure? Push the IP list. The instruction is the product. The assumption is that whatever receives it has the infrastructure to do something intelligent with it.
The regulation gap
I've written before about Jerne's Immune Network Theory and what it means for security operations, so I won't retread the immunology here. The relevant point: mRNA succeeded because it delivered instructions into an existing regulatory network. The instructions were elegant, but the network did the hard work.
Most SOCs don't have the equivalent network. The SANS 2025 CTI Survey found that 90% of organizations consume external threat intelligence, most from multiple feeds simultaneously. But the same survey found the majority can't make that intelligence actionable. The pipeline is wide open at the intake and clogged at the output. Vectra's 2024 survey of 2,000 SOC practitioners puts a number on the clog: 4,484 alerts per day on average, 62% ignored. Vendor surveys carry bias, but the pattern shows up consistently, and the burnout is hard to argue with.
The downstream effects are predictable. Analysts burn a third of their day on false positives, and triage at that volume demands contextual judgment that no team can supply manually.
Consider what happens when a C2 IP address from a threat intel feed hits your SIEM. The detection rule matches. Alert fires. But that IP is a Cloudflare endpoint that also serves legitimate traffic for half your SaaS applications. A SOC with a baseline map of normal traffic patterns would recognize this and suppress the alert. Without that baseline, an analyst spends twenty minutes confirming what the environment topology could have told them instantly.
Now take a subtler case. A file hash flagged as malware appears on an endpoint. The hash belongs to PsExec, a legitimate Microsoft administration tool. Your IT team uses it daily. A TIP platform might have a note from three months ago marking this hash as a known false positive. But does that note connect to which teams use PsExec, on which machines, for what purposes? If a marketing intern runs PsExec on the domain controller, the answer should be different than when a sysadmin runs it on a server they maintain. Legitimacy isn't a property of the tool. It's a property of the relationship between the tool, the user, the target, and the context, every edge a flat detection rule can't see.
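To make that concrete, here is a minimal sketch in Python of what evaluating legitimacy as a relationship looks like. Everything in it is hypothetical: the user-to-host mapping, the names, and the verdicts are illustrative stand-ins for whatever identity and asset data an environment actually has.

```python
# Illustrative sketch: legitimacy of a flagged admin tool evaluated as a
# relationship between the tool, the user running it, and the target host.
# The ADMIN_SCOPES mapping is a hypothetical stand-in for real identity data.

ADMIN_SCOPES = {
    "jsmith":  {"role": "sysadmin", "manages": {"srv-db-01", "srv-app-02"}},
    "intern1": {"role": "marketing", "manages": set()},
}

def classify_psexec_alert(user: str, target: str) -> str:
    """Return a triage verdict for a PsExec execution alert."""
    profile = ADMIN_SCOPES.get(user)
    if profile is None:
        return "escalate"  # unknown user: no context, investigate
    if profile["role"] == "sysadmin" and target in profile["manages"]:
        return "suppress"  # admin tool, run by its admin, on an in-scope host
    return "escalate"      # legitimate tool, wrong user or wrong target

print(classify_psexec_alert("jsmith", "srv-db-01"))   # suppress
print(classify_psexec_alert("intern1", "dc-01"))      # escalate
```

The point isn't the lookup table, which any playbook could hard-code; it's that the verdict comes from the edges between user, role, and host rather than from any property of the hash itself.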
The immune system handles this naturally. An immune cell doesn't just ask "is this molecule foreign?" It evaluates the molecule in context: where it appears, what signals the surrounding tissue is producing, whether the overall pattern suggests danger. mRNA only had to deliver the target because the network already provided the judgment.
What provides the judgment in your SOC?
Where SOAR playbooks hit the wall
The industry hasn't been standing still. TIP platforms maintain context around indicators: confidence scores, decay timelines, analyst annotations. UEBA tools baseline user behavior and flag anomalies. SOAR playbooks automate enrichment and response. Organizations running these well are better off than those piping raw feeds into a SIEM.
SOAR is worth looking at closely, because it's the tool that most explicitly tries to be the infrastructure for threat intel. A SOAR playbook for a suspicious IP might: query VirusTotal for reputation, check the CMDB for which asset generated the alert, look up the user in Active Directory, check whether the IP appears in a known CDN range, and decide whether to escalate or suppress. That's real work.
The problem is that the playbook only handles scenarios the author anticipated. Someone had to predict that CDN IPs would be a source of false positives and write the CDN-check step. Someone had to predict that the asset's data classification matters and write the CMDB lookup. Each scenario is a hand-coded decision tree. When the alert doesn't match a tree someone already built, it falls through to an analyst, and by then the analyst is already drowning.
A SOAR playbook can tell you that the IP is in a CDN range and the asset is a development server. It can't tell you that the development server has a misconfigured firewall rule granting it a network path to your production PII database, a path the playbook author never imagined and never wrote a check for. The playbook handles known patterns. It can't discover unknown relationships.
TIPs and UEBA hit the same wall from different angles. A TIP annotates the indicator but knows nothing about the environment it landed in. UEBA baselines individual entities but can't connect one entity's anomaly to another's. There's no way to ask "show me everything acting unusual along this particular path." Each tool sees one layer. The relationships between layers are where context lives, and no single tool models them.
What graph-native contextualization looks like
When a threat intel indicator arrives, it shouldn't land in a table. It should land in your environment graph.
A C2 IP address is a string to your SIEM. In a graph, it becomes a node connected to your topology:
MATCH (ioc:ThreatIntel {type: 'ip', value: '198.51.100.23'})
OPTIONAL MATCH (ioc)<-[:COMMUNICATES_WITH]-(asset:Asset)
OPTIONAL MATCH (asset)-[:HOSTS]->(app:Application)
OPTIONAL MATCH (asset)-[:STORES]->(data:Data)
RETURN ioc, asset, app, data,
       CASE WHEN asset IS NULL THEN 'no exposure'
            WHEN data.classification = 'PII' THEN 'critical'
            ELSE 'investigate'
       END AS priority
The query asks which assets communicated with that IP, what those assets host, and what data they store. An IOC that connects to a development sandbox running test data gets a different response than one connecting to a production database with customer records. Same indicator, different context, different priority.
Unlike the SOAR playbook, this traversal works for any IOC against any topology without someone hand-coding each scenario. New indicator arrives, same traversal, automatic contextualization.
Analyst decisions become structural too. When an analyst confirms an IOC is a false positive because the IP belongs to a known CDN, that relationship is encoded: CDN_Provider -[HOSTS]-> IP_Address -[FALSE_POSITIVE_FOR]-> ThreatIntel_IOC. The next time an IOC arrives for an IP in the same ASN or CIDR range, the graph already has context. That relationship is traversable and changes how future queries behave.
A TIP with good analyst workflows can build real institutional knowledge around individual indicators. But that knowledge stays attached to the indicator. In a graph, the same decisions become relationships in the topology itself, where they affect every connected query going forward.
The real cost
None of this is free. Pretending otherwise would repeat the same mistake the threat intel market makes: selling the instruction while glossing over the infrastructure it requires.
Building an environment graph means mapping assets, applications, data flows, access patterns, and the relationships between them. It means maintaining that map as the environment changes. Gartner's 2022 Market Guide for UEBA noted that many deployments stall after initial setup because maintaining behavioral baselines is operationally expensive. Graph-based initiatives hit the same wall.
And the analogy has a limit that matters more than I've let on so far. Pathogens mutate, but they don't study the immune system's architecture and deliberately craft evasions. Threat actors do. Living-off-the-land techniques are already this problem in practice: attackers using legitimate tools, legitimate credentials, and legitimate network paths precisely because they know those patterns won't trigger detections. An adversary who understands your context model can craft activity that looks contextually normal. A graph gives you better questions to ask about what's happening in your environment. It does not give you guaranteed answers. The PsExec example from earlier cuts both ways: a graph can distinguish the sysadmin from the marketing intern, but a compromised sysadmin account running PsExec on the servers it's supposed to manage will look perfectly normal to the graph too.
Two things keep it tractable.
An incomplete graph is still more useful than no graph. You don't need to map every edge before you start contextualizing. Start with the noisiest alert sources, build the relationships around them, expand outward. Each new relationship makes connected indicators more meaningful.
And the economics compound in a way that subscription feeds don't. The marginal value of another threat intel feed drops fast once your SOC is saturated. A new relationship in the graph works differently because it enriches every connected node. Add data-classification edges to assets you've already mapped, and every existing IOC-to-asset path gains priority context without a single new detection rule. Add user-to-role edges, and the PsExec alert resolves itself: the graph already knows whether the user is a sysadmin or an intern.
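The compounding claim can be shown in a few lines. Adding one edge type (asset to data classification) re-prioritizes every IOC path already in the graph, with no new detection rules. All data below is hypothetical:

```python
# Sketch of compounding enrichment: one new edge type (asset -> data
# classification) changes the priority of every existing IOC path.
# IOC hits and classifications are hypothetical examples.

ioc_hits = {  # IOC -> assets that communicated with it (already mapped)
    "198.51.100.23": ["srv-app-02", "dev-sandbox-1"],
    "203.0.113.9":   ["dev-sandbox-1"],
}

# the newly added edges: asset -[STORES]-> data classification
asset_data = {"srv-app-02": "PII", "dev-sandbox-1": "test"}

def priority(ioc: str) -> str:
    classes = {asset_data.get(a) for a in ioc_hits.get(ioc, [])}
    return "critical" if "PII" in classes else "investigate"

for ioc in ioc_hits:
    print(ioc, priority(ioc))
# 198.51.100.23 critical
# 203.0.113.9 investigate
```

Neither IOC changed. One edge type was added, and both paths gained priority context for free.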
Organizations spending heavily on premium threat intel subscriptions while running a SIEM with no environment topology are optimizing the wrong variable.
Instructions without infrastructure
Karikó and Weissman made the instructions compatible with infrastructure that already existed. The mRNA delivered what to detect. The immune network handled everything else.
Threat intelligence needs the same partnership. The feeds deliver indicators. A graph delivers the context that makes them meaningful. Without that context, each new feed adds noise. With it, each feed extends what your environment can recognize and respond to on its own.
This article was originally published on Medium.

