[{"data":1,"prerenderedAt":1178},["ShallowReactive",2],{"article-\u002Fblog\u002Fthreat_intel_mrna":3,"related-\u002Fblog\u002Fthreat_intel_mrna":282},{"id":4,"title":5,"author":6,"body":14,"date":262,"description":263,"extension":264,"image":265,"meta":266,"navigation":270,"path":271,"seo":272,"stem":273,"tags":274,"__hash__":281},"blog\u002Fblog\u002Fthreat_intel_mrna.md","Threat Intelligence and the mRNA Problem: When Good Instructions Meet Missing Infrastructure",{"name":7,"headshot":8,"role":9,"contact":10},"Levente Simon","\u002Fheadshots\u002FLS.jpeg","Creator of Dethernety",{"linkedin":11,"email":12,"twitter":13},"https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Flevente-simon\u002F","levente.simon@dether.net","https:\u002F\u002Fx.com\u002FLevente_Simon",{"type":15,"value":16,"toc":255},"minimark",[17,21,28,35,41,44,49,58,61,64,67,82,85,88,92,95,98,101,104,115,119,122,125,185,188,191,198,201,205,208,211,214,217,220,223,226,230,233,236,239,251],[18,19,5],"h1",{"id":20},"threat-intelligence-and-the-mrna-problem-when-good-instructions-meet-missing-infrastructure",[22,23,24],"p",{},[25,26,27],"em",{},"mRNA vaccines worked because they were designed for a body that already had regulatory infrastructure. Threat intelligence assumes the same about your SOC.",[22,29,30,31,34],{},"Molecular biologists had been working with messenger RNA for decades before the first mRNA vaccine shipped. Katalin Karikó and Drew Weissman's contribution was figuring out how to make synthetic instructions ",[25,32,33],{},"compatible"," with the immune system's existing machinery. The body had been rejecting unmodified mRNA as foreign junk. Their fix, the one that won the Nobel Prize, was making the message look native enough that the immune system would accept and act on it. The Nobel Prize was for compatibility, not novelty.",[22,36,37,38,40],{},"That's the word that matters: ",[25,39,33],{},". mRNA doesn't replace the immune system. It depends on it entirely. 
It delivers what to detect; the immune network handles everything else: self-tolerance, response calibration, memory, adaptation.",[22,42,43],{},"Threat intelligence makes the same bet. New threat appears? Ship a new IOC. New malware variant? Update the YARA rule. New C2 infrastructure? Push the IP list. The instruction is the product. The assumption is that whatever receives it has the infrastructure to do something intelligent with it.",[45,46,48],"h2",{"id":47},"the-regulation-gap","The regulation gap",[22,50,51,52,57],{},"I've ",[53,54,56],"a",{"href":55},"\u002Fblog\u002Fimmune_network_theory","written before"," about Jerne's Immune Network Theory and what it means for security operations, so I won't retread the immunology here. The relevant point: mRNA succeeded because it delivered instructions into an existing regulatory network. The instructions were elegant, but the network did the hard work.",[22,59,60],{},"Most SOCs don't have the equivalent network. The SANS 2025 CTI Survey found that 90% of organizations consume external threat intelligence, most from multiple feeds simultaneously. But the same survey found the majority can't make that intelligence actionable. The pipeline is wide open at the intake and clogged at the output. Vectra's 2024 survey of 2,000 SOC practitioners puts a number on the clog: 4,484 alerts per day on average, 62% ignored. Vendor surveys carry bias, but the pattern shows up consistently, and the burnout is hard to argue with.",[22,62,63],{},"The downstream effects are predictable. Analysts burn a third of their day on false positives. The infrastructure demands something no one can provide manually at that volume.",[22,65,66],{},"Consider what happens when a C2 IP address from a threat intel feed hits your SIEM. The detection rule matches. Alert fires. But that IP is a Cloudflare endpoint that also serves legitimate traffic for half your SaaS applications. 
A SOC with a baseline map of normal traffic patterns would recognize this and suppress the alert. Without that baseline, an analyst spends twenty minutes confirming what the environment topology could have told them instantly.",[22,68,69,70,73,74,77,78,81],{},"Now take a subtler case. A file hash flagged as malware appears on an endpoint. The hash belongs to PsExec, a legitimate Microsoft administration tool. Your IT team uses it daily. A TIP platform might have a note from three months ago marking this hash as a known false positive. But does that note connect to ",[25,71,72],{},"which teams"," use PsExec, ",[25,75,76],{},"on which machines",", for ",[25,79,80],{},"what purposes","? If a marketing intern runs PsExec on the domain controller, the answer should be different than when a sysadmin runs it on a server they maintain. Legitimacy isn't a property of the tool. It's a property of the relationship between the tool, the user, the target, and the context, every edge a flat detection rule can't see.",[22,83,84],{},"The immune system handles this naturally. An immune cell doesn't just ask \"is this molecule foreign?\" It evaluates the molecule in context: where it appears, what signals the surrounding tissue is producing, whether the overall pattern suggests danger. mRNA only had to deliver the target because the network already provided the judgment.",[22,86,87],{},"What provides the judgment in your SOC?",[45,89,91],{"id":90},"where-soar-playbooks-hit-the-wall","Where SOAR playbooks hit the wall",[22,93,94],{},"The industry hasn't been standing still. TIP platforms maintain context around indicators: confidence scores, decay timelines, analyst annotations. UEBA tools baseline user behavior and flag anomalies. SOAR playbooks automate enrichment and response. 
Organizations running these well are better off than those piping raw feeds into a SIEM.",[22,96,97],{},"SOAR is worth looking at closely, because it's the tool that most explicitly tries to be the infrastructure for threat intel. A SOAR playbook for a suspicious IP might: query VirusTotal for reputation, check the CMDB for which asset generated the alert, look up the user in Active Directory, check whether the IP appears in a known CDN range, and decide whether to escalate or suppress. That's real work.",[22,99,100],{},"The problem is that the playbook only handles scenarios the author anticipated. Someone had to predict that CDN IPs would be a source of false positives and write the CDN-check step. Someone had to predict that the asset's data classification matters and write the CMDB lookup. Each scenario is a hand-coded decision tree. When the alert doesn't match a tree someone already built, it falls through to an analyst, and by then the analyst is already drowning.",[22,102,103],{},"A SOAR playbook can tell you that the IP is in a CDN range and the asset is a development server. It can't tell you that the development server has a misconfigured firewall rule granting it a network path to your production PII database, a path the playbook author never imagined and never wrote a check for. The playbook handles known patterns. It can't discover unknown relationships.",[22,105,106,107,110,111,114],{},"TIPs and UEBA hit the same wall from different angles. A TIP annotates the ",[25,108,109],{},"indicator"," but knows nothing about the environment it landed in. UEBA baselines individual ",[25,112,113],{},"entities"," but can't connect one entity's anomaly to another's. There's no way to ask \"show me everything acting unusual along this particular path.\" Each tool sees one layer. 
The relationships between layers are where context lives, and no single tool models them.",[45,116,118],{"id":117},"what-graph-native-contextualization-looks-like","What graph-native contextualization looks like",[22,120,121],{},"When a threat intel indicator arrives, it shouldn't land in a table. It should land in your environment graph.",[22,123,124],{},"A C2 IP address is a string to your SIEM. In a graph, it becomes a node connected to your topology:",[126,127,132],"pre",{"className":128,"code":129,"language":130,"meta":131,"style":131},"language-cypher shiki shiki-themes github-dark github-dark","MATCH (ioc:ThreatIntel {type: 'ip', value: '198.51.100.23'})\nOPTIONAL MATCH (ioc)\u003C-[:COMMUNICATES_WITH]-(asset:Asset)\nOPTIONAL MATCH (asset)-[:HOSTS]->(app:Application)\nOPTIONAL MATCH (asset)-[:STORES]->(data:Data)\nRETURN ioc, asset, app, data,\n       CASE WHEN asset IS NULL THEN 'no exposure'\n            WHEN data.classification = 'PII' THEN 'critical'\n            ELSE 'investigate' END AS priority\n","cypher","",[133,134,135,143,149,155,161,167,173,179],"code",{"__ignoreMap":131},[136,137,140],"span",{"class":138,"line":139},"line",1,[136,141,142],{},"MATCH (ioc:ThreatIntel {type: 'ip', value: '198.51.100.23'})\n",[136,144,146],{"class":138,"line":145},2,[136,147,148],{},"OPTIONAL MATCH (ioc)\u003C-[:COMMUNICATES_WITH]-(asset:Asset)\n",[136,150,152],{"class":138,"line":151},3,[136,153,154],{},"OPTIONAL MATCH (asset)-[:HOSTS]->(app:Application)\n",[136,156,158],{"class":138,"line":157},4,[136,159,160],{},"OPTIONAL MATCH (asset)-[:STORES]->(data:Data)\n",[136,162,164],{"class":138,"line":163},5,[136,165,166],{},"RETURN ioc, asset, app, data,\n",[136,168,170],{"class":138,"line":169},6,[136,171,172],{},"       CASE WHEN asset IS NULL THEN 'no exposure'\n",[136,174,176],{"class":138,"line":175},7,[136,177,178],{},"            WHEN data.classification = 'PII' THEN 'critical'\n",[136,180,182],{"class":138,"line":181},8,[136,183,184],{},"            ELSE 
'investigate' END AS priority\n",[22,186,187],{},"The query asks which assets communicated with that IP, what those assets host, and what data they store. An IOC that connects to a development sandbox running test data gets a different response than one connecting to a production database with customer records. Same indicator, different context, different priority.",[22,189,190],{},"Unlike the SOAR playbook, this traversal works for any IOC against any topology without someone hand-coding each scenario. New indicator arrives, same traversal, automatic contextualization.",[22,192,193,194,197],{},"Analyst decisions become structural too. When an analyst confirms an IOC is a false positive because the IP belongs to a known CDN, that relationship is encoded: ",[133,195,196],{},"CDN_Provider -[HOSTS]-> IP_Address -[FALSE_POSITIVE_FOR]-> ThreatIntel_IOC",". The next time an IOC arrives for an IP in the same ASN or CIDR range, the graph already has context. That relationship is traversable and changes how future queries behave.",[22,199,200],{},"A TIP with good analyst workflows can build real institutional knowledge around individual indicators. But that knowledge stays attached to the indicator. In a graph, the same decisions become relationships in the topology itself, where they affect every connected query going forward.",[45,202,204],{"id":203},"the-real-cost","The real cost",[22,206,207],{},"None of this is free. Pretending otherwise would repeat the same mistake the threat intel market makes: selling the instruction while glossing over the infrastructure it requires.",[22,209,210],{},"Building an environment graph means mapping assets, applications, data flows, access patterns, and the relationships between them. It means maintaining that map as the environment changes. Gartner's 2022 Market Guide for UEBA noted that many deployments stall after initial setup because maintaining behavioral baselines is operationally expensive. 
Graph-based initiatives hit the same wall.",[22,212,213],{},"And the analogy has a limit that matters more than I've let on so far. Pathogens mutate, but they don't study the immune system's architecture and deliberately craft evasions. Threat actors do. Living-off-the-land techniques are already this problem in practice: attackers using legitimate tools, legitimate credentials, and legitimate network paths precisely because they know those patterns won't trigger detections. An adversary who understands your context model can craft activity that looks contextually normal. A graph gives you better questions to ask about what's happening in your environment. It does not give you guaranteed answers. The PsExec example from earlier cuts both ways: a graph can distinguish the sysadmin from the marketing intern, but a compromised sysadmin account running PsExec on the servers it's supposed to manage will look perfectly normal to the graph too.",[22,215,216],{},"Two things keep it tractable.",[22,218,219],{},"An incomplete graph is still more useful than no graph. You don't need to map every edge before you start contextualizing. Start with the noisiest alert sources, build the relationships around them, expand outward. Each new relationship makes connected indicators more meaningful.",[22,221,222],{},"And the economics compound in a way that subscription feeds don't. The marginal value of another threat intel feed drops fast once your SOC is saturated. A new relationship in the graph works differently because it enriches every connected node. Add data-classification edges to assets you've already mapped, and every existing IOC-to-asset path gains priority context without a single new detection rule. 
Add user-to-role edges, and the PsExec alert resolves itself: the graph already knows whether the user is a sysadmin or an intern.",[22,224,225],{},"Organizations spending heavily on premium threat intel subscriptions while running a SIEM with no environment topology are optimizing the wrong variable.",[45,227,229],{"id":228},"instructions-without-infrastructure","Instructions without infrastructure",[22,231,232],{},"Karikó and Weissman made the instructions compatible with infrastructure that already existed. The mRNA delivered what to detect. The immune network handled everything else.",[22,234,235],{},"Threat intelligence needs the same partnership. The feeds deliver indicators. A graph delivers the context that makes them meaningful. Without that context, each new feed adds noise. With it, each feed extends what your environment can recognize and respond to on its own.",[237,238],"hr",{},[22,240,241],{},[25,242,243,244,250],{},"This article was originally published on  ",[53,245,249],{"href":246,"rel":247},"https:\u002F\u002Fmedium.com\u002F@levente.simon\u002Fyour-threat-intel-feed-is-an-mrna-vaccine-your-soc-doesnt-have-an-immune-system-7903dcaa3907",[248],"nofollow","Medium",".",[252,253,254],"style",{},"html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html.dark .shiki span {color: var(--shiki-dark);background: 
var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}",{"title":131,"searchDepth":145,"depth":145,"links":256},[257,258,259,260,261],{"id":47,"depth":145,"text":48},{"id":90,"depth":145,"text":91},{"id":117,"depth":145,"text":118},{"id":203,"depth":145,"text":204},{"id":228,"depth":145,"text":229},"2026-03-02","Most SOCs consume threat intel feeds but lack the contextual infrastructure to make them actionable. Drawing on the mRNA vaccine analogy, this article argues that graph-native environment models are the missing regulatory network your indicators depend on.","md","\u002Fimages\u002Fblog\u002Fthreat-intel-mrna.jpg",{"audio":267,"category":268},"\u002Faudio\u002Fthreat_intel_mrna.mp3",[269],"Thinking in Graphs",true,"\u002Fblog\u002Fthreat_intel_mrna",{"title":5,"description":263},"blog\u002Fthreat_intel_mrna",[275,276,277,278,279,280],"threat intelligence","graph theory","IOC","STIX","soc","immune system","ey_YOmyb4lzJOdW0ea7Nj3pXwKOFIFMNc6egoOUOrJ8",[283,895],{"id":284,"title":285,"author":286,"body":289,"date":879,"description":880,"extension":264,"image":881,"meta":882,"navigation":270,"path":886,"seo":887,"stem":888,"tags":889,"__hash__":894},"blog\u002Fblog\u002Fthe_lost_science.md","The Lost Science: How We Forgot Security Was a Graph Problem",{"name":7,"headshot":8,"role":287,"contact":288},"creator of 
dethernety",{"linkedin":11,"email":12,"twitter":13},{"type":15,"value":290,"toc":855},[291,294,299,302,308,311,314,318,321,326,329,334,340,347,364,371,375,378,381,384,398,401,404,408,414,417,432,435,461,467,474,478,484,487,494,497,501,504,507,511,514,517,521,524,527,530,533,537,540,554,557,560,563,567,570,651,654,657,661,664,668,671,694,701,704,708,711,718,722,725,728,735,738,749,753,756,763,770,773,779,789,793,796,799,802,805,809,812,815,817,821,824,841,846],[18,292,285],{"id":293},"the-lost-science-how-we-forgot-security-was-a-graph-problem",[22,295,296],{},[25,297,298],{},"In 1976, computer scientists proved that security is a graph traversal problem. Then we forgot. Here's why, and why it matters now.",[22,300,301],{},"If you ask a modern security architect how access control works, they'll describe Access Control Lists: users, groups, permissions, roles. Flat tables. Lookup operations.",[22,303,304,305],{},"But if you asked the same question to a computer scientist in 1976, they would have drawn you a graph. Nodes for subjects and objects. Directed edges for rights. And they would have told you: ",[25,306,307],{},"\"Security is the question of whether a path exists.\"",[22,309,310],{},"We knew this. We proved it. Then we forgot it. It was not wrong, but we couldn't afford to compute it.",[22,312,313],{},"This is the story of how security became a graph problem, why we abandoned that insight, and why graph databases now let us pick it back up.",[45,315,317],{"id":316},"the-golden-age-when-security-was-mathematics-1973-1983","The golden age: when security was mathematics (1973-1983)",[22,319,320],{},"The early 1970s were an unusual period for computer security. The field wasn't dominated by vendors selling products. It was dominated by mathematicians asking fundamental questions:",[22,322,323],{},[25,324,325],{},"\"Can we prove that a system is secure?\"",[22,327,328],{},"The answers they developed weren't heuristics or best practices. 
They were formal models: mathematical frameworks that could provide actual guarantees. And almost all of them were graph problems.",[330,331,333],"h3",{"id":332},"bell-lapadula-confidentiality-as-information-flow-1973","Bell-LaPadula: confidentiality as information flow (1973)",[22,335,336,337],{},"David Elliott Bell and Leonard LaPadula, working for MITRE under a US Air Force contract, asked: ",[25,338,339],{},"\"How do we prevent classified information from leaking to unauthorized users?\"",[22,341,342,343,346],{},"They modeled security clearances as a lattice—a mathematical structure where elements have a defined ordering (",[133,344,345],{},"Top Secret > Secret > Confidential > Unclassified","). Then they defined two simple rules:",[348,349,350,358],"ol",{},[351,352,353,357],"li",{},[354,355,356],"strong",{},"No Read Up (Simple Security):"," A subject cannot read an object at a higher classification level.",[351,359,360,363],{},[354,361,362],{},"No Write Down (Star Property):"," A subject cannot write to an object at a lower classification level.",[22,365,366,367,370],{},"Information flow is directional. If you model clearances as nodes and permitted flows as edges, then a security violation is simply ",[25,368,369],{},"a path that shouldn't exist",". 
Confidentiality becomes a graph reachability problem.",[330,372,374],{"id":373},"biba-the-integrity-inverse-1977","Biba: the integrity inverse (1977)",[22,376,377],{},"Kenneth Biba, working at MITRE on a different problem, realized that integrity is the mirror image of confidentiality.",[22,379,380],{},"Where Bell-LaPadula asks \"Can secrets leak down?\", Biba asks \"Can corruption flow up?\"",[22,382,383],{},"His model inverted the rules:",[348,385,386,392],{},[351,387,388,391],{},[354,389,390],{},"No Read Down:"," A subject cannot read from a lower integrity level (don't trust untrusted data).",[351,393,394,397],{},[354,395,396],{},"No Write Up:"," A subject cannot write to a higher integrity level (don't corrupt trusted data).",[22,399,400],{},"Same lattice structure. Same graph problem. Different direction of concern.",[22,402,403],{},"Together, Bell-LaPadula and Biba showed that both confidentiality and integrity could be modeled as constrained information flow on a graph. Security was about proving that certain paths could never exist. 
Not checking permissions.",[330,405,407],{"id":406},"take-grant-security-as-graph-rewriting-1976","Take-Grant: security as graph rewriting (1976)",[22,409,410,411],{},"While Bell-LaPadula and Biba focused on information flow, Jones, Lipton, and Snyder asked a different question: ",[25,412,413],{},"\"How do permissions propagate?\"",[22,415,416],{},"Their Take-Grant model was explicitly a directed graph:",[418,419,420,426],"ul",{},[351,421,422,425],{},[354,423,424],{},"Nodes:"," Subjects (users, processes) and Objects (files, resources)",[351,427,428,431],{},[354,429,430],{},"Edges:"," Rights (read, write, take, grant)",[22,433,434],{},"The model defined four operations that could modify the graph:",[418,436,437,443,449,455],{},[351,438,439,442],{},[354,440,441],{},"Take:"," If A has \"take\" rights to B, and B has rights to C, then A can acquire B's rights to C.",[351,444,445,448],{},[354,446,447],{},"Grant:"," If A has \"grant\" rights to B, A can give B any rights that A possesses.",[351,450,451,454],{},[354,452,453],{},"Create:"," A subject can create new nodes.",[351,456,457,460],{},[354,458,459],{},"Remove:"," A subject can remove edges it controls.",[22,462,463,464],{},"The security question became: ",[25,465,466],{},"\"Given an initial graph and these rewriting rules, can subject X ever acquire right R to object Y?\"",[22,468,469,470,473],{},"This is pure graph theory, and the result: ",[354,471,472],{},"the safety problem in Take-Grant is decidable in linear time",". You can actually prove whether a right can ever leak.",[330,475,477],{"id":476},"harrison-ruzzo-ullman-the-limits-of-decidability-1976","Harrison-Ruzzo-Ullman: the limits of decidability (1976)",[22,479,480,481,250],{},"Not all news was good. 
Harrison, Ruzzo, and Ullman studied a more general access control model and proved a sobering result: ",[354,482,483],{},"in the general case, the safety problem is undecidable",[22,485,486],{},"You cannot write an algorithm that will always correctly determine whether a given access control system can ever reach an unsafe state.",[22,488,489,490,493],{},"But they also showed that restricted models ",[25,491,492],{},"are"," decidable. If you constrain the graph structure (limit the operations, bound the complexity), you can recover formal guarantees.",[22,495,496],{},"The theorists had mapped the terrain: security is a graph problem, and the key question is whether your graph is constrained enough to be analyzable.",[45,498,500],{"id":499},"the-pragmatic-retreat-why-we-over-corrected-1980s-2000s","The pragmatic retreat: why we over-corrected (1980s-2000s)",[22,502,503],{},"If security was solved in theory by 1983, why are we still struggling with access control in 2026?",[22,505,506],{},"Two forces pushed at once, and we over-corrected for both.",[330,508,510],{"id":509},"the-hardware-reality","The hardware reality",[22,512,513],{},"In 1976, a PDP-11 had 64KB of RAM. Graph traversal algorithms that are trivial today were prohibitively expensive. Running a Take-Grant safety analysis on a real system with thousands of users and objects? Impossible.",[22,515,516],{},"The formal models were correct but practically useless.",[330,518,520],{"id":519},"the-models-own-limitations","The models' own limitations",[22,522,523],{},"Computational cost wasn't the only problem. The formal models themselves had rough edges that made practitioners skeptical.",[22,525,526],{},"Bell-LaPadula's \"tranquility\" property required that security labels never change during operation. That was too rigid for real systems where users legitimately need to reclassify data. 
Biba's integrity lattice couldn't capture what commercial systems actually needed: separation of duties, well-formed transactions, audit trails.",[22,528,529],{},"Clark and Wilson recognized this in 1987. Their integrity model abandoned the lattice approach entirely. Real-world integrity, they argued, isn't about information flow direction. It's about ensuring that data is only modified through authorized, validated procedures. They were right about that.",[22,531,532],{},"But the lesson the industry drew was broader than Clark and Wilson intended. The takeaway wasn't \"lattice models need refinement.\" It was \"formal models don't work in practice.\" That was the over-correction. The models needed better hardware and more nuance. Instead, we threw out the graph entirely.",[330,534,536],{"id":535},"the-compromise-flatten-the-graph","The compromise: flatten the graph",[22,538,539],{},"System designers needed something that would actually run. Their solution was to flatten the graph into tables:",[418,541,542,548],{},[351,543,544,547],{},[354,545,546],{},"Access Control Lists (ACLs):"," For each object, list who can access it.",[351,549,550,553],{},[354,551,552],{},"Capability Lists:"," For each subject, list what they can access.",[22,555,556],{},"Both are projections of the underlying graph onto flat structures. 
They're fast to query (O(1) lookup), easy to store, and simple to understand.",[22,558,559],{},"But they can't answer path questions.",[22,561,562],{},"An ACL can tell you \"Alice has read access to File X.\" It cannot tell you \"If Alice compromises Service A, can she eventually reach Database Z?\" That's a multi-hop traversal, exactly what ACLs were designed not to compute.",[330,564,566],{"id":565},"what-we-lost","What we lost",[22,568,569],{},"When we flattened the graph, we lost the ability to answer the questions that actually matter:",[571,572,573,589],"table",{},[574,575,576],"thead",{},[577,578,579,583,586],"tr",{},[580,581,582],"th",{},"Question",[580,584,585],{},"Graph Model",[580,587,588],{},"ACL Model",[590,591,592,603,618,629,640],"tbody",{},[577,593,594,598,601],{},[595,596,597],"td",{},"\"Can Alice access File X?\"",[595,599,600],{},"Trivial",[595,602,600],{},[577,604,605,612,615],{},[595,606,607,608,611],{},"\"Can Alice ",[25,609,610],{},"ever"," reach Database Z?\"",[595,613,614],{},"Path query",[595,616,617],{},"Manual analysis",[577,619,620,623,626],{},[595,621,622],{},"\"If we add this permission, what new paths open?\"",[595,624,625],{},"Graph diff",[595,627,628],{},"Unknown",[577,630,631,634,637],{},[595,632,633],{},"\"Where are the transitive trust relationships?\"",[595,635,636],{},"Traversal",[595,638,639],{},"Invisible",[577,641,642,645,648],{},[595,643,644],{},"\"Is our permission structure safe?\"",[595,646,647],{},"Decidable (constrained)",[595,649,650],{},"Undecidable (in practice, unknown)",[22,652,653],{},"We traded formal guarantees for performance and simplicity. For 30 years, that was a reasonable trade.",[22,655,656],{},"But the constraints that forced it have been gone for over a decade. 
At some point, a reasonable compromise becomes an unexamined habit.",[45,658,660],{"id":659},"the-irony-we-rebuilt-the-graph-badly","The irony: we rebuilt the graph (badly)",[22,662,663],{},"The irony is that modern enterprise security has recreated the graph problem at massive scale while still pretending we have flat ACLs.",[330,665,667],{"id":666},"cloud-iam-the-implicit-graph","Cloud IAM: the implicit graph",[22,669,670],{},"Look at AWS IAM:",[418,672,673,676,679,682,688],{},[351,674,675],{},"IAM Users and Roles are subjects",[351,677,678],{},"Resources (S3 buckets, EC2 instances, Lambda functions) are objects",[351,680,681],{},"Policies define edges",[351,683,684,687],{},[354,685,686],{},"AssumeRole"," is literally the \"take\" operation from Take-Grant",[351,689,690,693],{},[354,691,692],{},"Resource-based policies"," create cross-account edges",[22,695,696,697,700],{},"AWS IAM ",[25,698,699],{},"is"," a graph. But AWS gives you no native tools to query it as one. You get the IAM Policy Simulator: a point query tool in a world that needs path analysis.",[22,702,703],{},"So security teams discover that a misconfigured role in Account A can assume into Account B, which has a policy that allows access to Account C's production database. Three hops. Invisible to any single ACL review.",[330,705,707],{"id":706},"the-kubernetes-permission-graph","The Kubernetes permission graph",[22,709,710],{},"Kubernetes might be the worst offender. ServiceAccounts, Roles, RoleBindings, ClusterRoles, ClusterRoleBindings. All edges in a graph. Namespace boundaries create subgraphs. Pod security contexts add more nodes.",[22,712,713,714,717],{},"And the graph has hidden edges that RBAC doesn't model. A ServiceAccount with permission to list Secrets in a namespace can read every token stored there, including tokens for more privileged ServiceAccounts. 
That's a path through two different edge types (RBAC grants access to Secrets, Secrets contain credentials) that no single ",[133,715,716],{},"kubectl auth can-i"," check will ever surface. It's Biba's integrity problem in miniature: low-trust workloads reading their way up to high-trust credentials.",[330,719,721],{"id":720},"active-directory-the-original-sin","Active Directory: the original sin",[22,723,724],{},"Active Directory has been a graph since 1999. Users, Groups, OUs, GPOs, Trust Relationships. All edges in a directed graph. Nested group memberships create transitive paths. Trust relationships create cross-domain paths.",[22,726,727],{},"Every AD privilege escalation attack (Kerberoasting, DCSync, Golden Ticket paths) is a graph traversal exploit. The attackers know this. In 2016, the BloodHound project made it explicit: ingest AD relationships, build a directed graph, find the shortest path to Domain Admin. It works devastatingly well precisely because it models AD as what it actually is.",[22,729,730,731,734],{},"Defenders, meanwhile, run ",[133,732,733],{},"Get-ADUser"," queries and review group memberships in spreadsheets.",[22,736,737],{},"We've spent 25 years defending against graph attacks with table tools.",[22,739,740,741,744,745,748],{},"BloodHound has been open-source since 2016. Defenders can use it too. But BloodHound answers an attacker's question: ",[25,742,743],{},"\"What's the shortest path to Domain Admin?\""," The defensive inverse — ",[25,746,747],{},"\"show me everything that can reach our crown jewels, continuously, across every environment\""," — needs different tooling and a different architectural commitment. One most security teams haven't made, because nobody is selling it to them.",[45,750,752],{"id":751},"the-return-graph-databases-make-this-practical","The return: graph databases make this practical",[22,754,755],{},"In 2007, Neo4j released the first production graph database. By 2015, graph databases were mainstream. 
The computational barrier that forced us to abandon formal models in the 1980s no longer exists.",[22,757,758,759,762],{},"Graph traversal that was impossible on a PDP-11 now runs in milliseconds on commodity hardware. A path query that answers ",[25,760,761],{},"\"Can Principal X ever reach Resource Y through any chain of permissions?\""," is a single Cypher statement. Tools like AWS Access Analyzer have started nibbling at this problem, but they're still point queries against specific policy combinations, not full path traversals across trust boundaries.",[22,764,765,766,769],{},"The difference matters. A point query tells you whether one specific permission is granted. A graph query tells you whether a ",[25,767,768],{},"path"," exists that you never intended to create. The three-hop AWS role chain, the Kubernetes Secret that bridges two privilege levels, the nested AD group that grants Domain Admin through six degrees of membership. These are all paths. They're invisible to point queries and obvious to graph traversal.",[22,771,772],{},"The 1970s papers showed that if you model your system as a graph with appropriate constraints, you can prove security properties. For 40 years, nobody had the hardware to act on that. Now we do. A graph database holding your IAM policies, network topology, trust relationships, and data flows can answer questions that no combination of ACLs, spreadsheets, and manual reviews can touch.",[22,774,775,776,778],{},"The safety guarantees that Bell, LaPadula, Biba, and the Take-Grant authors described are implementable now, provided you constrain the model appropriately. The HRU undecidability result still holds for the general case. But most real systems ",[25,777,492],{}," constrained, and that's exactly where the formal results apply.",[22,780,781,782,785,786,788],{},"Some vendors are catching on. Wiz builds attack graphs across cloud environments. XM Cyber models attacker paths to critical assets. 
These are real steps forward — they ask path questions, not point questions. But they solve half the original problem: they find paths that exist ",[25,783,784],{},"right now",". The formal question the 1970s models posed was stronger: can this system ",[25,787,610],{}," reach an unsafe state? That's the difference between a snapshot and a proof. Graph databases give us the machinery for both. The industry has mostly picked up the first half. The mainstream vendor ecosystem is still selling better ACL management.",[45,790,792],{"id":791},"the-question-we-should-be-asking","The question we should be asking",[22,794,795],{},"The security industry has spent two decades building increasingly sophisticated ACL management tools. Better UIs for permission tables. More granular RBAC. More complex policy languages. All of it optimizing the lookup.",[22,797,798],{},"None of it asks whether the path exists.",[22,800,801],{},"The 1970s theorists were decades ahead of the hardware. They understood that security is about paths, flows, and reachability. They built formal models to prove it, and then had to shelve those models because nothing could run them fast enough.",[22,803,804],{},"The hardware caught up 15 years ago. The question is why we're still pretending that flattening a graph into ACLs is anything other than a legacy compromise.",[330,806,808],{"id":807},"whats-left","What's left",[22,810,811],{},"That said, recognizing the problem and fixing it are different things. Graph databases are mature. A few vendors are asking path questions. The attack side has been thinking in graphs for a decade. But this still isn't how most defenders work.",[22,813,814],{},"The theory has been there since the 1970s. The compute is there now. The attackers figured it out a decade ago. 
We're still waiting for the defenders to close the loop.",[237,816],{},[45,818,820],{"id":819},"references-further-reading","References & further reading",[22,822,823],{},"The original papers, if you're curious:",[418,825,826,829,832,835,838],{},[351,827,828],{},"Bell, D.E. & LaPadula, L.J. (1973). \"Secure Computer Systems: Mathematical Foundations\" - MITRE Technical Report",[351,830,831],{},"Biba, K.J. (1977). \"Integrity Considerations for Secure Computer Systems\" - MITRE Technical Report",[351,833,834],{},"Harrison, M.A., Ruzzo, W.L., & Ullman, J.D. (1976). \"Protection in Operating Systems\" - Communications of the ACM",[351,836,837],{},"Clark, D.D. & Wilson, D.R. (1987). \"A Comparison of Commercial and Military Computer Security Policies\" - IEEE Symposium on Security and Privacy",[351,839,840],{},"Lipton, R.J. & Snyder, L. (1977). \"A Linear Time Algorithm for Deciding Subject Security\" - Journal of the ACM",[22,842,843],{},[25,844,845],{},"These papers are freely available and shorter than you'd expect. 
The notation looks dated, but the proofs hold.",[22,847,848],{},[25,849,243,850,250],{},[53,851,854],{"href":852,"rel":853},"https:\u002F\u002Fleventesimon.com\u002Finsights\u002Fthe_lost_science",[248],"leventesimon.com",{"title":131,"searchDepth":145,"depth":145,"links":856},[857,863,869,874,875,878],{"id":316,"depth":145,"text":317,"children":858},[859,860,861,862],{"id":332,"depth":151,"text":333},{"id":373,"depth":151,"text":374},{"id":406,"depth":151,"text":407},{"id":476,"depth":151,"text":477},{"id":499,"depth":145,"text":500,"children":864},[865,866,867,868],{"id":509,"depth":151,"text":510},{"id":519,"depth":151,"text":520},{"id":535,"depth":151,"text":536},{"id":565,"depth":151,"text":566},{"id":659,"depth":145,"text":660,"children":870},[871,872,873],{"id":666,"depth":151,"text":667},{"id":706,"depth":151,"text":707},{"id":720,"depth":151,"text":721},{"id":751,"depth":145,"text":752},{"id":791,"depth":145,"text":792,"children":876},[877],{"id":807,"depth":151,"text":808},{"id":819,"depth":145,"text":820},"2026-03-09","In the 1970s, we proved security is a graph problem. Then we abandoned the math for flat ACLs. 
Now graph databases let us pick it back up.","\u002Fimages\u002Fblog\u002Fthe_lost_science.jpg",{"audio":883,"audioLabel":884,"category":885},"\u002Faudio\u002FGraph_security_versus_access_control_lists.mp3","AI-generated debate",[269],"\u002Fblog\u002Fthe_lost_science",{"title":285,"description":880},"blog\u002Fthe_lost_science",[276,890,891,892,893],"access control","security history","formal methods","security architecture","QrTxjmWfVSyFETf4dCC04tPBpWzk-yjX25RrImBUvO4",{"id":896,"title":897,"author":898,"body":900,"date":1163,"description":1164,"extension":264,"image":1165,"meta":1166,"navigation":270,"path":1169,"seo":1170,"stem":1171,"tags":1172,"__hash__":1177},"blog\u002Fblog\u002Ftmi2_alarm_flood.md","TMI2 Alarm Flood",{"name":7,"headshot":8,"role":287,"contact":899},{"linkedin":11,"email":12,"twitter":13},{"type":15,"value":901,"toc":1154},[902,906,911,914,917,920,926,929,936,940,943,946,949,952,955,958,962,969,972,979,982,985,988,991,995,998,1001,1004,1011,1014,1020,1023,1026,1030,1033,1040,1043,1046,1049,1053,1056,1059,1062,1065,1068,1082,1085,1088,1095,1099,1106,1109,1112,1115,1118,1122,1125,1128,1131,1134,1136,1145,1147],[18,903,905],{"id":904},"_847-alarms-at-4-am","847 Alarms at 4 AM",[22,907,908],{},[25,909,910],{},"At 4:00 AM on March 28, 1979, a pressure relief valve stuck open at Three Mile Island Unit 2. What followed was the most studied nuclear accident in American history, and it had almost nothing to do with the valve.",[22,912,913],{},"The reactor's safety systems did exactly what they were designed to do. The SCRAM triggered. Emergency coolant activated. Alarms sounded.",[22,915,916],{},"All of them. At once.",[22,918,919],{},"Over 100 alarms fired in the first few minutes. The control room had no alarm prioritization. No way to suppress low-relevance alerts or separate the critical from the routine. 
Every indicator screamed with equal urgency: the critical and the routine, the cause and the symptom, the thing that mattered and the hundred things that didn't.",[22,921,922,923],{},"The operators stood in front of a wall of flashing lights and had to answer one question: ",[25,924,925],{},"what do we fix first?",[22,927,928],{},"They got it wrong. Instruments showed high water levels in the pressurizer, and the operators, unable to distinguish cause from symptom in the flood of alarms, concluded the reactor had too much coolant. They turned off the emergency cooling system. The reactor was actually losing coolant through the stuck valve. They had shut off the one thing keeping the core alive. Within hours, it partially melted.",[22,930,931,932,935],{},"The defense system worked. The defense system's ",[25,933,934],{},"output"," caused the meltdown.",[45,937,939],{"id":938},"_847-alarms-at-4-am-1","847 alarms at 4 AM",[22,941,942],{},"Replace the control room with a Slack channel. Replace the flashing lights with a vulnerability report. Replace the 100 simultaneous alarms with 847 CVEs.",[22,944,945],{},"A security scanner runs against 12 production clusters. It finds 847 vulnerabilities. It scores each one with CVSS. It produces a report. It sends the report to the platform team.",[22,947,948],{},"The platform team has three engineers.",[22,950,951],{},"The report tells them everything and nothing. It lists every CVE but not which ones are reachable from the internet, not what breaks if a given service is compromised. It does not tell them what to fix first.",[22,953,954],{},"So the engineers do what the TMI-2 operators did. They stand in front of the wall of flashing lights and start guessing. Manual correlation. Spreadsheets. Tribal knowledge about which clusters matter more.",[22,956,957],{},"This is a methodology problem. Not staffing, not tooling. 
The distinction matters, because better tools built on a broken methodology will reproduce the same failure at higher resolution.",[45,959,961],{"id":960},"the-valve-indicator-problem","The valve indicator problem",[22,963,964,965,968],{},"The pressure relief valve was stuck open. Coolant was draining from the reactor. But the indicator on the control panel didn't show whether the valve was open or closed. It showed whether the valve had been ",[25,966,967],{},"commanded"," to close.",[22,970,971],{},"The command had been sent. The indicator showed \"closed.\" The valve was open. The operators trusted the indicator.",[22,973,974,975,978],{},"CVSS scores have the same problem. A CVSS score, even supplemented by EPSS or KEV data, tells you how exploitable a vulnerability is ",[25,976,977],{},"in theory",", under laboratory conditions, absent any context about your environment. It tells you the command was sent. It does not tell you the state of the valve.",[22,980,981],{},"A CVE with a CVSS score of 9.8 on an air-gapped internal build server with no inbound network paths is not a 9.8 in your environment. A CVE with a score of 5.3 on a public-facing service that chains with two other medium-severity issues to reach your database? That might be your actual 9.8.",[22,983,984],{},"CVSS measures theoretical exploitability. It says nothing about whether an attacker can reach the service, whether this CVE chains with others into a viable attack path, or what happens downstream if the service is compromised.",[22,986,987],{},"Calculating risk from CVSS alone is like calculating insurance premiums from the probability of a hurricane without checking whether the house is in Kansas or on the Florida coast.",[22,989,990],{},"The TMI-2 operators didn't lack data. They were drowning in it. 
What they lacked was a model that connected the data to reality.",[45,992,994],{"id":993},"the-real-failure-mode-risk-displacement","The real failure mode: risk displacement",[22,996,997],{},"Most organizations handle vulnerability management the same way TMI-2 handled its alarms.",[22,999,1000],{},"The reactor's alarm system was designed by one team. The control room was operated by another. The alarm designers built a comprehensive system: every possible anomaly would trigger a notification. Complete coverage. Nothing missed.",[22,1002,1003],{},"They were right. Nothing was missed.",[22,1005,1006,1007,1010],{},"The problem was that \"nothing missed\" and \"useful to the operator\" are not the same thing. The alarm system's completeness became the operator's paralysis. The designers had optimized for ",[25,1008,1009],{},"their"," metric (coverage) and displaced the actual hard problem (prioritization) to someone else.",[22,1012,1013],{},"This is exactly what happens when a security team runs a scanner, generates a report of 847 CVEs, and sends it to the platform team. The security team's job, by their own metrics, is done. Complete coverage. Nothing missed.",[22,1015,1016,1017,250],{},"The platform team now owns the triage. They have the list but not the context, not the topology, not the blast radius analysis. They have a wall of flashing lights and a reactor that needs attention ",[25,1018,1019],{},"now",[22,1021,1022],{},"Call it what it is: risk displacement. The burden of analysis moves from the team that understands threats to the team that doesn't have the tools or the mandate to prioritize them.",[22,1024,1025],{},"The TMI-2 alarm system didn't protect the operators. It made their job harder. The scanner report doesn't protect the platform team. 
It creates work and calls it security.",[45,1027,1029],{"id":1028},"what-the-nuclear-industry-learned","What the nuclear industry learned",[22,1031,1032],{},"After TMI-2, the nuclear industry redesigned the entire alarm methodology.",[22,1034,1035,1036,1039],{},"The reforms introduced alarm prioritization: suppress low-relevance notifications during high-stress events. They added contextual displays that show operators the ",[25,1037,1038],{},"state of the system"," rather than a list of deviations from normal. And they formalized alarm rationalization, determining which alarms matter under which conditions, and what the operator should actually do about it.",[22,1041,1042],{},"More alarms do not mean more safety. An alarm system that fires 100 alerts when 3 are critical is worse than one that fires 3. The operator's attention is finite, and every irrelevant alarm steals cognitive resources from the ones that matter.",[22,1044,1045],{},"The nuclear industry learned that the alarm system's job is not to tell the operator everything that's wrong. It's to tell the operator what to do next.",[22,1047,1048],{},"Vulnerability management hasn't learned this yet.",[45,1050,1052],{"id":1051},"from-lists-to-topology","From lists to topology",[22,1054,1055],{},"The cybersecurity industry's response has been to build better alarms. The scanner now produces a sorted list instead of an unsorted one. It adds a risk score. Maybe it cross-references with the CISA KEV catalog or flags \"actively exploited in the wild.\"",[22,1057,1058],{},"These are improvements, not solutions. You can't meaningfully sort 847 CVEs without understanding the topology they exist in. Sorting requires knowing which services are reachable, what they connect to, and what breaks if they're compromised. That knowledge doesn't live in a scanner. It lives in the relationships between assets.",[22,1060,1061],{},"A sorted list of CVEs is still a list. 
You can't ask a list \"what's the shortest path from the internet to my database through these vulnerabilities?\"",[22,1063,1064],{},"That's a graph question. Your assets, services, and identities are nodes. The connections between them (network paths, trust relationships, data flows) are edges. Vulnerabilities attach to nodes, but exploitability is a function of the path, not the node.",[22,1066,1067],{},"The 3-person platform team managing 12 clusters doesn't need a better list. They need answers to questions a list can't answer:",[418,1069,1070,1073,1076,1079],{},[351,1071,1072],{},"Which of these 847 CVEs sit on services reachable from the internet?",[351,1074,1075],{},"Which of those services connect to data stores with customer data?",[351,1077,1078],{},"If this service is compromised, what's the shortest path to a critical asset?",[351,1080,1081],{},"Which three patches would eliminate the most attack paths?",[22,1083,1084],{},"In a graph, blast radius is a query, not a guess. You traverse outward from the compromised node and measure what's reachable. Prioritization is a calculation over topology: which vulnerabilities sit on the most paths to the things that matter?",[22,1086,1087],{},"The scanner flags a critical CVE on a high-profile production service. The team scrambles to patch it. Meanwhile, a chain of three medium-severity CVEs on a forgotten internal service provides a clear path to the same database. Nobody sees the chain because nobody's modeling the relationships.",[22,1089,1090,1091,1094],{},"The TMI-2 valve wasn't dangerous because it was stuck open. It was dangerous because it was stuck open ",[25,1092,1093],{},"on the path between the reactor core and the environment",". Location in the topology defined the severity, not the defect itself.",[45,1096,1098],{"id":1097},"the-tool-is-the-process","The tool is the process",[22,1100,1101,1102,1105],{},"The nuclear industry's post-TMI redesign went further. 
They embedded the methodology in the control room itself. Alarm rationalization wasn't a document operators consulted alongside their instruments. It became how the instruments worked. The tool ",[25,1103,1104],{},"was"," the methodology.",[22,1107,1108],{},"This is where the \"just buy better tooling\" argument gets it half right. The right tool does embed methodology: reachability analysis, asset relationship mapping, and exposure context should be built into how your team works, not bolted on as a separate triage step.",[22,1110,1111],{},"But a tool can only implement a methodology that exists. Reachability and exposure data don't tell you whether a compromised internal API matters more than an exposed storage bucket. That ranking comes from understanding business impact, and business impact is an organizational decision. Someone has to decide what the crown jewels are. The graph can model the paths, but the weight you assign to each destination is a business call.",[22,1113,1114],{},"The NRC understood this. Beyond the control room redesign, they mandated simulator training, licensing requirements, and crew resource management borrowed from aviation. They retrained the people, not just the instruments. Because someone still has to look at the output and make the call, and that takes people who understand what the business loses if an attack path gets exploited.",[22,1116,1117],{},"Skip either step and you're back in the TMI-2 control room. A tool without methodology is a fancier wall of flashing lights. A methodology nobody follows is a PDF on a SharePoint nobody opens.",[45,1119,1121],{"id":1120},"before-you-send-the-next-report","Before you send the next report",[22,1123,1124],{},"The operators at TMI-2 were trained, competent, and trying their best. They still turned off the one system keeping the reactor alive. 
The information architecture made the right decision invisible and the wrong decision obvious.",[22,1126,1127],{},"Before you send the next vulnerability report, ask yourself: am I giving my platform team a decision, or am I giving them a wall of flashing lights?",[22,1129,1130],{},"The nuclear industry answered that question in 1979. It cost them a reactor.",[22,1132,1133],{},"What's it costing you?",[237,1135],{},[22,1137,1138,1141,1142],{},[354,1139,1140],{},"Historical note:"," ",[25,1143,1144],{},"The Three Mile Island Unit 2 reactor was never restarted. Cleanup took 14 years and cost approximately $1 billion. The President's Commission on the Accident (the Kemeny Commission) concluded that the primary cause was not mechanical failure but \"human factors,\" operator confusion compounded by inadequate instrumentation and training. The control room's alarm system was specifically cited as a contributing factor. Unit 1 continued operating until 2019.",[237,1146],{},[22,1148,1149,1150,250],{},"This article originally published on ",[53,1151,249],{"href":1152,"rel":1153},"https:\u002F\u002Fmedium.com\u002F@levente.simon\u002Fthe-meltdown-before-the-meltdown-what-three-mile-island-teaches-about-cve-management-b7ad6fa92f70",[248],{"title":131,"searchDepth":145,"depth":145,"links":1155},[1156,1157,1158,1159,1160,1161,1162],{"id":938,"depth":145,"text":939},{"id":960,"depth":145,"text":961},{"id":993,"depth":145,"text":994},{"id":1028,"depth":145,"text":1029},{"id":1051,"depth":145,"text":1052},{"id":1097,"depth":145,"text":1098},{"id":1120,"depth":145,"text":1121},"2026-03-04","The Three Mile Island operators were drowning in alerts when they shut off the emergency cooling. Your platform team is drowning in CVEs. 
Both problems have the same root cause — and the nuclear industry solved it decades ago.","\u002Fimages\u002Fblog\u002Ftmi2-alarm-flood.jpg",{"audioLabel":884,"audio":1167,"category":1168},"\u002Faudio\u002FWhy_flat_vulnerability_lists_paralyze_engineers.mp3",[269],"\u002Fblog\u002Ftmi2_alarm_flood",{"title":897,"description":1164},"blog\u002Ftmi2_alarm_flood",[1173,1174,1175,276,1176],"vulnerability management","risk prioritization","threat modeling","security methodology","-h9ISSqQnHFWpEyQHeJR8dN_aXSNO6g4vdzHgMqvFNI",1775732223965]