[{"data":1,"prerenderedAt":1793},["ShallowReactive",2],{"article-\u002Finsights\u002Feleven-steps-you-dont-type":3,"related-\u002Finsights\u002Feleven-steps-you-dont-type":1033},{"_path":4,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":8,"description":9,"date":10,"author":11,"image":19,"category":20,"tags":22,"body":27,"_type":1026,"_id":1027,"_source":1028,"_file":1029,"_stem":1030,"_extension":1031,"sitemap":1032},"\u002Finsights\u002Feleven-steps-you-dont-type","insights",false,"","Eleven Steps You Don't Type","Threat modeling stalls in shift-left workflows because intent-based interfaces run into an articulation barrier. Staged delegation inside the engineer's editor, backed by specialist agents and a graph-native model, is one resolution.","2026-04-20",{"name":12,"headshot":13,"role":14,"contact":15},"Levente Simon","\u002Fheadshots\u002FLS.jpeg","creator of dethernety",{"linkedin":16,"email":17,"twitter":18},"https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Flevente-simon\u002F","levente.simon@dether.net","https:\u002F\u002Fx.com\u002FLevente_Simon","\u002Fimages\u002Feleven-steps-cover.jpg",[21],"Tech Deep-Dives",[23,24,25,26],"threat modeling","agents","interface design","graph",{"type":28,"children":29,"toc":1012},"root",[30,38,48,56,61,66,71,76,81,86,99,111,123,128,133,140,159,171,176,219,224,236,248,253,258,263,269,281,292,297,302,307,312,317,323,328,337,342,397,405,469,474,490,495,500,505,510,516,521,529,541,552,558,563,568,573,584,609,620,631,639,644,649,654,660,678,683,688,694,699,704,709,714,719,724,729,734,791,796,809,814,819,824,829,842,848,853,872,884,892,904,910,915,920,925,930,935],{"type":31,"tag":32,"props":33,"children":35},"element","h1",{"id":34},"eleven-steps-you-dont-type",[36],{"type":37,"value":8},"text",{"type":31,"tag":39,"props":40,"children":41},"p",{},[42],{"type":31,"tag":43,"props":44,"children":45},"em",{},[46],{"type":37,"value":47},"Staged delegation, and the shape of a guided workflow that actually gets 
used",{"type":31,"tag":39,"props":49,"children":50},{},[51],{"type":31,"tag":43,"props":52,"children":53},{},[54],{"type":37,"value":55},"First in a series on Dethernety and Dethereal.",{"type":31,"tag":39,"props":57,"children":58},{},[59],{"type":37,"value":60},"Threat modeling has a UX problem. Shift-left made it worse.",{"type":31,"tag":39,"props":62,"children":63},{},[64],{"type":37,"value":65},"Threat modeling has been a named practice for some twenty years. For most of that time, in most organizations, it was a specialist activity: a small number of security architects, offline, a schedule set by the security team, an artifact delivered once and filed. The tool was whatever the security team happened to use. Visio, a spreadsheet, a commercial suite none of the developers had ever opened.",{"type":31,"tag":39,"props":67,"children":68},{},[69],{"type":37,"value":70},"DevSecOps changed who is expected to do it. The current orthodoxy says threat modeling belongs with the engineers building and running the system, not with a security team that does it once at the design-review gate and never again. The artifact is supposed to be living. The audience includes the author. The work is supposed to happen early and continuously.",{"type":31,"tag":39,"props":72,"children":73},{},[74],{"type":37,"value":75},"It has not. The reasons are plural: engineers often lack the threat-intel background, incentives reward feature velocity over security artifacts, security teams still gate-keep review, and most organizations have no loop between the model and the runtime controls that would give skipping it a consequence. A better tool does not fix any of those. It only removes the excuse that the tool is in the way.",{"type":31,"tag":39,"props":77,"children":78},{},[79],{"type":37,"value":80},"This piece is about the reason closest to the tool itself: the tools did not follow. Engineers were told to threat-model but handed tools that either assumed they already thought in trust boundaries or assumed they did not write code. 
Neither assumption fit. The result, predictably, is that most teams either don't do it, do it once during design review and never touch it again, or outsource it back to the security team and pretend otherwise.",{"type":31,"tag":39,"props":82,"children":83},{},[84],{"type":37,"value":85},"The tools we have come in three shapes, and all three fail in the same way: they put the wrong cognitive load on the wrong person at the wrong time.",{"type":31,"tag":39,"props":87,"children":88},{},[89,91,97],{"type":37,"value":90},"The first shape is the ",{"type":31,"tag":92,"props":93,"children":94},"strong",{},[95],{"type":37,"value":96},"diagram",{"type":37,"value":98},". A canvas tool, sometimes a dedicated threat-modeling application, increasingly a diagrams-as-code format checked into the repo. The diagrams-as-code version solves the staleness problem that killed the canvas version: the diagram lives in git and gets updated in pull requests. The serious tools in this family go further: they run rule engines against the declarative model and emit threats, countermeasures, and compliance mappings for the author to adjudicate, and the best of them seed the initial diagram from IaC or reference architectures rather than a blank canvas. That shift, from recall toward adjudication, is the right direction. What the tools still require is that the author declare the structure the seeding guessed at — trust boundaries, data flows, decomposition depth — with no scaffolded adjudication flow around those declarations. The diagram is the question masquerading as the input.",{"type":31,"tag":39,"props":100,"children":101},{},[102,104,109],{"type":37,"value":103},"The second shape is the ",{"type":31,"tag":92,"props":105,"children":106},{},[107],{"type":37,"value":108},"form",{"type":37,"value":110},". Modern form-based threat modeling tools ask the modeler to describe the system through structured inputs and check results against built-in threat libraries and compliance mappings. 
Some ship with pattern libraries that provide a starter shape for common architectures — microservice behind API gateway, lambda behind queue — which helps on the recognisable cases and does nothing off the template. They validate more than the STRIDE spreadsheets they replaced: they cross-reference countermeasures, flag gaps, link findings to frameworks. What they cannot validate is whether the modeler answered their questions well. The form asks: is this boundary a trust boundary? Is this data PII? A modeler who does not know the answer selects one anyway, and the form records the guess as fact. The form is complete long before the model is.",{"type":31,"tag":39,"props":112,"children":113},{},[114,116,121],{"type":37,"value":115},"The third shape is the ",{"type":31,"tag":92,"props":117,"children":118},{},[119],{"type":37,"value":120},"chat box",{"type":37,"value":122},". The naive version is \"describe your system and I'll generate a threat model.\" The serious versions, scoped by a host product's data model so they cannot invent components outside its vocabulary, ask follow-up questions and validate against built-in libraries. Both still ask the modeler to know, up front, what the tool needs: sensitivity, boundaries, adversary classes, compliance drivers, crown jewels. The serious versions do shift work toward adjudication: the tool proposes, the user confirms. What they do poorly is scope and sequence the proposals. The output arrives as a long prose draft that the author validates line by line, hoping to catch hallucinated components and missed ones in flow. Staged delegation produces the same class of proposals in small structured batches instead of one continuous draft, and the batch is the unit of adjudication. Same instinct, different shape.",{"type":31,"tag":39,"props":124,"children":125},{},[126],{"type":37,"value":127},"Three failure modes, one common root: the tool asks the human to structure the problem, and the human is not good at it. 
Diagrams demand graphical structure the modeler does not have. Forms demand taxonomic structure the modeler has not internalized. Chat demands a well-formed prompt the modeler cannot produce because they don't yet know what the tool needs.",{"type":31,"tag":39,"props":129,"children":130},{},[131],{"type":37,"value":132},"There is a fourth shape, and it has been hiding in plain sight.",{"type":31,"tag":134,"props":135,"children":137},"h2",{"id":136},"the-third-paradigm",[138],{"type":37,"value":139},"The third paradigm",{"type":31,"tag":39,"props":141,"children":142},{},[143,145],{"type":37,"value":144},"Jakob Nielsen has argued that AI marks a paradigm shift in user interfaces: away from command-based systems, where users \"strike every blow\" by executing step-by-step instructions, and toward intent-based systems, where users specify desired outcomes and let the system figure out procedures. The user stops being an operator and becomes a supervisor. The computer stops being a tool and becomes an agent.",{"type":31,"tag":146,"props":147,"children":148},"sup",{},[149],{"type":31,"tag":150,"props":151,"children":156},"a",{"href":152,"ariaDescribedBy":153,"dataFootnoteRef":7,"id":155},"#user-content-fn-1",[154],"footnote-label","user-content-fnref-1",[157],{"type":37,"value":158},"1",{"type":31,"tag":39,"props":160,"children":161},{},[162,164,169],{"type":37,"value":163},"The shift is real, but it has a failure mode Nielsen himself names: the ",{"type":31,"tag":92,"props":165,"children":166},{},[167],{"type":37,"value":168},"articulation barrier",{"type":37,"value":170},". Intent-based UIs assume the user can express what they want in a single well-formed statement. In expert-knowledge domains, the user often cannot. An engineer asked to threat-model their system does not walk up to a chat box already knowing the crown jewels, adversary classes, compliance drivers, trust boundaries, and decomposition depth they need to articulate. 
They know those things after thinking about the system, which is the work they are trying to do.",{"type":31,"tag":39,"props":172,"children":173},{},[174],{"type":37,"value":175},"So both ends of Nielsen's spectrum fail for this kind of work. Command-based fails because the novice does not know which command to run next. Intent-based fails because the novice cannot state the intent; they do not yet know what the tool needs from them.",{"type":31,"tag":39,"props":177,"children":178},{},[179,181,188,190,196,197,203,204,210,212,217],{"type":37,"value":180},"Consider what a command-driven interface looks like when stripped to its bones: ",{"type":31,"tag":182,"props":183,"children":185},"code",{"className":184},[],[186],{"type":37,"value":187},"discover",{"type":37,"value":189},", ",{"type":31,"tag":182,"props":191,"children":193},{"className":192},[],[194],{"type":37,"value":195},"classify",{"type":37,"value":189},{"type":31,"tag":182,"props":198,"children":200},{"className":199},[],[201],{"type":37,"value":202},"enrich",{"type":37,"value":189},{"type":31,"tag":182,"props":205,"children":207},{"className":206},[],[208],{"type":37,"value":209},"sync",{"type":37,"value":211},". Clean. Also unusable. Nobody running ",{"type":31,"tag":182,"props":213,"children":215},{"className":214},[],[216],{"type":37,"value":195},{"type":37,"value":218}," on a blank directory knows what they are classifying, or why, or in what order, or against which taxonomy.",{"type":31,"tag":39,"props":220,"children":221},{},[222],{"type":37,"value":223},"Consider the opposite: a single prompt, a long system message, and an agent free to ask whatever it wants. This fails differently. The agent goes three turns into a conversation, decides it has enough context, and generates a model. Or it burrows into one component for thirty turns and forgets the rest of the system exists. 
The output looks like a threat model written by someone who has read a lot of threat models but has never debugged one.",{"type":31,"tag":39,"props":225,"children":226},{},[227,229,234],{"type":37,"value":228},"Nielsen proposes one resolution to the articulation barrier: ",{"type":31,"tag":92,"props":230,"children":231},{},[232],{"type":37,"value":233},"intent by discovery",{"type":37,"value":235},", helping users recognize what they want through exploration. The user starts without a clear intent and surfaces it through interaction with the system. This is the right resolution when the problem is that the user does not yet know what they want.",{"type":31,"tag":39,"props":237,"children":238},{},[239,241,246],{"type":37,"value":240},"There is a second resolution, appropriate when the user roughly knows what they want (\"a threat model of this system\") but cannot articulate the structure that definition requires. The resolution is not to let them explore until they discover it. It is to break the final artifact into its component parts, in a fixed order, and to hand each part to a specialist agent that can do most of the articulation work on the user's behalf. The user is left with the part they are actually good at: recognising whether a proposal is right, and saying where it is not. Call this ",{"type":31,"tag":92,"props":242,"children":243},{},[244],{"type":37,"value":245},"staged delegation",{"type":37,"value":247},": break one large intent into an ordered sequence of smaller ones, delegate each to the actor best placed to articulate it (the user where only the user knows the answer, a specialist agent where the answer is discoverable from the codebase or the model), and require the user to supervise every proposal before it becomes part of the model.",{"type":31,"tag":39,"props":249,"children":250},{},[251],{"type":37,"value":252},"The cognitive shift is from recall (what do I want?) 
to recognition (is this right?), which is the easier problem for a non-expert author by a wide margin.",{"type":31,"tag":39,"props":254,"children":255},{},[256],{"type":37,"value":257},"Against Nielsen's frame, the diagnosis is the same and the resolution is different: staged delegation rather than intent by discovery. This is not a new UI era. It is a pattern for a specific kind of problem: expert-knowledge work where the user is both author and novice, and where the artifact has enough internal structure to decompose.",{"type":31,"tag":39,"props":259,"children":260},{},[261],{"type":37,"value":262},"What follows is what one implementation looks like, concretely.",{"type":31,"tag":134,"props":264,"children":266},{"id":265},"meet-the-user-where-they-work",[267],{"type":37,"value":268},"Meet the user where they work",{"type":31,"tag":39,"props":270,"children":271},{},[272,274,279],{"type":37,"value":273},"Before the eleven steps, a question a fair reader is probably already asking: ",{"type":31,"tag":43,"props":275,"children":276},{},[277],{"type":37,"value":278},"if the whole workflow is a conversation in a terminal, what about the security analyst who wants the graph, or the reviewer who wants to see the whole model laid out?",{"type":37,"value":280}," The answer has two parts.",{"type":31,"tag":39,"props":282,"children":284},{"align":283},"center",[285],{"type":31,"tag":286,"props":287,"children":291},"img",{"src":288,"alt":289,"width":290},"\u002Fimages\u002Fdiagram-architecture.svg","Two user populations, two interfaces, one platform: the plugin and the Web UI both talk to the Dethernety backend, which loads modules that call a graph DB, OPA, and an analysis engine",1000,[],{"type":31,"tag":39,"props":293,"children":294},{},[295],{"type":37,"value":296},"The workflow runs inside Claude Code. This is not an implementation detail; it is the point. 
An engineer already in Claude Code does not want to stop working, open a browser, create an account on a threat modeling SaaS, upload a diagram, and answer a form. The moment you break their flow, the threat model stops happening. The context-switch tax is one of the reasons shift-left threat modeling stalls in practice. Telling an engineer to model a change, then making them leave their editor to do it, kills the work: an activity with no immediate reward does not survive the detour.",{"type":31,"tag":39,"props":298,"children":299},{},[300],{"type":37,"value":301},"So the conversation lives in the editor. The model is a directory tree on disk. The output is committable. The workflow is resumable across sessions. An engineer reviewing a pull request can threat-model the change in the same window where they are reading the diff.",{"type":31,"tag":39,"props":303,"children":304},{},[305],{"type":37,"value":306},"This does not mean the web UI is obsolete. A security analyst reviewing an attack surface wants the graph: a visual editor where boundaries, flows, and exposures are laid out spatially. Forcing them into a CLI is the mirror-image mistake of forcing an engineer into a web form. The two interfaces serve different populations and different tasks: the plugin for authors in the loop, the web UI for analysts and reviewers who need to see the whole model at once. Both read and write the same underlying graph.",{"type":31,"tag":39,"props":308,"children":309},{},[310],{"type":37,"value":311},"Pair modeling — a developer and a security engineer sitting together for the exercise, still the most productive way to do this work — happens in the same session, at the same terminal. The tool does not replace the collaboration. 
It gives both people a shared surface to argue over.",{"type":31,"tag":39,"props":313,"children":314},{},[315],{"type":37,"value":316},"Threat modeling has to meet users where they work.",{"type":31,"tag":134,"props":318,"children":320},{"id":319},"eleven-steps",[321],{"type":37,"value":322},"Eleven steps",{"type":31,"tag":39,"props":324,"children":325},{},[326],{"type":37,"value":327},"The guided workflow has eleven steps. They are not commands. The user does not pick them. The agent walks through them in order, and each one corresponds to a specific transition in the model's state machine.",{"type":31,"tag":39,"props":329,"children":330},{"align":283},[331],{"type":31,"tag":286,"props":332,"children":336},{"src":333,"alt":334,"width":335},"\u002Fimages\u002Fdiagram-state-machine.svg","Eleven steps across six states, with the session break between step 5 and step 6",900,[],{"type":31,"tag":39,"props":338,"children":339},{},[340],{"type":37,"value":341},"The steps themselves:",{"type":31,"tag":343,"props":344,"children":345},"ol",{},[346,357,367,377,387],{"type":31,"tag":347,"props":348,"children":349},"li",{},[350,355],{"type":31,"tag":92,"props":351,"children":352},{},[353],{"type":37,"value":354},"Scope Definition",{"type":37,"value":356}," — what is this system, what are the crown jewels, what compliance drivers apply",{"type":31,"tag":347,"props":358,"children":359},{},[360,365],{"type":31,"tag":92,"props":361,"children":362},{},[363],{"type":37,"value":364},"Discovery",{"type":37,"value":366}," — scan the codebase for infrastructure, containers, IaC, API definitions",{"type":31,"tag":347,"props":368,"children":369},{},[370,375],{"type":31,"tag":92,"props":371,"children":372},{},[373],{"type":37,"value":374},"Model Review",{"type":37,"value":376}," — confirm the discovered elements, match them against the platform's class 
library",{"type":31,"tag":347,"props":378,"children":379},{},[380,385],{"type":31,"tag":92,"props":381,"children":382},{},[383],{"type":37,"value":384},"Boundary Refinement",{"type":37,"value":386}," — adjust trust boundaries, set enforcement attributes",{"type":31,"tag":347,"props":388,"children":389},{},[390,395],{"type":31,"tag":92,"props":391,"children":392},{},[393],{"type":37,"value":394},"Data Flow Mapping",{"type":37,"value":396}," — connect components, add operational flows the scanner missed",{"type":31,"tag":39,"props":398,"children":399},{},[400],{"type":31,"tag":43,"props":401,"children":402},{},[403],{"type":37,"value":404},"— Session Break —",{"type":31,"tag":343,"props":406,"children":408},{"start":407},6,[409,419,429,439,449,459],{"type":31,"tag":347,"props":410,"children":411},{},[412,417],{"type":31,"tag":92,"props":413,"children":414},{},[415],{"type":37,"value":416},"Classification",{"type":37,"value":418}," — LLM-assisted class assignment for ambiguous elements",{"type":31,"tag":347,"props":420,"children":421},{},[422,427],{"type":31,"tag":92,"props":423,"children":424},{},[425],{"type":37,"value":426},"Data Item Classification",{"type":37,"value":428}," — tag sensitive data on cross-boundary flows",{"type":31,"tag":347,"props":430,"children":431},{},[432,437],{"type":31,"tag":92,"props":433,"children":434},{},[435],{"type":37,"value":436},"Enrichment",{"type":37,"value":438}," — security attributes and credentials against each class's schema",{"type":31,"tag":347,"props":440,"children":441},{},[442,447],{"type":31,"tag":92,"props":443,"children":444},{},[445],{"type":37,"value":446},"Validation",{"type":37,"value":448}," — quality score, gate checks, readiness assessment",{"type":31,"tag":347,"props":450,"children":451},{},[452,457],{"type":31,"tag":92,"props":453,"children":454},{},[455],{"type":37,"value":456},"Sync",{"type":37,"value":458}," — push to the platform for 
analysis",{"type":31,"tag":347,"props":460,"children":461},{},[462,467],{"type":31,"tag":92,"props":463,"children":464},{},[465],{"type":37,"value":466},"Post-Sync Linking",{"type":37,"value":468}," — link countermeasures to exposures",{"type":31,"tag":39,"props":470,"children":471},{},[472],{"type":37,"value":473},"The eleven-step shape is not arbitrary. Each split is where it is for one of three reasons: a distinct reasoning mode, a distinct agent invocation, or a distinct moment when the user has to decide something. The rest of this section walks through the three places that make the shape's logic visible.",{"type":31,"tag":39,"props":475,"children":476},{},[477,479],{"type":37,"value":478},"Scope comes first because every later decision depends on it. You cannot classify a data item as Restricted under PCI-DSS if you have not established that PCI-DSS is in scope. You cannot tag a component as a crown jewel if you have not named the crown jewels. The scope file is short, the questions are conversational, and the answers are referenced all the way through to validation.",{"type":31,"tag":146,"props":480,"children":481},{},[482],{"type":31,"tag":150,"props":483,"children":487},{"href":484,"ariaDescribedBy":485,"dataFootnoteRef":7,"id":486},"#user-content-fn-2",[154],"user-content-fnref-2",[488],{"type":37,"value":489},"2",{"type":31,"tag":39,"props":491,"children":492},{},[493],{"type":37,"value":494},"Discovery is separate from classification because the two require different reasoning. 
Discovery is \"does this thing exist in the codebase, and what is it called.\" Classification is \"what kind of thing is it, and how does it fit in our taxonomy.\" Collapsing them produces a model full of plausibly-named components that turn out to be config files, or job schedulers that got classified as web servers because they happened to bind a port.",{"type":31,"tag":39,"props":496,"children":497},{},[498],{"type":37,"value":499},"The session break between step five and step six is deliberate. Steps one through five build the structure of the model: what components exist, how they connect, which boundaries they sit in. Steps six through ten populate that structure with security context. The two phases are different cognitive modes. Discovery reasons from evidence to structure; enrichment reasons from structure to security properties. Keeping them in one session mixes two kinds of thinking in the same working memory, for the LLM and the human both. The practical consequence is that starting enrichment in a fresh session produces better output at lower cost, and the session break makes that explicit. It also gives the user a natural place to commit the structural model to git before the richer, more revisable enrichment passes happen.",{"type":31,"tag":39,"props":501,"children":502},{},[503],{"type":37,"value":504},"Those three are enough to see the pattern. Each sits where it sits because moving it or dropping it breaks something concrete, usually not at the step you touched but downstream. The others rhyme.",{"type":31,"tag":39,"props":506,"children":507},{},[508],{"type":37,"value":509},"Eleven is not a magic number. A reasonable decomposition of this artifact could have landed on nine or thirteen; the commitments are that every split sit on one of the three reasons above, and that every step be a precondition for the next. 
Eleven is what that produced here.",{"type":31,"tag":134,"props":511,"children":513},{"id":512},"two-layers",[514],{"type":37,"value":515},"Two layers",{"type":31,"tag":39,"props":517,"children":518},{},[519],{"type":37,"value":520},"Seen from above, the workflow has two layers.",{"type":31,"tag":39,"props":522,"children":523},{"align":283},[524],{"type":31,"tag":286,"props":525,"children":528},{"src":526,"alt":527,"width":335},"\u002Fimages\u002Fdiagram-two-layers.svg","The outer layer is the fixed step sequence; the inner layer is a specialist agent proposing a batch that the user adjudicates before anything is persisted",[],{"type":31,"tag":39,"props":530,"children":531},{},[532,534,539],{"type":37,"value":533},"The ",{"type":31,"tag":92,"props":535,"children":536},{},[537],{"type":37,"value":538},"outer layer",{"type":37,"value":540}," is the fixed sequence above. It is not command-based, because the user does not pick which step runs when. It is not intent-based, because the workflow itself infers nothing; its shape is fixed by the shape of the artifact the tool has to produce. Each step corresponds to a specific, named part of the model that must exist before the next step can proceed. Not a suggested order — a required one.",{"type":31,"tag":39,"props":542,"children":543},{},[544,545,550],{"type":37,"value":533},{"type":31,"tag":92,"props":546,"children":547},{},[548],{"type":37,"value":549},"inner layer",{"type":37,"value":551}," is what happens inside each step. A specialist agent runs a well-defined procedure (scan the codebase, match elements against the class library, fill attribute schemas, score quality) and presents its output as a batch the user confirms, modifies, or rejects before anything is written to disk. Scope definition at the start is the exception: the user articulates the crown jewels, the compliance drivers, what is in scope and what is out, and the agent's job is to capture, not propose. 
Everywhere else the direction is inverted — the agent proposes, the user adjudicates — and the user supervises a pipeline of specialist proposals rather than a single black-box agent.",{"type":31,"tag":134,"props":553,"children":555},{"id":554},"the-multi-brain",[556],{"type":37,"value":557},"The multi-brain",{"type":31,"tag":39,"props":559,"children":560},{},[561],{"type":37,"value":562},"The other half of the system is that the agent is not one agent. It is four. The first version was one, and it drifted.",{"type":31,"tag":39,"props":564,"children":565},{},[566],{"type":37,"value":567},"A single orchestrator with a long system prompt covering discovery, classification, enrichment, and validation produces the failure mode described earlier: plausible-looking models with fabricated details, because the agent has no structural reason to separate \"I am scanning for infrastructure\" from \"I am filling security attributes against a class schema.\" In one context, it is all one job, and the job drifts in whichever direction the last few turns pushed it.",{"type":31,"tag":39,"props":569,"children":570},{},[571],{"type":37,"value":572},"The four-agent split mirrors the four cognitive jobs.",{"type":31,"tag":39,"props":574,"children":575},{},[576,577,582],{"type":37,"value":533},{"type":31,"tag":92,"props":578,"children":579},{},[580],{"type":37,"value":581},"threat-modeler",{"type":37,"value":583}," is the orchestrator. It reads the discovery report, presents it to the user, writes the confirmed elements to disk, and drives the workflow through its states. It owns the state machine. It handles user confirmations, batch-table presentations, and state transitions. 
It delegates enrichment and review rather than doing them inline, because each task has different context-budget needs.",{"type":31,"tag":39,"props":585,"children":586},{},[587,588,593,595,600,602,607],{"type":37,"value":533},{"type":31,"tag":92,"props":589,"children":590},{},[591],{"type":37,"value":592},"infrastructure-scout",{"type":37,"value":594}," scans the codebase. It is read-only. It does not write any model files. It produces a discovery report: a structured list of components, each one carrying the source that produced it (file, line, resource) and two confidence buckets — one for ",{"type":31,"tag":43,"props":596,"children":597},{},[598],{"type":37,"value":599},"existence",{"type":37,"value":601},", one for ",{"type":31,"tag":43,"props":603,"children":604},{},[605],{"type":37,"value":606},"classification",{"type":37,"value":608}," — picked against a fixed rubric (high for an explicit declaration like a Kubernetes Service or a Terraform resource, medium for a strong inference like a Docker image or an import statement, low for a weak inference like a string literal or a comment). The scores are rubric assignments against observable source properties, not a free-form self-assessment. Its exploration budget is bounded — discovery on a real codebase needs to look at a lot of files, but not unbounded files. It has no concept of security attributes, classes, or MITRE. It does one thing.",{"type":31,"tag":39,"props":610,"children":611},{},[612,613,618],{"type":37,"value":533},{"type":31,"tag":92,"props":614,"children":615},{},[616],{"type":37,"value":617},"security-enricher",{"type":37,"value":619}," writes attributes. It is the only sub-agent with write access to attribute files. It runs a two-pass classification (embedding-based matching against the class library first, LLM-assisted for the residue), pulls each matched class's attribute schema from the backend, and fills those attributes from what the scout discovered. 
It produces the credential topology — which identities hold which credentials to reach which resources. It does not assign ATT&CK techniques; that mapping happens on the platform, deterministically, from the attributes once the model is synced. Its budget is larger than the scout's because enrichment on a medium-sized model touches a lot of files. It has no opinion on whether a component should exist in the first place; that is the modeler's job.",{"type":31,"tag":39,"props":621,"children":622},{},[623,624,629],{"type":37,"value":533},{"type":31,"tag":92,"props":625,"children":626},{},[627],{"type":37,"value":628},"model-reviewer",{"type":37,"value":630}," is a read-only auditor. It cannot modify any files. It computes a seven-factor quality score and evaluates three quality gates. The three gate-relevant factors — classification coverage, attribute completion, flow coverage — are grounded by construction: the numbers come from counting conditions over the graph, not from LLM judgment about whether a classification is sensible. The other four factors in the score carry some heuristic weighting and inform the dashboard, but they do not gate the workflow. The LLM's role at the review step is narrating the result, not producing it.",{"type":31,"tag":39,"props":632,"children":633},{"align":283},[634],{"type":31,"tag":286,"props":635,"children":638},{"src":636,"alt":637,"width":335},"\u002Fimages\u002Fdiagram-four-agent-permissions.svg","Permission matrix: each agent gets a tool allowlist that scopes it to one narrow job",[],{"type":31,"tag":39,"props":640,"children":641},{},[642],{"type":37,"value":643},"Four agents, four roles, four permission sets, four exploration budgets scaled to the task. Each one has a narrow, well-defined job. None of them can accidentally do each other's work, because the tooling does not let them. The tools live on an MCP server the plugin ships with — twenty-two in total. 
Each agent starts with a role-scoped allowlist; the scout's omits write primitives entirely, and it cannot write a file even if its prompt told it to. The reviewer cannot mutate state. The enricher cannot create new components. The boundaries are enforced by the allowlist, not by the prompt.",{"type":31,"tag":39,"props":645,"children":646},{},[647],{"type":37,"value":648},"That is what staged delegation needs on the agent side: specialization enforced at the tool layer, not in the system message. The scout's tools end where the modeler's begin. The reviewer can read everything and write nothing. The enricher owns its attribute files. A single generalist agent would have access to everything and would use it, all the time, for every task, exactly as the first version of the system did.",{"type":31,"tag":39,"props":650,"children":651},{},[652],{"type":37,"value":653},"Tool permissions prevent the scout from writing an attribute file, but they do not prevent the scout from hallucinating a component with a plausible filename and confidence bucket. Role separation solves the cross-contamination problem: the scout cannot quietly rewrite attributes. It does not solve the hallucination problem, and the three agents that touch the model are not checked equally. The enricher classifies against a class library defined outside the agent and fills attributes whose schema lives on the backend; the reviewer counts conditions over the platform's graph. Both check themselves against something external. The scout does not — its evidence is its own narration of its own work, and a hallucinated component can come with plausible-looking evidence. Which is why step three — Model Review — is the adjudication step where the author's attention matters most: the scout's cited sources (file, line, resource) and rubric-based buckets exist so the author walks the evidence rather than rubber-stamping the conclusion. 
Permissions and grounding are different problems, and the less grounded agent is the one whose output the human has to touch first.",{"type":31,"tag":134,"props":655,"children":657},{"id":656},"what-this-shape-is-not-for",[658],{"type":37,"value":659},"What this shape is not for",{"type":31,"tag":39,"props":661,"children":662},{},[663,665,676],{"type":37,"value":664},"Staged delegation is not a substitute for the live conversation. The continuous-threat-modeling school",{"type":31,"tag":146,"props":666,"children":667},{},[668],{"type":31,"tag":150,"props":669,"children":673},{"href":670,"ariaDescribedBy":671,"dataFootnoteRef":7,"id":672},"#user-content-fn-3",[154],"user-content-fnref-3",[674],{"type":37,"value":675},"3",{"type":37,"value":677}," argues two things worth taking seriously: that the right unit of threat modeling is the change — a pull request, a design decision, a sprint ticket — not a quarterly all-day session, and that any artifact-centric workflow risks producing a clean document that gives the team permission to stop thinking. The document becomes the point, the practice withers, and a threat model in the repo becomes a worse outcome than no threat model at all, because it looks like coverage.",{"type":31,"tag":39,"props":679,"children":680},{},[681],{"type":37,"value":682},"On the cadence claim, staged delegation is not in competition. The workflow lives on the pull request and is resumable at the diff level. The same sequence that produces the initial model runs on a later change, with only the stale elements going through classification and enrichment again. That is the cadence CTM demands — embedded in the engineer's loop, at the unit of the change — with the difference that the capture step is cheap enough that the continuous practice produces a durable trail instead of evaporating.",{"type":31,"tag":39,"props":684,"children":685},{},[686],{"type":37,"value":687},"On the artifact-crowding-out-practice worry, the concession is real. 
A usable artifact can become a substitute for the thinking that produced it. The defence is not in the tool; it is in how the team uses it. Staged delegation does not enforce conversation, and a team that runs the workflow solo every time has given up the thing that made the CTM school right to begin with. What the artifact does offer is a surface where self-deception is more expensive: attributes are either filled or they are not, and the reviewer counts what is missing. The workflow raises the cost of lying to yourself; it does not eliminate it. Calling the artifact the output is not the same as calling it the point.",{"type":31,"tag":134,"props":689,"children":691},{"id":690},"the-constraint-is-the-feature",[692],{"type":37,"value":693},"The constraint is the feature",{"type":31,"tag":39,"props":695,"children":696},{},[697],{"type":37,"value":698},"Staged delegation trades flexibility for structure, and the trade is deliberate.",{"type":31,"tag":39,"props":700,"children":701},{},[702],{"type":37,"value":703},"What it gives up is flexibility. A power user who knows exactly what they want cannot skip straight to enrichment without at least a degenerate pass through scope and discovery. There is a command-based interface for those users, and individual commands work fine on their own. But even the commands enforce the state-machine preconditions: you cannot enrich a component that does not exist, you cannot classify a data item without a flow. When the preconditions are not met the commands fail loudly rather than auto-running the upstream steps and producing a silent partial model. The ordering is not in the UX; it is in the model. The guided workflow simply makes the ordering explicit and comfortable for a user who would otherwise have to discover it by running into errors.",{"type":31,"tag":39,"props":705,"children":706},{},[707],{"type":37,"value":708},"It also assumes the shape of the artifact is roughly known. 
For the common case (a service with a codebase, IaC, a CI pipeline, a recognisable architecture) the shape fits. For systems where it does not — brownfield models inherited from an acquisition with no source to scan, SaaS integrations where most of the system is someone else's, regulated environments where scope is dictated by an auditor rather than a conversation, or architectures whose components fall outside the installed class library (a mainframe tier, a bespoke message bus, a medical-device subsystem) — the eleven-step shape is the wrong shape. The individual commands remain available and the guided workflow is not forced. What is given up is the scaffolding, and with it, the population the scaffolding was built for.",{"type":31,"tag":39,"props":710,"children":711},{},[712],{"type":37,"value":713},"Two engineers running the same eleven steps on the same repo will not produce byte-identical models. The LLM-assisted steps (pass-two classification, the enricher's attribute inference) are non-deterministic, and two authors will make different calls when they adjudicate. The workflow narrows the variance by fixing scope, by matching against a class library, by forcing the same sequence of questions, but it does not eliminate it. The same engineer running the same workflow against the same repo six months later, after the model provider has updated the underlying LLM, will also see variance.",{"type":31,"tag":39,"props":715,"children":716},{},[717],{"type":37,"value":718},"For regulated environments where reproducibility matters — an auditor reviewing the model against a specific point-in-time version of the system — this is not just a variance concern but a reproducibility one. The model lives on disk and commits to the same repo as the codebase, so git carries the versioning for both together: a tag pins the commit, which pins the model, the codebase, and the timestamp. Pinning the LLM is not enough on its own. 
The class library and the ATT&CK\u002FD3FEND graphs version too, and reproducibility needs those recorded alongside the commit — a note in the commit message, a footer in the scope file, whatever the team's practice allows. There is no separate manifest in the tool; there is git and there is the author's discipline. If you want byte-identical models without that plumbing, you write them by hand, which brings the articulation barrier back. The trade is reducing variance while keeping articulation affordable, not zeroing it.",{"type":31,"tag":39,"props":720,"children":721},{},[722],{"type":37,"value":723},"And the quality floor is still set by the adjudicator. An engineer who cannot recognise a bad proposal will accept one, and a user who clicks through proposals without reading them reproduces the form failure one layer up — the workflow completes, the model is wrong, the failure has just moved from the input side to the adjudication side. Staged delegation cannot prevent this. What it offers is proposals that are small enough and specific enough that reading them costs less than ignoring them.",{"type":31,"tag":39,"props":725,"children":726},{},[727],{"type":37,"value":728},"There is a related limit worth naming, and it has two parts.",{"type":31,"tag":39,"props":730,"children":731},{},[732],{"type":37,"value":733},"The first is what the scout cannot see no matter how much access you grant it: systems behind credentials it does not hold, third-party SaaS whose APIs it cannot reach, human-process steps that form part of the real threat model (a Slack approval gate in a deployment pipeline), and anything that lives only in someone's head. Even with full cluster access, runtime-only behaviour — cron jobs buried in container entrypoints, sidecar injections from admission controllers, service meshes that rewrite traffic paths — is only partially visible: what the scout sees and what the system does at runtime are not the same set. 
These gaps do not close with more tooling; they close only with an author who notices and fills them in by hand.",{"type":31,"tag":39,"props":735,"children":736},{},[737,739,745,746,752,754,760,762,768,770,775,777,782,784,789],{"type":37,"value":738},"The second is what it can see but might not be allowed to. The scout reads the codebase by default, and if the workstation has ",{"type":31,"tag":182,"props":740,"children":742},{"className":741},[],[743],{"type":37,"value":744},"kubectl",{"type":37,"value":189},{"type":31,"tag":182,"props":747,"children":749},{"className":748},[],[750],{"type":37,"value":751},"aws",{"type":37,"value":753},", or ",{"type":31,"tag":182,"props":755,"children":757},{"className":756},[],[758],{"type":37,"value":759},"terraform",{"type":37,"value":761}," configured, it can introspect live infrastructure through read-only commands and pick up runtime-only components that never appear in source. Whether you grant it that access is a trust decision, not a default. An LLM running ",{"type":31,"tag":182,"props":763,"children":765},{"className":764},[],[766],{"type":37,"value":767},"aws describe",{"type":37,"value":769}," against a production account is not a choice to make casually, and the answers address two different risks. Read-only roles and test-environment restrictions mitigate the ",{"type":31,"tag":43,"props":771,"children":772},{},[773],{"type":37,"value":774},"blast radius",{"type":37,"value":776},": even if the agent misbehaves, it cannot mutate state it was not given permission to mutate. Pre-extracting the data and handing the agent a file is a different mitigation — it addresses the ",{"type":31,"tag":43,"props":778,"children":779},{},[780],{"type":37,"value":781},"data egress",{"type":37,"value":783}," risk. 
Everything the scout reads, including ",{"type":31,"tag":182,"props":785,"children":787},{"className":786},[],[788],{"type":37,"value":744},{"type":37,"value":790}," stdout, IAM role names, security group rules, S3 ARNs, along with the Terraform code and the source itself, is sent to whoever hosts the model as context. Read-only does not mean read-nothing-sensitive.",{"type":31,"tag":39,"props":792,"children":793},{},[794],{"type":37,"value":795},"Which model provider sees that context is part of the workflow decision, not just the procurement decision. A regulated shop picks along three axes: self-hosted inference (no third-party provider sees the context), redaction at the scout boundary (identifiers, secrets, and customer data stripped before they leave the workstation), or scope restriction (point the agent only at what you are already willing to send outside the perimeter). This is true of every AI-assisted workflow, not just this tool.",{"type":31,"tag":39,"props":797,"children":798},{},[799,801,807],{"type":37,"value":800},"A related risk neither half of the split addresses: the agent can be manipulated by its inputs. An IaC comment, a Dockerfile, or ",{"type":31,"tag":182,"props":802,"children":804},{"className":803},[],[805],{"type":37,"value":806},"kubectl describe",{"type":37,"value":808}," output in a compromised repo is attacker-controlled in adversarial settings, and a prompt-injection payload riding that input is something neither read-only roles nor pre-extraction prevents. 
It lives in the same class of problem as malicious pull-request reviews and needs the same kinds of defence — input sanitisation, scope limits on what tools the agent can call on what it reads — with the caveat that sanitisation here is best-effort: LLM context has no parameterised-query equivalent.",{"type":31,"tag":39,"props":810,"children":811},{},[812],{"type":37,"value":813},"The workflow is only as good as what the scout can see, given the access you are willing to grant it, plus what the author adds by hand.",{"type":31,"tag":39,"props":815,"children":816},{},[817],{"type":37,"value":818},"The trade buys three things: resumability, inspectability, composability. Each falls out of the state machine and the directory tree on disk. None of them survive in a free-form chat.",{"type":31,"tag":39,"props":820,"children":821},{},[822],{"type":37,"value":823},"Because the workflow is a state machine, the user can stop at any step, close the session, come back a week later, and pick up exactly where they left off. The progress table shows what is done, what is auto-skipped, what is current, and what is pending. When the session dies in a chat the context dies with it; here the context is the directory.",{"type":31,"tag":39,"props":825,"children":826},{},[827],{"type":37,"value":828},"Every step produces an output that lives on disk in a human-readable format. Scope is a JSON file. Discovery produces a structure file. Classification updates class fields. Enrichment writes per-element attribute files. The user can read any of it, edit any of it in a text editor, commit any of it to git, and point an auditor at any of it. 
The model is not a hidden state inside an agent; it is a directory tree on the user's disk, under their filesystem permissions, in their git history.",{"type":31,"tag":39,"props":830,"children":831},{},[832,834,840],{"type":37,"value":833},"And because the steps are semantically well-defined, the agent can auto-skip steps whose conditions are already met. If every discovered component matches a class unambiguously on the first pass, step six shows a green check and the LLM-assisted pass has nothing to resolve. If the user adds a component during enrichment, the state reverts to ",{"type":31,"tag":182,"props":835,"children":837},{"className":836},[],[838],{"type":37,"value":839},"STRUCTURE_COMPLETE",{"type":37,"value":841},", the new element gets flagged as stale, and enrichment re-runs only on the stale element. Staleness propagates: a changed component invalidates flows crossing it, which invalidates data items on those flows, and the agent computes the closure before re-running. The same logic holds across sessions — a developer opening a pull request two months later runs the workflow on the diff, and only the stale elements go through classification and enrichment again. None of this would work if the steps were just narrative waypoints in a long prompt.",{"type":31,"tag":134,"props":843,"children":845},{"id":844},"three-things-underneath",[846],{"type":37,"value":847},"Three things underneath",{"type":31,"tag":39,"props":849,"children":850},{},[851],{"type":37,"value":852},"This shape rests on three things the later pieces in the series will take on directly.",{"type":31,"tag":39,"props":854,"children":855},{},[856,858,863,865,870],{"type":37,"value":857},"First, a ",{"type":31,"tag":92,"props":859,"children":860},{},[861],{"type":37,"value":862},"graph-native backend",{"type":37,"value":864}," whose topology actually enforces state transitions. 
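One way to read 'enforces state transitions': a transition fires only when counting conditions over the graph come back clean. A toy sketch, with the node and edge shapes invented and the conditions reduced to two of the ones named below:

```python
# Toy transition gate: a state is reachable only when counting conditions
# over the graph hold. Node and edge shapes here are illustrative.

def structure_complete(graph):
    components = [n for n in graph['nodes'] if n['type'] == 'component']
    flows = [e for e in graph['edges'] if e['type'] == 'flow']
    node_ids = {n['id'] for n in graph['nodes']}
    # Every component belongs to exactly one trust boundary.
    boundaries_ok = all(len(c['boundaries']) == 1 for c in components)
    # No orphan flows: both endpoints must exist in the graph.
    flows_ok = all(e['src'] in node_ids and e['dst'] in node_ids for e in flows)
    return boundaries_ok and flows_ok

def advance(state, graph):
    # The transition fires from the graph condition, not from a stored flag.
    if state == 'DISCOVERY_DONE' and structure_complete(graph):
        return 'STRUCTURE_COMPLETE'
    return state
```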
A graph with typed nodes, typed edges, and enforced classes, where ",{"type":31,"tag":182,"props":866,"children":868},{"className":867},[],[869],{"type":37,"value":839},{"type":37,"value":871}," is not a flag on a document but a condition on the graph: every component belongs to exactly one boundary, no orphan flows, every flow has a classified source and target. The state machine above is the user-visible expression of invariants the graph already enforces.",{"type":31,"tag":39,"props":873,"children":874},{},[875,877,882],{"type":37,"value":876},"Second, a ",{"type":31,"tag":92,"props":878,"children":879},{},[880],{"type":37,"value":881},"modular analysis layer",{"type":37,"value":883}," on the platform. Analyses like attack path generation, compliance mapping, and control coverage read the same graph through the same interface and produce exposures — reachable attack paths, policy violations, control gaps — rather than proprietary output. Because MITRE ATT&CK and D3FEND are loaded into the same graph, each exposure is linked to the ATT&CK techniques an adversary would use against it and, where a countermeasure applies, to the D3FEND techniques that defend against it. \"Deterministic\" here means the mapping is a pure function of the graph state, given a fixed ruleset and taxonomy version. Not authoritative. Reproducible. 
Attacker and defender views meet on the same model, not in a spreadsheet next to it.",{"type":31,"tag":39,"props":885,"children":886},{"align":283},[887],{"type":31,"tag":286,"props":888,"children":891},{"src":889,"alt":890,"width":335},"\u002Fimages\u002Fdiagram-graph-fragment.svg","Exposures are written by the plugin; their links to ATT&CK and D3FEND techniques are added deterministically on the backend, closing the loop between attacker and defender views",[],{"type":31,"tag":39,"props":893,"children":894},{},[895,897,902],{"type":37,"value":896},"Third, a ",{"type":31,"tag":92,"props":898,"children":899},{},[900],{"type":37,"value":901},"role-separated agent architecture",{"type":37,"value":903}," where cross-agent permission boundaries are enforced at the tool layer rather than in the prompt. That last guarantee is narrower than it sounds — permissions stop one agent from doing another's work, they do not prevent any of them from being wrong — but it is real, and almost nothing else in the multi-agent space bothers to enforce it.",{"type":31,"tag":134,"props":905,"children":907},{"id":906},"the-paradigm-is-transferable",[908],{"type":37,"value":909},"The paradigm is transferable",{"type":31,"tag":39,"props":911,"children":912},{},[913],{"type":37,"value":914},"The shape is not really about threat modeling.",{"type":31,"tag":39,"props":916,"children":917},{},[918],{"type":37,"value":919},"Any expert-knowledge domain where a non-expert has to produce a structured artifact, on a pace that allows per-step supervision, runs into the same articulation barrier. Architecture review is the clearest example. An engineer proposing a design is asked to produce a component diagram, a trust-boundary analysis, a failure-mode table, and a written rationale in a specific format. The established tooling there — ADRs, C4 diagrams, RFC templates — gives them the outline but not the content. 
They cannot articulate what goes inside in one prompt, not because they lack skill but because the work of thinking about it is exactly what they are being asked to do. A guided workflow that delegates the legwork to specialists — one agent reads the codebase and proposes the component diagram, another walks the trust boundaries, a third scans for failure modes, a reviewer checks against architectural principles — does not turn them into a senior architect. It lets them produce a usable review by recognising good answers rather than recalling them, inside the editor they are already writing the design in. Compliance gap analysis, regulated-document drafting, and clinical decision support have the same shape. Real-time adversarial work does not: an analyst paged at two in the morning has no time for eleven supervised steps, and incident response is about moving faster than the attacker, not producing an inspectable artifact. The pattern is for slow expert-knowledge work.",{"type":31,"tag":39,"props":921,"children":922},{},[923],{"type":37,"value":924},"Staged delegation is one operational answer to the articulation barrier: a fixed outer workflow, specialist agents that articulate where they can, supervised proposals where the user has to adjudicate, delivered in the tool the user already has open. It generalizes to any domain where the artifact has enough internal structure to decompose. The hard part is not the technology. It is the discipline of saying no to the free-form chat box, to the general-purpose agent, and to the power-user shortcut.",{"type":31,"tag":39,"props":926,"children":927},{},[928],{"type":37,"value":929},"A better interface removes the excuse that the tool was in the way. It does not remove the rest. The incentive and loop problems named at the start are still there, untouched, and a team that does not threat-model will not start just because the tool got better. 
Those are different conversations, and this piece was about only one of them.",{"type":31,"tag":39,"props":931,"children":932},{},[933],{"type":37,"value":934},"The constraint is not a limitation to apologize for. It is what makes a language model useful for this kind of work.",{"type":31,"tag":936,"props":937,"children":940},"section",{"className":938,"dataFootnotes":7},[939],"footnotes",[941,948],{"type":31,"tag":134,"props":942,"children":945},{"className":943,"id":154},[944],"sr-only",[946],{"type":37,"value":947},"Footnotes",{"type":31,"tag":343,"props":949,"children":950},{},[951,979,992],{"type":31,"tag":347,"props":952,"children":954},{"id":953},"user-content-fn-1",[955,957,968,970],{"type":37,"value":956},"Jakob Nielsen, ",{"type":31,"tag":150,"props":958,"children":962},{"href":959,"rel":960},"https:\u002F\u002Fwww.uxtigers.com\u002Fpost\u002Fintent-ux",[961],"nofollow",[963],{"type":31,"tag":43,"props":964,"children":965},{},[966],{"type":37,"value":967},"Intent by Discovery: Designing the AI User Experience",{"type":37,"value":969},", March 26, 2026. \"Articulation barrier\" and \"intent by discovery\" are his terms, picked up here because they name the problem cleanly. The broader HCI tradition (mixed-initiative interaction, scaffolded workflows, progressive disclosure, wizard-style UIs) has been working this territory for decades and the debt is acknowledged. \"Staged delegation\" is used here to name a distinct resolution to the same problem. 
",{"type":31,"tag":150,"props":971,"children":976},{"href":972,"ariaLabel":973,"className":974,"dataFootnoteBackref":7},"#user-content-fnref-1","Back to reference 1",[975],"data-footnote-backref",[977],{"type":37,"value":978},"↩",{"type":31,"tag":347,"props":980,"children":982},{"id":981},"user-content-fn-2",[983,985],{"type":37,"value":984},"How the workflow produces artifacts an auditor will accept — the connection between attributes, the compliance taxonomies loaded in the graph, and what lands in an evidence bundle — is deferred to a later piece on the analysis layer. ",{"type":31,"tag":150,"props":986,"children":990},{"href":987,"ariaLabel":988,"className":989,"dataFootnoteBackref":7},"#user-content-fnref-2","Back to reference 2",[975],[991],{"type":37,"value":978},{"type":31,"tag":347,"props":993,"children":995},{"id":994},"user-content-fn-3",[996,998,1003,1005],{"type":37,"value":997},"The book-length articulation is Izar Tarandach and Matthew J. Coles, ",{"type":31,"tag":43,"props":999,"children":1000},{},[1001],{"type":37,"value":1002},"Threat Modeling: A Practical Guide for Development Teams",{"type":37,"value":1004}," (O'Reilly, 2020), and the continuous-threat-modeling community that grew around it. 
",{"type":31,"tag":150,"props":1006,"children":1010},{"href":1007,"ariaLabel":1008,"className":1009,"dataFootnoteBackref":7},"#user-content-fnref-3","Back to reference 3",[975],[1011],{"type":37,"value":978},{"title":7,"searchDepth":1013,"depth":1013,"links":1014},4,[1015,1017,1018,1019,1020,1021,1022,1023,1024,1025],{"id":136,"depth":1016,"text":139},2,{"id":265,"depth":1016,"text":268},{"id":319,"depth":1016,"text":322},{"id":512,"depth":1016,"text":515},{"id":554,"depth":1016,"text":557},{"id":656,"depth":1016,"text":659},{"id":690,"depth":1016,"text":693},{"id":844,"depth":1016,"text":847},{"id":906,"depth":1016,"text":909},{"id":154,"depth":1016,"text":947},"markdown","content:insights:eleven-steps-you-dont-type.md","content","insights\u002Feleven-steps-you-dont-type.md","insights\u002Feleven-steps-you-dont-type","md",{"loc":4},[1034,1513],{"_path":1035,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":1036,"description":1037,"date":1038,"author":1039,"image":1042,"audio":1043,"category":1044,"tags":1046,"body":1050,"_type":1026,"_id":1509,"_source":1028,"_file":1510,"_stem":1511,"_extension":1031,"sitemap":1512},"\u002Finsights\u002Fcu_chi_graph_warfare","Where the Map Ends","The Củ Chi tunnels were 250 kilometers of passages dug with shovels. The US had satellites. The tunnels worked because they exploited topology, not topography. 
Attackers still do.","2026-02-04",{"name":12,"headshot":13,"role":1040,"contact":1041},"Creator of Dethernety",{"linkedin":16,"email":17,"twitter":18},"\u002Fimages\u002Fcu_chi_graph_warfare-hero.jpg","\u002Faudio\u002Fcu_chi_graph_warfare.mp3",[1045],"Thinking in Graphs",[1047,1048,23,1049],"graph theory","security architecture","asymmetric warfare",{"type":28,"children":1051,"toc":1500},[1052,1057,1065,1069,1074,1079,1084,1103,1108,1114,1119,1129,1134,1139,1144,1150,1155,1160,1165,1188,1193,1198,1208,1213,1219,1224,1229,1234,1268,1273,1278,1284,1289,1297,1302,1307,1312,1320,1325,1330,1335,1343,1348,1353,1358,1364,1369,1379,1384,1389,1395,1400,1405,1415,1425,1442,1452,1458,1463,1468,1473,1478,1483,1486],{"type":31,"tag":32,"props":1053,"children":1055},{"id":1054},"where-the-map-ends",[1056],{"type":37,"value":1036},{"type":31,"tag":39,"props":1058,"children":1059},{},[1060],{"type":31,"tag":43,"props":1061,"children":1062},{},[1063],{"type":37,"value":1064},"A note on the analogy: The Vietnam War caused immense human suffering on all sides. This article examines the strategic and geometric principles at play, not to glorify conflict, but because the lessons about asymmetric warfare translate directly to how we think about defensive security today.",{"type":31,"tag":1066,"props":1067,"children":1068},"hr",{},[],{"type":31,"tag":39,"props":1070,"children":1071},{},[1072],{"type":37,"value":1073},"The US military in Vietnam had overwhelming superiority in every measurable category: air power, artillery, personnel, logistics, technology. They controlled the skies. They had satellites. They had the map.",{"type":31,"tag":39,"props":1075,"children":1076},{},[1077],{"type":37,"value":1078},"The Củ Chi tunnel network was approximately 250 kilometers of underground passages, built with shovels and baskets. 
By any resource accounting, it should not have posed a strategic challenge.",{"type":31,"tag":39,"props":1080,"children":1081},{},[1082],{"type":37,"value":1083},"Yet it did. The tunnels allowed fighters to appear inside fortified perimeters, strike, and vanish. Air superiority was irrelevant underground. Artillery couldn't hit what couldn't be located. The map was useless because the map only showed the surface.",{"type":31,"tag":39,"props":1085,"children":1086},{},[1087,1089,1094,1096,1101],{"type":37,"value":1088},"The US forces were fighting a war of ",{"type":31,"tag":92,"props":1090,"children":1091},{},[1092],{"type":37,"value":1093},"topography",{"type":37,"value":1095},": hills, fire zones, documented positions. The tunnel builders were fighting a war of ",{"type":31,"tag":92,"props":1097,"children":1098},{},[1099],{"type":37,"value":1100},"topology",{"type":37,"value":1102},": connectivity, hidden paths, network resilience.",{"type":31,"tag":39,"props":1104,"children":1105},{},[1106],{"type":37,"value":1107},"This distinction matters for cybersecurity because we are currently on the wrong side of it.",{"type":31,"tag":134,"props":1109,"children":1111},{"id":1110},"the-list-based-war",[1112],{"type":37,"value":1113},"The List-Based War",{"type":31,"tag":39,"props":1115,"children":1116},{},[1117],{"type":37,"value":1118},"The military measured success with quantifiable outputs: enemy casualties, territory swept, patrols completed. These are list-based metrics. They count items. They don't measure structural relationships.",{"type":31,"tag":39,"props":1120,"children":1121},{},[1122,1124],{"type":37,"value":1123},"The tunnel strategy operated on different mathematics entirely. The relevant questions were: ",{"type":31,"tag":43,"props":1125,"children":1126},{},[1127],{"type":37,"value":1128},"Can we connect point A to point B without being observed? If one route is compromised, does an alternative path exist? 
What is the minimum distance between our network and their high-value targets?",{"type":31,"tag":39,"props":1130,"children":1131},{},[1132],{"type":37,"value":1133},"These are graph questions. They concern edges, reachability, and path redundancy.",{"type":31,"tag":39,"props":1135,"children":1136},{},[1137],{"type":37,"value":1138},"The asymmetry was not primarily about courage or ideology. It was geometric. One side was optimizing for node elimination (body counts). The other side was optimizing for edge preservation (connectivity). When you destroy nodes in a resilient graph, the graph routes around them. When you preserve edges, you maintain operational capability regardless of individual losses.",{"type":31,"tag":39,"props":1140,"children":1141},{},[1142],{"type":37,"value":1143},"Modern security operations make the same mistake.",{"type":31,"tag":134,"props":1145,"children":1147},{"id":1146},"the-contemporary-version",[1148],{"type":37,"value":1149},"The Contemporary Version",{"type":31,"tag":39,"props":1151,"children":1152},{},[1153],{"type":37,"value":1154},"Nobody defends the \"castle-and-moat\" approach anymore. The response has been Zero Trust, micro-segmentation, identity-aware access controls.",{"type":31,"tag":39,"props":1156,"children":1157},{},[1158],{"type":37,"value":1159},"These are real improvements. Sophisticated implementations do model relationships—identity graphs, device trust chains, behavioral baselines. But most organizations don't get there. 
They end up implementing Zero Trust as a list-based exercise.",{"type":31,"tag":39,"props":1161,"children":1162},{},[1163],{"type":37,"value":1164},"Consider the typical implementation:",{"type":31,"tag":343,"props":1166,"children":1167},{},[1168,1173,1178,1183],{"type":31,"tag":347,"props":1169,"children":1170},{},[1171],{"type":37,"value":1172},"Inventory all users (a list)",{"type":31,"tag":347,"props":1174,"children":1175},{},[1176],{"type":37,"value":1177},"Inventory all applications (a list)",{"type":31,"tag":347,"props":1179,"children":1180},{},[1181],{"type":37,"value":1182},"Define access policies mapping users to applications (a list of pairs)",{"type":31,"tag":347,"props":1184,"children":1185},{},[1186],{"type":37,"value":1187},"Enforce policies at access points (checkpoints)",{"type":31,"tag":39,"props":1189,"children":1190},{},[1191],{"type":37,"value":1192},"This is better than a single perimeter. But it still operates on the documented topology. It secures the surface.",{"type":31,"tag":39,"props":1194,"children":1195},{},[1196],{"type":37,"value":1197},"The attacker is not constrained to the surface.",{"type":31,"tag":39,"props":1199,"children":1200},{},[1201,1203],{"type":37,"value":1202},"When a threat actor compromises an identity, they inherit that identity's graph position. They don't care about your application inventory. They care about: ",{"type":31,"tag":43,"props":1204,"children":1205},{},[1206],{"type":37,"value":1207},"What service accounts does this application use? What databases do those accounts access? What other systems trust those databases? What credentials are cached on those systems?",{"type":31,"tag":39,"props":1209,"children":1210},{},[1211],{"type":37,"value":1212},"These questions trace paths through a graph that your access policy spreadsheet does not contain. The attacker is in the tunnels. 
Your checkpoints are above ground.",{"type":31,"tag":134,"props":1214,"children":1216},{"id":1215},"the-hidden-graph",[1217],{"type":37,"value":1218},"The Hidden Graph",{"type":31,"tag":39,"props":1220,"children":1221},{},[1222],{"type":37,"value":1223},"Every organization has two architectures: the one that was designed, and the one that actually exists.",{"type":31,"tag":39,"props":1225,"children":1226},{},[1227],{"type":37,"value":1228},"The designed architecture appears in network diagrams, access control matrices, compliance docs.",{"type":31,"tag":39,"props":1230,"children":1231},{},[1232],{"type":37,"value":1233},"The actual architecture includes:",{"type":31,"tag":1235,"props":1236,"children":1237},"ul",{},[1238,1243,1248,1253,1258,1263],{"type":31,"tag":347,"props":1239,"children":1240},{},[1241],{"type":37,"value":1242},"Service accounts with permissions granted \"temporarily\" years ago",{"type":31,"tag":347,"props":1244,"children":1245},{},[1246],{"type":37,"value":1247},"Trust relationships between systems established for a migration that completed in 2019",{"type":31,"tag":347,"props":1249,"children":1250},{},[1251],{"type":37,"value":1252},"API keys embedded in configuration files that no one remembers creating",{"type":31,"tag":347,"props":1254,"children":1255},{},[1256],{"type":37,"value":1257},"Network paths that exist because a firewall rule was never removed",{"type":31,"tag":347,"props":1259,"children":1260},{},[1261],{"type":37,"value":1262},"Identity federation chains that create transitive trust across security boundaries",{"type":31,"tag":347,"props":1264,"children":1265},{},[1266],{"type":37,"value":1267},"Identity tokens that allow traversing from a developer laptop to a production database via the cloud control plane, bypassing the network firewall entirely",{"type":31,"tag":39,"props":1269,"children":1270},{},[1271],{"type":37,"value":1272},"This is the tunnel network. It exists. It is traversable. 
It does not appear on the map.",{"type":31,"tag":39,"props":1274,"children":1275},{},[1276],{"type":37,"value":1277},"When we say \"the attacker moves laterally,\" we mean they're walking through tunnels our visibility tools don't show. They're exploiting edges in the actual graph, not the documented one.",{"type":31,"tag":134,"props":1279,"children":1281},{"id":1280},"the-economics-of-asymmetry",[1282],{"type":37,"value":1283},"The Economics of Asymmetry",{"type":31,"tag":39,"props":1285,"children":1286},{},[1287],{"type":37,"value":1288},"The economics explain why this asymmetry persists.",{"type":31,"tag":39,"props":1290,"children":1291},{},[1292],{"type":31,"tag":92,"props":1293,"children":1294},{},[1295],{"type":37,"value":1296},"Defense:",{"type":31,"tag":39,"props":1298,"children":1299},{},[1300],{"type":37,"value":1301},"Security budgets fund tools that operate on lists: asset inventories, vulnerability scanners, log aggregators, endpoint agents. These tools are expensive. They require licensing, integration, staffing, and maintenance.",{"type":31,"tag":39,"props":1303,"children":1304},{},[1305],{"type":37,"value":1306},"They produce more lists. Vulnerabilities to patch, alerts to triage, compliance gaps to close. The queue never empties.",{"type":31,"tag":39,"props":1308,"children":1309},{},[1310],{"type":37,"value":1311},"Each item on each list consumes resources. The marginal cost of processing the 10,000th alert is similar to processing the 1st. Scale does not help; it multiplies the problem.",{"type":31,"tag":39,"props":1313,"children":1314},{},[1315],{"type":31,"tag":92,"props":1316,"children":1317},{},[1318],{"type":37,"value":1319},"Offense:",{"type":31,"tag":39,"props":1321,"children":1322},{},[1323],{"type":37,"value":1324},"An attacker needs one viable path. Not a complete map. Not every vulnerability. 
One sequence of edges from their entry point to your valuable assets.",{"type":31,"tag":39,"props":1326,"children":1327},{},[1328],{"type":37,"value":1329},"Their resource expenditure is proportional to path discovery, not comprehensive coverage. They can ignore 99% of your infrastructure if the 1% they need is traversable.",{"type":31,"tag":39,"props":1331,"children":1332},{},[1333],{"type":37,"value":1334},"This is the tunnel economics. Dig where it matters. Ignore the rest.",{"type":31,"tag":39,"props":1336,"children":1337},{},[1338],{"type":31,"tag":92,"props":1339,"children":1340},{},[1341],{"type":37,"value":1342},"The gap:",{"type":31,"tag":39,"props":1344,"children":1345},{},[1346],{"type":37,"value":1347},"We spend resources enumerating and monitoring surfaces. Attackers spend resources discovering and traversing graphs. Our tool categories—SIEM, EDR, CSPM, CNAPP—are built for surface visibility. Graph traversal capability exists, but it's rarely central to security operations.",{"type":31,"tag":39,"props":1349,"children":1350},{},[1351],{"type":37,"value":1352},"The US military had functionally unlimited resources compared to tunnel construction costs. Irrelevant—because those resources went to surface control while the adversary operated underground.",{"type":31,"tag":39,"props":1354,"children":1355},{},[1356],{"type":37,"value":1357},"Same with security budgets. The advantage disappears if the money flows to list-processing while adversaries navigate graphs.",{"type":31,"tag":134,"props":1359,"children":1361},{"id":1360},"the-tunnel-rat-problem",[1362],{"type":37,"value":1363},"The Tunnel Rat Problem",{"type":31,"tag":39,"props":1365,"children":1366},{},[1367],{"type":37,"value":1368},"The only effective counter to the tunnel network was direct engagement: soldiers who entered the tunnels and fought in the graph, node by node. 
This was dangerous, slow, and required entirely different skills than surface warfare.",{"type":31,"tag":39,"props":1370,"children":1371},{},[1372,1374],{"type":37,"value":1373},"The security equivalent is threat hunting. The real kind—not triaging alerts or processing vulnerability reports, but actually investigating graph relationships. ",{"type":31,"tag":43,"props":1375,"children":1376},{},[1377],{"type":37,"value":1378},"Why does this identity have a path to that system? What would an attacker do from this position? Which edges shouldn't exist?",{"type":31,"tag":39,"props":1380,"children":1381},{},[1382],{"type":37,"value":1383},"This is resource-intensive. It requires analysts who think in graphs, not checklists, and tooling that models relationships, not inventories.",{"type":31,"tag":39,"props":1385,"children":1386},{},[1387],{"type":37,"value":1388},"Most security teams can't sustain it. The list-processing workload consumes all available capacity. The tunnel rat function is either absent or permanently understaffed.",{"type":31,"tag":134,"props":1390,"children":1392},{"id":1391},"mapping-the-subsurface",[1393],{"type":37,"value":1394},"Mapping the Subsurface",{"type":31,"tag":39,"props":1396,"children":1397},{},[1398],{"type":37,"value":1399},"So how do you get tunnel rat capability when you can't staff tunnel rats?",{"type":31,"tag":39,"props":1401,"children":1402},{},[1403],{"type":37,"value":1404},"You make the graph visible. Instead of asking analysts to manually reconstruct attack paths every time, you give them tooling that models the actual architecture. The work shifts from \"discover the tunnels\" to \"decide which tunnels matter.\"",{"type":31,"tag":39,"props":1406,"children":1407},{},[1408,1413],{"type":31,"tag":92,"props":1409,"children":1410},{},[1411],{"type":37,"value":1412},"Accept that two architectures exist.",{"type":37,"value":1414}," Your documentation describes one. Reality contains another. 
Until you model the actual graph, your security controls address a system that does not quite exist.",{"type":31,"tag":39,"props":1416,"children":1417},{},[1418,1423],{"type":31,"tag":92,"props":1419,"children":1420},{},[1421],{"type":37,"value":1422},"Shift observability from nodes to edges.",{"type":37,"value":1424}," Asset inventory answers \"What do we have?\" Identity inventory answers \"Who are our users?\" Neither answers \"What can reach what, and how?\" Graph databases model relationships as first-class entities—which makes reachability a question you can actually ask.",{"type":31,"tag":39,"props":1426,"children":1427},{},[1428,1433,1435,1440],{"type":31,"tag":92,"props":1429,"children":1430},{},[1431],{"type":37,"value":1432},"Prioritize by risk, not by count.",{"type":37,"value":1434}," Some edges connect low-value nodes through paths that never reach critical assets. Others provide one-hop access to crown jewels. The question is: ",{"type":31,"tag":43,"props":1436,"children":1437},{},[1438],{"type":37,"value":1439},"What's the risk reduction per dollar spent removing this edge versus that one?",{"type":37,"value":1441}," The graph structure tells you which edges matter. Without the graph, you're guessing.",{"type":31,"tag":39,"props":1443,"children":1444},{},[1445,1450],{"type":31,"tag":92,"props":1446,"children":1447},{},[1448],{"type":37,"value":1449},"Collapse unnecessary tunnels.",{"type":37,"value":1451}," Unused permissions nobody remembers granting. Trust relationships left over from a 2019 migration. Service accounts with admin rights to half your infrastructure. These serve no business purpose but remain traversable. Find them. Remove them. 
That's structural risk reduction, not just policy enforcement.",{"type":31,"tag":134,"props":1453,"children":1455},{"id":1454},"the-geometric-lesson",[1456],{"type":37,"value":1457},"The Geometric Lesson",{"type":31,"tag":39,"props":1459,"children":1460},{},[1461],{"type":37,"value":1462},"The tunnel network succeeded not because of superior resources, but because it operated on a different geometric plane. Topography (the surface) was contested. Topology (the connections) was not.",{"type":31,"tag":39,"props":1464,"children":1465},{},[1466],{"type":37,"value":1467},"Modern attackers don't compete with our surface tools. They navigate edges we haven't mapped.",{"type":31,"tag":39,"props":1469,"children":1470},{},[1471],{"type":37,"value":1472},"We can keep increasing security budgets, deploying more list-processing tools, hiring more analysts to triage alerts. Or we can acknowledge that the fight is happening in the graph—and that controlling connectivity matters more than controlling territory.",{"type":31,"tag":39,"props":1474,"children":1475},{},[1476],{"type":37,"value":1477},"Until we model the actual graph of trust and access in our environments, we're deploying artillery against an enemy who isn't on the surface.",{"type":31,"tag":39,"props":1479,"children":1480},{},[1481],{"type":37,"value":1482},"The graph exists. It's discoverable. 
The tunnels can be mapped before someone else walks through them.",{"type":31,"tag":1066,"props":1484,"children":1485},{},[],{"type":31,"tag":39,"props":1487,"children":1488},{},[1489,1491,1498],{"type":37,"value":1490},"This article was originally published on ",{"type":31,"tag":150,"props":1492,"children":1495},{"href":1493,"rel":1494},"https:\u002F\u002Fmedium.com\u002F@levente.simon\u002Ftopology-beats-topography-what-the-c%E1%BB%A7-chi-tunnels-teach-us-about-graph-based-security-f482f25f687f",[961],[1496],{"type":37,"value":1497},"Medium",{"type":37,"value":1499},".",{"title":7,"searchDepth":1013,"depth":1013,"links":1501},[1502,1503,1504,1505,1506,1507,1508],{"id":1110,"depth":1016,"text":1113},{"id":1146,"depth":1016,"text":1149},{"id":1215,"depth":1016,"text":1218},{"id":1280,"depth":1016,"text":1283},{"id":1360,"depth":1016,"text":1363},{"id":1391,"depth":1016,"text":1394},{"id":1454,"depth":1016,"text":1457},"content:insights:cu_chi_graph_warfare.md","insights\u002Fcu_chi_graph_warfare.md","insights\u002Fcu_chi_graph_warfare",{"loc":1035},{"_path":1514,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":1515,"description":1516,"date":1517,"author":1518,"image":1520,"audioLabel":1521,"audio":1522,"category":1523,"tags":1524,"body":1528,"_type":1026,"_id":1789,"_source":1028,"_file":1790,"_stem":1791,"_extension":1031,"sitemap":1792},"\u002Finsights\u002Fdethernety-podcast","Dethernety Podcast: Threat Modeling is a Graph Problem","A 20-minute podcast covering Dethernety's graph-native architecture, MITRE ATT&CK integration, hybrid AI tiers, and how the platform secures its own infrastructure.","2026-03-11",{"name":12,"headshot":13,"role":1040,"contact":1519},{"linkedin":16,"email":17,"twitter":18},"\u002Fimages\u002Fdethernety-podcast.jpg","AI-generated podcast","\u002Faudio\u002Fthreat_modeling_graph_problem.mp3",[1045,21],[23,1047,1525,1526,1527],"dethernety","MITRE 
ATT&CK","neo4j",{"type":28,"children":1529,"toc":1787},[1530,1535,1543,1548,1567],{"type":31,"tag":32,"props":1531,"children":1533},{"id":1532},"dethernety-podcast-threat-modeling-is-a-graph-problem",[1534],{"type":37,"value":1515},{"type":31,"tag":39,"props":1536,"children":1537},{},[1538],{"type":31,"tag":43,"props":1539,"children":1540},{},[1541],{"type":37,"value":1542},"AI-generated podcast. Listen to the full episode using the audio player above.",{"type":31,"tag":39,"props":1544,"children":1545},{},[1546],{"type":37,"value":1547},"Most threat modeling still happens in spreadsheets and relational databases built for inventory tracking, not attack path analysis. This episode breaks down why that's an architectural mismatch and what a graph-native alternative looks like in practice.",{"type":31,"tag":39,"props":1549,"children":1550},{},[1551,1558,1560],{"type":31,"tag":150,"props":1552,"children":1555},{"href":1553,"rel":1554},"https:\u002F\u002Fdether.net",[961],[1556],{"type":37,"value":1557},"dether.net",{"type":37,"value":1559}," | ",{"type":31,"tag":150,"props":1561,"children":1564},{"href":1562,"rel":1563},"https:\u002F\u002Fgithub.com\u002Fdether-net\u002Fdethernety-oss",[961],[1565],{"type":37,"value":1566},"GitHub",{"type":31,"tag":1568,"props":1569,"children":1570},"transcript",{},[1571,1576,1581,1586,1591,1596,1601,1606,1611,1616,1621,1626,1631,1636,1641,1646,1651,1656,1661,1666,1671,1689,1694,1699,1704,1709,1714,1719,1724,1729,1734,1739,1744,1749,1754,1759,1777,1782],{"type":31,"tag":39,"props":1572,"children":1573},{},[1574],{"type":37,"value":1575},"This is the transcript of an AI-generated podcast episode about the Dethernety platform. The podcast is available as an audio player on the article page. 
The content below is a normalized summary of what the two podcast hosts discuss in the episode.",{"type":31,"tag":39,"props":1577,"children":1578},{},[1579],{"type":37,"value":1580},"CORE ARGUMENT: THREAT MODELING IS A GRAPH PROBLEM",{"type":31,"tag":39,"props":1582,"children":1583},{},[1584],{"type":37,"value":1585},"Traditional security tools use relational databases. For business logic like payroll or inventory, relational databases work fine. But in cyber security, the relationships between components matter more than the components themselves.",{"type":31,"tag":39,"props":1587,"children":1588},{},[1589],{"type":37,"value":1590},"An attacker starts at a public-facing web server, finds a vulnerability, moves laterally through a firewall, exploits an internal API, and exfiltrates data from a customer database. Mapping that path across relational tables requires SQL JOIN operations that become clunky and resource-heavy with highly interconnected data. A ten-hop attack path in SQL is called \"join hell\" in the industry.",{"type":31,"tag":39,"props":1592,"children":1593},{},[1594],{"type":37,"value":1595},"GRAPH-NATIVE ARCHITECTURE",{"type":31,"tag":39,"props":1597,"children":1598},{},[1599],{"type":37,"value":1600},"Dethernety uses graph databases (Neo4j, Memgraph) instead of relational databases. Data exists as nodes (entities like servers, users) and edges (relationships between them). Relationships are stored physically on disk as first-class citizens, not computed at query time.",{"type":31,"tag":39,"props":1602,"children":1603},{},[1604],{"type":37,"value":1605},"Scalability concern: graph databases historically had scaling issues, but modern graph databases are optimized for traversing connections at large scale.",{"type":31,"tag":39,"props":1607,"children":1608},{},[1609],{"type":37,"value":1610},"Code comparison: tracing a multi-hop attacker path through a corporate network requires about 150 lines of recursive SQL. 
In Cypher (graph query language), the same result takes about 10 lines. That is 90% less code.",{"type":31,"tag":39,"props":1612,"children":1613},{},[1614],{"type":37,"value":1615},"Analogy: using a relational database for threat modeling is like understanding a family tree by reading a spreadsheet where names are linked by ID numbers across 10 tabs. A graph database is like looking at the visual family tree directly.",{"type":31,"tag":39,"props":1617,"children":1618},{},[1619],{"type":37,"value":1620},"WHY CONTEXT MATTERS",{"type":31,"tag":39,"props":1622,"children":1623},{},[1624],{"type":37,"value":1625},"Attackers traverse relationships. They find a misconfigured server, steal an admin credential, use it to bypass an internal firewall. Security tools that analyze components in isolation miss the attack path. The context itself is the vulnerability.",{"type":31,"tag":39,"props":1627,"children":1628},{},[1629],{"type":37,"value":1630},"MITRE FRAMEWORK INTEGRATION",{"type":31,"tag":39,"props":1632,"children":1633},{},[1634],{"type":37,"value":1635},"Dethernety integrates MITRE ATT&CK (catalog of known adversary tactics and techniques) and MITRE D3FEND (catalog of defensive countermeasures).",{"type":31,"tag":39,"props":1637,"children":1638},{},[1639],{"type":37,"value":1640},"Traditional workflow: an engineer manually cross-references a spreadsheet of servers against a MITRE PDF, guessing which techniques apply to which server.",{"type":31,"tag":39,"props":1642,"children":1643},{},[1644],{"type":37,"value":1645},"Dethernety workflow: a system component automatically links to an exposure, which links to an ATT&CK technique. 
Example: a web server flagged as missing TLS encryption generates an exposure node mapped to the MITRE technique for transmitted data manipulation.",{"type":31,"tag":39,"props":1647,"children":1648},{},[1649],{"type":37,"value":1650},"The defensive side works the same way: a component links to a control, which links to a countermeasure, which maps to a D3FEND technique. A network firewall maps to the MITRE technique for network traffic filtering.",{"type":31,"tag":39,"props":1652,"children":1653},{},[1654],{"type":37,"value":1655},"GAP ANALYSIS",{"type":31,"tag":39,"props":1657,"children":1658},{},[1659],{"type":37,"value":1660},"With this architecture, you can run a one-second query: \"show me all ATT&CK techniques in our network that have zero defensive controls.\" This replaces days of manual spreadsheet cross-referencing for audit preparation.",{"type":31,"tag":39,"props":1662,"children":1663},{},[1664],{"type":37,"value":1665},"MODULE ECOSYSTEM",{"type":31,"tag":39,"props":1667,"children":1668},{},[1669],{"type":37,"value":1670},"Dethernety uses JavaScript packages as native platform extensions (not superficial webhook integrations). Three types of module classes:",{"type":31,"tag":343,"props":1672,"children":1673},{},[1674,1679,1684],{"type":31,"tag":347,"props":1675,"children":1676},{},[1677],{"type":37,"value":1678},"Design classes: extend the elements you can model. 
Example: a module adding Kubernetes components that pull live security context from a cloud API.",{"type":31,"tag":347,"props":1680,"children":1681},{},[1682],{"type":37,"value":1683},"Analysis classes: extend how you analyze the graph.",{"type":31,"tag":347,"props":1685,"children":1686},{},[1687],{"type":37,"value":1688},"Issue classes: allow bi-directional syncing with external trackers like JIRA or GitHub.",{"type":31,"tag":39,"props":1690,"children":1691},{},[1692],{"type":37,"value":1693},"HYBRID AI ANALYSIS APPROACH",{"type":31,"tag":39,"props":1695,"children":1696},{},[1697],{"type":37,"value":1698},"Three tiers of analysis:",{"type":31,"tag":39,"props":1700,"children":1701},{},[1702],{"type":37,"value":1703},"Tier 1 — No AI (air-gapped\u002Fquery-based): for organizations with strict data sovereignty, defense contracts, or compliance requirements. Uses deterministic Cypher or GraphQL queries only. Fully predictable and audit-ready. No sensitive data leaves the environment.",{"type":31,"tag":39,"props":1705,"children":1706},{},[1707],{"type":37,"value":1708},"Tier 2 — Simple AI: a single LLM for everyday tasks like writing threat descriptions or recommending security controls for a specific node.",{"type":31,"tag":39,"props":1710,"children":1711},{},[1712],{"type":37,"value":1713},"Tier 3 — Multi-agent AI: uses orchestrators like LangGraph or CrewAI. Multiple specialized AI agents collaborate — one acts as a security architect, another as an attacker, a third as a compliance officer. They analyze the graph, debate the blueprint, and produce a detailed threat report. Can generate custom interactive dashboards using Vue.js.",{"type":31,"tag":39,"props":1715,"children":1716},{},[1717],{"type":37,"value":1718},"DETHERNETY STUDIO",{"type":31,"tag":39,"props":1720,"children":1721},{},[1722],{"type":37,"value":1723},"A built-in AI agent that generates production-ready JavaScript module classes from natural language prompts. 
Instead of waiting months for a vendor to support a new cloud service, teams describe the component they need and the AI writes the integration code.",{"type":31,"tag":39,"props":1725,"children":1726},{},[1727],{"type":37,"value":1728},"USER INTERFACES",{"type":31,"tag":39,"props":1730,"children":1731},{},[1732],{"type":37,"value":1733},"All interfaces read from the same graph and MITRE integrations:",{"type":31,"tag":39,"props":1735,"children":1736},{},[1737],{"type":37,"value":1738},"GUI: visual interactive canvas for security teams and architects. Drag-and-drop components, draw data flows, review auto-generated exposures in real time.",{"type":31,"tag":39,"props":1740,"children":1741},{},[1742],{"type":37,"value":1743},"CLI: command-line interface for DevOps teams. Automates threat modeling as code in CI\u002FCD pipelines. Every code push triggers an automatic threat model update based on the delta from the last version. This is shift-left security — identifying threats during the build phase.",{"type":31,"tag":39,"props":1745,"children":1746},{},[1747],{"type":37,"value":1748},"MCP (Model Context Protocol): allows AI agents (like Claude Code) to interact directly with the threat model. An AI agent can discover infrastructure, import data or screenshots, and map architecture into the graph autonomously.",{"type":31,"tag":39,"props":1750,"children":1751},{},[1752],{"type":37,"value":1753},"PLATFORM SECURITY",{"type":31,"tag":39,"props":1755,"children":1756},{},[1757],{"type":37,"value":1758},"Dethernety's own infrastructure uses three layers of defense:",{"type":31,"tag":343,"props":1760,"children":1761},{},[1762,1767,1772],{"type":31,"tag":347,"props":1763,"children":1764},{},[1765],{"type":37,"value":1766},"Blast radius isolation: dedicated siloed resources per customer with strict IAM. 
A compromise of one customer environment cannot spread to another at the network level.",{"type":31,"tag":347,"props":1768,"children":1769},{},[1770],{"type":37,"value":1771},"Immutable infrastructure: runs on a stripped-down container-optimized OS. No patching allowed. Instead of patching (which causes configuration drift over time), containers are torn down and replaced with newly built secure versions. This is a \"replace, don't repair\" philosophy that guarantees a known-good state.",{"type":31,"tag":347,"props":1773,"children":1774},{},[1775],{"type":37,"value":1776},"Zero trust access: triple request validation, cryptographic measures to prevent DNS hijacking, strict isolation of internal routing.",{"type":31,"tag":39,"props":1778,"children":1779},{},[1780],{"type":37,"value":1781},"FORWARD-LOOKING QUESTION",{"type":31,"tag":39,"props":1783,"children":1784},{},[1785],{"type":37,"value":1786},"If MCP allows AI agents to read and comprehend an entire corporate infrastructure graph in milliseconds, how long until those agents are not just identifying threats but actively redesigning and deploying self-healing architectures without human intervention?",{"title":7,"searchDepth":1013,"depth":1013,"links":1788},[],"content:insights:dethernety-podcast.md","insights\u002Fdethernety-podcast.md","insights\u002Fdethernety-podcast",{"loc":1514},1776721603560]