[{"data":1,"prerenderedAt":776},["ShallowReactive",2],{"reference-\u002Freferences\u002Ftechnical-inm-unified-operations":3,"related-ref-\u002Freferences\u002Ftechnical-inm-unified-operations":329},{"_path":4,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":8,"description":9,"date":10,"period":11,"sector":12,"scale":13,"role":14,"mandate":15,"category":16,"tags":17,"body":24,"_type":322,"_id":323,"_source":324,"_file":325,"_stem":326,"_extension":327,"sitemap":328},"\u002Freferences\u002Ftechnical-inm-unified-operations","references",false,"","Technical foundations for iNM Unified Operations at EUROCONTROL","Digital Platform redesign and four technical-foundation workstreams — observability, CMDB, target operating model, and software supply chain — inside EUROCONTROL's integrated Network Manager programme.","2023-01-01","2023 – 2025","Air traffic management, European regulated body","Unified Operations programme for a pan-European safety-adjacent platform","Senior architecture consultant, via ATOS to EUROCONTROL iNM","Deliver the technical designs that would let the Unified Operations programme stand up an enterprise-grade operations capability for the integrated Network Manager.","Technical",[18,19,20,21,22,23],"architecture","observability","CMDB","CI\u002FCD","target operating model","regulated environments",{"type":25,"children":26,"toc":311},"root",[27,36,42,47,52,58,63,69,74,79,85,96,101,106,116,126,136,146,151,156,162,197,203,208,213,218,223,229,234,239,244,249,254,258,267],{"type":28,"tag":29,"props":30,"children":32},"element","h2",{"id":31},"context",[33],{"type":34,"value":35},"text","Context",{"type":28,"tag":37,"props":38,"children":39},"p",{},[40],{"type":34,"value":41},"EUROCONTROL's integrated Network Manager (iNM) runs air traffic flow management for European airspace. 
It is a safety-adjacent, mission-critical platform under steady pressure to modernise without disturbing the operational system flying traffic every day.",{"type":28,"tag":37,"props":43,"children":44},{},[45],{"type":34,"value":46},"I joined the Unified Operations programme as a senior architecture consultant through ATOS. Unified Operations was the initiative to raise the technical operations side of iNM to enterprise-grade maturity. It covered the foundations an enterprise needs but that had not yet been put in place consistently for iNM: observability, configuration management, software supply chain, and the target operating model that ties those together.",{"type":28,"tag":37,"props":48,"children":49},{},[50],{"type":34,"value":51},"I worked five parallel tracks. Each had its own sponsors and its own scope. The most interesting part of the job was the ground between them.",{"type":28,"tag":29,"props":53,"children":55},{"id":54},"mandate",[56],{"type":34,"value":57},"Mandate",{"type":28,"tag":37,"props":59,"children":60},{},[61],{"type":34,"value":62},"Deliver the technical designs across five workstreams that each closed a gap in the iNM operational foundation. The designs had to be implementable by internal teams after I was gone.",{"type":28,"tag":29,"props":64,"children":66},{"id":65},"role",[67],{"type":34,"value":68},"Role",{"type":28,"tag":37,"props":70,"children":71},{},[72],{"type":34,"value":73},"Senior architecture consultant, engaged via ATOS on the iNM Unified Operations programme. I led the Digital Platform design activity until I left the engagement, and contributed as an individual on the other tracks. I worked alongside EUROCONTROL engineers and architects and other ATOS consultants on the same programme. 
For a short stretch I also acted as head of the DevOps team, which gave me a closer look at the function the target operating model had to describe.",{"type":28,"tag":37,"props":75,"children":76},{},[77],{"type":34,"value":78},"The engagement shape matters for how the work got done. You contribute, you do not own. The artefact is the deliverable, and it has to survive without you in the room.",{"type":28,"tag":29,"props":80,"children":82},{"id":81},"approach",[83],{"type":34,"value":84},"Approach",{"type":28,"tag":37,"props":86,"children":87},{},[88,94],{"type":28,"tag":89,"props":90,"children":91},"strong",{},[92],{"type":34,"value":93},"Digital Platform redesign.",{"type":34,"value":95}," The largest track, and the one I led. A Digital Platform was already in place when I arrived, but it was not operationally viable: a monolith where components shipped on a single version line, no clear upgrade path, insufficient observability, heavy manual overhead, high run cost, and lock-in to a specific container platform. The redesign was the response, structured around a two-phase move. A tactical phase that deconstructed the monolith into independently manageable components and externalised the shared services — secrets, observability, load balancing, identity — while holding a hard constraint of zero impact on the Digital Products running on top. And a strategic phase that shifted the platform to cloud-native managed services, GitOps-driven deployment, and a multi-cloud posture with a second hyperscaler for disaster recovery.",{"type":28,"tag":37,"props":97,"children":98},{},[99],{"type":34,"value":100},"The design work itself had three layers that fit together. An architecture framework written as a set of chapters, each covering one dimension of the platform: security, resilience, disaster recovery, cost, evolution, integration. 
A modernisation and cost-optimisation strategy that gave the framework a direction of travel and took the run-cost problem on explicitly, with a tiered-resource model per environment and a shared-services shift that drove most of the projected saving. And high-level designs at the cloud-platform and tenant-network layers that turned the framework into buildable artefacts, including concept work on deployment patterns and the sunsetting of the incumbent product-operator model.",{"type":28,"tag":37,"props":102,"children":103},{},[104],{"type":34,"value":105},"Alongside the written design I built a PoC on Terraform, Vault, and Kubernetes to validate the secrets-management and provisioning flows the framework assumed. The redesign also carried an organisation proposal: merging two previously separate operations teams into a single platform competence centre with a common backlog that reconciled the existing ITIL posture with a Scaled Agile delivery model. The team side ran in parallel: job descriptions for the Digital Platform design-and-implementation team and evaluations on the candidate pipeline. The track was in flight when I left; the team I had been building was the one that would carry it.",{"type":28,"tag":37,"props":107,"children":108},{},[109,114],{"type":28,"tag":89,"props":110,"children":111},{},[112],{"type":34,"value":113},"Observability.",{"type":34,"value":115}," The track opened with a strategy question: what stack, and how does it fit with what iNM already runs? I built an Elastic-stack PoC before writing the strategy document. The PoC ran against a representative slice of the platform and answered the architecture question faster than a paper could have. The strategy document came after and carried less weight than the PoC did. 
I also wrote a short analysis of where Instana could complement the Elastic stack, so the strategy did not foreclose a commercial-tool option the programme might want later.",{"type":28,"tag":37,"props":117,"children":118},{},[119,124],{"type":28,"tag":89,"props":120,"children":121},{},[122],{"type":34,"value":123},"CMDB.",{"type":34,"value":125}," The design anchored on ServiceNow CSDM. That framework choice saved me from defending the structure from first principles. The energy went into the iNM-specific content: Kubernetes and OpenShift class modelling, the technical-services decomposition, and the CI design that would let discovery and automation do real work once implemented. I wrote the design as layered documents: high-level requirements, CI design, and technical-services mapping. Each audience could read the part they needed without having to read the whole thing.",{"type":28,"tag":37,"props":127,"children":128},{},[129,134],{"type":28,"tag":89,"props":130,"children":131},{},[132],{"type":34,"value":133},"Target operating model.",{"type":34,"value":135}," The R&R work was the messy one. iNM is served by multiple organisations and functions. A target operating model across that topology has to survive sponsors who each think their own function is the one that should grow. I wrote the org blueprint, a competence matrix for the iNM digital platform, and the job descriptions for the critical roles: senior SRE, DevOps engineer, operations architect, head of DevOps. The job descriptions were the artefact that travelled furthest. 
A role written with real depth gets hired against; a role written with generic language gets watered down in recruitment, and the team you end up with reflects that.",{"type":28,"tag":37,"props":137,"children":138},{},[139,144],{"type":28,"tag":89,"props":140,"children":141},{},[142],{"type":34,"value":143},"Software supply chain.",{"type":34,"value":145}," Two HLDs: a Nexus proxy for artefact management, and a Jenkins-based CD toolset for deployments into the iNM environments. The environment model mattered here. OPSTEST and OPS are separated for good reasons, and the CD pipeline had to encode that separation in its deployment flows rather than work around it. I also ran a Safety Support Assessment pass on the CD toolset to check the design against the regulatory posture. In iNM that check is part of the design conversation, not a downstream gate.",{"type":28,"tag":37,"props":147,"children":148},{},[149],{"type":34,"value":150},"The through-line across the five tracks was coherence. Each design had to be internally consistent and also consistent with the others. The Digital Platform defined the environment the other four tracks served. The CMDB class model had to carry the assets the observability stack would monitor on that platform. The observability stack was supposed to read signals from workloads the CD toolset would deploy onto that platform. Roles defined in the target operating model were the ones that would own the platform once it landed. Move any one of those pieces and the others shift with it. Leading the Digital Platform track gave me direct control over the spine; on the other four I noticed where they leaned on each other and worked the seams where I could.",{"type":28,"tag":37,"props":152,"children":153},{},[154],{"type":34,"value":155},"Alongside the design work, a material share of the role was landing the designs with the right audiences. 
Each track had its own review cadence, from internal engineers and architects at one end up to CTO-level readouts at the other, with counterparts on the vendor side in between. Executive reviews were a different exercise. By that point the design being right was a given; the question was whether the programme could commit to what the design implied.",{"type":28,"tag":29,"props":157,"children":159},{"id":158},"deliverables",[160],{"type":34,"value":161},"Deliverables",{"type":28,"tag":163,"props":164,"children":165},"ul",{},[166,172,177,182,187,192],{"type":28,"tag":167,"props":168,"children":169},"li",{},[170],{"type":34,"value":171},"Digital Platform design body of work: an architecture framework written across the platform's major dimensions (security, resilience, disaster recovery, cost, evolution, integration), a modernisation and cost-optimisation strategy, cloud-platform and tenant-network HLDs for the mission-critical environment, and a working Terraform\u002FVault\u002FKubernetes PoC for the secrets-management and provisioning flows the framework depended on.",{"type":28,"tag":167,"props":173,"children":174},{},[175],{"type":34,"value":176},"Digital Platform team build: job descriptions for the Digital Platform design-and-implementation team and evaluations on the candidate pipeline.",{"type":28,"tag":167,"props":178,"children":179},{},[180],{"type":34,"value":181},"Elastic-stack observability PoC, working against representative workloads, plus a written strategy and an analysis of Instana as a complementary commercial tool.",{"type":28,"tag":167,"props":183,"children":184},{},[185],{"type":34,"value":186},"ServiceNow CSDM-aligned CMDB design, layered across high-level requirements, CI design, and technical-services mapping, with dedicated modelling for the container platforms.",{"type":28,"tag":167,"props":188,"children":189},{},[190],{"type":34,"value":191},"Target operating model for the iNM digital platform: org blueprint, R&R matrix across 
organisations and functions, competence matrix, and job descriptions for the critical roles.",{"type":28,"tag":167,"props":193,"children":194},{},[195],{"type":34,"value":196},"HLDs for the software supply chain: Nexus proxy and Jenkins-based CD toolset, with environment-specific deployment flows for OPSTEST and OPS, and a Safety Support Assessment view on the toolset design.",{"type":28,"tag":29,"props":198,"children":200},{"id":199},"what-made-it-hard",[201],{"type":34,"value":202},"What made it hard",{"type":28,"tag":37,"props":204,"children":205},{},[206],{"type":34,"value":207},"The regulated posture shaped everything. Routine-change definitions were a real artefact, not a formality. Maintenance windows were governed by safety assessments. A design that ignored that topology would not land, regardless of how clean it looked on paper. The regulatory shape of the platform had to be read into every track.",{"type":28,"tag":37,"props":209,"children":210},{},[211],{"type":34,"value":212},"The five tracks moved in parallel and drifted apart if you let them. Each had its own sponsors, its own review cadence, its own deliverable dates. Holding coherence across them was a second job on top of the first, and it was only partly anyone's formal responsibility. I did it because the work needed it done.",{"type":28,"tag":37,"props":214,"children":215},{},[216],{"type":34,"value":217},"The Digital Platform track was the highest-stakes of the five. Replacing an incumbent platform in a large regulated organisation is not a decision that gets taken quickly, and commitment to the redesign came in stages. Part of the work was remaking the case for it in forums where the technical argument was only one input among several.",{"type":28,"tag":37,"props":219,"children":220},{},[221],{"type":34,"value":222},"Consulting distance was its own constraint. You can write a design that is technically correct and watch it sit on a shelf. 
The designs that moved were the ones I spent conversation time on, not just writing time. A good design with a bad review conversation lands worse than an average design with a good one.",{"type":28,"tag":29,"props":224,"children":226},{"id":225},"what-i-took-from-it",[227],{"type":34,"value":228},"What I took from it",{"type":28,"tag":37,"props":230,"children":231},{},[232],{"type":34,"value":233},"Three things stuck.",{"type":28,"tag":37,"props":235,"children":236},{},[237],{"type":34,"value":238},"One: parallel workstreams have a coherence problem that sits above any one of them. Each track wants to be internally consistent. The harder constraint is making them mutually consistent. If no one is watching the seams, the seams come apart, and by the time they do it is expensive to fix.",{"type":28,"tag":37,"props":240,"children":241},{},[242],{"type":34,"value":243},"Two: a running PoC changes a strategy conversation more than a strategy document does. On contested choices, the fastest way to settle the argument is to build the thing. The Elastic-stack PoC did more for the observability direction than the strategy paper that followed it.",{"type":28,"tag":37,"props":245,"children":246},{},[247],{"type":34,"value":248},"Three: a framework pays rent on your behalf. Anchoring the CMDB on CSDM meant I was not defending the structure from first principles. I was defending iNM-specific departures from a framework that was already accepted. That shift in frame saves weeks.",{"type":28,"tag":37,"props":250,"children":251},{},[252],{"type":34,"value":253},"And the residue. Writing for someone else's hands is different from writing for your own. A design owned by you and one you hand off on a consulting contract are not the same artefact. The second has to survive without its author, and that has to be designed into the writing itself. 
A lot of how I write designs now traces back to that second kind.",{"type":28,"tag":255,"props":256,"children":257},"hr",{},[],{"type":28,"tag":37,"props":259,"children":260},{},[261],{"type":28,"tag":262,"props":263,"children":264},"em",{},[265],{"type":34,"value":266},"Sources (public record on EUROCONTROL iNM and the programme context):",{"type":28,"tag":163,"props":268,"children":269},{},[270,285,298],{"type":28,"tag":167,"props":271,"children":272},{},[273],{"type":28,"tag":262,"props":274,"children":275},{},[276],{"type":28,"tag":277,"props":278,"children":282},"a",{"href":279,"rel":280},"https:\u002F\u002Fwww.eurocontrol.int\u002Fproject\u002Fintegrated-network-management",[281],"nofollow",[283],{"type":34,"value":284},"integrated Network Management (iNM) programme, EUROCONTROL",{"type":28,"tag":167,"props":286,"children":287},{},[288],{"type":28,"tag":262,"props":289,"children":290},{},[291],{"type":28,"tag":277,"props":292,"children":295},{"href":293,"rel":294},"https:\u002F\u002Fwww.eurocontrol.int\u002Fpress-release\u002Feurocontrol-indra-atos-cronos-design-next-gen-nm-ops-system",[281],[296],{"type":34,"value":297},"EUROCONTROL enters into new partnership with Indra and Atos-Cronos to design the next generation of Network Management operational systems",{"type":28,"tag":167,"props":299,"children":300},{},[301],{"type":28,"tag":262,"props":302,"children":303},{},[304],{"type":28,"tag":277,"props":305,"children":308},{"href":306,"rel":307},"https:\u002F\u002Fatos.net\u002Fen\u002F2024\u002Fpress-release_2024_11_26\u002Fatos-secures-e165-million-contract-extension-with-eurocontrol",[281],[309],{"type":34,"value":310},"Atos secures €165 million contract extension with EUROCONTROL, November 
2024",{"title":7,"searchDepth":312,"depth":312,"links":313},4,[314,316,317,318,319,320,321],{"id":31,"depth":315,"text":35},2,{"id":54,"depth":315,"text":57},{"id":65,"depth":315,"text":68},{"id":81,"depth":315,"text":84},{"id":158,"depth":315,"text":161},{"id":199,"depth":315,"text":202},{"id":225,"depth":315,"text":228},"markdown","content:references:technical-inm-unified-operations.md","content","references\u002Ftechnical-inm-unified-operations.md","references\u002Ftechnical-inm-unified-operations","md",{"loc":4},[330],{"_path":331,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":332,"description":333,"date":334,"period":335,"sector":336,"scale":337,"role":338,"mandate":339,"category":16,"tags":340,"body":347,"_type":322,"_id":772,"_source":324,"_file":773,"_stem":774,"_extension":327,"sitemap":775},"\u002Freferences\u002Ftechnical-dethernety","Dethernety: a graph-native threat modeling platform","A solo-built threat modeling platform where security models live in version control. The build process shifted partway through from traditional development to a spec-first, agent-reviewed, human-adjudicated workflow.","2024-01-01","2024 – present","Security tooling and AI-native development","Solo-built platform: multi-tier SaaS on AWS, Claude Code plugin, open-core monorepo","Founder and builder","Build a graph-native threat modeling platform usable by engineers day to day, designed commercially from the start with an open core.",[341,342,343,344,345,346],"threat modeling","AI-native development","graph databases","SaaS architecture","Claude Code","security",{"type":25,"children":348,"toc":758},[349,353,358,363,368,373,377,382,386,391,403,407,414,424,455,460,470,491,501,507,527,533,543,548,588,610,616,644,648,653,658,663,668,672,676,681,686,691,696,699,707],{"type":28,"tag":29,"props":350,"children":351},{"id":31},[352],{"type":34,"value":35},{"type":28,"tag":37,"props":354,"children":355},{},[356],{"type":34,"value":357},"Dethernety started as a side project. 
Several things lined up: a chance to sharpen my development, graph, security, cloud, and AI work in the same place; something I could use directly in client engagements; and a genuine product underneath if it landed. I went full focus when I closed my last consulting engagement in mid-2025.",{"type":28,"tag":37,"props":359,"children":360},{},[361],{"type":34,"value":362},"The existing tooling sat on the wrong foundation. Security is a graph problem — components, trust boundaries, attack paths, and controls all relate as a graph — and graph-native threat modeling did not exist. It still does not, beyond what I have built.",{"type":28,"tag":37,"props":364,"children":365},{},[366],{"type":34,"value":367},"The status quo is the part you might recognize. Threat modeling as most organizations practice it produces diagrams that sit on shelves: models that are not executable, not versionable in any real sense, and not connected to the code they describe. Security architects draw them once and move on. Engineers never see them again. The gap between \"we did threat modeling\" and \"our threat model reflects what we actually ship\" is where most of the risk lives.",{"type":28,"tag":37,"props":369,"children":370},{},[371],{"type":34,"value":372},"What I set out to build was a graph-native threat modeling platform that treats models as code and lives in the engineer's editor.",{"type":28,"tag":29,"props":374,"children":375},{"id":54},[376],{"type":34,"value":57},{"type":28,"tag":37,"props":378,"children":379},{},[380],{"type":34,"value":381},"A self-set one. Build Dethernety as a graph-native threat modeling platform that engineers can use day to day, not only security architects. Design it commercially from the start, with multiple plausible revenue paths in mind: SaaS in tiers, on-prem deployment, a module marketplace, supporting tools like Studio, and services around the product. Ship the SaaS first, with an open core. 
The open-source layer has to be genuinely useful on its own; the proprietary layer handles the infrastructure and provisioning nobody wants to solve themselves.",{"type":28,"tag":29,"props":383,"children":384},{"id":65},[385],{"type":34,"value":68},{"type":28,"tag":37,"props":387,"children":388},{},[389],{"type":34,"value":390},"Solo builder. Product, architecture, code, ops, documentation, every decision. No team, no cofounder, no external sponsor pushing for a specific direction. Every technical call is mine; every misjudgement is mine to recover from.",{"type":28,"tag":37,"props":392,"children":393},{},[394,396,401],{"type":34,"value":395},"The shape of solo building is not what it looks like from the outside. A material share of the work is deciding what ",{"type":28,"tag":262,"props":397,"children":398},{},[399],{"type":34,"value":400},"not",{"type":34,"value":402}," to do. Solo time is the most finite resource on the project, and there is always more work visible than can fit into it.",{"type":28,"tag":29,"props":404,"children":405},{"id":81},[406],{"type":34,"value":84},{"type":28,"tag":408,"props":409,"children":411},"h3",{"id":410},"platform-components",[412],{"type":34,"value":413},"Platform components",{"type":28,"tag":37,"props":415,"children":416},{},[417,422],{"type":28,"tag":89,"props":418,"children":419},{},[420],{"type":34,"value":421},"Backend.",{"type":34,"value":423}," The backend is a NestJS service exposing a GraphQL API with queries, mutations, and subscriptions. The domain model is a graph, so the storage is a graph: Neo4j or Memgraph holds the live model, with components, trust boundaries, data flows, attack paths, and countermeasures as first-class graph entities. 
GraphQL query definitions are shared across the platform's consumers: the web UI, the CLI, the Claude Code plugin, and the MCP server all pull from one source of truth.",{"type":28,"tag":37,"props":425,"children":426},{},[427,432,434,439,441,446,448,453],{"type":28,"tag":89,"props":428,"children":429},{},[430],{"type":34,"value":431},"Module ecosystem.",{"type":34,"value":433}," A module provides the classes of the system: ",{"type":28,"tag":262,"props":435,"children":436},{},[437],{"type":34,"value":438},"design classes",{"type":34,"value":440}," for the things being modeled, ",{"type":28,"tag":262,"props":442,"children":443},{},[444],{"type":34,"value":445},"analysis classes",{"type":34,"value":447}," for the lenses applied to them, and ",{"type":28,"tag":262,"props":449,"children":450},{},[451],{"type":34,"value":452},"issue classes",{"type":34,"value":454}," for issues and their integration with external trackers like GitHub or Jira. Modules are JavaScript libraries, so the integration surface is extensible: a new tracker means a new module, not a platform change. The MITRE ATT&CK and D3FEND frameworks are loaded as a graph; how a model's components link to specific techniques and countermeasures is decided by the module's logic on the relevant attributes, not by a platform default.",{"type":28,"tag":37,"props":456,"children":457},{},[458],{"type":34,"value":459},"Analysis runs at two levels. Component-level analyses evaluate one element at a time, and the engine is swappable per module: a generic module can use OPA\u002FRego, another can use static graph queries, others can do something different again. Model-level analyses operate across the whole graph, and that is where the graph-native shape matters most; an integrated LangGraph service is one of the paths a module can take for AI-assisted analyses. 
The first-party modules cover the core domain and the MITRE frameworks; custom rules ship as new modules, not platform forks.",{"type":28,"tag":37,"props":461,"children":462},{},[463,468],{"type":28,"tag":89,"props":464,"children":465},{},[466],{"type":34,"value":467},"Web frontend.",{"type":34,"value":469}," The web UI is a Vue 3 single-page application built around an interactive diagram, and the module system extends into it. Property panels are generated from module-defined JSON Schemas via JSONForms, so when a module ships new classes the form UI for those classes appears without a frontend release. Modules can also register custom Vue components at runtime, with the host application exposing the Vue runtime and composables so modules do not bundle their own. Vue Flow drives the data-flow editor, with hierarchical trust boundaries and direct assignment of MITRE techniques on diagram elements. Authentication is OIDC with PKCE against the usual identity providers, Cognito and Keycloak among them.",{"type":28,"tag":37,"props":471,"children":472},{},[473,478,480,489],{"type":28,"tag":89,"props":474,"children":475},{},[476],{"type":34,"value":477},"Dethereal plugin.",{"type":34,"value":479}," The platform's second frontend is a Claude Code plugin for building threat models, sitting in the engineer's editor. It replaces the blank-page problem of traditional threat modeling tools with a fixed eleven-step staged-delegation workflow, where each step is a specialist agent proposing changes the user adjudicates before anything persists. The assumption underneath: novice modellers cannot articulate what a threat model needs up front, but they can recognize good answers. A staged workflow with agent proposals moves the work from articulation to recognition, and that shift is the innovation. Models persist as disk files, resumable across sessions and committable to git. 
I wrote the plugin design up separately in ",{"type":28,"tag":277,"props":481,"children":483},{"href":482},"\u002Finsights\u002Feleven-steps-you-dont-type",[484],{"type":28,"tag":262,"props":485,"children":486},{},[487],{"type":34,"value":488},"Eleven Steps You Don't Type",{"type":34,"value":490},".",{"type":28,"tag":37,"props":492,"children":493},{},[494,499],{"type":28,"tag":89,"props":495,"children":496},{},[497],{"type":34,"value":498},"Studio.",{"type":34,"value":500}," Authoring modules has its own surface. Studio is a standalone application for designing, testing, and packaging modules: AI-assisted class generation through LangGraph pipelines, a form editor with live preview that renders classes the way end users will see them, Rego authoring with sample-input validation, and module packaging for deployment. Dethereal builds threat models out of existing modules; Studio builds the modules those threat models use.",{"type":28,"tag":408,"props":502,"children":504},{"id":503},"deployment",[505],{"type":34,"value":506},"Deployment",{"type":28,"tag":37,"props":508,"children":509},{},[510,515,517,526],{"type":28,"tag":89,"props":511,"children":512},{},[513],{"type":34,"value":514},"Multi-tenant SaaS on AWS, designed for compromise.",{"type":34,"value":516}," The SaaS side is built on the assumption that any multi-tenant system will eventually be partially compromised, and that the right question is what a compromise can reach. The answer in Dethernety is: not much. Each customer gets their own network segment, their own identity pool, their own compute (single-instance on the entry tier, K3s on the higher tiers), their own CloudFront distribution over a VPC Origin, and their own IAM role scoped by hardcoded resource ARNs. Terraform state is per-customer. There is no shared runtime data plane between tenants. 
The entry tier runs on Fedora CoreOS with an immutable read-only root, so a compromised node cannot persist changes that survive a reboot; the higher tiers move to K3s with the same isolation posture. I wrote the architecture up across a five-part series, starting with ",{"type":28,"tag":277,"props":518,"children":520},{"href":519},"\u002Finsights\u002Farchitecture-overview",[521],{"type":28,"tag":262,"props":522,"children":523},{},[524],{"type":34,"value":525},"Architecture Overview",{"type":34,"value":490},{"type":28,"tag":408,"props":528,"children":530},{"id":529},"development-methodology",[531],{"type":34,"value":532},"Development methodology",{"type":28,"tag":37,"props":534,"children":535},{},[536,541],{"type":28,"tag":89,"props":537,"children":538},{},[539],{"type":34,"value":540},"AI-native, spec-first, agent-reviewed.",{"type":34,"value":542}," The methodology shifted partway through. Dethernety started as a normal development project: specs as prose, implementation as a series of commits, tests written against features. As the generation of tooling around Claude Code matured, I moved the project to a spec-driven, AI-native workflow that now carries most of the platform's development.",{"type":28,"tag":37,"props":544,"children":545},{},[546],{"type":34,"value":547},"The architecture stays mine. AI generates implementation; I own the system shape, the data model, the API surface, the analysis subsystem boundaries. Code review depends on the surface: the backend gets read line by line; the web frontend and Studio ride the workflow more directly, with review at the gate rather than at every line.",{"type":28,"tag":37,"props":549,"children":550},{},[551,553,558,560,565,567,572,574,579,581,586],{"type":34,"value":552},"The workflow has five phases with an explicit human-in-the-loop at each. 
",{"type":28,"tag":262,"props":554,"children":555},{},[556],{"type":34,"value":557},"Intent by exploration",{"type":34,"value":559},": I describe what I want to build, and a specialist agent drafts a spec by exploring the existing code, asking clarifying questions, and proposing the shape. ",{"type":28,"tag":262,"props":561,"children":562},{},[563],{"type":34,"value":564},"Multi-agent review",{"type":34,"value":566},": the spec is reviewed by a set of agents with distinct specialties — security, architecture, graph theory, operations — each producing findings in its own voice rather than a merged consensus. ",{"type":28,"tag":262,"props":568,"children":569},{},[570],{"type":34,"value":571},"Sprint plan",{"type":34,"value":573},": once the spec clears blocking issues, it becomes a plan with user stories, definitions of done, references to the relevant code and docs, and test and evaluation strategies per story. ",{"type":28,"tag":262,"props":575,"children":576},{},[577],{"type":34,"value":578},"AI-driven implementation",{"type":34,"value":580},": the plan is executed with specialist agents where the work calls for it. ",{"type":28,"tag":262,"props":582,"children":583},{},[584],{"type":34,"value":585},"Comprehensive testing",{"type":34,"value":587},": unit, integration, and evaluation suites, with the eval layer specifically for agent-mediated work where traditional assertions fall short.",{"type":28,"tag":37,"props":589,"children":590},{},[591,593,600,602,608],{"type":34,"value":592},"Every phase gates on my judgement before the next one starts. The goal is to put the human where adjudication and direction actually matter, not where the human is a bottleneck on typing. 
All of this is encoded in the project's ",{"type":28,"tag":594,"props":595,"children":597},"code",{"className":596},[],[598],{"type":34,"value":599},".claude\u002F",{"type":34,"value":601}," configuration and ",{"type":28,"tag":594,"props":603,"children":605},{"className":604},[],[606],{"type":34,"value":607},".github\u002F",{"type":34,"value":609}," workflows, with slash commands gating PRs on boundary checks, security review, and documentation-staleness detection before anything ships.",{"type":28,"tag":29,"props":611,"children":613},{"id":612},"where-it-stands",[614],{"type":34,"value":615},"Where it stands",{"type":28,"tag":163,"props":617,"children":618},{},[619,624,629,634,639],{"type":28,"tag":167,"props":620,"children":621},{},[622],{"type":34,"value":623},"Graph-native threat modeling platform with a multi-tier SaaS deployment on AWS, per-customer infrastructure isolation, and an open-core split (the OSS monorepo sits as a subtree of the private monorepo).",{"type":28,"tag":167,"props":625,"children":626},{},[627],{"type":34,"value":628},"Dethereal Claude Code plugin: eleven-step staged-delegation workflow, four specialist agents, a set of MCP tools, with permissions enforced at the tool layer rather than in prompts.",{"type":28,"tag":167,"props":630,"children":631},{},[632],{"type":34,"value":633},"Module system covering the core threat modeling domain and the MITRE ATT&CK \u002F D3FEND frameworks, with OPA\u002FRego policy evaluation and an extensibility boundary that avoids platform forks.",{"type":28,"tag":167,"props":635,"children":636},{},[637],{"type":34,"value":638},"AI-native development toolchain: specialist agents, slash commands, and workflow gates that operationalize the spec-first, multi-agent-reviewed, sprint-planned, agent-executed methodology across the monorepo.",{"type":28,"tag":167,"props":640,"children":641},{},[642],{"type":34,"value":643},"Six published essays on the underlying architecture and plugin design, with more in 
progress.",{"type":28,"tag":29,"props":645,"children":646},{"id":199},[647],{"type":34,"value":202},{"type":28,"tag":37,"props":649,"children":650},{},[651],{"type":34,"value":652},"Solo breadth is the first constraint. Threat modeling, graph databases, SaaS infrastructure, immutable compute, AI-native tooling, and Claude Code plugin design are six different disciplines, each with depth I had to either reach into myself or delegate to a specialist agent. The scope of the work is not a decision I get to revisit. It is the shape of the product.",{"type":28,"tag":37,"props":654,"children":655},{},[656],{"type":34,"value":657},"The methodology pivot was expensive. Moving an in-flight project onto a spec-driven AI-native workflow is not a matter of configuring tools. It changes what \"done\" means, what a review looks like, and where the cost of a bad decision shows up. I lost time before I gained it. The gain came later and is now structural, but the transition was a cost I paid over several months with eyes open.",{"type":28,"tag":37,"props":659,"children":660},{},[661],{"type":34,"value":662},"Positioning is harder than the technology. A graph-native, AI-native, shift-left threat modeling platform is easy to describe technically and harder to place in a market used to document-first threat modeling tools on one side and chat-first AI copilots on the other. The product is neither of those, and naming that clearly without sounding like yet another \"we reinvented threat modeling\" pitch is a genuine writing problem.",{"type":28,"tag":37,"props":664,"children":665},{},[666],{"type":34,"value":667},"Solo pacing is its own discipline. Nobody else is going to notice that test coverage drifted, that a module interface is generating more coupling than it should, or that a dependency upgrade has sat on a branch for a week. The internal review function has to be real. 
The specialist agents help and catch things a solo builder would miss, but the ultimate review is mine and I have to budget for it explicitly.",{"type":28,"tag":29,"props":669,"children":670},{"id":225},[671],{"type":34,"value":228},{"type":28,"tag":37,"props":673,"children":674},{},[675],{"type":34,"value":233},{"type":28,"tag":37,"props":677,"children":678},{},[679],{"type":34,"value":680},"One: AI-native development, run with discipline, changes what you can build alone. Specialist agents do the bulk of the implementation; architecture stays human, and so does review on the parts that warrant it. The multi-year, multi-team work I have scoped for clients in the past is a different shape under that combination. The work is not easier. The ceiling of what one person can carry end to end has shifted, and I am still calibrating where the new one sits.",{"type":28,"tag":37,"props":682,"children":683},{},[684],{"type":34,"value":685},"Two: staged delegation beats free-form prompting in any domain where users cannot articulate what they want. The novice threat modeler does not know what a good threat model contains, and no amount of open prompting fixes that. A fixed workflow with specialist proposals at each step meets the user where they actually are. That pattern generalizes past threat modeling, and I am watching for the other domains it applies to.",{"type":28,"tag":37,"props":687,"children":688},{},[689],{"type":34,"value":690},"Three: treating infrastructure isolation as a design principle, not a configuration task, produces a posture you cannot retrofit. Designing Dethernety from the first line for per-customer isolation was more work up front than a shared-everything SaaS would have been, and it is now the part of the architecture I have to defend the least. The right default, chosen early, pays back every month.",{"type":28,"tag":37,"props":692,"children":693},{},[694],{"type":34,"value":695},"And the residue. 
Building solo with AI-native methods changed how I think about what consulting can deliver. A design I wrote as a consultant assumed the team on the other side could carry it. A system I build as Dethernety carries itself, with me doing the adjudication a team would otherwise do collectively. Those are not the same craft, and knowing where they converge is a live question I expect to be answering for a while.",{"type":28,"tag":255,"props":697,"children":698},{},[],{"type":28,"tag":37,"props":700,"children":701},{},[702],{"type":28,"tag":262,"props":703,"children":704},{},[705],{"type":34,"value":706},"Sources:",{"type":28,"tag":163,"props":708,"children":709},{},[710,723,736,748],{"type":28,"tag":167,"props":711,"children":712},{},[713],{"type":28,"tag":262,"props":714,"children":715},{},[716],{"type":28,"tag":277,"props":717,"children":720},{"href":718,"rel":719},"https:\u002F\u002Fdether.net",[281],[721],{"type":34,"value":722},"dether.net — project site",{"type":28,"tag":167,"props":724,"children":725},{},[726],{"type":28,"tag":262,"props":727,"children":728},{},[729],{"type":28,"tag":277,"props":730,"children":733},{"href":731,"rel":732},"https:\u002F\u002Fgithub.com\u002Fdether-net\u002Fdethernety-oss",[281],[734],{"type":34,"value":735},"dethernety-oss on GitHub",{"type":28,"tag":167,"props":737,"children":738},{},[739,746],{"type":28,"tag":262,"props":740,"children":741},{},[742],{"type":28,"tag":277,"props":743,"children":744},{"href":519},[745],{"type":34,"value":525},{"type":34,"value":747}," — entry point for a five-part series on the AWS infrastructure (the four follow-up essays are linked at the end of the 
overview)",{"type":28,"tag":167,"props":749,"children":750},{},[751],{"type":28,"tag":262,"props":752,"children":753},{},[754],{"type":28,"tag":277,"props":755,"children":756},{"href":482},[757],{"type":34,"value":488},{"title":7,"searchDepth":312,"depth":312,"links":759},[760,761,762,763,769,770,771],{"id":31,"depth":315,"text":35},{"id":54,"depth":315,"text":57},{"id":65,"depth":315,"text":68},{"id":81,"depth":315,"text":84,"children":764},[765,767,768],{"id":410,"depth":766,"text":413},3,{"id":503,"depth":766,"text":506},{"id":529,"depth":766,"text":532},{"id":612,"depth":315,"text":615},{"id":199,"depth":315,"text":202},{"id":225,"depth":315,"text":228},"content:references:technical-dethernety.md","references\u002Ftechnical-dethernety.md","references\u002Ftechnical-dethernety",{"loc":331},1777227384553]