<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The AI Runtime]]></title><description><![CDATA[For Curious minds shipping AI ]]></description><link>https://theairuntime.com</link><image><url>https://theairuntime.com/img/substack.png</url><title>The AI Runtime</title><link>https://theairuntime.com</link></image><generator>Substack</generator><lastBuildDate>Sat, 09 May 2026 03:52:37 GMT</lastBuildDate><atom:link href="https://theairuntime.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Kranthi Manchikanti]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[aiengineerweekly@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[aiengineerweekly@substack.com]]></itunes:email><itunes:name><![CDATA[The AI Runtime]]></itunes:name></itunes:owner><itunes:author><![CDATA[The AI Runtime]]></itunes:author><googleplay:owner><![CDATA[aiengineerweekly@substack.com]]></googleplay:owner><googleplay:email><![CDATA[aiengineerweekly@substack.com]]></googleplay:email><googleplay:author><![CDATA[The AI Runtime]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[A Portfolio That Practices MRE]]></title><description><![CDATA[Vishnu Purohitham&#8217;s four shipped projects are a worked example of Model Reliability Engineering &#8212; and a soft hit on most of the AIfolio.]]></description><link>https://theairuntime.com/p/a-portfolio-that-practices-mre</link><guid isPermaLink="false">https://theairuntime.com/p/a-portfolio-that-practices-mre</guid><dc:creator><![CDATA[The AI Runtime]]></dc:creator><pubDate>Fri, 08 May 2026 11:02:37 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!ysT0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64aa3fd2-93b7-4910-a018-b8a6abc19246_1024x559.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p><strong>TL;DR</strong> - Most early-career AI portfolios show the <a href="https://aiengineerweekly.substack.com/p/your-portfolio-website-wont-get-you">AIfolio pillars</a> &#8212; RAG, tool-use, multi-agent orchestration &#8212; and stop at &#8220;demo runs once.&#8221; <a href="https://github.com/TheJASSZ">Vishnu Purohitham&#8217;s GitHub</a> is rarer because the projects come pre-equipped with the parts MRE calls <strong>harness engineering</strong>: fallback chains, validation gates, quality thresholds, graceful degradation. The context engineering layer is real too &#8212; a T5 fine-tuned on the 226K-article XSum corpus (or 300K-article CNN-DailyMail) on Northeastern&#8217;s H200 cluster, BLIP adapted with LoRA r=16, <a href="https://github.com/TheJASSZ/InfoRetrieval_v2#tech-stack">BGE-base-en-v1.5 embeddings</a> at 768 dimensions, hybrid dense + keyword search. Three of four AIfolio pillars are touched. Persistent memory is the honest gap. The hire/study signal isn&#8217;t completeness &#8212; it&#8217;s that the harness wasn&#8217;t an afterthought. If you&#8217;re staffing AI engineers and you want a filter for MRE instincts, this is the kind of portfolio to compare against. If you&#8217;re building one, copy the disposition: harness <em>with</em> the model, not <em>after</em> it.</p></div><h2>Why this builder is worth a closer look</h2><p>There&#8217;s a recognizable shape to most AI engineering portfolios in late 2025 and 2026: a chatbot, a RAG demo, a &#8220;GPT wrapper for [niche],&#8221; and maybe one fine-tuning notebook. They show familiarity with the stack. 
They don&#8217;t show that the builder has internalized what <em>production</em> AI actually requires &#8212; the unglamorous infrastructure that sits around the model and decides whether the system survives contact with real input.</p><p><a href="https://www.linkedin.com/in/vishnupurohitham/">Vishnu Purohitham</a> is a Northeastern-affiliated builder whose portfolio inverts that ratio. Across four shipped projects &#8212; one a graduate-class capstone, three from hackathons spanning local Northeastern events to MIT&#8217;s Bitcoin Expo &#8212; the same architectural commitments show up. It&#8217;s the consistency that&#8217;s interesting, not any single project. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ysT0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64aa3fd2-93b7-4910-a018-b8a6abc19246_1024x559.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ysT0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64aa3fd2-93b7-4910-a018-b8a6abc19246_1024x559.png 424w, https://substackcdn.com/image/fetch/$s_!ysT0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64aa3fd2-93b7-4910-a018-b8a6abc19246_1024x559.png 848w, https://substackcdn.com/image/fetch/$s_!ysT0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64aa3fd2-93b7-4910-a018-b8a6abc19246_1024x559.png 1272w, https://substackcdn.com/image/fetch/$s_!ysT0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64aa3fd2-93b7-4910-a018-b8a6abc19246_1024x559.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ysT0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64aa3fd2-93b7-4910-a018-b8a6abc19246_1024x559.png" width="1024" height="559" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/64aa3fd2-93b7-4910-a018-b8a6abc19246_1024x559.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:559,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:971859,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://theairuntime.com/i/196788986?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64aa3fd2-93b7-4910-a018-b8a6abc19246_1024x559.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ysT0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64aa3fd2-93b7-4910-a018-b8a6abc19246_1024x559.png 424w, https://substackcdn.com/image/fetch/$s_!ysT0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64aa3fd2-93b7-4910-a018-b8a6abc19246_1024x559.png 848w, https://substackcdn.com/image/fetch/$s_!ysT0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64aa3fd2-93b7-4910-a018-b8a6abc19246_1024x559.png 1272w, https://substackcdn.com/image/fetch/$s_!ysT0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64aa3fd2-93b7-4910-a018-b8a6abc19246_1024x559.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>                                                                    Vishnu&#8217;s AIFolio</em></p><p>This Builder Spotlight reads the work through two frameworks. The <a href="https://substack.com/@theairuntime/p-192378432">AIfolio framework</a> gives us a way to talk about <em>what</em> an AI portfolio should contain &#8212; RAG with real evaluation, multi-agent orchestration, tool-use boundaries, persistent memory. <a href="https://substack.com/@theairuntime/p-193536389">Model Reliability Engineering (MRE)</a> gives us a way to talk about <em>how</em> it should be built &#8212; split into context engineering (what the model sees at inference time) and harness engineering (the control layer governing what the user sees). 
Together they answer the question hiring managers actually care about: does this builder ship things, or does this builder ship things that <em>hold up</em>?</p><div><hr></div><h2>The four projects, in one paragraph each</h2><p><strong><a href="https://github.com/TheJASSZ/InfoRetrieval_v2">InfoRetrieval v2</a></strong> &#8212; A multimodal RAG system for personal knowledge management. Ingests URLs, PDFs, DOCX files, raw text, images, and Chrome bookmarks through a four-layer pipeline. Web scraping uses Trafilatura with a Playwright fallback. OCR runs EasyOCR first, then Tesseract if the first pass returns fewer than 20 characters. Summarization uses a T5 fine-tuned on either XSum (226K articles) or CNN-DailyMail (300K articles) on Northeastern&#8217;s H200 HPC cluster. Image captioning uses BLIP with a LoRA adapter (r=16, alpha=32). Storage is <a href="https://github.com/TheJASSZ/InfoRetrieval_v2#layer-4--storage--retrieval">ChromaDB with hybrid dense + keyword search</a>. The whole thing ships as a Docker Compose stack with a React frontend.</p><p><strong><a href="https://github.com/BhanuHarshaY/Boston-311-Hack">Boston 311 AI Agent</a></strong> &#8212; A multilingual (English / Spanish / Portuguese) agent for Boston city services, built in under 36 hours at a Northeastern hackathon. The interesting choice isn&#8217;t the agent &#8212; it&#8217;s the orchestration. The agent fans out parallel tool calls across four live Boston Open Data sources (311 cases, weather, events, neighborhood trends) and streams reasoning back to the frontend over SSE. The visible reasoning panel isn&#8217;t a UX flourish; it&#8217;s a trust mechanism for users (older adults, non-English speakers) who would otherwise have no way to evaluate whether the answer is grounded.</p><p><strong><a href="https://github.com/TheJASSZ/zero-shot-annotator">Zero-Shot Video Annotator</a></strong> &#8212; A FiftyOne plugin built at the Voxel51 / Twelve Labs hackathon. 
The interesting design move: instead of training a classifier, it uses Twelve Labs Pegasus to generate natural-language descriptions of each clip, then matches those descriptions to a user-defined taxonomy via cosine similarity over Marengo embeddings (512-dim). Tested on a 691-clip workplace safety dataset across 8 behavior categories. Local API caching reportedly cut inference costs by 80%. Built-in human-in-the-loop review surfaces low-confidence predictions for manual sign-off.</p><p><strong><a href="https://github.com/TheJASSZ/PulseMesh">PulseMesh</a></strong> &#8212; A smartphone-based environmental DePIN built at the MIT Bitcoin Expo 2026 Virtual Hackathon. Native Android app collects sensor data (air pressure, noise, light) in the background, with a built-in Lightning wallet for instant micropayments via the L402 protocol. Backend includes a four-stage validation pipeline that detects spoofed readings before data hits the buyer-facing marketplace. Privacy-first design aggregates locations to city-block level before sale.</p><p>Two are flagship-quality builds. Two are 36-hour hackathon outputs. The architectural commitments are identical.</p><div><hr></div><h2>Where the AIfolio shows up &#8212; and where it doesn&#8217;t</h2><p>The AIfolio framework names four pillars an AI engineer&#8217;s portfolio should evidence: a RAG pipeline with real evaluation, a multi-agent system that solves a real problem, an MCP / tool-use integration with sensible boundaries, and a persistent memory architecture. We don&#8217;t score Vishnu&#8217;s portfolio against this &#8212; that turns a spotlight into an audit, and the AIfolio is a reference for the <em>concepts present</em>, not a checklist a builder has to pass. The interesting reading is which pillars Vishnu has built around and which one he hasn&#8217;t.</p><p><strong>RAG with real evaluation</strong> is built around in InfoRetrieval v2 &#8212; and &#8220;evaluation&#8221; is the word that earns it the hit. 
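</p><p>For readers who haven&#8217;t computed it by hand: ROUGE-1, the summarization metric this pipeline reports, is clipped unigram overlap between a generated summary and a reference. A minimal pure-Python sketch (toy strings, not the project&#8217;s actual eval set or tooling):</p>

```python
from collections import Counter

def rouge1(candidate: str, reference: str):
    """ROUGE-1: unigram precision/recall/F1 between candidate and reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return precision, recall, f1

# Toy pair, purely illustrative.
p, r, f = rouge1(
    "the model summarizes the article",
    "the model summarizes a news article",
)
```

<p>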
The <a href="https://github.com/TheJASSZ/InfoRetrieval_v2#training-scripts-hpc">training pipeline</a> reports ROUGE-1, ROUGE-2, and ROUGE-L on summarization, plus BLEU for captioning. Most &#8220;AIfolio RAG&#8221; demos skip the eval. This one ships it.</p><p><strong>Tool-use with sensible boundaries</strong> is built around in two places. The Boston 311 agent fans out parallel tool calls across four data sources with the reasoning panel exposed to the user &#8212; boundary as transparency. Zero-Shot Annotator routes low-confidence predictions to a human reviewer instead of writing them blindly to the labelset &#8212; boundary as fallback. Different mechanisms, same disposition: the tool-use isn&#8217;t the whole answer, and the system knows it.</p><p><strong>Multi-agent orchestration</strong> is approached, not fully delivered. The Boston 311 build is parallel tool-calling, not multi-agent in the canonical sense (no negotiation between agents, no planner-worker split). Worth naming honestly: the orchestration skill is real, the <em>multi-agent</em> label is generous.</p><p><strong>Persistent memory</strong> is the honest gap. Nothing in the four projects builds a cross-session memory layer (Mem0, Letta, Zep, or a custom architecture). Worth being clear about &#8212; if Vishnu wanted to round out the AIfolio, this is the next project to ship.</p><p>The pillars are reference points for what&#8217;s present. The more interesting question is <em>how</em> what&#8217;s present has been built. That&#8217;s MRE.</p><div><hr></div><h2>What the projects look like through the MRE lens</h2><p>MRE splits production AI work along two axes. <strong>Context engineering</strong> governs what the model knows at inference time &#8212; fine-tuning, RAG, embedding strategy, knowledge freshness, retrieval precision. 
<strong>Harness engineering</strong> governs what the user sees &#8212; guardrails, output validation, fallback paths, faithfulness checks, graceful degradation, auditability.</p><p>Most AI demos do the first. Vishnu&#8217;s projects do both. That&#8217;s the signal.</p><h3>Context engineering, layer by layer</h3><p>InfoRetrieval v2 is the project where the context engineering is most visible, and it&#8217;s done with care.</p><p>The summarizer isn&#8217;t FLAN-T5 off the shelf &#8212; it&#8217;s a T5-base fine-tuned for 3 epochs on XSum or CNN-DailyMail at batch size 16 and learning rate 3e-5, with beam search at 4 beams and a 1.2 repetition penalty for inference. The image captioner isn&#8217;t BLIP off the shelf &#8212; it&#8217;s BLIP with a LoRA adapter trained on Flickr8k at r=16, alpha=32, dropout 0.05. The embedder is <a href="https://github.com/TheJASSZ/InfoRetrieval_v2#tech-stack">BGE-base-en-v1.5</a> at 768 dimensions &#8212; a deliberate choice over default OpenAI embeddings, with retrieval running as hybrid dense + keyword search rather than pure cosine.</p><p>What&#8217;s worth naming: this isn&#8217;t fine-tuning for the sake of &#8220;I trained something.&#8221; Each model on the path has been picked or adapted to the role it plays in the pipeline. T5 because summarization is a sequence-to-sequence problem with strong public benchmarks. BGE because the embedder is a retrieval surface with its own SLO and the <a href="https://huggingface.co/spaces/mteb/leaderboard">MTEB leaderboard</a> is a real signal. Hybrid search because pure dense retrieval misses keyword-exact matches and the system has to handle both.</p><p>The Chrome bookmark sync and watchdog file consumer are the part most readers will overlook. These are <em>context freshness</em> mechanisms &#8212; automatic re-ingestion as new content lands. 
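</p><p>Stepping back to the retrieval choice for a moment: the hybrid dense + keyword idea sketches in a few lines. The scoring below is a toy stand-in (hand-rolled cosine and term overlap rather than BGE embeddings and a real keyword index such as BM25):</p>

```python
import math

def cosine(a, b):
    """Dense similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_overlap(query: str, doc: str) -> float:
    """Toy lexical score: fraction of query terms that appear verbatim."""
    terms = set(query.lower().split())
    words = set(doc.lower().split())
    return len(terms & words) / len(terms) if terms else 0.0

def hybrid_score(query, doc, q_vec, d_vec, alpha=0.5):
    """Blend dense and lexical relevance; alpha weights the dense side."""
    return alpha * cosine(q_vec, d_vec) + (1 - alpha) * keyword_overlap(query, doc)

# Keyword-exact matches still score even when toy vectors agree anyway.
score = hybrid_score("error code 500", "server returned error code 500",
                     [0.9, 0.1], [0.9, 0.1])
```

<p>The point of the blend is the failure mode it closes: pure dense retrieval can miss an exact identifier like &#8220;500&#8221; that a keyword index catches trivially.</p><p>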
MRE treats freshness as a context-layer SLO; this project ships the plumbing for it.</p><h3>Harness engineering as the standout signal</h3><p>Harness engineering is where Vishnu&#8217;s portfolio separates itself from the median. The pattern repeats across all four projects: any layer where input variation can break the system has a backup path <em>and</em> a quality check that decides which path runs.</p><p>The minimal viable shape:</p><div class="callout-block" data-callout="true"><p><code>def extract(input_data):</code></p><p><code>    primary_result = primary_extractor(input_data)</code></p><p><code>    if quality_check(primary_result) &gt;= THRESHOLD:</code></p><p><code>        return primary_result, "primary"</code></p><p><code>    fallback_result = fallback_extractor(input_data)</code></p><p><code>    return fallback_result, "fallback"</code></p></div><p>InfoRetrieval v2&#8217;s web scraper runs Trafilatura first because it&#8217;s faster and lighter, and falls back to Playwright only if static extraction returns fewer than 50 characters. The OCR pipeline runs EasyOCR first and falls back to Tesseract if the first pass returns fewer than 20 characters, then returns a tuple of (text, method) where method is one of &#8220;easyocr&#8221;, &#8220;tesseract&#8221;, &#8220;combined&#8221;, or &#8220;none&#8221;. That last detail matters &#8212; auditability of which path actually ran is what makes the system debuggable three months later.</p><p>PulseMesh&#8217;s four-stage spoofing detection is the harness pointed at sensor data instead of extractor output, but it&#8217;s the same architectural move. Zero-Shot Annotator&#8217;s HITL review queue is the same move applied to model confidence &#8212; low-confidence predictions don&#8217;t get written silently, they get surfaced. 
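</p><p>That (text, method) audit tuple generalizes into a reusable chain of any length. A hedged sketch (function names, extractor names, and the quality gate are illustrative, not lifted from the repo):</p>

```python
def run_with_fallback(input_data, extractors, quality_check, threshold):
    """Try each (name, fn) extractor in order; return (result, method).

    The method tag ("easyocr", "tesseract", ..., "none") is what makes
    the chain auditable after the fact.
    """
    for name, fn in extractors:
        result = fn(input_data)
        if quality_check(result) >= threshold:
            return result, name
    return None, "none"

# Toy usage: a primary extractor that comes back nearly empty,
# and a fallback that clears the 20-character quality gate.
chain = [
    ("easyocr", lambda img: "??"),
    ("tesseract", lambda img: "Invoice #4821 total due $312.50"),
]
text, method = run_with_fallback("receipt.png", chain,
                                 lambda t: len(t or ""), 20)
```

<p>Logging the returned method tag per request is the cheap version of the auditability the post describes: three months later you can still answer &#8220;which path ran?&#8221;</p><p>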
The Boston 311 agent&#8217;s visible reasoning panel is the same move applied to user trust &#8212; the user can see what tools the agent called and decide whether to trust the answer.</p><p>What to call out: the validation layer isn&#8217;t decorative. It&#8217;s the part that lets the system <em>know its own confidence</em>, which is the precondition for graceful degradation. MRE treats this as the harness engineer&#8217;s primary deliverable. Vishnu ships it on a hackathon timeline.</p><div><hr></div><h2>Where the edges show</h2><p>Every project has visible trade-offs. Calling them out is the difference between a profile and a puff piece.</p><p><strong>InfoRetrieval v2 doesn&#8217;t scale past one machine.</strong> ChromaDB&#8217;s persistent client is single-process. The watchdog file consumer is async but in-process. None of this is wrong for a CS5130 capstone &#8212; but the architecture as written maxes out around one user with one Chrome bookmark file and one watched directory. Multi-user deployment would require a real DB tier, a job queue, and an actual auth layer. The README is honest about this; it doesn&#8217;t claim to be SaaS-ready.</p><p><strong>The Boston 311 agent was built in 36 hours.</strong> That shows. Sub-2-second latency is impressive for a parallel-tool-calling agent, but error handling for stale data sources, partial tool failures, or rate-limited Open Data endpoints would all need real work for a public deployment.</p><p><strong>Zero-Shot Annotator&#8217;s 80% cost reduction is from caching.</strong> The <em>first</em> annotation pass on any new dataset is expensive. The plugin is a good fit for &#8220;annotate this dataset once, then iterate on labels&#8221; &#8212; and a poor fit for &#8220;annotate streaming video as it arrives.&#8221; Worth knowing before you adopt it.</p><p><strong>PulseMesh&#8217;s four-stage validation adds latency and a trust assumption.</strong> The validators themselves can be wrong. 
A determined spoofer with knowledge of the validation pipeline can defeat statistical detection. The architecture is correct for an MVP DePIN; it would need a slashing or reputation mechanism to survive at scale.</p><p><strong>The persistent memory pillar isn&#8217;t built around at all.</strong> None of the four projects ship a cross-session memory architecture. For an AIfolio that&#8217;s &#8220;complete,&#8221; this is the next project. The honest read: three of four pillars touched, with strong harness engineering compensating for the gap.</p><p>None of these are dealbreakers. They&#8217;re the edges of work shipped fast against real constraints. The portfolio doesn&#8217;t try to hide them.</p><div><hr></div><h2>What readers can take away</h2><p>For new AI engineers building portfolios:</p><p><strong>The AIfolio pillars name what to build. MRE names how to build it.</strong> Both matter, and most portfolios over-invest in the first and under-invest in the second. A demo that hits all four AIfolio pillars but has no harness around any of them is weaker than three pillars built with real harness engineering.</p><p><strong>Pick one project and ship the harness.</strong> The minimum viable harness has three pieces: a fallback path on the layer most likely to fail, a quality gate that decides which path runs, and a way to audit which path actually ran (logs, return tuples, method tags). The cost is small. The signal is large.</p><p><strong>Context engineering doesn&#8217;t require an H200.</strong> T5-base on a Kaggle GPU works. The signal isn&#8217;t the compute &#8212; it&#8217;s that you can defend a dataset choice, an eval metric, and a hyperparameter. 
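</p><p>What &#8220;defending a hyperparameter&#8221; can look like in practice: keep the choices in one reviewable place and be ready to justify each line. The values below are the ones cited earlier for the T5 summarizer; the config-dict packaging is a hypothetical sketch, not the repo&#8217;s actual training script:</p>

```python
# Hyperparameters cited in the write-up above for the T5 summarizer
# fine-tune; the dict wiring is illustrative only.
SUMMARIZER_CONFIG = {
    "base_model": "t5-base",
    "dataset": "xsum",           # or "cnn_dailymail"; defend the choice
    "epochs": 3,
    "batch_size": 16,
    "learning_rate": 3e-5,
    # Inference-time decoding choices are part of the defense too.
    "num_beams": 4,
    "repetition_penalty": 1.2,
}
```

<p>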
Without that, your context layer is indistinguishable from the median.</p><p><strong>Show the trade-offs.</strong> A README that says &#8220;this maxes out at one user, here&#8217;s why, here&#8217;s what would change for multi-tenant&#8221; reads as more senior than a README that claims SaaS-readiness it can&#8217;t back up. The InfoRetrieval v2 README&#8217;s frank acknowledgment that BLIP falls back to CPU on Apple Silicon &#8220;due to operator support limitations&#8221; is the right tone.</p><p>For mid-level engineers reviewing portfolios: the cheapest filter for MRE instincts is <em>does the harness exist at all</em>. Run through the candidate&#8217;s repos and ask &#8212; where does primary extraction live, what happens if it fails, and how would I know which path ran? The absence of an answer is the answer.</p><p>For hiring managers: a portfolio that ships hackathon-grade builds with the same architectural rigor as classroom flagship projects is a stronger signal than either taken alone. It says the patterns are <em>reflexive</em>, not assignment-driven. That&#8217;s what you&#8217;re hiring for.</p><div><hr></div><p>The most underrated skill in early-career AI engineering isn&#8217;t model selection or prompt design. It&#8217;s the discipline to architect around the model the same way you&#8217;d architect around any other unreliable dependency. Vishnu&#8217;s portfolio is interesting because every project assumes the unreliability and designs for it from line one &#8212; context engineering on the input side, harness engineering on the output side, with the AIfolio pillars showing up as the natural shape rather than the assignment. If you&#8217;re hiring, look for this. 
If you&#8217;re building, copy it.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Three Weeks of Opus 4.7 in Production: What Teams Are Actually Reporting]]></title><description><![CDATA[The launch numbers were one story. The production patterns are a different one.]]></description><link>https://theairuntime.com/p/three-weeks-of-opus-47-in-production</link><guid isPermaLink="false">https://theairuntime.com/p/three-weeks-of-opus-47-in-production</guid><dc:creator><![CDATA[The AI Runtime]]></dc:creator><pubDate>Thu, 07 May 2026 22:31:05 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!-7BI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51a65d31-c6dc-4ca8-9eb0-a0fa7d55470b_1024x559.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p><strong>TL;DR</strong> - <a href="https://www.anthropic.com/news/claude-opus-4-7">Anthropic released Claude Opus 4.7 on April 16, 2026</a> at unchanged pricing ($5/$25 per million tokens). After three weeks of production traffic from teams that shipped early, the most important changes are not the headline benchmark gains &#8212; they&#8217;re the <strong>behavior shifts</strong>. 
Stricter instruction following has broken prompts that relied on charitable interpretation. The new tokenizer can <a href="https://platform.claude.com/docs/en/about-claude/pricing">produce up to 35% more tokens for the same input text</a>, shifting cost calculations even at unchanged pricing. Self-verification has materially reduced agent hallucination on tool-use tasks; Hex reports the model surfaces missing data states honestly rather than confabulating. The migration is not drop-in &#8212; teams that flipped the model string in config and shipped are the teams reporting regressions. The four practices that worked: re-run the eval suite, audit per-task cost in the first 48 hours, bump the effort tier when comparing benchmarks, and test vision workloads explicitly. The deeper lesson: every Opus release on the current ~2-month cadence is now a release event with its own pre-flight, and the Harness Half-Life is playing out in real time on every team&#8217;s prompt suite.</p></div><h2>What was promised at launch</h2><p>The April 16 launch positioned Opus 4.7 as a targeted upgrade over Opus 4.6 &#8212; improvements in software engineering, vision, instruction following, and self-verification, with <a href="https://www.anthropic.com/news/claude-opus-4-7">particular gains on the most difficult tasks</a>. Anthropic&#8217;s framing was that users should be able to hand off their hardest coding work to the model with less supervision than 4.6 required.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! 
Subscribe for free</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>The benchmark numbers Anthropic published: 64.3% on SWE-bench Pro, 87.6% on SWE-bench Verified, and 69.4% on Terminal-Bench 2.0, with <a href="https://www.harshrastogi.tech/blog/claude-opus-4-7-release-developer-guide">3x higher image resolution</a> (up to 2,576 pixels on the long edge) and a new <code>xhigh</code> effort tier between high and max. Pricing held flat at $5 per million input tokens and $25 per million output tokens.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-7BI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51a65d31-c6dc-4ca8-9eb0-a0fa7d55470b_1024x559.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-7BI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51a65d31-c6dc-4ca8-9eb0-a0fa7d55470b_1024x559.png 424w, https://substackcdn.com/image/fetch/$s_!-7BI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51a65d31-c6dc-4ca8-9eb0-a0fa7d55470b_1024x559.png 848w, https://substackcdn.com/image/fetch/$s_!-7BI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51a65d31-c6dc-4ca8-9eb0-a0fa7d55470b_1024x559.png 1272w, 
https://substackcdn.com/image/fetch/$s_!-7BI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51a65d31-c6dc-4ca8-9eb0-a0fa7d55470b_1024x559.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-7BI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51a65d31-c6dc-4ca8-9eb0-a0fa7d55470b_1024x559.png" width="1024" height="559" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/51a65d31-c6dc-4ca8-9eb0-a0fa7d55470b_1024x559.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:559,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:727946,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://theairuntime.com/i/196825084?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51a65d31-c6dc-4ca8-9eb0-a0fa7d55470b_1024x559.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!-7BI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51a65d31-c6dc-4ca8-9eb0-a0fa7d55470b_1024x559.png 424w, https://substackcdn.com/image/fetch/$s_!-7BI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51a65d31-c6dc-4ca8-9eb0-a0fa7d55470b_1024x559.png 848w, https://substackcdn.com/image/fetch/$s_!-7BI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51a65d31-c6dc-4ca8-9eb0-a0fa7d55470b_1024x559.png 
1272w, https://substackcdn.com/image/fetch/$s_!-7BI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51a65d31-c6dc-4ca8-9eb0-a0fa7d55470b_1024x559.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>                                                               Opus Updates</em></p><p>That was the launch. 
What&#8217;s emerged in the three weeks since is more textured &#8212; and the texture is where the engineering decisions actually live.</p><h2>The instruction-following shift is the biggest change</h2><p>The headline that matters for any team running production prompts: Opus 4.7 follows instructions more literally than 4.6 did.</p><p>The behavioral pattern, reported across multiple post-launch evaluations: prompts that relied on the model &#8220;reading between the lines&#8221; now do exactly what they were told. If the prompt says &#8220;respond in JSON format,&#8221; the model does &#8212; even when a clarifying question would have been more useful. If the prompt says &#8220;use Postgres, not SQLite&#8221; early in the run, the model now <a href="https://www.mindstudio.ai/blog/claude-opus-4-7-what-developers-need-to-know">honors that constraint twenty steps later</a> where 4.6 would sometimes drift toward whatever the broader context implied.</p><p>Three concrete patterns have shown up most often in the regression triage:</p><p><strong>Implicit fallback prompts.</strong> Teams shipped prompts that effectively said &#8220;if you can&#8217;t do X, do Y.&#8221; The 4.6 behavior was to interpret this as a soft preference and frequently produce X anyway when X was clearly the right answer. The 4.7 behavior is to follow the literal instruction &#8212; Y appears when X would have been better, because the prompt said Y was acceptable. Fix: rewrite to express constraints as preferences rather than fallbacks where appropriate.</p><p><strong>Format-overriding-content.</strong> A prompt that ends with &#8220;respond in JSON&#8221; gets JSON, even when the right response is a clarifying question. The 4.6 model would often violate the format instruction to ask the question. The 4.7 model produces malformed JSON or a JSON object containing the question, both of which break downstream parsers. 
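<p>One prompt shape that closes this gap separates the two instruction classes and names the plain-text escape hatch explicitly. The wording below is illustrative, not an official Anthropic recommendation:</p>

```python
def build_prompt(task: str) -> str:
    """Compose a prompt that separates content rules from format rules.

    The escape hatch for clarifying questions is stated explicitly, so a
    literal-minded model is never forced to wrap a question in JSON.
    Wording is illustrative, not vendor guidance.
    """
    content_rules = (
        "Answer the user's request below. If anything essential is ambiguous, "
        "ask one clarifying question in plain text and stop."
    )
    format_rules = (
        "Only when you can fully answer, respond as a single JSON object "
        'with keys "answer" and "confidence". Skip the JSON wrapper for '
        "clarifying questions."
    )
    return f"{content_rules}\n\n{format_rules}\n\nRequest: {task}"
```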
Fix: split format instructions from content instructions, or explicitly say &#8220;if you need clarification, ask in plain text and skip the JSON wrapper.&#8221;</p><p><strong>Negation drift.</strong> &#8220;Don&#8217;t do X&#8221; instructions that 4.6 sometimes interpreted as &#8220;X is unusual but not forbidden&#8221; now produce strict refusal of X even when context shifts. Fix: state the positive form (&#8220;do Y&#8221;) rather than the negation, where possible.</p><p>This is good for production systems. Predictability beats cleverness, and stricter instruction following is exactly the property agentic systems need to scale beyond babysitting. It is bad for teams who shipped prompts that depended on the model&#8217;s charitable interpretation. Those prompts now produce different outputs, sometimes subtly worse, and the regression is not always visible in eval &#8212; it shows up as a 3% increase in user complaints two weeks after launch.</p><p>The practical implication: every team migrating from 4.6 to 4.7 needs to re-run their prompt suite against the new model and re-tune. Not because anything is broken &#8212; because the model is now answering the literal question, and the literal question may not have been quite what the prompt intended.</p><h2>The tokenizer change is a silent cost shift</h2><p>Pricing did not change. Effective spend did.</p><p>Anthropic&#8217;s pricing documentation states the change explicitly: <a href="https://platform.claude.com/docs/en/about-claude/pricing">Opus 4.7 uses a new tokenizer that may use up to 35% more tokens for the same fixed text</a>. Independent post-launch testing has reported <a href="https://www.mindstudio.ai/blog/claude-opus-4-7-review">token counts up roughly 12-18% on typical workloads</a>, with code-heavy and multilingual content sitting closer to the upper bound.</p><p>The 35% number is the worst case. The realistic number for most production workloads is in the 10-20% range.
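<p>The budget arithmetic can be sketched directly; the dollar figures below are illustrative, not measured:</p>

```python
def migrated_monthly_cost(monthly_cost: float, token_inflation_pct: float) -> float:
    """Project post-migration spend when per-token pricing is unchanged
    but the new tokenizer emits more tokens for the same text.

    token_inflation_pct: percentage increase in token count, e.g. 10 for +10%.
    """
    return round(monthly_cost * (1 + token_inflation_pct / 100), 2)

# A $50K/month 4.6 workload across the reported 10-20% inflation band:
low = migrated_monthly_cost(50_000, 10)   # 55000.0
high = migrated_monthly_cost(50_000, 20)  # 60000.0
```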
Either way, the implication for a team running production traffic is concrete:</p><ul><li><p><strong>Cost rises</strong> at the same pricing per token, because the same prompts now consume more tokens. A workload that ran at $50K/month on 4.6 likely runs at $55-60K/month on 4.7 with no other changes.</p></li><li><p><strong>Rate limits hit sooner</strong> for any team running close to the ceiling, because the limits are denominated in tokens per minute. Teams who previously had headroom may need to request a quota increase or restructure their request distribution.</p></li><li><p><strong>Context window math changes</strong> &#8212; prompts that comfortably fit in 200K under the old tokenizer now sit closer to the edge. Teams who routinely ran at 180K input may now be hitting 220K and getting truncated.</p></li><li><p><strong>Cache hit accounting</strong> is unchanged at the multiplier level (5m write at 1.25x, 1h write at 2.0x, read at 0.1x), but the absolute number of cached tokens is higher, which changes the savings calculation in absolute terms.</p></li></ul><p>This is a benign change on paper and an expensive one in practice. The teams that ran a careful migration audited their per-task cost metric in the first 48 hours and adjusted budgets. The teams that did not are now finding out via the monthly bill.</p><p>The broader lesson: <strong>token consumption is now part of the migration audit.</strong> A model upgrade is not a cost-neutral event even when per-token pricing is unchanged. The metric that matters is cost-per-task, not cost-per-token, and it must be measured before and after every migration.</p><h2>Self-verification has been the standout improvement</h2><p>The behavioral change practitioners report most consistently is self-verification on agentic tasks. 
The model proactively checks its own outputs before declaring a task complete &#8212; writing tests and running them, re-checking tool results before synthesizing, flagging missing data rather than confabulating around it.</p><p>Hex&#8217;s CTO captured the practical impact: the model surfaces missing-data states honestly rather than fabricating around them, and it resists the kind of conflicting-evidence patterns that previously confused 4.6. On Hex&#8217;s 93-task internal benchmark, the resolution rate moved up by 13 points against 4.6, and Opus 4.7 closed four problems that neither 4.6 nor Sonnet 4.6 had been able to finish.</p><p>Notion AI reported it as <a href="https://www.verdent.ai/guides/what-is-claude-opus-4-7">the first model to pass their implicit-need tests</a> &#8212; tasks where the model must infer required actions rather than being told what tools to invoke.</p><p>For teams running coding agents and other multi-step automation in production, this is the change that justifies the migration on its own. The error rate that previously forced human checkpoints on every meaningful action drops, and the human checkpoint can move one layer up the stack. That is a different shape of human-in-the-loop, and it changes the economics of agent oversight.</p><p>The economics shift is concrete. If a team was running a coding agent that required human review on every PR, and 4.7 reduces the review-required rate from 100% to 60%, the per-PR human time falls by 40%. Aggregated across an engineering org&#8217;s PR volume, that&#8217;s a meaningful productivity multiplier &#8212; and it lands on the same headcount, not new hires.</p><p>For agent product teams, this also reshapes the handoff layer. The escalation triggers that fired when the model was uncertain now fire less often, because the model resolves more cases internally. 
The handoff payload still has to be tight when escalations do happen &#8212; but as the volume of escalations falls, the human queue shortens, each escalation gets faster human attention, and handoff quality improves end-to-end.</p><h2>The xhigh effort tier and task budgets</h2><p>Two new control surfaces shipped with 4.7. Both have meaningful implications for production economics.</p><p><code>xhigh</code><strong> sits between </strong><code>high</code><strong> and </strong><code>max</code> &#8212; finer-grained control over the reasoning-vs-latency tradeoff. Anthropic recommends starting with <code>high</code> or <code>xhigh</code> for coding and agentic use cases, and Claude Code now <a href="https://www.nxcode.io/resources/news/claude-opus-4-7-developer-guide-api-claude-code-migration-2026">defaults to xhigh across all plans</a>.</p><p>Hex&#8217;s observation is the load-bearing one for cost calibration: low-effort 4.7 sits at roughly the quality of medium-effort 4.6. This means a team comparing the two should benchmark one tier lower on 4.7 &#8212; matching 4.6 quality at lower cost. Concretely:</p><ul><li><p>Workloads that ran at <code>medium</code> on 4.6 &#8594; try <code>low</code> on 4.7 first; you may match or exceed quality at lower cost</p></li><li><p>Workloads that ran at <code>high</code> on 4.6 &#8594; try <code>medium</code> or <code>high</code> on 4.7; match quality at meaningful cost reduction</p></li><li><p>Workloads that need the absolute ceiling &#8594; <code>xhigh</code> is the new tier worth exercising; <code>max</code> remains for the genuinely hardest tasks</p></li></ul><p>The teams treating effort tiers as fixed config rather than tunable parameters are leaving real cost savings on the table.
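<p>The trial mapping in those bullets can be sketched as a lookup. Tier names come from the post; the one-tier-down heuristic is the calibration described above, not vendor guidance:</p>

```python
# Effort tiers in ascending order; "xhigh" is new in 4.7.
TIER_ORDER = ["low", "medium", "high", "xhigh", "max"]

def trial_tier_for_47(tier_on_46: str) -> str:
    """First tier to benchmark on 4.7 for a workload tuned on 4.6.

    Heuristic: low-effort 4.7 ~ medium-effort 4.6, so start one tier down
    and only climb if quality regresses (illustrative, measure per workload).
    """
    if tier_on_46 == "max":
        return "max"  # ceiling workloads: max remains, xhigh also worth exercising
    idx = TIER_ORDER.index(tier_on_46)
    return TIER_ORDER[max(idx - 1, 0)]
```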
A migration sprint that includes effort-tier audits typically recovers a meaningful portion of the tokenizer cost increase.</p><p><strong>Task budgets</strong> (public beta) are a token cap on a complete agentic loop &#8212; thinking, tool calls, tool results, and final output combined. The model sees a running countdown and prioritizes accordingly. This is the agent-system equivalent of a request timeout. It does not optimize cost per call; it bounds the worst case.</p><p>The implementation pattern is direct: set a per-task budget at invocation time, and the model receives the running count as part of its prompt context. As the budget approaches zero, the model wraps gracefully &#8212; finishing the current step, summarizing where it is, returning a partial answer rather than hitting a hard cutoff mid-tool-call.</p><p>For any team that has had a runaway agent loop in production &#8212; the kind that eats a day&#8217;s budget retrying the same failing tool call &#8212; this is the primitive that closes that failure mode. The combination with the <a href="https://platform.claude.com/docs/en/build-with-claude/compaction">server-side compaction beta</a> (the <code>compact-2026-01-12</code> header) means teams now have provider-native primitives for both the cost ceiling and the context overflow problem. Less custom infrastructure to build; less to maintain.</p><h2>The vision jump is real</h2><p>The vision change is the one most likely to be undervalued because it requires a workflow that exercises it. For teams that work with screenshots, diagrams, dense PDFs, or any high-DPI input, the practical impact is large.</p><p>The maximum image resolution moved from ~1.15 megapixels to <a href="https://www.verdent.ai/guides/what-is-claude-opus-4-7">~3.75 megapixels</a> &#8212; a 3.3x increase in pixel count. 
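<p>Because the extra pixels are paid for in tokens, a resolution-routing sketch caps each image by need. The 2,576 px ceiling is the figure cited above; the 1,568 px routine tier is an illustrative choice, not a documented threshold:</p>

```python
def route_image(long_edge_px: int, needs_fine_detail: bool,
                max_long_edge: int = 2576) -> int:
    """Pick a target long edge for an image before sending it to the model.

    High resolution is treated as a tool to invoke, not a default:
    routine cases are capped at a cheaper tier (1568 px, illustrative),
    detail-critical cases at the model's ceiling.
    """
    routine_edge = 1568
    cap = max_long_edge if needs_fine_detail else routine_edge
    return min(long_edge_px, cap)  # never upscale past the source image
```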
Independent reports flag this as an inflection for document extraction, log screenshot analysis, architecture diagram understanding, and similar workflows.</p><p>The use cases where this materially changes feasibility:</p><ul><li><p><strong>Dense document extraction</strong> &#8212; financial statements, medical records, technical drawings &#8212; where text or detail at the original resolution was previously too small to reliably extract.</p></li><li><p><strong>UI testing and visual regression</strong> &#8212; full-page screenshots of complex web apps where individual components or text strings were previously below the resolution threshold.</p></li><li><p><strong>Architecture diagrams and technical illustrations</strong> &#8212; where the relationships between components depend on small text labels and connection details.</p></li><li><p><strong>Log and dashboard screenshots</strong> &#8212; where a workflow involves the agent reading rendered UI rather than structured data.</p></li></ul><p>The cost: higher resolution images consume more tokens. Anthropic recommends downsampling when the extra fidelity is not needed. The pattern that has emerged: tier images by resolution requirement, and route to lower-resolution input for routine cases. Treat the high-resolution capability as a tool to invoke, not as a default.</p><p>This is not a &#8220;nice to have&#8221; change for vision-adjacent workloads. It is the difference between vision capabilities that worked in demos and vision capabilities that work in production.</p><h2>The regressions</h2><p>Not every change is an improvement. Two regressions are worth flagging.</p><p><strong>Web research quality</strong>, by some independent reports, <a href="https://www.mindstudio.ai/blog/claude-opus-4-7-review">has dropped relative to 4.6</a> &#8212; source attribution accuracy, contradiction detection, and citation specificity all reportedly weaker. 
The hypothesis circulating among teams who migrated then partially reverted: the training tradeoff that improved agentic persistence shifted the model away from the careful cross-referential reasoning that made 4.6 strong on research tasks.</p><p>The practical guidance from teams who ran both side-by-side: if your primary workload is research synthesis where source fidelity matters, evaluate carefully before migrating. Some teams are running 4.7 for coding workflows and 4.6 for research workflows on the same product surface, routed by task type. The cost of running two models is real but smaller than the cost of regression on the workload that regressed.</p><p><strong>Self-reported numbers vs independent testing.</strong> As is now standard with frontier model launches, <a href="https://www.mindstudio.ai/blog/claude-opus-4-7-review">independent testing tends to show tighter margins than vendor numbers</a>. The 13% lift on coding benchmarks reported by Hex may be closer to 5-6 points in real-world workloads, particularly when controlling for the effort tier difference. This is not specific to Anthropic; it is a category property of self-reported AI evaluations and a reason to run independent benchmarks before relying on launch numbers for production decisions.</p><h2>The patterns that worked</h2><p>The migration patterns that worked in the first three weeks share four practices:</p><ol><li><p><strong>Re-run the eval suite</strong> before flipping production traffic. The instruction-following shift exposes prompt regressions that are not obvious from spot-checking. Teams that have a regression suite ran it against 4.7 first, triaged the failures, and then either fixed the prompts or held the model upgrade until they could.</p></li><li><p><strong>Audit per-task cost</strong> in the first 48 hours after migration. The tokenizer change is a silent cost shift, and the only honest measurement is the per-task metric. 
A 30% increase in median cost-per-task with no quality change is the signal that effort tier or task budget tuning is needed.</p></li><li><p><strong>Re-tune the effort tier</strong> when comparing benchmarks. If the previous workload ran at <code>high</code> on 4.6, equivalent quality on 4.7 may sit at <code>medium</code> &#8212; the same output quality for meaningfully fewer reasoning tokens. The tier-shift opportunity is the largest under-claimed win in the migration.</p></li><li><p><strong>Test vision workloads explicitly.</strong> The 3.3x resolution jump changes what is feasible. Teams that don&#8217;t exercise vision are leaving capability on the table &#8212; and teams whose workloads include any document, screenshot, or diagram processing should explicitly test whether the new resolution unlocks workflows that weren&#8217;t viable before.</p></li></ol><p>The teams that struggled in the first three weeks did the opposite: flipped the model string, watched some prompts regress, and spent days triaging without a structured re-evaluation.
Several reported partial reversion to 4.6 for specific high-value workloads while they did the migration audit they should have done before the cutover.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qdec!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15ea418a-d01e-4774-a2c9-3b389c907535_717x1075.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qdec!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15ea418a-d01e-4774-a2c9-3b389c907535_717x1075.png 424w, https://substackcdn.com/image/fetch/$s_!qdec!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15ea418a-d01e-4774-a2c9-3b389c907535_717x1075.png 848w, https://substackcdn.com/image/fetch/$s_!qdec!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15ea418a-d01e-4774-a2c9-3b389c907535_717x1075.png 1272w, https://substackcdn.com/image/fetch/$s_!qdec!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15ea418a-d01e-4774-a2c9-3b389c907535_717x1075.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qdec!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15ea418a-d01e-4774-a2c9-3b389c907535_717x1075.png" width="717" height="1075" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/15ea418a-d01e-4774-a2c9-3b389c907535_717x1075.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1075,&quot;width&quot;:717,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:62573,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://theairuntime.com/i/196825084?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15ea418a-d01e-4774-a2c9-3b389c907535_717x1075.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!qdec!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15ea418a-d01e-4774-a2c9-3b389c907535_717x1075.png 424w, https://substackcdn.com/image/fetch/$s_!qdec!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15ea418a-d01e-4774-a2c9-3b389c907535_717x1075.png 848w, https://substackcdn.com/image/fetch/$s_!qdec!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15ea418a-d01e-4774-a2c9-3b389c907535_717x1075.png 1272w, https://substackcdn.com/image/fetch/$s_!qdec!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15ea418a-d01e-4774-a2c9-3b389c907535_717x1075.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>                                                           Migration Plan</em></p><h2>The verdict three weeks in</h2><p>For agentic coding workflows: migrate. The self-verification and tool-call reliability gains compound into materially fewer failed loops and less wasted compute. The teams running coding agents in production are the clearest beneficiaries.</p><p>For vision-heavy workflows: migrate immediately. The resolution jump is the kind of capability change that opens new product surfaces &#8212; workflows that were demo-viable but production-fragile become production-viable.</p><p>For research-heavy workflows: evaluate carefully. The reported regression on cross-referential reasoning is real for some tasks. 
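<p>Teams weighing that tradeoff can sketch a split deployment as task-type routing. The model IDs and keyword rules below are illustrative, not a published pattern:</p>

```python
# Route research-leaning tasks to the prior model while the regression
# persists; everything else goes to the new default.
RESEARCH_HINTS = ("cite", "source", "compare papers", "literature", "survey")

def pick_model(task_description: str) -> str:
    """Choose a model by task type (IDs and heuristics are illustrative)."""
    text = task_description.lower()
    if any(hint in text for hint in RESEARCH_HINTS):
        return "claude-opus-4-6"  # stronger cross-referential research
    return "claude-opus-4-7"      # coding / agentic default
```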
Some teams are running 4.6 for research and 4.7 for coding on the same product, routed by task type, until the gap closes.</p><p>For everyone: budget time for prompt audit, audit per-task cost, and treat the migration as a release event with its own pre-flight. The model is better. The migration is not free.</p><h2>What this release teaches about model upgrades generally</h2><p>The deeper pattern this release illustrates is the Harness Half-Life playing out in real time. The custom prompt scaffolding, the fallback heuristics, the workarounds for 4.6&#8217;s quirks &#8212; many of them are now obsolete. Some of them are now actively suppressing capabilities the new model could provide. A team that built a custom verification step on top of 4.6 because the model didn&#8217;t reliably check its own work is now running that custom step <em>and</em> the model&#8217;s stronger built-in self-verification &#8212; paying for both, getting marginal benefit from the custom layer.</p><p>Auditing the harness on every model release is no longer optional. With a release cadence of roughly two months on the Opus line, it is now part of the operating rhythm.</p><p>The teams who treat each model release as a discrete project &#8212; its own pre-flight, its own audit, its own dashboard for tracking the migration &#8212; are the teams whose harnesses stay lean. The teams who treat each release as a config flip accumulate harness debt at compounding rates, and pay it off in larger and more painful migrations later.</p><p>The model is improving faster than the harnesses around it. That asymmetry is now a structural feature of building on frontier models, and the engineering response &#8212; instrumented migrations, structured audits, and a culture of harness pruning &#8212; is what separates teams whose costs shrink with each release from teams whose costs only grow.</p><p>Three weeks of production data from Opus 4.7 is enough to see the shape. 
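<p>The release-as-event discipline reduces to a cutover gate. The check names below paraphrase the four practices covered earlier; the structure is an illustrative sketch, not anyone's actual tooling:</p>

```python
def preflight_passed(checks: dict[str, bool]) -> bool:
    """Gate a model cutover on the migration audits (names illustrative).

    All four audits must be present and passing before traffic flips.
    """
    required = {"eval_suite_rerun", "per_task_cost_audited",
                "effort_tier_retuned", "vision_workloads_tested"}
    return required <= checks.keys() and all(checks[k] for k in required)
```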
The teams who learned this lesson cleanly are already preparing for the next release. The teams who didn&#8217;t are still triaging the last one.</p><div><hr></div><h2>Don&#8217;t miss out on the next editions from The AI Runtime</h2><p><strong><a href="https://theairuntime.substack.com/">The Cost Layer</a></strong> &#8212; The xhigh effort tier and the tokenizer change are both cost levers. Caching, routing, and task budgets are how teams absorb the per-task cost shift on migration.</p><p><strong><a href="https://theairuntime.substack.com/">The Shipped Agent&#8217;s First 90 Days</a></strong> &#8212; Treat every model release as a release event with its own pre-flight. The first 90 days framework formalizes the operating rhythm that catches regressions before users do.</p><p><strong><a href="https://theairuntime.substack.com/">Long-Running Agent State Management</a></strong> &#8212; The <code>compact-2026-01-12</code> beta header pairs with Opus 4.7&#8217;s task budgets. Both are provider-native primitives that close failure modes teams used to build themselves.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Subscribe for the releases above</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Inside Mintlify&#8217;s Agent Stack]]></title><description><![CDATA[A teardown of the two-harness architecture &#8212; async sandboxes for writes, virtual filesystems for reads
&#8212; and what it teaches about wrapping a model in production.]]></description><link>https://theairuntime.com/p/inside-mintlifys-agent-stack</link><guid isPermaLink="false">https://theairuntime.com/p/inside-mintlifys-agent-stack</guid><dc:creator><![CDATA[The AI Runtime]]></dc:creator><pubDate>Wed, 06 May 2026 08:03:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!-xTO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51724eea-0d78-4888-98d3-beb3f8cd0d44_1024x559.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p><strong>TL;DR</strong> - Mintlify just <a href="https://www.mintlify.com/blog/series-b">raised $45M at a $500M valuation</a> on the bet that documentation has stopped being something humans read and started being infrastructure that agents query. Their own traffic data backs the bet: across 30 days and roughly 790M requests on Mintlify-powered sites, <a href="https://www.mintlify.com/blog/state-of-ai">AI coding agents accounted for 45.3% of traffic versus 45.8% for browsers</a>, with Claude Code alone generating more requests than Chrome on Windows.</p><p>Underneath the bet sits a three-part architecture worth studying. The <strong>write agent</strong> runs inside ephemeral <a href="https://www.mintlify.com/blog/knowledge-management-agent-era">Daytona sandboxes with a headless OpenCode session driven by Opus 4.6</a>, triggered by Slack mentions, dashboard prompts, API calls, or YAML-defined Workflows in your repo. The <strong>read assistant</strong> does the opposite &#8212; it <a href="https://www.mintlify.com/blog/how-we-built-a-virtual-filesystem-for-our-assistant">skips real sandboxes entirely</a> in favor of ChromaFs, a virtual filesystem layered over their existing Chroma database, taking session creation from roughly 46 seconds to about 100 milliseconds. 
The <strong>public surface</strong> auto-generates llms.txt, llms-full.txt, and skill.md at the root, <a href="https://www.mintlify.com/library/mintlify-alternatives-what-to-consider-and-why-theres-no-true-substitute">serves clean Markdown when you append </a><code>.md</code> to a page URL, and hosts an MCP server for every docs site it powers.</p><p>The architectural lesson isn&#8217;t that they built a doc agent. It&#8217;s that they built <strong>two</strong> harnesses with deliberately asymmetric constraints &#8212; async writes get full sandboxes, sync reads get a virtual filesystem &#8212; and the asymmetry is what makes the system economical at <a href="https://www.mintlify.com/blog/mintlify-acquires-trieve-to-improve-rag-search-in-documentation">over 23 million queries a month</a>. If you&#8217;re wrapping a model around a code repository for any reason, this is the reference implementation to study.</p></div><h2>The 45% problem</h2><p>Start with the data, because the architecture only makes sense once you accept the premise.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>In April 2026, Mintlify&#8217;s co-founder Han Wang published a Cloudflare-header analysis covering 30 days of traffic across all Mintlify-powered docs sites. 
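<p>The shape of that analysis is simple to sketch: bucket requests by user-agent and compare shares. The agent substrings and request counts below are illustrative, not Mintlify's data:</p>

```python
from collections import Counter

# Substrings that mark AI coding agents in user-agent headers (illustrative).
AI_AGENTS = ("claude-code", "cursor", "copilot")

def traffic_shares(requests_by_ua: dict[str, int]) -> dict[str, float]:
    """Fraction of requests attributable to AI coding agents vs everything else."""
    buckets = Counter()
    for ua, count in requests_by_ua.items():
        bucket = "ai_agent" if any(a in ua.lower() for a in AI_AGENTS) else "other"
        buckets[bucket] += count
    total = sum(buckets.values())
    return {k: v / total for k, v in buckets.items()}
```

Note the caveat from the post: agents that send generic headers (like Codex) land in the "other" bucket, so this method only bounds the agent share from below.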
The headline number: AI coding agents had reached <a href="https://www.mintlify.com/blog/state-of-ai">45.3% of total requests, narrowly behind 45.8% from browsers</a>. The distribution was lopsided. Claude Code alone produced 199.4M requests, ahead of Chrome on Windows at 119.4M. Cursor produced 142.3M. Together those two tools accounted for roughly 96% of identified AI agent traffic. Mintlify itself notes the real share is likely higher, since Codex traffic is invisible to user-agent header analysis and disappears into generic HTTP requests.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-xTO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51724eea-0d78-4888-98d3-beb3f8cd0d44_1024x559.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-xTO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51724eea-0d78-4888-98d3-beb3f8cd0d44_1024x559.png 424w, https://substackcdn.com/image/fetch/$s_!-xTO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51724eea-0d78-4888-98d3-beb3f8cd0d44_1024x559.png 848w, https://substackcdn.com/image/fetch/$s_!-xTO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51724eea-0d78-4888-98d3-beb3f8cd0d44_1024x559.png 1272w, https://substackcdn.com/image/fetch/$s_!-xTO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51724eea-0d78-4888-98d3-beb3f8cd0d44_1024x559.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!-xTO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51724eea-0d78-4888-98d3-beb3f8cd0d44_1024x559.png" width="1024" height="559" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/51724eea-0d78-4888-98d3-beb3f8cd0d44_1024x559.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:559,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:779871,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/196074000?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51724eea-0d78-4888-98d3-beb3f8cd0d44_1024x559.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!-xTO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51724eea-0d78-4888-98d3-beb3f8cd0d44_1024x559.png 424w, https://substackcdn.com/image/fetch/$s_!-xTO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51724eea-0d78-4888-98d3-beb3f8cd0d44_1024x559.png 848w, https://substackcdn.com/image/fetch/$s_!-xTO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51724eea-0d78-4888-98d3-beb3f8cd0d44_1024x559.png 1272w, https://substackcdn.com/image/fetch/$s_!-xTO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51724eea-0d78-4888-98d3-beb3f8cd0d44_1024x559.png 1456w" 
sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>                                                                Architecture Patterns</em></p><p>If half your readers are agents pulling context to generate code, the design pressure on documentation flips. Browsers want navigation chrome, syntax highlighting, expandable sections. Agents want clean Markdown, exact strings, and stable URLs. 
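The split can be served from one URL. A minimal sketch of the negotiation idea, assuming pages are stored once as Markdown and rendered per request; the names here (`pages`, `renderHtml`, `negotiate`) are illustrative, not Mintlify's implementation:

```typescript
// Hypothetical content-negotiation handler: one stored Markdown source,
// two renderings selected by the request.
type Rendering = { contentType: string; body: string };

const pages = new Map([
  ["/auth/oauth", "# OAuth\n\nUse the token endpoint."],
]);

function renderHtml(markdown: string): string {
  // Stand-in for a real Markdown-to-HTML pipeline.
  return "<article>" + markdown + "</article>";
}

function negotiate(path: string, accept: string): Rendering | undefined {
  // Appending .md to a page URL selects the agent-facing Markdown view;
  // an Accept header asking for text/markdown does the same.
  const wantsMarkdown = path.endsWith(".md") || accept.includes("text/markdown");
  const slug = path.endsWith(".md") ? path.slice(0, -3) : path;
  const source = pages.get(slug);
  if (source === undefined) return undefined;
  return wantsMarkdown
    ? { contentType: "text/markdown", body: source }
    : { contentType: "text/html", body: renderHtml(source) };
}
```

The design point is that both branches read the same `pages` entry, so the two audiences cannot drift apart.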
The same content has to render correctly to both audiences, and &#8212; critically &#8212; has to <em>stay current</em> as the underlying product ships at agent-swarm speed.</p><p>That second pressure is the one that produced the agent stack. As Mintlify&#8217;s other co-founder Hahnbee Lee frames it, when a chatbot gives a wrong answer it is usually a documentation failure rather than a model failure, because the corpus the model retrieved against is out of date. The gap between what your docs say and what your product does compounds quarter over quarter unless something automated keeps the two in sync. Their answer is two distinct agents with two distinct harnesses, plus a public surface that exposes the maintained corpus to every other agent in the ecosystem.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xxag!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6e34ec3-4ccb-4678-8d18-9a62e425c387_871x579.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!xxag!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6e34ec3-4ccb-4678-8d18-9a62e425c387_871x579.png 424w, https://substackcdn.com/image/fetch/$s_!xxag!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6e34ec3-4ccb-4678-8d18-9a62e425c387_871x579.png 848w, https://substackcdn.com/image/fetch/$s_!xxag!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6e34ec3-4ccb-4678-8d18-9a62e425c387_871x579.png 1272w, 
https://substackcdn.com/image/fetch/$s_!xxag!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6e34ec3-4ccb-4678-8d18-9a62e425c387_871x579.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xxag!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6e34ec3-4ccb-4678-8d18-9a62e425c387_871x579.png" width="871" height="579" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c6e34ec3-4ccb-4678-8d18-9a62e425c387_871x579.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:579,&quot;width&quot;:871,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:46305,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/196074000?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6e34ec3-4ccb-4678-8d18-9a62e425c387_871x579.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!xxag!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6e34ec3-4ccb-4678-8d18-9a62e425c387_871x579.png 424w, https://substackcdn.com/image/fetch/$s_!xxag!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6e34ec3-4ccb-4678-8d18-9a62e425c387_871x579.png 848w, 
https://substackcdn.com/image/fetch/$s_!xxag!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6e34ec3-4ccb-4678-8d18-9a62e425c387_871x579.png 1272w, https://substackcdn.com/image/fetch/$s_!xxag!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6e34ec3-4ccb-4678-8d18-9a62e425c387_871x579.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>Two harnesses, two latency budgets. 
The write path optimizes for capability; the read path optimizes for cost-per-conversation.</em></p><div><hr></div><h2>Layer 1 &#8212; The write agent: a sandbox is the whole product</h2><p>Most &#8220;AI doc writer&#8221; features on the market today are roughly one prompt, one model call, one diff. Mintlify&#8217;s write agent is structurally different. When you trigger it &#8212; by <code>@mintlify</code>-ing the bot in Slack, hitting <code>Cmd+I</code> in the dashboard, calling the agent API, or merging a PR that fires a Workflow &#8212; what runs on the other side is a headless OpenCode session driven by Opus 4.6, scoped to a fresh Daytona container that has the docs repo and any context repositories cloned in. The sandbox is the unit of work.</p><p>This decision is more load-bearing than it sounds. The Mintlify team is explicit about the reasoning: pointing a stateless model at a codebase produces, in their phrase, &#8220;chaos with a byline&#8221;. The agent needs a real environment to read code, plan changes, and edit files safely &#8212; not an API call decorated with retrieved chunks. So they gave it one. A trigger lands on a job queue, a worker provisions the container, and the result of the run is reported back through GitHub commit checks and the Mintlify dashboard. 
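Stripped to its skeleton, that path is a queue, a worker, and a report step. A sketch under assumed names (`Trigger`, `runInSandbox`, `drain` are illustrative, not Mintlify's internals):

```typescript
// Skeleton of the async write path: a trigger lands on a queue, a worker
// provisions an isolated workspace, and the outcome is reported back.
type Trigger = { source: "slack" | "dashboard" | "api" | "workflow"; task: string };
type RunResult = { status: "pr_opened" | "failed"; detail: string };

const queue: Trigger[] = [];

function enqueue(t: Trigger): void {
  queue.push(t);
}

function runInSandbox(t: Trigger): RunResult {
  // Stand-in for: provision container, clone docs + context repos,
  // run the agent session, build the docs, open a PR.
  return { status: "pr_opened", detail: "PR for: " + t.task };
}

function drain(report: (r: RunResult) => void): void {
  // Worker loop: take each queued trigger, run it, report the result.
  while (queue.length > 0) {
    const t = queue.shift();
    if (t !== undefined) report(runInSandbox(t));
  }
}
```

The queue is what buys the latency budget: nothing in the write path promises an immediate answer, so the worker can afford a full sandbox boot per job.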
Inside the container, the agent runs through a fixed pipeline: it pulls in relevant material across the docs and the connected code repos, drafts a multi-step plan if the work calls for one, applies edits while honoring the project&#8217;s writing standards, runs a <a href="https://www.mintlify.com/docs/agent">local Mintlify CLI build to confirm the docs still compile</a>, and opens a pull request &#8212; direct commits to main are not on the menu.</p><p>Two design choices inside that loop are worth pulling out.</p><p><strong>Slack-first, not terminal-first.</strong> The Mintlify agent originally shipped only in Slack and via API, with <a href="https://www.mintlify.com/blog/agent-dashboard">the dashboard surface added later in December 2025</a>. The team&#8217;s stated reason: opening a terminal triggers a <a href="https://www.mintlify.com/blog/we-built-our-coding-agent-for-slack">&#8220;mentally draining switch&#8221;</a> that opening Slack does not, and documentation work is exactly the kind of task people procrastinate on. By living where the relevant context already lives &#8212; the PR thread that explained the change, the customer Slack message that surfaced the gap &#8212; the trigger surface matches the source of the work.</p><p><strong>Behavior-as-code through </strong><code>AGENTS.md</code><strong>.</strong> The agent reads a config file at <code>.mintlify/AGENTS.md</code> in your repo, and appends its contents to its system prompt for every task it runs &#8212; whether the trigger comes from Slack, the dashboard, or the API. The path matters: Mintlify&#8217;s docs explicitly warn that placing the file at the project root exposes it as a public asset under <code>/agents.md</code>, since the <code>.mintlify/</code> directory is not served on the docs site. What you put inside is style preferences, code standards, project-specific terminology &#8212; the kind of guidance a senior reviewer would otherwise repeat fifty times a year. 
It is the same pattern as Anthropic&#8217;s <code>CLAUDE.md</code> or the AGENTS.md spec emerging across the agent tooling space, and it makes agent behavior version-controlled and reviewable.</p><p>The most interesting trigger surface is <strong>Workflows</strong>, where the YAML config gets explicit. A workflow file lives in your repo. The schema looks roughly like this:</p><pre><code><code>---
name: 'Update API reference on backend changes'
on:
  push:
    - repo: 'your-org/backend'
      branch: main
context:
  - repo: 'your-org/docs'
  - repo: 'your-org/openapi-specs'
automerge: false
---

When the backend repo merges a PR, scan the diff for changes to public API
endpoints, request/response schemas, or authentication behavior. Update the
matching API reference pages and code examples. Skip internal refactors.</code></code></pre><p>The structure is a trigger (cron job or push event), a list of context repos to clone in, an automerge flag, and natural-language instructions in Markdown. When the trigger fires, the agent evaluates the conditions, runs the task, and either commits directly or opens a PR for review, depending on the automerge flag. Documentation maintenance becomes a downstream event of shipping, not a separate task someone has to remember.</p><p>The whole arrangement maps onto a pattern emerging across serious agent products: give the AI a sandbox, version-control the instructions, keep humans in the review loop, and let the model do the actual work inside well-defined guardrails. The reviewer-on-PRs analogy is doing real work here. The agent is treated like a junior contributor with full repo access &#8212; capable, but reviewed.</p><div><hr></div><h2>Layer 2 &#8212; The read assistant: when a real sandbox is the wrong answer</h2><p>If the write agent shows what it looks like to spend latency to gain capability, the read assistant shows the opposite trade-off &#8212; and it is the more architecturally surprising of the two.</p><p>The read assistant is the chat widget your readers use on a Mintlify-powered docs site. It now serves over thirty thousand conversations a day across hundreds of thousands of users. The natural design &#8212; and the one Mintlify started with &#8212; was the same shape that powers the write agent: spin up a sandbox, clone the docs repo, let the model run real <code>grep</code>, <code>cat</code>, <code>ls</code>, and <code>find</code> against the filesystem.</p><p>That design hit two walls. 
First, latency: <a href="https://www.mintlify.com/blog/how-we-built-a-virtual-filesystem-for-our-assistant">p90 session boot time, including the GitHub clone and other setup, came in around 46 seconds</a> &#8212; fine for an async write task where someone fires a Slack message and walks to get coffee, fatal for a reader staring at a loading spinner on a docs page. Second, cost. At nearly a million conversations a month, even a minimal sandbox setup at 1 vCPU, 2 GiB RAM, and a five-minute lifetime would have run north of $70,000 a year on Daytona&#8217;s per-second pricing, with longer sessions doubling the bill.</p><p>So the team built <strong>ChromaFs</strong> &#8212; a virtual filesystem that gives the agent the <em>illusion</em> of a real shell, layered over the Chroma database that already stored the docs as embedded chunks. Session creation collapsed from tens of seconds to roughly 100 milliseconds, and because ChromaFs reuses infrastructure they were already paying for, the marginal compute cost per conversation dropped to zero. The implementation runs on top of <code>just-bash</code>, a TypeScript reimplementation of bash from Vercel Labs that exposes a pluggable <code>IFileSystem</code><a href="https://www.mintlify.com/blog/how-we-built-a-virtual-filesystem-for-our-assistant"> interface</a>. <code>just-bash</code> parses commands, pipes, and flags; ChromaFs translates each underlying filesystem call into a Chroma query.</p><p>The mechanics are worth dwelling on, because they reveal how thoughtful harness design beats brute-force sandboxing.</p><p>The directory tree is bootstrapped from a single gzipped JSON document called <code>__path_tree__</code> stored inside the Chroma collection. On startup, the server fetches and decompresses it into two in-memory structures &#8212; a set of file paths and a map from directories to their children. 
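A sketch of that bootstrap, assuming the decompressed payload yields a flat list of file paths; the input shape and the `buildTree` name are guesses, not ChromaFs internals:

```typescript
// Build the two in-memory structures the post describes: a set of file
// paths and a directory-to-children map.
function buildTree(paths: string[]) {
  const files = new Set(paths);
  const children = new Map();
  for (const p of paths) {
    const parts = p.split("/").filter((s) => s.length > 0);
    let dir = "/";
    // Walk each intermediate directory, registering it under its parent.
    for (let i = 0; i + 1 < parts.length; i++) {
      const next = dir === "/" ? "/" + parts[i] : dir + "/" + parts[i];
      if (!children.has(dir)) children.set(dir, new Set());
      children.get(dir).add(next);
      dir = next;
    }
    // Finally register the file itself under its containing directory.
    if (!children.has(dir)) children.set(dir, new Set());
    children.get(dir).add(p);
  }
  return { files, children };
}
```

Once these two structures exist, directory listing and path resolution are pure in-memory lookups, which is what makes the shell primitives cheap.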
After that, <code>ls</code>, <code>cd</code>, and <code>find</code> resolve in local memory with zero network calls, and the tree is cached so subsequent sessions for the same site skip the fetch entirely. Per-user access control happens at tree-build time: ChromaFs prunes paths the user can&#8217;t see and applies a matching filter to all subsequent Chroma queries, with the result that <a href="https://www.mintlify.com/blog/how-we-built-a-virtual-filesystem-for-our-assistant">pruned paths cannot even be referenced by the agent</a>. Reading a page is a chunk-reassembly operation &#8212; <code>cat /auth/oauth.mdx</code> fetches all chunks with the matching slug, sorts them by <code>chunk_index</code>, and joins them into the full page. Writes throw <code>EROFS</code>, making the system stateless by construction.</p><p>The most clever piece is <code>grep</code>. A naive recursive grep over a virtual filesystem would be agonizing &#8212; every file would round-trip to the database. ChromaFs intercepts the grep call, parses flags with <code>yargs-parser</code>, and translates them into a Chroma query (<code>$contains</code> for fixed strings, <code>$regex</code> for patterns) that acts as a coarse filter to identify which files might contain a hit. The matched chunks are bulk-prefetched into a Redis cache, and the rewritten grep is handed back to <code>just-bash</code> for in-memory fine filtering. Large recursive queries finish in milliseconds.</p><p>Sitting beneath ChromaFs in the read path is <strong>Trieve</strong>, the RAG infrastructure company <a href="https://www.mintlify.com/blog/mintlify-acquires-trieve-to-improve-rag-search-in-documentation">Mintlify acquired in July 2025</a>. Trieve had been Mintlify&#8217;s search backbone since before the team finished its Y Combinator batch, and the acquisition brought retrieval ownership in-house at a moment when the assistant was already serving more than 23 million queries a month. 
Trieve&#8217;s stack &#8212; dense vector search, re-ranker models, sub-sentence highlighting, and date recency biasing on a single endpoint &#8212; does the heavy lifting underneath ChromaFs&#8217;s UNIX-style interface. Trieve also <a href="https://www.trieve.ai/blog/trieve-is-being-acquired-by-mintlify">moved to an MIT license as part of the acquisition</a>, so the same retrieval kernel is inspectable on GitHub.</p><p>The pattern in the read assistant is the part most teams underweight. Mintlify&#8217;s team observed that <a href="https://www.mintlify.com/blog/how-we-built-a-virtual-filesystem-for-our-assistant">agents are converging on filesystems as their primary interface</a>, because <code>grep</code>, <code>cat</code>, <code>ls</code>, and <code>find</code> are sufficient primitives for an agent to reason over arbitrary structured content. Most builders take that observation and reach for a real sandbox. Mintlify took the same observation and asked whether the <em>interface</em> could be virtualized while keeping the <em>primitives</em> real. For their workload, the answer was yes &#8212; and the cost curve in their post (sandbox cost grows linearly with conversation duration; ChromaFs stays flat) is a clean argument for why.</p><div><hr></div><h2>Layer 3 &#8212; The public surface: content negotiation as the unification trick</h2><p>The third layer is the cheapest to describe and the easiest to overlook.</p><p>Every Mintlify-hosted docs site automatically generates a set of agent-readable artifacts at the root: llms.txt, llms-full.txt, and skill.md. The first two are an emerging convention for telling LLMs what content lives on a site and giving them a parseable bulk dump. The third is more interesting. 
As Mintlify describes it, <code>skill.md</code> is the action-layer manifest &#8212; it enumerates not just what the documentation contains but what an agent can actually invoke against the product, with required inputs and operating constraints attached to each capability. It is, in other words, the difference between an agent that can find information and an agent that can take action. Mintlify also exposes the <code>/.well-known/agent-skills</code> and <code>/.well-known/skills</code> paths &#8212;  so any agent that knows the convention can find capabilities without hard-coded paths.</p><p>The unification trick that ties everything together is <strong>content negotiation</strong>. The same URL serves rich HTML to browsers and clean Markdown to agents &#8212; appending <code>.md</code> to any page URL returns a Markdown view of the same content, with no separate agent-facing site to maintain. This avoids the failure mode where teams maintain a &#8220;human site&#8221; and a separate &#8220;AI site&#8221; that drift out of sync; there is only one content store, with two rendering targets selected by the request.</p><p>Finally, every Mintlify site auto-hosts an MCP server, which lets coding agents like Cursor, Claude Code, and Windsurf query current documentation while a task is running. Authentication is supported when the docs site itself is gated &#8212; the MCP server respects whatever auth protocol the docs already use. The architectural significance is that retrieval is no longer something only the docs site itself can do. Every external agent that supports MCP gets a structured handle into your corpus, on the same terms as Mintlify&#8217;s own assistant.</p><div><hr></div><h2>What the architecture teaches</h2><p>A few patterns are general enough to lift out of Mintlify&#8217;s specific case and apply elsewhere.</p><p>First, <strong>the sandbox is the unit of work for write tasks, but the wrong unit for read tasks</strong>. 
Most builders default to one or the other. Mintlify&#8217;s own bill clarifies the trade-off: a sandbox that boots in tens of seconds and costs a fraction of a cent per session is fine for asynchronous PR drafting, and ruinous for a chat widget. If you&#8217;re building both surfaces, expect to want both harnesses.</p><p>Second, <strong>version-controlled, natural-language instructions are the right encoding for agent behavior</strong>. Workflows YAML and <code>AGENTS.md</code> are the same idea applied at different scopes &#8212; one configures a recurring task, the other configures the agent globally. Both live in the repo, both go through code review, both evolve with the project. This is what &#8220;config as code&#8221; looks like when the configured component is a model.</p><p>Third, <strong>virtualizing the agent&#8217;s interface, not its environment, is often the better move</strong>. ChromaFs is the cleanest example: a real grep, a real ls, a real cat &#8212; but resolved against a database, not a disk. The agent doesn&#8217;t need a sandbox, it needs the sandbox&#8217;s API. Once you internalize that, a lot of &#8220;we need a Daytona for this&#8221; becomes &#8220;we need an <code>IFileSystem</code> shim for this,&#8221; with two orders of magnitude less infrastructure.</p><p>Fourth, <strong>content negotiation is the right unification primitive when you&#8217;re serving humans and agents from the same corpus</strong>. Maintaining parallel &#8220;human docs&#8221; and &#8220;AI docs&#8221; is how you guarantee they drift. Same URL, different format, selected by the request &#8212; and the cost of supporting the agent surface drops to near-zero.</p><p>Finally, <strong>harnesses are not edge cases, they&#8217;re the product</strong>. If you remove ChromaFs from the read assistant, the bill blows up. If you remove the sandbox boundary from the write agent, you stop being able to safely run on customer codebases. 
If you remove the auto-generated llms.txt and MCP server, the 45.3% of agent traffic loses its grip on the corpus. The model is doing model work in the middle, but everything around it &#8212; the sandbox, the virtual filesystem, the YAML triggers, the public surface &#8212; is what makes the product trustworthy and economical.</p><div><hr></div><h2>What to do with this</h2><p>Three concrete moves for practitioners building anything adjacent to this space.</p><p>If you operate a documentation site, run it through Mintlify&#8217;s free <a href="https://www.mintlify.com/blog/agent-score">Agent Score tool</a>, which checks twenty-nine signals of agent-readability and tells you where the gaps are. The data is right there: half your traffic is agents you cannot see, and most teams are still building only for browsers. If you&#8217;d rather audit on your own, start by checking whether <code>curl -L https://yourdocs.com/some-page.md</code> returns clean Markdown or a 404 &#8212; that one HTTP request tells you whether you&#8217;re on the agent map at all.</p><p>If you&#8217;re building any agent that needs to read or modify a code repository, start with the harness, not the prompt. Decide your latency budget before you decide your model. If the answer is &#8220;tens of seconds and the agent edits files,&#8221; the Mintlify write agent &#8212; sandbox, headless OpenCode, version-controlled config &#8212; is your reference. If the answer is &#8220;milliseconds and the agent only reads,&#8221; the ChromaFs pattern (virtualize the interface, not the environment) is your reference.</p><p>And if you&#8217;re shipping a product that other agents will need to understand &#8212; an API, an SDK, a developer tool &#8212; treat your documentation as a programmatic interface that happens to also be human-readable. Auto-generate llms.txt and skill.md, expose an MCP server, serve clean Markdown via content negotiation. The asymmetric world Mintlify is betting on already exists. 
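The curl check suggested above can also live in CI as a pure function: pass in the status, content type, and body you observed for the <code>.md</code> URL. The heuristics here are assumptions for illustration, not Mintlify's Agent Score criteria:

```typescript
// Classify a fetched .md variant of a docs page as agent-readable or not.
// Pure helper: how you fetch status/contentType/body (curl, fetch) is up to you.
type Probe = { status: number; contentType: string; body: string };

function agentReadable(p: Probe): boolean {
  // A 404 means the page is off the agent map entirely.
  if (p.status !== 200) return false;
  // Getting the browser view back means negotiation is not happening.
  if (p.contentType.includes("text/html")) return false;
  // Heuristic: non-empty body that is not an HTML shell.
  const body = p.body.trimStart();
  return body.length > 0 && !body.startsWith("<!DOCTYPE");
}
```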
The teams whose docs are agent-readable get evaluated. The teams whose docs aren&#8217;t get skipped.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[How Vertical Agents Self-Improve in Production]]></title><description><![CDATA[Field notes on the harness loop at Harvey, Hippocratic, Anterior, and Azure SRE &#8212; where production failures compound into skill without retraining the model.]]></description><link>https://theairuntime.com/p/how-vertical-agents-self-improve</link><guid isPermaLink="false">https://theairuntime.com/p/how-vertical-agents-self-improve</guid><dc:creator><![CDATA[The AI Runtime]]></dc:creator><pubDate>Sat, 02 May 2026 11:03:55 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!V7Rg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5caaf98-dc3c-4fc0-8487-7f9fa24ff038_1024x559.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p><strong>TL;DR</strong> - In regulated verticals &#8212; healthcare, legal, insurance, finance &#8212; the most reliable way to make a deployed agent better is not a new model. 
It is a closed loop that turns production failures into harness updates: prompts, tools, sub-agents, memory files, judge rubrics, routing logic. Harvey ran this loop on twelve legal tasks and moved average success from <a href="https://www.artificiallawyer.com/2026/04/07/harvey-drives-legal-agent-learning-via-harness-engineering/">40.8% to 87.7% with model weights frozen</a>, with <a href="https://x.com/nikogrupen/status/2041166953902203157">complaint drafting going from 2% to 98% rubric coverage</a>. Hippocratic AI (vendor-published) reports clinical accuracy improvements <a href="https://hippocraticai.com/polaris-3/">from ~80% pre-Polaris to 99.38% in Polaris 3.0</a> by feeding ~1.85M real patient calls and 307K clinician-reviewed test calls back into the system. Anterior (vendor-published) puts a <a href="https://www.zenml.io/llmops-database/building-scalable-llm-evaluation-systems-for-healthcare-prior-authorization">reference-free LLM-as-judge in front of every prior auth decision</a>, routes only the low-confidence ones to under ten clinicians, and reports 96% F1 at over 100K decisions/day. Microsoft&#8217;s Azure SRE Agent moved its <a href="https://techcommunity.microsoft.com/blog/appsonazureblog/the-agent-that-investigates-itself/4500073">Intent-Met score from 45% to 75% on novel incidents</a> by letting the agent investigate its own bugs and submit PRs against its own codebase. The shared pattern is the same six nodes everywhere: trace &#8594; judge &#8594; cluster &#8594; mutate harness &#8594; gate &#8594; deploy. <strong>If you cannot run that loop, you are shipping a frozen artifact in a moving market.</strong> Start by instrumenting traces and writing one rubric. The judge and the mutation loop come after.</p></div><h2>The frozen-agent problem</h2><p>A vertical agent that ships at 90% accuracy and stays there is not a 90% accurate system. 
It is a 90% accurate system at the moment of deployment, decaying.</p><p>The decay has three sources. <strong>Distribution drift</strong>: real patients ramble, real lawyers redline contracts in non-canonical ways, real claims arrive with new denial codes. <strong>Policy drift</strong>: CMS coverage determinations change, <a href="https://www.healthaffairs.org/doi/10.1377/hlthaff.2025.00897">EU AI Act provisions phase in on staggered enforcement timelines</a>, insurer rulesets get rewritten quarterly. <strong>Long-tail surface area</strong>: the failure modes you didn&#8217;t see in eval are the ones production discovers, one in ten thousand at a time. 
At 100K medical decisions per day, <a href="https://www.zenml.io/llmops-database/building-scalable-llm-evaluation-systems-for-healthcare-prior-authorization">a one-in-ten-thousand subtle hallucination &#8212; &#8220;suspicious for multiple sclerosis&#8221; when the patient has a confirmed MS diagnosis &#8212; fires ten times daily</a>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!V7Rg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5caaf98-dc3c-4fc0-8487-7f9fa24ff038_1024x559.png"><img src="https://substackcdn.com/image/fetch/$s_!V7Rg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5caaf98-dc3c-4fc0-8487-7f9fa24ff038_1024x559.png" width="1024" height="559" alt="Agent Improvement"></a></figure></div><p><em>Agent Improvement</em></p><p>In low-stakes consumer apps you can absorb that. In a vertical where the cost of a single error is a denied surgery, a missed disclosure schedule, or a regulatory finding, you cannot. So the question that defines vertical agent engineering in 2026 is not &#8220;which model do we use&#8221; &#8212; it is &#8220;how does this agent get better next week than it is today, <em>without</em> a new base model release, and <em>with</em> the audit trail a regulator will demand.&#8221;</p><p>The answer that has emerged across legal, healthcare, insurance, and incident response is the same architecture, sometimes given different names. 
Anthropic&#8217;s engineering team and Viv Trivedy refer to it as <a href="https://addyosmani.com/blog/agent-harness-engineering/">harness engineering</a>. Microsoft frames it as the <a href="https://techcommunity.microsoft.com/blog/appsonazureblog/the-agent-that-investigates-itself/4500073">agent investigating itself</a>. NVIDIA borrows MAPE-K from autonomic computing and <a href="https://arxiv.org/pdf/2510.27051">calls it a data flywheel</a>. LangChain calls it <a href="https://www.langchain.com/conceptual-guides/traces-start-agent-improvement-loop">the agent improvement loop powered by traces</a>. The mechanics are the same.</p><div><hr></div><h2>The shape of the loop</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6iB3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F037db51c-ee9f-4106-a460-95d25164e7cd_841x763.png"><img src="https://substackcdn.com/image/fetch/$s_!6iB3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F037db51c-ee9f-4106-a460-95d25164e7cd_841x763.png" width="841" height="763" alt="The loop"></a></figure></div><p><em>The loop</em></p><p>Six nodes. 
Every component carries weight; every break in the chain causes silent degradation.</p><p><strong>Production traces</strong> are the substrate. Without per-step tool calls, model inputs, model outputs, latency, token counts, and final outcomes, none of the downstream work is possible. LangChain&#8217;s formulation is the cleanest: <a href="https://www.langchain.com/conceptual-guides/traces-start-agent-improvement-loop">traces come from staging environments, benchmark runs, local development, and especially from production</a>, and they are the input to every subsequent step. The trace store doubles as the audit trail regulators ask for.</p><p><strong>Evaluation and judging</strong> is where most teams over-rely on offline benchmarks. The shift in 2025&#8211;26 has been toward online evaluators that score every production trace &#8212; typically an LLM-as-judge augmented with deterministic checks (schema validation, citation existence, tool-call shape) and routed human review on a configurable sample. Anterior&#8217;s framing is sharper than most: their judge is <em>reference-free</em>, scoring outputs against guidelines and clinical reasoning rather than a held-out ground truth, because the volume &#8212; over 100K decisions a day &#8212; makes ground truth impossible to maintain.</p><p><strong>Failure clustering</strong> is where the leverage is. A pile of low-scored traces is not actionable. Grouping them by failure pattern &#8212; &#8220;agent missed exhibit B in 30% of due diligence runs,&#8221; &#8220;agent emits &#8216;suspicious for X&#8217; on confirmed-X patients,&#8221; &#8220;agent hits LLM 429s during streaming&#8221; &#8212; turns symptoms into hypotheses. LangChain runs <a href="https://blog.langchain.com/improving-deep-agents-with-harness-engineering/">parallel error-analysis subagents and synthesizes their findings into harness change proposals</a>. 
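</p><p>The grouping step itself is small. A minimal sketch in Python &#8212; the trace fields and failure labels are illustrative, not any vendor&#8217;s actual schema:</p>

```python
from collections import defaultdict

# Illustrative low-scored traces; in practice these come from the trace
# store, already scored by the LLM judge. Field names are hypothetical.
traces = [
    {"id": "t1", "score": 0.2, "failure": "missed_exhibit"},
    {"id": "t2", "score": 0.1, "failure": "llm_429"},
    {"id": "t3", "score": 0.3, "failure": "missed_exhibit"},
    {"id": "t4", "score": 0.9, "failure": None},  # passed; ignored
    {"id": "t5", "score": 0.2, "failure": "llm_429"},
    {"id": "t6", "score": 0.4, "failure": "missed_exhibit"},
]

def cluster_failures(traces, threshold=0.5):
    """Group low-scored traces by failure pattern, largest cluster first."""
    clusters = defaultdict(list)
    for t in traces:
        if t["score"] < threshold and t["failure"]:
            clusters[t["failure"]].append(t["id"])
    return sorted(clusters.items(), key=lambda kv: -len(kv[1]))

top_pattern, members = cluster_failures(traces)[0]
```

<p>The largest cluster is the hypothesis to investigate first; everything below it waits for a later cycle. 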
Microsoft&#8217;s SRE Agent runs <a href="https://techcommunity.microsoft.com/blog/appsonazureblog/the-agent-that-investigates-itself/4500073">a daily monitoring task that searches the last 24 hours of errors, clusters the top hitters, traces each to its root cause, and submits a PR</a>.</p><p><strong>Harness mutation</strong> is the change itself. We will spend a section on the levers that actually move; for now: <em>most of these changes never touch model weights</em>. They edit the system prompt, add a skill or sub-agent, modify a tool definition, append to a memory file, tighten a routing threshold, or rewrite the judge&#8217;s rubric.</p><p><strong>Validation gate</strong> is the hill-climbing safety. Every proposed harness change runs against a frozen eval set before it ships, and any regression &#8212; even on a task the change was not targeting &#8212; blocks the merge. Harvey runs this against <a href="https://x.com/nikogrupen/status/2041166953902203157">twelve internal benchmark tasks per iteration</a>; LangChain marks proposed changes that overfit as discarded runs in their iteration log. Without the gate, the loop generates regressions as fast as it generates improvements.</p><p><strong>Deploy</strong> then closes the cycle. The new harness produces new traces; new traces feed new judges; new clusters drive new mutations. The model is the one piece of this picture that does not change between weekly cycles.</p><p>The non-obvious property of this loop is what compounds. As Anterior describes it, the loop creates a <a href="https://www.zenml.io/llmops-database/building-scalable-llm-evaluation-systems-for-healthcare-prior-authorization">virtuous improvement cycle where the evaluator itself gets calibrated against human review, and confidence grades from that calibrated evaluator route which cases need humans next time</a>. The judge improves. The clustering improves. The mutations get more targeted. 
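</p><p>The validation gate reduces to a comparison over the frozen eval set. A minimal sketch &#8212; the task names and scores below are hypothetical, loosely shaped like a multi-task legal benchmark:</p>

```python
def passes_gate(baseline, candidate, tolerance=0.0):
    """Ship a harness mutation only if no frozen-eval task regresses.

    baseline / candidate map task name -> score on the frozen eval set.
    Any regression blocks the merge, including on tasks the change
    was not targeting.
    """
    return all(
        candidate.get(task, 0.0) >= score - tolerance
        for task, score in baseline.items()
    )

baseline  = {"lease_review": 0.72, "complaint_draft": 0.40, "tax_memo": 0.55}
improved  = {"lease_review": 0.74, "complaint_draft": 0.91, "tax_memo": 0.55}
regressed = {"lease_review": 0.74, "complaint_draft": 0.91, "tax_memo": 0.48}

assert passes_gate(baseline, improved)       # clean win: merge
assert not passes_gate(baseline, regressed)  # untargeted regression: discard
```

<p>The non-zero <code>tolerance</code> is the only knob; set it above zero and you are explicitly budgeting for regressions. 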
The agent appears to learn &#8212; without a single weight changing.</p><div><hr></div><h2>Case 1: Harvey &#8212; autoresearch and the rubric ceiling</h2><p>The cleanest published demonstration is Harvey&#8217;s recent <a href="https://x.com/nikogrupen/status/2041166953902203157">autoresearch experiment</a>, summarized externally by <a href="https://www.artificiallawyer.com/2026/04/07/harvey-drives-legal-agent-learning-via-harness-engineering/">Artificial Lawyer</a>. Niko Grupen, Head of Applied Research, ran twelve tasks from Harvey&#8217;s internal agent benchmark &#8212; commercial lease review, complaint drafting, tax memos, disclosure schedules, due diligence questionnaires &#8212; through a loop where an outer agent is allowed to edit the inner agent&#8217;s harness based on rubric-graded judge feedback.</p><p>The setup: each task ships with source documents, instructions, and a detailed grading rubric. After an attempt, an LLM judge scores against the rubric and produces written feedback on what the agent got right, what it missed, and where its reasoning was wrong. A coding agent reads the judge feedback, clusters the failures, forms a hypothesis about which harness components would help, edits or builds those components &#8212; skills, hooks, scripts, sub-agents, <em>not</em> model weights &#8212; and reruns.</p><p>The result: across all twelve tasks, average success rose from 40.8% to 87.7%. Five of the twelve started in the 2&#8211;7% range. After optimization, seven exceeded 90% and one hit 100%. The complaint drafting task is the most striking &#8212; it <a href="https://x.com/nikogrupen/status/2041166953902203157">moved from 2% rubric coverage to 98% over a handful of iterations, producing a 164-paragraph complaint with a 33-exhibit list</a>.</p><p>Two patterns from Grupen&#8217;s log are worth quoting on their own terms. First, the early iterations correct basic structural failures &#8212; wrong file types, missing deliverables, weak structure. 
Later iterations show domain-specific expertise emerging: cross-document issue spotting, risk classification, distinguishing genuinely problematic provisions from market-standard distractors. Second, the ceiling is the rubric. &#8220;When the rubric is high quality, the agent can hill-climb surprisingly far.&#8221; When it isn&#8217;t, the loop stalls.</p><p>This generalizes. The same auto-improvement pattern works in a generic coding domain: LangChain&#8217;s deepagents-cli moved <a href="https://blog.langchain.com/improving-deep-agents-with-harness-engineering/">from 52.8% to 66.5% on Terminal Bench 2.0 &#8212; a 13.7-point jump from harness changes alone, with the model fixed at GPT-5.2-Codex</a>. The mechanism is the same trace analyzer skill, parallel error agents, and targeted prompt/tool/middleware changes per iteration.</p><p>The Harvey caveat is real and worth surfacing: this is a vendor-run experiment on twelve tasks; it does not yet generalize to all legal work, and it is bound by the quality of the rubrics Harvey wrote. But the directional finding &#8212; that harness-layer changes can deliver model-upgrade-sized improvements in a regulated domain &#8212; is now hard to dismiss.</p><div><hr></div><h2>Case 2: Hippocratic AI &#8212; clinicians as a learning signal at scale</h2><p>Hippocratic AI&#8217;s Polaris is a different shape of the same loop, scaled to a 22-LLM constellation that handles <a href="https://arxiv.org/pdf/2603.29893">over 10 million real patient calls</a> and a network of 6,234 US-licensed clinicians who review production output.</p><p>The vendor-published trajectory across three model generations: <a href="https://hippocraticai.com/polaris-3/">pre-Polaris baseline ~80%, Polaris 1.0 at 96.79%, Polaris 2.0 at 98.75%, Polaris 3.0 at 99.38% clinical accuracy</a>, validated under their Real-World Evaluation of Large Language Models in Healthcare framework. 
The framework leverages <a href="https://hippocraticai.com/real-world-evaluation-llm/">6,234 US-licensed clinicians (5,969 nurses and 265 physicians) evaluating 307,038 unique calls</a> through a three-tier review process: nurse review first, physician adjudication when needed, structured error categorization in between. Errors flagged at any tier feed back into the next iteration&#8217;s training and harness.</p><p>The subsystem-level numbers tell the more interesting story, because they show what specifically improved between Polaris 2.0 and 3.0 by listening to production:</p><ul><li><p><a href="https://hippocraticai.com/polaris-3/">Health Risk Assessment documentation accuracy: 90.5% &#8594; 98.5%</a></p></li><li><p>Explanation-of-Benefits policy quoting: 86.4% &#8594; 99.4%</p></li><li><p>Complex appointment scheduling error rate: 8% &#8594; 0.5%</p></li><li><p>Background-noise speech recognition error rate: 9.3% &#8594; 2.3%</p></li><li><p>Clarification engine error rate (gracefully handling unclear patient speech): 16.3% &#8594; 2.0%</p></li></ul><p>These aren&#8217;t random improvements. They&#8217;re the long-tail issues that surfaced once 1.85M patient calls had run through Polaris 1.0 and 2.0 and clinicians had flagged categorical failure modes. Speech recognition fails in noisy environments &#8594; train a dedicated background-noise engine. Patients answer HRAs in rambling, context-shifting ways &#8594; ship a &#8220;deep thinking&#8221; model that triple-checks documentation. Policy quotes occasionally drift from source documents &#8594; tighten the harness around source attribution.</p><p>The honest framing: these are vendor-self-published numbers, and there is no independent third party validating Hippocratic AI&#8217;s safety scores. 
What is independently verifiable is the <em>architecture</em> of the feedback loop &#8212; clinician review network, structured error categorization, real-world evidence accumulation across versions &#8212; which is now <a href="https://www.medrxiv.org/content/10.1101/2025.03.17.25324157v1">described in the underlying RWE-LLM paper on medRxiv</a> and is replicable by anyone willing to invest in a comparable review apparatus.</p><div><hr></div><h2>Case 3: Anterior &#8212; judge first, route smartly, validate the validator</h2><p>Anterior <a href="https://www.zenml.io/llmops-database/building-scalable-llm-evaluation-systems-for-healthcare-prior-authorization">runs the same loop in healthcare prior authorization</a>, but with two design choices that are worth studying separately because they generalize beyond healthcare.</p><p>First, reference-free real-time evaluation. Anterior&#8217;s primary system makes a coverage determination by reasoning across unstructured clinical documentation, payer rulesets, and clinical guidelines. A second LLM-as-judge then evaluates the determination against those same guidelines &#8212; without needing a held-out ground truth &#8212; and produces a confidence grade. Reference-free evaluation matters because at 100K+ decisions a day, no organization can maintain a labeled gold set that keeps up with policy drift.</p><p>Second, dynamic case prioritization. The confidence grade combines with contextual factors &#8212; procedure cost, bias risk, historical error rates for that procedure category &#8212; to decide which cases are sent to human clinicians for review. High-confidence cases auto-resolve; low-confidence and high-stakes cases route to a small clinical team. Anterior reports a team of fewer than ten clinical reviewers handling tens of thousands of cases, against a competitor reportedly employing 800+ nurses for comparable review volume. (Caveat: scope of work may differ. 
Take the comparison directionally.)</p><p>The third move is the one most teams miss. Anterior runs alignment metrics between the LLM-judge and the human reviewers on cases that get both, and uses that data to validate &#8212; and continuously recalibrate &#8212; the judge itself. They call this &#8220;validating the validator.&#8221; It is the missing piece in most LLM-judge deployments. Without it, the judge can drift, and you only learn about it when the harness has been mutating against bad signal for weeks.</p><p>Anterior&#8217;s <a href="https://www.anterior.com/insights/ahip-commitment-health-plans">vendor-reported numbers</a>: 99.26% accuracy on automated approvals, against 86% baseline human accuracy, with 76% reduction in human review needed and 74% less time per escalated case. Cross-reference with Anterior&#8217;s <a href="https://arxiv.org/abs/2603.14631">own arXiv paper on fairness evaluation</a>, which reports model error rates across 7,166 human-reviewed cases spanning 27 medical necessity guidelines. Independent validation remains an open need; the 96% F1 figure that has circulated comes from Anterior&#8217;s own talks, not a peer-reviewed audit.</p><p>The architectural lesson generalizes far past healthcare. Any vertical agent operating at scale where ground truth is expensive &#8212; fraud review, AML, KYC, contract triage, claims adjudication, security alert triage &#8212; can adopt the same three-part move: reference-free judge in line, dynamic routing on confidence and stakes, alignment metrics that validate the judge against the humans that exist.</p><div><hr></div><h2>Case 4: Azure SRE Agent &#8212; when the agent debugs itself</h2><p>Microsoft&#8217;s Azure Site Reliability Engineering Agent handles <a href="https://techcommunity.microsoft.com/blog/appsonazureblog/the-agent-that-investigates-itself/4500073">tens of thousands of incidents weekly</a> for internal Microsoft services and external teams. 
The team published a remarkably honest engineering retrospective in March 2026 about how they closed their improvement loop.</p><p>The starting point: incident resolution rates were climbing toward 50% on high-instrumented scenarios &#8212; but the high-performing scenarios all shared a trait. They had been built with heavy human scaffolding: custom response plans, hand-built sub-agents for known failure modes, pre-written log queries exposed as opaque tools. On any new incident class, the agent had nowhere to start. Engineers were reading 50 lower-scored threads a week against an agent handling 10,000 &#8212; debugging at human speed.</p><p>The inversion they made: stop pre-computing the answer space. Instead, give the agent a filesystem as its world (source code, runbooks, query schemas, past investigation notes &#8212; all files; no <code>SearchCodebase</code> API), context hooks that orient it on what it can access, and frugal context management that keeps long investigations sharp. Three architectural bets, in their words. The result: <a href="https://techcommunity.microsoft.com/blog/appsonazureblog/the-agent-that-investigates-itself/4500073">Intent-Met score on novel incidents &#8212; whether the agent&#8217;s investigation actually addressed the root cause as judged by the on-call engineer &#8212; rose from 45% to 75%</a>.</p><p>The closing move is the one to study. They set up a daily monitoring task: the agent searches the last 24 hours for LLM errors &#8212; timeouts, 429s, mid-stream failures, malformed payloads &#8212; clusters the top hitters, traces each to its root cause in its own codebase, and submits a PR. Engineers review before merging. Over two weeks, errors dropped by more than 80%.</p><p>The agent, in other words, became its own debugger. The harness that runs the SRE agent is now updated by the SRE agent itself, gated by human PR review. 
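</p><p>The shape of such a daily triage task is simple enough to sketch. This is a minimal, hypothetical version &#8212; the error kinds and the in-memory log stand in for what would really be a trace-store query:</p>

```python
import datetime as dt
from collections import Counter

def daily_triage(errors, now, window_hours=24, top_n=3):
    """Cluster the last day's errors; the top hitters become PR candidates."""
    cutoff = now - dt.timedelta(hours=window_hours)
    recent = [e["kind"] for e in errors if e["ts"] >= cutoff]
    return Counter(recent).most_common(top_n)

# Illustrative error log; in production this comes from the trace store.
now = dt.datetime(2026, 3, 1, 12, 0)
errors = [
    {"ts": now - dt.timedelta(hours=2),  "kind": "llm_429"},
    {"ts": now - dt.timedelta(hours=5),  "kind": "timeout"},
    {"ts": now - dt.timedelta(hours=6),  "kind": "llm_429"},
    {"ts": now - dt.timedelta(hours=30), "kind": "llm_429"},  # outside window
]

top = daily_triage(errors, now)
```

<p>Each top cluster then gets a root-cause pass and a human-reviewed PR; nothing merges automatically. 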
The team&#8217;s framing is the title of their post: &#8220;The agent that investigates itself.&#8221; It is not a metaphor.</p><div><hr></div><h2>What actually changes (the levers)</h2><p>The most under-appreciated property of these loops is <em>what</em> they mutate. Across every case study above, the changes that produced the gains were:</p><p>The <strong>system prompt</strong> and <strong>task instructions</strong>. ILWS, the &#8220;Instruction-Level Weight Shaping&#8221; framework, formalizes this: <a href="https://arxiv.org/pdf/2509.00251">a session-level reflection engine proposes a structured edit to the system prompt &#8212; a knowledge delta &#8212; that is gated, accepted only if a sliding-window quality rating improves with statistical significance, and rolled back otherwise</a>. Most production teams do this informally. Formalizing it gives you reversibility under governance, which regulators ask for.</p><p><strong>Tool definitions and skills</strong>. <a href="https://blog.langchain.com/improving-deep-agents-with-harness-engineering/">LangChain&#8217;s improvement was largely middleware</a>: a <code>LocalContextMiddleware</code> that maps the working directory and onboards the agent into its environment, a <code>LoopDetectionMiddleware</code> that intercepts repeated edits to the same file and forces a plan reconsideration, a <code>PreCompletionChecklistMiddleware</code> that blocks the agent from exiting before it runs a verification pass. None of these are model changes. All are tool-and-hook surface.</p><p><strong>Memory and knowledge files</strong>. Microsoft replaced their RAG-over-past-sessions memory with <a href="https://techcommunity.microsoft.com/blog/appsonazureblog/the-agent-that-investigates-itself/4500073">structured Markdown files the agent reads and writes through its standard tool interface &#8212; overview.md, team.md, logs.md, debugging.md</a>. 
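</p><p>The write path for that kind of memory is deliberately boring: an append to a Markdown file through the agent&#8217;s ordinary file tools. A minimal sketch, with a hypothetical entry and a temp directory standing in for the agent&#8217;s workspace:</p>

```python
from datetime import date
from pathlib import Path
import tempfile

def append_memory(memory_dir: Path, filename: str, note: str) -> None:
    """Append a dated entry to a memory file; no embedding index involved.

    Dating each entry makes staleness auditable later.
    """
    entry = f"- {date.today().isoformat()}: {note}\n"
    with open(memory_dir / filename, "a", encoding="utf-8") as f:
        f.write(entry)

memory_dir = Path(tempfile.mkdtemp())
append_memory(memory_dir, "debugging.md",
              "429s cluster on the streaming path; retry with backoff fixed it")

text = (memory_dir / "debugging.md").read_text(encoding="utf-8")
```

<p>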
The model navigates memory by following links, not by retrieving via embedding similarity. This is the &#8220;the repo is the schema&#8221; insight. Memory becomes a write-able artifact that future runs read.</p><p><strong>Sub-agents and routing</strong>. Anterior routes by confidence &#215; stakes. Azure SRE spawns parallel sub-agents per hypothesis when a single context is at risk of getting polluted. Hippocratic uses a 21-model supervisory constellation around a primary conversational model. None of these compositions require retraining the underlying weights; they require designing the orchestration layer.</p><p><strong>Judge rubrics</strong>. The Harvey ceiling is the rubric ceiling. The Anterior calibration is the judge alignment with humans. The fastest leverage in most teams&#8217; first improvement loop is not a fancier judge &#8212; it is a better-written rubric and a small humans-vs-judge alignment dataset.</p><p><strong>Fine-tuning the small models in the harness</strong>. Sometimes weights do change, but on the <em>components</em>, not the primary model. NVIDIA NeMo&#8217;s case study on an enterprise data flywheel: a routing model fine-tuned from Llama 3.1 70B down to a Llama 3.1 8B variant achieved <a href="https://arxiv.org/pdf/2510.27051">96% accuracy with a 10&#215; model size reduction and 70% latency improvement</a>. The query rephrasal model gained 3.7% accuracy with a 40% latency cut. The orchestrating LLM was untouched.</p><p>The pattern is consistent: when you map &#8220;improvements shipped&#8221; against &#8220;components that changed&#8221; across these case studies, the primary reasoning model is the <em>least</em> common thing that gets edited. The harness layer carries the weight.</p><div><hr></div><h2>Where these loops break</h2><p>Six failure modes show up repeatedly. 
None are theoretical; each one has burned at least one of the case studies above.</p><p><strong>Overfitting to recent failures.</strong> Aggregate harness changes against last week&#8217;s top errors and you regress on tasks the change wasn&#8217;t targeting. LangChain&#8217;s iteration log explicitly marks these as discarded runs. Without a frozen eval set that the validation gate runs <em>every</em> mutation against, you&#8217;ll fix Monday&#8217;s bug and silently break Tuesday&#8217;s working flow.</p><p><strong>Reward hacking against the rubric.</strong> When the agent edits its own harness against an LLM judge&#8217;s scoring, the judge&#8217;s scoring is the optimization target &#8212; including any blind spots in the rubric. Harvey caveats this directly: the improvements track the rubric, and the rubric is human-authored and incomplete. Periodic out-of-distribution evals from a <em>separate</em> judge with a <em>separate</em> rubric catch this.</p><p><strong>Judge drift and validator fragility.</strong> Anterior&#8217;s validate-the-validator move exists because LLM-judges drift, and the drift is silent. If the judge is the substrate for routing, clustering, and mutation decisions, judge drift propagates everywhere. Alignment metrics against humans on a rolling sample of cases is the only known fix.</p><p><strong>Memory staleness.</strong> Microsoft flagged this as their unsolved problem: <a href="https://techcommunity.microsoft.com/blog/appsonazureblog/the-agent-that-investigates-itself/4500073">when two sessions write conflicting patterns to debugging.md, the model has to reconcile them; when a service changes behavior, old memory entries become misleading</a>. Timestamps and explicit deprecation help, but no production team has solved this systematically.</p><p><strong>Privacy and regulatory constraints on production data.</strong> Healthcare and finance can&#8217;t freely route production traces into a learning loop the way a generic SaaS product can. 
The TikTok Pay ARIA paper handles this by having the agent <a href="https://arxiv.org/abs/2507.17131">self-identify uncertainty through structured self-dialogue and request targeted explanations from human experts at runtime</a>, keeping learning at test time inside the regulatory boundary. Hippocratic uses synthetic test calls plus consented real-call evidence; Anterior keeps clinician review and AI determination in the same compliance perimeter.</p><p><strong>Compounding errors when the validator itself fails.</strong> A bad judge calibrated against a small alignment set drifts. A bad alignment set lets the judge calibrate against itself. A bad clustering layer groups the wrong failures together. Each layer of the loop is a place errors can go undetected and propagate. The defense is treating every layer as an evaluable artifact &#8212; the judge has a precision/recall, the cluster labels have inter-rater agreement, the harness mutations have a regression budget.</p><p>The seventh failure mode, which is institutional rather than technical: nobody owns the loop. In every case study above, the loop is owned by a named team with a named lead &#8212; Grupen at Harvey, Mukherjee at Hippocratic, Mehta and team at Microsoft. Loops without owners decay quietly.</p><div><hr></div><h2>Build order</h2><p>If you&#8217;re standing up a vertical agent and don&#8217;t yet have this loop, the build order is fixed and the order matters. None of the steps require the next-generation model.</p><p>Start with <strong>traces</strong>. Every tool call, every model input, every model output, every latency, every outcome, with a stable trace ID per session. If you can&#8217;t reconstruct what happened, none of the rest of the loop works. LangSmith, Arize Phoenix, Braintrust, and OpenTelemetry-based stacks all do this; pick one and instrument every call path before anything else.</p><p>Then write <strong>one rubric</strong> for one task. Not a benchmark suite. 
One task that matters, one rubric that an expert in your domain would sign off on. Score 50 production traces against it manually. The rubric you ship will be wrong in instructive ways; the act of writing and applying it surfaces the failure modes you didn&#8217;t know you had.</p><p>Add a <strong>judge</strong> against that rubric. Run it inline on a sample of production. Run it against the 50 you scored manually. Compute alignment. If alignment is below ~70%, the rubric is the problem, not the judge.</p><p>Add the <strong>clustering and mutation step</strong> last. Cluster the lowest-scored traces, propose one harness change, gate it against your offline eval, ship if it passes, measure the production effect. This is one cycle. Run it weekly.</p><p>The model upgrade question takes care of itself once the loop is running. When a better base model ships, you swap it in, rerun the validation gate, and observe whether your harness over-fits to the old model. (Different models reward different harnesses &#8212; <a href="https://blog.langchain.com/improving-deep-agents-with-harness-engineering/">Claude Opus 4.6 scored 59.6% with a harness tuned for GPT-5.2-Codex on Terminal Bench 2.0; the same Claude with its own harness moved several positions</a>.) The harness tax of switching models is real, but it&#8217;s a calibration problem, not a foundational one.</p><p>The reason this matters now and not in twelve months is asymmetry. Vertical agent winners in 2026 will not be the teams with the best zero-shot model. They will be the teams whose deployed agents are quietly compounding skill every week the rest of the market sits frozen. The loop is the moat.</p><p>Build the trace store this week. Write the first rubric next week. 
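</p><p>One cycle of that loop is small enough to sketch. The snippet below is illustrative only; the dataclass, thresholds, and toy scores are assumptions, not anything the teams above have published. It shows the two checks the build order depends on: judge alignment against manually scored traces, and a mutation gate with a regression budget.</p>

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Trace:
    trace_id: str
    human_score: Optional[float]  # manual rubric score, when available
    judge_score: float            # LLM-judge score against the same rubric

def judge_alignment(traces: List[Trace], tolerance: float = 0.1) -> float:
    """Fraction of manually scored traces where the judge agrees with the
    human within `tolerance`. Below roughly 0.7, fix the rubric first."""
    scored = [t for t in traces if t.human_score is not None]
    if not scored:
        return 0.0
    agree = sum(abs(t.judge_score - t.human_score) <= tolerance for t in scored)
    return agree / len(scored)

def gate_mutation(baseline: List[float], candidate: List[float],
                  regression_budget: float = 0.02) -> bool:
    """Ship a harness change only if the mean score on the frozen eval set
    does not drop by more than the regression budget."""
    base = sum(baseline) / len(baseline)
    cand = sum(candidate) / len(candidate)
    return cand >= base - regression_budget

traces = [
    Trace("t1", 0.9, 0.85),
    Trace("t2", 0.4, 0.45),
    Trace("t3", None, 0.70),  # production trace, judge-only
    Trace("t4", 0.8, 0.50),   # judge disagrees with the human here
]
alignment = judge_alignment(traces)                  # 2 of 3 scored traces agree
ship_it = gate_mutation([0.70, 0.80], [0.72, 0.79])  # passes the gate
```

<p>If the alignment number comes back low, the rubric is the thing to fix; if the gate fails, the harness change is discarded, the same disposition LangChain&#8217;s iteration log records.</p><p>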
The rest of it follows.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Felix Is a Harness, Not a Model: How Rogo Built an Agent for High Finance]]></title><description><![CDATA[Rogo just raised $160M Series D led by Kleiner Perkins. The architecture behind their Felix agent is what AI engineers should be studying.]]></description><link>https://theairuntime.com/p/felix-is-a-harness-not-a-model-how</link><guid isPermaLink="false">https://theairuntime.com/p/felix-is-a-harness-not-a-model-how</guid><dc:creator><![CDATA[The AI Runtime]]></dc:creator><pubDate>Fri, 01 May 2026 11:03:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!dqwt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d8c23a-23c7-48a9-bc63-ad7fdf07018e_1024x559.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p><strong>TL;DR</strong> - Rogo serves <a href="https://www.prnewswire.com/news-releases/rogo-raises-160m-series-d-to-scale-the-agentic-platform-for-finance-302756546.html">more than 35,000 professionals at over 250 institutions</a> &#8212; Rothschild &amp; Co, Jefferies, Lazard, Moelis, Nomura &#8212; with an AI agent called Felix that bankers email like a junior analyst and 
get back finished decks, models, and memos. The interesting part is not the model. Rogo&#8217;s <a href="https://rogo.ai/news/gpt-5.5-now-available-in-rogo">own product team calls Felix their &#8220;agent harness&#8221;</a> &#8212; a vertical scaffolding designed to be model-agnostic across GPT 5.5, Claude Opus 4.7, and Gemini. Felix is the playbook for vertical AI: the moat is the harness, the evals, the data integrations, and the deployment model &#8212; not which frontier LLM is wired in this quarter. If you are building a vertical agent, study how Rogo decomposed the problem before you pick a model.</p></div><h2>What Rogo Actually Sells</h2><p>A precision note first: when people say &#8220;banking&#8221; in this conversation, they don&#8217;t mean retail or commercial banking. Rogo sits inside high finance &#8212; investment banking, private equity, hedge funds, equity research, asset management. <a href="https://rogo.ai/felix">Rogo&#8217;s own product page</a> explicitly calls out its three audiences: Banking, Private Markets, Public Markets. The workflows are deal-shaped: pitchbooks, comps, models, memos, CIMs, diligence trackers.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! 
Subscribe for free</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Rogo was founded by <a href="https://rogo.ai/company">Gabriel Stengel and John Willett</a> &#8212; both ex-investment-bankers (Lazard, J.P. Morgan, Barclays) &#8212; with Tumas Rackaitis. That founder profile matters because the company&#8217;s edge is not the LLM; it is the granular, painful familiarity with what a 2 AM CIM revision actually looks like.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dqwt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d8c23a-23c7-48a9-bc63-ad7fdf07018e_1024x559.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dqwt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d8c23a-23c7-48a9-bc63-ad7fdf07018e_1024x559.png 424w, https://substackcdn.com/image/fetch/$s_!dqwt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d8c23a-23c7-48a9-bc63-ad7fdf07018e_1024x559.png 848w, https://substackcdn.com/image/fetch/$s_!dqwt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d8c23a-23c7-48a9-bc63-ad7fdf07018e_1024x559.png 1272w, 
https://substackcdn.com/image/fetch/$s_!dqwt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d8c23a-23c7-48a9-bc63-ad7fdf07018e_1024x559.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dqwt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d8c23a-23c7-48a9-bc63-ad7fdf07018e_1024x559.png" width="1024" height="559" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/43d8c23a-23c7-48a9-bc63-ad7fdf07018e_1024x559.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:559,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1085822,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/196065605?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d8c23a-23c7-48a9-bc63-ad7fdf07018e_1024x559.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!dqwt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d8c23a-23c7-48a9-bc63-ad7fdf07018e_1024x559.png 424w, https://substackcdn.com/image/fetch/$s_!dqwt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d8c23a-23c7-48a9-bc63-ad7fdf07018e_1024x559.png 848w, 
https://substackcdn.com/image/fetch/$s_!dqwt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d8c23a-23c7-48a9-bc63-ad7fdf07018e_1024x559.png 1272w, https://substackcdn.com/image/fetch/$s_!dqwt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d8c23a-23c7-48a9-bc63-ad7fdf07018e_1024x559.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>                                                                Felix Architecture</em></p><p>Yesterday&#8217;s <a 
href="https://www.kleinerperkins.com/perspectives/rogo-the-ai-platform-for-global-finance/">$160M Series D, led by Kleiner Perkins</a> with participation from Sequoia, Thrive, Khosla, and J.P. Morgan Growth Equity Partners, brings total funding <a href="https://www.prnewswire.com/news-releases/rogo-raises-160m-series-d-to-scale-the-agentic-platform-for-finance-302756546.html">past $300M</a>. The capital is going toward two things that tell you what they actually believe: deeper data integrations and more forward-deployed bankers embedded inside client institutions.</p><h2>Felix Is a Harness, Not a Model</h2><p>The single most useful sentence Rogo has published this year shows up in their <a href="https://rogo.ai/news/gpt-5.5-now-available-in-rogo">GPT 5.5 release note</a>: &#8220;we&#8217;ve begun incorporating GPT 5.5 into our agent harness, Felix.&#8221; Read that twice.</p><p>Felix is not a fine-tuned model. Felix is the <em>harness</em> &#8212; the orchestration scaffold, tool layer, citation system, output formatters, audit trail, and policy controls &#8212; into which Rogo plugs whichever frontier model performs best on their internal benchmark this week. They are explicit that they are model-agnostic across <a href="https://rogo.ai/news/gpt-5.5-now-available-in-rogo">OpenAI, Google, and Anthropic</a>, and TAMradar&#8217;s coverage notes the platform <a href="https://www.tamradar.com/funding-rounds/rogo-series-d-160m">supports GPT 5.5 and Anthropic Opus 4.7</a> concurrently.</p><p>This separation is load-bearing. In the <a href="https://aiengineerweekly.substack.com/p/model-reliability-engineering-who">Model Reliability Engineering</a> frame, the harness is one of the two reliability axes &#8212; the scaffolding you build <em>around</em> the model to make its behavior production-safe. Rogo&#8217;s product team uses the word the same way. 
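</p><p>The separation fits in a few lines. A minimal sketch, assuming stub engines and a toy validation gate; none of this is Rogo&#8217;s actual code, only the shape of the split: the harness owns tools, validation, and the audit trail, while the engine behind it is a config key.</p>

```python
from typing import Callable, Dict, List

ModelFn = Callable[[str], str]  # prompt in, draft out

# Registry of interchangeable engines. A real harness would wrap vendor
# SDK calls here; these stubs only exist to make the sketch runnable.
MODELS: Dict[str, ModelFn] = {
    "gpt": lambda prompt: f"[gpt draft] {prompt}",
    "claude": lambda prompt: f"[claude draft] {prompt}",
}

class Harness:
    """The parts that survive a model swap: tools, validation, audit."""

    def __init__(self, model_key: str):
        self.model = MODELS[model_key]  # swapping engines is a config change
        self.audit_log: List[dict] = []

    def validate(self, draft: str) -> bool:
        # Stand-in for the real gates: citation checks, house-style
        # formatting, policy controls.
        return bool(draft.strip())

    def run(self, task: str) -> str:
        draft = self.model(task)
        if not self.validate(draft):
            raise ValueError("draft failed the validation gate")
        self.audit_log.append({"task": task, "draft": draft})
        return draft

out = Harness("claude").run("comps table for ACME Corp")
```

<p>Changing the key from "claude" to "gpt" touches nothing else in the call path, which is the property the release note is describing.</p><p>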
The implication for builders: when frontier labs ship a 4% improvement on your domain, you swap the engine; when they ship a 40% improvement two years from now, your harness is what survives.</p><p>Here is the rough shape of what&#8217;s inside Felix:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!eHei!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b0c0af4-e7be-4350-bfec-34d12bcc909b_840x584.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!eHei!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b0c0af4-e7be-4350-bfec-34d12bcc909b_840x584.png 424w, https://substackcdn.com/image/fetch/$s_!eHei!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b0c0af4-e7be-4350-bfec-34d12bcc909b_840x584.png 848w, https://substackcdn.com/image/fetch/$s_!eHei!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b0c0af4-e7be-4350-bfec-34d12bcc909b_840x584.png 1272w, https://substackcdn.com/image/fetch/$s_!eHei!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b0c0af4-e7be-4350-bfec-34d12bcc909b_840x584.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!eHei!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b0c0af4-e7be-4350-bfec-34d12bcc909b_840x584.png" width="840" height="584" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3b0c0af4-e7be-4350-bfec-34d12bcc909b_840x584.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:584,&quot;width&quot;:840,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:44263,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/196065605?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b0c0af4-e7be-4350-bfec-34d12bcc909b_840x584.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!eHei!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b0c0af4-e7be-4350-bfec-34d12bcc909b_840x584.png 424w, https://substackcdn.com/image/fetch/$s_!eHei!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b0c0af4-e7be-4350-bfec-34d12bcc909b_840x584.png 848w, https://substackcdn.com/image/fetch/$s_!eHei!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b0c0af4-e7be-4350-bfec-34d12bcc909b_840x584.png 1272w, https://substackcdn.com/image/fetch/$s_!eHei!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b0c0af4-e7be-4350-bfec-34d12bcc909b_840x584.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Detail belongs in the prose, not the diagram. Three components below carry the real weight.</p><h2>The Email Interface Is the Real Interface</h2><p>The product surface that ships with Felix is unusual: bankers send Felix an email the same way they would a colleague, get an acknowledgment in under a minute with an ETA, and receive PowerPoint, Excel, Word, and PDF deliverables back when ready. Iteration happens by replying to the email thread.</p><p>This is not a UX gimmick. It tells you something about how the team thinks about adoption. Investment bankers already live in Outlook. Asking them to adopt a new interface is a tax. Email-as-API removes the tax. It also imposes async semantics on the agent: a long-running task with intermediate status, observable state via the inbox, and a clean handoff back to the human reviewer. 
The harness has to absorb that asynchrony &#8212; request queuing, intermediate progress, partial results, source attribution surviving the round-trip &#8212; without leaking it back to the user.</p><p>The output substrate matters too. Felix returns work in Excel, PowerPoint, and Word formatted in the firm&#8217;s own templates and house style. A pitchbook that doesn&#8217;t match house formatting is not 90% done; it is 0% done. Vertical AI rises or falls on output substrate fidelity.</p><h2>The Big Finance Benchmark: Vertical Evals Are the Moat</h2><p>Rogo curates an internal evaluation set called the Big Finance Benchmark &#8212; real financial tasks designed by their ex-finance team. Tasks include valuing companies, benchmarking peers on specific metrics, and building theses across disparate documents. They are explicit that these come from real workflows, not synthetic prompts.</p><p>This is the unsexy infrastructure that compounds. When OpenAI ships GPT 5.6 next quarter, Rogo will know within a day whether it improves CIM drafting on real deals or just MMLU. That is the kind of judgment a horizontal benchmark cannot give you. Every serious vertical AI company will need its own version of this. If you are building one and you don&#8217;t have a domain-specific eval suite, you are flying without instruments.</p><h2>Workflow Surface: What Felix Actually Does</h2><p>The concrete capabilities Rogo has shipped span deal screening, CIM generation, buyer outreach, and data room diligence. 
Decomposed:</p><ul><li><p><strong>Deal screening.</strong> Filtering thousands of potential targets against thesis criteria.</p></li><li><p><strong>CIM generation.</strong> Drafting Confidential Information Memoranda &#8212; the 50-to-100-page sell-side documents that anchor M&amp;A processes.</p></li><li><p><strong>Buyer outreach.</strong> Generating personalized contact lists and initial communications.</p></li><li><p><strong>Data room diligence.</strong> Synthesizing across the document piles that buyers and bankers wade through.</p></li><li><p><strong>Comps and models.</strong> Building Excel spreadsheets with historical financials and forward forecasts.</p></li><li><p><strong>Pitchbooks and memos.</strong> Decks for a CEO meeting, memos for an investment committee.</p></li></ul><p><a href="https://siliconangle.com/2026/04/29/rogo-raises-160m-speed-financial-analysis-ai-agents/">SiliconANGLE&#8217;s coverage</a> notes that Felix can also offer to keep a report current &#8212; for example, an analyst covering Apple can have the agent re-run the report each time the company reports earnings. Scheduled, recurring agent runs are part of the surface.</p><p>The data substrate behind these tasks is extensive. <a href="https://www.tamradar.com/funding-rounds/rogo-series-d-160m">TAMradar lists integrations</a> with PitchBook, LSEG, Cap IQ, FactSet, Fitch Solutions, and Third Bridge, plus internal CRM and SharePoint connectors. Auditable outputs are positioned for SOC 2, ISO 27001, GDPR, and EU AI Act compliance &#8212; the table-stakes regulatory surface for institutional finance.</p><h2>Sisyphus: The Other Harness</h2><p>The most under-covered part of Rogo&#8217;s stack is a second internal agent called <a href="https://rogo.ai/news/introducing-sisyphus-autonomous-security-for-financial-ai-infrastructure">Sisyphus</a> &#8212; an autonomous offensive-security agent that pen-tests Rogo&#8217;s own infrastructure once or twice a day, calibrated to deployment cadence. 
It runs structured campaigns across authentication abuse, authorization bypass, injection, SSRF, and LLM-specific exploit categories, and it chains findings to validate exploitability rather than just flagging signals.</p><p>Two numbers from Rogo&#8217;s own writeup are worth remembering. One week after a third-party penetration test, Sisyphus identified 18 additional exploitable vulnerabilities in a single afternoon, most chained, all remediated within hours. And on calibration: high-confidence findings now carry a &gt;95% true-positive rate after the team tuned the recon phase and compared the agent&#8217;s triage against their human security team.</p><p>This is the harness for the harness. If your vertical agent platform handles consequential workflows, &#8220;we get pen-tested twice a year&#8221; is not a posture; it is a vulnerability window. Sisyphus is what the security side of vertical AI starts to look like.</p><h2>Forward-Deployed Bankers: The Human Harness</h2><p>Rogo&#8217;s go-to-market is structured around an embedded role they call Forward Deployed Bankers &#8212; ex-bankers from top firms who sit inside client institutions and onboard teams from analyst to managing director. The new capital is funding expansion of this team from New York into London.</p><p>This is not professional services in disguise. It is closer to what Palantir built for defense and intelligence: domain-fluent humans who translate between the workflow and the platform, calibrate the agent&#8217;s outputs to firm-specific style, and surface workflow gaps that become product. They understand financial-model formatting and how a positioning section actually reads. Without them, the harness loses ground truth on what &#8220;good&#8221; looks like inside each firm&#8217;s house style.</p><p>For builders: the lesson is that adoption inside regulated, high-status industries is bottlenecked on trust transfer, not feature parity. 
The forward-deployed model is expensive and it is a moat.</p><h2>What&#8217;s Actually Being Transformed</h2><p>Bankers do not get replaced; their pyramid does. Rogo&#8217;s Series D announcement is explicit that leading firms are &#8220;restructuring workflows, rethinking staffing pyramids, and deploying autonomous agents that work asynchronously across every transaction.&#8221; A managing director at one client described Felix as having tripled team output with no headcount additions. That is the shape of the transformation: same senior judgment layer, compressed junior layer, agent layer doing the asynchronous grunt work, forward-deployed bankers tuning the seams.</p><p>Rogo&#8217;s two recent acquisitions tell you where they are aiming next. <a href="https://techfundingnews.com/rogo-160m-series-d-kleiner-perkins-investment-banking-ai/">Plux AI</a> &#8212; a UK firm tracking complex financial market developments &#8212; adds European market coverage. <a href="https://siliconangle.com/2026/04/29/rogo-raises-160m-speed-financial-analysis-ai-agents/">Offset</a>, an AI agent company whose tech automatically updates financial models when new information arrives, plugs directly into the live-model side of the harness.</p><h2>Five Lessons If You Are Building a Vertical Agent</h2><ol><li><p><strong>The harness is the moat, not the model.</strong> Build it so frontier-model upgrades are a config change, not a rewrite.</p></li><li><p><strong>Domain-specific evals beat horizontal benchmarks.</strong> Curate real tasks from real practitioners. 
Run them every model release.</p></li><li><p><strong>Output substrate must match the destination workflow.</strong> A correct answer in the wrong format is the wrong answer.</p></li><li><p><strong>Forward deployment changes adoption math.</strong> Domain-fluent humans embedded in the customer org are a feature, not overhead.</p></li><li><p><strong>Security needs its own harness.</strong> When agents do consequential work, periodic pen tests leave a window. Continuous adversarial testing is the new floor.</p></li></ol><h2>What to Do This Week</h2><p>Pick one workflow you&#8217;ve watched a domain expert do that you suspect an agent could absorb. Don&#8217;t model it yet. Instead, write down four things: the data sources they pull from, the output format they hand back, the audit trail they leave, and the colleague they email when they get stuck. Those four are your harness specification. The model goes in the middle of that, and you can swap it out next quarter.</p><p>If your current agent prototype only handles one or two of those four, you have not built a harness yet. You have built a wrapper.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! 
Subscribe for free</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Privacy Filter Is Not an LLM]]></title><description><![CDATA[OpenAI&#8217;s open-weight PII model is a bidirectional token classifier &#8212; what that architecture buys, where the headline benchmark misleads, and why Anthropic ships nothing comparable.]]></description><link>https://theairuntime.com/p/privacy-filter-is-not-an-llm</link><guid isPermaLink="false">https://theairuntime.com/p/privacy-filter-is-not-an-llm</guid><dc:creator><![CDATA[The AI Runtime]]></dc:creator><pubDate>Wed, 29 Apr 2026 11:44:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!iaZS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F122f87cc-4f71-4b14-8e41-15c7c1140f80_1024x559.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p><strong>TL;DR</strong> - OpenAI <a href="https://openai.com/index/introducing-openai-privacy-filter/">released Privacy Filter</a> on April 22, 2026 &#8212; an <a href="https://github.com/openai/privacy-filter">Apache 2.0</a>, <a href="https://huggingface.co/openai/privacy-filter">1.5B-parameter (50M active)</a> model for detecting and masking eight categories of personally identifiable information. The headline is the <a href="https://openai.com/index/introducing-openai-privacy-filter/">96% F1 score on PII-Masking-300k</a>. 
The actual story is the architecture: Privacy Filter takes a <a href="https://huggingface.co/openai/privacy-filter">gpt-oss autoregressive checkpoint, swaps its language-modeling head for a token-classification head, and post-trains it as a bidirectional banded-attention classifier with BIOES span decoding</a>. It labels every token in a single forward pass instead of generating output one token at a time. That single design decision is why it runs in a browser, supports <a href="https://huggingface.co/openai/privacy-filter">128K context without chunking</a>, and is <a href="https://huggingface.co/openai/privacy-filter">designed for high-throughput data sanitization workflows</a>. But the 96% F1 is on synthetic data &#8212; a <a href="https://www.tonic.ai/blog/benchmarking-openai-privacy-filter-pii-detection">third-party benchmark by Tonic.ai</a> (a competing redaction vendor) on real EHR notes and web crawls puts F1 between 0.18 and 0.65 at default settings, almost entirely a recall problem. <strong>Treat Privacy Filter as a fine-tuning starting point and a precision-tuned default, not a drop-in production redactor &#8212; and notice that Anthropic, despite having every reason to ship something equivalent, has not.</strong></p></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! 
Subscribe for free</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2>The architecture: a generative model with its head replaced</h2><p>Most coverage describes Privacy Filter as &#8220;a small open-weight model for PII detection.&#8221; That misses the interesting part. Privacy Filter is not a small LLM that happens to do classification. It is structurally a different model class.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!iaZS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F122f87cc-4f71-4b14-8e41-15c7c1140f80_1024x559.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!iaZS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F122f87cc-4f71-4b14-8e41-15c7c1140f80_1024x559.png 424w, https://substackcdn.com/image/fetch/$s_!iaZS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F122f87cc-4f71-4b14-8e41-15c7c1140f80_1024x559.png 848w, https://substackcdn.com/image/fetch/$s_!iaZS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F122f87cc-4f71-4b14-8e41-15c7c1140f80_1024x559.png 1272w, 
https://substackcdn.com/image/fetch/$s_!iaZS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F122f87cc-4f71-4b14-8e41-15c7c1140f80_1024x559.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!iaZS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F122f87cc-4f71-4b14-8e41-15c7c1140f80_1024x559.png" width="1024" height="559" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/122f87cc-4f71-4b14-8e41-15c7c1140f80_1024x559.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:559,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:854061,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/195825056?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F122f87cc-4f71-4b14-8e41-15c7c1140f80_1024x559.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!iaZS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F122f87cc-4f71-4b14-8e41-15c7c1140f80_1024x559.png 424w, https://substackcdn.com/image/fetch/$s_!iaZS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F122f87cc-4f71-4b14-8e41-15c7c1140f80_1024x559.png 848w, 
https://substackcdn.com/image/fetch/$s_!iaZS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F122f87cc-4f71-4b14-8e41-15c7c1140f80_1024x559.png 1272w, https://substackcdn.com/image/fetch/$s_!iaZS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F122f87cc-4f71-4b14-8e41-15c7c1140f80_1024x559.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>                                                                    Privacy Filter</p><p>The base checkpoint is a gpt-oss-style 
decoder pretrained autoregressively. OpenAI then performs three modifications to convert it into a classifier:</p><ol><li><p><strong>Replace the head.</strong> The language-modeling head is removed and a token-classification head is bolted on, <a href="https://huggingface.co/openai/privacy-filter">emitting 33 logits per token</a> (1 background class plus 8 PII categories &#215; 4 BIOES boundary tags).</p></li><li><p><strong>Switch attention from causal to bidirectional banded.</strong> Each token now attends to a window of <a href="https://huggingface.co/openai/privacy-filter">128 tokens on each side (effective receptive field: 257 tokens including itself)</a>, in both directions. The causal mask &#8212; the thing that makes a model &#8220;generative&#8221; &#8212; is gone.</p></li><li><p><strong>Post-train with supervised classification loss.</strong> No next-token prediction. The objective is BIOES tag accuracy on a privacy-labeled dataset (the public PII-Masking-300k corpus plus synthetic data, <a href="https://openai.com/index/introducing-openai-privacy-filter/">augmented with model-assisted annotation review</a>).</p></li></ol><p>The retained pieces are also informative: <a href="https://huggingface.co/openai/privacy-filter">grouped-query attention (14 query heads, 2 KV heads), rotary positional embeddings, and a sparse mixture-of-experts feed-forward block</a>. The MoE is what gives the <a href="https://openai.com/index/introducing-openai-privacy-filter/">50M-active-out-of-1.5B-total figure</a>. 
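The model card's arithmetic is easy to verify. A quick sanity check (the eight category names below are placeholders for illustration, not the model's published taxonomy):

```python
# Sanity-check the model-card numbers for Privacy Filter.
# Category names are invented stand-ins, not the official label set.
categories = ["NAME", "EMAIL", "PHONE", "ADDRESS", "DOB", "CREDENTIAL", "ID_NUMBER", "URL"]
bioes = ["B", "I", "E", "S"]                 # Begin / Inside / End / Single
labels = ["O"] + [f"{t}-{c}" for c in categories for t in bioes]
assert len(labels) == 33                     # 1 background + 8 categories x 4 boundary tags

window = 128                                 # tokens attended on each side
receptive_field = 2 * window + 1
assert receptive_field == 257                # includes the token itself

active, total = 50e6, 1.5e9                  # MoE: active vs. total parameters
print(f"{active / total:.1%} of weights fire per forward pass")  # 3.3%
```

The 33-logit head and the 257-token band are what the single-pass claim rests on: one matrix of logits out, no decoding loop.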
Only a small fraction of weights actually fire on any single forward pass, which is what makes CPU inference viable.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Pfx9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b9551d8-1cb0-4a9b-9491-67a59bae5975_707x739.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Pfx9!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b9551d8-1cb0-4a9b-9491-67a59bae5975_707x739.png 424w, https://substackcdn.com/image/fetch/$s_!Pfx9!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b9551d8-1cb0-4a9b-9491-67a59bae5975_707x739.png 848w, https://substackcdn.com/image/fetch/$s_!Pfx9!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b9551d8-1cb0-4a9b-9491-67a59bae5975_707x739.png 1272w, https://substackcdn.com/image/fetch/$s_!Pfx9!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b9551d8-1cb0-4a9b-9491-67a59bae5975_707x739.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Pfx9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b9551d8-1cb0-4a9b-9491-67a59bae5975_707x739.png" width="707" height="739" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0b9551d8-1cb0-4a9b-9491-67a59bae5975_707x739.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:739,&quot;width&quot;:707,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:44602,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/195825056?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b9551d8-1cb0-4a9b-9491-67a59bae5975_707x739.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Pfx9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b9551d8-1cb0-4a9b-9491-67a59bae5975_707x739.png 424w, https://substackcdn.com/image/fetch/$s_!Pfx9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b9551d8-1cb0-4a9b-9491-67a59bae5975_707x739.png 848w, https://substackcdn.com/image/fetch/$s_!Pfx9!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b9551d8-1cb0-4a9b-9491-67a59bae5975_707x739.png 1272w, https://substackcdn.com/image/fetch/$s_!Pfx9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b9551d8-1cb0-4a9b-9491-67a59bae5975_707x739.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>                                                                 The Architecture</em></p><p>The decoder is the other piece worth surfacing. Per-token classifications produce incoherent spans on their own &#8212; &#8220;John&#8221; tagged as begin-name, the next token tagged as begin-address, and so on. To prevent that, Privacy Filter <a href="https://github.com/openai/privacy-filter">applies constrained Viterbi decoding over the BIOES transition graph</a>. Begin must be followed by Inside or End. Inside must be followed by Inside or End. End cannot transition to Inside. Single is its own one-token span.
The decoder enforces these transitions globally over the sequence, so the output is always a clean set of contiguous spans.</p><p>This architecture is not novel by NLP standards &#8212; BIOES tagging and Viterbi decoding date back to pre-transformer NER systems. What is novel is using a frontier-quality pretrained generative model as the substrate, then surgically retargeting its head and attention pattern for a different objective. The world model the autoregressive pretraining gave the network &#8212; the contextual sense of when &#8220;Alice&#8221; is a literary character versus a person in a customer email &#8212; is preserved. That world model is what classical Presidio-style regex-plus-NER doesn&#8217;t have, and it is the entire reason Privacy Filter outperforms rule-based systems on ambiguous spans.</p><h2>Why the architecture matters in production</h2><p>Three properties fall out of this design that an LLM-based redactor wouldn&#8217;t have.</p><p><strong>Single-pass labeling.</strong> A 128K-token document is processed once. There is no autoregressive decoding loop over the output, no chain-of-thought reasoning, no JSON parsing of the result. OpenAI describes the model as <a href="https://huggingface.co/openai/privacy-filter">designed for high-throughput data sanitization workflows</a> but does not publish specific tokens-per-second numbers; the architecture&#8217;s single-forward-pass design is what enables a sanitization-on-every-prompt deployment pattern even at modest hardware budgets.</p><p><strong>No prompt engineering surface.</strong> A generative model used for classification has prompts, which means it has prompt injection risk. A token classifier has neither. There is no instruction the input can override.</p><p><strong>Adjustable precision/recall via the decoder, not the weights.</strong> OpenAI <a href="https://github.com/openai/privacy-filter">exposes the Viterbi transition biases as runtime knobs</a>. 
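Mechanically, that knob looks something like the following sketch: a constrained Viterbi over BIOES tags where a single additive bias on non-background scores shifts the operating point. This is an illustrative reimplementation (one PII category, invented log-scores, and a hypothetical `pii_bias` parameter), not OpenAI's actual decoder.

```python
import math

TAGS = ["O", "B", "I", "E", "S"]  # background + Begin/Inside/End/Single, one category
ALLOWED = {"O": {"O", "B", "S"}, "B": {"I", "E"}, "I": {"I", "E"},
           "E": {"O", "B", "S"}, "S": {"O", "B", "S"}}
START, END = {"O", "B", "S"}, {"O", "E", "S"}

def decode(scores, pii_bias=0.0):
    """Viterbi over the BIOES transition graph.

    scores: one {tag: log-score} dict per token. pii_bias is the runtime
    knob: adding it to every non-O score buys recall without retraining.
    """
    biased = [{t: v + (pii_bias if t != "O" else 0.0) for t, v in s.items()}
              for s in scores]
    trellis = [{t: ((biased[0][t] if t in START else -math.inf), None) for t in TAGS}]
    for s in biased[1:]:
        trellis.append({t: max((trellis[-1][p][0] + s[t], p)
                               for p in TAGS if t in ALLOWED[p]) for t in TAGS})
    tag = max(END, key=lambda t: trellis[-1][t][0])  # best legal final state
    path = [tag]
    for row in reversed(trellis[1:]):
        tag = row[tag][1]                            # backtrack through best predecessors
        path.append(tag)
    return path[::-1]

# Borderline middle token: "O" narrowly beats "S" at the default operating point.
scores = [{"O": -0.1, "B": -3.0, "I": -3.0, "E": -3.0, "S": -3.0},
          {"O": -0.5, "B": -2.0, "I": -3.0, "E": -3.0, "S": -0.9},
          {"O": -0.1, "B": -3.0, "I": -3.0, "E": -3.0, "S": -3.0}]
print(decode(scores))                # ['O', 'O', 'O']  precision-leaning default
print(decode(scores, pii_bias=0.6))  # ['O', 'S', 'O']  recall-leaning
```

The weights never change; only the prior over tag sequences does, which is exactly why the precision/recall tradeoff can be a deployment-time decision.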
You can shift the operating point toward higher recall without retraining, just by re-tuning decoder priors.</p><p>The flip side is genuine: token classifiers cannot reason about context the way an LLM can. They cannot rewrite, synthesize, or follow a custom redaction policy (&#8220;redact only PII belonging to non-employees&#8221;). Privacy Filter does what it does and nothing else.</p><h2>The 96% F1 trap</h2><p>The PII-Masking-300k benchmark is a synthetic corpus generated specifically to evaluate PII-masking systems. OpenAI reports <a href="https://openai.com/index/introducing-openai-privacy-filter/">F1 = 96% on the original (94.04% precision, 98.04% recall) and 97.43% on a corrected version</a> where they fixed annotation errors. Both numbers are real and reproducible.</p><p>They are also nearly useless as a production signal.</p><p><a href="https://www.tonic.ai/blog/benchmarking-openai-privacy-filter-pii-detection">Tonic.ai &#8212; itself a vendor of competing redaction tooling &#8212; published a benchmark</a> within days of release, running Privacy Filter against four real-world test groups: electronic health record notes, call-center transcripts, loan contracts, and web crawls. Their methodology is transparent (token-level evaluation projected to Privacy Filter&#8217;s 8-class taxonomy on 500+ documents) and the comparison product is their own. With those caveats noted: <a href="https://www.tonic.ai/blog/benchmarking-openai-privacy-filter-pii-detection">Privacy Filter&#8217;s F1 ranged from 0.18 to 0.65 at default settings. Tonic&#8217;s purpose-built redactor scored 0.92&#8211;0.99 on the same data. Precision was comparable across both systems (around 0.77&#8211;0.85 for Privacy Filter). The gap was almost entirely recall: on web-crawl PII, default recall was 10%; on EHR notes, 38%</a>.</p><p>Two things explain this. First, OpenAI ships Privacy Filter with a precision-tuned default operating point.
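The Tonic numbers are internally consistent, and the arithmetic is worth internalizing: hold precision near the reported floor and F1 is governed almost entirely by recall. (Pairing the published precision floor with the web-crawl recall here is my illustration, not Tonic's exact group-level breakdown.)

```python
def f1(precision, recall):
    # F1 is the harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

# Precision floor ~0.77 with the 10% default web-crawl recall Tonic measured:
print(round(f1(0.77, 0.10), 2))  # 0.18, the bottom of the reported F1 range
# The same precision with strong recall would be a different product entirely:
print(round(f1(0.77, 0.95), 2))  # 0.85
```

A harmonic mean is merciless about its weakest term, which is why comparable precision still produced a 0.18 score.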
Over-redaction destroys downstream utility, and the company chose to under-flag rather than over-flag. The Viterbi knobs can recover most of the gap, but <a href="https://www.tonic.ai/blog/benchmarking-openai-privacy-filter-pii-detection">at the cost of multiplying total predictions roughly 5&#215;</a> &#8212; with a corresponding hit to precision on common words like &#8220;our&#8221; and &#8220;please.&#8221; Second, real-world PII has a long tail of formats &#8212; international phone numbers, forum-handle-style usernames, obfuscated contact blocks, region-specific identifiers &#8212; that the <a href="https://huggingface.co/openai/privacy-filter">default eight-category taxonomy</a> doesn&#8217;t even attempt to cover. SSNs, MRNs, NHS numbers, and Brazilian CPFs are not in the default label set.</p><p>Fine-tuning closes the gap. OpenAI&#8217;s own announcement reports <a href="https://openai.com/index/introducing-openai-privacy-filter/">fine-tuning improves F1 from 54% to 96% on a domain-adaptation benchmark and approaches saturation</a>, and the model card explicitly recommends <a href="https://huggingface.co/openai/privacy-filter">task-specific fine-tuning when policy differs from base boundaries</a>. The lesson: Privacy Filter&#8217;s value as a base model is real. Its value as a drop-in production redactor at default settings is not.</p><h2>Where Anthropic fits &#8212; and conspicuously doesn&#8217;t</h2><p>Anthropic does not ship anything equivalent to Privacy Filter. There is no open-weight Anthropic PII detector. There is no Claude API endpoint specifically for PII redaction. 
The <a href="https://www.anthropic.com/research/next-generation-constitutional-classifiers">Constitutional Classifiers</a> Anthropic publishes about &#8212; including the <a href="https://www.anthropic.com/research/next-generation-constitutional-classifiers">more recent two-stage cascade with activation probes</a> &#8212; are jailbreak and CBRN safety filters, scanning for harmful intent rather than personal data. They are also closed-weight and operated only inside Anthropic&#8217;s own deployment.</p><p>This is a structural difference between the two labs in 2026. OpenAI now maintains an open-weight model family (gpt-oss-20b, gpt-oss-120b, and now Privacy Filter as a derivative). Anthropic does not. For an engineering team using Claude in a regulated environment &#8212; healthcare, legal, financial &#8212; there is no first-party path to local PII filtering on Claude&#8217;s own infrastructure. The viable options are:</p><ul><li><p><strong>Run Privacy Filter or Presidio in front of Claude as a proxy.</strong> This is what community tooling like the <a href="https://pasqualepillitteri.it/en/news/1361/claude-privacy-tool-hook-privacy-claude-code-desktop">Claude Privacy Tool</a> already does &#8212; it intercepts prompts locally, swaps PII for placeholders using OpenAI&#8217;s open-weight model, sends the masked version to Claude, and re-substitutes on the way back.</p></li><li><p><strong>Use a commercial proxy.</strong> Tools like <a href="https://grepture.com/en/guides/redact-pii-anthropic-claude-api">Grepture</a> or <a href="https://www.tonic.ai/blog/benchmarking-openai-privacy-filter-pii-detection">Tonic Textual</a> sit between the client and the Claude API, performing token-level redaction with a reversible token map.</p></li><li><p><strong>Build it in-app.</strong> <a href="https://github.com/anthropics/claude-code/issues/29434">Open issues like anthropics/claude-code#29434</a> are explicitly requesting a first-party redaction hook in Claude Code so secrets 
and PII don&#8217;t enter the context window in the first place.</p></li></ul><p>The strategic reading: OpenAI is positioning small, specialized open-weight models &#8212; what&#8217;s worth calling <strong>safety SLMs</strong> &#8212; as infrastructure they want the broader ecosystem to standardize on. Anthropic&#8217;s safety story is built around training-time alignment plus closed classifiers integrated tightly into Claude itself. Both are legitimate strategies. Only one of them gives you a model you can run locally.</p><h2>The alternatives landscape</h2><p>For teams evaluating PII redaction in 2026, Privacy Filter joins a crowded field. The relevant tradeoffs:</p><p><strong><a href="https://microsoft.github.io/presidio/faq/">Microsoft Presidio</a></strong> is open source, mature, and combines <a href="https://microsoft.github.io/presidio/faq/">regex pattern recognizers, spaCy-based NER, and contextual checks</a>. It supports more languages out of the box than Privacy Filter and ships with <a href="https://microsoft.github.io/presidio/faq/">image and structured-data redactors</a> that Privacy Filter lacks. Its weakness is exactly where Privacy Filter is strong: ambiguous, contextual PII that requires language understanding rather than pattern matching, since its defaults rely heavily on regex and pre-trained NER models rather than purpose-trained PII classification.</p><p><strong><a href="https://docs.aws.amazon.com/comprehend/latest/dg/how-pii.html">AWS Comprehend</a></strong> is a managed cloud API. AWS&#8217;s docs state PII detection <a href="https://docs.aws.amazon.com/comprehend/latest/dg/how-pii.html">supports English or Spanish text documents only</a>, with no on-prem option. 
It is a reasonable pick only if your data is already in AWS and your sensitivity tolerance allows cross-network calls.</p><p><strong><a href="https://docs.cloud.google.com/sensitive-data-protection/docs">Google Cloud Sensitive Data Protection (formerly DLP)</a></strong> has the broadest taxonomy &#8212; <a href="https://docs.cloud.google.com/sensitive-data-protection/docs">over 200 built-in infoType detectors</a> &#8212; but is also cloud-only and the most complex to configure.</p><p><strong><a href="https://www.private-ai.com/">Private AI</a></strong> is the commercial purpose-built option. The <a href="https://www.private-ai.com/en/blog/pii-solutions-benchmark">vendor publishes its own benchmark</a> showing it leading on recall across domains, with multilingual support and a containerized on-prem deployment path. Treat the numbers as vendor-published rather than independent.</p><p><strong><a href="https://www.tonic.ai/blog/benchmarking-openai-privacy-filter-pii-detection">Tonic Textual</a></strong> is the production-trained option for teams with real customer data &#8212; its head-to-head against Privacy Filter is the only public comparison on non-synthetic corpora to date.</p><p>The architectural takeaway across these options: Privacy Filter is the first frontier-lab open-weight entry into a category that has been dominated by closed cloud APIs and SDK-based regex-NER hybrids. Its long-term value is probably less as a finished tool and more as a base checkpoint that shifts the ecosystem from rule-based to learned context-aware redaction.</p><h2>What this means for your stack</h2><p>If you are building production AI features today and PII handling is part of the threat model, three concrete decisions follow.</p><p>First, decide where redaction lives in your pipeline. 
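Whichever spot it lands in, the mechanism being placed is the same mask-and-restore round-trip the proxy tools above implement. A minimal sketch, with an invented placeholder format and a deliberately lenient re-substitution rule (not any vendor's API):

```python
import re

def mask(text, spans):
    """Replace detected PII spans with stable placeholders.

    spans: list of (start, end, category) from any detector.
    Returns masked text plus the map needed to reverse it.
    """
    token_map, out, cursor = {}, [], 0
    for i, (start, end, cat) in enumerate(sorted(spans)):
        placeholder = f"<{cat}_{i}>"            # format is an arbitrary choice
        token_map[placeholder] = text[start:end]
        out.append(text[cursor:start])
        out.append(placeholder)
        cursor = end
    out.append(text[cursor:])
    return "".join(out), token_map

def unmask(text, token_map):
    """Re-substitute real values, tolerating an LLM that reformats
    placeholders (extra spaces, changed case) on the way back."""
    for placeholder, real in token_map.items():
        name = re.escape(placeholder[1:-1])
        # lambda replacement avoids treating the real value as a regex template
        text = re.sub(rf"<\s*{name}\s*>", lambda _m, r=real: r, text,
                      flags=re.IGNORECASE)
    return text

masked, tmap = mask("Email alice@example.com about the invoice.",
                    [(6, 23, "EMAIL")])
print(masked)                           # Email <EMAIL_0> about the invoice.
reply = "I drafted a note to < email_0 >."
print(unmask(reply, tmap))              # I drafted a note to alice@example.com.
```

The lenient regex in `unmask` is the toy version of the robustness problem the commercial proxies sell: real model outputs mangle placeholders in far more creative ways.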
The two viable spots are at-source &#8212; a proxy or hook that scrubs prompts before they reach any LLM API &#8212; and in-batch &#8212; a sanitization pass on training data, logs, and indexed corpora before they reach a vector store. These have different operating-point requirements. At-source needs low latency and reversibility (the token-to-real-value map persists for the session). In-batch can be slower, can run in parallel, and is one-way.</p><p>Second, do not adopt Privacy Filter at default settings if your data doesn&#8217;t look like PII-Masking-300k. Either fine-tune on a few hundred to a few thousand domain examples, or tune the Viterbi knobs aggressively and accept the precision hit, or run Privacy Filter as one detector among several with rule-based and pattern-based detectors filling the gaps. The eight-category taxonomy is also static &#8212; if your domain has SSNs, MRNs, NHS numbers, or non-US tax IDs, you will need to fine-tune to add those classes.</p><p>Third, reversibility is the real production problem, not detection. If your application needs to mask PII before sending to an LLM and then un-mask it in the response, you are doing pseudonymization, not anonymization. The LLM might rewrite, paraphrase, or modify the placeholders, and your un-masking logic has to handle that. Privacy Filter solves none of this. Tools like <a href="https://www.protecto.ai/blog/why-presidio-other-data-masking-tools-fall-short-ai-use-cases-part-1/">Protecto</a> and <a href="https://www.tonic.ai/blog/benchmarking-openai-privacy-filter-pii-detection">Tonic</a> position themselves explicitly around the un-masking robustness problem, which is harder than the F1 score implies.</p><h2>Safety SLMs as a model class</h2><p>Privacy Filter is the clearest signal yet that &#8220;small, specialized model trained for one safety task&#8221; is becoming a stable category &#8212; distinct from foundation models and distinct from classical NLP libraries. 
The pattern is consistent: take a frontier-pretrained checkpoint as the substrate, surgically modify the head and attention pattern for a single classification or scoring objective, post-train on labeled safety data, and ship the weights under a permissive license so the ecosystem can fine-tune for vertical domains.</p><p>The next entries in this category are predictable. Prompt-injection detectors. Toxicity classifiers. Output policy auditors. Code-secret scanners. Some already exist as research artifacts. Privacy Filter is the first that is small enough to run in a browser, accurate enough to ship, and open enough to adapt without negotiating a license. If safety SLMs become the standard infrastructure layer for production AI &#8212; the privacy and safety equivalent of TLS termination &#8212; Privacy Filter is the v1.</p><p>What&#8217;s worth watching is whether Anthropic continues to keep its safety classifiers internal, or whether the competitive pressure of an open ecosystem forces a shift. The <a href="https://www.anthropic.com/research/next-generation-constitutional-classifiers">Constitutional Classifiers research</a> is, technically, exactly the kind of work that could ship as open weights for the broader community to build on. So far, it hasn&#8217;t.</p>]]></content:encoded></item><item><title><![CDATA[Shadow AI Agents]]></title><description><![CDATA[Your enterprise has more AI agents than employees. Most don&#8217;t have identities, owners, or audit trails. Agent identity is the reliability surface that everything else depends on &#8212; and the control plan]]></description><link>https://theairuntime.com/p/shadow-ai-agents</link><guid isPermaLink="false">https://theairuntime.com/p/shadow-ai-agents</guid><dc:creator><![CDATA[The AI Runtime]]></dc:creator><pubDate>Mon, 27 Apr 2026 11:03:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!cZam!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2de2e01f-f003-48b4-9e93-aec992def2dd_1024x559.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p><strong>TL;DR</strong> - Per Gravitee&#8217;s 2026 State of AI Agent Security report, 88% of organizations reported confirmed or suspected AI agent security incidents in the past year. The same survey found three million agents running inside corporations today, only 47.1% of which are actively monitored or secured. Deloitte&#8217;s 2026 State of AI in the Enterprise adds that only one in five companies has a mature governance model for agentic AI. The numbers describe a single underlying problem: most enterprise AI agents are <strong>shadow agents</strong> &#8212; autonomous workers with persistent permissions, no owner, no registry entry, and no audit trail. This is shadow IT&#8217;s faster, more dangerous successor. Shadow IT was unsanctioned software.
Shadow AI was unsanctioned LLM use. Shadow agents are unsanctioned <em>workers</em> &#8212; they move files, send emails, execute transactions, and call APIs at machine speed, often borrowing a human&#8217;s credentials with no separation of action.</p><p>The fix is <strong>agent identity</strong> as a first-class reliability surface &#8212; sitting beneath context engineering and harness engineering as the precondition both rely on. Microsoft&#8217;s Agent 365, generally available May 1 at $15 per user per month, is the first major reference architecture: every agent gets a unique Entra Agent ID, a sponsor, a registry entry, and a managed lifecycle. It&#8217;s not the whole answer &#8212; cross-cloud governance is still unsolved &#8212; but it&#8217;s the clearest blueprint enterprises have today for what an agent control plane needs to do. If you can&#8217;t answer three questions about your environment in five minutes &#8212; <em>how many agents we have, what each one can actually do, and who is accountable when one misbehaves</em> &#8212; you have shadow agents.
This is a guide to making them visible.</p></div><h2>The Office Building Analogy</h2><p>Imagine you walk into your office tomorrow and discover that your company hired forty-five people overnight for every existing employee. They don&#8217;t have badges. They report to no one. They have access to your filesystem, email, CRM, customer database, and bank accounts. They never go home, never take vacation, and when something breaks at 3 AM on a Saturday, no one even knows they were there.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!cZam!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2de2e01f-f003-48b4-9e93-aec992def2dd_1024x559.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cZam!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2de2e01f-f003-48b4-9e93-aec992def2dd_1024x559.png 424w, https://substackcdn.com/image/fetch/$s_!cZam!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2de2e01f-f003-48b4-9e93-aec992def2dd_1024x559.png 848w, https://substackcdn.com/image/fetch/$s_!cZam!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2de2e01f-f003-48b4-9e93-aec992def2dd_1024x559.png 1272w, https://substackcdn.com/image/fetch/$s_!cZam!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2de2e01f-f003-48b4-9e93-aec992def2dd_1024x559.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!cZam!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2de2e01f-f003-48b4-9e93-aec992def2dd_1024x559.png" width="1024" height="559" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2de2e01f-f003-48b4-9e93-aec992def2dd_1024x559.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:559,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1126460,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/195587304?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2de2e01f-f003-48b4-9e93-aec992def2dd_1024x559.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!cZam!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2de2e01f-f003-48b4-9e93-aec992def2dd_1024x559.png 424w, https://substackcdn.com/image/fetch/$s_!cZam!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2de2e01f-f003-48b4-9e93-aec992def2dd_1024x559.png 848w, https://substackcdn.com/image/fetch/$s_!cZam!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2de2e01f-f003-48b4-9e93-aec992def2dd_1024x559.png 1272w, https://substackcdn.com/image/fetch/$s_!cZam!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2de2e01f-f003-48b4-9e93-aec992def2dd_1024x559.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><em>                                                                     Shadow AI Agents</em></p><p>This is not hyperbole. It is the actual ratio. Non-human identities &#8212; service accounts, API tokens, robotic process automation, and now AI agents &#8212; outnumber human identities in average enterprises by 45 to 1, according to Gartner research, climbing to 80 to 1 in cloud-native organizations. Most operate with excessive privileges. Most run unmonitored. 
And most are essential to keeping production systems running.</p><p>The traditional security playbook was simple: lock down the humans. Enforce MFA. Train employees not to phish. Review badges. The shadow agents problem rewrites the question entirely. The mandate is no longer &#8220;who has admin rights?&#8221; but &#8220;what has access to what?&#8221; &#8212; and answering that requires infrastructure most organizations have not built yet.</p><div><hr></div><h2>What Shadow Agents Actually Are</h2><p>Shadow IT was the previous era&#8217;s problem. Employees signed up for SaaS tools without IT approval. Procurement found out months later when the renewal invoice landed.</p><p>Shadow AI was the bridge. Employees pasted proprietary data into ChatGPT, Claude, or Gemini. The exposure was real but bounded &#8212; a single conversation, a single export, a single user.</p><p>Shadow agents are categorically different. Unlike shadow AI, which is the use of unapproved LLMs, shadow agents are granted <strong>persistent permissions to your systems</strong>. They don&#8217;t just answer questions. They move files, send emails, update records, and communicate with customers and other agents. They authenticate continuously. They make decisions while no human is watching. And they typically piggyback on a human user&#8217;s credentials &#8212; which means in your audit logs, the agent&#8217;s actions are indistinguishable from the human&#8217;s.</p><p>When an agent updates a file, the log says &#8220;John Doe updated a file.&#8221; It should say &#8220;John Doe&#8217;s Agent [ID 042] updated a file.&#8221; That single missing distinction is the source of most attribution failures, most incident response delays, and most of the 88% incident rate Gravitee found in its 2026 State of AI Agent Security report.</p><p>The pattern is predictable and already widespread. Marketing deploys an agent for content generation. Sales spins up one for lead scoring. 
Finance automates invoice processing. Each was approved by a manager who reasonably assumed IT would catch anything risky. IT never sees them, because the agents enter the environment through OAuth grants, browser extensions, MCP integrations, and developer pipelines that no central registry tracks. Six months later the agents are doing critical work. Twelve months later one of them malfunctions and exposes a customer database. The post-mortem reveals nobody knew it existed.</p><p>Gravitee&#8217;s research puts the steady-state at three million agents operating inside corporations today, of which an estimated 1.5 million are running with no oversight, accessing sensitive data, making decisions, and connecting to critical systems with no audit trail. Gartner expects 40% of enterprise applications to embed task-specific AI agents by the end of this year, up from less than 5% in 2025. IDC projects 1.3 billion autonomous agents in circulation by 2028. None of those agents will govern themselves.</p><div><hr></div><h2>Why Reliability Engineering Alone Doesn&#8217;t Solve This</h2><p>I&#8217;ve written extensively about Model Reliability Engineering &#8212; the discipline of ensuring AI behavior is reliable in production. MRE has two surfaces: context engineering (what the model knows at inference) and harness engineering (what users see, with what guardrails).</p><p>Both surfaces assume something they shouldn&#8217;t: that you know <em>which agent</em> is calling the model, <em>whose permissions</em> it carries, and <em>who is accountable</em> if it misbehaves.</p><p>Take a faithfulness SLO failure. An agent generates a response unsupported by the retrieved context. MRE tells you the metric fired. It does not tell you which of your 412 agents fired it, which user it was acting on behalf of, what permissions it was operating under, or whether the failure exposed data the agent should never have been able to access in the first place. 
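</p><p>A minimal sketch of an audit record that could answer those questions, in Python; the field names are illustrative assumptions, not any vendor&#8217;s schema:</p>

```python
import json
from datetime import datetime, timezone

def audit_record(agent_id, on_behalf_of, action, resource, permissions):
    # The actor of record is the agent, not just the human it acts for:
    # "john.doe/agent/042" instead of "john.doe".
    return {
        "ts": datetime.now(timezone.utc).isoformat(),
        "actor": f"{on_behalf_of}/agent/{agent_id}",
        "agent_id": agent_id,
        "on_behalf_of": on_behalf_of,
        "action": action,
        "resource": resource,
        "permissions_in_effect": sorted(permissions),
    }

entry = audit_record("042", "john.doe", "file.update",
                     "reports/q3.xlsx", {"files:write"})
print(json.dumps(entry, indent=2))
```

<p>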
That investigation requires identity &#8212; and most organizations cannot produce it.</p><p>Agent identity is therefore not a sibling discipline to MRE. It&#8217;s a <strong>precondition</strong>. Reliability without identity is unauditable. Observability without attribution is theater. You cannot enforce a purpose limitation on an agent whose purpose was never declared. Kiteworks&#8217; 2026 Data Security and Compliance Risk Forecast quantifies the gap directly: 63% of organizations cannot enforce purpose limitations on what their agents are authorized to do, and 60% cannot terminate a misbehaving agent once it starts operating.</p><p>This is why agent identity belongs as the next reliability surface &#8212; not in addition to context and harness engineering, but underneath them. Without it, the rest of the stack cannot carry weight.</p><div><hr></div><h2>The Four Pillars of an Agent Control Plane</h2><p>Across the most coherent enterprise frameworks emerging in the last six months &#8212; Microsoft&#8217;s Agent 365, the Cloud Adoption Framework guidance for agent governance, the OWASP Top 10 for Agentic Applications, and the NIST AI Agent Standards Initiative announced in January 2026 &#8212; the same four pillars surface repeatedly. Together they describe what an agent control plane has to do.</p><p><strong>Discovery and registry.</strong> Every agent in the environment is inventoried. Not just the ones IT sanctioned. The ones running through OAuth grants, browser extensions, MCP servers, low-code platforms, and developer scripts. If you don&#8217;t know an agent exists, you cannot govern it. Most organizations cannot produce this list today.</p><p><strong>Identity and sponsorship.</strong> Each agent receives a unique, durable identifier &#8212; distinct from any human user&#8217;s credentials. Each identity has a <em>sponsor</em>: a human accountable for the agent&#8217;s lifecycle, its permissions, and its decommissioning. 
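</p><p>In code, the sponsorship primitive is small. A minimal sketch of a registry entry pairing a durable agent ID with a named human owner; the class and field names are my assumptions, not any product&#8217;s API:</p>

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    name: str
    sponsor: str   # the human accountable for lifecycle and decommissioning
    agent_id: str = field(default_factory=lambda: f"agent-{uuid.uuid4()}")
    active: bool = True

class AgentRegistry:
    def __init__(self):
        self._entries = {}

    def register(self, name, sponsor):
        # No agent operates without an owner: registration fails closed.
        if not sponsor:
            raise ValueError("every agent identity needs a human sponsor")
        ident = AgentIdentity(name=name, sponsor=sponsor)
        self._entries[ident.agent_id] = ident
        return ident

    def decommission(self, agent_id):
        self._entries[agent_id].active = False

    def owned_by(self, sponsor):
        return [a for a in self._entries.values() if a.sponsor == sponsor]

reg = AgentRegistry()
bot = reg.register("invoice-processor", sponsor="jane.smith")
```

<p>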
Microsoft&#8217;s Entra Agent ID is the most concrete implementation of this primitive available today, but the principle is portable: no agent operates without an owner.</p><p><strong>Policy and permission.</strong> Agents authenticate using short-lived, task-specific tokens, not long-lived shared credentials. Permissions are scoped to least privilege by default. Conditional access policies adapt in real time to risk signals. Purpose limitation is encoded &#8212; what the agent is allowed to do, and equally important, what it is <em>not</em> allowed to do, even when prompted to.</p><p><strong>Observability and attribution.</strong> Every action an agent takes is logged with the agent&#8217;s identity, the user it was acting on behalf of, the tools it called, and the data it touched. Behavioral baselines detect drift. Anomalies trigger investigation. When something goes wrong, the audit trail answers &#8220;what happened&#8221; in minutes, not in days of forensic archaeology.</p><p>These four pillars are not novel individually. Identity governance has been a discipline for decades. 
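</p><p>The policy pillar above can be sketched with nothing but the standard library: a short-lived token that carries its own scope, and an authorizer that denies anything outside it. A toy illustration of the pattern, not a real token format:</p>

```python
import base64, hashlib, hmac, json, time

SECRET = b"demo-only-secret"  # illustration only; a real system uses a KMS/HSM

def mint_token(agent_id, allowed_actions, ttl_s=300):
    # Short-lived, task-scoped credential: the scope travels with the token,
    # and expiry rules out long-lived shared credentials by construction.
    claims = {"sub": agent_id, "scope": sorted(allowed_actions),
              "exp": time.time() + ttl_s}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def authorize(token, action):
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered token
    claims = json.loads(base64.urlsafe_b64decode(body))
    if time.time() > claims["exp"]:
        return False  # expired: the agent must come back for a fresh grant
    return action in claims["scope"]  # purpose limitation: deny anything undeclared

tok = mint_token("agent-042", {"invoices:read", "invoices:approve"})
```

<p>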
What is new is applying them to entities that operate continuously, autonomously, at machine speed, with permissions equal to or exceeding privileged human users &#8212; and doing so before the agent population grows past the point of practical inventory.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bqPe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb926fb50-0644-4094-bfa5-d2b05ee42838_845x799.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bqPe!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb926fb50-0644-4094-bfa5-d2b05ee42838_845x799.png 424w, https://substackcdn.com/image/fetch/$s_!bqPe!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb926fb50-0644-4094-bfa5-d2b05ee42838_845x799.png 848w, https://substackcdn.com/image/fetch/$s_!bqPe!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb926fb50-0644-4094-bfa5-d2b05ee42838_845x799.png 1272w, https://substackcdn.com/image/fetch/$s_!bqPe!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb926fb50-0644-4094-bfa5-d2b05ee42838_845x799.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bqPe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb926fb50-0644-4094-bfa5-d2b05ee42838_845x799.png" width="845" height="799" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b926fb50-0644-4094-bfa5-d2b05ee42838_845x799.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:799,&quot;width&quot;:845,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:38549,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/195587304?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb926fb50-0644-4094-bfa5-d2b05ee42838_845x799.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!bqPe!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb926fb50-0644-4094-bfa5-d2b05ee42838_845x799.png 424w, https://substackcdn.com/image/fetch/$s_!bqPe!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb926fb50-0644-4094-bfa5-d2b05ee42838_845x799.png 848w, https://substackcdn.com/image/fetch/$s_!bqPe!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb926fb50-0644-4094-bfa5-d2b05ee42838_845x799.png 1272w, https://substackcdn.com/image/fetch/$s_!bqPe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb926fb50-0644-4094-bfa5-d2b05ee42838_845x799.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>                                                  <em>Pillars of an Agent Control Plane</em></p><div><hr></div><h2>Microsoft Agent 365 as the Reference Architecture</h2><p>Agent 365, generally available May 1, 2026, is the most complete implementation of these four pillars shipping today. It deserves attention not because it is the only solution but because it is the first concrete blueprint enterprises can point to and copy.</p><p>The Agent 365 inventory in the Microsoft 365 admin center captures every agent registered through Microsoft channels &#8212; Copilot Studio, Microsoft Foundry, Teams, and third-party agents that integrate via the Agent 365 SDK. Microsoft Entra issues each agent a unique Agent ID and applies identity governance: lifecycle controls, conditional access, sponsor relationships, and access packages. 
Microsoft Purview applies data protection policies and audits agent activity. Microsoft Defender provides threat detection and incident response, with visibility into attack paths.</p><p>Microsoft is its own first proof point. The company has been running Agent 365 internally as &#8220;Customer Zero&#8221; and reports more than 500,000 agents mapped within its own environment, generating more than 65,000 responses per day for employees in a representative 28-day window. In the public preview phase, tens of millions of agents have been registered in the Agent 365 registry across customer environments. The control plane has been load-tested before launch.</p><p>It is worth understanding what Agent 365 does <em>not</em> solve. Its strength is also its boundary: it is anchored to the Microsoft ecosystem. Agents running in AWS Bedrock, GCP Vertex, OpenAI&#8217;s platform, Anthropic&#8217;s API, GitHub Actions, or internal frameworks built on LangChain or CrewAI do not automatically appear in the Agent 365 registry. Cross-cloud governance still requires configuration or third-party tooling. Several aspects of the security story are also incomplete on day one &#8212; runtime threat protection through the Agent 365 tools gateway is entering public preview in April rather than shipping at GA, and security posture management for Foundry and Copilot Studio agents remains in public preview after launch.</p><p>Agent 365 is the most coherent reference architecture today, but it is one path among several. To pick well, architects need the broader landscape.</p><div><hr></div><h2>The Control Plane Is a Category, Not a Product</h2><p>Microsoft is not alone in this space. As of mid-2026, six distinct categories of vendor are racing toward the same control-plane primitives, with overlapping and sometimes conflicting approaches.</p><p><strong>Hyperscaler-native control planes.</strong> Each major cloud is building its own version of Agent 365. 
AWS Bedrock AgentCore added a managed Agent Registry in April 2026, with identity, gateway, sandboxed runtime, observability, and a policy module that runs outside the agent. VentureBeat&#8217;s framing of the difference is sharp &#8212; AWS optimizes for build-velocity, with identity baked into the runtime layer rather than sitting on top. Google rebranded Vertex AI as Gemini Enterprise Platform and built a Kubernetes-style governance control plane around it, with Agent Registry integrations via Apigee, plus VPC Service Controls, CMEK, and a new Vertex AI Governance layer. Three hyperscalers, three philosophies, each bound to its own ecosystem. Forrester analyst Charlie Dai flagged the corollary risk: enterprises adopting AWS, Microsoft, and Google registries in parallel could end up recreating the exact fragmentation these tools are meant to solve. Registry sprawl is the second-order failure mode of the control-plane era.</p><p><strong>The neutral identity-fabric play.</strong> Okta plus Auth0 is the most ambitious cross-ecosystem competitor. Okta for AI Agents entered Early Access in March 2026; Auth0 for AI Agents handles the build-time identity primitives &#8212; Token Vault, Fine-Grained Authorization for RAG, CIBA for asynchronous human consent. The strategically important move is Cross App Access (XAA), an OAuth extension built specifically for agent-to-application delegation, with launch support from AWS, Google Cloud, Salesforce, Box, Glean, and others. XAA was recently merged into MCP as &#8220;Enterprise-Managed Authorization.&#8221; If XAA becomes the actual interoperability standard, it matters more than any single vendor&#8217;s control plane. 
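</p><p>For a concrete feel of agent-on-behalf-of-user delegation, OAuth 2.0 token exchange (RFC 8693) already separates the <em>subject</em> (the user) from the <em>actor</em> (the agent acting for them). A sketch of such a request body, with hypothetical values; XAA&#8217;s exact wire profile may differ:</p>

```python
from urllib.parse import urlencode

def token_exchange_body(subject_token, actor_token, audience, scope):
    # RFC 8693 keeps the user (subject) and the agent (actor) as distinct
    # principals in a single delegated credential request.
    return urlencode({
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_token,   # the human the agent acts for
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "actor_token": actor_token,       # the agent's own identity
        "actor_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "audience": audience,             # the downstream app being accessed
        "scope": scope,
    })

# Hypothetical values for illustration only.
body = token_exchange_body("user-token", "agent-token",
                           "https://api.example.com", "files:read")
```

<p>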
Strata Identity&#8217;s Maverics Agentic Identity is a similar pure-play approach, with just-in-time provisioning and OIDC/OAuth subject-actor binding.</p><p><strong>Non-human-identity vendors.</strong> Entro Security, TrustLogix, BeyondTrust Pathfinder, CyberArk, GitGuardian, Keeper, and AppViewX with Eos came from privileged access, non-human identity, or secrets management and extended into agents. BeyondTrust Pathfinder is the closest a non-hyperscaler comes to a true unified control plane, combining PAM, CIEM, ITDR, secrets management, and agentic AI security in a single telemetry layer. Their thesis is the cross-environment one: agents do not respect ecosystem boundaries, so neither should governance.</p><p><strong>IGA retrofit.</strong> Saviynt shipped ISPM for AI Agents and ISPM for NHI in early 2026. SailPoint and others are extending traditional identity governance to agents. &#8220;Extending&#8221; is the operative word. This is the retrofit path, with the trade-offs that implies.</p><p><strong>Cross-cloud data-policy layer.</strong> Bedrock Data&#8217;s ArgusAI sits adjacent to identity, governing what <em>data</em> agents can access across AWS Bedrock, Snowflake Cortex, ChatGPT Enterprise, and Google Vertex AI. Write a policy in plain English once, enforce it across clouds. Identity governance and data governance are converging.</p><p><strong>The open-standard foundation few are pointing to.</strong> SPIFFE/SPIRE &#8212; CNCF-graduated, production-proven for workload identity in cloud-native environments, integrated natively into HashiCorp Vault Enterprise as of version 1.21, shipping as a Red Hat OpenShift operator. SPIFFE was not built for AI agents specifically, but it solves precisely the right problem: short-lived cryptographic identities for non-human workloads, attested by what the workload <em>is</em> rather than what secret it holds. Most enterprise architects have not connected SPIFFE to agent governance yet. They should. 
For platform-agnostic, multi-cloud agent identity, SPIFFE/SPIRE is the most mature and standards-aligned foundation available &#8212; and it composes cleanly underneath any of the higher-level control planes above.</p><p>Practical guidance breaks down by deployment shape. Predominantly Microsoft stacks should default to Agent 365 at $15 per user per month standalone, or included in the new M365 E7 bundle at $99, as the path of least resistance. Predominantly AWS or Google deployments should look at AgentCore Registry and Gemini Enterprise&#8217;s governance layer respectively as the analogous bets, with the same architectural pattern and same ecosystem boundary. Multi-cloud organizations need Okta plus Auth0&#8217;s identity fabric or one of the NHI-pedigree platforms &#8212; BeyondTrust Pathfinder, Entro, TrustLogix &#8212; for cross-environment governance that hyperscaler-native tools cannot deliver. Cloud-native shops running Kubernetes and a service mesh should evaluate SPIFFE/SPIRE as the open-standard foundation that composes underneath any of the above. Teams still early, with fewer than a dozen agents in production, should build identity in from day one rather than retrofit it later. The shadow agents problem is what retrofit looks like at scale, and the cost grows by an order of magnitude with every doubling of agent population.</p><div><hr></div><h2>A Three-Question Diagnostic</h2><p>Before any tooling decision, every organization running agents should be able to answer three questions in under five minutes. The number of &#8220;no&#8221; or &#8220;I&#8217;m not sure&#8221; responses correlates directly with shadow agent exposure.</p><p><strong>How many AI agents are running in our environment right now?</strong> Not the ones IT approved. The total &#8212; including the ones spun up via OAuth grants, browser extensions, MCP integrations, and developer scripts. 
Most organizations cannot answer this within an order of magnitude.</p><p><strong>What can each agent actually do?</strong> Not what it was designed to do. What permissions does its token carry, what systems does it have read access to, what systems does it have write access to, and what would happen if a malicious prompt convinced it to use the broadest interpretation of its access? The 63% of organizations that cannot enforce purpose limitations are by definition unable to bound this.</p><p><strong>Who is accountable if an agent misbehaves at 3 AM on a Saturday?</strong> Not &#8220;the team that built it.&#8221; A specific human, on call, with the authority to decommission the agent. If the answer requires a meeting to determine, the agent has no owner.</p><p>Three &#8220;no&#8217;s&#8221; means a major incident is a question of when, not if. The organizations that will survive the next 24 months of agent adoption without a public incident are the ones that can answer all three today, with names, numbers, and pages.</p><div><hr></div><h2>The Bottom Line</h2><p>Agent adoption is moving faster than identity governance. Forty percent of enterprise applications embedding agents by year-end is not an adoption curve &#8212; it is a vertical line. The 1.3 billion agent projection by 2028 means that within two years, autonomous non-human workers will outnumber every other class of digital identity inside the enterprise.</p><p>The organizations that treat agent identity as a first-class reliability surface &#8212; with discovery, sponsorship, scoped permissions, and audit-grade observability &#8212; will spend the next two years building production capability. The organizations that don&#8217;t will spend them doing post-incident forensics on agents they didn&#8217;t know they had.</p><p>Reliability begins with identity. If you cannot tell who acted, you cannot tell what happened. If you cannot tell what happened, you cannot fix it. 
Everything else in the agent stack &#8212; context engineering, harness engineering, evaluation, incident response &#8212; assumes that question is already answered.</p><p>It usually isn&#8217;t. That&#8217;s the work.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Builder Spotlight - Armaan Agrawal ships like a forward-deployed engineer already]]></title><description><![CDATA[Seven production systems, one repeatable pattern, and the real-world skills most new grads don&#8217;t show up with.]]></description><link>https://theairuntime.com/p/builder-spotlight-armaan-agrawal</link><guid isPermaLink="false">https://theairuntime.com/p/builder-spotlight-armaan-agrawal</guid><dc:creator><![CDATA[The AI Runtime]]></dc:creator><pubDate>Fri, 24 Apr 2026 11:03:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!UMJE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6e09d90f-b816-46af-af02-1e7980e0c033_1024x559.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p><strong>TL;DR.</strong> Armaan Agrawal (CS @ Northeastern, class of 2026) has a new-grad portfolio that reads like a scout report for forward-deployed engineering. 
SamGPT is a RAG engine over the My First Million corpus with timestamp and speaker-attributed chunks that bridge into a Viral Clip Generator via FFmpeg &#8212; the retrieval and action paths share a schema on purpose, which is the move a Solutions Architect makes. He wired prompt-injection and harmful-request guardrails into an OpenAI Agents SDK build <em>in hour one of three</em> at a hackathon and placed 2nd. His co-op recommender solved cold-start with a staged text-match &#8594; collaborative-filtering rollout that will be deployed to 2500 students. He&#8217;s demonstrating the concepts <a href="https://aiengineerweekly.substack.com/p/your-portfolio-website-wont-get-you">the AIfolio framework</a> calls for &#8212; RAG, tool-use, voice continuity &#8212; but what makes the portfolio FDE-shaped is the <em>real-world skills around</em> those concepts: latency as a product feature, guardrails as a default, schema as operational design, staged rollout as architecture. If you&#8217;re staffing an FDE or Solutions Architect role, talk to him before he takes a traditional SWE offer.</p></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2>The habit, stated</h2><p>Most new-grad portfolios are a pile of frameworks. 
Armaan&#8217;s is a pile of <em>systems shipped to specific users whose operational reality he understood</em>. That sounds soft until you look at the architectural choices &#8212; they&#8217;re the ones you make when the user&#8217;s failure mode, not the rubric, is what you&#8217;re optimizing against.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!UMJE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6e09d90f-b816-46af-af02-1e7980e0c033_1024x559.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!UMJE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6e09d90f-b816-46af-af02-1e7980e0c033_1024x559.png 424w, https://substackcdn.com/image/fetch/$s_!UMJE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6e09d90f-b816-46af-af02-1e7980e0c033_1024x559.png 848w, https://substackcdn.com/image/fetch/$s_!UMJE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6e09d90f-b816-46af-af02-1e7980e0c033_1024x559.png 1272w, https://substackcdn.com/image/fetch/$s_!UMJE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6e09d90f-b816-46af-af02-1e7980e0c033_1024x559.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!UMJE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6e09d90f-b816-46af-af02-1e7980e0c033_1024x559.png" width="1024" height="559" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6e09d90f-b816-46af-af02-1e7980e0c033_1024x559.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:559,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!UMJE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6e09d90f-b816-46af-af02-1e7980e0c033_1024x559.png 424w, https://substackcdn.com/image/fetch/$s_!UMJE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6e09d90f-b816-46af-af02-1e7980e0c033_1024x559.png 848w, https://substackcdn.com/image/fetch/$s_!UMJE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6e09d90f-b816-46af-af02-1e7980e0c033_1024x559.png 1272w, https://substackcdn.com/image/fetch/$s_!UMJE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6e09d90f-b816-46af-af02-1e7980e0c033_1024x559.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><em>                           AIfolio Projects</em></p><p>Forward-deployed engineering and solutions architecture are the same job at different scales: drop into a domain you didn&#8217;t grow up in, compose a working system out of heterogeneous pieces, land it with safety and observability already in it, and iterate on the signal instead of the stack. Most new grads learn this over two years of production pain. Armaan has already shipped it seven times.</p><p>The AIfolio framework names the <em>concepts</em> an AI-engineering portfolio should demonstrate &#8212; RAG pipelines, tool-use architecture, agent design, memory and voice continuity. Armaan hits those concepts. What&#8217;s more interesting is what he does <em>around</em> them: the habits that make the concepts production-viable instead of demo-viable. 
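</p><p>One of those habits, the shared schema between retrieval and action, is worth seeing in miniature. A sketch under my own assumptions; the field names, padding, and ffmpeg invocation are illustrative, not SamGPT&#8217;s actual code:</p>

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    # One transcript chunk: the same record serves the retrieval path
    # (speaker, text) and the action path (file, start/end for cutting).
    episode_file: str
    speaker: str
    start_s: float
    end_s: float
    text: str

def clip_command(chunk, out_path, pad_s=0.5):
    # The action path reuses the retrieval schema: a retrieved chunk is
    # already enough to drive an ffmpeg cut, with no second lookup.
    start = max(0.0, chunk.start_s - pad_s)
    duration = (chunk.end_s - chunk.start_s) + 2 * pad_s
    return ["ffmpeg", "-ss", f"{start:.2f}", "-i", chunk.episode_file,
            "-t", f"{duration:.2f}", "-c", "copy", out_path]

hit = Chunk("mfm_ep612.mp4", "Sam", 1042.0, 1061.5, "the best businesses are boring")
cmd = clip_command(hit, "clip.mp4")
```

<p>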
That&#8217;s what this piece walks through.</p><h2>RAG with schema foresight (SamGPT + Viral Clip Generator)</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6Xlp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F995f692d-44f1-43e6-bc18-c9afa3a4c200_1920x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6Xlp!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F995f692d-44f1-43e6-bc18-c9afa3a4c200_1920x1536.png 424w, https://substackcdn.com/image/fetch/$s_!6Xlp!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F995f692d-44f1-43e6-bc18-c9afa3a4c200_1920x1536.png 848w, https://substackcdn.com/image/fetch/$s_!6Xlp!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F995f692d-44f1-43e6-bc18-c9afa3a4c200_1920x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!6Xlp!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F995f692d-44f1-43e6-bc18-c9afa3a4c200_1920x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6Xlp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F995f692d-44f1-43e6-bc18-c9afa3a4c200_1920x1536.png" width="1456" height="1165" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/995f692d-44f1-43e6-bc18-c9afa3a4c200_1920x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1165,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6Xlp!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F995f692d-44f1-43e6-bc18-c9afa3a4c200_1920x1536.png 424w, https://substackcdn.com/image/fetch/$s_!6Xlp!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F995f692d-44f1-43e6-bc18-c9afa3a4c200_1920x1536.png 848w, https://substackcdn.com/image/fetch/$s_!6Xlp!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F995f692d-44f1-43e6-bc18-c9afa3a4c200_1920x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!6Xlp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F995f692d-44f1-43e6-bc18-c9afa3a4c200_1920x1536.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>                            Viral clip generator</em></p><p><strong>Stack:</strong> Whisper / ASR, speaker diarization, embeddings, vector search, FFmpeg, Next.js.</p><p>SamGPT is a RAG system over the My First Million podcast corpus: semantic search, query expansion, suggested prompts, YouTube deep links to the exact timestamp. The Viral Clip Generator is the adjacent tool: paste a YouTube URL, get the top 3 sub-2-minute cuts auto-extracted as 16:9 exports.</p><p>What makes this architecturally non-obvious isn&#8217;t the RAG itself. It&#8217;s the <strong>bridge between the two services</strong>.</p><p>The data model carries timestamped, speaker-attributed chunks <em>all the way through</em> retrieval. A user who finds a quote in SamGPT can jump to the video at the exact second, or pass the chunk to FFmpeg and get a shippable 16:9 cut. 
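</p><p>A minimal sketch of what a shared, two-exit chunk schema can look like (field and method names here are hypothetical, not taken from the repo):</p>

```python
from dataclasses import dataclass

@dataclass
class TranscriptChunk:
    """One retrieval unit that both surfaces consume: search and clip export."""
    episode_id: str   # source episode
    speaker: str      # diarized speaker label
    start_s: float    # start timestamp in seconds
    end_s: float      # end timestamp in seconds
    text: str         # the text that gets embedded for semantic search

    def youtube_deep_link(self, video_id: str) -> str:
        # SamGPT exit: jump the user to the exact second in the video
        return f"https://youtube.com/watch?v={video_id}&t={int(self.start_s)}s"

    def ffmpeg_cut_args(self, src: str, dst: str) -> list[str]:
        # Clip-generator exit: the same metadata drives the cut
        return ["ffmpeg", "-ss", str(self.start_s), "-to", str(self.end_s),
                "-i", src, "-c", "copy", dst]
```

<p>Because timestamps and speaker attribution survive retrieval, the second surface is a method on the same object rather than a second pipeline.</p><p>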
Retrieval and generation aren&#8217;t separate products wearing the same skin &#8212; they share metadata, and the shared metadata <em>is</em> the feature.</p><p>This is the Solutions Architect move. It would have been easier to build two independent tools and call it a suite. Instead he built one pipeline with two exits, and the marginal cost of the second exit was near zero <em>because he designed the chunking schema for it upfront</em>. Most AI engineers bolt that on later and lose half the data.</p><p>The portable real-world skill: <strong>your chunking schema is a product decision, not an infrastructure decision</strong>. Armaan&#8217;s schema already had timestamps and speaker attribution because he knew a second surface (clip extraction) would need them. That&#8217;s designing the system to be legible to the next tool you&#8217;ll build against it &#8212; the skill that separates an AI engineer from a solutions architect.</p><h2>Tool-use architecture without MCP (Content Engine)</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!j5oq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec99b164-fbbc-431f-8134-2bed799d83bc_1920x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!j5oq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec99b164-fbbc-431f-8134-2bed799d83bc_1920x1536.png 424w, https://substackcdn.com/image/fetch/$s_!j5oq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec99b164-fbbc-431f-8134-2bed799d83bc_1920x1536.png 848w, 
https://substackcdn.com/image/fetch/$s_!j5oq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec99b164-fbbc-431f-8134-2bed799d83bc_1920x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!j5oq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec99b164-fbbc-431f-8134-2bed799d83bc_1920x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!j5oq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec99b164-fbbc-431f-8134-2bed799d83bc_1920x1536.png" width="1456" height="1165" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ec99b164-fbbc-431f-8134-2bed799d83bc_1920x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1165,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!j5oq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec99b164-fbbc-431f-8134-2bed799d83bc_1920x1536.png 424w, https://substackcdn.com/image/fetch/$s_!j5oq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec99b164-fbbc-431f-8134-2bed799d83bc_1920x1536.png 848w, 
https://substackcdn.com/image/fetch/$s_!j5oq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec99b164-fbbc-431f-8134-2bed799d83bc_1920x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!j5oq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec99b164-fbbc-431f-8134-2bed799d83bc_1920x1536.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>                            Content Engine</em></p><p><strong>Stack:</strong> Next.js, content pipelines, carousel export, AI 
rewrite with tone presets, personalized to voice and style data.</p><p>One source tweet, four output formats: LinkedIn long-form, IG carousel, newsletter, quote card. Three-pane UI: source feed on the left, tabbed editor in the middle (one tab per target format), live preview on the right.</p><p>Two choices worth calling out:</p><ul><li><p><strong>Format-specific editor tabs instead of a single &#8220;transform&#8221; button.</strong> Each target format has its own constraints, and he exposes them as first-class surfaces. This is the difference between treating output formats as parameters to one generator versus treating them as distinct tools that share an upstream source. The second is what a Solutions Architect picks when the user has real editorial control needs. It&#8217;s also the tool-use design pattern MCP formalizes &#8212; you don&#8217;t need MCP to pick it, you need the instinct that separates tool boundaries along user-decision boundaries.</p></li><li><p><strong>Voice personalization.</strong> Most &#8220;AI rewrite&#8221; tools regress your writing toward a generic model voice. Armaan&#8217;s design carries personal-voice signal into every tab, so the four generated variants don&#8217;t all sound vaguely like a LinkedIn guru. The failure mode of cross-platform content tools is well-known: you write once, four generated variants all need a hand-rewrite, the tool saves you zero minutes. Closing that gap is a real-world skill the canonical AIfolio memory pillar hints at but most projects miss.</p></li></ul><h2>Latency as a product feature (Red Sox)</h2><p><strong>Stack:</strong> Django, Vue.js, Redis, Celery, PostgreSQL, Okta SSO, Docker. Live at Fenway Park, Jan&#8211;Sep 2024.</p><p>Live batting-lineup API for journalists during games. Previous method: a handwritten whiteboard. If the API went down mid-game, press couldn&#8217;t report the lineup before first pitch.</p><p>The number most new grads would chase is features. 
Armaan chased <strong>tail latency</strong>: 1.2s &#8594; 121ms, a ~90% cut, via Redis on the hot path and Celery for everything off-path. Two architectural choices inside that:</p><ul><li><p><strong>Cache on the read path, not everywhere.</strong> Redis in front of lineup reads means the request journalists actually make &#8212; &#8220;what&#8217;s the roster right now&#8221; &#8212; never waits on downstream services. Cache invalidation is keyed to lineup changes, so staleness lives in a narrow, owned window.</p></li><li><p><strong>Celery for everything the user isn&#8217;t waiting on.</strong> Notifications, logging, eventual-consistency writes &#8212; off the request thread. The hot path becomes trivial to reason about because it does one thing.</p></li></ul><p>None of this is AI engineering. It matters for an AI portfolio anyway, because <a href="https://aiengineerweekly.substack.com/p/the-retrofit-tax">the Retrofit Tax</a> is what teams pay when they try to add observability, latency discipline, or governance to a system that was shipped without them. Armaan doesn&#8217;t retrofit. Production-shaped defaults go in the original design, where the cost of adding them is close to zero. 
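</p><p>The read-path pattern above reduces to cache-aside keyed on the one hot query. In this sketch an in-memory dict stands in for Redis, and every name is illustrative:</p>

```python
class LineupReadCache:
    """Cache-aside for the hot read ("what's the lineup right now").
    A dict stands in for Redis here; slow work (notifications, logging,
    eventual-consistency writes) would go to a task queue, never this path."""

    def __init__(self):
        self._store = {}

    def get_lineup(self, game_id, load_from_db):
        key = f"lineup:{game_id}"
        if key in self._store:
            return self._store[key]        # hot path: no downstream wait
        lineup = load_from_db(game_id)     # DB hit only on a miss
        self._store[key] = lineup
        return lineup

    def invalidate(self, game_id):
        # Keyed to lineup changes, so the staleness window is exactly
        # the gap between the roster edit and this call.
        self._store.pop(f"lineup:{game_id}", None)
```

<p>Invalidating on the write keeps the staleness window narrow and owned, which is what makes the hot path trivial to reason about.</p><p>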
That&#8217;s the posture that keeps his work from accruing tax as he scales it.</p><h2>Guardrails in hour one, not hour forty (AgentOps hackathon)</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Kjud!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F028efe86-f165-495f-bccf-57388ab4b007_1920x1440.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Kjud!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F028efe86-f165-495f-bccf-57388ab4b007_1920x1440.png 424w, https://substackcdn.com/image/fetch/$s_!Kjud!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F028efe86-f165-495f-bccf-57388ab4b007_1920x1440.png 848w, https://substackcdn.com/image/fetch/$s_!Kjud!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F028efe86-f165-495f-bccf-57388ab4b007_1920x1440.png 1272w, https://substackcdn.com/image/fetch/$s_!Kjud!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F028efe86-f165-495f-bccf-57388ab4b007_1920x1440.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Kjud!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F028efe86-f165-495f-bccf-57388ab4b007_1920x1440.png" width="1456" height="1092" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/028efe86-f165-495f-bccf-57388ab4b007_1920x1440.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Kjud!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F028efe86-f165-495f-bccf-57388ab4b007_1920x1440.png 424w, https://substackcdn.com/image/fetch/$s_!Kjud!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F028efe86-f165-495f-bccf-57388ab4b007_1920x1440.png 848w, https://substackcdn.com/image/fetch/$s_!Kjud!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F028efe86-f165-495f-bccf-57388ab4b007_1920x1440.png 1272w, https://substackcdn.com/image/fetch/$s_!Kjud!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F028efe86-f165-495f-bccf-57388ab4b007_1920x1440.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>                             Guardrails Setup</em></p><p><strong>Stack:</strong> OpenAI Agents SDK. 3 hours. 2nd place.</p><p>Most hackathon demos ship a working prototype and skip safety entirely &#8212; the rubric doesn&#8217;t require it, and guardrails feel like production overhead. Armaan shipped <strong>input guardrails from line one</strong>, with prompt-injection blocks and harmful-request blocks both live in the demo.</p><p>Reading this as a minor detail misses what it signals. The OpenAI Agents SDK exposes guardrail primitives cheaply; almost nobody uses them on a hackathon timeline. Using them anyway is the same instinct as caching the Red Sox hot path: <em>production-shaped defaults on demo-shaped timelines</em>.</p><p>This is also where the behavioral-reliability work most AI engineers learn after their first incident shows up <em>pre-incident</em>. 
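</p><p>Framework-agnostic, the hour-one input gate is small; the Agents SDK wraps the same idea in first-class primitives. The patterns and messages below are illustrative, not Armaan&#8217;s actual rules:</p>

```python
import re

# Illustrative deny-list; production gates layer classifier models on top.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (your )?system prompt",
]

def input_guardrail(user_msg: str) -> tuple[bool, str]:
    """Run before the model ever sees the input.
    Returns (allowed, reason); a tripped gate short-circuits the agent."""
    lowered = user_msg.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            return False, f"blocked: matched {pattern!r}"
    return True, "ok"
```

<p>The gate runs on every request; when it trips, the agent is never invoked and the user gets a canned refusal instead of whatever the model would have done.</p><p>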
Validation gates, input filters, behavioral guardrails before a model&#8217;s output reaches the user &#8212; these are not optional for production systems, but they&#8217;re almost always added reactively. Reaching for them at hour one of a three-hour build is the instinct, and it&#8217;s not teachable under deadline.</p><p>For an FDE hiring manager, this is the cheapest-to-evaluate signal in the portfolio.</p><h2>Schema as operational product design (Feedshare)</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9Vaj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e202140-13c5-47bd-bd72-19c67d0167bc_1920x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9Vaj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e202140-13c5-47bd-bd72-19c67d0167bc_1920x1536.png 424w, https://substackcdn.com/image/fetch/$s_!9Vaj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e202140-13c5-47bd-bd72-19c67d0167bc_1920x1536.png 848w, https://substackcdn.com/image/fetch/$s_!9Vaj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e202140-13c5-47bd-bd72-19c67d0167bc_1920x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!9Vaj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e202140-13c5-47bd-bd72-19c67d0167bc_1920x1536.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!9Vaj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e202140-13c5-47bd-bd72-19c67d0167bc_1920x1536.png" width="1456" height="1165" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9e202140-13c5-47bd-bd72-19c67d0167bc_1920x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1165,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9Vaj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e202140-13c5-47bd-bd72-19c67d0167bc_1920x1536.png 424w, https://substackcdn.com/image/fetch/$s_!9Vaj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e202140-13c5-47bd-bd72-19c67d0167bc_1920x1536.png 848w, https://substackcdn.com/image/fetch/$s_!9Vaj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e202140-13c5-47bd-bd72-19c67d0167bc_1920x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!9Vaj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e202140-13c5-47bd-bd72-19c67d0167bc_1920x1536.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset 
pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>                                  <em>Feedshare</em></p><p><strong>Stack:</strong> SwiftUI, Firebase, iOS. 100+ campus users.</p><p>The framing &#8212; &#8220;campus free food shouldn&#8217;t die in a group chat&#8221; &#8212; constrains the whole system. 
The schema isn&#8217;t &#8220;post + comments.&#8221; It&#8217;s:</p><ul><li><p>Photo-first feed (you don&#8217;t walk across campus on a text description)</p></li><li><p>Map pins (location is a first-class field, not a comment)</p></li><li><p>Multi-photo upload, up to 5 (proof, not hype)</p></li><li><p>Room + headcount fields (so you know whether it&#8217;s worth the walk before you leave)</p></li></ul><p>Every field on the post form corresponds to a decision the user makes: <em>is this real, where is it, is it still there, is it worth the walk</em>. The schema is the product.</p><p>This distinguishes FDE work from generic backend work. The post schema isn&#8217;t generic &#8212; it encodes the decision-making workflow of the specific user on the specific campus. Firebase gets him there fast because the real work isn&#8217;t the backend; it&#8217;s figuring out what data the user&#8217;s decision actually requires and refusing to collect anything else. Shipping to 100+ students on a campus with real food-waste pressure means the hypothesis has already been validated in the field.</p><h2>Cold-start as staged architecture, not a hack (co-op recommender)</h2><p>While at NExT Consulting, he built a co-op recommender for Northeastern students &#8212; planned for rollout to 2,500 students in intro classes. New students have no history, the classical cold-start trap that kills most recommendation systems before they ship.</p><p>His rollout: <strong>text matching first</strong> (profile-to-role matching for the cold-start population), <strong>then shift to collaborative filtering</strong> once interaction data accumulates. &#8220;Good matches from day one, better over time.&#8221;</p><p>This is the staged-architecture move a Solutions Architect picks. You don&#8217;t wait for data to deploy the system, and you don&#8217;t stay on cold-start forever. You design the data pipeline so the transition is a config change, not a rewrite. 
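</p><p>Sketched out, the stage switch is one threshold check at the ranking entry point (the threshold value and function names are hypothetical):</p>

```python
MIN_INTERACTIONS = 20  # illustrative cutover point, tuned from real data

def recommend(student, interactions, text_match, collab_filter):
    """Route each student to the stage their data supports.
    text_match: profile-to-role matching, works with zero history.
    collab_filter: interaction-based ranking, better once data exists."""
    if len(interactions) < MIN_INTERACTIONS:
        return text_match(student)                   # day-one quality
    return collab_filter(student, interactions)      # better over time
```

<p>Because both stages share one pipeline and one call site, the cutover is a config value, not a rewrite.</p><p>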
For a new grad to pick the staged approach on a real user-impact system is unusual &#8212; most new engineers either over-engineer the eventual collaborative-filter stack and ship late, or ship the text-match version with no path off it and accumulate the tax later.</p><h2>Why this maps to FDE / Solutions Architect work</h2><p>Forward-deployed engineering is:</p><ol><li><p>Understand a domain you didn&#8217;t grow up in <em>faster than the customer thinks is possible</em>.</p></li><li><p>Compose a working system from heterogeneous pieces (their stack + yours).</p></li><li><p>Land it with safety, observability, and latency budgets already wired in.</p></li><li><p>Iterate on the signal, not the stack.</p></li></ol><p>Armaan has already run this pattern across five unrelated domains: chemical plant telemetry, a baseball press box, campus food logistics, long-form podcast content, and an AI agent under safety scrutiny. The domains are portable. The <em>habit</em> is portable.</p><p>The concepts an AI-engineering portfolio needs to demonstrate &#8212; RAG, tool-use, voice continuity, agent design &#8212; are necessary. The real-world skills that make those concepts production-viable are what&#8217;s rare: schema foresight, tail-latency discipline, pre-incident guardrails, schema-as-product thinking, staged rollout. Armaan&#8217;s portfolio has both layers. That&#8217;s the thing most new-grad hires don&#8217;t come with.</p><p>Solutions Architect work has a narrower shape &#8212; more &#8220;compose a durable reference architecture for customers&#8221; than &#8220;ship a one-off&#8221; &#8212; but the underlying disposition is identical. Pick the production-shaped default, not the demo-shaped one. Design the data model for the surface you&#8217;ll build next. Treat latency and guardrails as product features. 
Refuse to accrue Retrofit Tax.</p><h2>How to reach him</h2><ul><li><p><strong>Portfolio:</strong> <a href="https://armaanagrawal.com">armaanagrawal.com</a> &#8212; worth reading in order, it&#8217;s structured as seven chapters</p></li><li><p><strong>GitHub:</strong> <a href="https://github.com/airman416">github.com/airman416</a> &#8212; SamParrBot (SamGPT) and Content-Engine are the deepest reads</p></li><li><p><strong>LinkedIn:</strong> <a href="https://linkedin.com/in/agr1">linkedin.com/in/agr1</a></p></li><li><p><strong>Target roles:</strong> Forward Deployed Engineer, Solutions Architect</p></li></ul><div><hr></div><p><strong>For readers building their own AIfolio:</strong> the pattern that repeats across his work is cheaper to adopt than you&#8217;d think. Ship guardrails in hour one, not hour forty. Design your RAG chunking schema around the second surface you&#8217;ll build, not the first. Stage your cold-start into a config change instead of a rewrite. Cache the read path before you need to. None of that is senior-only work. It&#8217;s just the production-shaped default most engineers don&#8217;t pick until the first incident teaches them to &#8212; and the reason their portfolios end up carrying Retrofit Tax instead of compounding.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! 
Subscribe for free</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[The Vercel Breach RCA: Agent Identity Is the New Attack Surface]]></title><description><![CDATA[One OAuth grant, one compromised AI vendor, one platform breach. Every team deploying agents shares the same architecture.]]></description><link>https://theairuntime.com/p/the-vercel-breach-rca-agent-identity</link><guid isPermaLink="false">https://theairuntime.com/p/the-vercel-breach-rca-agent-identity</guid><dc:creator><![CDATA[The AI Runtime]]></dc:creator><pubDate>Thu, 23 Apr 2026 11:05:52 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!WCPK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12700739-2cc1-423e-875a-cd3a34883123_1024x559.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p><strong>TL;DR</strong> - On April 19, 2026, Vercel <a href="https://vercel.com/kb/bulletin/vercel-april-2026-security-incident">disclosed a breach of its internal systems</a>. The root cause wasn&#8217;t a zero-day, a supply chain poisoning of an npm package, or a perimeter failure. It was an OAuth grant &#8212; a Vercel employee signed into <a href="https://context.ai/">Context.ai</a>, a 300-connector agentic &#8220;AI office suite,&#8221; using their Vercel enterprise Google Workspace account and granted &#8220;Allow All&#8221; permissions. Context.ai was already compromised from a February 2026 infostealer infection on an employee laptop. 
The attacker inherited that OAuth session, pivoted into Vercel&#8217;s Google Workspace, and enumerated customer environment variables that were stored in plaintext-recoverable form because they weren&#8217;t explicitly marked &#8220;sensitive.&#8221; Vercel CEO Guillermo Rauch <a href="https://thehackernews.com/2026/04/vercel-breach-tied-to-context-ai-hack.html">publicly attributed</a> the attacker&#8217;s &#8220;operational velocity&#8221; to AI-accelerated tradecraft. Stolen data was listed on BreachForums for $2M. The mainstream framing &#8212; &#8220;shadow AI,&#8221; &#8220;third-party risk,&#8221; &#8220;OAuth supply chain&#8221; &#8212; is correct but incomplete. The right framing for AI engineers: <strong>this is the first major platform breach where an AI agent holding delegated identity was the pivot point</strong>. Every agent, every MCP server, every AI productivity tool your team is shipping or consuming runs on exactly this pattern. If you operate agents, audit your OAuth grants this week, default-sensitive every secret you store, and stop treating agent vendors as if they were ordinary SaaS.</p></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! 
Subscribe for free</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2>What actually happened</h2><p>Here is the compressed attack chain, reconstructed from <a href="https://vercel.com/kb/bulletin/vercel-april-2026-security-incident">Vercel&#8217;s bulletin</a>, <a href="https://therecord.media/cloud-platform-vercel-says-company-breached-through-ai-tool">Context.ai&#8217;s advisory</a>, <a href="https://www.helpnetsecurity.com/2026/04/20/vercel-breached/">Hudson Rock&#8217;s infostealer analysis</a>, and <a href="https://www.trendmicro.com/en_us/research/26/d/vercel-breach-oauth-supply-chain.html">Trend Micro&#8217;s post-incident writeup</a>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ICLw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af08f1c-b440-4a1a-9b04-41b0820a9491_530x694.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ICLw!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af08f1c-b440-4a1a-9b04-41b0820a9491_530x694.png 424w, https://substackcdn.com/image/fetch/$s_!ICLw!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af08f1c-b440-4a1a-9b04-41b0820a9491_530x694.png 848w, 
https://substackcdn.com/image/fetch/$s_!ICLw!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af08f1c-b440-4a1a-9b04-41b0820a9491_530x694.png 1272w, https://substackcdn.com/image/fetch/$s_!ICLw!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af08f1c-b440-4a1a-9b04-41b0820a9491_530x694.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ICLw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af08f1c-b440-4a1a-9b04-41b0820a9491_530x694.png" width="530" height="694" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8af08f1c-b440-4a1a-9b04-41b0820a9491_530x694.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:694,&quot;width&quot;:530,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:35823,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/194970480?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af08f1c-b440-4a1a-9b04-41b0820a9491_530x694.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ICLw!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af08f1c-b440-4a1a-9b04-41b0820a9491_530x694.png 424w, 
https://substackcdn.com/image/fetch/$s_!ICLw!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af08f1c-b440-4a1a-9b04-41b0820a9491_530x694.png 848w, https://substackcdn.com/image/fetch/$s_!ICLw!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af08f1c-b440-4a1a-9b04-41b0820a9491_530x694.png 1272w, https://substackcdn.com/image/fetch/$s_!ICLw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8af08f1c-b440-4a1a-9b04-41b0820a9491_530x694.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>                                                                              Attack chain</em></p><p>Each hop is worth pausing on.</p><p><strong>The initial compromise was human, not technical.</strong> According to <a href="https://thehackernews.com/2026/04/vercel-breach-tied-to-context-ai-hack.html">Hudson Rock&#8217;s analysis</a>, the Context.ai employee&#8217;s browser history showed active searches for Roblox &#8220;auto-farm&#8221; scripts &#8212; a classic Lumma Stealer distribution vector. An enterprise SaaS vendor&#8217;s entire security posture was compromised because one employee downloaded game cheats on a corporate laptop. This is a failure of endpoint policy, not crypto or architecture.</p><p><strong>The pivot was an OAuth grant, not a credential theft.</strong> Context.ai&#8217;s own <a href="https://therecord.media/cloud-platform-vercel-says-company-breached-through-ai-tool">statement</a> is worth reading carefully: Vercel wasn&#8217;t even a Context.ai customer. A single Vercel employee had signed up for the product using their Vercel enterprise Google account and granted full read access to Google Drive during onboarding. When Context.ai&#8217;s OAuth token store was compromised, the attacker acquired not a password, but a <em>delegated session</em> &#8212; the authority to act as that employee inside Vercel&#8217;s Google Workspace.</p><p><strong>The blast radius was set by Vercel&#8217;s &#8220;sensitive vs. non-sensitive&#8221; environment variable model.</strong> Vercel encrypts all env vars at rest. But it has a distinction: env vars marked as &#8220;sensitive&#8221; are stored such that they cannot be read back even by the platform itself; non-sensitive env vars can be decrypted to plaintext for display in dashboards. The attacker couldn&#8217;t touch sensitive vars. 
Everything else &#8212; API keys, database credentials, signing keys that customers had never opted into the sensitive treatment &#8212; was <a href="https://www.trendmicro.com/en_us/research/26/d/vercel-breach-oauth-supply-chain.html">readable by enumeration</a>.</p><p><strong>The velocity was the tell.</strong> Rauch&#8217;s <a href="https://thehackernews.com/2026/04/vercel-breach-tied-to-context-ai-hack.html">public claim</a> is that the attacker moved fast enough, with enough understanding of Vercel&#8217;s internal structure, that AI augmentation is the most likely explanation. This is interpretive &#8212; attribution-by-velocity is not a forensic artifact &#8212; but it lines up with a pattern Trend Micro, Microsoft, and others have flagged across 2026: LLM-driven reconnaissance that parallelizes schema discovery, endpoint probing, and credential-format recognition at rates that break detection baselines calibrated to human attackers.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!WCPK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12700739-2cc1-423e-875a-cd3a34883123_1024x559.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!WCPK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12700739-2cc1-423e-875a-cd3a34883123_1024x559.png 424w, https://substackcdn.com/image/fetch/$s_!WCPK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12700739-2cc1-423e-875a-cd3a34883123_1024x559.png 848w, 
https://substackcdn.com/image/fetch/$s_!WCPK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12700739-2cc1-423e-875a-cd3a34883123_1024x559.png 1272w, https://substackcdn.com/image/fetch/$s_!WCPK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12700739-2cc1-423e-875a-cd3a34883123_1024x559.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!WCPK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12700739-2cc1-423e-875a-cd3a34883123_1024x559.png" width="1024" height="559" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/12700739-2cc1-423e-875a-cd3a34883123_1024x559.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:559,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1027006,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/194970480?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12700739-2cc1-423e-875a-cd3a34883123_1024x559.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!WCPK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12700739-2cc1-423e-875a-cd3a34883123_1024x559.png 424w, 
https://substackcdn.com/image/fetch/$s_!WCPK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12700739-2cc1-423e-875a-cd3a34883123_1024x559.png 848w, https://substackcdn.com/image/fetch/$s_!WCPK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12700739-2cc1-423e-875a-cd3a34883123_1024x559.png 1272w, https://substackcdn.com/image/fetch/$s_!WCPK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12700739-2cc1-423e-875a-cd3a34883123_1024x559.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line 
x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>                                                               <em>Breach RCA</em></p><div><hr></div><h2>Why the standard framings are incomplete</h2><p>The Vercel breach is getting framed three ways in the security press. All three are partially right and all three miss the point for AI engineers.</p><p><strong>Framing 1: &#8220;Third-party risk / shadow AI.&#8221;</strong> True. But this framing leads to the wrong remediation &#8212; better vendor questionnaires, annual SOC 2 reviews, procurement gates. None of that would have prevented this. Context.ai likely had SOC 2. A Vercel employee signed up as a consumer, bypassing procurement entirely. Point-in-time vendor assessments are worthless against active compromise.</p><p><strong>Framing 2: &#8220;OAuth supply chain attack.&#8221;</strong> True. But OAuth supply chain attacks have been understood for years &#8212; Codecov, CircleCI, the Heroku/Travis CI incident. What&#8217;s new here isn&#8217;t the OAuth mechanism. It&#8217;s the <em>category of vendor</em> on the other side of the grant.</p><p><strong>Framing 3: &#8220;Platform env var model needs defaults.&#8221;</strong> True. Vercel has <a href="https://www.bleepingcomputer.com/news/security/vercel-confirms-breach-as-hackers-claim-to-be-selling-stolen-data/">already rolled out</a> dashboard changes and is pushing customers toward the sensitive-variable feature. This is good, and every platform should copy it. But this is a Vercel-specific lesson, not an industry-wide one.</p><p>The framing that actually matters for AI engineers is the one none of these capture: <strong>the intermediary in this breach was an AI agent holding delegated identity, and the pattern that made it dangerous is the pattern every agent deployment replicates.</strong></p><p>Context.ai markets itself as an agent platform. 
Per their own <a href="https://www.businesswire.com/news/home/20250708658619/en/Context-Launches-the-Worlds-First-AI-Native-Office-Suite-to-Automate-2.5-Trillion-Hours-of-Annual-Knowledge-Work">launch materials</a>, its agents &#8220;dynamically traverse entire organizational knowledge bases.&#8221; To do that well, it needs broad, persistent access to Drive, Slack, email, code repos &#8212; and it acquires that access through long-lived OAuth grants from individual users. This is not a Context.ai pathology. It&#8217;s the architectural baseline for every agentic product shipping today: Cursor&#8217;s enterprise connectors, Glean&#8217;s agents, the exploding MCP server ecosystem, every &#8220;connect your Google Drive&#8221; button in every AI startup demo.</p><p>When the agent is compromised, the delegated identity is compromised. When the delegated identity is an enterprise Google Workspace account, the compromise propagates to everything that account can touch.</p><div><hr></div><h2>A useful handle: Delegated Identity Blast Radius</h2><p>A shorthand for this pattern, which I&#8217;ll use for the rest of the piece: <strong>Delegated Identity Blast Radius (DIBR)</strong> &#8212; the scope of systems an attacker inherits by compromising an agent, equal to the union of all permissions granted to that agent across all delegating users and tenants.</p><p>DIBR has three properties that distinguish it from pre-agent OAuth risk.</p><p><strong>1. Delegation collapses identity.</strong> A traditional SaaS integration might hold a scoped API key for &#8220;read Slack messages.&#8221; That&#8217;s a credential, and it&#8217;s bounded. An agent holding an OAuth grant with &#8220;Allow All&#8221; on Drive doesn&#8217;t hold a credential &#8212; it holds a <em>session</em>. If the agent&#8217;s vendor is compromised, the attacker is now the human. 
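</p><p>In code terms, the DIBR definition above is just a set union over delegated grants. A minimal sketch (the grant records and scope names are illustrative, not drawn from the incident):</p>

```python
# DIBR: the union of all permissions granted to an agent, across every
# delegating user and tenant. All records below are illustrative.
GRANTS = [
    {"user": "alice@platform.example", "scopes": {"drive.read", "calendar.read"}},
    {"user": "bob@tenant.example", "scopes": {"drive.read", "mail.send"}},
]

def blast_radius(grants):
    """Scope of systems an attacker inherits by compromising the agent."""
    radius = set()
    for grant in grants:
        radius |= grant["scopes"]
    return radius

# Compromising the agent vendor yields the whole union at once:
# blast_radius(GRANTS) -> {"drive.read", "calendar.read", "mail.send"}
```

<p>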
They can read everything the human can read, compose everything the human can compose, move laterally through every system the human&#8217;s SSO has reach into. The credential/identity distinction that security teams rely on stops working at the agent boundary.</p><p><strong>2. Consent UX was never designed for agents.</strong> OAuth scopes describe what an app <em>can</em> do at authorization time. They don&#8217;t describe what an autonomous agent <em>will</em> do at runtime. A user approving &#8220;read your Drive&#8221; is not meaningfully consenting to &#8220;this agent will read your Drive, reason over every document, and potentially generate outputs that contain exfiltrated content.&#8221; Google&#8217;s own consent screen shows a list of scopes, not a behavioral model. In the Vercel case, Context.ai&#8217;s onboarding asked for Drive read access &#8212; exactly what the product needs to function. Nothing about the consent flow would flag this as risky. The scope was honest. The runtime behavior was the risk.</p><p><strong>3. Blast radius scales with agent ambition.</strong> The more capable the agent, the worse the breach. A narrow AI &#8212; say, a meeting summarizer that only touches calendar events from the last 48 hours &#8212; has a bounded DIBR. A &#8220;universal office suite&#8221; agent marketed as being able to understand <em>everything</em> about how your organization works has, by design, maximal DIBR. The product&#8217;s value proposition and its worst-case blast radius are the same vector. 
Context.ai&#8217;s sales pitch &#8212; 300 connectors, cross-tool reasoning, organizational memory &#8212; is also a perfect description of its breach impact.</p><p>This is the uncomfortable part: <strong>you cannot reduce DIBR without reducing agent capability.</strong> The only knobs are scope minimization, token lifetime, and vendor security posture &#8212; and all three trade off against the reason you bought the agent in the first place.</p><div><hr></div><h2>This is not a Vercel problem. It&#8217;s an agent-era problem.</h2><p>The instinct right now is to look at the Vercel incident and ask: &#8220;What did Vercel do wrong, and how do I avoid being Vercel?&#8221; That&#8217;s useful but it&#8217;s the wrong axis. Vercel&#8217;s specific mistakes &#8212; non-sensitive-by-default env vars, enterprise Google Workspace OAuth config permissive enough to allow broad grants &#8212; are patchable and already being patched.</p><p>The unpatchable part is structural. Right now, across the AI ecosystem:</p><ul><li><p>Millions of developers have connected OpenAI, Anthropic, and other API keys to Cursor, Continue, Claude Code, Zed, and dozens of other AI coding tools &#8212; in many cases through OAuth to their GitHub identity, not just a local API key.</p></li><li><p>Every &#8220;connect your Google Drive&#8221; AI product demo creates a long-lived OAuth grant. 
Most of those grants are never revoked, never rotated, and never audited.</p></li><li><p>The Model Context Protocol (MCP) ecosystem is accelerating the pattern: MCP servers are effectively generalized delegation endpoints, and the current norm is to trust them implicitly because they run &#8220;locally&#8221; or &#8220;in the enterprise.&#8221;</p></li><li><p>Agentic IDE integrations &#8212; the kind that autonomously read, edit, and commit across an entire codebase &#8212; hold scopes that would horrify a security auditor if they were attached to a human service account.</p></li></ul><p>Every one of these is a future Context.ai, waiting for its Lumma Stealer moment. The attack pattern is replicable. The defenses, so far, are not standardized.</p><p>There are two structural responses.</p><p><strong>Product-side (if you build agent tools):</strong> Default to the narrowest scope that lets your product demo, not the scope your product&#8217;s full feature set needs. Expose scope minimization as a first-class UI element &#8212; &#8220;Context.ai full access&#8221; versus &#8220;Context.ai research only&#8221; &#8212; so users can make real trust decisions. Short-lived tokens with explicit re-authorization for high-impact actions. Invalidate tokens on any vendor-side incident, not just on user-triggered rotation. Publish an incident response SLA for token compromise.</p><p><strong>Deployment-side (if you ship software that depends on agent vendors):</strong> Treat every agent vendor&#8217;s breach as your breach. The Vercel env var issue isn&#8217;t unique &#8212; audit whether your platform&#8217;s secret store is sensitive-by-default or sensitive-by-opt-in, and switch the defaults. Build a disaster recovery playbook for &#8220;assume our primary AI vendor is compromised right now.&#8221; Most teams don&#8217;t have one. 
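</p><p>That playbook can start life as a machine-readable inventory of what each agent touches. A sketch under assumed names (every vendor, grant, and secret below is hypothetical):</p>

```python
# Hypothetical inventory for the "assume our primary AI vendor is
# compromised right now" drill: what rotates first, and who gets told.
AGENT_INVENTORY = {
    "office-suite-agent": {
        "vendor": "example-agent-vendor",
        "grants": ["workspace:drive.readonly", "workspace:mail.read"],
        "secrets": [
            {"name": "LLM_API_KEY", "priority": 1},     # rotate first
            {"name": "DB_READONLY_URL", "priority": 2},
        ],
        "notify": ["security@yourco.example"],
    },
}

def rotation_order(inventory, agent):
    """Secrets to rotate on vendor compromise, highest priority first."""
    secrets = inventory[agent]["secrets"]
    return [s["name"] for s in sorted(secrets, key=lambda s: s["priority"])]
```

<p>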
The ones that will survive the next incident in this category are the ones that already wrote it.</p><div><hr></div><h2>What to change this week</h2><p>If you&#8217;re reading this and asking &#8220;OK, what do I do Tuesday morning&#8221; &#8212; here is the ordered list. This is the most concrete thing in the piece, so don&#8217;t skip it.</p><p><strong>1. Audit your Google Workspace OAuth grants right now.</strong> In <code>admin.google.com</code> &#8594; Security &#8594; Access and data control &#8594; API controls &#8594; App access control. Export the full list. For every app, check the scopes. The Secure Annex researcher <a href="https://cybernews.com/security/vercel-hacked-after-oauth-compromise/">John Tuckner put it sharply</a>: spend a week asking yourself which scopes you&#8217;ve allowed and whether you recognize all the services. Most teams have never done this exercise and are shocked by what comes back.</p><p><strong>2. Identify every OAuth grant with &#8220;broad&#8221; or &#8220;Allow All&#8221; scopes on Drive, Mail, or Calendar.</strong> These are your highest-DIBR connections. Revoke the ones you don&#8217;t actively use. For the ones you keep, set a calendar reminder to re-audit quarterly. Treat &#8220;broad Drive access&#8221; as a permission on par with production database access, because in breach terms it is.</p><p><strong>3. Check whether your platform&#8217;s secrets are sensitive-by-default.</strong> Vercel&#8217;s model &#8212; sensitive is opt-in &#8212; is common. Netlify, Render, Railway, and Fly.io all have variations on this pattern. Go into your secret store, identify every non-sensitive secret that carries production access, and either rotate-and-mark-sensitive or move to a dedicated secrets manager (AWS Secrets Manager, GCP Secret Manager, Doppler, Infisical, 1Password).</p><p><strong>4. If you ship an agent product, publish your scope minimization story.</strong> This is both a security posture and a differentiation opportunity. 
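</p><p>One way to make that story concrete is a published scope manifest plus an audit check against it. A hedged sketch (the tier names and the choice of Google scopes are illustrative, though the scope URLs themselves are real Google OAuth scopes):</p>

```python
# Hypothetical scope manifest for an agent product: request the minimal
# tier by default and make anything broader an explicit opt-in.
SCOPE_TIERS = {
    "research_only": ["https://www.googleapis.com/auth/drive.file"],     # per-file
    "full_access": ["https://www.googleapis.com/auth/drive.readonly"],   # whole Drive
}

# High-DIBR scopes worth flagging in any grant audit, whatever the tier.
BROAD_SCOPES = {
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/drive.readonly",
    "https://mail.google.com/",
}

def audit_grant(requested, tier="research_only"):
    """Return scopes outside the declared tier, and broad scopes to flag."""
    allowed = set(SCOPE_TIERS[tier])
    outside = [s for s in requested if s not in allowed]
    flagged = [s for s in requested if s in BROAD_SCOPES]
    return outside, flagged
```

<p>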
Buyers in 2026 are going to start asking &#8220;what happens when you get breached&#8221; &#8212; teams that have a good answer will win. Teams that don&#8217;t, won&#8217;t.</p><p><strong>5. If you run agents in production, assume the AI vendor is already compromised and plan the blast radius.</strong> The exercise: pick your most-connected agent. Write down every credential, scope, and system it touches. Imagine you wake up tomorrow to a vendor breach disclosure. Which secrets rotate first? Which systems need re-authorization? Which customers need notification? If this exercise takes more than four hours, you don&#8217;t have a runbook.</p><p><strong>6. Recalibrate your detection baselines for AI-accelerated enumeration.</strong> If your SIEM alerts are tuned to &#8220;human-paced&#8221; attacker behavior &#8212; unique resource enumeration rate, error-to-success ratio recovery &#8212; they may under-alert against AI-augmented operators. Trend Micro&#8217;s writeup has <a href="https://www.trendmicro.com/en_us/research/26/d/vercel-breach-oauth-supply-chain.html">specific guidance</a> on thresholds to revisit. This is worth a security team afternoon.</p><div><hr></div><h2>What to watch</h2><p>Two questions will shape the next six months.</p><p><strong>Will any OAuth provider ship &#8220;agent consent&#8221; as a distinct flow?</strong> Google, Microsoft, and Okta all have the signal that agent grants are different in character from traditional app grants. What the ecosystem needs is a new consent primitive &#8212; something like a &#8220;delegated agent session&#8221; with mandatory short lifetime, mandatory re-authorization for high-impact actions, and a scope model expressive enough to describe runtime behavior, not just capability surface. 
The first provider to ship this will reset the security baseline for every agent product downstream.</p><p><strong>Will platform providers make sensitive-by-default the standard?</strong> Vercel is clearly moving that direction post-incident. If competitors follow, the industry gets safer. If they don&#8217;t, Vercel customers end up paying a security tax while customers of other platforms keep eating the old default. Watch the next 60 days of product announcements from Netlify, Render, and Cloudflare.</p><p>The Vercel breach is going to be cited for years. Not because the technical details are novel &#8212; they mostly aren&#8217;t &#8212; but because it&#8217;s the first high-profile case where the intermediary was an AI agent holding delegated identity, and the ecosystem reaction will set precedent for how we treat agent vendors from here on.</p><p>If you&#8217;re building agents, you have a few months to fix your defaults before someone else&#8217;s breach becomes your problem. Use them.</p>]]></content:encoded></item><item><title><![CDATA[OpenAI&#8217;s AI Deployment Playbook Is Missing a Chapter]]></title><description><![CDATA[Their whitepaper nails the org chart. 
It ignores the engineering discipline that determines whether AI products actually stay in production.]]></description><link>https://theairuntime.com/p/openais-ai-deployment-playbook-is</link><guid isPermaLink="false">https://theairuntime.com/p/openais-ai-deployment-playbook-is</guid><dc:creator><![CDATA[The AI Runtime]]></dc:creator><pubDate>Wed, 22 Apr 2026 11:03:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Rnfg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9edf0fef-1e73-4d11-969e-ef9629217f9e_1408x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p><strong>TL;DR:</strong> OpenAI&#8217;s &#8220;From Experiments to Deployments&#8221; <a href="https://cdn.openai.com/business-guides-and-resources/from-experiments-to-deployments_whitepaper_11-25.pdf">whitepaper </a>lays out a solid four-phase framework for scaling AI &#8212; foundations, fluency, prioritization, build. But Phase 4 reveals a critical gap: the whitepaper treats evaluation as a step in a checklist rather than a continuous engineering discipline. It describes <em>what</em> to measure (retrieval quality, summarization accuracy, guardrail compliance) without naming <em>who owns it</em> or <em>how it operates at scale</em>. That missing chapter is Model Reliability Engineering &#8212; the discipline that sits between the eval checklist and the production system that keeps your AI products trustworthy over time. 
If you&#8217;re an AI engineer reading OpenAI&#8217;s playbook, understand the organizational framework, but build MRE into your Phase 4 from day one.</p></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2>The Whitepaper Gets a Lot Right</h2><p>Credit where it&#8217;s earned. OpenAI&#8217;s whitepaper, published in late 2025, distills real lessons from enterprise partnerships with BBVA, Uber, Lowe&#8217;s, Booking.com, and others into a four-phase model for scaling AI:</p><p><strong>Phase 1: Set the foundations</strong> &#8212; executive alignment, governance, data access. The &#8220;compliance fast path&#8221; example from Figma is particularly instructive: data guardrails that enable experimentation rather than blocking it.</p><p><strong>Phase 2: Create AI fluency</strong> &#8212; literacy programs, champion networks, SME development. BBVA&#8217;s journey from 3,000 to 11,000 (and now 120,000) ChatGPT Enterprise licenses, powered by a distributed champion network, is the best public case study of this phase working at scale.</p><p><strong>Phase 3: Scope and prioritize</strong> &#8212; repeatable intake processes, impact/effort scoring, reuse-first design. 
Standard portfolio management, adapted well for AI&#8217;s unique characteristics.</p><p><strong>Phase 4: Build and scale products</strong> &#8212; cross-functional teams, incremental builds, gated checkpoints, continuous evaluation.</p><p>Phase 4 is where the whitepaper gets interesting &#8212; and where it stops too soon.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Rnfg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9edf0fef-1e73-4d11-969e-ef9629217f9e_1408x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Rnfg!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9edf0fef-1e73-4d11-969e-ef9629217f9e_1408x768.png 424w, https://substackcdn.com/image/fetch/$s_!Rnfg!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9edf0fef-1e73-4d11-969e-ef9629217f9e_1408x768.png 848w, https://substackcdn.com/image/fetch/$s_!Rnfg!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9edf0fef-1e73-4d11-969e-ef9629217f9e_1408x768.png 1272w, https://substackcdn.com/image/fetch/$s_!Rnfg!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9edf0fef-1e73-4d11-969e-ef9629217f9e_1408x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Rnfg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9edf0fef-1e73-4d11-969e-ef9629217f9e_1408x768.png" width="1408" height="768" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9edf0fef-1e73-4d11-969e-ef9629217f9e_1408x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2469135,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/194025504?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9edf0fef-1e73-4d11-969e-ef9629217f9e_1408x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Rnfg!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9edf0fef-1e73-4d11-969e-ef9629217f9e_1408x768.png 424w, https://substackcdn.com/image/fetch/$s_!Rnfg!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9edf0fef-1e73-4d11-969e-ef9629217f9e_1408x768.png 848w, https://substackcdn.com/image/fetch/$s_!Rnfg!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9edf0fef-1e73-4d11-969e-ef9629217f9e_1408x768.png 1272w, https://substackcdn.com/image/fetch/$s_!Rnfg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9edf0fef-1e73-4d11-969e-ef9629217f9e_1408x768.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>                                                                   MRE in the mix</em></p><h2>Where MRE Fills the Gap</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!W8WE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4dc876c-9e88-4972-a0a7-ce8adf3cf6ff_960x1184.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!W8WE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4dc876c-9e88-4972-a0a7-ce8adf3cf6ff_960x1184.png 424w, 
https://substackcdn.com/image/fetch/$s_!W8WE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4dc876c-9e88-4972-a0a7-ce8adf3cf6ff_960x1184.png 848w, https://substackcdn.com/image/fetch/$s_!W8WE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4dc876c-9e88-4972-a0a7-ce8adf3cf6ff_960x1184.png 1272w, https://substackcdn.com/image/fetch/$s_!W8WE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4dc876c-9e88-4972-a0a7-ce8adf3cf6ff_960x1184.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!W8WE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4dc876c-9e88-4972-a0a7-ce8adf3cf6ff_960x1184.png" width="960" height="1184" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c4dc876c-9e88-4972-a0a7-ce8adf3cf6ff_960x1184.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1184,&quot;width&quot;:960,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:70991,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/194025504?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4dc876c-9e88-4972-a0a7-ce8adf3cf6ff_960x1184.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!W8WE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4dc876c-9e88-4972-a0a7-ce8adf3cf6ff_960x1184.png 424w, https://substackcdn.com/image/fetch/$s_!W8WE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4dc876c-9e88-4972-a0a7-ce8adf3cf6ff_960x1184.png 848w, https://substackcdn.com/image/fetch/$s_!W8WE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4dc876c-9e88-4972-a0a7-ce8adf3cf6ff_960x1184.png 1272w, https://substackcdn.com/image/fetch/$s_!W8WE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4dc876c-9e88-4972-a0a7-ce8adf3cf6ff_960x1184.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" 
stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The whitepaper&#8217;s four phases get you to the launch gate. <a href="https://aiengineerweekly.substack.com/p/model-reliability-engineering-who">MRE</a> (Model Reliability Engineering) is the operational discipline that keeps AI products reliable after deployment &#8212; monitoring behavioral SLOs, detecting drift, and feeding failures back into the build cycle.</p><h2>The Gap in Phase 4</h2><p>The whitepaper includes a table that traces a Q&amp;A agent through three evaluation stages: retrieval (does it find the right information?), summarization and grounding (does it synthesize useful, cited answers?), and guardrails (does it stay within approved data, tone, and safety guidelines?). Each stage has a decision gate: continue, refine, or stop.</p><p>This is a good checklist. It is not an engineering discipline.</p><p>Here&#8217;s what the table doesn&#8217;t address:</p><p><strong>Who owns these evaluations after launch?</strong> The whitepaper assigns &#8220;SME review&#8221; and &#8220;safety review&#8221; as activities, but never identifies a team or role responsible for ongoing behavioral monitoring. In traditional software, SRE owns uptime. In ML systems, MLOps owns pipeline health. In AI products built on LLMs, who owns <em>behavioral reliability</em> &#8212; the question of whether the model is still doing what you deployed it to do?</p><p><strong>What happens when the model changes underneath you?</strong> The whitepaper acknowledges that &#8220;AI systems don&#8217;t follow fixed rules&#8221; and that &#8220;capabilities evolve in weeks, not quarters.&#8221; But the evaluation framework is presented as a build-time activity.
When your model provider ships a new version &#8212; and they will, roughly every three days according to the whitepaper&#8217;s own graphic &#8212; who reruns those evals? Who detects behavioral drift before your users do?</p><p><strong>Where are the SLOs?</strong> The table has qualitative goals (&#8220;accurate, grounded, and useful&#8221;) but no quantitative thresholds. In SRE, you don&#8217;t say &#8220;the system should be reliable&#8221; &#8212; you say &#8220;99.9% availability measured over a 30-day rolling window.&#8221; AI products need the same precision: &#8220;faithfulness score above 0.85 on our evaluation suite, measured daily across a stratified sample of production queries.&#8221;</p><p><strong>What&#8217;s the incident response playbook?</strong> When a guardrail fails &#8212; and it will &#8212; what happens? The whitepaper&#8217;s &#8220;continue/refine/stop&#8221; gates are pre-launch decisions. Post-launch, you need detection, triage, mitigation, and postmortem processes. You need to know whether to roll back the prompt, switch models, tighten the guardrail, or escalate to a human.</p><h2>The Missing Chapter: Model Reliability Engineering</h2><p>These aren&#8217;t minor gaps. They&#8217;re the difference between a successful pilot and a production system that earns trust over months and years.</p><p>The discipline that fills this gap is what I call <strong>Model Reliability Engineering (MRE)</strong> &#8212; the practice of owning model behavior reliability in production. MRE borrows the operational rigor of Site Reliability Engineering and applies it to the unique challenges of AI systems that generate outputs based on patterns rather than predefined logic.</p><p>MRE operates through two layers:</p><p><strong>Context Engineering</strong> &#8212; ensuring the model receives the right information, in the right format, at the right time. This covers retrieval quality, prompt construction, tool orchestration, and the entire input pipeline.
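</p><p>One concrete guard at this layer, sketched in Python: keep irrelevant material from consuming the context window. The relevance scores and the character budget here are stand-ins for your retriever&#8217;s scores and a real token counter:</p>

```python
# Sketch of a Context Engineering guard. Chunks are (text, relevance_score)
# pairs from a retriever; the budget is in characters for simplicity
# (a real pipeline would count tokens).

def build_context(chunks: list, min_score: float = 0.5,
                  budget: int = 4000) -> str:
    """Keep only relevant chunks, best-first, within the context budget."""
    kept, used = [], 0
    for text, score in sorted(chunks, key=lambda c: c[1], reverse=True):
        if score < min_score:        # irrelevant: don't spend budget on it
            break
        if used + len(text) > budget:
            continue                 # skip chunks that would blow the budget
        kept.append(text)
        used += len(text)
    return "\n\n".join(kept)
```

<p>A real pipeline would also log what was dropped, so failures at this layer stay diagnosable.</p><p>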
When the whitepaper&#8217;s &#8220;retrieval&#8221; and &#8220;summarization&#8221; stages fail in production, it&#8217;s usually a Context Engineering problem: the retrieval pipeline returned stale data, the prompt template drifted, or the context window was consumed by irrelevant information.</p><p><strong>Harness Engineering</strong> &#8212; everything that wraps around model output before it reaches the user. Output validation, consistency checking, safety filtering, fallback logic, and the instrumentation that makes all of this observable. The whitepaper&#8217;s &#8220;guardrails&#8221; stage lives here, but MRE treats it as a continuous runtime concern rather than a pre-launch checkpoint.</p><p>Think of it this way: the whitepaper&#8217;s Phase 4 table is a <em>construction inspection checklist</em>. MRE is the <em>building management system</em> that keeps the building safe after the inspectors leave.</p><h2>What This Means for Your Team</h2><p>If you&#8217;re building AI products and following OpenAI&#8217;s playbook &#8212; which, again, is genuinely good organizational advice &#8212; here&#8217;s how to fill in the gap:</p><p><strong>Define behavioral SLOs before launch.</strong> Not &#8220;the system should be accurate&#8221; but &#8220;faithfulness &#8805; 0.85, relevance &#8805; 0.80, guardrail violation rate &lt; 0.1%, measured daily on a stratified sample of 500 production queries.&#8221; These become the contract between your AI product and your organization.</p><p><strong>Assign MRE ownership explicitly.</strong> Someone &#8212; a person, a team, a rotation &#8212; needs to own behavioral reliability the way your SRE team owns uptime. They monitor the behavioral SLOs, investigate violations, and coordinate with product and engineering on fixes.</p><p><strong>Build for model-provider instability.</strong> Pin your model versions. Run behavioral regression tests on every model update. Maintain a rollback capability. 
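</p><p>A behavioral regression test can be as small as a gate that compares a candidate model version against your pinned baseline on the behavioral SLOs defined above. A minimal Python sketch (the function name and thresholds are illustrative, not a prescribed API):</p>

```python
# Sketch of a behavioral regression gate, run on every model update
# before the new version takes traffic. The SLO floors mirror the
# examples above; swap in your own metrics and eval harness.

SLOS = {"faithfulness": 0.85, "relevance": 0.80}

def regression_gate(baseline: dict, candidate: dict,
                    slos: dict = SLOS, max_drop: float = 0.02) -> str:
    """Return "promote" if the candidate model meets every SLO and has
    not regressed more than max_drop on any metric, else "rollback"."""
    for metric, floor in slos.items():
        if candidate[metric] < floor:                        # hard SLO violation
            return "rollback"
        if baseline[metric] - candidate[metric] > max_drop:  # silent drift
            return "rollback"
    return "promote"
```

<p>A &#8220;rollback&#8221; result means you stay pinned and investigate before the provider&#8217;s update reaches users.</p><p>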
The whitepaper says innovation happens every three days &#8212; your evaluation system needs to keep pace.</p><p><strong>Create an incident response playbook for behavioral failures.</strong> When your Q&amp;A agent starts hallucinating, who gets paged? What&#8217;s the first mitigation? How do you determine blast radius? These are engineering operations questions, not product management questions.</p><p><strong>Instrument everything.</strong> Log prompts, retrieved context, raw model outputs, post-processing transformations, and final user-facing responses. Without this trace, you can&#8217;t diagnose failures and you can&#8217;t run meaningful evals.</p><h2>The Bigger Pattern</h2><p>This gap isn&#8217;t unique to OpenAI&#8217;s whitepaper. It reflects a broader industry blind spot: we&#8217;ve gotten good at <em>building</em> AI systems and reasonably good at <em>evaluating</em> them before launch, but we haven&#8217;t yet developed the operational discipline for <em>keeping them reliable in production</em>.</p><p>SRE emerged because uptime required its own discipline, separate from software engineering. MLOps emerged because model pipelines required their own discipline, separate from DevOps. MRE is the next layer &#8212; the discipline that owns the behavior of AI systems that are neither deterministic nor static.</p><p>OpenAI&#8217;s playbook will get you to production. Model Reliability Engineering is what keeps you there.</p>]]></content:encoded></item><item><title><![CDATA[The Eval Lifecycle: What Actually Happens Between “Proof of Concept” and “Production”]]></title><description><![CDATA[Most AI projects die in the gap between &#8220;it works on my laptop&#8221; and &#8220;it works in production.&#8221; The eval lifecycle is the bridge nobody teaches you to build.]]></description><link>https://theairuntime.com/p/the-eval-lifecycle-what-actually</link><guid isPermaLink="false">https://theairuntime.com/p/the-eval-lifecycle-what-actually</guid><dc:creator><![CDATA[The AI Runtime]]></dc:creator><pubDate>Mon, 20 Apr 2026 11:03:55 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!m_ms!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc0a8aff-995b-4135-8911-d71198a2dfdc_518x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p><strong>TL;DR:</strong> OpenAI&#8217;s enterprise <a href="https://cdn.openai.com/business-guides-and-resources/from-experiments-to-deployments_whitepaper_11-25.pdf">whitepaper</a> quietly introduced a three-stage evaluation framework for AI agents &#8212; retrieval, summarization/grounding, and guardrails &#8212; with a continue/refine/stop gate at each stage. This framework is more important than anything else in the 25-page document, and the whitepaper spends exactly one table on it.
Here&#8217;s the expanded version: how each eval stage actually works, what tools exist to run them, what &#8220;good&#8221; looks like at each gate, and how the entire lifecycle repeats at MVP, pilot, and production scale. If you&#8217;re building AI products, this is the technical architecture that determines whether your proof of concept ever graduates.</p></div><h2>Why Evals Are the Whole Game</h2><p>There&#8217;s a moment in every AI project where the demo works. The retrieval is pulling relevant chunks, the model is generating coherent answers, and the stakeholders are nodding. This moment is dangerous.</p><p>It&#8217;s dangerous because the gap between &#8220;works in a demo&#8221; and &#8220;works in production&#8221; is not a linear improvement problem. It&#8217;s a <em>category shift</em>. In a demo, you control the inputs, you cherry-pick the questions, and you evaluate by gut feel. In production, real users ask unpredictable questions against messy data, and you evaluate by numbers you&#8217;ve committed to in advance.</p><p>The eval lifecycle is the structured process that bridges this gap. OpenAI&#8217;s enterprise whitepaper sketches it in a single table.
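</p><p>That single table compresses to a few lines of gate logic. A sketch using the thresholds this piece proposes for each stage (the whitepaper itself names no numbers; tune them to your product):</p>

```python
# The whitepaper's three-stage table, reduced to gate logic. The
# thresholds are the ones proposed in this piece, not the whitepaper's.

def gate(stage: str, scores: dict) -> str:
    """Return "continue", "refine", or "stop" for one eval stage."""
    if stage == "retrieval":
        if scores["recall"] >= 0.85 and scores["precision"] >= 0.75:
            return "continue"
        return "refine" if scores["recall"] >= 0.60 else "stop"
    if stage == "grounding":
        ok = (scores["faithfulness"] >= 0.85
              and scores["relevance"] >= 0.80
              and scores["citation_accuracy"] >= 0.90)
        if ok:
            return "continue"
        return "refine" if scores["faithfulness"] >= 0.70 else "stop"
    if stage == "guardrails":
        v = scores["violation_rate"]  # measured on the adversarial set
        if v < 0.005:
            return "continue"
        return "refine" if v <= 0.02 else "stop"
    raise ValueError(f"unknown stage: {stage}")
```

<p>The same function runs at the MVP, pilot, and production gates; only the evaluation sets underneath it grow.</p><p>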
Let&#8217;s build the full architecture.</p><h2>Stage 1: Retrieval Evaluation</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!m_ms!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc0a8aff-995b-4135-8911-d71198a2dfdc_518x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!m_ms!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc0a8aff-995b-4135-8911-d71198a2dfdc_518x1024.png 424w, https://substackcdn.com/image/fetch/$s_!m_ms!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc0a8aff-995b-4135-8911-d71198a2dfdc_518x1024.png 848w, https://substackcdn.com/image/fetch/$s_!m_ms!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc0a8aff-995b-4135-8911-d71198a2dfdc_518x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!m_ms!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc0a8aff-995b-4135-8911-d71198a2dfdc_518x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!m_ms!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc0a8aff-995b-4135-8911-d71198a2dfdc_518x1024.png" width="518" height="1024" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bc0a8aff-995b-4135-8911-d71198a2dfdc_518x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:518,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:677177,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/194026415?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc0a8aff-995b-4135-8911-d71198a2dfdc_518x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!m_ms!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc0a8aff-995b-4135-8911-d71198a2dfdc_518x1024.png 424w, https://substackcdn.com/image/fetch/$s_!m_ms!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc0a8aff-995b-4135-8911-d71198a2dfdc_518x1024.png 848w, https://substackcdn.com/image/fetch/$s_!m_ms!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc0a8aff-995b-4135-8911-d71198a2dfdc_518x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!m_ms!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc0a8aff-995b-4135-8911-d71198a2dfdc_518x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>                                   Retrieval Evals</em></p><p>Each stage has its own metrics, its own evaluation set, and its own continue/refine/stop gate. The lifecycle repeats at MVP, pilot, and production scale &#8212; with the evaluation set roughly doubling at each stage.</p><p><strong>The question:</strong> Does the system reliably find the right information?</p><p>This is where most AI products fail first &#8212; not because retrieval is hard to build, but because retrieval is hard to evaluate well. A retrieval system that returns <em>plausible</em> results will pass casual inspection. 
A retrieval system that returns the <em>right</em> results for edge cases is what separates a demo from a product.</p><p><strong>What you&#8217;re measuring:</strong></p><p><em>Recall</em> &#8212; of all the documents that should have been retrieved, what fraction did the system actually find? Low recall means the system is missing relevant information. For a Q&amp;A agent over company docs, this might mean missing the updated policy while retrieving the obsolete one.</p><p><em>Precision</em> &#8212; of all the documents retrieved, what fraction are actually relevant? Low precision means the model&#8217;s context window is polluted with irrelevant material, degrading downstream generation quality.</p><p><em>Mean Reciprocal Rank (MRR)</em> &#8212; is the most relevant document appearing first, or buried in position five? Models pay more attention to what appears early in context. If your best document consistently ranks third, your answers will be worse than they should be.</p><p><strong>How you build the evaluation set:</strong></p><p>Start with 50-100 representative queries drawn from actual user conversations (or realistic simulations). For each query, a domain expert labels which documents <em>should</em> be retrieved. This labeled set becomes your retrieval ground truth.</p><p>This is tedious and irreplaceable. Automated approaches &#8212; using an LLM to judge retrieval relevance &#8212; are useful for scaling evaluations but unreliable for building the initial ground truth. The domain expert knows that &#8220;Q3 revenue guidance&#8221; should retrieve the board deck, not the press release. The LLM doesn&#8217;t know your organization well enough to make that distinction.</p><p><strong>The gate decision:</strong></p><p>Continue if recall &#8805; 0.85 and precision &#8805; 0.75 on your evaluation set. Refine if metrics are between 0.60 and 0.85 &#8212; this usually means adjusting chunking strategy, embedding model, or retrieval parameters. 
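</p><p>The metrics feeding this gate come straight from the expert-labeled ground truth described above. A per-query sketch (averaging across the evaluation set is left to your harness):</p>

```python
# Recall, precision, and MRR for a single query, given the labeled set
# of documents that *should* have been retrieved. Document IDs are
# whatever your store uses.

def retrieval_metrics(retrieved: list, relevant: set) -> dict:
    """Score one query's retrieval against its labeled ground truth."""
    hits = [d for d in retrieved if d in relevant]
    recall = len(set(hits)) / len(relevant) if relevant else 0.0
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    rr = 0.0  # reciprocal rank of the first relevant document, 0 if none
    for rank, d in enumerate(retrieved, start=1):
        if d in relevant:
            rr = 1.0 / rank
            break
    return {"recall": recall, "precision": precision, "mrr": rr}
```

<p>Average these per-query scores across the 50-100 query set to get the numbers the gate consumes.</p><p>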
Stop if recall is below 0.60 &#8212; the retrieval pipeline needs fundamental rework before downstream evaluation is meaningful.</p><p>Track token costs at this stage. Retrieving too many documents burns context window space and money. Retrieving too few misses information. The right balance is specific to your use case.</p><h2>Stage 2: Summarization and Grounding Evaluation</h2><p><strong>The question:</strong> Does the system synthesize clear, consistent, useful, and cited answers? Did it follow the right steps and access the right data?</p><p>This is the stage where the whitepaper&#8217;s description &#8212; &#8220;evals on traces/logs + SME review&#8221; &#8212; is most dangerously compressed. &#8220;SME review&#8221; alone can mean anything from &#8220;my colleague glanced at five outputs&#8221; to &#8220;three domain experts independently rated 200 outputs on a structured rubric.&#8221; The difference in quality assurance is enormous.</p><p><strong>What you&#8217;re measuring:</strong></p><p><em>Faithfulness</em> &#8212; does the answer only contain claims that are supported by the retrieved context? An answer can be correct according to the model&#8217;s training data but <em>unfaithful</em> to the retrieved context, which means it&#8217;s hallucinating in a way that&#8217;s invisible to the user. This is the most important metric in the entire eval lifecycle and the one most teams measure poorly.</p><p><em>Relevance</em> &#8212; does the answer actually address the question? A faithfully grounded answer that doesn&#8217;t answer the user&#8217;s question is useless.</p><p><em>Completeness</em> &#8212; does the answer cover all the relevant information from the retrieved context? Partial answers erode trust over time even when they&#8217;re technically accurate.</p><p><em>Citation accuracy</em> &#8212; if the system claims &#8220;according to document X,&#8221; is that claim actually in document X? 
Citation errors are trust-destroying because they&#8217;re verifiable &#8212; a user who checks a citation and finds it doesn&#8217;t match will never trust the system again.</p><p><strong>How you build the evaluation:</strong></p><p>For each query in your evaluation set, have domain experts write the &#8220;gold standard&#8221; answer &#8212; the response a knowledgeable human would give. Then compare model outputs against these references.</p><p>Automated faithfulness evaluation is one of the areas where LLM-as-judge approaches are genuinely useful. Have a separate model (not the one generating the answer) check whether each claim in the output is supported by the retrieved context. Tools like RAGAS, DeepEval, and TruLens provide frameworks for this, but the key insight is: <em>use a different model for evaluation than the one generating answers</em>. Models are unreliable judges of their own outputs.</p><p><strong>The gate decision:</strong></p><p>Continue if faithfulness &#8805; 0.85, relevance &#8805; 0.80, and citation accuracy &#8805; 0.90 on a sample of 200+ queries. Refine if faithfulness is between 0.70 and 0.85 &#8212; this usually means adjusting the system prompt to enforce stricter grounding, or improving the retrieval stage to provide better context. Stop if faithfulness is below 0.70. A system that hallucinates in 30%+ of responses is not ready for any form of user testing.</p><h2>Stage 3: Guardrail Evaluation</h2><p><strong>The question:</strong> Does it stay within approved data, tone, and safety guidelines?</p><p>Guardrails get treated as an afterthought in most AI projects &#8212; the safety review that happens the week before launch. That&#8217;s backwards. 
Guardrail failures are the ones that make the news, generate legal liability, and destroy user trust in ways that no amount of accuracy improvement can repair.</p><p><strong>What you&#8217;re measuring:</strong></p><p><em>Topic boundary compliance</em> &#8212; does the system stay within its defined scope? A legal Q&amp;A agent that starts offering medical advice has failed a topic boundary guardrail, even if the medical advice happens to be accurate.</p><p><em>Tone and brand consistency</em> &#8212; does the system&#8217;s voice match organizational guidelines? A customer-facing agent that suddenly becomes casual or sarcastic when asked difficult questions has a tone guardrail failure.</p><p><em>Safety filtering</em> &#8212; does the system refuse or redirect harmful, offensive, or manipulative inputs? This isn&#8217;t just about explicit toxicity &#8212; it includes prompt injection attempts, jailbreaking, and social engineering.</p><p><em>PII handling</em> &#8212; does the system avoid exposing, generating, or echoing personally identifiable information? This is both a safety and a regulatory requirement.</p><p><strong>How you build the evaluation:</strong></p><p>Create an adversarial test set. This is distinct from the representative test set used in stages 1 and 2. Adversarial tests specifically probe boundaries: out-of-scope questions, prompt injection attempts, requests for information the system shouldn&#8217;t have, edge cases where tone guidance is ambiguous.</p><p>A strong adversarial test set has 100+ cases across these categories, built by people who actively try to break the system. This is one area where &#8220;red teaming&#8221; (having humans try to elicit harmful outputs) provides signal that automated evaluation cannot replicate.</p><p><strong>The gate decision:</strong></p><p>Continue if guardrail violation rate &lt; 0.5% on the adversarial test set and &lt; 0.1% on the representative test set. 
Refine if violations are between 0.5% and 2% &#8212; usually by tightening the system prompt, adding output filters, or restricting tool access. Stop if violation rate exceeds 2% on the adversarial set. Safety is not a gradient.</p><h2>The Lifecycle Repeats at Every Scale</h2><p>Here&#8217;s what the whitepaper mentions but doesn&#8217;t emphasize enough: this three-stage evaluation runs at <em>every</em> deployment gate, not just once.</p><p><strong>MVP gate:</strong> Run all three stages on your evaluation set. Small scale (50-100 queries for retrieval, 200 for summarization, 100 adversarial). The goal is to validate the architecture, not achieve production quality.</p><p><strong>Pilot gate:</strong> Re-run with production data from pilot users. The evaluation set should now include real queries you didn&#8217;t anticipate. Expand the adversarial set based on actual user behavior. Introduce latency and cost measurements &#8212; a system that takes 30 seconds per response won&#8217;t be adopted regardless of accuracy.</p><p><strong>Production gate:</strong> Full evaluation suite plus continuous monitoring. This is where the eval lifecycle transitions from a build activity to an operational responsibility. The same metrics you used to gate deployment now become the SLOs your team monitors daily.</p><p>The whitepaper&#8217;s &#8220;once proven in a narrow scope, the same checks repeat at pilot and production scale&#8221; is correct, but it undersells the expansion that happens at each gate. Your evaluation set should roughly double at each stage. Your adversarial set should incorporate everything users tried during the previous stage. And your automated monitoring should replace the manual SME review that gates earlier stages.</p><h2>The Tooling Stack</h2><p>You don&#8217;t need to build this from scratch. 
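</p>

<p>The gate decisions themselves, though, are simple enough to encode directly. A minimal sketch using the thresholds quoted in this article (function names are mine, and a production harness would also gate on latency and cost):</p>

```python
# Stage gates with this article's thresholds. Rates are fractions
# (0.005 == 0.5%). Names are illustrative, not from any framework.

def summarization_gate(faithfulness, relevance, citation_accuracy):
    if faithfulness < 0.70:
        return "stop"      # hallucinating in 30%+ of responses
    if faithfulness >= 0.85 and relevance >= 0.80 and citation_accuracy >= 0.90:
        return "continue"
    return "refine"        # tighten grounding prompts or improve retrieval

def guardrail_gate(adversarial_rate, representative_rate):
    if adversarial_rate > 0.02:
        return "stop"      # safety is not a gradient
    if adversarial_rate < 0.005 and representative_rate < 0.001:
        return "continue"
    return "refine"        # tighten prompts, add filters, restrict tools
```

<p>The retrieval gate from Stage 1 follows the same shape on recall and precision. For everything around the gates, lean on existing tooling.</p><p>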
The eval tooling ecosystem has matured significantly:</p><p><strong>Retrieval evaluation:</strong> RAGAS and DeepEval both provide retrieval metrics out of the box. LangSmith and Arize Phoenix offer tracing that connects retrieval to downstream generation quality.</p><p><strong>Faithfulness and grounding:</strong> RAGAS faithfulness metrics, DeepEval&#8217;s hallucination detection, and custom LLM-as-judge evaluations using structured prompts. Braintrust and HumanLoop provide platforms for managing evaluation datasets and running automated evals at scale.</p><p><strong>Guardrails:</strong> Guardrails AI, NeMo Guardrails (NVIDIA), and Lakera Guard for safety filtering. LangFuse for observability and trace-level analysis.</p><p><strong>End-to-end:</strong> LangSmith, Braintrust, and Arize Phoenix each provide integrated platforms that span all three stages, with tracing, evaluation, and monitoring in a single tool.</p><p>Pick one end-to-end platform and supplement with specialized tools where needed. The worst outcome is building a custom evaluation framework from scratch &#8212; you&#8217;ll spend months replicating what these tools provide on day one.</p><h2>The Real Lesson</h2><p>The whitepaper frames evaluation as Phase 4 &#8212; something that happens when you build products. That&#8217;s wrong. Evaluation is the <em>connective tissue</em> that links every phase.</p><p>Your Phase 1 data access decisions determine whether you <em>can</em> build a retrieval evaluation set. Your Phase 2 fluency programs determine whether you have SMEs capable of writing gold-standard answers. Your Phase 3 prioritization determines whether you&#8217;ve chosen use cases where evaluation is tractable.</p><p>The eval lifecycle isn&#8217;t a step in the process. 
It&#8217;s the process.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Your AI Strategy Doesn’t Need More Use Cases. It Needs a Production System.]]></title><description><![CDATA[Why most enterprise AI strategies fail at the same point &#8212; and the five decisions that separate companies shipping AI products from companies running perpetual pilots.]]></description><link>https://theairuntime.com/p/your-ai-strategy-doesnt-need-more</link><guid isPermaLink="false">https://theairuntime.com/p/your-ai-strategy-doesnt-need-more</guid><dc:creator><![CDATA[The AI Runtime]]></dc:creator><pubDate>Sat, 18 Apr 2026 11:02:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!FS6r!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F087ac9a0-633d-4d70-ac55-865e8f5bdbf9_1408x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p><strong>TL;DR:</strong> Most enterprise AI strategies are lists of use cases hunting for approval. 
The companies that actually reach production &#8212; BBVA (120,000 employees), Lowe&#8217;s (1,700 stores), Intercom (millions of monthly resolutions), Booking.com (global trip planning) &#8212; didn&#8217;t succeed because they found better use cases. They succeeded because they built production systems: repeatable engineering, governance, and organizational infrastructure that turns <em>any</em> validated idea into a deployed product. After analyzing seven enterprise deployments from <a href="https://cdn.openai.com/business-guides-and-resources/from-experiments-to-deployments_whitepaper_11-25.pdf">OpenAI&#8217;s whitepaper</a>, the path to production comes down to five architectural decisions most companies either skip or get wrong. This article is the strategy document your CTO needs &#8212; not another use-case brainstorm, but the engineering and organizational blueprint for making AI deployable by default.</p></div><h2>The Pilot Trap</h2><p>Here&#8217;s what happens at most companies: A team identifies a promising AI use case. They build a prototype. It works in the demo. Stakeholders are excited. Then nothing happens for six months.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><p>The prototype needs production data &#8212; but the data team hasn&#8217;t classified which datasets are approved for AI use. The prototype needs a deployment environment &#8212; but the infrastructure team hasn&#8217;t provisioned one for AI workloads. The prototype needs a compliance review &#8212; but legal doesn&#8217;t have a framework for evaluating AI-specific risks. The prototype needs an evaluation suite &#8212; but nobody has defined what &#8220;good enough&#8221; means.</p><p>Each of these is a solvable problem. The issue is that they&#8217;re solved sequentially, per-project, by the same team that built the prototype. The team that&#8217;s good at building AI prototypes is now spending 80% of its time on governance, infrastructure, and cross-functional coordination.</p><p>This is the pilot trap: the gap between prototype and production isn&#8217;t a technology problem. It&#8217;s a systems problem. 
And it requires a systems solution.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!FS6r!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F087ac9a0-633d-4d70-ac55-865e8f5bdbf9_1408x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!FS6r!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F087ac9a0-633d-4d70-ac55-865e8f5bdbf9_1408x768.png 424w, https://substackcdn.com/image/fetch/$s_!FS6r!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F087ac9a0-633d-4d70-ac55-865e8f5bdbf9_1408x768.png 848w, https://substackcdn.com/image/fetch/$s_!FS6r!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F087ac9a0-633d-4d70-ac55-865e8f5bdbf9_1408x768.png 1272w, https://substackcdn.com/image/fetch/$s_!FS6r!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F087ac9a0-633d-4d70-ac55-865e8f5bdbf9_1408x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!FS6r!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F087ac9a0-633d-4d70-ac55-865e8f5bdbf9_1408x768.png" width="1408" height="768" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/087ac9a0-633d-4d70-ac55-865e8f5bdbf9_1408x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1826036,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/194026730?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F087ac9a0-633d-4d70-ac55-865e8f5bdbf9_1408x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!FS6r!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F087ac9a0-633d-4d70-ac55-865e8f5bdbf9_1408x768.png 424w, https://substackcdn.com/image/fetch/$s_!FS6r!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F087ac9a0-633d-4d70-ac55-865e8f5bdbf9_1408x768.png 848w, https://substackcdn.com/image/fetch/$s_!FS6r!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F087ac9a0-633d-4d70-ac55-865e8f5bdbf9_1408x768.png 1272w, https://substackcdn.com/image/fetch/$s_!FS6r!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F087ac9a0-633d-4d70-ac55-865e8f5bdbf9_1408x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>                             Pilot to Prod</em></p><h2>Decision 1: Build the Production Infrastructure Before You Need It</h2><p>The companies that reached to production with AI fastest didn&#8217;t wait for a use case to justify infrastructure investment. They built the production path first.</p><p>Figma created a &#8220;compliance fast path&#8221; &#8212; pre-classified data, pre-defined guardrails, pre-approved experiment categories &#8212; so that any team could test AI tools without triggering a per-project compliance review. The governance infrastructure existed before the use cases that needed it.</p><p>BBVA established data boundaries, security protocols, and a Center of Excellence before expanding from 3,000 to 11,000 licenses. 
By the time they were ready to scale to 120,000, the infrastructure was battle-tested.</p><p><strong>What this means for your strategy:</strong> Before you prioritize your top 10 use cases, answer these five infrastructure questions:</p><p><em>Data readiness</em> &#8212; Which datasets are classified and approved for AI use? What&#8217;s the process for approving new ones? How fast can a team get access to production data for a validated use case?</p><p><em>Governance framework</em> &#8212; What types of AI experiments are pre-approved? What triggers a full review? Who has decision rights, and what are the escalation paths?</p><p><em>Evaluation infrastructure</em> &#8212; Do you have an eval framework that any team can plug into? Can you define and measure behavioral SLOs before launch?</p><p><em>Deployment pipeline</em> &#8212; Can a team go from approved prototype to production deployment without building custom infrastructure? Is there a standard path with gated checkpoints?</p><p><em>Monitoring</em> &#8212; Once deployed, who owns ongoing behavioral reliability? What gets measured, how often, and what triggers intervention?</p><p>If you can&#8217;t answer these questions, your first AI project isn&#8217;t a use case &#8212; it&#8217;s building this infrastructure. Every subsequent use case becomes faster and cheaper because the path already exists.</p><h2>Decision 2: Treat AI Fluency as Engineering Capacity, Not HR Training</h2><p>The <a href="https://cdn.openai.com/business-guides-and-resources/from-experiments-to-deployments_whitepaper_11-25.pdf">whitepaper from OpenAI</a> frames AI fluency as a training and culture initiative &#8212; workshops, champion networks, hackathons. That framing misses the most important dimension: <strong>engineering fluency determines your production velocity.</strong></p><p>Intercom&#8217;s ability to migrate models in days comes from engineers who deeply understand their evaluation pipeline. 
Booking.com shipped a prototype in 8-10 weeks because their engineers could integrate OpenAI&#8217;s API with existing ML infrastructure without rearchitecting. BBVA&#8217;s 3,000+ custom GPTs were built by employees who understood enough about prompt engineering to create useful tools without engineering support.</p><p><strong>What this means for your strategy:</strong> Fluency investment should be tiered:</p><p><em>Tier 1: Universal literacy.</em> Everyone in the organization understands what AI can and can&#8217;t do, when to use it, and how to interact with it effectively. This is the workshop-and-hackathon layer.</p><p><em>Tier 2: Builder capability.</em> Product managers, analysts, and domain experts can build custom GPTs, design prompts, and evaluate AI outputs against domain-specific quality standards. BBVA&#8217;s &#8220;wizards&#8221; operate at this tier.</p><p><em>Tier 3: Production engineering.</em> Engineers can build, evaluate, deploy, and monitor AI systems in production. They can design evaluation suites, implement guardrails, instrument observability, and run behavioral regression tests against model updates. This tier determines how fast you can ship.</p><p>Most enterprise AI strategies invest heavily in Tier 1, modestly in Tier 2, and almost nothing in Tier 3. Then they wonder why pilots don&#8217;t reach production. The bottleneck is almost always Tier 3 engineering capacity &#8212; not use-case ideas, not executive sponsorship, not data access.</p><h2>Decision 3: Prioritize Reuse Over Innovation</h2><p>The whitepaper advises designing &#8220;for reuse from the start.&#8221; This understates how transformative reuse-first thinking actually is.</p><p>Lowe&#8217;s built one AI foundation and deployed it as two products &#8212; customer-facing Mylow and associate-facing Mylow Companion. Same knowledge base, same model, different interfaces. 
The second product was dramatically cheaper and faster than the first because the foundational engineering was already done.</p><p>BBVA&#8217;s internal GPT Store means solutions built by one team are immediately available to the entire organization. A legal team&#8217;s document analysis GPT becomes a compliance team&#8217;s document analysis GPT with minimal modification.</p><p><strong>What this means for your strategy:</strong> When prioritizing use cases, the highest-value next project isn&#8217;t always the highest-impact standalone idea. It&#8217;s often the one that shares the most infrastructure with what you&#8217;ve already built.</p><p>Score each candidate use case on two dimensions: <em>standalone value</em> (impact if built in isolation) and <em>infrastructure leverage</em> (how much existing code, data pipelines, evaluations, and governance it can reuse). The use case that scores highest on the product of both dimensions is your next build &#8212; not the one with the highest standalone value.</p><p>Concretely: if you&#8217;ve already built a retrieval pipeline, evaluation framework, and guardrail system for an internal knowledge Q&amp;A tool, your next use case should probably be <em>another knowledge Q&amp;A tool for a different domain</em> &#8212; not a completely different architecture that requires building everything from scratch.</p><p>This feels counterintuitive because organizations reward novelty (&#8220;we&#8217;re building something new!&#8221;) over leverage (&#8220;we&#8217;re deploying what we already have to a new domain&#8221;). But leverage is what compounds. Novelty is what creates one-off pilots.</p><h2>Decision 4: Measure Causally, Not Correlatively</h2><p>Uber ran controlled experiments comparing AI-augmented workflows with traditional ones. OpenAI&#8217;s internal sales assistant was measured against corrections from top performers.
Booking.com tracked engagement time, search-to-booking conversion, and support ticket volume against baselines.</p><p>Most companies measure AI adoption metrics: number of users, messages sent, satisfaction surveys. These metrics can show adoption without proving value. A tool that&#8217;s widely used but subtly wrong &#8212; plausible but inaccurate answers, faster-but-lower-quality outputs &#8212; will show positive adoption metrics while degrading actual business outcomes.</p><p><strong>What this means for your strategy:</strong> Define your measurement architecture before you deploy:</p><p><em>Causal measurement</em> &#8212; Can you run controlled comparisons? A/B tests between AI-augmented and traditional workflows? Before/after analysis with matched cohorts? If you can&#8217;t establish causation, you&#8217;re optimizing for adoption, not impact.</p><p><em>Business outcome metrics</em> &#8212; What business metric does this use case actually move? Not &#8220;time saved&#8221; (self-reported) but &#8220;resolution speed&#8221; (measured). Not &#8220;user satisfaction with the tool&#8221; but &#8220;customer satisfaction with the outcome.&#8221;</p><p><em>Counterfactual tracking</em> &#8212; What would have happened without the AI? This is the hardest measurement to build and the most important. Without it, you attribute every improvement to AI and every failure to something else.</p><p><em>Cost-per-outcome</em> &#8212; What does each AI-generated outcome actually cost, including compute, human review, error correction, and organizational overhead? Lowe&#8217;s discovered that 68% of their queries didn&#8217;t need their flagship model &#8212; a discovery only possible with per-query cost instrumentation.</p><p>The goal isn&#8217;t to measure everything. 
It&#8217;s to measure the right things with enough rigor to make deployment and expansion decisions based on evidence rather than enthusiasm.</p><h2>Decision 5: Assign Production Ownership Before Launch</h2><p>The whitepaper describes building cross-functional teams with &#8220;engineers, SMEs, data leads, and executive sponsors.&#8221; What it doesn&#8217;t specify &#8212; and what matters most &#8212; is who owns the system <em>after</em> launch.</p><p>In traditional software, this is obvious: the engineering team that built it operates it, with SRE support. In AI products, it&#8217;s ambiguous. The model changes without you deploying anything. The data changes without you modifying anything. The behavior changes without you touching anything. Someone needs to own this.</p><p><strong>What this means for your strategy:</strong> Before any AI product launches, assign three ownership roles:</p><p><em>Behavioral reliability owner</em> &#8212; monitors behavioral SLOs (faithfulness, relevance, safety), detects drift, coordinates response to behavioral incidents. This is the MRE function, whether you call it that or not.</p><p><em>Model management owner</em> &#8212; tracks model provider updates, runs regression tests on new versions, manages model selection and routing decisions. This role prevents the &#8220;silent model update breaks production&#8221; failure mode.</p><p><em>Business value owner</em> &#8212; monitors the causal metrics from Decision 4, determines whether the product is still delivering the value that justified deployment, and decides when to expand, refine, or sunset.</p><p>These can be the same person on a small team, but they can&#8217;t be no one. 
The most common failure mode in enterprise AI isn&#8217;t a spectacular crash &#8212; it&#8217;s a slow, invisible degradation where the model gets slightly worse over weeks and nobody notices because nobody is watching.</p><h2>Building Your Path-to-Production Document</h2><p>If you&#8217;re a CTO, VP of Engineering, or AI lead, here&#8217;s the strategic document you should build &#8212; not a list of use cases, but a production system specification:</p><p><strong>Page 1: Infrastructure readiness assessment.</strong> Where do you stand on data classification, governance framework, evaluation infrastructure, deployment pipeline, and monitoring? What&#8217;s the gap between current state and production-ready?</p><p><strong>Page 2: Fluency investment plan.</strong> How are you building Tier 1 (literacy), Tier 2 (builder), and Tier 3 (production engineering) capabilities? What&#8217;s the timeline for each, and how do you measure progress?</p><p><strong>Page 3: First three use cases, scored on standalone value &#215; infrastructure leverage.</strong> Not your ten best ideas &#8212; your three best <em>first</em> ideas, chosen because they build infrastructure that makes everything after them faster.</p><p><strong>Page 4: Measurement architecture.</strong> For each use case, what&#8217;s the causal measurement strategy? What business outcomes are you tracking, and how are you establishing counterfactuals?</p><p><strong>Page 5: Ownership model.</strong> Who owns behavioral reliability, model management, and business value for each deployed product? What&#8217;s the incident response playbook?</p><p>This document isn&#8217;t a strategy deck that gets presented once and forgotten. It&#8217;s a living system specification that evolves with every deployment. 
Each new product strengthens the infrastructure, expands the evaluation framework, deepens organizational fluency, and makes the next deployment faster.</p><p>The companies in OpenAI&#8217;s whitepaper didn&#8217;t scale AI because they had better ideas. They scaled because they built production systems that turn good ideas into deployed products &#8212; repeatedly, reliably, and with compounding returns.</p><p>Your AI strategy should do the same.</p><div><hr></div><p><em>Building your own path-to-production document? I&#8217;m collecting examples of enterprise AI production system designs for a future AIEW deep-dive. Reply with what you&#8217;re building &#8212; anonymized details welcome.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Claude Opus 4.7: The Production Engineer’s Breakdown]]></title><description><![CDATA[Four breaking changes, seven behavior shifts, two new control surfaces, and a quietly throttled cyber capability. 
What actually changed inside Anthropic&#8217;s new flagship &#8212; and what that means for anyone]]></description><link>https://theairuntime.com/p/claude-opus-47-the-production-engineers</link><guid isPermaLink="false">https://theairuntime.com/p/claude-opus-47-the-production-engineers</guid><dc:creator><![CDATA[The AI Runtime]]></dc:creator><pubDate>Fri, 17 Apr 2026 11:04:40 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!MowX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6f6d8a3-4925-4bfd-a735-3d7bef13f343_1024x559.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p><strong>TL;DR</strong> - Anthropic <a href="https://www.anthropic.com/news/claude-opus-4-7">released Claude Opus 4.7 on April 16, 2026</a>, available via the Claude API as <code>claude-opus-4-7</code>, plus Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry. Pricing is unchanged from Opus 4.6 at $5 per million input tokens and $25 per million output tokens. The marketing line is &#8220;better coding, better vision, same price.&#8221; That is true and it understates what shipped. Opus 4.7 introduces two new control surfaces (the <code>xhigh</code> effort level and task budgets in beta), four breaking changes to the Messages API that will silently affect existing integrations, seven behavior shifts that will affect how your prompts perform, more than 3x the maximum image resolution with 1:1 coordinate mapping, file-system memory improvements that change how persistent agents work, deliberately throttled cyber capabilities as part of Project Glasswing, and a tokenizer change that can move your bill by up to 35%. If you run agents in production, this release is less about a smarter model and more about a model engineered to behave more predictably under load. 
The benchmark gains follow from the engineering, not the other way around.</p></div><h2>What you actually get</h2><p>Strip out the marketing and the technical envelope is straightforward. According to <a href="https://platform.claude.com/docs/en/about-claude/models/whats-new-claude-4-7">Anthropic&#8217;s developer documentation</a>, Opus 4.7 supports the 1M token context window, 128k max output tokens, adaptive thinking, and the same set of tools and platform features as Claude Opus 4.6. The 1M context window comes at standard API pricing with no long-context premium &#8212; a meaningful change for anyone who has been chunking aggressively to stay under the previous tier boundaries.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MowX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6f6d8a3-4925-4bfd-a735-3d7bef13f343_1024x559.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MowX!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6f6d8a3-4925-4bfd-a735-3d7bef13f343_1024x559.png 424w, https://substackcdn.com/image/fetch/$s_!MowX!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6f6d8a3-4925-4bfd-a735-3d7bef13f343_1024x559.png 848w, https://substackcdn.com/image/fetch/$s_!MowX!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6f6d8a3-4925-4bfd-a735-3d7bef13f343_1024x559.png 1272w, 
https://substackcdn.com/image/fetch/$s_!MowX!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6f6d8a3-4925-4bfd-a735-3d7bef13f343_1024x559.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MowX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6f6d8a3-4925-4bfd-a735-3d7bef13f343_1024x559.png" width="1024" height="559" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c6f6d8a3-4925-4bfd-a735-3d7bef13f343_1024x559.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:559,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:798145,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/194474027?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6f6d8a3-4925-4bfd-a735-3d7bef13f343_1024x559.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MowX!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6f6d8a3-4925-4bfd-a735-3d7bef13f343_1024x559.png 424w, https://substackcdn.com/image/fetch/$s_!MowX!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6f6d8a3-4925-4bfd-a735-3d7bef13f343_1024x559.png 848w, 
https://substackcdn.com/image/fetch/$s_!MowX!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6f6d8a3-4925-4bfd-a735-3d7bef13f343_1024x559.png 1272w, https://substackcdn.com/image/fetch/$s_!MowX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6f6d8a3-4925-4bfd-a735-3d7bef13f343_1024x559.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><em>Opus 4.7</em></p><p>The model is generally available across
Claude products and the API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry. Within the Claude apps, Opus 4.7 is available to Pro, Max, Team, and Enterprise users. Per <a href="https://www.anthropic.com/claude/opus">Anthropic&#8217;s product page</a>, pricing for Opus 4.7 starts at $5 per million input tokens and $25 per million output tokens, with up to 90% cost savings via prompt caching and 50% via batch processing.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>The architectural lift over Opus 4.6 is concentrated in three places: a retrained tokenizer, a redesigned thinking-effort surface, and significantly improved high-resolution vision. Everything else in the release &#8212; the new tools, the breaking changes, the behavior shifts &#8212; flows from those three.</p><div><hr></div><h2>Two new control surfaces</h2><p>The most consequential additions for engineers building autonomous workflows are the new effort level and task budgets. They change what &#8220;tuning a Claude integration&#8221; actually means.</p><h3>The <code>xhigh</code> effort level</h3><p>The new <code>xhigh</code> level sits between <code>high</code> and <code>max</code>.
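In request terms, effort is a single field in output_config. A minimal sketch of the payload shape (the helper name is ours; the field layout is inferred from the task-budget example later in this post):

```python
def build_request(prompt: str, effort: str = "xhigh") -> dict:
    """Assemble a Messages API body with adaptive thinking on and an explicit effort level."""
    return {
        "model": "claude-opus-4-7",
        "max_tokens": 64000,
        # Adaptive thinking must be requested explicitly on Opus 4.7.
        "thinking": {"type": "adaptive"},
        # Depth control; xhigh sits between high and max.
        "output_config": {"effort": effort},
        "messages": [{"role": "user", "content": prompt}],
    }
```

Raising or lowering effort is then a one-argument change rather than a prompt rewrite.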
Per the <a href="https://platform.claude.com/docs/en/build-with-claude/effort">effort documentation</a>, Anthropic recommends starting with <code>xhigh</code> for coding and agentic use cases, with <code>high</code> as the minimum for most intelligence-sensitive workloads. The API default is <code>high</code>. In Claude Code, <code>xhigh</code><a href="https://code.claude.com/docs/en/model-config"> is now the default</a> for all plans and providers on Opus 4.7.</p><p>What changed beyond the new tier is how strictly the model respects effort. Per Anthropic&#8217;s <a href="https://platform.claude.com/docs/en/about-claude/models/migration-guide">migration guide</a>, Opus 4.7 respects effort levels more strictly than Opus 4.6, especially at low and medium. At those lower levels, the model scopes its work to what was asked rather than going above and beyond. The practical implication is that a moderately complex task running at low effort will under-think rather than silently escalate. If you observe shallow reasoning on complex problems, raise effort to <code>high</code> or <code>xhigh</code> rather than prompting around it.</p><p>Two production-relevant data points worth knowing before you migrate. First, per a <a href="https://www.anthropic.com/news/claude-opus-4-7">Hex testimonial in the launch post</a>, low-effort Opus 4.7 is roughly equivalent to medium-effort Opus 4.6. Second, per Anthropic's launch post, on their internal agentic coding evaluation the <em>net</em> token usage across all effort levels improved versus Opus 4.6 &#8212; meaning the efficiency gains outweighed the tokenizer increase and the deeper thinking. Anthropic explicitly notes the evaluation runs autonomously from a single prompt and may not represent interactive coding patterns.</p><h3>Task budgets (beta)</h3><p>Task budgets are the more architecturally interesting new control surface, because they are the first time a Claude model is given visibility into its own remaining budget. 
Per the <a href="https://platform.claude.com/docs/en/about-claude/models/whats-new-claude-4-7">docs</a>, a task budget gives Claude a rough estimate of how many tokens to target for a full agentic loop, including thinking, tool calls, tool results, and final output. The model sees a running countdown and uses it to prioritize work and finish the task gracefully as the budget is consumed.</p><p>The API surface is straightforward. Set the beta header <code>task-budgets-2026-03-13</code> and add the following to your output config:</p><pre><code><code>response = client.beta.messages.create(
    model="claude-opus-4-7",
    max_tokens=128000,
    output_config={
        "effort": "high",
        "task_budget": {"type": "tokens", "total": 128000},
    },
    messages=[
        {"role": "user", "content": "Review the codebase and propose a refactor plan."}
    ],
    betas=["task-budgets-2026-03-13"],
)</code></code></pre><p>The minimum value for a task budget is 20k tokens. If the budget is too restrictive for a given task, the model may complete it less thoroughly or refuse outright. For open-ended agentic tasks where quality matters more than speed, Anthropic recommends not setting a task budget; reserve budgets for workloads where you need the model to scope its work to a token allowance.</p><p>What makes this design different from a hard cap is that the model is aware of it: a task budget is advisory, not enforced. This is distinct from <code>max_tokens</code>, which is a hard per-request ceiling that is not passed to the model at all. <code>max_tokens</code> is a guillotine &#8212; the model never sees it and simply gets cut off at the ceiling. <code>task_budget</code> is a clock &#8212; the model sees the countdown and adjusts behavior to land cleanly within the budget. For long-running agentic work where graceful degradation matters more than abrupt termination, this is a meaningfully better primitive.</p><div><hr></div><h2>Four breaking changes you might miss</h2><p>These breaking changes apply to the Messages API only. If you use Claude Managed Agents, there are no breaking API changes for Claude Opus 4.7. The first two return 400 errors that flag the issue clearly. The third and fourth are silent &#8212; they surface as subtle behavior changes downstream if you skip the migration audit. All four are documented in the official <a href="https://platform.claude.com/docs/en/about-claude/models/whats-new-claude-4-7">What&#8217;s new in Claude Opus 4.7</a> reference.</p><p><strong>Extended thinking budgets are removed.</strong> Setting <code>thinking: {"type": "enabled", "budget_tokens": N}</code> will return a 400 error. Adaptive thinking is the only thinking-on mode, and Anthropic reports their internal evaluations show it reliably outperforms extended thinking.
The new pattern uses adaptive thinking with effort as the depth control:</p><pre><code><code># Before (Opus 4.6)
thinking = {"type": "enabled", "budget_tokens": 32000}

# After (Opus 4.7)
thinking = {"type": "adaptive"}
output_config = {"effort": "high"}</code></code></pre><p>There is also a subtler shift here. Adaptive thinking is off by default on Claude Opus 4.7. Requests with no <code>thinking</code> field run without thinking. Set <code>thinking: {type: "adaptive"}</code> explicitly to enable it.</p><p><strong>Sampling parameters are removed.</strong> Setting <code>temperature</code>, <code>top_p</code>, or <code>top_k</code> to any non-default value will return a 400 error. The safest migration path is to omit these parameters entirely from requests and use prompting to guide the model&#8217;s behavior. The prior trick of setting <code>temperature = 0</code> for &#8220;determinism&#8221; is also gone &#8212; per Anthropic&#8217;s own note, it never guaranteed identical outputs, and now it does not even run.</p><p><strong>Thinking content is omitted by default.</strong> Thinking blocks still appear in the response stream, but their <code>thinking</code> field will be empty unless the caller explicitly opts in. This is a silent change &#8212; no error is raised &#8212; and response latency will be slightly improved. If your product streams reasoning to users, the new default will appear as a long pause before output begins. Set <code>"display": "summarized"</code> to restore visible progress during thinking.</p><p><strong>Updated token counting.</strong> Claude Opus 4.7 uses a new tokenizer that contributes to its improved performance on a wide range of tasks. Per the docs, this new tokenizer may use roughly 1x to 1.35x as many tokens when processing text compared to previous models, varying by content, and <code>/v1/messages/count_tokens</code> will return a different number of tokens for Opus 4.7 than it did for Opus 4.6. The 1.0&#8211;1.35x range is wide enough that &#8220;your bill went up 5%&#8221; and &#8220;your bill went up 30%&#8221; are both plausible outcomes &#8212; measure on real traffic before extrapolating. 
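To see where your own traffic lands in that band, count the same payload under both models and compare. A sketch against the count_tokens endpoint, assuming an anthropic SDK client; the helper names are ours:

```python
def tokenizer_ratio(new_count: int, old_count: int) -> float:
    """Multiplier the new tokenizer applies to the same text (1.0 means no change)."""
    if old_count <= 0:
        raise ValueError("old_count must be positive")
    return new_count / old_count

def measure_drift(client, messages: list[dict]) -> float:
    """Count one real payload under both models via /v1/messages/count_tokens."""
    new = client.messages.count_tokens(model="claude-opus-4-7", messages=messages)
    old = client.messages.count_tokens(model="claude-opus-4-6", messages=messages)
    return tokenizer_ratio(new.input_tokens, old.input_tokens)

# Run measure_drift over a representative sample of production payloads and
# look at the distribution, not a single number: code-heavy and prose-heavy
# requests land at different points in the 1.0-1.35 band.
```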
Anthropic suggests updating your <code>max_tokens</code> parameters to give additional headroom, including for compaction triggers.</p><div><hr></div><h2>Seven behavior shifts that will change how your prompts perform</h2><p>These are not breaking changes in the API contract sense, but they will silently affect the quality of your existing prompts. The <a href="https://platform.claude.com/docs/en/about-claude/models/whats-new-claude-4-7">official behavior change list</a> reads almost like a release note for an operations-focused fork:</p><p><strong>Instruction following is now literal</strong>, particularly at lower effort levels. The model will not silently generalize an instruction from one item to another, and will not infer requests you didn&#8217;t make. The most common failure mode in early migration coverage: bullet-list &#8220;suggestions&#8221; that earlier Claude models treated as optional hints are now treated as hard requirements.</p><p><strong>Response length calibrates to perceived task complexity</strong>, rather than defaulting to a fixed verbosity. Short queries get short answers. Complex queries get longer ones. If you have prompt scaffolding that forced specific response lengths, expect different behavior.</p><p><strong>Fewer tool calls by default.</strong> The model uses tools less often than Opus 4.6 and uses reasoning more. Raising effort increases tool usage; per the <a href="https://platform.claude.com/docs/en/about-claude/models/migration-guide">migration guide</a>, high or xhigh effort settings show substantially more tool usage in agentic search and coding.</p><p><strong>More direct, opinionated tone.</strong> Less validation-forward phrasing and fewer emoji than Claude Opus 4.6&#8217;s warmer style. Whether this is what your end users want depends entirely on your product surface.</p><p><strong>More regular progress updates</strong> during long agentic traces. 
If you&#8217;ve added scaffolding to force interim status messages, try removing it.</p><p><strong>Fewer subagents spawned by default.</strong> Steerable through prompting.</p><p><strong>Real-time cybersecurity safeguards.</strong> New in Claude Opus 4.7: requests involving prohibited or high-risk topics may be refused. Legitimate security teams can apply to the <a href="https://claude.com/form/cyber-use-case">Cyber Verification Program</a> for reduced restrictions.</p><p>The cumulative effect across all seven is a model that does more of what you tell it to do and less of what it inferred you wanted. For teams with mature prompt libraries built against Opus 4.6, this is a real audit obligation. For teams writing new integrations, it is a meaningful reduction in &#8220;magical&#8221; behavior that you cannot test for.</p><div><hr></div><h2>Vision: the genuinely large step function</h2><p>The vision upgrade is the single largest capability jump in the release. Per the <a href="https://platform.claude.com/docs/en/about-claude/models/whats-new-claude-4-7">docs</a>, maximum image resolution increased to 2576px / 3.75MP, up from the previous limit of 1568px / 1.15MP. That is more than 3x the pixel count.</p><p>Two technical details matter beyond the headline number. First, the model&#8217;s coordinates now map 1:1 with actual pixels, so there&#8217;s no scale-factor math required for any computer-use agent that needs to point at specific UI elements. Second, the upgrades extend beyond resolution: low-level perception (pointing, measuring, counting) and image localization (bounding-box detection) both improved.</p><p>The biggest reported lift comes from XBOW, which builds autonomous penetration-testing systems. Per their <a href="https://www.anthropic.com/news/claude-opus-4-7">testimonial in the launch post</a>, visual acuity moved from 54.5% on Opus 4.6 to 98.5% on Opus 4.7. That is the kind of step function that obsoletes architectural workarounds.
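For concreteness, the workaround being obsoleted looked roughly like this: a hypothetical rescaling helper built around the old 1568px ceiling, with the new 2576px ceiling shown for contrast.

```python
def to_screen_coords(x: int, y: int, original_px: tuple[int, int],
                     max_edge: int = 1568) -> tuple[int, int]:
    """Pre-4.7 pattern: undo the downsampling applied to fit the old
    resolution ceiling before acting on model-reported coordinates."""
    longest = max(original_px)
    scale = longest / max_edge if longest > max_edge else 1.0
    return round(x * scale), round(y * scale)

# Under the new 2576px ceiling, a 2560x1440 screenshot passes through
# unscaled, so the conversion collapses to the identity:
assert to_screen_coords(100, 200, (2560, 1440), max_edge=2576) == (100, 200)
```

If a helper like this exists in your pipeline, the 1:1 coordinate mapping makes it dead code for anything within the new ceiling.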
If your computer-use or document-analysis agent has ever included logic to chunk, crop, or downsample images to compensate for the previous resolution ceiling, that code is now technical debt. One tradeoff to plan for: higher-resolution images consume more tokens &#8212; downsample images before sending if the additional fidelity is unnecessary.</p><div><hr></div><h2>File-system memory improvements</h2><p>Per the <a href="https://platform.claude.com/docs/en/about-claude/models/whats-new-claude-4-7">docs</a>, Opus 4.7 is better at writing and using file-system-based memory. If an agent maintains a scratchpad, notes file, or structured memory store across turns, that agent should improve at jotting down notes to itself and leveraging its notes in future tasks.</p><p>For teams that have built persistent agents &#8212; the kind that work across multiple sessions on long-running projects &#8212; this is a quietly significant improvement. The agent that previously needed extensive context restoration at the start of each session can now do more of that work itself by writing better notes and using them more effectively. Anthropic&#8217;s <a href="https://platform.claude.com/docs/en/agents-and-tools/tool-use/memory-tool">client-side memory tool</a> gives you a managed scratchpad if you do not want to roll your own.</p><p>The downstream effect is fewer tokens spent on context restoration and more on actual work. Multi-session agentic workflows that previously felt like they were starting from scratch each time should feel more continuous.</p><div><hr></div><h2>Training and the cyber capability story</h2><p>The most editorially interesting decision in this release is what Anthropic deliberately did <em>not</em> improve. Per the <a href="https://www.anthropic.com/news/claude-opus-4-7">launch post</a>, during training Anthropic experimented with efforts to differentially reduce Opus 4.7&#8217;s cyber capabilities relative to Mythos Preview. 
The model also ships with safeguards that automatically detect and block requests that indicate prohibited or high-risk cybersecurity uses.</p><p>This is the first generally available model carrying the <a href="https://www.anthropic.com/glasswing">Project Glasswing</a> safeguard stack &#8212; Anthropic&#8217;s approach to staging powerful model releases by testing new safeguards on less-capable models before broader rollout of Mythos-class capabilities. Per <a href="https://www.vellum.ai/blog/claude-opus-4-7-benchmarks-explained">Vellum AI&#8217;s benchmark analysis</a>, on CyberGym, Opus 4.7 scores 73.1%, effectively flat against Opus 4.6&#8217;s revised 73.8%, while Mythos Preview scores 83.1% on the same benchmark but remains restricted to vetted partners.</p><p>For production teams, two takeaways. First, if you have legitimate security workloads &#8212; vulnerability research, penetration testing, red-teaming &#8212; the Cyber Verification Program is the path to reduced restrictions. Apply early; the program is new and the enrollment cycle is unclear. Second, the safeguard-first deployment pattern is likely to repeat. Anthropic states that what they learn from real-world deployment of these safeguards will inform their goal of a broad release of Mythos-class models, which means the next Mythos-class model will likely not arrive without similar testing on a less capable model first.</p><div><hr></div><h2>What the alignment evals actually say</h2><p>The safety profile is honest about being incomplete. 
Per the <a href="https://www.anthropic.com/news/claude-opus-4-7">launch post</a>, Anthropic&#8217;s alignment assessment concluded that the model is &#8220;largely well-aligned and trustworthy, though not fully ideal in its behavior.&#8221; Mythos Preview remains the better-aligned model by Anthropic&#8217;s own evaluations.</p><p>Specifics worth knowing if you operate Opus 4.7 in user-facing contexts:</p><ul><li><p>Honesty and resistance to malicious prompt injection both improve on Opus 4.6. For agents that consume web content, customer documents, or third-party tool output, prompt injection resistance is the most active reliability threat surface, and the improvement is meaningful.</p></li><li><p>The model is modestly weaker on overly detailed harm-reduction advice for controlled substances.</p></li><li><p>Per <a href="https://the-decoder.com/anthropics-claude-opus-4-7-makes-a-big-leap-in-coding-while-deliberately-scaling-back-cyber-capabilities/">reporting by The Decoder</a> on the system card, Opus 4.7 still refuses to assist in 33% of simulated AI safety research tasks, a significant drop from 88% with Opus 4.6. Still imperfect, but a categorical shift.</p></li><li><p>The system card distinguishes between factual hallucinations (wrong claims about the world) and input hallucinations (the model acting as if it has access to a tool or attachment that doesn&#8217;t actually exist), and Opus 4.7 performs better than or on par with Opus 4.6 across factual hallucination benchmarks.</p></li></ul><p>The customer feedback in the launch post is consistent with these numbers. Hex says the model correctly reports when data is missing instead of providing plausible-but-incorrect fallbacks, and resists dissonant-data traps that even Opus 4.6 falls for. Vercel notes the model is more honest about its own limits and even runs proofs on systems code before starting work &#8212; behavior they had not seen in earlier Claude models.
Notion measured a 14% improvement at fewer tokens and a third of the tool errors, with the model continuing to execute through tool failures that previously stopped Opus cold.</p><p>None of these are intelligence claims. They are behavioral consistency claims. For anyone operating the model in production, behavioral consistency is the metric that drives or kills a deployment.</p><div><hr></div><h2>The cost story (with real numbers)</h2><p>Pricing has not changed: $5 per million input tokens, $25 per million output tokens. Three things that have changed will move your actual bill:</p><p><strong>The tokenizer.</strong> As covered above, expect 1.0&#8211;1.35x more tokens on the same text. The token efficiency of Claude Opus 4.7 can vary by workload shape. The first thing to measure on your traffic before any production rollout.</p><p><strong>Higher effort means more thinking.</strong> Per the launch post, Opus 4.7 thinks more at higher effort levels, particularly on later turns in agentic settings &#8212; this improves reliability on hard problems but produces more output tokens. Anthropic&#8217;s own internal coding evaluation shows token usage improving across all effort levels for that specific workload, but the result is workload-dependent.</p><p><strong>Counter-evidence from actual deployments.</strong> Per Box&#8217;s Head of AI Yashodha Bhavnani <a href="https://9to5mac.com/2026/04/16/anthropic-reveals-new-opus-4-7-model-with-focus-on-advanced-software-engineering/">as reported by 9to5Mac</a>, in Box&#8217;s evaluations Opus 4.7 had a 56% reduction in model calls and 50% reduction in tool calls. The Hex observation that low-effort 4.7 matches medium-effort 4.6 points the same direction. The honest read: per-token costs may rise; per-task costs often fall, because the model finishes work in fewer iterations. 
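Instrumenting that distinction takes a few lines. A sketch using the list prices above; the call counts are illustrative, not measurements:

```python
IN_RATE = 5.00 / 1_000_000    # $ per input token, Opus 4.7 list price
OUT_RATE = 25.00 / 1_000_000  # $ per output token

def cost_per_task(calls: list[dict]) -> float:
    """Sum usage across every model call one task made; each entry mirrors
    the API's usage block (input_tokens / output_tokens)."""
    return sum(c["input_tokens"] * IN_RATE + c["output_tokens"] * OUT_RATE
               for c in calls)

# Eight cheap iterations vs. four iterations at ~1.2x tokens per call:
old_style = cost_per_task([{"input_tokens": 40_000, "output_tokens": 4_000}] * 8)
new_style = cost_per_task([{"input_tokens": 48_000, "output_tokens": 4_800}] * 4)
assert new_style < old_style  # fewer iterations beat cheaper per-call tokens
```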
Whether your bill goes up or down depends on whether your workflow is throttled by tokens-per-call or by calls-per-task.</p><p>The practical playbook: instrument cost-per-completed-task, not just tokens-per-call, before you decide whether the upgrade is favorable for your specific workload.</p><div><hr></div><h2>Claude Code: /ultrareview, auto mode, and new defaults</h2><p>For Claude Code users, three changes ship alongside the model:</p><p><code>/ultrareview</code><strong> slash command.</strong> A dedicated review session that reads through changes and flags bugs and design issues a careful reviewer would catch. Pro and Max Claude Code users get three free ultrareviews to try it out.</p><p><strong>Auto mode extended to Max.</strong> Auto mode is a permissions option where Claude makes decisions on your behalf, meaning longer tasks run with fewer interruptions and with less risk than skipping all permissions. Per <a href="https://9to5mac.com/2026/04/16/anthropic-reveals-new-opus-4-7-model-with-focus-on-advanced-software-engineering/">9to5Mac&#8217;s reporting</a>, it was previously available for Teams, Enterprise, and API customers, and is now also available to Max plan subscribers.</p><p><code>xhigh</code><strong> is now the default in Claude Code</strong> across all plans and providers on Opus 4.7. Per the <a href="https://code.claude.com/docs/en/model-config">Claude Code docs</a>, when you first run Opus 4.7, Claude Code applies xhigh even if you previously set a different effort level for Opus 4.6 or Sonnet 4.6. Sessions will use more thinking tokens by default, which produces higher-quality results at slightly higher cost. 
Override via <code>/effort high</code> if you preferred the old behavior.</p><div><hr></div><h2>Migration playbook</h2><p>A concrete sequence for moving production workloads, distilled from Anthropic&#8217;s <a href="https://platform.claude.com/docs/en/about-claude/models/migration-guide">official migration guide</a>:</p><p>Audit your existing prompts against the new literal instruction-following behavior on your top three workflows. Look specifically for bullet-list suggestions, imperative verbs used loosely, and any prompt that depends on the model &#8220;filling in&#8221; implied context.</p><p>Re-test integrations that set <code>thinking: {"type": "enabled"}</code> or any sampling parameter. Both will return 400 errors now. Migrate to adaptive thinking with effort as the depth control.</p><p>Measure tokenizer impact on a representative sample of real traffic before extrapolating cost. Code-heavy and prose-heavy workloads land at different points in the 1.0&#8211;1.35x band.</p><p>Set <code>task_budget</code> on long-running agentic workflows. Even if you do not yet need it as a cost guard, the discipline of declaring an upper bound forces clarity on what &#8220;done&#8221; looks like for autonomous runs.</p><p>If you are running computer-use agents, prioritize re-evaluating the vision pipeline. The 3.75MP ceiling and 1:1 coordinate mapping change architectural decisions that were made under earlier constraints.</p><p>If you have legitimate security workloads, apply to the Cyber Verification Program. The new safeguards will refuse some requests that Opus 4.6 handled.</p><p>For teams running Opus 4.6 at high or max as a reliability fallback, test Opus 4.7 one tier lower against the same evaluations. The cost-per-task math may justify staying at lower effort.</p><div><hr></div><h2>Bottom line</h2><p>Opus 4.7 is the clearest signal yet that frontier model releases are bifurcating along a new axis. 
One axis is raw capability, where the field has visibly converged &#8212; on graduate-level reasoning measured by GPQA Diamond, <a href="https://thenextweb.com/news/anthropic-claude-opus-4-7-coding-agentic-benchmarks-release">as reported by The Next Web</a>, Opus 4.7 scores 94.2%, GPT-5.4 Pro scores 94.4%, and Gemini 3.1 Pro scores 94.3%, with the differences within noise. The other axis is operational maturity: how predictably the model behaves under load, how cleanly it integrates with engineering controls, how honestly it reports its own limits.</p><p>Anthropic invested in the second axis. Self-verification before reporting, loop resistance, lower variance, fewer tool errors, honest uncertainty, task-aware budgets, literal instruction following, prompt injection resistance &#8212; the entire shape of this release is about the model being a better operational citizen, not a smarter conversationalist. The benchmark gains follow from that engineering. They do not lead it.</p><p>For anyone running agents in production, the upgrade is straightforward but the prompt audit is real. For anyone designing new agentic workflows, the launch post explicitly frames this as the model where users can hand off their hardest work with less supervision than before &#8212; a claim worth testing against your own evaluations rather than taking on faith.</p><p>The next model release will tell us whether this becomes the new norm. 
If it does, the era of treating frontier models as raw intelligence to be wrangled by external scaffolding is ending, and the era of treating them as engineered systems with first-class operational primitives is beginning.</p><p>Opus 4.7 is the strongest single data point so far that we are already in that second era.</p><div><hr></div><h2>Sources &amp; further reading</h2><p><strong>Primary (Anthropic):</strong></p><ul><li><p><a href="https://www.anthropic.com/news/claude-opus-4-7">Introducing Claude Opus 4.7</a> &#8212; the official launch post, including all partner testimonials cited above</p></li><li><p><a href="https://platform.claude.com/docs/en/about-claude/models/whats-new-claude-4-7">What&#8217;s new in Claude Opus 4.7</a> &#8212; developer documentation covering breaking changes, behavior shifts, and capability improvements</p></li><li><p><a href="https://platform.claude.com/docs/en/about-claude/models/migration-guide">Migration guide: Opus 4.6 &#8594; Opus 4.7</a> &#8212; official upgrade guidance</p></li><li><p><a href="https://platform.claude.com/docs/en/build-with-claude/effort">Effort parameter documentation</a> &#8212; recommended effort levels per workload type</p></li><li><p><a href="https://platform.claude.com/docs/en/build-with-claude/task-budgets">Task budgets documentation</a> &#8212; full setup and tuning guidance</p></li><li><p><a href="https://code.claude.com/docs/en/model-config">Claude Code model configuration</a> &#8212; Claude Code-specific defaults and overrides</p></li><li><p><a href="https://www.anthropic.com/glasswing">Project Glasswing</a> &#8212; context for the cyber capability staging strategy</p></li><li><p><a href="https://claude.com/form/cyber-use-case">Cyber Verification Program</a> &#8212; application form for security professionals</p></li><li><p>Claude Opus 4.7 System Card &#8212; referenced throughout the launch post</p></li></ul><p><strong>Secondary (third-party reporting and analysis):</strong></p><ul><li><p><a 
href="https://www.vellum.ai/blog/claude-opus-4-7-benchmarks-explained">Vellum AI: Claude Opus 4.7 Benchmarks Explained</a> &#8212; source for CyberGym scores cited above</p></li><li><p><a href="https://the-decoder.com/anthropics-claude-opus-4-7-makes-a-big-leap-in-coding-while-deliberately-scaling-back-cyber-capabilities/">The Decoder: Anthropic&#8217;s Claude Opus 4.7 makes a big leap in coding</a> &#8212; source for the AI safety research refusal numbers from the system card</p></li><li><p><a href="https://9to5mac.com/2026/04/16/anthropic-reveals-new-opus-4-7-model-with-focus-on-advanced-software-engineering/">9to5Mac: Anthropic reveals new Opus 4.7 model</a> &#8212; source for Box&#8217;s deployment numbers and auto mode availability details</p></li><li><p><a href="https://thenextweb.com/news/anthropic-claude-opus-4-7-coding-agentic-benchmarks-release">The Next Web: Claude Opus 4.7 leads on SWE-bench and agentic reasoning</a> &#8212; source for cross-model GPQA Diamond comparison</p></li></ul><div><hr></div><p><em>Subscribe to AI Engineer Weekly for technical breakdowns like this on every major model release, plus original analysis on production AI engineering. Forward to one engineer who would benefit.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share&quot;,&quot;text&quot;:&quot;Share AI Engineer Weekly&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://theairuntime.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share"><span>Share AI Engineer Weekly</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[The Responses API Is OpenAI’s Bet That State Belongs on the Server]]></title><description><![CDATA[OpenAI&#8217;s new API primitive replaces three years of duct tape with one opinionated call. 
Here&#8217;s every feature that matters, what it fixes, and how to migrate a real customer support system.]]></description><link>https://theairuntime.com/p/the-responses-api-is-openais-bet</link><guid isPermaLink="false">https://theairuntime.com/p/the-responses-api-is-openais-bet</guid><dc:creator><![CDATA[The AI Runtime]]></dc:creator><pubDate>Thu, 16 Apr 2026 11:03:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!bAMf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5d32e28-1b43-4da9-af88-cc54c96fce47_1024x559.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p><strong>TL;DR</strong> - OpenAI launched the Responses API in March 2025 to replace both Chat Completions (for new projects) and the Assistants API (sunsetting August 2026). The core bet: move conversation state, reasoning token persistence, and tool execution to the server so developers stop rebuilding the same plumbing. The result is 40&#8211;80% better cache utilization than Chat Completions, chain-of-thought that survives across turns, built-in tools (web search, file search, code interpreter, computer use, MCP), and a compaction system that lets agents run beyond the context window. If you&#8217;re building anything multi-turn on OpenAI today, the Responses API isn&#8217;t optional &#8212; it&#8217;s the surface where new capabilities land first.</p></div><h2>The Problem the Responses API Solves</h2><p>Every developer who has built a production chatbot on the Chat Completions API knows the ritual. User sends a message. You fetch the entire conversation history from your database. You prepend the system prompt. You serialize the whole thing into a <code>messages</code> array. You send it. You get a response. You store it. 
Next turn, you do it all again &#8212; with one more message appended.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>This works. It also wastes money, breaks prompt caching, and throws away the model&#8217;s reasoning between turns.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bAMf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5d32e28-1b43-4da9-af88-cc54c96fce47_1024x559.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bAMf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5d32e28-1b43-4da9-af88-cc54c96fce47_1024x559.png 424w, https://substackcdn.com/image/fetch/$s_!bAMf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5d32e28-1b43-4da9-af88-cc54c96fce47_1024x559.png 848w, 
https://substackcdn.com/image/fetch/$s_!bAMf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5d32e28-1b43-4da9-af88-cc54c96fce47_1024x559.png 1272w, https://substackcdn.com/image/fetch/$s_!bAMf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5d32e28-1b43-4da9-af88-cc54c96fce47_1024x559.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bAMf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5d32e28-1b43-4da9-af88-cc54c96fce47_1024x559.png" width="1024" height="559" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c5d32e28-1b43-4da9-af88-cc54c96fce47_1024x559.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:559,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1085296,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/194323702?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5d32e28-1b43-4da9-af88-cc54c96fce47_1024x559.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!bAMf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5d32e28-1b43-4da9-af88-cc54c96fce47_1024x559.png 424w, 
https://substackcdn.com/image/fetch/$s_!bAMf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5d32e28-1b43-4da9-af88-cc54c96fce47_1024x559.png 848w, https://substackcdn.com/image/fetch/$s_!bAMf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5d32e28-1b43-4da9-af88-cc54c96fce47_1024x559.png 1272w, https://substackcdn.com/image/fetch/$s_!bAMf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5d32e28-1b43-4da9-af88-cc54c96fce47_1024x559.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>                                                                   Responses API</em></p><p>The Assistants API tried to fix this in late 2023 by moving state server-side. Persistent threads. Managed runs. Built-in tools. The abstraction was right, but the execution was painful: creating a thread, adding a message, kicking off a run, polling for completion, then finally retrieving the response. Five API calls for one answer. Rate limits tied to threads. Opaque state that was hard to debug. And because no other provider implemented the Assistants API, adopting it meant full vendor lock-in to a perpetual beta.</p><p>The Responses API is OpenAI&#8217;s second attempt. It takes the right ideas from Assistants &#8212; server-side state, built-in tools, persistent reasoning &#8212; and delivers them through the simplicity of a single API call. No threads. No runs. No polling.</p><p>Every architectural choice has a regime where it&#8217;s right and a regime where it&#8217;s wrong. Stateless APIs were the right answer for the workloads LLMs were first built against: classification, single-turn Q&amp;A, one-shot generation. What you sent was what you paid for, and the abstraction was symmetric, clean, and cheap to reason about.</p><p>Agentic systems break that regime. An agent isn&#8217;t a classifier &#8212; it&#8217;s a sequential decision process in which every step depends on the reasoning, tool calls, and results of every prior step. Forcing that shape onto a stateless API creates what I call the <strong>Stateless Tax</strong> &#8212; three compounding costs that scale with conversation depth and never appear as a single line item on your bill.</p><p><strong>Replay cost</strong> is the visible one. A 20-turn conversation resends 20 messages every turn, with the system prompt bolted to the front each time. 
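</p><p>A quick sketch makes the replay tax concrete. With illustrative numbers (an 800-token system prompt, 150 tokens per user or assistant message, no caching), cumulative input tokens grow quadratically with conversation length:</p><pre><code><code># Cumulative input tokens when every turn replays the full history
# (illustrative numbers, not a measured benchmark)
def replay_tokens(turns, system_tokens=800, msg_tokens=150):
    total, history = 0, 0
    for _ in range(turns):
        history += msg_tokens             # new user message joins the history
        total += system_tokens + history  # the whole thing is resent
        history += msg_tokens             # so does the assistant reply
    return total

print(replay_tokens(1))   # 950
print(replay_tokens(20))  # 76000</code></code></pre><p>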
Prompt caching is supposed to fix this, and does &#8212; until a single dynamic token at the start of the prefix shatters the cache and you&#8217;re paying full freight again. The longer the agent runs, the larger the tax, and the more fragile the mitigation.</p><p><strong>Reasoning amnesia</strong> is the cost most developers never see. GPT-5 and o3 generate hidden chain-of-thought tokens that shape the final answer. On a stateless API, those tokens are discarded the moment the response returns. Next turn, the model reasons from absolute zero &#8212; not from where it left off. The conversation looks continuous to the user; the cognition restarts on every call. This is why OpenAI&#8217;s own evals show a ~3% SWE-bench lift and a ~4-point Tau-Bench Retail gain just from switching APIs, with no model change. Persisting reasoning isn&#8217;t a minor optimization. It&#8217;s the model being functionally smarter, because it stops getting wiped between turns.</p><p><strong>Observability debt</strong> is the silent one. Stateless APIs return a final message; everything between input and output &#8212; tool calls, reasoning items, retrieval decisions &#8212; is opaque by construction. You can reconstruct it with careful logging, but you&#8217;re rebuilding state the API already had and discarded. In production debugging, this is the difference between a stack trace and a single error code.</p><p>Server-managed state collapses all three costs into a single API primitive. Response chains eliminate replay. Reasoning items persist cognition across turns. Typed output items turn every step the agent took into an inspectable artifact.</p><p>This is why calling the Responses API &#8220;a better Chat Completions&#8221; undersells what actually happened in March 2025. It&#8217;s the first major commercial inference API to treat agentic workloads as a distinct architectural category &#8212; one where statelessness isn&#8217;t the clean default. 
It&#8217;s a misconfiguration that gets more expensive the longer your agent runs.</p><div><hr></div><h2>The Nine Features That Matter</h2><h3>1. Server-Side State via <code>store</code> and <code>previous_response_id</code></h3><p>This is the single biggest architectural change. With Chat Completions, you resend the entire conversation every turn. With the Responses API, you set <code>store: true</code> and the server remembers. On the next turn, pass <code>previous_response_id</code> instead of the full history.</p><pre><code><code># Turn 1
response1 = client.responses.create(
    model="gpt-5",
    store=True,
    instructions="You are a customer support agent for Acme Corp.",
    input="What's your return policy for electronics?"
)
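
# Inspect the result (output_text is the official SDK's convenience
# accessor for the final text; the full typed items are on response1.output)
print(response1.output_text)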

# Turn 2 &#8212; no history resending needed
response2 = client.responses.create(
    model="gpt-5",
    store=True,
    previous_response_id=response1.id,
    input="What if I lost the receipt?"
)</code></code></pre><p>Response objects are saved for 30 days by default. You can delete them explicitly with <code>client.responses.delete(response_id)</code>. For organizations with Zero Data Retention requirements, OpenAI provides encrypted reasoning items &#8212; you get the reasoning persistence benefit without server-side storage.</p><p><strong>Why this matters:</strong> A 20-turn customer support conversation on Chat Completions resends 20 messages every turn. On the Responses API, you send exactly one: the new user input. The server handles the rest.</p><h3>2. Reasoning Token Persistence</h3><p>This is the feature most developers don&#8217;t know they&#8217;re missing.</p><p>When you use a reasoning model like GPT-5 or o3 through Chat Completions, the model generates chain-of-thought tokens during inference. But those tokens aren&#8217;t returned to you. On the next turn, the model starts reasoning from scratch &#8212; like a detective who forgets all the clues every time they leave the room.</p><p>With the Responses API&#8217;s <code>previous_response_id</code>, reasoning tokens from the previous turn survive into the next turn. The model builds on its prior thinking instead of starting over.</p><p>OpenAI&#8217;s internal evals show a 3% improvement on SWE-bench with the same prompt and setup when using Responses instead of Chat Completions. That number sounds modest, but on agentic benchmarks like TAU-bench the gap widens to 5%, because multi-step reasoning tasks compound the benefit of persistent chain-of-thought.</p><h3>3. Built-In Tools</h3><p>Chat Completions gives you function calling &#8212; you define schemas, the model returns <code>tool_calls</code>, you execute them, you send results back. Every tool call is a round trip through your backend.</p><p>The Responses API adds hosted tools that OpenAI executes for you:</p><pre><code><code>response = client.responses.create(
    model="gpt-5",
    instructions="You are a research assistant.",
    input="What were the key announcements at GTC 2026?",
    tools=[
        {"type": "web_search"},         # OpenAI runs the search
        {"type": "code_interpreter"},   # OpenAI runs the code
        {"type": "file_search"},        # OpenAI searches uploaded files
        {"type": "computer_use"},       # Model interacts with UIs
        {"type": "mcp"},                # Connect to external MCP servers
    ]
)</code></code></pre><p>Because tool execution happens server-side for hosted tools, you eliminate the round-trip latency of bouncing every call through your own backend. You can still define custom function tools alongside the hosted ones &#8212; the two compose naturally.</p><p>The <code>web_search</code> tool uses the same models powering ChatGPT search, which score around 90% accuracy on the SimpleQA benchmark &#8212; dramatically better than plain GPT models without search. File search integrates with OpenAI&#8217;s vector stores for a RAG pipeline without custom infrastructure. And the MCP tool connects to any Model Context Protocol server, meaning your agent can interact with external services through a standardized interface.</p><h3>4. The <code>instructions</code> Parameter Replaces System Messages</h3><p>Chat Completions overloads the <code>messages</code> array with a <code>system</code> role message. The Responses API separates concerns: <code>instructions</code> define what the model is, <code>input</code> defines what the user asks.</p><pre><code><code>response = client.responses.create(
    model="gpt-5",
    instructions="You are a tax assistant. Always cite relevant IRS publications.",
    input="What deductions can I claim for my home office?"
)</code></code></pre><p>This isn&#8217;t just cosmetic. Because <code>instructions</code> sit at the start of the context as a stable prefix, they cache far more effectively than a system message buried in a mutable <code>messages</code> array. The architectural separation between static identity and dynamic conversation is what enables the 40&#8211;80% cache improvement OpenAI reports in internal tests.</p><h3>5. Output Items Instead of Choices</h3><p>Chat Completions returns a <code>choices</code> array where each choice contains a single <code>message</code>. The Responses API returns an <code>output</code> array of typed items. A single response can contain reasoning items, tool calls, tool results, and the final message &#8212; all as separate, inspectable objects.</p><pre><code><code>output: [
  { type: "reasoning",    ... },   # Chain-of-thought (if visible)
  { type: "tool_call",    ... },   # Tool invocation
  { type: "tool_result",  ... },   # Tool output
  { type: "message",      ... },   # Final text response
]</code></code></pre><p>This is transformative for debugging and observability. With Chat Completions, tool execution is a black box &#8212; you see what went in and what came out, but the intermediate steps are invisible. With Items, you get receipts. Every step the model took is an inspectable object in the response. You can build richer UIs, structured audit logs, and step-by-step tracing from a single response.</p><h3>6. The Conversations API</h3><p>For applications that need durable, long-lived conversations &#8212; think customer support tickets that span days &#8212; the Conversations API provides a persistent container:</p><pre><code><code># Create a persistent conversation
conversation = client.conversations.create(
    metadata={"user_id": "user_123", "session_type": "support"}
)
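
# Accumulated items (messages, tool calls, outputs) can be listed later;
# the exact method surface may vary by SDK version (assumption, check docs):
# items = client.conversations.items.list(conversation.id)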

# Use it across multiple responses
response = client.responses.create(
    model="gpt-5",
    store=True,
    conversation=conversation.id,
    input="How do I reset my password?"
)</code></code></pre><p>Conversations persist indefinitely (no 30-day TTL like standalone responses). You can retrieve all items from a conversation, fork it at any point, and resume across sessions and devices. It replaces the Assistants API&#8217;s Threads concept without the polling overhead.</p><h3>7. Compaction for Long-Running Agents</h3><p>Every agentic workflow eventually hits the context window ceiling. The Responses API introduces compaction &#8212; an intelligent summarization of older conversation content to make room for new work while preserving critical context.</p><p>Two modes are available. Server-side compaction triggers automatically when the context crosses a threshold you set:</p><pre><code><code>response = client.responses.create(
    model="gpt-5.4",
    input=conversation_history,
    store=False,
    context_management=[{
        "type": "compaction",
        "compact_threshold": 200000
    }]
)</code></code></pre><p>Client-side compaction gives you explicit control via the <code>/responses/compact</code> endpoint &#8212; you send a full context window, and the API returns a compressed version with an encrypted compaction item that carries forward key state.</p><p>This is what enables GPT-5.4 to sustain coherent progress across agent trajectories that would previously collapse when the context window filled up. The compaction endpoint is fully stateless and ZDR-friendly.</p><h3>8. Tool Search for Large Tool Surfaces</h3><p>If your agent has 50+ function definitions, sending all of them in every request wastes tokens, breaks cache prefixes, and degrades tool selection accuracy. GPT-5.4 introduces tool search: deferred tool loading where the model dynamically discovers relevant tools at runtime.</p><p>Instead of defining every tool upfront, you make tools searchable. The model loads only the definitions it needs for the current request. This preserves cache performance, reduces token usage, and improves latency for enterprise applications with large tool inventories.</p><h3>9. Flexible Input Formats</h3><p>Chat Completions requires a <code>messages</code> array with <code>role</code> and <code>content</code> objects. The Responses API accepts three formats:</p><pre><code><code># Simple string
input="What is the return policy?"

# Message array (familiar from Chat Completions)
input=[{"role": "user", "content": "What is the return policy?"}]

# Multimodal input with images, audio, documents
input=[
    {"role": "user", "content": [
        {"type": "input_text", "text": "Summarize this document"},
        {"type": "input_file", "file_id": "file_abc123"}
    ]}
]</code></code></pre><p>The string shorthand eliminates boilerplate for simple single-turn calls. The multimodal support makes text, images, PDFs, and audio first-class citizens in the same input array.</p><div><hr></div><h2>Case Study: Migrating a Customer Support RAG System</h2><p>Let&#8217;s make this concrete. Consider a mid-size e-commerce company running a customer support bot on Chat Completions with GPT-4o. Here&#8217;s their current architecture and what changes with a Responses API migration.</p><h3>The Before: Chat Completions Architecture</h3><pre><code><code>User message arrives
  &#8594; App fetches full conversation history from Postgres (all turns)
  &#8594; App prepends system prompt (800 tokens of instructions)
  &#8594; App calls embeddings API with the user's question
  &#8594; App queries Pinecone for relevant knowledge base chunks
  &#8594; App injects retrieved chunks into the messages array
  &#8594; App sends everything to Chat Completions
  &#8594; App parses response
  &#8594; App stores response in Postgres
  &#8594; If tool call: app executes tool, sends result back, waits again
  &#8594; Repeat for every turn</code></code></pre><p><strong>The pain points:</strong> Every turn resends the full conversation (0% prompt cache hit rate). The system prompt is 800 tokens of static instructions re-sent identically every request. RAG requires a separate embeddings call plus a vector DB query before every API call. Tool execution requires multiple round trips. A 15-turn conversation means the system prompt alone costs 12,000 redundant tokens. And the model&#8217;s reasoning resets between every turn.</p><h3>The After: Responses API Architecture</h3><pre><code><code>User message arrives
  &#8594; App sends one API call with previous_response_id + new input
  &#8594; Built-in file_search handles RAG (vector store configured once)
  &#8594; Built-in web_search handles real-time queries
  &#8594; Model's reasoning persists from prior turns
  &#8594; Static instructions cached via `instructions` parameter
  &#8594; Response returned with full item trail for observability
  &#8594; Repeat</code></code></pre><h3>What You Actually Save</h3><p><strong>Token costs:</strong> The <code>instructions</code> parameter creates a stable prefix that caches across turns. OpenAI&#8217;s extended prompt cache retention (up to 24 hours) means the system prompt stays cached throughout a support agent&#8217;s entire shift. For a 15-turn conversation, you eliminate roughly 12,000 redundant instruction tokens and gain 40&#8211;80% cache improvement on the remaining context.</p><p><strong>Infrastructure:</strong> You can retire your Pinecone instance (or equivalent) for this use case &#8212; file search with vector stores handles the RAG pipeline. You eliminate the embeddings call, the vector query, and the chunk injection logic.</p><p><strong>Quality:</strong> Reasoning persistence means the model remembers not just what was said, but how it was thinking about the problem. When a customer asks a follow-up that builds on a complex refund calculation, the model&#8217;s prior chain-of-thought carries forward instead of starting from scratch.</p><p><strong>Observability:</strong> Every response contains typed output items &#8212; you can log exactly which knowledge base documents were retrieved, which tools were called, and what reasoning the model applied, all from a single response object.</p><div><hr></div><h2>The Migration Decision Matrix</h2><p>Not every application should migrate today. Here&#8217;s how to think about it:</p><p><strong>Migrate now</strong> if you have multi-turn conversations with reasoning models, applications resending full conversation history every turn, workflows that need built-in web search or file search, or agentic systems hitting context window limits.</p><p><strong>Migrate incrementally</strong> if you have a mix of simple and complex flows. 
The Responses API is a superset of Chat Completions &#8212; you can migrate individual user flows that benefit from reasoning persistence while keeping simpler flows on Chat Completions.</p><p><strong>Wait and watch</strong> if you have single-turn, stateless workloads with no tools (basic classification, single-shot generation). Chat Completions handles these fine and will be supported indefinitely.</p><p><strong>Be cautious</strong> if your architecture requires full control over conversation state for compliance reasons, though encrypted reasoning items and ZDR support address most of these concerns.</p><div><hr></div><h2>The Assistants &#8594; Responses Concept Map</h2><p>If you&#8217;re migrating from the Assistants API (sunset: August 26, 2026), the mapping is straightforward:</p><pre><code><code>Assistants API              &#8594; Responses API
&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;&#9472;
Assistant object            &#8594; instructions + model + tools (inline config)
Thread                      &#8594; Conversation (or previous_response_id chain)
Message                     &#8594; Input items
Run (create &#8594; poll &#8594; get)   &#8594; Single responses.create() call
Run Steps                   &#8594; Output items (inspectable per-step)
Code Interpreter            &#8594; {"type": "code_interpreter"} built-in tool
File Search / Retrieval     &#8594; {"type": "file_search"} built-in tool
Thread-based state          &#8594; store: true + conversation or previous_response_id</code></code></pre><p>The biggest win: you go from a five-step async flow (create thread &#8594; add message &#8594; create run &#8594; poll status &#8594; get response) to a single synchronous API call that returns the complete result.</p><div><hr></div><h2>What to Watch</h2><p>The Responses API is clearly where OpenAI is investing. New capabilities &#8212; tool search, compaction, computer use, MCP support &#8212; are landing in Responses first, sometimes exclusively. GPT-5.4&#8217;s tool calling with <code>reasoning: none</code> is only supported in the Responses API, not Chat Completions.</p><p>But there are trade-offs to keep eyes on. Server-side state means you&#8217;re trusting OpenAI with your conversation data (responses are retained for 30 days by default). The in-memory fast path caches only the most recent response; older IDs are hydrated from persisted state when <code>store: true</code>, and if unresolvable you must fall back to full context. And despite being billed as simpler, the Items-based response format is a different mental model that takes adjustment.</p><p>The broader signal is architectural. OpenAI is pushing developers toward a world where the API provider manages state, runs tools, and handles context &#8212; and developers focus on defining behavior and building UIs. Whether that trade-off works for your stack depends on how much control you&#8217;re willing to delegate.</p><p>But for the majority of applications resending full conversation histories and rebuilding tool execution loops from scratch &#8212; the Responses API isn&#8217;t just an improvement. It&#8217;s the API you wished existed three years ago.</p><div><hr></div><p><em>Building on the Responses API or migrating from Assistants? I&#8217;d love to hear what&#8217;s working and what&#8217;s breaking. 
</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[You’re Paying 10x Too Much for LLM Inference (And Your Provider Already Has the Fix)]]></title><description><![CDATA[A practitioner&#8217;s guide to prompt caching across OpenAI, Anthropic, and Google &#8212; the single biggest lever for cutting cost and latency in production AI systems.]]></description><link>https://theairuntime.com/p/youre-paying-10x-too-much-for-llm</link><guid isPermaLink="false">https://theairuntime.com/p/youre-paying-10x-too-much-for-llm</guid><dc:creator><![CDATA[The AI Runtime]]></dc:creator><pubDate>Wed, 15 Apr 2026 11:03:33 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!pq1b!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2c6a4c1-4658-4e9a-85cc-178c0438d081_1408x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p><strong>TL;DR</strong> - Prompt caching stores the KV (key-value) computations from transformer attention layers so repeated prompt prefixes skip the expensive prefill step entirely. 
Every major provider now offers it, but they&#8217;ve made fundamentally different design choices: OpenAI caches automatically with zero code changes and now offers up to 90% discounts on newer models. Anthropic gives you explicit control with <code>cache_control</code> breakpoints and a strict hierarchy (tools &#8594; system &#8594; messages) that rewards careful prompt architecture. Google Gemini offers both implicit (automatic) and explicit caching with the longest TTL options &#8212; up to custom durations &#8212; plus per-hour storage fees for explicit caches. If you&#8217;re running a production AI application and haven&#8217;t optimized for cache hits, you&#8217;re leaving 50&#8211;90% of your inference budget on the table. Start by structuring your prompts with static content first and variable content last, then monitor <code>cached_tokens</code> in your API responses to measure your hit rate.</p></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2>Why This Matters Right Now</h2><p>Here&#8217;s a number that should make you uncomfortable: in a 100-turn coding session with Claude Opus, you&#8217;re sending roughly 10&#8211;20 million input tokens. Without caching, that&#8217;s $50&#8211;100 in input costs alone. With caching, it&#8217;s $10&#8211;19.</p><p>That&#8217;s not a hypothetical. 
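</p><p>The arithmetic is easy to check. With assumed round-number pricing (base input at $5 per million tokens, cache reads at 10% of base, cache writes at 1.25x base) and a ~90% hit rate on a 15M-token session:</p><pre><code><code># Rough input-cost model (assumed pricing in USD per million tokens)
def input_cost(million_tokens, hit_rate=0.0):
    base = 5.00
    if hit_rate == 0:
        return million_tokens * base      # no caching: everything at base rate
    hits = million_tokens * hit_rate      # billed at the cached-read rate
    misses = million_tokens - hits        # billed at the cache-write rate
    return hits * base * 0.10 + misses * base * 1.25

print(input_cost(15))        # 75.0
print(input_cost(15, 0.9))   # 16.125</code></code></pre><p>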
The Claude Code team has said publicly that prompt caching is the architectural constraint around which their entire product is built. They declare SEV incidents when cache hit rates drop.</p><p>And it&#8217;s not just Anthropic. OpenAI&#8217;s Prompt Caching 201 cookbook (published February 2026) shows their Realtime API offering a 98.75% discount on cached audio tokens &#8212; from $32 per million tokens down to $0.40. Google&#8217;s Gemini 2.5 Pro drops cached input from $1.25 to $0.13 per million tokens.</p><p>The question isn&#8217;t whether to use prompt caching. It&#8217;s whether you understand it well enough to actually get the cache hits you&#8217;re paying for.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pq1b!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2c6a4c1-4658-4e9a-85cc-178c0438d081_1408x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pq1b!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2c6a4c1-4658-4e9a-85cc-178c0438d081_1408x768.png 424w, https://substackcdn.com/image/fetch/$s_!pq1b!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2c6a4c1-4658-4e9a-85cc-178c0438d081_1408x768.png 848w, https://substackcdn.com/image/fetch/$s_!pq1b!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2c6a4c1-4658-4e9a-85cc-178c0438d081_1408x768.png 1272w, 
https://substackcdn.com/image/fetch/$s_!pq1b!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2c6a4c1-4658-4e9a-85cc-178c0438d081_1408x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pq1b!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2c6a4c1-4658-4e9a-85cc-178c0438d081_1408x768.png" width="1408" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b2c6a4c1-4658-4e9a-85cc-178c0438d081_1408x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1744224,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/194204037?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2c6a4c1-4658-4e9a-85cc-178c0438d081_1408x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pq1b!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2c6a4c1-4658-4e9a-85cc-178c0438d081_1408x768.png 424w, https://substackcdn.com/image/fetch/$s_!pq1b!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2c6a4c1-4658-4e9a-85cc-178c0438d081_1408x768.png 848w, 
https://substackcdn.com/image/fetch/$s_!pq1b!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2c6a4c1-4658-4e9a-85cc-178c0438d081_1408x768.png 1272w, https://substackcdn.com/image/fetch/$s_!pq1b!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2c6a4c1-4658-4e9a-85cc-178c0438d081_1408x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>                                                                      Prompt Caching</em></p><div><hr></div><h2>What&#8217;s 
Actually Being Cached (It&#8217;s Not What You Think)</h2><p>A common misconception is that prompt caching stores your text and retrieves it later, like a Redis layer for prompts. It doesn&#8217;t work that way.</p><p>LLM inference has two phases. In the <strong>prefill</strong> phase, the model processes every input token through its transformer layers, computing key and value projections inside the attention mechanism. These projections &#8212; the &#8220;KV cache&#8221; &#8212; capture how each token relates to every other token in the sequence. In the <strong>decode</strong> phase, the model generates output tokens one at a time, each step referencing the KV cache it built during prefill.</p><p>Prompt caching stores those KV projections in GPU memory. When your next request starts with the same prefix, the model skips recomputing those attention layers and jumps straight to processing new tokens. You&#8217;re not caching text. You&#8217;re caching the result of the most computationally expensive part of inference.</p><p>This is why the savings are so dramatic. Prefill is the dominant cost driver &#8212; it scales with both sequence length and model size. Skip it, and you cut latency by up to 80% and costs by up to 90%.</p><p>It also explains why caching only works on <strong>prefixes</strong>. The KV cache is sequential. Token 500&#8217;s attention values depend on tokens 1&#8211;499. You can&#8217;t cache the middle of a prompt because the middle depends on everything before it.</p><div><hr></div><h2>The Three Approaches: A Design Philosophy Comparison</h2><p>Each major provider has made distinct design choices about caching that reflect deeper philosophies about developer experience versus control.</p><h3>OpenAI: &#8220;It Just Works&#8221;</h3><p>OpenAI&#8217;s approach is fully automatic. There&#8217;s no flag to set, no API parameter to enable. 
If your prompt exceeds 1,024 tokens and shares a prefix with a recent request, the system attempts a cache hit behind the scenes.</p><p>The mechanism works through <strong>routing</strong>: OpenAI hashes the first ~256 tokens of your prompt and routes the request to a machine that recently processed a matching prefix. If that machine still has the KV cache in memory, you get a hit. Cache matches happen in 128-token increments &#8212; so if you change one token at position 2,048 in a 10,000-token prompt, you still get a cache hit on the first 2,048 tokens.</p><p><strong>What&#8217;s unique about OpenAI&#8217;s approach:</strong></p><ul><li><p><strong>Zero code changes required.</strong> You monitor cache performance by checking <code>usage.prompt_tokens_details.cached_tokens</code> in the response &#8212; but you don&#8217;t need to <em>do</em> anything to enable it.</p></li><li><p><code>prompt_cache_key</code><strong> parameter.</strong> This is OpenAI&#8217;s concession to developers who want more control. By setting a consistent key across related requests, you improve the odds that they route to the same machine. Useful when many requests share a common long prefix.</p></li><li><p><strong>Extended retention.</strong> Beyond the default 5&#8211;10 minute in-memory cache, OpenAI offers extended retention (up to 24 hours) via the <code>prompt_cache_retention</code> parameter. Same pricing either way.</p></li><li><p><strong>Flex Processing.</strong> For latency-insensitive workloads, <code>service_tier="flex"</code> gives you the same 50% Batch API discount but runs through the standard API, where you can tune cache locality more precisely. OpenAI&#8217;s own testing showed an 8.5% higher cache hit rate with Flex + extended caching versus Batch.</p></li></ul><p><strong>The trade-off:</strong> You have less deterministic control. Cache hits depend on routing, which depends on server-side decisions. 
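</p><p>As a sketch of the monitoring side, the hit-rate arithmetic over the <code>usage</code> payload looks like this (the <code>cache_hit_rate</code> helper and the token counts are illustrative; the field names are the ones named above):</p>

```python
# Minimal cache-hit-rate monitor for OpenAI responses. The `usage` dict mirrors
# the shape of `response.usage`; the numbers are made up for illustration.

def cache_hit_rate(usage: dict) -> float:
    """Fraction of input tokens that were served from the prompt cache."""
    cached = usage["prompt_tokens_details"]["cached_tokens"]
    total = usage["prompt_tokens"]
    return cached / total if total else 0.0

# Illustrative payload: a 10,240-token prompt where the first 8,960 tokens
# (a 128-token multiple) matched a cached prefix.
usage = {
    "prompt_tokens": 10_240,
    "prompt_tokens_details": {"cached_tokens": 8_960},
}

print(f"cache hit rate: {cache_hit_rate(usage):.1%}")  # cache hit rate: 87.5%
```

<p>Tracking this number per feature and per <code>prompt_cache_key</code> is what turns &#8220;caching is on&#8221; into a metric you can alert on.</p><p>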
You can influence routing with <code>prompt_cache_key</code>, but you can&#8217;t guarantee hits the way you can with Anthropic&#8217;s explicit breakpoints.</p><h3>Anthropic: &#8220;You Decide What Gets Cached&#8221;</h3><p>Anthropic takes the opposite approach. You explicitly mark what should be cached using <code>cache_control</code> parameters on individual content blocks. This gives you deterministic control &#8212; when you mark a block, Anthropic stores its KV projections and serves cache hits 100% of the time on matching prefixes (within the TTL window).</p><p>The key architectural detail is Anthropic&#8217;s <strong>strict processing hierarchy</strong>: Tools &#8594; System Message &#8594; Messages. Caching is cumulative along this chain, and changes at any level invalidate that level and everything below it. Change a tool definition? Your system prompt cache breaks too. Change the system prompt? Your conversation history cache breaks.</p><p><strong>What&#8217;s unique about Anthropic&#8217;s approach:</strong></p><ul><li><p><strong>Explicit breakpoints.</strong> Place <code>cache_control: {"type": "ephemeral"}</code> on up to 4 content blocks. The cache stores everything from the beginning of the prompt up to that breakpoint.</p></li><li><p><strong>Automatic caching mode.</strong> Anthropic now also offers a simpler path: add a single <code>cache_control</code> at the top level of your request, and the system automatically applies the breakpoint to the last cacheable block and moves it forward as conversations grow.</p></li><li><p><strong>Cache write surcharge.</strong> Unlike OpenAI (no extra fee for cache writes), Anthropic charges 1.25x the base input price for 5-minute cache writes and 2x for 1-hour cache writes. Cache reads are 0.1x &#8212; so a single cache read already more than pays back a 5-minute write&#8217;s surcharge, and about two reads cover a 1-hour write.</p></li><li><p><strong>Model-specific minimum thresholds.</strong> Claude Sonnet and Opus require at least 1,024 tokens to trigger caching. 
Claude Haiku 4.5 requires 4,096 tokens. Below these thresholds, your <code>cache_control</code> annotation is silently ignored.</p></li><li><p><strong>Extended TTL option.</strong> Beyond the default 5-minute window, you can set <code>"ttl": "1h"</code> for a 1-hour cache at the 2x write premium.</p></li></ul><p><strong>The trade-off:</strong> More setup work, more things that can silently break (JSON key ordering in tool definitions, subtle changes in system prompts), but also more predictable behavior. When you ask for a cache, you get a cache.</p><p><strong>Pricing multipliers (all models):</strong></p><table><thead><tr><th>Operation</th><th>Multiplier vs. base input</th></tr></thead><tbody><tr><td>Cache write (5-min)</td><td>1.25x</td></tr><tr><td>Cache write (1-hour)</td><td>2x</td></tr><tr><td>Cache read</td><td>0.1x</td></tr></tbody></table><h3>Google Gemini: &#8220;Choose Your Adventure&#8221;</h3><p>Google offers <strong>both</strong> implicit and explicit caching &#8212; and they work differently enough that you need to understand both.</p><p><strong>Implicit caching</strong> is automatic (enabled by default on Gemini 2.5 and newer). Like OpenAI, it detects repeated prefixes and applies discounts opportunistically. Unlike OpenAI, there&#8217;s no storage fee and no guarantee of savings &#8212; you get discounts only when the system determines a cache hit occurred.</p><p><strong>Explicit caching</strong> is a managed resource. You create a cache object via the API, assign it a TTL (default 60 minutes, customizable), and reference it by resource name in subsequent requests. This guarantees discounts but introduces <strong>storage costs</strong> &#8212; typically $1.00 per million tokens per hour, depending on the model.</p><p><strong>What&#8217;s unique about Google&#8217;s approach:</strong></p><ul><li><p><strong>Longest TTL flexibility.</strong> Explicit caches can be set to custom durations with configurable <code>ttl</code> or <code>expire_time</code>. 
No other provider offers this level of TTL control.</p></li><li><p><strong>Storage fees for explicit caches.</strong> This is the critical differentiator. OpenAI and Anthropic don&#8217;t charge for cache storage. Google does &#8212; approximately $1.00 per million tokens per hour. This means you need to do break-even math: a 100K-token cache costs about $0.10/hour. If cached reads save you $0.10+ per hour in input token discounts, you&#8217;re ahead.</p></li><li><p><strong>Multimodal caching.</strong> Gemini caches text, images, audio, and video &#8212; and each modality has different pricing for cached reads.</p></li><li><p><strong>Cache lifecycle management.</strong> You can update TTLs, list caches, and delete them explicitly &#8212; a level of cache management that neither OpenAI nor Anthropic provides.</p></li></ul><p><strong>Pricing multipliers (Gemini 2.5 Flash example):</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!KY7E!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffdf1e637-81d8-4afb-861b-8603caa297fe_726x250.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!KY7E!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffdf1e637-81d8-4afb-861b-8603caa297fe_726x250.png 424w, https://substackcdn.com/image/fetch/$s_!KY7E!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffdf1e637-81d8-4afb-861b-8603caa297fe_726x250.png 848w, 
https://substackcdn.com/image/fetch/$s_!KY7E!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffdf1e637-81d8-4afb-861b-8603caa297fe_726x250.png 1272w, https://substackcdn.com/image/fetch/$s_!KY7E!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffdf1e637-81d8-4afb-861b-8603caa297fe_726x250.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!KY7E!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffdf1e637-81d8-4afb-861b-8603caa297fe_726x250.png" width="726" height="250" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fdf1e637-81d8-4afb-861b-8603caa297fe_726x250.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:250,&quot;width&quot;:726,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:21243,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/194204037?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffdf1e637-81d8-4afb-861b-8603caa297fe_726x250.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!KY7E!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffdf1e637-81d8-4afb-861b-8603caa297fe_726x250.png 424w, 
https://substackcdn.com/image/fetch/$s_!KY7E!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffdf1e637-81d8-4afb-861b-8603caa297fe_726x250.png 848w, https://substackcdn.com/image/fetch/$s_!KY7E!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffdf1e637-81d8-4afb-861b-8603caa297fe_726x250.png 1272w, https://substackcdn.com/image/fetch/$s_!KY7E!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffdf1e637-81d8-4afb-861b-8603caa297fe_726x250.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line 
x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><h2>The Comparison Matrix That Actually Matters</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Em7t!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6eba25a1-5417-4808-8f64-3653442824fd_1492x803.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Em7t!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6eba25a1-5417-4808-8f64-3653442824fd_1492x803.png 424w, https://substackcdn.com/image/fetch/$s_!Em7t!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6eba25a1-5417-4808-8f64-3653442824fd_1492x803.png 848w, https://substackcdn.com/image/fetch/$s_!Em7t!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6eba25a1-5417-4808-8f64-3653442824fd_1492x803.png 1272w, https://substackcdn.com/image/fetch/$s_!Em7t!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6eba25a1-5417-4808-8f64-3653442824fd_1492x803.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Em7t!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6eba25a1-5417-4808-8f64-3653442824fd_1492x803.png" width="1456" height="784" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6eba25a1-5417-4808-8f64-3653442824fd_1492x803.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:784,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:881994,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/194204037?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6eba25a1-5417-4808-8f64-3653442824fd_1492x803.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Em7t!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6eba25a1-5417-4808-8f64-3653442824fd_1492x803.png 424w, https://substackcdn.com/image/fetch/$s_!Em7t!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6eba25a1-5417-4808-8f64-3653442824fd_1492x803.png 848w, https://substackcdn.com/image/fetch/$s_!Em7t!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6eba25a1-5417-4808-8f64-3653442824fd_1492x803.png 1272w, https://substackcdn.com/image/fetch/$s_!Em7t!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6eba25a1-5417-4808-8f64-3653442824fd_1492x803.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>                                                              Comparison Matrix</em></p><div><hr></div><h2>The Five Use Cases Where Caching Transforms Economics</h2><p><strong>1. Multi-turn chatbots and agents.</strong> Every turn resends the full conversation history. Without caching, turn 50 costs 50x what turn 1 costs. With caching, turns 2&#8211;50 only pay full price for the new message &#8212; everything before it is a cache hit.</p><p><strong>2. Document Q&amp;A.</strong> Embed a 100K-token document in the system prompt and let users ask questions. Without caching, each question reprocesses the entire document. With caching, the document is processed once and subsequent queries against it cost 90% less.</p><p><strong>3. Few-shot and many-shot prompting.</strong> High-quality few-shot examples can be 10K+ tokens. 
Caching lets you include 50&#8211;100 examples without paying full price on every call.</p><p><strong>4. Agentic tool use.</strong> Agents make multiple tool calls per task, each requiring a new API request with the full context. Tool definitions and system instructions remain stable across calls &#8212; perfect cache candidates.</p><p><strong>5. Code assistants.</strong> The canonical case. Claude Code&#8217;s system prompt alone is ~4,000 tokens. Add tool definitions, CLAUDE.md files, and conversation history, and you&#8217;re sending 100K+ tokens per turn. Caching keeps this economically viable.</p><div><hr></div><h2>What Breaks Your Cache (And How to Prevent It)</h2><p>The most expensive bug in production AI isn&#8217;t a wrong answer &#8212; it&#8217;s a silently broken cache. Here&#8217;s what invalidates caches across providers:</p><p><strong>Universal cache killers:</strong></p><ul><li><p>Changing any token in the cached prefix (even a single character)</p></li><li><p>Reordering JSON keys in tool definitions (watch out for languages like Go and Swift that randomize key order)</p></li><li><p>Adding timestamps or per-request IDs to system prompts</p></li><li><p>Switching models mid-session</p></li></ul><p><strong>Anthropic-specific:</strong></p><ul><li><p>Changing <code>tool_choice</code> parameter</p></li><li><p>Adding or removing images anywhere in the prompt</p></li><li><p>Enabling/disabling extended thinking or changing the thinking budget (invalidates message-level cache, but system and tool caches survive)</p></li><li><p>Exceeding 20 content blocks without additional <code>cache_control</code> markers</p></li></ul><p><strong>OpenAI-specific:</strong></p><ul><li><p>High request volume on the same prefix (&gt;15 RPM per <code>prompt_cache_key</code>) causing overflow to additional machines</p></li><li><p>The routing hash only considers ~256 tokens &#8212; so two prompts that differ only after token 256 might route to different 
machines</p></li></ul><p><strong>Google-specific:</strong></p><ul><li><p>Explicit caches can expire if TTL isn&#8217;t updated</p></li><li><p>Referencing a deleted or expired cache object causes request failure (implement retry logic that recreates the cache)</p></li></ul><div><hr></div><h2>Practical Prompt Architecture for Maximum Cache Hits</h2><p>The universal rule across all providers: <strong>static content first, variable content last.</strong></p><p>Think of your prompt as having concentric layers of stability:</p><pre><code>Most Stable (cache these)
&#9500;&#9472;&#9472; Tool definitions
&#9500;&#9472;&#9472; System instructions
&#9500;&#9472;&#9472; Reference documents / few-shot examples
&#9500;&#9472;&#9472; Conversation history (grows but prefix stays stable)
&#9492;&#9472;&#9472; Current user message
Most Variable (don't try to cache this)</code></pre><p>For <strong>Anthropic</strong>, place your first <code>cache_control</code> breakpoint after your system instructions and a second after your reference documents. Use automatic caching mode for the conversation history &#8212; it moves the breakpoint forward as the conversation grows.</p><p>For <strong>OpenAI</strong>, structure is the only lever you have (plus <code>prompt_cache_key</code>). Put your most stable, longest content at the very beginning. Don&#8217;t embed per-request metadata in your system prompt.</p><p>For <strong>Google</strong>, create an explicit cache for your reference documents and set an appropriate TTL. Use implicit caching for everything else.</p><div><hr></div><h2>The Decision Framework: Which Provider&#8217;s Caching Fits Your Use Case?</h2><p><strong>Choose OpenAI&#8217;s caching when</strong> you want zero implementation effort, you&#8217;re running standard chat or completion workloads, and you value simplicity over control. The newer GPT-5 family&#8217;s 90% discounts make this increasingly attractive.</p><p><strong>Choose Anthropic&#8217;s caching when</strong> you need guaranteed cache hits, you&#8217;re building long-context applications (document analysis, code assistants), and you&#8217;re willing to invest in prompt architecture. The explicit control means you can debug and optimize with certainty.</p><p><strong>Choose Google&#8217;s caching when</strong> you&#8217;re working with multimodal content (especially video and audio), you need long cache durations, or you&#8217;re already in the Google Cloud ecosystem. 
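</p><p>The break-even math for Google&#8217;s storage fees is short enough to sketch, using the illustrative prices from this article ($1.00 per million tokens per hour of storage; Gemini 2.5 Pro&#8217;s $1.25 uncached vs. $0.13 cached input). <code>breakeven_reads_per_hour</code> is a hypothetical helper, not an SDK call:</p>

```python
# How many reads per hour does an explicit Gemini cache need to pay for its
# own storage? Prices are the illustrative figures quoted in this article.

def breakeven_reads_per_hour(
    cache_tokens: int,
    storage_per_mtok_hour: float = 1.00,  # explicit-cache storage fee
    input_per_mtok: float = 1.25,         # uncached input price
    cached_per_mtok: float = 0.13,        # cached input price
) -> float:
    hourly_storage = cache_tokens / 1e6 * storage_per_mtok_hour
    savings_per_read = cache_tokens / 1e6 * (input_per_mtok - cached_per_mtok)
    return hourly_storage / savings_per_read

# A 100K-token cache: ~$0.10/hour storage vs. ~$0.112 saved per cached read,
# so a little under one query per hour already breaks even.
print(round(breakeven_reads_per_hour(100_000), 2))  # 0.89
```

<p>Note that the cache size cancels out: at these prices the break-even rate is the same ratio for any document, so the real question is simply query frequency versus cache lifetime.</p><p>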
Be aware of storage fees &#8212; do the break-even math.</p><div><hr></div><h2>Monitoring: The Metric That Tells You If You&#8217;re Doing It Right</h2><p>Regardless of provider, there&#8217;s one metric you should track: <strong>cache hit rate</strong>, defined as cached tokens divided by total input tokens.</p><p>For OpenAI, check <code>usage.prompt_tokens_details.cached_tokens</code> in every response. For Anthropic, monitor <code>cache_read_input_tokens</code> versus <code>cache_creation_input_tokens</code> plus <code>input_tokens</code>. For Google, look at <code>cachedContentTokenCount</code> in the response metadata.</p><p>A healthy production system should see 70%+ cache hit rates after the first few requests in a session. Claude Code reports 95%+ in sustained coding sessions. If you&#8217;re below 50%, something is breaking your cache &#8212; review the invalidation checklist above.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Model Bills Are the New Headcount]]></title><description><![CDATA[Inference costs are replacing salaries as the fastest-growing line item at AI startups. Nobody has a discipline for managing them. 
That&#8217;s about to change.]]></description><link>https://theairuntime.com/p/model-bills-are-the-new-headcount</link><guid isPermaLink="false">https://theairuntime.com/p/model-bills-are-the-new-headcount</guid><dc:creator><![CDATA[The AI Runtime]]></dc:creator><pubDate>Mon, 13 Apr 2026 11:03:49 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!tMqz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F698a321f-1629-42d1-82dd-ed16b0e56d08_1060x1008.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p><strong>TL;DR</strong> - At a growing number of AI startups, the monthly model inference bill has surpassed individual engineer salaries as the most scrutinized cost on the P&amp;L. This isn&#8217;t a temporary artifact of early adoption &#8212; it&#8217;s the permanent economic structure of AI-native businesses. Yet most teams manage inference costs the way early startups managed cloud bills: reactively, after the damage is done. The emerging discipline of Model Reliability Engineering (MRE) treats model behavior and model cost as two sides of the same operational problem, giving teams a framework to monitor, optimize, and control inference economics alongside output quality. If your model bill is growing faster than your revenue, you don&#8217;t have a pricing problem &#8212; you have an engineering problem.</p></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! 
Subscribe for free</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2>The New P&amp;L</h2><p>In 2024, when founders discussed their burn rate, the conversation was almost entirely about payroll. &#8220;We&#8217;re a team of twelve, burning $180K per month.&#8221; The model API line item &#8212; if it existed at all &#8212; was a rounding error. A few hundred dollars for prototyping.</p><p>In 2026, that conversation has inverted at AI-native companies. A team of four might burn $50K per month on salaries and $25K&#8211;$40K per month on inference. The model bill isn&#8217;t a rounding error &#8212; it&#8217;s the second-largest expense after payroll, and at some companies, it&#8217;s approaching the first.</p><p>This creates a cost structure that&#8217;s fundamentally different from traditional software businesses in three ways.</p><p>First, the marginal cost of serving a customer is non-trivial. In traditional SaaS, the marginal cost of an additional user is essentially zero &#8212; server costs are negligible per user. In AI-native products, every user interaction triggers model inference that costs real money. A complex query might cost $0.05&#8211;$0.50 in model calls. At scale, this adds up fast.</p><p>Second, costs are partially unpredictable. Traditional infrastructure scales predictably &#8212; you know roughly what a new server instance costs. Model costs depend on input complexity, output length, which model handles the request, retry rates, and dozens of other factors that vary by user and use case.</p><p>Third, cost and quality are directly coupled. 
In traditional software, you can usually cut costs without affecting user experience &#8212; optimize a query, compress an asset, cache a result. In AI systems, cheaper often means worse. Routing to a smaller model saves money but may degrade output quality. Shorter prompts cost less but may produce less reliable results. Every cost optimization decision is simultaneously a quality decision.</p><h2>Why Cloud-Era Thinking Doesn&#8217;t Work</h2><p>Most engineering teams default to treating model costs the way they treat cloud infrastructure costs. Set up billing alerts, review the dashboard monthly, optimize the biggest spenders when the bill gets uncomfortable.</p><p>This approach fails for AI inference because it addresses the wrong problem. Cloud cost optimization is primarily about resource utilization &#8212; right-sizing instances, eliminating waste, reserving capacity. The decisions are mostly independent of the product&#8217;s behavior.</p><p>Inference cost optimization is inseparable from product behavior. When you change how a model is called &#8212; the prompt, the model choice, the context window size &#8212; you change both the cost and the output. You can&#8217;t optimize one without affecting the other. An engineer who reduces inference costs by 40% but degrades response quality by 20% hasn&#8217;t saved money &#8212; they&#8217;ve broken the product.</p><p>This coupling is why inference economics requires its own discipline, not just a tab in your existing monitoring dashboard.</p><h2>Enter Model Reliability Engineering</h2><p>Model Reliability Engineering (MRE) is an engineering discipline that owns model behavior reliability in production &#8212; and inference economics is one of its core concerns.</p><p>MRE sits at the intersection of several existing disciplines. Site Reliability Engineering (SRE) gives it operational rigor &#8212; uptime targets, incident response, monitoring. MLOps gives it the deployment and pipeline perspective. 
AI Safety gives it the behavioral constraint framework. But none of these disciplines adequately cover the specific problem of maintaining reliable model behavior at manageable cost in production systems.</p><p>MRE addresses this through a two-layer architecture: <strong>Context Engineering</strong> (designing and managing what goes into the model) and <strong>Harness Engineering</strong> (building the infrastructure that wraps, monitors, and controls model interactions). Together, they form a framework for thinking about inference costs as an engineering problem, not a finance problem.</p><p>The MRE approach to inference economics centers on five operational concerns:</p><h3>1. Cost Observability</h3><p>You can&#8217;t optimize what you can&#8217;t see. Most teams track their aggregate model bill &#8212; total spend per month. That&#8217;s like tracking your total cloud bill without knowing which service consumes the most. Useless for optimization.</p><p>Effective cost observability means tracking cost per request, segmented by model, feature, user tier, and request complexity. It means knowing that your document summarization feature costs $0.12 per request while your chatbot costs $0.03 per request &#8212; and understanding why.</p><p>The implementation is straightforward: instrument every model call with metadata (feature name, model used, input tokens, output tokens, latency) and aggregate it in a monitoring system. The hard part is building the organizational habit of reviewing this data with the same rigor you&#8217;d review error rates or latency percentiles.</p><h3>2. Model Routing</h3><p>Not every task requires the same model. A classification decision &#8212; &#8220;is this email spam or not?&#8221; &#8212; can be handled by a small, fast, cheap model. 
A complex reasoning task &#8212; &#8220;analyze this legal document and identify liability risks&#8221; &#8212; requires a frontier model.</p><p>Model routing is the practice of sending each request to the most cost-effective model that can handle it at the required quality level. In practice, this means defining quality thresholds for each task type, benchmarking multiple models against those thresholds, building a routing layer that selects the appropriate model per request, and continuously evaluating whether routing decisions are still optimal as models evolve.</p><p>Teams that implement routing consistently report 40&#8211;60% reductions in inference costs. It&#8217;s the single highest-leverage optimization available, and most teams haven&#8217;t done it because it requires evaluation infrastructure they don&#8217;t have.</p><h3>3. Prompt Economics</h3><p>Prompt length directly affects cost &#8212; more input tokens means higher cost per request. But prompt optimization for cost can&#8217;t be done in isolation from quality.</p><p>The MRE approach treats prompts as economic artifacts. Every prompt has a cost (measured in tokens) and a quality level (measured by evaluation). The goal is to find the minimum-cost prompt that meets the quality threshold &#8212; not the cheapest prompt possible, and not the longest prompt that maximizes quality.</p><p>This requires evaluation infrastructure: a way to systematically test prompt variations against quality metrics and cost metrics simultaneously. Without evaluation, prompt optimization is guesswork. With evaluation, it&#8217;s engineering.</p><h3>4. Caching and Deduplication</h3><p>Many production workloads involve repeated or near-identical requests. 
Semantic caching &#8212; returning cached results for requests that are similar enough to previous ones &#8212; can significantly reduce inference costs without affecting user experience.</p><p>The engineering challenge is defining &#8220;similar enough.&#8221; Exact-match caching is trivial but catches few cases. Semantic similarity caching (using embedding distance to find near-matches) catches more cases but introduces a quality risk: the cached response might not be appropriate for the new request.</p><p>The MRE framework treats caching as a reliability decision, not just a performance optimization. Every cache hit is an assertion that the cached response is good enough for the new request. That assertion needs validation.</p><h3>5. Budget Governance</h3><p>As inference costs become a material portion of company spend, they need governance mechanisms similar to other significant cost centers.</p><p>This means per-feature cost budgets (this feature should cost no more than $X per month), cost-per-request limits (if a single request exceeds $Y, flag it for review), trend alerting (if costs are growing faster than usage, investigate), and cost-quality tradeoff documentation (recording why each routing or prompt decision was made).</p><p>Budget governance sounds bureaucratic, but without it, inference costs grow unchecked until they trigger a crisis.</p><h2>The Cost-Quality Tradeoff in Practice</h2><p>Here&#8217;s a concrete example of how MRE thinking changes inference economics.</p><p>Consider a customer support AI that handles 10,000 requests per day. Without optimization, every request goes to a frontier model with a long system prompt. Cost: roughly $0.15 per request. Monthly bill: $45,000.</p><p>An MRE approach would look like this:</p><p>Step 1 &#8212; Classify requests by complexity. Analysis reveals that 60% of requests are simple FAQ-type questions, 30% are moderately complex, and 10% require deep reasoning.</p><p>Step 2 &#8212; Build a routing layer. 
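In code, a routing layer of this kind can start as nothing more than a complexity classifier plus a lookup table. The sketch below is illustrative only: the tier names, per-request prices, and the crude length-based classifier are placeholders for a benchmarked classifier validated against your quality thresholds, not a real library.

```python
# Minimal model-routing sketch: classify each request's complexity,
# then dispatch to the cheapest tier that meets the quality bar.
# Tier names and per-request prices are illustrative placeholders.

TIERS = {
    "simple":   {"model": "small-model",    "cost_per_request": 0.01},
    "moderate": {"model": "mid-tier-model", "cost_per_request": 0.05},
    "complex":  {"model": "frontier-model", "cost_per_request": 0.15},
}

def classify_complexity(request: str) -> str:
    """Stub classifier: a real system would use a cheap model or a
    trained classifier, evaluated for routing accuracy before rollout."""
    if len(request) < 100:
        return "simple"
    if len(request) < 500:
        return "moderate"
    return "complex"

def route(request: str) -> dict:
    """Pick the tier for a request and return the routing decision."""
    tier = classify_complexity(request)
    return {"tier": tier, **TIERS[tier]}

decision = route("Is this email spam or not?")
print(decision["model"], decision["cost_per_request"])  # small-model 0.01
```

The lookup table is the easy part; the continuous re-evaluation of whether each tier still clears its quality threshold, as models and traffic evolve, is where the real engineering lives.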
Simple requests go to a small model ($0.01/request). Moderate requests go to a mid-tier model ($0.05/request). Complex requests go to the frontier model ($0.15/request).</p><p>Step 3 &#8212; Optimize prompts per tier. The simple model gets a short, focused prompt. The mid-tier model gets a moderate prompt with examples. The frontier model gets the full system prompt.</p><p>Step 4 &#8212; Add semantic caching for the simple tier, where many requests are near-identical.</p><p>Result: Simple requests (6,000/day &#215; $0.008 with caching) = $48/day. Moderate requests (3,000/day &#215; $0.05) = $150/day. Complex requests (1,000/day &#215; $0.15) = $150/day. Total: $348/day. Monthly bill: roughly $10,400.</p><p>That&#8217;s a 77% cost reduction. But it only works because each step was validated against quality metrics. The small model&#8217;s responses to simple queries were evaluated and confirmed to meet quality thresholds. The routing classifier was tested for accuracy. The caching system was validated against semantic similarity scores.</p><p>Without evaluation infrastructure, you&#8217;re just guessing about where to cut. With it, you&#8217;re engineering.</p><h2>Who Owns This?</h2><p>At most companies today, nobody owns inference economics. The engineering team builds features. The finance team pays the bills. Nobody connects the two systematically.</p><p>MRE argues that inference economics is an engineering responsibility &#8212; specifically, it&#8217;s the responsibility of whoever owns model behavior in production. The person who decides which model to use, how to prompt it, and how to evaluate the output is also the person best positioned to optimize the cost, because they understand the cost-quality tradeoff for each decision.</p><p>This doesn&#8217;t mean every engineer needs to become a financial analyst. It means the team responsible for model interactions needs cost visibility, cost targets, and the tools to optimize against them. 
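As a sketch of what that cost visibility looks like in practice, a thin wrapper can record the metadata listed earlier (feature name, model, token counts, latency) on every call and aggregate spend per feature. Everything here is illustrative: the prices, the word-count token proxy, and the in-memory log stand in for a real pricing table and a real monitoring backend.

```python
import time
from collections import defaultdict

# Illustrative per-1K-token prices; real prices vary by model and provider.
PRICE_PER_1K = {"small-model": 0.001, "frontier-model": 0.01}

call_log = []  # in production this would stream to your monitoring system

def instrumented_call(feature: str, model: str, call_fn, prompt: str) -> str:
    """Wrap a model call, recording feature, model, tokens, cost, latency."""
    start = time.monotonic()
    output = call_fn(prompt)
    tokens_in, tokens_out = len(prompt.split()), len(output.split())  # crude proxy
    call_log.append({
        "feature": feature,
        "model": model,
        "tokens_in": tokens_in,
        "tokens_out": tokens_out,
        "cost": (tokens_in + tokens_out) / 1000 * PRICE_PER_1K[model],
        "latency_s": time.monotonic() - start,
    })
    return output

def cost_by_feature() -> dict:
    """Aggregate spend per feature: the view to review weekly."""
    totals = defaultdict(float)
    for call in call_log:
        totals[call["feature"]] += call["cost"]
    return dict(totals)
```

Calling instrumented_call("summarize", "frontier-model", model_fn, doc) for each request keeps cost_by_feature() current; the organizational habit is reviewing that view with the same rigor as error rates and latency percentiles.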
Just as SRE teams own uptime targets, MRE teams own cost-quality targets.</p><p>For teams without dedicated MRE roles (which is most teams right now), the minimum viable version is: instrument every model call, review costs weekly by feature, and set per-feature cost budgets. That alone puts you ahead of 90% of teams managing inference costs today.</p><h2>The Compounding Problem</h2><p>Here&#8217;s why this matters now and not later: inference costs compound with growth. Unlike traditional infrastructure costs that grow sub-linearly with scale (thanks to efficiency gains), inference costs grow roughly linearly &#8212; and sometimes super-linearly when complex features get more usage.</p><p>A startup spending $25K/month on inference at 1,000 users will likely spend $250K/month at 10,000 users unless they actively optimize. At 100,000 users, the unoptimized bill would approach a $3M monthly run rate &#8212; on inference alone.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!tMqz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F698a321f-1629-42d1-82dd-ed16b0e56d08_1060x1008.png" width="1060" height="1008" alt="" loading="lazy"></figure></div><p><em>Cost Observability with AI</em></p><p>Every month you delay implementing cost observability, routing, and evaluation is a month where cost inefficiencies compound into your growth trajectory. The startups that survive the transition from early traction to real scale will be the ones that treated inference economics as a first-class engineering discipline from the beginning, not the ones that panicked when the bill arrived.</p>]]></content:encoded></item><item><title><![CDATA[Your ETL Pipeline Won&#8217;t Save You. 
Your AI Data Stack Will.]]></title><description><![CDATA[The data engineer&#8217;s guide to building an AIfolio that proves you&#8217;ve made the leap from pipeline plumber to AI infrastructure architect.]]></description><link>https://theairuntime.com/p/your-etl-pipeline-wont-save-you-your</link><guid isPermaLink="false">https://theairuntime.com/p/your-etl-pipeline-wont-save-you-your</guid><dc:creator><![CDATA[The AI Runtime]]></dc:creator><pubDate>Sun, 12 Apr 2026 11:25:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!IEZq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1dd83b41-07be-4da4-9c11-bb3198b10243_1408x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p><strong>TL;DR:</strong> Data engineering isn&#8217;t dying &#8212; it&#8217;s splitting. The BLS projects <strong>36% job growth</strong> through 2034, one of the fastest rates in tech. But the <em>work</em> is unrecognizable. AI copilots now generate boilerplate SQL in seconds, anomaly detection tools learn &#8220;normal&#8221; without hand-written rules, and natural-language interfaces let business users build their own simple pipelines. The data engineers who thrive in 2026 aren&#8217;t the ones writing more dbt models &#8212; they&#8217;re the ones <strong>designing the data infrastructure that makes AI systems actually work.</strong> In my last article, I introduced the concept of an <a href="https://aiengineerweekly.substack.com/p/your-portfolio-website-wont-get-you">AIfolio</a> &#8212; a portfolio built around AI-native projects that prove you can architect AI systems, not just code. That article was aimed at developers broadly. This one is for data engineers specifically, because your version of an AIfolio looks fundamentally different &#8212; and your existing skills give you an unfair advantage in building it. 
The old resume line was &#8220;built ETL pipeline processing 10M rows/day.&#8221; The new one is &#8220;built the data infrastructure that reduced our LLM hallucination rate from 23% to 4%.&#8221; Here are the five pillars of a data engineer&#8217;s AIfolio, the exact tools to build them with, and the presentation layer that makes hiring managers say yes.</p></div><h2>The Tectonic Shift Nobody Warned You About</h2><p>Here&#8217;s the thing about data engineering in 2026: the profession is simultaneously booming and being hollowed out from the inside.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!IEZq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1dd83b41-07be-4da4-9c11-bb3198b10243_1408x768.png" width="1408" height="768" alt="" fetchpriority="high"></figure></div><p><em>AI-Native Data Engineer</em></p><p>The demand numbers look fantastic on the surface. The O&#8217;Reilly 2025 Tech Trends Report showed data engineering skills grew <strong>29% year-over-year</strong>. The BLS projects 36% growth through 2034. Median salaries sit comfortably between $120K and $200K. By every macro measure, data engineering is thriving.</p><p>But zoom into <em>what</em> data engineers are actually doing day-to-day, and the picture shifts dramatically. Snowflake launched Cortex Code in February 2026 &#8212; a CLI that generates dbt models from natural language, reads your actual schema (no hallucinated table names), and supports Claude Opus 4.6 and GPT-5.2 as underlying models. Describe what you want in plain English, and it writes the SQL, the schema YAML, <em>and</em> the tests. Databricks has Agent Bricks running at 250K+ queries per second for structured extraction and text transformation. GitHub Copilot, at $19-$39 per seat per month, is already standard on most data teams.</p><p>The result? A study examining 285,000 companies found that hiring for senior positions is still increasing while hiring for junior positions is decreasing. The pattern is identical to what happened in software engineering &#8212; AI doesn&#8217;t replace the experienced architect, it eliminates the apprenticeship that <em>creates</em> experienced architects.</p><p>If you&#8217;re a data engineer whose primary value is &#8220;I write SQL and Python to move data from point A to point B,&#8221; you&#8217;re in the blast radius. 
If your value is &#8220;I design the data systems that make AI applications reliable, governable, and cost-effective,&#8221; you&#8217;re in the most in-demand job market in a decade.</p><p>The question is: which one are you building toward?</p><div><hr></div><h2>The Data Engineer&#8217;s Role Has Inverted</h2><p>Think about how a hospital pharmacy works. A decade ago, pharmacists spent most of their time physically counting pills and putting them in bottles &#8212; the mechanical act of fulfillment. Today, automated dispensing machines handle that. Pharmacists didn&#8217;t disappear. They moved <em>up the stack</em> &#8212; clinical consultations, drug interaction analysis, treatment optimization. The mechanical work was automated; the judgment work became more valuable.</p><p>Data engineering is undergoing the exact same inversion.</p><p><strong>The old job:</strong> Write ingestion scripts. Build transformation logic. Schedule pipelines. Monitor for failures. Debug broken DAGs at 2 AM.</p><p><strong>The new job:</strong> Design the data architecture that powers AI applications. Build embedding pipelines for RAG systems. Implement data quality frameworks that prevent AI models from making dangerous decisions on bad data. Create semantic layers that let LLMs understand organizational knowledge. Govern the data estate so AI adoption doesn&#8217;t create compliance nightmares.</p><p>Erik Duffield, co-founder of data platform company Ascend, captured it precisely: we&#8217;ve moved from a world where 80% of data is served to human analysts through traditional BI tools to one where <strong>machines are the primary data consumers</strong>. When your main customer was a human looking at a dashboard, &#8220;good enough&#8221; data quality was often fine. 
When your main customer is an LLM making autonomous decisions, &#8220;good enough&#8221; can be catastrophic.</p><p>This inversion creates a massive opportunity for data engineers who see it coming &#8212; because you already have the foundational skills (SQL, Python, cloud infrastructure, orchestration) that AI engineers typically lack. You understand data modeling, schema design, governance, and operational reliability. The gap isn&#8217;t in your foundations. It&#8217;s in your <strong>AI application layer.</strong></p><p>Here&#8217;s how to close it.</p><div><hr></div><h2>Why Data Engineers Need a Different AIfolio</h2><p>In the <a href="https://aiengineerweekly.substack.com/p/your-portfolio-website-wont-get-you">AIfolio article</a>, I laid out four pillars for a developer&#8217;s AI portfolio: RAG pipelines, multi-agent systems, MCP integrations, and persistent memory. Those pillars are calibrated for software engineers crossing into AI.</p><p>Data engineers need a different set of pillars. Not because the AIfolio framework is wrong &#8212; but because your <em>superpower</em> is different.</p><p>An AI engineer&#8217;s AIfolio proves: &#8220;I can architect systems that think.&#8221;</p><p>A data engineer&#8217;s AIfolio proves: <strong>&#8220;I can build the data infrastructure that makes those thinking systems reliable, accurate, and governable.&#8221;</strong></p><p>Most AI engineers build impressive demos on toy datasets, then watch them crumble when fed real-world data at scale. They don&#8217;t know how to handle schema evolution, data contracts, incremental processing, or data quality monitoring. They&#8217;ve never debugged a pipeline that silently dropped 12% of records at 3 AM.</p><p>You have. That&#8217;s your edge.</p><p>A data engineer&#8217;s AIfolio doesn&#8217;t replace the four original pillars &#8212; it <em>complements</em> them. 
Where the AI engineer builds the RAG application, you build the pipeline that keeps its knowledge base fresh, accurate, and governed. Where the AI engineer designs the agent workflow, you build the feature store and embedding infrastructure that powers it. Where the AI engineer wires up MCP, you build the semantic layer it queries.</p><p>The combination is absurdly valuable &#8212; and almost nobody has both sides. Here are the five pillars of a data engineer&#8217;s AIfolio.</p><div><hr></div><h2>The Five Pillars of a Data Engineer&#8217;s AIfolio</h2><h3>Pillar 1: A RAG-Ready Data Pipeline (Your Foundation Project)</h3><p>Every AI application needs data, and most AI engineers are terrible at data engineering. This is your superpower &#8212; if you know how to wield it.</p><p>A RAG-ready data pipeline doesn&#8217;t just move data. It ingests unstructured documents (PDFs, Confluence pages, Slack threads, API responses), parses them intelligently, chunks them with semantic awareness, generates embeddings, and loads them into a vector store &#8212; all with the orchestration, monitoring, and data quality checks you&#8217;d apply to any production pipeline.</p><p>This is where your existing skills translate directly. You already know how to build reliable ingestion pipelines. You already understand idempotency, backfills, and incremental processing. You just need to add the AI-specific layers: document parsing, chunking strategy, embedding generation, and vector database management.</p><p><strong>What this proves to a hiring manager:</strong> You understand that RAG systems live or die based on data quality &#8212; not model quality. A brilliant LLM with a poorly chunked knowledge base will hallucinate. A mediocre LLM with a well-engineered data pipeline will be reliable. You&#8217;re the person who builds the reliable version.</p><p><strong>The tech stack:</strong></p><p>For <strong>orchestration</strong>, use what you know &#8212; Airflow, Prefect, or Dagster. 
The pipeline structure is familiar: extract documents from source systems, transform them through parsing and chunking stages, load embeddings into a vector store. The DAG looks like any ELT pipeline; the transformations are just different.</p><p>For <strong>document parsing</strong>, LlamaParse handles PDFs with tables, nested headers, and images. For simpler documents, LangChain&#8217;s document loaders cover most formats.</p><p>For <strong>chunking</strong>, start with RecursiveCharacterTextSplitter (predictable, tunable) and graduate to semantic chunking when you&#8217;re ready. Chunk size matters enormously &#8212; too large and you dilute relevance, too small and you lose context. Production systems in 2026 typically use 200-1,000 token windows with 10-20% overlap.</p><p>For <strong>vector databases</strong>, Postgres with pgvector is the secret weapon for data engineers. You already know Postgres. pgvectorscale benchmarks show strong throughput even at 50M vectors. For dedicated vector stores, start with Chroma (zero-config, embedded) and graduate to Qdrant (production-grade, Rust-based) or Pinecone (fully managed).</p><p>For <strong>embedding models</strong>, use OpenAI&#8217;s text-embedding-3-small for prototypes. For production, consider open-source models from Hugging Face that you can self-host &#8212; eliminating per-token costs entirely.</p><p><strong>The repos to study:</strong></p><ul><li><p><a href="https://github.com/NirDiamant/RAG_Techniques">NirDiamant/RAG_Techniques</a> (~26K stars) &#8212; 30+ advanced RAG implementations. Start here to understand the patterns before building your own pipeline around them.</p></li><li><p><a href="https://github.com/infiniflow/ragflow">infiniflow/ragflow</a> (~73K stars) &#8212; Production-grade RAG engine with deep document understanding. 
Study this to understand what &#8220;production RAG&#8221; looks like from a data engineering perspective.</p></li><li><p><a href="https://github.com/HKUDS/LightRAG">HKUDS/LightRAG</a> (~30K stars) &#8212; Graph-based RAG that builds knowledge graphs from documents. Building a LightRAG pipeline over a real corpus is the kind of project that makes data engineering <em>and</em> AI engineering teams lean forward.</p></li></ul><p><strong>The AIfolio differentiator:</strong> This is where your version diverges from the standard AIfolio. Don&#8217;t just build a RAG pipeline. Add the data engineering discipline that most AI engineers skip &#8212; data quality checks on your chunks (are they coherent? do they preserve table structure?), monitoring on embedding drift, automated re-indexing when source documents change, and lineage tracking from source document to vector store to LLM response. An AI engineer&#8217;s RAG demo says &#8220;look, it answers questions!&#8221; Your RAG pipeline says &#8220;look, it answers questions <em>correctly, reliably, with auditability from source to response.</em>&#8220; That&#8217;s the difference.</p><div><hr></div><h3>Pillar 2: AI-Powered Data Quality Monitoring (Your Competitive Advantage)</h3><p>This is the pillar that screams &#8220;I&#8217;m a data engineer who understands AI&#8221; rather than &#8220;I&#8217;m a data engineer who&#8217;s trying to become an AI engineer.&#8221; It plays directly to your strengths.</p><p>Traditional data quality monitoring requires writing explicit rules for every check: this column should never be null, this value should be between X and Y, this count should match within 5% of yesterday&#8217;s. It&#8217;s exhausting, brittle, and never comprehensive enough.</p><p>AI-powered data quality flips the script. Instead of writing rules, you train anomaly detection models that learn what &#8220;normal&#8221; looks like for each dataset and alert only on meaningful deviations. 
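</p><p>A statistical baseline needs only a few lines: learn each metric&#8217;s mean and spread from history, and alert only on large deviations. The metric names and threshold below are illustrative:</p>

```python
from statistics import mean, stdev

def zscore_alerts(history, today, threshold=3.0):
    # history: {metric: [past daily values]}, today: {metric: value}.
    # Learn the baseline per metric, alert only on large deviations.
    alerts = []
    for metric, past in history.items():
        mu, sigma = mean(past), stdev(past)
        if sigma == 0:
            continue  # a constant metric needs an explicit rule, not a z-score
        z = (today[metric] - mu) / sigma
        if abs(z) > threshold:
            alerts.append((metric, round(z, 1)))
    return alerts

history = {
    "orders_row_count": [1000, 1020, 980, 1010, 995],
    "null_rate_email": [0.010, 0.012, 0.011, 0.009, 0.010],
}
# A silent upstream change: row count collapses and the null rate explodes.
alerts = zscore_alerts(history, {"orders_row_count": 400, "null_rate_email": 0.4})
```

<p>Swapping the z-score for an isolation forest or autoencoder changes the scoring function, not the shape of the loop.</p><p>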
The system notices when weekend sales patterns suddenly match weekdays, when a typically stable metric shows unusual variance, or when subtle correlations between datasets shift &#8212; things hand-written rules would never catch.</p><p><strong>What this proves to a hiring manager:</strong> You understand the production reality that most AI projects ignore &#8212; that AI systems are only as good as the data feeding them. You can build the monitoring layer that prevents garbage-in-garbage-out at scale.</p><p><strong>The tech stack:</strong></p><p>For anomaly detection, start with statistical methods (z-scores, interquartile range) on your most critical tables, then graduate to ML-based detection using isolation forests or autoencoders. Great Expectations gives you the rule-based foundation; layer learned anomaly detection on top.</p><p>For metadata management, look at open-source data catalogs like DataHub or OpenMetadata. These tools track lineage, auto-generate documentation, and increasingly integrate AI for data discovery.</p><p>For observability, Monte Carlo is the industry leader (integrates with Snowflake, Databricks, dbt, and Airflow), but building your own lightweight version is the AIfolio project. The goal is a system that monitors freshness, volume, schema changes, and distribution shifts &#8212; and distinguishes between acceptable variations and genuine problems.</p><p><strong>The AIfolio differentiator:</strong> Build a pipeline that ingests real data (public datasets work &#8212; NYC taxi data, weather data, stock prices), monitors it continuously for quality issues, and automatically alerts when anomalies occur. Add a dashboard showing historical data quality scores, detected anomalies, and resolution status. 
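</p><p>One way to make that dashboard trustworthy is to corrupt good data on purpose and assert the checks fire. A sketch with invented checks and fixed fault positions:</p>

```python
def quality_issues(rows):
    # Two illustrative checks; a real suite would be Great Expectations
    # expectations plus learned anomaly detectors.
    issues = []
    for i, r in enumerate(rows):
        if r["fare"] is None:
            issues.append((i, "null_fare"))
        elif not 0 < r["fare"] < 500:
            issues.append((i, "fare_out_of_range"))
    return issues

def inject_faults(rows):
    bad = [dict(r) for r in rows]
    bad[3]["fare"] = None     # simulate a dropped field upstream
    bad[7]["fare"] = -3.0     # simulate an upstream sign bug
    return bad

clean = [{"trip_id": i, "fare": 12.5} for i in range(100)]
assert quality_issues(clean) == []  # the baseline is quiet
assert quality_issues(inject_faults(clean)) == [(3, "null_fare"), (7, "fare_out_of_range")]
```

<p>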
Then &#8212; here&#8217;s the move that elevates this from &#8220;project&#8221; to &#8220;AIfolio pillar&#8221; &#8212; intentionally inject data quality issues and show that your system catches them <em>before they corrupt downstream AI models.</em> Deploy it with a live link a recruiter can interact with. This is the kind of project you can only build if you understand both data engineering and AI failure modes.</p><div><hr></div><h3>Pillar 3: A Semantic Layer with MCP Integration (The Architecture Pillar)</h3><p>This is the pillar nobody else is building yet &#8212; and it&#8217;s the one that will define data engineering&#8217;s next chapter. It also directly extends the MCP pillar from the original AIfolio framework, but from the data infrastructure side.</p><p>The problem: every company deploying LLMs needs those models to understand organizational data. But LLMs can&#8217;t query your data warehouse directly. They don&#8217;t know your business logic, your metric definitions, or which tables to join. Natural-language-to-SQL translation is better than it was, but it&#8217;s still unreliable for complex queries.</p><p>A semantic layer solves this by creating a structured, governed interface between LLMs and your data. It defines metrics, dimensions, and relationships in a way that both humans and machines can understand. Think of it as the &#8220;API&#8221; for your data &#8212; instead of letting AI tools write arbitrary SQL against raw tables, they query through a semantic layer that enforces business logic and access controls.</p><p><strong>What this proves to a hiring manager:</strong> You think at the system design level. 
You understand that AI applications need governed, structured access to data &#8212; not just raw table scans.</p><p><strong>The tech stack:</strong></p><p>For the semantic layer itself, dbt&#8217;s semantic layer (via MetricFlow) is the production standard &#8212; it defines metrics as code that can be version-controlled, tested, and governed. Cube is another option that adds a caching and API layer.</p><p>For the LLM integration, build an MCP server (Model Context Protocol) that exposes your semantic layer to AI assistants. This means Claude, Copilot, or any MCP-compatible AI can query your organizational data through a governed interface &#8212; asking questions in natural language that get translated to semantically correct queries.</p><p><strong>The repos to study:</strong></p><ul><li><p><a href="https://github.com/modelcontextprotocol/python-sdk">modelcontextprotocol/python-sdk</a> (~22K stars) &#8212; The official Python SDK for building MCP servers. FastMCP lets you build a working server in under 20 lines of code.</p></li><li><p><a href="https://github.com/modelcontextprotocol/servers">modelcontextprotocol/servers</a> (~76K stars) &#8212; Reference implementations. Study the database server examples.</p></li></ul><p><strong>The AIfolio differentiator:</strong> Build an MCP server that wraps a dbt semantic layer. An AI assistant asks &#8220;What was our revenue last quarter by region?&#8221; and your server translates that through the semantic layer into a governed, correct query &#8212; with access controls, audit logging, and metric definitions enforced automatically. Document the governance model alongside the technical architecture. This single project sits at the intersection of data engineering, AI infrastructure, and data governance &#8212; exactly where the profession is heading. In the original AIfolio, MCP was about connecting AI to tools. 
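</p><p>The heart of such a server is small: a metric registry that turns a validated request into governed SQL, refuses everything else, and logs both outcomes. The metric names, tables, and roles below are invented for illustration:</p>

```python
METRICS = {
    # Invented definitions; in practice these come from dbt's semantic layer.
    "revenue": {"sql": "SUM(amount)", "table": "fct_orders", "roles": {"analyst", "exec"}},
    "salaries": {"sql": "SUM(salary)", "table": "dim_employees", "roles": {"hr"}},
}
DIMENSIONS = {"region", "quarter"}
AUDIT_LOG = []

def resolve(metric, group_by, role):
    # Translate a validated (metric, dimension) request into governed SQL;
    # anything outside the registry, or outside the caller's role, is refused.
    spec = METRICS.get(metric)
    if spec is None or group_by not in DIMENSIONS:
        raise ValueError("unknown metric or dimension")
    if role not in spec["roles"]:
        AUDIT_LOG.append(("denied", role, metric))
        raise PermissionError(f"role {role!r} may not read {metric!r}")
    AUDIT_LOG.append(("allowed", role, metric))
    return (f"SELECT {group_by}, {spec['sql']} AS {metric} "
            f"FROM {spec['table']} GROUP BY {group_by}")
```

<p>An MCP tool handler would call <code>resolve</code> after the model maps a natural-language question to a metric and dimension; the registry, not the model, decides what SQL can run.</p><p>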
In a data engineer&#8217;s AIfolio, MCP is about connecting AI to <em>your organization&#8217;s data &#8212; safely.</em></p><div><hr></div><h3>Pillar 4: A Feature Store and Real-Time Embedding Pipeline (The ML Infrastructure Pillar)</h3><p>Every company building recommendation engines, fraud detection, or personalization needs a feature store. Every company deploying LLMs needs an embedding pipeline. These are data engineering problems wearing AI costumes &#8212; and they&#8217;re the infrastructure that AI engineers assume &#8220;someone else&#8221; builds.</p><p>A feature store ensures consistent feature computation across training and serving &#8212; preventing the dreaded &#8220;training-serving skew&#8221; where your model was trained on features calculated one way but serves predictions using features calculated slightly differently. An embedding pipeline continuously generates and updates vector representations of your data as it changes.</p><p><strong>What this proves to a hiring manager:</strong> You understand ML infrastructure &#8212; the plumbing that makes models work reliably in production, not just in a Jupyter notebook.</p><p><strong>The tech stack:</strong></p><p>For feature stores, Feast (open-source) is the standard for learning. It handles both batch features (computed in your warehouse) and real-time features (computed from streaming data). Tecton is the enterprise option if you want to demonstrate awareness of the commercial landscape.</p><p>For the embedding pipeline, build a Kafka-based streaming pipeline that generates embeddings in near-real-time as new data arrives &#8212; documents added, records updated, content changed. Embeddings flow into your vector store, keeping your RAG system current without full re-indexing.</p><p>For streaming infrastructure, Apache Kafka is still the backbone. 
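</p><p>Stripped of the Kafka plumbing, the consumer loop is: event in, embedding out, keyed upsert into the vector store. In this sketch <code>fake_embed</code> stands in for a real model call, and the version check is what keeps replayed or out-of-order messages safe:</p>

```python
import hashlib
import math

def fake_embed(text, dim=8):
    # Deterministic stand-in for a real embedding model call.
    digest = hashlib.sha256(text.encode()).digest()
    v = [b / 255.0 for b in digest[:dim]]
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

vector_store = {}  # doc_id -> (version, embedding)

def handle_event(event):
    # What a Kafka consumer would do per message: embed, then keyed upsert.
    doc_id, version, text = event["doc_id"], event["version"], event["text"]
    current = vector_store.get(doc_id)
    if current is not None and current[0] >= version:
        return  # stale replay, ignore
    vector_store[doc_id] = (version, fake_embed(text))

events = [
    {"doc_id": "kb-1", "version": 1, "text": "refund policy v1"},
    {"doc_id": "kb-1", "version": 2, "text": "refund policy v2"},
    {"doc_id": "kb-1", "version": 1, "text": "refund policy v1"},  # replayed message
]
for e in events:
    handle_event(e)
```

<p>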
Combine it with Flink or Spark Structured Streaming for the processing layer.</p><p><strong>The AIfolio differentiator:</strong> Build a feature store that serves features for a simple recommendation model, <em>and</em> an embedding pipeline that keeps a vector store current. Show that when new data arrives via Kafka, embeddings are generated and searchable within seconds &#8212; not hours. Then connect this to your Pillar 1 RAG pipeline. Now you have two AIfolio projects that work together as a system, not isolated demos. This compound effect &#8212; projects that reference and extend each other &#8212; is what separates an AIfolio from a list of disconnected repos.</p><div><hr></div><h3>Pillar 5: A Data Governance Framework for AI (The Senior-Level Pillar)</h3><p>This is the pillar that signals staff/principal-level thinking. It&#8217;s less about code and more about systems design &#8212; and it&#8217;s the most underbuilt layer in the entire AI ecosystem.</p><p>Every organization racing to adopt AI is creating a governance nightmare. Business teams launch AI initiatives with zero regard for data lineage, access controls, or compliance. AI models are trained on data that may contain PII. LLMs access data stores without audit trails. The EU AI Act requires audit trails for model-training data. Nobody&#8217;s building the governance infrastructure to handle any of this.</p><p><strong>What this proves to a hiring manager:</strong> You understand the organizational and regulatory dimensions of AI &#8212; not just the technical ones. 
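</p><p>A classification gate can sit in front of every training job: tag columns from sampled values, then let policy decide. The regexes below are illustrative and nowhere near production-grade PII detection:</p>

```python
import re

PII_PATTERNS = {
    "email": re.compile(r"[^@\s]+@[^@\s]+\.[a-z]{2,}", re.I),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify_column(samples):
    # Tag a column 'pii' if any sampled value matches a known pattern.
    for value in samples:
        for tag, pattern in PII_PATTERNS.items():
            if pattern.search(str(value)):
                return ("pii", tag)
    return ("public", None)

def authorize_training_read(column, samples, policy):
    # Policy decision plus an audit record for every read attempt.
    label, detail = classify_column(samples)
    allowed = label != "pii" or policy.get("pii_allowed", False)
    audit = {"column": column, "label": label, "detail": detail, "allowed": allowed}
    return allowed, audit

allowed, audit = authorize_training_read(
    "users.contact", ["alice@example.com", "bob@example.com"],
    policy={"pii_allowed": False},
)
```

<p>The audit record is the point: every read attempt, allowed or denied, leaves a trail.</p><p>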
You&#8217;re the engineer who prevents the compliance disaster, not the one who creates it.</p><p><strong>The implementation:</strong></p><p>Build a governance-as-code framework that includes data classification (automatically tagging PII, sensitive, public data), access control policies (who and what systems can access which data, with audit logging), lineage tracking (from raw source through transformations to AI model training data), and data contracts between producing and consuming teams.</p><p>Implement it using open-source tools: OpenMetadata or DataHub for the catalog, Great Expectations for data contracts, and your orchestrator&#8217;s built-in lineage tracking. Add a policy layer that automatically enforces classification-based access rules.</p><p><strong>The AIfolio differentiator:</strong> Write a companion blog post explaining how your framework maps to EU AI Act requirements and organizational data governance policies. This transforms a technical project into a business-level asset. The original AIfolio article emphasized &#8220;documenting your design decisions&#8221; &#8212; this pillar is that principle taken to its logical extreme. You&#8217;re not just building infrastructure; you&#8217;re publishing the <em>governance blueprint</em> that other organizations can learn from. That&#8217;s the kind of thought leadership that gets you noticed by hiring managers <em>and</em> builds your professional reputation.</p><div><hr></div><h2>The Data Engineer&#8217;s AIfolio Tech Stack Cheat Sheet</h2><p>You don&#8217;t need to learn everything. 
Here&#8217;s the focused stack, organized by what you actually need:</p><p><strong>Your Core (Keep and Deepen):</strong> SQL, Python, dbt, Airflow/Prefect/Dagster, Snowflake or Databricks or BigQuery, Kafka</p><p><strong>Add for AI Readiness:</strong> Vector databases (pgvector for Postgres teams, Qdrant or Pinecone for dedicated), embedding models (OpenAI API for prototypes, Hugging Face for self-hosted), LangChain/LlamaIndex for RAG orchestration, MCP SDK for AI integration layers</p><p><strong>Add for Observability:</strong> Monte Carlo (study the concepts even if you use open-source), Great Expectations + custom anomaly detection, OpenMetadata or DataHub for AI-era data cataloging</p><p><strong>Add for Streaming AI:</strong> Kafka + Flink for real-time embedding pipelines, Feast for feature stores</p><p><strong>AI Copilots to Master Now:</strong> GitHub Copilot (universal), Snowflake Cortex Code (if on Snowflake), Altimate Code (open-source, dbt + SQL native)</p><p><strong>Deployment (Your AIfolio Needs Live Links):</strong> Streamlit Community Cloud or Hugging Face Spaces (free, zero-config &#8212; for dashboards and demos), Vercel + Supabase (full-stack AI apps with pgvector), any major cloud free tier for containerized services</p><div><hr></div><h2>What Separates a Good Data Engineer&#8217;s AIfolio From a Great One</h2><p>Building the five pillars is necessary but not sufficient. The original AIfolio article laid out a presentation layer that applies just as forcefully here &#8212; with some data-engineering-specific additions.</p><p><strong>Every project needs a README that sells &#8212; with architecture diagrams.</strong> Hiring managers spend less than two minutes on a GitHub repo. For data engineers specifically, an architecture diagram isn&#8217;t optional &#8212; it&#8217;s the first thing they look for. Show the full pipeline: sources &#8594; ingestion &#8594; transformation &#8594; vector store &#8594; retrieval &#8594; LLM response. 
Show the monitoring layer. Show the governance layer. A clean Mermaid diagram in your README communicates more architectural thinking than a thousand lines of code.</p><p><strong>Deploy everything with a clickable link.</strong> A pipeline without a live demo is a pipeline that doesn&#8217;t exist. Deploy your RAG pipeline&#8217;s query interface to Streamlit. Deploy your data quality dashboard. Deploy your MCP server and show an AI assistant querying your data live. Hugging Face Spaces, Streamlit Community Cloud, and Supabase all offer generous free tiers. There&#8217;s no excuse.</p><p><strong>Add observability &#8212; especially on your data pipelines.</strong> This is where data engineers have a natural advantage over AI engineers building AIfolios. You already think about monitoring, alerting, and debugging in production. Integrate Langfuse or LangSmith for AI observability, and combine it with your existing pipeline monitoring. Show metrics: latency per query, retrieval precision, embedding freshness, data quality scores over time. This is the kind of production thinking that makes a hiring manager think &#8220;this person can build real systems.&#8221;</p><p><strong>Document your design decisions &#8212; with trade-off reasoning.</strong> Why did you choose pgvector over Qdrant? Why did you set chunk size to 500 tokens with 15% overlap? Why did you use semantic chunking for some document types and recursive splitting for others? Write this up &#8212; in a blog post, a detailed README section, or even a short companion article. The original AIfolio article made this point for all developers: the reasoning reveals more than the code. For data engineers, the specific trade-offs you&#8217;ve navigated (cost vs. performance, freshness vs. computational overhead, governance strictness vs. 
developer velocity) are the exact conversations hiring managers want to have in interviews.</p><p><strong>Be explicit about AI tool usage.</strong> Note in your documentation: &#8220;Used Cortex Code to generate initial dbt model definitions, then customized the chunking logic and added data quality tests manually&#8221; or &#8220;Used Copilot to scaffold the Airflow DAG structure, then wrote the embedding generation and quality monitoring operators by hand.&#8221; This signals a modern mindset. As one engineering leader put it: the goal isn&#8217;t to pretend you don&#8217;t use AI &#8212; it&#8217;s to show you use AI to accelerate the routine work so you can spend your time on the architectural decisions that matter.</p><p><strong>Connect your pillars into a system.</strong> This is the meta-move that elevates a data engineer&#8217;s AIfolio above a list of disconnected projects. Your RAG pipeline (Pillar 1) feeds into your data quality monitoring (Pillar 2). Your semantic layer and MCP server (Pillar 3) provide governed access to the same data. Your embedding pipeline (Pillar 4) keeps the RAG system current in real time. Your governance framework (Pillar 5) wraps the entire system in compliance and auditability. When a hiring manager can trace the connections between your projects and see a <em>coherent data architecture</em> rather than five isolated repos &#8212; that&#8217;s when they know you think like a staff engineer.</p><div><hr></div><h2>What Actually Gets You Hired</h2><p>The pillars give you the <em>what</em> to build. The presentation layer gives you the <em>how</em> to show it. But after conversations with founders and hiring leaders at companies building AI-native data infrastructure, four traits emerged that determine whether you get the offer.</p><p><strong>1. You understand that machines are the new data consumer.</strong> The shift from human-facing dashboards to AI-facing data infrastructure is the defining change of this era.
Every architectural decision you make &#8212; schema design, data quality thresholds, freshness requirements, access patterns &#8212; should account for the fact that your primary consumers are increasingly models, not analysts. When you can articulate <em>how</em> this changes your design decisions, you signal that you&#8217;ve internalized the shift.</p><p><strong>2. You have a point of view on data architecture trade-offs.</strong> &#8220;Should we use a dedicated vector database or pgvector?&#8221; is a question every data team is debating. Having a specific, defensible answer &#8212; backed by your actual project experience &#8212; matters more than having built the project in the first place. &#8220;I started with pgvector because our team already knew Postgres, and at our scale (under 10M vectors) the performance was comparable to dedicated solutions. I&#8217;d switch to Qdrant if we hit 50M+ vectors or needed sub-5ms p99 latency.&#8221; That answer gets you hired. Your AIfolio is the evidence that your opinions are earned, not theoretical.</p><p><strong>3. A learning mindset that&#8217;s visible in the work.</strong> Does your commit history show iteration &#8212; not just &#8220;initial commit&#8221; and &#8220;final version,&#8221; but a progression of experiments, dead ends, and improvements? Does your README explain what you <em>tried</em> that didn&#8217;t work? Did you start with fixed-size chunking, measure the retrieval quality, switch to semantic chunking, and document the improvement? A data engineer&#8217;s AIfolio that shows measured, iterative improvement signals something tutorials never can: you know how to diagnose and fix problems in production AI systems.</p><p><strong>4. You think about governance before someone makes you.</strong> The organizations that will win the AI race are the ones that can deploy AI <em>without</em> creating compliance disasters. 
Data engineers who proactively build governance frameworks &#8212; data contracts, lineage tracking, access controls, PII classification &#8212; are the ones who end up in the room where strategic decisions are made. You stop being a cost center and start being a profit enabler. Your AIfolio&#8217;s Pillar 5 is the proof.</p><div><hr></div><h2>Your Minimum Viable Data Engineer&#8217;s AIfolio</h2><p>If you&#8217;re a data engineer reading this and feeling overwhelmed, here&#8217;s the path in order:</p><p><strong>Month 1-2: Build Pillar 1 &#8212; your RAG-ready data pipeline.</strong> Install pgvector on your Postgres instance. Learn how embeddings work. Build a RAG pipeline over real documents (legal docs, technical documentation, research papers &#8212; not toy datasets) using your existing Airflow/dbt setup for orchestration. Add data quality checks on your chunks. Deploy the query interface to Streamlit or Gradio. One project, deployed, with a clean README and architecture diagram.</p><p><strong>Month 3-4: Build Pillar 2 &#8212; AI-powered data quality.</strong> Add anomaly detection to your most critical tables. Start with statistical methods, then layer in ML-based detection. Connect it to your Pillar 1 pipeline so it monitors the data feeding your RAG system. Document what your system catches that hand-written rules miss. Deploy the monitoring dashboard.</p><p><strong>Month 5-6: Build Pillar 3 &#8212; your semantic layer with MCP.</strong> Create an MCP server that exposes your data warehouse through a governed semantic layer. Show that an AI assistant can query your data correctly and safely. This is the pillar that makes hiring managers lean forward &#8212; almost nobody has built this yet.</p><p><strong>When ready: Build Pillars 4 and 5.</strong> Add a real-time embedding pipeline (Pillar 4) to keep your RAG system current without full re-indexing. 
Build the governance framework (Pillar 5) when you&#8217;re ready to make the case for staff-level roles.</p><p><strong>Throughout: Master an AI copilot for data engineering.</strong> Use Copilot for your daily SQL and Python work. Try Cortex Code if you&#8217;re on Snowflake. The productivity gains are real &#8212; developers report 88% productivity increases &#8212; and showing that you use AI as a power tool signals a modern mindset.</p><div><hr></div><p>The hand-coded ETL pipeline is the new to-do app. It proves you completed a tutorial. It signals nothing about whether you can design the data infrastructure that AI systems depend on.</p><p>The original AIfolio replaced the traditional developer portfolio with proof that you can architect AI systems. A data engineer&#8217;s AIfolio goes one layer deeper &#8212; proof that you can build the data infrastructure those AI systems <em>can&#8217;t function without.</em></p><p>Your pipelines don&#8217;t end at a dashboard anymore. They end at a vector store. At a feature store. At an LLM&#8217;s context window. At a governed semantic layer that lets AI systems understand organizational knowledge without creating compliance nightmares.</p><p>The data engineers who build this AIfolio won&#8217;t just survive the AI era. They&#8217;ll own the infrastructure layer that makes the entire AI era possible.</p><p>That&#8217;s not a bad position to be in.</p><p>Start building.</p>]]></content:encoded></item><item><title><![CDATA[PromptOps Is Dead, Long Live SkillOps]]></title><description><![CDATA[The shift from managing prompts to governing skills is the most important ops change in agentic AI &#8212; and most teams are already behind.]]></description><link>https://theairuntime.com/p/promptops-is-dead-long-live-skillops</link><guid isPermaLink="false">https://theairuntime.com/p/promptops-is-dead-long-live-skillops</guid><dc:creator><![CDATA[The AI Runtime]]></dc:creator><pubDate>Fri, 10 Apr 2026 11:03:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!r2Ww!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa55072c9-3696-483d-81f1-61b6fbfe9647_1387x766.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p><strong>TL;DR</strong> - Enterprise teams are drowning in prompts scattered across Claude Code, Copilot, Cursor, Codex, and internal tools &#8212; no versioning, no governance, no reuse. The fix isn&#8217;t better prompt management. It&#8217;s treating <em>skills</em> &#8212; self-contained packages of instructions, metadata, scripts, and guardrails &#8212; as first-class ops artifacts with registries, evaluation loops, and supply-chain controls. SkillOps &#8212; the practice of versioning, evaluating, governing, and composing skills &#8212; is the new operational layer for agentic systems.
If you&#8217;re still doing PromptOps, you&#8217;re optimizing the wrong primitive.</p></div><h2>The Prompt Sprawl Problem You Already Have</h2><p>Here&#8217;s a pattern across every enterprise customer: someone writes a great prompt for code review in Claude Code. Someone else writes a different one for Copilot. A third person pastes a variation into Cursor. None of them know the others exist. None are versioned. None are tested. When the LLM vendor changes model behavior in an update, all three break silently.</p><p>This is PromptOps at its logical endpoint &#8212; a graveyard of undiscoverable, untested, ungoverned text blobs. The fundamental problem isn&#8217;t tooling. It&#8217;s that <em>prompts are the wrong unit of reuse</em>.</p><p>A prompt is a string.
A skill is an <em>asset</em>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!r2Ww!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa55072c9-3696-483d-81f1-61b6fbfe9647_1387x766.png" data-component-name="Image2ToDOM"><div class="image2-inset"><img src="https://substackcdn.com/image/fetch/$s_!r2Ww!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa55072c9-3696-483d-81f1-61b6fbfe9647_1387x766.png" width="1387" height="766" class="sizing-normal" alt=""></div></a></figure></div><p><em>SkillOps</em></p><h2>What a Skill Actually Is</h2><p>The SKILL.md format &#8212; originally published by Anthropic at agentskills.io in December 2025 &#8212; has become the de facto standard across every major agentic platform in under six months. Here&#8217;s the structure:</p><pre><code><code>my-skill/
&#9500;&#9472;&#9472; SKILL.md        # Required: metadata + instructions
&#9500;&#9472;&#9472; scripts/        # Optional: executable code
&#9500;&#9472;&#9472; references/     # Optional: documentation
&#9492;&#9472;&#9472; assets/         # Optional: templates, resources</code></code></pre><p>The SKILL.md file contains YAML frontmatter (name, description) and markdown instructions. That&#8217;s it. But the design is deceptively powerful because of <em>progressive disclosure</em> &#8212; the mechanism that makes skills scale where prompts don&#8217;t.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!JqBY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F984f765a-2303-4a5d-be7f-766561326879_960x190.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!JqBY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F984f765a-2303-4a5d-be7f-766561326879_960x190.png 424w, https://substackcdn.com/image/fetch/$s_!JqBY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F984f765a-2303-4a5d-be7f-766561326879_960x190.png 848w, https://substackcdn.com/image/fetch/$s_!JqBY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F984f765a-2303-4a5d-be7f-766561326879_960x190.png 1272w, https://substackcdn.com/image/fetch/$s_!JqBY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F984f765a-2303-4a5d-be7f-766561326879_960x190.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!JqBY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F984f765a-2303-4a5d-be7f-766561326879_960x190.png" width="960" height="190" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/984f765a-2303-4a5d-be7f-766561326879_960x190.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:190,&quot;width&quot;:960,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:32448,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/193763181?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F984f765a-2303-4a5d-be7f-766561326879_960x190.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!JqBY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F984f765a-2303-4a5d-be7f-766561326879_960x190.png 424w, https://substackcdn.com/image/fetch/$s_!JqBY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F984f765a-2303-4a5d-be7f-766561326879_960x190.png 848w, https://substackcdn.com/image/fetch/$s_!JqBY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F984f765a-2303-4a5d-be7f-766561326879_960x190.png 1272w, https://substackcdn.com/image/fetch/$s_!JqBY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F984f765a-2303-4a5d-be7f-766561326879_960x190.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p><strong>L1 &#8212; Discovery</strong>: At startup, the agent loads only the name and description of every available skill. Fifty skills might cost 2,500 tokens total. 
This is what the agent uses to decide <em>whether</em> to activate a skill.</p><p><strong>L2 &#8212; Activation</strong>: When a task matches a skill&#8217;s description, the agent reads the full SKILL.md body into context. Only the relevant skill loads. Everything else stays on disk at zero token cost.</p><p><strong>L3 &#8212; Execution</strong>: If instructions reference scripts, templates, or documentation, those load on demand. A skill can bundle dozens of reference files, but a given invocation might use one.</p><p>The result: you can install hundreds of skills with no context bloat. Compare this to PromptOps, where every prompt is always in context or requires manual selection.</p><h2>The Convergence Nobody Predicted</h2><p>Six months ago, skills were a Claude Code concept. Today:</p><ul><li><p><strong>Anthropic Claude</strong> &#8212; Skills across Claude Code, Claude.ai, and the API via the Skills API (/v1/skills endpoints)</p></li><li><p><strong>OpenAI Codex</strong> &#8212; Full SKILL.md support with <code>.codex/skills/</code> directories, implicit and explicit invocation</p></li><li><p><strong>GitHub Copilot</strong> &#8212; Agent Skills in VS Code with the same SKILL.md format, progressive disclosure built in</p></li><li><p><strong>Google ADK</strong> &#8212; <code>load_skill_from_dir</code> for file-based skills, meta-skills that generate new SKILL.md files at runtime</p></li></ul><p>This is not each vendor independently inventing a similar format. This is a <em>shared specification</em> at agentskills.io that every major player adopted. A skill built for Claude Code drops into Codex or Copilot with minimal changes. 
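</p><p>For concreteness, here is what a minimal SKILL.md of that shape can look like. The skill name, description, and steps below are illustrative, not taken from any shipped skill:</p><pre><code>---
name: pr-description
description: Writes pull request descriptions from a branch diff. Use when the user asks to draft or summarize a PR.
---

# PR Description

1. Collect the change set, e.g. with git diff.
2. Summarize intent, not mechanics: what changed and why.
3. If references/style.md exists, follow its tone guidance.</code></pre><p>Note that the description carries the whole L1 activation decision, so it names both what the skill does and when to use it.</p><p>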
The runtime behaviors differ (session management, tool permissions, invocation modes), but the format is portable.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xMVo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d8156fd-6cc2-49aa-a953-0504a7d845cc_999x460.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!xMVo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d8156fd-6cc2-49aa-a953-0504a7d845cc_999x460.png 424w, https://substackcdn.com/image/fetch/$s_!xMVo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d8156fd-6cc2-49aa-a953-0504a7d845cc_999x460.png 848w, https://substackcdn.com/image/fetch/$s_!xMVo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d8156fd-6cc2-49aa-a953-0504a7d845cc_999x460.png 1272w, https://substackcdn.com/image/fetch/$s_!xMVo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d8156fd-6cc2-49aa-a953-0504a7d845cc_999x460.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xMVo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d8156fd-6cc2-49aa-a953-0504a7d845cc_999x460.png" width="999" height="460" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7d8156fd-6cc2-49aa-a953-0504a7d845cc_999x460.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:460,&quot;width&quot;:999,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:54499,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/193763181?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d8156fd-6cc2-49aa-a953-0504a7d845cc_999x460.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!xMVo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d8156fd-6cc2-49aa-a953-0504a7d845cc_999x460.png 424w, https://substackcdn.com/image/fetch/$s_!xMVo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d8156fd-6cc2-49aa-a953-0504a7d845cc_999x460.png 848w, https://substackcdn.com/image/fetch/$s_!xMVo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d8156fd-6cc2-49aa-a953-0504a7d845cc_999x460.png 1272w, https://substackcdn.com/image/fetch/$s_!xMVo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d8156fd-6cc2-49aa-a953-0504a7d845cc_999x460.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>                                                                     skills spec</em></p><p>This convergence is the inflection point. It means skills are no longer a platform feature &#8212; they&#8217;re an interoperable standard. And that changes the operational model entirely.</p><h2>From PromptOps to SkillOps: What Actually Changes</h2><p>PromptOps treated prompts as the unit of optimization: version them, A/B test them, track their performance. 
SkillOps treats skills as the unit &#8212; but the operational surface is fundamentally different.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Jxwb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a23060b-17aa-445c-baac-b52f13fc7c1b_841x437.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Jxwb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a23060b-17aa-445c-baac-b52f13fc7c1b_841x437.png 424w, https://substackcdn.com/image/fetch/$s_!Jxwb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a23060b-17aa-445c-baac-b52f13fc7c1b_841x437.png 848w, https://substackcdn.com/image/fetch/$s_!Jxwb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a23060b-17aa-445c-baac-b52f13fc7c1b_841x437.png 1272w, https://substackcdn.com/image/fetch/$s_!Jxwb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a23060b-17aa-445c-baac-b52f13fc7c1b_841x437.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Jxwb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a23060b-17aa-445c-baac-b52f13fc7c1b_841x437.png" width="841" height="437" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6a23060b-17aa-445c-baac-b52f13fc7c1b_841x437.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:841,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:73288,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/193763181?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a23060b-17aa-445c-baac-b52f13fc7c1b_841x437.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Jxwb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a23060b-17aa-445c-baac-b52f13fc7c1b_841x437.png 424w, https://substackcdn.com/image/fetch/$s_!Jxwb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a23060b-17aa-445c-baac-b52f13fc7c1b_841x437.png 848w, https://substackcdn.com/image/fetch/$s_!Jxwb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a23060b-17aa-445c-baac-b52f13fc7c1b_841x437.png 1272w, https://substackcdn.com/image/fetch/$s_!Jxwb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a23060b-17aa-445c-baac-b52f13fc7c1b_841x437.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>                                                      <em>&#8230;SkillOps</em></p><p>Here&#8217;s what each layer means in practice:</p><p><strong>Skill Registry</strong> &#8212; A centralized system of record for all skills across your organization. JFrog launched theirs at NVIDIA GTC in March 2026, positioning it as the trust layer for enterprise agent deployments. SkillRegistry.io serves the open-source community with 61 skills and 6,000+ downloads. The point isn&#8217;t which registry you pick &#8212; it&#8217;s that skills become discoverable, governed assets rather than files someone shared on Slack.</p><p><strong>Progressive Loading</strong> &#8212; The agent decides which skills to use, not the developer. 
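</p><p>The flow is easy to sketch. The snippet below is a toy model of L1 discovery and L2 activation, not any platform's actual loader; the skill names, descriptions, and word-overlap scorer (standing in for the model's judgment) are all illustrative:</p><pre><code>import re

# L1: at startup, only names and one-line descriptions sit in context.
SKILLS = {
    "pr-description": "Writes pull request descriptions from a branch diff.",
    "code-review": "Reviews a diff for bugs, style gaps, and missing tests.",
    "incident-notes": "Drafts an incident timeline from pasted logs.",
}

def match_score(task, description):
    """Toy stand-in for the model's judgment: shared-word overlap."""
    task_words = set(re.findall(r"[a-z]+", task.lower()))
    desc_words = set(re.findall(r"[a-z]+", description.lower()))
    return len(task_words.intersection(desc_words))

def discover(task):
    """Pick the skill whose description best matches the task, if any."""
    best = max(SKILLS, key=lambda name: match_score(task, SKILLS[name]))
    return best if match_score(task, SKILLS[best]) else None

def activate(name):
    """L2: only now does the full SKILL.md body enter the context window."""
    return open("skills/" + name + "/SKILL.md").read()

chosen = discover("write a pull request description for this diff")
# chosen is "pr-description"; the other skills stay on disk at zero token cost</code></pre><p>Everything operational hangs off the description string, which is why better descriptions, not better routing code, are the leverage point.</p><p>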
This is the operational shift that kills PromptOps: you stop manually selecting prompts and start trusting that good metadata enables good discovery. Write better descriptions, not better selection logic.</p><p><strong>Evaluation Loops</strong> &#8212; Skills get scored on real tasks by agents. Did the code review skill catch the bug? Did the documentation skill produce accurate output? This is where platforms like LangSmith and Langfuse are moving &#8212; from prompt-level tracking to skill-level observability.</p><p><strong>Supply Chain Security</strong> &#8212; JFrog&#8217;s core insight: skills are the new packages. An unvetted skill can instruct an agent to exfiltrate data, call unauthorized APIs, or bypass guardrails. Scanning, signing, and policy-driven approval workflows aren&#8217;t optional for enterprise deployments. Anthropic&#8217;s own documentation warns that skills with external URL fetches pose particular risk because fetched content can contain malicious instructions.</p><p><strong>Compositional Testing</strong> &#8212; The hardest and least solved problem. A &#8220;summarize patient record&#8221; skill is HIPAA-compliant in isolation. Compose it with a &#8220;send email&#8221; skill and you&#8217;ve got a violation. No major platform has compositional compliance testing today.</p><h2>The Enterprise Skill Governance Gap</h2><p>Here&#8217;s what I don&#8217;t see anyone talking about yet: skills solve the <em>reuse</em> problem but create a <em>governance</em> problem that&#8217;s arguably worse than what we had with prompts.</p><p>With prompts, governance was simple &#8212; there was nothing to govern. Prompts were disposable. Skills are durable, versioned, shared, and composed. They&#8217;re organizational IP. And in regulated industries (healthcare, financial services, mortgage), they touch compliance boundaries that current registries don&#8217;t model.</p><p>JFrog gives you the software supply chain layer &#8212; scan, sign, verify. 
That&#8217;s necessary but not sufficient. What&#8217;s missing is the <em>requirements traceability</em> layer: the ability to map a skill&#8217;s behavior to the specific regulatory obligations it must satisfy, and to detect when skill composition violates those obligations even when individual skills are compliant.</p><p>This is the problem I&#8217;m working on with the CART (Cloud-AI Requirements Traceability) framework, specifically extending it for agentic systems where execution paths aren&#8217;t deterministic and skills compose at runtime. The gap between supply-chain security and regulatory traceability is where the next wave of enterprise SkillOps tooling needs to go.</p><h2>What You Should Do This Week</h2><p><strong>If you&#8217;re starting from zero</strong>: Pick one workflow your team does repeatedly (code review, PR descriptions, incident response). Write a SKILL.md for it. Drop it in <code>.claude/skills/</code> or <code>.codex/skills/</code>. Test it. You&#8217;ll learn more about progressive disclosure and description-writing in an hour than from any documentation.</p><p><strong>If you already have scattered prompts</strong>: Audit them. Pick the five most-used. Convert each to a skill directory with proper metadata. Commit them to your repo. You&#8217;ve just started your skill library.</p><p><strong>If you&#8217;re operating at scale</strong>: Evaluate registry options. For startups, SkillRegistry.io and GitHub repos work. For enterprise with compliance requirements, look at JFrog&#8217;s Agent Skills Registry or build an internal registry with the Agent Skills SDK (open-source Python library from Microsoft). Either way, add evaluation loops &#8212; track which skills agents actually use and how they perform.</p><p><strong>If you&#8217;re in a regulated industry</strong>: Start thinking about the governance gap now. Current registries handle supply-chain security but not regulatory traceability. 
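</p><p>A first cut at skill-to-obligation traceability fits in a few lines. The tags and skill names below are illustrative (echoing the patient-record example earlier), not a real compliance schema:</p><pre><code># Which data class each skill touches, and whether it can move data out.
SKILL_TAGS = {
    "summarize-patient-record": {"handles_phi": True, "egress": False},
    "send-email": {"handles_phi": False, "egress": True},
}

def composition_violations(pipeline):
    """Each skill may be compliant alone; flag any egress skill that runs
    after PHI has entered the pipeline."""
    violations = []
    phi_in_flight = False
    for skill in pipeline:
        tags = SKILL_TAGS[skill]
        phi_in_flight = phi_in_flight or tags["handles_phi"]
        if tags["egress"] and phi_in_flight:
            violations.append(skill)
    return violations

# Compliant in isolation, a violation when composed:
composition_violations(["summarize-patient-record", "send-email"])  # ["send-email"]</code></pre><p>A real traceability layer would attach regulation identifiers to each tag, but even a table like this catches the composition failures that per-skill review misses.</p><p>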
Map your most critical skills to the compliance obligations they touch. You&#8217;ll want this mapping before auditors start asking for it &#8212; and they will.</p><div><hr></div>]]></content:encoded></item><item><title><![CDATA[Anthropic's Mythos Uncovered Decades-Old Vulnerabilities. Your Governance Model Needs to Catch Up.]]></title><description><![CDATA[Project Glasswing just exposed thousands of zero-days across every major OS and browser. 
Here&#8217;s what that actually means if you ship AI agents in regulated industries.]]></description><link>https://theairuntime.com/p/anthropics-mythos-uncovered-decades</link><guid isPermaLink="false">https://theairuntime.com/p/anthropics-mythos-uncovered-decades</guid><dc:creator><![CDATA[The AI Runtime]]></dc:creator><pubDate>Thu, 09 Apr 2026 11:04:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Dmz9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b85e61d-fc8d-4741-abb9-0543e5595769_1384x763.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p><strong>TL;DR - </strong> Anthropic&#8217;s Project Glasswing coalition &#8212; AWS, Microsoft, Google, Apple, CrowdStrike, JPMorganChase, the Linux Foundation, and six others &#8212; used an unreleased model called Claude Mythos Preview to find thousands of zero-day vulnerabilities across every major OS and browser, some hidden for 27 years. For AI engineers shipping in regulated industries, this breaks three assumptions simultaneously: that your open-source dependencies are &#8220;good enough,&#8221; that quarterly governance keeps you safe, and that your AI agent infrastructure isn&#8217;t attack surface. Here&#8217;s what to do about each, this week.</p></div><h2>The 27-Year Bug and the Five-Million-Test Miss</h2><p>Let me start with the two numbers that should keep you up tonight.</p><p><strong>Twenty-seven years.</strong> That&#8217;s how long a remote crash vulnerability survived in OpenBSD &#8212; an operating system whose entire reputation is built on being security-hardened. It runs firewalls. It runs critical infrastructure. Mythos Preview found it.</p><p><strong>Five million.</strong> That&#8217;s how many times automated security tests hit the vulnerable line of code in FFmpeg without catching the bug. Mythos Preview caught it on what amounts to a first read.</p><p>These aren&#8217;t edge cases. 
These are the libraries underneath your production systems right now.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Dmz9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b85e61d-fc8d-4741-abb9-0543e5595769_1384x763.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Dmz9!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b85e61d-fc8d-4741-abb9-0543e5595769_1384x763.png 424w, https://substackcdn.com/image/fetch/$s_!Dmz9!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b85e61d-fc8d-4741-abb9-0543e5595769_1384x763.png 848w, https://substackcdn.com/image/fetch/$s_!Dmz9!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b85e61d-fc8d-4741-abb9-0543e5595769_1384x763.png 1272w, https://substackcdn.com/image/fetch/$s_!Dmz9!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b85e61d-fc8d-4741-abb9-0543e5595769_1384x763.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Dmz9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b85e61d-fc8d-4741-abb9-0543e5595769_1384x763.png" width="1384" height="763" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3b85e61d-fc8d-4741-abb9-0543e5595769_1384x763.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:763,&quot;width&quot;:1384,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2114724,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/193648689?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b85e61d-fc8d-4741-abb9-0543e5595769_1384x763.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Dmz9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b85e61d-fc8d-4741-abb9-0543e5595769_1384x763.png 424w, https://substackcdn.com/image/fetch/$s_!Dmz9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b85e61d-fc8d-4741-abb9-0543e5595769_1384x763.png 848w, https://substackcdn.com/image/fetch/$s_!Dmz9!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b85e61d-fc8d-4741-abb9-0543e5595769_1384x763.png 1272w, https://substackcdn.com/image/fetch/$s_!Dmz9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b85e61d-fc8d-4741-abb9-0543e5595769_1384x763.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>                                                                    Project GLASSWING</em></p><div><hr></div><h2>Three Things That Just Broke</h2><p>Enterprises started deploying AI across healthcare, financial services, airlines, and other regulated industries. These are the industries where you don&#8217;t get to say &#8220;we&#8217;ll patch it next sprint&#8221; &#8212; you answer to regulators, patients, and auditors. Glasswing broke three foundational assumptions we see in nearly every deployment we touch.</p><div><hr></div><h3>Broken Assumption #1: &#8220;We Track Our Dependencies&#8221;</h3><p>You track your direct dependencies. Maybe your first layer of transitive dependencies. 
But Glasswing exposed vulnerabilities in the deep layers &#8212; the <em>FFmpegs</em> and <em>OpenSSLs</em> and <em>zlibs</em> that your dependencies&#8217; dependencies depend on.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Nzpj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06fc34c7-3f13-4a2d-a735-102bbe794695_967x593.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Nzpj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06fc34c7-3f13-4a2d-a735-102bbe794695_967x593.png 424w, https://substackcdn.com/image/fetch/$s_!Nzpj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06fc34c7-3f13-4a2d-a735-102bbe794695_967x593.png 848w, https://substackcdn.com/image/fetch/$s_!Nzpj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06fc34c7-3f13-4a2d-a735-102bbe794695_967x593.png 1272w, https://substackcdn.com/image/fetch/$s_!Nzpj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06fc34c7-3f13-4a2d-a735-102bbe794695_967x593.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Nzpj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06fc34c7-3f13-4a2d-a735-102bbe794695_967x593.png" width="967" height="593" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/06fc34c7-3f13-4a2d-a735-102bbe794695_967x593.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:593,&quot;width&quot;:967,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:22393,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/193648689?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06fc34c7-3f13-4a2d-a735-102bbe794695_967x593.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Nzpj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06fc34c7-3f13-4a2d-a735-102bbe794695_967x593.png 424w, https://substackcdn.com/image/fetch/$s_!Nzpj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06fc34c7-3f13-4a2d-a735-102bbe794695_967x593.png 848w, https://substackcdn.com/image/fetch/$s_!Nzpj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06fc34c7-3f13-4a2d-a735-102bbe794695_967x593.png 1272w, https://substackcdn.com/image/fetch/$s_!Nzpj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06fc34c7-3f13-4a2d-a735-102bbe794695_967x593.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>The deeper you go, the less you track &#8212; and that&#8217;s where Mythos found the bugs.</em></p><p>The Linux Foundation joined Glasswing because the people maintaining the software at the bottom of that chain don&#8217;t have security teams. Your SBOM was a compliance artifact. It needs to become an operational dependency map with patching SLAs attached to every node.</p><div><hr></div><h3>Broken Assumption #2: &#8220;Our Governance Cadence Is Sufficient&#8221;</h3><p>CrowdStrike&#8217;s CTO said it plainly: what once took months now happens in minutes. Mythos Preview autonomously chained together multiple Linux kernel vulnerabilities to escalate from user to root &#8212; no human steering required.</p><p>Your quarterly vulnerability review doesn&#8217;t survive this. 
You need dependency scanning on every build, and a fast-track patching path that bypasses the standard change advisory timeline for critical zero-days.</p><div><hr></div><h3>Broken Assumption #3: &#8220;Our AI Agent Layer Isn&#8217;t Attack Surface&#8221;</h3><p>This is the one nobody&#8217;s talking about, and it&#8217;s the one I see every day.</p><p>If you&#8217;re building multi-agent systems &#8212; agents calling tools via MCP, persisting memory, chaining decisions across services &#8212; you&#8217;ve built execution paths that no traditional penetration test covers.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3-ti!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb46964ab-7c7b-48da-8b9a-f35689c50a26_936x437.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3-ti!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb46964ab-7c7b-48da-8b9a-f35689c50a26_936x437.png 424w, https://substackcdn.com/image/fetch/$s_!3-ti!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb46964ab-7c7b-48da-8b9a-f35689c50a26_936x437.png 848w, https://substackcdn.com/image/fetch/$s_!3-ti!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb46964ab-7c7b-48da-8b9a-f35689c50a26_936x437.png 1272w, https://substackcdn.com/image/fetch/$s_!3-ti!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb46964ab-7c7b-48da-8b9a-f35689c50a26_936x437.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!3-ti!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb46964ab-7c7b-48da-8b9a-f35689c50a26_936x437.png" width="936" height="437" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b46964ab-7c7b-48da-8b9a-f35689c50a26_936x437.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:936,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:31439,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/193648689?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb46964ab-7c7b-48da-8b9a-f35689c50a26_936x437.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!3-ti!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb46964ab-7c7b-48da-8b9a-f35689c50a26_936x437.png 424w, https://substackcdn.com/image/fetch/$s_!3-ti!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb46964ab-7c7b-48da-8b9a-f35689c50a26_936x437.png 848w, https://substackcdn.com/image/fetch/$s_!3-ti!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb46964ab-7c7b-48da-8b9a-f35689c50a26_936x437.png 1272w, https://substackcdn.com/image/fetch/$s_!3-ti!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb46964ab-7c7b-48da-8b9a-f35689c50a26_936x437.png 1456w" 
sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>Traditional security tests the infrastructure. Nobody tests the agent paths that sit on top of it.</em></p><p>Here&#8217;s the connection nobody&#8217;s making: the agentic reasoning that lets Mythos Preview autonomously chain kernel exploits is architecturally the same capability your agents use to chain tool calls. If a compromised dependency injects malicious context into your agent&#8217;s execution chain, what layer catches it?</p><p>For most systems? Nothing. The guardrails check the model&#8217;s outputs. 
They don&#8217;t check what flows into the model from compromised upstream tools.</p><div><hr></div><h2>Your Playbook: This Week, This Month</h2><h3>This Week</h3><p><strong>Map your Glasswing exposure now.</strong> Anthropic published cryptographic hashes of unpatched vulnerabilities. When full disclosures land, you need to already know your dependency overlap. Don&#8217;t start the audit after the CVEs drop.</p><p><strong>Benchmark your real patching SLA.</strong> Not the number in your security policy &#8212; the actual elapsed time from &#8220;critical zero-day announced&#8221; to &#8220;patched in production.&#8221; If it&#8217;s measured in weeks, you&#8217;ve found the gap.</p><p><strong>Tabletop an AI-speed attack.</strong> Get your security, platform, and AI engineering leads in a room. Scenario: a Mythos-class model finds a zero-day in a dependency your agents use. An exploit is weaponized in hours. Walk through your response. Find where it breaks.</p><h3>This Month</h3><p><strong>Shift SBOM from compliance to CI/CD.</strong> Dependency scanning on every build. Automated alerts when any dependency matches a Glasswing disclosure. No exceptions.</p><p><strong>Audit your agent attack surface.</strong> Document every tool-calling interface, memory layer, and cross-agent trust boundary. Test what happens when one node in the chain serves compromised context.</p><p><strong>Design a fast-track patch path.</strong> Your standard CAB process can&#8217;t be the only route for critical zero-days.</p><h2>The 90-Day Clock</h2><p>Anthropic committed to publishing findings within 90 days &#8212; vulnerabilities fixed, lessons learned, and recommendations for how security practices should evolve. They&#8217;re working on guidance covering disclosure processes, patching automation, supply chain security, and standards for regulated industries.</p><p>That 90-day report will matter. But the vulnerabilities exist now. The exploitation tools are advancing now. 
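</p><p>The dependency-overlap audit in the playbook above can be sketched in a few lines. A minimal sketch, assuming your SBOM resolves to local artifact paths and that a disclosure list arrives as a flat set of SHA-256 hex digests; the function names and the hash-list format are assumptions for illustration, not Anthropic&#8217;s actual publication format.</p>

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream-hash a dependency artifact for comparison against a published list."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest()


def overlap_report(sbom_paths: list[str], published_hashes: set[str]) -> list[str]:
    """Return the SBOM entries whose artifact hashes appear in a disclosure list."""
    flagged = []
    for entry in sbom_paths:
        path = Path(entry)
        # Missing files are skipped here; a real audit would flag them separately.
        if path.exists() and sha256_of(path) in published_hashes:
            flagged.append(entry)
    return flagged
```

<p>Run on every build, a non-empty report blocks the pipeline; that is the difference between an SBOM as a compliance artifact and an SBOM as an operational dependency map.</p><p>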
And the gap between AI-speed offense and quarterly-cadence defense is only getting wider.</p><p>The Glasswing butterfly hides in plain sight &#8212; transparent wings, invisible against the forest. These vulnerabilities did the same thing for decades. The question isn&#8217;t whether your systems are affected. It&#8217;s whether your response will move at the speed this moment demands.</p><div><hr></div>]]></content:encoded></item><item><title><![CDATA[Model Reliability Engineering: Who Owns It When the AI Is Confidently Wrong?]]></title><description><![CDATA[Teams know their AI can be wrong.
What's missing is the engineering discipline to make it reliably right.]]></description><link>https://theairuntime.com/p/model-reliability-engineering-who</link><guid isPermaLink="false">https://theairuntime.com/p/model-reliability-engineering-who</guid><dc:creator><![CDATA[The AI Runtime]]></dc:creator><pubDate>Wed, 08 Apr 2026 11:51:15 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!wgsw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f21ed42-c788-4fb5-be27-e6b34140826c_1396x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p><strong>TL;DR:</strong> Companies deploying LLMs in production are discovering a reliability gap that none of the existing engineering disciplines &#8212; SRE, MLOps, AI Safety &#8212; are designed to close. Infrastructure stays up. Pipelines keep running. Models keep generating. But the outputs users depend on can be wrong, inconsistent, or unsafe, and no team owns that problem. What&#8217;s emerging to fill this gap is something that might be called Model Reliability Engineering (MRE) &#8212; the practice of ensuring that AI model <em>behavior</em> is reliable in production, not just the infrastructure underneath it. This piece maps the gap, explains why it exists now and didn&#8217;t before, and sketches the shape of the discipline forming around it. 
The framework is early and evolving &#8212; the goal here is to start a conversation, not finish one.</p></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!wgsw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f21ed42-c788-4fb5-be27-e6b34140826c_1396x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!wgsw!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f21ed42-c788-4fb5-be27-e6b34140826c_1396x768.png 424w, https://substackcdn.com/image/fetch/$s_!wgsw!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f21ed42-c788-4fb5-be27-e6b34140826c_1396x768.png 848w, https://substackcdn.com/image/fetch/$s_!wgsw!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f21ed42-c788-4fb5-be27-e6b34140826c_1396x768.png 1272w, https://substackcdn.com/image/fetch/$s_!wgsw!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f21ed42-c788-4fb5-be27-e6b34140826c_1396x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!wgsw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f21ed42-c788-4fb5-be27-e6b34140826c_1396x768.png" width="1396" height="768" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1f21ed42-c788-4fb5-be27-e6b34140826c_1396x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1396,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1529781,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/193536389?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f21ed42-c788-4fb5-be27-e6b34140826c_1396x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!wgsw!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f21ed42-c788-4fb5-be27-e6b34140826c_1396x768.png 424w, https://substackcdn.com/image/fetch/$s_!wgsw!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f21ed42-c788-4fb5-be27-e6b34140826c_1396x768.png 848w, https://substackcdn.com/image/fetch/$s_!wgsw!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f21ed42-c788-4fb5-be27-e6b34140826c_1396x768.png 1272w, https://substackcdn.com/image/fetch/$s_!wgsw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f21ed42-c788-4fb5-be27-e6b34140826c_1396x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>                                                        Model Reliability Engineering</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://theairuntime.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! 
Subscribe for free</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2>Something Is Missing</h2><p>A healthcare system deploys an AI assistant to help clinicians review patient records and surface relevant clinical guidelines. The infrastructure team runs it on managed Kubernetes with auto-scaling. The ML platform team built a solid RAG pipeline with nightly document ingestion. The system passes load testing. The SRE dashboard is green across every metric.</p><p>A nurse practitioner asks: &#8220;What&#8217;s the recommended dosing adjustment for metformin in patients with reduced renal function?&#8221; The system retrieves a clinical guideline, passes it to the model, and generates a clear, confident answer with a specific dosage recommendation. The recommendation is subtly wrong &#8212; the model extracted a dosage figure from a retrieved passage but missed that the passage described a <em>contraindicated</em> scenario, not a recommended one. The qualifying context was in the previous chunk, which didn&#8217;t make the top-K retrieval cutoff.</p><p>The error isn&#8217;t caught. No alarm fires. The system&#8217;s correctness monitoring consists of a thumbs-up/thumbs-down button that fewer than 3% of users click. The next time anyone knows something went wrong is when a pharmacist catches the discrepancy during medication review &#8212; days later.</p><p>This isn&#8217;t a hypothetical. Variants of this failure pattern play out across every industry deploying LLMs in production:</p><p><strong>In financial services</strong>, a compliance assistant retrieves an outdated regulatory interpretation and generates advice based on a rule that was superseded six months ago. 
The retrieval pipeline ran perfectly. The document was in the corpus &#8212; it just shouldn&#8217;t have been, or should have been flagged as superseded. No existing monitoring caught it because &#8220;the model returned a well-formed answer from a successfully retrieved document&#8221; looks like success to every metric being tracked.</p><p><strong>In legal</strong>, a contract review tool summarizes a liability clause but drops a carve-out exception that fundamentally changes the clause&#8217;s meaning. The LLM&#8217;s summary is grammatically perfect, tonally appropriate, and 80% accurate. The missing 20% is the part that matters. The tool&#8217;s evaluation framework tests for &#8220;is the summary relevant to the clause?&#8221; but not &#8220;does the summary preserve all material qualifications?&#8221;</p><p><strong>In enterprise knowledge management</strong>, an internal Q&amp;A system answers &#8220;What&#8217;s our policy on remote work eligibility?&#8221; by combining fragments from three different policy documents &#8212; a 2022 version, a 2023 update, and an FAQ that was drafted but never approved. The answer reads coherently but reflects a policy that never existed. Each source was individually legitimate. The synthesis was not.</p><p>In every case, infrastructure reliability was excellent. Pipeline reliability was excellent. The model performed exactly as designed &#8212; it generated fluent, confident text based on the context it received. The failure was in a layer that no existing discipline is structured to monitor: the reliability of the model&#8217;s <em>behavior</em> as experienced by the user.</p><div><hr></div><h2>Why This Gap Exists Now</h2><p>This isn&#8217;t a problem that people have been ignoring. It&#8217;s a problem that didn&#8217;t fully exist until recently. Three shifts created it.</p><h3>Shift 1: From prediction to generation</h3><p>Traditional ML in production outputs predictions: a classification, a score, a probability. 
A fraud detection model returns 0.87. A recommendation engine ranks items. These outputs are narrow, measurable, and directly testable against ground truth. You can compute precision, recall, F1, and AUC on every production prediction and track them in real time.</p><p>LLMs produce <em>open-ended text</em>. The output space is effectively infinite. Two correct answers to the same question can be worded completely differently. A wrong answer can be syntactically identical to a right one except for a single word. Traditional ML monitoring &#8212; tracking prediction distributions, feature drift, data quality &#8212; doesn&#8217;t tell you whether a generated paragraph is <em>true</em>. This is fundamentally different from anything software reliability or ML monitoring was designed to handle.</p><h3>Shift 2: From self-contained models to compound systems</h3><p>A traditional ML model is a single artifact: data goes in, prediction comes out. Its reliability surface is the model itself plus its input pipeline.</p><p>An LLM in production is a <em>compound system</em> &#8212; the term Berkeley researchers used in early 2024. It&#8217;s a model wrapped in a retrieval pipeline, a prompt template, a set of guardrails, possibly tool-calling infrastructure, memory, re-ranking, citation logic, and output formatting. The model is one component among many. A failure in any component degrades the final output, and the failure modes are combinatorial. Bad chunking + good retrieval + good generation = wrong answer. Good chunking + good retrieval + bad extraction = wrong answer. Good everything + stale source document = wrong answer.</p><p>No single component owner sees the full picture. The retrieval team sees retrieval metrics. The model provider sees generation metrics. The infrastructure team sees latency and throughput. 
Nobody sees &#8220;the user got a wrong answer because of an interaction between retrieval ranking and chunk boundary placement,&#8221; because that&#8217;s not any one team&#8217;s metric.</p><h3>Shift 3: From technical users to everyone</h3><p>When ML models served data scientists and internal analytics teams, a slightly wrong output was caught and corrected by experts who understood the model&#8217;s limitations. When LLMs serve nurses, compliance officers, customer support agents, and end consumers, the user often lacks the domain expertise to recognize when the model is wrong &#8212; especially when the model&#8217;s errors are articulate, confident, and well-structured.</p><p>The consequence of this shift: model behavior reliability is no longer a nice-to-have quality attribute. It&#8217;s a safety property. And unlike traditional safety properties in software, it can&#8217;t be addressed through static analysis, type checking, or deterministic testing. It requires continuous, probabilistic monitoring of outputs that are non-deterministic by nature.</p><div><hr></div><h2>What Existing Disciplines Cover &#8212; and What They Don&#8217;t</h2><p>It&#8217;s worth being precise about why existing practices don&#8217;t close this gap. Not because they&#8217;re insufficient at what they do, but because none of them are <em>scoped</em> to cover model behavior reliability.</p><p><strong>Site Reliability Engineering</strong> operates at the infrastructure layer. SRE&#8217;s tools &#8212; SLOs, error budgets, incident response, capacity planning &#8212; are designed for systems with deterministic or statistically predictable behavior. A web server either returns the right page or an error code. An SRE can define &#8220;success&#8221; as a 200 response within 300ms. For an LLM, a 200 response within 300ms tells you nothing about whether the <em>content</em> of that response is reliable. 
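</p><p>That distinction can be made concrete. A minimal sketch of a health check extended with a behavioral gate, assuming a pluggable scorer; <code>token_overlap</code> is a toy stand-in for a real faithfulness evaluator, and the 300ms and 0.8 thresholds are illustrative:</p>

```python
from typing import Callable


def slo_check(status: int, latency_ms: float, answer: str, source: str,
              score: Callable[[str, str], float], threshold: float = 0.8) -> dict:
    """The SRE-style check passes on transport alone; the behavioral check
    also asks whether the generated answer is supported by its source."""
    infra_ok = status == 200 and latency_ms < 300
    behavior_ok = infra_ok and score(answer, source) >= threshold
    return {"infra_ok": infra_ok, "behavior_ok": behavior_ok}


def token_overlap(answer: str, source: str) -> float:
    """Toy faithfulness scorer: fraction of answer tokens found in the source.
    A real deployment would use an entailment model or an LLM judge."""
    tokens = answer.lower().split()
    vocab = set(source.lower().split())
    return sum(t in vocab for t in tokens) / max(len(tokens), 1)
```

<p>The two flags can disagree on the same response, and only the behavioral one tracks what the user actually experienced.</p><p>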
Todd Underwood, who built ML SRE at Google and later led reliability teams at OpenAI and Anthropic, has written directly about this: infrastructure failures in ML systems manifest as quality problems, and SRE&#8217;s monitoring isn&#8217;t designed to distinguish &#8220;the system returned an error&#8221; from &#8220;the system returned a confident wrong answer.&#8221; SRE monitors the vehicle. It doesn&#8217;t know if the vehicle is driving to the right destination.</p><p><strong>MLOps</strong> operates at the pipeline and lifecycle layer. MLOps ensures models get from development to production, stay updated, and remain monitored for data and distribution drift. These are necessary functions. But MLOps drift detection typically tracks input distributions, feature statistics, and prediction distribution shifts &#8212; not whether individual outputs are correct, faithful to sources, or safe in context. MLOps monitors the assembly line. It doesn&#8217;t inspect what&#8217;s coming off the end of it.</p><p><strong>AI Safety</strong> operates at the training and alignment layer. AI safety research produces the techniques &#8212; RLHF, constitutional AI, red-teaming &#8212; that make foundation models safer before deployment. For practitioners deploying models they didn&#8217;t train, in applications the model provider didn&#8217;t anticipate, AI safety provides crucial principles but not an operational engineering practice. A model can be aligned at training time and still produce unreliable outputs in a specific deployment context because of retrieval failures, prompt interactions, or domain-specific edge cases the training process never encountered. AI safety establishes the building code. It doesn&#8217;t do the home inspection.</p><p><strong>ModelOps</strong> operates at the governance layer. ModelOps tracks which models are deployed where, who approved them, and whether they comply with organizational policies. It&#8217;s necessary for enterprise governance. 
It doesn&#8217;t monitor whether the model&#8217;s Tuesday afternoon output to a specific user was correct.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ZHx_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09fec11f-15db-4bf5-ac0f-6eb403ab562e_941x810.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ZHx_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09fec11f-15db-4bf5-ac0f-6eb403ab562e_941x810.png 424w, https://substackcdn.com/image/fetch/$s_!ZHx_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09fec11f-15db-4bf5-ac0f-6eb403ab562e_941x810.png 848w, https://substackcdn.com/image/fetch/$s_!ZHx_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09fec11f-15db-4bf5-ac0f-6eb403ab562e_941x810.png 1272w, https://substackcdn.com/image/fetch/$s_!ZHx_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09fec11f-15db-4bf5-ac0f-6eb403ab562e_941x810.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ZHx_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09fec11f-15db-4bf5-ac0f-6eb403ab562e_941x810.png" width="941" height="810" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/09fec11f-15db-4bf5-ac0f-6eb403ab562e_941x810.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:810,&quot;width&quot;:941,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:55083,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/193536389?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09fec11f-15db-4bf5-ac0f-6eb403ab562e_941x810.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ZHx_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09fec11f-15db-4bf5-ac0f-6eb403ab562e_941x810.png 424w, https://substackcdn.com/image/fetch/$s_!ZHx_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09fec11f-15db-4bf5-ac0f-6eb403ab562e_941x810.png 848w, https://substackcdn.com/image/fetch/$s_!ZHx_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09fec11f-15db-4bf5-ac0f-6eb403ab562e_941x810.png 1272w, https://substackcdn.com/image/fetch/$s_!ZHx_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09fec11f-15db-4bf5-ac0f-6eb403ab562e_941x810.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>                                                                        Existing Disciplines</em></p><p>The gap between these disciplines isn&#8217;t narrow. It&#8217;s the entire layer that users experience.</p><div><hr></div><h2>The Shape of What&#8217;s Emerging</h2><p>Across organizations deploying LLMs seriously, a set of practices is forming to address this gap. Different teams call it different things &#8212; &#8220;LLM quality engineering,&#8221; &#8220;AI output monitoring,&#8221; &#8220;model behavior testing&#8221; &#8212; or don&#8217;t name it at all, just bolt it onto existing SRE or MLOps responsibilities. But the practices converge. 
What&#8217;s emerging has a recognizable shape, and giving it a name might help the community develop it faster.</p><p>The term that seems to fit is <strong>Model Reliability Engineering (MRE)</strong> &#8212; the practice of ensuring that AI model behavior is reliable in production. Not infrastructure uptime. Not pipeline health. The actual outputs the system produces.</p><p>MRE focuses on a simple question that turns out to be operationally complex: <strong>does the model&#8217;s output deserve the user&#8217;s trust, right now, for this query?</strong></p><p>The practices forming around this question tend to organize along two layers.</p><h3>The Context Layer</h3><p>Every production LLM system has to solve the problem of getting the right information to the model at the right time. The methods span a wide spectrum &#8212; from static knowledge baked into model weights through fine-tuning, to dynamic retrieval from external sources, to real-time tool use and agentic research. Each method has a different reliability profile.</p><p>RAG systems can fail through stale indexes, bad chunking, missed retrieval, or context overload. Fine-tuned models can fail through knowledge staleness or catastrophic forgetting. Long-context approaches can fail through attention drift and the well-documented &#8220;lost in the middle&#8221; effect. Tool-calling systems can fail through API errors, schema mismatches, or the model misinterpreting returned data.</p><p>What&#8217;s emerging is the recognition that <em>context is a reliability surface</em>. It can be monitored, measured, and held to standards the same way infrastructure performance can. Retrieval precision isn&#8217;t just a search quality metric &#8212; it&#8217;s a leading indicator of output reliability. Context freshness isn&#8217;t just a data management concern &#8212; it&#8217;s a behavioral SLO. 
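To make "context as a reliability surface" concrete, here is a minimal sketch of monitoring retrieval precision and context freshness as reliability metrics. All names, thresholds, and data structures are illustrative, not taken from any particular stack:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class RetrievedChunk:
    doc_id: str
    indexed_at: datetime  # when the source document was last ingested

def precision_at_k(retrieved: list[RetrievedChunk],
                   relevant_ids: set[str],
                   k: int = 5) -> float:
    """Fraction of the top-k retrieved chunks that are actually relevant
    (relevance judged against a labeled evaluation set)."""
    top_k = retrieved[:k]
    if not top_k:
        return 0.0
    hits = sum(1 for c in top_k if c.doc_id in relevant_ids)
    return hits / len(top_k)

def freshness_violations(retrieved: list[RetrievedChunk],
                         max_age: timedelta) -> int:
    """Count retrieved chunks older than the freshness SLO allows."""
    now = datetime.now(timezone.utc)
    return sum(1 for c in retrieved if now - c.indexed_at > max_age)
```

In an MRE setup, a sustained drop in precision@k or a rising freshness-violation count would trigger an alert before users notice degraded answers, exactly as a latency regression would on an infrastructure dashboard.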
Source authority scoring, chunk boundary analysis, multi-source corroboration &#8212; these are reliability practices for the context layer, and teams are beginning to treat them that way.</p><h3>The Harness Layer</h3><p>Between the model&#8217;s raw output and what the user sees sits a control layer &#8212; the guardrails, evaluators, validators, safety filters, and orchestration logic that constrain and verify model behavior. This layer is where reliability is <em>enforced</em>.</p><p>In practice, this includes faithfulness scoring (does the output contradict its source context?), citation verification (do cited sources actually support the claims?), confidence calibration (does the system communicate uncertainty when it should?), output validation gates (does the response meet formatting, safety, and quality thresholds before serving?), graceful degradation (does the system fail safely when context is insufficient?), and permission-aware filtering (does retrieval respect access controls?).</p><p>In the Claude Code ecosystem, practitioners are already building harness components intuitively &#8212; CLAUDE.md files that establish behavioral constraints, hooks that enforce validation at lifecycle events, skills that encode domain-specific guardrails, subagents that verify outputs. 
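As an illustration of the harness idea, a validation gate combining a faithfulness threshold with graceful degradation can be sketched in a few lines. The function, action names, and thresholds are hypothetical; the faithfulness score is assumed to come from an external evaluator such as a RAGAS-style scorer:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GateResult:
    action: str            # "serve" | "degrade" | "abstain"
    reason: Optional[str] = None

def validation_gate(answer: str,
                    faithfulness: float,   # from an external evaluator (assumed)
                    context_chunks: int,
                    min_faithfulness: float = 0.8,
                    min_context: int = 1) -> GateResult:
    """Decide whether a model output may be served, checked in order of severity."""
    if context_chunks < min_context:
        # Graceful degradation: no grounding available, abstain rather than fabricate
        return GateResult("abstain", "insufficient context to answer reliably")
    if faithfulness < min_faithfulness:
        # Output strays from its sources: fall back instead of serving as-is
        return GateResult("degrade", f"faithfulness {faithfulness:.2f} below SLO")
    return GateResult("serve")
```

The point of the sketch is the ordering: insufficient context is caught before generation quality is even considered, so the system fails safely rather than fluently.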
What hasn&#8217;t happened yet is treating these as components of a reliability discipline with measurable SLOs.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_1Ai!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19769f71-658d-4c28-9e63-ddbaf3ccda61_953x785.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_1Ai!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19769f71-658d-4c28-9e63-ddbaf3ccda61_953x785.png 424w, https://substackcdn.com/image/fetch/$s_!_1Ai!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19769f71-658d-4c28-9e63-ddbaf3ccda61_953x785.png 848w, https://substackcdn.com/image/fetch/$s_!_1Ai!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19769f71-658d-4c28-9e63-ddbaf3ccda61_953x785.png 1272w, https://substackcdn.com/image/fetch/$s_!_1Ai!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19769f71-658d-4c28-9e63-ddbaf3ccda61_953x785.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_1Ai!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19769f71-658d-4c28-9e63-ddbaf3ccda61_953x785.png" width="953" height="785" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/19769f71-658d-4c28-9e63-ddbaf3ccda61_953x785.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:785,&quot;width&quot;:953,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:50973,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/193536389?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19769f71-658d-4c28-9e63-ddbaf3ccda61_953x785.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_1Ai!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19769f71-658d-4c28-9e63-ddbaf3ccda61_953x785.png 424w, https://substackcdn.com/image/fetch/$s_!_1Ai!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19769f71-658d-4c28-9e63-ddbaf3ccda61_953x785.png 848w, https://substackcdn.com/image/fetch/$s_!_1Ai!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19769f71-658d-4c28-9e63-ddbaf3ccda61_953x785.png 1272w, https://substackcdn.com/image/fetch/$s_!_1Ai!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19769f71-658d-4c28-9e63-ddbaf3ccda61_953x785.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p><em>Two evolving layers</em></p><p>The two layers are complementary. Context without harness gives the model the right information but no way to catch when it uses that information wrong. Harness without context constrains a model that&#8217;s working with bad information to begin with. Reliable model behavior requires both.</p><div><hr></div><h2>What Behavioral SLOs Look Like</h2><p>The most concrete contribution MRE makes is extending the SLO concept from infrastructure to model behavior.
This isn&#8217;t fully developed yet &#8212; the right metrics and thresholds are still being discovered in practice &#8212; but the emerging shape looks something like this:</p><p><strong>Correctness rate</strong> &#8212; the percentage of outputs that are factually accurate against source material. This requires automated evaluation plus regular human calibration, because purely automated scoring drifts. A team might set a 90% correctness SLO, with the understanding that measuring it is harder than measuring uptime and that the metric itself will evolve.</p><p><strong>Faithfulness</strong> &#8212; how often the model&#8217;s response stays grounded in its provided context versus fabricating beyond it. RAGAS, TruLens, and similar tools provide automated scoring here. A faithfulness SLO sets a floor: below this threshold, the system is considered unreliable for its use case.</p><p><strong>Abstention accuracy</strong> &#8212; how often the model correctly identifies when it lacks sufficient information to answer, rather than fabricating a plausible response. This is arguably the most important behavioral SLO for high-stakes applications. A system that says &#8220;I don&#8217;t have enough information to answer this reliably&#8221; when it genuinely doesn&#8217;t is <em>more reliable</em> than a system that always produces an answer.</p><p><strong>Consistency</strong> &#8212; given the same question and context, how stable are the model&#8217;s answers across repeated queries? Non-determinism is inherent in LLMs, but the <em>factual content</em> of answers to the same question should be stable even if the wording varies. Inconsistency often indicates that the model is uncertain and resolving that uncertainty differently on each pass.</p><p><strong>Safety compliance</strong> &#8212; the rate at which outputs pass content safety, policy compliance, and domain-specific filters. 
What constitutes &#8220;safety&#8221; is domain-dependent: a medical system has different safety thresholds than a creative writing assistant.</p><p>These aren&#8217;t meant as a definitive list. They&#8217;re the SLOs that keep showing up across teams doing this work. The right behavioral SLOs for a specific system depend on the domain, the risk tolerance, and the user population. What matters is that they exist at all &#8212; that model behavior is treated as a measurable, monitorable dimension with explicit quality targets.</p><div><hr></div><h2>Incident Response for Model Behavior</h2><p>One of the clearest signs that a reliability gap exists is looking at how organizations handle model misbehavior today. When infrastructure goes down, SRE has a well-defined incident response practice: detection, triage, response, postmortem, prevention. When a model generates a harmful or incorrect output, most organizations have... nothing. A user complains. Someone files a ticket. Eventually, someone looks at the logs. Maybe the prompt gets tweaked.</p><p>The same rigor can be applied to model behavior:</p><p><strong>Detection</strong> should be automated. Faithfulness scoring, retrieval quality monitoring, and adversarial probing should catch behavioral degradation before users do. A drop in faithfulness scores below the SLO threshold is an incident &#8212; not a metric to review next sprint.</p><p><strong>Triage</strong> matters because not all model failures are equal. A hallucination in a casual Q&amp;A session has different severity than a hallucination in a compliance response. Incident classification needs domain-specific severity frameworks.</p><p><strong>Postmortems</strong> should be blameless and systemic. Why did the model produce this output? 
Was it a context failure (wrong documents retrieved), a generation failure (model misinterpreted correct context), a harness failure (validation should have caught this but didn&#8217;t), or a coverage failure (the knowledge base lacked the needed information)? Each root cause points to a different remediation.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9UoR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b203cf-ddaa-4d5b-afcd-8599164aa6a4_939x628.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9UoR!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b203cf-ddaa-4d5b-afcd-8599164aa6a4_939x628.png 424w, https://substackcdn.com/image/fetch/$s_!9UoR!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b203cf-ddaa-4d5b-afcd-8599164aa6a4_939x628.png 848w, https://substackcdn.com/image/fetch/$s_!9UoR!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b203cf-ddaa-4d5b-afcd-8599164aa6a4_939x628.png 1272w, https://substackcdn.com/image/fetch/$s_!9UoR!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b203cf-ddaa-4d5b-afcd-8599164aa6a4_939x628.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9UoR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b203cf-ddaa-4d5b-afcd-8599164aa6a4_939x628.png" width="939" height="628" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c1b203cf-ddaa-4d5b-afcd-8599164aa6a4_939x628.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:628,&quot;width&quot;:939,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:41436,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiengineerweekly.substack.com/i/193536389?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b203cf-ddaa-4d5b-afcd-8599164aa6a4_939x628.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9UoR!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b203cf-ddaa-4d5b-afcd-8599164aa6a4_939x628.png 424w, https://substackcdn.com/image/fetch/$s_!9UoR!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b203cf-ddaa-4d5b-afcd-8599164aa6a4_939x628.png 848w, https://substackcdn.com/image/fetch/$s_!9UoR!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b203cf-ddaa-4d5b-afcd-8599164aa6a4_939x628.png 1272w, https://substackcdn.com/image/fetch/$s_!9UoR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b203cf-ddaa-4d5b-afcd-8599164aa6a4_939x628.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p><em>Incident Response for Model Behavior</em></p><p><strong>Error budgets</strong> are the mechanism that makes behavioral SLOs operational rather than aspirational. If your correctness SLO is 92% and you&#8217;ve burned through your error budget this month, the team shifts from building new features to improving reliability &#8212; the same trade-off SRE pioneered for infrastructure.</p><div><hr></div><h2>RAG as the Primary Proving Ground</h2><p>If this discipline needs a place to prove its value, RAG is it. RAG is the most widely deployed LLM architecture in production, and it&#8217;s where model behavior reliability challenges are most visible and most painful.</p><p>RAG systems have at least ten well-documented failure modes, cataloged by Barnett et al.
(2024) and expanded significantly by production experience since. Every one of them is a model <em>behavior</em> reliability problem that doesn&#8217;t appear on an infrastructure dashboard: stale retrievals, bad chunking, missed context, context overload and the &#8220;lost in the middle&#8221; effect, unfaithful extraction, security leaks through retrieval, embedding drift, retrieval-generation timing failures, scattered evidence synthesis failures, and the model answering when it should abstain.</p><p>The evolution of RAG architectures &#8212; from naive single-shot retrieval through advanced hybrid retrieval, self-correcting RAG (Self-RAG, Corrective RAG), and now agentic RAG with autonomous retrieval planning &#8212; can itself be understood as an evolution toward greater model behavior reliability. Each generation added mechanisms to detect and recover from failure modes the previous generation couldn&#8217;t handle. Self-RAG taught models to judge whether they need to retrieve at all. Corrective RAG added evaluators that score document relevance before generation. Agentic RAG introduced multi-step planning, self-correction loops, and dynamic tool selection.</p><p>These advances happened organically, driven by practitioners hitting reliability walls. A model reliability framework provides a way to understand <em>where</em> on the reliability spectrum a system sits and <em>what</em> needs to happen to improve it &#8212; turning ad-hoc iteration into systematic engineering.</p><div><hr></div><h2>How This Relates to What Exists</h2><p>MRE isn&#8217;t replacing anything. It&#8217;s filling a gap between things that already exist and work well at what they do.</p><p>The relationship to SRE is generational. SRE was created because software systems became too complex for traditional operations practices. This discipline is forming because AI systems are too complex for traditional software reliability practices. 
SRE&#8217;s operational philosophy &#8212; SLOs, error budgets, blameless postmortems, the principle that reliability is a feature &#8212; transfers directly. What changes is the object of measurement: from system behavior (latency, availability, error rates) to model behavior (correctness, faithfulness, appropriate abstention).</p><p>The relationship to MLOps is complementary. MLOps handles the lifecycle &#8212; getting models from development to production and keeping them updated. Model behavior reliability handles the runtime &#8212; ensuring that what the model <em>does</em> in production meets quality standards. A mature AI organization needs both, the same way a mature software organization needs both CI/CD and production monitoring.</p><p>The relationship to AI Safety is layered. AI safety establishes the foundation: models that are aligned, harmless, and honest at training time. Model behavior reliability builds on that foundation for specific deployment contexts: ensuring that a generally safe model behaves reliably <em>in this application, with this data, for these users</em>. A model can be well-aligned and still produce unreliable outputs when deployed in a context its training didn&#8217;t anticipate.</p><div><hr></div><h2>What&#8217;s Still Unknown</h2><p>Honesty requires acknowledging what isn&#8217;t figured out yet. This discipline is early. Several hard problems remain open:</p><p><strong>Measuring correctness at scale is hard.</strong> Unlike infrastructure metrics that can be computed from logs, output correctness often requires domain expertise to evaluate. Automated faithfulness scoring is getting better (RAGAS, TruLens, LLM-as-judge approaches), but these tools measure <em>consistency with context</em>, not <em>truth</em>. A model that faithfully reproduces information from a wrong document scores high on faithfulness and low on correctness. 
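A toy example makes the faithfulness-versus-correctness gap visible. Here simple token overlap stands in for a real scorer (production systems would use RAGAS, TruLens, or an LLM-as-judge), and the strings are invented:

```python
def token_overlap(answer: str, reference: str) -> float:
    """Toy proxy score: fraction of answer tokens that appear in the reference."""
    tokens = answer.lower().split()
    ref = set(reference.lower().split())
    return sum(1 for t in tokens if t in ref) / len(tokens) if tokens else 0.0

# A stale retrieved document states the wrong value; a golden dataset holds the truth.
context = "the service limit is 100 requests per minute"
golden  = "the service limit is 500 requests per minute"
answer  = "the limit is 100 requests per minute"

faithfulness = token_overlap(answer, context)  # high: answer is grounded in context
correctness  = token_overlap(answer, golden)   # lower: the grounding itself was wrong
```

The answer is perfectly faithful to its retrieved context yet wrong against the golden reference, which is precisely the gap that automated faithfulness scoring cannot close on its own.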
Bridging this gap requires human calibration, golden datasets, and evaluation frameworks that aren&#8217;t mature yet.</p><p><strong>Setting the right thresholds is domain-specific.</strong> What correctness rate is acceptable? 95% for a customer support bot might be fine. 95% for a medical decision support system might be catastrophic. The thresholds need to come from domain expertise and risk analysis, not from engineering defaults. The framework can provide the structure, but it can&#8217;t prescribe universal thresholds.</p><p><strong>Non-determinism complicates everything.</strong> LLMs are inherently probabilistic. The same input can produce different outputs on consecutive calls. This makes behavioral SLOs fundamentally different from infrastructure SLOs, where the same request should always produce the same response. Model reliability has to reason about distributions of behavior, not individual outputs &#8212; and the statistical tools for this are still developing.</p><p><strong>The boundary with prompt engineering is fuzzy.</strong> Is improving a system prompt to reduce hallucinations a reliability activity or a development activity? Probably both, depending on context. The discipline&#8217;s boundaries will sharpen through practice, not through definitional fiat.</p><p><strong>The tooling is immature.</strong> The evaluation tools that exist &#8212; RAGAS, TruLens, custom LLM-as-judge pipelines &#8212; are first-generation. They work but require significant integration effort, produce metrics that need calibration, and don&#8217;t yet connect to the kind of operational dashboards that SRE teams take for granted. This will improve, but it&#8217;s a real limitation right now.</p><p>These unknowns aren&#8217;t reasons to wait. SRE had plenty of open questions in its early years too. The discipline formed through practice, with refinements accumulating as more teams adopted and adapted the core ideas. 
This will likely follow the same path.</p><div><hr></div><h2>An Invitation, Not a Manifesto</h2><p>If this framing resonates, the most useful thing that can happen is for practitioners to pressure-test it against their own experience. The questions worth asking:</p><p>Does the gap described here match what you see in your organization? Is there a team or role that owns model behavior reliability, or does it fall between the cracks?</p><p>Are the two layers &#8212; context reliability and harness reliability &#8212; the right decomposition, or is there a third layer missing?</p><p>Which behavioral SLOs matter most in your domain, and how are you measuring them today (if at all)?</p><p>What failure modes have you encountered that don&#8217;t fit neatly into the categories described here?</p><p>The discipline will be shaped by the practitioners who adopt and adapt it, not by any single definition. What&#8217;s offered here is a starting point &#8212; a way to talk about a problem that many teams are experiencing but that doesn&#8217;t yet have a shared vocabulary. If naming it helps teams think more clearly about it, build better systems around it, and hold themselves to higher standards for what their AI systems deliver to users, then the name is doing its job.</p><p>The infrastructure reliability problem is largely solved. The model behavior reliability problem is wide open. This is how we start closing it.</p><div><hr></div><p><em><strong>References:</strong> Lewis et al. (2020), &#8220;Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks,&#8221; Meta AI. Barnett et al. (2024), &#8220;Seven Failure Points When Engineering a RAG System.&#8221; Asai et al. (2024), &#8220;Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection,&#8221; ICLR 2024. Yan et al. (2024), &#8220;Corrective Retrieval Augmented Generation.&#8221; Chen, Murphy, Parisa, Sculley &amp; Underwood (2022), &#8220;Reliable Machine Learning,&#8221; O&#8217;Reilly. 
Sculley et al. (2015), &#8220;Hidden Technical Debt in Machine Learning Systems,&#8221; NeurIPS. Singh et al. (2025), &#8220;A Survey on Agentic RAG.&#8221; Microsoft Research (2024), &#8220;GraphRAG.&#8221; Hummer &amp; Muthusamy (2018), &#8220;ModelOps,&#8221; IBM Research.</em></p>]]></content:encoded></item></channel></rss>