Somewhere in a data center supporting a program you've never heard of, there is Ada code running right now that was compiled in 1999. It controls radar handoffs. It manages fire control sequences. It calculates trajectory corrections at 30 cycles per second. It works. Nobody touches it.
That's the real legacy software problem in the Department of Defense — not COBOL in finance offices or FORTRAN in weather models, but Ada 95 embedded deep in weapon system software that was written by engineers trained under MIL-STD-1815 (the original Ada 83 standard), compiled for hardware that predates the iPhone by a decade, and maintained by a shrinking workforce of engineers who remember when the language was mandated by law.
The DoD Ada mandate was formally dropped in 1997 as the department embraced commercial off-the-shelf technology. What remained was the installed base: tens of millions of lines of strongly-typed, concurrent, safety-critical Ada code distributed across every branch of service. Avionics. C2 systems. Missile guidance. Target tracking. Ground-based radar. Space-based sensor fusion. All Ada. Much of it vintage 1995–2002.
Why Nobody Touches It
The reason Ada code doesn't get modernized isn't technical. Ada is actually a remarkably well-structured language — strong typing, explicit concurrency via tasks and protected objects, rigorous exception handling, no pointer arithmetic to create undefined behavior. The engineers who wrote it in 1999 knew what they were doing. The code is often well-documented, modular, and readable.
The problem is institutional. Touching legacy Ada means:
Finding people who can read it. The workforce that built these systems is retiring. Younger engineers aren't trained in Ada. There are no coding bootcamps. The talent pipeline is close to zero for new Ada developers, and experienced hands are expensive, hard to find, and increasingly reluctant to re-engage with codebases they left a decade ago.
Understanding what it actually does. The real danger in legacy modernization isn't rewriting the code — it's not understanding the intent behind it. Ada embedded systems often contain undocumented business logic, hardware-specific timing assumptions, and operational constraints that never made it into the comments. Getting it wrong in a weapons system isn't a bug report — it's a mishap.
Surviving the bureaucratic weight of a modernization program. A formal legacy code modernization effort requires a new program of record, a new contract, a new SOW, a new ATO process, and years of schedule. By the time the effort is staffed and authorized, the hardware the Ada code runs on may already be obsolete again.
What a Refactor Cell Is
A Refactor Cell is a Fulcrum agent team configuration specifically composed for legacy code analysis and incremental modernization. It's not a replacement for the entire program — it's a standing AI workforce that runs inside your existing environment, continuously working the codebase in parallel with your human engineers.
The cell is composed of specialized agents, each assigned a role that maps to a distinct phase of the refactoring mission: an Analyst Agent that maps structure and surfaces intent, a Doc Agent that captures that intent as shared documentation, a Translate Agent that produces the target-language implementation, a Test Agent that builds the regression suite, and a Review Agent that packages everything for human sign-off.
Each agent operates inside the Fulcrum MCP workspace via structured tool calls — reading context, writing task updates, posting outputs for human review. The human SME stays in the loop at every critical decision point. The agents handle the analytical and translational labor. The human makes the call on anything touching operational behavior.
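As an illustration of what such a structured tool call looks like on the wire, MCP uses JSON-RPC `tools/call` requests. The tool name and arguments below are hypothetical, not Fulcrum's actual API:

```json
{
  "jsonrpc": "2.0",
  "id": 17,
  "method": "tools/call",
  "params": {
    "name": "post_task_update",
    "arguments": {
      "task_id": "refactor-cell/track-correlator",
      "status": "awaiting_human_review",
      "artifact": "intent_document_v1.md"
    }
  }
}
```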
How the Cell Works on Ada Code
Ada 95 codebases present specific structural challenges that the Refactor Cell is designed to address directly. Here's how the cell processes a representative Ada package — the kind of module you'd find in a 1999-era fire control or tracking system:
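A representative sketch of such a package, with every name and value invented for illustration, might look like this:

```ada
--  Illustrative sketch only; not code from any real system.
package Track_Correlator is

   --  Tuned to a specific radar's update rate; hardware-dependent.
   Correlation_Threshold : constant Float := 0.70;

   --  A constrained subtype is a specification, not a hint.
   subtype Valid_Range is Float range 0.0 .. 1.0;

   type Track_State is (Active, Coast, Dropped);

   type Track_Id is range 1 .. 64;
   type Track_Array is array (Track_Id) of Track_State;

   --  Protected object: all access to the track table is serialized.
   --  Coast timeout behavior lives in the package body.
   protected Track_Table is
      procedure Update (Id : Track_Id; State : Track_State);
      function Get (Id : Track_Id) return Track_State;
   private
      States : Track_Array := (others => Dropped);
   end Track_Table;

end Track_Correlator;
```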
The Analyst Agent reads this package and immediately surfaces several facts that a human engineer would need days to reconstruct from documentation: the correlation threshold is hardware-specific to a particular radar's update rate, the protected object signals a concurrent access pattern that must be preserved in any translation, and the Coast track state implies a specific timeout behavior that lives in the package body.
The Doc Agent then produces an intent document — not a comment restatement, but a prose explanation of why this code exists, what operational behavior it implements, and what constraints any refactored version must preserve. This document goes into the Fulcrum Context Vault as shared knowledge available to every subsequent agent in the cell.
The Translation Layer
With intent captured, the Translate Agent produces a target-language equivalent — typically Python for analysis workflows, Rust for performance-critical embedded replacement, or C for systems that interface with existing hardware drivers. The translation preserves the concurrent semantics explicitly: Ada protected objects become Rust Arc<Mutex> patterns or Python asyncio.Lock equivalents, depending on the target runtime requirements.
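A minimal sketch of the protected-object-to-lock translation in Python, with all names invented for illustration: the asyncio.Lock serializes every access, preserving the mutual exclusion the Ada protected object guaranteed.

```python
import asyncio
from enum import Enum


class TrackState(Enum):
    ACTIVE = 1
    COAST = 2
    DROPPED = 3


class TrackTable:
    """Stands in for the Ada protected object: every access takes the lock."""

    def __init__(self, size: int) -> None:
        self._states = [TrackState.ACTIVE] * size
        self._lock = asyncio.Lock()

    async def update(self, track_id: int, state: TrackState) -> None:
        async with self._lock:  # protected procedure semantics
            self._states[track_id] = state

    async def get(self, track_id: int) -> TrackState:
        async with self._lock:  # protected function semantics
            return self._states[track_id]


async def main() -> list:
    table = TrackTable(4)
    # Concurrent updates, serialized by the lock just as Ada serializes
    # calls on the protected object.
    await asyncio.gather(*(table.update(i, TrackState.COAST) for i in range(4)))
    return [await table.get(i) for i in range(4)]


states = asyncio.run(main())
print(states)
```

The choice between asyncio, threading locks, or Rust's Arc&lt;Mutex&gt; is a runtime decision, but the invariant is the same in every target: no unserialized access to the shared table.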
Ada's strong typing — one of its genuine strengths — actually makes this translation tractable in a way that weakly-typed legacy languages never are. The type constraints in Ada code tell the Translate Agent exactly what the operational bounds are. A subtype Valid_Range is Float range 0.0 .. 1.0 declaration isn't just a type hint — it's a specification that survives into the translation as an explicit validation boundary.
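A sketch of how that constraint can survive translation as an explicit validation boundary rather than a silent type hint (the function name is invented for illustration):

```python
def valid_range(value: float) -> float:
    """Enforce the Ada subtype constraint 0.0 .. 1.0.

    Mirrors Ada's behavior of raising Constraint_Error on an
    out-of-range assignment, rather than silently accepting it.
    """
    if not 0.0 <= value <= 1.0:
        raise ValueError(f"Constraint_Error: {value} not in range 0.0 .. 1.0")
    return value


score = valid_range(0.85)  # in range: passes through unchanged
print(score)

try:
    valid_range(1.5)       # out of range: rejected at the boundary
except ValueError as err:
    print(err)
```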
The Test Agent simultaneously builds a regression suite against the original Ada binary outputs. Before a single line of translated code is approved, there is a test suite that validates the new implementation against the behavioral profile of the original — including edge cases, error paths, and the specific timing behaviors that embedded systems engineers obsess over.
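The regression approach can be sketched as a golden-file harness: replay recorded inputs through the translated implementation and compare against outputs captured from the original Ada binary. All names, values, and the stub implementation below are invented for illustration.

```python
CORRELATION_THRESHOLD = 0.70  # stands in for the hardware-tuned Ada constant


def correlate(score: float) -> str:
    """Translated implementation under test (stub for illustration)."""
    return "correlated" if score >= CORRELATION_THRESHOLD else "uncorrelated"


# Golden cases: inputs paired with outputs recorded from the Ada original,
# deliberately including boundary and just-below-boundary edge cases.
GOLDEN = [
    (0.95, "correlated"),
    (0.70, "correlated"),     # exact threshold: boundary behavior
    (0.699, "uncorrelated"),  # just below: the edge case engineers obsess over
]


def run_regression(cases):
    """Return every case where the translation diverges from the Ada output."""
    return [(x, correlate(x), want) for x, want in cases if correlate(x) != want]


failures = run_regression(GOLDEN)
print(f"{len(GOLDEN) - len(failures)}/{len(GOLDEN)} cases match the Ada original")
```

Timing behavior needs a separate harness against real or emulated hardware; value-level golden files like these only establish functional equivalence.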
Human-in-the-Loop at Every Critical Gate
This is where Fulcrum's architecture makes the Refactor Cell different from any automated code migration tool. The Review Agent doesn't just output a translated module — it surfaces a structured review package to a human SME through Fulcrum's Human-in-the-Loop gate system.
The review package includes: the original Ada package and its extracted intent document, the translated implementation in the target language, the regression test results, any behavioral discrepancies flagged during testing, and a risk assessment generated by the Review Agent indicating which elements of the translation carry the most operational uncertainty.
The human SME reviews this package and makes the approval decision. The translated code cannot proceed to integration without explicit human sign-off. Every approval is logged immutably to the Fulcrum audit trail — giving the program office a full chain of custody for every modernization decision made by the cell.
Speed: What Changes
A traditional Ada modernization effort staffs a team of senior engineers, runs them through codebase orientation for weeks, produces a modernization plan that takes months to approve, and begins actual translation work somewhere around month six of a 24-month program.
A Fulcrum Refactor Cell begins producing intent documents in hours. The Analyst Agent doesn't need codebase orientation. It reads the package headers, traces the dependency graph, and starts producing structured analysis immediately. By the end of the first week of operation, the cell has produced more institutional documentation about the legacy codebase than most programs have accumulated in their entire history.
OpenClaw Integration: Autonomous Execution
For Refactor Cell deployments that require autonomous execution capability — running the translated code against test environments, executing build pipelines, or interacting with legacy hardware in the loop — Fulcrum integrates directly with OpenClaw, the open agent execution framework.
OpenClaw agents connected to Fulcrum via MCP can be assigned execution tasks by the Refactor Cell: spin up a test environment, compile the Ada original against a known-good toolchain, run the translated Rust equivalent against the same inputs, compare outputs. The OpenClaw agent executes and returns results. The Fulcrum workspace records every action in the audit trail.
This gives the Refactor Cell a closed loop: analysis, translation, testing, and execution all coordinated through a single Fulcrum workspace, with human review gates at the boundaries that matter operationally.
Starting a Refactor Cell at IL2
A Refactor Cell requires no new ATO to begin operating at IL2. The initial phase — codebase analysis, intent documentation, dependency mapping — can run against declassified or development representations of the codebase in a free Fulcrum sandbox on AWS Bedrock AgentCore. No procurement. No contract vehicle. No ATO overhead. A PM with a GitHub account can have the cell running this week.
The IL4 and IL5 pathways through Second Front Systems extend the cell into classified environments — where the actual operational Ada code lives. The authorization timeline for that pathway is known, proven, and current.
If your program has Ada code from 1999 that nobody wants to touch — and every major defense program does — the Refactor Cell isn't a theoretical future capability. It's deployable today. The agents are standing by. The only variable is whether your program is ready to let them go to work.