AI pilots are easy. Production is the hard part.
Robert Cooke is CEO of 3forge, a software company that helps banks and trading firms build and operate real-time financial systems.
Capital markets software rarely gets rebuilt. It gets extended. A pricing engine survives multiple architecture eras because no one can pause a desk, a control, or a regulatory obligation long enough to start over. Over time, stacks become layered. A trade capture system here, a risk calc there, a surveillance workflow bolted on after the fact, and a UI on top so users can move faster without touching the core.
That layering worked until the next wave arrived. AI is not just another tool sitting beside the stack. To deliver value, it has to plug into workflows constrained by entitlements, audit trails, approvals, and operational controls. Teams can build pilots quickly, but the stall happens when they try to productionize, and every integration becomes a multi-team project spanning data access, messaging, processing, storage, UI, and governance. Small changes turn into cascades of redesign, retesting, and reapproval.
AI pilots are easy. Production is harder. Banks can experiment with models quickly, but integrating them into workflows often exposes deeper problems with fragmented systems, data access and governance controls. Increasingly, institutions are responding by rethinking the foundation itself, adopting application “engine” approaches that standardize common infrastructure so new capabilities can plug in without rebuilding the plumbing each time.
DBS has pulled $580 million in value from AI running in live business environments, a reminder that outcomes scale when models are embedded into governed workflows. Most institutions see the opposite: pilot programs pile up, and experimentation becomes the default, because production change is too expensive. At some point, the constraint is no longer talent or ideas. It's architecture. Every new product, control, or workflow adds more internal software to maintain. Over time, the burden isn't just integration; it's the cumulative weight of rebuilding similar foundations again and again.
Engines are common elsewhere, and finance is catching up
Most complex industries hit this wall earlier and found the same answer. Standardize the foundation so teams can build new capabilities without repeatedly rebuilding plumbing.
Gaming did this with engines. Commerce did it with platforms. Enterprise software packaged common building blocks into environments that make extension cheaper than reinvention. Finance is moving in that direction, not because it wants to copy other industries, but because the economics of constant rebuilding no longer work. Business needs continuous adaptation, while systems must stay live, governed, and dependable.
Why finance took longer to embrace engines
Finance moved cautiously for a practical reason. The environment is less forgiving. Systems must stay online, controls must be enforced continuously, and changes must hold up under audit and oversight. An engine that works in that setting must carry operational weight.
In financial institutions, business logic must sit close to the data, evolve continuously and remain intelligible across teams. Calculations, transformations and formatting rules are rarely confined to a single language or layer, and hard boundaries between data access, computation and presentation introduce friction and duplication. As a result, modern platforms are expected to provide a unified scripting environment where logic can be expressed once, reused safely and applied consistently across models, workflows and user interfaces.
For financial organizations, that means one approach must meet four always-on needs.
Live scripting
A clean way to express domain logic without a maze of adapters and brittle glue code.

Live UI
Role-specific workflows that can change without rebuilding the interface stack each time requirements shift.

Live data
Governed access across streaming, historical, and legacy sources so controls travel with the data.

Live workbench
Build and run close together so teams can update software while it stays operational, observable, and under control.
The goal isn’t speed for speed’s sake. It’s sustained change under constraints. Most teams are trying to deliver updates reliably in systems that can’t pause or lose guardrails. Not every application is latency-sensitive, but most require faster, safer delivery while systems remain live and governed.
Why this is necessary now
Two pressures are colliding. Demand for internal software keeps compounding as products, controls, markets, client needs and regulatory expectations expand. At the same time, AI raises the cost of fragmented architecture. Value is limited when AI lives outside core workflows. The payoff comes when it can participate in governed operations with entitlements, auditability, and consistent business rules.
There is also a new operational signal that many technology leaders are feeling. A widely shared thread about AI writing code drew more than 100 million views because it captured something engineers can already see. Software creation is accelerating, which makes governance harder. Faster code production increases the need for provenance, review, testing, and controlled deployment in regulated environments.
This is where an application engine helps. It provides a contained environment that keeps delivery inside guardrails, rather than spreading logic across ad hoc pipelines and disconnected tooling.
What changes with an engine approach
An application engine reduces software friction by standardizing non-differentiating layers that every team needs, but no team wants to rebuild repeatedly. It unifies how data is accessed and governed, how logic is expressed and deployed, how workflows reach users, and how change is observed and controlled in production.
That does not remove complexity from finance. It relocates complexity into a coherent environment where it can be managed consistently. Instead of treating change as a special project every time, change becomes a routine capability across risk, surveillance, operations and other teams.
A parallel shift toward autonomy
Some institutions are reevaluating delivery models that depend heavily on embedded services and ongoing customization. Those approaches can accelerate early deployments, but they can also create dependency when too much operational knowledge sits outside the institution. An engine-based approach supports the opposite goal. It keeps more build and change capability inside the organization.
The takeaway
Finance is reaching the limits of experimentation because architecture has become the bottleneck. Application engines offer a way to reduce software friction, keep systems live, and enable continuous change without rebuilding core systems. Pilots are easy to count. Production change is harder to deliver, and the firms that can deliver it consistently, inside governance, are the ones that pull ahead.