The Algorithm
Vendor Recovery · Cross-Industry · 12 min read · 2026-03-18

Why Large SI Implementations Fail: The Architecture Debt They Leave Behind

67%
Of post-large-SI assessments we've conducted found undocumented architectural decisions with compliance implications
Large system integrator engagements — Accenture, TCS, Infosys, and their peers — produce recoverable but architecturally debt-laden systems for structural reasons that have nothing to do with individual engineer quality. The factory model, rotation economics, change-order incentives, and knowledge transfer gaps create a predictable class of technical debt. Understanding the failure pattern is a prerequisite to assessing what to keep and what to rebuild after a large SI engagement exits.

The observation that large SI engagements produce systems requiring significant post-delivery remediation is not controversial — it is a documented pattern across federal IT (GAO high-risk list, HealthCare.gov), state government technology (California DMV, Colorado PEAK), and enterprise regulated industries. What is less documented is why the pattern is structural rather than accidental, and what it means for the technical assessment that must follow a large SI exit.

The Factory Model Economics

Large SIs — Accenture, TCS, Infosys, Wipro, Capgemini, and their peers — are optimised for margin on labour arbitrage, not for knowledge retention or compliance depth. The factory model routes work to the lowest-cost capable resource. In practice: a junior engineer in an offshore delivery centre implements HIPAA audit log requirements by reading the HIPAA Security Rule summary on the firm's internal knowledge base, not from experience building a system that passed an OCR audit. The implementation technically checks the box. It fails the audit.
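The gap between "checks the box" and "passes the audit" is concrete. A minimal sketch of the difference for access logging — the field names and values here are illustrative, not drawn from any real system or from the HIPAA text itself:

```python
import datetime
import json

# Checkbox implementation: proves that *an* event happened, nothing more.
# An auditor cannot answer "who accessed this patient's record, and why?"
checkbox_entry = {"event": "record_accessed", "ts": "2026-03-18T10:02:11Z"}

# What an access-audit question actually needs: actor, action, resource,
# outcome, and an unambiguous timestamp — queryable per patient and per user.
audit_entry = {
    "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "actor": "user:clin-4412",      # authenticated principal, not a service account
    "action": "READ",
    "resource": "patient:8831/medication_history",
    "outcome": "ALLOWED",
    "reason": "treatment",          # access justification, where policy requires one
}

print(json.dumps(audit_entry, indent=2))
```

Both entries satisfy a requirements line that reads "log record access". Only the second survives the question an auditor actually asks.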

Rotation economics compound this. Large SI contracts are typically structured around time-and-materials (T&M) or fixed-price tranches with defined deliverables. Engineer rotation — moving engineers off a programme when their utilisation falls below target — is standard practice. A senior architect who made key design decisions in sprint 3 may be off the programme by sprint 15. The decisions they made, the reasons they made them, and the compliance implications are gone with them. The replacement engineer inherits the code, not the rationale.

The Engineering Reality

Change-order economics create a systematic bias toward deferring compliance work. When an SI prices a change order to retrofit compliance controls, the margin on that work is higher than on the original delivery — compliance retrofits require senior engineers with specific knowledge, and the firm has a legitimate justification for premium rates. The economic incentive is to deliver systems that require paid remediation.

Knowledge Transfer as a Structural Gap

Large SI contracts typically include a "knowledge transfer" phase at programme conclusion. In practice, knowledge transfer phases produce documentation artefacts — architecture decision records, runbooks, data flow diagrams — created in the final weeks of the programme by engineers who have already mentally moved to their next engagement. The documentation captures what was built. It rarely captures why specific decisions were made, what alternatives were considered, or what the compliance implications of the chosen approach are.

For regulated systems, the undocumented decisions are the high-risk ones. Why was this field not encrypted at rest? The SI engineer knew the answer — it's a derived value that changes on every request, and encrypting it would have required a key management change that was out of scope. The replacement team doesn't know the answer. When the auditor asks, the answer is "we don't know" — which is an audit finding.
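A lightweight architecture decision record, written at decision time, is enough to preserve this rationale. A hypothetical sketch of the entry that was never written — the field name, ADR number, and sprint reference are all illustrative:

```
ADR-041: Field `session_risk_score` not encrypted at rest

Status: Accepted (sprint 7)
Context: The field is a derived value recomputed on every request; it is
  never persisted beyond the request cache.
Decision: Exclude it from the at-rest encryption policy. Encrypting it
  would require a key management change that is out of the current scope.
Compliance implications: Record the exclusion against the relevant
  encryption-at-rest control, so an auditor sees a rationale, not a gap.
Revisit when: The field is ever persisted, or key management scope changes.
```

An entry like this takes minutes to write, and it turns "we don't know" into an answer the replacement team can give an auditor.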

The Technical Assessment Framework

After a large SI engagement exits, the assessment that matters is not a code review — it is a compliance gap analysis structured around the regulatory obligations that the system must satisfy. The assessment framework:

  1. Map every regulatory obligation to a specific technical implementation — for each HIPAA control, each PCI requirement, each FedRAMP control: where is the implementation, and does it actually satisfy the control or just reference it?
  2. Audit undocumented architectural decisions: for each major design pattern (authentication model, data store, API gateway, logging infrastructure), reconstruct the decision rationale from code and ask whether the compliance implications were understood
  3. Test the audit trail end-to-end: generate a business transaction and verify that the complete access record is present in audit logs, correctly attributed, and queryable in the format regulators expect
  4. Inventory configuration drift: large SI deliveries routinely include hardcoded values, disabled security controls, and environment-specific configurations that were correct in UAT but wrong in production
  5. Document what was not built: identify which compliance controls were deferred as out-of-scope change orders and have not been built at all
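Steps 1 and 5 can be made concrete as a control inventory the assessment team fills in and scores: every obligation mapped to an implementation location and to verification evidence, with unmapped and unverified controls surfaced explicitly. A minimal sketch in Python — the control IDs, module paths, and evidence strings are illustrative, not from any real system:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ControlMapping:
    control_id: str                      # e.g. a HIPAA Security Rule citation
    requirement: str                     # what the control obliges the system to do
    implementation: Optional[str] = None # where in the codebase it lives, if anywhere
    evidence: list = field(default_factory=list)  # audit artefacts proving it works

    def status(self) -> str:
        if self.implementation is None:
            return "NOT BUILT"    # deferred as an out-of-scope change order (step 5)
        if not self.evidence:
            return "UNVERIFIED"   # referenced in code but never tested (step 1)
        return "VERIFIED"

# Illustrative inventory entries for a post-SI assessment.
inventory = [
    ControlMapping("164.312(b)", "Audit controls: record and examine activity",
                   implementation="services/audit/middleware.py",
                   evidence=["end-to-end audit-trail test, 2026-03-01"]),
    ControlMapping("164.312(a)(2)(iv)", "Encryption at rest",
                   implementation="infra/kms.tf"),   # present, never verified
    ControlMapping("164.312(e)(1)", "Transmission security",
                   implementation=None),             # deferred change order
]

for m in inventory:
    print(f"{m.control_id:22} {m.status()}")
```

The point of the structure is that "implemented" and "verified" are separate states: a control that exists in code but has no evidence behind it is exactly the class of finding the factory model leaves behind.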