Policy administration systems are the operational core of every insurance carrier. They maintain the master record of every policy in force — coverages, limits, exclusions, endorsements, premium history, renewal dates, and the claims history associated with each contract. Most PAS platforms at established carriers were implemented between 1975 and 2000, built on mainframe or early client-server architectures, and have accumulated decades of customisation that makes them simultaneously indispensable and unmaintainable. The actuarial and underwriting logic embedded in these systems often exists nowhere else in documented form; it is expressed in COBOL, RPG, or PL/1 code that no current employee fully understands.
Why PAS Projects Fail
Policy administration system replacements fail often enough to have generated a substantial body of academic and industry literature. The common failure modes are well documented: scope creep as legacy functionality is discovered incrementally, timeline extensions as the complexity of data migration becomes apparent, and testing failures when the new system produces subtly different results from the legacy system on edge-case policy structures that affect a small number of policies but have outsized premium or liability implications.
The fundamental technical cause of most PAS project failures is inadequate understanding of the legacy system's behaviour. Carriers that begin a replacement project with the assumption that the legacy PAS's business logic can be re-specified from product documentation discover, typically during parallel running, that the legacy system does many things the documentation does not describe. Rate calculations that differ from the filed rating manual by a rounding convention established in 1983. Billing cycles that have been patched differently for specific agents or distribution channels. Endorsement processing that works correctly in 99.7% of cases and differently in 0.3% of cases involving a specific combination of coverages that happens to represent $400 million in premium. Finding these discrepancies during parallel running rather than after cutover is the difference between a difficult project and a catastrophic one.
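A rounding-convention divergence of the kind described above is easy to sketch. The example below is hypothetical: it assumes the filed rating manual specifies conventional half-up rounding to the cent, while the legacy system applies banker's rounding (an undocumented convention of the sort a 1983-era patch might have introduced). The function names and figures are illustrative, not drawn from any real filing.

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

def filed_premium(base_rate: Decimal, factor: Decimal) -> Decimal:
    """Premium as the filed rating manual describes it: round half up to the cent."""
    return (base_rate * factor).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

def legacy_premium(base_rate: Decimal, factor: Decimal) -> Decimal:
    """Premium as the legacy system actually computes it: banker's rounding,
    a hypothetical undocumented convention that diverges on exact half-cents."""
    return (base_rate * factor).quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)

base, factor = Decimal("73.51"), Decimal("1.5")   # product is exactly 110.265
print(filed_premium(base, factor))   # 110.27
print(legacy_premium(base, factor))  # 110.26
```

The two conventions agree on almost every policy and differ by one cent only when the unrounded premium lands on an exact half-cent, which is precisely why such a discrepancy can survive decades of spot-checking and surface only under automated full-book comparison.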
The legacy system's behaviour is the specification. Product documentation, filed rates, and business requirements documents all describe the intended behaviour. The legacy system expresses the actual behaviour, including all the deviations from intent that have been introduced and maintained over decades. Any replacement that does not replicate the actual behaviour of the legacy system will produce financial and regulatory discrepancies after cutover.
The Parallel Running Approach
The most reliable approach to PAS replacement is extended parallel running: operating both the legacy and new systems simultaneously, processing all policy transactions through both systems, and comparing output at every step. Premium calculations, billing statements, endorsement processing, renewal offers, and claims transactions should all be compared between systems for a defined period — typically a minimum of two renewal cycles for the full book — before any policy population is migrated to the new system as the system of record.
Parallel running at scale requires automation: manual comparison of policy outputs is feasible for a sample but not for a full book of millions of policies. The comparison infrastructure must be built as a first-class component of the migration project, with defined tolerance thresholds (penny-level agreement on premium calculations, for example) and automated exception reporting that routes discrepancies to the appropriate product or underwriting team for investigation and resolution.
Data Migration Architecture
Policy data migration is distinct from the business logic migration problem and is equally complex. In-force policies must be migrated with their full history — endorsement history, billing history, claims history — in a format the new system can interpret correctly. Policies that have been on the legacy system for 20 or 30 years may have data structures that predate current data standards, reference codes from product generations that no longer exist, and relationships between data elements that are expressed in the legacy data model but have no direct equivalent in the new system's schema.
The data migration project requires a complete inventory of the legacy data model, a mapping from each legacy data element to the new system's data model, transformation logic for fields that require conversion rather than direct mapping, and a data quality assessment that identifies records requiring manual remediation before migration. Records with known data quality issues should be remediated in the legacy system before migration, not migrated in their degraded state and remediated in the new system — the latter approach multiplies complexity and introduces risk during the most sensitive phase of the project.
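The mapping, transformation, and quality-flagging steps above can be sketched in miniature. The legacy field names, the date format, and the issue messages below are hypothetical; a real mapping covers thousands of elements and is driven from the data-model inventory, not hand-written.

```python
# Hypothetical legacy-to-new field mapping. Each entry is (new_field, transform);
# transform is None for fields that map directly without conversion.
LEGACY_TO_NEW = {
    "POL-EFF-DT": ("effective_date", lambda v: f"{v[0:4]}-{v[4:6]}-{v[6:8]}"),  # YYYYMMDD -> ISO 8601
    "COV-CD": ("coverage_code", None),  # direct mapping
}

def migrate_record(legacy_record: dict) -> tuple[dict, list[str]]:
    """Map one legacy record into the new schema.

    Returns (new_record, quality_issues). Per the approach in the text, records
    with issues should be remediated in the legacy system and re-extracted, not
    migrated in a degraded state.
    """
    new_record, issues = {}, []
    for legacy_field, (new_field, transform) in LEGACY_TO_NEW.items():
        value = legacy_record.get(legacy_field)
        if value is None or value == "":
            issues.append(f"missing {legacy_field}")
            continue
        new_record[new_field] = transform(value) if transform else value
    return new_record, issues

record, issues = migrate_record({"POL-EFF-DT": "19991231", "COV-CD": "HO3"})
print(record)  # {'effective_date': '1999-12-31', 'coverage_code': 'HO3'}
print(issues)  # []
```

Keeping the mapping as data rather than code makes it reviewable by the product teams who own each field, and makes the quality-assessment pass a straightforward fold over the same table.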
Regulatory and Actuarial Considerations
Insurance carriers are regulated at the state level in the US, and state insurance departments expect continuity of service and regulatory compliance throughout a PAS replacement project. Rate filings must produce identical premium calculations in both the legacy and new systems; any discrepancy that could affect premium accuracy must be disclosed to the relevant department. The actuarial function must sign off on the rating engine in the new system before any business is written on it. Market conduct examination readiness must be maintained throughout the transition, meaning that all required records — applications, declarations pages, endorsements, cancellation notices — must be producible from the active system at any point during the migration.
The engineering behind this article is available as a service.
We have done this work — not advised on it, not reviewed documentation about it. If the problem in this article is your problem, the first call is with a senior engineer who has solved it.