The Algorithm Labs

Research at the edge of
regulated AI.

Foundational research into the safety, governance, and failure modes of AI systems operating in regulated environments. Not proofs of concept: problems we have encountered in production, studied, and published.

What Labs Is

The Algorithm Labs is the research function of The Algorithm. Every platform we build — ALICE, SentienGuard, Regure, clinIQ — generates edge cases that become research problems. Labs documents, studies, and publishes those problems.

We publish on SSRN. We share findings with our clients. We present at relevant conferences.

This is not marketing. It is engineering honesty about the hardest problems in the spaces where we work.

Research Areas

Three active
research programs.

Every area originated in a production problem. The research informs architecture. The architecture informs the next research question.

AI Governance in Autonomous Systems

When AI agents make decisions autonomously — in clinical workflows, compliance pipelines, financial processing — the question of accountability becomes structural. Who is responsible when a HIPAA-compliant AI misclassifies a patient record? When an autonomous trading system breaches an FCA rule between update cycles? Traditional governance frameworks assume a human decision-maker at every step. We study what governance looks like when the decision-maker is a model, and what audit trail requirements must be built into the architecture from the start.
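One concrete form an architecture-level audit trail can take is an append-only log of structured decision records, written at the moment the model acts rather than reconstructed afterward. The sketch below is illustrative, not a published schema: the field names, the `HIPAA 164.312(b)` policy reference, and the hashing choice are assumptions, and the point is only that the record captures who decided (model version), under what rule, and on what input, without the input itself (e.g. PHI) ever entering the log.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json


@dataclass(frozen=True)
class DecisionRecord:
    """One audit-trail entry for an autonomous model decision.

    Field names are illustrative, not a published schema.
    """
    model_id: str       # which model version made the call
    policy_ref: str     # the regulation or rule the decision is evaluated against
    input_digest: str   # hash of the input, so raw PHI never enters the log
    decision: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def record_decision(model_id: str, policy_ref: str,
                    raw_input: str, decision: str) -> str:
    """Serialize one decision as a JSON line for an append-only audit log."""
    digest = hashlib.sha256(raw_input.encode()).hexdigest()
    rec = DecisionRecord(model_id, policy_ref, digest, decision)
    return json.dumps(asdict(rec))


# Hypothetical example: a clinical triage model routing a record.
line = record_decision("triage-v2.3", "HIPAA 164.312(b)",
                       "patient-record-123", "route:urgent")
```

A usage detail worth noting: because the record stores only a digest, an auditor who holds the original input can verify it matches the logged decision, but the log alone discloses nothing about the patient.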

Status: Active
Published: SSRN Working Paper series

Healthcare Data Sovereignty

The assumption that data residency equals data sovereignty is breaking down. PHI processed by a US-based cloud provider for a UK NHS trust, governed by UK GDPR, processed under HIPAA, stored in an EU data center — the jurisdictional layers are not theoretical. They are in production. We study the technical and legal intersections of multi-jurisdiction PHI handling, including cross-border AI training data governance and the implications of AI Act Article 9 for regulated health AI.

Status: Active
Applied in clinIQ and Vizier platform architecture

Stochastic Logic Drift in AI Agents

Large language models and agentic systems do not degrade uniformly. They drift. A model that passes compliance validation in testing may produce subtly non-compliant outputs in production — not because the model was updated, but because the distribution of real-world inputs diverges from the test distribution over time. We call this stochastic logic drift. We are studying detection methods, alerting thresholds, and architectural patterns that make drift observable before it produces a regulatory incident.
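The core observable here is the gap between the test-time input distribution and the live one. A standard way to quantify that gap (not necessarily the method Labs uses; this is a minimal sketch under that assumption) is the population stability index, binned against the validation baseline, with an alerting threshold checked on each monitoring window:

```python
import math
from collections import Counter


def psi(baseline, live, bins=10):
    """Population Stability Index between two samples of a scalar feature.

    Bin edges are derived from the baseline sample; a small floor on the
    per-bin proportions avoids log(0) for empty bins.
    """
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0

    def bucket(x):
        # Clamp so live values outside the baseline range land in edge bins.
        return min(bins - 1, max(0, int((x - lo) / width)))

    b_counts = Counter(bucket(x) for x in baseline)
    l_counts = Counter(bucket(x) for x in live)

    score = 0.0
    for i in range(bins):
        p = max(b_counts[i] / len(baseline), 1e-6)
        q = max(l_counts[i] / len(live), 1e-6)
        score += (q - p) * math.log(q / p)
    return score


# Stand-in data: a feature as seen during compliance validation,
# and the same feature in production after its distribution shifted.
validation_inputs = [0.1 * i for i in range(100)]
production_inputs = [0.1 * i + 4.0 for i in range(100)]

DRIFT_THRESHOLD = 0.2  # common rule of thumb: PSI > 0.2 is material drift
drifted = psi(validation_inputs, production_inputs) > DRIFT_THRESHOLD
```

The design point matches the paragraph above: nothing about the model changed between the two samples, yet the monitor fires, because the check is on the inputs the model is being asked to handle, not on the model weights.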

Status: Active
Integrated into ALICE and SentienGuard monitoring architecture

Research Into Production

Platforms built on
Labs research.

The findings do not stay in papers. They become architecture decisions, enforcement gates, and monitoring patterns inside the platforms we operate.

Publications

What we've put
our name on.

Research authored by Piyoosh Rai, founder. Published on SSRN. Open access.

Autonomous Compliance Systems in Regulated AI Pipelines
Piyoosh Rai · SSRN Working Paper · 2024
View on SSRN →
Healthcare Data Sovereignty Under Multi-Jurisdiction AI Governance
Piyoosh Rai · SSRN Working Paper · 2024
View on SSRN →
Stochastic Logic Drift: Detection and Architectural Mitigation in Agentic AI Systems
Piyoosh Rai · SSRN Working Paper · 2025
View on SSRN →
Collaborate

Working on adjacent
problems?

If you are working on adjacent problems in healthcare AI, autonomous compliance systems, or multi-jurisdiction data governance, we are interested in talking.

Get in Touch
Engage Us