Redshift-to-Snowflake migration has become the dominant data warehouse migration pattern in 2024. The drivers are consistent: Redshift's concurrency scaling model creates unpredictable cost at high query volumes, Snowflake's separation of compute and storage enables more flexible cost management, and Snowflake's multi-cluster warehouse architecture handles mixed workload concurrency better than Redshift's concurrency scaling slots. For regulated industries, Snowflake's Business Critical tier provides HIPAA BAA coverage, Tri-Secret Secure customer-managed encryption, and native column-level security capabilities that many Redshift deployments implement through workarounds. The migration is technically achievable. The failure mode is almost always in the areas that the SQL dialect compatibility tools cannot automate.
Access Control Model Divergence
Redshift's access control model is derived from PostgreSQL: users, groups, schemas, and object-level privileges. Snowflake's access control model is fundamentally different: roles, role hierarchies, and the principle that all access flows through role grants rather than direct user grants. Snowflake's RBAC is more expressive and more auditable than Redshift's group-based model, but the migration of access control policies is not a mechanical translation.
A Redshift deployment with 50 users, 20 groups, and hundreds of schema and table privileges must be inventoried, rationalised, and redesigned as a Snowflake role hierarchy before migration. The inventory typically reveals years of accumulated, stale permissions that the migration is an opportunity to remediate. For regulated organisations with HIPAA or SOX access control requirements, the migration access control design phase is a compliance project, not just a technical task. The Snowflake role hierarchy that emerges from the migration should reflect the principle of least privilege more precisely than the Redshift permission model it replaces.
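A minimal sketch of what a redesigned hierarchy looks like in Snowflake DDL. The role, database, schema, and user names are illustrative, not from the source deployment:

```sql
-- Hypothetical functional roles replacing two Redshift groups.
USE ROLE SECURITYADMIN;

CREATE ROLE IF NOT EXISTS analyst_read;
CREATE ROLE IF NOT EXISTS etl_write;

-- Object privileges go to roles, never directly to users.
GRANT USAGE ON DATABASE prod_db TO ROLE analyst_read;
GRANT USAGE ON SCHEMA prod_db.reporting TO ROLE analyst_read;
GRANT SELECT ON ALL TABLES IN SCHEMA prod_db.reporting TO ROLE analyst_read;
-- FUTURE grants cover tables created after migration, a common gap.
GRANT SELECT ON FUTURE TABLES IN SCHEMA prod_db.reporting TO ROLE analyst_read;

-- Roll functional roles up to SYSADMIN so the hierarchy stays auditable.
GRANT ROLE analyst_read TO ROLE SYSADMIN;
GRANT ROLE etl_write TO ROLE SYSADMIN;

-- Users receive roles; direct user grants would recreate the Redshift sprawl.
GRANT ROLE analyst_read TO USER jdoe;
```

The FUTURE grant is the detail most often missed when translating group privileges by hand: without it, every post-migration table silently falls outside the access model.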
Network Architecture and Private Connectivity
Redshift clusters live inside a VPC with security group controls. Snowflake's default connection model routes through Snowflake's public service endpoint, meaning traffic leaves the organisation's cloud VPC to reach Snowflake's service. For regulated workloads with network isolation requirements under HIPAA or FedRAMP, this default model may not be acceptable. Snowflake's private connectivity support (AWS PrivateLink, Azure Private Link) provides a VPC-native connection path that keeps traffic within the cloud provider's network. Private connectivity configuration must be completed before any PHI or regulated data is migrated to Snowflake.
IP allowlisting in Snowflake's network policy feature restricts which IP ranges can connect to a Snowflake account. For regulated environments, configuring network policies that mirror the source IP restrictions on the Redshift cluster is a prerequisite to the migration, not an afterthought. The Snowflake network policy also applies to BI tool connections, dbt Cloud runs, and any other external services that need to query the warehouse, all of which have different source IPs from the application servers and must be explicitly permitted.
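A network policy that mirrors the source cluster's security group might look like the following. The policy name and CIDR ranges are examples, not values from any real deployment:

```sql
-- Illustrative network policy mirroring the Redshift security group's
-- allowed CIDR ranges, plus the external services that must also connect.
CREATE NETWORK POLICY corp_only
  ALLOWED_IP_LIST = ('203.0.113.0/24',    -- office egress range (example)
                     '198.51.100.7')      -- dbt Cloud / BI tool static IP (example)
  COMMENT = 'Mirror of source Redshift security group, pre-cutover';

-- Applying the policy account-wide requires SECURITYADMIN or equivalent.
ALTER ACCOUNT SET NETWORK_POLICY = corp_only;
```

Account-level application is deliberate here: a policy attached only to individual users leaves service accounts and new users unrestricted by default.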
SQL Dialect Compatibility and the Long Tail
Snowflake's SQL dialect is broadly compatible with Redshift. Most SELECT, INSERT, CREATE TABLE, and common window function syntax translates without modification. The compatibility tools handle the majority of automated translation. The long tail of incompatibility is where migration projects stall: Redshift's DISTKEY and SORTKEY clauses have no direct equivalent in Snowflake, which relies on automatic micro-partitioning and optional clustering keys; function-level differences such as LISTAGG behaviour need case-by-case review; and Redshift's COPY command syntax for S3 data loading is entirely replaced by Snowflake's COPY INTO with different stage object syntax. dbt models that use Redshift-specific materialisation strategies or Redshift-adapter macros require review and modification.
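The COPY rewrite is a good illustration of why the long tail resists automation: the statement is not translated so much as restructured around a stage object. A sketch, with bucket, integration, and table names invented for the example:

```sql
-- Redshift: COPY reads S3 directly, authorised by an IAM role ARN.
COPY sales FROM 's3://example-bucket/sales/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
FORMAT AS PARQUET;

-- Snowflake: the same load requires a stage object, and the IAM role
-- reference becomes a separately-created storage integration.
CREATE STAGE sales_stage
  URL = 's3://example-bucket/sales/'
  STORAGE_INTEGRATION = s3_int
  FILE_FORMAT = (TYPE = PARQUET);

COPY INTO sales
  FROM @sales_stage
  MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;
```

The storage integration itself is an account-level object with its own IAM trust setup, which is why a one-line Redshift COPY can fan out into several change-managed Snowflake objects.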
For regulated pipelines where SQL transformation logic is a compliance control, the migration cannot proceed until every modified SQL object has been reviewed, tested, and approved through the change management process. The automated conversion tool output is a starting point for human review, not a deployable artefact. Scheduling adequate change management review time for the full inventory of SQL objects is consistently underestimated in migration project plans.
Downstream Consumer Migration
The data warehouse is not an island. Tableau, Power BI, Looker, and other BI tools hold connection strings, custom SQL queries, and data source definitions that reference Redshift clusters by hostname. Application code holds JDBC connection strings and Redshift-specific driver configurations. dbt profiles.yml files reference the Redshift connection. ETL pipelines in Fivetran, Airbyte, or custom Glue jobs are configured to write to Redshift. Migrating the warehouse without a complete inventory of downstream consumers and a migration plan for each consumer is the most common cause of extended parallel operation periods and escalating migration costs.
The migration plan must schedule consumer cutover in dependency order: consumers that produce data to the warehouse migrate before consumers that read from it. The parallel operation period has a cost: double the storage cost, double the ETL job execution, and the engineering overhead of maintaining two production pipelines. Setting a hard deadline for the parallel operation period, with escalation if consumer teams have not completed cutover by the deadline, is the project governance mechanism that prevents indefinite parallel operation.
Compliance Validation Before Cutover
For regulated organisations, cutover to Snowflake is not complete when the SQL queries run correctly. It is complete when the compliance controls in Snowflake have been validated to the same standard as the Redshift controls they replace. The HIPAA BAA with Snowflake must be signed and in place before PHI is loaded. The column-level masking policies must be tested with users at each permission level. Private Link connectivity must be verified from all source IP ranges. Audit log delivery to the SIEM must be tested and confirmed. Network policies must be verified to reject unauthorised IP ranges. Each of these validations should be documented as a cutover checklist item with sign-off evidence retained for the compliance audit record.
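The masking policy validation above can be made concrete. A minimal sketch, with policy, table, and role names assumed for illustration:

```sql
-- Illustrative column-level masking policy for a PHI column.
CREATE MASKING POLICY ssn_mask AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() IN ('PHI_FULL_ACCESS') THEN val
    ELSE 'XXX-XX-' || RIGHT(val, 4)    -- masked form for all other roles
  END;

ALTER TABLE patients MODIFY COLUMN ssn
  SET MASKING POLICY ssn_mask;

-- Cutover validation: repeat as a role at each permission level and
-- retain the output as sign-off evidence for the audit record.
USE ROLE analyst_read;                 -- hypothetical non-privileged role
SELECT ssn FROM patients LIMIT 1;      -- expect the masked value here
```

Running the same query under each role in the hierarchy, and filing the results against the cutover checklist item, is what turns "masking is configured" into "masking is validated".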