Blue-green deployment
The dbt-materialize adapter ships with helper macros to automate blue/green
deployments. We recommend using the blue/green pattern any time you need to
deploy changes to the definition of objects in Materialize in production
environments and can’t tolerate downtime.
For development environments with no downtime considerations, you might prefer to use the slim deployment pattern instead for quicker iteration and reduced CI costs.
RBAC permissions requirements
When using blue/green deployments with role-based access control (RBAC), ensure that the role executing the deployment operations has sufficient privileges on the target objects:
- The role must have ownership privileges on the schemas being deployed
- The role must have ownership privileges on the clusters being deployed
These permissions are required because the blue/green deployment process needs to create, modify, and swap resources during the deployment lifecycle.
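For example, assuming a deployment role named `deploy_role` and a production cluster and schema named `production` and `public` (all hypothetical names), ownership could be transferred with something like the following minimal sketch; adapt it to your own RBAC setup:

```sql
-- Hypothetical role and object names. Run as a role with sufficient
-- privileges to transfer ownership (e.g. the current owner or a superuser).
ALTER SCHEMA public OWNER TO deploy_role;
ALTER CLUSTER production OWNER TO deploy_role;
```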
Configuration and initialization
In a blue/green deployment, you first deploy your code changes to a deployment environment (“green”) that is a clone of your production environment (“blue”), in order to validate the changes without causing unavailability. These environments are later swapped transparently.
- In `dbt_project.yml`, use the `deployment` variable to specify the cluster(s) and schema(s) that contain the changes you want to deploy. The dedicated schemas and clusters for sinks shouldn't be included in your deployment configuration.

  ```yaml
  vars:
    deployment:
      default:
        clusters:
          # To specify multiple clusters, use [<cluster1_name>, <cluster2_name>].
          - <cluster_name>
        schemas:
          # To specify multiple schemas, use [<schema1_name>, <schema2_name>].
          - <schema_name>
  ```
- Use the `run-operation` command to invoke the `deploy_init` macro:

  ```bash
  dbt run-operation deploy_init
  ```

  This macro spins up a new cluster named `<cluster_name>_dbt_deploy` and a new schema named `<schema_name>_dbt_deploy` using the same configuration as the current environment to swap with (including privileges). For a rough sketch of the objects this creates, see the example after this list.
- Run the dbt project containing the code changes against the new deployment environment:

  ```bash
  dbt run --vars 'deploy: True'
  ```

  The `deploy: True` variable instructs the adapter to append `_dbt_deploy` to the original schema or cluster specified for each model scoped for deployment, which transparently handles running that subset of models against the deployment environment. You must exclude sources and sinks when running the dbt project.

  If you encounter an error like `String 'deploy:' is not valid YAML`, you might need to use an alternative syntax, since different terminals handle quotes differently:

  ```bash
  dbt run --vars "{\"deploy\": true}"
  ```

  This alternative syntax is compatible with Windows terminals, PowerShell, and the PyCharm terminal.
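For illustration, here is a minimal sketch of the kind of DDL `deploy_init` issues behind the scenes, assuming a production cluster named `production` and a schema named `public` (hypothetical names; the macro derives the actual names, sizing, and privileges from your `deployment` configuration and the existing production objects):

```sql
-- Hypothetical names and size; deploy_init mirrors the configuration
-- (including privileges) of the existing production cluster and schema.
CREATE CLUSTER production_dbt_deploy (SIZE = '100cc');
CREATE SCHEMA public_dbt_deploy;
```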
Validation
We strongly recommend validating the results of the deployed changes on the deployment environment to ensure it's safe to cut over.
- After deploying the changes, the objects in the deployment cluster need to fully hydrate before you can safely cut over. Use the `run-operation` command to invoke the `deploy_await` macro, which periodically polls the cluster readiness status and waits for all objects to meet a minimum lag threshold before returning successfully:

  ```bash
  dbt run-operation deploy_await #--args '{poll_interval: 30, lag_threshold: "5s"}'
  ```

  By default, `deploy_await` polls for cluster readiness every 15 seconds, and waits for all objects in the deployment environment to have a lag of less than 1 second before returning successfully. To override the default values, you can pass the following arguments to the macro:

  | Argument | Default | Description |
  | -------- | ------- | ----------- |
  | `poll_interval` | `15s` | The time (in seconds) between each cluster readiness check. |
  | `lag_threshold` | `1s` | The maximum lag threshold, which determines when all objects in the environment are considered hydrated and it's safe to perform the cutover step. We do not recommend changing the default value, unless prompted by the Materialize team. |
- Once `deploy_await` returns successfully, you can manually run tests against the new deployment environment to validate the results (for example, the spot-check sketched after this list).
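As a minimal example of such a manual check, you can query the `_dbt_deploy`-suffixed objects directly. Here, `production_dbt_deploy`, `public_dbt_deploy`, and `my_model` are hypothetical names; substitute your own:

```sql
-- Point the session at the deployment cluster, then spot-check a model.
SET cluster = production_dbt_deploy;
SELECT count(*) FROM public_dbt_deploy.my_model;
```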
Cutover and cleanup
- Once `deploy_await` returns successfully and you have validated the results of the deployed changes on the deployment environment, it is safe to push the changes to your production environment.

  Use the `run-operation` command to invoke the `deploy_promote` macro, which (atomically) swaps the environments. To perform a dry run of the swap, and validate the sequence of commands that dbt will execute, you can pass the `dry_run: True` argument to the macro.

  ```bash
  # Do a dry run to validate the sequence of commands to execute
  dbt run-operation deploy_promote --args '{dry_run: true}'

  # Promote the deployment environment to production
  dbt run-operation deploy_promote #--args '{wait: true, poll_interval: 30, lag_threshold: "5s"}'
  ```

  By default, `deploy_promote` does not wait for all objects to be hydrated; we recommend carefully validating the results of the deployed changes in the deployment environment before running this operation, or setting `--args '{wait: true}'`. To override the default values, you can pass the following arguments to the macro:

  | Argument | Default | Description |
  | -------- | ------- | ----------- |
  | `dry_run` | `false` | Whether to print out the sequence of commands that dbt will execute without actually promoting the deployment, for validation. |
  | `wait` | `false` | Whether to wait for all objects in the deployment environment to fully hydrate before promoting the deployment. We recommend setting this argument to `true` if you skip the validation step. |
  | `poll_interval` | `15s` | When `wait` is set to `true`, the time (in seconds) between each cluster readiness check. |
  | `lag_threshold` | `1s` | When `wait` is set to `true`, the maximum lag threshold, which determines when all objects in the environment are considered hydrated and it's safe to perform the cutover step. |

  NOTE: The `deploy_promote` operation might fail if objects are concurrently modified by a different session. If this occurs, re-run the operation.

  This macro ensures all deployment targets, including schemas and clusters, are deployed together as a single atomic operation, and that any sinks that depend on changed objects are automatically cut over to the new definition of their upstream dependencies. If any part of the deployment fails, the entire deployment is rolled back to guarantee consistency and prevent partial updates. A rough sketch of the swap appears after this list.
- Use the `run-operation` command to invoke the `deploy_cleanup` macro, which (cascade) drops the `_dbt_deploy`-suffixed cluster(s) and schema(s):

  ```bash
  dbt run-operation deploy_cleanup
  ```

  NOTE: Any active `SUBSCRIBE` commands attached to the swapped cluster(s) will break. On retry, the client will automatically connect to the newly deployed cluster.
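For intuition, here is a rough sketch of what the promote-then-cleanup sequence amounts to in SQL, using the same hypothetical `production` cluster and `public` schema as above. This is illustrative only; the macros issue the actual commands, and you can pass `dry_run: true` to `deploy_promote` to see the exact sequence for your project:

```sql
-- deploy_promote: swap the deployment environment into production.
-- (Illustrative; the macro performs all swaps as a single atomic operation.)
ALTER CLUSTER production SWAP WITH production_dbt_deploy;
ALTER SCHEMA public SWAP WITH public_dbt_deploy;

-- deploy_cleanup: drop the _dbt_deploy-suffixed objects, which now hold
-- the old production environment after the swap.
DROP CLUSTER production_dbt_deploy CASCADE;
DROP SCHEMA public_dbt_deploy CASCADE;
```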