# Upgrade on GCP
To upgrade your Materialize deployment, upgrade the Materialize operator first and then the Materialize instances. The following tutorial upgrades a Materialize deployment running on Google Kubernetes Engine (GKE) in GCP.

The tutorial assumes you have installed Materialize on GCP Google Kubernetes Engine (GKE) using the instructions in Install on GCP (either from the examples/simple directory or the root).
## Version compatibility

The following table shows version compatibility between the operator and the applications:
| Materialize Operator | orchestratord version | environmentd version | Release date | Notes |
|---|---|---|---|---|
| v25.1.10 | v0.142.1 | v0.130.11 | 2025-04-24 | |
| v25.1.9 | v0.141.0 | v0.130.10 | 2025-04-24 | |
| v25.1.8 | v0.138.0 | v0.130.9 | 2025-04-24 | |
| v25.1.7 | v0.138.0 | v0.130.8 | 2025-04-08 | |
| v25.1.6 | v0.130.8 | v0.130.8 | 2025-03-26 | This release uses an incorrect version of orchestratord as its default (v0.130.8 instead of v0.138.0). This has been fixed in v25.1.7. |
| v25.1.5 | v0.138.0 | v0.130.7 | 2025-03-25 | |
| v25.1.4 | v0.138.0 | v0.130.7 | 2025-03-25 | |
| v25.1.2 | v0.130.4 | v0.130.4 | 2025-03-11 | |
| Terraform version | Notable changes |
|---|---|
| v0.4.1 | |
| v0.4.0 | |
| v0.3.4 | |
| v0.3.1 | |
| v0.3.0 | |
| v0.2.0 | |
| v0.1.7 | |
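As a quick reference, the operator-to-environmentd mapping in the table above can be expressed as a small lookup function. This is a minimal sketch covering only the v25.1.x rows listed here; it is not part of any Materialize tooling:

```shell
#!/bin/sh
# Look up the default environmentd version for a given operator release,
# mirroring the compatibility table above (v25.1.x series only).
operator_to_environmentd() {
  case "$1" in
    v25.1.10)        echo "v0.130.11" ;;
    v25.1.9)         echo "v0.130.10" ;;
    v25.1.8)         echo "v0.130.9" ;;
    v25.1.7|v25.1.6) echo "v0.130.8" ;;
    v25.1.5|v25.1.4) echo "v0.130.7" ;;
    v25.1.2)         echo "v0.130.4" ;;
    *) echo "unknown operator version: $1" >&2; return 1 ;;
  esac
}

operator_to_environmentd v25.1.10   # prints v0.130.11
```

Checking the pairing up front helps you pick a consistent `operator_version`/`environmentd_version` combination before editing any tfvars files.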
## Prerequisites

The following procedure performs an in-place upgrade, which incurs downtime. To perform a rolling upgrade (where both the old and new Materialize instances are running before the old instances are removed), set `inPlaceRollout` to `false`. When performing a rolling upgrade, ensure you have enough resources to support having both the old and new Materialize instances running.
### Google Cloud project

You need a GCP project for which you have a role (such as `roles/resourcemanager.projectIamAdmin` or `roles/owner`) that includes permissions to manage access to the project.
### gcloud CLI

If you do not have the gcloud CLI, install it. For details, see the Install the gcloud CLI documentation.
### Google service account

The tutorial assumes the use of a service account. If you do not have a service account to use for this tutorial, create a service account. For details, see Create service accounts.
### Terraform

If you do not have Terraform installed, install Terraform.
### kubectl and plugins

Using gcloud to install kubectl will also install the needed plugins. Otherwise, you will need to manually install the gke-gcloud-auth-plugin for kubectl.

- If you do not have `kubectl`, install it. See Install kubectl and configure cluster access for details. You will configure `kubectl` to interact with your GKE cluster later in the tutorial.
- If you do not have the `gke-gcloud-auth-plugin` for `kubectl`, install the `gke-gcloud-auth-plugin`. For details, see Install the gke-gcloud-auth-plugin.
### Helm 3.2.0+

If you do not have Helm version 3.2.0+ installed, install it. For details, see the Helm documentation.
### jq (Optional)

jq is used to parse the GKE cluster name and region from the Terraform outputs. Alternatively, you can manually specify the name and region. If you want to use jq and do not have it installed, install it.
## Procedure

### A. Set up GCP service account and authenticate
- Open a terminal window.

- Initialize the gcloud CLI (`gcloud init`) to specify the GCP project you want to use. For details, see the Initializing the gcloud CLI documentation.

  💡 Tip: You do not need to configure a default Compute Region and Zone, as you will specify the region.

- To the service account that will be used to perform the upgrade, grant the following IAM roles (if the account does not have them already):

  - `roles/editor`
  - `roles/iam.serviceAccountAdmin`
  - `roles/storage.admin`

- Enter your GCP project ID.

  ```bash
  read -s PROJECT_ID
  ```

- Find your service account email for your GCP project.

  ```bash
  gcloud iam service-accounts list --project $PROJECT_ID
  ```

- Enter your service account email.

  ```bash
  read -s SERVICE_ACCOUNT
  ```

- Grant the service account the necessary IAM roles.

  ```bash
  gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member="serviceAccount:$SERVICE_ACCOUNT" \
    --role="roles/editor"

  gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member="serviceAccount:$SERVICE_ACCOUNT" \
    --role="roles/iam.serviceAccountAdmin"

  gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member="serviceAccount:$SERVICE_ACCOUNT" \
    --role="roles/storage.admin"
  ```

- For the service account, authenticate to allow Terraform to interact with your GCP project. For details, see the Terraform: Google Cloud Provider Configuration reference.

  For example, if using User Application Default Credentials, you can run the following command:

  ```bash
  gcloud auth application-default login
  ```

  💡 Tip: If using `GOOGLE_APPLICATION_CREDENTIALS`, use the absolute path to your key file.
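The three `add-iam-policy-binding` calls above can also be generated in a loop and reviewed before execution. Below is a dry-run sketch that only prints the commands; the `PROJECT_ID` and `SERVICE_ACCOUNT` defaults are illustrative placeholders, not values from this tutorial:

```shell
#!/bin/sh
# Build the grant commands in a loop for review before running them.
# PROJECT_ID and SERVICE_ACCOUNT are assumed to be set as in the steps
# above; the defaults below are placeholders for illustration only.
PROJECT_ID="${PROJECT_ID:-my-project}"
SERVICE_ACCOUNT="${SERVICE_ACCOUNT:-upgrade-sa@my-project.iam.gserviceaccount.com}"

GRANT_CMDS=""
for ROLE in roles/editor roles/iam.serviceAccountAdmin roles/storage.admin; do
  GRANT_CMDS="${GRANT_CMDS}gcloud projects add-iam-policy-binding $PROJECT_ID --member=serviceAccount:$SERVICE_ACCOUNT --role=$ROLE
"
done

printf '%s' "$GRANT_CMDS"   # review the commands, then pipe to sh to run them
```

Printing before executing is a small safeguard against granting broad roles (such as `roles/editor`) to the wrong account or project.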
### B. Upgrade Materialize operator and instances
- Go to the `examples/simple` folder in the Materialize Terraform repo directory.

  ```bash
  cd terraform-google-materialize/examples/simple
  ```

- Optional. You may need to update your fork of the Terraform module to upgrade.

  💡 Tip: If upgrading a deployment that was set up using an earlier version of the Terraform modules, additional considerations may apply when applying the updated Terraform modules to your existing deployment. See Materialize on GCP releases for notable changes.

- Configure `kubectl` to connect to your GKE cluster, specifying:

  - `<cluster name>`: Your cluster name has the form `<your prefix>-gke`; e.g., `mz-simple-gke`.
  - `<region>`: By default, the example Terraform module uses the `us-central1` region.
  - `<project>`: Your GCP project ID.

  ```bash
  gcloud container clusters get-credentials <cluster-name> \
    --region <region> \
    --project <project>
  ```

  Alternatively, you can use the following command to get the cluster name and region from the Terraform output and the project ID from the environment variable set earlier.

  ```bash
  gcloud container clusters get-credentials $(terraform output -json gke_cluster | jq -r .name) \
    --region $(terraform output -json gke_cluster | jq -r .location) \
    --project $PROJECT_ID
  ```

  To verify that you have configured `kubectl` correctly, run the following command:

  ```bash
  kubectl cluster-info
  ```

  For help with `kubectl` commands, see the kubectl Quick reference.

- Back up your `terraform.tfvars` file.

  ```bash
  cp terraform.tfvars original_terraform.tfvars
  ```

- Update `terraform.tfvars` to set the Materialize operator version:

  | Variable | Description |
  |---|---|
  | `operator_version` | New Materialize operator version. |

  - If the variable does not exist, add it and set it to the new version.
  - If the variable exists, update its value to the new version.

  ```hcl
  ## ... Existing content not shown for brevity
  ## ... Leave the existing variables unchanged

  operator_version = "v25.1.13" # Set to the desired operator version
  ```
- Initialize the Terraform directory.

  ```bash
  terraform init
  ```

- Run `terraform plan` with both the `terraform.tfvars` and your `mz_instances.tfvars` files and review the changes to be made.

  ```bash
  terraform plan -var-file=terraform.tfvars -var-file=mz_instances.tfvars
  ```

  The plan should show the changes to be made for the `materialize_operator`.

- If you are satisfied with the changes, apply them.

  ```bash
  terraform apply -var-file=terraform.tfvars -var-file=mz_instances.tfvars
  ```

  To approve the changes and apply, enter `yes`. Upon successful completion, you should see output with a summary of changes.

- Verify that the operator is running:

  ```bash
  kubectl -n materialize get all
  ```

  Verify the operator upgrade by checking its events:

  ```bash
  MZ_OPERATOR=$(kubectl -n materialize get pods --no-headers | grep operator | awk '{print $1}')
  kubectl -n materialize describe pod/$MZ_OPERATOR
  ```

  - The Containers section should show the `--helm-chart-version` argument set to the new version.
  - The Events section should list that the new version of orchestratord has been pulled.
- Back up your `mz_instances.tfvars` file.

  ```bash
  cp mz_instances.tfvars original_mz_instances.tfvars
  ```

- Update `mz_instances.tfvars` to specify the upgrade variables for each instance:

  | Variable | Description |
  |---|---|
  | `create_database` | Set to `false`. |
  | `environmentd_version` | New Materialize instance version. |
  | `request_rollout` or `force_rollout` | A new UUID string; can be generated with `uuidgen`. `request_rollout` triggers a rollout only if changes exist. `force_rollout` triggers a rollout even if no changes exist. |
  | `inPlaceRollout` | Set to `true` to perform an in-place upgrade. Set to `false` to perform a rolling upgrade. For rolling upgrades, ensure you have enough resources to support having both the old and new Materialize instances running during the upgrade. |

  For example, the following instance specifies:

  - a `create_database` of `false`,
  - an `inPlaceRollout` of `true`,
  - an `environmentd_version` of `"v0.130.14"`, and
  - a `request_rollout` of `"12345678-1305-1304-1304-123456781304"`.

  ```hcl
  materialize_instances = [
    {
      name                 = "demo"
      namespace            = "materialize-environment"
      database_name        = "demo_db"
      cpu_request          = "1"
      memory_request       = "2Gi"
      memory_limit         = "2Gi"
      create_database      = false
      environmentd_version = "v0.130.14"
      inPlaceRollout       = true
      request_rollout      = "12345678-1305-1304-1304-123456781304"
    }
  ]
  ```
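Because `request_rollout` (or `force_rollout`) must be a fresh UUID for each upgrade, it helps to generate the value rather than type it. A minimal sketch using `uuidgen`, with a Linux `/proc` fallback, that prints a line you can paste into `mz_instances.tfvars`:

```shell
#!/bin/sh
# Generate a fresh UUID for request_rollout. uuidgen ships with macOS and
# most Linux distros; /proc/sys/kernel/random/uuid is a Linux fallback.
if command -v uuidgen >/dev/null 2>&1; then
  ROLLOUT_ID=$(uuidgen)
else
  ROLLOUT_ID=$(cat /proc/sys/kernel/random/uuid)
fi
# Normalize to lowercase for consistency with the example above.
ROLLOUT_ID=$(printf '%s' "$ROLLOUT_ID" | tr '[:upper:]' '[:lower:]')
echo "request_rollout = \"$ROLLOUT_ID\""
```

Reusing an old UUID means `request_rollout` may not trigger a rollout, so generating a new one each time avoids a silently skipped upgrade.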
- Run `terraform plan` with both the `terraform.tfvars` and your `mz_instances.tfvars` files and review the changes to be made.

  ```bash
  terraform plan -var-file=terraform.tfvars -var-file=mz_instances.tfvars
  ```

  The plan should show the changes to be made for the Materialize instance.

- If you are satisfied with the changes, apply them.

  ```bash
  terraform apply -var-file=terraform.tfvars -var-file=mz_instances.tfvars
  ```

  To approve the changes and apply, enter `yes`. Upon successful completion, you should see output with a summary of changes.

- Verify that the components are running after the upgrade:

  ```bash
  kubectl -n materialize-environment get all
  ```

  Verify the upgrade by checking the `balancerd` events:

  ```bash
  MZ_BALANCERD=$(kubectl -n materialize-environment get pods --no-headers | grep balancerd | awk '{print $1}')
  kubectl -n materialize-environment describe pod/$MZ_BALANCERD
  ```

  The Events section should list that the new version of `balancerd` has been pulled.

  Verify the upgrade by checking the `environmentd` events:

  ```bash
  MZ_ENVIRONMENTD=$(kubectl -n materialize-environment get pods --no-headers | grep environmentd | awk '{print $1}')
  kubectl -n materialize-environment describe pod/$MZ_ENVIRONMENTD
  ```

  The Events section should list that the new version of `environmentd` has been pulled.

- Open the Materialize Console. The Console should display the new version.