Upgrade on GCP
To upgrade your Materialize deployment, upgrade the Materialize operator first and then the Materialize instances. The following tutorial upgrades a Materialize deployment running on Google Kubernetes Engine (GKE) on GCP.
The tutorial assumes you have installed Materialize on GKE using the instructions in Install on GCP (either from the examples/simple directory or the root).
Version compatibility
When upgrading, you need to specify compatible versions for the Materialize Operator, `orchestratord`, and `environmentd`. The following table lists compatible versions of the operator and the applications:

| Materialize Operator | orchestratord version | environmentd version |
|---|---|---|
| v25.1.2 | v0.130.4 | v0.130.4 |
Prerequisites
The following procedure performs an in-place upgrade, which incurs downtime. To perform a rolling upgrade (where both the old and new Materialize instances are running before the old instances are removed), set `inPlaceRollout` to `false`. When performing a rolling upgrade, ensure you have enough resources to support running both the old and new Materialize instances.
Google cloud project
You need a GCP project for which you have a role (such as `roles/resourcemanager.projectIamAdmin` or `roles/owner`) that includes permissions to manage access to the project.
gcloud CLI
If you do not have the gcloud CLI installed, install it. For details, see the Install the gcloud CLI documentation.
Google service account
The tutorial assumes the use of a service account. If you do not have a service account to use for this tutorial, create a service account. For details, see Create service accounts.
Terraform
If you do not have Terraform installed, install Terraform.
kubectl and plugins
Using `gcloud` to install `kubectl` also installs the needed plugins. Otherwise, you will need to manually install the `gke-gcloud-auth-plugin` for `kubectl`.

- If you do not have `kubectl`, install it. For details, see Install kubectl and configure cluster access. You will configure `kubectl` to interact with your GKE cluster later in the tutorial.
- If you do not have the `gke-gcloud-auth-plugin` for `kubectl`, install it. For details, see Install the gke-gcloud-auth-plugin.
Helm 3.2.0+
If you do not have Helm version 3.2.0+ installed, install it. For details, see the Helm documentation.
jq (Optional)
Optional. `jq` is used to parse the GKE cluster name and region from the Terraform outputs. Alternatively, you can manually specify the name and region. If you want to use `jq` and do not have it installed, install it.
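For illustration, here is how `jq` pulls fields out of a JSON object shaped like the `terraform output -json gke_cluster` result used later in this tutorial (the sample values are hypothetical):

```shell
# Hypothetical sample shaped like the `terraform output -json gke_cluster` result.
sample='{"name":"mz-simple-gke","location":"us-central1"}'

cluster_name=$(echo "$sample" | jq -r .name)        # -> mz-simple-gke
cluster_region=$(echo "$sample" | jq -r .location)  # -> us-central1

echo "$cluster_name $cluster_region"
```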
Procedure
-
Open a Terminal window.
-
Initialize the gcloud CLI (`gcloud init`) to specify the GCP project you want to use. For details, see the Initializing the gcloud CLI documentation.
💡 Tip: You do not need to configure a default Compute Region and Zone, as you will specify the region later.
-
To the service account that will be used to perform the upgrade, grant the following IAM roles (if the account does not have them already):
roles/editor
roles/iam.serviceAccountAdmin
roles/storage.admin
-
Enter your GCP project ID.

```shell
read -s PROJECT_ID
```
-
Find your service account email for your GCP project:

```shell
gcloud iam service-accounts list --project $PROJECT_ID
```
-
Enter your service account email.

```shell
read -s SERVICE_ACCOUNT
```
-
Grant the service account the necessary IAM roles.

```shell
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT" \
  --role="roles/editor"

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT" \
  --role="roles/iam.serviceAccountAdmin"

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT" \
  --role="roles/storage.admin"
```
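The three bindings differ only in the role, so they can also be written as a loop. A minimal sketch (it echoes the commands rather than running them, so you can review them first; drop the `echo` to apply the bindings):

```shell
# Placeholder values; in the tutorial these are set via `read` above.
PROJECT_ID="${PROJECT_ID:-my-project}"
SERVICE_ACCOUNT="${SERVICE_ACCOUNT:-upgrade-sa@my-project.iam.gserviceaccount.com}"

for role in roles/editor roles/iam.serviceAccountAdmin roles/storage.admin; do
  # Drop the `echo` to actually apply the binding.
  echo gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="serviceAccount:$SERVICE_ACCOUNT" \
    --role="$role"
done
```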
-
For the service account, authenticate to allow Terraform to interact with your GCP project. For details, see Terraform: Google Cloud Provider Configuration reference.
For example, if using User Application Default Credentials, you can run the following command:

```shell
gcloud auth application-default login
```

💡 Tip: If using `GOOGLE_APPLICATION_CREDENTIALS`, use the absolute path to your key file.
-
Go to the `examples/simple` folder in the Materialize Terraform repo directory.

```shell
cd terraform-google-materialize/examples/simple
```
-
Optional. You may need to update your Materialize on Google Cloud Terraform modules to upgrade.
-
Configure `kubectl` to connect to your GKE cluster, specifying:

- `<cluster name>`. Your cluster name has the form `<your prefix>-gke`; e.g., `mz-simple-gke`.
- `<region>`. By default, the example Terraform module uses the `us-central1` region.
- `<project>`. Your GCP project ID.

```shell
gcloud container clusters get-credentials <cluster-name> \
  --region <region> \
  --project <project>
```

Alternatively, you can use the following command to get the cluster name and region from the Terraform output and the project ID from the environment variable set earlier.

```shell
gcloud container clusters get-credentials $(terraform output -json gke_cluster | jq -r .name) \
  --region $(terraform output -json gke_cluster | jq -r .location) \
  --project $PROJECT_ID
```

To verify that you have configured `kubectl` correctly, run the following command:

```shell
kubectl cluster-info
```

For help with `kubectl` commands, see the kubectl Quick reference.
-
Back up your `terraform.tfvars` file.

```shell
cp terraform.tfvars original_terraform.tfvars
```
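If you expect to upgrade more than once, a timestamped copy avoids overwriting an earlier backup. A small sketch (the backup naming scheme is illustrative):

```shell
# Demo file so the copy below has something to work on; use your real
# terraform.tfvars in practice.
touch terraform.tfvars

backup="terraform.tfvars.$(date +%Y%m%d%H%M%S).bak"
cp terraform.tfvars "$backup"
ls "$backup"
```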
-
Update the `terraform.tfvars` file to set the Materialize Operator version and the orchestratord version:

| Variable | Description |
|---|---|
| `operator_version` | New Materialize Operator version. If the variable does not exist, add it and set it to the new version. If the variable exists, update the value to the new version. |
| `orchestratord_version` | New orchestratord version. If the variable does not exist, add it and set it to the new version. If the variable exists, update the value to the new version. |

```hcl
## ... Existing content not shown for brevity
## ... Leave the existing variables unchanged

operator_version      = "v25.1.2"  # Set to the desired operator version
orchestratord_version = "v0.130.4" # Set to the desired orchestratord version
```
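The add-or-update rule above can also be scripted. A hedged sketch against a throwaway demo file (the `set_var` helper and `demo.tfvars` name are illustrative; adapt the file name and versions to your setup):

```shell
# Demo tfvars file; point this at your real terraform.tfvars in practice.
cat > demo.tfvars <<'EOF'
## existing variables left unchanged
operator_version = "v25.1.1"
EOF

set_var () {
  # Update the variable if it is present, append it otherwise.
  if grep -q "^$1" demo.tfvars; then
    sed -i.bak "s|^$1.*|$1 = \"$2\"|" demo.tfvars
  else
    printf '%s = "%s"\n' "$1" "$2" >> demo.tfvars
  fi
}

set_var operator_version      v25.1.2
set_var orchestratord_version v0.130.4

grep _version demo.tfvars
```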
-
Initialize the terraform directory.

```shell
terraform init
```
-
Run `terraform plan` with both the `terraform.tfvars` and your `mz_instances.tfvars` files and review the changes to be made.

```shell
terraform plan -var-file=terraform.tfvars -var-file=mz_instances.tfvars
```

The plan should show the changes to be made for the `materialize_operator`.
-
If you are satisfied with the changes, apply.

```shell
terraform apply -var-file=terraform.tfvars -var-file=mz_instances.tfvars
```

To approve the changes and apply, enter `yes`. Upon successful completion, you should see output with a summary of changes.
-
Verify that the operator is running:

```shell
kubectl -n materialize get all
```

Verify the operator upgrade by checking its events:

```shell
MZ_OPERATOR=$(kubectl -n materialize get pods --no-headers | grep operator | awk '{print $1}')
kubectl -n materialize describe pod/$MZ_OPERATOR
```

- The Containers section should show the `--helm-chart-version` argument set to the new version.
- The Events section should list that the new version of the orchestratord image has been pulled.
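To check the `--helm-chart-version` argument without scanning the full `describe` output by eye, you can grep for it. The snippet below runs against a hypothetical excerpt of that output so it is self-contained; in practice, pipe the `kubectl describe` output into the same pipeline:

```shell
# Hypothetical excerpt of `kubectl describe pod` output.
describe_output='    Args:
      --helm-chart-version=v25.1.2
      --other-flag=foo'

chart_version=$(echo "$describe_output" | grep -o -- '--helm-chart-version=[^ ]*' | cut -d= -f2)
echo "$chart_version"
```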
-
Back up your `mz_instances.tfvars` file.

```shell
cp mz_instances.tfvars original_mz_instances.tfvars
```
-
Update the `mz_instances.tfvars` file to specify the upgrade variables for each instance:

| Variable | Description |
|---|---|
| `create_database` | Set to `false`. |
| `environmentd_version` | New Materialize instance version. |
| `request_rollout` or `force_rollout` | A new UUID string. Can be generated with `uuidgen`. `request_rollout` triggers a rollout only if changes exist; `force_rollout` triggers a rollout even if no changes exist. |
| `inPlaceRollout` | Set to `true` to perform an in-place upgrade. Set to `false` to perform a rolling upgrade. For rolling upgrades, ensure you have enough resources to support having both the old and new Materialize instances running during the upgrade. |

For example, the following instance specifies:

- a `create_database` of `false`,
- an `inPlaceRollout` of `true`,
- an `environmentd_version` of `"v0.130.4"`, and
- a `request_rollout` of `"12345678-1305-1304-1304-123456781304"`.

```hcl
materialize_instances = [
  {
    name                 = "demo"
    namespace            = "materialize-environment"
    database_name        = "demo_db"
    cpu_request          = "1"
    memory_request       = "2Gi"
    memory_limit         = "2Gi"
    create_database      = false
    inPlaceRollout       = true
    environmentd_version = "v0.130.4"
    request_rollout      = "12345678-1305-1304-1304-123456781304"
  }
]
```
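One way to produce the fresh UUID for `request_rollout`: use `uuidgen` where available, with a Linux kernel fallback (a sketch; any UUID source works):

```shell
# Generate a new rollout UUID.
if command -v uuidgen >/dev/null 2>&1; then
  rollout_id=$(uuidgen)
else
  rollout_id=$(cat /proc/sys/kernel/random/uuid)  # Linux fallback
fi

printf 'request_rollout = "%s"\n' "$rollout_id"
```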
-
Run `terraform plan` with both the `terraform.tfvars` and your `mz_instances.tfvars` files and review the changes to be made.

```shell
terraform plan -var-file=terraform.tfvars -var-file=mz_instances.tfvars
```

The plan should show the changes to be made for the Materialize instance.
-
If you are satisfied with the changes, apply.

```shell
terraform apply -var-file=terraform.tfvars -var-file=mz_instances.tfvars
```

To approve the changes and apply, enter `yes`. Upon successful completion, you should see output with a summary of changes.
-
Verify that the components are running after the upgrade:

```shell
kubectl -n materialize-environment get all
```

Verify the upgrade by checking the `balancerd` events:

```shell
MZ_BALANCERD=$(kubectl -n materialize-environment get pods --no-headers | grep balancerd | awk '{print $1}')
kubectl -n materialize-environment describe pod/$MZ_BALANCERD
```

The Events section should list that the new version of the `balancerd` image has been pulled.

Verify the upgrade by checking the `environmentd` events:

```shell
MZ_ENVIRONMENTD=$(kubectl -n materialize-environment get pods --no-headers | grep environmentd | awk '{print $1}')
kubectl -n materialize-environment describe pod/$MZ_ENVIRONMENTD
```

The Events section should list that the new version of the `environmentd` image has been pulled.
-
Open the Materialize Console. The Console should display the new version.
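As a self-contained illustration of reading the pulled image version out of the Events output from the verification steps above, the snippet below parses a hypothetical event line; in practice, pipe the `kubectl describe` output into the same pipeline:

```shell
# Hypothetical Events line from `kubectl describe pod` output.
event='Normal  Pulled  2m  kubelet  Successfully pulled image "materialize/environmentd:v0.130.4"'

image_tag=$(echo "$event" | grep -o 'environmentd:[^"]*' | cut -d: -f2)
echo "$image_tag"
```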