Upgrade on GCP

To upgrade your Materialize deployment, upgrade the Materialize operator first and then the Materialize instances. The following tutorial upgrades a Materialize deployment running on GCP Google Kubernetes Engine (GKE).

The tutorial assumes you have installed Materialize on GCP Google Kubernetes Engine (GKE) using the instructions in Install on GCP (either from the examples/simple directory or the root).

Version compatibility

When upgrading, you need to specify the Materialize Operator version, the orchestratord version, and the environmentd version. The following table presents the version compatibility for the operator and the applications:

Materialize Operator | orchestratord version | environmentd version
v25.1.2              | v0.130.4              | v0.130.4

Prerequisites

! Important:

The following procedure performs an in-place upgrade, which incurs downtime.

To perform a rolling upgrade (where both the old and new Materialize instances are running before the old instances are removed), set inPlaceRollout to false. When performing a rolling upgrade, ensure you have enough resources to support having both the old and new Materialize instances running.
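As a sketch, a rolling upgrade can be requested per instance in your mz_instances.tfvars; the field names below follow the instance example later in this tutorial, and the values shown are illustrative:

```hcl
# Sketch: request a rolling (not in-place) upgrade for an instance.
materialize_instances = [
  {
    name                 = "demo"
    environmentd_version = "v0.130.4"   # new version to roll out
    inPlaceRollout       = false        # old and new instances run side by side
    # ... keep the instance's other fields unchanged
  }
]
```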

Google Cloud project

You need a GCP project for which you have a role (such as roles/resourcemanager.projectIamAdmin or roles/owner) that includes permissions to manage access to the project.

gcloud CLI

If you do not have the gcloud CLI, install it. For details, see the Install the gcloud CLI documentation.

Google service account

The tutorial assumes the use of a service account. If you do not have a service account to use for this tutorial, create a service account. For details, see Create service accounts.

Terraform

If you do not have Terraform installed, install Terraform.

kubectl and plugins

💡 Tip: Using gcloud to install kubectl will also install the needed plugins. Otherwise, you will need to manually install the gke-gcloud-auth-plugin for kubectl.

Helm 3.2.0+

If you do not have Helm version 3.2.0+ installed, install it. For details, see the Helm documentation.

jq (Optional)

Optional. jq is used to parse the GKE cluster name and region from the Terraform outputs. Alternatively, you can manually specify the name and region. If you want to use jq and do not have it installed, install jq.
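To illustrate how jq is used later in this tutorial, the following sketch parses a hypothetical JSON document shaped like the gke_cluster Terraform output (the field values here are made up for the example):

```shell
# jq -r prints the raw string value of the selected field.
echo '{"name":"mz-simple-gke","location":"us-central1"}' | jq -r .name
# → mz-simple-gke
```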

Procedure

  1. Open a Terminal window.

  2. Initialize the gcloud CLI (gcloud init) to specify the GCP project you want to use. For details, see the Initializing the gcloud CLI documentation.

    💡 Tip: You do not need to configure a default Compute Region and Zone, as you will specify the region in a later step.
  3. Grant the service account that will be used to perform the upgrade the following IAM roles (if the account does not have them already):

    • roles/editor
    • roles/iam.serviceAccountAdmin
    • roles/storage.admin
    1. Enter your GCP project ID.

      read -s PROJECT_ID
      
    2. Find your service account email for your GCP project.

      gcloud iam service-accounts list --project $PROJECT_ID
      
    3. Enter your service account email.

      read -s SERVICE_ACCOUNT
      
    4. Grant the service account the necessary IAM roles.

      gcloud projects add-iam-policy-binding $PROJECT_ID \
      --member="serviceAccount:$SERVICE_ACCOUNT" \
      --role="roles/editor"
      
      gcloud projects add-iam-policy-binding $PROJECT_ID \
      --member="serviceAccount:$SERVICE_ACCOUNT" \
      --role="roles/iam.serviceAccountAdmin"
      
      gcloud projects add-iam-policy-binding $PROJECT_ID \
      --member="serviceAccount:$SERVICE_ACCOUNT" \
      --role="roles/storage.admin"
      
  4. Authenticate as the service account so that Terraform can interact with your GCP project. For details, see Terraform: Google Cloud Provider Configuration reference.

    For example, if using User Application Default Credentials, you can run the following command:

    gcloud auth application-default login
    
    💡 Tip: If using GOOGLE_APPLICATION_CREDENTIALS, use an absolute path to your key file.
  5. Go to the examples/simple folder in the Materialize Terraform repo directory.

    cd terraform-google-materialize/examples/simple
    
  6. Optional. Depending on the version you are upgrading to, you may need to update your Materialize on Google Cloud Terraform modules before upgrading.

  7. Configure kubectl to connect to your GKE cluster, specifying:

    • <cluster name>. Your cluster name has the form <your prefix>-gke; e.g., mz-simple-gke.

    • <region>. By default, the example Terraform module uses the us-central1 region.

    • <project>. Your GCP project ID.

    gcloud container clusters get-credentials <cluster-name>  \
     --region <region> \
     --project <project>
    

    Alternatively, you can use the following command to get the cluster name and region from the Terraform output and the project ID from the environment variable set earlier.

    gcloud container clusters get-credentials $(terraform output -json gke_cluster | jq -r .name) \
     --region $(terraform output -json gke_cluster | jq -r .location) --project $PROJECT_ID
    

    To verify that kubectl is configured correctly, run the following command:

    kubectl cluster-info
    

    For help with kubectl commands, see kubectl Quick reference.

  8. Back up your terraform.tfvars file.

    cp terraform.tfvars original_terraform.tfvars
    
  9. Update the terraform.tfvars to set the Materialize Operator version and the orchestratord version:

    Variable              | Description
    operator_version      | New Materialize Operator version. If the variable does not exist, add it and set it to the new version; if it exists, update its value.
    orchestratord_version | New orchestratord version. If the variable does not exist, add it and set it to the new version; if it exists, update its value.

    For example:

    ##... Existing content not shown for brevity
    ##... Leave the existing variables unchanged
    operator_version      = "v25.1.2"   # Set to the desired operator version
    orchestratord_version = "v0.130.4"  # Set to the desired orchestratord version
    
  10. Initialize the terraform directory.

    terraform init
    
  11. Run terraform plan with both the terraform.tfvars and your mz_instances.tfvars files and review the changes to be made.

    terraform plan -var-file=terraform.tfvars -var-file=mz_instances.tfvars
    

    The plan should show the changes to be made for the materialize_operator.

  12. If you are satisfied with the changes, apply them.

    terraform apply -var-file=terraform.tfvars -var-file=mz_instances.tfvars
    

    To approve the changes and apply, enter yes.

    Upon successful completion, you should see output with a summary of changes.

  13. Verify that the operator is running:

    kubectl -n materialize get all
    

    Verify the operator upgrade by checking its events:

    MZ_OPERATOR=$(kubectl -n materialize get pods --no-headers | grep operator  | awk '{print $1}')
    kubectl -n materialize describe pod/$MZ_OPERATOR
    
    • The Containers section should show the --helm-chart-version argument set to the new version.

    • The Events section should list that the new version of orchestratord has been pulled.

  14. Back up your mz_instances.tfvars file.

    cp mz_instances.tfvars original_mz_instances.tfvars
    
  15. Update the mz_instances.tfvars to specify the upgrade variables for each instance:

    Variable                         | Description
    create_database                  | Set to false.
    environmentd_version             | New Materialize instance version.
    request_rollout or force_rollout | A new UUID string; can be generated with uuidgen. request_rollout triggers a rollout only if changes exist; force_rollout triggers a rollout even if no changes exist.
    inPlaceRollout                   | Set to true to perform an in-place upgrade, or false to perform a rolling upgrade. For rolling upgrades, ensure you have enough resources to support having both the old and new Materialize instances running during the upgrade.

    For example, the following instance specifies:

    • a create_database of false,
    • an inPlaceRollout of true,
    • an environmentd_version of "v0.130.4", and
    • a request_rollout of "12345678-1305-1304-1304-123456781304".
    materialize_instances = [
        {
          name           = "demo"
          namespace      = "materialize-environment"
          database_name  = "demo_db"
          cpu_request    = "1"
          memory_request = "2Gi"
          memory_limit   = "2Gi"
          create_database = false
          inPlaceRollout = true
          environmentd_version = "v0.130.4"
          request_rollout="12345678-1305-1304-1304-123456781304"
        }
    ]
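A fresh UUID for request_rollout can be generated from the shell; a minimal sketch (the /proc fallback is an assumption for Linux systems without uuidgen):

```shell
# uuidgen ships with macOS and most Linux distributions;
# /proc/sys/kernel/random/uuid is a Linux-only fallback.
ROLLOUT_ID=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
echo "request_rollout = \"$ROLLOUT_ID\""
```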
    
  16. Run terraform plan with both the terraform.tfvars and your mz_instances.tfvars files and review the changes to be made.

    terraform plan -var-file=terraform.tfvars -var-file=mz_instances.tfvars
    

    The plan should show the changes to be made for the Materialize instance.

  17. If you are satisfied with the changes, apply them.

    terraform apply -var-file=terraform.tfvars -var-file=mz_instances.tfvars
    

    To approve the changes and apply, enter yes.

    Upon successful completion, you should see output with a summary of changes.

  18. Verify that the components are running after the upgrade:

    kubectl -n materialize-environment get all
    

    Verify the upgrade by checking the balancerd events:

    MZ_BALANCERD=$(kubectl -n materialize-environment get pods --no-headers | grep balancerd  | awk '{print $1}')
    kubectl -n materialize-environment describe pod/$MZ_BALANCERD
    

    The Events section should list that the new version of balancerd has been pulled.

    Verify the upgrade by checking the environmentd events:

    MZ_ENVIRONMENTD=$(kubectl -n materialize-environment get pods --no-headers | grep environmentd  | awk '{print $1}')
    kubectl -n materialize-environment describe pod/$MZ_ENVIRONMENTD
    

    The Events section should list that the new version of environmentd has been pulled.

  19. Open the Materialize Console. The Console should display the new version.
