ALTER CLUSTER
ALTER CLUSTER changes the configuration of a cluster, such as its SIZE or
REPLICATION FACTOR.
Syntax
ALTER CLUSTER has the following syntax variations:
To set a cluster configuration:
ALTER CLUSTER <cluster_name>
SET (
SIZE = <text>,
REPLICATION FACTOR = <int>,
MANAGED = <bool>,
SCHEDULE = { MANUAL | ON REFRESH (...) }
)
[WITH ({ WAIT UNTIL READY({TIMEOUT | ON TIMEOUT {COMMIT|ROLLBACK}}) | WAIT FOR <duration> })]
;
To reset a cluster configuration back to its default value:
ALTER CLUSTER <cluster_name>
RESET (
REPLICATION FACTOR,
MANAGED,
SCHEDULE
)
;
To rename a cluster:
ALTER CLUSTER <cluster_name> RENAME TO <new_cluster_name>;
To change the owner of a cluster:
ALTER CLUSTER <cluster_name> OWNER TO <new_owner_role>;
To change the owner of a cluster, you must have ownership of the cluster and
membership in the <new_owner_role>. See also Required privileges.
The SWAP WITH operation is used for blue/green deployments and is documented
here for completeness. In general, you will not need to perform this operation
manually.
To swap the name of this cluster with another cluster:
ALTER CLUSTER <cluster1> SWAP WITH <cluster2>;
Cluster configuration
| Field | Value | Description |
|---|---|---|
| `SIZE` | `text` | The size of the resource allocations for the cluster. See Size for details, as well as the available legacy sizes. **Warning:** Changing the size of a cluster may incur downtime. For more information, see Resizing considerations. |
| `REPLICATION FACTOR` | `int` | The number of replicas to provision for the cluster. Each replica of the cluster provisions a new pool of compute resources to perform exactly the same computations on exactly the same data. For more information, see Replication factor considerations. |
| `MANAGED` | `bool` | Whether to automatically manage the cluster's replicas based on the configured size and replication factor. |
| `SCHEDULE` | `MANUAL` \| `ON REFRESH` | The scheduling type for the cluster. Default: `MANUAL`. |
WITH options
| Command options (optional) | Value | Description | ||||||
|---|---|---|---|---|---|---|---|---|
WAIT UNTIL READY(...) |
Private preview. This option has known performance or stability issues and is under active development.
|
|||||||
WAIT FOR |
interval |
Private preview. This option has known performance or stability issues and is under active development. A fixed duration to wait for the new replicas to be ready. This option can lead to downtime. As such, we recommend using the WAIT UNTIL READY option instead. |
Considerations
Resizing
Available sizes
| Cluster size | Compute Credits/Hour | Total Capacity | Notes |
|---|---|---|---|
| M.1-nano | 0.75 | 26 GiB | |
| M.1-micro | 1.5 | 53 GiB | |
| M.1-xsmall | 3 | 106 GiB | |
| M.1-small | 6 | 212 GiB | |
| M.1-medium | 9 | 318 GiB | |
| M.1-large | 12 | 424 GiB | |
| M.1-1.5xlarge | 18 | 636 GiB | |
| M.1-2xlarge | 24 | 849 GiB | |
| M.1-3xlarge | 36 | 1273 GiB | |
| M.1-4xlarge | 48 | 1645 GiB | |
| M.1-8xlarge | 96 | 3290 GiB | |
| M.1-16xlarge | 192 | 6580 GiB | Available upon request |
| M.1-32xlarge | 384 | 13160 GiB | Available upon request |
| M.1-64xlarge | 768 | 26320 GiB | Available upon request |
| M.1-128xlarge | 1536 | 52640 GiB | Available upon request |
In most cases, you should not use legacy sizes. M.1 sizes offer better performance per credit for nearly all workloads. We recommend using M.1 sizes for all new clusters, and recommend migrating existing legacy-sized clusters to M.1 sizes. Materialize is committed to supporting customers during the transition period as we move to deprecate legacy sizes.
The legacy size information is provided for completeness.
Valid legacy cc cluster sizes are:
`25cc`, `50cc`, `100cc`, `200cc`, `300cc`, `400cc`, `600cc`, `800cc`, `1200cc`, `1600cc`, `3200cc`, `6400cc`, `128C`, `256C`, `512C`
For clusters using legacy cc sizes, resource allocations are proportional to the
number in the size name. For example, a cluster of size 600cc has 2x as much
CPU, memory, and disk as a cluster of size 300cc, and 1.5x as much CPU,
memory, and disk as a cluster of size 400cc.
Clusters of larger sizes can process data faster and handle larger data volumes.
Resource allocation
To determine the specific resource allocation for a given cluster size, query
the mz_cluster_replica_sizes
system catalog table.
The values in the mz_cluster_replica_sizes table may change at any
time. You should not rely on them for any kind of capacity planning.
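For example, a query along the following lines lists the allocations per size. The column names are taken from the Materialize system catalog; treat this as a sketch and verify them against your region, as they may differ by version:

```sql
-- Inspect per-size resource allocations and credit rates.
SELECT size, processes, cpu_nanocores, memory_bytes, credits_per_hour
FROM mz_catalog.mz_cluster_replica_sizes
ORDER BY credits_per_hour;
```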
Downtime
A resizing operation can incur downtime unless it is used with the WAIT UNTIL READY option. See zero-downtime cluster resizing for details.
Zero-downtime cluster resizing
To enable this feature in your Materialize region, contact our team.
You can use the WAIT UNTIL READY option to resize a cluster with zero
downtime. Instead of restarting the cluster, this approach spins up an
additional cluster replica under the covers with the desired new size, waits
for that replica to hydrate, and then replaces the original replica.
ALTER CLUSTER c1
SET (SIZE 'M.1-xsmall') WITH (WAIT UNTIL READY (TIMEOUT = '10m', ON TIMEOUT = 'COMMIT'));
The ALTER statement is blocking and will return only when the new replica
becomes ready. This could take as long as the specified timeout. During this
operation, any other reconfiguration command issued against this cluster will
fail. Additionally, any connection interruption or statement cancellation will
cause a rollback; no size change will take effect in that case.
Using WAIT UNTIL READY requires that the session remain open: you need to
make sure the Console tab remains open or that your psql connection remains
stable.
Any interruption will cause a cancellation, and no cluster changes will take effect.
Replication factor
The REPLICATION FACTOR option determines the number of replicas provisioned
for the cluster. Each replica of the cluster provisions a new pool of compute
resources to perform exactly the same computations on exactly the same data.
Each replica incurs cost, calculated as cluster size * replication factor per
second. See Usage & billing for more details.
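As an arithmetic illustration using the size table above: an M.1-small cluster, listed at 6 compute credits per hour, running with a replication factor of 2 accrues:

```
credits per hour = credits_per_hour(SIZE) × REPLICATION FACTOR
                 = 6 × 2
                 = 12 compute credits per hour
```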
Replication factor and fault tolerance
Provisioning more than one replica provides fault tolerance. Clusters with multiple replicas can tolerate failures of the underlying hardware that cause a replica to become unreachable. As long as one replica of the cluster remains available, the cluster can continue to maintain dataflows and serve queries.
- Each replica incurs cost, calculated as `cluster size * replication factor` per second. See Usage & billing for more details.
- Increasing the replication factor does not increase the cluster's work capacity. Replicas are exact copies of one another: each replica must do exactly the same work (i.e., maintain the same dataflows and process the same queries) as all the other replicas of the cluster. To increase the capacity of a cluster, you must increase its size.
Materialize automatically assigns names to replicas (e.g., r1, r2). You can
view information about individual replicas in the Materialize console and the system
catalog.
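For example, a query against the system catalog along these lines shows each replica and its cluster. The table and column names follow the Materialize catalog; treat this as a sketch and verify them in your environment:

```sql
-- List replicas (e.g., r1, r2) alongside their cluster and size.
SELECT c.name AS cluster, r.name AS replica, r.size
FROM mz_catalog.mz_cluster_replicas r
JOIN mz_catalog.mz_clusters c ON r.cluster_id = c.id
ORDER BY cluster, replica;
```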
Availability guarantees
When provisioning replicas:
- For clusters sized under `3200cc`, Materialize guarantees that all provisioned replicas in a cluster are spread across the underlying cloud provider's availability zones.
- For clusters sized at `3200cc` and above, even distribution of replicas across availability zones cannot be guaranteed.
Required privileges
To execute the ALTER CLUSTER command, you need:
- Ownership of the cluster.
- To change the owner of a cluster, you must also have membership in the `<new_owner_role>`.
- To swap names with another cluster, you must also have ownership of the other cluster.
Examples
Replication factor
The following example uses ALTER CLUSTER to update the REPLICATION FACTOR of cluster c1 to 2:
ALTER CLUSTER c1 SET (REPLICATION FACTOR 2);
Increasing the REPLICATION FACTOR increases the cluster’s fault
tolerance, not its work capacity.
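To revert to the default replication factor, use the RESET form from the syntax above:

```sql
ALTER CLUSTER c1 RESET (REPLICATION FACTOR);
```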
Resizing
You can alter the cluster size with no downtime (i.e., zero-downtime
cluster resizing) by running the ALTER CLUSTER command with the WAIT UNTIL READY option:
ALTER CLUSTER c1
SET (SIZE 'M.1-xsmall') WITH (WAIT UNTIL READY (TIMEOUT = '10m', ON TIMEOUT = 'COMMIT'));
Using WAIT UNTIL READY requires that the session remain open: you need to
make sure the Console tab remains open or that your psql connection remains
stable.
Any interruption will cause a cancellation, and no cluster changes will take effect.
Alternatively, you can alter the cluster size immediately, without waiting, by
running the ALTER CLUSTER command:
ALTER CLUSTER c1 SET (SIZE 'M.1-xsmall');
This will incur downtime when the cluster contains objects that need to be re-hydrated before they are ready, including indexes, materialized views, and some types of sources.
Schedule
To enable this feature in your Materialize region, contact our team.
For use cases that require using scheduled clusters,
you can set or change the originally configured schedule and related options
using the ALTER CLUSTER command.
ALTER CLUSTER c1 SET (SCHEDULE = ON REFRESH (HYDRATION TIME ESTIMATE = '1 hour'));
See the reference documentation for CREATE CLUSTER or CREATE MATERIALIZED VIEW for more details on
scheduled clusters.
Converting unmanaged to managed clusters
Alter the managed status of a cluster to managed:
ALTER CLUSTER c1 SET (MANAGED);
Materialize permits converting an unmanaged cluster to a managed cluster if the following conditions are met:
- The cluster replica names are `r1`, `r2`, …, `rN`.
- All replicas have the same size.
- If there are no replicas, `SIZE` needs to be specified.
- If specified, the replication factor must match the number of replicas.

Note that the cluster will not have settings for availability zones or compute-specific settings. If needed, these can be set explicitly.
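If the cluster has no replicas, the conditions above require supplying `SIZE` alongside `MANAGED`. A sketch combining both options in one `SET`, per the syntax shown earlier (verify the exact option combination against your Materialize version):

```sql
ALTER CLUSTER c1 SET (MANAGED, SIZE = 'M.1-small');
```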