How to Manage Lifecycle of Stack

This video shows how to use the snapblocs UI to pause, resume, clone, and move a stack created through snapblocs.
Watch the video here.

Clone and Move
Once a stack has been configured for one project, an authorized user can clone the stack configuration to a different project and then update it (for example, cluster parameters and target environment settings). The cloned stack can then be deployed to staging or production environments.
Stacks can also be moved across projects while retaining their original configuration.
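Conceptually, cloning amounts to copying the stack's configuration and overriding only the fields that differ for the new project or target environment. The short Python sketch below illustrates that idea; the StackConfig shape and its field names are hypothetical placeholders for illustration, not a snapblocs API.

    # Illustrative sketch only: "clone, then override" as a configuration copy.
    # StackConfig and its fields are hypothetical, not a snapblocs API.
    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class StackConfig:
        project: str
        environment: str       # e.g. "dev", "staging", "prod"
        worker_nodes: int
        instance_type: str

    dev_stack = StackConfig(project="project-a", environment="dev",
                            worker_nodes=3, instance_type="m5.large")

    # Clone into another project and retarget it at production; every field
    # that is not overridden carries over from the original configuration.
    prod_stack = replace(dev_stack, project="project-b",
                         environment="prod", worker_nodes=6)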

Resume and Pause
Tear down any stack that no longer needs to run. Teardown destroys all of the stack's data and configuration.

Pause any stack to stop it without destroying its data or configuration. Pausing a stack saves cloud provider costs, and the stack can be deployed again when needed without having to be rebuilt.

If a stack will be needed again later, use the snapblocs pause and resume features instead of teardown.

Once a stack is paused, all Kubernetes worker nodes are released (scaled to 0 worker nodes) while the control plane keeps running. While the stack is paused, all worker EC2 instances are terminated, and no stack components or business applications are accessible because the worker pods that ran them have been terminated. The storage volumes remain, so no data is lost while the stack is paused. When the stack is resumed, those volumes are attached to newly created worker nodes, which run the containers for the application, data, and stack components.
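
As a rough illustration of what pause and resume involve at the infrastructure level, the sketch below uses the AWS SDK for Python (boto3) to scale an EKS managed node group down to zero workers and back up again. The cluster and node group names are placeholders, and this is not the snapblocs implementation; snapblocs performs the equivalent orchestration automatically when a stack is paused or resumed through its UI.

    # Conceptual sketch only: approximate "pause" and "resume" by scaling an
    # EKS managed node group. Cluster and node group names are placeholders.
    import boto3

    eks = boto3.client("eks", region_name="us-east-1")

    CLUSTER = "example-stack-cluster"      # hypothetical cluster name
    NODEGROUP = "example-stack-workers"    # hypothetical node group name

    def pause(cluster: str, nodegroup: str) -> None:
        # Release all worker nodes; the control plane and the EBS-backed
        # persistent volumes remain, so no data is lost.
        eks.update_nodegroup_config(
            clusterName=cluster,
            nodegroupName=nodegroup,
            scalingConfig={"minSize": 0, "maxSize": 1, "desiredSize": 0},
        )

    def resume(cluster: str, nodegroup: str, size: int = 3) -> None:
        # Recreate worker nodes; pods are rescheduled and existing
        # PersistentVolumeClaims reattach their volumes.
        eks.update_nodegroup_config(
            clusterName=cluster,
            nodegroupName=nodegroup,
            scalingConfig={"minSize": size, "maxSize": size, "desiredSize": size},
        )

    pause(CLUSTER, NODEGROUP)     # stop paying for worker capacity
    resume(CLUSTER, NODEGROUP)    # bring the workers back when needed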


Related Articles

    • How to Deploy Stack
      This video shows how to deploy a Data Flow stack. Data Flow stack is a managed service using Kubernetes that can be used to move data from various input data sources to target data destinations in-stream or bulk mode. See here for detail of the Data ...

    • How to Use Provisioned Applications
      This video shows how to interact with applications provisioned by a Data Flow stack. Data Flow stack is a managed service using Kubernetes that can be used to move data from various input data sources to target data destinations in-stream or bulk ...

    • Data Ingestion to Elasticsearch and S3 using Kafka and Streamsets Data Collector
      Creating a reliable and scalable custom data ingestion pipeline platform can be a tedious process requiring a lot of manual configuration and coding. To address that, the snapblocs Data Flow platform blueprint makes it easy to create a multi-source, ...

    • How to Add Cloud Provider Access Key
      When deploying a snapblocs stack, snapblocs provisions the stack within the customer's cloud account. The provider access key is used to allow snapblocs to access your cloud provider environment for deploying your stacks, collecting statistics of ...