How to customize Data Flow Platform
After configuring a new stack of the Data Flow Platform (see How to configure a new stack for Data Flow Platform), you can customize the stack.
Test / Proof of Concept (POC) Stack
To create a simple test Data Flow stack on a cloud provider, set the following parameters for each component; an illustrative configuration sketch follows each parameter list.
On the AWS & K8S component:
Provider Key Name: Choose the AWS Provider Access name assigned to you. See the related guide for creating an AWS Provider Access method.
Regions: Choose the AWS region where you want to deploy the stack.
VPC: Choose the VPC where you want to deploy the stack.
Subnets: Choose at least two public subnets and at least one private subnet.
Desired Capacity: 6 nodes
Maximum Size: 6 nodes
Minimum Size: 6 nodes
Instance Type: t2.large
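For reference, the AWS POC settings above can be summarized in a small configuration sketch. This is a hypothetical Python dictionary for illustration only; stacks are configured through the snapblocs UI, and the key names and placeholder values below are assumptions, not an actual snapblocs API.

```python
# Hypothetical summary of the AWS & K8S POC settings above.
# Key names and placeholders are illustrative; configure the real stack in the snapblocs UI.
aws_k8s_poc = {
    "provider_key_name": "<your-aws-provider-access-name>",  # the name assigned to you
    "region": "us-east-1",                                   # example; any AWS region works
    "vpc": "<target-vpc-id>",
    "subnets": {
        "public": ["<public-subnet-1>", "<public-subnet-2>"],  # at least 2 public
        "private": ["<private-subnet-1>"],                     # at least 1 private
    },
    "desired_capacity": 6,
    "maximum_size": 6,
    "minimum_size": 6,
    "instance_type": "t2.large",
}
```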
On the GCP & K8S component:
Provider Key Name: Choose the cloud provider key name assigned to you. See the related guide for creating a Provider Key Name.
Location Type: Choose between Regional and Zonal. (Regions are independent geographic areas that consist of zones; a zone is a deployment area for Google Cloud resources within a region and should be considered a single failure domain.)
Regions: Choose the GCP region where you want to deploy the stack.
Node Location: Choose the zone(s) where nodes will be located. Selecting three or more zones is recommended for high availability.
VPC Network: Choose the VPC Network where you want to deploy the Data Platform.
VPC Sub Network: Choose the VPC Sub Network where you want to deploy the Data Platform.
Desired Capacity: 5 nodes
Maximum Size: 5 nodes
Minimum Size: 5 nodes
Machine Type: e2-standard-2 (t2.large is an AWS instance type and is not available on GCP; choose a comparable GCP machine type)
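Likewise, the GCP POC settings above can be summarized in a hypothetical sketch. The key names, example region, and zone names below are assumptions for illustration, not an actual snapblocs API; the e2-standard-2 machine type is an assumed GCP equivalent of AWS t2.large.

```python
# Hypothetical summary of the GCP & K8S POC settings above.
# Key names, region, and zones are illustrative; configure the real stack in the snapblocs UI.
gcp_k8s_poc = {
    "provider_key_name": "<your-gcp-provider-key-name>",  # the name assigned to you
    "location_type": "Regional",                          # or "Zonal"
    "region": "us-central1",                              # example; any GCP region works
    "node_locations": ["us-central1-a", "us-central1-b", "us-central1-c"],  # 3+ zones for HA
    "vpc_network": "<target-vpc-network>",
    "vpc_sub_network": "<target-vpc-subnetwork>",
    "desired_capacity": 5,
    "maximum_size": 5,
    "minimum_size": 5,
    "machine_type": "e2-standard-2",  # assumed GCP equivalent of AWS t2.large (2 vCPU, 8 GB)
}
```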
Other Components:
Production Stack
[TBD] - To create a scalable and highly available Data Flow stack, set each component's settings accordingly.
What's Next?
After customizing all components, you can deploy the stack by following the deployment guide.
Related Articles
What is Data Flow Platform
How to configure a new stack for Data Flow Platform
How to customize Data as a Service Platform
What are common use cases for Data Flow Platform
How to customize Kubernetes+ Platform