How to customize Data as a Service Platform


After configuring a new stack of Data as a Service Platform by following this, you can customize the stack.
Test / Proof of Concept (POC) Stack
To create a simple test DaaS stack, set the following parameters for your cloud provider. An illustrative provisioning sketch follows each component's parameter list.
On the AWS & K8S for DaaS component:
  • Provider Key Name: Choose the AWS Provider Access name assigned to you. See this for creating an AWS Provider Access method.
  • Regions: Choose the AWS region where you want to deploy the stack.
  • VPC: Choose the VPC where you want to deploy the stack.
  • Subnets: Choose at least two public subnets and at least one private subnet.
  • Kubernetes Configuration
    • Desired Capacity: 6 nodes
    • Maximum Size: 6 nodes
    • Minimum Size: 6 nodes
    • Worker Node Volume Size: 20 GB
    • Instance Type: t2.large
  • Dremio Node Group Configuration
    • Desired Capacity: 3 nodes
    • Maximum Size: 3 nodes
    • Minimum Size: 3 nodes
    • Worker Node Volume Size: 20 GB
    • Instance Type: r5d.xlarge
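For orientation only, the Dremio node-group sizing above corresponds roughly to an EKS managed node group like the boto3 sketch below. The cluster name, node-group name, subnets, and node role are hypothetical placeholders; the platform provisions the node group for you from the values entered in the form.

    # Illustrative only: the platform creates this node group from the form values.
    # Cluster name, subnet IDs, and node role are hypothetical placeholders.
    import boto3

    eks = boto3.client("eks", region_name="us-east-1")  # the region chosen in "Regions"

    eks.create_nodegroup(
        clusterName="daas-poc",                      # hypothetical cluster name
        nodegroupName="dremio-nodes",                # hypothetical node-group name
        scalingConfig={                              # Dremio Node Group Configuration
            "minSize": 3,                            # Minimum Size
            "maxSize": 3,                            # Maximum Size
            "desiredSize": 3,                        # Desired Capacity
        },
        diskSize=20,                                 # Worker Node Volume Size (GB)
        instanceTypes=["r5d.xlarge"],                # Instance Type
        subnets=["subnet-aaa", "subnet-bbb"],        # the subnets selected above
        nodeRole="arn:aws:iam::123456789012:role/daas-node-role",  # hypothetical IAM role
    )

The Kubernetes Configuration group maps to the same fields, with the t2.large instance type and a desired, minimum, and maximum size of 6 nodes.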
On GCP & K8S component:
  • Provider Key Name: Choose the cloud provider key name assigned to you. See here for creating a Provider Key Name.
  • Location Type: Choose between Regional and Zonal. (Regions are independent geographic areas that consist of zones. A zone is a deployment area for Google Cloud resources within a region and should be considered a single failure domain within that region.)
  • Regions: Choose the GCP region where you want to deploy the stack.
  • Node Location: Choose the zones where nodes will be located. Selecting three or more zones is recommended for high availability.
  • VPC Network: Choose the VPC Network where you want to deploy the Data Platform.
  • VPC Sub Network: Choose the VPC Sub Network where you want to deploy the Data Platform.
  • Kubernetes Configuration
    • Desired Capacity: 5 nodes
    • Maximum Size: 5 nodes
    • Minimum Size: 5 nodes
    • Instance Type: t2.large
  • Dremio Node Group Configuration
    • Desired Capacity: 3 nodes
    • Maximum Size: 3 nodes
    • Minimum Size: 3 nodes
    • Worker Node Volume Size: 20 GB
    • Instance Type: r5d.xlarge
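Similarly, and again for orientation only, the Dremio node-group sizing on GCP corresponds roughly to a GKE node pool like the sketch below, using the google-cloud-container client. The project, location, cluster, and pool names are hypothetical placeholders, and the machine type is a placeholder as well; the platform creates the node pool from the values entered in the form.

    # Illustrative only: the platform creates this node pool from the form values.
    # Project, location, cluster, and pool names are hypothetical placeholders.
    from google.cloud import container_v1

    client = container_v1.ClusterManagerClient()

    node_pool = container_v1.NodePool(
        name="dremio-nodes",                          # hypothetical pool name
        initial_node_count=3,                         # Desired Capacity
        config=container_v1.NodeConfig(
            machine_type="n1-standard-4",             # placeholder; use the type from the form
            disk_size_gb=20,                          # Worker Node Volume Size (GB)
        ),
        autoscaling=container_v1.NodePoolAutoscaling(
            enabled=True,
            min_node_count=3,                         # Minimum Size
            max_node_count=3,                         # Maximum Size
        ),
    )

    client.create_node_pool(
        parent="projects/my-project/locations/us-central1/clusters/daas-poc",  # hypothetical
        node_pool=node_pool,
    )

The Kubernetes Configuration group maps to the same fields, with a desired, minimum, and maximum size of 5 nodes.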
Other Components:
  • No changes are required.
Production Stack
[TBD] - To create a scalable and highly available DaaS stack, set each component's settings accordingly.

What's Next?
After customizing all components, you can deploy the stack by following this.



