How to configure a GCP and K8S component

Google Cloud Platform (GCP) is a suite of cloud computing services, including computing, data storage, data analytics, and machine learning.
Google Cloud Platform provides infrastructure as a service, platform as a service, and serverless computing environments.

Kubernetes is a system for automating the deployment, scaling, and management of containerized applications.

Google Kubernetes Engine (GKE) provides a managed environment for deploying, managing, and scaling your containerized applications using Google infrastructure. The GKE environment consists of multiple machines, specifically Compute Engine instances grouped to form a cluster.
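
As an informal illustration (not part of the snapblocs workflow), the sketch below uses the google-cloud-container Python client to list the GKE clusters in a project; the project ID and location are placeholders.

    from google.cloud import container_v1

    # Placeholder project and region; substitute your own values.
    parent = "projects/my-gcp-project/locations/us-west1"

    client = container_v1.ClusterManagerClient()
    for cluster in client.list_clusters(parent=parent).clusters:
        # Each GKE cluster is backed by Compute Engine instances (nodes).
        print(cluster.name, cluster.current_node_count, cluster.status)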

snapblocs provisions Data Platforms on GCP following GKE best practices: it provisions and configures production-grade Kubernetes clusters and deploys workloads into those clusters. With snapblocs, you benefit from patterns that are already working for many customers in production, and it is easy to get started and deploy to production quickly.

Once you create a stack that includes GCP and K8S components, you can customize those components with the following settings.

Provider Key Name
  • Use a Provider Access Key to deploy a stack to a specific GCP account.
  • Provider Access Keys created at the account level are available for use by all Projects and their Stacks. In essence, they are shared keys.
  • Stacks can only use Provider Access Keys created at the project level within that Project. 
  • Stacks can use a key from their Project or an account key.
  • Delete Keys only when they're not in use by Stacks.
  • To create a Provider Key, see How to Create GCP Service Account Keys. A minimal sketch of using such a key appears after this list.
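
As a rough sketch of how a Provider Access Key (a GCP service account key in JSON form) can be used programmatically, the snippet below loads the key with the google-auth Python library. The file name is a placeholder, and this is only illustrative, not the snapblocs implementation.

    from google.oauth2 import service_account
    from google.cloud import container_v1

    # Placeholder path to the downloaded service account key JSON file.
    credentials = service_account.Credentials.from_service_account_file(
        "provider-key.json",
        scopes=["https://www.googleapis.com/auth/cloud-platform"],
    )

    # Any GCP client can then act under that service account.
    client = container_v1.ClusterManagerClient(credentials=credentials)
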
Compute Engine Location
  1. Location Type: Compute Engine resources are hosted in multiple locations worldwide. These locations are composed of regions and zones. Only the Regional location option is supported for now.
  2. Region: Choose the GCP Region where you want to deploy the Data Platform. Note that because the list of Regions is generated dynamically by calling the GCP API, it may take a few seconds for the list of selectable Regions to appear (a sketch of such a call follows this list).
  3. Node Location: In order to deploy fault-tolerant applications that have high availability, Google recommends deploying applications across multiple zones and multiple regions. This helps protect against unexpected failures of components, up to and including a single zone or region.
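
The region and zone lists can be retrieved directly from the GCP API. The following sketch assumes the google-cloud-compute Python client and a placeholder project ID, and lists each region together with its zones.

    from google.cloud import compute_v1

    project = "my-gcp-project"  # placeholder project ID

    regions_client = compute_v1.RegionsClient()
    for region in regions_client.list(project=project):
        # region.zones holds full resource URLs; keep only the zone names.
        zones = [url.rsplit("/", 1)[-1] for url in region.zones]
        print(region.name, zones)
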
VPC Network
  1. Each VPC network consists of one or more IP address ranges, partitioned into subnets. Each subnet is associated with a region. VPC networks themselves do not have any IP address ranges associated with them; IP ranges are defined on the subnets (see the sketch after this list).
  2. When you create a resource in Google Cloud, you choose a network and subnet.
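
For illustration, the sketch below lists the subnets defined in one region of a VPC network, again assuming the google-cloud-compute Python client and placeholder project and region values.

    from google.cloud import compute_v1

    project = "my-gcp-project"   # placeholder
    region = "us-west1"          # placeholder

    subnet_client = compute_v1.SubnetworksClient()
    for subnet in subnet_client.list(project=project, region=region):
        # Each subnet carries its own IP range; the VPC network itself does not.
        print(subnet.name, subnet.ip_cidr_range, subnet.network)
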
Kubernetes Configuration
  • A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. The worker node(s) host the Pods that are the components of the application workload. 
  • A machine type is a set of virtualized hardware resources available to a virtual machine (VM) instance, including the system memory size, virtual CPU (vCPU) count, and persistent disk limits. You must choose a machine type when you create an instance. See the spec of the machine type here.
  • When you create a GKE cluster or node pool, you can choose the operating system image that runs on each node. See the list of node image types here.
  • Enter the number of nodes per zone. The total number of nodes will be calculated based on the number of Node Locations and the number of nodes per zone.
  • To enable secure access to the worker nodes, provide the CIDR blocks that are allowed SSH access to your Kubernetes worker nodes.
  • Select the Autoscaling option with a minimum and a maximum number of nodes. Autoscaling automatically adds or removes virtual machine (VM) instances as demand changes. See here for details on Autoscaling. A sketch of a node pool definition covering these settings follows this list.
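
To make the settings above concrete, here is a hypothetical GKE node pool definition built with the google-cloud-container Python client. The machine type, image type, node counts, and autoscaling bounds are example values, not snapblocs defaults.

    from google.cloud import container_v1

    # Example node pool: values are illustrative only.
    node_pool = container_v1.NodePool(
        name="data-platform-pool",
        # Nodes per zone; the total node count is this value times the
        # number of Node Locations (zones) selected above.
        initial_node_count=2,
        config=container_v1.NodeConfig(
            machine_type="e2-standard-4",   # machine type (vCPU/memory spec)
            image_type="COS_CONTAINERD",    # node image type
        ),
        autoscaling=container_v1.NodePoolAutoscaling(
            enabled=True,
            min_node_count=1,
            max_node_count=5,
        ),
    )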



