Kubernetes Engine
Introduction
Instantiate the cluster
GKE can automatically upgrade your clusters so they always run the latest stable Kubernetes version
GKE will automatically repair unhealthy nodes for you: it makes periodic health checks on each node in the cluster, and if a node is determined to be unhealthy, it is repaired
GKE supports scaling the cluster itself. It also integrates seamlessly with Google Cloud Build and Container Registry, which allows you to automate deployment
GKE is integrated with Google Virtual Private Clouds (VPCs) and makes use of GCP's networking features
kube-apiserver accepts commands that view or change the state of the cluster, including launching Pods; you interact with it frequently through the kubectl command
etcd is the cluster's database: it stores all of the cluster's configuration data as well as more dynamic information, such as which nodes are part of the cluster, what Pods should be running, and where they should be running
kube-scheduler is responsible for scheduling Pods onto nodes. When it discovers a Pod object that doesn't yet have a node assignment, it chooses a node and simply writes the name of that node into the Pod object
kube-controller-manager continuously monitors the state of the cluster through kube-apiserver. Whenever the current state of the cluster doesn't match the desired state, it attempts to make changes to achieve the desired state
kube-cloud-manager manages controllers that interact with the underlying cloud provider. For example, if you manually launched a Kubernetes cluster on Google Compute Engine, kube-cloud-manager would be responsible for bringing in Google Cloud features like load balancers and storage volumes when you need them
When kube-apiserver wants to start a Pod on a node, it connects to that node's kubelet. The kubelet uses the container runtime to start the Pod, monitors its life cycle, including readiness and liveness probes, and reports back to kube-apiserver
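As a minimal sketch of such probes (the image, paths, and timings here are illustrative assumptions), a Pod can declare them like this:
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo          # illustrative name
spec:
  containers:
  - name: web
    image: nginx
    readinessProbe:         # kubelet withholds traffic until this succeeds
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
    livenessProbe:          # kubelet restarts the container if this keeps failing
      httpGet:
        path: /
        port: 80
      periodSeconds: 10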
The number of nodes can be changed during or after the creation of the cluster. Adding more nodes and deploying multiple replicas of an application will improve the application's availability
Regional cluster: the control plane and nodes are spread across multiple Compute Engine zones within a region
Node pools are a GKE feature rather than a Kubernetes feature
You can enable automatic node upgrades, automatic node repairs, and cluster autoscaling at the node-pool level, as in the sketch below
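A minimal sketch of creating a node pool with these features enabled (the pool name, cluster name, and autoscaling bounds are illustrative):
// create a node pool with auto-upgrade, auto-repair and autoscaling
gcloud container node-pools create my-pool \
    --cluster hello-cluster \
    --enable-autoupgrade \
    --enable-autorepair \
    --enable-autoscaling --min-nodes 1 --max-nodes 5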
Once you build a zonal cluster, you cannot convert it into a regional cluster, or vice versa
A regional cluster defaults to three nodes per zone; this number can be increased or decreased, but every zone keeps the same node count. For example, if you have five nodes in zone one, you will have exactly the same number of nodes in each of the other zones, for a total of 15 nodes
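As a sketch, creating such a regional cluster (the region is an illustrative choice; note that --num-nodes counts nodes per zone, so a region with three zones yields nine nodes in total):
// create a regional cluster with 3 nodes in each of the region's zones
gcloud container clusters create hello-regional \
    --region asia-east1 \
    --num-nodes 3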
Autopilot is a mode of operation in Google Kubernetes Engine (GKE) designed to reduce the operational cost of managing clusters, optimize your clusters for production, and yield higher workload availability. In Autopilot mode, GKE provisions and manages the cluster's underlying infrastructure, including nodes and node pools
In a private cluster, the entire cluster, that is, the control plane and its nodes, is hidden from the public internet. Cluster control planes can be accessed by Google Cloud products such as Cloud Logging or Cloud Monitoring through an internal IP address; they can also be accessed by authorized networks through an external IP address, as in the sketch below
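A minimal creation sketch (the CIDR ranges here are illustrative assumptions):
// nodes get only internal IPs; the control plane keeps an external
// endpoint reachable only from the authorized network below
gcloud container clusters create private-cluster \
    --enable-private-nodes \
    --enable-ip-alias \
    --master-ipv4-cidr 172.16.0.0/28 \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/29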
Namespaces provide scope for naming resources such as Pods, Deployments, and controllers
Namespaces also let you implement resource quotas across the cluster; these quotas define limits for resource consumption within a namespace, as in the sketch below
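A minimal ResourceQuota sketch (the name and limits are illustrative):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota            # illustrative name
  namespace: default
spec:
  hard:
    pods: "10"                # at most 10 Pods in this namespace
    requests.cpu: "4"         # total CPU requests capped at 4 cores
    requests.memory: 8Gi      # total memory requests capped at 8 GiB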
PersistentVolumes are durable and persistent storage resources managed at the cluster level
PersistentVolumeClaims are requests (claims) made by Pods to use PersistentVolumes
A Pod uses this PersistentVolumeClaim to request a PersistentVolume. If a PersistentVolume matches all the requirements defined in a PersistentVolumeClaim, the PersistentVolumeClaim is bound to that PersistentVolume.
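For illustration, a PersistentVolume that such a claim could bind to might look like the following hand-written sketch (pdName refers to an assumed pre-existing Compute Engine disk; on GKE, PersistentVolumes are usually provisioned dynamically instead):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo                # illustrative name
spec:
  capacity:
    storage: 30Gi              # must be at least what the claim requests
  accessModes:
  - ReadWriteOnce              # must include the claim's requested access mode
  gcePersistentDisk:
    pdName: my-data-disk       # assumed pre-existing Compute Engine disk
    fsType: ext4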
Operation
Install kubectl component and connect with your cluster
// install kubectl
gcloud components install kubectl
kubectl version
// connect with cluster (use --zone or --region to match the cluster's location)
gcloud container clusters get-credentials CLUSTER_NAME --zone COMPUTE_ZONE
kubectl get namespaces
Create the artifact registry
gcloud artifacts repositories create hello-repo \
--repository-format=docker \
--location=REGION \
--description="Docker repository"
Build docker image
docker build -t REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v1 .
Push docker image to artifact registry
gcloud auth configure-docker REGION-docker.pkg.dev
docker push REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v1
// Verify
gcloud artifacts docker images list REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo
Create cluster
// Standard
gcloud container clusters create hello-cluster
// Autopilot
gcloud container clusters create-auto hello-cluster
// Check the status
kubectl get nodes
Create single pod
kubectl run POD_NAME \
    --image=REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v1
Create deployment (Method 1)
// create deployment
kubectl create deployment hello-app \
    --image=REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v1
// set the replica
kubectl scale deployment hello-app --replicas=3
// create autoscaler
kubectl autoscale deployment hello-app --cpu-percent=80 --min=1 --max=5
// Verify the changes
kubectl get pods
kubectl describe pod ${POD_NAME}
kubectl get deployments
kubectl describe deployment ${DEPLOYMENT_NAME}
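Note: for the autoscaler above to compute CPU utilization, the Deployment's containers must declare CPU requests. A minimal sketch of the stanza to add under the container spec (the value is illustrative):
resources:
  requests:
    cpu: 100m        # the HPA computes utilization against this request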
Create deployment (Method 2)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-website-deployment
  namespace: default
spec:
  selector:
    matchLabels:
      name: my-website
  replicas: 1
  template:
    metadata:
      labels:
        name: my-website
    spec:
      containers:
      - name: my-website
        image: asia-east2-docker.pkg.dev/geometric-shine-316204/docker/my_website:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
      nodeSelector:
        cloud.google.com/gke-nodepool: POOL_NAME
Notice: the matchLabels must be the same as the labels in the Pod template, so that the Deployment manages the Pods it creates
kubectl apply -f deployment.yaml
Create the service (Method 1)
kubectl expose deployment hello-app --name=hello-app-service --type=LoadBalancer --port 80 --target-port 8080
kubectl get services
kubectl describe service ${SERVICE_NAME}
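Once the load balancer is provisioned, the EXTERNAL-IP column of kubectl get services is populated and the app is reachable on port 80, for example:
// replace EXTERNAL_IP with the value shown by kubectl get services
curl http://EXTERNAL_IP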
Create the service (Method 2)
apiVersion: v1
kind: Service
metadata:
  name: my-website-service
  namespace: default
spec:
  selector:
    name: my-website
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
    nodePort: 30100    # only used with type NodePort
  type: NodePort       # opens this port on every node; use LoadBalancer instead to provision an external IP
Notice: targetPort should be the same as the containerPort, as it is the port exposed by the Pod
Delete the resource
// Delete the cluster
gcloud container clusters delete hello-cluster --zone COMPUTE_ZONE
// Delete the service/deployment/pod
kubectl delete service|deployment|pod NAME
// Delete image
gcloud artifacts docker images delete \
REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v1 \
--delete-tags --quiet
Create PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hello-web-disk
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
kubectl apply -f pvc-demo.yaml
kubectl get persistentvolumeclaim
Mount PVC to Pod
kind: Pod
apiVersion: v1
metadata:
  name: pvc-demo-pod
spec:
  containers:
  - name: frontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: pvc-demo-volume
  volumes:
  - name: pvc-demo-volume
    persistentVolumeClaim:
      claimName: hello-web-disk
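Assuming the manifest is saved as pvc-demo-pod.yaml (an illustrative filename), apply and verify it:
kubectl apply -f pvc-demo-pod.yaml
kubectl get pods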
Mount PVC to StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-demo
spec:
  serviceName: statefulset-demo-service   # a headless Service with this name must exist; the name here is an assumption
  selector:
    matchLabels:
      app: MyApp
  replicas: 3
  template:
    metadata:
      labels:
        app: MyApp
    spec:
      containers:
      - name: stateful-set-container
        image: nginx
        ports:
        - containerPort: 80
          name: http
        volumeMounts:
        - name: hello-web-disk
          mountPath: "/var/www/html"
  volumeClaimTemplates:
  - metadata:
      name: hello-web-disk
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 30Gi
kubectl apply -f statefulset.yaml
kubectl describe statefulset statefulset-demo
kubectl get pods
kubectl get pvc
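Notice: volumeClaimTemplates creates a separate PersistentVolumeClaim for each replica, named after the template and the Pod (for example, hello-web-disk-statefulset-demo-0), and each claim survives Pod rescheduling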