Thursday, April 4, 2024

GKE Workload Optimization


One of the many benefits of using Google Cloud is its billing model, which charges you only for the resources you actually use. With that in mind, you should not only allocate a reasonable amount of resources to your applications and infrastructure, but also use those resources efficiently. GKE offers a number of tools and strategies that can reduce your consumption of various resources and services while also improving your application's availability.

Hands-on Lab

Provision lab environment

  • Set the default zone and create a three-node cluster
gcloud config set compute/zone us-central1-a
gcloud container clusters create test-cluster --num-nodes=3 --enable-ip-alias
  • Create a manifest for the gb-frontend pod
cat << EOF > gb_frontend_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: gb-frontend
  name: gb-frontend
spec:
  containers:
  - name: gb-frontend
    image: gcr.io/google-samples/gb-frontend:v5
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
    ports:
    - containerPort: 80
EOF
  • Apply the manifest to the cluster
kubectl apply -f gb_frontend_pod.yaml

Task 1. Container-native load balancing through ingress

  • Create a ClusterIP Service manifest (the NEG annotation enables container-native load balancing)
cat << EOF > gb_frontend_cluster_ip.yaml
apiVersion: v1
kind: Service
metadata:
  name: gb-frontend-svc
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app: gb-frontend
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
EOF
  • Apply it to the cluster
kubectl apply -f gb_frontend_cluster_ip.yaml
  • Create an Ingress for the app
cat << EOF > gb_frontend_ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gb-frontend-ingress
spec:
  defaultBackend:
    service:
      name: gb-frontend-svc
      port:
        number: 80
EOF
kubectl apply -f gb_frontend_ingress.yaml
  • Retrieve the name of the backend service created by the Ingress
BACKEND_SERVICE=$(gcloud compute backend-services list | grep NAME | cut -d ' ' -f2)
  • Get the health status of the backend service
gcloud compute backend-services get-health $BACKEND_SERVICE --global
  • Retrieve the Ingress details, including its external IP
kubectl get ingress gb-frontend-ingress
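To confirm that container-native load balancing is actually in effect, you can list the network endpoint groups (NEGs) that GKE created from the Service's `cloud.google.com/neg` annotation; a quick sketch, assuming the lab's `us-central1-a` zone (the generated NEG name will vary):

```shell
# List the NEGs created for the annotated Service; pod IPs are
# registered here directly, bypassing the node-level iptables hop
gcloud compute network-endpoint-groups list

# Inspect the endpoints of a specific NEG (substitute the NEG name
# printed by the previous command)
# gcloud compute network-endpoint-groups list-network-endpoints NEG_NAME \
#     --zone us-central1-a
```

Each endpoint listed should map to a pod IP and port, rather than to a node, which is what distinguishes container-native load balancing from the instance-group approach.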

Task 2. Load testing an application

  • Download the Locust image source and build the container image
gsutil -m cp -r gs://spls/gsp769/locust-image .
gcloud builds submit \
    --tag gcr.io/${GOOGLE_CLOUD_PROJECT}/locust-tasks:latest locust-image
  • Verify the image was built
gcloud container images list
  • Download the Locust deployment manifest and apply it, substituting your project ID
gsutil cp gs://spls/gsp769/locust_deploy_v2.yaml .
sed 's/${GOOGLE_CLOUD_PROJECT}/'$GOOGLE_CLOUD_PROJECT'/g' locust_deploy_v2.yaml | kubectl apply -f -
  • Get the external IP of the Locust main service
kubectl get service locust-main
  • Open the Locust web UI in a browser at [EXTERNAL_IP_ADDRESS]:8089
  • Enter 200 for the number of users and 20 for the hatch rate
  • Click Start swarming
  • Open Navigation menu > Kubernetes Engine
  • Select Workloads
  • Click the gb-frontend pod to observe its CPU and memory usage under load
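You can also watch the pod's resource consumption from the command line while the swarm runs; a sketch using `kubectl top`, which relies on the metrics server that GKE clusters run by default:

```shell
# Show current CPU (millicores) and memory usage for the pod;
# compare these against the 100m CPU / 256Mi memory requests
kubectl top pod gb-frontend
```

Comparing actual usage against the requests in the manifest is the basis for right-sizing the workload.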

Task 3. Readiness and liveness probes

  • Create a pod with a liveness probe
cat << EOF > liveness-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    demo: liveness-probe
  name: liveness-demo-pod
spec:
  containers:
  - name: liveness-demo-pod
    image: centos
    args:
    - /bin/sh
    - -c
    - touch /tmp/alive; sleep infinity
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/alive
      initialDelaySeconds: 5
      periodSeconds: 10
EOF
kubectl apply -f liveness-demo.yaml
  • Check the pod's details and events
kubectl describe pod liveness-demo-pod
  • Delete the file the liveness probe checks
kubectl exec liveness-demo-pod -- rm /tmp/alive
  • Check the pod events again; the failed liveness probe causes the container to be restarted
kubectl describe pod liveness-demo-pod

Setting up a readiness probe

  • Create a single pod with a readiness probe, along with a LoadBalancer service
cat << EOF > readiness-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    demo: readiness-probe
  name: readiness-demo-pod
spec:
  containers:
  - name: readiness-demo-pod
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/healthz
      initialDelaySeconds: 5
      periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: readiness-demo-svc
  labels:
    demo: readiness-probe
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    demo: readiness-probe
EOF
kubectl apply -f readiness-demo.yaml
  • Check the service and its external IP
kubectl get service readiness-demo-svc
  • Check the pod events; the readiness probe keeps failing while /tmp/healthz does not exist
kubectl describe pod readiness-demo-pod
  • Create the file the readiness probe checks, then confirm the pod becomes Ready
kubectl exec readiness-demo-pod -- touch /tmp/healthz
kubectl describe pod readiness-demo-pod | grep ^Conditions -A 5
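Another way to see the effect of readiness is through the service's endpoints: a pod that is not Ready is excluded from them, so no traffic reaches it. A quick sketch:

```shell
# While the readiness probe fails, ENDPOINTS is empty;
# once /tmp/healthz exists and the probe passes, the pod IP appears
kubectl get endpoints readiness-demo-svc
```

This is the mechanism that prevents the load balancer from sending requests to a pod that is not yet able to serve them.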

Task 4. Pod disruption budgets

  • Delete the standalone gb-frontend pod
kubectl delete pod gb-frontend
  • Recreate it as a Deployment with 5 replicas
cat << EOF > gb_frontend_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gb-frontend
  labels:
    run: gb-frontend
spec:
  replicas: 5
  selector:
    matchLabels:
      run: gb-frontend
  template:
    metadata:
      labels:
        run: gb-frontend
    spec:
      containers:
        - name: gb-frontend
          image: gcr.io/google-samples/gb-frontend:v5
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
          ports:
            - containerPort: 80
              protocol: TCP
EOF
kubectl apply -f gb_frontend_deployment.yaml
  • Drain the nodes in the default pool
for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=default-pool -o=name); do
  kubectl drain --force --ignore-daemonsets --grace-period=10 "$node";
done
  • Check the replica count; with no disruption budget, availability can drop to zero
kubectl describe deployment gb-frontend | grep ^Replicas
  • Uncordon the nodes so the replicas can be rescheduled
for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=default-pool -o=name); do
  kubectl uncordon "$node";
done
kubectl describe deployment gb-frontend | grep ^Replicas
  • Create a pod disruption budget requiring at least 4 replicas to stay available
kubectl create poddisruptionbudget gb-pdb --selector run=gb-frontend --min-available 4
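The imperative command above can also be expressed declaratively, which fits the manifest-driven style used elsewhere in this lab. A roughly equivalent manifest, assuming the `policy/v1` API available on current GKE versions:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: gb-pdb
spec:
  minAvailable: 4
  selector:
    matchLabels:
      run: gb-frontend
```

Applying this with `kubectl apply -f` would have the same effect: voluntary evictions (such as a node drain) are refused whenever they would leave fewer than 4 gb-frontend pods available.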
  • Drain the nodes again; this time the drain is blocked because evicting more pods would violate the disruption budget
for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=default-pool -o=name); do
  kubectl drain --timeout=30s --ignore-daemonsets --grace-period=10 "$node";
done
kubectl describe deployment gb-frontend | grep ^Replicas

Closing

Friends of the Learning & Doing blog, that wraps up this walkthrough of GKE Workload Optimization. Hope it's useful. See you in the next post.
