Checking the version
kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.9", GitCommit:"d15213f69952c79b317e635abff6ff4ec81475f8", GitTreeState:"clean", BuildDate:"2023-12-19T13:39:19Z", GoVersion:"go1.20.12", Compiler:"gc", Platform:"linux/amd64"}
Update the /etc/hosts file with the IPs of the three VMs. Make sure to replace the IP addresses with those of your own VMs.
Example:
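The hosts file entries themselves are not reproduced in this handout; a minimal sketch with placeholder addresses (the worker IPs below are hypothetical, replace all three with your own VM IPs) would be:
10.10.3.243 master
10.10.3.244 worker-0
10.10.3.245 worker-1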
1.1 (Environment preparation) Installing kubectl completion
echo 'source <(kubectl completion bash)' >>~/.bashrc
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -o default -F __start_kubectl k' >>~/.bashrc
source ~/.bashrc
# 1: test
k version
1.2 Installing the Kubernetes cluster
Machine: master
...
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
...
A token is generated at the end of the initialization process. It is important to save it, as it will be used to join the worker nodes to the cluster.
Write down the join command:
Example:
kubeadm join 10.10.3.243:6443 --token m03nzv.vtfeaij5yu876u7z \
--discovery-token-ca-cert-hash sha256:2da9df40f55f901d221d30cf0574264bcd4c62b7c38200498e99e2797a55753f
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
Verification:
ubuntu@master:~$ k get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 3m34s v1.27.9
ubuntu@master:~$ k get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 4m18s v1.27.9
Note: If you want to use network policies (which we will explore later), you must use a CNI plugin that supports this feature (avoid flannel in particular).
Machines: worker-0, worker-1
4. We will now add the two worker nodes to our cluster. To do so, run the following command on the worker nodes worker-0 and worker-1:
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
NAME STATUS ROLES AGE VERSION
master Ready master 25m v1.19.3
worker-0 Ready <none> 2m24s v1.19.3
worker-1 Ready <none> 1m24s v1.19.3
pod/test-pod created
NAME READY STATUS RESTARTS AGE
test-pod 1/1 Running 0 34s
Machine: master
Namespace
Namespaces help manage the complexity of organizing objects within a cluster. They let you group objects so that you can filter and control them as a unit. Whether you are applying custom access-control policies or isolating all the components of a test environment, namespaces are a powerful and flexible concept for managing objects as a group.
Using the command line
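The command itself is not reproduced here; a minimal sketch, assuming the namespace used in the rest of the lab is called lab:
kubectl create namespace lab
kubectl get namespaces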
Another method
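The declarative alternative is a manifest; an equivalent sketch (same assumed name), applied with kubectl apply -f:
apiVersion: v1
kind: Namespace
metadata:
  name: lab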
Pod
The basic execution unit of a Kubernetes application. It is the smallest and simplest unit in the Kubernetes object model that can be created or deployed. A Pod represents processes running in a cluster.
apiVersion: v1
kind: Pod
metadata:
name: lab-pod
namespace: lab
labels:
app: web
spec:
containers:
- image: nginx
name: nginx
Deployment
A Kubernetes Deployment is a Kubernetes object that provides declarative updates for applications. A Deployment describes an application's life cycle: which images to use, how many pods there should be, and how they should be updated.
apiVersion: apps/v1
kind: Deployment
metadata:
name: lab-deployment
namespace: lab
labels:
app: httpd
spec:
replicas: 2
selector:
matchLabels:
app: httpd
template:
metadata:
labels:
app: httpd
spec:
containers:
- name: httpd
image: httpd:2.4.43
ports:
- containerPort: 80
Other ways to deploy applications:
Besides the Deployment type, there are also StatefulSets and DaemonSets.
StatefulSet objects are designed to deploy stateful and clustered applications that save data to persistent storage.
DaemonSets are used to guarantee that all your nodes run a copy of a pod, which lets you run the application on every node. When you add a new node to the cluster, a pod is automatically created on it (see the sketch below).
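Neither kind is exercised in this lab, but as an illustration only, a minimal DaemonSet sketch (hypothetical name and image) looks like this:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: node-agent
        image: busybox
        # Placeholder workload; one copy of this pod runs on every node.
        args: ["sh", "-c", "while true; do sleep 3600; done"]
Kubernetes automatically creates one such pod on each node, including nodes added later.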
Service
In Kubernetes, a Service is an abstraction that defines a logical set of pods and a policy for accessing them (this pattern is sometimes called a micro-service). The set of pods targeted by a Service is usually determined by a selector.
apiVersion: v1
kind: Service
metadata:
name: app-service
namespace: lab
spec:
type: NodePort
selector:
app: web
ports:
- protocol: TCP
port: 80
targetPort: 80
With the following yaml content:
apiVersion: v1
kind: PersistentVolume
metadata:
name: postgres-pv
namespace: storage
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
persistentvolume/postgres-pv created
Name: postgres-pv
Labels: type=local
Annotations: <none>
Finalizers: [kubernetes.io/pv-protection]
StorageClass: manual
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 10Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /mnt/data
HostPathType:
Events: <none>
With the following yaml content:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-pvc
namespace: storage
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
persistentvolumeclaim/postgres-pvc created
6. We can now inspect this PVC:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
postgres-pvc Bound postgres-pv 10Gi RWO manual 14s
Our PVC is now bound to our PV.
With the following yaml content:
apiVersion: v1
kind: Pod
metadata:
name: postgres-with-pvc-pod
namespace: storage
spec:
volumes:
- name: postgres-volume
persistentVolumeClaim:
claimName: postgres-pvc
containers:
- name: postgres-with-pvc
image: postgres
env:
- name: POSTGRES_PASSWORD
value: password
volumeMounts:
- mountPath: "/var/lib/postgresql/data"
name: postgres-volume
subPath: pgdata
pod/postgres-with-pvc-pod created
...
Volumes:
postgres-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: postgres-pvc
ReadOnly: false
...
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
NAME: longhorn
LAST DEPLOYED: Fri Jul 1 11:45:19 2022
NAMESPACE: longhorn-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Longhorn is now installed on the cluster!
Please wait a few minutes for other Longhorn components such as CSI deployments, Engine Images, and Instance Managers to be initialized.
Visit our documentation at https://longhorn.io/docs/
...
Name: longhorn
IsDefaultClass: Yes
Annotations: longhorn.io/last-applied-configmap=kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: longhorn
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: "Delete"
volumeBindingMode: Immediate
parameters:
numberOfReplicas: "3"
staleReplicaTimeout: "30"
fromBackup: ""
fsType: "ext4"
dataLocality: "disabled"
,storageclass.kubernetes.io/is-default-class=true
Provisioner: driver.longhorn.io
Parameters: dataLocality=disabled,fromBackup=,fsType=ext4,numberOfReplicas=3,staleReplicaTimeout=30
AllowVolumeExpansion: True
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
With the following yaml content:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-longhorn-pvc
namespace: storage
spec:
storageClassName: longhorn
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
persistentvolumeclaim/postgres-longhorn-pvc created
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
postgres-longhorn-pvc Bound pvc-69b06a24-90e3-4ad9-8a25-5d7f4d216616 3Gi RWO longhorn 32s
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-69b06a24-90e3-4ad9-8a25-5d7f4d216616 3Gi RWO Delete Bound storage/postgres-longhorn-pvc longhorn 73s
With the following yaml content:
apiVersion: v1
kind: Pod
metadata:
name: postgres-with-longhorn-pvc-pod
namespace: storage
spec:
volumes:
- name: postgres-volume
persistentVolumeClaim:
claimName: postgres-longhorn-pvc
containers:
- name: postgres-with-pvc
image: postgres
env:
- name: POSTGRES_PASSWORD
value: password
volumeMounts:
- mountPath: "/var/lib/postgresql/data"
name: postgres-volume
subPath: pgdata
pod/postgres-with-longhorn-pvc-pod created
We can delete the objects created by this exercise as follows:
kubectl delete -f postgres-longhorn-pvc.yaml -f postgres-pv.yaml -f postgres-pvc.yaml -f postgres-with-pvc-pod.yaml -f postgres-with-longhorn-pvc-pod.yaml
persistentvolumeclaim "postgres-longhorn-pvc" deleted
persistentvolume "postgres-pv" deleted
persistentvolumeclaim "postgres-pvc" deleted
pod "postgres-with-pvc-pod" deleted
pod "postgres-with-longhorn-pvc-pod" deleted
Machine: master
With the following yaml content:
apiVersion: v1
kind: ConfigMap
metadata:
name: redis-config
data:
redis-config: |
maxmemory 2mb
maxmemory-policy allkeys-lru
---
apiVersion: v1
kind: ConfigMap
metadata:
name: redis-env
data:
redis_host: "redis-svc"
redis_port: "6349"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: env-config
data:
log_level: "NOTICE"
---
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: k8s.gcr.io/busybox
command: [ "/bin/sh", "-c", "env" ]
env:
- name: REDIS_HOST
valueFrom:
configMapKeyRef:
name: redis-env
key: redis_host
- name: LOG_LEVEL
valueFrom:
configMapKeyRef:
name: env-config
key: log_level
restartPolicy: Never
---
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod-v
spec:
containers:
- name: test-container
image: k8s.gcr.io/busybox
command: [ "/bin/sh", "-c", "cat /etc/config/redis-config" ]
volumeMounts:
- name: redis-conf-volume
mountPath: /etc/config
volumes:
- name: redis-conf-volume
configMap:
# Provide the name of the ConfigMap containing the files you want
# to add to the container
name: redis-config
restartPolicy: Never
valeurs.txt with the following values:
valeurs.json with the following values:
kubectl create configmap cmjson --from-file=valeurs.json
kubectl create configmap cmtxt --from-file=valeurs.txt
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: k8s.gcr.io/busybox
command: [ "/bin/sh","-c","cat /etc/config/keys" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: cmjson
items:
- key: valeurs.json
path: keys
restartPolicy: Never
Machine: master
kubectl create secret generic dev-db-secret -n secrets --from-literal postgres_password=password
secret/dev-db-secret created
Name: dev-db-secret
Namespace: secrets
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
postgres_password: 8 bytes
With the following yaml content:
apiVersion: v1
kind: Pod
metadata:
name: pod-with-secret
namespace: secrets
spec:
containers:
- name: pod-with-secret
image: postgres
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: dev-db-secret
key: postgres_password
pod/pod-with-secret created
...
Ready: True
Restart Count: 0
Environment:
POSTGRES_PASSWORD: <set to the key 'postgres_password' in secret 'dev-db-secret'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-4xjhx (ro)
...
...
HOSTNAME=pod-with-secret
TERM=xterm
POSTGRES_PASSWORD=password
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
...
Our secret is indeed available as an environment variable inside the container.
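For reference, the environment listing above is shown without its command; it was presumably obtained with something along these lines (an assumption, not necessarily the handout's exact command):
kubectl exec -n secrets pod-with-secret -- env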
secret/secret-file created
With the following yaml content:
apiVersion: v1
kind: Pod
metadata:
name: pod-with-volume-secret
namespace: secrets
spec:
containers:
- name: pod-with-volume-secret
image: redis
volumeMounts:
- name: secret-mount
mountPath: "/tmp"
readOnly: true
volumes:
- name: secret-mount
secret:
secretName: secret-file
pod/pod-with-volume-secret created
...
Environment: <none>
Mounts:
/tmp from secret-mount (ro)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-4xjhx (ro)
...
Iamasecret
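The commands that created the secret-file secret and read it back inside the pod are not reproduced above; a plausible sequence (the file name secret.txt is an assumption) is:
echo -n "Iamasecret" > secret.txt
kubectl create secret generic secret-file -n secrets --from-file=secret.txt
kubectl exec -n secrets pod-with-volume-secret -- cat /tmp/secret.txt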
curl -Lo kubeseal https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.16.0/kubeseal-linux-amd64
chmod +x kubeseal
sudo mv kubeseal /usr/local/bin/
helm repo add stable https://charts.helm.sh/stable
helm repo update
helm install --namespace kube-system sealed-secrets stable/sealed-secrets
NAME: sealed-secrets
LAST DEPLOYED: Sun Nov 1 10:18:33 2020
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
You should now be able to create sealed secrets
kubectl create secret generic -n secrets secret-example --dry-run --from-literal=secret=value -o yaml > secret-example.yaml
cat secret-example.yaml
apiVersion: v1
data:
secret: dmFsdWU=
kind: Secret
metadata:
creationTimestamp: null
name: secret-example
kubeseal --controller-name=sealed-secrets --controller-namespace=kube-system --format yaml <secret-example.yaml > sealed-secret-example.yaml
cat sealed-secret-example.yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
creationTimestamp: null
name: secret-example
namespace: secrets
spec:
encryptedData:
secret: AgBy3DUDSGCwPLFOJ+jYp1wm1Wqf9PlCFLvIdUDPMdSr0tBIniBLNBpQbdZ+bqP6Tq7zBhDuJz4hNq5qchgfHXyKb6qxhSP30BuquSBHboO+19NHMEG6GOYT1TatHJwUVFlzGtqHcIRFwwEOZpJs9FRByYMf4jSbfu1Lb9u1E1Q49I3Ycw+LprqSZG4rZXtnBL+d6R1iO9OKsx6uQ3fklSYRyYuNWCrqGPYINcX9pcShvJHa8N30H6xZT8jrTpp+UPNXQTI3iaBHxHMTcc5jQCcduOp5Wgbm4G8OEr1Pd4fiNCb7QBAuiGLQa81RhdN887cifdv6mweDLnsRJk09fWGIyTXTezgCYnpsBQv0RFk/EEFiL7pm7w6zMHjp+ldy8NwonoJ8DL6mXFM2otdstGiDayoELrr47MEMp+Y4VVvbQai2YufUKdbF0/unBeB0BRMCMHYgqkCoKG5UPaekIVaYSPjUvT69WjY6DJnFoMz8uVtTqIaCpFAZ8Lm0G3cpfko3rwUGDefmVi4E8eLmcLn3t8KSdzkY5TLP+s58LFjFeDPz+OWvxnJ+1NmOig4OgzhItC0ngtulwhY2lXbuLgNhkjTXHTqRlCF4PXu/vcYHFhq4sBp+bTCvVsJYJTBpkNNCefT51KMTIg+xqOWC73/FqFwujJ4JAue4N99Fvh+7qbEYEw5sPPv6CmwuO0oVzNv52bjBRQ==
template:
metadata:
creationTimestamp: null
name: secret-example
namespace: secrets
sealedsecret.bitnami.com/secret-example created
NAME AGE
secret-example 25s
NAME TYPE DATA AGE
secret-example Opaque 1 2m13s
Let's delete the objects created by these exercises:
kubectl delete -f .
kubectl delete secret secret-file -n secrets
kubectl delete secret dev-db-secret -n secrets
Machine: master
With the following yaml content:
apiVersion: v1
kind: Pod
metadata:
name: test-resources
namespace: resources
spec:
containers:
- name: app
image: redis
resources:
requests:
memory: "128Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
pod/test-resources created
...
Host Port: <none>
State: Running
Started: Wed, 28 Oct 2020 13:18:59 +0000
Ready: True
Restart Count: 0
Limits:
cpu: 500m
memory: 128Mi
Requests:
cpu: 250m
memory: 128Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-587zl (ro)
...
With the following yaml content:
apiVersion: v1
kind: LimitRange
metadata:
name: mem-limit-range
namespace: resources
spec:
limits:
- default:
memory: 768Mi
defaultRequest:
memory: 256Mi
type: Container
limitrange/mem-limit-range created
Name: mem-limit-range
Namespace: resources
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
Container memory - - 256Mi 768Mi -
With the following yaml content:
apiVersion: v1
kind: Pod
metadata:
name: test2-resources
namespace: resources
spec:
containers:
- name: app
image: redis
pod/test2-resources created
...
Host Port: <none>
State: Running
Started: Wed, 28 Oct 2020 13:34:12 +0000
Ready: True
Restart Count: 0
Limits:
memory: 768Mi
Requests:
memory: 256Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-587zl (ro)
...
With the following yaml content:
apiVersion: v1
kind: ResourceQuota
metadata:
name: resource-quota
namespace: resources
spec:
hard:
requests.memory: 1Gi
limits.memory: 2Gi
resourcequota/resource-quota created
Name: resource-quota
Namespace: resources
Resource Used Hard
-------- ---- ----
limits.memory 896Mi 2Gi
requests.memory 384Mi 1Gi
With the following yaml content:
apiVersion: v1
kind: Pod
metadata:
name: test3-resources
namespace: resources
spec:
containers:
- name: app
image: redis
resources:
requests:
memory: "768Mi"
cpu: "250m"
Error from server (Forbidden): error when creating "test3-resources.yaml": pods "test3-resources" is forbidden: exceeded quota: resource-quota, requested: requests.memory=768Mi, used: requests.memory=384Mi, limited: requests.memory=1Gi
Creation fails because the requested memory, added to the requests of the two existing pods, exceeds the 1Gi limit set by the ResourceQuota: 384Mi already used + 768Mi requested = 1152Mi > 1024Mi.
We can now delete the resources we created in these exercises:
limitrange "mem-limit-range" deleted
resourcequota "resource-quota" deleted
pod "test-resources" deleted
pod "test2-resources" deleted
Error from server (NotFound): error when deleting "test3-resources.yaml": pods "test3-resources" not found
Machine: master
With the following yaml content:
apiVersion: v1
kind: Pod
metadata:
name: file-liveness
namespace: healthchecking
spec:
containers:
- name: liveness
image: busybox
args:
- /bin/sh
- -c
- touch /tmp/healthy; sleep 10; rm -rf /tmp/healthy; sleep 600
livenessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 5
periodSeconds: 5
pod/file-liveness created
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 29s default-scheduler Successfully assigned healthchecking/file-liveness to worker
Normal Pulling 29s kubelet Pulling image "busybox"
Normal Pulled 27s kubelet Successfully pulled image "busybox" in 1.59651835s
Normal Created 27s kubelet Created container liveness
Normal Started 27s kubelet Started container liveness
Warning Unhealthy 5s (x3 over 15s) kubelet Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
Normal Killing 5s kubelet Container liveness failed liveness probe, will be restarted
The liveness probe therefore eventually fails as expected, since the /tmp/healthy file no longer exists. Note also that Kubernetes kills the container inside the pod and recreates it.
This time we will set up a liveness probe based on an HTTP request executed periodically.
With the following yaml content:
apiVersion: v1
kind: Pod
metadata:
name: http-liveness
namespace: healthchecking
spec:
containers:
- name: liveness
image: nginx
livenessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 3
periodSeconds: 3
This time, the liveness probe sends an HTTP GET request to the root path every 3 seconds. The probe fails or succeeds depending on the status code of the HTTP response.
pod/http-liveness created
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 118s default-scheduler Successfully assigned healthchecking/http-liveness to worker
Normal Pulling 118s kubelet Pulling image "nginx"
Normal Pulled 114s kubelet Successfully pulled image "nginx" in 3.862745132s
Normal Created 114s kubelet Created container liveness
Normal Started 113s kubelet Started container liveness
kubectl describe pods -n healthchecking http-liveness
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 59s default-scheduler Successfully assigned healthchecking/http-liveness to worker
Normal Pulled 57s kubelet Successfully pulled image "nginx" in 1.609742987s
Normal Pulling 34s (x2 over 58s) kubelet Pulling image "nginx"
Warning Unhealthy 34s (x3 over 40s) kubelet Liveness probe failed: HTTP probe failed with statuscode: 403
Normal Killing 34s kubelet Container liveness failed liveness probe, will be restarted
Normal Created 32s (x2 over 57s) kubelet Created container liveness
Normal Started 32s (x2 over 57s) kubelet Started container liveness
Normal Pulled 32s kubelet Successfully pulled image "nginx" in 2.031773864s
We can see that the container was killed by Kubernetes because the liveness probe failed.
We will now look at another way of health checking a pod: the readiness probe. It lets Kubernetes know when the application inside a pod has actually started. Like the liveness probe, it does so with commands, HTTP/TCP requests, and so on.
With the following yaml content:
apiVersion: v1
kind: Pod
metadata:
name: file-readiness
namespace: healthchecking
spec:
containers:
- name: liveness
image: busybox
args:
- /bin/sh
- -c
- sleep 60; touch /tmp/healthy; sleep 600
readinessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 5
periodSeconds: 5
This pod is similar to the file-liveness pod from exercise 1. This time, the pod waits 60 seconds at startup before creating a /tmp/healthy file. The pod also defines a readiness probe that checks for the existence of this /tmp/healthy file.
pod/file-readiness created
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 39s default-scheduler Successfully assigned healthchecking/file-readiness to worker
Normal Pulling 38s kubelet Pulling image "busybox"
Normal Pulled 37s kubelet Successfully pulled image "busybox" in 1.64435698s
Normal Created 37s kubelet Created container liveness
Normal Started 36s kubelet Started container liveness
Warning Unhealthy 1s (x7 over 31s) kubelet Readiness probe failed: cat: can't open '/tmp/healthy': No such file or directory
NAME READY STATUS RESTARTS AGE
file-liveness 0/1 CrashLoopBackOff 7 14m
file-readiness 1/1 Running 0 105s
http-liveness 1/1 Running 1 6m3s
We will delete the resources created by this exercise as follows:
Machine: master
node/worker-0 tainted
node/worker-1 tainted
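The taint command itself is not reproduced in the handout; judging from the describe output below, it was presumably of the form:
kubectl taint nodes worker-0 dedicated=experimental:NoSchedule
kubectl taint nodes worker-1 dedicated=experimental:NoSchedule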
CreationTimestamp: Sun, 01 Nov 2020 09:49:52 +0000
Taints: dedicated=experimental:NoSchedule
Unschedulable: false
With the following yaml content:
apiVersion: v1
kind: Pod
metadata:
name: pod-without-toleration
namespace: scheduling
spec:
containers:
- name: nginx
image: nginx
pod/pod-without-toleration created
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-without-toleration 0/1 Pending 0 11m <none> <none> <none> <none>
Since our pod has no toleration for the taint we placed on the worker-0 and worker-1 nodes, it could not be scheduled.
With the following yaml content:
apiVersion: v1
kind: Pod
metadata:
name: pod-toleration
namespace: scheduling
spec:
containers:
- name: nginx
image: nginx
tolerations:
- key: "dedicated"
value: "experimental"
operator: "Equal"
effect: "NoSchedule"
pod/pod-toleration created
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-toleration 1/1 Running 0 49s 10.44.0.1 worker-0 <none> <none>
The pod can now be scheduled on the worker-0 node.
pod "pod-toleration" deleted
pod "pod-without-toleration" deleted
node/worker-0 untainted
node/worker-1 untainted
NodeSelector
node/master untainted
node/worker-1 labeled
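The label command is not shown either; given the disk=ssd label visible in the describe output below, it was presumably:
kubectl label nodes worker-1 disk=ssd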
Name: worker-1
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
disk=ssd
kubernetes.io/arch=amd64
kubernetes.io/hostname=worker
kubernetes.io/os=linux
With the following yaml content:
apiVersion: v1
kind: Pod
metadata:
name: pod-nodeselector
namespace: scheduling
spec:
containers:
- name: nginx
image: nginx
nodeSelector:
disk: ssd
pod/pod-nodeselector created
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodeselector 1/1 Running 0 17s 10.44.0.1 worker-1 <none> <none>
Unsurprisingly, on the worker-1 node.
pod "pod-nodeselector" deleted
With the following yaml content:
apiVersion: v1
kind: Pod
metadata:
name: pod-nodeaffinity
namespace: scheduling
labels:
pod: alone
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: disk
operator: In
values:
- ssd
containers:
- name: pod-nodeaffinity
image: nginx
pod/pod-nodeaffinity created
kubectl get pods -n scheduling pod-nodeaffinity -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodeaffinity 1/1 Running 0 36s 10.44.0.1 worker-1 <none> <none>
Unsurprisingly, on the worker-1 node.
With the following yaml content:
apiVersion: v1
kind: Pod
metadata:
name: pod-podantiaffinity
namespace: scheduling
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: pod
operator: In
values:
- alone
topologyKey: "kubernetes.io/hostname"
containers:
- name: pod-podantiaffinity
image: nginx
pod/pod-podantiaffinity created
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-podantiaffinity 1/1 Running 0 14s 10.32.0.4 master <none> <none>
This time, on either the master node or worker-0.
With the following yaml content:
apiVersion: v1
kind: Pod
metadata:
name: pod-nodename
namespace: scheduling
spec:
containers:
- name: nginx
image: nginx
nodeName: master
pod/pod-nodename created
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodename 1/1 Running 0 4s 10.44.0.4 master <none> <none>
Unsurprisingly, the master node. :)
We can delete the resources created by this exercise as follows:
pod "pod-nodeaffinity" deleted
pod "pod-nodename" deleted
pod "pod-podantiaffinity" deleted
Error from server (NotFound): error when deleting "pod-nodeselector.yaml": pods "pod-nodeselector" not found
Error from server (NotFound): error when deleting "pod-toleration.yaml": pods "pod-toleration" not found
Error from server (NotFound): error when deleting "pod-without-toleration.yaml": pods "pod-without-toleration" not found
Machine: master
With, respectively, the following yaml contents:
kubectl describe serviceaccounts -n rbac
Name: default
Namespace: rbac
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: default-token-4mpqg
Tokens: default-token-4mpqg
Events: <none>
Note: the token used by the service account is stored in a secret, which can be seen above.
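As an illustration only (not part of the handout), the token stored in that secret could be decoded with something like:
kubectl get secret default-token-4mpqg -n rbac -o jsonpath='{.data.token}' | base64 --decode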
openssl req -new -newkey rsa:4096 -nodes -keyout ${TRIG}-kubernetes.key -out ${TRIG}-kubernetes.csr -subj "/CN=${TRIG}/O=devops"
cat << EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
name: ${TRIG}-kubernetes-csr
spec:
groups:
- system:authenticated
request: $REQUEST
signerName: kubernetes.io/kube-apiserver-client
usages:
- client auth
EOFkubectl get csr ${TRIG}-kubernetes-csr -o jsonpath='{.status.certificate}' | base64 --decode > ${TRIG}-kubernetes-csr.crtkubectl config view -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' --raw | base64 --decode - > kubernetes-ca.crtkubectl config set-cluster $(kubectl config view -o jsonpath='{.clusters[0].name}') --server=$(kubectl config view -o jsonpath='{.clusters[0].cluster.server}') --certificate-authority=kubernetes-ca.crt --kubeconfig=${TRIG}-kubernetes-config --embed-certskubectl config set-credentials ${TRIG} --client-certificate=${TRIG}-kubernetes-csr.crt --client-key=${TRIG}-kubernetes.key --embed-certs --kubeconfig=${TRIG}-kubernetes-config
kubectl config set-context ${TRIG} --cluster=$(kubectl config view -o jsonpath='{.clusters[0].name}') --namespace=rbac --user=${TRIG} --kubeconfig=${TRIG}-kubernetes-config
KUBECONFIG=hel-kubernetes-config kubectx hel
sudo mkdir -p /home/${TRIG}/.kube
sudo cp ${TRIG}-kubernetes-config /home/${TRIG}/.kube/config
sudo chown -R ${TRIG}:${TRIG} /home/${TRIG}/.kube
sudo su - ${TRIG}
kubectl get pods
Error from server (Forbidden): pods is forbidden: User "${TRIG}" cannot list resource "pods" in API group "" in the namespace "default"
#!!! Do this to switch back to the ubuntu user, or use another terminal tab
exit
With the following yaml content:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: pod-reader
namespace: rbac
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]
With the following yaml content:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: pod-creator
namespace: rbac
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list", "create", "update", "patch"]
kubectl apply -f pod-reader.yaml -f pod-creator.yaml
role.rbac.authorization.k8s.io/pod-reader created
role.rbac.authorization.k8s.io/pod-creator created
kubectl describe roles -n rbac pod-reader
Name: pod-reader
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
pods [] [] [get watch list]
kubectl describe roles -n rbac pod-creator
Name: pod-creator
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
pods [] [] [get watch list create update patch]
# 9: ! As the ubuntu user
kubectl run --image nginx nginx -n rbac
# 9: As the hel user
kubectl get po -n rbac
# 9: Try to delete the pod as hel
kubectl delete po nginx -n rbac #! error…
With, respectively, the following yaml contents:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: read-pods
namespace: rbac
subjects:
- kind: User
name: reader
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: pod-reader
apiGroup: rbac.authorization.k8s.io
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: create-pods
namespace: rbac
subjects:
- kind: User
name: creator
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: pod-creator
apiGroup: rbac.authorization.k8s.io
kubectl apply -f read-pods.yaml -f create-pods.yaml
rolebinding.rbac.authorization.k8s.io/read-pods created
rolebinding.rbac.authorization.k8s.io/create-pods created
kubectl describe rolebindings -n rbac create-pods
Name: create-pods
Labels: <none>
Annotations: <none>
Role:
Kind: Role
Name: pod-creator
Subjects:
Kind Name Namespace
---- ---- ---------
User creator
kubectl describe rolebindings -n rbac read-pods
Name: read-pods
Labels: <none>
Annotations: <none>
Role:
Kind: Role
Name: pod-reader
Subjects:
Kind Name Namespace
---- ---- ---------
User reader
kubectl run --image nginx test-rbac -n rbac --as reader
Error from server (Forbidden): pods is forbidden: User "reader" cannot create resource "pods" in API group "" in the namespace "rbac"
kubectl get pods test-rbac -n rbac --as unauthorized
Error from server (Forbidden): pods "test-rbac" is forbidden: User "unauthorized" cannot get resource "pods" in API group "" in the namespace "rbac"
kubectl get pods test-rbac -n rbac --as reader
NAME READY STATUS RESTARTS AGE
test-rbac 1/1 Running 0 58s
With the following yaml content:
apiVersion: v1
kind: Secret
metadata:
name: secret-rbac
namespace : default
type: Opaque
stringData:
iam: asecret
With the following yaml content:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: secret-reader
rules:
- resources: ["secrets"]
verbs: ["get", "watch", "list"]
apiGroups: [""]
With the following yaml content:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: node-reader
rules:
- resources: ["nodes"]
verbs: ["get", "watch", "list"]
apiGroups: [""]
kubectl apply -f secret-reader.yaml -f node-reader.yaml -f secret-rbac.yaml
clusterrole.rbac.authorization.k8s.io/secret-reader created
clusterrole.rbac.authorization.k8s.io/node-reader created
secret/secret-rbac created
With, respectively, the following yaml contents:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: read-secrets-global
subjects:
- kind: User
name: reader
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: secret-reader
apiGroup: rbac.authorization.k8s.io
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: read-nodes-global
subjects:
- kind: User
name: reader
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: node-reader
apiGroup: rbac.authorization.k8s.io
kubectl apply -f read-secrets-global.yaml -f read-nodes-global.yaml
clusterrolebinding.rbac.authorization.k8s.io/read-secrets-global created
clusterrolebinding.rbac.authorization.k8s.io/read-nodes-global created
kubectl get secrets secret-rbac --as unauthorized
Error from server (Forbidden): secrets "secret-rbac" is forbidden: User "unauthorized" cannot get resource "secrets" in API group "" in the namespace "default"
kubectl get nodes --as unauthorized
Error from server (Forbidden): nodes is forbidden: User "unauthorized" cannot list resource "nodes" in API group "" at the cluster scope
kubectl get nodes --as reader
NAME STATUS ROLES AGE VERSION
master Ready master 25h v1.19.3
worker Ready <none> 25h v1.19.3
kubectl delete -f .
rolebinding.rbac.authorization.k8s.io "create-pods" deleted
serviceaccount "example-serviceaccount" deleted
clusterrole.rbac.authorization.k8s.io "node-reader" deleted
role.rbac.authorization.k8s.io "pod-creator" deleted
role.rbac.authorization.k8s.io "pod-reader" deleted
clusterrolebinding.rbac.authorization.k8s.io "read-nodes-global" deleted
rolebinding.rbac.authorization.k8s.io "read-pods" deleted
clusterrolebinding.rbac.authorization.k8s.io "read-secrets-global" deleted
secret "secret-rbac" deleted
clusterrole.rbac.authorization.k8s.io "secret-reader" deleted
Machine: master
With the following yaml content:
apiVersion: apps/v1
kind: Deployment
metadata:
name: example-update
namespace: updating
labels:
app: httpd
spec:
replicas: 4
selector:
matchLabels:
app: httpd
template:
metadata:
labels:
app: httpd
spec:
containers:
- name: httpd
image: httpd:2.4.43
ports:
- containerPort: 80
deployment.apps "example-update" created
deployment "example-update" successfully rolled out
Waiting for deployment "example-update" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "example-update" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "example-update" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "example-update" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "example-update" rollout to finish: 1 old replicas are pending termination...
deployment "example-update" successfully rolled out
REVISION CHANGE-CAUSE
1 kubectl apply --filename=example-update.yaml --record=true
2 kubectl apply --filename=example-update.yaml --record=true
httpd:2.4.46
httpd:2.4.46
httpd:2.4.46
httpd:2.4.46
deployment.apps/example-update rolled back
Waiting for deployment "example-update" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "example-update" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "example-update" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "example-update" rollout to finish: 3 of 4 updated replicas are available...
deployment "example-update" successfully rolled out
httpd:2.4.43
httpd:2.4.43
httpd:2.4.43
httpd:2.4.43
...
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
...
With the following yaml content:
apiVersion: v1
kind: Pod
metadata:
labels:
run: blue
name: app-v1
namespace: updating
spec:
containers:
- image: nginx
name: nginx
With the following yaml content:
apiVersion: v1
kind: Service
metadata:
name: app-service
namespace: updating
spec:
type: ClusterIP
selector:
run: blue
ports:
- protocol: TCP
port: 80
targetPort: 80
pod/app-v1 created
service/app-service created
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app-service ClusterIP 10.106.61.45 <none> 80/TCP 23s
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
With the following yaml content:
apiVersion: v1
kind: Pod
metadata:
labels:
run: green
name: app-v2
namespace: updating
spec:
containers:
- image: httpd
name: httpd
pod/app-v2 created
service/app-service configured
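The command used to repoint app-service at the green pod is not shown in the handout; one hypothetical way to switch the selector is a patch (editing the Service manifest and re-applying it works just as well):
kubectl patch service app-service -n updating -p '{"spec":{"selector":{"run":"green"}}}'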
<html><body><h1>It works!</h1></body></html>
service/app-service configured
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
We can now delete the resources we created in these exercises:
kubectl delete -f .
service "app-service" deleted
pod "app-v1" deleted
pod "app-v2" deleted
deployment.apps "example-update" deleted
Machine: master
With, respectively, these yaml contents:
apiVersion: v1
kind: Pod
metadata:
name: source1-pod
namespace: network-policies
labels:
role: source1
spec:
containers:
- name: source1
image: nginx
apiVersion: v1
kind: Pod
metadata:
name: source2-pod
namespace: network-policies
labels:
role: source2
spec:
containers:
- name: source2
image: nginx
apiVersion: v1
kind: Pod
metadata:
name: dest-pod
namespace: network-policies
labels:
role: dest
spec:
containers:
- name: dest
image: nginx
pod/dest-pod created
pod/source1-pod created
pod/source2-pod created
With, respectively, the following yaml contents:
apiVersion: v1
kind: Service
metadata:
name: source1-service
namespace: network-policies
spec:
type: ClusterIP
selector:
role: source1
ports:
- protocol: TCP
port: 80
targetPort: 80
apiVersion: v1
kind: Service
metadata:
name: source2-service
namespace: network-policies
spec:
type: ClusterIP
selector:
role: source2
ports:
- protocol: TCP
port: 80
targetPort: 80
apiVersion: v1
kind: Service
metadata:
name: dest-service
namespace: network-policies
spec:
type: ClusterIP
selector:
role: dest
ports:
- protocol: TCP
port: 80
targetPort: 80
service/dest-service created
service/source1-service created
service/source2-service created
kubectl exec -n network-policies -it source1-pod -- curl dest-service
kubectl exec -n network-policies -it source2-pod -- curl dest-service
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Without Network Policies, we can see that the requests go through fine.
With the following yaml content:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: ingress-network-policy
namespace: network-policies
spec:
podSelector:
matchLabels:
role: dest
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
role: source1
ports:
- protocol: TCP
port: 80
networkpolicy.networking.k8s.io/ingress-network-policy created
Name: ingress-network-policy
Namespace: network-policies
Created on: 2020-11-02 09:29:07 +0000 UTC
Labels: <none>
Annotations: <none>
Spec:
PodSelector: role=dest
Allowing ingress traffic:
To Port: 80/TCP
From:
PodSelector: role=source1
Not affecting egress traffic
Policy Types: Ingress
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
curl: (7) Failed to connect to dest-service port 80: Connection timed out
command terminated with exit code 7
With the following yaml content:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: egress-network-policy
namespace: network-policies
spec:
podSelector:
matchLabels:
role: dest
policyTypes:
- Egress
egress: []
networkpolicy.networking.k8s.io/egress-network-policy created
curl: (6) Could not resolve host: source2-service
command terminated with exit code 6
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: egress-network-policy
namespace: network-policies
spec:
podSelector:
matchLabels:
role: dest
policyTypes:
- Egress
egress:
- to:
ports:
- protocol: TCP
port: 80
- port: 53
protocol: UDP
- port: 53
protocol: TCP
12. Let's apply the change:
networkpolicy.networking.k8s.io/egress-network-policy configured
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
We will delete the resources created by this exercise as follows:
pod "dest-pod" deleted
service "dest-service" deleted
networkpolicy.networking.k8s.io "egress-network-policy" deleted
networkpolicy.networking.k8s.io "ingress-network-policy" deleted
pod "source1-pod" deleted
service "source1-service" deleted
pod "source2-pod" deleted
service "source2-service" deleted
Machine: master
replacing IP-PUB-MASTER and IP-PRIV-MASTER with your own values
controller:
hostNetwork: false
hostPort:
enabled: true
ports:
http: 80
https: 443
service:
enabled: true
externalIPs:
- IP-PUB-MASTER
- IP-PRIV-MASTER
type: NodePort
kubectl create namespace ingress-nginx
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace -f values.yaml
NAME: ingress-nginx
LAST DEPLOYED: Tue Oct 27 13:03:35 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w ingress-nginx-controller'
An example Ingress that makes use of the controller:
...
With, respectively, the following yaml files:
apiVersion: v1
kind: Pod
metadata:
labels:
run: ingress-nginx-pod
name: ingress-nginx-pod
namespace: ingress
spec:
containers:
- image: nginx
name: nginx
apiVersion: v1
kind: Pod
metadata:
labels:
run: ingress-httpd-pod
name: ingress-httpd-pod
namespace: ingress
spec:
containers:
- image: httpd
name: httpd
kubectl apply -f ingress-nginx-pod.yaml -f ingress-httpd-pod.yaml
pod/ingress-nginx-pod created
pod/ingress-httpd-pod created
With the following yaml files:
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx-service
namespace: ingress
spec:
type: ClusterIP
selector:
run: ingress-nginx-pod
ports:
- protocol: TCP
port: 80
targetPort: 80
apiVersion: v1
kind: Service
metadata:
name: ingress-httpd-service
namespace: ingress
spec:
type: ClusterIP
selector:
run: ingress-httpd-pod
ports:
- protocol: TCP
port: 80
targetPort: 80
service/ingress-nginx-service created
service/ingress-httpd-service created
With the following yaml content:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
name: ingress-with-paths
namespace: ingress
spec:
rules:
- http:
paths:
- path: /nginx
pathType: Prefix
backend:
service:
name: ingress-nginx-service
port:
number: 80
- path: /httpd
pathType: Prefix
backend:
service:
name: ingress-httpd-service
port:
number: 80
Warning: networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
ingress.networking.k8s.io/ingress-with-paths created
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-with-paths <none> * 80 19s
kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.99.141.243 <pending> 80:32527/TCP,443:30666/TCP 14m
ingress-nginx-controller-admission ClusterIP 10.97.240.239 <none> 443/TCP 14m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-httpd-service ClusterIP 10.110.184.101 <none> 80/TCP 19m
ingress-nginx-service ClusterIP 10.97.71.54 <none> 80/TCP 19m
<html><body><h1>It works!</h1></body></html>
curl IP_NGINX_SERVICE
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
<html><body><h1>It works!</h1></body></html>
curl IP_INGRESS/nginx
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
With the following yaml content:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
name: ingress-with-hosts
namespace: ingress
spec:
rules:
- host: nginx.example.com
http:
paths:
- backend:
service:
name: ingress-nginx-service
port:
number: 80
path: /
pathType: Prefix
- host: httpd.example.com
http:
paths:
- backend:
service:
name: ingress-httpd-service
port:
number: 80
path: /
pathType: Prefix
ingress.networking.k8s.io/ingress-with-hosts created
<html><body><h1>It works!</h1></body></html>
curl nginx.example.com
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Machine: master
Canary deployments enable the progressive rollout of new application versions without any service interruption.
The NGINX Ingress Controller supports traffic-splitting policies based on headers, cookies, and weight. While header- and cookie-based policies serve a new service version to a subset of users, weight-based policies divert a percentage of the traffic to a new service version.
The NGINX Ingress Controller uses the following annotations to enable Canary deployments:
- nginx.ingress.kubernetes.io/canary-by-header
- nginx.ingress.kubernetes.io/canary-by-header-value
- nginx.ingress.kubernetes.io/canary-by-header-pattern
- nginx.ingress.kubernetes.io/canary-by-cookie
- nginx.ingress.kubernetes.io/canary-weight
The rules are evaluated in this order:
canary-by-header
canary-by-cookie
canary-weight
Canary deployments require you to create two Ingresses: one for the regular traffic and one for the alternative traffic. Be aware that only a single canary Ingress can be applied.
You enable a particular traffic-splitting rule by setting the corresponding canary annotation to true in the Kubernetes Ingress resource, as in the following example:
nginx.ingress.kubernetes.io/canary-by-header: "true"
Example:
apiVersion: v1
kind: Service
metadata:
name: echo-v1
spec:
type: ClusterIP
ports:
- port: 80
protocol: TCP
name: http
selector:
app: echo
version: v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: echo-v1
spec:
replicas: 1
selector:
matchLabels:
app: echo
version: v1
template:
metadata:
labels:
app: echo
version: v1
spec:
containers:
- name: echo
image: "hashicorp/http-echo"
args:
- -listen=:80
- --text="echo-v1"
ports:
- name: http
protocol: TCP
containerPort: 80
apiVersion: v1
kind: Service
metadata:
name: echo-v2
spec:
type: ClusterIP
ports:
- port: 80
protocol: TCP
name: http
selector:
app: echo
version: v2
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: echo-v2
spec:
replicas: 1
selector:
matchLabels:
app: echo
version: v2
template:
metadata:
labels:
app: echo
version: v2
spec:
containers:
- name: echo
image: "hashicorp/http-echo"
args:
- -listen=:80
- --text="echo-v2"
ports:
- name: http
protocol: TCP
containerPort: 80
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/rewrite-target: /
kubernetes.io/ingress.class: nginx
name: ingress-echo
spec:
#ingressClassName: nginx
rules:
- host: canary.example.com
http:
paths:
- path: /echo
pathType: Exact
backend:
service:
name: echo-v1
port:
number: 80
echo-v2
Deploy the following ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-by-header: "Region"
nginx.ingress.kubernetes.io/canary-by-header-pattern: "fr|us"
kubernetes.io/ingress.class: nginx
name: ingress-echo-canary-header
spec:
#ingressClassName: nginx
rules:
- host: canary.example.com
http:
paths:
- path: /echo
pathType: Exact
backend:
service:
name: echo-v2
port:
number: 80
curl -H "Host: canary.example.com" -H "Region: us" http://<IP_ADDRESS>:<PORT>/echo
curl -H "Host: canary.example.com" -H "Region: de" http://<IP_ADDRESS>:<PORT>/echo
curl -H "Host: canary.example.com" http://<IP_ADDRESS>:<PORT>/echo
echo-v2
echo-v1
echo-v1
Deploy the following ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-by-cookie: "my-cookie"
kubernetes.io/ingress.class: nginx
name: ingress-echo-canary-cookie
spec:
#ingressClassName: nginx
rules:
- host: canary.example.com
http:
paths:
- path: /echo
pathType: Exact
backend:
service:
name: echo-v2
port:
number: 80
curl -s --cookie "my-cookie=always" -H "Host: canary.example.com" http://<IP_ADDRESS>:<PORT>/echo
curl -s --cookie "other-cookie=always" -H "Host: canary.example.com" http://<IP_ADDRESS>:<PORT>/echo
curl -H "Host: canary.example.com" http://<IP_ADDRESS>:<PORT>/echo
echo-v2
echo-v1
echo-v1
Deploy the following ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-by-header: "X-Canary"
nginx.ingress.kubernetes.io/canary-weight: "50"
kubernetes.io/ingress.class: nginx
name: ingress-echo-canary-weight
spec:
#ingressClassName: nginx
rules:
- host: canary.example.com
http:
paths:
- path: /echo
pathType: Exact
backend:
service:
name: echo-v2
port:
number: 80
Make sure you get roughly a 50/50 split between echo-v1 and echo-v2.
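Before moving on to k6, a quick manual way to eyeball the split (a sketch, adapt the address to your setup):
for i in $(seq 1 20); do curl -s -H "Host: canary.example.com" http://<IP_ADDRESS>:<PORT>/echo; done | sort | uniq -c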
Install k6: https://k6.io/docs/getting-started/installation/
Use the file script.js below, adapting the URL:
Change http://localhost/echo to http://ip-pub-loadbalancer/echo
import http from 'k6/http';
import {check, sleep} from 'k6';
import {Rate} from 'k6/metrics';
import {parseHTML} from "k6/html";
const reqRate = new Rate('http_req_rate');
export const options = {
stages: [
{target: 20, duration: '20s'},
{target: 20, duration: '20s'},
{target: 0, duration: '20s'},
],
thresholds: {
'checks': ['rate>0.9'],
'http_req_duration': ['p(95)<1000'],
'http_req_rate{deployment:echo-v1}': ['rate>=0'],
'http_req_rate{deployment:echo-v2}': ['rate>=0'],
},
};
export default function () {
const params = {
headers: {
'Host': 'canary.example.com',
'Content-Type': 'text/plain',
},
};
const res = http.get(`http://localhost/echo`, params);
check(res, {
'status code is 200': (r) => r.status === 200,
});
var body = res.body.replace(/[\r\n]/gm, '');
switch (body) {
case '"echo-v1"':
reqRate.add(true, { deployment: 'echo-v1' });
reqRate.add(false, { deployment: 'echo-v2' });
break;
case '"echo-v2"':
reqRate.add(false, { deployment: 'echo-v1' });
reqRate.add(true, { deployment: 'echo-v2' });
break;
}
sleep(1);
}
and run it as follows:
k6 run script.js
check how the requests are distributed
We can delete the resources created by this exercise as follows:
pod "ingress-httpd-pod" deleted
service "ingress-httpd-service" deleted
pod "ingress-nginx-pod" deleted
service "ingress-nginx-service" deleted
Warning: networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
ingress.networking.k8s.io "ingress-with-hosts" deleted
ingress.networking.k8s.io "ingress-with-paths" deleted
Machine: master
error: Metrics API not available… Without success.
error: Metrics API not available
We need to install a metrics server.
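The commands that produced these errors are not reproduced above; they were presumably the resource metrics queries:
kubectl top nodes
kubectl top pods -A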
With the following yaml content:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:aggregated-metrics-reader
labels:
rbac.authorization.k8s.io/aggregate-to-view: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: metrics-server-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
name: v1beta1.metrics.k8s.io
spec:
service:
name: metrics-server
namespace: kube-system
group: metrics.k8s.io
version: v1beta1
insecureSkipTLSVerify: true
groupPriorityMinimum: 100
versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: metrics-server
namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: metrics-server
namespace: kube-system
labels:
k8s-app: metrics-server
spec:
selector:
matchLabels:
k8s-app: metrics-server
template:
metadata:
name: metrics-server
labels:
k8s-app: metrics-server
spec:
serviceAccountName: metrics-server
volumes:
# mount in tmp so we can safely use from-scratch images and/or read-only containers
- name: tmp-dir
emptyDir: {}
containers:
- name: metrics-server
image: k8s.gcr.io/metrics-server/metrics-server:v0.3.7
imagePullPolicy: IfNotPresent
args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
ports:
- name: main-port
containerPort: 4443
protocol: TCP
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- name: tmp-dir
mountPath: /tmp
nodeSelector:
kubernetes.io/os: linux
---
apiVersion: v1
kind: Service
metadata:
name: metrics-server
namespace: kube-system
labels:
kubernetes.io/name: "Metrics-server"
kubernetes.io/cluster-service: "true"
spec:
selector:
k8s-app: metrics-server
ports:
- port: 443
protocol: TCP
targetPort: main-port
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- pods
- nodes
- nodes/stats
- namespaces
- configmaps
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
Warning: apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
master 180m 9% 1249Mi 15%
worker 47m 2% 818Mi 10%
NAMESPACE NAME CPU(cores) MEMORY(bytes)
kube-system coredns-f9fd979d6-9kb87 4m 12Mi
kube-system coredns-f9fd979d6-tl95z 3m 12Mi
kube-system etcd-master 20m 41Mi
kube-system kube-apiserver-master 48m 294Mi
kube-system kube-controller-manager-master 16m 47Mi
kube-system kube-proxy-8dvrj 1m 15Mi
kube-system kube-proxy-ll8tb 1m 15Mi
kube-system kube-scheduler-master 4m 21Mi
kube-system metrics-server-75f98fdbd5-2lp87 1m 12Mi
kube-system weave-net-c4b7d 2m 58Mi
kube-system weave-net-zfqt6 2m 62Mi
Perfect!
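For reference, the node and pod figures above come from the standard metrics commands, which only work once metrics-server is running:
# Node resource usage
kubectl top nodes
# Pod resource usage across all namespaces
kubectl top pods -A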
We will deploy a monitoring stack based on Prometheus and Grafana via Helm.
With the following commands:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm upgrade --install prometheus prometheus-community/kube-prometheus-stack --values kube-prometheus-stack.yaml --namespace monitoring --create-namespace
kubectl --namespace monitoring port-forward --address 0.0.0.0 service/prometheus-kube-prometheus-prometheus 8080:80
Forwarding from 0.0.0.0:8080 -> 9090
Forwarding from 0.0.0.0:8081 -> 80
Machine : master
curl -Lo helm.tar.gz https://get.helm.sh/helm-v3.3.4-linux-amd64.tar.gz
tar xvf helm.tar.gz
sudo mv linux-amd64/helm /usr/local/bin
rm -rf helm.tar.gz linux-amd64
version.BuildInfo{Version:"v3.3.4", GitCommit:"a61ce5633af99708171414353ed49547cf05013d", GitTreeState:"clean", GoVersion:"go1.14.9"}
"ealenn" has been added to your repositorieshelm install echo-server ealenn/echo-server
NAME: echo-server
LAST DEPLOYED: Tue Oct 27 10:21:27 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
echo-server default 1 2020-10-27 10:21:27.307028704 +0000 UTC deployed echo-server-0.3.0 0.4.0
NAME READY STATUS RESTARTS AGE
echo-server-79cc9789cb-hqmlt 1/1 Running 0 2m10s
release "echo-server" uninstalled
With the following YAML content:
NAME: echo-server
LAST DEPLOYED: Sat Oct 31 17:57:50 2020
NAMESPACE: helm
STATUS: deployed
REVISION: 1
NAME READY STATUS RESTARTS AGE
echo-server-66d9c454b5-8crn7 1/1 Running 0 32s
echo-server-66d9c454b5-wdr7p 1/1 Running 0 32s
echo-server-66d9c454b5-z6cwt 1/1 Running 0 32s
release "echo-server" uninstalled
Machine : master
https://kubectl.docs.kubernetes.io/installation/kustomize/
Choose the installation method that suits you!
.
├── base
│ ├── deployment.yaml
│ ├── kustomization.yaml
│ └── service.yaml
└── overlays
└── prod
├── custom-env.yaml
├── database-secret.yaml
├── kustomization.yaml
└── replica-and-rollout-strategy.yaml
base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: sl-demo-app
spec:
selector:
matchLabels:
app: sl-demo-app
template:
metadata:
labels:
app: sl-demo-app
spec:
containers:
- name: app
image: nginx:1.19.9
ports:
- name: http
containerPort: 80
protocol: TCP
base/service.yaml
apiVersion: v1
kind: Service
metadata:
name: sl-demo-app
spec:
ports:
- name: http
port: 8080
targetPort: 80
selector:
app: sl-demo-app
base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- service.yaml
- deployment.yaml
NAME READY STATUS RESTARTS AGE
pod/sl-demo-app-bb6494cc6-sd6k7 1/1 Running 0 6m42s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/sl-demo-app 1/1 1 1 6m42s
NAME DESIRED CURRENT READY AGE
replicaset.apps/sl-demo-app-bb6494cc6 1 1 1 6m42s
overlays/prod/replica-and-rollout-strategy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: sl-demo-app
spec:
replicas: 10
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
overlays/prod/database-secret.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: sl-demo-app
spec:
template:
spec:
containers:
- name: app
env:
- name: "DB_PASSWORD"
valueFrom:
secretKeyRef:
name: sl-demo-app
key: db-password
overlays/prod/custom-env.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: sl-demo-app
spec:
template:
spec:
containers:
- name: app # (1)
env:
- name: CUSTOM_ENV_VARIABLE
value: Value defined by Kustomize ❤️
overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
caas.fr/environment: "prod"
bases:
- ../../base
patchesStrategicMerge:
- custom-env.yaml
- replica-and-rollout-strategy.yaml
- database-secret.yaml
secretGenerator:
- literals:
- db-password=12345
name: sl-demo-app
type: Opaque
images:
- name: nginx
newName: nginx
newTag: 1.21.0
kubectl delete -k k8s/base
kubectl apply -k k8s/overlays/prod
kubectl get all -l caas.fr/environment=prod
Machine : master
With the kubectl CLI, we can already retrieve a fair amount of log and diagnostic information about our Kubernetes cluster.
Kubernetes master is running at https://10.156.0.3:6443
KubeDNS is running at https://10.156.0.3:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://10.156.0.3:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
LAST SEEN TYPE REASON OBJECT MESSAGE
81s Normal ExternalProvisioning persistentvolumeclaim/postgres-openebs-pvc waiting for a volume to be created, either by external provisioner "openebs.io/provisioner-iscsi" or manually created by system administrator
89s Normal Provisioning persistentvolumeclaim/postgres-openebs-pvc External provisioner is provisioning volume for claim "default/postgres-openebs-pvc"
pod/test-logs created
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up...
{"log":"I1027 12:51:51.629401 1 client.go:360] parsed scheme: \"passthrough\"\n","stream":"stderr","time":"2020-10-27T12:51:51.629623287Z"}
{"log":"I1027 12:51:51.629456 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 \u003cnil\u003e 0 \u003cnil\u003e}] \u003cnil\u003e \u003cnil\u003e}\n","stream":"stderr","time":"2020-10-27T12:51:51.629671282Z"}
{"log":"I1027 12:51:51.629471 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\n","stream":"stderr","time":"2020-10-27T12:51:51.62968064Z"}pod "test-logs" deleted
# 16: CRDs
kubectl apply -f https://download.elastic.co/downloads/eck/1.9.1/crds.yaml
# 16: Operateur
kubectl apply -f https://download.elastic.co/downloads/eck/1.9.1/operator.yaml
...
{"log.level":"info","@timestamp":"2020-11-01T17:01:06.426Z","log.logger":"controller-runtime.controller","message":"Starting workers","service.version":"1.2.1-b5316231","service.type":"eck","ecs.version":"1.4.0","controller":"enterprisesearch-controller","worker count":3}
{"log.level":"info","@timestamp":"2020-11-01T17:01:06.426Z","log.logger":"controller-runtime.controller","message":"Starting workers","service.version":"1.2.1-b5316231","service.type":"eck","ecs.version":"1.4.0","controller":"elasticsearch-controller","worker count":3}Avec le contenu yaml suivant :
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: elasticsearch
spec:
version: 7.9.3
nodeSets:
- name: default
count: 1
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: longhorn
config:
node.master: true
node.data: true
node.ingest: true
node.store.allow_mmap: false
elasticsearch.elasticsearch.k8s.elastic.co/elasticsearch created
NAME HEALTH NODES VERSION PHASE AGE
elasticsearch green 1 7.9.3 Ready 106s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch-es-http ClusterIP 10.99.41.114 <none> 9200/TCP 2m24s
PASSWORD=$(kubectl get secret elasticsearch-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')
curl -u "elastic:$PASSWORD" -k "https://CLUSTER_IP_ELASTICSEARCH:9200"{
"name" : "elasticsearch-es-default-0",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "76FfZR4ARxO78QBQw_kBhg",
"version" : {
"number" : "7.9.3",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "c4138e51121ef06a6404866cddc601906fe5c868",
"build_date" : "2020-10-16T10:36:16.141335Z",
"build_snapshot" : false,
"lucene_version" : "8.6.2",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
Perfect!
With the following YAML content:
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
name: kibana
spec:
version: 7.9.3
count: 1
elasticsearchRef:
name: elasticsearch
kibana.kibana.k8s.elastic.co/kibana created
NAME HEALTH NODES VERSION AGE
kibana green 1 7.9.3 2m23s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kibana-kb-http ClusterIP 10.106.23.116 <none> 5601/TCP 2m45s
kubectl get secret elasticsearch-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
pb809RTC51EVCd3f19i9UVW5
Our Kibana is now installed! You can access it at the following URL: https://MASTER_EXTERNAL_IP:5601
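Since kibana-kb-http is a ClusterIP service, one possible way to reach it on the master's public IP is a port-forward bound to all interfaces (a sketch, using the service name shown above):
# Expose Kibana on port 5601 of the machine running this command
kubectl port-forward --address 0.0.0.0 service/kibana-kb-http 5601:5601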
Login page:

Home page:

With the following YAML content:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
name: filebeat
spec:
type: filebeat
version: 7.9.3
elasticsearchRef:
name: elasticsearch
config:
filebeat.inputs:
- type: container
paths:
- /var/log/containers/*.log
daemonSet:
podTemplate:
spec:
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
securityContext:
runAsUser: 0
containers:
- name: filebeat
volumeMounts:
- name: varlogcontainers
mountPath: /var/log/containers
- name: varlogpods
mountPath: /var/log/pods
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
volumes:
- name: varlogcontainers
hostPath:
path: /var/log/containers
- name: varlogpods
hostPath:
path: /var/log/pods
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
beat.beat.k8s.elastic.co/filebeat created
NAME HEALTH AVAILABLE EXPECTED TYPE VERSION AGE
filebeat green 2 2 filebeat 7.9.2 94s
Creating the index pattern:

Index pattern name:

Time field:

Discover:

Logs:

To uninstall our ELK stack deployed via ECK:
elasticsearch.elasticsearch.k8s.elastic.co "elasticsearch" deleted
beat.beat.k8s.elastic.co "filebeat" deleted
kibana.kibana.k8s.elastic.co "kibana" deleted
Preparing the upgrade
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring-1.28.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring-1.28.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
To begin, kubeadm itself must be upgraded:
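A minimal sketch, assuming the same 1.28.8-1.1 package revision that is used for kubelet and kubectl later on:
sudo apt-mark unhold kubeadm
sudo apt-get install kubeadm=1.28.8-1.1
sudo apt-mark hold kubeadm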
Let's check the kubeadm version:
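For example:
kubeadm version
It should now report v1.28.8, as confirmed by the upgrade plan output below.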
We now need to drain the master node so that we can upgrade it:
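A sketch of the usual drain command (extra flags such as --delete-emptydir-data may be needed depending on the workloads):
kubectl drain master --ignore-daemonsets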
We can preview the upgrade as follows:
sudo kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.27.12
[upgrade/versions] kubeadm version: v1.28.8
I0408 06:40:22.060915 4163 version.go:256] remote version is much newer: v1.29.3; falling back to: stable-1.28
[upgrade/versions] Target version: v1.28.8
[upgrade/versions] Latest version in the v1.27 series: v1.27.12
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT TARGET
kubelet 3 x v1.27.9 v1.28.8
Upgrade to the latest stable version:
COMPONENT CURRENT TARGET
kube-apiserver v1.27.12 v1.28.8
kube-controller-manager v1.27.12 v1.28.8
kube-scheduler v1.27.12 v1.28.8
kube-proxy v1.27.12 v1.28.8
CoreDNS v1.10.1 v1.10.1
etcd 3.5.9-0 3.5.12-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.28.8
_____________________________________________________________________
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.
API GROUP CURRENT VERSION PREFERRED VERSION MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io v1alpha1 v1alpha1 no
kubelet.config.k8s.io v1beta1 v1beta1 no
_____________________________________________________________________
We can now upgrade the cluster components:
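That is, as suggested at the end of the plan output:
sudo kubeadm upgrade apply v1.28.8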
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.28.8"
[upgrade/versions] Cluster version: v1.27.12
[upgrade/versions] kubeadm version: v1.28.8
[upgrade] Are you sure you want to proceed? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
W0408 06:41:41.559443 4249 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.28.8" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-04-08-06-41-48/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests2343628394"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-04-08-06-41-48/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-04-08-06-41-48/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-04-08-06-41-48/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config1974182237/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.28.8". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
We can now bring the master node back into service:
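The command is simply:
kubectl uncordon master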
node/master uncordoned
We must now upgrade the kubelet and kubectl:
sudo apt-mark unhold kubectl kubelet
sudo apt-get install kubectl=1.28.8-1.1 kubelet=1.28.8-1.1
sudo apt-mark hold kubectl kubelet
Finally, we must restart the kubelet:
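A minimal sketch of the restart:
sudo systemctl daemon-reload
sudo systemctl restart kubelet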
Verifying the master upgrade
ubuntu@master:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 19m v1.28.8
worker-0 Ready <none> 14m v1.27.9
worker-1 Ready <none> 14m v1.27.9
We must now upgrade the workers:
To be done on the worker-0 and worker-1 nodes.
Preparing the upgrade
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring-1.28.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring-1.28.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
As with the master, we must drain the worker nodes:
Repeat these steps for the second node, proceeding node by node (not in parallel).
On the master:
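A sketch of the drain command for the first worker (run the equivalent for worker-1 when its turn comes):
kubectl drain worker-0 --ignore-daemonsets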
We must now update the configuration of worker-0:
On worker-0:
sudo kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config2717758596/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
Finally, as on the master, we must upgrade the kubelet and kubectl:
sudo apt-mark unhold kubectl kubelet
sudo apt-get install kubectl=1.28.8-1.1 kubelet=1.28.8-1.1
sudo apt-mark hold kubectl kubelet
Taking care to restart the kubelet:
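Same sequence as on the master:
sudo systemctl daemon-reload
sudo systemctl restart kubelet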
Without forgetting to bring the node back into service:
On the master:
kubectl uncordon worker-0
We can now list the nodes:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 25m v1.28.8
worker-0 Ready <none> 19m v1.28.8
worker-1 Ready <none> 19m v1.27.9
Now repeat the upgrade for the worker-1 node.
Then list the pods to check that everything is working:
kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-f9fd979d6-jhcg9 1/1 Running 0 7m44s
kube-system coredns-f9fd979d6-mjfzf 1/1 Running 0 7m44s
kube-system etcd-master 1/1 Running 1 11m
kube-system kube-apiserver-master 1/1 Running 0 11m
kube-system kube-controller-manager-master 1/1 Running 0 11m
kube-system kube-proxy-4mvtr 1/1 Running 0 14m
kube-system kube-proxy-lkvxn 1/1 Running 0 13m
kube-system kube-scheduler-master 1/1 Running 0 11m
kube-system weave-net-t2h8r 2/2 Running 0 24m
kube-system weave-net-zxg6p 2/2 Running 1 23m
Note: the CNI plugin must be upgraded independently.
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
--cacert=<trusted-ca-file> --cert=<cert-file> --key=<key-file> \
snapshot save <backup-file-location>
where trusted-ca-file, cert-file, and key-file can be obtained from the etcd Pod description, which on the master gives the following command:
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
snapshot save etcd-bkp
The snapshot created a file named etcd-bkp. Check the status of the snapshot:
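A minimal sketch of the status check and of the restore step referred to below (the default restore output directory is ./default.etcd):
# Inspect the snapshot (hash, revision, total keys, size)
ETCDCTL_API=3 etcdctl snapshot status etcd-bkp --write-out=table
# Restore the snapshot into a new data directory (./default.etcd by default)
ETCDCTL_API=3 etcdctl snapshot restore etcd-bkp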
Note that the restore operation generated a default.etcd directory.
Before replacing etcd's data with the new restore:
A pod created after the backup must no longer exist after the restore.
Create a namespace named kubeops.
Create a pod with the following characteristics:
| Name | webserver |
| Container name | webserver |
| Image | latest version of nginx |
| Namespace | kubeops |
- Which node is your pod running on?
- Connect to the pod's container and check its OS with the command cat /etc/os-release
- Check the pod's logs
Add a new container named webwatcher to the pod created previously, using the image afakharany/watcher:latest.
Check that both containers are running in the pod.
Connect to the webwatcher container and display the contents of the /etc/hosts file.
Launch a Deployment named "nginx-deployment" with 2 replicas, containing a container named "nginxcont", in the "kubeops" namespace, using the nginx image in version 1.17.10, and set port 80 as the exposed port.
Increase the number of replicas of the deployment to 4 with the kubectl scale command.
Update your application's image to the new version nginx:1.9.1 with the kubectl set image command and watch the rollout of the update.
Roll back the deployment update.
Expose your application with a NodePort service on port 30000 of the workers.
Create a DaemonSet named prometheus-daemonset, with a container named prometheus, in the "kubeops" namespace, using the prom/node-exporter image, and set port 9100 as the exposed port.
The DaemonSet created previously is present on every node of the cluster. We no longer want it to run on the worker-1 node. Find the right strategy so that prometheus-daemonset only runs on worker-0 and the master.
This exercise demonstrates the use of a Secret and a ConfigMap.
Creating the secret:
kubectl create secret tls nginx-certs --cert=tls.crt --key=tls.key
Run kubectl describe to verify the secret you created.
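For example, with the secret name used above:
kubectl describe secret nginx-certs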
Creating the ConfigMap
To customize the configuration of your nginx server, create a ConfigMap from the nginx-custom.conf configuration file with the following command: kubectl create configmap nginx-config --from-file nginx-custom.conf
Creating a Deployment
Test your application by exposing it through a Service.
We want to deploy a MySQL database on the cluster. Create a "mysql-database" deployment with 1 replica, containing a container named "database" running the latest version of MySQL.
Your application should show an error because it is missing the MySQL root password.
Redeploy the application, passing the MySQL root password as an environment variable, with the following value: MYSQL_ROOT_PASSWORD=test.
Users and permissions in Kubernetes
Generate a certificate for a user named dev. Add the dev user's credentials to our kubeconfig file. Then check whether dev is allowed to list pods by running: kubectl --user=dev get pods
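A minimal sketch of one possible approach, assuming the kubeadm CA files in /etc/kubernetes/pki and illustrative file names (dev.key, dev.csr, dev.crt):
# Generate a private key and a certificate signing request for the user "dev"
openssl genrsa -out dev.key 2048
openssl req -new -key dev.key -out dev.csr -subj "/CN=dev"
# Sign the CSR with the cluster CA (valid for one year)
sudo openssl x509 -req -in dev.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out dev.crt -days 365
# Add the credentials to the kubeconfig, then test
kubectl config set-credentials dev --client-certificate=dev.crt --client-key=dev.key --embed-certs=true
kubectl --user=dev get pods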
Create a role that allows listing pods, then bind the role to the dev user. Now check whether dev can list the pods.
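A sketch using the imperative commands, with illustrative names pod-reader and pod-reader-binding:
# Role limited to reading pods in the current namespace
kubectl create role pod-reader --verb=get,list,watch --resource=pods
# Bind it to the user "dev"
kubectl create rolebinding pod-reader-binding --role=pod-reader --user=dev
# Verify
kubectl auth can-i list pods --as=dev
kubectl --user=dev get pods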
You will notice that dev is limited to the namespace in which the role was created. You decide to allow dev to list pods in all namespaces. Put an appropriate solution in place.
Create a static pod with a redis image. Add a request of 100Mi of RAM and 100m of CPU, then a limit of 200Mi of RAM and 150m of CPU.
Install the WordPress Helm chart available at this link. Change the default service type defined in the values.yaml file to NodePort.
Troubleshooting: Make the application work correctly and display a web page with the computation of Pi. Fix all the errors in the deployment and the services.
# 19: GOAL: get the application to respond to curl http://Something:8020
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: pi-web
labels:
k8s.alterwaylabs.fr: troubleshooting
spec:
replicas: 0
selector:
matchLabels:
app: pi-web
template:
metadata:
labels:
app: pi-web-app
spec:
containers:
- image: kiamol/ch05-pi-app
command: ["donet", "Pi.Web.dll", "-m", "web"]
name: web
ports:
- containerPort: 80
name: http
resources:
limits:
cpu: "32"
memory: "128Gi"
readinessProbe:
tcpSocket:
port: 8020
periodSeconds: 5
livenessProbe:
httpGet:
path: /healthy
port: 80
periodSeconds: 30
failureThreshold: 1
---
apiVersion: v1
kind: Service
metadata:
name: pi-np
labels:
k8s.alterwaylabs.fr: troubleshooting
spec:
selector:
app: pi-web-pod
ports:
- name: http
port: 8020
targetPort: app
nodePort: 8020
type: NodePort
---
apiVersion: v1
kind: Service
metadata:
name: pi-lb
labels:
k8s.alterwaylabs.fr: troubleshooting
spec:
selector:
app: pi-web-pod
ports:
- name: http
port: 8020
targetPort: app
type: ClusterIP
Troubleshooting Deployments
Fix:
Deployments
Labels in the pod template spec so that the pods are matched by the Deployment selector
Replicas set to 0 >> 1
Limits/requests far too high
Image name >> check on Docker Hub
Container-level command with a typo
Readiness probe on the wrong port
Liveness probe /healthy >> /health
Services
Target port 8020 invalid
Service pod selector invalid
Service port name invalid
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pi-web
  labels:
    k8s.alterwaylabs.fr: troubleshooting
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pi-web
  template:
    metadata:
      labels:
        app: pi-web
    spec:
      containers:
        - image: kiamol/ch05-pi
          command: ["dotnet", "Pi.Web.dll", "-m", "web"]
          name: web
          ports:
            - containerPort: 80
              name: http
          resources:
            limits:
              cpu: "0.5"
              memory: "1Gi"
          readinessProbe:
            tcpSocket:
              port: 80
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /
              port: 80
            periodSeconds: 30
            failureThreshold: 1
---
apiVersion: v1
kind: Service
metadata:
  name: pi-np
  labels:
    k8s.alterwaylabs.fr: troubleshooting
spec:
  selector:
    app: pi-web
  ports:
    - name: http
      port: 8020
      targetPort: http
      nodePort: 30020
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: pi-lb
  labels:
    k8s.alterwaylabs.fr: troubleshooting
spec:
  selector:
    app: pi-web
  ports:
    - name: http
      port: 8020
      targetPort: http
  type: LoadBalancer
```