Cluster Architecture
Master Node
- ETCD cluster
- kube-scheduler
- kube-controller-manager
These components communicate via the kube-apiserver
Worker Node
- container runtime engine, e.g. Docker, rkt (Rocket), containerd
- kubelet: the agent that runs on each worker node and listens for instructions from the kube-apiserver
- containers
Services deployed on worker nodes communicate with each other via kube-proxy
ETCD
- a distributed, reliable key-value store
- client communication on port 2379
- server-to-server (peer) communication on port 2380
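A sketch of how those ports show up in a kubeadm cluster: etcd runs as a static pod, and its manifest carries them as flags. The IP address and image tag below are placeholders that vary by cluster.

```yaml
# Excerpt of a typical /etc/kubernetes/manifests/etcd.yaml (kubeadm setup);
# 10.0.0.10 stands in for the master node's IP
apiVersion: v1
kind: Pod
metadata:
  name: etcd
  namespace: kube-system
spec:
  containers:
  - name: etcd
    image: registry.k8s.io/etcd:3.5.9-0                   # tag varies by k8s version
    command:
    - etcd
    - --advertise-client-urls=https://10.0.0.10:2379      # client traffic (kube-apiserver)
    - --listen-client-urls=https://127.0.0.1:2379,https://10.0.0.10:2379
    - --listen-peer-urls=https://10.0.0.10:2380           # server-to-server traffic
    - --initial-advertise-peer-urls=https://10.0.0.10:2380
```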
kube-apiserver
- the primary management component
- setup:
  - with the kubeadm tool, kube-apiserver is deployed as a pod in the kube-system namespace
  - its manifest is at /etc/kubernetes/manifests/kube-apiserver.yaml
  - in a manual (non-kubeadm) setup, the options live at /etc/systemd/system/kube-apiserver.service
- to find the kube-apiserver process on the master node:

```sh
ps -aux | grep kube-apiserver
```
example: applying a deployment using kubectl (see the commands below), the kube-apiserver:
- authenticates the user
- validates the HTTP request
- the kube-scheduler watches for changes through the kube-apiserver, then:
  - retrieves node information from the kube-apiserver
  - schedules the pod onto a node, through the kube-apiserver, down to the kubelet
  - the kube-apiserver updates the pod info in ETCD
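As a concrete run of that flow (the deployment name and image here are arbitrary):

```sh
# kubectl sends an authenticated HTTP request to the kube-apiserver
kubectl create deployment nginx --image=nginx --replicas=2

# watch kube-scheduler assign the new pods to nodes
kubectl get pods -o wide --watch
```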
kube-controller-manager
- continuously monitors the state of components
- the controllers are packaged into a single process called kube-controller-manager, which includes:
  - deployment-controller, cronjob-controller, service-account-controller …
  - namespace-controller, job-controller, node-controller …
  - endpoint-controller, replicaset-controller, replication-controller …
- remediates the situation to keep the cluster in the desired state
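On a kubeadm cluster the controller set can be inspected through the static pod manifest; the --controllers flag (default "*", i.e. all controllers) selects which ones run:

```sh
# kube-controller-manager runs as a static pod in a kubeadm cluster
kubectl get pods -n kube-system | grep kube-controller-manager

# inspect its options, e.g. --controllers (defaults to "*" = all controllers)
cat /etc/kubernetes/manifests/kube-controller-manager.yaml
```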
kube-scheduler
- decides which pod goes to which node (the kubelet actually places the pod)
- filters out nodes that cannot run the pod (e.g. not enough CPU or memory for the pod's requests)
- ranks the remaining nodes and picks the best one (see the sketch below)
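Filtering and ranking are driven largely by the pod's resource requests. A minimal pod sketch, with arbitrary names and values, that gives the scheduler something to filter on:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: webapp
    image: nginx
    resources:
      requests:        # nodes without this much free CPU/memory are filtered out
        cpu: "500m"
        memory: 256Mi
```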
kubelet
- follows instructions from the kube-apiserver (carrying the kube-scheduler's decisions) and controls the container runtime engine (e.g. Docker) to run or remove containers
- when using the kubeadm tool to deploy the cluster, the kubelet is not installed by default on worker nodes; it must be installed manually
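Because the kubelet runs as a host process rather than a pod, it is usually managed by systemd; two quick ways to verify it on a node:

```sh
systemctl status kubelet    # kubelet as a systemd service
ps -aux | grep kubelet      # or find the raw process
```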
kube-proxy
- runs on each node in the cluster
- creates iptables rules on each node to forward traffic heading to the IP of a service to the IP of an actual pod
- the kubeadm tool deploys kube-proxy as a DaemonSet, so one pod runs on each node
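A quick check of both claims; the iptables command assumes kube-proxy runs in its default iptables mode and that you have shell access to a node:

```sh
# one kube-proxy pod per node, managed by a DaemonSet
kubectl get daemonset kube-proxy -n kube-system

# on a node: the NAT rules kube-proxy created for services
sudo iptables -t nat -L KUBE-SERVICES
```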
pod
- containers are encapsulated into a pod
- a pod is a single instance of an application, and the smallest object in k8s
- containers in the same pod share storage and network namespaces, and are created and removed at the same time
- multi-container pods are a rare use case (see the sketch below)
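A minimal multi-container pod sketch (names and images are arbitrary); both containers share the pod's network namespace, so they can reach each other on localhost, and they live and die together:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp
  labels:
    app: webapp
spec:
  containers:
  - name: webapp
    image: nginx
  - name: log-agent        # helper container in the same pod
    image: busybox
    command: ["sh", "-c", "tail -f /dev/null"]
```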
ReplicationController
- supported in apiVersion: v1
- a process that monitors pods
- maintains HA by keeping the specified number of pods running across the nodes
- only cares about pods whose RestartPolicy is set to Always
- scalable and replaceable applications should be managed by a controller
- use cases: rolling updates, multiple release tracks (multiple replication controllers replicating the same pod but using different labels); see the sketch below
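A minimal ReplicationController sketch (names and labels arbitrary); note the flat, equality-only selector:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: webapp-rc
spec:
  replicas: 3
  selector:          # plain equality-based selector
    app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nginx
```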
ReplicaSets
- the next generation of ReplicationController
- supported in apiVersion: apps/v1
- enhanced filtering in .spec.selector (the major difference)
- be aware of non-template (bare) pods carrying the same labels: the ReplicaSet counts and acquires them too
- using a Deployment as a replacement is recommended; it owns and manages its ReplicaSets (see the sketch below)
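The same workload as a ReplicaSet; the selector is now a full object supporting matchLabels and matchExpressions, which is the enhanced filtering noted above:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: webapp-rs
spec:
  replicas: 3
  selector:
    matchLabels:       # set-based matchExpressions are also supported
      app: webapp
  template:
    metadata:
      labels:
        app: webapp    # must match the selector, or the API server rejects it
    spec:
      containers:
      - name: webapp
        image: nginx
```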
Deployment
- provides replication via ReplicaSets, plus (see the commands below):
  - rolling updates
  - rollouts (and rollbacks)
  - pause and resume
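A few commands that exercise those features, assuming a deployment named webapp with a container named webapp already exists:

```sh
# trigger a rolling update by changing the image
kubectl set image deployment/webapp webapp=nginx:1.25

# observe and control the rollout
kubectl rollout status deployment/webapp
kubectl rollout pause deployment/webapp
kubectl rollout resume deployment/webapp
kubectl rollout undo deployment/webapp    # roll back to the previous revision
```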
Namespace
namespaces created at cluster creation:
- kube-system
- kube-public
- default
each namespace can be assigned a quota of resources
a DNS entry in the SERVICE_NAME.NAMESPACE.svc.cluster.local format is automatically created at service creation; cluster.local is the default domain name of the cluster
to permanently set the namespace for the current context:

```sh
kubectl config set-context $(kubectl config current-context) --namespace=$NAMESPACE
```
ResourceQuota
- useful to limit the compute resources of a single namespace (see the sketch below)
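A minimal ResourceQuota sketch with arbitrary values, capping compute usage in a namespace named dev:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: dev
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 4Gi
    limits.cpu: "8"
    limits.memory: 8Gi
```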
Service
- NodePort: listens on a port on the node and forwards requests to the pod (see the sketch after this list)
  - the nodePort must be in the range 30000-32767
  - only port is required; targetPort defaults to the same value as port, and nodePort can be automatically allocated
  - the service uses a random algorithm to balance the load between pods
  - the service is automatically configured by k8s to span the cluster and map the target port to the same node port across all nodes
- ClusterIP: the default; creates a virtual IP inside the cluster
  - groups pods together and provides a single endpoint to access them
  - each service gets a name and a stable IP address
  - a default service named kubernetes is created by k8s at launch, on port 443
- LoadBalancer: provisions an external load balancer via the cloud provider
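A minimal NodePort service sketch (name, label, and port values arbitrary) tying the three port fields together:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    app: webapp        # forwards to pods carrying this label
  ports:
  - port: 80           # the service's own port (required)
    targetPort: 80     # pod port; defaults to port if omitted
    nodePort: 30008    # must be 30000-32767; auto-allocated if omitted
```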
Running a curl pod is helpful for manually testing a service:

```sh
# start an interactive pod that has curl installed
kubectl run curl --image=radial/busyboxplus:curl -i --tty

# inside the pod: probe the default kubernetes service
curl -k -I https://kubernetes:443
```