DaemonSet
What is a DaemonSet?
Like other workload objects, DaemonSets manage groups of replicated Pods. However, DaemonSets attempt to adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes. As you add nodes to a node pool, DaemonSets automatically add Pods to the new nodes as needed.
DaemonSets use a Pod template, which contains a specification for its Pods. The Pod specification determines how each Pod should look: what applications should run inside its containers, which volumes it should mount, its labels and selectors, and more.
Usage patterns
DaemonSets are useful for deploying ongoing background tasks that you need to run on all or certain nodes, and which do not require user intervention. Examples of such tasks include storage daemons like ceph, log collection daemons like fluentd, and node monitoring daemons like collectd.
For example, you could have DaemonSets for each type of daemon run on all of your nodes. Alternatively, you could run multiple DaemonSets for a single type of daemon, but have them use different configurations for different hardware types and resource needs.
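As an illustrative sketch of the second pattern (the names, labels, and image below are hypothetical), one of two DaemonSets running the same monitoring daemon on a particular hardware class might look like:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-highmem # hypothetical name
spec:
  selector:
    matchLabels:
      name: monitoring-highmem
  template:
    metadata:
      labels:
        name: monitoring-highmem
    spec:
      nodeSelector:
        hardware: highmem # hypothetical node label for high-memory machines
      containers:
      - name: collectd
        image: collectd:latest # hypothetical image
        resources:
          limits:
            memory: 500Mi # higher limit for high-memory nodes
```

A second DaemonSet, say monitoring-standard, could then target nodes labeled hardware: standard with a lower memory limit, so each hardware class gets an appropriately tuned copy of the daemon.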
Creating DaemonSets
cat << EOF > daemonset.yaml
apiVersion: extensions/v1beta1 # For Kubernetes version 1.9 and later, use apps/v1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  selector:
    matchLabels:
      name: fluentd # Label selector that determines which Pods belong to the DaemonSet
  template:
    metadata:
      labels:
        name: fluentd # Pod template's label, which must match the selector above
    spec:
      nodeSelector:
        type: prod # Node label selector that determines on which nodes the Pod should be scheduled
        # In this case, Pods are only scheduled to nodes bearing the label "type: prod"
      containers:
      - name: fluentd
        image: gcr.io/google-containers/fluentd-elasticsearch:1.20
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
EOF
kubectl create -f daemonset.yaml
daemonset.extensions "fluentd" created
In sum, the Pod specification contains the following instructions:
- Label the Pod as fluentd.
- Use the node label selector type: prod to schedule the Pod to matching nodes, and do not schedule it on nodes which do not bear that label. (Alternatively, omit the nodeSelector field to schedule on all nodes.)
- Run fluentd-elasticsearch at version 1.20.
- Request some memory and CPU resources.
Since no node yet bears the type: prod label, the DaemonSet has not scheduled any Pods:
kubectl get ds
NAME      DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd   0         0         0         0            0           type=prod       47m
List the cluster's nodes:
kubectl get nodes
NAME                                                  STATUS    ROLES     AGE       VERSION
gke-persistent-disk-tuto-default-pool-e2669282-4g5m   Ready     <none>    1h        v1.8.10-gke.0
gke-persistent-disk-tuto-default-pool-e2669282-75xl   Ready     <none>    1h        v1.8.10-gke.0
gke-persistent-disk-tuto-default-pool-e2669282-knp6   Ready     <none>    1h        v1.8.10-gke.0
Label one of the nodes so it matches the node selector:
kubectl label node gke-persistent-disk-tuto-default-pool-e2669282-4g5m type=prod
node "gke-persistent-disk-tuto-default-pool-e2669282-4g5m" labeled
The DaemonSet immediately schedules a Pod to the newly labeled node:
kubectl get ds
NAME      DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd   1         1        1         1            1           type=prod       49m
Updating DaemonSets (TODO)
You can update a DaemonSet by changing its Pod specification, resource requests and limits, labels, and annotations.
DaemonSets have two update strategy types:
- OnDelete
- RollingUpdate
OnDelete
This is the default update strategy for backward compatibility. With the OnDelete update strategy, after you update a DaemonSet template, new DaemonSet Pods are only created when you manually delete old DaemonSet Pods. This matches the behavior of DaemonSets in Kubernetes version 1.5 and earlier.
Find the update strategy of the fluentd DaemonSet:
kubectl get ds/fluentd -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
OnDelete
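Although OnDelete is applied here by default, you can also set it explicitly in the manifest; a minimal fragment (shown for the apps/v1 API, where the strategy type is a nested field):

```yaml
spec:
  updateStrategy:
    type: OnDelete # Pods are replaced only when you delete them manually
```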
RollingUpdate
With the RollingUpdate update strategy, after you update a DaemonSet template, old DaemonSet Pods are killed and new DaemonSet Pods are created automatically, in a controlled fashion.
cat << EOF > rolling_ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: frontend
spec:
  updateStrategy: RollingUpdate
    maxUnavailable: 1
    minReadySeconds: 0
  template:
    metadata:
      labels:
        app: frontend-webserver
    spec:
      nodeSelector:
        app: frontend-node
      containers:
      - name: webserver
        image: nginx
        ports:
        - containerPort: 80
EOF
kubectl create -f rolling_ds.yaml
error: error converting YAML to JSON: yaml: line 6: mapping values are not allowed in this context
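The manifest fails to parse because updateStrategy takes a mapping rather than a bare string, and maxUnavailable belongs in a nested rollingUpdate block (minReadySeconds sits at the spec level). A corrected sketch, written for the apps/v1 API, which also requires an explicit selector:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: frontend-webserver
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1 # At most one node's Pod may be unavailable during the rollout
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app: frontend-webserver
    spec:
      nodeSelector:
        app: frontend-node
      containers:
      - name: webserver
        image: nginx
        ports:
        - containerPort: 80
```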