SaltStack (TODO)

Kubernetes-Saltstack

Kubernetes-Saltstack provides an easy way to deploy a highly available (H/A) Kubernetes cluster using Salt.

Getting started

git clone https://github.com/valentin2105/Kubernetes-Saltstack.git /srv/salt
Cloning into '/srv/salt'...
remote: Counting objects: 776, done.
remote: Total 776 (delta 0), reused 0 (delta 0), pack-reused 776
Receiving objects: 100% (776/776), 118.27 KiB | 141.00 KiB/s, done.
Resolving deltas: 100% (410/410), done.
Checking connectivity... done.

ln -s /srv/salt/pillar /srv/pillar
wget -q --show-progress --https-only --timestamping \
   https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 \
   https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
cfssl_linux-amd64                                    100%[======================================================================================================================>]   9.90M  2.03MB/s    in 4.9s
cfssljson_linux-amd64                                100%[======================================================================================================================>]   2.17M   618KB/s    in 3.6s
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64
sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
cd /srv/salt/k8s-certs
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

2018/06/10 16:33:15 [INFO] generating a new CA key and certificate from CSR
2018/06/10 16:33:15 [INFO] generate received request
2018/06/10 16:33:15 [INFO] received CSR
2018/06/10 16:33:15 [INFO] generating key: ecdsa-256
2018/06/10 16:33:15 [INFO] encoded CSR
2018/06/10 16:33:15 [INFO] signed certificate with serial number 565514221149210471468591032803693789210883052920
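
Before going further, you can inspect the generated CA (cfssljson -bare ca writes ca.pem and ca-key.pem in the current directory) to confirm the subject and validity dates:

openssl x509 -in ca.pem -noout -subject -dates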

Because we generate our own CA and certificates for the cluster, you MUST put every hostname of the Kubernetes cluster (masters & workers) in k8s-certs/kubernetes-csr.json (the hosts field).

You can use either public or private names, but they must be resolvable somewhere (DNS provider, internal DNS server, or /etc/hosts file).
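
If you rely on /etc/hosts, a minimal sketch using the sample addresses from this walkthrough (the master*.domain.tld aliases are illustrative, not required) would be:

# /etc/hosts on every node -- sample IPs and hostnames used below
159.65.150.4    ubuntu-c-4-8gib-blr1-01 master01.domain.tld
159.65.150.25   ubuntu-c-4-8gib-blr1-02 master02.domain.tld
159.65.150.39   ubuntu-c-4-8gib-blr1-03 master03.domain.tld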

vim kubernetes-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.32.0.1",
    "kubernetes.default",
    "kubernetes.default.svc.cluster.local",
    "ha-master-hostname.domain.tld",
    "ubuntu-c-4-8gib-blr1-01 ubuntu-c-4-8gib-blr1-01",
    "ubuntu-c-4-8gib-blr1-01 ubuntu-c-4-8gib-blr1-02",
    "ubuntu-c-4-8gib-blr1-01 ubuntu-c-4-8gib-blr1-03",
    "159.65.150.4",
    "159.65.150.25",
    "159.65.150.39",
    "master01.domain.tld",
    "master02.domain.tld",
    "master03.domain.tld",
    "worker01.domain.tld",
    "worker02.domain.tld",
    "worker03.domain.tld",
    "worker04.domain.tld",
    "127.0.0.1"
  ],
  "key": {
    "algo": "ecdsa",
    "size": 256
  },
  "names": [
    {
      "C": "ANY",
      "L": "Country",
      "O": "Kubernetes",
      "OU": "Cluster",
      "ST": "Local"
    }
  ]
}
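
With the hosts list filled in, the server certificate is generated from the same directory. A sketch of the usual cfssl invocation, assuming a ca-config.json with a "kubernetes" signing profile sits alongside the CSR (adjust file names to what the repository actually ships):

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes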

After that, edit pillar/cluster_config.sls to configure your future Kubernetes cluster.

vim /srv/salt/pillar/cluster_config.sls
kubernetes:
  version: v1.10.1
  domain: cluster.local
  master:
#    count: 1
#    hostname: master.domain.tld
#    ipaddr: 10.240.0.10
    count: 3
    cluster:
      node01:
        hostname: ubuntu-c-4-8gib-blr1-01
        ipaddr: 159.65.150.4
      node02:
        hostname: ubuntu-c-4-8gib-blr1-02
        ipaddr: 159.65.150.25
      node03:
        hostname: ubuntu-c-4-8gib-blr1-03
        ipaddr: 159.65.150.39
    encryption-key: 'w3RNESCMG+o3GCHTUcrQUUdq6CFV72q/Zik9LAO8uEc='
    etcd:
      version: v3.3.5
  worker:
    runtime:
      provider: docker
      docker:
        version: 18.03.0-ce
        data-dir: /dockerFS
    networking:
      cni-version: v0.7.1
      provider: calico
      calico:
        version: v3.1.1
        cni-version: v3.1.1
        calicoctl-version: v3.1.1
        controller-version: 3.1-release
        as-number: 64512
        token: hu0daeHais3aCHANGEMEhu0daeHais3a
        ipv4:
          range: 192.168.0.0/16
          nat: true
          ip-in-ip: true
        ipv6:
          enable: false
          nat: true
          interface: ens18
          range: fd80:24e2:f998:72d6::/64
  global:
    clusterIP-range: 10.32.0.0/16
    helm-version: v2.8.2
    dashboard-version: v1.8.3
    admin-token: Haim8kay1rarCHANGEMEHaim8kay1rar
    kubelet-token: ahT1eipae1wiCHANGEMEahT1eipae1wi
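
The encryption-key and the two CHANGEME tokens above must be replaced with your own random values. One way to generate suitable ones:

# 32 random bytes, base64-encoded, for master.encryption-key
head -c 32 /dev/urandom | base64
# random alphanumeric string for admin-token / kubelet-token
tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 32; echo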

Deployment

echo 'master: 159.65.150.4' | tee -a /etc/salt/minion

service salt-minion restart
salt-key -A

The following keys are going to be accepted:
Unaccepted Keys:
ubuntu-c-4-8gib-blr1-01
ubuntu-c-4-8gib-blr1-02
ubuntu-c-4-8gib-blr1-03
Proceed? [n/Y] Y
Key for minion ubuntu-c-4-8gib-blr1-01 accepted.
Key for minion ubuntu-c-4-8gib-blr1-02 accepted.
Key for minion ubuntu-c-4-8gib-blr1-03 accepted.
# Kubernetes Masters
cat << EOF > /etc/salt/grains
role: k8s-master
EOF

# Kubernetes Workers
cat << EOF > /etc/salt/grains
role: k8s-worker
EOF

# Kubernetes Master & Workers
cat << EOF > /etc/salt/grains
role:
  - k8s-master
  - k8s-worker
EOF

service salt-minion restart
salt -G 'role:k8s-master' test.ping
ubuntu-c-4-8gib-blr1-01:
    True

salt -G 'role:k8s-worker' test.ping
ubuntu-c-4-8gib-blr1-02:
    True
ubuntu-c-4-8gib-blr1-01:
    True
ubuntu-c-4-8gib-blr1-03:
    True
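
Before applying the states for real, you can preview what the highstate would change with Salt's dry-run mode:

# Dry run: report pending changes without applying them
salt -G 'role:k8s-master' state.highstate test=True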
# Apply Kubernetes Master configurations
salt -G 'role:k8s-master' state.highstate

kubectl get componentstatuses
The connection to the server localhost:8080 was refused - did you specify the right host or port?
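
At this stage kubectl has no kubeconfig pointing at the new cluster, so the connection-refused message above is expected. As a quick sanity check, assuming kube-apiserver listens on the default secure port 6443, you can probe its health endpoint directly:

# Hypothetical check -- adjust host/port to your cluster_config.sls
curl --cacert /srv/salt/k8s-certs/ca.pem https://159.65.150.4:6443/healthz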

# Apply Kubernetes Worker configurations
salt -G 'role:k8s-worker' state.highstate
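
Once the worker highstate completes, the nodes should register with the API server. With kubectl configured against the cluster (using the admin certificate generated earlier), verification would look like:

# Assumes a kubeconfig pointed at the cluster's API server
kubectl get nodes -o wide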
