Smoke Test

In this lab you will complete a series of tasks to ensure your Kubernetes cluster is functioning correctly.

Data Encryption

In this section you will verify the ability to encrypt secret data at rest.

Create a generic secret:

kubectl create secret generic kubernetes-the-hard-way \
  --from-literal="mykey=mydata"
secret/kubernetes-the-hard-way created
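The secret value itself is stored base64-encoded in the Kubernetes API. The round trip can be sketched locally with base64, using the mydata value from the step above:

```shell
# Encode the literal value the way the API object stores it.
encoded=$(printf 'mydata' | base64)
echo "$encoded"
# Decode it back to the original value.
printf '%s' "$encoded" | base64 --decode
echo
```

Note that base64 is an encoding, not encryption; the encryption-at-rest verified in this section happens separately, inside etcd.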

Print a hexdump of the kubernetes-the-hard-way secret stored in etcd:

gcloud compute ssh controller-0 \
  --command "sudo ETCDCTL_API=3 etcdctl get \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem \
  /registry/secrets/default/kubernetes-the-hard-way | hexdump -C"
00000000  2f 72 65 67 69 73 74 72  79 2f 73 65 63 72 65 74  |/registry/secret|
00000010  73 2f 64 65 66 61 75 6c  74 2f 6b 75 62 65 72 6e  |s/default/kubern|
00000020  65 74 65 73 2d 74 68 65  2d 68 61 72 64 2d 77 61  |etes-the-hard-wa|
00000030  79 0a 6b 38 73 3a 65 6e  63 3a 61 65 73 63 62 63  |y.k8s:enc:aescbc|
00000040  3a 76 31 3a 6b 65 79 31  3a 8e eb 86 22 60 e8 63  |:v1:key1:..."`.c|
00000050  fa 66 70 cc 29 0c 41 c4  f0 36 79 d9 5e 89 74 11  |.fp.).A..6y.^.t.|
00000060  54 ca 2e f9 11 b8 48 9c  3e ad 25 7a 5a 0a df f9  |T.....H.>.%zZ...|
00000070  44 28 2a 28 e1 9e a6 66  91 70 c4 23 37 a2 b3 d2  |D(*(...f.p.#7...|
00000080  28 31 74 1c 1e 41 c2 55  0a 5e a8 2f 82 e6 0f 05  |(1t..A.U.^./....|
00000090  2c 80 13 89 33 46 5c cf  49 71 e1 48 16 fe 86 0e  |,...3F\.Iq.H....|
000000a0  87 be 54 0d af 98 d2 3e  fe 59 d1 72 b7 2f 1c 3d  |..T....>.Y.r./.=|
000000b0  81 bf fe 59 a1 df 42 bd  9b 6a 1c 3d 52 22 52 37  |...Y..B..j.=R"R7|
000000c0  b2 87 f4 d4 65 f3 98 81  21 a3 20 61 19 fa b0 01  |....e...!. a....|
000000d0  71 e4 f8 fc d9 0b 91 51  cc 38 e4 63 40 8b 81 ee  |q......Q.8.c@...|
000000e0  0a 04 4f 49 f4 5e 71 06  86 0a                    |..OI.^q...|
000000ea

The etcd key should be prefixed with k8s:enc:aescbc:v1:key1, which indicates the aescbc provider was used to encrypt the data with the key1 encryption key.
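That prefix check can be scripted. A minimal sketch, run against a sample value in the expected on-disk format (a real check would apply the same pattern to the etcdctl output above):

```shell
# Sample value shaped like the etcd record: provider prefix, then ciphertext.
stored='k8s:enc:aescbc:v1:key1:<ciphertext bytes>'
case "$stored" in
  k8s:enc:aescbc:v1:key1:*)
    echo "encrypted with the aescbc provider using key1" ;;
  k8s:enc:*)
    echo "encrypted, but with a different provider or key" ;;
  *)
    echo "WARNING: value appears to be stored in plaintext" ;;
esac
```

A value with no k8s:enc prefix would mean the secret was written before encryption at rest was enabled, or that the encryption configuration is not being applied.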

Deployments

In this section you will verify the ability to create and manage Deployments.

Create a deployment for the nginx web server:

kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created

List the pod created by the nginx deployment:

kubectl get pods -l run=nginx
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-d8b8b   1/1     Running   0          15s

Port Forwarding

In this section you will verify the ability to access applications remotely using port forwarding.

Retrieve the full name of the nginx pod:

POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}")
echo $POD_NAME
nginx-dbddb74b8-d8b8b
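The jsonpath expression {.items[0].metadata.name} walks the JSON returned by the API server and selects the first pod's name. A rough local illustration of that extraction with sed, against a trimmed, hypothetical sample of the response (the real command queries the cluster):

```shell
# Trimmed sample of what kubectl get pods -o json returns.
sample='{"items":[{"metadata":{"name":"nginx-dbddb74b8-d8b8b"}}]}'
# Pull out the first "name" field, roughly what the jsonpath query selects.
POD_NAME=$(printf '%s' "$sample" | sed -n 's/.*"name":"\([^"]*\)".*/\1/p')
echo "$POD_NAME"
```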

Forward port 8080 on your local machine to port 80 of the nginx pod:

kubectl port-forward $POD_NAME 8080:80
Forwarding from [::1]:8080 -> 80
Forwarding from 127.0.0.1:8080 -> 80

In a new terminal make an HTTP request using the forwarding address:

curl --head http://127.0.0.1:8080
HTTP/1.1 200 OK
Server: nginx/1.15.8
Date: Sat, 05 Jan 2019 11:45:26 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 25 Dec 2018 09:56:47 GMT
Connection: keep-alive
ETag: "5c21fedf-264"
Accept-Ranges: bytes

Make a GET request to retrieve the default nginx page:

curl http://127.0.0.1:8080
<!DOCTYPE html>
<html>
  <head>
    <title>Welcome to nginx!</title>
    <style>
      body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
      }
    </style>
  </head>
  <body>
    <h1>Welcome to nginx!</h1>
    <p>
      If you see this page, the nginx web server is successfully installed and
      working. Further configuration is required.
    </p>

    <p>
      For online documentation and support please refer to
      <a href="http://nginx.org/">nginx.org</a>.<br />
      Commercial support is available at
      <a href="http://nginx.com/">nginx.com</a>.
    </p>

    <p><em>Thank you for using nginx.</em></p>
  </body>
</html>

Switch back to the previous terminal and stop the port forwarding to the nginx pod:

Forwarding from [::1]:8080 -> 80
Forwarding from 127.0.0.1:8080 -> 80
Handling connection for 8080
Handling connection for 8080
^C
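Ctrl-C works in an interactive session; in a script the same teardown is usually a background job plus kill. A minimal sketch of that job-control pattern, with sleep standing in for the kubectl port-forward process:

```shell
# Start a long-running stand-in for the forwarder in the background.
sleep 60 &
PF_PID=$!
# ... HTTP requests against the forwarded port would go here ...
# Stop the forwarder, as Ctrl-C does in the interactive session.
kill "$PF_PID"
echo "forwarder stopped"
```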

Logs

In this section you will verify the ability to retrieve container logs.

Print the nginx pod logs:

kubectl logs $POD_NAME
127.0.0.1 - - [05/Jan/2019:11:45:26 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.54.0" "-"
127.0.0.1 - - [05/Jan/2019:11:45:50 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.54.0" "-"

Exec

In this section you will verify the ability to execute commands in a container.

Print the nginx version by executing the nginx -v command in the nginx container:

kubectl exec -ti $POD_NAME -- nginx -v
nginx version: nginx/1.15.8

Services

In this section you will verify the ability to expose applications using a Service.

Expose the nginx deployment using a NodePort service:

kubectl expose deployment nginx --port 80 --type NodePort
service/nginx exposed

The LoadBalancer service type cannot be used because your cluster is not configured with cloud provider integration. Setting up cloud provider integration is out of scope for this tutorial.

Retrieve the node port assigned to the nginx service:

NODE_PORT=$(kubectl get svc nginx \
  --output=jsonpath='{.spec.ports[0].nodePort}')
echo $NODE_PORT
32456
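By default the API server allocates node ports from the range 30000-32767, which is why the assigned port lands where it does. A quick sanity check, sketched with the sample value from the output above:

```shell
NODE_PORT=32456   # sample value; the real one comes from the kubectl query above
if [ "$NODE_PORT" -ge 30000 ] && [ "$NODE_PORT" -le 32767 ]; then
  echo "node port ${NODE_PORT} is inside the default NodePort range"
else
  echo "unexpected node port ${NODE_PORT}" >&2
fi
```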

Create a firewall rule that allows remote access to the nginx node port:

gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
  --allow=tcp:${NODE_PORT} \
  --network kubernetes-the-hard-way
Creating firewall...done.
Created [https://www.googleapis.com/compute/v1/projects/espblufi-android/global/firewalls/kubernetes-the-hard-way-allow-nginx-service].
NAME                                         NETWORK                  DIRECTION  PRIORITY  ALLOW      DENY  DISABLED
kubernetes-the-hard-way-allow-nginx-service  kubernetes-the-hard-way  INGRESS    1000      tcp:32456        False

Retrieve the external IP address of a worker instance:

EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
  --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
echo $EXTERNAL_IP
35.247.1.136

Make an HTTP request using the external IP address and the nginx node port:

curl -I http://${EXTERNAL_IP}:${NODE_PORT}
HTTP/1.1 200 OK
Server: nginx/1.15.8
Date: Sat, 05 Jan 2019 11:48:36 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 25 Dec 2018 09:56:47 GMT
Connection: keep-alive
ETag: "5c21fedf-264"
Accept-Ranges: bytes

Make a GET request using the external IP address and the nginx node port:

curl http://${EXTERNAL_IP}:${NODE_PORT}
<!DOCTYPE html>
<html>
  <head>
    <title>Welcome to nginx!</title>
    <style>
      body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
      }
    </style>
  </head>
  <body>
    <h1>Welcome to nginx!</h1>
    <p>
      If you see this page, the nginx web server is successfully installed and
      working. Further configuration is required.
    </p>

    <p>
      For online documentation and support please refer to
      <a href="http://nginx.org/">nginx.org</a>.<br />
      Commercial support is available at
      <a href="http://nginx.com/">nginx.com</a>.
    </p>

    <p><em>Thank you for using nginx.</em></p>
  </body>
</html>

Untrusted Workloads

In this section you will verify the ability to run untrusted workloads using gVisor.

Create the untrusted pod:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: untrusted
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  containers:
    - name: webserver
      image: gcr.io/hightowerlabs/helloworld:2.0.0
EOF
pod/untrusted created
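The io.kubernetes.cri.untrusted-workload annotation is specific to the cri-containerd plugin used in this tutorial. On newer clusters the same routing is done with the RuntimeClass API instead; a hedged sketch of the equivalent, assuming a containerd runtime handler named runsc is configured on the worker nodes (the class name gvisor is a hypothetical choice):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor        # hypothetical name for the class
handler: runsc        # must match the handler configured in containerd
---
apiVersion: v1
kind: Pod
metadata:
  name: untrusted
spec:
  runtimeClassName: gvisor
  containers:
    - name: webserver
      image: gcr.io/hightowerlabs/helloworld:2.0.0
```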

Verification

In this section you will verify the untrusted pod is running under gVisor (runsc) by inspecting the assigned worker node.

Verify the untrusted pod is running:

kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE     IP           NODE       NOMINATED NODE
busybox-bd8fb7cbd-zlmx4   1/1     Running   0          9m56s   10.200.1.2   worker-1   <none>
nginx-dbddb74b8-d8b8b     1/1     Running   0          5m4s    10.200.1.3   worker-1   <none>
untrusted                 1/1     Running   0          12s     10.200.0.3   worker-0   <none>

Get the node name where the untrusted pod is running:

INSTANCE_NAME=$(kubectl get pod untrusted --output=jsonpath='{.spec.nodeName}')
echo $INSTANCE_NAME
worker-0

SSH into the worker node:

gcloud compute ssh ${INSTANCE_NAME}

List the containers running under gVisor:

sudo runsc --root /run/containerd/runsc/k8s.io list
I0105 11:50:12.028805   23603 x:0] ***************************
I0105 11:50:12.028986   23603 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io list]
I0105 11:50:12.029074   23603 x:0] Git Revision: 50c283b9f56bb7200938d9e207355f05f79f0d17
I0105 11:50:12.029152   23603 x:0] PID: 23603
I0105 11:50:12.029218   23603 x:0] UID: 0, GID: 0
I0105 11:50:12.029286   23603 x:0] Configuration:
I0105 11:50:12.029342   23603 x:0]              RootDir: /run/containerd/runsc/k8s.io
I0105 11:50:12.029465   23603 x:0]              Platform: ptrace
I0105 11:50:12.029618   23603 x:0]              FileAccess: exclusive, overlay: false
I0105 11:50:12.029743   23603 x:0]              Network: sandbox, logging: false
I0105 11:50:12.029879   23603 x:0]              Strace: false, max size: 1024, syscalls: []
I0105 11:50:12.030101   23603 x:0] ***************************
ID                                                                 PID         STATUS      BUNDLE                                                                                                                   CREATED                OWNER
4f747e1bae0d7430b380da20b3a268a8f9721be880ee03e32b87764af3161dbb   23272       running     /run/containerd/io.containerd.runtime.v1.linux/k8s.io/4f747e1bae0d7430b380da20b3a268a8f9721be880ee03e32b87764af3161dbb   0001-01-01T00:00:00Z
f4be16652fff7519caf5828d59fbb432f5be48f58a4a84e2d543d47f2f698728   23211       running     /run/containerd/io.containerd.runtime.v1.linux/k8s.io/f4be16652fff7519caf5828d59fbb432f5be48f58a4a84e2d543d47f2f698728   0001-01-01T00:00:00Z
I0105 11:50:12.033749   23603 x:0] Exiting with status: 0

Get the ID of the untrusted pod:

POD_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
  pods --name untrusted -q)
echo $POD_ID
f4be16652fff7519caf5828d59fbb432f5be48f58a4a84e2d543d47f2f698728

Get the ID of the webserver container running in the untrusted pod:

CONTAINER_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
  ps -p ${POD_ID} -q)
echo $CONTAINER_ID
4f747e1bae0d7430b380da20b3a268a8f9721be880ee03e32b87764af3161dbb

Use the gVisor runsc command to display the processes running inside the webserver container:

sudo runsc --root /run/containerd/runsc/k8s.io ps ${CONTAINER_ID}
I0105 11:51:09.523569   23688 x:0] ***************************
I0105 11:51:09.523728   23688 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io ps 4f747e1bae0d7430b380da20b3a268a8f9721be880ee03e32b87764af3161dbb]
I0105 11:51:09.523822   23688 x:0] Git Revision: 50c283b9f56bb7200938d9e207355f05f79f0d17
I0105 11:51:09.523890   23688 x:0] PID: 23688
I0105 11:51:09.523957   23688 x:0] UID: 0, GID: 0
I0105 11:51:09.524023   23688 x:0] Configuration:
I0105 11:51:09.524079   23688 x:0]              RootDir: /run/containerd/runsc/k8s.io
I0105 11:51:09.524202   23688 x:0]              Platform: ptrace
I0105 11:51:09.524345   23688 x:0]              FileAccess: exclusive, overlay: false
I0105 11:51:09.524479   23688 x:0]              Network: sandbox, logging: false
I0105 11:51:09.524604   23688 x:0]              Strace: false, max size: 1024, syscalls: []
I0105 11:51:09.524727   23688 x:0] ***************************
UID       PID       PPID      C         STIME     TIME      CMD
0         1         0         0         11:49     20ms      app
I0105 11:51:09.526293   23688 x:0] Exiting with status: 0
