Namespaces

Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces.

When to Use Multiple Namespaces

Namespaces are intended for use in environments with many users spread across multiple teams or projects. For clusters with a few to tens of users, you should not need to create or think about namespaces at all. Start using namespaces when you need the features they provide.

Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces.

Namespaces are a way to divide cluster resources between multiple users (via resource quota).

In future versions of Kubernetes, objects in the same namespace will have the same access control policies by default.

It is not necessary to use multiple namespaces just to separate slightly different resources, such as different versions of the same software: use labels to distinguish resources within the same namespace.
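
For example, two versions of the same application can run side by side in one namespace and be distinguished with a version label. A minimal sketch (the names, images, and label values here are purely illustrative):

kubectl run myapp-v1 --image=myapp:1.0 --labels="app=myapp,version=1.0"
kubectl run myapp-v2 --image=myapp:2.0 --labels="app=myapp,version=2.0"

# List only the pods running version 1.0 of the app
kubectl get pods -l app=myapp,version=1.0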

Share a Cluster with Namespaces

Viewing namespaces

List the current namespaces in a cluster using:

kubectl get namespaces
NAME          STATUS    AGE
default       Active    1d
kube-public   Active    1d
kube-system   Active    1d

Kubernetes starts with three initial namespaces:

  • default: The default namespace for objects with no other namespace
  • kube-system: The namespace for objects created by the Kubernetes system
  • kube-public: This namespace is created automatically and is readable by all users (including those not authenticated). It is mostly reserved for cluster usage, in case some resources should be visible and readable publicly throughout the whole cluster. The public aspect of this namespace is only a convention, not a requirement.

You can also get a summary of a specific namespace using kubectl get namespaces <name>:

kubectl get namespaces default
NAME      STATUS    AGE
default   Active    1d

Or you can get detailed information with kubectl describe namespaces <name>:

kubectl describe namespaces default
Name:         default
Labels:       <none>
Annotations:  <none>
Status:       Active

No resource quota.

Resource Limits
 Type       Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
 ----       --------  ---  ---  ---------------  -------------  -----------------------
 Container  cpu       -    -    100m             -              -

Note that these details show both resource quota (if present) and resource limit ranges.

Resource quota tracks aggregate usage of resources in the Namespace and allows cluster operators to define Hard resource usage limits that a Namespace may consume.

A limit range defines min/max constraints on the amount of resources a single entity can consume in a Namespace.
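
For illustration, a ResourceQuota and a LimitRange applied to a namespace might look like the following sketch. The namespace, object names, and values are only examples (the development namespace used here is created later in this guide):

cat << EOF > quota-and-limits.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: development
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits
  namespace: development
spec:
  limits:
    - type: Container
      defaultRequest:
        cpu: 100m
      default:
        cpu: 500m
      max:
        cpu: "1"
        memory: 1Gi
EOF

kubectl create -f quota-and-limits.yaml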

A namespace can be in one of two phases:

  • Active: the namespace is in use
  • Terminating: the namespace is being deleted, and can not be used for new objects
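
You can read the phase directly from a namespace's status, for example:

kubectl get namespace default -o jsonpath='{.status.phase}'
Active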

Setting the namespace for a request

To temporarily set the namespace for a request, use the --namespace flag.

kubectl get pods --namespace=kube-system
NAME                                                        READY     STATUS    RESTARTS   AGE
event-exporter-v0.2.3-54f94754f4-xkbqq                      2/2       Running   0          1d
fluentd-gcp-scaler-6d7bbc67c5-mlkwn                         1/1       Running   0          1d
fluentd-gcp-v3.1.0-4j4st                                    2/2       Running   0          1d
fluentd-gcp-v3.1.0-5j2v8                                    2/2       Running   0          1d
fluentd-gcp-v3.1.0-ncgxr                                    2/2       Running   0          1d
heapster-v1.5.3-5cc58c9cb6-j2ckx                            3/3       Running   0          1d
kube-dns-788979dc8f-772gx                                   4/4       Running   0          1d
kube-dns-788979dc8f-7wj7q                                   4/4       Running   0          1d
kube-dns-autoscaler-79b4b844b9-m95vd                        1/1       Running   0          1d
kube-proxy-gke-admatic-cluster-default-pool-e5aae271-0b3v   1/1       Running   0          1d
kube-proxy-gke-admatic-cluster-default-pool-e5aae271-9tk6   1/1       Running   0          1d
kube-proxy-gke-admatic-cluster-default-pool-e5aae271-vt5z   1/1       Running   0          1d
l7-default-backend-5d5b9874d5-njll6                         1/1       Running   0          1d
metrics-server-v0.2.1-7486f5bd67-w5frl                      2/2       Running   0          1d
tiller-deploy-8586bc5c8b-zkd7p                              1/1       Running   0          1d

Setting the namespace preference

You can permanently save the namespace for all subsequent kubectl commands in that context.

kubectl get pods
No resources found.
kubectl config set-context $(kubectl config current-context) --namespace=kube-system
Context "prod" modified.
kubectl get pods
NAME                                                        READY     STATUS    RESTARTS   AGE
event-exporter-v0.2.3-54f94754f4-xkbqq                      2/2       Running   0          1d
fluentd-gcp-scaler-6d7bbc67c5-mlkwn                         1/1       Running   0          1d
fluentd-gcp-v3.1.0-4j4st                                    2/2       Running   0          1d
fluentd-gcp-v3.1.0-5j2v8                                    2/2       Running   0          1d
fluentd-gcp-v3.1.0-ncgxr                                    2/2       Running   0          1d
heapster-v1.5.3-5cc58c9cb6-j2ckx                            3/3       Running   0          1d
kube-dns-788979dc8f-772gx                                   4/4       Running   0          1d
kube-dns-788979dc8f-7wj7q                                   4/4       Running   0          1d
kube-dns-autoscaler-79b4b844b9-m95vd                        1/1       Running   0          1d
kube-proxy-gke-admatic-cluster-default-pool-e5aae271-0b3v   1/1       Running   0          1d
kube-proxy-gke-admatic-cluster-default-pool-e5aae271-9tk6   1/1       Running   0          1d
kube-proxy-gke-admatic-cluster-default-pool-e5aae271-vt5z   1/1       Running   0          1d
l7-default-backend-5d5b9874d5-njll6                         1/1       Running   0          1d
metrics-server-v0.2.1-7486f5bd67-w5frl                      2/2       Running   0          1d
tiller-deploy-8586bc5c8b-zkd7p                              1/1       Running   0          1d

Not All Objects are in a Namespace

Most Kubernetes resources (e.g. pods, services, replication controllers, and others) are in some namespace. However, namespace resources are not themselves in a namespace, and low-level resources, such as nodes and persistentVolumes, are not in any namespace.
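
In recent kubectl versions (1.11 and later) you can list which resource types are namespaced and which are cluster-scoped:

kubectl api-resources --namespaced=true    # resources that live in a namespace
kubectl api-resources --namespaced=false   # cluster-scoped resources such as nodes and persistentvolumes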

Subdividing your cluster using Kubernetes namespaces

By default, a Kubernetes cluster will instantiate a default namespace when provisioning the cluster to hold the default set of Pods, Services, and Deployments used by the cluster.

Create new namespaces

In a scenario where an organization is using a shared Kubernetes cluster for development and production use cases:

  • The development team would like to maintain a space in the cluster where they can get a view on the list of Pods, Services, and Deployments they use to build and run their application. In this space, Kubernetes resources come and go, and the restrictions on who can or cannot modify resources are relaxed to enable agile development.
  • The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of Pods, Services, and Deployments that run the production site.

One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: development and production.

Let’s create two new namespaces to hold our work.

cat << EOF > namespace-dev.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    name: development
EOF

kubectl create -f namespace-dev.yaml
namespace "development" created

Note that the name of your namespace must be a valid DNS label: lowercase alphanumeric characters or '-', at most 63 characters, starting and ending with an alphanumeric character.

There’s an optional finalizers field, which allows observables to purge resources whenever the namespace is deleted. Keep in mind that if you specify a nonexistent finalizer, the namespace will be created but will get stuck in the Terminating state if the user tries to delete it.
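
For reference, finalizers live under the namespace's spec. A minimal sketch (the kubernetes entry shown is the standard finalizer managed by the system itself):

apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    name: development
spec:
  finalizers:
    - kubernetes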

cat << EOF > namespace-prod.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    name: production
EOF

kubectl create -f namespace-prod.yaml
namespace "production" created

List all of the namespaces in our cluster.

kubectl get namespaces --show-labels
NAME          STATUS    AGE       LABELS
default       Active    1d        <none>
development   Active    2m        name=development
kube-public   Active    1d        <none>
kube-system   Active    1d        <none>
production    Active    28s       name=production

Create pods in each namespace

A Kubernetes namespace provides the scope for Pods, Services, and Deployments in the cluster.

Users interacting with one namespace do not see the content in another namespace.

To demonstrate this, let’s spin up a simple Deployment and Pods in the development namespace.

We first check the current context:

kubectl config view
apiVersion: v1
clusters:
  - cluster:
      certificate-authority-data: REDACTED
      server: https://104.198.28.250
    name: gke_espblufi-android_us-central1-f_admatic-cluster
contexts:
  - context:
      cluster: gke_espblufi-android_us-central1-f_admatic-cluster
      user: gke_espblufi-android_us-central1-f_admatic-cluster
    name: gke_espblufi-android_us-central1-f_admatic-cluster
current-context: gke_espblufi-android_us-central1-f_admatic-cluster
kind: Config
preferences: {}
users:
  - name: gke_espblufi-android_us-central1-f_admatic-cluster
    user:
      auth-provider:
        config:
          access-token: ya29.GlyHBkujSyRmn3SLNekGbBGQ1e-UiqFll2-lSz1RcIQHSzlXzrRCoqNKv-cNbvs08ZVh2LG-3MpXhyOMstsoLjcMzb6qBFqPFKizNz_X3JekQcWMu98nUej9KivAg
          cmd-args: config config-helper --format=json
          cmd-path: /Users/adithya321/Downloads/google-cloud-sdk/bin/gcloud
          expiry: 2019-01-04T10:15:51Z
          expiry-key: "{.credential.token_expiry}"
          token-key: "{.credential.access_token}"
        name: gcp
kubectl config current-context
gke_espblufi-android_us-central1-f_admatic-cluster

The next step is to define a context for the kubectl client to work in each namespace. The values of the “cluster” and “user” fields are copied from the current context.

kubectl config set-context dev --namespace=development \
  --cluster=gke_espblufi-android_us-central1-f_admatic-cluster \
  --user=gke_espblufi-android_us-central1-f_admatic-cluster
Context "dev" created.
kubectl config set-context prod --namespace=production \
  --cluster=gke_espblufi-android_us-central1-f_admatic-cluster \
  --user=gke_espblufi-android_us-central1-f_admatic-cluster
Context "prod" created.

By default, the above commands add two contexts that are saved in the ~/.kube/config file. You can now view the contexts and switch between the two new request contexts depending on which namespace you wish to work against.

To view the new contexts:

kubectl config view
apiVersion: v1
clusters:
  - cluster:
      certificate-authority-data: REDACTED
      server: https://104.198.28.250
    name: gke_espblufi-android_us-central1-f_admatic-cluster
contexts:
  - context:
      cluster: gke_espblufi-android_us-central1-f_admatic-cluster
      namespace: development
      user: gke_espblufi-android_us-central1-f_admatic-cluster
    name: dev
  - context:
      cluster: gke_espblufi-android_us-central1-f_admatic-cluster
      user: gke_espblufi-android_us-central1-f_admatic-cluster
    name: gke_espblufi-android_us-central1-f_admatic-cluster
  - context:
      cluster: gke_espblufi-android_us-central1-f_admatic-cluster
      namespace: production
      user: gke_espblufi-android_us-central1-f_admatic-cluster
    name: prod
current-context: gke_espblufi-android_us-central1-f_admatic-cluster
kind: Config
preferences: {}
users:
  - name: gke_espblufi-android_us-central1-f_admatic-cluster
    user:
      auth-provider:
        config:
          access-token: ya29.GlyHBkujSyRmn3SLNekGbBGQ1e-UiqFll2-lSz1RcIQHSzlXzrRCoqNKv-cNbvs08ZVh2LG-3MpXhyOMstsoLjcMzb6qBFqPFKizNz_X3JekQcWMu98nUej9KivAg
          cmd-args: config config-helper --format=json
          cmd-path: /Users/adithya321/Downloads/google-cloud-sdk/bin/gcloud
          expiry: 2019-01-04T10:15:51Z
          expiry-key: "{.credential.token_expiry}"
          token-key: "{.credential.access_token}"
        name: gcp

Let’s switch to operate in the development namespace.

kubectl config use-context dev
Switched to context "dev".

You can verify your current context by doing the following:

kubectl config current-context
dev

At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace.

Let’s create some content.

kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2
deployment.apps "snowflake" created

We have just created a deployment with a replica count of 2 that runs pods called snowflake, each with a basic container that simply serves the hostname.

kubectl get deployment
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
snowflake   2         2         2            2           43s
kubectl get pods -l run=snowflake
NAME                         READY     STATUS    RESTARTS   AGE
snowflake-54fccfcd67-4ptkd   1/1       Running   0          1m
snowflake-54fccfcd67-9ttjv   1/1       Running   0          1m

And this is great: developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.

Let’s switch to the production namespace and show how resources in one namespace are hidden from the other.

kubectl config use-context prod
Switched to context "prod".

The production namespace should be empty, and the following commands should return nothing.

kubectl get deployment
No resources found.
kubectl get pods
No resources found.

Deleting a namespace

Delete a namespace with kubectl delete namespaces <namespace-name>:

kubectl delete namespaces development
namespace "development" deleted

This deletes everything under the namespace!

This delete is asynchronous, so for a time you will see the namespace in the Terminating state.

kubectl get namespaces --show-labels
NAME          STATUS        AGE       LABELS
default       Active        1d        <none>
development   Terminating   16m       name=development
kube-public   Active        1d        <none>
kube-system   Active        1d        <none>
production    Active        14m       name=production

Understanding the motivation for using namespaces

A single cluster should be able to satisfy the needs of multiple users or groups of users (henceforth a ‘user community’).

Kubernetes namespaces help different projects, teams, or customers to share a Kubernetes cluster.

They do this by providing the following:

  • A scope for Names.
  • A mechanism to attach authorization and policy to a subsection of the cluster.

Use of multiple namespaces is optional.

Each user community wants to be able to work in isolation from other communities.

Each user community has its own:

  • resources (pods, services, replication controllers, etc.)
  • policies (who can or cannot perform actions in their community)
  • constraints (this community is allowed this much quota, etc.)

A cluster operator may create a Namespace for each unique user community.

The Namespace provides a unique scope for:

  • named resources (to avoid basic naming collisions)
  • delegated management authority to trusted users
  • ability to limit community resource consumption

Use cases include:

  • As a cluster operator, I want to support multiple user communities on a single cluster.
  • As a cluster operator, I want to delegate authority to partitions of the cluster to trusted users in those communities.
  • As a cluster operator, I want to limit the amount of resources each community can consume in order to limit the impact to other communities using the cluster.
  • As a cluster user, I want to interact with resources that are pertinent to my user community in isolation of what other user communities are doing on the cluster.

Understanding namespaces and DNS

When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container just uses <service-name> it will resolve to the service which is local to a namespace. This is useful for using the same configuration across multiple namespaces such as Development, Staging and Production. If you want to reach across namespaces, you need to use the fully qualified domain name (FQDN).
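
As an illustrative sketch (this guide does not actually create a Service; the service name and URLs below are assumptions), a pod in the development namespace could reach a snowflake Service in its own namespace by its short name, and one in production only by the fully qualified name:

# Resolves to the snowflake Service in the caller's own namespace (development)
curl http://snowflake/

# The FQDN reaches the snowflake Service in the production namespace
curl http://snowflake.production.svc.cluster.local/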
