A pod is a collection of containers that share a network namespace (and can share storage volumes), and it is the basic unit of deployment in Kubernetes. All containers in a pod are scheduled on the same node.

To launch a pod using the container image mhausenblas/simpleservice:0.5.0, which exposes an HTTP API on port 9876, execute:

$ kubectl run sise --image=mhausenblas/simpleservice:0.5.0 --port=9876

We can now see that the pod is running:

$ kubectl get pods
NAME                      READY     STATUS    RESTARTS   AGE
sise-3210265840-k705b     1/1       Running   0          1m

$ kubectl describe pod sise-3210265840-k705b | grep IP:
IP:                     172.17.0.3

From within the cluster (e.g. via minishift ssh) this pod is accessible via its pod IP 172.17.0.3, which we learned from the kubectl describe command above:

[cluster] $ curl 172.17.0.3:9876/info
{"host": "172.17.0.3:9876", "version": "0.5.0", "from": "172.17.0.1"}

Note that kubectl run creates a deployment here, so in order to get rid of the pod you have to execute kubectl delete deployment sise. (With kubectl v1.18 and newer, kubectl run creates a bare pod instead, which you would remove with kubectl delete pod sise.)

Alternatively, you can create a pod from a configuration file. In this case the pod runs the simpleservice image we already know from above, along with a generic CentOS container:

$ kubectl create -f https://raw.githubusercontent.com/mhausenblas/kbe/master/specs/pods/pod.yaml

$ kubectl get pods
NAME                      READY     STATUS    RESTARTS   AGE
twocontainers             2/2       Running   0          7s
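
For reference, a pod manifest along the lines of the referenced pod.yaml could look like the following sketch. It is put together from the names and images used above; the CentOS image tag and the keep-alive command are assumptions, so consult the actual file for the authoritative spec:

apiVersion: v1
kind: Pod
metadata:
  name: twocontainers
spec:
  containers:
  - name: sise                            # serves the HTTP API on port 9876
    image: mhausenblas/simpleservice:0.5.0
    ports:
    - containerPort: 9876
  - name: shell                           # generic CentOS container to exec into
    image: centos:7                       # image tag assumed for this sketch
    command: ["sleep", "10000"]           # keeps the container alive so we can exec into it

Because both containers share the pod's network namespace, the shell container can reach the simpleservice container via localhost, as shown next.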

Now we can exec into the CentOS container and access the simpleservice on localhost:

$ kubectl exec twocontainers -c shell -i -t -- bash
[root@twocontainers /]# curl localhost:9876/info
{"host": "localhost:9876", "version": "0.5.0", "from": "127.0.0.1"}

Specify the resources field in the pod manifest to influence how much CPU and/or RAM a container in a pod can use (here: 64 MiB of RAM and 0.5 CPUs):

$ kubectl create -f https://raw.githubusercontent.com/mhausenblas/kbe/master/specs/pods/constraint-pod.yaml

$ kubectl describe pod constraintpod
...
Containers:
  sise:
    ...
    Limits:
      cpu:      500m
      memory:   64Mi
    Requests:
      cpu:      500m
      memory:   64Mi
...
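
For reference, the resources section of the container in constraint-pod.yaml looks roughly like the following sketch, reconstructed from the describe output above rather than copied from the file:

    resources:
      limits:
        cpu: "500m"      # 500 millicores, i.e. half a CPU core
        memory: "64Mi"   # 64 MiB of RAM

If only limits are set, Kubernetes defaults the requests to the same values, which is why the describe output above shows identical Limits and Requests.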

Learn more about resource constraints in the official Kubernetes documentation on managing compute resources for containers.

To sum up, launching one or more containers (together) in Kubernetes is simple; however, doing it directly as shown above comes with a serious limitation: you have to take care of keeping them running yourself in case of a failure. A better way to supervise pods is to use replication controllers, or better yet deployments, which give you much more control.