Juju encapsulates the operational knowledge of provisioning, installing, and securing a Kubernetes cluster into one step. Juju allows you to deploy a Kubernetes cluster on different cloud providers with a consistent, repeatable user experience. Once deployed, the cluster can easily be scaled up with a single command.
The Juju Kubernetes work is curated by a dedicated team of community members; let us know how we are doing. If you find any problems, please open an issue on the kubernetes project and tag the issue with "juju" so we can find it.
Note: If you're running kube-up on Ubuntu, all of the dependencies will be handled for you. You may safely skip to the section: Launch a Kubernetes Cluster.
This documentation focuses on the Juju 2.0 release, which will be promoted to stable during the April 2016 release cycle.
In short, on your local Ubuntu system:
sudo add-apt-repository ppa:juju/devel
sudo apt-get update
sudo apt-get install juju
If you are using another distro or platform, please consult the getting started guide to install the Juju dependencies for your platform.
If you prefer the isolation of Docker, you can run the Juju client in a container. Create a local directory to store the Juju configuration, then volume mount the container:
mkdir -p $HOME/.local/share/juju
docker run --rm -ti \
-v $HOME/.local/share/juju:/home/ubuntu/.local/share/juju \
jujusolutions/charmbox:devel
While this is a common approach, the charmbox flavors of images are unofficial and should be treated as experimental. If you encounter any issues turning up the Kubernetes cluster with charmbox, please file a bug on the charmbox issue tracker.
At this point you have access to the Juju client. Before you can deploy a cluster, you have to configure Juju with the cloud credentials for each cloud provider you would like to use. Juju supports a wide variety of public clouds. To set up the credentials for your chosen cloud, see the cloud setup page.
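For example, on many public clouds you can register credentials interactively; a minimal sketch, assuming the AWS cloud (substitute your own cloud name):

juju add-credential aws      # prompts interactively for the access key and secret key
juju list-credentials        # confirm the credential was stored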
After configuration is complete, test your setup with the juju bootstrap command (a concrete example follows):

juju bootstrap $controllername $cloudtype

Once the bootstrap completes, you are ready to launch the Kubernetes cluster.
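For example, with a hypothetical controller name, assuming AWS credentials are already configured:

juju bootstrap aws-k8s aws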
You can deploy a Kubernetes cluster with Juju from the kubernetes directory of the kubernetes github project. Clone the repository on your local system. Export the KUBERNETES_PROVIDER environment variable before bringing up the cluster.
cd kubernetes
export KUBERNETES_PROVIDER=juju
cluster/kube-up.sh
If this is your first time running the kube-up.sh script, it will attempt to install the required dependencies to get started with Juju.

The script will deploy two Kubernetes nodes and one etcd unit, and network the units so containers on different hosts can communicate with each other.
The juju status command provides information about each unit in the cluster:
$ juju status
MODEL    CONTROLLER  CLOUD/REGION     VERSION
default  windows     azure/centralus  2.0-beta13

APP         VERSION  STATUS  EXPOSED  ORIGIN      CHARM       REV  OS
etcd                 active  false    jujucharms  etcd        3    ubuntu
kubernetes           active  true     jujucharms  kubernetes  5    ubuntu

RELATION      PROVIDES    CONSUMES    TYPE
cluster       etcd        etcd        peer
etcd          etcd        kubernetes  regular
certificates  kubernetes  kubernetes  peer

UNIT          WORKLOAD  AGENT  MACHINE  PORTS     PUBLIC-ADDRESS  MESSAGE
etcd/0        active    idle   0        2379/tcp  13.67.217.11    (leader) cluster is healthy
kubernetes/0  active    idle   1        8088/tcp  13.67.219.76    Kubernetes running.
kubernetes/1  active    idle   2        6443/tcp  13.67.219.182   (master) Kubernetes running.

MACHINE  STATE    DNS            INS-ID     SERIES  AZ
0        started  13.67.217.11   machine-0  trusty
1        started  13.67.219.76   machine-1  trusty
2        started  13.67.219.182  machine-2  trusty
The kubectl binary, the TLS certificates, and the configuration file are all available on the Kubernetes master unit. Fetch the kubectl package so you can run commands against the new Kubernetes cluster.
Use the juju status command to figure out which unit is the master. In the example above, the "kubernetes/1" unit is the master. Use the juju scp command to copy the file from the unit:
juju scp kubernetes/1:kubectl_package.tar.gz .
tar xvfz kubectl_package.tar.gz
./kubectl --kubeconfig kubeconfig get pods
If you are not on a Linux amd64 host system, you will need to find or build a kubectl binary package for your architecture.
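For example, official release builds are published per platform. A sketch, assuming a hypothetical v1.2.0 release on a darwin/amd64 host; substitute the version, OS, and architecture for your setup:

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.2.0/bin/darwin/amd64/kubectl
chmod +x kubectl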
Copy the kubeconfig file to the home directory so you don't have to specify it on the command line each time. The default location is ${HOME}/.kube/config.
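For example, a minimal sketch assuming the tarball extracted a file named kubeconfig into the current directory:

mkdir -p ${HOME}/.kube               # create the default config directory if needed
cp kubeconfig ${HOME}/.kube/config   # install the config fetched from the master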
No pods will be available before starting a container:

kubectl get pods
NAME  READY  STATUS  RESTARTS  AGE

kubectl get replicationcontrollers
CONTROLLER  CONTAINER(S)  IMAGE(S)  SELECTOR  REPLICAS
We'll follow the aws-coreos example. Create a pod manifest named pod.json:
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "hello",
    "labels": {
      "name": "hello",
      "environment": "testing"
    }
  },
  "spec": {
    "containers": [{
      "name": "hello",
      "image": "quay.io/kelseyhightower/hello",
      "ports": [{
        "containerPort": 80,
        "hostPort": 80
      }]
    }]
  }
}
Create the pod with kubectl:
kubectl create -f pod.json
Get info on the pod:
kubectl get pods
To test the hello app, we need to locate which node is hosting the container. We can use the juju run and juju status commands to find our hello app.
Exit any ssh session to the units and run the following from the Juju client:
juju run --unit kubernetes/0 "docker ps -n=1"
...
juju run --unit kubernetes/1 "docker ps -n=1"
CONTAINER ID  IMAGE                                 COMMAND  CREATED            STATUS            PORTS  NAMES
02beb61339d8  quay.io/kelseyhightower/hello:latest  /hello   About an hour ago  Up About an hour         k8s_hello....
We see that "kubernetes/1" has our container. Open port 80 on that unit and expose the kubernetes charm:
juju run --unit kubernetes/1 "open-port 80"
juju expose kubernetes
sudo apt-get install curl
curl $(juju status --format=oneline kubernetes/1 | cut -d' ' -f3)
Finally delete the pod:
juju ssh kubernetes/0
kubectl delete pods hello
Want larger Kubernetes nodes? It is easy to request different sizes of cloud resources from Juju by using constraints. You can increase the amount of CPU or memory (RAM) in any of the systems requested by Juju. This allows you to fine tune the Kubernetes cluster to fit your workload. Use flags on the bootstrap command or a separate juju constraints command. See the Juju documentation for machine details.
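For example, a sketch using juju set-constraints; constraint keys such as mem and cpu-cores can vary between Juju versions, so verify against the documentation for your release:

juju set-constraints kubernetes mem=8G cpu-cores=4    # applies to units added after this point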
Need more workers? Juju makes it easy to add units of a charm:
juju add-unit kubernetes
Or multiple units at one time:
juju add-unit -n3 kubernetes
You can also scale the etcd charm for more fault-tolerant key/value storage:
juju add-unit -n2 etcd
We recommend that you use the kube-down.sh script when you are done using the cluster, as it properly brings down the cloud and removes some of the build directories.
./cluster/kube-down.sh
Alternately, if you want to stop the servers you can destroy the Juju model or the controller. Use the juju switch command to get the current controller name:
juju switch
juju destroy-controller $controllername --destroy-all-models
Juju works with charms and bundles to deploy solutions. The code that stands up a Kubernetes cluster is contained in the charm code. The charm is built using a layered approach to keep the code smaller and more focused on the operations of Kubernetes.
The Kubernetes layer and bundles can be found in the kubernetes project on github.com.
Juju is cloud agnostic and gives you a consistent experience across different cloud providers. Juju supports a variety of public cloud providers: Amazon Web Services, Microsoft Azure, Google Compute Engine, Joyent, Rackspace, any OpenStack cloud, and VMware vSphere.
If you do not see your favorite cloud provider listed, many clouds with ssh access can be configured for manual provisioning.
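For example, a host reachable over ssh can be enlisted into an existing model; the hostname below is hypothetical:

juju add-machine ssh:ubuntu@your.server.example.com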
To change to a different cloud, use the juju switch command, set up the credentials for that cloud provider, and continue to use the kube-up.sh script.
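For example, with a hypothetical controller name:

juju switch aws-k8s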
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
---|---|---|---|---|---|---
Amazon Web Services (AWS) | Juju | Ubuntu | flannel | docs | | Community (@mbruzek, @chuckbutler)
OpenStack | Juju | Ubuntu | flannel | docs | | Community (@mbruzek, @chuckbutler)
Microsoft Azure | Juju | Ubuntu | flannel | docs | | Community (@mbruzek, @chuckbutler)
Google Compute Engine (GCE) | Juju | Ubuntu | flannel | docs | | Community (@mbruzek, @chuckbutler)
For support level information on all solutions, see the Table of solutions chart.