On-disk files in a container are ephemeral, which presents some problems for non-trivial applications running in containers. First, when a container crashes, kubelet will restart it, but the files will be lost - the container starts with a clean slate. Second, when running containers together in a Pod, it is often necessary to share files between those containers. The Kubernetes `Volume` abstraction solves both of these problems.

Familiarity with pods is suggested.
Docker also has a concept of volumes, though it is somewhat looser and less managed. In Docker, a volume is simply a directory on disk or in another container. Lifetimes are not managed and until very recently there were only local-disk-backed volumes. Docker now provides volume drivers, but the functionality is very limited for now (e.g. as of Docker 1.7 only one volume driver is allowed per container and there is no way to pass parameters to volumes).
A Kubernetes volume, on the other hand, has an explicit lifetime - the same as the pod that encloses it. Consequently, a volume outlives any containers that run within the Pod, and data is preserved across container restarts. Of course, when a Pod ceases to exist, the volume will cease to exist, too. Perhaps more importantly, Kubernetes supports many types of volumes, and a Pod can use any number of them simultaneously.
At its core, a volume is just a directory, possibly with some data in it, which is accessible to the containers in a pod. How that directory comes to be, the medium that backs it, and the contents of it are determined by the particular volume type used.
To use a volume, a pod specifies what volumes to provide for the pod (the `spec.volumes` field) and where to mount those into containers (the `spec.containers.volumeMounts` field).
A process in a container sees a filesystem view composed from its Docker image and volumes. The Docker image is at the root of the filesystem hierarchy, and any volumes are mounted at the specified paths within the image. Volumes cannot mount onto other volumes or have hard links to other volumes. Each container in the Pod must independently specify where to mount each volume.
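For instance, here is a minimal sketch of how the two fields fit together (the image, volume name, and mount path are illustrative, not from this document):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: myimage            # hypothetical image
    volumeMounts:
    - name: scratch           # must match a volume name declared below
      mountPath: /scratch     # where the volume appears inside this container
  volumes:
  - name: scratch             # declared once per pod, mountable by any container
    emptyDir: {}
```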
Kubernetes supports several types of Volumes:

* `emptyDir`
* `hostPath`
* `gcePersistentDisk`
* `awsElasticBlockStore`
* `nfs`
* `iscsi`
* `flocker`
* `glusterfs`
* `rbd`
* `cephfs`
* `gitRepo`
* `secret`
* `persistentVolumeClaim`
* `downwardAPI`
* `flexVolume`
* `azureFileVolume`
* `vsphereVolume`

We welcome additional contributions.
An `emptyDir` volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node. As the name says, it is initially empty. Containers in the pod can all read and write the same files in the `emptyDir` volume, though that volume can be mounted at the same or different paths in each container. When a Pod is removed from a node for any reason, the data in the `emptyDir` is deleted forever. NOTE: a container crashing does NOT remove a pod from a node, so the data in an `emptyDir` volume is safe across container crashes.
Some uses for an `emptyDir` are:

* scratch space, such as for a disk-based merge sort
* checkpointing a long computation for recovery from crashes
* holding files that a content-manager container fetches while a webserver container serves the data
By default, `emptyDir` volumes are stored on whatever medium is backing the machine - that might be disk or SSD or network storage, depending on your environment. However, you can set the `emptyDir.medium` field to `"Memory"` to tell Kubernetes to mount a tmpfs (RAM-backed filesystem) for you instead. While tmpfs is very fast, be aware that unlike disks, tmpfs is cleared on machine reboot and any files you write will count against your container's memory limit.
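For example, a minimal pod that asks for a RAM-backed scratch volume might look like this (the pod name and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-tmpfs
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir:
      medium: Memory    # tmpfs; cleared on machine reboot, counts against the memory limit
```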
A `hostPath` volume mounts a file or directory from the host node's filesystem into your pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.
For example, some uses for a `hostPath` are:

* running a container that needs access to Docker internals; use a `hostPath` of `/var/lib/docker`
* running cAdvisor in a container; use a `hostPath` of `/dev/cgroups`
Watch out when using this type of volume, because:

* pods with identical configuration (such as created from a podTemplate) may behave differently on different nodes due to different files on the nodes
* when Kubernetes adds resource-aware scheduling, as is planned, it will not be able to account for resources used by a `hostPath`
* the directories created on the underlying hosts are only writable by root; you either need to run your process as root in a privileged container or modify the file permissions on the host to be able to write to a `hostPath` volume

Here is an example pod that mounts a `hostPath` volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
```
Another example, mounting an arbitrary host directory:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-hostpath
spec:
  containers:
  - image: myimage
    name: test-container
    volumeMounts:
    - mountPath: /test-hostpath
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /path/to/my/dir
```
A `gcePersistentDisk` volume mounts a Google Compute Engine (GCE) Persistent Disk into your pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of a PD are preserved and the volume is merely unmounted. This means that a PD can be pre-populated with data, and that data can be “handed off” between pods.

Important: You must create a PD using `gcloud` or the GCE API or UI before you can use it.
There are some restrictions when using a `gcePersistentDisk`:

* the nodes on which pods are running must be GCE VMs
* those VMs need to be in the same GCE project and zone as the PD
A feature of PDs is that they can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a PD with your dataset and then serve it in parallel from as many pods as you need. Unfortunately, PDs can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed.
Using a PD on a pod controlled by a ReplicationController will fail unless the PD is read-only or the replica count is 0 or 1.
Before you can use a GCE PD with a pod, you need to create it:

```shell
gcloud compute disks create --size=500GB --zone=us-central1-a my-data-disk
```
Example pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    # This GCE PD must already exist.
    gcePersistentDisk:
      pdName: my-data-disk
      fsType: ext4
```
An `awsElasticBlockStore` volume mounts an Amazon Web Services (AWS) EBS volume into your pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of an EBS volume are preserved and the volume is merely unmounted. This means that an EBS volume can be pre-populated with data, and that data can be “handed off” between pods.

Important: You must create an EBS volume using `aws ec2 create-volume` or the AWS API before you can use it.
There are some restrictions when using an `awsElasticBlockStore` volume:

* the nodes on which pods are running must be AWS EC2 instances
* those instances need to be in the same region and availability-zone as the EBS volume
* EBS only supports a single EC2 instance mounting a volume
Before you can use an EBS volume with a pod, you need to create it:

```shell
aws ec2 create-volume --availability-zone eu-west-1a --size 10 --volume-type gp2
```
Make sure the zone matches the zone you brought up your cluster in. (And also check that the size and EBS volume type are suitable for your use!)
Example pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-ebs
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-ebs
      name: test-volume
  volumes:
  - name: test-volume
    # This AWS EBS volume must already exist.
    awsElasticBlockStore:
      volumeID: <volume-id>
      fsType: ext4
```
An `nfs` volume allows an existing NFS (Network File System) share to be mounted into your pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of an `nfs` volume are preserved and the volume is merely unmounted. This means that an NFS volume can be pre-populated with data, and that data can be “handed off” between pods. NFS can be mounted by multiple writers simultaneously.

Important: You must have your own NFS server running with the share exported before you can use it.

See the NFS example for more details.
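As a rough sketch, an `nfs` volume in a pod's `spec.volumes` looks like this (the server and path are placeholders):

```yaml
volumes:
- name: nfs-volume
  nfs:
    server: nfs-server.example.com   # placeholder NFS server address
    path: /exported/path             # share exported by that server
    readOnly: false                  # NFS supports multiple simultaneous writers
```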
An `iscsi` volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of an `iscsi` volume are preserved and the volume is merely unmounted. This means that an iSCSI volume can be pre-populated with data, and that data can be “handed off” between pods.

Important: You must have your own iSCSI server running with the volume created before you can use it.

A feature of iSCSI is that it can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a volume with your dataset and then serve it in parallel from as many pods as you need. Unfortunately, iSCSI volumes can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed.

See the iSCSI example for more details.
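A sketch of the `iscsi` volume fields, with placeholder target details:

```yaml
volumes:
- name: iscsi-volume
  iscsi:
    targetPortal: 10.0.2.15:3260                       # placeholder portal IP:port
    iqn: iqn.2001-04.com.example:storage.kube.sys1.xyz # placeholder target IQN
    lun: 0
    fsType: ext4
    readOnly: true    # multiple read-only consumers are allowed
```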
Flocker is an open-source clustered container data volume manager. It provides management and orchestration of data volumes backed by a variety of storage backends.

A `flocker` volume allows a Flocker dataset to be mounted into a pod. If the dataset does not already exist in Flocker, it needs to be created first with the Flocker CLI or by using the Flocker API. If the dataset already exists, it will be reattached by Flocker to the node where the pod is scheduled. This means data can be “handed off” between pods as required.

Important: You must have your own Flocker installation running before you can use it.

See the Flocker example for more details.
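A minimal sketch, with a placeholder dataset name:

```yaml
volumes:
- name: flocker-volume
  flocker:
    datasetName: my-flocker-dataset   # placeholder; created beforehand via the Flocker CLI or API
```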
A `glusterfs` volume allows a Glusterfs (an open source networked filesystem) volume to be mounted into your pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of a `glusterfs` volume are preserved and the volume is merely unmounted. This means that a `glusterfs` volume can be pre-populated with data, and that data can be “handed off” between pods. GlusterFS can be mounted by multiple writers simultaneously.

Important: You must have your own GlusterFS installation running before you can use it.

See the GlusterFS example for more details.
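A sketch of a `glusterfs` volume; the endpoints object and Gluster volume name below are placeholders:

```yaml
volumes:
- name: glusterfs-volume
  glusterfs:
    endpoints: glusterfs-cluster   # placeholder Endpoints object listing the Gluster servers
    path: kube_vol                 # placeholder GlusterFS volume name
    readOnly: true
```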
An `rbd` volume allows a Rados Block Device volume to be mounted into your pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of an `rbd` volume are preserved and the volume is merely unmounted. This means that an RBD volume can be pre-populated with data, and that data can be “handed off” between pods.

Important: You must have your own Ceph installation running before you can use RBD.

A feature of RBD is that it can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a volume with your dataset and then serve it in parallel from as many pods as you need. Unfortunately, RBD volumes can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed.

See the RBD example for more details.
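A sketch of an `rbd` volume, with placeholder monitor addresses and image name:

```yaml
volumes:
- name: rbd-volume
  rbd:
    monitors:
    - 10.16.154.78:6789    # placeholder Ceph monitor addresses
    - 10.16.154.82:6789
    pool: rbd
    image: foo             # placeholder RBD image name
    user: admin
    fsType: ext4
    readOnly: true         # multiple read-only consumers are allowed
```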
A `cephfs` volume allows an existing CephFS volume to be mounted into your pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of a `cephfs` volume are preserved and the volume is merely unmounted. This means that a CephFS volume can be pre-populated with data, and that data can be “handed off” between pods. CephFS can be mounted by multiple writers simultaneously.

Important: You must have your own Ceph server running with the share exported before you can use it.

See the CephFS example for more details.
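A sketch of a `cephfs` volume, with placeholder monitor addresses:

```yaml
volumes:
- name: cephfs-volume
  cephfs:
    monitors:
    - 10.16.154.78:6789   # placeholder Ceph monitor addresses
    - 10.16.154.82:6789
    user: admin
    readOnly: false       # CephFS supports multiple simultaneous writers
```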
A `gitRepo` volume is an example of what can be done as a volume plugin. It mounts an empty directory and clones a git repository into it for your pod to use. In the future, such volumes may be moved to an even more decoupled model, rather than extending the Kubernetes API for every such use case.

Here is an example of a `gitRepo` volume:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: server
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /mypath
      name: git-volume
  volumes:
  - name: git-volume
    gitRepo:
      repository: "git@somewhere:me/my-git-repository.git"
      revision: "22f1d8406d464b0c0874075539c1f2e96c253775"
```
A `secret` volume is used to pass sensitive information, such as passwords, to pods. You can store secrets in the Kubernetes API and mount them as files for use by pods without coupling to Kubernetes directly. `secret` volumes are backed by tmpfs (a RAM-backed filesystem) so they are never written to non-volatile storage.

Important: You must create a secret in the Kubernetes API before you can use it.

Secrets are described in more detail here.
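A minimal sketch, assuming a secret named `mysecret` already exists in the API:

```yaml
volumes:
- name: secret-volume
  secret:
    secretName: mysecret   # each key in the secret becomes a file in the mount
```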
A `persistentVolumeClaim` volume is used to mount a PersistentVolume into a pod. PersistentVolumes are a way for users to “claim” durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment.

See the PersistentVolumes example for more details.
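A minimal sketch, assuming a claim named `myclaim` already exists:

```yaml
volumes:
- name: pvc-volume
  persistentVolumeClaim:
    claimName: myclaim   # binds the pod to whatever PersistentVolume satisfied the claim
```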
A `downwardAPI` volume is used to make downward API data available to applications. It mounts a directory and writes the requested data in plain text files.

See the `downwardAPI` volume example for more details.
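For instance, a sketch that exposes the pod's labels as a file named `labels` (the volume name is illustrative):

```yaml
volumes:
- name: podinfo
  downwardAPI:
    items:
    - path: "labels"            # file created inside the mount
      fieldRef:
        fieldPath: metadata.labels
```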
A `FlexVolume` enables users to mount vendor volumes into a pod. It expects vendor drivers to be installed in the volume plugin path on each kubelet node. This is an alpha feature and may change in the future.

More details can be found here.
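As a rough sketch (this is an alpha API, so it may change; the driver name and options below are placeholders):

```yaml
volumes:
- name: flex-volume
  flexVolume:
    driver: "vendor/driver"   # placeholder vendor driver installed on each node
    fsType: ext4
    options:
      volumeID: "vol1"        # placeholder driver-specific options
```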
An `AzureFileVolume` is used to mount a Microsoft Azure File Volume (SMB 2.1 and 3.0) into a Pod.

More details can be found here.
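A sketch, assuming a secret holding the Azure storage account name and key, plus a placeholder share name:

```yaml
volumes:
- name: azure-volume
  azureFile:
    secretName: azure-secret   # placeholder secret with the storage account credentials
    shareName: myshare         # placeholder Azure File share
    readOnly: false
```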
A `vsphereVolume` is used to mount a vSphere VMDK volume into your Pod. The contents of a volume are preserved when it is unmounted.

Important: You must create a VMDK volume using `vmware-vdiskmanager -c` or the vSphere API before you can use it.

Before you can use a vSphere volume with a pod, you need to create it:

```shell
vmware-vdiskmanager -c -t 0 -s 40GB -a lsilogic myDisk.vmdk
```
Example pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-vmdk
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-vmdk
      name: test-volume
  volumes:
  - name: test-volume
    # This VMDK volume must already exist.
    vsphereVolume:
      volumePath: myDisk
      fsType: ext4
```
The storage media (disk, SSD, etc.) of an `emptyDir` volume is determined by the medium of the filesystem holding the kubelet root dir (typically `/var/lib/kubelet`). There is no limit on how much space an `emptyDir` or `hostPath` volume can consume, and no isolation between containers or between pods.

In the future, we expect that `emptyDir` and `hostPath` volumes will be able to request a certain amount of space using a resource specification, and to select the type of media to use, for clusters that have several media types.