Introduction
From the Docker Documentation:
Docker is a platform for developers and sysadmins to develop, deploy, and run applications with containers. The use of Linux containers to deploy applications is called containerization. Containers are not new, but their use for easily deploying applications is.
Containerization is increasingly popular because containers are:
- Flexible: Even the most complex applications can be containerized.
- Lightweight: Containers leverage and share the host kernel.
- Interchangeable: You can deploy updates and upgrades on-the-fly.
- Portable: You can build locally, deploy to the cloud, and run anywhere.
- Scalable: You can increase and automatically distribute container replicas.
- Stackable: You can stack services vertically and on-the-fly.
Images
An image is a collection of files (a package) that is executable. It has all the files necessary to run an application, from dependencies to configuration files.
It is analogous to a class from object-oriented programming.
Containers
“A container is a runtime instance of an image,” or in other words, it is an instance of a class. One image can be used to “spin up” many containers. A container is what an image becomes (in the computer’s memory) when it is launched. It is a user process with a state and need for access to resources.
Docker being a “machine” of sorts, it has its own processes, and those processes, being instances of images, are containers.
Just as you would in Linux, you can see a list of your running containers by issuing (sudo) docker ps (more on that soon).
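For example, with nothing running you will see just the header row (illustrative output):

$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES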
Comparison to Virtual Machines
A container runs natively on Linux and shares the kernel of the host machine with other containers. It runs a discrete process, taking no more memory than any other executable, making it lightweight.
By contrast, a virtual machine (VM) runs a full-blown “guest” operating system with virtual access to host resources through a hypervisor. In general, VMs provide an environment with more resources than most applications need.
Here is a Virtual Machine:
(diagram: applications running on a Guest OS, which runs on a hypervisor atop the Host OS and infrastructure)
And here is a Container:
(diagram: applications running as containers managed by Docker directly atop the Host OS and infrastructure)
Notice how Docker sits atop the Host Operating System 1, managing resources among applications in place of the “hypervisor” a Virtual Machine would use.
Documentation
Here is the Cheatsheet from which I will be pulling much of what is in this post.
Install
Linux
Quick and easy install script provided by Docker:
curl -sSL https://get.docker.com/ | sh
From the author of the cheatsheet:
If you’re not willing to run a random shell script, please see the installation instructions for your distribution. If you are a complete Docker newbie, you should follow the series of tutorials now.
macOS
Download and install Docker Community Edition. If you have Homebrew-Cask, just type brew cask install docker.
Once you’ve installed Docker Community Edition, click the docker icon in Launchpad. Then start up a container:
docker run hello-world
Note: You may have to restart your shell session (either by closing and re-launching a new one, or logging out of your remote server connection with Ctrl-D and logging back in with ssh). This is what I had to do.
If successful, you should see a “Hello from Docker!” printout in your console.
And that’s it, you have a running Docker container (this one comes with the install).
However, we are not quite done yet. Let’s get to some configuration…
Windows
Go deal with it yourself. It’s similar to the Desktop version available for Mac, but comes with all sorts of caveats you should read through first. It should be fairly straightforward for Windows 10 users. My suggestion is to simply go with Linux, since our focus is on using Docker for deploying to servers, which are unlikely to be running Windows for the use-cases we have in mind. That said, it should be doable if you insist on it.
Configure
One thing you may notice is that docker commands require the use of sudo, which we would like to avoid.
To avoid permission errors (and the use of sudo), add your user to the docker group.
Post-Installation Steps contains optional procedures for configuring Linux hosts to work better with Docker. The following is taken from that source, and much more Troubleshooting Information can be found there.
Permissions
To create the docker group and add your user:
- Create the docker group.
- Add your user to the docker group.
- Log out and log back in so that your group membership is re-evaluated. Some caveats may apply. 2
sudo groupadd docker
sudo usermod -aG docker $USER
Verify that you can run docker commands without sudo.
docker run hello-world
This command downloads a test image and runs it in a container. When the container runs, it prints an informational message and exits.
If you initially ran Docker CLI commands using sudo before adding your user to the docker group, you may see the following error, which indicates that your ~/.docker/ directory was created with incorrect permissions due to the sudo commands:

WARNING: Error loading config file: /home/user/.docker/config.json - stat /home/user/.docker/config.json: permission denied

To fix this problem, either remove the ~/.docker/ directory (it is recreated automatically, but any custom settings are lost), or change its ownership and permissions using the following commands:

sudo chown "$USER":"$USER" /home/"$USER"/.docker -R
sudo chmod g+rwx "$HOME/.docker" -R
Start on boot
Sometimes you want Docker to be the main thing running on a server, and therefore started automatically on boot (for the occasional restart). This is often desirable for servers that host critical processes in Docker.
Most current Linux distributions (RHEL, CentOS, Fedora, Ubuntu 16.04 and higher) use systemd to manage which services start when the system boots. 3
sudo systemctl enable docker
To disable this behavior, use disable instead.
sudo systemctl disable docker
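To confirm the current boot behavior, you can ask systemd directly (standard systemctl subcommands, nothing Docker-specific):

systemctl is-enabled docker   # prints "enabled" or "disabled"
systemctl status docker       # shows whether the daemon is currently running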
Re-route IP
By default, the Docker daemon listens for connections on a UNIX socket to accept requests from local clients.
It is possible to allow Docker to accept requests from remote hosts by configuring it to listen on an IP address and port as well as the UNIX socket.
For more detailed information on this configuration option take a look at “Bind Docker to another host/port or a unix socket” section of the Docker CLI Reference article.
Security Notice: Before configuring Docker to accept connections from remote hosts it is critically important that you understand the security implications of opening docker to the network. If steps are not taken to secure the connection, it is possible for remote non-root users to gain root access on the host. For more information on how to use TLS certificates to secure this connection, check this article on how to protect the Docker daemon socket.
Configuring Docker to accept remote connections can be done with the docker.service systemd unit file for Linux distributions using systemd.
Use the command sudo systemctl edit docker.service to open an override file for docker.service in a text editor.
Add or modify the following lines, substituting your own values.
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:2375
Save the file.
Reload the systemctl configuration and restart Docker.
sudo systemctl daemon-reload
sudo systemctl restart docker.service
Check to see whether the change was honored by reviewing the output of netstat to confirm dockerd is listening on the configured port, which should look similar to:

$ sudo netstat -lntp | grep dockerd
tcp        0      0 127.0.0.1:2375          0.0.0.0:*               LISTEN      3758/dockerd
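Once the daemon is listening on TCP, a client can target it explicitly. A minimal sketch, assuming the daemon is bound to 127.0.0.1:2375 as in the override file above:

# one-off, via the -H flag
docker -H tcp://127.0.0.1:2375 ps
# or for the whole shell session
export DOCKER_HOST=tcp://127.0.0.1:2375
docker ps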
To enable IPv6 on the Docker daemon, see Enable IPv6 support.
Troubleshooting
More troubleshooting information can be found in the Troubleshooting section of the Post-Install documentation page.
Here we attempt to address just a couple of the most common things that we may have to do.
IP Forwarding
If you manually configure your network using systemd-network with systemd version 219 or higher, Docker containers may not be able to access your network.
Beginning with systemd version 220, the forwarding setting for a given network (net.ipv4.conf.<interface>.forwarding) defaults to off.
This setting prevents IP forwarding.
It also conflicts with Docker’s behavior of enabling the net.ipv4.conf.all.forwarding setting within containers.
To work around this on RHEL, CentOS, or Fedora, edit the <interface>.network file in /usr/lib/systemd/network/ on your Docker host (e.g., /usr/lib/systemd/network/80-container-host0.network) and add the following block within the [Network] section:

[Network]
...
IPForward=kernel
# OR
IPForward=true
...
This configuration allows IP forwarding from the container as expected.
Limiting
You may see
WARNING: Your kernel does not support swap limit capabilities. Limitation discarded.
This warning does not occur on RPM-based systems, which enable these capabilities by default. If you don’t need these capabilities, you can ignore the warning.
You can enable these capabilities on Ubuntu or Debian by following these instructions. Memory and swap accounting incur an overhead of about 1% of the total available memory and a 10% overall performance degradation, even if Docker is not running.
1. Log into the Ubuntu or Debian host as a user with sudo privileges.
2. Edit the /etc/default/grub file. Add or edit the GRUB_CMDLINE_LINUX line to add the following two key-value pairs, then save and close the file:
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
3. Update GRUB:
sudo update-grub
If your GRUB configuration file has incorrect syntax, an error occurs; in that case, repeat steps 2 and 3. The changes take effect when the system is rebooted.
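After rebooting, you can double-check that the kernel picked up the flags: docker info should no longer print the swap-limit warning (a quick sanity check, not an official verification procedure):

docker info 2>&1 | grep -i 'swap limit'
# no output means the warning is gone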
Security
This is where security tips about Docker go. The Docker security page goes into more detail.
First things first: Docker runs as root.
If you are in the docker group, you effectively have root access.
If you expose the docker UNIX socket to a container, you are giving the container root access to the host.
Docker should not be your only defense. You should secure and harden it.
The security tips that follow are useful if you’ve hardened containers before, but they are not a substitute for understanding. 4
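To make the root-equivalence point concrete, here is a minimal sketch: any user in the docker group can read files that normally require root, simply by bind-mounting them into a container (alpine is just an arbitrary small image):

# no sudo needed once you are in the docker group
docker run --rm -v /etc/shadow:/tmp/shadow:ro alpine cat /tmp/shadow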
Security Tips
For greatest security, you want to run Docker inside a virtual machine. (Source: Docker Security Team Lead slides / notes.)
Then, run with AppArmor / seccomp / SELinux / grsec etc to limit the container permissions. See the Docker 1.10 security features for more details.
Docker image ids are sensitive information and should not be exposed to the outside world. Treat them like passwords.
See the Docker Security Cheat Sheet by Thomas Sjögren: some good stuff about container hardening in there.
Check out the docker bench security script for a security benchmark.
Download the white papers and subscribe to the mailing lists (unfortunately Docker does not have a dedicated security mailing list, only dev / user). To begin with, see this (foot)note from the cheatsheet 5.
Deployment
Making Docker Safe for Production
Since Docker 1.11, you can easily limit the number of active processes running inside a container to prevent fork bombs.
This requires a Linux kernel >= 4.3 with CGROUP_PIDS=y in the kernel configuration.
docker run --pids-limit=64
Also available since docker 1.11 is the ability to prevent processes from gaining new privileges.
This feature has been in the Linux kernel since version 3.5. You can read more about it in this blog post.
docker run --security-opt=no-new-privileges
From the Docker Security Cheat Sheet (it’s in PDF, which makes it hard to use, so it is copied below) by Container Solutions:
Be aware that the following may affect the performance of your applications in unexpected ways if you are not sure what communication requirements your applications have. Proceed with caution, and refer to the presentation mentioned above.
Turn off interprocess communication with:
docker -d --icc=false --iptables

Set the container to be read-only:

docker run --read-only

Verify images with a hashsum:

docker pull debian@sha256:a25306f3850e1bd44541976aa7b5fd0a29be

Set volumes to be read-only:

docker run -v $(pwd)/secrets:/secrets:ro debian

Define and run a user in your Dockerfile so you don’t run as root inside the container:

RUN groupadd -r user && useradd -r -g user user
USER user
Port-Exposal
Exposing a container’s incoming ports through the host is fiddly but doable.
This is done by mapping the container port to the host port (only using localhost interface) using -p:
docker run -p 127.0.0.1:$HOSTPORT:$CONTAINERPORT --name CONTAINER -t someimage
You can tell Docker that the container listens on the specified network ports at runtime by using EXPOSE:
EXPOSE <CONTAINERPORT>
Note that EXPOSE does not expose the port itself – only -p will do that. To expose the container’s port on your localhost’s port:
iptables -t nat -A DOCKER -p tcp --dport <LOCALHOSTPORT> -j DNAT --to-destination <CONTAINERIP>:<PORT>
If you’re running Docker in Virtualbox, you then need to forward the port there as well, using forwarded_port. Define a range of ports in your Vagrantfile like this so you can dynamically map them:
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
...
(49000..49900).each do |port|
config.vm.network :forwarded_port, :host => port, :guest => port
end
...
end
If you forget what you mapped the port to on the host container, use docker port to show it:
docker port CONTAINER $CONTAINERPORT
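For instance (a quick sketch; the nginx image and the name web are arbitrary examples):

docker run -d -p 127.0.0.1:8080:80 --name web nginx
docker port web 80
# 127.0.0.1:8080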
More
User Namespaces
There’s also work on user namespaces – it is in 1.10 but is not enabled by default.
To enable user namespaces (“remap the userns”) in Ubuntu 15.10, follow the blog example.
The Security Roadmap
The Docker roadmap talks about seccomp support. There is an AppArmor policy generator called bane, and they’re working on security profiles.
Security Videos
- Using Docker Safely
- Securing your applications using Docker
- Container security: Do containers actually contain?
- Linux Containers: Future or Fantasy?
Cheat Sheet
The following is the full Cheat Sheet mentioned earlier, presented here for convenience; it should be easier to navigate here than on the Github Gist 6.
Images
Images are just templates for docker containers.
Lifecycle
- `docker images` shows all images.
- `docker import` creates an image from a tarball.
- `docker build` creates an image from a Dockerfile.
- `docker commit` creates an image from a container, pausing it temporarily if it is running.
- `docker rmi` removes an image.
- `docker load` loads an image from a tar archive as STDIN, including images and tags (as of 0.7).
- `docker save` saves an image to a tar archive stream to STDOUT with all parent layers, tags & versions (as of 0.7).
Info
- `docker history` shows the history of an image.
- `docker tag` tags an image to a name (local or registry).
Cleaning up
While you can use the docker rmi command to remove specific images, there’s a tool called docker-gc that will safely clean up images that are no longer used by any containers.
Load/Save image
Load an image from file:
docker load < my_image.tar.gz
Save an existing image:
docker save my_image:my_tag | gzip > my_image.tar.gz
Layers
The versioned filesystem in Docker is based on layers. They’re like git commits or changesets for filesystems.
Eventually you’ll want to make changes to an existing image, and will find yourself manipulating the Dockerfile that defines the build and configuration of the image.
You can think of this as mimicking the keystrokes a user would have to enter in order to set up each application on a fresh computer.
Pretty much every image we’ll be interested in has its origin in some version of a Linux distribution, on top of which a number of commands are run to define the files necessary for the use-case.
Since each “version” of an image is an entire filesystem, building one version of an image based on a previous one can lead to lots of unnecessary files being tracked.
As stated in the Dockerfile section, the command RUN executes any commands in a new layer on top of the current image, and commits the results.
This “on top” part is especially important to understand, and several things can be done to keep subsequent changes to an image relatively “lightweight.”
For example, make sure to clean up the APT repositories.
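A minimal sketch of what that looks like in a Dockerfile (the Debian base image and the curl package are arbitrary choices for illustration): by cleaning up in the same RUN instruction that installs packages, the package lists never make it into the committed layer.

FROM debian:stable-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*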
Containers
Your basic isolated Docker process. Containers are to Virtual Machines as threads are to processes. Or you can think of them as chroots on steroids.
Lifecycle
- `docker create` creates a container but does not start it.
- `docker rename` allows the container to be renamed.
- `docker run` creates and starts a container in one operation.
- `docker rm` deletes a container.
- `docker update` updates a container’s resource limits.
Normally, if you run a container without options, it will start and stop immediately. If you want to keep it running, use `docker run -td container_id`: the `-t` option allocates a pseudo-TTY session, and `-d` detaches the container automatically (runs the container in the background and prints the container ID).
If you want a transient container, docker run --rm will remove the container after it stops.
If you want to map a directory on the host to a docker container, docker run -v $HOSTDIR:$DOCKERDIR. Also see Volumes.
If you also want to remove the volumes associated with the container, include the `-v` switch when deleting it, as in `docker rm -v`.
There’s also a logging driver available for individual containers in docker 1.10. To run docker with a custom log driver (i.e., to syslog), use docker run --log-driver=syslog.
Another useful option is `docker run --name yourname docker_image`: when you specify `--name` in the run command, you can start and stop the container by calling it by the name you gave it when you created it.
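A quick sketch of that workflow (the image and the name are arbitrary):

docker run -d --name mydb redis
docker stop mydb
docker start mydb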
Starting and Stopping
- `docker start` starts a container so it is running.
- `docker stop` stops a running container.
- `docker restart` stops and starts a container.
- `docker pause` pauses a running container, “freezing” it in place.
- `docker unpause` will unpause a running container.
- `docker wait` blocks until a running container stops.
- `docker kill` sends a SIGKILL to a running container.
- `docker attach` will connect to a running container.
If you want to integrate a container with a host process manager, start the daemon with -r=false then use docker start -a.
If you want to expose container ports through the host, see the exposing ports section.
Restart policies on crashed docker instances are covered here.
Info
- `docker ps` shows running containers.
- `docker logs` gets logs from a container. (You can use a custom log driver, but logs are only available for `json-file` and `journald` in 1.10.)
- `docker inspect` looks at all the info on a container (including IP address).
- `docker events` gets events from a container.
- `docker port` shows the public-facing port of a container.
- `docker top` shows running processes in a container.
- `docker stats` shows containers’ resource usage statistics.
- `docker diff` shows changed files in the container’s filesystem.
docker ps -a shows running and stopped containers.
docker stats --all shows a running list of containers.
Import / Export
- `docker cp` copies files or folders between a container and the local filesystem.
- `docker export` turns a container filesystem into a tarball archive stream to STDOUT.
Executing Commands
- `docker exec` executes a command in a container.
To enter a running container, attach a new shell process to it. For a running container called foo, use: `docker exec -it foo /bin/bash`.
Container Import/Export
Import a container as an image from file:
cat my_container.tar.gz | docker import - my_image:my_tag
Export an existing container:
docker export my_container | gzip > my_container.tar.gz
The difference between loading a saved image and importing an exported container as an image
Loading an image using the load command creates a new image including its history.
Importing a container as an image using the import command creates a new image, excluding the history, which results in a smaller image size compared to loading an image.
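You can see the difference with docker history (a sketch; the image names are placeholders):

docker history my_image:my_tag        # loaded image: full layer history
docker history my_flat_image:my_tag   # imported image: a single squashed layer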
Management
CPU Constraints
You can limit CPU, either using a percentage of all CPUs, or by using specific cores.
For example, you can use the `cpu-shares` setting. The setting is a bit strange: 1024 means 100% of the CPU, so if you want the container to take 50% of all CPU cores, you should specify 512. See https://goldmann.pl/blog/2014/09/11/resource-management-in-docker/#_cpu for more:
docker run -ti -c 512 agileek/cpuset-test
You can also restrict the container to specific CPU cores using `cpuset-cpus`. See https://agileek.github.io/docker/2014/08/06/docker-cpuset/ for details and some nice videos:
docker run -ti --cpuset-cpus=0,4,6 agileek/cpuset-test
Note that Docker can still see all of the CPUs inside the container – it just isn’t using all of them. See https://github.com/docker/docker/issues/20770 for more details.
Memory Constraints
You can also set memory constraints on Docker:
docker run -it -m 300M ubuntu:14.04 /bin/bash
Capabilities
Linux capabilities can be set by using cap-add and cap-drop. See https://docs.docker.com/engine/reference/run/#/runtime-privilege-and-linux-capabilities for details.
This should be used for greater security.
docker run --rm -it --cap-add SYS_ADMIN --device /dev/fuse sshfs
Give access to a single device:
docker run -it --device=/dev/ttyUSB0 debian bash
Give access to all devices:
docker run -it --privileged -v /dev/bus/usb:/dev/bus/usb debian bash
more info about privileged containers here
Dockerfile
Sets up a Docker container when you run docker build on it. Vastly preferable to docker commit.
Here are some common text editors and their syntax highlighting modules you could use to create Dockerfiles:
- Sublime Text 2
- Atom
- Vim
- Emacs
- Also see Docker meets the IDE
Instructions
- `.dockerignore`
- `FROM` sets the Base Image for subsequent instructions.
- `MAINTAINER` (deprecated, use `LABEL` instead) sets the Author field of the generated images.
- `RUN` executes any commands in a new layer on top of the current image and commits the results.
- `CMD` provides defaults for an executing container.
- `EXPOSE` informs Docker that the container listens on the specified network ports at runtime. NOTE: does not actually make ports accessible.
- `ENV` sets an environment variable.
- `ADD` copies new files, directories, or remote files to the container. Invalidates caches. Avoid `ADD` and use `COPY` instead.
- `COPY` copies new files or directories to the container. Note that this only copies as root, so you have to chown manually regardless of your USER / WORKDIR setting. See https://github.com/moby/moby/issues/30110
- `ENTRYPOINT` configures a container that will run as an executable.
- `VOLUME` creates a mount point for externally mounted volumes or other containers.
- `USER` sets the user name for following `RUN` / `CMD` / `ENTRYPOINT` commands.
- `WORKDIR` sets the working directory.
- `ARG` defines a build-time variable.
- `ONBUILD` adds a trigger instruction for when the image is used as the base of another build.
- `STOPSIGNAL` sets the system call signal that will be sent to the container to exit.
- `LABEL` applies key/value metadata to your images, containers, or daemons.
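To see several of these instructions working together, here is a minimal, hedged example of a Dockerfile (the base image, user, port, and start command are all placeholders for your application):

FROM ubuntu:16.04
LABEL maintainer="you@example.com"
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . $APP_HOME
RUN groupadd -r app && useradd -r -g app app && chown -R app:app $APP_HOME
USER app
EXPOSE 8000
CMD ["./start.sh"]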
Tutorial
Examples
- Examples
- Best practices for writing Dockerfiles
- Michael Crosby has some more Dockerfiles best practices / take 2.
- Building Good Docker Images / Building Better Docker Images
- Managing Container Configuration with Metadata
- How to write excellent Dockerfiles
Networks
Docker has a networks feature. Not much is known about it, so this is a good place to expand the cheat sheet. There is a note saying that it’s a good way to configure docker containers to talk to each other without using ports. See working with networks for more details.
Lifecycle
- `docker network create`
- `docker network rm`
Info
- `docker network ls`
- `docker network inspect`
Connection
- `docker network connect`
- `docker network disconnect`
You can specify a specific IP address for a container:
# create a new bridge network with your subnet and gateway for your ip block
$ docker network create --subnet 203.0.113.0/24 --gateway 203.0.113.254 iptastic
# run a nginx container with a specific ip in that block
$ docker run --rm -it --net iptastic --ip 203.0.113.2 nginx
# curl the ip from any other place (assuming this is a public ip block duh)
$ curl 203.0.113.2
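Containers attached to the same user-defined network can also reach each other by name, without publishing any ports (a sketch; the redis and alpine images are arbitrary):

docker network create appnet
docker run -d --net appnet --name db redis
docker run --rm --net appnet alpine ping -c 1 db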
Registries
Registries v. Repositories
A repository is a hosted collection of tagged images that together create the file system for a container.
A registry is a host – a server that stores repositories and provides an HTTP API for managing the uploading and downloading of repositories.
Docker.com hosts its own index to a central registry which contains a large number of repositories.
Having said that, the central docker registry does not do a good job of verifying images and should be avoided if you’re worried about security.
- `docker login` to login to a registry.
- `docker logout` to logout from a registry.
- `docker search` searches a registry for an image.
- `docker pull` pulls an image from a registry to the local machine.
- `docker push` pushes an image to a registry from the local machine.
Run local registry
You can run a local registry by using the docker distribution project and looking at the local deploy instructions.
Also see the mailing list.
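A minimal sketch of standing up a local registry with the official registry image (port 5000 and the :2 tag are the conventional defaults; verify against the deploy instructions linked above):

docker run -d -p 5000:5000 --name registry registry:2
docker tag my_image localhost:5000/my_image
docker push localhost:5000/my_image
docker pull localhost:5000/my_image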
Links
Links are how Docker containers talk to each other through TCP/IP ports. Linking into Redis and Atlassian show worked examples. You can also resolve links by hostname.
This has been deprecated to some extent by user-defined networks.
NOTE: If you want containers to ONLY communicate with each other through links, start the docker daemon with -icc=false to disable inter process communication.
If you have a container with the name CONTAINER (specified by docker run --name CONTAINER) and in the Dockerfile, it has an exposed port:
EXPOSE 1337
Then if we create another container called LINKED like so:
docker run -d --link CONTAINER:ALIAS --name LINKED user/wordpress
Then the exposed ports and aliases of CONTAINER will show up in LINKED with the following environment variables:
$ALIAS_PORT_1337_TCP_PORT
$ALIAS_PORT_1337_TCP_ADDR
And you can connect to it that way.
To delete links, use docker rm --link.
Generally, linking between docker services is a subset of “service discovery”, a big problem if you’re planning to use Docker at scale in production. Please read The Docker Ecosystem: Service Discovery and Distributed Configuration Stores for more info.
Volumes
Docker volumes are free-floating filesystems. They don’t have to be connected to a particular container. You should use volumes mounted from data-only containers for portability.
Lifecycle
- `docker volume create`
- `docker volume rm`
Info
- `docker volume ls`
- `docker volume inspect`
Volumes are useful in situations where you can’t use links (which are TCP/IP only). For instance, if you need to have two docker instances communicate by leaving stuff on the filesystem.
You can mount them in several docker containers at once, using docker run --volumes-from.
Because volumes are isolated filesystems, they are often used to store state from computations between transient containers. That is, you can have a stateless and transient container run from a recipe, blow it away, and then have a second instance of the transient container pick up from where the last one left off.
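A sketch of that pattern with a throwaway data-only container (busybox is an arbitrary small image; the names are placeholders):

# data-only container that just holds the /shared volume
docker run -v /shared --name data busybox true
# first transient container writes some state
docker run --rm --volumes-from data busybox sh -c 'echo step-1 > /shared/state'
# a second transient container picks up where the first left off
docker run --rm --volumes-from data busybox cat /shared/state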
See advanced volumes for more details. Container42 is also helpful.
You can map MacOS host directories as docker volumes:
docker run -v /Users/wsargent/myapp/src:/src
You can use remote NFS volumes if you’re feeling brave.
You may also consider running data-only containers as described here to provide some data portability.
Be aware that you can mount files as volumes.
Useful Commands/Tips
Versions
It is very important to know which version of Docker you are running at any point in time. This tells you which features are available to you, and which template containers from the Docker store are compatible with your setup. That said, let’s see how to check which version of Docker is currently running:
- `docker version` checks what version of Docker you have running.

Usage:
docker version [OPTIONS]
Get the server version
$ docker version --format '{{.Server.Version}}'
1.8.0
Dump raw JSON data
$ docker version --format '{{json .}}'
{"Client":{"Version":"1.8.0","ApiVersion":"1.20","GitCommit":"f5bae0a","GoVersion":"go1.4.2","Os":"linux","Arch":"am"}
Basics
Get IP Address
docker inspect $(dl) | grep -wm1 IPAddress | cut -d '"' -f 4
or install jq:
docker inspect $(dl) | jq -r '.[0].NetworkSettings.IPAddress'
or using a go template:
docker inspect -f '{{ .NetworkSettings.IPAddress }}' <container_name>
or when building an image from Dockerfile, when you want to pass in a build argument:
DOCKER_HOST_IP=`ifconfig | grep -E "([0-9]{1,3}\.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d: | head -n1`
echo DOCKER_HOST_IP = $DOCKER_HOST_IP
docker build \
--build-arg ARTIFACTORY_ADDRESS=$DOCKER_HOST_IP \
-t sometag \
some-directory/
Get Port Mapping
docker inspect -f '{{range $p, $conf := .NetworkSettings.Ports}} {{$p}} -> {{(index $conf 0).HostPort}} {{end}}' <containername>
General
Find containers using regular expression:
for i in $(docker ps -a | grep "REGEXP_PATTERN" | cut -f1 -d" "); do echo $i; done
Get environment settings
docker run --rm ubuntu env
Kill running containers
docker kill $(docker ps -q)
Delete all containers (force!! running or stopped containers)
docker rm -f $(docker ps -qa)
Delete old containers
docker ps -a | grep 'weeks ago' | awk '{print $1}' | xargs docker rm
Delete stopped containers
docker rm -v $(docker ps -a -q -f status=exited)
Delete containers after stopping
docker stop $(docker ps -aq) && docker rm -v $(docker ps -aq)
Delete dangling images
docker rmi $(docker images -q -f dangling=true)
Delete all images
docker rmi $(docker images -q)
Delete dangling volumes
As of Docker 1.9:
docker volume rm $(docker volume ls -q -f dangling=true)
In 1.9.0, the filter dangling=false does not work - it is ignored and will list all volumes.
Show image dependencies
docker images -viz | dot -Tpng -o docker.png
df
docker system df presents a summary of the space currently used by different docker objects.
Heredoc Docker Container
docker build -t htop - << EOF
FROM alpine
RUN apk --no-cache add htop
EOF
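You can then run the freshly built image; sharing the host PID namespace is an optional extra so htop can see the host’s processes rather than only the container’s:

docker run --rm -it --pid=host htop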
Prune
The new Data Management Commands have landed as of Docker 1.13:
- `docker system prune`
- `docker volume prune`
- `docker network prune`
- `docker container prune`
- `docker image prune`
Last Ids
alias dl='docker ps -l -q'
docker run ubuntu echo hello world
docker commit $(dl) helloworld
Commit
with command (needs Dockerfile)
docker commit -run='{"Cmd":["postgres", "-too -many -opts"]}' $(dl) postgres
Monitoring
Monitor system resource utilization for running containers
To check the CPU, memory, and network I/O usage of a single container, you can use:
docker stats <container>
For all containers listed by id:
docker stats $(docker ps -q)
For all containers listed by name:
docker stats $(docker ps --format '{{.Names}}')
For all containers listed by image:
docker ps -a -f ancestor=ubuntu
Remove all untagged images
docker rmi $(docker images | grep "^<none>" | awk '{split($0,a," "); print a[3]}')
Remove container by a regular expression
docker ps -a | grep wildfly | awk '{print $1}' | xargs docker rm -f
Remove all exited containers
docker rm -f $(docker ps -a | grep Exit | awk '{ print $1 }')
Volumes can be files
Be aware that you can mount files as volumes. For example you can inject a configuration file like this:
# copy file from container
docker run --rm httpd cat /usr/local/apache2/conf/httpd.conf > httpd.conf
# edit file
vim httpd.conf
# start container with modified configuration
docker run --rm -ti -v "$PWD/httpd.conf:/usr/local/apache2/conf/httpd.conf:ro" -p "80:80" httpd
Efficiency
Cleaning
Clean APT in a `RUN` layer. 7

RUN {apt commands} \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

Flatten an image:

ID=$(docker run -d image-name /bin/bash)
docker export $ID | docker import - flat-image-name

For backup:

ID=$(docker run -d image-name /bin/bash)
(docker export $ID | gzip -c > image.tgz)
gzip -dc image.tgz | docker import - flat-image-name
Best Practices
This is where general Docker best practices and war stories go:
- The Rabbit Hole of Using Docker in Automated Tests
- Bridget Kromhout has a useful blog post on running Docker in production (2014) at Dramafever.
- There’s also a best practices blog post (2014) from Lyst.
- Building a Development Environment With Docker (2013)
- Discourse in a Docker Container (2013)
- In this tutorial, we will be using Linux since that is what almost every server runs, but as the principle of Docker is that it makes applications independent of platforms, everything herein should be applicable no matter what machine you are running. ^
- If testing on a virtual machine, it may be necessary to restart the virtual machine for changes to take effect. On a desktop Linux environment such as X Windows, log out of your session completely and then log back in. ^
- Ubuntu 14.10 and below use upstart. See the post-installation instructions for support. ^
- For an understanding of what containers leave exposed, you should read Understanding and Hardening Linux Containers by Aaron Grattafiori. This is a complete and comprehensive guide to the issues involved with containers, with a plethora of links and footnotes leading on to yet more useful content. ^
- You should start off by using a kernel with unstable patches for grsecurity / pax compiled in, such as Alpine Linux. If you are using grsecurity in production, you should spring for commercial support for the stable patches, same as you would do for RedHat. It’s $200 a month, which is nothing to your devops budget. ^
- The gist was scraped and mildly edited on 12/22/18, so it may behoove you to check the original source for any updates. If you find typos/corrections/updates that should be included below, please get in touch. ^
- This should be done in the same layer as other apt commands. Otherwise, the previous layers still persist the original information and your images will still be fat. ^