- Orientation and setup
- Download and install Docker
- Start the tutorial
- The Docker Dashboard
- What is a container?
- What is a container image?
- CLI references
- Docker overview
- The Docker platform
- What can I use Docker for?
- Docker architecture
- The Docker daemon
- The Docker client
- Docker registries
- Docker objects
- Images
- Containers
- The underlying technology
- Docker
- Installation
- Usage
- Configuration
- Storage driver
- Daemon socket
- HTTP Proxies
- Docker daemon proxy configuration
- Docker container proxy configuration
- Configuring DNS
- Images location
- Insecure registries
- IPv6
- User namespace isolation
- Docker rootless
- Images
- Arch Linux
- Alpine Linux
- CentOS
- Debian
- Distroless
- Run GPU accelerated Docker containers with NVIDIA GPUs
- With NVIDIA Container Toolkit (recommended)
- With NVIDIA Container Runtime
- With nvidia-docker (deprecated)
- Arch Linux image with CUDA
- Useful tips
- Remove Docker and images
- Troubleshooting
- docker0 Bridge gets no IP / no internet access in containers when using systemd-networkd
- Default number of allowed processes/threads too low
- Error initializing graphdriver: devmapper
- Failed to create some/path/to/file: No space left on device
- Docker-machine fails to create virtual machines using the virtualbox driver
- Starting Docker breaks KVM bridged networking
- Image pulls from Docker Hub are rate limited
- iptables (legacy): unknown option "--dport"
Orientation and setup
Welcome! We are excited that you want to learn Docker.
This page contains step-by-step instructions on how to get started with Docker. In this tutorial, you’ll learn how to:
- Build and run an image as a container
- Share images using Docker Hub
- Deploy Docker applications using multiple containers with a database
- Run applications using Docker Compose
You’ll also learn about best practices for building images, including how to scan your images for security vulnerabilities.
If you are looking for information on how to containerize an application using your favorite language, see Language-specific getting started guides.
We also recommend the video walkthrough from DockerCon 2020.
Download and install Docker
This tutorial assumes you have a current version of Docker installed on your machine. If you do not have Docker installed, choose your preferred operating system below to download Docker:
Start the tutorial
If you’ve already run the command to get started with the tutorial, congratulations! If not, open a command prompt or bash window, and run the command:
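```shell
docker run -d -p 80:80 docker/getting-started
```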
You’ll notice a few flags being used. Here’s some more info on them:
- -d — run the container in detached mode (in the background)
- -p 80:80 — map port 80 of the host to port 80 in the container
- docker/getting-started — the image to use
You can combine single character flags to shorten the full command. As an example, the command above could be written as:
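```shell
docker run -dp 80:80 docker/getting-started
```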
The Docker Dashboard
Before going too far, we want to highlight the Docker Dashboard, which gives you a quick view of the containers running on your machine. The Docker Dashboard is available for Mac and Windows. It gives you quick access to container logs, lets you get a shell inside the container, and lets you easily manage container lifecycle (stop, remove, etc.).
To access the dashboard, follow the instructions in the Docker Desktop manual. If you open the dashboard now, you will see this tutorial running! The container name (jolly_bouman below) is randomly generated, so yours will most likely be different.
What is a container?
Now that you’ve run a container, what is a container? Simply put, a container is another process on your machine that has been isolated from all other processes on the host machine. That isolation leverages kernel namespaces and cgroups, features that have been in Linux for a long time. Docker has worked to make these capabilities approachable and easy to use.
Creating containers from scratch
If you’d like to see how containers are built from scratch, Liz Rice from Aqua Security has a fantastic talk in which she creates a container from scratch in Go. While the container she builds is simple and the talk doesn’t go into networking, using images for the filesystem, and more, it gives a fantastic deep dive into how things work.
What is a container image?
When running a container, it uses an isolated filesystem. This custom filesystem is provided by a container image. Since the image contains the container’s filesystem, it must contain everything needed to run an application — all dependencies, configuration, scripts, binaries, etc. The image also contains other configuration for the container, such as environment variables, a default command to run, and other metadata.
We’ll dive deeper into images later on, covering topics such as layering, best practices, and more.
If you’re familiar with chroot, think of a container as an extended version of chroot. The filesystem comes from the image, but a container adds isolation that plain chroot does not provide.
CLI references
Refer to the following topics for further documentation on all CLI commands used in this article:
Docker overview
Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.
The Docker platform
Docker provides the ability to package and run an application in a loosely isolated environment called a container. The isolation and security allow you to run many containers simultaneously on a given host. Containers are lightweight and contain everything needed to run the application, so you do not need to rely on what is currently installed on the host. You can easily share containers while you work, and be sure that everyone you share with gets the same container that works in the same way.
Docker provides tooling and a platform to manage the lifecycle of your containers:
- Develop your application and its supporting components using containers.
- The container becomes the unit for distributing and testing your application.
- When you’re ready, deploy your application into your production environment, as a container or an orchestrated service. This works the same whether your production environment is a local data center, a cloud provider, or a hybrid of the two.
What can I use Docker for?
Fast, consistent delivery of your applications
Docker streamlines the development lifecycle by allowing developers to work in standardized environments using local containers which provide your applications and services. Containers are great for continuous integration and continuous delivery (CI/CD) workflows.
Consider the following example scenario:
- Your developers write code locally and share their work with their colleagues using Docker containers.
- They use Docker to push their applications into a test environment and execute automated and manual tests.
- When developers find bugs, they can fix them in the development environment and redeploy them to the test environment for testing and validation.
- When testing is complete, getting the fix to the customer is as simple as pushing the updated image to the production environment.
Responsive deployment and scaling
Docker’s container-based platform allows for highly portable workloads. Docker containers can run on a developer’s local laptop, on physical or virtual machines in a data center, on cloud providers, or in a mixture of environments.
Docker’s portability and lightweight nature also make it easy to dynamically manage workloads, scaling up or tearing down applications and services as business needs dictate, in near real time.
Running more workloads on the same hardware
Docker is lightweight and fast. It provides a viable, cost-effective alternative to hypervisor-based virtual machines, so you can use more of your compute capacity to achieve your business goals. Docker is perfect for high density environments and for small and medium deployments where you need to do more with fewer resources.
Docker architecture
Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface. Another Docker client is Docker Compose, which lets you work with applications consisting of a set of containers.
The Docker daemon
The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.
The Docker client
The Docker client (docker) is the primary way that many Docker users interact with Docker. When you use commands such as docker run, the client sends these commands to dockerd, which carries them out. The docker command uses the Docker API. The Docker client can communicate with more than one daemon.
Docker registries
A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can even run your own private registry.
When you use the docker pull or docker run commands, the required images are pulled from your configured registry. When you use the docker push command, your image is pushed to your configured registry.
Docker objects
When you use Docker, you are creating and using images, containers, networks, volumes, plugins, and other objects. This section is a brief overview of some of those objects.
Images
An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization. For example, you may build an image which is based on the ubuntu image, but installs the Apache web server and your application, as well as the configuration details needed to make your application run.
You might create your own images or you might only use those created by others and published in a registry. To build your own image, you create a Dockerfile with a simple syntax for defining the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only those layers which have changed are rebuilt. This is part of what makes images so lightweight, small, and fast, when compared to other virtualization technologies.
Containers
A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.
By default, a container is relatively well isolated from other containers and its host machine. You can control how isolated a container’s network, storage, or other underlying subsystems are from other containers or from the host machine.
A container is defined by its image as well as any configuration options you provide to it when you create or start it. When a container is removed, any changes to its state that are not stored in persistent storage disappear.
Example docker run command
The following command runs an ubuntu container, attaches interactively to your local command-line session, and runs /bin/bash .
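```shell
docker run -i -t ubuntu /bin/bash
```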
When you run this command, the following happens (assuming you are using the default registry configuration):
1. If you do not have the ubuntu image locally, Docker pulls it from your configured registry, as though you had run docker pull ubuntu manually.
2. Docker creates a new container, as though you had run a docker container create command manually.
3. Docker allocates a read-write filesystem to the container, as its final layer. This allows a running container to create or modify files and directories in its local filesystem.
4. Docker creates a network interface to connect the container to the default network, since you did not specify any networking options. This includes assigning an IP address to the container. By default, containers can connect to external networks using the host machine’s network connection.
5. Docker starts the container and executes /bin/bash. Because the container is running interactively and attached to your terminal (due to the -i and -t flags), you can provide input using your keyboard while the output is logged to your terminal.
6. When you type exit to terminate the /bin/bash command, the container stops but is not removed. You can start it again or remove it.
The underlying technology
Docker is written in the Go programming language and takes advantage of several features of the Linux kernel to deliver its functionality. Docker uses a technology called namespaces to provide the isolated workspace called the container. When you run a container, Docker creates a set of namespaces for that container.
These namespaces provide a layer of isolation. Each aspect of a container runs in a separate namespace and its access is limited to that namespace.
Docker
Docker is a utility to pack, ship and run any application as a lightweight container.
Installation
Install the docker package or, for the development version, the docker-git AUR package. Next, start and enable docker.service and verify operation:
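```shell
# systemctl enable --now docker.service
# docker info
```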
Note that starting the docker service may fail if you have an active VPN connection due to IP conflicts between the VPN and Docker’s bridge and overlay networks. If this is the case, try disconnecting the VPN before starting the docker service. You may reconnect the VPN immediately afterwards. You can also try to deconflict the networks (see solutions [1] or [2]).
Next, verify that you can run containers. The following command downloads the latest Arch Linux image and uses it to run a Hello World program within a container:
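```shell
# docker run -it --rm archlinux bash -c "echo hello world"
```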
If you want to be able to run the docker CLI command as a non-root user, add your user to the docker user group, re-login, and restart docker.service .
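For example:

```shell
# usermod -aG docker your_username   # "your_username" is a placeholder
```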
Usage
Docker consists of multiple parts:
- The Docker daemon (sometimes also called the Docker Engine), a process that runs as docker.service. It serves the Docker API and manages Docker containers.
- The docker CLI command, which allows users to interact with the Docker API via the command line and control the Docker daemon.
- Docker containers, which are namespaced processes that are started and managed by the Docker daemon as requested through the Docker API.
Typically, users interact with Docker by running docker CLI commands, which request the Docker daemon to perform actions that result in the management of Docker containers. Understanding the relationship between the client (docker), the server (docker.service) and the containers is important for successfully administering Docker.
Note that if the Docker daemon stops or restarts, all currently running Docker containers are also stopped or restarted.
Also note that it is possible to send requests to the Docker API and control the Docker daemon without the use of the docker CLI command. See the Docker API developer documentation for more information.
See the Docker Getting Started guide for more usage documentation.
Configuration
The Docker daemon can be configured either through a configuration file at /etc/docker/daemon.json or by adding command line flags to the docker.service systemd unit. According to the Docker official documentation, the configuration file approach is preferred. If you wish to use the command line flags instead, use systemd drop-in files to override the ExecStart directive in docker.service .
For more information about options in daemon.json see dockerd documentation.
Storage driver
The storage driver controls how images and containers are stored and managed on your Docker host. The default overlay2 driver has good performance and is a good choice for all modern Linux kernels and filesystems. There are a few legacy drivers such as devicemapper and aufs which were intended for compatibility with older Linux kernels, but these have no advantages over overlay2 on Arch Linux.
Users of btrfs or ZFS may use the btrfs or zfs drivers, each of which take advantage of the unique features of these filesystems. See the btrfs driver and zfs driver documentation for more information and step-by-step instructions.
The overlay2 driver may fail to load if you have just updated Arch Linux but have not yet rebooted.
Daemon socket
By default, the Docker daemon serves the Docker API using a Unix socket at /var/run/docker.sock . This is an appropriate option for most use cases.
It is possible to configure the Daemon to additionally listen on a TCP socket, which can allow remote Docker API access from other computers. This can be useful for allowing docker commands on a host machine to access the Docker daemon on a Linux virtual machine, such as an Arch virtual machine on a Windows or macOS system.
Note that docker.service sets the -H flag by default, and Docker will not start if an option is present in both the flags and the /etc/docker/daemon.json file. Therefore, the simplest way to change the socket settings is with a drop-in file, such as the following, which adds a TCP socket on port 4243:
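```ini
# /etc/systemd/system/docker.service.d/docker-tcp.conf (the file name is arbitrary)
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:4243
```

The empty ExecStart= line is required: it clears the unit’s original command before the override is applied.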
Reload the systemd daemon and restart docker.service to apply changes.
HTTP Proxies
There are two parts to configuring Docker to use an HTTP proxy: Configuring the Docker daemon and configuring Docker containers.
Docker daemon proxy configuration
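A common approach is a systemd drop-in that sets the proxy environment variables for the daemon process. A minimal sketch, assuming a proxy at proxy.example.com:3128 (a placeholder):

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
```

Reload the systemd daemon and restart docker.service afterwards.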
Docker container proxy configuration
See Docker documentation on configuring proxies for information on how to automatically configure proxies for all containers created using the docker CLI.
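As a sketch of what that configuration looks like (the proxy address is a placeholder), the client-side ~/.docker/config.json can define default proxy settings that are injected into new containers:

```json
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:3128",
      "httpsProxy": "http://proxy.example.com:3128",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}
```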
Configuring DNS
See Docker’s DNS documentation for the documented behavior of DNS within Docker containers and information on customizing DNS configuration. In most cases, the resolvers configured on the host are also configured in the container.
Most DNS resolvers hosted on 127.0.0.0/8 are not supported due to conflicts between the container and host network namespaces. Such resolvers are removed from the container’s /etc/resolv.conf. If this would result in an empty /etc/resolv.conf , Google DNS is used instead.
Additionally, a special case is handled if 127.0.0.53 is the only configured nameserver. In this case, Docker assumes the resolver is systemd-resolved and uses the upstream DNS resolvers from /run/systemd/resolve/resolv.conf .
If you are using a service such as dnsmasq to provide a local resolver, consider adding a virtual interface with a link local IP address in the 169.254.0.0/16 block for dnsmasq to bind to instead of 127.0.0.1 to avoid the network namespace conflict.
Images location
By default, docker images are located at /var/lib/docker . They can be moved to other partitions, e.g. if you wish to use a dedicated partition or disk for your images. In this example, we will move the images to /mnt/docker .
First, stop docker.service, which will also stop all currently running containers and unmount any running images. You may then move the images from /var/lib/docker to the target destination, e.g. cp -a /var/lib/docker /mnt/docker (the -a flag preserves ownership and permissions, which the image files require).
Configure data-root in /etc/docker/daemon.json :
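```json
{
  "data-root": "/mnt/docker"
}
```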
Restart docker.service to apply changes.
Insecure registries
If you decide to use a self signed certificate for your private registries, Docker will refuse to use it until you declare that you trust it. For example, to allow images from a registry hosted at myregistry.example.com:8443 , configure insecure-registries in the /etc/docker/daemon.json file:
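```json
{
  "insecure-registries": [
    "myregistry.example.com:8443"
  ]
}
```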
Restart docker.service to apply changes.
IPv6
In order to enable IPv6 support in Docker, you will need to do a few things. See [5] and [6] for details.
Firstly, enable the ipv6 setting in /etc/docker/daemon.json and set a specific IPv6 subnet. In this case, we will use the private fd00::/80 subnet. Make sure to use a subnet of at least 80 bits, as this allows a container’s IPv6 address to end with the container’s MAC address, which mitigates NDP neighbor cache invalidation issues:
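```json
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/80"
}
```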
Restart docker.service to apply changes.
Finally, to let containers access the host network, you need to resolve routing issues arising from the usage of a private IPv6 subnet. Add the IPv6 NAT in order to actually get some traffic:
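```shell
# ip6tables -t nat -A POSTROUTING -s fd00::/80 ! -o docker0 -j MASQUERADE
```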
Now Docker should be properly IPv6 enabled. To test it, you can run:
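For example (the image and target host are illustrative):

```shell
$ docker run --rm curlimages/curl -v -6 https://ipv6.google.com
```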
If you use firewalld, add an equivalent masquerade rule for the fd00::/80 subnet with firewall-cmd.
If you use ufw, you need to first enable IPv6 forwarding following Uncomplicated Firewall#Forward policy. Next, edit /etc/default/ufw and uncomment the relevant IPv6 lines. Then add the same ip6tables NAT rule shown above.
Note that for containers created with docker-compose, you may need to set enable_ipv6: true in the networks section for the corresponding network, and you may also need to configure the IPv6 subnet there. See [7] for details.
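A minimal sketch of such a network definition (the network name and subnet are illustrative):

```yaml
networks:
  app_net:
    enable_ipv6: true
    ipam:
      config:
        - subnet: "fd00:0:0:1::/64"
```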
User namespace isolation
By default, processes in Docker containers run within the same user namespace as the main dockerd daemon, i.e. containers are not isolated by the user_namespaces(7) feature. This allows the process within the container to access configured resources on the host according to Users and groups#Permissions and ownership. This maximizes compatibility, but poses a security risk if a container privilege escalation or breakout vulnerability is discovered that allows the container to access unintended resources on the host. (One such vulnerability was published and patched in February 2019.)
The impact of such a vulnerability can be reduced by enabling user namespace isolation. This runs each container in a separate user namespace and maps the UIDs and GIDs inside that user namespace to a different (typically unprivileged) UID/GID range on the host. Note that in the Docker implementation, user namespaces for all containers are mapped to the same UID/GID range on the host, otherwise sharing volumes between multiple containers would not be possible.
Configure userns-remap in /etc/docker/daemon.json . default is a special value that will automatically create a user and group named dockremap for use with remapping.
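```json
{
  "userns-remap": "default"
}
```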
Configure /etc/subuid and /etc/subgid with a username/group name, starting UID/GID and UID/GID range size to allocate to the remap user and group. This example allocates a range of 65536 UIDs and GIDs starting at 165536 to the dockremap user and group.
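The same line goes into both files:

```
dockremap:165536:65536
```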
Restart docker.service to apply changes.
After applying this change, all containers will run in an isolated user namespace by default. The remapping may be partially disabled for specific containers by passing the --userns=host flag to the docker command. See [8] for details.
Docker rootless
Install the docker-rootless-extras-bin AUR package to run docker in rootless mode (that is, as a regular user instead of as root).
Configure /etc/subuid and /etc/subgid with a username/group name, starting UID/GID and UID/GID range size to allocate to the remap user and group.
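For example, for a user named your_username (a placeholder), the same line goes into both files:

```
your_username:165536:65536
```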
Enable the socket (this will result in docker being started using systemd’s socket activation):
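```shell
$ systemctl --user enable --now docker.socket
```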
Finally, set the Docker socket environment variable:
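```shell
$ export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock
```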
Images
Arch Linux
The following command pulls the archlinux x86_64 image. This is a stripped down version of Arch core without network, etc.
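```shell
$ docker pull archlinux
```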
For a full Arch base, clone the repo from above and build your own image.
Make sure that the devtools, fakechroot and fakeroot packages are installed.
To build the base image:
Alpine Linux
Alpine Linux is a popular choice for small container images, especially for software compiled as static binaries. The following command pulls the latest Alpine Linux image:
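```shell
$ docker pull alpine
```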
Alpine Linux uses the musl libc implementation instead of glibc, which is used by most Linux distributions. Because Arch Linux uses glibc, there are a number of functional differences between an Arch Linux host and an Alpine Linux container that can impact the performance and correctness of software. A list of these differences is documented here.
Note that dynamically linked software built on Arch Linux (or any other system using glibc) may have bugs and performance problems when run on Alpine Linux (or any other system using a different libc). See [9], [10] and [11] for examples.
CentOS
The following command pulls the latest centos image:
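```shell
$ docker pull centos
```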
See the Docker Hub page for a full list of available tags for each CentOS release.
Debian
The following command pulls the latest debian image:
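```shell
$ docker pull debian
```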
See the Docker Hub page for a full list of available tags, including both standard and slim versions for each Debian release.
Distroless
Google maintains distroless images for several popular programming languages such as Java, Python, Go, Node.js, .NET Core and Rust. These images contain only the programming language runtime without any OS related files, resulting in very small images for packaging software.
See the GitHub README for a list of images and instructions on their use.
Run GPU accelerated Docker containers with NVIDIA GPUs
With NVIDIA Container Toolkit (recommended)
Starting from Docker version 19.03, NVIDIA GPUs are natively supported as Docker devices. NVIDIA Container Toolkit is the recommended way of running containers that leverage NVIDIA GPUs.
Install the nvidia-container-toolkit AUR package. Next, restart docker. You can now run containers that make use of NVIDIA GPUs using the --gpus option:
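(The CUDA image and tag below are illustrative.)

```shell
$ docker run --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```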
Specify how many GPUs are enabled inside a container:
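```shell
$ docker run --gpus 2 nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```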
Specify which GPUs to use:
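```shell
$ docker run --gpus '"device=1,2"' nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```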
Specify a capability (graphics, compute, ...) for the container (though this is rarely used this way):
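```shell
$ docker run --gpus 'all,capabilities=utility' nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```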
For more information see README.md and Wiki.
With NVIDIA Container Runtime
Install the nvidia-container-runtime AUR package. Next, register the NVIDIA runtime by editing /etc/docker/daemon.json:
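```json
{
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```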
and then restart docker.
The runtime can also be registered via a command line option to dockerd:
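```shell
# /usr/bin/dockerd --add-runtime=nvidia=/usr/bin/nvidia-container-runtime
```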
Afterwards, GPU-accelerated containers can be started with:
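```shell
$ docker run --runtime=nvidia nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi   # image tag is illustrative
```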
or (requires Docker version 19.03 or higher):
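```shell
$ docker run --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```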
With nvidia-docker (deprecated)
nvidia-docker is a wrapper around NVIDIA Container Runtime which registers the NVIDIA runtime by default and provides the nvidia-docker command.
To use nvidia-docker, install the nvidia-docker AUR package and then restart docker. Containers with NVIDIA GPU support can then be run using any of the following methods:
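(The CUDA image tag is illustrative.)

```shell
$ docker run --runtime=nvidia nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
$ nvidia-docker run nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```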
or (requires Docker version 19.03 or higher):
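```shell
$ docker run --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```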
Arch Linux image with CUDA
You can use the following Dockerfile to build a custom Arch Linux image with CUDA. It uses the Dockerfile frontend syntax 1.2 to cache pacman packages on the host. The DOCKER_BUILDKIT=1 environment variable must be set on the client before building the Docker image.
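A sketch of such a Dockerfile (the package selection is illustrative):

```dockerfile
# syntax = docker/dockerfile:1.2
FROM archlinux

# Cache pacman's package cache on the host between builds
RUN --mount=type=cache,sharing=locked,target=/var/cache/pacman \
    pacman -Syu --noconfirm --needed base base-devel cuda
```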
Useful tips
To grab the IP address of a running container:
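```shell
$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name-or-id>
```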
For each running container, the name and corresponding IP address can be listed for use in /etc/hosts :
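A minimal sketch:

```shell
#!/usr/bin/env sh
# Print "IP name" pairs suitable for /etc/hosts
for ID in $(docker ps -q); do
    IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$ID")
    NAME=$(docker inspect -f '{{.Name}}' "$ID" | sed 's|^/||')
    printf '%s %s\n' "$IP" "$NAME"
done
```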
Remove Docker and images
In case you want to remove Docker entirely you can do this by following the steps below:
Check for running containers:
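```shell
$ docker ps
```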
List all containers on the host (running and stopped) for deletion:
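```shell
$ docker ps -a
```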
Stop a running container:
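```shell
$ docker stop <container-id>
```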
Kill still-running containers:
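```shell
$ docker kill <container-id>
```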
Delete containers listed by ID:
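```shell
$ docker rm <container-id>
```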
List all Docker images:
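```shell
$ docker images
```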
Delete images by ID:
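```shell
$ docker rmi <image-id>
```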
Delete all images, containers, volumes, and networks that are not associated with a container (dangling):
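```shell
$ docker system prune
```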
To additionally remove any stopped containers and all unused images (not just dangling ones), add the -a flag to the command:
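```shell
$ docker system prune -a
```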
Delete all Docker data (purge directory):
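```shell
# rm -R /var/lib/docker
```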
Troubleshooting
docker0 Bridge gets no IP / no internet access in containers when using systemd-networkd
Docker attempts to enable IP forwarding globally, but by default systemd-networkd overrides the global sysctl setting for each defined network profile. Set IPForward=yes in the network profile. See Internet sharing#Enable packet forwarding for details.
When systemd-networkd tries to manage the network interfaces created by Docker, this can lead to connectivity issues. Try disabling management of those interfaces; that is, networkctl list should report unmanaged in the SETUP column for all networks created by Docker.
Default number of allowed processes/threads too low
If you run into error messages like
then you might need to adjust the number of processes allowed by systemd. The default is 500 (see system.conf), which is quite small for running several docker containers. Edit docker.service with the following snippet:
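```ini
# /etc/systemd/system/docker.service.d/limits.conf (the file name is arbitrary)
[Service]
TasksMax=infinity
```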
Error initializing graphdriver: devmapper
If systemctl fails to start docker and provides an error:
Then, try the following steps to resolve the error. Stop the service, back up /var/lib/docker/ (if desired), remove the contents of /var/lib/docker/, and try to start the service. See the open GitHub issue for details.
Failed to create some/path/to/file: No space left on device
If you are getting an error message like this:
when building or running a Docker image, even though you do have enough disk space available, make sure:
- Tmpfs is disabled or has enough memory allocation. Docker might be trying to write files into /tmp but fails due to restrictions in memory usage and not disk space.
- If you are using XFS, you might want to remove the noquota mount option from the relevant entries in /etc/fstab (usually where /tmp and/or /var/lib/docker reside). Refer to Disk quota for more information, especially if you plan on using and resizing overlay2 Docker storage driver.
- XFS quota mount options (uquota, gquota, prjquota, etc.) fail during re-mount of the file system. To enable quotas on the root file system, the mount option must instead be passed to the initramfs as the rootflags= kernel parameter. Subsequently, it should not be listed among the mount options in /etc/fstab for the root (/) filesystem.
Docker-machine fails to create virtual machines using the virtualbox driver
In case docker-machine fails to create the VM’s using the virtualbox driver, with the following:
Simply reload the VirtualBox kernel modules via the CLI with vboxreload.
Starting Docker breaks KVM bridged networking
This is a known issue. You can use the following workaround:
If there is already a network bridge configured for KVM, this may be fixable by telling docker about it. See [16] where docker configuration is modified as:
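```json
{
  "bridge": "existing_bridge_name"
}
```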
Be sure to replace existing_bridge_name with the actual name of your network bridge.
Image pulls from Docker Hub are rate limited
Beginning on November 1st 2020, rate limiting is enabled for downloads from Docker Hub by anonymous and free accounts. See the rate limit documentation for more information.
Unauthenticated rate limits are tracked by source IP. Authenticated rate limits are tracked by account.
If you need to exceed the rate limits, you can either sign up for a paid plan or mirror the images you need to a different image registry. You can host your own registry or use a cloud hosted registry such as Amazon ECR, Google Container Registry, Azure Container Registry or Quay Container Registry.
To mirror an image, use the pull, tag and push subcommands of the Docker CLI. For example, to mirror the 1.19.3 tag of the Nginx image to a registry hosted at cr.example.com:
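```shell
$ docker pull nginx:1.19.3
$ docker tag nginx:1.19.3 cr.example.com/nginx:1.19.3
$ docker push cr.example.com/nginx:1.19.3
```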
You can then pull or run the image from the mirror:
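```shell
$ docker pull cr.example.com/nginx:1.19.3
$ docker run cr.example.com/nginx:1.19.3
```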
iptables (legacy): unknown option "--dport"
The factual accuracy of this article or section is disputed.
If you see this error when running a container, install iptables-nft instead of iptables (legacy) and reboot [17].