- Store configuration data using Docker Configs
- About configs
- Windows support
- How Docker manages configs
- Read more about docker config commands
- Examples
- Defining and using configs in compose files
- Simple example: Get started with configs
- Simple example: Use configs in a Windows service
- Example: Use a templated config
- Advanced example: Use configs with a Nginx service
- Generate the site certificate
- Configure the Nginx container
- Example: Rotate a config
- Post-installation steps for Linux
- Manage Docker as a non-root user
- Configure Docker to start on boot
- Use a different storage engine
- Configure default logging driver
- Configure where the Docker daemon listens for connections
- Configuring remote access with systemd unit file
Store configuration data using Docker Configs
About configs
Docker swarm service configs allow you to store non-sensitive information, such as configuration files, outside a service’s image or running containers. This allows you to keep your images as generic as possible, without the need to bind-mount configuration files into the containers or use environment variables.
Configs operate in a similar way to secrets, except that they are not encrypted at rest and are mounted directly into the container’s filesystem without the use of RAM disks. Configs can be added or removed from a service at any time, and services can share a config. You can even use configs in conjunction with environment variables or labels, for maximum flexibility. Config values can be generic strings or binary content (up to 500 KB in size).
Note: Docker configs are only available to swarm services, not to standalone containers. To use this feature, consider adapting your container to run as a service with a scale of 1.
Configs are supported on both Linux and Windows services.
Windows support
Docker includes support for configs on Windows containers, but there are differences in the implementations, which are called out in the examples below. Keep the following notable differences in mind:
Config files with custom targets are not directly bind-mounted into Windows containers, since Windows does not support non-directory file bind-mounts. Instead, configs for a container are all mounted in C:\ProgramData\Docker\internal\configs (an implementation detail which should not be relied upon by applications) within the container. Symbolic links are used to point from there to the desired target of the config within the container. The default target is C:\ProgramData\Docker\configs .
When creating a service which uses Windows containers, the options to specify UID, GID, and mode are not supported for configs. Configs are currently only accessible by administrators and users with system access within the container.
On Windows, create or update a service using --credential-spec with the config:// format. This passes the gMSA credentials file directly to nodes before a container starts. No gMSA credentials are written to disk on worker nodes. For more information, refer to Deploy services to a swarm.
How Docker manages configs
When you add a config to the swarm, Docker sends the config to the swarm manager over a mutual TLS connection. The config is stored in the Raft log, which is encrypted. The entire Raft log is replicated across the other managers, ensuring the same high availability guarantees for configs as for the rest of the swarm management data.
When you grant a newly-created or running service access to a config, the config is mounted as a file in the container. The location of the mount point within the container defaults to /<config-name> in Linux containers. In Windows containers, configs are all mounted into C:\ProgramData\Docker\configs and symbolic links are created to the desired location, which defaults to C:\<config-name>.
You can set the ownership ( uid and gid ) for the config, using either the numerical ID or the name of the user or group. You can also specify the file permissions ( mode ). These settings are ignored for Windows containers.
- If not set, the config is owned by the user running the container command (often root ) and that user’s default group (also often root ).
- If not set, the config has world-readable permissions (mode 0444 ), unless a umask is set within the container, in which case the mode is impacted by that umask value.
You can update a service to grant it access to additional configs or revoke its access to a given config at any time.
A node only has access to configs if the node is a swarm manager or if it is running service tasks which have been granted access to the config. When a container task stops running, the configs shared to it are unmounted from that container and flushed from the node’s memory.
If a node loses connectivity to the swarm while it is running a task container with access to a config, the task container still has access to its configs, but cannot receive updates until the node reconnects to the swarm.
You can add or inspect an individual config at any time, or list all configs. You cannot remove a config that a running service is using. See Rotate a config for a way to remove a config without disrupting running services.
To update or roll back configs more easily, consider adding a version number or date to the config name. This is made easier by the ability to control the mount point of the config within a given container.
To update a stack, make changes to your Compose file, then re-run docker stack deploy -c <new-compose-file>. If that file references a new config, your services start using it. Keep in mind that configs are immutable, so you can’t change the file for an existing service; instead, you create a new config to use a different file.
You can run docker stack rm to stop the app and take down the stack. This removes all configs created by docker stack deploy with the same stack name, including those not referenced by services and those remaining after a docker service update --config-rm.
Read more about docker config commands
The dedicated commands are docker config create, docker config inspect, docker config ls, and docker config rm. Read their reference pages for details, or continue to the examples below, which show how to use configs with a service.
Examples
This section includes graduated examples which illustrate how to use Docker configs.
Note: These examples use a single-Engine swarm and unscaled services for simplicity. The examples use Linux containers, but Windows containers also support configs.
Defining and using configs in compose files
The docker stack command supports defining configs in a Compose file. However, the configs key is not supported for docker compose . See the Compose file reference for details.
Simple example: Get started with configs
This simple example shows how configs work in just a few commands. For a real-world example, continue to Advanced example: Use configs with a Nginx service.
Add a config to Docker. The docker config create command reads standard input because the last argument, which represents the file to read the config from, is set to -.
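The config name my-config and the sample string are arbitrary; a minimal invocation looks like this:

```console
$ echo "This is a config" | docker config create my-config -
```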
Create a redis service and grant it access to the config. By default, the container can access the config at /my-config , but you can customize the file name on the container using the target option.
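With the config created above:

```console
$ docker service create --name redis --config my-config redis:alpine
```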
Verify that the task is running without issues using docker service ps. If everything is working, the task is listed with a current state of Running.
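```console
$ docker service ps redis
```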
Get the ID of the redis service task container using docker ps, so that you can use docker container exec to connect to the container and read the contents of the config data file, which defaults to being readable by all and has the same name as the name of the config. The first command below shows how to find the container ID, and the second and third commands use command substitution to do this automatically.
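Assuming the service runs a single task on this node:

```console
$ docker ps --filter name=redis -q

$ docker container exec $(docker ps --filter name=redis -q) ls -l /my-config

$ docker container exec $(docker ps --filter name=redis -q) cat /my-config
```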
Try removing the config. The removal fails because the redis service is running and has access to the config.
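The attempt is refused:

```console
$ docker config rm my-config
```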
Remove access to the config from the running redis service by updating the service.
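For example:

```console
$ docker service update --config-rm my-config redis
```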
Repeat steps 3 and 4 again, verifying that the service no longer has access to the config. The container ID is different, because the service update command redeploys the service.
Stop and remove the service, and remove the config from Docker.
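Finally:

```console
$ docker service rm redis

$ docker config rm my-config
```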
Simple example: Use configs in a Windows service
This is a very simple example which shows how to use configs with a Microsoft IIS service running on Docker Desktop for Windows, using Windows containers, on Microsoft Windows 10. It is a naive example that stores the webpage in a config.
This example assumes that you have PowerShell installed.
Save the following into a new file index.html .
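The page content is not important for the example; any small static page works, for instance:

```html
<html>
  <body>
    <p>Hello from a Docker config!</p>
  </body>
</html>
```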
If you have not already done so, initialize or join the swarm.
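On a single-node test setup this is just:

```console
PS> docker swarm init
```

Use docker swarm join instead if you are joining an existing swarm.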
Save the index.html file as a swarm config named homepage .
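docker config create can read the file directly:

```console
PS> docker config create homepage index.html
```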
Create an IIS service and grant it access to the homepage config.
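A sketch of the service creation follows; the IIS image name, the container port, and the config target path are assumptions and may need adjusting for the Windows image you actually use:

```console
PS> docker service create `
      --name my-iis `
      --publish published=8000,target=80 `
      --config source=homepage,target="C:\inetpub\wwwroot\index.html" `
      mcr.microsoft.com/windows/servercore/iis
```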
Access the IIS service at http://localhost:8000/ . It should serve the HTML content from the first step.
Remove the service and the config.
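Clean up with:

```console
PS> docker service rm my-iis

PS> docker config rm homepage
```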
Example: Use a templated config
To create a config whose content is generated by a template engine, pass the --template-driver parameter and specify the engine name as its argument. The template is rendered when the container is created.
Save the following into a new file index.html.tmpl .
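A minimal template that demonstrates both sources of data (the env template function and the .Service.Name field provided by the golang driver) might look like this; the exact markup is illustrative:

```html
<html lang="en">
  <head><title>Hello Docker</title></head>
  <body>
    <p>Hello {{ env "HELLO" }}! I'm the {{ .Service.Name }} service.</p>
  </body>
</html>
```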
Save the index.html.tmpl file as a swarm config named homepage. Pass the --template-driver parameter and specify golang as the template engine.
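For example:

```console
$ docker config create --template-driver golang homepage index.html.tmpl
```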
Create a service that runs Nginx and has access to the environment variable HELLO and to the config.
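A sketch, assuming the nginx:alpine image; the service name and published port are arbitrary:

```console
$ docker service create \
     --name hello-template \
     --env HELLO="Docker" \
     --config source=homepage,target=/usr/share/nginx/html/index.html \
     --publish published=3000,target=80 \
     nginx:alpine
```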
Verify that the service is operational: you can reach the Nginx server, and that the correct output is being served.
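With the published port assumed above:

```console
$ curl http://localhost:3000
```

The rendered page should contain the value of the HELLO variable and the service name.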
Advanced example: Use configs with a Nginx service
This example is divided into two parts. The first part is all about generating the site certificate and does not directly involve Docker configs at all, but it sets up the second part, where you store and use the site certificate as a series of secrets and the Nginx configuration as a config. The example shows how to set options on the config, such as the target location within the container and the file permissions ( mode ).
Generate the site certificate
Generate a root CA and TLS certificate and key for your site. For production sites, you may want to use a service such as Let’s Encrypt to generate the TLS certificate and key, but this example uses command-line tools. This step is a little complicated, but is only a set-up step so that you have something to store as a Docker secret. If you want to skip these sub-steps, you can use Let’s Encrypt to generate the site key and certificate, name the files site.key and site.crt , and skip to Configure the Nginx container.
Generate a root key.
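For example, with OpenSSL:

```console
$ openssl genrsa -out "root-ca.key" 4096
```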
Generate a CSR using the root key.
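The subject string below is only an example; substitute your own organization details:

```console
$ openssl req -new -key "root-ca.key" -out "root-ca.csr" -sha256 \
    -subj '/C=US/ST=CA/L=San Francisco/O=Example/CN=Swarm Example CA'
```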
Configure the root CA. Edit a new file called root-ca.cnf and paste the following contents into it. This constrains the root CA to only sign leaf certificates and not intermediate CAs.
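A minimal policy along these lines is enough for the example:

```ini
[root_ca]
basicConstraints = critical,CA:TRUE,pathlen:1
keyUsage = critical, nonRepudiation, cRLSign, keyCertSign
subjectKeyIdentifier=hash
```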
Sign the certificate.
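Applying the root_ca extensions defined above (the validity period is arbitrary):

```console
$ openssl x509 -req -days 3650 -in "root-ca.csr" \
    -signkey "root-ca.key" -sha256 -out "root-ca.crt" \
    -extfile "root-ca.cnf" -extensions root_ca
```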
Generate the site key.
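As before:

```console
$ openssl genrsa -out "site.key" 4096
```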
Generate the site certificate signing request (CSR), signed with the site key.
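The CN below assumes the site is reached as localhost, as in the rest of this example:

```console
$ openssl req -new -key "site.key" -out "site.csr" -sha256 \
    -subj '/C=US/ST=CA/L=San Francisco/O=Example/CN=localhost'
```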
Configure the site certificate. Edit a new file called site.cnf and paste the following contents into it. This constrains the site certificate so that it can only be used to authenticate a server and can’t be used to sign certificates.
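Contents along these lines work for the example; adjust subjectAltName if you serve the site under a different name:

```ini
[server]
authorityKeyIdentifier=keyid,issuer
basicConstraints = critical,CA:FALSE
extendedKeyUsage=serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
subjectAltName = DNS:localhost, IP:127.0.0.1
subjectKeyIdentifier=hash
```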
Sign the site certificate.
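Sign the site CSR with the root CA, applying the server extensions from site.cnf (again, the validity period is arbitrary):

```console
$ openssl x509 -req -days 750 -in "site.csr" -sha256 \
    -CA "root-ca.crt" -CAkey "root-ca.key" -CAcreateserial \
    -out "site.crt" -extfile "site.cnf" -extensions server
```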
The site.csr and site.cnf files are not needed by the Nginx service, but you need them if you want to generate a new site certificate. Protect the root-ca.key file.
Configure the Nginx container
Produce a very basic Nginx configuration that serves static files over HTTPS. The TLS certificate and key are stored as Docker secrets so that they can be rotated easily.
In the current directory, create a new file called site.conf with the following contents:
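```nginx
# Minimal HTTPS server block; the secret paths match the /run/secrets/
# locations used later, and root/index are the stock Nginx defaults.
server {
    listen                443 ssl;
    server_name           localhost;
    ssl_certificate       /run/secrets/site.crt;
    ssl_certificate_key   /run/secrets/site.key;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
}
```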
Create two secrets, representing the key and the certificate. You can store any file as a secret as long as it is smaller than 500 KB. This allows you to decouple the key and certificate from the services that use them. In these examples, the secret name and the file name are the same.
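For example:

```console
$ docker secret create site.key site.key

$ docker secret create site.crt site.crt
```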
Save the site.conf file in a Docker config. The first parameter is the name of the config, and the second parameter is the file to read it from.
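As with the secrets, the config name and the file name happen to be the same here:

```console
$ docker config create site.conf site.conf
```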
List the configs:
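```console
$ docker config ls
```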
Create a service that runs Nginx and has access to the two secrets and the config. Set the mode to 0440 so that the file is only readable by its owner and that owner’s group, not the world.
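A sketch, assuming the stock nginx:latest image and publishing the HTTPS port on 3000:

```console
$ docker service create \
     --name nginx \
     --secret site.key \
     --secret site.crt \
     --config source=site.conf,target=/etc/nginx/conf.d/site.conf,mode=0440 \
     --publish published=3000,target=443 \
     nginx:latest \
     sh -c "exec nginx -g 'daemon off;'"
```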
Within the running containers, the following three files now exist:
- /run/secrets/site.key
- /run/secrets/site.crt
- /etc/nginx/conf.d/site.conf
Verify that the Nginx service is running.
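Either of these confirms the replica is up:

```console
$ docker service ls

$ docker service ps nginx
```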
Verify that the service is operational: you can reach the Nginx server, and that the correct TLS certificate is being used.
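Because the certificate is signed by your own root CA, pass that CA to the client; the port matches the one assumed when the service was created:

```console
$ curl --cacert root-ca.crt https://localhost:3000

$ openssl s_client -connect localhost:3000 -CAfile root-ca.crt
```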
Unless you are going to continue to the next example, clean up after running this example by removing the nginx service and the stored secrets and config.
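For example:

```console
$ docker service rm nginx

$ docker secret rm site.crt site.key

$ docker config rm site.conf
```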
You have now configured a Nginx service with its configuration decoupled from its image. You could run multiple sites with exactly the same image but separate configurations, without the need to build a custom image at all.
Example: Rotate a config
To rotate a config, you first save a new config with a different name than the one that is currently in use. You then redeploy the service, removing the old config and adding the new config at the same mount point within the container. This example builds upon the previous one by rotating the site.conf configuration file.
Edit the site.conf file locally. Add index.php to the index line, and save the file.
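With the site.conf sketch above, the changed line would read:

```nginx
index  index.html index.htm index.php;
```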
Create a new Docker config using the new site.conf , called site-v2.conf .
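Note that the config gets the new name while still reading the local site.conf file:

```console
$ docker config create site-v2.conf site.conf
```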
Update the nginx service to use the new config instead of the old one.
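The target and mode stay the same as before; only the source changes:

```console
$ docker service update \
    --config-rm site.conf \
    --config-add source=site-v2.conf,target=/etc/nginx/conf.d/site.conf,mode=0440 \
    nginx
```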
Verify that the nginx service is fully re-deployed, using docker service ps nginx . When it is, you can remove the old site.conf config.
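When no task references the old config any longer:

```console
$ docker service ps nginx

$ docker config rm site.conf
```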
To clean up, you can remove the nginx service, as well as the secrets and configs.
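For example:

```console
$ docker service rm nginx

$ docker secret rm site.crt site.key

$ docker config rm site-v2.conf
```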
You have now updated your nginx service’s configuration without the need to rebuild its image.
Post-installation steps for Linux
This section contains optional procedures for configuring Linux hosts to work better with Docker.
Manage Docker as a non-root user
The Docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root and other users can only access it using sudo . The Docker daemon always runs as the root user.
If you don’t want to preface the docker command with sudo , create a Unix group called docker and add users to it. When the Docker daemon starts, it creates a Unix socket accessible by members of the docker group.
The docker group grants privileges equivalent to the root user. For details on how this impacts security in your system, see Docker Daemon Attack Surface.
To create the docker group and add your user:
Create the docker group.
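On most distributions:

```console
$ sudo groupadd docker
```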
Add your user to the docker group.
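Using usermod (substitute another account name for $USER if needed):

```console
$ sudo usermod -aG docker $USER
```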
Log out and log back in so that your group membership is re-evaluated.
If testing on a virtual machine, it may be necessary to restart the virtual machine for changes to take effect.
On a desktop Linux environment such as X Windows, log out of your session completely and then log back in.
On Linux, you can also run the following command to activate the changes to groups:
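```console
$ newgrp docker
```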
Verify that you can run docker commands without sudo .
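For example:

```console
$ docker run hello-world
```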
This command downloads a test image and runs it in a container. When the container runs, it prints a message and exits.
If you initially ran Docker CLI commands using sudo before adding your user to the docker group, you may see an error indicating that your ~/.docker/ directory was created with incorrect permissions due to the sudo commands.
To fix this problem, either remove the ~/.docker/ directory (it is recreated automatically, but any custom settings are lost), or change its ownership and permissions using the following commands:
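```console
$ sudo chown "$USER":"$USER" /home/"$USER"/.docker -R

$ sudo chmod g+rwx "$HOME/.docker" -R
```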
Configure Docker to start on boot
Most current Linux distributions (RHEL, CentOS, Fedora, Debian, Ubuntu 16.04 and higher) use systemd to manage which services start when the system boots. On Debian and Ubuntu, the Docker service is configured to start on boot by default. To automatically start Docker and Containerd on boot for other distros, use the commands below:
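```console
$ sudo systemctl enable docker.service

$ sudo systemctl enable containerd.service
```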
To disable this behavior, use disable instead.
If you need to add an HTTP Proxy, set a different directory or partition for the Docker runtime files, or make other customizations, see customize your systemd Docker daemon options.
Use a different storage engine
For information about the different storage engines, see Storage drivers. The default storage engine and the list of supported storage engines depend on your host’s Linux distribution and available kernel drivers.
Configure default logging driver
Docker provides the capability to collect and view log data from all containers running on a host via a series of logging drivers. The default logging driver, json-file , writes log data to JSON-formatted files on the host filesystem. Over time, these log files expand in size, leading to potential exhaustion of disk resources.
To alleviate such issues, either configure the json-file logging driver to enable log rotation, use an alternative logging driver such as the “local” logging driver that performs log rotation by default, or use a logging driver that sends logs to a remote logging aggregator.
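As a sketch, rotation for the json-file driver can be enabled in /etc/docker/daemon.json; the size and file-count values below are arbitrary examples, and the daemon must be restarted for the change to take effect:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```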
Configure where the Docker daemon listens for connections
By default, the Docker daemon listens for connections on a UNIX socket to accept requests from local clients. It is possible to allow Docker to accept requests from remote hosts by configuring it to listen on an IP address and port as well as the UNIX socket. For more detailed information on this configuration option take a look at “Bind Docker to another host/port or a unix socket” section of the Docker CLI Reference article.
Before configuring Docker to accept connections from remote hosts it is critically important that you understand the security implications of opening docker to the network. If steps are not taken to secure the connection, it is possible for remote non-root users to gain root access on the host. For more information on how to use TLS certificates to secure this connection, check this article on how to protect the Docker daemon socket.
Configuring Docker to accept remote connections can be done with the docker.service systemd unit file for Linux distributions using systemd, such as recent versions of RedHat, CentOS, Ubuntu and SLES, or with the daemon.json file which is recommended for Linux distributions that do not use systemd.
Configuring Docker to listen for connections using both the systemd unit file and the daemon.json file causes a conflict that prevents Docker from starting.
Configuring remote access with systemd unit file
Use the command sudo systemctl edit docker.service to open an override file for docker.service in a text editor.
Add or modify the following lines, substituting your own values.
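For example, to keep the default socket activation and additionally listen on a TCP address (the address and port below are placeholders; plain tcp:// is unencrypted, so secure or restrict it as described above):

```ini
[Service]
# The empty ExecStart= clears the value from the packaged unit before setting a new one.
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:2375
```

After saving, run sudo systemctl daemon-reload followed by sudo systemctl restart docker.service for the change to take effect.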