Installing Docker on Alpine Linux

Docker


Installation

The Docker package is in the ‘community’ repository. See Alpine_Linux_package_management for how to add a repository.
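
For illustration, a typical sequence looks roughly like this (the exact mirror URL and Alpine version in /etc/apk/repositories will differ on your system):

```
# /etc/apk/repositories must contain the community repository, e.g.:
#   http://dl-cdn.alpinelinux.org/alpine/v3.10/community
apk update
apk add docker
```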

Connecting to the Docker daemon through its socket requires you to add yourself to the `docker` group.
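
For example:

```
addgroup <username> docker   # replace <username> with your own user
```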

To start the Docker daemon at boot, see Alpine_Linux_Init_System.
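
With OpenRC (Alpine's default init system) this typically means:

```
rc-update add docker default   # start the Docker daemon in the default runlevel
service docker start           # start it immediately
```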

For more information, have a look at the corresponding Github issue.

As of August 2016, with Alpine 3.4.x and Docker 1.12, this weakening of security is no longer necessary.

Docker Compose

‘docker-compose’ is in the ‘Community’ repository starting with Alpine Linux 3.10.
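
On 3.10 and later, installation is therefore a plain package install:

```
apk add docker-compose
```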

For older releases:

To install docker-compose, first install pip:
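
The original snippet is not preserved; on older releases it looked roughly like this (package names vary slightly between Alpine versions, and the -dev packages are needed so pip can build some dependencies from source):

```
apk add py-pip python-dev libffi-dev openssl-dev gcc libc-dev make
pip install docker-compose
```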

Isolate containers with a user namespace

add to /etc/docker/daemon.json
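
The stripped snippet was presumably something along these lines; "default" tells the daemon to create and use the dedicated dockremap user:

```
{
  "userns-remap": "default"
}
```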

You may also consider these options:

You’ll find all possible configurations here[1].

Example: How to install docker from Arch

"WARNING: No limit support"

You might encounter this message when executing docker info. To correct this, we have to enable cgroup_enable=memory swapaccount=1 in the kernel boot parameters.

Alpine 3.8

It may not have been the case before, but with Alpine 3.8 you must configure cgroups properly.

Warning: This seems not to work with Alpine 3.9 and Docker 18.06. Follow the instructions for grub or extlinux below instead.

If you use GRUB, add the cgroup options to /etc/default/grub, then regenerate your GRUB configuration:
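
For example (keep whatever options are already present in GRUB_CMDLINE_LINUX):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX="... cgroup_enable=memory swapaccount=1"

# regenerate the GRUB configuration
grub-mkconfig -o /boot/grub/grub.cfg
```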

Extlinux

With extlinux, you add the same cgroup options, but inside /etc/update-extlinux.conf instead.

Then update the config and reboot:
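
Roughly like this (keep the options already present in default_kernel_opts):

```
# /etc/update-extlinux.conf
default_kernel_opts="... cgroup_enable=memory swapaccount=1"

# apply the change and reboot
update-extlinux
reboot
```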

How to use docker

The best documentation on using Docker and creating containers is at the main docker site. Adding anything to it here would be redundant.

If you create an account at docker.com, you can browse through user images and learn from the syntax in contributed Dockerfiles.

Official Docker image files are denoted on the website by a blue ribbon.


Alpine makes Python Docker builds 50 times slower and the images twice as heavy

Alpine Linux is often recommended as a base image for Docker. You are told that using Alpine will make your builds smaller and the build process faster.

But if you use Alpine Linux for Python applications, it:

  • Makes your builds much slower
  • Makes your images larger
  • Wastes your time
  • And can ultimately cause runtime errors

Let's look at why Alpine is recommended, and why you still should not use it with Python.

Why do people recommend Alpine?

Let's assume we need gcc as part of our image, and compare Alpine Linux against Ubuntu 18.04 in terms of build speed and final image size.

First, let's pull both images and compare their sizes:
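
The comparison can be reproduced with something like:

```
docker pull ubuntu:18.04
docker pull alpine
docker images   # compare the SIZE column for the two base images
```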

As you can see, the Alpine base image is much smaller. Now let's try installing gcc, starting with Ubuntu:
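
The original Dockerfile is not reproduced here; a minimal equivalent would be:

```
FROM ubuntu:18.04
RUN apt-get update && \
    apt-get install -y --no-install-recommends gcc && \
    rm -rf /var/lib/apt/lists/*
```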

(Writing ideal Dockerfiles is beyond the scope of this article.)

Let's measure the build time:
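
For example (the image tag is arbitrary):

```
time docker build -t ubuntu-gcc .
```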

Now repeat the same thing for Alpine (Dockerfile):
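
A minimal equivalent of that Dockerfile:

```
FROM alpine
RUN apk add --no-cache gcc
```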

Build it and look at the build time and image size:

As promised, Alpine-based images build faster and are smaller by themselves: 15 seconds instead of 30, and an image size of 105 MB versus 150 MB. That's pretty good!

But if we switch to building a Python application, things are not so rosy.

A Python image

Python applications often use pandas and matplotlib. So one option is to take the official Debian-based image, using a Dockerfile like this:
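
Something along these lines (a sketch; pin versions as needed):

```
FROM python:3.8-slim
RUN pip install --no-cache-dir matplotlib pandas
```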

We get an image of 363 MB.
Can we do better with Alpine? Let's try:
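
The naive Alpine variant is the same two lines on a different base image:

```
FROM python:3.8-alpine
RUN pip install --no-cache-dir matplotlib pandas
```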

Alpine does not support wheels

If you look at the Debian-based build, you will see that it downloads matplotlib-3.1.2-cp38-cp38-manylinux1_x86_64.whl.

That is a precompiled binary wheel. Alpine, however, downloads the source tarball `matplotlib-3.1.2.tar.gz`, because it cannot use the standard wheels.

Why? Most Linux distributions use the GNU version of the C standard library (glibc), which practically every program written in C, including Python, depends on. But Alpine uses `musl`, and since those binary wheels are compiled against `glibc`, they are simply not an option.

So if you use Alpine, you have to compile all the C code in every Python package yourself.

Oh, and you have to figure out the list of such dependencies that need compiling on your own.
In this case we end up with something like this:
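
A reconstruction of that Dockerfile (the exact package set depends on the versions of matplotlib and pandas being built):

```
FROM python:3.8-alpine
RUN apk add --no-cache build-base freetype-dev libpng-dev openblas-dev && \
    pip install --no-cache-dir matplotlib pandas
```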

And the build takes…

…25 minutes 57 seconds! And the image size is 851 MB.

Alpine-based images take far longer to build, they end up larger, and you still have to hunt down all the dependencies. You can, of course, shrink the image using multi-stage builds, but that means doing even more work.

Alpine can cause unexpected bugs at runtime

In theory musl is compatible with glibc, but in practice the differences can cause all sorts of problems, and when they show up they tend to be unpleasant. Here are some issues that may arise:

  • Alpine has a smaller default thread stack size, which can lead to crashes in Python
  • Some users have found that Python applications run slower because of the way musl allocates memory (it differs from glibc).
  • One user ran into a bug in date formatting

These particular bugs have probably been fixed by now, but who knows how many more there are.


Is the Alpine 3.8 Docker image really that small for a Python 3 runtime?

The minimalist Alpine Linux 3.8 was released very recently. This Linux image is often used in Docker to build very compact runtime environments.

Today's article looks at it as a Docker runtime for Python 3.6.x with various sets of pip packages. We will also build the brand-new Python 3.7 on Alpine.

At the end of the article, the on-disk image size is given for each set of pip packages, and the Alpine 3.8, Debian 9 and Fedora 28 distributions are compared.

So let's get started: the distributions for testing have been chosen. We will build the following Docker images:

  1. The base system with its updates, plus Python 3 with an upgraded pip (version 10)
  2. Item 1 + tornado, cython
  3. Item 2 + numpy, scipy
  4. Item 3 + pillow, bokeh, pandas, websocket-client

As a result of these builds we get several variants: Python without packages, Python with a web server, Python with packages for multi-threaded mathematical computation, and Python with a 'graphics' and data-processing stack.

So the resulting Dockerfiles for Debian and Fedora will look like this:
Debian
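
The original file is not preserved; a plausible reconstruction of the first step for Debian is:

```
FROM debian:9
RUN apt-get update && apt-get -y upgrade && \
    apt-get install -y python3 python3-pip && \
    pip3 install --upgrade pip
```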

With Alpine 3.8, however, there is a snag. At the time of writing it had not officially been released yet, but we really want to take a look :-). So we need their system image:
dl-cdn.alpinelinux.org/alpine/v3.8/releases/x86_64
and we will build our own Alpine from scratch:
github.com/gliderlabs/docker-alpine/tree/master/versions/library-3.8/x86_64

Create our own Dockerfile:
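
Following the gliderlabs layout linked above, the file starts from an empty image and unpacks the downloaded root filesystem (the archive name here is an assumption; use whatever minirootfs tarball you downloaded):

```
FROM scratch
ADD rootfs.tar.gz /
CMD ["/bin/sh"]
```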

Then copy-paste into this file the Python 3.6 build steps from the page github.com/docker-library/python/blob/master/3.6/alpine3.7/Dockerfile,
not forgetting to delete or comment out the FROM alpine:3.7 line.

And try to build an image with Alpine 3.8 and Python on board:
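
For example (the tag name is arbitrary):

```
docker build -t alpine38-python:3.6 .
```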

Results of the first step, installing only Python (docker images --all):

  1. Debian 9 / 513 MB
  2. Fedora 28 / 387 MB
  3. Alpine 3.8 / 82.2 MB

Step 2. Installing cython and tornado

We start adding pip packages. First we install cython and tornado. On Debian and Fedora the packages install without errors, but the Alpine build fails with an error:

You have to google the error and then add build libraries to Alpine so that pip can successfully build the packages from source. Then run the Docker build again, then hunt for more dependencies, read Stack Overflow threads and GitHub issues, and wait, and wait, and wait.

Since in the following steps we will start adding mathematical and graphics libraries to our Python runtime image, and to keep this article from growing too long, here are the final build dependencies for Alpine Linux:
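
A representative set (reconstructed, not the author's exact list) would be:

```
apk add --no-cache build-base gfortran freetype-dev libpng-dev \
        openblas-dev jpeg-dev zlib-dev
```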

  1. Debian 9 / 534 MB
  2. Fedora 28 / 407 MB
  3. Alpine 3.8 / 144 MB

Step 3. Adding the math packages numpy and scipy

  1. Debian 9 / 763 MB
  2. Fedora 28 / 626 MB
  3. Alpine 3.8 / 404 MB

Step 4. Adding the graphics stack: websocket-client, pytest, pandas, bokeh, pillow

  1. Debian 9 / 905 MB
  2. Fedora 28 / 760 MB
  3. Alpine 3.8 / 650 MB

As a bonus, let's try compiling on Alpine 3.8 the Python 3.7 that has not yet been released for Docker.
The new Python 3.7 was released on June 27, 2018.

The size of Alpine 3.8 with Python 3.7 and the current list of pip packages is 656 MB.

Summary

Python

  • Debian 9 / 6.24x larger / +430 MB
  • Fedora 28 / 4.7x larger / +304 MB

Python tornado cython

  • Debian 9 / 3.71x larger / +390 MB
  • Fedora 28 / 2.82x larger / +263 MB

Python tornado cython numpy scipy

  • Debian 9 / 1.88x larger / +359 MB
  • Fedora 28 / 1.54x larger / +222 MB

Python tornado cython numpy scipy websocket-client pytest pandas bokeh pillow

  • Debian 9 / 1.39x larger / +255 MB
  • Fedora 28 / 1.16x larger / +110 MB

With a bare Python runtime, Alpine Linux is the leader in minimal size. As the set of pip libraries grows to tornado + cython + numpy + scipy, Alpine still gives a noticeable saving in disk space. However, as soon as graphics and data-processing packages for Python appear in the mix, the difference practically disappears.

With a large number of graphics packages it makes more sense to pick Fedora than to spend time compiling packages on Alpine (compilation can take 1-2 hours), only to save ten or twenty percent of disk space.

UPDATE 1: Testing was performed on Fedora Atomic Host: release 28 (Twenty Eight), Version: 2018.5.


How to install docker on Alpine Linux VM

Introduction

Alpine Linux is a lightweight Linux distro, which makes it small, fast and ideal for VMs when server resources are limited. Especially when it comes to running Docker containers, a VM is the only way to go, since LXC containers are not supported and it is hacky to make Docker run inside an LXC.

Docker containers can be administered through the command line or by using a GUI tool. The two most lightweight administration tools are:

  • Cockpit, which runs only on systemd distros, so it is not an option for Alpine.
  • Portainer, which comes as a Docker container, ideal for Alpine.

Following are the steps required to configure an Alpine system and install Docker.

Create Alpine VM on Proxmox

Download the latest .iso from Virtual category in https://alpinelinux.org/downloads/ .

Upload it to Proxmox VE from the GUI and create the VM.

Typical example settings with one network card on an Intel Core 2 quad-core PC:

    • OS -> Other OS types
    • CD/DVD -> Use disk image (iso)
    • Hard Disk -> Bus/Device IDE 0 , Storage local-lvm, Cache Default (no cache)
    • CPU -> Type Default(kvm64)
    • Memory -> Auto allocate and input the desired range
    • Network -> Bridged mode , Model VirtIO (paravirtualized)

Again these are suggested defaults and depend on the machine and the network plan we have in mind.

Initial Setup

Still in the Proxmox GUI, go to the new VM's console, log in as root and type:
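
The command that was stripped here is the standard Alpine setup wizard:

```
setup-alpine
```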

and answer the wizard's questions, including changing the root password. Alpine setup is possibly the easiest Linux setup. The defaults are fine except for disk creation. After selecting the disk (sda) and its purpose (sys), at least for this simple use case, type y at the 'erase the above disk and continue?' warning.

Alpine will prepare the portion of the hard disk allocated to our VM and will ask to reboot.

After rebooting the VM, from the Proxmox console of the Alpine VM create a group and a user so we can SSH to the VM remotely from the terminal of our choice and no longer be bound to the built-in console of the Proxmox GUI:
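
For example (replace <username> with the name you want):

```
addgroup docker
adduser -G docker <username>
```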

We use the docker group name on purpose because it will be needed later.

Now add the user to the sudoers file so they can execute all commands:

and add this line anywhere:
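
The missing snippets presumably opened the file with vi /etc/sudoers (this assumes the sudo package is already installed) and added a rule of this form, with <username> replaced by the user created above:

```
<username> ALL=(ALL) ALL
```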

Now press ESC and :wq to save and exit the editor, or :q! to exit without saving.

Note: if for any reason we need to SSH in as root, from the Proxmox GUI console:
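
The stripped command presumably opened the SSH daemon configuration:

```
vi /etc/ssh/sshd_config
```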

and replace #permitrootlogin no with permitrootlogin yes. Save and exit the editor. Then restart the SSH service:
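
For example:

```
service sshd restart
```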

Now we can leave the Proxmox console and SSH to the server from another machine:
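
For example (placeholders for the user and the VM's IP address):

```
ssh <username>@<vm-ip-address>
```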

Install qemu-guest-agent [optional]
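
The body of this section was lost; a typical installation on Alpine would be roughly:

```
apk add qemu-guest-agent
rc-update add qemu-guest-agent default
service qemu-guest-agent start
```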

Install Docker Engine

We need to add the community repository, which contains Docker, to Alpine:
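
That means uncommenting (or adding) the community line in /etc/apk/repositories; the mirror and release version will differ on your system:

```
# /etc/apk/repositories
http://dl-cdn.alpinelinux.org/alpine/v3.11/community   # match your release version
```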

Now update the repositories, check that docker is present in apk and install:
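
A likely form of those commands:

```
apk update
apk search docker
apk add docker
rc-update add docker default   # optional: start Docker at boot
service docker start
```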


Create a Docker Swarm on Alpine Linux 3.9.0

Introduction

This guide will show you how to create and configure a Docker swarm using multiple Alpine Linux 3.9.0 servers and Portainer. Please be aware that Vultr offers a One-Click Docker App that currently supports both CentOS 7 x64 and Ubuntu 16.04 x64.

Prerequisites

To begin, you will need at least two VC2 servers running Alpine Linux 3.9.0. Within your Docker swarm, one of these servers will act as a manager node — interfacing with external networks and delegating jobs to worker nodes. The other server will then act as a worker node — executing jobs delegated to it by the manager node.

Note that you can launch more than two servers if your application requires redundancy and/or more computing power, and the steps provided in this guide will still apply.

Deployment

Ensure that the Vultr Cloud (VC2) tab is selected at the top of the page.

You can select any location from the Server Location section, however all servers must be in the same location, otherwise it will not be possible to deploy a Docker swarm to them.

Select the ISO Library tab of the Server Type section and choose the Alpine Linux 3.9.0 x86_64 image.

Select an appropriate option from the Server Size section. This guide will use the 25 GB SSD server size, but this may be insufficient to meet your application’s resource requirements. While Vultr makes it easy to upgrade a server’s size after it has already been launched, you should still carefully consider which server size your application needs to perform optimally.

In the Additional Features section, you must select the Enable Private Networking option. While the other options are not required to follow this guide, you should consider whether or not each one makes sense in the context of your application.

If you’ve previously enabled the Multiple Private Networks option on your account, you will then need to either select an existing or create a new private network for your servers. If you have not enabled it, then you can ignore this section. For information on manually configuring private networks, see this guide.

Skip the Firewall Group section for now. Only the server acting as a manager node in the Docker swarm will need exposed ports, and this should be configured after server deployment.

At the very bottom of the page, you must enter a Server Qty of at least two. As mentioned previously, you may need more than two servers, but two is sufficient to follow this guide.

Finally, in the Server Hostname & Label section, enter meaningful and memorable hostnames and labels for each server. For the purposes of this guide, the hostname and label of the first server will be docker-manager and Docker Manager, and those of the second will be docker-worker and Docker Worker.

After double checking all your configurations, you can then click the Deploy Now button at the bottom of the page to launch your servers.

Install Alpine Linux 3.9.0 on the servers

Because you chose an OS from Vultr’s ISO library, you’ll need to manually install and configure Alpine Linux 3.9.0 on each server.

After giving Vultr a minute or two to allocate your servers, click the triple dot more options icon for the Docker Manager server on the server management interface, and then choose the View Console option.

You should be redirected to a console with a login prompt. If not, please wait another minute for Vultr to finish deploying your servers.

At that login prompt, enter root as the username. The live version of Alpine Linux 3.9.0 (which is what your servers are currently running) does not require the superuser to enter a password when logging in.

Once you have successfully logged into the root account, you will see a welcome message followed by a shell prompt that looks like the following:

To start the Alpine Linux installer, enter the following command:
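
That command is:

```
setup-alpine
```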

First, choose an appropriate keyboard layout. This guide will use the us layout and variant.

When setting the hostname, choose the same hostname that you set for this server during deployment. If you’ve been following this guide exactly, the hostname should be docker-manager .

Two network interfaces should be available: eth0 and eth1 . If you only see eth0 , that means you did not configure your servers’ private network correctly. Initialize eth0 using dhcp , and initialize eth1 using the private IP address, netmask, and gateway this server was assigned during deployment. You can access these details from the settings interface of your server. When prompted, do not perform any manual network configuration.

Enter a new password for the root account, and then select a timezone appropriate for the location you chose to deploy these servers to.

If you intend to use an HTTP/FTP proxy, enter its URL, otherwise do not set a proxy URL.

Choose an NTP client to manage system clock synchronization. This guide will use busybox .

When asked for a package repository mirror to use, either pick one explicitly by entering its number; automatically detect and select the fastest one by entering f ; or manually edit the repository configuration file by entering e , which is not recommended unless you’re familiar with Alpine Linux. This guide will use the first mirror.

If you plan to use SSH to access your servers or to host an SSH based file system, select an SSH server to use. This guide will use openssh .

When prompted for a disk to use, choose disk vda as sys type.

Alpine Linux 3.9.0 should now be installed on your server. Repeat this process for all other servers you deployed earlier, ensuring you substitute the correct values for hostname and the eth1 network interface.

Post-installation server configuration

At this point, your servers are still running the live ISO version of Alpine Linux 3.9.0. To boot from the SSD installation, visit the settings interface of your server, navigate to the Custom ISO side menu entry, and click the Remove ISO button. This should reboot the server. If it does not, then manually reboot.

Once the server has finished rebooting, navigate back to the web console for the server Docker Manager .

Log into the root account using the password you set earlier during the installation process.

Enable the community package repository by uncommenting the third line of /etc/apk/repositories using vi . You can enable the edge and testing repositories in a similar manner, but they are not required to follow this guide.
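
After uncommenting, the file looks something like this (the mirror URL will vary):

```
#/media/cdrom/apks
http://dl-cdn.alpinelinux.org/alpine/v3.9/main
http://dl-cdn.alpinelinux.org/alpine/v3.9/community
```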

Synchronize the server’s local package index with the remote repository you selected earlier by entering the following shell command:
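
That is:

```
apk update
```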

Then upgrade outdated packages:
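
That is:

```
apk upgrade
```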

As before, repeat this configuration process for each server you deployed earlier.

Install Docker on your servers

Before installing the Docker package itself, you may want to create a separate docker user. You can do this using the following command:
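
One plausible form of that command (the original was not preserved):

```
adduser docker
```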

Note: This new user and any users added to the new docker group will have root privileges once the Docker package has been installed. See the following issue from the Moby Github repository:

Due to the --privileged in docker, anyone added to the ‘docker’ group is root equivalent. Anyone in the docker group has a back door around all privilege escalation policy and auditing on the system.

This is different from someone being able to run sudo to root, where they have policy and audit applied to them.

If you’d like to give sudo permission to the docker user, first install the sudo package:
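
That is:

```
apk add sudo
```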

Then create a sudo group:
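
For example:

```
addgroup sudo
```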

Finally, add the docker user to the sudo group:
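
For example:

```
adduser docker sudo
```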

Now you can follow step 4 of this guide to finish configuring sudo.

At this point, you’re ready to install the Docker package. Note that it is not strictly necessary to have a separate, sudo-capable docker user to install and configure Docker, but this guide follows that convention.

Install the Docker package with the following command:
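
That is:

```
apk add docker
```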

Then enable the Docker init script:
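
With OpenRC:

```
rc-update add docker
```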

Finally, start the Docker daemon:
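
For example:

```
service docker start
```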

You can verify that Docker is running with this command:
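
For example, the following prints details about the daemon if it is up:

```
docker info
```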

As with last time, repeat this Docker installation process for each server you deployed at the start.

Initialize a Docker swarm with one manager node and one worker node

With all of that setup dealt with, you’re finally ready to create the Docker swarm.

Create a swarm and add a manager node

Navigate back to the web console of your Docker Manager server. You will configure this server as a manager node in your swarm. If you chose to create the docker user earlier, log in using that account rather than the superuser.

Enter the following command, but replace 192.0.2.1 with the private (not the public) IP address your Docker Manager server was assigned:
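
A likely form of that command:

```
docker swarm init --advertise-addr 192.0.2.1
```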

Docker will display a command you can execute on other servers in the private network to add them as worker nodes to this new swarm. Save this command.

Add a worker node

Now navigate to the web console of your Docker Worker server, signing in with the docker user if you created it.

To add this server as a worker node to the swarm you just created, execute the command you saved from the output of the swarm creation command. It will look similar to the following:
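
Its general shape, with a placeholder instead of the real join token:

```
docker swarm join --token <worker-join-token> 192.0.2.1:2377
```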

Docker will output whether the node was able to join the swarm. If you encounter issues adding worker nodes to the swarm, double check your private network configuration and refer to this guide for troubleshooting.

If you deployed more than two servers at the beginning, you can add the rest as worker nodes to your swarm using the command above, increasing the amount of resources available to your application. Alternatively, you can add additional manager nodes, but that's beyond the scope of this guide.

Deploy Portainer with SSL to manage your Docker swarm

At this point your Docker swarm is ready for use. You may, however, optionally launch a Portainer stack on the manager node in your swarm. Portainer offers a convenient web interface for managing your swarm and the nodes therein.

It’s now time to create a firewall group for your swarm. Unless your application specifically requires it, only expose ports on your manager nodes. Exposing ports on your worker nodes without careful consideration can introduce vulnerabilities.

Navigate to the firewall management interface and create a new firewall group. Your application should dictate which ports to expose, but you must, at the very least, expose port 9000 for Portainer. Apply this firewall group to the Docker Manager server.

While it isn’t required, securing Portainer with SSL is strongly recommended. For the sake of this guide, you’ll only be using a self-signed OpenSSL certificate, but you should consider using Let’s Encrypt in production.

Navigate to the web console of the Docker Manager server, log in using the docker user, and use the following commands to generate a self-signed OpenSSL certificate:
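
The exact commands were not preserved; a typical way to produce a key and certificate pair would be (the file names and the certs directory are assumptions):

```
# apk add openssl   # if openssl is not already installed
mkdir -p ~/certs
openssl req -x509 -nodes -newkey rsa:4096 -days 365 \
        -subj "/CN=portainer" \
        -keyout ~/certs/portainer.key -out ~/certs/portainer.crt
```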

Create a new file, /portainer-agent-stack.yml, with the following contents:
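
The original stack file is not preserved; the sketch below follows Portainer's published agent stack, extended with the SSL flags this guide relies on (the /home/docker/certs path is an assumption and must match wherever you generated the certificate above):

```
version: '3.2'

services:
  agent:
    image: portainer/agent:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    networks:
      - agent_network
    deploy:
      mode: global

  portainer:
    image: portainer/portainer:latest
    command: -H tcp://tasks.agent:9001 --tlsskipverify --ssl --sslcert /certs/portainer.crt --sslkey /certs/portainer.key
    ports:
      - "9000:9000"
    volumes:
      - portainer_data:/data
      - /home/docker/certs:/certs   # assumption: where the key/cert were generated
    networks:
      - agent_network
    deploy:
      placement:
        constraints: [node.role == manager]

networks:
  agent_network:
    driver: overlay

volumes:
  portainer_data:
```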

After modifying this Docker stack configuration file to conform to your requirements, you can deploy it:
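
For example (the stack name portainer is an assumption):

```
docker stack deploy --compose-file /portainer-agent-stack.yml portainer
```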

To verify that Portainer is working, execute the following command after having given Docker a minute or two to deploy the stack:
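
For example:

```
docker ps
```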

You will see two containers with the images portainer/portainer:latest and portainer/agent:latest , verifying that Portainer started correctly.

You can now configure and manage your Docker swarm by visiting the public IP address of your Docker Manager server on port 9000 using HTTPS.

