php – How to optimize deployment via docker-compose?

Question:

We implemented deployment via GitLab CI and docker-compose. During the deployment process the docker-compose.yml file, the Dockerfiles, etc. are copied to the remote server in order to build "auxiliary" containers such as nginx and mysql.

Everything works as it should. There are two points of concern: downtime and junk docker images (the ones shown as <none> in the TAG column of docker images).

Here is a piece of the .gitlab-ci.yml file responsible for actually deploying to a remote server:

.template-secure-copy: &secure-copy
    stage: deploy
    image: covex/alpine-git:1.0
    before_script:
      - eval $(ssh-agent -s)
      - ssh-add <(echo "$SSH_PRIVATE_KEY")
    script:
      - ssh -p 22 $DEPLOY_USER@$DEPLOY_HOST 'set -e ;
          rm -rf '"$DEPLOY_DIRECTORY"'_tmp ;
          mkdir -p '"$DEPLOY_DIRECTORY"'_tmp'
      - scp -P 22 -r build/* ''"$DEPLOY_USER"'@'"$DEPLOY_HOST"':'"$DEPLOY_DIRECTORY"'_tmp' # */ <-- in the original this line is not commented out =)
      - ssh -p 22 $DEPLOY_USER@$DEPLOY_HOST 'set -e ;
          cd '"$DEPLOY_DIRECTORY"'_tmp ;
          docker login -u gitlab-ci-token -p '"$CI_JOB_TOKEN"' '"$CI_REGISTRY"' ;
          docker-compose pull ;
          if [ -d '"$DEPLOY_DIRECTORY"' ]; then cd '"$DEPLOY_DIRECTORY"' && docker-compose down --rmi local && rm -rf '"$DEPLOY_DIRECTORY"'; fi ;
          cp -r '"$DEPLOY_DIRECTORY"'_tmp '"$DEPLOY_DIRECTORY"' ;
          cd '"$DEPLOY_DIRECTORY"' ;
          docker-compose up -d --remove-orphans ;
          docker-compose exec -T php phing app-deploy -Dsymfony.env=prod ;
          rm -rf '"$DEPLOY_DIRECTORY"'_tmp'
    tags:
      - executor-docker

Downtime is currently 1-3 minutes: it starts at docker-compose down and lasts until the end of the script. I would like to reduce it.
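
For illustration, a sketch of a shorter variant of the same step, assuming the new release files are already unpacked into $DEPLOY_DIRECTORY under the same compose project name: docker-compose up -d only recreates the services whose image or configuration changed, so the old containers keep running until their replacements are started.

# Sketch only; assumes the new release is already in $DEPLOY_DIRECTORY under the same project name.
cd "$DEPLOY_DIRECTORY"
docker-compose pull                      # fetch new images while the old containers keep serving
docker-compose up -d --remove-orphans    # recreate only the services whose image/config changed
docker-compose exec -T php phing app-deploy -Dsymfony.env=prod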

And I did not figure out at all how to keep the "garbage" docker images from appearing. I know about docker image prune, but I would rather not clutter things up in the first place than clean them up afterwards.
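
For reference: the <none> images appear because docker-compose pull fetches a new image under an already used tag, so the previously tagged layers become dangling. If removing just those leftovers at the end of a deploy is acceptable, a minimal sketch:

# Removes only dangling (untagged) images; tagged images and images in use are left alone.
docker image prune -f
# Optionally limit the cleanup to older leftovers:
docker image prune -f --filter "until=168h"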

UPD1:

The docker-compose.yml is created with the following construct:

.template-docker-compose: &docker-compose
    stage: build
    image: covex/docker-compose:1.0
    script:
      - for name in `env | awk -F= '{if($1 ~ /'"$ENV_SUFFIX"'$/) print $1}'`; do
          eval 'export '`echo $name|awk -F''"$ENV_SUFFIX"'$' '{print $1}'`'='$"$name"'';
        done
      - mkdir build
      - docker-compose -f docker-compose-deploy.yml config > build/docker-compose.yml
      - sed -i 's/\/builds\/'"$CI_PROJECT_NAMESPACE"'\/'"$CI_PROJECT_NAME"'/\./g' build/docker-compose.yml
      - cp -R docker build
    artifacts:
        untracked: true
        name: "$CI_COMMIT_REF_NAME"
        paths:
          - build/
    tags:
      - executor-docker
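
To make the variable-renaming loop in the script above more concrete, here is a standalone sketch of what it does; ENV_SUFFIX=_DEV1 and the variable below are invented purely for illustration:

#!/bin/sh
# Invented example values; in CI these come from GitLab variables.
export ENV_SUFFIX=_DEV1
export server_name_DEV1=project-dev1.ru

# For every exported variable whose NAME ends in $ENV_SUFFIX,
# re-export it under the same name with the suffix stripped.
for name in $(env | awk -F= '{ if ($1 ~ /'"$ENV_SUFFIX"'$/) print $1 }'); do
  base=${name%"$ENV_SUFFIX"}      # server_name_DEV1 -> server_name
  eval "export $base=\$$name"     # i.e. export server_name=$server_name_DEV1
done

echo "$server_name"               # prints: project-dev1.ru

docker-compose -f docker-compose-deploy.yml config then substitutes these un-suffixed variables into the generated build/docker-compose.yml.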

The result of this procedure is the following docker-compose.yml:

networks:
  nw_external:
    external:
      name: graynetwork
  nw_internal: {}
services:
  mysql:
    build:
      context: ./docker/mysql
    environment:
      MYSQL_DATABASE: project
      MYSQL_PASSWORD: project
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: project
    expose:
    - '3306'
    networks:
      nw_internal: null
    restart: always
    volumes:
    - database:/var/lib/mysql:rw
  nginx:
    build:
      args:
        app_php: app
        server_name: project-dev1.ru
      context: ./docker/nginx
    depends_on:
      php:
        condition: service_started
    networks:
      nw_external:
        ipv4_address: 192.168.10.13
      nw_internal: null
    ports:
    - 80/tcp
    restart: always
    volumes_from:
    - service:php:ro
  php:
    depends_on:
      mysql:
        condition: service_healthy
    environment:
      ENV_database_host: mysql
      ENV_database_name: project
      ENV_database_password: project
      ENV_database_port: '3306'
      ENV_database_user: project
      ENV_mailer_from: andrey@mindubaev.ru
      ENV_mailer_host: 127.0.0.1
      ENV_mailer_password: 'null'
      ENV_mailer_transport: smtp
      ENV_mailer_user: 'null'
      ENV_secret: ThisTokenIsNotSoSecretChangeIt
    expose:
    - '9000'
    image: gitlab.site.ru:5005/dev1-projects/symfony:master
    networks:
      nw_internal: null
    restart: always
    volumes:
    - /composer/vendor
    - /srv
version: '2.1'
volumes:
  database: {}

Dockerfile for the nginx service:

FROM nginx:alpine
ARG server_name=docker.local
ARG app_php=app_dev

COPY ./default.conf /etc/nginx/conf.d/default.conf

RUN sed -i 's/@SERVER_NAME@/'"$server_name"'/g' /etc/nginx/conf.d/default.conf \
    && sed -i 's/@APP@/'"$app_php"'/g' /etc/nginx/conf.d/default.conf
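
For reference, the build arguments that the compose file passes to this Dockerfile could be supplied by hand roughly like this (the image tag project-nginx is invented):

docker build \
  --build-arg server_name=project-dev1.ru \
  --build-arg app_php=app \
  -t project-nginx ./docker/nginx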

Dockerfile for the mysql service:

FROM mysql:5.7

HEALTHCHECK CMD mysqladmin ping --silent
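
The condition: service_healthy in the compose file above waits for this check to pass. Purely as an illustration, the same check with explicit timing can also be attached at run time; the container name and the interval/timeout/retries values below are arbitrary:

docker run -d --name mysql-test \
  --health-cmd='mysqladmin ping --silent' \
  --health-interval=5s --health-timeout=3s --health-retries=10 \
  -e MYSQL_ROOT_PASSWORD=root mysql:5.7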

Answer:

For zero downtime you need at least two containers of the application. The process (well described here) is then: stop one old container, start one new one, and so on, one at a time.
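
A very rough outline of that manual swap, with hypothetical container and network names, and ignoring the volumes_from coupling in the compose file above:

docker pull gitlab.site.ru:5005/dev1-projects/symfony:master    # new image; the old container keeps serving
docker run -d --name php_new --network project_nw_internal \
    gitlab.site.ru:5005/dev1-projects/symfony:master
# repoint the nginx upstream at php_new, reload nginx, verify, then retire the old container:
docker stop php_old && docker rm php_old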

But I went through literally all of this a couple of weeks ago, and docker swarm is VERY SIMPLE: it takes just a couple of commands, while the recipe from that link is several times more complicated. With your solution you also have to configure a load balancer yourself, whereas docker swarm already ships with everything built in. I'll say it again: VERY SIMPLE.

Go straight to docker swarm. It is very quick to set up and gives you zero downtime out of the box.

docker service by itself is a great tool, and docker stack is simply a bomb (it brings up all your services from a docker-compose-like file).
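
For completeness, a minimal sketch of the swarm route; the stack name myapp is invented, and note that docker stack deploy requires a version 3.x compose file, while the file above is 2.1:

docker swarm init                                             # one-time setup on the server

# Deploy, or update in place, the whole stack from a compose-style file:
docker stack deploy --with-registry-auth -c docker-compose.yml myapp

# Rolling update of a single service to a freshly pushed image, replica by replica:
docker service update --image gitlab.site.ru:5005/dev1-projects/symfony:master myapp_php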
