Docker Official Get-Started Guide

This is essentially a translation of the get-started guide on the official site.

Container

Dockerfile

Dockerfile defines what goes on in the environment inside your container. Access to resources like networking interfaces and disk drives is virtualized inside this environment, which is isolated from the rest of your system, so you need to map ports to the outside world, and be specific about what files you want to “copy in” to that environment.

Example

  • Dockerfile
    # Use an official Python runtime as a parent image
    FROM python:2.7-slim
    # Set the working directory to /app
    WORKDIR /app
    # Copy the current directory contents into the container at /app
    ADD . /app
    # Install any needed packages specified in requirements.txt
    RUN pip install --trusted-host pypi.python.org -r requirements.txt
    # Make port 80 available to the world outside this container
    EXPOSE 80
    # Define environment variable
    ENV NAME World
    # Run app.py when the container launches
    CMD ["python", "app.py"]
  • requirements.txt
    Like app.py below, this file must sit in the same directory as the Dockerfile.
    Flask
    Redis
  • app.py
    Prints the hostname, which inside Docker is the container ID.
    from flask import Flask
    from redis import Redis, RedisError
    import os
    import socket

    # Connect to Redis
    redis = Redis(host="redis", db=0, socket_connect_timeout=2, socket_timeout=2)

    app = Flask(__name__)

    @app.route("/")
    def hello():
        try:
            visits = redis.incr("counter")
        except RedisError:
            visits = "<i>cannot connect to Redis, counter disabled</i>"

        html = "<h3>Hello {name}!</h3>" \
               "<b>Hostname:</b> {hostname}<br/>" \
               "<b>Visits:</b> {visits}"
        return html.format(name=os.getenv("NAME", "world"), hostname=socket.gethostname(), visits=visits)

    if __name__ == "__main__":
        app.run(host='0.0.0.0', port=80)
  • Run
    # Build the image and give it a friendly name
    docker build -t friendlyhello .
    docker image ls
    # Run, mapping the container's port 80 to port 4000 on the host
    docker run -p 4000:80 friendlyhello
    # Run in the background (detached)
    docker run -d -p 4000:80 friendlyhello
    # -t allocates a pseudo-terminal in the container; -i keeps STDIN open so the terminal's input reaches the container, allowing interaction with it
    docker run -it -p 80:80 friendlyhello
  • Stop
    On Linux, CTRL+C is enough; on Windows you also have to stop the container explicitly:
    docker container ls
    docker container stop <Container NAME or ID>

Setting a proxy server

Otherwise Docker may be unable to reach the internet and the pip install command above will fail, so add the following before that install step in the Dockerfile:

# Set proxy server, replace host:port with values for your servers
ENV http_proxy host:port
ENV https_proxy host:port
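
Alternatively (not something the guide covers), the proxy can be supplied only at build time through the predefined http_proxy / https_proxy build arguments, so it is not baked into the image; a minimal sketch, with host:port standing in for your own proxy:

# Pass the proxy to this build only, without editing the Dockerfile
docker build --build-arg http_proxy=http://host:port \
             --build-arg https_proxy=http://host:port \
             -t friendlyhello .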

Publishing the image

  1. Register an account:
    https://cloud.docker.com/
  2. Log in
    docker login
  3. Tag the local image to associate it with the repository
    # The tag works like a version number, or any other meaningful label
    # docker tag image username/repository:tag
    docker tag friendlyhello tallate/get-started:version1
    # Push to Docker Hub
    docker push tallate/get-started:version1
    # Run from any environment; the image (with the dependencies from requirements.txt baked in) is pulled automatically
    docker run -p 4000:80 tallate/get-started:version1
  4. View it on Docker Hub
    https://cloud.docker.com/swarm/tallate/repository/list
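
To double-check the push from a second machine, the image can also be pulled and run explicitly without building anything locally; a quick sketch using the tag pushed above:

docker pull tallate/get-started:version1
docker run -p 4000:80 tallate/get-started:version1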

Service (Compose)

Service

Services are really just “containers in production.” A service only runs one image, but it codifies the way that image runs—what ports it should use, how many replicas of the container should run so the service has the capacity it needs, and so on. Scaling a service changes the number of container instances running that piece of software, assigning more computing resources to the service in the process.
In the next part this service is deployed onto a swarm, a cluster of machines running Docker, with containers running in concert on multiple machines.

Orchestrating services with Docker Compose

  • Define a docker-compose.yml file:
    This service runs 5 container instances of the image.
    version: "3"
    services:
      web:
        # replace username/repo:tag with your name and image details
        image: tallate/get-started:version1
        deploy:
          replicas: 5
          resources:
            limits:
              cpus: "0.1"
              memory: 50M
          restart_policy:
            condition: on-failure
        ports:
          - "80:80"
        networks:
          - webnet
    networks:
      webnet:
    Pull the image uploaded from the registry.
    Run 5 instances of that image as a service called web, limiting each one to use, at most, 10% of the CPU (across all cores), and 50MB of RAM.
    Immediately restart containers if one fails.
    Map port 80 on the host to web’s port 80.
    Instruct web’s containers to share port 80 via a load-balanced network called webnet. (Internally, the containers themselves publish to web’s port 80 at an ephemeral port.)
    Define the webnet network with the default settings (which is a load-balanced overlay network).
  • Manage the cluster with Swarm
    # If the machine has two network interfaces (one IP each), the IP to advertise must be specified
    docker swarm init --advertise-addr 192.168.1.105
  • Deploy, giving the app a name
    docker stack deploy -c docker-compose.yml getstartedlab
  • List the services
    docker service ls
    The web service appears with a "_web" suffix appended to the stack name, i.e. getstartedlab_web.

Task

  • A single container running in a service is called a task. Tasks are given unique IDs that numerically increment, up to the number of replicas you defined in docker-compose.yml.
    # List the tasks for your service
    docker service ps getstartedlab_web
    # Or simply list all containers; -q (quiet) prints only the IDs
    docker container ls -q
  • Access the service
    The service is already load balanced: with each request, one of the 5 tasks is chosen, in a round-robin fashion, to respond (see the curl sketch after this list).
  • Responses may be slow
    The redis service has not been added yet, so each request waits on the Redis connection until it times out, which slows responses down.

    Depending on your environment’s networking configuration, it may take up to 30 seconds for the containers to respond to HTTP requests. This is not indicative of Docker or swarm performance, but rather an unmet Redis dependency that we address later in the tutorial. For now, the visitor counter isn’t working for the same reason; we haven’t yet added a service to persist data.
    It presumably also depends on the resource limits (memory, CPU) in docker-compose.yml and on the local network configuration.

  • Shut down the app
    # Take the app (stack) down
    docker stack rm getstartedlab
    # Take the swarm down
    docker swarm leave --force
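
Before tearing things down, the round-robin behaviour noted under "Access the service" can be observed from a shell; a rough sketch, assuming the stack is still deployed and published on port 80 of the local host:

# Each response should report a different container hostname, cycling through the 5 tasks
for i in 1 2 3 4 5; do
  curl -s http://localhost:80/ | grep -i hostname
done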

Scale

You can scale the app by changing the replicas value in docker-compose.yml and running the deploy command again; Docker performs an in-place update, no need to tear the stack down first or kill any containers:

docker stack deploy -c docker-compose.yml getstartedlab
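
As an aside the guide does not cover, a running swarm service can also be scaled straight from the CLI, which is handy for quick experiments without editing the file:

# Scale the web service of the getstartedlab stack to 10 replicas
docker service scale getstartedlab_web=10
# Verify the new task count
docker service ps getstartedlab_web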

Swarm (Cluster)

Install docker-machine

Download: https://github.com/docker/machine/releases

# Copy the binary into a directory on the PATH
sudo cp docker-machine-Linux-x86_64 /usr/local/bin/docker-machine
# Make it executable
sudo chmod +x /usr/local/bin/docker-machine

Swarm

Managers and workers

A swarm is a group of machines that are running Docker and joined into a cluster.
Only managers execute commands; the other workers simply provide capacity.
you continue to run the Docker commands you’re used to, but now they are executed on a cluster by a swarm manager. The machines in a swarm can be physical or virtual. After joining a swarm, they are referred to as nodes.
Swarm managers are the only machines in a swarm that can execute your commands, or authorize other machines to join the swarm as workers. Workers are just there to provide capacity and do not have the authority to tell any other machine what it can and cannot do.

Strategies for running containers

Swarm managers can use several strategies to run containers, such as “emptiest node” – which fills the least utilized machines with containers. Or “global”, which ensures that each machine gets exactly one instance of the specified container. You instruct the swarm manager to use these strategies in the Compose file, just like the one you have already been using.
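
As a rough CLI-level illustration (not taken from the guide), the "global" strategy can also be requested when creating a service directly; the service name "probe" and the alpine image below are placeholders:

# Run exactly one task of this service on every node in the swarm
# (the Compose-file equivalent is deploy: mode: global)
docker service create --name probe --mode global alpine ping docker.com
docker service ps probe
# Clean up the demo service
docker service rm probe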

Enable swarm mode

# enable swarm mode and make your current machine a swarm manager
docker swarm init
# Run this on the other machines to have them join the swarm as workers
docker swarm join

Run inside virtual machines

Install VirtualBox first, then:

docker-machine create --driver virtualbox myvm1
docker-machine create --driver virtualbox myvm2
# List the VMs and their properties
docker-machine ls

Initialize the swarm cluster

The first machine acts as the manager, which executes management commands and authenticates workers to join the swarm, and the second is a worker.

# Initialize the manager. The IP here is the VM's own IP; on success the command prints the join command for workers
docker-machine ssh myvm1 "docker swarm init --advertise-addr <myvm1 ip>"
# If the built-in SSH client fails, fall back to the native ssh
# docker-machine --native-ssh ssh myvm1 ...
# Join the other node to the swarm
docker-machine ssh myvm2 "docker swarm join --token SWMTKN-1-1aac1t43c8g98xf2b88xqdxwuz7c740vizr7rld19k0d6xff6j-7i05pruv9w16anm0z96eeysvj 192.168.99.100:2377"
# On the manager, list all nodes in the swarm
docker-machine ssh myvm1 "docker node ls"
# Copy a file to a VM
docker-machine scp <file> <machine>:~
# Leave the swarm
docker swarm leave
  • Port numbers
    Always run docker swarm init and docker swarm join with port 2377 (the swarm management port), or no port at all and let it take the default.
    The machine IP addresses returned by docker-machine ls include port 2376, which is the Docker daemon port. Do not use this port or you may experience errors.

Deploy the app on the cluster

  • Configure remote access
    An alternative to copying files over with docker-machine scp is to run docker-machine env <machine> to get and run a command that configures your current shell to talk to the Docker daemon on the VM. This method works better for the next step because it allows you to use your local docker-compose.yml file to deploy the app "remotely" without having to copy it anywhere.
    # get the command to configure your shell to talk to myvm1
    docker-machine env myvm1
    # configure your shell to talk to myvm1
    eval $(docker-machine env myvm1)
    # A star in the ACTIVE column marks the active machine; the docker client uses the environment variables configured above to decide which daemon to talk to
    docker-machine ls
  • Deploy the app to the VMs
    The app only needs to be deployed from the manager, not on the workers (see the curl check after this list):
    # Run the eval command above first, then deploy
    docker stack deploy -c docker-compose.yml getstartedlab
    # Inspect the services running on the swarm
    docker service ls
    docker service ps <service_name>
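
A quick way to confirm the deployment from the local shell, assuming the VM names above; either node answers on port 80 thanks to the routing mesh described next:

# Either VM's IP serves the app, regardless of which node runs the containers
curl http://$(docker-machine ip myvm1)
curl http://$(docker-machine ip myvm2)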

How cluster routing works

The reason both IP addresses work is that nodes in a swarm participate in an ingress routing mesh. This ensures that a service deployed at a certain port within your swarm always has that port reserved to itself, no matter what node is actually running the container. Here’s a diagram of how a routing mesh for a service called my-web published at port 8080 on a three-node swarm would look:
[Figure: routing mesh diagram]
Keep in mind that to use the ingress network in the swarm, you need to have the following ports open between the swarm nodes before you enable swarm mode:

  • Port 7946 TCP/UDP for container network discovery.
  • Port 4789 UDP for the container ingress network.
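
For example, on a host that happens to use ufw as its firewall (an assumption; adjust for firewalld or raw iptables), opening these ports, plus 2377 for swarm management mentioned above, could look like:

sudo ufw allow 2377/tcp   # swarm management (swarm init / join)
sudo ufw allow 7946/tcp   # container network discovery
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp   # container ingress network (VXLAN)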

Tear down

  1. tear down the stack
    docker stack rm getstartedlab
  2. Remove the swarm
    # worker
    docker-machine ssh myvm2 "docker swarm leave"
    # manager
    docker-machine ssh myvm1 "docker swarm leave --force"

Unset the docker-machine shell variables

eval $(docker-machine env -u)

Restart

Because the cluster was built on local VMs, closing the laptop shuts the VMs down and the services stop with them, so they need to be restarted:

# Check the VMs' status
docker-machine ls
# Restart a VM
docker-machine start <machine-name>

If the manager node went down, after restarting it you will see the other nodes listed as Down:

# Check the status of all nodes (run on the manager)
docker node ls
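
A plausible follow-up, assuming the VM names used earlier: restart the worker VM too and list the nodes again; the worker should come back as Ready.

# Bring the worker VM back up, then re-check from the manager
docker-machine start myvm2
docker-machine ssh myvm1 "docker node ls"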

Stack (Complex Cluster)

Stack

A stack is a group of interrelated services that share dependencies, and can be orchestrated and scaled together. A single stack is capable of defining and coordinating the functionality of an entire application (though very complex applications may want to use multiple stacks).

Add a visualizer service

  1. Edit docker-compose.yml and add the following under the services key:
    services:
      visualizer:
        image: dockersamples/visualizer:stable
        ports:
          - "8080:8080"
        volumes:
          - "/var/run/docker.sock:/var/run/docker.sock"
        deploy:
          placement:
            constraints: [node.role == manager]
        networks:
          - webnet
    This adds a visualizer service, a volumes key giving the visualizer access to the host's socket file for Docker, and a placement key ensuring that this service only ever runs on a swarm manager, never a worker. That's because this container, built from an open source project created by Docker, displays the Docker services running on a swarm in a diagram.
  2. Redeploy
    # Redeploy the stack
    docker stack deploy -c docker-compose.yml getstartedlab
    # List the tasks in the app
    docker stack ps getstartedlab
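
The visualizer is published on port 8080 of the manager, so it can be checked from the local shell (or opened in a browser); a quick sketch assuming the VM names above:

# Should return the visualizer's HTML page
curl http://$(docker-machine ip myvm1):8080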

Add a redis service

  1. Add a redis service to docker-compose.yml:
    services:
      redis:
        image: redis
        ports:
          - "6379:6379"
        volumes:
          - "/home/docker/data:/data"
        deploy:
          placement:
            constraints: [node.role == manager]
        command: redis-server --appendonly yes
        networks:
          - webnet
    redis always runs on the manager, so it’s always using the same filesystem.
    redis accesses an arbitrary directory in the host's file system as /data inside the container, which is where Redis stores data. Data kept only inside the container's /data would be wiped whenever the container is redeployed; the bind mount below is what makes it persist.
    The placement constraint you put on the Redis service ensures that it always uses the same host. (How would you set up a Redis cluster, then?)
    The volume you created lets the container access ./data (on the host) as /data (inside the Redis container). While containers come and go, the files stored in ./data on the specified host persist, enabling continuity. In other words, the volumes entry above maps the container's /data directory to the host's ./data directory (relative to the docker user's home, i.e. /home/docker/data).
  2. Create the ./data directory on the host
    Otherwise the redis service does not start.
    docker-machine ssh myvm1 "mkdir ./data"
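
After the directory exists, redeploy and check that the visit counter now increments across requests; a rough sketch, assuming the same stack and VM names:

# Redeploy with the redis service, then hit the app a couple of times
docker stack deploy -c docker-compose.yml getstartedlab
curl http://$(docker-machine ip myvm1)
curl http://$(docker-machine ip myvm1)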

App & Cloud

Deploy Docker services to a remote host

  1. X Deploying without a driver never worked for me:

    docker-machine create --driver none --url=tcp://59.110.172.126:2376 ecs-59

    Use the generic driver instead:

    docker-machine create --driver generic --generic-ip-address=59.110.172.126 ecs-59
  2. Create a key pair
    Copy the contents of the public key file (xxx.pub) into ~/.ssh/authorized_keys on the server (see the sketch after this list).

  3. Inspect

    curl http://59.110.172.126:2376/info

    docker -H tcp://59.110.172.126:2376 info
  4. Connect

    eval $(docker-machine env ecs-59)

  5. Check the logs

    sudo journalctl -fu docker.service
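
For step 2, a minimal sketch of generating the key and installing it on the server (the key path and the root user are assumptions; ssh-copy-id appends the public key to the remote ~/.ssh/authorized_keys):

# Generate a key pair locally (the path is just an example)
ssh-keygen -t rsa -f ~/.ssh/docker-machine
# Install the public key on the remote host
ssh-copy-id -i ~/.ssh/docker-machine.pub root@59.110.172.126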

References

  1. Docker - get-started
  2. Command-Line Interfaces
  3. Dockerfile reference
  4. Use Docker Machine to provision hosts on cloud providers
  5. 使用Docker Machine管理阿里云ECS
  6. Develop with Docker