# Linux Containerization: A Practical Guide to Docker and Kubernetes

## Introduction

Containerization has become standard practice for deploying and operating modern applications. Containers enable fast deployment, elastic scaling, resource isolation, and consistent environments from development to production. This article walks through hands-on containerization on Linux, from Docker basics to Kubernetes cluster operations.
## Docker Basics and Practice

### 1. Installing and Configuring Docker

```bash
#!/bin/bash

# Install Docker CE from the official repository (Ubuntu)
function install_docker() {
    echo "Installing Docker..."

    # Remove any older packages
    sudo apt-get remove docker docker-engine docker.io containerd runc

    sudo apt-get update
    sudo apt-get install -y \
        apt-transport-https \
        ca-certificates \
        curl \
        gnupg \
        lsb-release

    # Add Docker's official GPG key and apt repository
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
        sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
    echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
        sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

    sudo apt-get update
    sudo apt-get install -y docker-ce docker-ce-cli containerd.io

    sudo systemctl start docker
    sudo systemctl enable docker

    # Allow the current user to run docker without sudo (takes effect after re-login)
    sudo usermod -aG docker "$USER"

    echo "Docker installation complete!"
}

# Write a daemon.json with sensible production defaults
function configure_docker() {
    echo "Configuring Docker..."

    sudo mkdir -p /etc/docker
    sudo tee /etc/docker/daemon.json > /dev/null <<EOF
{
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "10m",
        "max-file": "3"
    },
    "storage-driver": "overlay2",
    "registry-mirrors": [
        "https://mirror.ccs.tencentyun.com"
    ],
    "insecure-registries": [],
    "live-restore": true,
    "userland-proxy": false,
    "experimental": false
}
EOF

    sudo systemctl daemon-reload
    sudo systemctl restart docker

    echo "Docker configuration complete!"
}

install_docker
configure_docker
```
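Once the script finishes, it is worth confirming that the daemon actually picked up the `daemon.json` settings. A minimal verification sketch using standard Docker CLI calls (no project-specific names are assumed):

```bash
# Verify the installation and the daemon.json settings
docker version                                            # client and server versions should both appear
docker info --format '{{.LoggingDriver}} / {{.Driver}}'   # expect: json-file / overlay2
docker run --rm hello-world                               # end-to-end smoke test: pull, create, run, remove

# If "permission denied" errors appear, the group change from usermod has not
# taken effect yet; log out and back in, or start a new group session:
newgrp docker
```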
### 2. Dockerfile Best Practices

```dockerfile
# Build stage: install dependencies and compile the application
FROM node:16-alpine AS builder

WORKDIR /app

# Copy the lockfile first to take advantage of layer caching
COPY package*.json ./
# Note: devDependencies are skipped here, so "npm run build" must work with
# production dependencies only
RUN npm ci --only=production && npm cache clean --force

COPY . .
RUN npm run build

# Production stage: serve the static build with nginx
FROM nginx:alpine AS production

RUN apk add --no-cache curl

# Create a non-root user for the served content
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001

COPY --from=builder --chown=nextjs:nodejs /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/nginx.conf

HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD curl -f http://localhost/ || exit 1

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]
```
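A sketch of how this multi-stage Dockerfile might be built and checked locally; the image name `myapp`, the tag, and the host port are placeholders chosen for illustration:

```bash
# Build the final production stage (the builder stage is built implicitly)
docker build -t myapp:1.0.0 --target production .

# Compare layer sizes to confirm the multi-stage build kept the image small
docker image ls myapp
docker history myapp:1.0.0

# Run it locally and let the HEALTHCHECK report status
docker run -d --name myapp-test -p 8080:80 myapp:1.0.0
sleep 35 && docker inspect --format '{{.State.Health.Status}}' myapp-test   # expect "healthy"
docker rm -f myapp-test
```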
### 3. Docker Compose Orchestration

```yaml
version: '3.8'

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
      target: production
    image: myapp:${VERSION:-latest}
    container_name: myapp-web
    restart: unless-stopped
    ports:
      - "80:80"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://user:pass@db:5432/myapp
      - REDIS_URL=redis://redis:6379
    volumes:
      - ./logs:/var/log/nginx
      - ./uploads:/app/uploads
    networks:
      - app-network
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  db:
    image: postgres:13-alpine
    container_name: myapp-db
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${DB_NAME:-myapp}
      POSTGRES_USER: ${DB_USER:-user}
      POSTGRES_PASSWORD: ${DB_PASSWORD:-password}
    volumes:
      - db_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    networks:
      - app-network
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER:-user}"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:6-alpine
    container_name: myapp-redis
    restart: unless-stopped
    command: redis-server --appendonly yes --requirepass ${REDIS_PASSWORD:-password}
    volumes:
      - redis_data:/data
    networks:
      - app-network
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 3s
      retries: 3

  prometheus:
    image: prom/prometheus:latest
    container_name: myapp-prometheus
    restart: unless-stopped
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus
    networks:
      - app-network
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--storage.tsdb.retention.time=200h'
      - '--web.enable-lifecycle'

volumes:
  db_data:
    driver: local
  redis_data:
    driver: local
  prometheus_data:
    driver: local

networks:
  app-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
```
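Before bringing the stack up, it helps to render the interpolated configuration and then watch the health checks converge. A small usage sketch, assuming a `.env` file provides `VERSION`, the `DB_*` variables, and `REDIS_PASSWORD` (Compose also reads a `.env` file in the project directory automatically):

```bash
# Validate and render the effective configuration with variables interpolated
docker-compose --env-file .env config

# Start the stack in the background and check the health status
docker-compose --env-file .env up -d
docker-compose ps            # services should eventually report "Up (healthy)"

# Tail a single service while debugging startup ordering
docker-compose logs -f db
```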
### 4. Container Management Script

```bash
#!/bin/bash

APP_NAME="myapp"
COMPOSE_FILE="docker-compose.yml"
ENV_FILE=".env"

function show_usage() {
    echo "Usage: $0 {start|stop|restart|status|logs|update|backup|restore}"
}

function start_containers() {
    echo "Starting containers..."
    docker-compose -f $COMPOSE_FILE up -d
    echo "Containers started"
}

function stop_containers() {
    echo "Stopping containers..."
    docker-compose -f $COMPOSE_FILE down
    echo "Containers stopped"
}

function restart_containers() {
    echo "Restarting containers..."
    docker-compose -f $COMPOSE_FILE restart
    echo "Containers restarted"
}

function show_status() {
    echo "Container status:"
    docker-compose -f $COMPOSE_FILE ps
    echo ""
    echo "Resource usage:"
    docker stats --no-stream
}

function show_logs() {
    local service=${1:-}
    if [ -n "$service" ]; then
        docker-compose -f $COMPOSE_FILE logs -f "$service"
    else
        docker-compose -f $COMPOSE_FILE logs -f
    fi
}

function update_containers() {
    echo "Updating containers..."
    docker-compose -f $COMPOSE_FILE pull
    docker-compose -f $COMPOSE_FILE build --no-cache
    docker-compose -f $COMPOSE_FILE up -d
    docker image prune -f
    echo "Containers updated"
}

function backup_data() {
    local backup_dir="./backups/$(date +%Y%m%d_%H%M%S)"
    mkdir -p "$backup_dir"

    echo "Backing up data to $backup_dir"

    # Dump the PostgreSQL database
    docker-compose -f $COMPOSE_FILE exec -T db pg_dump -U user myapp > "$backup_dir/database.sql"

    # Persist and copy the Redis dataset
    docker-compose -f $COMPOSE_FILE exec -T redis redis-cli BGSAVE
    docker cp myapp-redis:/data/dump.rdb "$backup_dir/"

    # Copy uploaded files
    docker cp myapp-web:/app/uploads "$backup_dir/"

    echo "Backup complete"
}

function restore_data() {
    local backup_dir=$1

    if [ -z "$backup_dir" ] || [ ! -d "$backup_dir" ]; then
        echo "Please specify a valid backup directory"
        ls -la ./backups/
        return 1
    fi

    echo "Restoring data from $backup_dir"

    if [ -f "$backup_dir/database.sql" ]; then
        docker-compose -f $COMPOSE_FILE exec -T db psql -U user -d myapp < "$backup_dir/database.sql"
    fi

    if [ -f "$backup_dir/dump.rdb" ]; then
        docker cp "$backup_dir/dump.rdb" myapp-redis:/data/
        docker-compose -f $COMPOSE_FILE restart redis
    fi

    if [ -d "$backup_dir/uploads" ]; then
        docker cp "$backup_dir/uploads" myapp-web:/app/
    fi

    echo "Restore complete"
}

case "$1" in
    start)   start_containers ;;
    stop)    stop_containers ;;
    restart) restart_containers ;;
    status)  show_status ;;
    logs)    show_logs $2 ;;
    update)  update_containers ;;
    backup)  backup_data ;;
    restore) restore_data $2 ;;
    *)
        show_usage
        exit 1
        ;;
esac
```
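A usage sketch for the script above (saved here as `manage.sh`, a name chosen purely for illustration), including a possible cron entry for the backup routine; the paths and timestamp are examples, not values from the article:

```bash
chmod +x manage.sh

./manage.sh start            # bring the stack up
./manage.sh status           # compose ps plus a docker stats snapshot
./manage.sh logs web         # follow logs of a single service
./manage.sh backup           # dump Postgres, Redis, and uploads into ./backups/<timestamp>
./manage.sh restore ./backups/20240101_020000   # adjust to an existing backup directory

# Nightly backup at 02:00 via cron (illustrative paths)
# 0 2 * * * cd /opt/myapp && ./manage.sh backup >> /var/log/myapp-backup.log 2>&1
```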
## Deploying a Kubernetes Cluster

### 1. Installing Kubernetes

```bash
#!/bin/bash

function install_kubernetes() {
    echo "Installing Kubernetes..."

    # Disable swap (required by the kubelet)
    sudo swapoff -a
    sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

    # Load required kernel modules and sysctl settings
    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
    sudo sysctl --system

    # Install and configure containerd as the container runtime
    sudo apt-get update
    sudo apt-get install -y containerd
    sudo mkdir -p /etc/containerd
    containerd config default | sudo tee /etc/containerd/config.toml
    sudo systemctl restart containerd
    sudo systemctl enable containerd

    # Install kubelet, kubeadm, and kubectl
    # Note: the apt.kubernetes.io / packages.cloud.google.com repositories have been
    # deprecated in favor of the community-owned pkgs.k8s.io; adjust for new installs.
    sudo apt-get update
    sudo apt-get install -y apt-transport-https ca-certificates curl
    curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
    echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

    sudo apt-get update
    sudo apt-get install -y kubelet kubeadm kubectl
    sudo apt-mark hold kubelet kubeadm kubectl

    echo "Kubernetes installation complete"
}

function init_master() {
    echo "Initializing the master node..."

    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    # Configure kubectl for the current user
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    # Install the Flannel CNI plugin
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

    echo "Master node initialized"
    echo "Join worker nodes with the following command:"
    kubeadm token create --print-join-command
}

install_kubernetes

read -p "Initialize this node as the master? (y/n): " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
    init_master
fi
```
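After `kubeadm init` finishes and Flannel is applied, a quick sanity check might look like the sketch below. The join command is shown only as a placeholder format; the real token and hash come from the `kubeadm token create --print-join-command` output above:

```bash
# On the master: nodes should become Ready once the CNI plugin is running
kubectl get nodes -o wide
kubectl get pods -n kube-system

# On each worker: run the join command printed by the master (placeholder values shown)
# sudo kubeadm join <MASTER_IP>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

# Back on the master: confirm the worker registered
kubectl get nodes
```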
### 2. Deploying an Application on Kubernetes

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest
        ports:
        - containerPort: 3000
        env:
        - name: NODE_ENV
          value: "production"
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: myapp-secrets
              key: database-url
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 3000
          initialDelaySeconds: 5
          periodSeconds: 5
        volumeMounts:
        - name: app-storage
          mountPath: /app/uploads
      volumes:
      - name: app-storage
        persistentVolumeClaim:
          claimName: myapp-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  namespace: default
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - myapp.example.com
    secretName: myapp-tls
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-service
            port:
              number: 80
```
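The Deployment above references a `myapp-secrets` Secret and a `myapp-pvc` PersistentVolumeClaim that are not defined in the manifest. A minimal sketch of how they could be created; the connection string and storage size are illustrative assumptions, and the storage class depends on the cluster:

```bash
# Secret consumed via secretKeyRef in the Deployment
kubectl create secret generic myapp-secrets \
  --from-literal=database-url='postgresql://user:pass@db:5432/myapp'

# PVC backing the /app/uploads volume
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF
```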
### 3. Kubernetes Management Script

```bash
#!/bin/bash

APP_NAME="myapp"
NAMESPACE="default"
MANIFEST_DIR="./k8s"

function deploy_app() {
    echo "Deploying the application to Kubernetes..."
    kubectl create namespace $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -
    kubectl apply -f $MANIFEST_DIR/ -n $NAMESPACE
    kubectl rollout status deployment/$APP_NAME -n $NAMESPACE
    echo "Deployment complete"
}

function update_app() {
    local image_tag=${1:-latest}
    echo "Updating the application image to $image_tag"
    kubectl set image deployment/$APP_NAME $APP_NAME=$APP_NAME:$image_tag -n $NAMESPACE
    kubectl rollout status deployment/$APP_NAME -n $NAMESPACE
    echo "Update complete"
}

function rollback_app() {
    echo "Rolling back the application..."
    kubectl rollout history deployment/$APP_NAME -n $NAMESPACE
    kubectl rollout undo deployment/$APP_NAME -n $NAMESPACE
    kubectl rollout status deployment/$APP_NAME -n $NAMESPACE
    echo "Rollback complete"
}

function scale_app() {
    local replicas=${1:-3}
    echo "Scaling the application to $replicas replicas"
    kubectl scale deployment/$APP_NAME --replicas=$replicas -n $NAMESPACE
    echo "Scaling complete"
}

function show_status() {
    echo "Application status:"
    kubectl get pods,svc,ingress -l app=$APP_NAME -n $NAMESPACE
    echo ""
    echo "Deployment details:"
    kubectl describe deployment/$APP_NAME -n $NAMESPACE
}

function show_logs() {
    local pod_name=$1
    if [ -z "$pod_name" ]; then
        kubectl logs -l app=$APP_NAME -n $NAMESPACE --tail=100 -f
    else
        kubectl logs $pod_name -n $NAMESPACE --tail=100 -f
    fi
}

function delete_app() {
    echo "Deleting the application..."
    kubectl delete -f $MANIFEST_DIR/ -n $NAMESPACE
    echo "Deletion complete"
}

case "$1" in
    deploy)   deploy_app ;;
    update)   update_app $2 ;;
    rollback) rollback_app ;;
    scale)    scale_app $2 ;;
    status)   show_status ;;
    logs)     show_logs $2 ;;
    delete)   delete_app ;;
    *)
        echo "Usage: $0 {deploy|update|rollback|scale|status|logs|delete}"
        exit 1
        ;;
esac
```
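A usage sketch for the script above (saved here as `k8s.sh`, an illustrative name), assuming the manifests from the previous section live in `./k8s`:

```bash
chmod +x k8s.sh

./k8s.sh deploy              # apply all manifests in ./k8s and wait for the rollout
./k8s.sh update 1.2.0        # roll the Deployment to image myapp:1.2.0
./k8s.sh scale 5             # scale to 5 replicas
./k8s.sh logs                # stream logs from all pods labeled app=myapp
./k8s.sh rollback            # undo the last rollout if the update misbehaves
```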
## Container Monitoring and Logging

### 1. Monitoring Configuration

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - name: prometheus
        image: prom/prometheus:latest
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: config
          mountPath: /etc/prometheus
        - name: storage
          mountPath: /prometheus
      volumes:
      - name: config
        configMap:
          name: prometheus-config
      - name: storage
        emptyDir: {}
```
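With the relabeling rules above, Prometheus only keeps pods that carry the `prometheus.io/scrape` annotation. A sketch of annotating the application pods and confirming the target appears; the `/metrics` path and the assumption that the app exposes metrics on its container port are illustrative:

```bash
# Annotate the pod template so the relabel rule keeps it
kubectl patch deployment myapp --type merge -p '{
  "spec": {"template": {"metadata": {"annotations": {
    "prometheus.io/scrape": "true",
    "prometheus.io/path": "/metrics"
  }}}}
}'

# Reach the Prometheus API locally and summarize target health
kubectl port-forward deployment/prometheus 9090:9090 &
curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[a-z]*"' | sort | uniq -c
```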
### 2. Log Collection Configuration

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
data:
  fluent.conf: |
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      format json
      read_from_head true
    </source>

    <filter kubernetes.**>
      @type kubernetes_metadata
    </filter>

    <match **>
      @type elasticsearch
      host elasticsearch.logging.svc.cluster.local
      port 9200
      index_name fluentd
      type_name fluentd
    </match>
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: config
          mountPath: /fluentd/etc
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config
        configMap:
          name: fluentd-config
```
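A sketch for checking that the DaemonSet is shipping logs into Elasticsearch. The `elasticsearch.logging.svc.cluster.local` host matches the `fluent.conf` above; the throwaway curl pod and its image are assumptions for illustration:

```bash
# One fluentd pod should be running on every node
kubectl get daemonset fluentd
kubectl get pods -l name=fluentd -o wide

# Follow a collector's own log to spot connection or parsing errors
kubectl logs -l name=fluentd --tail=50

# From inside the cluster, confirm the fluentd index is receiving documents
kubectl run es-check --rm -it --image=curlimages/curl --restart=Never -- \
  curl -s "http://elasticsearch.logging.svc.cluster.local:9200/_cat/indices?v"
```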
## Best Practices and Summary

### Containerization Principles

- **Single responsibility**: run only one main process per container
- **Stateless design**: keep application state in external stores
- **Image optimization**: use multi-stage builds to keep images small
- **Security hardening**: run as non-root users and scan images for vulnerabilities
### Production Considerations

- **Resource limits**: set CPU and memory requests and limits
- **Health checks**: configure liveness and readiness probes
- **Data persistence**: store data on persistent volumes
- **Network security**: apply network policies and, where appropriate, a service mesh (an example policy follows this list)
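As an example of the network-security point, a minimal sketch of a NetworkPolicy that only admits ingress to the `myapp` pods from the ingress controller's namespace. The namespace label value is an assumption that varies by installation, and enforcement requires a policy-capable CNI such as Calico or Cilium (Flannel alone does not enforce NetworkPolicies):

```bash
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myapp-allow-ingress-only
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx   # assumed ingress controller namespace
      ports:
        - protocol: TCP
          port: 3000
EOF
```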
### Operations and Management

- **Monitoring and alerting**: comprehensive container and application monitoring
- **Log management**: centralized log collection and analysis
- **Backup and recovery**: back up critical data on a regular schedule
- **Disaster recovery**: multi-region deployment and failover
## Conclusion

Linux containerization gives modern applications a standardized, portable, and scalable deployment model. Used together, Docker and Kubernetes make it possible to build an efficient, stable, and maintainable container platform that provides solid technical support as the business grows.