Apache Cluster Deployment
Overview
Apache cluster deployment is a key technique for building highly available, scalable web application architectures. This article walks through an Apache cluster deployment in detail, covering load balancer configuration, session sharing, shared storage, health checking, and failover.
1. Cluster Architecture Design
1.1 Basic Cluster Topology
graph TB
LB[Load Balancer<br/>Nginx/HAProxy] --> AP1[Apache Node 1]
LB --> AP2[Apache Node 2]
LB --> AP3[Apache Node 3]
AP1 --> FS[Shared File System<br/>NFS/GlusterFS]
AP2 --> FS
AP3 --> FS
AP1 --> DB[Database Cluster]
AP2 --> DB
AP3 --> DB
subgraph "Apache Cluster"
AP1
AP2
AP3
end
style LB fill:#ffe4b5,stroke:#333
style FS fill:#e6e6fa,stroke:#333
style DB fill:#e6e6fa,stroke:#333
1.2 High Availability Cluster Architecture
graph TB
HAP1[HAProxy 1<br/>Active] --> AP1[Apache Node 1]
HAP1 --> AP2[Apache Node 2]
HAP1 --> AP3[Apache Node 3]
HAP2[HAProxy 2<br/>Standby] --> AP1
HAP2 --> AP2
HAP2 --> AP3
VIP[Virtual IP] --> HAP1
VIP --> HAP2
AP1 --> FS[Shared Storage]
AP2 --> FS
AP3 --> FS
subgraph "Load Balancer Layer"
HAP1
HAP2
VIP
end
subgraph "Apache Cluster"
AP1
AP2
AP3
end
style HAP1 fill:#90ee90,stroke:#333
style HAP2 fill:#ffa07a,stroke:#333
style VIP fill:#ffe4b5,stroke:#333
2. Load Balancer Configuration
2.1 HAProxy Cluster Configuration
# /etc/haproxy/haproxy.cfg
global
daemon
maxconn 4096
log stdout local0 info
stats socket /var/run/haproxy.sock mode 600 level admin
stats timeout 30s
defaults
mode http
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
option httplog
option dontlognull
retries 3
# Frontend configuration
frontend apache_frontend
bind *:80
bind *:443 ssl crt /etc/ssl/certs/example.com.pem
# Redirect HTTP to HTTPS
redirect scheme https if !{ ssl_fc }
# ACL rules
acl is_api path_beg /api/
acl is_admin path_beg /admin/
# Routing rules
use_backend api_backend if is_api
use_backend admin_backend if is_admin
default_backend web_backend
# Backend configuration
backend web_backend
balance roundrobin
option httpchk GET /health
http-check expect status 200
# Apache nodes
server apache1 192.168.1.10:80 check inter 2000 rise 2 fall 3
server apache2 192.168.1.11:80 check inter 2000 rise 2 fall 3
server apache3 192.168.1.12:80 check inter 2000 rise 2 fall 3
backend api_backend
balance leastconn
option httpchk GET /api/health
http-check expect status 200
server api1 192.168.1.10:80 check
server api2 192.168.1.11:80 check
server api3 192.168.1.12:80 check

# Backend referenced by the is_admin ACL above (HAProxy refuses to start if a
# use_backend target is undefined)
backend admin_backend
balance roundrobin
option httpchk GET /health
http-check expect status 200
server admin1 192.168.1.10:80 check
server admin2 192.168.1.11:80 check
server admin3 192.168.1.12:80 check

# Statistics page
listen stats
bind :9000
mode http
stats enable
stats uri /haproxy-stats
stats auth admin:password
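Before putting the balancer into service, the configuration can be validated and backend health inspected over the admin socket defined in the global section above (socat is assumed to be installed):

# Validate the configuration and reload HAProxy
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl reload haproxy
# Inspect backend server state over the admin socket
echo "show servers state web_backend" | sudo socat stdio /var/run/haproxy.sock
# Or check the stats page configured above
curl -u admin:password http://127.0.0.1:9000/haproxy-stats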
2.2 Nginx Load Balancer Configuration
# /etc/nginx/sites-available/apache-cluster
upstream apache_cluster {
# Load balancing strategy
least_conn;
# Backend servers
server 192.168.1.10:80 weight=3 max_fails=2 fail_timeout=30s;
server 192.168.1.11:80 weight=2 max_fails=2 fail_timeout=30s;
server 192.168.1.12:80 weight=1 max_fails=2 fail_timeout=30s backup;
# Upstream keepalive connections (failed nodes are detected passively via max_fails/fail_timeout above)
keepalive 32;
keepalive_requests 1000;
keepalive_timeout 60s;
}
server {
listen 80;
listen 443 ssl http2;
server_name example.com;
# SSL configuration
ssl_certificate /etc/ssl/certs/example.com.crt;
ssl_certificate_key /etc/ssl/private/example.com.key;
location / {
proxy_pass http://apache_cluster;
# Cookie hardening: appends Secure/HttpOnly via path rewriting (this is not
# session stickiness; see the sticky upstream sketch after this block)
proxy_cookie_path / "/; Secure; HttpOnly";
# Request header forwarding
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Port $server_port;
# Performance tuning
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_cache_bypass $http_upgrade;
# Error handling: retry the next upstream on failure
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
proxy_next_upstream_timeout 0;
proxy_next_upstream_tries 3;
}
# Health check endpoint
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
}
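If client affinity is needed instead of pure load spreading, open-source Nginx offers hash-based stickiness (cookie-based sticky sessions are a commercial feature). A minimal sketch of an alternative upstream; note that ip_hash does not accept the backup parameter:

# Alternative upstream with source-IP affinity
upstream apache_cluster_sticky {
ip_hash;
server 192.168.1.10:80 max_fails=2 fail_timeout=30s;
server 192.168.1.11:80 max_fails=2 fail_timeout=30s;
server 192.168.1.12:80 max_fails=2 fail_timeout=30s;
keepalive 32;
}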
3. Session Sharing Configuration
3.1 Redis Session Storage
# Note: stock Apache httpd has no Redis backend for mod_session. mod_socache_redis
# (recent httpd 2.4 releases) only provides a shared-object cache used by modules such
# as mod_ssl and mod_authn_socache, so cluster-wide session sharing through Redis is
# normally configured at the application layer (see the PHP sketch below). mod_session
# itself offers cookie-based (mod_session_cookie) or database-backed (mod_session_dbd,
# section 3.2) storage.

# Apache-side session framework (cookie transport and encryption)
LoadModule session_module modules/mod_session.so
LoadModule session_cookie_module modules/mod_session_cookie.so
LoadModule session_crypto_module modules/mod_session_crypto.so

<IfModule mod_session.c>
Session On
SessionCookieName session path=/
SessionCryptoPassphrase your-secret-passphrase
SessionEnv On
SessionHeader X-Session
</IfModule>
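A minimal sketch of the application-layer approach, assuming a PHP application with the phpredis extension installed on every node; the Redis address 192.168.1.200 and the password are placeholders:

; php.ini fragment on every Apache node (exact path varies by distribution)
; Assumes the phpredis extension is installed (e.g. apt-get install php-redis)
session.save_handler = redis
session.save_path = "tcp://192.168.1.200:6379?auth=your-redis-password"

With this in place, PHP sessions live in the shared Redis instance and any node can serve any request.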
3.2 Database Session Storage
# Enable the database session modules (mod_dbd provides the connection pool,
# mod_session_dbd the storage backend)
LoadModule dbd_module modules/mod_dbd.so
LoadModule session_module modules/mod_session.so
LoadModule session_dbd_module modules/mod_session_dbd.so

# Database connection configuration
<IfModule mod_dbd.c>
DBDriver mysql
# In a cluster, host must point at the shared database, not a node-local MySQL
DBDParams "host=localhost user=apache pass=apachepass dbname=sessions"
# Custom SQL is registered through mod_dbd; the parameter order (value, expiry, key)
# must match the default statements documented for mod_session_dbd
DBDPrepareSQL "INSERT INTO sessions (session_data, expires, session_id) VALUES (%s, %lld, %s)" insertsession
DBDPrepareSQL "SELECT session_data FROM sessions WHERE session_id = %s AND (expires = 0 OR expires > %lld)" selectsession
DBDPrepareSQL "DELETE FROM sessions WHERE session_id = %s" deletesession
DBDPrepareSQL "UPDATE sessions SET session_data = %s, expires = %lld WHERE session_id = %s" updatesession
</IfModule>

# Store sessions in the database
<IfModule mod_session_dbd.c>
Session On
SessionDBDCookieName session path=/
SessionDBDInsertLabel insertsession
SessionDBDSelectLabel selectsession
SessionDBDDeleteLabel deletesession
SessionDBDUpdateLabel updatesession
</IfModule>
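The prepared statements above assume a matching table on the shared database server. A sketch of the schema and grants using the names from DBDParams (run once on the MySQL host; restrict the user's host part to your web-node subnet in practice):

# Create the session database, table and Apache user
mysql -u root -p <<'SQL'
CREATE DATABASE IF NOT EXISTS sessions;
CREATE TABLE IF NOT EXISTS sessions.sessions (
session_id VARCHAR(128) NOT NULL PRIMARY KEY,
session_data TEXT,
expires BIGINT NOT NULL DEFAULT 0
);
CREATE USER IF NOT EXISTS 'apache'@'%' IDENTIFIED BY 'apachepass';
GRANT SELECT, INSERT, UPDATE, DELETE ON sessions.* TO 'apache'@'%';
SQL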
4. Shared Storage Configuration
4.1 NFS Shared Storage
# NFS server configuration
# /etc/exports
/var/www/html 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
# Enable and start the NFS service
sudo systemctl enable nfs-server
sudo systemctl start nfs-server
# Client mount configuration (on every Apache node)
# /etc/fstab
192.168.1.100:/var/www/html /var/www/html nfs defaults 0 0
# Mount the shared storage
sudo mount -a
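A quick verification of the export and the client mounts (assumes nfs-kernel-server on the server and nfs-common on the nodes):

# On the NFS server (192.168.1.100): re-export after editing /etc/exports and list the exports
sudo exportfs -ra
sudo exportfs -v
# On each Apache node: confirm the export is visible and the mount is active
showmount -e 192.168.1.100
df -h /var/www/html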
4.2 GlusterFS Distributed Storage
# GlusterFS server configuration
# Join the nodes into a trusted storage pool (run once on 192.168.1.10)
sudo gluster peer probe 192.168.1.11
sudo gluster peer probe 192.168.1.12
# Create a 3-way replicated GlusterFS volume
sudo gluster volume create apache-volume replica 3 \
192.168.1.10:/gluster/brick1 \
192.168.1.11:/gluster/brick1 \
192.168.1.12:/gluster/brick1
# Start the volume
sudo gluster volume start apache-volume
# Client mount (requires the GlusterFS client package on each Apache node)
# /etc/fstab
192.168.1.10:/apache-volume /var/www/html glusterfs defaults,_netdev 0 0
# Mount the volume
sudo mount -a
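Volume and peer health can be checked from any node in the trusted pool:

# Check peer and volume status
sudo gluster peer status
sudo gluster volume info apache-volume
sudo gluster volume status apache-volume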
5. Health Check and Monitoring
5.1 Cluster Health Check Script
#!/bin/bash
# cluster-health.sh (requires curl, nc and bc)
check_cluster_health() {
echo "=== Apache Cluster Health Check ==="
# Cluster nodes
nodes=("192.168.1.10" "192.168.1.11" "192.168.1.12")
# Check each node
for node in "${nodes[@]}"; do
echo "Checking node: $node"
# Check the port
if nc -z "$node" 80 2>/dev/null; then
echo " ✓ Port 80 is open"
else
echo " ✗ Port 80 is closed"
fi
# Check the HTTP response
response=$(curl -s -o /dev/null -w "%{http_code}" "http://$node/health" 2>/dev/null)
if [ "$response" = "200" ]; then
echo " ✓ HTTP 200 OK"
else
echo " ✗ HTTP $response"
fi
# Check the response time
response_time=$(curl -s -o /dev/null -w "%{time_total}" "http://$node/" 2>/dev/null)
response_time_ms=$(echo "$response_time * 1000" | bc)
echo " Response time: ${response_time_ms}ms"
echo
done
# Check the load balancer (stats page is protected by the auth configured in 2.1)
echo "Checking Load Balancer:"
lb_status=$(curl -s -o /dev/null -w "%{http_code}" -u admin:password "http://192.168.1.100:9000/haproxy-stats" 2>/dev/null)
if [ "$lb_status" = "200" ]; then
echo " ✓ Load balancer stats accessible"
else
echo " ✗ Load balancer stats not accessible"
fi
echo
echo "Cluster health check completed!"
}
check_cluster_health
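To run the check on a schedule, a simple cron entry is sufficient; a sketch assuming the script is installed as /usr/local/bin/cluster-health.sh:

# /etc/cron.d/cluster-health -- run the health check every 5 minutes
*/5 * * * * root /usr/local/bin/cluster-health.sh >> /var/log/cluster-health.log 2>&1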
5.2 Cluster Monitoring Configuration
# Cluster monitoring configuration
# Enable the server status module
LoadModule status_module modules/mod_status.so
# ExtendedStatus is a server-level directive and must sit outside <Location>
ExtendedStatus On
# Cluster status endpoint
<Location "/cluster-status">
SetHandler server-status
# Access control: allow the cluster subnet or localhost
<RequireAny>
Require ip 192.168.1.0/24
Require ip 127.0.0.1
</RequireAny>
</Location>
# Custom cluster monitoring log
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %D %{Host}i %{X-Forwarded-For}i" cluster_combined
CustomLog /var/log/apache2/cluster.log cluster_combined
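mod_status also exposes a machine-readable variant at ?auto, which is handy for scripts and monitoring agents:

# Scrape key worker metrics from one node (run from a host permitted by the Require rules above)
curl -s "http://192.168.1.10/cluster-status?auto" | grep -E "Total Accesses|BusyWorkers|IdleWorkers"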
6. Failover and High Availability
6.1 Keepalived Configuration
# /etc/keepalived/keepalived.conf (MASTER node; the standby uses state BACKUP and a lower priority)

# Health check script (defined before the instance that tracks it)
vrrp_script check_haproxy {
script "/usr/local/bin/check-haproxy.sh"
interval 2
weight -2
fall 3
rise 2
}

vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 110
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.1.100/24
}
# Track the health check script defined above
track_script {
check_haproxy
}
}
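The tracked script is referenced but not shown above; a minimal sketch that simply checks whether an haproxy process exists (exit 0 keeps the node's priority, a non-zero exit subtracts the weight). Remember to make it executable with chmod +x.

#!/bin/bash
# /usr/local/bin/check-haproxy.sh -- exit 0 while HAProxy is running
pidof haproxy > /dev/null 2>&1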
6.2 Failover Script
#!/bin/bash
# failover.sh
perform_failover() {
local failed_node=$1
local new_master=$2
echo "=== Performing Failover ==="
echo "Failed node: $failed_node"
echo "New master: $new_master"
echo
# 1. Remove the failed node from the load balancer
echo "1. Removing failed node from load balancer:"
# A real implementation would call the load balancer API or admin socket here
# (see the HAProxy socket sketch after this script)
echo " Node $failed_node removed from load balancer"
# 2. Check the shared storage
echo
echo "2. Checking shared storage:"
if mountpoint -q /var/www/html; then
echo " ✓ Shared storage is mounted"
else
echo " ✗ Shared storage is not mounted"
# Try to remount
sudo mount -a
fi
# 3. Start services on the standby node
echo
echo "3. Starting services on new master:"
if ssh "$new_master" "sudo systemctl start apache2"; then
echo " ✓ Apache started on $new_master"
else
echo " ✗ Failed to start Apache on $new_master"
fi
# 4. Update DNS records (if required)
echo
echo "4. Updating DNS records:"
# A real implementation would call the DNS provider's API here
echo " DNS records updated"
echo
echo "Failover completed!"
}
perform_failover "$1" "$2"
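For step 1 above, one concrete option is the HAProxy runtime API over the admin socket configured in section 2.1; a sketch (requires socat, and the server name must match the backend definition):

# Take the failed node out of rotation, then re-enable it after recovery
echo "disable server web_backend/apache1" | sudo socat stdio /var/run/haproxy.sock
echo "enable server web_backend/apache1" | sudo socat stdio /var/run/haproxy.sock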
7. Cluster Deployment Scripts
7.1 Cluster Initialization Script
#!/bin/bash
# cluster-init.sh
initialize_cluster() {
local nodes=("192.168.1.10" "192.168.1.11" "192.168.1.12")
echo "=== Apache Cluster Initialization ==="
for node in "${nodes[@]}"; do
echo "Initializing node: $node"
# 1. Install Apache
echo " Installing Apache..."
ssh "$node" "sudo apt-get update && sudo apt-get install -y apache2"
# 2. Configure shared storage (assumes the fstab entry from section 4 is in place)
echo " Configuring shared storage..."
ssh "$node" "sudo mkdir -p /var/www/html && sudo mount -a"
# 3. Copy configuration files
echo " Copying configuration files..."
scp /etc/apache2/sites-available/cluster.conf "$node":/etc/apache2/sites-available/
ssh "$node" "sudo a2ensite cluster.conf"
# 4. Start the service
echo " Starting Apache service..."
ssh "$node" "sudo systemctl start apache2 && sudo systemctl enable apache2"
echo " Node $node initialization completed"
echo
done
echo "Cluster initialization completed!"
}
initialize_cluster
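The cluster.conf copied by the script is not shown above; a minimal sketch of such a site, assuming the document root lives on the shared mount, mod_remoteip is enabled (a2enmod remoteip), and /health is served from a static file:

# /etc/apache2/sites-available/cluster.conf -- minimal virtual host for the cluster nodes
<VirtualHost *:80>
ServerName example.com
DocumentRoot /var/www/html

# Restore the real client IP forwarded by the load balancer (mod_remoteip)
RemoteIPHeader X-Forwarded-For
RemoteIPInternalProxy 192.168.1.0/24

# Health check endpoint used by HAProxy/Nginx (create /var/www/html/health.txt once)
Alias /health /var/www/html/health.txt

ErrorLog ${APACHE_LOG_DIR}/cluster_error.log
CustomLog ${APACHE_LOG_DIR}/cluster_access.log combined
</VirtualHost>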
7.2 Cluster Status Check Script
#!/bin/bash
# cluster-status.sh
check_cluster_status() {
local nodes=("192.168.1.10" "192.168.1.11" "192.168.1.12")
echo "=== Apache Cluster Status ==="
# Check the status of each node
for node in "${nodes[@]}"; do
echo "Node: $node"
# Check the Apache process
apache_status=$(ssh "$node" "systemctl is-active apache2" 2>/dev/null)
if [ "$apache_status" = "active" ]; then
echo " ✓ Apache is running"
else
echo " ✗ Apache is not running"
fi
# Check disk usage (default to 0 if the node is unreachable)
disk_usage=$(ssh "$node" "df /var/www | tail -1 | awk '{print \$5}' | sed 's/%//'" 2>/dev/null)
disk_usage=${disk_usage:-0}
if [ "$disk_usage" -lt 90 ]; then
echo " ✓ Disk usage: ${disk_usage}%"
else
echo " ⚠️ Disk usage: ${disk_usage}% (Warning)"
fi
# Check memory usage (default to 0 if the node is unreachable)
memory_usage=$(ssh "$node" "free | grep Mem | awk '{printf \"%.0f\", \$3/\$2 * 100.0}'" 2>/dev/null)
memory_usage=${memory_usage:-0}
if [ "$memory_usage" -lt 80 ]; then
echo " ✓ Memory usage: ${memory_usage}%"
else
echo " ⚠️ Memory usage: ${memory_usage}% (Warning)"
fi
echo
done
# Check the load balancer status
echo "Load Balancer Status:"
if curl -fs -u admin:password "http://192.168.1.100:9000/haproxy-stats" > /dev/null; then
echo " ✓ Load balancer is accessible"
else
echo " ✗ Load balancer is not accessible"
fi
echo
echo "Cluster status check completed!"
}
check_cluster_status
Summary
After working through this article, you should have a grasp of:
- Apache cluster architecture design principles
- Load balancer configuration (HAProxy and Nginx)
- Session sharing solutions (Redis and database-backed)
- Shared storage configuration (NFS and GlusterFS)
- Health checking and monitoring mechanisms
- Failover and high availability configuration
- Cluster deployment and management scripts
Apache cluster deployment is a core technique for building enterprise-grade web applications; with a sound architecture and careful configuration it delivers both high availability and scalability. In the next article, we will cover containerized Apache deployment in detail.