MySQL Performance Optimization: A Complete Guide from Index Design to Query Tuning

As the world's most popular open-source relational database management system, MySQL plays a critical role in internet applications. As business scale and data volumes grow, MySQL performance tuning has become a core skill that every developer and DBA must master. This article explores the theory and practice of MySQL performance optimization across several dimensions: index design, query optimization, configuration tuning, and architecture.

MySQL Performance Analysis Fundamentals

Performance Monitoring Metrics

QPS (Queries Per Second): the number of queries executed per second; measures database throughput.

-- Check current QPS
SHOW GLOBAL STATUS LIKE 'Questions';
SHOW GLOBAL STATUS LIKE 'Uptime';
-- QPS = Questions / Uptime

-- Monitor QPS in a single query
-- Note: INFORMATION_SCHEMA.GLOBAL_STATUS is deprecated in MySQL 5.7 and
-- removed in 8.0; use performance_schema.global_status there instead.
SELECT
VARIABLE_VALUE as Questions,
VARIABLE_VALUE / (SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Uptime') as QPS
FROM INFORMATION_SCHEMA.GLOBAL_STATUS
WHERE VARIABLE_NAME = 'Questions';
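Questions / Uptime yields only a lifetime average. For operational monitoring it is usually more useful to sample the cumulative counter twice and divide the delta by the interval. A minimal sketch of that calculation (the class and method names are illustrative; reading the counter over JDBC is assumed and not shown):

```java
// Interval QPS: sample the cumulative Questions counter twice and divide the
// delta by the elapsed seconds. Fetching the counter itself (e.g. via
// SHOW GLOBAL STATUS LIKE 'Questions' over JDBC) is assumed and not shown.
public class QpsCalculator {

    public static double intervalQps(long questionsBefore, long questionsAfter,
                                     double elapsedSeconds) {
        if (elapsedSeconds <= 0) {
            throw new IllegalArgumentException("elapsedSeconds must be positive");
        }
        return (questionsAfter - questionsBefore) / elapsedSeconds;
    }

    public static void main(String[] args) {
        // e.g. the counter grew from 1,000,000 to 1,030,000 over 60 seconds
        System.out.println(intervalQps(1_000_000L, 1_030_000L, 60.0)); // 500.0
    }
}
```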

TPS (Transactions Per Second): the number of transactions committed or rolled back per second; reflects transaction-processing capacity.

-- Check transaction-related counters
SHOW GLOBAL STATUS LIKE 'Com_commit';
SHOW GLOBAL STATUS LIKE 'Com_rollback';
SHOW GLOBAL STATUS LIKE 'Uptime';

-- Calculate TPS
SELECT
(c.VARIABLE_VALUE + r.VARIABLE_VALUE) / u.VARIABLE_VALUE as TPS
FROM
(SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Com_commit') c,
(SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Com_rollback') r,
(SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Uptime') u;

Response time analysis: the distribution of query execution times.

-- Enable the slow query log
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1; -- log queries that run longer than 1 second
SET GLOBAL log_queries_not_using_indexes = 'ON'; -- log queries that use no index

-- Check slow query statistics
SHOW GLOBAL STATUS LIKE 'Slow_queries';

-- Analyze query performance with performance_schema
-- Note: the TIMER columns are in picoseconds, so divide by 10^12 for seconds
SELECT
SCHEMA_NAME,
DIGEST_TEXT,
COUNT_STAR,
AVG_TIMER_WAIT/1000000000000 as avg_time_seconds,
MAX_TIMER_WAIT/1000000000000 as max_time_seconds,
SUM_TIMER_WAIT/1000000000000 as total_time_seconds
FROM performance_schema.events_statements_summary_by_digest
ORDER BY AVG_TIMER_WAIT DESC
LIMIT 10;

Performance Bottleneck Identification

Identifying CPU Bottlenecks

-- Find CPU-intensive queries
-- Note: SUM_CPU_TIME requires MySQL 8.0.28+; there is no AVG_CPU_TIME column,
-- so derive the average from SUM_CPU_TIME / COUNT_STAR (values are picoseconds)
SELECT
SCHEMA_NAME,
DIGEST_TEXT,
COUNT_STAR,
SUM_CPU_TIME/1000000000000 as total_cpu_seconds,
SUM_CPU_TIME/COUNT_STAR/1000000000000 as avg_cpu_seconds
FROM performance_schema.events_statements_summary_by_digest
WHERE SUM_CPU_TIME > 0
ORDER BY SUM_CPU_TIME DESC
LIMIT 10;

-- Check currently running queries
SELECT
ID,
USER,
HOST,
DB,
COMMAND,
TIME,
STATE,
INFO
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE COMMAND != 'Sleep'
ORDER BY TIME DESC;

Identifying I/O Bottlenecks

-- Find I/O-intensive queries
-- Note: there are no AVG_ROWS_* columns in this table; derive the averages
-- from the SUM_* columns and COUNT_STAR
SELECT
SCHEMA_NAME,
DIGEST_TEXT,
COUNT_STAR,
SUM_ROWS_EXAMINED,
SUM_ROWS_EXAMINED/COUNT_STAR as avg_rows_examined,
SUM_ROWS_SENT,
SUM_ROWS_SENT/COUNT_STAR as avg_rows_sent
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_ROWS_EXAMINED DESC
LIMIT 10;

-- Table-level I/O statistics (timer columns are in picoseconds)
SELECT
OBJECT_SCHEMA,
OBJECT_NAME,
COUNT_READ,
COUNT_WRITE,
SUM_TIMER_READ/1000000000000 as total_read_seconds,
SUM_TIMER_WRITE/1000000000000 as total_write_seconds
FROM performance_schema.table_io_waits_summary_by_table
ORDER BY SUM_TIMER_READ + SUM_TIMER_WRITE DESC
LIMIT 10;

Lock Wait Analysis

-- Check lock waits (MySQL 5.7; innodb_lock_waits was removed from
-- information_schema in 8.0, where sys.innodb_lock_waits replaces it)
SELECT
r.trx_id waiting_trx_id,
r.trx_mysql_thread_id waiting_thread,
r.trx_query waiting_query,
b.trx_id blocking_trx_id,
b.trx_mysql_thread_id blocking_thread,
b.trx_query blocking_query
FROM information_schema.innodb_lock_waits w
INNER JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
INNER JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id;

-- MySQL 8.0 equivalent
SELECT * FROM sys.innodb_lock_waits;

-- Inspect deadlock information
SHOW ENGINE INNODB STATUS\G

Index Design and Optimization

Index Types and Selection

B+tree index (the default): suited to range queries and sorting.

-- Create a single-column index
CREATE INDEX idx_user_email ON users(email);

-- Create a composite index
CREATE INDEX idx_order_user_time ON orders(user_id, created_at);

-- Create a unique index
CREATE UNIQUE INDEX idx_user_username ON users(username);

-- Create a prefix index (useful for long string columns)
CREATE INDEX idx_user_description ON users(description(50));

-- Measure the selectivity of candidate prefix lengths
SELECT
COUNT(DISTINCT LEFT(description, 10)) / COUNT(*) as selectivity_10,
COUNT(DISTINCT LEFT(description, 20)) / COUNT(*) as selectivity_20,
COUNT(DISTINCT LEFT(description, 30)) / COUNT(*) as selectivity_30,
COUNT(DISTINCT LEFT(description, 50)) / COUNT(*) as selectivity_50
FROM users;

Hash index: suited to equality lookups; supported by the MEMORY storage engine.

-- MEMORY table with hash indexes
-- Note: MEMORY tables do not support TEXT/BLOB columns, so the session
-- payload is stored as VARCHAR here
CREATE TABLE session_cache (
session_id VARCHAR(64) NOT NULL,
user_id INT NOT NULL,
data VARCHAR(4096),
expire_time TIMESTAMP,
PRIMARY KEY (session_id) USING HASH,
INDEX idx_user_id (user_id) USING HASH
) ENGINE=MEMORY;

Full-text index: suited to text search.

-- Create a full-text index
-- Note: for Chinese and other CJK text, add WITH PARSER ngram (MySQL 5.7.6+),
-- since the default parser splits on whitespace
CREATE FULLTEXT INDEX idx_article_content ON articles(title, content) WITH PARSER ngram;

-- Search using the full-text index
SELECT * FROM articles
WHERE MATCH(title, content) AGAINST('MySQL 性能优化' IN NATURAL LANGUAGE MODE);

-- Boolean-mode search
SELECT * FROM articles
WHERE MATCH(title, content) AGAINST('+MySQL +性能 -Oracle' IN BOOLEAN MODE);

Composite Index Design Principles

Leftmost-prefix rule: a composite index can only be used when the query's conditions start from its leftmost column.

-- Create a composite index
CREATE INDEX idx_user_age_city_salary ON employees(age, city, salary);

-- Queries that can use the index
EXPLAIN SELECT * FROM employees WHERE age = 25; -- uses the index
EXPLAIN SELECT * FROM employees WHERE age = 25 AND city = 'Beijing'; -- uses the index
EXPLAIN SELECT * FROM employees WHERE age = 25 AND city = 'Beijing' AND salary > 10000; -- uses the index

-- Queries that generally cannot use the index (no leftmost prefix)
EXPLAIN SELECT * FROM employees WHERE city = 'Beijing'; -- does not use the index
EXPLAIN SELECT * FROM employees WHERE salary > 10000; -- does not use the index
EXPLAIN SELECT * FROM employees WHERE city = 'Beijing' AND salary > 10000; -- does not use the index

Index column ordering: put the most selective columns first.

-- Measure column selectivity
SELECT
COUNT(DISTINCT age) / COUNT(*) as age_selectivity,
COUNT(DISTINCT city) / COUNT(*) as city_selectivity,
COUNT(DISTINCT department) / COUNT(*) as department_selectivity
FROM employees;

-- Adjust the index column order based on selectivity
-- If city is the most selective column, create:
CREATE INDEX idx_employee_city_age_dept ON employees(city, age, department);

Covering index: the index contains every column the query needs, so no table lookup is required.

-- Create a covering index
CREATE INDEX idx_order_cover ON orders(user_id, status, created_at, total_amount);

-- A query served entirely from the index (Extra: Using index)
EXPLAIN SELECT user_id, status, created_at, total_amount
FROM orders
WHERE user_id = 123 AND status = 'completed';

-- Inspect index definitions and cardinality
SELECT
TABLE_SCHEMA,
TABLE_NAME,
INDEX_NAME,
COLUMN_NAME,
SEQ_IN_INDEX,
CARDINALITY
FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_SCHEMA = 'your_database'
ORDER BY TABLE_NAME, INDEX_NAME, SEQ_IN_INDEX;

Index Monitoring and Maintenance

Analyzing index usage

-- Find potentially unused indexes
-- Note: unused indexes may still appear in the performance_schema table with
-- zero counters, so also treat COUNT_STAR = 0 as unused;
-- sys.schema_unused_indexes provides the same view ready-made
SELECT
t.TABLE_SCHEMA,
t.TABLE_NAME,
t.INDEX_NAME,
t.INDEX_TYPE
FROM INFORMATION_SCHEMA.STATISTICS t
LEFT JOIN performance_schema.table_io_waits_summary_by_index_usage p
ON t.TABLE_SCHEMA = p.OBJECT_SCHEMA
AND t.TABLE_NAME = p.OBJECT_NAME
AND t.INDEX_NAME = p.INDEX_NAME
WHERE t.TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
AND (p.INDEX_NAME IS NULL OR p.COUNT_STAR = 0)
AND t.INDEX_NAME != 'PRIMARY'
ORDER BY t.TABLE_SCHEMA, t.TABLE_NAME;

-- Check how often each index is used
SELECT
OBJECT_SCHEMA,
OBJECT_NAME,
INDEX_NAME,
COUNT_FETCH,
COUNT_INSERT,
COUNT_UPDATE,
COUNT_DELETE
FROM performance_schema.table_io_waits_summary_by_index_usage
WHERE OBJECT_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
ORDER BY COUNT_FETCH DESC;

Index fragmentation analysis and maintenance

-- Estimate table/index fragmentation (DATA_FREE is only a rough indicator)
SELECT
TABLE_SCHEMA,
TABLE_NAME,
ENGINE,
TABLE_ROWS,
DATA_LENGTH,
INDEX_LENGTH,
DATA_FREE,
(DATA_FREE / (DATA_LENGTH + INDEX_LENGTH)) * 100 as fragmentation_percent
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
AND DATA_FREE > 0
ORDER BY fragmentation_percent DESC;

-- Optimize the table (rebuilds the table and its indexes)
OPTIMIZE TABLE your_table_name;

-- Or rebuild via ALTER TABLE
ALTER TABLE your_table_name ENGINE=InnoDB;

Query Optimization Techniques

EXPLAIN Execution Plan Analysis

Basic EXPLAIN analysis

-- Basic EXPLAIN
EXPLAIN SELECT * FROM orders o
JOIN users u ON o.user_id = u.id
WHERE o.status = 'pending' AND u.city = 'Beijing';

-- Detailed plan with EXPLAIN FORMAT=JSON
EXPLAIN FORMAT=JSON
SELECT * FROM orders o
JOIN users u ON o.user_id = u.id
WHERE o.status = 'pending' AND u.city = 'Beijing';

-- EXPLAIN ANALYZE actually executes the query (MySQL 8.0.18+)
EXPLAIN ANALYZE
SELECT * FROM orders o
JOIN users u ON o.user_id = u.id
WHERE o.status = 'pending' AND u.city = 'Beijing';

Interpreting key execution plan fields

-- Create a demo table for the examples below
CREATE TABLE demo_orders (
id INT PRIMARY KEY AUTO_INCREMENT,
user_id INT NOT NULL,
product_id INT NOT NULL,
status VARCHAR(20) NOT NULL,
amount DECIMAL(10,2) NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
INDEX idx_user_id (user_id),
INDEX idx_status (status),
INDEX idx_created_at (created_at)
);

-- The type column, roughly from best to worst:
-- const: equality lookup on the primary key or a unique index
EXPLAIN SELECT * FROM demo_orders WHERE id = 1;

-- eq_ref: join using the primary key or a unique index
EXPLAIN SELECT * FROM demo_orders o
JOIN users u ON o.user_id = u.id WHERE o.id = 1;

-- ref: equality lookup on a non-unique index
EXPLAIN SELECT * FROM demo_orders WHERE user_id = 123;

-- range: index range scan
EXPLAIN SELECT * FROM demo_orders WHERE created_at BETWEEN '2024-01-01' AND '2024-12-31';

-- index: full index scan
EXPLAIN SELECT user_id FROM demo_orders;

-- ALL: full table scan (needs optimization)
EXPLAIN SELECT * FROM demo_orders WHERE amount > 100;

JOIN Optimization Strategies

Choosing the join algorithm

-- Nested loop join is MySQL's default algorithm; drive the large table with
-- the small one. Note: MySQL has no USE_NL hint (that is Oracle syntax);
-- use JOIN_ORDER (MySQL 8.0+) to influence which table drives the join.
SELECT /*+ JOIN_ORDER(u, o) */
o.id, o.amount, u.username
FROM orders o
JOIN users u ON o.user_id = u.id
WHERE u.city = 'Beijing';

-- Hash join for equi-joins (introduced in MySQL 8.0.18; from 8.0.20 the
-- optimizer applies it automatically where applicable)
SELECT /*+ HASH_JOIN(o, u) */
o.id, o.amount, u.username
FROM orders o
JOIN users u ON o.user_id = u.id;

-- Optimize the join order
-- Let the small result set drive the large table
EXPLAIN SELECT * FROM
(SELECT * FROM users WHERE city = 'Beijing') u
JOIN orders o ON u.id = o.user_id;

-- Force the join order with STRAIGHT_JOIN
SELECT STRAIGHT_JOIN * FROM users u
JOIN orders o ON u.id = o.user_id
WHERE u.city = 'Beijing';

Subquery optimization

-- An inefficient subquery
SELECT * FROM orders
WHERE user_id IN (
SELECT id FROM users WHERE city = 'Beijing'
);

-- Rewritten as a JOIN
SELECT o.* FROM orders o
JOIN users u ON o.user_id = u.id
WHERE u.city = 'Beijing';

-- Choosing between EXISTS and IN
-- When the subquery result set is small, prefer IN
SELECT * FROM orders o
WHERE o.user_id IN (
SELECT u.id FROM users u WHERE u.vip_level = 'gold'
);

-- When the outer result set is small, prefer EXISTS
SELECT * FROM orders o
WHERE EXISTS (
SELECT 1 FROM users u
WHERE u.id = o.user_id AND u.city = 'Beijing'
);

Pagination Query Optimization

The problem with traditional pagination

-- Inefficient deep pagination: scans and discards the first 100,000 rows
SELECT * FROM orders
ORDER BY created_at DESC
LIMIT 100000, 20;

-- Optimized with a covering-index subquery (deferred join)
SELECT o.* FROM orders o
JOIN (
SELECT id FROM orders
ORDER BY created_at DESC
LIMIT 100000, 20
) t ON o.id = t.id;

Keyset (cursor) pagination

-- Cursor pagination keyed on id
SELECT * FROM orders
WHERE id > 1000000
ORDER BY id
LIMIT 20;

-- Cursor pagination keyed on time
SELECT * FROM orders
WHERE created_at < '2024-03-15 10:00:00'
ORDER BY created_at DESC
LIMIT 20;

-- Composite cursor (handles duplicate timestamps by tie-breaking on id)
SELECT * FROM orders
WHERE (created_at < '2024-03-15 10:00:00')
OR (created_at = '2024-03-15 10:00:00' AND id < 1000000)
ORDER BY created_at DESC, id DESC
LIMIT 20;
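To carry the (created_at, id) position between requests, APIs often serialize it into an opaque cursor token. A hypothetical sketch (the field layout and Base64 encoding are illustrative choices, not something the original text prescribes):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Opaque keyset-pagination cursor: encodes the last row's (createdAt, id)
// so the next request can resume with the composite WHERE clause shown above.
public class PageCursor {
    public final String createdAt; // e.g. "2024-03-15 10:00:00"
    public final long id;

    public PageCursor(String createdAt, long id) {
        this.createdAt = createdAt;
        this.id = id;
    }

    // Serialize to a URL-safe token the client treats as opaque
    public String encode() {
        String raw = createdAt + "|" + id;
        return Base64.getUrlEncoder().withoutPadding()
                     .encodeToString(raw.getBytes(StandardCharsets.UTF_8));
    }

    public static PageCursor decode(String token) {
        String raw = new String(Base64.getUrlDecoder().decode(token),
                                StandardCharsets.UTF_8);
        int sep = raw.lastIndexOf('|');
        return new PageCursor(raw.substring(0, sep),
                              Long.parseLong(raw.substring(sep + 1)));
    }
}
```

The decoded values then bind directly into the composite-cursor query's two placeholders.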

MySQL Configuration Tuning

InnoDB Storage Engine Tuning

Buffer pool configuration

# my.cnf settings
[mysqld]
# InnoDB buffer pool size (typically 70-80% of physical memory on a dedicated server)
innodb_buffer_pool_size = 8G

# Number of buffer pool instances (improves concurrency)
innodb_buffer_pool_instances = 8

# Buffer pool warm-up across restarts
innodb_buffer_pool_dump_at_shutdown = 1
innodb_buffer_pool_load_at_startup = 1

# Percentage of pages dumped at shutdown
innodb_buffer_pool_dump_pct = 25
-- Monitor buffer pool usage
SELECT
POOL_ID,
POOL_SIZE,
FREE_BUFFERS,
DATABASE_PAGES,
OLD_DATABASE_PAGES,
MODIFIED_DATABASE_PAGES,
PENDING_DECOMPRESS,
PENDING_READS,
PENDING_FLUSH_LRU,
PENDING_FLUSH_LIST
FROM INFORMATION_SCHEMA.INNODB_BUFFER_POOL_STATS;

-- Check the buffer pool hit rate
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';

SELECT
(1 - (Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests)) * 100 as hit_rate
FROM (
SELECT
VARIABLE_VALUE as Innodb_buffer_pool_reads
FROM INFORMATION_SCHEMA.GLOBAL_STATUS
WHERE VARIABLE_NAME = 'Innodb_buffer_pool_reads'
) reads,
(
SELECT
VARIABLE_VALUE as Innodb_buffer_pool_read_requests
FROM INFORMATION_SCHEMA.GLOBAL_STATUS
WHERE VARIABLE_NAME = 'Innodb_buffer_pool_read_requests'
) requests;
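Once the two counters have been read (e.g. over JDBC), the same hit-rate arithmetic can live in application-side monitoring code. A minimal sketch (class and method names are illustrative):

```java
// Buffer pool hit rate from the two InnoDB counters, as in the SQL above:
// hit_rate = (1 - Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests) * 100
public class BufferPoolStats {

    public static double hitRatePercent(long physicalReads, long readRequests) {
        if (readRequests == 0) {
            return 100.0; // no logical reads yet; report a full hit rate
        }
        return (1.0 - (double) physicalReads / readRequests) * 100.0;
    }
}
```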

Log configuration

# Redo log settings
# Note: in MySQL 8.0.30+ these two are superseded by innodb_redo_log_capacity
innodb_log_file_size = 1G
innodb_log_files_in_group = 2
innodb_log_buffer_size = 64M

# Flush policy
innodb_flush_log_at_trx_commit = 1 # safest: flush on every commit
# innodb_flush_log_at_trx_commit = 2 # balances performance and durability
# innodb_flush_log_at_trx_commit = 0 # fastest, but may lose recent transactions

# Binary log
log_bin = mysql-bin
binlog_format = ROW
binlog_row_image = MINIMAL
# expire_logs_days is deprecated in 8.0; use binlog_expire_logs_seconds there
expire_logs_days = 7
max_binlog_size = 1G

# Slow query log
slow_query_log = 1
long_query_time = 1
log_queries_not_using_indexes = 1
log_slow_admin_statements = 1

Concurrency settings

21
# Connection settings
max_connections = 1000
max_connect_errors = 100000
connect_timeout = 10
wait_timeout = 28800
interactive_timeout = 28800

# Thread settings
thread_cache_size = 100
thread_stack = 256K

# Table cache settings
table_open_cache = 4000
table_definition_cache = 2000

# InnoDB concurrency settings
innodb_thread_concurrency = 0 # 0 means unlimited
innodb_read_io_threads = 8
innodb_write_io_threads = 8
innodb_purge_threads = 4
innodb_page_cleaners = 4

Query Cache Tuning (MySQL 5.7 and earlier)

# Query cache settings (MySQL 5.7 and earlier; the query cache was removed in 8.0)
query_cache_type = 1
query_cache_size = 256M
query_cache_limit = 2M
query_cache_min_res_unit = 2K
-- Monitor query cache effectiveness
SHOW GLOBAL STATUS LIKE 'Qcache%';

SELECT
Qcache_hits / (Qcache_hits + Com_select) * 100 as cache_hit_rate,
Qcache_free_memory / Qcache_total_blocks as avg_free_block_size,
Qcache_free_blocks / Qcache_total_blocks * 100 as fragmentation_percent
FROM (
SELECT
(SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Qcache_hits') as Qcache_hits,
(SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Com_select') as Com_select,
(SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Qcache_free_memory') as Qcache_free_memory,
(SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Qcache_total_blocks') as Qcache_total_blocks,
(SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Qcache_free_blocks') as Qcache_free_blocks
) stats;

-- Defragment / clear the query cache
FLUSH QUERY CACHE;
RESET QUERY CACHE;

Architecture Optimization Strategies

Read/Write Splitting

Primary/replica replication setup

# Primary (source) configuration
[mysqld]
server-id = 1
log-bin = mysql-bin
binlog-format = ROW
binlog-do-db = your_database

# Replica configuration
[mysqld]
server-id = 2
relay-log = mysql-relay-bin
read-only = 1
log-slave-updates = 1
-- On the primary: create a replication user
CREATE USER 'replication'@'%' IDENTIFIED BY 'strong_password';
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'%';
FLUSH PRIVILEGES;

-- Check the primary's binlog position
SHOW MASTER STATUS;

-- On the replica: configure replication
-- (MySQL 8.0.23+ prefers CHANGE REPLICATION SOURCE TO / START REPLICA)
CHANGE MASTER TO
MASTER_HOST='master_ip',
MASTER_USER='replication',
MASTER_PASSWORD='strong_password',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=154;

-- Start replication on the replica
START SLAVE;

-- Check replication status
SHOW SLAVE STATUS\G

Read/write splitting at the application layer

// Spring Boot read/write splitting configuration
@Configuration
public class DataSourceConfiguration {

@Bean
@ConfigurationProperties("spring.datasource.master")
public DataSource masterDataSource() {
return DataSourceBuilder.create().build();
}

@Bean
@ConfigurationProperties("spring.datasource.slave")
public DataSource slaveDataSource() {
return DataSourceBuilder.create().build();
}

@Bean
public DataSource routingDataSource() {
RoutingDataSource routingDataSource = new RoutingDataSource();

Map<Object, Object> dataSourceMap = new HashMap<>();
dataSourceMap.put("master", masterDataSource());
dataSourceMap.put("slave", slaveDataSource());

routingDataSource.setTargetDataSources(dataSourceMap);
routingDataSource.setDefaultTargetDataSource(masterDataSource());

return routingDataSource;
}
}

// Dynamic data source routing
public class RoutingDataSource extends AbstractRoutingDataSource {

@Override
protected Object determineCurrentLookupKey() {
return DataSourceContextHolder.getDataSourceType();
}
}

// Per-thread data source context
public class DataSourceContextHolder {

private static final ThreadLocal<String> contextHolder = new ThreadLocal<>();

public static void setDataSourceType(String dataSourceType) {
contextHolder.set(dataSourceType);
}

public static String getDataSourceType() {
return contextHolder.get();
}

public static void clearDataSourceType() {
contextHolder.remove();
}
}

// Marker annotation for read-only methods
@Target({ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
public @interface ReadOnly {
}

// AOP aspect that performs the read/write split
// Note: it must run before the transaction interceptor binds a connection,
// so give it a suitable precedence (e.g. with @Order) if needed
@Aspect
@Component
public class DataSourceAspect {

@Before("@annotation(readOnly)")
public void setReadDataSourceType(ReadOnly readOnly) {
DataSourceContextHolder.setDataSourceType("slave");
}

@Before("@annotation(org.springframework.transaction.annotation.Transactional)")
public void setWriteDataSourceType() {
DataSourceContextHolder.setDataSourceType("master");
}

@After("@annotation(readOnly) || @annotation(org.springframework.transaction.annotation.Transactional)")
public void clearDataSourceType() {
DataSourceContextHolder.clearDataSourceType();
}
}

Sharding Strategies

Horizontal table sharding

-- Shard the orders table by user ID
CREATE TABLE orders_0 (
id BIGINT PRIMARY KEY AUTO_INCREMENT,
user_id INT NOT NULL,
product_id INT NOT NULL,
amount DECIMAL(10,2) NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
INDEX idx_user_id (user_id)
) ENGINE=InnoDB;

CREATE TABLE orders_1 (
id BIGINT PRIMARY KEY AUTO_INCREMENT,
user_id INT NOT NULL,
product_id INT NOT NULL,
amount DECIMAL(10,2) NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
INDEX idx_user_id (user_id)
) ENGINE=InnoDB;

-- Routing rule for the shards
-- table_index = user_id % table_count
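The modulo routing rule can be sketched as a small helper; the table-name prefix and shard count below are illustrative, not values fixed by the text:

```java
// Routes a user_id to its physical shard table using the rule
// table_index = user_id % table_count described above.
public class ShardRouter {
    private final String tablePrefix; // e.g. "orders_"
    private final int tableCount;     // number of physical shard tables

    public ShardRouter(String tablePrefix, int tableCount) {
        if (tableCount <= 0) {
            throw new IllegalArgumentException("tableCount must be positive");
        }
        this.tablePrefix = tablePrefix;
        this.tableCount = tableCount;
    }

    // Returns the physical table name for this user's rows
    public String tableFor(long userId) {
        return tablePrefix + (userId % tableCount);
    }
}
```

Frameworks such as ShardingSphere encapsulate exactly this kind of rule, as the configuration below shows.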
// ShardingSphere database/table sharding configuration
@Configuration
public class ShardingConfiguration {

@Bean
public DataSource shardingDataSource() throws SQLException {
// Configure the sharding rule
ShardingRuleConfiguration shardingRuleConfig = new ShardingRuleConfiguration();

// Sharding rule for the orders table
TableRuleConfiguration orderTableRuleConfig = new TableRuleConfiguration("orders", "ds${0..1}.orders_${0..3}");

// Database sharding strategy
orderTableRuleConfig.setDatabaseShardingStrategyConfig(
new InlineShardingStrategyConfiguration("user_id", "ds${user_id % 2}")
);

// Table sharding strategy
orderTableRuleConfig.setTableShardingStrategyConfig(
new InlineShardingStrategyConfiguration("user_id", "orders_${user_id % 4}")
);

shardingRuleConfig.getTableRuleConfigs().add(orderTableRuleConfig);

// Configure the physical data sources
Map<String, DataSource> dataSourceMap = new HashMap<>();
dataSourceMap.put("ds0", createDataSource("db0"));
dataSourceMap.put("ds1", createDataSource("db1"));

return ShardingDataSourceFactory.createDataSource(dataSourceMap, shardingRuleConfig, new Properties());
}

private DataSource createDataSource(String database) {
HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:mysql://localhost:3306/" + database);
config.setUsername("username");
config.setPassword("password");
config.setMaximumPoolSize(20);
return new HikariDataSource(config);
}
}

Connection Pool Tuning

// Tuned HikariCP connection pool configuration
@Configuration
public class DataSourceConfig {

@Bean
@ConfigurationProperties("spring.datasource.hikari")
public HikariConfig hikariConfig() {
HikariConfig config = new HikariConfig();

// Basic connection settings
config.setJdbcUrl("jdbc:mysql://localhost:3306/database");
config.setUsername("username");
config.setPassword("password");
config.setDriverClassName("com.mysql.cj.jdbc.Driver");

// Pool sizing
config.setMaximumPoolSize(20); // maximum number of connections
config.setMinimumIdle(5); // minimum number of idle connections

// Timeout settings
config.setConnectionTimeout(30000); // 30 seconds
config.setIdleTimeout(600000); // 10 minutes
config.setMaxLifetime(1800000); // 30 minutes
config.setLeakDetectionThreshold(60000); // 1 minute

// MySQL Connector/J performance properties
config.addDataSourceProperty("cachePrepStmts", "true");
config.addDataSourceProperty("prepStmtCacheSize", "250");
config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");
config.addDataSourceProperty("useServerPrepStmts", "true");
config.addDataSourceProperty("useLocalSessionState", "true");
config.addDataSourceProperty("rewriteBatchedStatements", "true");
config.addDataSourceProperty("cacheResultSetMetadata", "true");
config.addDataSourceProperty("cacheServerConfiguration", "true");
config.addDataSourceProperty("elideSetAutoCommits", "true");
config.addDataSourceProperty("maintainTimeStats", "false");

return config;
}

@Bean
public DataSource dataSource() {
return new HikariDataSource(hikariConfig());
}
}

// Connection pool monitoring
@Component
public class ConnectionPoolMonitor {

@Autowired
private HikariDataSource dataSource;

@Scheduled(fixedRate = 60000) // run once per minute
public void monitorConnectionPool() {
HikariPoolMXBean poolBean = dataSource.getHikariPoolMXBean();

log.info("Connection Pool Status: " +
"Active={}, Idle={}, Total={}, Waiting={}",
poolBean.getActiveConnections(),
poolBean.getIdleConnections(),
poolBean.getTotalConnections(),
poolBean.getThreadsAwaitingConnection());

// Alerting logic
if (poolBean.getActiveConnections() > poolBean.getTotalConnections() * 0.8) {
log.warn("Connection pool usage is high: {}%",
(poolBean.getActiveConnections() * 100.0 / poolBean.getTotalConnections()));
}

if (poolBean.getThreadsAwaitingConnection() > 0) {
log.warn("Threads waiting for connections: {}",
poolBean.getThreadsAwaitingConnection());
}
}
}

Performance Monitoring and Diagnostics

Real-time performance monitoring

-- Create a performance summary view
-- (uses INFORMATION_SCHEMA.GLOBAL_STATUS; on MySQL 5.7+/8.0 read from
-- performance_schema.global_status instead)
CREATE VIEW performance_summary AS
SELECT
'QPS' as metric,
ROUND((
SELECT VARIABLE_VALUE
FROM INFORMATION_SCHEMA.GLOBAL_STATUS
WHERE VARIABLE_NAME = 'Questions'
) / (
SELECT VARIABLE_VALUE
FROM INFORMATION_SCHEMA.GLOBAL_STATUS
WHERE VARIABLE_NAME = 'Uptime'
), 2) as value
UNION ALL
SELECT
'TPS' as metric,
ROUND((
(SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Com_commit') +
(SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Com_rollback')
) / (
SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Uptime'
), 2) as value
UNION ALL
SELECT
'Buffer Pool Hit Rate' as metric,
ROUND((
1 - (
(SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Innodb_buffer_pool_reads') /
(SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Innodb_buffer_pool_read_requests')
)
) * 100, 2) as value;

-- View the performance summary
SELECT * FROM performance_summary;
// Custom Spring Boot Actuator health check
@Component
public class MySQLHealthIndicator implements HealthIndicator {

@Autowired
private JdbcTemplate jdbcTemplate;

@Override
public Health health() {
try {
// Verify database connectivity
jdbcTemplate.queryForObject("SELECT 1", Integer.class);

// Check the slow query counter
Long slowQueries = jdbcTemplate.queryForObject(
"SHOW GLOBAL STATUS LIKE 'Slow_queries'",
(rs, rowNum) -> Long.parseLong(rs.getString("Value"))
);

// Check connection usage
Long connections = jdbcTemplate.queryForObject(
"SHOW GLOBAL STATUS LIKE 'Threads_connected'",
(rs, rowNum) -> Long.parseLong(rs.getString("Value"))
);

Long maxConnections = jdbcTemplate.queryForObject(
"SHOW VARIABLES LIKE 'max_connections'",
(rs, rowNum) -> Long.parseLong(rs.getString("Value"))
);

Health.Builder builder = Health.up()
.withDetail("slowQueries", slowQueries)
.withDetail("connections", connections + "/" + maxConnections)
.withDetail("connectionUsage", String.format("%.2f%%",
(connections * 100.0 / maxConnections)));

// Alert conditions
if (connections > maxConnections * 0.8) {
builder.status("WARNING")
.withDetail("warning", "High connection usage");
}

if (slowQueries > 100) {
builder.status("WARNING")
.withDetail("warning", "High slow query count");
}

return builder.build();

} catch (Exception e) {
return Health.down(e).build();
}
}
}

Automated performance diagnostics

// Automated performance diagnostic service
@Service
public class MySQLPerformanceDiagnostic {

@Autowired
private JdbcTemplate jdbcTemplate;

@Scheduled(fixedRate = 300000) // run every 5 minutes
public void performDiagnostic() {
DiagnosticReport report = new DiagnosticReport();

// Check for slow queries
checkSlowQueries(report);

// Check for lock waits
checkLockWaits(report);

// Check index usage
checkIndexUsage(report);

// Check buffer pool status
checkBufferPoolStatus(report);

// Generate the report
if (report.hasIssues()) {
log.warn("Performance issues detected: {}", report.getSummary());
sendAlert(report);
}
}

private void checkSlowQueries(DiagnosticReport report) {
try {
List<Map<String, Object>> slowQueries = jdbcTemplate.queryForList(
"SELECT SCHEMA_NAME, DIGEST_TEXT, COUNT_STAR, " +
"AVG_TIMER_WAIT/1000000000000 as avg_time_seconds " +
"FROM performance_schema.events_statements_summary_by_digest " +
"WHERE AVG_TIMER_WAIT > 1000000000000 " + // longer than 1 second (timers are in picoseconds)
"ORDER BY AVG_TIMER_WAIT DESC LIMIT 10"
);

if (!slowQueries.isEmpty()) {
report.addIssue("Slow Queries",
"Found " + slowQueries.size() + " slow query patterns");
report.addDetail("slowQueries", slowQueries);
}
} catch (Exception e) {
log.error("Error checking slow queries", e);
}
}

private void checkLockWaits(DiagnosticReport report) {
try {
List<Map<String, Object>> lockWaits = jdbcTemplate.queryForList(
"SELECT COUNT(*) as lock_wait_count " +
"FROM information_schema.innodb_lock_waits" // on MySQL 8.0, query sys.innodb_lock_waits
);

// COUNT(*) comes back from the MySQL driver as a Long, not an Integer
long lockWaitCount = ((Number) lockWaits.get(0).get("lock_wait_count")).longValue();
if (lockWaitCount > 0) {
report.addIssue("Lock Waits",
"Found " + lockWaitCount + " lock waits");
}
} catch (Exception e) {
log.error("Error checking lock waits", e);
}
}

private void checkIndexUsage(DiagnosticReport report) {
try {
List<Map<String, Object>> unusedIndexes = jdbcTemplate.queryForList(
"SELECT t.TABLE_SCHEMA, t.TABLE_NAME, t.INDEX_NAME " +
"FROM INFORMATION_SCHEMA.STATISTICS t " +
"LEFT JOIN performance_schema.table_io_waits_summary_by_index_usage p " +
"ON t.TABLE_SCHEMA = p.OBJECT_SCHEMA " +
"AND t.TABLE_NAME = p.OBJECT_NAME " +
"AND t.INDEX_NAME = p.INDEX_NAME " +
"WHERE t.TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys') " +
"AND p.INDEX_NAME IS NULL " +
"AND t.INDEX_NAME != 'PRIMARY' " +
"LIMIT 10"
);

if (!unusedIndexes.isEmpty()) {
report.addIssue("Unused Indexes",
"Found " + unusedIndexes.size() + " potentially unused indexes");
report.addDetail("unusedIndexes", unusedIndexes);
}
} catch (Exception e) {
log.error("Error checking index usage", e);
}
}

private void checkBufferPoolStatus(DiagnosticReport report) {
try {
Map<String, Object> bufferPoolStats = jdbcTemplate.queryForMap(
"SELECT " +
"(1 - (Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests)) * 100 as hit_rate " +
"FROM (" +
"SELECT VARIABLE_VALUE as Innodb_buffer_pool_reads " +
"FROM INFORMATION_SCHEMA.GLOBAL_STATUS " +
"WHERE VARIABLE_NAME = 'Innodb_buffer_pool_reads'" +
") reads, (" +
"SELECT VARIABLE_VALUE as Innodb_buffer_pool_read_requests " +
"FROM INFORMATION_SCHEMA.GLOBAL_STATUS " +
"WHERE VARIABLE_NAME = 'Innodb_buffer_pool_read_requests'" +
") requests"
);

Double hitRate = (Double) bufferPoolStats.get("hit_rate");
if (hitRate < 95.0) {
report.addIssue("Low Buffer Pool Hit Rate",
String.format("Buffer pool hit rate is %.2f%%, consider increasing innodb_buffer_pool_size", hitRate));
}
} catch (Exception e) {
log.error("Error checking buffer pool status", e);
}
}

private void sendAlert(DiagnosticReport report) {
// Alert delivery hook (email, webhook, etc.)
log.error("MySQL Performance Alert: {}", report.toJson());
}

private static class DiagnosticReport {
private List<String> issues = new ArrayList<>();
private Map<String, Object> details = new HashMap<>();

public void addIssue(String category, String description) {
issues.add(category + ": " + description);
}

public void addDetail(String key, Object value) {
details.put(key, value);
}

public boolean hasIssues() {
return !issues.isEmpty();
}

public String getSummary() {
return String.join("; ", issues);
}

public String toJson() {
Map<String, Object> report = new HashMap<>();
report.put("issues", issues);
report.put("details", details);
report.put("timestamp", System.currentTimeMillis());

try {
ObjectMapper mapper = new ObjectMapper();
return mapper.writeValueAsString(report);
} catch (Exception e) {
return "Error serializing report: " + e.getMessage();
}
}
}
}

Summary and Best Practices

Core Principles of MySQL Performance Optimization

  1. Indexing is the foundation: design indexes deliberately, avoid redundant ones, and maintain them regularly
  2. Query optimization is the key: write efficient SQL and avoid full table scans
  3. Configuration tuning is the safeguard: adjust MySQL settings to match the hardware and workload
  4. Architecture is the root: apply read/write splitting, sharding, and similar strategies as the system grows
  5. Monitoring and diagnostics are the means: build a thorough monitoring system to catch and fix problems early

Implementation Steps

  1. Establish a baseline: collect current performance metrics as the reference point
  2. Identify bottlenecks: use monitoring and analysis tools to locate them
  3. Plan the optimization: design targeted measures based on the bottleneck analysis
  4. Implement incrementally: roll out changes step by step and verify each one
  5. Keep improving: maintain long-term monitoring and continuous improvement

Common Performance Problems and Solutions

  1. Slow queries: optimize the SQL and add suitable indexes
  2. Lock waits: streamline transaction logic and shorten lock hold times
  3. Too many connections: tune the connection pool and reuse connections
  4. Memory pressure: resize the buffer pool and optimize memory usage
  5. Disk I/O bottlenecks: use SSD storage and optimize data placement

MySQL performance optimization is a systematic effort that spans many dimensions. Sound index design, query optimization, configuration tuning, and architectural improvements together can dramatically improve MySQL's performance and provide stable, reliable data services for business systems.

All rights reserved. Please contact me if any content infringes your rights.