Caching Technology and Redis Applications

Overview

Caching is one of the most important tools for improving system performance, and Redis, the most widely used in-memory database, plays a key role in distributed systems. This article takes an in-depth look at Redis core topics and best practices, including its data structures, persistence mechanisms, cache design patterns, and cluster deployment.

Core Interview Questions

1. Redis Data Structures and Use Cases

Interview question: Which data structures does Redis support, and what are the typical use cases for each?

The Five Basic Data Structures

@Service
public class RedisDataStructureService {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    // 1. String - typical string use cases
    public class StringOperations {

        // Distributed lock
        public boolean acquireLock(String lockKey, String value, long expireTime) {
            Boolean success = stringRedisTemplate.opsForValue()
                .setIfAbsent(lockKey, value, Duration.ofMillis(expireTime));
            return Boolean.TRUE.equals(success);
        }

        // Counter
        public long incrementCounter(String key) {
            return stringRedisTemplate.opsForValue().increment(key);
        }

        // Rate limiter (fixed-window counter implemented with a Lua script)
        public boolean checkRateLimit(String userId, int maxRequests, int windowSeconds) {
            String key = "rate_limit:" + userId;
            String currentTime = String.valueOf(System.currentTimeMillis() / 1000);

            // The Lua script keeps the check-and-increment sequence atomic
            String script = """
                local key = KEYS[1]
                local window = tonumber(ARGV[1])
                local limit = tonumber(ARGV[2])
                local current = tonumber(ARGV[3])

                local count = redis.call('get', key)
                if count == false then
                    redis.call('setex', key, window, 1)
                    return 1
                else
                    count = tonumber(count)
                    if count < limit then
                        redis.call('incr', key)
                        return count + 1
                    else
                        return -1
                    end
                end
                """;

            DefaultRedisScript<Long> redisScript = new DefaultRedisScript<>(script, Long.class);
            Long result = stringRedisTemplate.execute(redisScript, 
                Collections.singletonList(key), 
                String.valueOf(windowSeconds), 
                String.valueOf(maxRequests), 
                currentTime);

            return result != null && result != -1;
        }

        // Distributed session storage
        public void saveSession(String sessionId, UserSession session) {
            String key = "session:" + sessionId;
            String sessionJson = JSON.toJSONString(session);
            stringRedisTemplate.opsForValue().set(key, sessionJson, Duration.ofHours(2));
        }

        public UserSession getSession(String sessionId) {
            String key = "session:" + sessionId;
            String sessionJson = stringRedisTemplate.opsForValue().get(key);
            return sessionJson != null ? JSON.parseObject(sessionJson, UserSession.class) : null;
        }
    }

    // 2. Hash - typical hash use cases
    public class HashOperations {

        // Cache user profile fields
        public void cacheUserInfo(Long userId, User user) {
            String key = "user:" + userId;
            Map<String, String> userMap = new HashMap<>();
            userMap.put("id", user.getId().toString());
            userMap.put("username", user.getUsername());
            userMap.put("email", user.getEmail());
            userMap.put("status", user.getStatus().toString());

            stringRedisTemplate.opsForHash().putAll(key, userMap);
            stringRedisTemplate.expire(key, Duration.ofHours(1));
        }

        public User getUserInfo(Long userId) {
            String key = "user:" + userId;
            Map<Object, Object> userMap = stringRedisTemplate.opsForHash().entries(key);

            if (userMap.isEmpty()) {
                return null;
            }

            User user = new User();
            user.setId(Long.parseLong((String) userMap.get("id")));
            user.setUsername((String) userMap.get("username"));
            user.setEmail((String) userMap.get("email"));
            user.setStatus(Integer.parseInt((String) userMap.get("status")));

            return user;
        }

        // Shopping cart
        public void addToCart(Long userId, Long productId, Integer quantity) {
            String key = "cart:" + userId;
            stringRedisTemplate.opsForHash().put(key, productId.toString(), quantity.toString());
            stringRedisTemplate.expire(key, Duration.ofDays(7));
        }

        public Map<Long, Integer> getCart(Long userId) {
            String key = "cart:" + userId;
            Map<Object, Object> cartMap = stringRedisTemplate.opsForHash().entries(key);

            return cartMap.entrySet().stream()
                .collect(Collectors.toMap(
                    entry -> Long.parseLong((String) entry.getKey()),
                    entry -> Integer.parseInt((String) entry.getValue())
                ));
        }
    }

    // 3. List - typical list use cases
    public class ListOperations {

        // Simple message queue
        public void pushMessage(String queue, String message) {
            stringRedisTemplate.opsForList().leftPush(queue, message);
        }

        public String popMessage(String queue, long timeout) {
            return stringRedisTemplate.opsForList().rightPop(queue, Duration.ofSeconds(timeout));
        }

        // Latest user activity feed
        public void addUserActivity(Long userId, UserActivity activity) {
            String key = "user_activity:" + userId;
            String activityJson = JSON.toJSONString(activity);

            stringRedisTemplate.opsForList().leftPush(key, activityJson);

            // Keep only the latest 100 entries
            stringRedisTemplate.opsForList().trim(key, 0, 99);
            stringRedisTemplate.expire(key, Duration.ofDays(30));
        }

        public List<UserActivity> getUserActivities(Long userId, int count) {
            String key = "user_activity:" + userId;
            List<String> activities = stringRedisTemplate.opsForList().range(key, 0, count - 1);

            return activities != null ? activities.stream()
                .map(json -> JSON.parseObject(json, UserActivity.class))
                .collect(Collectors.toList()) : Collections.emptyList();
        }

        // Follower timelines (push / fan-out-on-write model)
        public void pushToFollowersTimeline(Long userId, Post post) {
            List<Long> followers = getFollowers(userId);
            String postJson = JSON.toJSONString(post);

            for (Long followerId : followers) {
                String timelineKey = "timeline:" + followerId;
                stringRedisTemplate.opsForList().leftPush(timelineKey, postJson);
                stringRedisTemplate.opsForList().trim(timelineKey, 0, 999); // keep the latest 1000 posts
            }
        }

        private List<Long> getFollowers(Long userId) {
            // Load the follower list from the database or a cache
            return Collections.emptyList();
        }
    }

    // 4. Set - typical set use cases
    public class SetOperations {

        // User tag system
        public void addUserTags(Long userId, String... tags) {
            String key = "user_tags:" + userId;
            stringRedisTemplate.opsForSet().add(key, tags);
            stringRedisTemplate.expire(key, Duration.ofDays(30));
        }

        public Set<String> getUserTags(Long userId) {
            String key = "user_tags:" + userId;
            return stringRedisTemplate.opsForSet().members(key);
        }

        // Mutual friends
        public Set<String> getCommonFriends(Long userId1, Long userId2) {
            String key1 = "user_friends:" + userId1;
            String key2 = "user_friends:" + userId2;
            return stringRedisTemplate.opsForSet().intersect(key1, key2);
        }

        // Daily check-in
        public boolean checkIn(Long userId, String date) {
            String key = "checkin:" + date;
            // SADD returns the number of members actually added (Long), not a Boolean
            Long added = stringRedisTemplate.opsForSet().add(key, userId.toString());
            stringRedisTemplate.expire(key, Duration.ofDays(1));
            return added != null && added > 0;
        }

        public boolean hasCheckedIn(Long userId, String date) {
            String key = "checkin:" + date;
            return Boolean.TRUE.equals(stringRedisTemplate.opsForSet().isMember(key, userId.toString()));
        }

        public long getCheckinCount(String date) {
            String key = "checkin:" + date;
            return stringRedisTemplate.opsForSet().size(key);
        }

        // Lottery draw (random distinct members)
        public Set<String> drawLottery(String lotteryId, int count) {
            String key = "lottery_pool:" + lotteryId;
            return stringRedisTemplate.opsForSet().distinctRandomMembers(key, count);
        }
    }

    // 5. ZSet (sorted set) use cases
    // Named SortedSetOps to avoid shadowing Spring's ZSetOperations interface,
    // which is referenced below via ZSetOperations.TypedTuple
    public class SortedSetOps {

        // Leaderboard
        public void updateScore(String leaderboard, String player, double score) {
            stringRedisTemplate.opsForZSet().add(leaderboard, player, score);
        }

        public Set<String> getTopPlayers(String leaderboard, int count) {
            return stringRedisTemplate.opsForZSet().reverseRange(leaderboard, 0, count - 1);
        }

        public Set<ZSetOperations.TypedTuple<String>> getTopPlayersWithScores(String leaderboard, int count) {
            return stringRedisTemplate.opsForZSet().reverseRangeWithScores(leaderboard, 0, count - 1);
        }

        public Long getPlayerRank(String leaderboard, String player) {
            return stringRedisTemplate.opsForZSet().reverseRank(leaderboard, player);
        }

        // Delayed task queue
        public void addDelayTask(DelayTask task) {
            String key = "delay_tasks";
            long executeTime = System.currentTimeMillis() + task.getDelayMillis();
            String taskJson = JSON.toJSONString(task);

            stringRedisTemplate.opsForZSet().add(key, taskJson, executeTime);
        }

        public List<DelayTask> getExpiredTasks() {
            String key = "delay_tasks";
            long currentTime = System.currentTimeMillis();

            Set<String> expiredTasks = stringRedisTemplate.opsForZSet()
                .rangeByScore(key, 0, currentTime);

            if (expiredTasks != null && !expiredTasks.isEmpty()) {
                // Remove the expired tasks (fetch-then-remove is not atomic; use a Lua script in production)
                stringRedisTemplate.opsForZSet().removeRangeByScore(key, 0, currentTime);

                return expiredTasks.stream()
                    .map(json -> JSON.parseObject(json, DelayTask.class))
                    .collect(Collectors.toList());
            }

            return Collections.emptyList();
        }

        // Trending searches
        public void incrementSearchCount(String keyword) {
            String key = "hot_search";
            stringRedisTemplate.opsForZSet().incrementScore(key, keyword, 1);
        }

        public List<String> getHotSearchKeywords(int count) {
            String key = "hot_search";
            Set<String> keywords = stringRedisTemplate.opsForZSet()
                .reverseRange(key, 0, count - 1);
            return keywords != null ? new ArrayList<>(keywords) : Collections.emptyList();
        }
    }
}

2. Redis Persistence Mechanisms

Interview question: What are the differences between RDB and AOF persistence in Redis? How do you choose between them?
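
Before the monitoring code below, this sketch shows the redis.conf directives that control both mechanisms. It is a minimal illustration for a standalone instance; the save points and fsync policy are placeholder values to be tuned per workload.

# RDB: snapshot when at least N changes occurred within M seconds
save 900 1
save 300 10
save 60 10000
rdbcompression yes
dbfilename dump.rdb

# AOF: append-only log of every write command
appendonly yes
appendfilename "appendonly.aof"
# fsync policy: always | everysec | no (everysec is the usual durability/throughput trade-off)
appendfsync everysec
# rewrite the AOF automatically once it doubles in size and exceeds 64mb
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb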

Persistence Configuration and Monitoring

@Configuration
public class RedisPersistenceConfig {

    // Standalone Redis connection factory
    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
        RedisStandaloneConfiguration config = new RedisStandaloneConfiguration();
        config.setHostName("localhost");
        config.setPort(6379);
        config.setPassword("password");
        config.setDatabase(0);

        LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
            .poolConfig(getConnectionPoolConfig())
            .commandTimeout(Duration.ofSeconds(30))
            .shutdownTimeout(Duration.ofSeconds(20))
            .build();

        return new LettuceConnectionFactory(config, clientConfig);
    }

    private GenericObjectPoolConfig getConnectionPoolConfig() {
        GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
        poolConfig.setMaxTotal(20);        // maximum total connections
        poolConfig.setMaxIdle(10);         // maximum idle connections
        poolConfig.setMinIdle(5);          // minimum idle connections
        poolConfig.setMaxWaitMillis(3000); // maximum wait for a connection (ms)
        poolConfig.setTestOnBorrow(true);  // validate connections when borrowed
        poolConfig.setTestOnReturn(true);  // validate connections when returned
        poolConfig.setTestWhileIdle(true); // validate idle connections
        return poolConfig;
    }
}

@Component
public class RedisPersistenceMonitor {

    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    // Monitor RDB persistence
    @Scheduled(fixedRate = 300000) // check every 5 minutes
    public void monitorRDBPersistence() {
        try {
            Properties info = stringRedisTemplate.getConnectionFactory()
                .getConnection().info("persistence");

            String lastSave = info.getProperty("rdb_last_save_time");
            String rdbChangesSinceLastSave = info.getProperty("rdb_changes_since_last_save");

            long lastSaveTime = Long.parseLong(lastSave) * 1000; // seconds -> milliseconds
            long timeSinceLastSave = System.currentTimeMillis() - lastSaveTime;

            // Alert if the last snapshot is older than the expected save interval
            if (timeSinceLastSave > Duration.ofHours(1).toMillis()) {
                System.err.println("RDB persistence alert: more than 1 hour since the last save");
                System.err.println("Last save time: " + new Date(lastSaveTime));
                System.err.println("Changes since last save: " + rdbChangesSinceLastSave);
            }

        } catch (Exception e) {
            System.err.println("RDB monitoring error: " + e.getMessage());
        }
    }

    // Monitor AOF persistence
    @Scheduled(fixedRate = 300000)
    public void monitorAOFPersistence() {
        try {
            Properties info = stringRedisTemplate.getConnectionFactory()
                .getConnection().info("persistence");

            String aofEnabled = info.getProperty("aof_enabled");
            String aofLastRewriteTime = info.getProperty("aof_last_rewrite_time_sec");
            String aofCurrentSize = info.getProperty("aof_current_size");
            String aofBaseSize = info.getProperty("aof_base_size");

            if ("1".equals(aofEnabled)) {
                long currentSize = Long.parseLong(aofCurrentSize);
                long baseSize = Long.parseLong(aofBaseSize);

                // Check how much the AOF file has grown since the last rewrite
                if (currentSize > baseSize * 2) {
                    System.err.println("AOF growth alert: current size exceeds twice the base size");
                    System.err.println("Current size: " + currentSize + " bytes");
                    System.err.println("Base size: " + baseSize + " bytes");

                    // Consider triggering an AOF rewrite
                    triggerAOFRewrite();
                }
            }

        } catch (Exception e) {
            System.err.println("AOF monitoring error: " + e.getMessage());
        }
    }

    // Manually trigger an RDB background save
    public void manualRDBSave() {
        try {
            stringRedisTemplate.getConnectionFactory().getConnection().bgSave();
            System.out.println("Triggered RDB background save (BGSAVE)");
        } catch (Exception e) {
            System.err.println("Manual RDB save failed: " + e.getMessage());
        }
    }

    // Manually trigger an AOF rewrite
    public void triggerAOFRewrite() {
        try {
            stringRedisTemplate.getConnectionFactory().getConnection().bgReWriteAof();
            System.out.println("Triggered AOF rewrite (BGREWRITEAOF)");
        } catch (Exception e) {
            System.err.println("AOF rewrite failed: " + e.getMessage());
        }
    }

    // Read memory usage metrics
    public Map<String, Object> getMemoryInfo() {
        Map<String, Object> memoryInfo = new HashMap<>();

        try {
            Properties info = stringRedisTemplate.getConnectionFactory()
                .getConnection().info("memory");

            memoryInfo.put("used_memory", info.getProperty("used_memory"));
            memoryInfo.put("used_memory_human", info.getProperty("used_memory_human"));
            memoryInfo.put("used_memory_rss", info.getProperty("used_memory_rss"));
            memoryInfo.put("used_memory_peak", info.getProperty("used_memory_peak"));
            memoryInfo.put("mem_fragmentation_ratio", info.getProperty("mem_fragmentation_ratio"));

        } catch (Exception e) {
            System.err.println("获取内存信息失败: " + e.getMessage());
        }

        return memoryInfo;
    }
}

3. Redis Cache Design Patterns

Interview question: What are the common cache design patterns? How do you handle cache penetration, cache breakdown (hot key expiry), and cache avalanche?

Cache Design Pattern Implementations

@Service
public class CachePatternService {

    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    @Autowired
    private UserRepository userRepository;

    // 1. Cache-Aside pattern (look-aside caching)
    public User getUserCacheAside(Long userId) {
        String key = "user:" + userId;

        // Check the cache first
        String cachedUser = stringRedisTemplate.opsForValue().get(key);
        if (cachedUser != null) {
            if ("NULL".equals(cachedUser)) {
                return null; // cached null marker (penetration protection)
            }
            return JSON.parseObject(cachedUser, User.class);
        }

        // Cache miss: load from the database
        User user = userRepository.findById(userId);

        if (user != null) {
            // Write the result to the cache
            stringRedisTemplate.opsForValue().set(key, JSON.toJSONString(user), Duration.ofHours(1));
        } else {
            // Cache a short-lived null marker to prevent cache penetration
            stringRedisTemplate.opsForValue().set(key, "NULL", Duration.ofMinutes(5));
        }

        return user;
    }

    public void updateUserCacheAside(User user) {
        // Update the database first
        userRepository.save(user);

        // Then delete (rather than update) the cache entry
        String key = "user:" + user.getId();
        stringRedisTemplate.delete(key);
    }

    // 2. Read-Through pattern
    public User getUserReadThrough(Long userId) {
        return getCacheManager().getCache("users").get(userId, () -> {
            // Loader invoked only on a cache miss
            return userRepository.findById(userId);
        });
    }

    // 3. Write-Through pattern
    public void updateUserWriteThrough(User user) {
        getCacheManager().getCache("users").put(user.getId(), user);
        // With a true write-through cache store, the cache layer propagates this write to the database
    }

    // 4. Write-Behind pattern (asynchronous write-back)
    public void updateUserWriteBehind(User user) {
        // Update the cache first
        String key = "user:" + user.getId();
        stringRedisTemplate.opsForValue().set(key, JSON.toJSONString(user), Duration.ofHours(1));

        // Write back to the database asynchronously
        CompletableFuture.runAsync(() -> {
            try {
                Thread.sleep(100); // small delay to allow batching
                userRepository.save(user);
            } catch (Exception e) {
                System.err.println("异步写入数据库失败: " + e.getMessage());
            }
        });
    }

    // Cache penetration defense: Bloom filter
    private final BloomFilter<String> bloomFilter = BloomFilter.create(
        Funnels.stringFunnel(Charset.defaultCharset()), 1000000, 0.01);

    public User getUserWithBloomFilter(Long userId) {
        String userIdStr = userId.toString();

        // Check the Bloom filter first
        if (!bloomFilter.mightContain(userIdStr)) {
            // Definitely not present: skip the cache and the database
            return null;
        }

        // Possibly present: fall through to the cache and the database
        return getUserCacheAside(userId);
    }

    // Cache breakdown (hot key expiry) defense: distributed lock
    public User getUserWithDistributedLock(Long userId) {
        String key = "user:" + userId;
        String lockKey = "lock:user:" + userId;

        // Check the cache first
        String cachedUser = stringRedisTemplate.opsForValue().get(key);
        if (cachedUser != null) {
            return JSON.parseObject(cachedUser, User.class);
        }

        // Cache miss: try to acquire a distributed lock
        String lockValue = UUID.randomUUID().toString();
        Boolean lockAcquired = stringRedisTemplate.opsForValue()
            .setIfAbsent(lockKey, lockValue, Duration.ofSeconds(10));

        if (Boolean.TRUE.equals(lockAcquired)) {
            try {
                // Lock acquired: check the cache again (double-check)
                cachedUser = stringRedisTemplate.opsForValue().get(key);
                if (cachedUser != null) {
                    return JSON.parseObject(cachedUser, User.class);
                }

                // Load from the database and refresh the cache
                User user = userRepository.findById(userId);
                if (user != null) {
                    stringRedisTemplate.opsForValue().set(key, JSON.toJSONString(user), Duration.ofHours(1));
                }

                return user;

            } finally {
                // Release the lock (a Lua script keeps the check-and-delete atomic)
                releaseLock(lockKey, lockValue);
            }
        } else {
            // Failed to acquire the lock: wait briefly and retry (a production version should bound the retries)
            try {
                Thread.sleep(100);
                return getUserWithDistributedLock(userId);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return null;
            }
        }
    }

    private void releaseLock(String lockKey, String lockValue) {
        String script = """
            if redis.call('get', KEYS[1]) == ARGV[1] then
                return redis.call('del', KEYS[1])
            else
                return 0
            end
            """;

        DefaultRedisScript<Long> redisScript = new DefaultRedisScript<>(script, Long.class);
        stringRedisTemplate.execute(redisScript, Collections.singletonList(lockKey), lockValue);
    }

    // Cache avalanche defense: randomized (jittered) expiration times
    public void cacheUsersWithRandomExpiration(List<User> users) {
        Random random = new Random();

        for (User user : users) {
            String key = "user:" + user.getId();
            // Base TTL of 1 hour plus a random 0-30 minute jitter
            int randomMinutes = random.nextInt(30);
            Duration expiration = Duration.ofHours(1).plusMinutes(randomMinutes);

            stringRedisTemplate.opsForValue().set(key, JSON.toJSONString(user), expiration);
        }
    }

    // Cache warm-up
    @PostConstruct
    public void warmUpCache() {
        // Preload hot data at application startup
        List<User> hotUsers = userRepository.findHotUsers();
        cacheUsersWithRandomExpiration(hotUsers);

        // Register the user IDs in the Bloom filter
        for (User user : hotUsers) {
            bloomFilter.put(user.getId().toString());
        }
    }

    private CacheManager getCacheManager() {
        // Placeholder: return a configured CacheManager (e.g. a RedisCacheManager); null is used only to keep the example short
        return null;
    }
}

// Multi-level cache implementation
@Service
public class MultiLevelCacheService {

    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    // Local in-process cache (L1), backed by Caffeine
    private final Cache<String, Object> localCache = Caffeine.newBuilder()
        .maximumSize(1000)
        .expireAfterWrite(Duration.ofMinutes(10))
        .build();

    // Redis cache (L2)
    public <T> T getWithMultiLevelCache(String key, Class<T> clazz, Supplier<T> dataLoader) {
        // 1. Check the local cache first
        Object cached = localCache.getIfPresent(key);
        if (cached != null) {
            return clazz.cast(cached);
        }

        // 2. Check the Redis cache
        String redisCached = stringRedisTemplate.opsForValue().get(key);
        if (redisCached != null) {
            T value = JSON.parseObject(redisCached, clazz);
            // Backfill the local cache
            localCache.put(key, value);
            return value;
        }

        // 3. Load from the database
        T value = dataLoader.get();
        if (value != null) {
            // Write to the Redis cache
            stringRedisTemplate.opsForValue().set(key, JSON.toJSONString(value), Duration.ofHours(1));
            // Write to the local cache
            localCache.put(key, value);
        }

        return value;
    }

    public void evictCache(String key) {
        // Evict from both cache levels
        localCache.invalidate(key);
        stringRedisTemplate.delete(key);
    }

    // Cache consistency: publish/subscribe invalidation
    @EventListener
    public void handleCacheEvictionEvent(CacheEvictionEvent event) {
        // Publish a cache invalidation message
        stringRedisTemplate.convertAndSend("cache:eviction", event.getKey());
    }

    // Note: there is no standard @RedisMessageListener annotation; in practice this method is
    // registered as a MessageListener on a RedisMessageListenerContainer subscribed to "cache:eviction"
    public void onCacheEvictionMessage(String key) {
        // On receiving an invalidation message, evict the entry from the local cache
        localCache.invalidate(key);
    }
}

4. Redis Cluster and High Availability

Interview question: How does Redis master-replica replication work? What are the differences between Sentinel and Cluster?
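
For context, the sketch below shows the server-side configuration that the Java client code in this section assumes: a replica following a master via replicaof, and three Sentinels monitoring that master. Host addresses, quorum, and timeouts are placeholder values.

# redis.conf on each replica: follow the master (initial full sync, then a continuous command stream)
replicaof 192.168.1.100 6379
masterauth password
replica-read-only yes

# sentinel.conf on each Sentinel node: monitor the master named "mymaster";
# a quorum of 2 Sentinels must agree before it is considered objectively down
sentinel monitor mymaster 192.168.1.100 6379 2
sentinel auth-pass mymaster password
sentinel down-after-milliseconds mymaster 30000
sentinel failover-timeout mymaster 180000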

Redis Cluster and Sentinel Configuration

@Configuration
public class RedisClusterConfig {

    // Redis Cluster connection configuration
    @Bean
    @Primary
    public LettuceConnectionFactory redisClusterConnectionFactory() {
        List<String> clusterNodes = Arrays.asList(
            "192.168.1.101:7000",
            "192.168.1.102:7001", 
            "192.168.1.103:7002",
            "192.168.1.104:7003",
            "192.168.1.105:7004",
            "192.168.1.106:7005"
        );

        RedisClusterConfiguration clusterConfig = new RedisClusterConfiguration(clusterNodes);
        clusterConfig.setMaxRedirects(3); // maximum number of MOVED/ASK redirects to follow

        LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
            .poolConfig(getConnectionPoolConfig())
            .commandTimeout(Duration.ofSeconds(30))
            .build();

        return new LettuceConnectionFactory(clusterConfig, clientConfig);
    }

    // Redis Sentinel connection configuration
    @Bean
    public LettuceConnectionFactory redisSentinelConnectionFactory() {
        RedisSentinelConfiguration sentinelConfig = new RedisSentinelConfiguration()
            .master("mymaster")
            .sentinel("192.168.1.101", 26379)
            .sentinel("192.168.1.102", 26379)
            .sentinel("192.168.1.103", 26379);

        sentinelConfig.setPassword("password");

        LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
            .poolConfig(getConnectionPoolConfig())
            .commandTimeout(Duration.ofSeconds(30))
            .build();

        return new LettuceConnectionFactory(sentinelConfig, clientConfig);
    }

    private GenericObjectPoolConfig getConnectionPoolConfig() {
        GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
        poolConfig.setMaxTotal(20);
        poolConfig.setMaxIdle(10);
        poolConfig.setMinIdle(5);
        poolConfig.setMaxWaitMillis(3000);
        return poolConfig;
    }

    @Bean
    public RedisTemplate<String, Object> redisTemplate(LettuceConnectionFactory connectionFactory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(connectionFactory);

        // Configure serializers
        Jackson2JsonRedisSerializer<Object> jackson2JsonRedisSerializer = 
            new Jackson2JsonRedisSerializer<>(Object.class);
        ObjectMapper om = new ObjectMapper();
        om.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        om.activateDefaultTyping(LaissezFaireSubTypeValidator.instance, ObjectMapper.DefaultTyping.NON_FINAL);
        jackson2JsonRedisSerializer.setObjectMapper(om);

        StringRedisSerializer stringRedisSerializer = new StringRedisSerializer();

        // Use String serialization for keys and hash keys
        template.setKeySerializer(stringRedisSerializer);
        template.setHashKeySerializer(stringRedisSerializer);

        // Use Jackson JSON serialization for values and hash values
        template.setValueSerializer(jackson2JsonRedisSerializer);
        template.setHashValueSerializer(jackson2JsonRedisSerializer);

        template.afterPropertiesSet();
        return template;
    }
}

// Cluster monitoring service
@Service
public class RedisClusterMonitorService {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    // Monitor cluster health
    @Scheduled(fixedRate = 60000) // check every minute
    public void monitorClusterHealth() {
        try {
            RedisClusterConnection clusterConnection = 
                redisTemplate.getConnectionFactory().getClusterConnection();

            // CLUSTER INFO: clusterGetClusterInfo() returns a ClusterInfo object rather than raw Properties
            ClusterInfo clusterInfo = clusterConnection.clusterGetClusterInfo();
            String clusterState = clusterInfo.getState();
            Long clusterSlotsAssigned = clusterInfo.getSlotsAssigned();
            Long clusterSlotsOk = clusterInfo.getSlotsOk();
            Long clusterKnownNodes = clusterInfo.getKnownNodes();

            System.out.println("Cluster state: " + clusterState);
            System.out.println("Slots assigned: " + clusterSlotsAssigned);
            System.out.println("Slots OK: " + clusterSlotsOk);
            System.out.println("Known nodes: " + clusterKnownNodes);

            if (!"ok".equals(clusterState)) {
                sendAlert("Redis cluster state abnormal", "cluster_state: " + clusterState);
            }

            // Check each node's status
            Iterable<RedisClusterNode> nodes = clusterConnection.clusterGetNodes();
            for (RedisClusterNode node : nodes) {
                if (!node.isConnected()) {
                    sendAlert("Redis node connection problem", 
                        "node: " + node.getHost() + ":" + node.getPort());
                }

                if (node.isMarkedAsFail()) {
                    sendAlert("Redis node marked as failing", 
                        "node: " + node.getHost() + ":" + node.getPort());
                }
            }

        } catch (Exception e) {
            System.err.println("Cluster monitoring error: " + e.getMessage());
            sendAlert("Redis cluster monitoring error", e.getMessage());
        }
    }

    // Collect cluster-level performance metrics
    public Map<String, Object> getClusterMetrics() {
        Map<String, Object> metrics = new HashMap<>();

        try {
            RedisClusterConnection clusterConnection = 
                redisTemplate.getConnectionFactory().getClusterConnection();

            Iterable<RedisClusterNode> nodes = clusterConnection.clusterGetNodes();

            long totalMemoryUsed = 0;
            long totalCommands = 0;
            int connectedNodes = 0;

            for (RedisClusterNode node : nodes) {
                if (node.isConnected()) {
                    connectedNodes++;

                    // Per-node memory info
                    Properties nodeInfo = clusterConnection.info(node, "memory");
                    String usedMemory = nodeInfo.getProperty("used_memory");
                    if (usedMemory != null) {
                        totalMemoryUsed += Long.parseLong(usedMemory);
                    }

                    Properties statsInfo = clusterConnection.info(node, "stats");
                    String totalCommandsProcessed = statsInfo.getProperty("total_commands_processed");
                    if (totalCommandsProcessed != null) {
                        totalCommands += Long.parseLong(totalCommandsProcessed);
                    }
                }
            }

            metrics.put("connectedNodes", connectedNodes);
            metrics.put("totalMemoryUsed", totalMemoryUsed);
            metrics.put("totalCommands", totalCommands);

        } catch (Exception e) {
            System.err.println("获取集群指标失败: " + e.getMessage());
        }

        return metrics;
    }

    // Failover test (manually promote a replica)
    public void testFailover(String masterNodeId) {
        try {
            RedisClusterConnection clusterConnection = 
                redisTemplate.getConnectionFactory().getClusterConnection();

            // Manually trigger a failover on the given node
            clusterConnection.clusterFailover(RedisClusterNode.of(masterNodeId));

            System.out.println("Failover triggered for node: " + masterNodeId);

        } catch (Exception e) {
            System.err.println("故障转移失败: " + e.getMessage());
        }
    }

    private void sendAlert(String title, String message) {
        // Send an alert notification (hook into email/IM/on-call tooling in production)
        System.err.println("ALERT: " + title + " - " + message);
    }
}

5. Redis Performance Optimization

Interview question: How do you optimize Redis performance? Which metrics should be monitored?

Performance Optimization in Practice

@Service
public class RedisPerformanceOptimizationService {

    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    // 1. Batch operations
    public void batchOperations() {
        // Use pipelining to send many commands in a single round trip
        List<Object> results = stringRedisTemplate.executePipelined(new RedisCallback<Object>() {
            @Override
            public Object doInRedis(RedisConnection connection) throws DataAccessException {
                for (int i = 0; i < 1000; i++) {
                    connection.set(("key" + i).getBytes(), ("value" + i).getBytes());
                }
                return null;
            }
        });

        System.out.println("批量操作完成,结果数量: " + results.size());
    }

    // 2. Choose the right data structure
    public void optimizeDataStructure() {
        // Anti-pattern: serializing structured data into a single String value
        String userKey = "user:1001";
        Map<String, String> userData = new HashMap<>();
        userData.put("name", "张三");
        userData.put("age", "25");
        userData.put("email", "zhangsan@example.com");
        String userJson = JSON.toJSONString(userData);
        stringRedisTemplate.opsForValue().set(userKey, userJson);

        // Better: store structured data in a Hash
        String userHashKey = "user:hash:1001";
        stringRedisTemplate.opsForHash().putAll(userHashKey, userData);

        // Advantages of Hash:
        // 1. Better memory efficiency (small hashes use a compact encoding)
        // 2. Individual fields can be updated in place
        // 3. Single fields can be read with HGET instead of deserializing the whole value
    }

    // 3. Handling big keys
    public void handleLargeKeys() {
        String largeListKey = "large:list";

        // Delete a large List in batches to avoid blocking the server with a single DEL
        while (true) {
            List<String> batch = stringRedisTemplate.opsForList().range(largeListKey, 0, 99);
            if (batch == null || batch.isEmpty()) {
                break;
            }

            stringRedisTemplate.opsForList().trim(largeListKey, 100, -1);

            // Sleep briefly between batches to avoid hogging the server
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }

        // Finally delete the now-empty key
        stringRedisTemplate.delete(largeListKey);
    }

    // 4. Memory optimization
    public void memoryOptimization() {
        // Use TTLs so temporary data is cleaned up automatically
        String temporaryKey = "temp:data:12345";
        stringRedisTemplate.opsForValue().set(temporaryKey, "temporary data", Duration.ofMinutes(30));

        // Compress large values before storing them
        String originalData = "a very long string value...";
        byte[] compressedData = compress(originalData);
        stringRedisTemplate.opsForValue().set("compressed:key", Base64.getEncoder().encodeToString(compressedData));

        // Use bitmaps to save memory (check-ins, online status, and other per-user boolean flags)
        String signInKey = "signin:2024:01";
        long userId = 12345;
        stringRedisTemplate.opsForValue().setBit(signInKey, userId, true); // mark the user as checked in
    }

    // 5. Connection pool and operation latency monitoring
    @EventListener
    public void monitorConnectionPool(ApplicationReadyEvent event) {
        // Time a Redis call with Micrometer; rising latency often points at pool exhaustion
        Timer.Sample sample = Timer.start();
        try {
            // Execute the Redis operation being timed
            stringRedisTemplate.opsForValue().get("test:key");
        } finally {
            sample.stop(Timer.builder("redis.operation.duration")
                .tag("operation", "get")
                .register(Metrics.globalRegistry));
        }
    }

    // 6. Slow query monitoring
    @Scheduled(fixedRate = 300000) // check every 5 minutes
    public void monitorSlowQueries() {
        try {
            RedisConnection connection = stringRedisTemplate.getConnectionFactory().getConnection();

            // Fetch the slow query log
            List<Object> slowLogs = connection.slowLogGet(10); // last 10 slow log entries

            for (Object slowLog : slowLogs) {
                if (slowLog instanceof List) {
                    List<Object> logEntry = (List<Object>) slowLog;
                    if (logEntry.size() >= 4) {
                        Long id = (Long) logEntry.get(0);
                        Long timestamp = (Long) logEntry.get(1);
                        Long duration = (Long) logEntry.get(2); // microseconds

                        if (duration > 100000) { // queries slower than 100 ms
                            System.err.println("Slow query detected: id=" + id + 
                                             ", duration=" + (duration / 1000) + "ms" +
                                             ", time=" + new Date(timestamp * 1000));
                        }
                    }
                }
            }

        } catch (Exception e) {
            System.err.println("慢查询监控异常: " + e.getMessage());
        }
    }

    // 7. Keyspace analysis
    public Map<String, Long> analyzeKeyspace() {
        Map<String, Long> keyspaceInfo = new HashMap<>();

        try {
            RedisConnection connection = stringRedisTemplate.getConnectionFactory().getConnection();
            Properties info = connection.info("keyspace");

            for (String key : info.stringPropertyNames()) {
                if (key.startsWith("db")) {
                    String value = info.getProperty(key);
                    // Parse a value like "keys=1000,expires=100,avg_ttl=3600000"
                    String[] parts = value.split(",");
                    for (String part : parts) {
                        if (part.startsWith("keys=")) {
                            long keyCount = Long.parseLong(part.substring(5));
                            keyspaceInfo.put(key + "_keys", keyCount);
                        } else if (part.startsWith("expires=")) {
                            long expireCount = Long.parseLong(part.substring(8));
                            keyspaceInfo.put(key + "_expires", expireCount);
                        }
                    }
                }
            }

        } catch (Exception e) {
            System.err.println("键空间分析异常: " + e.getMessage());
        }

        return keyspaceInfo;
    }

    private byte[] compress(String data) {
        try {
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            GZIPOutputStream gzos = new GZIPOutputStream(baos);
            gzos.write(data.getBytes(StandardCharsets.UTF_8));
            gzos.close();
            return baos.toByteArray();
        } catch (Exception e) {
            throw new RuntimeException("数据压缩失败", e);
        }
    }

    private String decompress(byte[] compressedData) {
        try {
            ByteArrayInputStream bais = new ByteArrayInputStream(compressedData);
            GZIPInputStream gzis = new GZIPInputStream(bais);
            ByteArrayOutputStream baos = new ByteArrayOutputStream();

            byte[] buffer = new byte[1024];
            int len;
            while ((len = gzis.read(buffer)) != -1) {
                baos.write(buffer, 0, len);
            }

            return baos.toString(StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new RuntimeException("数据解压失败", e);
        }
    }
}
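
The slow log read by item 6 above is produced server-side and controlled by two redis.conf directives; a minimal sketch with illustrative thresholds:

# Log commands that take longer than 10,000 microseconds (10 ms); 0 logs every command, a negative value disables logging
slowlog-log-slower-than 10000
# Keep the most recent 128 entries in the in-memory slow log
slowlog-max-len 128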

Frequently Asked Interview Questions

1. Conceptual Questions

Q: Why is Redis so fast?

A: The main reasons for Redis's high performance:

  1. In-memory storage: data lives in RAM, so the hot path avoids disk I/O
  2. Single-threaded command execution: no thread switching or lock contention while processing commands
  3. I/O multiplexing: an event loop built on epoll (or similar) handles many connections efficiently
  4. Purpose-built data structures: each type uses encodings optimized for its access pattern
  5. Simple protocol: RESP is lightweight and cheap to parse

Q: What memory eviction policies does Redis provide?

A: Redis eviction policies (set via maxmemory-policy; a config sketch follows the list):

  • noeviction: evict nothing; return an error on writes once the memory limit is reached
  • allkeys-lru: evict the least recently used keys across all keys
  • volatile-lru: evict the least recently used keys among those with an expiration set
  • allkeys-random: evict random keys across all keys
  • volatile-random: evict random keys among those with an expiration set
  • volatile-ttl: evict the keys closest to expiring
  • allkeys-lfu / volatile-lfu (Redis 4.0+): evict the least frequently used keys
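
A minimal redis.conf sketch for the settings above; the 2gb cap and the allkeys-lru choice are placeholder values for illustration:

# Cap memory usage and choose what to evict once the cap is reached
maxmemory 2gb
maxmemory-policy allkeys-lru
# Keys sampled per eviction decision (higher approximates true LRU/LFU more closely, at more CPU cost)
maxmemory-samples 5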

2. Hands-On Design Questions

Q: How would you design a distributed rate limiter?

Key points:

  1. Token bucket: implement a token bucket on top of Redis
  2. Sliding window: use a ZSet as a sliding time window (see the sketch after this list)
  3. Distributed consistency: use Lua scripts to keep check-and-update atomic
  4. Multi-dimensional limits: throttle by IP, by user, and by endpoint
  5. Dynamic tuning: adjust the limits based on current system load
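
A minimal sketch of the sliding-window approach from point 2, using the same StringRedisTemplate style as the rest of this article. The key prefix and parameters are illustrative, and a production version would move the three Redis calls into a Lua script (point 3) so they execute atomically:

@Service
public class SlidingWindowRateLimiter {

    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    // Allow at most maxRequests requests per caller within the last windowSeconds seconds
    public boolean tryAcquire(String callerId, int maxRequests, int windowSeconds) {
        String key = "sliding_limit:" + callerId;
        long now = System.currentTimeMillis();
        long windowStart = now - windowSeconds * 1000L;

        // 1. Drop entries that have fallen out of the window
        stringRedisTemplate.opsForZSet().removeRangeByScore(key, 0, windowStart);

        // 2. Count the requests still inside the window
        Long count = stringRedisTemplate.opsForZSet().zCard(key);
        if (count != null && count >= maxRequests) {
            return false;
        }

        // 3. Record this request; the member must be unique, so append a random suffix
        String member = now + "-" + UUID.randomUUID();
        stringRedisTemplate.opsForZSet().add(key, member, now);
        stringRedisTemplate.expire(key, Duration.ofSeconds(windowSeconds));
        return true;
    }
}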

Q: How do you implement Redis backup and recovery?

Key points:

  1. RDB backups: generate RDB snapshot files on a schedule
  2. AOF backups: log every write command as it happens
  3. Master-replica replication: stream changes to replicas in real time
  4. Off-site backups: copy backups to another data center to protect against site failure
  5. Incremental recovery: combine RDB snapshots with the AOF to replay the most recent writes

Summary

Key Redis interview topics:

  1. Data structures: characteristics and use cases of the five basic types
  2. Persistence: how RDB and AOF work, their trade-offs, and how to configure them
  3. Cache design: caching patterns, consistency, and the penetration / breakdown / avalanche problems
  4. Cluster architecture: master-replica replication, Sentinel, and Cluster, and how they differ
  5. Performance tuning: memory optimization, slow query analysis, connection pool configuration

Ground these topics in real project experience: be ready to describe the concrete scenarios where you used Redis and the problems it solved.
