I. Redis Cluster
A Redis cluster should contain at least three master nodes. For high availability each master needs a replica, so a minimal cluster requires six Redis servers. For a pseudo-distributed setup you can run six Redis instances on a single virtual machine and give them the ports 7001-7006 (any ports, as long as they do not clash with other programs).
1. Features:
(1) All Redis nodes are interconnected (PING-PONG mechanism) and use a binary protocol internally to optimize transfer speed and bandwidth.
(2) A node is only marked as failed (fail) after more than half of the nodes in the cluster detect the failure.
(3) Clients connect directly to Redis nodes without an intermediate proxy layer. A client does not need to connect to every node in the cluster; connecting to any one reachable node is enough.
(4) redis-cluster maps all physical nodes onto the slots [0-16383], and the cluster maintains the node <-> slot <-> value mapping. Redis Cluster has 16384 built-in hash slots. When a key-value pair is placed into the cluster, Redis first runs the CRC16 algorithm over the key and takes the result modulo 16384, so every key maps to a hash slot numbered 0-16383, and Redis distributes the hash slots roughly evenly across the nodes (see the slot-calculation sketch below).
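The slot calculation is easy to reproduce in code. Below is a minimal Java sketch (not taken from the Redis source; class and method names are illustrative) that computes a key's slot with a bit-by-bit CRC16 (XModem variant, polynomial 0x1021, the checksum Redis Cluster uses) and takes the result modulo 16384. Real Redis additionally honors hash tags written as {...} inside the key, which this sketch ignores.
import java.nio.charset.StandardCharsets;
public class SlotCalculator {
    // bit-by-bit CRC16 (init 0x0000, polynomial 0x1021, no reflection)
    static int crc16(byte[] bytes) {
        int crc = 0;
        for (byte b : bytes) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF;
            }
        }
        return crc;
    }
    public static void main(String[] args) {
        String key = "hello1";
        // slot number in 0-16383, as Redis Cluster would assign it
        int slot = crc16(key.getBytes(StandardCharsets.UTF_8)) % 16384;
        System.out.println("key '" + key + "' maps to slot " + slot);
    }
}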
2. Setting up the cluster environment
(1) The cluster is assembled with a Ruby script, so a Ruby runtime is required:
yum install ruby
yum install rubygems
(2) Upload and install the gem that the Ruby script depends on.
[root@localhost ~]# gem install redis-3.0.0.gem
Successfully installed redis-3.0.0
1 gem installed
Installing ri documentation for redis-3.0.0...
Installing RDoc documentation for redis-3.0.0...
[root@localhost ~]#
[root@localhost ~]# cd redis-3.0.7/src
[root@localhost src]# ll *.rb
-rwxrwxr-x. 1 root root 48141 Apr 1 2015 redis-trib.rb
3. Build steps
Six Redis servers are needed. For a pseudo-distributed setup, six Redis instances are run on the different ports 7001-7006.
(1) Create six Redis instances, each running on its own port. Edit redis.conf for every instance and uncomment cluster-enabled yes (a minimal config sketch follows).
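A hedged sketch of the settings one instance needs, shown here for the 7001 instance; the other five differ only in the port and file names, and the paths are assumptions that must match your own layout.
# redis.conf for the 7001 instance (assumed values)
port 7001
cluster-enabled yes
cluster-config-file nodes-7001.conf
cluster-node-timeout 15000
appendonly yes
daemonize yes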
(2) Start every Redis instance (a start script sketch is shown after the shutdown script below).
(3) Build the cluster with the Ruby script:
./redis-trib.rb create --replicas 1 10.0.135.131:7001 10.0.135.131:7002 10.0.135.131:7003 10.0.135.131:7004 10.0.135.131:7005 10.0.135.131:7006
(4) Create a script for shutting down the cluster.
Note: the IP address used below when creating the cluster must not be 127.0.0.1; use the machine's real IP (the public IP on a public machine, the private IP on a private one). If you create the cluster with 127.0.0.1, then when you connect from Java code, even if you put the public or private IP into your code, the cluster object will internally use 127.0.0.1 for every node after it discovers the topology, and the connection fails because there is no Redis running on the machine where the Java code runs.
[root@localhost redis-cluster]# vim shutdow-all.sh
redis01/redis-cli -p 7001 shutdown
redis01/redis-cli -p 7002 shutdown
redis01/redis-cli -p 7003 shutdown
redis01/redis-cli -p 7004 shutdown
redis01/redis-cli -p 7005 shutdown
redis01/redis-cli -p 7006 shutdown
[root@localhost redis-cluster]# chmod u+x shutdow-all.sh
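For step (2), a matching start script is convenient. The sketch below is an assumption about the directory layout: it expects instance directories redis01 through redis06, each holding its own redis-server binary and redis.conf (ports 7001-7006); adapt it to however the instances are actually laid out.
#!/bin/sh
# hypothetical start-all.sh: starts the six instances one after another
for d in redis01 redis02 redis03 redis04 redis05 redis06; do
  (cd "$d" && ./redis-server redis.conf)
done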
[root@localhost redis-cluster]# ./redis-trib.rb create --replicas 1 10.0.135.131:7001 10.0.135.131:7002 10.0.135.131:7003 10.0.135.131:7004 10.0.135.131:7005 10.0.135.131:7006
>>> Creating cluster
Connecting to node 10.0.135.131:7001: OK
Connecting to node 10.0.135.131:7002: OK
Connecting to node 10.0.135.131:7003: OK
Connecting to node 10.0.135.131:7004: OK
Connecting to node 10.0.135.131:7005: OK
Connecting to node 10.0.135.131:7006: OK
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
10.0.135.131:7001
10.0.135.131:7002
10.0.135.131:7003
Adding replica 10.0.135.131:7004 to 10.0.135.131:7001
Adding replica 10.0.135.131:7005 to 10.0.135.131:7002
Adding replica 10.0.135.131:7006 to 10.0.135.131:7003
M: 2e48ae301e9c32b04a7d4d92e15e98e78de8c1f3 10.0.135.131:7001 slots:0-5460 (5461 slots) master
M: 8cd93a9a943b4ef851af6a03edd699a6061ace01 10.0.135.131:7002 slots:5461-10922 (5462 slots) master
M: 2935007902d83f20b1253d7f43dae32aab9744e6 10.0.135.131:7003 slots:10923-16383 (5461 slots) master
S: 74f9d9706f848471583929fc8bbde3c8e99e211b 10.0.135.131:7004 replicates 2e48ae301e9c32b04a7d4d92e15e98e78de8c1f3
S: 42cc9e25ebb19dda92591364c1df4b3a518b795b 10.0.135.131:7005 replicates 8cd93a9a943b4ef851af6a03edd699a6061ace01
S: 8b1b11d509d29659c2831e7a9f6469c060dfcd39 10.0.135.131:7006 replicates 2935007902d83f20b1253d7f43dae32aab9744e6
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.....
>>> Performing Cluster Check (using node 10.0.135.131:7001)
M: 2e48ae301e9c32b04a7d4d92e15e98e78de8c1f3 10.0.135.131:7001 slots:0-5460 (5461 slots) master
M: 8cd93a9a943b4ef851af6a03edd699a6061ace01 10.0.135.131:7002 slots:5461-10922 (5462 slots) master
M: 2935007902d83f20b1253d7f43dae32aab9744e6 10.0.135.131:7003 slots:10923-16383 (5461 slots) master
M: 74f9d9706f848471583929fc8bbde3c8e99e211b 10.0.135.131:7004 slots: (0 slots) master replicates 2e48ae301e9c32b04a7d4d92e15e98e78de8c1f3
M: 42cc9e25ebb19dda92591364c1df4b3a518b795b 10.0.135.131:7005 slots: (0 slots) master replicates 8cd93a9a943b4ef851af6a03edd699a6061ace01
M: 8b1b11d509d29659c2831e7a9f6469c060dfcd39 10.0.135.131:7006 slots: (0 slots) master replicates 2935007902d83f20b1253d7f43dae32aab9744e6
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@localhost redis-cluster]#
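Once redis-trib reports that all 16384 slots are covered, the cluster state can be inspected from any node with the standard CLUSTER commands (output omitted here):
redis01/redis-cli -p 7001 cluster info
redis01/redis-cli -p 7001 cluster nodes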
4. Using the cluster
Connect to the cluster with redis-cli:
[root@localhost redis-cluster]# redis01/redis-cli -p 7002 -c
-c: connects in cluster mode, so the client follows the MOVED/ASK redirects between nodes. A sketch of connecting to the cluster from Java code follows.
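From application code the cluster is usually reached through a cluster-aware client. The sketch below uses the Jedis client's JedisCluster and is only an assumed setup (the Jedis dependency is not shown anywhere in this article); the IP and port mirror the nodes created above, and listing a single reachable node is enough because the client discovers the rest of the topology.
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;
import java.util.HashSet;
import java.util.Set;
public class ClusterClientDemo {
    public static void main(String[] args) throws Exception {
        Set<HostAndPort> nodes = new HashSet<>();
        // one seed node; the cluster must have been created with real IPs, not 127.0.0.1
        nodes.add(new HostAndPort("10.0.135.131", 7001));
        try (JedisCluster cluster = new JedisCluster(nodes)) {
            cluster.set("hello1", "world1");
            System.out.println(cluster.get("hello1"));
        }
    }
}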
II. Using Redis with Spring Boot
1. Configure the Redis-related settings
RedisConfig.java
package nz.study.config;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.boot.autoconfigure.data.redis.RedisProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisOperations;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;
@Configuration
@ConditionalOnClass(RedisOperations.class)
@EnableConfigurationProperties(RedisProperties.class)
public class RedisConfig {
@Bean
@ConditionalOnMissingBean(name = "redisTemplate")
public RedisTemplate<Object,Object> redisTemplate(RedisConnectionFactory redisConnectionFactory){
RedisTemplate<Object,Object> template = new RedisTemplate<>();
//use Jackson (fasterxml) to serialize stored objects as JSON
Jackson2JsonRedisSerializer<Object> serializer = new Jackson2JsonRedisSerializer<>(Object.class);
//serializer for plain values
template.setValueSerializer(serializer);
//serializer for hash values
template.setHashValueSerializer(serializer);
//serializer for keys
template.setKeySerializer(new StringRedisSerializer());
//serializer for hash keys
template.setHashKeySerializer(new StringRedisSerializer());
//connection factory that backs this template
template.setConnectionFactory(redisConnectionFactory);
return template;
}
@Bean
@ConditionalOnMissingBean(StringRedisTemplate.class)
public StringRedisTemplate stringRedisTemplate(RedisConnectionFactory redisConnectionFactory){
StringRedisTemplate template = new StringRedisTemplate();
template.setConnectionFactory(redisConnectionFactory);
return template;
}
}
2. Create a utility class
RedisUtil.java
package nz.study.utils;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Component;
import org.springframework.util.CollectionUtils;
import javax.annotation.Resource;
import java.util.Map;
import java.util.concurrent.TimeUnit;
@Component
public class RedisUtil {
@Resource
private RedisTemplate<String,Object> redisTemplate;
public void setRedisTemplate(RedisTemplate<String,Object> redisTemplate){
this.redisTemplate = redisTemplate;
}
/**
* Set an expiration time on a key.
* @param key the key in redis
* @param time time to live, in seconds
* @return true if no exception occurred while setting the expiration
*/
public boolean expire(String key,long time){
try{
if(time>0){
redisTemplate.expire(key,time, TimeUnit.SECONDS);
}
return true;
}catch (Exception e){
e.printStackTrace();
return false;
}
}
/**
* Get the remaining time to live of a key.
* @param key the key to inspect
* @return remaining time to live, in seconds
*/
public long getExpire(String key){
return redisTemplate.getExpire(key,TimeUnit.SECONDS);
}
/**
* Check whether a key exists.
* @param key the key to check
* @return true if it exists, false otherwise
*/
public boolean hasKey(String key){
return redisTemplate.hasKey(key);
}
/**
* Delete the given keys from the cache.
* @param keys varargs; zero, one or more keys to delete
*/
public void delete(String ... keys){
if(keys != null && keys.length>0){
if(keys.length == 1){
redisTemplate.delete(keys[0]);
}else{
redisTemplate.delete(CollectionUtils.arrayToList(keys));
}
}
}
//////////////// String operations ////////////////
public Object get(String key){
return key == null ? null : redisTemplate.opsForValue().get(key);
}
/**
* Store a String-type value in redis.
* @param key the key
* @param value the value to store
* @return true on success, false on failure
*/
public boolean set(String key,Object value){
try{
redisTemplate.opsForValue().set(key,value);
return true;
}catch (Exception e){
e.printStackTrace();
return false;
}
}
/**
* Store a String-type value in redis with an expiration time.
* @param key the key
* @param value the value to store
* @param time time to live in seconds; if not positive, the value is stored without expiration
* @return true on success, false on failure
*/
public boolean set(String key,Object value,long time){
try{
if (time > 0){
redisTemplate.opsForValue().set(key, value, time,TimeUnit.SECONDS);
}else{
set(key, value);
}
return true;
}catch (Exception e){
e.printStackTrace();
return false;
}
}
//////////////// Hash operations ////////////////
/**
* Store a map of field-value pairs into a hash.
* @param key the key of the hash
* @param map the field-value pairs to store
* @return true on success, false on failure
*/
public boolean hmset(String key, Map<String,Object> map){
try{
redisTemplate.opsForHash().putAll(key,map);
return true;
}catch (Exception e){
e.printStackTrace();
return false;
}
}
/**
* Set a single field of the hash stored at key to the given value.
* @param key the key of the hash
* @param field the field inside the hash
* @param value the value to set for that field
* @return true on success, false on failure
*/
public boolean hset(String key,String field,Object value){
try{
redisTemplate.opsForHash().put(key,field,value);
return true;
}catch (Exception e){
e.printStackTrace();
return false;
}
}
/**
* Set a single field of the hash stored at key and refresh the key's expiration time.
* @param key the key of the hash
* @param field the field inside the hash
* @param value the value to set for that field
* @param time time to live in seconds, applied to the whole hash key
* @return true on success, false on failure
*/
public boolean hset(String key,String field,Object value,long time){
try{
redisTemplate.opsForHash().put(key,field,value);
//note: redis has no per-field TTL, so the expiration applies to the whole hash key
if (time > 0){
expire(key, time);
}
return true;
}catch (Exception e){
e.printStackTrace();
return false;
}
}
/**
* Store a map of field-value pairs into a hash and set an expiration time.
* @param key the key of the hash
* @param map the field-value pairs to store
* @param time time to live in seconds, applied to the whole hash key
* @return true on success, false on failure
*/
public boolean hmset(String key,Map<String,Object> map,long time){
try {
redisTemplate.opsForHash().putAll(key,map);
if (time > 0 ){
expire(key, time);
}
return true;
}catch (Exception e){
e.printStackTrace();
return false;
}
}
/**
* Get a single field from a hash.
* @param key the key of the hash
* @param field the field to read
* @return the value of that field in the hash stored at key
*/
public Object hget(String key,String field){
return redisTemplate.opsForHash().get(key,field);
}
/**
* Get the whole hash stored at key as a map.
* @param key the key of the hash
* @return all field-value pairs of that hash
*/
public Map<Object,Object> hmget(String key){
return redisTemplate.opsForHash().entries(key);
}
}
3. Configure the yml file
application.yml
spring:
  redis:
    host: 127.0.0.1
    port: 6379
    database: 0
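The configuration above points at a single standalone instance. To point the same utility class at the cluster from section I instead, Spring Boot also accepts a list of cluster nodes; a minimal sketch, with the IPs and ports assumed from the cluster built above:
spring:
  redis:
    cluster:
      nodes:
        - 10.0.135.131:7001
        - 10.0.135.131:7002
        - 10.0.135.131:7003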
4. Test classes
Entity class User.java
package nz.study.pojo;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
@Data
@NoArgsConstructor
@AllArgsConstructor
public class User {
private int uid;
private String username;
private String password;
private int age;
}
TestUtil.java
package nz.study;
import nz.study.pojo.User;
import nz.study.utils.RedisUtil;
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import javax.annotation.Resource;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
@SpringBootTest
public class TestUtil {
@Resource
private RedisUtil util;
@Test
public void testString() throws InterruptedException{
System.out.println(util.set("hello1","world1"));
System.out.println(util.get("hello1"));
System.out.println(util.set("1", "1", 5));
System.out.println(util.get("1"));
Thread.sleep(5000);
System.out.println(util.get("1"));
}
@Test
public void testStringObject() throws InterruptedException{
User user = new User();
user.setUid(9999);
user.setUsername("xiaoming");
user.setPassword("123456789");
user.setAge(20);
System.out.println(util.set("xiaohong", user, 5));
System.out.println(util.get("xiaohong"));
Thread.sleep(5000);
System.out.println(util.get("xiaohong"));
}
@Test
public void testStringList(){
List<User> list = new ArrayList<>();
for (int i = 0; i < 10; i++) {
list.add(new User(i,"name"+ i,"pass"+i,18 + i));
}
System.out.println(util.set("list", list));
System.out.println(util.get("list"));
}
@Test
public void testHash(){
Map<String,Object> users = new HashMap<>();
users.put("wukong","sunxingzhe");
User tangtang = new User(1000,"tangtang","888888",20);
users.put("tangtang",tangtang);
users.put("bajie","zhuwuneng");
System.out.println(util.hmset("xiyouji", users));
System.out.println(util.hmget("xiyouji"));
System.out.println(util.hget("xiyouji", "tangtang"));
}
}
Today is day 55 of my online studies at Qianfeng. Keep going!!!