Deployment
- docker pull elasticsearch:7.9.3
- docker network create esnetwork
- firewall-cmd --add-port=9300/tcp    (open port 9300 through the firewall instead of disabling the firewall entirely)
- docker run -d --name es --net esnetwork -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms256m -Xmx256m" elasticsearch:7.9.3
- Configure CORS
- Enter the container and edit config/elasticsearch.yml (relative to /usr/share/elasticsearch):
- http.cors.enabled: true
- http.cors.allow-origin: "*"
- docker pull kibana:7.9.3
- docker run -it -d -e ELASTICSEARCH_URL=http://172.18.72.153:9200 -p 5601:5601 --name kibana kibana:7.9.3
- http://172.18.72.153:5601/app/home#/tutorial_directory/sampleData
- Chinese localization: add i18n.locale: "zh-CN" to kibana.yml
- docker pull logstash:7.9.3
- docker pull mobz/elasticsearch-head:5
- docker run -d -p 9100:9100 mobz/elasticsearch-head:5
- Big pitfall here: the plugin's bundled JS also has to be modified!
- The container image ships with neither vi nor yum.
- apt-get install vim -y
- The network was unusable even after switching mirror sources, so I gave up on running it in a container.
- Set it up directly on Linux instead:
- wget https://nodejs.org/dist/v6.10.2/node-v6.10.2-linux-x64.tar.xz
- tar xvf node-v6.10.2-linux-x64.tar.xz
- vim /etc/profile
- export NODE_HOME=/usr/local/node
- export PATH=$PATH:$NODE_HOME/bin
- source /etc/profile
- node -v
- Download elasticsearch-head-5.0.0.tar.gz (grab the latest release from GitHub)
- In /opt: tar -zxvf elasticsearch-head-5.0.0.tar.gz
- cd into the elasticsearch-head directory and run npm install
- npm run start
- Ctrl + Z (suspend the foreground process)
- bg (resume it in the background)
- disown (detach it from the shell)
- IK analyzer (keep the plugin version in sync with the ES version)
- docker exec -it <container> /bin/bash
- mkdir /usr/share/elasticsearch/plugins/ik
- yum -y install wget
- wget https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.9.3/elasticsearch-analysis-ik-7.9.3.zip
- unzip elasticsearch-analysis-ik-7.9.3.zip -d /usr/share/elasticsearch/plugins/ik
- Verify: elasticsearch-plugin list
- Restart the container
Basic syntax
1. Create an index (analogous to creating a database)
PUT /lib    (lib is the index name)
{
"settings":{
"index": {
"number_of_shards":3, //分片数量
"number_of_replicas":1 //复制备份数量
}
}
}
PUT /lib2    (creates an index with the default settings)
2. View an index
GET /lib/_settings    (view the index settings)
3. Add a document (roughly: create a table and insert a row)
put /lib/user/1
{
"name" : "kakatadage",
"age" : 33,
"about": "I like football!",
"interests" : ["girls","football"]
}
A PUT adds the document; user is the type and 1 is the document id. The id can also be omitted, in which case ES generates one automatically, but then PUT cannot be used; the document has to be added with POST instead. A sketch of the Java equivalent is shown below.
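A rough sketch of the same idea from the Java high-level REST client (the client bean and imports are the ones from the Java section later in these notes); omitting .id(...) lets Elasticsearch generate the id, which mirrors the POST case:
// Sketch: index a document without an explicit id; Elasticsearch auto-generates one
IndexRequest request = new IndexRequest("lib")
        .source("{\"name\":\"kakatadage\",\"age\":33}", XContentType.JSON);
IndexResponse response = restHighLevelClient.index(request, RequestOptions.DEFAULT);
System.out.println(response.getId()); // the auto-generated id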
4. Get the document (read the data back)
GET /lib/user/1
5. Update a document
PUT /lib/user/1    (this replaces the whole document)
{
"name" : "kakatadage",
"age" : 30,
"about": "I like money!",
"interests" : ["music","money"]
}
POST /lib/user/1/_update    (this is a field-level partial update)
{
"doc": {
"age" :32
}
}
6. Delete a document or an index
DELETE /lib/user/1
DELETE /lib2
_search examples
POST /lib/peaple/
{
"name":"旺财2"
}
GET /_search
GET /lib2/_search
GET /lib/_search
GET /lib,lib2/_search
GET /lib,lib2/user/_search
GET /lib/_search
GET /lib/_search?size=1
GET /lib/_search?size=1&from=1
PUT /lib2/class/1
{
"name":"hahah33"
}
GET /lib/_search?q=name:123
GET /lib/_search?q=name:33
GET /_search?q=like
GET /lib2/_search?q=name:33
PUT /customer2/doc/1?pretty
{
"name": "huangzhihcneg"
}
GET /customer2,lib,lib2/_search
Specific query types
term: find documents whose field contains the given term
GET /lib/user/_search
{
"query": {
"term": {
"name": "33"
}
}
}
terms: find documents whose field contains any of several given terms
GET /lib/user/_search
{
"query": {
"terms": {
"name":["33","123"]
}
}
}
match query: aware of the analyzer; the query text is analyzed first and the resulting terms are then matched against the field
keyword fields are not analyzed
In other words: the query string is tokenized, and the resulting tokens are what is actually looked up
GET /lib/user/_search
{
"query": {
"match": {
"name": "123 33"
}
}
}
match_all: match all documents
GET /lib/user/_search
{
"query": {
"match_all": {}
}
}
multi_match: search the same query string across multiple fields
GET /lib/user/_search
{
"query": {
"multi_match": {
"query": "33",
"fields": ["name","age"]
}
}
}
filter queries
Filters do not compute relevance scores and can be cached, so a filter is faster than a scoring query.
bool filter query
GET /lib4/items/_search
{
"query": {
"bool": {
"filter": [
{ "term":{"price":40}}
]
}
}
}
Filter for documents whose price equals 40 (a Java-client sketch follows).
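A rough Java-client equivalent of this filter, as a sketch using the RestHighLevelClient bean configured in the Java section below:
// Sketch: bool query with a non-scoring term filter, equivalent to the DSL above
SearchSourceBuilder source = new SearchSourceBuilder()
        .query(QueryBuilders.boolQuery()
                .filter(QueryBuilders.termQuery("price", 40)));
SearchRequest request = new SearchRequest("lib4").source(source);
SearchResponse response = restHighLevelClient.search(request, RequestOptions.DEFAULT);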
Format:
{"bool" : {"must":[],"should":[],"must_not":[] } }
must: conditions that must be satisfied (like AND)
should: conditions that may or may not be satisfied (like OR)
must_not: conditions that must not be satisfied (like NOT)
GET /lib/items/_search    (price is 25 or itemID is id1004, and price is not 30)
{
"query": {
"bool": {
"should": [
{"term": {
"price": 25
}},
{"term": {
"itemID":"id1004"
}}
],
"must_not": [
{"term": {
"price":30
}
}
]
}
}
}
Range filter
gt:>
lt:<
gte:>=
lte:<=
GET /lib4/items/_search    # range selects documents whose value falls within a range
{
"query": {
"bool":{
"filter": {
"range": {
"price": {
"gt": 25,
"lte": 50
}
}
}
}
}
}
Aggregations
sum aggregation
GET /lib4/items/_search
{
"size": 20, //表示查询多条文档,聚合只需总和结果,输出文档可以设置为0条
"aggs": { //aggs表示是聚合查询
"price_of_sum": { //结果集名字随便起
"sum": {
"field":"price"
}
}
}
}
min
GET /lib4/items/_search
{
"size": 20,
"aggs": {
"mippp": {
"min": {
"field": "price"
}
}
}
}
avg
GET /lib4/items/_search
{
"size": 20,
"aggs": {
"avgppp": {
"avg": {
"field": "price"
}
}
}
}
cardinality
GET /lib4/items/_search
{
"size": 0,
"aggs": {
"price_of_cardi": {
"cardinality": { //其实相当于该字段互不相同的值有多少类,输出的是种类数
"field": "price"
}
}
}
}
terms aggregation
GET /lib4/items/_search
{
"size": 0,
"aggs": {
"price_of_by": {
"terms": {
"field": "price" //按价格来分组
}
}
}
}
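The same aggregations can be issued from the Java client; a minimal sketch (using the client configured in the Java section below), shown here for the sum aggregation:
// Sketch: sum aggregation on "price"; size(0) suppresses the hits, only the aggregation is returned
SearchSourceBuilder source = new SearchSourceBuilder()
        .size(0)
        .aggregation(AggregationBuilders.sum("price_of_sum").field("price"));
SearchResponse response = restHighLevelClient.search(new SearchRequest("lib4").source(source), RequestOptions.DEFAULT);
ParsedSum sum = response.getAggregations().get("price_of_sum");
System.out.println(sum.getValue());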
Highlighting
GET /hzc/_search    (search the name field for 诸葛亮 and highlight all matches)
{
"query": {
"match": {
"name": "诸葛亮"
}
},
"highlight": {
"fields": {
"name":{}
}
}
}
Custom highlight tags
GET /hzc/_search
{
"query": {
"match": {
"name": "诸葛亮"
}
},
"highlight": {
"pre_tags": "<p class='key' style='color:red'>",
"post_tags": "<p>",
"fields": {
"name":{}
}
}
}
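From the Java client the same highlighting can be requested with a HighlightBuilder; a sketch (field names follow the DSL example above):
// Sketch: highlight the "name" field with custom tags, roughly matching the DSL above
HighlightBuilder highlight = new HighlightBuilder()
        .field("name")
        .preTags("<p class='key' style='color:red'>")
        .postTags("</p>");
SearchSourceBuilder source = new SearchSourceBuilder()
        .query(QueryBuilders.matchQuery("name", "诸葛亮"))
        .highlighter(highlight);
SearchResponse response = restHighLevelClient.search(new SearchRequest("hzc").source(source), RequestOptions.DEFAULT);
for (SearchHit hit : response.getHits()) {
    HighlightField field = hit.getHighlightFields().get("name");
    if (field != null) {
        System.out.println(field.fragments()[0].string()); // the highlighted fragment
    }
}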
Inspecting cluster information
GET _cat/indices
A bool query accepts the following clauses:
must: the document must match for it to be included
must_not: the document must not match for it to be included
should: optional; a matching should clause increases the _score but otherwise has no effect (mainly used to tune relevance)
filter: must match, but in non-scoring filter mode; these clauses contribute nothing to the score and only include or exclude documents
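A sketch of how the four clauses combine in the Java client (the field names here are illustrative only, not from the original notes):
// Sketch: bool query combining the four clause types described above
BoolQueryBuilder bool = QueryBuilders.boolQuery()
        .must(QueryBuilders.matchQuery("name", "诸葛亮"))    // must match, contributes to the score
        .should(QueryBuilders.termQuery("price", 25))         // optional, raises the score when it matches
        .mustNot(QueryBuilders.termQuery("price", 30))        // must not match
        .filter(QueryBuilders.rangeQuery("price").lte(50));   // must match, not scored
SearchResponse response = restHighLevelClient.search(
        new SearchRequest("lib").source(new SearchSourceBuilder().query(bool)), RequestOptions.DEFAULT);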
IK analyzer notes
Built-in (standard) analyzer
POST _analyze
{
"analyzer": "standard",
"text": "他和我们全部是中国人"
}
Chinese analysis (ik_smart)
POST _analyze
{
"analyzer": "ik_smart",
"text": "他和我们全部是中国人"
}
POST _analyze    (finest-grained segmentation: ik_max_word)
{
"analyzer": "ik_max_word",
"text": "中华人民共和国国歌"
}
POST _analyze    (阿凡达 has already been added here as a custom word)
{
"analyzer": "ik_smart",
"text": "中国人阿凡达"
}
One problem: this Chinese segmentation follows a fixed dictionary, much like double-clicking inside a sentence in a Word document automatically selects a token.
Many new internet slang terms and coined words are not in that dictionary, so a custom extension dictionary has to be configured to support them.
- Start an nginx instance
- docker run -d --name nginx -p 80:80 nginx
- Configure the custom dictionary
- Enter the nginx container and create fenci.txt under /usr/share/nginx/html/ so the custom words in it can be fetched over HTTP
- If the nginx image has no vi, installing it is painfully slow; it is much better to mount the directories you will need to edit as volumes when starting the container, so the files can be edited from the host
- Really do it that way; installing inside the container is slow and you have to switch mirror sources first, which is a hassle
- Served this way the Chinese content also came back garbled; after a lot of effort I could not fix it and did not want to waste more time, so I switched approaches
- Start nginx again, this time with data volumes
- First copy the nginx configuration files out of the container onto the host
- Then create three directories on the host to mount:
- mkdir -p /mydata/nginx/html
- mkdir -p /mydata/nginx/logs
- mkdir -p /mydata/nginx/conf
- docker container cp nginx:/etc/nginx /mydata/nginx/conf/
docker run -p 80:80 --name nginx \
-v /mydata/nginx/html:/usr/share/nginx/html \
-v /mydata/nginx/logs:/var/log/nginx \
-v /mydata/nginx/conf/:/etc/nginx \
-d nginx
- Create fenci.txt under /mydata/nginx/html/, i.e. place the custom-dictionary file under nginx's web root so that ES can fetch it and read the custom words when analyzing
- vi fenci.txt and add the custom word, e.g. 阿凡达
- Inside the elasticsearch container, edit the IK analyzer config file: vim IKAnalyzer.cfg.xml (under plugins/ik/config/)
- <entry key="remote_ext_dict">http://172.18.72.153/fenci.txt</entry>    make sure this entry is uncommented!
- docker restart elasticsearch
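After the restart, one way to verify that the remote dictionary was picked up is to run _analyze from the Java client (a sketch using the client configured in the next section; the same check can be done with the POST _analyze requests above):
// Sketch: check that the custom word (e.g. 阿凡达) comes back as a single token from ik_smart
AnalyzeRequest analyzeRequest = AnalyzeRequest.withGlobalAnalyzer("ik_smart", "中国人阿凡达");
AnalyzeResponse analyzeResponse = restHighLevelClient.indices().analyze(analyzeRequest, RequestOptions.DEFAULT);
for (AnalyzeResponse.AnalyzeToken token : analyzeResponse.getTokens()) {
    System.out.println(token.getTerm());
}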
Java client
pom
<!-- use the high-level REST client and keep the version in sync with the server (7.9.3) -->
<dependency>
<groupId>org.elasticsearch.client</groupId>
<artifactId>elasticsearch-rest-high-level-client</artifactId>
<version>7.9.3</version>
</dependency>
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch</artifactId>
<version>7.9.3</version>
</dependency>
Configuration class
@Bean
public RestHighLevelClient restHighLevelClient() {
RestHighLevelClient client = new RestHighLevelClient(
RestClient.builder(
new HttpHost("172.18.72.153", 9200, "http")
)
);
return client;
}
CRUD operations
package com.hzc.es;
import com.alibaba.fastjson.JSON;
import com.hzc.es.model.User;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.support.master.AcknowledgedResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.indices.CreateIndexRequest;
import org.elasticsearch.client.indices.CreateIndexResponse;
import org.elasticsearch.client.indices.GetIndexRequest;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.query.TermQueryBuilder;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;
@SpringBootTest
public class EsTest {
@Autowired
private RestHighLevelClient restHighLevelClient;
// create an index
@Test
void createIndex() throws IOException {
CreateIndexRequest createIndexRequest = new CreateIndexRequest("hzc_index");
CreateIndexResponse createIndexResponse = restHighLevelClient.indices().create(createIndexRequest, RequestOptions.DEFAULT);
System.out.println(createIndexResponse);
}
// check whether an index exists
@Test
void existIndex() throws IOException {
GetIndexRequest getIndexRequest = new GetIndexRequest("hzc_index");
boolean boo = restHighLevelClient.indices().exists(getIndexRequest, RequestOptions.DEFAULT);
System.out.println(boo);
}
// delete an index
@Test
void deIndex() throws IOException {
DeleteIndexRequest getIndexRequest = new DeleteIndexRequest("test3");
AcknowledgedResponse delete = restHighLevelClient.indices().delete(getIndexRequest, RequestOptions.DEFAULT);
System.out.println(delete);
System.out.println(delete.isAcknowledged());
}
// add a document
@Test
void addDocument() throws IOException {
User user = new User("巡抚", 99);
String usreJson = JSON.toJSONString(user);
IndexRequest indexRequest = new IndexRequest("hzc_index");
indexRequest.source(usreJson, XContentType.JSON);
indexRequest.timeout(TimeValue.timeValueSeconds(1));
indexRequest.id("13"); // set the id manually
IndexResponse response = restHighLevelClient.index(indexRequest, RequestOptions.DEFAULT);
System.out.println(response.toString());
System.out.println(response.status());
}
// check whether a document exists
@Test
void existDocument() throws IOException {
GetRequest getRequest = new GetRequest("hzc_index", "8");
boolean exists = restHighLevelClient.exists(getRequest, RequestOptions.DEFAULT);
System.out.println(exists);
}
// get a document
@Test
void getDocument() throws IOException {
GetRequest getRequest = new GetRequest("hzc_index", "210");
GetResponse documentFields = restHighLevelClient.get(getRequest, RequestOptions.DEFAULT);
System.out.println(documentFields + "----------------");
System.out.println(documentFields.getSourceAsString());
}
// update a document
@Test
void updateDocument() throws IOException {
UpdateRequest updateRequest = new UpdateRequest("hzc_index", "2");
User user = new User("河南", 45);
updateRequest.doc(JSON.toJSONString(user), XContentType.JSON);
updateRequest.timeout("1s");
UpdateResponse update = restHighLevelClient.update(updateRequest, RequestOptions.DEFAULT);
System.out.println(update + "---------");
System.out.println(update.status());
}
// delete a document
@Test
void deleteDocument() throws IOException {
DeleteRequest deleteRequest = new DeleteRequest("hzc_index", "2");
deleteRequest.timeout("1s"); // "1s" means a one-second timeout
DeleteResponse delete = restHighLevelClient.delete(deleteRequest, RequestOptions.DEFAULT);
System.out.println(delete);
System.out.println(delete.status());
}
// bulk insert
@Test
void batchDocument() throws IOException {
List<User> list = new ArrayList<>();
list.add(new User("鞍山", 11));
list.add(new User("抚顺", 12));
list.add(new User("锦州", 13));
list.add(new User("丹东", 14));
list.add(new User("铁岭", 15));
list.add(new User("本溪", 16));
BulkRequest bulkRequest = new BulkRequest();
bulkRequest.timeout("10s");
for (int i = 0; i < list.size(); i++) {
bulkRequest.add(new IndexRequest("hzc_index").id("" + i + 10).source(JSON.toJSONString(list.get(i)),XContentType.JSON)); // note: "" + i + 10 is string concatenation, so the ids are "010", "110", "210", ...
}
BulkResponse bulk = restHighLevelClient.bulk(bulkRequest, RequestOptions.DEFAULT);
System.out.println(bulk);
System.out.println(bulk.status());
System.out.println(bulk.hasFailures()); // false means there were no failures
}
// search
@Test
void search() throws IOException {
SearchRequest searchRequest = new SearchRequest("hzc_index");
// SearchRequest searchRequest = new SearchRequest("hzc_index","lib");
// searchRequest.indices("lib1","lib2", "lib3", "lib4");
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
// TermQueryBuilder termQueryBuilder = QueryBuilders.termQuery("name","铁岭");
QueryBuilder termQueryBuilder = new TermQueryBuilder("name","抚"); // term queries match the analyzed tokens: "抚顺" was indexed as "抚" and "顺", so a term query for "抚顺" returns nothing
// searchSourceBuilder.highlighter();
searchSourceBuilder.query(termQueryBuilder);
searchSourceBuilder.timeout(new TimeValue(60, TimeUnit.SECONDS));
searchRequest.source(searchSourceBuilder);
SearchResponse search = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
System.out.println(search);
System.out.println(JSON.toJSONString(search.getHits()));
System.out.println("----------------------------------");
for (SearchHit hit : search.getHits()) {
System.out.println(hit.getSourceAsMap());
}
System.out.println("===========================================");
for (SearchHit hit : search.getHits().getHits()) {
System.out.println(hit.getSourceAsMap());
}
}
}
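Because term queries run against the indexed tokens, the usual fix for analyzed Chinese text is a match query, which analyzes the query string first; a sketch:
// Sketch: a match query analyzes the query string, so "抚顺" is split into "抚" and "顺"
// and matches documents that were indexed with those tokens
SearchSourceBuilder source = new SearchSourceBuilder()
        .query(QueryBuilders.matchQuery("name", "抚顺"));
SearchResponse response = restHighLevelClient.search(new SearchRequest("hzc_index").source(source), RequestOptions.DEFAULT);
for (SearchHit hit : response.getHits()) {
    System.out.println(hit.getSourceAsMap());
}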
ES cluster
Create all the configuration/data volumes first and grant them full permissions
yml
log
data
plugins
Config directories:
mkdir -p /home/elasticsearch/node-1/config
mkdir -p /home/elasticsearch/node-2/config
mkdir -p /home/elasticsearch/node-3/config
Data directories:
mkdir -p /home/elasticsearch/node-1/data
mkdir -p /home/elasticsearch/node-2/data
mkdir -p /home/elasticsearch/node-3/data
Log directories:
mkdir -p /home/elasticsearch/node-1/log
mkdir -p /home/elasticsearch/node-2/log
mkdir -p /home/elasticsearch/node-3/log
IK plugin directories:
mkdir -p /home/elasticsearch/node-1/plugins
mkdir -p /home/elasticsearch/node-2/plugins
mkdir -p /home/elasticsearch/node-3/plugins
Config files (three, one per node; node-1 shown):
# cluster name
cluster.name: my-es
# name of this node
node.name: node-1
# whether this node is eligible to be elected master
node.master: true
# whether this node stores data
node.data: true
# max number of nodes allowed to share this machine's storage
node.max_local_storage_nodes: 3
# custom node attributes (optional)
#node.attr.rack: r1
# data directory
path.data: /usr/share/elasticsearch/data
# log directory
path.logs: /usr/share/elasticsearch/log
# lock the process memory on startup
#bootstrap.memory_lock: true
# bind address. Pitfall: I originally put the machine's real IP here and startup kept
# failing with an invalid-address error on port 9300; inside Docker just use 0.0.0.0
network.host: 0.0.0.0
# address this node publishes for other nodes to reach it; if unset ES guesses. It must be a real,
# reachable IP: with Docker, use the physical host's address rather than the Docker bridge IP
network.publish_host: 172.18.72.153
# HTTP port
http.port: 9200
# transport port used between nodes
transport.tcp.port: 9300
# discovery defaults to 127.0.0.1:9300; for a cluster, list the candidate master nodes here
# (new in 7.x), i.e. include every node
discovery.seed_hosts: ["172.18.72.153:9300","172.18.72.153:9301","172.18.72.153:9302"]
# initial master-eligible nodes used for the very first election;
# if omitted, Elasticsearch elects on its own; list all three nodes here
cluster.initial_master_nodes: ["node-1","node-2","node-3"]
# after a full cluster restart, block initial recovery until N nodes have started,
# i.e. the minimum number of nodes that must come back up before the cluster serves requests
# whether deleting an index requires its explicit name
#action.destructive_requires_name: true
http.cors.enabled: true
http.cors.allow-origin: "*"
Start the three nodes
docker run -e ES_JAVA_OPTS="-Xms512m -Xmx512m" -d -p 9200:9200 -p 9300:9300 -v /mydata/es/config/es1.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /mydata/es/plugin1:/usr/share/elasticsearch/plugins -v /mydata/es/data1:/usr/share/elasticsearch/data -v /mydata/es/log1:/usr/share/elasticsearch/log --name es-node-1 elasticsearch:7.9.3
docker run -e ES_JAVA_OPTS="-Xms512m -Xmx512m" -d -p 9201:9201 -p 9301:9301 -v /mydata/es/config/es2.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /mydata/es/plugin2:/usr/share/elasticsearch/plugins -v /mydata/es/data2:/usr/share/elasticsearch/data -v /mydata/es/log2:/usr/share/elasticsearch/log --name es-node-2 elasticsearch:7.9.3
docker run -e ES_JAVA_OPTS="-Xms512m -Xmx512m" -d -p 9202:9202 -p 9302:9302 -v /mydata/es/config/es3.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /mydata/es/plugin3:/usr/share/elasticsearch/plugins -v /mydata/es/data3:/usr/share/elasticsearch/data -v /mydata/es/log3:/usr/share/elasticsearch/log --name es-node-3 elasticsearch:7.9.3
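When connecting the Java client to this cluster, all three nodes can be passed to the builder so requests can fail over between them; a sketch based on the configuration class shown earlier:
// Sketch: register all three cluster nodes with the high-level client
@Bean
public RestHighLevelClient restHighLevelClient() {
    return new RestHighLevelClient(
            RestClient.builder(
                    new HttpHost("172.18.72.153", 9200, "http"),
                    new HttpHost("172.18.72.153", 9201, "http"),
                    new HttpHost("172.18.72.153", 9202, "http")
            )
    );
}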
Crawl JD product data and store it in ES
package com.hzc.es.util;
import com.alibaba.fastjson.JSON;
import com.hzc.es.model.Content;
import org.dom4j.DocumentException;
import org.dom4j.io.SAXReader;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.xcontent.XContentType;
import org.springframework.stereotype.Component;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;
import javax.annotation.Resource;
import java.io.File;
import java.io.IOException;
import java.net.URL;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.Charset;
import java.util.ArrayList;
@Component
public class HtmlParseUtil {
@Resource
private HttpUtils httpUtils;
@Resource
private RestHighLevelClient restHighLevelClient;
public ArrayList<Content> parseJd(String keyword) throws IOException, DocumentException {
// url
// String url = "https://list.tmall.com/search_product.htm?q=" + keyword;
// httpUtils.doGetHTML(url);
// live crawling did not work, so the returned HTML was saved locally and is parsed from disk
Document parse = Jsoup.parse(new File("C:\\Users\\Administrator\\Desktop\\1.html"), "UTF-8");
// parse the page into a Document
// Document parse = Jsoup.parse(new URL("http://www.tooopen.com/view/1439719.html"), 300000);
// Element j_goodsList = parse.getElementById("J_goodsList");
Elements elements = parse.getElementsByTag("li");
ArrayList<Content> contentArrayList = new ArrayList<>();
for (Element element : elements) {
String img = element.getElementsByTag("img").eq(0).attr("src");
String price = element.getElementsByClass("p-price").eq(0).text();
String title = element.getElementsByClass("p-name").eq(0).text();
if (title.length() != 0 && img != null) {
Content content = new Content(title, img, price);
contentArrayList.add(content);
}
}
// bulk-insert the results into ES
BulkRequest bulkRequest = new BulkRequest();
bulkRequest.timeout("1m");
for (int i = 0; i < contentArrayList.size(); i++) {
bulkRequest.add(new IndexRequest("jd_goods").source(JSON.toJSONString(contentArrayList.get(i)), XContentType.JSON));
}
BulkResponse bulk = restHighLevelClient.bulk(bulkRequest, RequestOptions.DEFAULT);
System.out.println(!bulk.hasFailures());
return contentArrayList;
}
}
Front-end page
Thymeleaf is used as the template engine together with vue.js; this is not a separate front-end/back-end project.
Pitfall
@Slf4j
@Controller // do NOT use @RestController here, or the view name is returned as the response body instead of the page being rendered!
public class IndexController {
@GetMapping("/index")
public String index(Model model){
model.addAttribute("message", "这里是首页面");
return "index";
}
}
Simple search page with highlighting
<!DOCTYPE html>
<html lang="en" xmlns:th="http://www.thymeleaf.org/">
<head>
<meta charset="UTF-8">
<title>springboot+elasticsearch+vue</title>
<script th:src="@{/axios.min.js}"></script>
<script th:src="@{/vue.min.js}"></script>
</head>
<body>
<!-- output with the th:text attribute -->
<div id="app">
搜索:<input type="text" v-model='keyword'>
<button type="submit" @click='search'>搜一下</button>
<div>
这里是列表了哈
<div v-for="re in result">
<p>
<em>{{re.price}}</em>
</p>
<p>
<a v-html='re.title'></a>
</p>
</div>
</div>
</div>
<script>
new Vue({
el: '#app',
data: {
keyword: '',
result: []
},
methods: {
search() {
let keyword = this.keyword;
console.log(keyword);
axios.get('es/jdTest/' + keyword + '/1/20').then(res => {
console.log(res);
this.result = res.data;
})
}
}
})
</script>
</body>
</html>
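The endpoint the page calls (es/jdTest/{keyword}/{pageNo}/{pageSize}) is not included in these notes; below is a hypothetical sketch of what it might look like, reusing the match query and highlighting shown earlier (class and method names are assumptions):
// Hypothetical sketch of the backend endpoint the page calls; names and layout are assumptions
@RestController // returning JSON here, unlike the page controller above
public class SearchController {
    @Resource
    private RestHighLevelClient restHighLevelClient;

    @GetMapping("/es/jdTest/{keyword}/{pageNo}/{pageSize}")
    public List<Map<String, Object>> jdTest(@PathVariable String keyword,
                                            @PathVariable int pageNo,
                                            @PathVariable int pageSize) throws IOException {
        SearchSourceBuilder source = new SearchSourceBuilder()
                .from((pageNo - 1) * pageSize)
                .size(pageSize)
                .query(QueryBuilders.matchQuery("title", keyword))
                .highlighter(new HighlightBuilder().field("title")
                        .preTags("<span style='color:red'>").postTags("</span>"));
        SearchResponse response = restHighLevelClient.search(
                new SearchRequest("jd_goods").source(source), RequestOptions.DEFAULT);
        List<Map<String, Object>> results = new ArrayList<>();
        for (SearchHit hit : response.getHits()) {
            Map<String, Object> doc = hit.getSourceAsMap();
            HighlightField field = hit.getHighlightFields().get("title");
            if (field != null) {
                doc.put("title", field.fragments()[0].string()); // replace the title with the highlighted fragment
            }
            results.add(doc);
        }
        return results;
    }
}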
One thing to note: the custom dictionary has to be in place before the data is indexed, so that documents are analyzed with the custom words at index time and the custom terms can be searched later. If a custom word is added after the data has already been indexed, it has no effect on the existing documents (they would have to be reindexed). A sketch follows.
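One way to guarantee that, as a sketch (assuming the jd_goods index from the crawler example): create the index with an explicit IK mapping before the first bulk insert, so every document is analyzed with ik_max_word, including the custom words, at index time.
// Sketch: create the index with IK analyzers on the title field before indexing any data
CreateIndexRequest createIndexRequest = new CreateIndexRequest("jd_goods");
createIndexRequest.mapping(
        "{\"properties\":{\"title\":{\"type\":\"text\",\"analyzer\":\"ik_max_word\",\"search_analyzer\":\"ik_smart\"}}}",
        XContentType.JSON);
restHighLevelClient.indices().create(createIndexRequest, RequestOptions.DEFAULT);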