Deploy a cluster to collect nginx, mysql, tomcat, and httpd logs.
1. Prerequisites
Every Kafka node needs a JDK; set that up in advance, then unpack the Kafka archive to /usr/local.
kafka-1 10.8.156.176
kafka-2 10.8.156.186
kafka-3 10.8.156.183
kafka-4 10.8.156.179
filebeat+nginx 10.8.156.180
filebeat+tomcat 10.8.156.190
filebeat+mysql 10.8.156.177
filebeat+httpd 10.8.156.182
2. Installation and configuration
Configure ZooKeeper
[root@kafka-1 ~]# sed -i 's/^[^#]/#&/' /usr/local/kafka_2.11-2.1.0/config/zookeeper.properties   # comment out every active line
[root@kafka-1 ~]# vim /usr/local/kafka_2.11-2.1.0/config/zookeeper.properties   # then append the following
dataDir=/opt/data/zookeeper/data
dataLogDir=/opt/data/zookeeper/logs
clientPort=2181
tickTime=2000
initLimit=20
syncLimit=10
server.1=10.8.156.176:2888:3888
server.2=10.8.156.186:2888:3888
server.3=10.8.156.183:2888:3888
server.4=10.8.156.179:2888:3888
(the configuration is identical on all 4 machines)
# Create the data and log directories
[root@kafka-1 ~]# mkdir -p /opt/data/zookeeper/{data,logs}
# Create the myid file
[root@kafka-1 ~]# echo 1 > /opt/data/zookeeper/data/myid   # myid is sequential: echo 1, 2, 3, 4 on the four machines respectively
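Each node's myid must be unique and must match its server.N line in zookeeper.properties. A sketch that writes the four remote commands to myid.sh for review (assumes passwordless root SSH; the IP-to-id mapping follows the server.1-4 lines above):

```shell
# Map each node's IP to its myid (must agree with server.N above).
: > myid.sh
for pair in 10.8.156.176:1 10.8.156.186:2 10.8.156.183:3 10.8.156.179:4; do
  host=${pair%%:*}; id=${pair##*:}
  echo "ssh root@$host 'mkdir -p /opt/data/zookeeper/data && echo $id > /opt/data/zookeeper/data/myid'" >> myid.sh
done
cat myid.sh   # review, then: sh myid.sh
```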
Configure Kafka
[root@kafka-1 ~]# sed -i 's/^[^#]/#&/' /usr/local/kafka_2.11-2.1.0/config/server.properties
[root@kafka-1 ~]# vim /usr/local/kafka_2.11-2.1.0/config/server.properties   # append at the end
broker.id=1   # 1 on kafka-1, 2 on kafka-2, and so on
listeners=PLAINTEXT://10.8.156.179:9092   # this machine's own IP (this example is from kafka-4)
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/opt/data/kafka/logs
num.partitions=6
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=536870912
log.retention.check.interval.ms=300000
zookeeper.connect=10.8.156.176:2181,10.8.156.186:2181,10.8.156.183:2181,10.8.156.179:2181   # list every ZooKeeper node here
zookeeper.connection.timeout.ms=60000
group.initial.rebalance.delay.ms=0
[root@kafka-1 ~]# mkdir -p /opt/data/kafka/logs
Start the Kafka cluster and verify it works
Run on all 4 nodes, one at a time:
[root@kafka-1 ~]# cd /usr/local/kafka_2.11-2.1.0/
[root@kafka-1 kafka_2.11-2.1.0]# nohup bin/zookeeper-server-start.sh config/zookeeper.properties &
Check the listening port:
[root@kafka-1 ~]# netstat -lntp | grep 2181
tcp6 0 0 :::2181 :::* LISTEN 1226/java
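Besides the port check, each ZooKeeper node can be probed with its four-letter-word protocol: sending `ruok` should get `imok` back (provided the 4lw command whitelist allows it on your ZooKeeper build). A sketch using nc from any machine that can reach the nodes:

```shell
# Probe every ZooKeeper node; a healthy server replies "imok".
ZK_HOSTS="10.8.156.176 10.8.156.186 10.8.156.183 10.8.156.179"
for zk in $ZK_HOSTS; do
  # -w 2 bounds the wait; an empty reply is reported as "no response".
  r=$(echo ruok | nc -w 2 "$zk" 2181)
  echo "$zk: ${r:-no response}"
done | tee zk_check.txt
```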
Then, again on all 4 nodes, run:
[root@kafka-1 ~]# cd /usr/local/kafka_2.11-2.1.0/
[root@kafka-1 kafka_2.11-2.1.0]# nohup bin/kafka-server-start.sh config/server.properties &
Verify:
List the topics from any Kafka machine:
[root@kafka-1 kafka_2.11-2.1.0]# bin/kafka-topics.sh --zookeeper 10.8.156.176:2181 --list
nginx
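A fuller end-to-end check is to create a throwaway topic, produce one message, and read it back. The flags below match the Kafka 2.1 CLI (topic admin via ZooKeeper, producer/consumer via a broker); the script is written to smoke.sh so it can be reviewed before being run on a broker, and testtopic is just a disposable name:

```shell
# Throwaway smoke test: create a topic, produce one record, consume it back.
cat > smoke.sh <<'EOF'
cd /usr/local/kafka_2.11-2.1.0/
bin/kafka-topics.sh --create --zookeeper 10.8.156.176:2181 \
  --replication-factor 3 --partitions 3 --topic testtopic
echo "hello kafka" | bin/kafka-console-producer.sh \
  --broker-list 10.8.156.176:9092 --topic testtopic
bin/kafka-console-consumer.sh --bootstrap-server 10.8.156.176:9092 \
  --topic testtopic --from-beginning --max-messages 1
EOF
cat smoke.sh   # review, then on a broker: sh smoke.sh
```

If the consumer prints "hello kafka", replication and leadership election are working across the brokers.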
Filebeat setup
Unpack Filebeat to /usr/local and rename the directory to filebeat.
Edit the nginx configuration:
[root@sxp-nginx ~]# vim /etc/nginx/nginx.conf
http {
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log /var/log/nginx/access.log main;
log_format json '{"@timestamp":"$time_iso8601",'
'"@version":"1",'
'"client":"$remote_addr",'
'"url":"$uri",'
'"status":"$status",'
'"domain":"$host",'
'"host":"$server_addr",'
'"size":$body_bytes_sent,'
'"responsetime":$request_time,'
'"referer": "$http_referer",'
'"ua": "$http_user_agent"'
'}';
access_log /var/log/nginx/access_json.log json;
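After `nginx -t && nginx -s reload`, every access-log line should be a single JSON object. That is worth a spot check, since Filebeat's json.* settings below will flag lines that fail to parse. The sample line here is hypothetical, shaped like what the log_format above emits:

```shell
# A hypothetical line in the shape produced by the "json" log_format above.
sample='{"@timestamp":"2019-08-03T19:48:59+08:00","@version":"1","client":"10.8.156.1","url":"/","status":"200","domain":"example.test","host":"10.8.156.180","size":612,"responsetime":0.002,"referer": "-","ua": "curl/7.29.0"}'
echo "$sample" > sample.json
# Validate it the same way you would a real one:
#   tail -n1 /var/log/nginx/access_json.log | python3 -m json.tool
python3 -m json.tool < sample.json > /dev/null && echo "valid JSON"
```

One caveat: nginx does not JSON-escape variables, so a user agent or referer containing a double quote can still produce a broken line; treat this as a spot check, not a guarantee.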
[root@sxp-nginx filebeat]# mv filebeat.yml filebeat.yml.bak
[root@sxp-nginx filebeat]# vim filebeat.yml
filebeat.inputs:
- type: log
  paths:
    - /var/log/nginx/*.log
  json.keys_under_root: true
  json.add_error_key: true
  json.message_key: log
output.kafka:
  hosts: ["10.8.156.176:9092","10.8.156.186:9092","10.8.156.183:9092","10.8.156.179:9092"]
  topic: 'nginx'
Start Filebeat:
[root@sxp-nginx filebeat]# nohup ./filebeat -e -c filebeat.yml &
[root@sxp-nginx filebeat]# tail -f nohup.out
Verify that the topic was created in Kafka:
[root@sxp-nginx filebeat]# cd /usr/local/kafka_2.11-2.1.0/
[root@sxp-nginx kafka_2.11-2.1.0]# bin/kafka-topics.sh --zookeeper 10.8.156.176:2181 --list
__consumer_offsets
nginx   # the topic has been created
ES deployment
1. Install and configure JDK 8
ES depends on JDK 8. Do the following on all three ES machines; upload the JDK 1.8 tarball first.
[root@mes-1 ~]# tar xzf jdk-8u191-linux-x64.tar.gz -C /usr/local/
[root@mes-1 ~]# cd /usr/local/
[root@mes-1 local]# mv jdk1.8.0_191/ java
[root@mes-1 local]# echo '
JAVA_HOME=/usr/local/java
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME PATH
' >>/etc/profile
[root@mes-1 ~]# source /etc/profile
[root@mes-1 local]# java -version
java version "1.8.0_191"
Java(TM) SE Runtime Environment (build 1.8.0_191-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.191-b12, mixed mode)
2. Install and configure ES (do the following part on the first machine only)
(1) Create an unprivileged user to run ES
[root@mes-1 ~]# useradd elsearch
[root@mes-1 ~]# echo "123456" | passwd --stdin "elsearch"
(2) Install and configure ES
[root@mes-1 ~]# tar xzf elasticsearch-6.5.4.tar.gz -C /usr/local/
[root@mes-1 ~]# cd /usr/local/elasticsearch-6.5.4/config/
[root@mes-1 config]# ls
elasticsearch.yml log4j2.properties roles.yml users_roles
jvm.options role_mapping.yml users
[root@mes-1 config]# cp elasticsearch.yml elasticsearch.yml.bak
[root@mes-1 config]# vim elasticsearch.yml   # add the following
cluster.name: elk
node.name: elk01
node.master: true
node.data: true
path.data: /data/elasticsearch/data
path.logs: /data/elasticsearch/logs
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.8.156.166", "10.8.156.167","10.8.156.171"]   # hostnames or IPs both work
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping_timeout: 150s
discovery.zen.fd.ping_retries: 10
client.transport.ping_timeout: 60s
http.cors.enabled: true
http.cors.allow-origin: "*"
Second ES node:
cluster.name: elk
node.name: elk02
node.master: true
node.data: true
path.data: /data/elasticsearch/data
path.logs: /data/elasticsearch/logs
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.8.156.166", "10.8.156.167","10.8.156.171"]   # hostnames or IPs both work
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping_timeout: 150s
discovery.zen.fd.ping_retries: 10
client.transport.ping_timeout: 60s
http.cors.enabled: true
http.cors.allow-origin: "*"
Third ES node:
cluster.name: elk
node.name: elk03
node.master: true
node.data: true
path.data: /data/elasticsearch/data
path.logs: /data/elasticsearch/logs
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.8.156.166", "10.8.156.167","10.8.156.171"]   # hostnames or IPs both work
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping_timeout: 150s
discovery.zen.fd.ping_retries: 10
client.transport.ping_timeout: 60s
http.cors.enabled: true
http.cors.allow-origin: "*"
(3) Set the JVM heap
[root@mes-1 config]# vim jvm.options   # change
-Xms1g   ----to -Xms2g
-Xmx1g   ----to -Xmx2g
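The same edit can be done non-interactively with sed. Demonstrated here on a scratch copy named jvm.options.demo; on the real host, point F at /usr/local/elasticsearch-6.5.4/config/jvm.options instead:

```shell
# Scratch copy standing in for config/jvm.options; -i.bak keeps a backup.
printf -- '-Xms1g\n-Xmx1g\n' > jvm.options.demo
F=jvm.options.demo
sed -i.bak -e 's/^-Xms1g$/-Xms2g/' -e 's/^-Xmx1g$/-Xmx2g/' "$F"
grep -E '^-Xm[sx]' "$F"
```

Keep -Xms and -Xmx equal, and size the heap to at most half the machine's RAM (2g assumes at least 4 GB on the host).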
(4) Create the ES data and log directories
[root@mes-1 ~]# mkdir -p /data/elasticsearch/data
[root@mes-1 ~]# mkdir -p /data/elasticsearch/logs
(5) Fix ownership of the install and data directories
[root@mes-1 ~]# chown -R elsearch:elsearch /data/elasticsearch
[root@mes-1 ~]# chown -R elsearch:elsearch /usr/local/elasticsearch-6.5.4
3. System tuning
(1) Raise the maximum number of open files
To make it permanent:
echo "* - nofile 65536" >> /etc/security/limits.conf
(2) Raise the maximum number of processes
[root@mes-1 ~]# vim /etc/security/limits.conf   # append the following at the end of the file
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
(3) Raise the maximum memory-map count
[root@mes-1 ~]# vim /etc/sysctl.conf   # add the following
vm.max_map_count=262144
vm.swappiness=0
[root@mes-1 ~]# sysctl -p
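After a fresh login (limits.conf only applies to new sessions), the settings can be spot-checked:

```shell
# ulimit reflects the current session's limits; sysctl reads the live
# kernel value. On a tuned host, expect 65536 and 262144 respectively.
ulimit -n
sysctl vm.max_map_count
```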
4. Start ES
[root@mes-1 ~]# su - elsearch
Last login: Sat Aug 3 19:48:59 CST 2019 on pts/0
[elsearch@mes-1 ~]$ cd /usr/local/elasticsearch-6.5.4
[elsearch@mes-1 elasticsearch-6.5.4]$ ./bin/elasticsearch   # run in the foreground first to check for errors; startup takes a while
Once it starts cleanly, stop it (Ctrl+C) and relaunch it in the background:
[root@mes-1 ~]# su - elsearch -c "cd /usr/local/elasticsearch-6.5.4 && nohup bin/elasticsearch &"
[root@mes-1 ~]# tail -f /usr/local/elasticsearch-6.5.4/nohup.out   # confirm it started
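Once all three nodes are up, cluster health can be checked over HTTP from any machine that can reach an ES node (10.8.156.166 is one of the node IPs from the discovery list above); a green or yellow status with "number_of_nodes": 3 means the cluster formed:

```shell
# Poll one node; -m 5 caps the wait so this fails fast when unreachable.
curl -s -m 5 "http://10.8.156.166:9200/_cluster/health?pretty" > health.json \
  || echo "ES not reachable" > health.json
cat health.json
```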