[TOC]
一、Lab Environment
Hostname | IP | Software | OS | Role | Remark |
---|---|---|---|---|---|
master.app.com | 10.66.3.155 | Elasticsearch, Logstash, Kibana, Redis, Redis-browser, Java | CentOS release 6.6 | Server | Provide the packages yourself |
node1.app.com | 10.66.3.136 | Logstash, Java, Nginx, Rsyslog | CentOS release 6.6 | Client | Provide the packages yourself |
Boot-time startup commands (shown here for reference only; no need to run them yet)
# vim /etc/rc.local //the elasticsearch and kibana users must be created first and the relevant directory permissions changed; logstash runs as root because it has to send mail
su -l -c "su elasticsearch /opt/elasticsearch/bin/elasticsearch >/dev/null 2>&1 &"
su -l -c "su kibana /opt/kibana/bin/kibana >/dev/null 2>&1 &"
su -l -c "nohup /opt/logstash/bin/logstash -f /opt/logstash/config/log_indexer_tomcat_catalina_local_250.conf >/dev/null 2>&1 &"
二、Software Installation
2.1 Server Software Installation
2.1.1 Install JDK
# tar xf jdk-7u79-linux-x64.tar.gz -C /opt/ //extract the JDK into the target directory
# ln -sv /opt/jdk1.7.0_79/ /opt/java //create a symlink; keeping the original directory name makes the version visible at a glance
# vim /etc/profile.d/java.sh //create the environment variables
export JAVA_HOME=/opt/java
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
# . /etc/profile.d/java.sh //make the environment variables take effect
# java -version //check that they took effect; the java version information should be displayed
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
2.1.2 Install Elasticsearch
# groupadd -g 92 elasticsearch
# useradd -g 92 -u 92 elasticsearch
# tar xf elasticsearch-2.3.1.tar.gz -C /opt/ //extract elasticsearch into the target directory
# ln -sv /opt/elasticsearch-2.3.1 /opt/elasticsearch //directory symlink, so that paths such as /opt/elasticsearch/bin/elasticsearch used later (rc.local, chown) resolve
# ln -sv /opt/elasticsearch-2.3.1/bin/elasticsearch /usr/bin/ //link the elasticsearch executable into /usr/bin so `elasticsearch` can be run from any directory
# chown -R elasticsearch.elasticsearch /opt/elasticsearch/
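The curl output in section 3.2 later shows the cluster name es_cluster and the node name node0; those values are not the defaults, so here is a minimal sketch of config/elasticsearch.yml, assuming the /opt/elasticsearch symlink above (network.host makes the node reachable from other machines):
# vim /opt/elasticsearch/config/elasticsearch.yml //Elasticsearch 2.x main configuration file
cluster.name: es_cluster
node.name: node0
network.host: 10.66.3.155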
2.1.3 Install Logstash
# tar xf logstash-2.3.1.tar.gz -C /opt/ //extract logstash into the target directory
# ln -sv /opt/logstash-2.3.1 /opt/logstash //directory symlink, so that paths such as /opt/logstash/bin/logstash used later in rc.local resolve
# ln -sv /opt/logstash-2.3.1/bin/logstash /usr/bin/ //link the logstash executable into /usr/bin so `logstash` can be run from any directory
2.1.4 Install Redis
# tar xf redis-3.0.7.tar.gz //extract into the current directory
# cd redis-3.0.7 //switch into the redis source directory
# make //compile
# yum install tcl //install the tool the test suite depends on
# make test //this may fail; it is only informational, so don't worry about it
# make install //install
# mkdir /opt/redis/{db,conf} -pv //create the redis installation directories
# cp redis.conf /opt/redis/conf/ //copy the configuration file into the redis installation directory
# cd src
# cp redis-benchmark redis-check-aof redis-check-dump redis-cli redis-server mkreleasehdr.sh /opt/redis/ //copy the binaries into the redis installation directory
# ln -sv /opt/redis/redis-cli /usr/bin/ //link the redis-cli executable into /usr/bin so `redis-cli` can be run from any directory
# vim /opt/redis/conf/redis.conf //set `daemonize` to `yes` in redis.conf so the server runs as a daemon in the background; this step is optional, because the script run below regenerates this file and sets the value to `yes` anyway
daemonize yes
make install only installs the binaries on your system; it does not set up an init script or configuration file for you. To use Redis in production, Redis ships a script for this purpose in the utils directory of the source tree: install_server.sh.
# cd .. //back to the redis-3.0.7 source root
# ./utils/install_server.sh //run the installer script
`Welcome to the redis service installer
This script will help you easily set up a running redis server
Please select the redis port for this instance: [6379]
Selecting default: 6379
Please select the redis config file name [/etc/redis/6379.conf] /opt/redis/conf/redis.conf
Please select the redis log file name [/var/log/redis_6379.log]
Selected default - /var/log/redis_6379.log
Please select the data directory for this instance [/var/lib/redis/6379] /opt/redis/db/6379.db
Please select the redis executable path [/usr/bin/redis-server]
Selected config:
Port : 6379
Config file : /opt/redis/conf/redis.conf
Log file : /var/log/redis_6379.log
Data dir : /opt/redis/db/6379.db
Executable : /opt/redis/redis-server
Cli Executable : /usr/bin/redis-cli`
# chkconfig --add redis_6379 //register redis as a system service
# chkconfig redis_6379 on //enable start at boot
# vim /opt/redis/conf/redis.conf
requirepass Carsing2582# //set a password
# /etc/init.d/redis_6379 restart
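A quick connectivity check after the restart; note that once requirepass is set, redis-cli needs -a, and any Logstash redis input/output block pointing at this instance would also need a matching password => option (both plugins support it):
# redis-cli -a Carsing2582# ping //should answer PONG if the server is up and the password is accepted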
2.1.5 Install Kibana
# groupadd -g 56 kibana
# useradd -g 56 -u 56 kibana
# tar xf kibana-4.5.0-linux-x64.tar.gz -C /opt/ //extract kibana into the target directory
# ln -sv /opt/kibana-4.5.0-linux-x64 /opt/kibana //directory symlink, so that paths such as /opt/kibana/bin/kibana used later (rc.local, chown) resolve
# ln -sv /opt/kibana-4.5.0-linux-x64/bin/kibana /usr/bin/ //link the kibana executable into /usr/bin so `kibana` can be run from any directory
# chown -R kibana.kibana /opt/kibana/
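Kibana 4.x reads its settings from config/kibana.yml. A minimal sketch, assuming the /opt/kibana symlink above and the Elasticsearch address used throughout this guide:
# vim /opt/kibana/config/kibana.yml //Kibana main configuration file
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://10.66.3.155:9200"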
2.2 Client Software Installation
2.2.1 Install JDK
# tar xf jdk-7u79-linux-x64.tar.gz -C /opt/ //extract the JDK into the target directory
# ln -sv /opt/jdk1.7.0_79/ /opt/java //create a symlink; keeping the original directory name makes the version visible at a glance
# vim /etc/profile.d/java.sh //create the environment variables
export JAVA_HOME=/opt/java
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
# . /etc/profile.d/java.sh //make the environment variables take effect
# java -version //check that they took effect; the java version information should be displayed
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
2.2.2 Install Logstash
# tar xf logstash-2.3.1.tar.gz -C /opt/ //extract logstash into the target directory
# ln -sv /opt/logstash-2.3.1 /opt/logstash //directory symlink, so that paths such as /opt/logstash/bin/logstash used later in rc.local resolve
# ln -sv /opt/logstash-2.3.1/bin/logstash /usr/bin/ //link the logstash executable into /usr/bin so `logstash` can be run from any directory
三、Start Service
Server
3.1 Start Redis
# /etc/init.d/redis_6379 start
# netstat -tnlp //check that port 6379 is listening
3.2 Start Elasticsearch
Elasticsearch refuses to start as root, so run it as an ordinary user.
# su - elasticsearch //switch to the elasticsearch user created earlier
# nohup elasticsearch >nohup & //start it and put it in the background (still as the elasticsearch user)
# exit //switch back to root
# vim /etc/rc.local //set it to start at boot
su -l -c "su elasticsearch /opt/elasticsearch/bin/elasticsearch >/dev/null 2>&1 &"
# netstat -tnlp //Elasticsearch serves HTTP on port 9200 by default and uses TCP port 9300 for inter-node communication; make sure these TCP ports are open in the firewall
http://10.66.3.155:9200 //opening this URL shows something like the following
{ "name" : "node0", "cluster_name" : "es_cluster", "version" : { "number" : "2.3.1", "build_hash" : "bd980929010aef404e7cb0843e61d0665269fc39", "build_timestamp" : "2016-04-04T12:25:05Z", "build_snapshot" : false, "lucene_version" : "5.5.0" }, "tagline" : "You Know, for Search" }
# curl -X GET http://10.66.3.155:9200 //fetch the page body
# curl -I http://10.66.3.155:9200 //fetch only the response headers; 200 means OK
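Optionally check the cluster health as well; on a single node a status of yellow is normal because replica shards cannot be allocated:
# curl http://10.66.3.155:9200/_cluster/health?pretty //shows status, node count and shard allocation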
3.3 Start Kibana
# nohup kibana >nohup & //start it and put it in the background
# vim /etc/rc.local //set it to start at boot
su -l -c "su kibana /opt/kibana/bin/kibana >/dev/null 2>&1 &"
# netstat -tnlp //kibana listens on port 5601
http://10.66.3.155:5601 //opening this URL shows the Kibana web interface
四、Monitor Nginx Log
4.1 Monitor only the access log: read it from the file and ship it to Redis on the server
Client
# vim /opt/logstash-2.3.1/conf/log_agent_nginx_access.conf //define an instance config that reads the logs from access.log and stores them in redis
input {
file {
type => "nginx access log"
path => ["/var/log/nginx/access.log"]
}
}
output {
redis {
host => "10.66.3.155"
port => "6379"
data_type => "list"
key => "nginx_access_136:redis"
}
}
# /opt/logstash-2.3.1/bin/logstash -f /opt/logstash-2.3.1/conf/log_agent_nginx_access.conf //start the instance as a test; if everything is fine you will see
Settings: Default pipeline workers: 8 Pipeline main started
Press Ctrl+C to exit, then run it in the background:
# nohup /opt/logstash-2.3.1/bin/logstash -f /opt/logstash-2.3.1/conf/log_agent_nginx_access.conf >nohup & //run it in the background
# vim /etc/rc.local //set it to start at boot
su -l -c "nohup /opt/logstash/bin/logstash -f /opt/logstash/conf/log_agent_nginx_access.conf >/dev/null 2>&1 &"
Server
# redis-cli //log in to redis (add -a <password> if requirepass is enabled)
127.0.0.1:6379> exists nginx_access_136:redis
(integer) 1
//an existing key returns (integer) 1
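While still in redis-cli, llen shows how many entries are currently queued under the key; the number grows while only the agent is running and shrinks once the indexer below starts consuming:
127.0.0.1:6379> llen nginx_access_136:redis //returns the current number of queued log entries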
# vim /opt/logstash-2.3.1/config/log_indexer_nginx_access_136.conf //read the logs stored under the key nginx_access_136:redis from redis, filter them, and forward them to elasticsearch
input {
redis {
host => "10.66.3.155"
port => "6379"
data_type => "list"
key => "nginx_access_136:redis"
type => "redis-input"
}
}
filter {
if [type] =~ "nginx access log" {
mutate {
replace => { "type" => "apache_access" }
}
grok {
match => { "message" => "%{COMBINEDAPACHELOG}" }
}
}
date {
match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
}
}
output {
elasticsearch {
hosts => "10.66.3.155:9200"
}
stdout { codec => rubydebug }
}
# logstash -f /opt/logstash-2.3.1/config/log_indexer_nginx_access_136.conf //start an instance; if everything is fine you will see output like the following
{ "message" => "10.66.0.1 - - [15/Apr/2016:12:30:05 +0800] \"POST /weixin/services/SysServiceLog?wsdl HTTP/1.0\" 200 594 \"-\" \"Axis/1.4\"", "@version" => "1", "@timestamp" => "2016-04-15T04:30:05.000Z", "path" => "/opt/nginx/logs/access.log", "host" => "LO-T-DEMO-AP", "type" => "apache_access", "clientip" => "10.66.0.1", "ident" => "-", "auth" => "-", "timestamp" => "15/Apr/2016:12:30:05 +0800", "verb" => "POST", "request" => "/weixin/services/SysServiceLog?wsdl", "httpversion" => "1.0", "response" => "200", "bytes" => "594", "referrer" => "\"-\"", "agent" => "\"Axis/1.4\"" }
Press Ctrl+C to stop, then run it in the background:
# nohup logstash -f /opt/logstash-2.3.1/config/log_indexer_nginx_access_136.conf >nohup & //run it in the background
# vim /etc/rc.local //set it to start at boot
su -l -c "nohup /opt/logstash/bin/logstash -f /opt/logstash/config/log_indexer_nginx_access_136.conf >/dev/null 2>&1 &"
4.2 Monitor both the access and error logs: read them from the files and ship them to Redis on the server
Client
# vim /opt/logstash-2.3.1/conf/log_agent_nginx_all.conf //define an instance config that reads the logs from access.log and error*.log and stores them in redis
input {
file {
path => "/opt/nginx/logs/access.log"
type => "nginx_access"
}
file {
path => "/opt/nginx/logs/erro*.log"
type => "nginx_error"
}
}
output {
redis {
host => "10.66.3.155"
port => "6379"
data_type => "list"
key => "nginx_all_136:redis"
}
}
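By default the file input only picks up lines appended after Logstash starts. If the existing contents of the logs should be shipped too, a hedged tweak is to add the file input's start_position option to each file block (it only takes effect for files that have no sincedb entry yet), for example:
file {
path => "/opt/nginx/logs/access.log"
type => "nginx_access"
start_position => "beginning"
}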
# /opt/logstash-2.3.1/bin/logstash -f /opt/logstash-2.3.1/conf/log_agent_nginx_all.conf //start the instance as a test; if everything is fine you will see
Settings: Default pipeline workers: 8 Pipeline main started
Press Ctrl+C to exit, then run it in the background:
# nohup /opt/logstash-2.3.1/bin/logstash -f /opt/logstash-2.3.1/conf/log_agent_nginx_all.conf >nohup & //run it in the background
# vim /etc/rc.local //set it to start at boot
su -l -c "nohup /opt/logstash/bin/logstash -f /opt/logstash/conf/log_agent_nginx_all.conf >/dev/null 2>&1 &"
Server
# redis-cli //log in to redis and verify that the key exists (add -a <password> if requirepass is enabled)
127.0.0.1:6379> exists nginx_all_136:redis
(integer) 1
//an existing key returns (integer) 1
# vim /opt/logstash-2.3.1/config/log_indexer_nginx_all_136.conf //read the logs stored under the key `nginx_all_136:redis` from redis, filter them, and forward them to elasticsearch
input {
redis {
host => "10.66.3.155"
port => "6379"
data_type => "list"
key => "nginx_all_136:redis"
type => "redis-input"
}
}
filter {
if [type] =~ "access" {
mutate {
replace => { type => "apache_access" }
}
grok {
match => { "message" => "%{COMBINEDAPACHELOG}" }
}
date {
match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
}
}
else if [type] =~ "error" {
mutate { replace => { type => "apache_error" } }
}
else {
mutate { replace => { type => "random_logs" } }
}
}
output {
elasticsearch {
hosts => "10.66.3.155:9200"
}
stdout { codec => rubydebug }
}
# logstash -f /opt/logstash-2.3.1/config/log_indexer_nginx_all_136.conf //start an instance; if everything is fine you will see output like the following
{ "message" => "10.66.0.1 - - [15/Apr/2016:19:15:05 +0800] \"POST /weixin/services/SysServiceLog?wsdl HTTP/1.0\" 200 435 \"-\" \"Axis/1.4\"", "@version" => "1", "@timestamp" => "2016-04-15T11:15:05.000Z", "path" => "/opt/nginx/logs/access.log", "host" => "LO-T-DEMO-AP", "type" => "apache_access", "clientip" => "10.66.0.1", "ident" => "-", "auth" => "-", "timestamp" => "15/Apr/2016:19:15:05 +0800", "verb" => "POST", "request" => "/weixin/services/SysServiceLog?wsdl", "httpversion" => "1.0", "response" => "200", "bytes" => "435", "referrer" => "\"-\"", "agent" => "\"Axis/1.4\"" }
Press Ctrl+C to stop, then run it in the background:
# nohup logstash -f /opt/logstash-2.3.1/config/log_indexer_nginx_all_136.conf >nohup & //run it in the background
# vim /etc/rc.local //set it to start at boot
su -l -c "nohup /opt/logstash/bin/logstash -f /opt/logstash/config/log_indexer_nginx_all_136.conf >/dev/null 2>&1 &"
五、Monitor System Log
5.1 Client rsyslog Install (if not already installed)
Client
# yum install rsyslog
5.2 The configuration of /etc/rsyslog.conf
Client
# vim /etc/rsyslog.conf //append the following as the last line; port 5000 is the custom port defined on the server side, so use whatever port the server-side instance listens on
*.* @10.66.3.155:5000
# vim /etc/bashrc //also record everyday shell commands in syslog; append the following as the last line
export PROMPT_COMMAND='{ msg=$(history 1 | { read x y; echo $y; });logger "[euid=$(whoami)]":$(who am i):[`pwd`]"$msg"; }'
# service rsyslog restart //restart the rsyslog service
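To verify the forwarding end to end, write a test message into syslog with logger; once the server-side instance in 5.3 is listening on port 5000, the message should show up under the syslog_136:redis key:
# logger -t elk-test "rsyslog forwarding test from node1" //the tag elk-test is arbitrary and only used to spot the message easily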
5.3 Start the instance on the server
Server
# vim /opt/logstash-2.3.1/config/log_agent_syslog_136.conf //define a logstash instance that listens on port 5000 and receives the logs sent from 10.66.3.136
input {
tcp {
port => 5000
type => syslog
}
udp {
port => 5000
type => syslog
}
}
output {
redis {
host => "10.66.3.155"
port => "6379"
data_type => "list"
key => "syslog_136:redis"
}
}
# logstash -f /opt/logstash-2.3.1/config/log_agent_syslog_136.conf //start the instance; if everything is fine you will see
Settings: Default pipeline workers: 6 Pipeline main started
Press Ctrl+C to exit, then run it in the background:
# nohup logstash -f /opt/logstash-2.3.1/config/log_agent_syslog_136.conf >nohup & //run it in the background
# vim /etc/rc.local //set it to start at boot
su -l -c "nohup /opt/logstash/bin/logstash -f /opt/logstash/config/log_agent_syslog_136.conf >/dev/null 2>&1 &"
# redis-cli //log in to redis and check that data is arriving (add -a <password> if requirepass is enabled)
127.0.0.1:6379> exists syslog_136:redis
(integer) 1
//an existing key returns (integer) 1
# vim /opt/logstash-2.3.1/config/log_indexer_syslog_136.conf //define an instance that pulls the data stored under the key `syslog_136:redis` from redis and forwards it to elasticsearch
input {
redis {
host => "10.66.3.155"
port => "6379"
data_type => "list"
key => "syslog_136:redis"
type => "redis-input"
}
}
filter {
if [type] == "syslog" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{@timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
syslog_pri { }
date {
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
}
}
output {
elasticsearch {
hosts => "10.66.3.155:9200"
}
stdout {
codec => rubydebug
}
}
# logstash -f /opt/logstash-2.3.1/config/log_indexer_syslog_136.conf //start the instance; if everything is fine you will see output like the following
`{
"message" => "<13>Apr 15 19:46:52 LO-T-DEMO-AP root: [euid=root]:root pts/0 2016-04-15 18:51 (10.66.13.36):[/opt/logstash-2.3.1]/opt/logstash-2.3.1/bin/logstash -f /opt/logstash-2.3.1/conf/log_nginx_access.log",
"@version" => "1",
"@timestamp" => "2016-04-15T11:46:52.000Z",
"type" => "syslog",
"host" => "10.66.3.136",
"syslog_timestamp" => "Apr 15 19:46:52",
"syslog_hostname" => "LO-T-DEMO-AP",
"syslog_program" => "root",
"syslog_message" => "[euid=root]:root pts/0 2016-04-15 18:51 (10.66.13.36):[/opt/logstash-2.3.1]/opt/logstash-2.3.1/bin/logstash -f /opt/logstash-2.3.1/conf/log_nginx_access.log",
"received_at" => "2016-04-15T11:46:51.736Z",
"received_from" => "10.66.3.136",
"syslog_severity_code" => 5,
"syslog_facility_code" => 1,
"syslog_facility" => "user-level",
"syslog_severity" => "notice"
}`
Press Ctrl+C to stop, then run it in the background:
# nohup logstash -f /opt/logstash-2.3.1/config/log_indexer_syslog_136.conf >nohup & //run it in the background
# vim /etc/rc.local //set it to start at boot
su -l -c "nohup /opt/logstash/bin/logstash -f /opt/logstash/config/log_indexer_syslog_136.conf >/dev/null 2>&1 &"
六、Monitor Tomcat-catalina Log
6.1 Read the Log from File
Client
# vim /opt/logstash-2.3.1/conf/log_agent_tomcat_catalina_local_250.conf //define an instance config that reads the logs from catalina.out, filters them directly on the client (replace in the filter assigns custom log types), and stores them in redis under the key `tomcat_catalina_local_250:redis`
input {
file {
path => "/opt/apache-tomcat-7.0.53/logs/catalina.out"
type => "tomcat_catalina"
codec => multiline {
pattern => "(^.+[^\[INFO\]]Exception:.+)|(^.+\[ERROR\].+)|(^[a-zA-Z])|(^\s+at .+)|(^\s+... \d+ more)|(^\s*Caused by:.+)"
#pattern => "(^[a-zA-Z].+)|(^\s+at .+)|(^\s+... \d+ more)|(^\s*Caused by:.+)"
#matches logs like the two samples below: some lines start with '[' and some start directly with a digit
#[04-21 15:42:00,123][DefaultQuartzScheduler_Worker-6][INFO] carsing.crm.customer.service.impl.ServiceNoteWsServiceImpl.queryPeriodFromContract(line:742) CRM<<<<<<<<<Contract:<resultset></resultset>
#2016-04-21 15:42:15,022 [com.trade.info.impl.InfoPlatformDispatcherImpl:41]-[INFO] ---------线程开始提交--------
#pattern => "(^\s+)|(^=)|(^\d+=\d+)|(^\()|(^[a-zA-Z].+)|(^\s+at .+)|(^\s+... \d+ more)|(^\s*Caused by:.+)"
#pattern => "(^[^\[])|(^\s+at .+)|(^\s*Caused by:.+)"
#matches logs that start with '['; any line that does not start with '[' is merged into the previous event
#[04-21 16:07:55,150][http-bio-8080-exec-837][INFO] carsing.crm.log.InfoInteractionWS.sysCarInfoToWechatService(line:304) >>>>>>>>>sysCarInfoToWechatService:
#pattern => "(^.+Exception:.+)|(^[a-zA-Z])|(^\s+at .+)|(^\s*Caused by:.+)"
what => "previous" #where a continuation line goes when it is not its own event; "previous" appends it to the preceding line
}
}
}
filter {
if "ERROR" in [message] { #如果消息里有ERROR字符则将type改为自定义的标记
mutate { replace => { type => "tomcat_catalina_error" } }
}
else if "WARN" in [message] {
mutate { replace => { type => "tomcat_catalina_warn" } }
}
else {
mutate { replace => { type => "tomcat_catalina_info" } }
}
grok {
#match => { "message" => "%{COMBINEDAPACHELOG}" }
#match => [ "message", "%{TOMCATLOG}", "message", "%{CATALINALOG}" ]
match => [ "message", "\[%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:%{MINUTE}:(%{SECOND})\]\[(?<thread_name>.+?)\]\[(?<log_level>\w+)\]\s*(?<content>.*)", "message", "%{TIMESTAMP_ISO8601:date} \[(?<thread_name>.+?)\]-\[(?<log_level>\w+)\]\s*(?<content>.*)" ]
#multiple formats are matched, for example:
#[04-21 15:42:00,123][DefaultQuartzScheduler_Worker-6][INFO] carsing.crm.customer.service.impl.ServiceNoteWsServiceImpl.queryPeriodFromContract(line:742) CRM<<<<<<<<<Contract:<resultset></resultset>
#2016-04-21 15:42:15,022 [com.trade.info.impl.InfoPlatformDispatcherImpl:41]-[INFO] ---------线程开始提交--------
remove_field => ["message"] #whether to drop the original message after a successful match; delete it if you want to save space, otherwise keep it
}
}
output {
redis {
host => "10.66.3.155"
port => "6379"
data_type => "list"
key => "tomcat_catalina_local_250:redis"
}
}
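Because this filter is fairly involved, it may be worth letting Logstash check the syntax before starting the agent; --configtest (or -t) is available in Logstash 2.x and only validates the configuration without running the pipeline:
# /opt/logstash-2.3.1/bin/logstash -f /opt/logstash-2.3.1/conf/log_agent_tomcat_catalina_local_250.conf --configtest //should report that the configuration is OK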
# /opt/logstash-2.3.1/bin/logstash -f /opt/logstash-2.3.1/conf/log_agent_tomcat_catalina_local_250.conf //start the instance as a test; if everything is fine you will see
Settings: Default pipeline workers: 8 Pipeline main started
Press Ctrl+C to exit, then run it in the background:
# nohup /opt/logstash-2.3.1/bin/logstash -f /opt/logstash-2.3.1/conf/log_agent_tomcat_catalina_local_250.conf >nohup & //run it in the background
# vim /etc/rc.local //set it to start at boot
su -l -c "nohup /opt/logstash/bin/logstash -f /opt/logstash/conf/log_agent_tomcat_catalina_local_250.conf >/dev/null 2>&1 &"
6.2 The server pulls the data for the corresponding key from redis and sends error logs by e-mail (configure msmtp + mutt separately)
Server
# vim /opt/logstash-2.3.1/config/log_indexer_tomcat_catalina_local_250.conf //define an instance that pulls the data stored under the key tomcat_catalina_local_250:redis from redis (once read, the data is removed from redis); if an error log entry appears, the mutt command is executed to send a mail notification
input {
redis {
host => "10.66.3.155"
port => "6379"
data_type => "list"
key => "tomcat_catalina_local_250:redis"
type => "redis-input"
}
}
output {
elasticsearch {
hosts => "10.66.3.155:9200"
#user => "root" #如果安装了shield并配置了用户,则加上用户名及密码
#password => "admin1"
#ssl => true #如果安装了shield并在elasticsearch启用了https,则在这里启用ssl,并在下行指定证书
#cacert => "/etc/logstash/ssl/node01.crt" #指定证书
index => "tomcat-catalina-local_250_%{+YYYY.MM.dd}" #索引名称
}
if "ERROR" in [message] {
exec {
command => "echo '%{message}' | mutt -s '服务器%{host} : %{type}日志发现异常!!!' wangjinhou@carsing.com.cn -c jhw11211@163.com"
}
}
stdout {
codec => rubydebug
}
}
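Mail delivery depends on the separately configured msmtp + mutt, so it may be worth testing mutt by hand before relying on the exec output above (recipient address as used in the config):
# echo "ELK mail test" | mutt -s "ELK mail test" wangjinhou@carsing.com.cn //a test message should arrive if msmtp + mutt are set up correctly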
# /opt/logstash-2.3.1/bin/logstash -f /opt/logstash-2.3.1/config/log_indexer_tomcat_catalina_local_250.conf //start the instance as a test; normal output looks like the following, and a mail is also sent whenever an error shows up
Settings: Default pipeline workers: 8 Pipeline main started { "@timestamp" => "2016-04-17T01:15:00.164Z", "message" => "[04-17 09:15:00,019][DefaultQuartzScheduler_Worker-1][INFO] carsing.crm.customer.service.impl.FollowAssignServiceImpl.automaticAllocation(line:140) automaticAllocation start.....", "@version" => "1", "path" => "/opt/apache-tomcat-7.0.53/logs/catalina.out", "host" => "LO-T-DEMO-AP", "type" => "tomcat_catalina", "tags" => [ [0] "_grokparsefailure" ] }
Press Ctrl+C to stop, then run it in the background:
# nohup /opt/logstash-2.3.1/bin/logstash -f /opt/logstash-2.3.1/config/log_indexer_tomcat_catalina_local_250.conf >nohup & //run it in the background
# vim /etc/rc.local //set it to start at boot
su -l -c "nohup /opt/logstash/bin/logstash -f /opt/logstash/config/log_indexer_tomcat_catalina_local_250.conf >/dev/null 2>&1 &"
七、Kibana Show (optional; install according to your needs; the installation method has since changed, so refer to the official documentation)
7.1 Install Plugins
Server
head plugin: lets you view almost all cluster information, run simple search queries, watch automatic recovery, and so on.
# /opt/elasticsearch-2.3.1/bin/plugin install mobz/elasticsearch-head
kopf plugin: provides a simple way to perform common tasks on an elasticsearch cluster.
# /opt/elasticsearch-2.3.1/bin/plugin install lmenezes/elasticsearch-kopf/1.6
bigdesk plugin: a cluster monitoring plugin that shows the resource consumption of the whole cluster: CPU, memory, HTTP connections, and so on. The code has not been updated for a long time, so this plugin may no longer be supported.
# /opt/elasticsearch-2.3.1/bin/plugin install lukas-vlcek/bigdesk
7.2 Start Elasticsearch
Server
Elasticsearch was already started above; kill its pid and start it again so the plugins are loaded.
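After the restart, a quick way to see which plugins Elasticsearch knows about:
# curl http://10.66.3.155:9200/_cat/plugins?v //lists the installed plugins per node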
7.3 Kibana Usage
View the cluster status through the plugins:
http://10.66.3.155:9200/_plugin/head/
http://10.66.3.155:9200/_plugin/bigdesk/
http://10.66.3.155:9200/_plugin/kopf/
八、Redis-browser
This tool provides a web page for browsing the key-value pairs stored in redis.
8.1 Install Ruby
Server
# yum install openssl* openssl-devel zlib-devel gcc gcc-c++ make autoconf readline-devel curl-devel expat-devel gettext-devel
Taobao mirror for Ruby packages: https://ruby.taobao.org/
# wget https://ruby.taobao.org/mirrors/ruby/ruby-2.3.0.tar.gz
# tar xf ruby-2.3.0.tar.gz
# cd ruby-2.3.0 //switch into the extracted source directory
# ./configure --prefix=/opt/ruby
# make
# make install
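Because Ruby was installed with --prefix=/opt/ruby, its bin directory is not on the PATH; a sketch following the same /etc/profile.d convention as java.sh above (the file name ruby.sh is just a suggestion):
# vim /etc/profile.d/ruby.sh //make ruby, gem and later redis-browser resolvable from any directory
export PATH=$PATH:/opt/ruby/bin
# . /etc/profile.d/ruby.sh
# ruby -v //should print ruby 2.3.0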
Configure the gem mirror
# gem sources --remove https://rubygems.org/
# gem sources -a https://ruby.taobao.org/
# gem sources -l
8.2 Install redis-browser
Server
# gem install redis-browser //if errors occur, search online for a fix
8.3 Start redis-browser
Server
# vim /opt/ruby/lib/ruby/gems/2.3.0/gems/redis-browser-0.3.3/config.yml
connections:
  default:
    url: redis://127.0.0.1:6379/0
    auth: password   # fill in the password if one is set
  production:
    host: mydomain.com
    port: 6666
    db: 1
    auth: password
# redis-browser --config /opt/ruby/lib/ruby/gems/2.3.0/gems/redis-browser-0.3.3/config.yml -B 10.66.3.155 //test run
http://10.66.3.155:4567
# vim /etc/rc.local //add it to rc.local so it starts automatically at boot
su -l -c "nohup redis-browser --config /opt/ruby/lib/ruby/gems/2.3.0/gems/redis-browser-0.3.3/config.yml -B 10.66.3.155 >/dev/null 2>&1 &"