Setting up ELK
ELK is a suite of three open-source tools: Elasticsearch, Logstash, and Kibana. Developed by Elastic, it is a complete enterprise-grade solution for collecting, analyzing, and visualizing logs, with each component handling a different part of the job (official site: elastic.co). The main strengths of the ELK stack:

Flexible processing: Elasticsearch is a real-time full-text index with powerful search capabilities
Simple configuration: the Elasticsearch API is entirely JSON, Logstash uses modular configuration, and Kibana's configuration is simpler still
Efficient retrieval: thanks to a solid design, queries run in real time yet can answer over tens of billions of records within seconds
Linear cluster scaling: both Elasticsearch and Logstash scale out linearly
Polished front end: Kibana's UI is attractive and easy to operate
Elasticsearch
" r' ~$ @1 T8 h4 [elasticsearch是一个高度可扩展的开源全文搜索和分析引擎,它可实现数据的实时全文搜索、支持分布式可实现高可用、提供API接口,可以处理大规模日志数据,比如nginx、tomcat、系统日志等功能。
5 K, y4 m* Z; I' U
& i( e9 C. v: u+ C4 x9 Selasticsearch的特点:
6 y: d6 P# W) M. L 4 o- C+ }5 [, i; ~( x m1 _
实时收索、实时分析分布式架构、实时文件存储文档导向,所有对象都是文档高可用,易扩展,支持集群,分片与复制接口友好,支持json' J/ f$ L" I$ \& f- \* U" S6 |; ]: e9 u
Deploying Elasticsearch
GitHub - elastic/elasticsearch: Free and Open, Distributed, RESTful Search Engine (written in Java)

On CentOS, stop the server's firewall and disable SELinux; on Ubuntu, stop the firewall. Keep the time synchronized across all servers.
Server 1: 172.20.22.24
Server 2: 172.20.22.27
Server 3: 172.20.22.28
###ubuntu
# apt install -y ntpdate
# rm -f /etc/localtime
# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# hwclock --systohc
# ntpdate -u ntp1.aliyun.com
###set resource limits
# vim /etc/security/limits.conf
*             soft    nofile    500000
*             hard    nofile    500000
# vim /etc/security/limits.d/20-nproc.conf
*             soft    nproc     4096
elasticsearch soft    nproc     unlimited
root          soft    nproc     unlimited
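These limits only apply to sessions opened after the change; a quick check from a fresh login shell:
# ulimit -n     ##soft nofile limit, should report 500000
# ulimit -Hn    ##hard nofile limit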
###install jdk
# apt install -y openjdk-8-jdk

###install on every node
# ls -lrt elasticsearch-7.12.1-amd64.deb
# dpkg -i elasticsearch-7.12.1-amd64.deb
###node 1 configuration file
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic                  #cluster name
node.name: node1                           #this node's name within the cluster
path.data: /data/elasticsearch             #data directory
path.logs: /data/elasticsearch             #log directory
bootstrap.memory_lock: true                #lock enough memory at startup to keep data out of swap
network.host: 172.20.22.24                 #listen IP
http.port: 9200                            #listen port
###discovery list of cluster nodes
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
###nodes that may be elected master when the cluster first forms
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
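If the service fails to start with bootstrap.memory_lock enabled ("memory locking requested for elasticsearch process but memory is not locked" in the log), the systemd unit usually needs its MEMLOCK limit raised; a minimal override for the stock unit:
# mkdir -p /etc/systemd/system/elasticsearch.service.d
# cat /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitMEMLOCK=infinity
# systemctl daemon-reload && systemctl restart elasticsearch.service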
###node 2
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node2
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.27
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
###node 3
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node3
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.28
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
Verify in a browser
http://$IP:9200
Logstash
Logstash is a data-collection engine with real-time pipelining. Through plugins it can collect and forward logs, supports filtering, and parses both plain-text and custom JSON log formats, finally sending the processed logs to Elasticsearch.

Deploying Logstash
Logstash is an open-source data collection engine that can scale horizontally, and it is the component with the most plugins in the entire ELK stack. It can receive data from many different sources and output it, in a unified way, to one or more destinations.
https://github.com/elastic/logstash #GitHub

Elastic Stack and Product Documentation | Elastic

Environment preparation: stop the firewall, disable SELinux, and install a Java environment
# apt install -y openjdk-8-jdk
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
###startup test
# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout {}}'   ##standard input to standard output
hello world!~
{
      "@version" => "1",
    "@timestamp" => 2022-04-13T06:16:32.212Z,
          "host" => "jenkins-slave",
       "message" => "hello world!~"
}
###start from a configuration file
# cd /etc/logstash/conf.d/
# cat test.conf
input {
  stdin {}
}
output {
  stdout {}
}

###start with the given configuration file
# /usr/share/logstash/bin/logstash -f test.conf -t   ##check the configuration file syntax
# /usr/share/logstash/bin/logstash -f test.conf
####output to elasticsearch
# cat test.conf
input {
  stdin {}
}
output {
  #stdout {}
  elasticsearch {
    hosts => ["172.20.22.24:9200"]
    index => "magedu-m63-test-%{+YYYY.MM.dd}"
  }
}
# /usr/share/logstash/bin/logstash -f test.conf
version1
version2
version3
test1
test2
test3
####check the collected data on the elasticsearch server
# ls -lrt /data/elasticsearch/nodes/0/indices/
total 4
drwxr-xr-x 4 elasticsearch elasticsearch 4096 Apr 13 14:36 DyCv8w7mTleuAvlItAJlWA
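The new index can also be confirmed over the REST API with the standard _cat/indices endpoint (the name follows the pattern set in test.conf):
# curl 'http://172.20.22.24:9200/_cat/indices/magedu-m63-test-*?v'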

Kibana
Kibana provides a web interface for viewing data in Elasticsearch. It queries the data through the Elasticsearch API and renders it as front-end visualizations, and it can also turn data in suitable formats into tables, bar charts, pie charts, and so on.

Deploying Kibana
# ls -lrt kibana-7.12.1-amd64.deb
# dpkg -i kibana-7.12.1-amd64.deb
# grep "^[^$|#]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "172.20.22.24"
elasticsearch.hosts: ["http://172.20.22.27:9200"]
i18n.locale: "zh-CN"
# systemctl restart kibana
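A quick way to confirm Kibana is listening before opening the browser (ss ships with iproute2 on Ubuntu):
# ss -tnlp | grep 5601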
Browse to http://172.20.22.24:5601

Stack Management --> Index Patterns --> Create index pattern

Select the time field

View the log entries of the index that was just created
Collecting Tomcat logs
Collect the Tomcat servers' access logs and error logs for real-time statistics, searched and displayed in Kibana. Each Tomcat server runs Logstash to collect its logs and forward them to Elasticsearch for analysis, and Kibana then presents them in the front end.
7 P" Q: I* C4 H7 w9 _0 L, l
部署tomcat 6 a5 n3 z* M1 x6 X1 L
####tomcat1,172.20.22.30' s j# c) p. D6 e3 Q5 n/ o/ u
# apt install -y openjdk-8-jdk
: f- l( J/ b) g6 M7 c# ls -lrt apache-tomcat-8.5.77.tar.gz 6 F8 ]3 Y# A& ]* S* }$ T
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
' G" Y2 X6 J6 O, l5 l; a+ V. q- L# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/* c! V# j' E4 E: e
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat! b8 ~* r6 R) I7 v2 ~* c. |# Z
# cd /usr/local/tomcat/ I+ n; b8 ]' z: F( w3 i. ~, Q
###修改tomcat日志格式为json% m5 e8 W- s7 _* @+ p8 z, Y! _' h2 ~
# vim conf/server.xml
7 z# g, e( r+ F7 c7 w9 G....- Y% j% P9 R& J+ W1 |
4 a9 F( n u' x% x( q+ O....0 l' O! G$ y: I. v
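The Valve definition between the dots was stripped by the forum; for reference, a JSON-format AccessLogValve typically looks like the sketch below (the JSON field names are illustrative; prefix and suffix must match the tomcat_access_log*.log file tailed later):
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="tomcat_access_log" suffix=".log"
       pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;method&quot;:&quot;%m&quot;,&quot;uri&quot;:&quot;%U&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;bytes&quot;:&quot;%b&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}" />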
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web1 172.20.22.30" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###access test
# curl http://172.20.22.30:8080/myapp/
###check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-13.log

####tomcat2, 172.20.22.26
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###change the tomcat access-log format to json (same Valve change as on tomcat1)
# vim conf/server.xml
....
....
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web2 172.20.22.26" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###access test
# curl http://172.20.22.26:8080/myapp/
###check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-14.log

Deploying Logstash
Install Logstash on each Tomcat server to collect the Tomcat and system logs; the unit file is edited to run Logstash as root so it can read /var/log/syslog and the Tomcat logs.

####tomcat1, 172.20.22.30
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# cd /etc/logstash/conf.d
# cat tomcat.conf
input {
  file {
    path => "/usr/local/tomcat/logs/tomcat_access_log*.log"
    type => "tomcat-log"
    start_position => "beginning"
    stat_interval => "3"
  }
  file {
    path => "/var/log/syslog"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "3"
  }
}
output {
  if [type] == "tomcat-log" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-tomcat-%{+YYYY.MM.dd}"
  }}
  if [type] == "systemlog" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-syslog-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f tomcat.conf -t
# systemctl daemon-reload
# systemctl start logstash.service
# scp tomcat.conf root@172.20.22.26:/etc/logstash/conf.d/

####tomcat2, 172.20.22.26
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# systemctl daemon-reload
# systemctl start logstash.service
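With both Logstash instances running, the new indices can be checked on any Elasticsearch node via the standard _cat/indices endpoint (the names match the patterns in tomcat.conf):
# curl 'http://172.20.22.24:9200/_cat/indices/elk-*?v'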
Display in Kibana

Collecting Java logs
Use the multiline codec plugin to merge multiple lines into a single event; its what option controls whether a matched line is merged with the preceding or the following lines. Logstash's own log is a typical case: each record starts with a bracketed timestamp, so with pattern => "^\[" and negate => true, every line that does not start with "[" (for example, the body of a stack trace) is appended to the previous event.

Multiline codec plugin | Logstash Reference [8.1] | Elastic

Add a Logstash configuration file
###collect logstash's own log, 172.20.22.26
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
  }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service

###collect logstash's own log, 172.20.22.30
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
  }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
View the collected logs in Kibana

Collecting nginx logs with filebeat, redis, and logstash
Filebeat collects the logs and sends them to logstash1; logstash1 forwards them to redis; finally, logstash2 reads from redis and sends the logs to elasticsearch.

web1: 172.20.22.30, with nginx, filebeat, and logstash deployed
web2: 172.20.22.26, with nginx, filebeat, and logstash deployed
logstash server 2: 172.20.22.23; redis server: 172.20.23.157

nginx server configuration
Deploying nginx
# wget http://nginx.org/download/nginx-1.18.0.tar.gz
# tar xf nginx-1.18.0.tar.gz
# cd nginx-1.18.0
# ./configure --prefix=/usr/local/nginx --with-http_ssl_module
# make -j4 && make install
# /usr/local/nginx/sbin/nginx
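A quick sanity check that nginx is serving and writing the access log that filebeat will tail:
# curl -I http://172.20.22.30
# tail -n1 /usr/local/nginx/logs/access.log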
Deploying and configuring logstash
Send the log entries collected by filebeat on to redis. Two beats inputs listen on ports 5044 and 5045 so that filebeat can load-balance across them.

# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/beats-to-redis.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
  beats {
    port => 5045
    codec => "json"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-systemlog"
      host => "172.20.23.157"
      port => "6379"
      db => "0"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-accesslog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-errorlog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
}
# systemctl start logstash
# scp /etc/logstash/conf.d/beats-to-redis.conf root@172.20.22.26:/etc/logstash/conf.d/

Deploying and configuring filebeat
Collect the log entries with filebeat and send them to logstash

# dpkg -i filebeat-7.12.1-amd64.deb
# grep -v "#" /etc/filebeat/filebeat.yml | grep "^[^$]"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  fields:
    project: filebeat-systemlog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  fields:
    project: filebeat-nginx-accesslog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/error.log
  fields:
    project: filebeat-nginx-errorlog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
output.logstash:
  hosts: ["172.20.22.30:5044","172.20.22.30:5045"]
  enabled: true
  worker: 2
  compression_level: 3
  loadbalance: true

# systemctl start filebeat
# scp /etc/filebeat/filebeat.yml root@172.20.22.26:/etc/filebeat/
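Before wiring up the second logstash, confirm events are landing in redis (standard redis-cli flags: -h host, -a password, -n database):
# redis-cli -h 172.20.23.157 -a 12345678 -n 1 llen filebeat-redis-nginx-accesslog
# redis-cli -h 172.20.23.157 -a 12345678 -n 0 llen filebeat-redis-systemlog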

logstash server configuration
logstash server 2: 172.20.22.23, sends the logs buffered in redis to elasticsearch

# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/redis-to-es.conf
input {
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-accesslog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-errorlog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-systemlog"
    host => "172.20.23.157"
    port => "6379"
    db => "0"
    password => "12345678"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-systemlog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-accesslog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-errorlog-%{+YYYY.MM.dd}"
  }}
}
# systemctl restart logstash.service
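Once this pipeline has run for a moment, the final indices should show up in elasticsearch:
# curl 'http://172.20.22.28:9200/_cat/indices/filebeat-*?v'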

Installing and configuring redis
redis server: 172.20.23.157

# yum install -y redis
# vim /etc/redis.conf
####change the following settings
bind 0.0.0.0
....
save ""
....
requirepass 12345678
....
# systemctl start redis
###test the redis connection
# redis-cli
127.0.0.1:6379> auth 12345678
OK
127.0.0.1:6379> ping
PONG
###verify the collected log entries
127.0.0.1:6379[1]> keys *
1) "filebeat-redis-nginx-accesslog"
2) "filebeat-redis-nginx-errorlog"
127.0.0.1:6379[1]> select 0
OK
127.0.0.1:6379> keys *
1) "filebeat-redis-systemlog"

Verify the generated indices with the head plugin

Verify the collected log entries in Kibana