Setting up ELK
ELK is a combination of three open-source components: Elasticsearch, Logstash, and Kibana. It is a complete enterprise-grade solution for log collection, analysis, and presentation developed by Elastic (official site: elastic.co), and each of the three components handles a different part of the pipeline. The main advantages of the ELK stack:

- Flexible processing: Elasticsearch is a real-time full-text index with powerful search capabilities
- Simple configuration: the Elasticsearch API is entirely JSON, Logstash uses modular configuration, and the Kibana configuration file is simpler still
- Efficient retrieval: thanks to its solid design, queries run in real time yet can return within seconds even over tens of billions of records
- Linear cluster scaling: both Elasticsearch and Logstash scale out linearly
- Polished front end: Kibana's UI is attractive and simple to operate
Elasticsearch
Elasticsearch is a highly scalable open-source full-text search and analytics engine. It provides real-time full-text search over data, supports distributed deployment for high availability, exposes an API, and can process large volumes of log data, e.g. nginx, tomcat, and system logs.
" [! d: t' w* [) l 8 ^% \; J7 ~0 K; D# ^; |
Key characteristics of Elasticsearch:

- Real-time search and real-time analytics
- Distributed architecture with real-time file storage
- Document-oriented: every object is a document
- Highly available and easily scalable, with clustering, sharding, and replication
- Friendly interface with JSON support
Deploying Elasticsearch
GitHub - elastic/elasticsearch: Free and Open, Distributed, RESTful Search Engine (developed in Java)
& w3 a" j% C7 _( D0 @ 7 U* _# ^* G7 P: n, j7 ^' l
On CentOS, disable the firewall and SELinux; on Ubuntu, disable the firewall. Keep the time on all servers synchronized.

Server 1: 172.20.22.24
Server 2: 172.20.22.27
Server 3: 172.20.22.28
###ubuntu
# apt install -y ntpdate
# rm -f /etc/localtime
# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# hwclock --systohc
# ntpdate -u ntp1.aliyun.com
###Set kernel parameters
# vim /etc/security/limits.conf
* soft nofile 500000
* hard nofile 500000
# vim /etc/security/limits.d/20-nproc.conf
* soft nproc 4096
elasticsearch soft nproc unlimited
root soft nproc unlimited
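Beyond the limits above, Elasticsearch's production bootstrap checks also require a sufficiently large mmap count; a minimal sketch (262144 is the minimum the bootstrap check enforces):
# sysctl -w vm.max_map_count=262144
# echo "vm.max_map_count=262144" >> /etc/sysctl.conf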
###Install the JDK
# apt install -y openjdk-8-jdk
###Install on every node
# ls -lrt elasticsearch-7.12.1-amd64.deb
# dpkg -i elasticsearch-7.12.1-amd64.deb
###Node 1 configuration file
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic          #cluster name
node.name: node1                   #name of this node within the cluster
path.data: /data/elasticsearch     #data directory
path.logs: /data/elasticsearch     #log directory
bootstrap.memory_lock: true        #lock enough memory at startup to keep data out of swap
network.host: 172.20.22.24         #listen IP
http.port: 9200                    #listen port
###discovery list of cluster nodes
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27", "172.20.22.28"]
###nodes eligible to be elected master when the cluster bootstraps
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27", "172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
###Node 2
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node2
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.27
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27", "172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27", "172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
###Node 3
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node3
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.28
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27", "172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27", "172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
Verify in a browser:
http://$IP:9200
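The same check can be scripted: querying the cluster health endpoint on any node should report "number_of_nodes" : 3 and a green status once all three nodes have joined:
# curl http://172.20.22.24:9200/_cluster/health?pretty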
Logstash
Logstash is a data-collection engine with real-time transport capability. Through plugins it collects and forwards logs, supports filtering, and can parse both plain logs and custom JSON-formatted logs, finally shipping the processed events to Elasticsearch.
Deploying Logstash
Logstash is an open-source data-collection engine that can scale horizontally, and it is the component of the ELK stack with the most plugins. It accepts data from many different sources and outputs it, uniformly, to one or more destinations of different types.

https://github.com/elastic/logstash #GitHub
Elastic Stack and Product Documentation | Elastic
Environment preparation: disable the firewall and SELinux, and install a Java environment.

# apt install -y openjdk-8-jdk
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
###Start-up test
# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout {}}' ##standard input and standard output
hello world!~
{
      "@version" => "1",
    "@timestamp" => 2022-04-13T06:16:32.212Z,
          "host" => "jenkins-slave",
       "message" => "hello world!~"
}
###Start from a configuration file
# cd /etc/logstash/conf.d
# cat test.conf
input {
  stdin {}
}
output {
  stdout {}
}
###Start with the specified configuration file
# /usr/share/logstash/bin/logstash -f test.conf -t ##check the configuration file syntax
# /usr/share/logstash/bin/logstash -f test.conf
####Output to Elasticsearch
# cat test.conf
input {
  stdin {}
}
output {
  #stdout {}
  elasticsearch {
    hosts => ["172.20.22.24:9200"]
    index => "magedu-m63-test-%{+YYYY.MM.dd}"
  }
}
# /usr/share/logstash/bin/logstash -f test.conf
version1
version2
version3
test1
test2
test3
####Check the collected data on the Elasticsearch server
# ls -lrt /data/elasticsearch/nodes/0/indices/
total 4
drwxr-xr-x 4 elasticsearch elasticsearch 4096 Apr 13 14:36 DyCv8w7mTleuAvlItAJlWA
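The same information is available through the REST API, which is more readable than the on-disk directory UUIDs; the index created above should appear as magedu-m63-test-&lt;date&gt;:
# curl http://172.20.22.24:9200/_cat/indices?v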
Kibana
Kibana provides a web UI for viewing the data in Elasticsearch. It queries data through the Elasticsearch API and renders it as front-end visualizations; for suitably formatted data it can also generate tables, bar charts, pie charts, and so on.
Deploying Kibana
# ls -lrt kibana-7.12.1-amd64.deb
# dpkg -i kibana-7.12.1-amd64.deb
# grep "^[^$|#]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "172.20.22.24"
elasticsearch.hosts: ["http://172.20.22.27:9200"]
i18n.locale: "zh-CN"
# systemctl restart kibana
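Before switching to the browser, a quick probe confirms Kibana is listening (it can take a minute to come up after a restart; any HTTP response means the service is up):
# curl -I http://172.20.22.24:5601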
Open http://172.20.22.24:5601 in a browser

Stack Management --> Index Patterns --> Create index pattern

Select the time field

View the log entries for the index you created
Collecting Tomcat logs
Collect the access logs and error logs of the Tomcat servers for real-time statistics, searchable and displayed in Kibana. Each Tomcat server runs Logstash to collect logs and forward them to Elasticsearch for analysis, which Kibana then presents in the front end.
Deploying Tomcat
####tomcat1, 172.20.22.30
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###Change the Tomcat access-log format to JSON
# vim conf/server.xml
....
....
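The server.xml change elided above is an AccessLogValve inside the &lt;Host&gt; element whose pattern emits JSON. A sketch, assuming the prefix/suffix implied by the log file name used below (the field list in pattern is illustrative, not the original author's exact one):
<!-- illustrative valve; the field names in pattern are assumptions -->
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="tomcat_access_log" suffix=".log"
       pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;Timestamp&quot;:&quot;%t&quot;,&quot;Method&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;}" />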
u4 o" p) k( {5 R# mkdir /usr/local/tomcat/webapps/myapp. E( N1 f o; n: N' ?
# echo "web1 172.20.22.30" > /usr/local/tomcat/webapps/myapp/index.html) Z. E2 U3 W5 n
# ./bin/catalina.sh start* G1 ~9 v) x2 [2 d
. P- p d' J& F f; a0 z) X###访问测试2 q, J! v0 W% P" c; I* N
# curl http://172.20.22.30:8080/myapp/
' h% }+ Z: I( T5 U+ b/ j###查看访问日志" `8 R3 F" m) H, P$ C
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-13.log
####tomcat2, 172.20.22.26
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###Change the Tomcat access-log format to JSON (same Valve as on tomcat1)
# vim conf/server.xml
....
....
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web2 172.20.22.26" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###Access test
# curl http://172.20.22.26:8080/myapp/
###Check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-14.log
Deploying Logstash
Install Logstash on the Tomcat servers to collect the Tomcat and system logs.

####tomcat1, 172.20.22.30
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# cd /etc/logstash/conf.d
# cat tomcat.conf
input {
  file {
    path => "/usr/local/tomcat/logs/tomcat_access_log*.log"
    type => "tomcat-log"
    start_position => "beginning"
    stat_interval => "3"
  }
  file {
    path => "/var/log/syslog"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "3"
  }
}
output {
  if [type] == "tomcat-log" {
    elasticsearch {
      hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
      index => "elk-tomcat-%{+YYYY.MM.dd}"
  }}
  if [type] == "systemlog" {
    elasticsearch {
      hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
      index => "elk-syslog-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f tomcat.conf -t
# systemctl daemon-reload
# systemctl start logstash.service
# scp tomcat.conf root@172.20.22.26:/etc/logstash/conf.d/
####tomcat2, 172.20.22.26
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# systemctl daemon-reload
# systemctl start logstash.service
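With Logstash running on both Tomcat hosts, generating a little traffic and then listing the indices on any Elasticsearch node should show elk-tomcat-&lt;date&gt; and elk-syslog-&lt;date&gt; appear; a quick sketch:
# for i in $(seq 1 10); do curl -s http://172.20.22.30:8080/myapp/ >/dev/null; done
# curl -s http://172.20.22.24:9200/_cat/indices?v | grep elk-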
Display in Kibana
Collecting Java logs
Use the multiline plugin of codec for multi-line matching. It merges multiple lines into a single event, and its what option specifies whether a matching line is merged with the lines before it or the lines after it.

Multiline codec plugin | Logstash Reference [8.1] | Elastic
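Read together, the three options of the stanza used in java.conf below behave as follows (the comments are explanatory only):
codec => multiline {
  pattern => "^\["    # lines beginning with "[" start a new event
  negate => true      # act on lines that do NOT match the pattern...
  what => "previous"  # ...and fold them into the preceding event
}
Since every normal line in logstash-plain.log starts with a bracketed timestamp, continuation lines such as Java stack traces are merged into the event they belong to.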
Adding the Logstash configuration file
###Collect Logstash's own log, 172.20.22.26
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
  }}
}
output {
  if [type] == "logstash-log" {
    elasticsearch {
      hosts => ["172.20.22.24"]
      index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
/ I o" \& I/ w. H. D3 M& ]" b, K
* \: S: F. n9 g+ Y###收集logstash自身的日志,172.20.22.30% j- ~, q: ^% x9 {) Y; d& ^
# cd /etc/logstash/conf.d
& T7 k/ L* [ T7 B7 a9 }# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
  }}
}
output {
  if [type] == "logstash-log" {
    elasticsearch {
      hosts => ["172.20.22.24"]
      index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
Check the logs collected in Kibana
Collecting nginx logs with filebeat, redis, and logstash
Filebeat collects the logs and sends them to logstash1; logstash1 forwards them to redis; finally, logstash2 reads from redis and sends the logs to Elasticsearch.

web1: 172.20.22.30, with nginx, filebeat, and logstash deployed
web2: 172.20.22.26, with nginx, filebeat, and logstash deployed
logstash server 2: 172.20.22.23; redis server: 172.20.23.157
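The resulting data path, end to end:

filebeat (web1/web2) --> logstash on the web hosts (beats, ports 5044/5045) --> redis (172.20.23.157) --> logstash2 (172.20.22.23) --> elasticsearch --> kibana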
nginx server configuration
Deploying nginx
# wget http://nginx.org/download/nginx-1.18.0.tar.gz
# tar xf nginx-1.18.0.tar.gz
# cd nginx-1.18.0
# ./configure --prefix=/usr/local/nginx --with-http_ssl_module
# make -j4 && make install
# /usr/local/nginx/sbin/nginx
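A request against the new server both confirms nginx works and writes a first line into the access log that filebeat will later pick up:
# curl -s http://172.20.22.30/ >/dev/null
# tail -1 /usr/local/nginx/logs/access.log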
Deploying and configuring Logstash
Send the logs collected by filebeat on to redis.

# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/beats-to-redis.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
  beats {
    port => 5045
    codec => "json"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-systemlog"
      host => "172.20.23.157"
      port => "6379"
      db => "0"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-accesslog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-errorlog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
}
# systemctl start logstash
# scp /etc/logstash/conf.d/beats-to-redis.conf root@172.20.22.26:/etc/logstash/conf.d/
Deploying and configuring filebeat
Collect the logs with filebeat and send them to logstash.

# dpkg -i filebeat-7.12.1-amd64.deb
# grep -v "#" /etc/filebeat/filebeat.yml | grep "^[^$]"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  fields:
    project: filebeat-systemlog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  fields:
    project: filebeat-nginx-accesslog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/error.log
  fields:
    project: filebeat-nginx-errorlog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
output.logstash:
  hosts: ["172.20.22.30:5044","172.20.22.30:5045"]
  enabled: true
  worker: 2
  compression_level: 3
  loadbalance: true

# systemctl start filebeat
# scp /etc/filebeat/filebeat.yml root@172.20.22.26:/etc/filebeat/
logstash server configuration
logstash server 2: 172.20.22.23, which sends the logs buffered in redis on to Elasticsearch.

# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/redis-to-es.conf
input {
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-accesslog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-errorlog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-systemlog"
    host => "172.20.23.157"
    port => "6379"
    db => "0"
    password => "12345678"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-systemlog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-accesslog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-errorlog-%{+YYYY.MM.dd}"
  }}
}
# systemctl restart logstash.service
Installing and configuring redis
redis server: 172.20.23.157

# yum install -y redis
# vim /etc/redis.conf
####Change the following settings
bind 0.0.0.0
....
save ""
....
requirepass 12345678
....
# systemctl start redis
###Test the redis connection
# redis-cli
127.0.0.1:6379> auth 12345678
OK
127.0.0.1:6379> ping
PONG
###Verify the collected log entries
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> keys *
1) "filebeat-redis-nginx-accesslog"
2) "filebeat-redis-nginx-errorlog"
127.0.0.1:6379[1]> select 0
OK
127.0.0.1:6379> keys *
1) "filebeat-redis-systemlog"
( F( y" U8 _/ [通过head插件验证生成的索引) |+ E0 v& W& \. T. Z! u; L
2 ]. T6 O% r4 C" v/ W. b
" K! U3 b9 z$ l3 \ H- F9 Lkibana验证收集到的日志信息 $ b. i6 Q* N2 q2 m8 y: o" ~