Setting Up ELK
ELK is a suite of three open-source components: Elasticsearch, Logstash, and Kibana. Developed by Elastic, it is a complete enterprise-grade solution for collecting, analyzing, and visualizing logs, with each of the three components covering a different part of the job. The official domain is elastic.co. The main advantages of the ELK stack:

- Flexible processing: Elasticsearch is a real-time full-text index with powerful search capabilities
- Simple configuration: the Elasticsearch API is entirely JSON over HTTP, Logstash uses modular configuration, and the Kibana configuration file is simpler still
- Efficient retrieval: thanks to its well-designed architecture, queries run in real time yet can answer searches over tens of billions of documents within seconds
- Linear cluster scaling: both Elasticsearch and Logstash scale out linearly
- Polished front end: Kibana's UI is visually appealing and easy to operate
" ]( p0 t3 v$ o; H0 IElasticsearch ) S3 S% l$ c3 O3 u' F
elasticsearch是一个高度可扩展的开源全文搜索和分析引擎,它可实现数据的实时全文搜索、支持分布式可实现高可用、提供API接口,可以处理大规模日志数据,比如nginx、tomcat、系统日志等功能。) ]% U( R( E3 f {( M/ T8 i
( C1 I5 M( ? W1 @$ M
elasticsearch的特点:5 K; N7 J+ b6 d6 ~# T0 f
3 h1 F5 \6 M2 k实时收索、实时分析分布式架构、实时文件存储文档导向,所有对象都是文档高可用,易扩展,支持集群,分片与复制接口友好,支持json! a ^3 i2 C0 b7 R4 y k# \& Y, d7 a
Deploying Elasticsearch
GitHub - elastic/elasticsearch: Free and Open, Distributed, RESTful Search Engine (written in Java)
On CentOS, disable the firewall and SELinux; on Ubuntu, disable the firewall. Keep the clocks of all servers synchronized.

Server 1: 172.20.22.24
Server 2: 172.20.22.27
Server 3: 172.20.22.28
###ubuntu
# apt install -y ntpdate
# rm -f /etc/localtime
# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# hwclock --systohc
# ntpdate -u ntp1.aliyun.com
###set kernel parameters
# vim /etc/security/limits.conf
* soft nofile 500000
* hard nofile 500000
# vim /etc/security/limits.d/20-nproc.conf
* soft nproc 4096
elasticsearch soft nproc unlimited
root soft nproc unlimited
###install the JDK
# apt install -y openjdk-8-jdk
###install on every node
# ls -lrt elasticsearch-7.12.1-amd64.deb
# dpkg -i elasticsearch-7.12.1-amd64.deb
###node 1 configuration file
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic          #cluster name
node.name: node1                   #this node's name within the cluster
path.data: /data/elasticsearch     #data directory
path.logs: /data/elasticsearch     #log directory
bootstrap.memory_lock: true        #lock enough memory at startup so data is never swapped out
network.host: 172.20.22.24         #listen IP
http.port: 9200                    #listen port
###discovery list of cluster nodes
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
###nodes eligible to be elected master when the cluster first forms
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
###node 2
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node2
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.27
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
###node 3
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node3
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.28
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
Verify in a browser:
http://$IP:9200
Logstash
Logstash is a data collection engine with real-time pipeline capabilities. Through its plugins it collects and forwards logs, supports filtering, and parses both plain-text and custom JSON-formatted logs, finally shipping the processed events to Elasticsearch.
Deploying Logstash
Logstash is an open-source data collection engine that scales horizontally. It is the component of the ELK stack with the richest plugin ecosystem; it can receive data from many different sources and output it to one or more destinations.
https://github.com/elastic/logstash #GitHub
Elastic Stack and Product Documentation | Elastic

Environment preparation: disable the firewall and SELinux, and install a Java runtime.
# apt install -y openjdk-8-jdk
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
###start-up test
# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout {}}'   ##standard input and standard output
hello world!~
{
      "@version" => "1",
    "@timestamp" => 2022-04-13T06:16:32.212Z,
          "host" => "jenkins-slave",
       "message" => "hello world!~"
}
###start from a configuration file
# cd /etc/logstash/conf.d/
# cat test.conf
input {
  stdin {}
}
output {
  stdout {}
}
###start with the given configuration file
# /usr/share/logstash/bin/logstash -f test.conf -t   ##check the configuration file syntax
# /usr/share/logstash/bin/logstash -f test.conf
####output to elasticsearch
# cat test.conf
input {
  stdin {}
}
output {
  #stdout {}
  elasticsearch {
    hosts => ["172.20.22.24:9200"]
    index => "magedu-m63-test-%{+YYYY.MM.dd}"
  }
}
# /usr/share/logstash/bin/logstash -f test.conf
version1
version2
version3
test1
test2
test3
####check the collected data on the elasticsearch server
# ls -lrt /data/elasticsearch/nodes/0/indices/
total 4
drwxr-xr-x 4 elasticsearch elasticsearch 4096 Apr 13 14:36 DyCv8w7mTleuAvlItAJlWA
Kibana
Kibana provides a web interface for viewing the data in Elasticsearch. It retrieves data through the Elasticsearch API and visualizes it in the front end, and it can also render data of specific formats as tables, bar charts, pie charts, and so on.
Deploying Kibana
# ls -lrt kibana-7.12.1-amd64.deb
# dpkg -i kibana-7.12.1-amd64.deb
# grep "^[^$|#]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "172.20.22.24"
elasticsearch.hosts: ["http://172.20.22.27:9200"]
i18n.locale: "zh-CN"
# systemctl restart kibana
Open http://172.20.22.24:5601 in a browser.
Stack Management --> Index Patterns --> Create index pattern

Select the time field

View the log entries for the index just created
Collecting Tomcat Logs
Collect the access logs and error logs of the tomcat servers for real-time statistics, searched and displayed on the Kibana pages. Each tomcat server runs Logstash to collect its logs and forward them to Elasticsearch for analysis, and Kibana presents the results in the front end.
Deploying tomcat
####tomcat1, 172.20.22.30
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###change the tomcat access-log format to JSON
# vim conf/server.xml
....
....
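The actual Valve change is elided above. As a sketch (the JSON field names are illustrative, not a fixed schema), the stock AccessLogValve in the <Host> section can be given a JSON-shaped pattern; prefix and suffix must match the log file tailed below:
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="tomcat_access_log" suffix=".log"
       pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;request&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;bytes&quot;:&quot;%b&quot;,&quot;time&quot;:&quot;%t&quot;,&quot;useragent&quot;:&quot;%{User-Agent}i&quot;}" />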
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web1 172.20.22.30" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###access test
# curl http://172.20.22.30:8080/myapp/
###check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-13.log
####tomcat2, 172.20.22.26
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###change the tomcat access-log format to JSON (same Valve change as on tomcat1)
# vim conf/server.xml
....
....
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web2 172.20.22.26" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###access test
# curl http://172.20.22.26:8080/myapp/
###check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-14.log
Deploying Logstash
Install Logstash on the tomcat servers to collect the tomcat and system logs.
####tomcat1, 172.20.22.30
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service    ###run logstash as root so it can read the tomcat and system logs
...
User=root
Group=root
...
# cd /etc/logstash/conf.d
# cat tomcat.conf
input {
  file {
    path => "/usr/local/tomcat/logs/tomcat_access_log*.log"
    type => "tomcat-log"
    start_position => "beginning"
    stat_interval => "3"
  }
  file {
    path => "/var/log/syslog"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "3"
  }
}
output {
  if [type] == "tomcat-log" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-tomcat-%{+YYYY.MM.dd}"
  }}
  if [type] == "systemlog" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-syslog-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f tomcat.conf -t
# systemctl daemon-reload
# systemctl start logstash.service
# scp tomcat.conf root@172.20.22.26:/etc/logstash/conf.d/
####tomcat2, 172.20.22.26
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# systemctl daemon-reload
# systemctl start logstash.service
Displaying in Kibana
Collecting Java Logs
Use the multiline codec plugin to merge multiple lines into one event; its what option specifies whether a matched line is merged with the preceding or the following lines.

Multiline codec plugin | Logstash Reference [8.1] | Elastic
Adding the Logstash configuration file
###collect logstash's own log, 172.20.22.26
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
###collect logstash's own log, 172.20.22.30
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
View the collected logs in Kibana
Collecting nginx Logs with Filebeat, Redis, and Logstash
Filebeat collects the logs and sends them to logstash1; logstash1 forwards them to Redis; finally logstash2 reads from Redis and sends the events to Elasticsearch.

web1: 172.20.22.30, with nginx, filebeat, and logstash deployed
web2: 172.20.22.26, with nginx, filebeat, and logstash deployed
logstash server 2: 172.20.22.23; redis server: 172.20.23.157
nginx server configuration
Deploying nginx
# wget http://nginx.org/download/nginx-1.18.0.tar.gz
# tar xf nginx-1.18.0.tar.gz
# cd nginx-1.18.0
# ./configure --prefix=/usr/local/nginx --with-http_ssl_module
# make -j4 && make install
# /usr/local/nginx/sbin/nginx
Deploying and configuring Logstash
Send the log entries collected by Filebeat on to Redis.
# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/beats-to-redis.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
  beats {
    port => 5045
    codec => "json"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-systemlog"
      host => "172.20.23.157"
      port => "6379"
      db => "0"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-accesslog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-errorlog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
}
, T% d" G- G/ o# systemctl start logstash( ?6 }# q- P9 q. S. A2 q5 D
# scp /etc/logstash/conf.d/beats-to-redis.conf root@172.20.22.26:/etc/logstash/conf.d/ 1 a' G+ n v7 r, l% ]" T
Deploying and configuring Filebeat
Filebeat collects the log entries and sends them to Logstash.
# dpkg -i filebeat-7.12.1-amd64.deb
# grep -v "#" /etc/filebeat/filebeat.yml | grep "^[^$]"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  fields:
    project: filebeat-systemlog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  fields:
    project: filebeat-nginx-accesslog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/error.log
  fields:
    project: filebeat-nginx-errorlog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
output.logstash:
  hosts: ["172.20.22.30:5044","172.20.22.30:5045"]
  enabled: true
  worker: 2
  compression_level: 3
  loadbalance: true

# systemctl start filebeat
# scp /etc/filebeat/filebeat.yml root@172.20.22.26:/etc/filebeat/
Logstash server configuration
logstash server 2 (172.20.22.23): ship the logs buffered in Redis to Elasticsearch.
# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/redis-to-es.conf
input {
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-accesslog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-errorlog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-systemlog"
    host => "172.20.23.157"
    port => "6379"
    db => "0"
    password => "12345678"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-systemlog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-accesslog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-errorlog-%{+YYYY.MM.dd}"
  }}
}
# systemctl restart logstash.service
Installing and configuring Redis
Redis server: 172.20.23.157
# yum install -y redis
# vim /etc/redis.conf
####change the following settings
bind 0.0.0.0
....
save ""
....
requirepass 12345678
....
# systemctl start redis
###test the redis connection
# redis-cli
127.0.0.1:6379> auth 12345678
OK
127.0.0.1:6379> ping
PONG
###verify the collected log entries
127.0.0.1:6379[1]> keys *
1) "filebeat-redis-nginx-accesslog"
2) "filebeat-redis-nginx-errorlog"
127.0.0.1:6379[1]> select 0
OK
127.0.0.1:6379> keys *
1) "filebeat-redis-systemlog"
Verify the generated indices with the head plugin

Verify the collected log entries in Kibana