Setting up ELK
ELK is a combination of three open-source components: Elasticsearch, Logstash, and Kibana. Developed by Elastic (official site: elastic.co), it is a complete enterprise-grade solution for collecting, analyzing, and visualizing logs, with each of the three components serving a distinct role. The main strengths of the ELK stack:

- Flexible processing: Elasticsearch provides real-time full-text indexing with powerful search capabilities
- Simple configuration: the Elasticsearch API is entirely JSON over HTTP, Logstash uses modular configuration, and Kibana's configuration file is simpler still
- Efficient retrieval: thanks to a well-designed architecture, even real-time queries against tens of billions of documents can respond within seconds
- Linear cluster scaling: both Elasticsearch and Logstash scale out linearly
- Polished front end: Kibana's UI is attractive and easy to operate
Elasticsearch
Elasticsearch is a highly scalable open-source full-text search and analytics engine. It offers real-time full-text search, distributed operation for high availability, and an API interface, and it can process large volumes of log data such as nginx, tomcat, and system logs.
Key features of Elasticsearch:

- Real-time search and real-time analytics
- Distributed architecture with real-time file storage
- Document-oriented: every object is a document
- Highly available and easy to scale, with support for clustering, sharding, and replication
- Friendly JSON-based API
Deploying Elasticsearch
GitHub - elastic/elasticsearch: Free and Open, Distributed, RESTful Search Engine (written in Java)

On CentOS, disable the firewall and SELinux; on Ubuntu, disable the firewall. Keep the clocks on all servers synchronized.
Server 1: 172.20.22.24

Server 2: 172.20.22.27

Server 3: 172.20.22.28
* C; ]4 C" ]$ V, q. F+ ~###ubuntu
& ?3 w. y5 ~! A8 K' J# apt install -y ntpdate7 s4 B' d$ [5 E) c) ?% a3 C/ ]
# rm -f /etc/localtime
+ C: i: w$ h* L' _# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime3 L- z1 Z) ^$ E1 i' o9 C1 |
# hwclock --systohc
4 `0 ?- Z7 t5 r0 l; y p# ntpdate -u ntp1.aliyun.com
###Set kernel parameters
# vim /etc/security/limits.conf
* soft nofile 500000
* hard nofile 500000
# vim /etc/security/limits.d/20-nproc.conf
* soft nproc 4096
elasticsearch soft nproc unlimited
root soft nproc unlimited
###Install JDK
# apt install -y openjdk-8-jdk

###Install on every node
# ls -lrt elasticsearch-7.12.1-amd64.deb
# dpkg -i elasticsearch-7.12.1-amd64.deb
###Node 1 configuration
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic    #cluster name
node.name: node1    #this node's name within the cluster
path.data: /data/elasticsearch    #data directory
path.logs: /data/elasticsearch    #log directory
bootstrap.memory_lock: true    #lock memory at startup to keep data out of swap
network.host: 172.20.22.24    #listen IP
http.port: 9200    #listen port
###discovery list of cluster nodes
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
###nodes that may be elected master when the cluster bootstraps
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
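With bootstrap.memory_lock: true set, elasticsearch may refuse to start if the service is not allowed to lock memory. A minimal sketch of the usual fix via a systemd override (LimitMEMLOCK is a standard systemd setting; adjust to your unit as needed):
# systemctl edit elasticsearch
[Service]
LimitMEMLOCK=infinity
# systemctl daemon-reload
# systemctl restart elasticsearch.service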
###Node 2
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node2
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.27
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
###Node 3
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node3
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.28
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
Verify in a browser
http://$IP:9200
Logstash
Logstash is a data collection engine with real-time pipelining capability. Through plugins it collects and forwards logs, supports filtering, and can parse both plain-text and custom JSON-formatted logs, finally sending the processed events to Elasticsearch.
Deploying Logstash
Logstash is an open-source data collection engine that scales horizontally, and it is the component of the ELK stack with the largest number of plugins in use. It can ingest data from many different sources and output to one or more destinations.

https://github.com/elastic/logstash #GitHub

Elastic Stack and Product Documentation | Elastic
# N2 `. ]" J7 Q! T环境准备:关闭防火墙和selinux,并且安装java环境3 W" i4 y, d* `, W- J7 G4 \9 V9 Q- M
" x. N. ~; w8 W, N+ x5 ]7 J: U
# apt install -y openjdk-8-jdk
" V2 S7 E8 K. Y9 f# ls -lrt logstash-7.12.1-amd64.deb; o- @) S* l- r# X2 c" ]
# dpkg -i logstash-7.12.1-amd64.deb
5 z" |- f+ y9 U* J, g p: P- j$ I" }4 F###启动测试' `6 C& D- N! ^4 D4 {0 b
# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout {}}' ##标准输入和标准输出
hello world!~
{
      "@version" => "1",
    "@timestamp" => 2022-04-13T06:16:32.212Z,
          "host" => "jenkins-slave",
       "message" => "hello world!~"
}
###Start with a config file
# cd /etc/logstash/conf.d/
# cat test.conf
input {
  stdin {}
}
output {
  stdout {}
}
###Start with the specified config file
# /usr/share/logstash/bin/logstash -f test.conf -t ##check config file syntax
# /usr/share/logstash/bin/logstash -f test.conf
####Output to elasticsearch
# cat test.conf
input {
  stdin {}
}
output {
  #stdout {}
  elasticsearch {
    hosts => ["172.20.22.24:9200"]
    index => "magedu-m63-test-%{+YYYY.MM.dd}"
  }
}
" v# M% S) g! z E# /usr/share/logstash/bin/logstash -f test.conf3 o0 D. \7 s! a8 v/ R
version1& s9 @% k$ y3 M; V6 w% n' u0 `4 s
version2
$ `( n" M) h4 K+ ^version3
- y1 P" n, ~3 |9 S; Q) d$ V3 ~# Wtest1
' _4 H: y& ?# j$ s, I0 t$ _test2
) K" X* J8 X, q7 Utest3& i& O4 n6 P' _4 b/ U0 K
- }3 ?$ K, o# R/ _/ f1 [) Y6 `
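The index can also be confirmed over the API (a quick check; the name follows the magedu-m63-test-%{+YYYY.MM.dd} pattern configured above):
# curl 'http://172.20.22.24:9200/_cat/indices?v' | grep magedu-m63-test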
####Check the collected data on the elasticsearch server
# ls -lrt /data/elasticsearch/nodes/0/indices/
total 4
drwxr-xr-x 4 elasticsearch elasticsearch 4096 Apr 13 14:36 DyCv8w7mTleuAvlItAJlWA
- f( j5 P! s2 Q7 M* ^kibana , n% W, f* q1 L- p8 V/ Y
Kibana provides a web interface for viewing the data in Elasticsearch: it queries the data through the Elasticsearch API and visualizes it in the front end, and it can also render suitably formatted data as tables, bar charts, pie charts, and more.
Deploying Kibana
# ls -lrt kibana-7.12.1-amd64.deb
# dpkg -i kibana-7.12.1-amd64.deb
# grep "^[^$|#]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "172.20.22.24"
elasticsearch.hosts: ["http://172.20.22.27:9200"]
i18n.locale: "zh-CN"
# systemctl restart kibana
Browse to http://172.20.22.24:5601
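Kibana can also be checked from the shell before opening the browser (api/status is Kibana's built-in status endpoint; the address is assumed from server.host above):
# curl -s 'http://172.20.22.24:5601/api/status'
The response is JSON and reports an overall state of green once Kibana can reach Elasticsearch.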
Stack Management --> Index Patterns --> Create index pattern

Select the time field.

View the log entries for the newly created index.
Collecting tomcat logs
Collect the tomcat access logs and error logs for real-time statistics, searchable and displayed in Kibana. Each tomcat server runs Logstash to collect its logs and forward them to Elasticsearch for analysis, and Kibana presents the results in the front end.
Deploying tomcat
####tomcat1, 172.20.22.30
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###Change the tomcat access log format to JSON (add an AccessLogValve element; see the sketch below)
# vim conf/server.xml
....
....
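A sketch of the AccessLogValve element to place inside the <Host> section of server.xml; the JSON field names here are illustrative, but prefix and suffix must match the /usr/local/tomcat/logs/tomcat_access_log*.log path that Logstash tails below:
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="tomcat_access_log" suffix=".log"
       pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;ClientUser&quot;:&quot;%l&quot;,&quot;authenticated&quot;:&quot;%u&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;request&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;QueryString&quot;:&quot;%q&quot;,&quot;partner&quot;:&quot;%{Referer}i&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}" />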
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web1 172.20.22.30" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###Access test
# curl http://172.20.22.30:8080/myapp/
###Check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-13.log
####tomcat2, 172.20.22.26
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###Change the tomcat access log format to JSON (same AccessLogValve as on tomcat1)
# vim conf/server.xml
....
....
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web2 172.20.22.26" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###Access test
# curl http://172.20.22.26:8080/myapp/
###Check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-14.log
Deploying Logstash
Install Logstash on the tomcat servers to collect the tomcat and system logs.
####tomcat1, 172.20.22.30
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# cd /etc/logstash/conf.d
# cat tomcat.conf
input {
  file {
    path => "/usr/local/tomcat/logs/tomcat_access_log*.log"
    type => "tomcat-log"
    start_position => "beginning"
    stat_interval => "3"
  }
  file {
    path => "/var/log/syslog"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "3"
  }
}
output {
  if [type] == "tomcat-log" {
    elasticsearch {
      hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
      index => "elk-tomcat-%{+YYYY.MM.dd}"
  }}
  if [type] == "systemlog" {
    elasticsearch {
      hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
      index => "elk-syslog-%{+YYYY.MM.dd}"
  }}
}
# /usr/share/logstash/bin/logstash -f tomcat.conf -t
# systemctl daemon-reload
# systemctl start logstash.service
# scp tomcat.conf root@172.20.22.26:/etc/logstash/conf.d/
####tomcat2, 172.20.22.26
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# systemctl daemon-reload
# systemctl start logstash.service
Display in Kibana
Collecting Java logs
. j" E0 o! l2 [5 B1 Z使用codec的multiline插件实现多行匹配,这是一个可以将多行进行合并的插件,而且可以使用what指定将匹配到的行与前面的行合并还是和后面的行合并
. ]( R Z/ n1 D$ P2 W1 H! o9 ? 1 M2 m0 p1 a0 m* i' L% p
Multiline codec plugin | Logstash Reference [8.1] | Elastic9 [1 Q. a0 H6 z+ B, \/ H
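A quick way to see the codec behave before wiring it into a file input, using the same pattern as the configs below (any line that does not start with "[" is appended to the previous event):
# /usr/share/logstash/bin/logstash -e 'input { stdin { codec => multiline { pattern => "^\[" negate => true what => "previous" } } } output { stdout {} }'
Paste a Java stack trace: the exception line and all of its indented "at ..." lines are emitted as one event instead of many.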
Add the Logstash configuration file
###Collect logstash's own logs, 172.20.22.26
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
  }}
}
output {
  if [type] == "logstash-log" {
    elasticsearch {
      hosts => ["172.20.22.24"]
      index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}
# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
###Collect logstash's own logs, 172.20.22.30
# cd /etc/logstash/conf.d
# cat java.conf
6 U6 S0 c! l3 E& E8 finput {
& B, X6 {9 a* S) }9 m2 {4 P file {
2 i: d- H" @& u. I% { path => "/var/log/logstash/logstash-plain.log"* U) T4 ^. R. ?7 Y0 h+ }
type => "logstash-log"# V: l3 @ K0 \* i9 o
start_position => "beginning"6 b$ B/ \5 n: o4 j8 P
stat_interval => "3" d( b6 E) z. C3 Y( b
codec => multiline {( H1 W+ c( O* r; c" {/ o6 H; j+ Z
pattern => "^\["
1 z, S9 j, i; R, N2 v negate => true
, k `8 L9 C# ]+ ^! b: h. ?) ^ what => "previous" $ a' y2 O/ }5 v% Y. f
}}& I q9 T, P4 x. _5 F
}7 o5 A" L0 }' u. M- a R; T
output {
: V" u2 s3 _7 Z0 a if [type] == "logstash-log" {# b2 @$ i: p: V' ?# S
elasticsearch {4 m" F9 Q6 r- d/ W' h* e! @! C
hosts => ["172.20.22.24"] _$ F" u& w. l" ^1 F0 L; D9 h! Q
index => "logstash-log-%{+YYYY.MM.dd}"% d5 o+ C( ~- p
}}
7 t+ N( m. s5 f" h% L* [8 m}
) Q8 |# q3 b3 w1 U; h3 U7 K1 m1 D" d; v" u. j
# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
View the collected logs in Kibana.
Collecting nginx logs with filebeat, redis, and logstash
Filebeat collects the logs and sends them to logstash1, logstash1 forwards them to redis, and finally logstash2 reads from redis and ships them to Elasticsearch.

web1: 172.20.22.30, with nginx, filebeat, and logstash deployed

web2: 172.20.22.26, with nginx, filebeat, and logstash deployed

logstash server 2: 172.20.22.23; redis server: 172.20.23.157
nginx server configuration
Deploying nginx
# wget http://nginx.org/download/nginx-1.18.0.tar.gz
# tar xf nginx-1.18.0.tar.gz
# cd nginx-1.18.0
# ./configure --prefix=/usr/local/nginx --with-http_ssl_module
# make -j4 && make install
# /usr/local/nginx/sbin/nginx
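Quick check that nginx is up and writing its access log (assuming web1's address):
# curl -I http://172.20.22.30
# tail -1 /usr/local/nginx/logs/access.log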
Deploying and configuring Logstash
Forward the logs collected by filebeat to redis.
# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/beats-to-redis.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
  beats {
    port => 5045
    codec => "json"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-systemlog"
      host => "172.20.23.157"
      port => "6379"
      db => "0"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-accesslog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-errorlog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
}
# systemctl start logstash
# scp /etc/logstash/conf.d/beats-to-redis.conf root@172.20.22.26:/etc/logstash/conf.d/
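As with the earlier pipelines, the config syntax can be verified on either host before (re)starting the service:
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/beats-to-redis.conf -t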
& B9 T" ?* j. J* x( G$ q; |部署配置filebeat
$ x7 r9 r) h9 n. V通过filebeat收集日志信息发送到logstash9 U% [! v5 o- {* V) [$ D. @
# dpkg -i filebeat-7.12.1-amd64.deb
# grep -v "#" /etc/filebeat/filebeat.yml | grep "^[^$]"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  fields:
    project: filebeat-systemlog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  fields:
    project: filebeat-nginx-accesslog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/error.log
  fields:
    project: filebeat-nginx-errorlog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
output.logstash:
  hosts: ["172.20.22.30:5044","172.20.22.30:5045"]
  enabled: true
  worker: 2
  compression_level: 3
  loadbalance: true

# systemctl start filebeat
# scp /etc/filebeat/filebeat.yml root@172.20.22.26:/etc/filebeat/
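filebeat has built-in checks for its configuration and for connectivity to the configured logstash outputs; worth running on both web servers:
# filebeat test config
# filebeat test output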
Logstash server configuration
Logstash server 2 (172.20.22.23) sends the logs buffered in redis on to Elasticsearch.
# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/redis-to-es.conf
input {
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-accesslog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-errorlog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-systemlog"
    host => "172.20.23.157"
    port => "6379"
    db => "0"
    password => "12345678"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-systemlog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-accesslog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-errorlog-%{+YYYY.MM.dd}"
  }}
}
# systemctl restart logstash.service
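Once events are flowing, the new indices should appear on the Elasticsearch cluster (checked here against the output host configured above):
# curl 'http://172.20.22.28:9200/_cat/indices?v' | grep filebeat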
Installing and configuring redis
redis server: 172.20.23.157
# yum install -y redis
# vim /etc/redis.conf
####Change the following settings
bind 0.0.0.0
....
save ""
....
requirepass 12345678
....
# systemctl start redis
###Test the redis connection
# redis-cli
127.0.0.1:6379> auth 12345678
OK
127.0.0.1:6379> ping
PONG
###Verify the collected log entries
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> keys *
1) "filebeat-redis-nginx-accesslog"
2) "filebeat-redis-nginx-errorlog"
127.0.0.1:6379[1]> select 0
OK
127.0.0.1:6379> keys *
1) "filebeat-redis-systemlog"
Verify the generated indices with the elasticsearch-head plugin.

Verify the collected log entries in Kibana.