Setting up ELK
ELK is a suite of three open-source products: Elasticsearch, Logstash, and Kibana. Developed by Elastic, it is a complete enterprise-grade solution for collecting, analyzing, and visualizing logs, with each of the three components handling a different part of the job. The official site is elastic.co. The main advantages of the ELK stack:

Flexible processing: Elasticsearch does real-time full-text indexing and offers powerful search features.
Simple configuration: the Elasticsearch API is entirely JSON over HTTP, Logstash uses modular configuration, and the Kibana configuration file is simpler still.
Efficient retrieval: thanks to a solid design, queries run in real time yet can answer against tens of billions of documents within seconds.
Linear cluster scaling: both Elasticsearch and Logstash scale out linearly.
Polished front end: Kibana's UI looks good and is easy to operate.
Elasticsearch
Elasticsearch is a highly scalable open-source full-text search and analytics engine. It provides real-time full-text search over data, runs distributed for high availability, exposes an API, and can handle large volumes of log data from sources such as nginx, tomcat, and the system logs.
Features of Elasticsearch:

Real-time search and real-time analytics
Distributed architecture with real-time file storage
Document-oriented: every object is a document
High availability and easy scaling, with clustering, sharding, and replication
Friendly API, JSON support
Deploying Elasticsearch
GitHub - elastic/elasticsearch: Free and Open, Distributed, RESTful Search Engine (written in Java)
On CentOS, disable the firewall and SELinux; on Ubuntu, disable the firewall. Keep the clocks of all servers synchronized.

Server 1: 172.20.22.24
Server 2: 172.20.22.27
Server 3: 172.20.22.28
###ubuntu
# apt install -y ntpdate
# rm -f /etc/localtime
# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# hwclock --systohc
# ntpdate -u ntp1.aliyun.com
###set kernel parameters
# vim /etc/security/limits.conf
* soft nofile 500000
* hard nofile 500000
# vim /etc/security/limits.d/20-nproc.conf
* soft nproc 4096
elasticsearch soft nproc unlimited
root soft nproc unlimited
###install jdk
# apt install -y openjdk-8-jdk
###install on every node
# ls -lrt elasticsearch-7.12.1-amd64.deb
# dpkg -i elasticsearch-7.12.1-amd64.deb
###node 1 configuration file
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic          #cluster name
node.name: node1                   #this node's name within the cluster
path.data: /data/elasticsearch     #data directory
path.logs: /data/elasticsearch     #log directory
bootstrap.memory_lock: true        #lock enough memory at startup so data is never swapped out
network.host: 172.20.22.24         #listen IP
http.port: 9200                    #listen port
###discovery list of cluster nodes
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
###nodes that may be elected master when the cluster first forms
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
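
Note: with bootstrap.memory_lock: true, elasticsearch will fail to start if it is not allowed to lock memory. On deb/systemd installs this usually requires raising the memlock limit for the service; a minimal sketch:

# mkdir -p /etc/systemd/system/elasticsearch.service.d
# cat /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitMEMLOCK=infinity
# systemctl daemon-reload
# systemctl restart elasticsearch.service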
###node 2
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node2
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.27
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
###node 3
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node3
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.28
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
Verify in a browser:
http://$IP:9200
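
The cluster state can also be checked from the command line via the standard Elasticsearch APIs:

# curl http://172.20.22.24:9200
# curl http://172.20.22.24:9200/_cat/nodes?v
# curl http://172.20.22.24:9200/_cluster/health?pretty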
Logstash
Logstash is a data collection engine with real-time pipelining capability. Through plugins it collects and forwards logs, supports filtering, and can parse plain logs as well as custom JSON-formatted logs; the processed events are finally sent to Elasticsearch.
Deploying Logstash
Logstash is an open-source data collection engine that scales horizontally; it is the component of the ELK stack with the richest plugin ecosystem. It can ingest data from many different sources and deliver the unified output to one or more destinations.

https://github.com/elastic/logstash #GitHub

Elastic Stack and Product Documentation | Elastic
Environment preparation: disable the firewall and SELinux, and install a Java environment.

# apt install -y openjdk-8-jdk
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
###startup test
# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout {}}' ##stdin and stdout
hello world!~
{
      "@version" => "1",
    "@timestamp" => 2022-04-13T06:16:32.212Z,
          "host" => "jenkins-slave",
       "message" => "hello world!~"
}
###start from a configuration file
# cd /etc/logstash/conf.d/
# cat test.conf
input {
  stdin {}
}
output {
  stdout {}
}
###start with a specific configuration file
# /usr/share/logstash/bin/logstash -f test.conf -t ##check the configuration file syntax
# /usr/share/logstash/bin/logstash -f test.conf
####output to elasticsearch
# cat test.conf
input {
  stdin {}
}
output {
  #stdout {}
  elasticsearch {
    hosts => ["172.20.22.24:9200"]
    index => "magedu-m63-test-%{+YYYY.MM.dd}"
  }
}
# /usr/share/logstash/bin/logstash -f test.conf
version1
version2
version3
test1
test2
test3

####check the collected data on the elasticsearch server
# ls -lrt /data/elasticsearch/nodes/0/indices/
total 4
drwxr-xr-x 4 elasticsearch elasticsearch 4096 Apr 13 14:36 DyCv8w7mTleuAvlItAJlWA
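
The new index can also be confirmed over the API with the standard _cat endpoint:

# curl -s http://172.20.22.24:9200/_cat/indices?v | grep magedu-m63-test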
kibana
kibana provides a web interface for viewing data in elasticsearch: it looks up data through the elasticsearch API and renders it as front-end visualizations, and it can also generate tables, bar charts, pie charts, and so on for appropriately formatted data.
Deploying kibana
# ls -lrt kibana-7.12.1-amd64.deb
# dpkg -i kibana-7.12.1-amd64.deb
# grep "^[^$|#]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "172.20.22.24"
elasticsearch.hosts: ["http://172.20.22.27:9200"]
i18n.locale: "zh-CN"
# systemctl restart kibana
Open http://172.20.22.24:5601 in a browser.
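
Before opening the UI you can confirm kibana is responding via its built-in status endpoint:

# curl -s http://172.20.22.24:5601/api/status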

Stack Management --> Index Patterns --> Create index pattern

Select the time field, then view the log entries of the index you created.
Collecting tomcat logs
Collect the access and error logs of the tomcat servers for real-time statistics, searchable and displayed in the kibana UI. Each tomcat server runs logstash to collect its logs and forward them to elasticsearch for analysis, and kibana presents the results in the front end.
Deploying tomcat
####tomcat1, 172.20.22.30
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###change the tomcat access-log format to json
# vim conf/server.xml
....
....
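
The Valve definition itself did not survive in the original post. For reference, a typical AccessLogValve entry inside the <Host> element that writes JSON-formatted access logs with the tomcat_access_log prefix used below might look like this (the exact pattern and field names are an assumption; adjust as needed):

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="tomcat_access_log" suffix=".log"
       pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;method&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;referer&quot;:&quot;%{Referer}i&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}"/>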
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web1 172.20.22.30" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###access test
# curl http://172.20.22.30:8080/myapp/
###check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-13.log
####tomcat2, 172.20.22.26
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###change the tomcat access-log format to json (same server.xml change as on tomcat1)
# vim conf/server.xml
....
....
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web2 172.20.22.26" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###access test
# curl http://172.20.22.26:8080/myapp/
###check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-14.log
Deploying logstash
Install logstash on the tomcat servers to collect the tomcat and system logs.

####tomcat1, 172.20.22.30
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# cd /etc/logstash/conf.d
# cat tomcat.conf
input {
  file {
    path => "/usr/local/tomcat/logs/tomcat_access_log*.log"
    type => "tomcat-log"
    start_position => "beginning"
    stat_interval => "3"
  }
  file {
    path => "/var/log/syslog"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "3"
  }
}
output {
  if [type] == "tomcat-log" {
    elasticsearch {
      hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
      index => "elk-tomcat-%{+YYYY.MM.dd}"
  }}
  if [type] == "systemlog" {
    elasticsearch {
      hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
      index => "elk-syslog-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f tomcat.conf -t
# systemctl daemon-reload
# systemctl start logstash.service
# scp tomcat.conf root@172.20.22.26:/etc/logstash/conf.d/
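
To verify the pipeline end to end, generate some traffic and check that the index appears (a quick sketch):

# for i in $(seq 1 10); do curl -s http://172.20.22.30:8080/myapp/ > /dev/null; done
# curl -s http://172.20.22.24:9200/_cat/indices?v | grep elk-tomcat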

####tomcat2, 172.20.22.26
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# systemctl daemon-reload
# systemctl start logstash.service
Displaying in kibana
Collecting Java logs
Use the codec's multiline plugin for multi-line matching. It merges multiple lines into a single event, and its what option specifies whether a matched line is merged with the lines before it or the lines after it.

Multiline codec plugin | Logstash Reference [8.1] | Elastic
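
As an illustration (the log lines here are made up), suppose /var/log/logstash/logstash-plain.log contains:

[2022-04-14T10:10:10,123][ERROR][logstash.agent] failed to execute action
java.lang.IllegalStateException: something broke
    at org.logstash.execution.AbstractPipelineExt...

With pattern => "^\[", negate => true and what => "previous" as in the configuration below, every line that does not start with "[" is appended to the preceding event, so the whole stack trace is indexed as one document.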
Adding the logstash configuration file
###collect logstash's own logs, 172.20.22.26
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }
  }
}
output {
  if [type] == "logstash-log" {
    elasticsearch {
      hosts => ["172.20.22.24"]
      index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service

###collect logstash's own logs, 172.20.22.30
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }
  }
}
output {
  if [type] == "logstash-log" {
    elasticsearch {
      hosts => ["172.20.22.24"]
      index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
View the collected logs in kibana
Collecting nginx logs with filebeat, redis, and logstash
filebeat collects the logs and sends them to logstash1; logstash1 forwards them to redis; finally logstash2 reads from redis and ships the data to elasticsearch.

web1: 172.20.22.30, with nginx, filebeat, and logstash deployed
web2: 172.20.22.26, with nginx, filebeat, and logstash deployed
logstash server 2: 172.20.22.23; redis server: 172.20.23.157
nginx server configuration
Deploying nginx
# wget http://nginx.org/download/nginx-1.18.0.tar.gz
# tar xf nginx-1.18.0.tar.gz
# cd nginx-1.18.0
# ./configure --prefix=/usr/local/nginx --with-http_ssl_module
# make -j4 && make install
# /usr/local/nginx/sbin/nginx
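
A quick check that nginx is serving requests and writing its access log:

# curl -I http://172.20.22.30/
# tail -n1 /usr/local/nginx/logs/access.log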
Deploying and configuring logstash
Send the log data collected by filebeat on to redis.
# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/beats-to-redis.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
  beats {
    port => 5045
    codec => "json"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-systemlog"
      host => "172.20.23.157"
      port => "6379"
      db => "0"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-accesslog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-errorlog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
}
# systemctl start logstash
# scp /etc/logstash/conf.d/beats-to-redis.conf root@172.20.22.26:/etc/logstash/conf.d/
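
The same -t syntax check used earlier also applies to this pipeline before starting the service:

# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/beats-to-redis.conf -t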
Deploying and configuring filebeat
Collect log data with filebeat and send it to logstash.
# dpkg -i filebeat-7.12.1-amd64.deb
# grep -v "#" /etc/filebeat/filebeat.yml | grep "^[^$]"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  fields:
    project: filebeat-systemlog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  fields:
    project: filebeat-nginx-accesslog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/error.log
  fields:
    project: filebeat-nginx-errorlog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
output.logstash:
  hosts: ["172.20.22.30:5044","172.20.22.30:5045"]
  enabled: true
  worker: 2
  compression_level: 3
  loadbalance: true

# systemctl start filebeat
# scp /etc/filebeat/filebeat.yml root@172.20.22.26:/etc/filebeat/
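
filebeat also ships its own self-checks, useful before trusting the pipeline:

# filebeat test config -c /etc/filebeat/filebeat.yml
# filebeat test output -c /etc/filebeat/filebeat.yml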
logstash server configuration
logstash server 2: 172.20.22.23, which ships the logs buffered in redis to elasticsearch.
# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/redis-to-es.conf
input {
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-accesslog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-errorlog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-systemlog"
    host => "172.20.23.157"
    port => "6379"
    db => "0"
    password => "12345678"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-systemlog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-accesslog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-errorlog-%{+YYYY.MM.dd}"
  }}
}
# systemctl restart logstash.service
redis installation and configuration
redis server: 172.20.23.157

# yum install -y redis
# vim /etc/redis.conf
####change the following settings
bind 0.0.0.0
....
save ""
....
requirepass 12345678
....
# systemctl start redis
###test the redis connection
# redis-cli
127.0.0.1:6379> auth 12345678
OK
127.0.0.1:6379> ping
PONG

###verify the collected log entries
127.0.0.1:6379[1]> keys *
1) "filebeat-redis-nginx-accesslog"
2) "filebeat-redis-nginx-errorlog"
127.0.0.1:6379[1]> select 0
OK
127.0.0.1:6379> keys *
1) "filebeat-redis-systemlog"
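
If logstash2 is draining the lists, their length should stay near zero; LLEN shows the current backlog:

127.0.0.1:6379> llen filebeat-redis-systemlog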
Verify the generated indices with the head plugin

Verify the collected logs in kibana