Setting up ELK
ELK is a suite of three open-source components: Elasticsearch, Logstash, and Kibana. Developed by Elastic, it is a complete enterprise-grade solution for collecting, analyzing, and visualizing logs, with each of the three components covering a different part of the job. The official site is elastic.co. The main advantages of the ELK stack:

Flexible processing: Elasticsearch performs real-time full-text indexing and has powerful search capabilities.
Simple configuration: the Elasticsearch API is entirely JSON over HTTP, Logstash is configured through modules, and the Kibana configuration file is simpler still.
Efficient retrieval: thanks to a sound design, queries are real-time yet can answer within seconds even across tens of billions of documents.
Linear cluster scaling: both Elasticsearch and Logstash scale out linearly.
Polished front end: Kibana's UI is attractive and easy to use.
Elasticsearch
Elasticsearch is a highly scalable open-source full-text search and analytics engine. It provides real-time full-text search over data, is distributed for high availability, exposes API interfaces, and can handle large volumes of log data such as nginx, tomcat, and system logs.

Key features of Elasticsearch:

Real-time search and real-time analytics
Distributed architecture with real-time document storage
Document-oriented: every object is a document
Highly available and easily scalable, with clustering, sharding, and replication
Friendly JSON-based interface
Deploying Elasticsearch

GitHub - elastic/elasticsearch: Free and Open, Distributed, RESTful Search Engine; written in Java
On CentOS, disable the firewall and SELinux; on Ubuntu, disable the firewall. Keep time synchronized across all servers.

Server 1: 172.20.22.24

Server 2: 172.20.22.27

Server 3: 172.20.22.28
###ubuntu
# apt install -y ntpdate
# rm -f /etc/localtime
# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# hwclock --systohc
# ntpdate -u ntp1.aliyun.com
###set kernel parameters
# vim /etc/security/limits.conf
* soft nofile 500000
* hard nofile 500000
# vim /etc/security/limits.d/20-nproc.conf
* soft nproc 4096
elasticsearch soft nproc unlimited
root soft nproc unlimited
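These limits only take effect for new sessions; after logging in again, a quick sanity check should report the raised file-descriptor limit:

# ulimit -n
500000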
###install JDK
# apt install -y openjdk-8-jdk

###install on every node
# ls -lrt elasticsearch-7.12.1-amd64.deb
# dpkg -i elasticsearch-7.12.1-amd64.deb
###node 1 configuration file
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic    #cluster name
node.name: node1    #this node's name within the cluster
path.data: /data/elasticsearch    #data directory
path.logs: /data/elasticsearch    #log directory
bootstrap.memory_lock: true    #lock enough memory at startup to keep data out of swap
network.host: 172.20.22.24    #listen IP
http.port: 9200    #listen port
###discovery list of cluster nodes
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
###nodes eligible to be elected master when the cluster first forms
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
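Note: with bootstrap.memory_lock: true the unit may fail to start unless it is allowed to lock memory. A commonly required systemd override (an assumption, not part of the original steps; verify on your system):

# systemctl edit elasticsearch    ##add the two lines below to the override file
[Service]
LimitMEMLOCK=infinity
# systemctl daemon-reload && systemctl restart elasticsearch.service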
###node 2
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node2
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.27
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
###node 3
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node3
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.28
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
Verify in a browser

http://$IP:9200
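The same check can be done from the command line; any of the three nodes should answer, and _cat/nodes confirms that all of them joined the cluster:

# curl http://172.20.22.24:9200
# curl 'http://172.20.22.24:9200/_cat/nodes?v'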
Logstash
Logstash is a data-collection engine with real-time pipelining. Through its plugins it can collect and forward logs, filter them, and parse both plain and custom JSON-formatted logs, finally shipping the processed events to Elasticsearch.
Deploying Logstash

Logstash is an open-source data-collection engine that scales horizontally. It is the component of the ELK stack with the most plugins: it can ingest data from many different sources and deliver unified output to one or more destinations.

https://github.com/elastic/logstash #GitHub
Elastic Stack and Product Documentation | Elastic
Environment preparation: disable the firewall and SELinux, and install a Java runtime.

# apt install -y openjdk-8-jdk
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
###startup test
# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout {}}' ##stdin to stdout
hello world!~
{
      "@version" => "1",
    "@timestamp" => 2022-04-13T06:16:32.212Z,
          "host" => "jenkins-slave",
       "message" => "hello world!~"
}
###start from a configuration file
# cd /etc/logstash/conf.d/
# cat test.conf
input {
  stdin {}
}
output {
  stdout {}
}
###start with the given configuration file
# /usr/share/logstash/bin/logstash -f test.conf -t ##check the configuration syntax
# /usr/share/logstash/bin/logstash -f test.conf
####output to elasticsearch
# cat test.conf
input {
  stdin {}
}
output {
  #stdout {}
  elasticsearch {
    hosts => ["172.20.22.24:9200"]
    index => "magedu-m63-test-%{+YYYY.MM.dd}"
  }
}
# /usr/share/logstash/bin/logstash -f test.conf
version1
version2
version3
test1
test2
test3
####check the collected data on the elasticsearch server
# ls -lrt /data/elasticsearch/nodes/0/indices/
total 4
drwxr-xr-x 4 elasticsearch elasticsearch 4096 Apr 13 14:36 DyCv8w7mTleuAvlItAJlWA
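The new index can also be confirmed over the REST API (a quick check against any node):

# curl 'http://172.20.22.24:9200/_cat/indices?v'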
kibana

Kibana provides a web interface for viewing the data in Elasticsearch. It queries the data through the Elasticsearch API and visualizes it in the front end, and for suitably formatted data it can also build tables, bar charts, pie charts, and so on.
Deploying kibana

# ls -lrt kibana-7.12.1-amd64.deb
# dpkg -i kibana-7.12.1-amd64.deb
# grep "^[^$|#]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "172.20.22.24"
elasticsearch.hosts: ["http://172.20.22.27:9200"]
i18n.locale: "zh-CN"
# systemctl restart kibana
Open http://172.20.22.24:5601 in a browser

Stack Management --> Index Patterns --> Create index pattern

Select the time field

View the log entries of the index you created
Collecting tomcat logs

Collect the tomcat access and error logs for real-time statistics, searchable and displayed in Kibana. Each tomcat server runs logstash to collect its logs and forward them to Elasticsearch for analysis, which Kibana then presents in the front end.

Deploying tomcat

####tomcat1, 172.20.22.30
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###change the tomcat access-log format to json
# vim conf/server.xml
....
....
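The Valve element itself was lost from the original post. The snippet below is a plausible reconstruction (an assumption inferred from the tomcat_access_log file name used later; verify the pattern yourself) placed inside the <Host> block of server.xml:

<!-- assumed JSON access-log Valve; adjust fields to your needs -->
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="tomcat_access_log" suffix=".log"
       pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;method&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;referer&quot;:&quot;%{Referer}i&quot;,&quot;useragent&quot;:&quot;%{User-Agent}i&quot;}"/>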
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web1 172.20.22.30" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###access test
# curl http://172.20.22.30:8080/myapp/
###check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-13.log
####tomcat2, 172.20.22.26
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###change the tomcat access-log format to json (same Valve as on tomcat1)
# vim conf/server.xml
....
....
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web2 172.20.22.26" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###access test
# curl http://172.20.22.26:8080/myapp/
###check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-14.log
Deploying logstash

Install logstash on the tomcat servers to collect the tomcat and system logs.

####tomcat1, 172.20.22.30
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# cd /etc/logstash/conf.d
# cat tomcat.conf
input {
  file {
    path => "/usr/local/tomcat/logs/tomcat_access_log*.log"
    type => "tomcat-log"
    start_position => "beginning"
    stat_interval => "3"
  }
  file {
    path => "/var/log/syslog"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "3"
  }
}
output {
  if [type] == "tomcat-log" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-tomcat-%{+YYYY.MM.dd}"
  }}
  if [type] == "systemlog" {
  elasticsearch {
+ N8 q5 `5 _- U% ^ l hosts => ["172.20.22.27:9200","172.20.22.27:9200"]
    index => "elk-syslog-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f tomcat.conf -t
# systemctl daemon-reload
# systemctl start logstash.service
# scp tomcat.conf root@172.20.22.26:/etc/logstash/conf.d/

####tomcat2, 172.20.22.26
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# systemctl daemon-reload
# systemctl start logstash.service
View the results in Kibana
Collecting Java logs

Use the multiline codec plugin to match multi-line events. The plugin merges multiple lines into a single event, and its what option specifies whether a matched line is merged with the lines before it or the lines after it.
" Q2 M7 y7 V; o4 |/ a. HMultiline codec plugin | Logstash Reference [8.1] | Elastic; ^ N! J0 K7 ~" E1 Z
Adding the logstash configuration file

###collect logstash's own logs, 172.20.22.26
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
###collect logstash's own logs, 172.20.22.30
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
Check the collected logs in Kibana
8 T1 j7 ]" U4 d: \: V pfilebeat结合redis、logstash收集nginx日志 5 l5 T: w3 k8 e- G* ^7 ~4 ^; h
使用filebeat收集日志发送到logstash1,再由logstash1发送到redis,最后再由logstash2发送到elasticsearch5 n: P6 ^. @ M/ K, [
5 W5 p0 g8 ~( R D) U
web1:172.20.22.30,部署好nginx、filebeat、llogstash
+ G) H* h8 a3 N# T% F$ E1 K$ v4 X . _. \0 P4 V: Q3 h) d0 L( D
web2:172.20.22.26,部署好nginx、filebeat、llogstash* ~3 b5 e9 k v, u1 B$ ?
: L- |1 d$ P6 j( s* a
logstash服务器2:172.20.22.23,redis服务器:172.20.23.157
( T- L' G4 l& f9 f8 C7 l+ e1 W 8 |, B+ C- Z8 G& N: R
nginx server configuration

Deploying nginx
# wget http://nginx.org/download/nginx-1.18.0.tar.gz
# tar xf nginx-1.18.0.tar.gz
# cd nginx-1.18.0
# ./configure --prefix=/usr/local/nginx --with-http_ssl_module
# make -j4 && make install
# /usr/local/nginx/sbin/nginx
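A quick sanity check that nginx is serving and writing the access log filebeat will read (a minimal sketch):

# curl -I http://172.20.22.30
# tail -n1 /usr/local/nginx/logs/access.log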
Deploying and configuring logstash

Forward the logs collected by filebeat to redis.
7 n" {$ F3 c' A8 h5 O) r. U5 @4 j# apt install -y openjdk-8-jdk! c$ X' a* E# \
# dpkg -i logstash-7.12.1-amd64.deb5 U1 [; \9 E* w0 e9 g" P r1 _
# cat /etc/logstash/conf.d/beats-to-redis.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
  beats {
    port => 5045
    codec => "json"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-systemlog"
      host => "172.20.23.157"
      port => "6379"
      db => "0"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-accesslog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-errorlog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
}
# systemctl start logstash
# scp /etc/logstash/conf.d/beats-to-redis.conf root@172.20.22.26:/etc/logstash/conf.d/
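Once logstash is running, both beats listeners should be bound; a quick check:

# ss -tnlp | grep -E '5044|5045'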
Deploying and configuring filebeat

filebeat collects the log entries and sends them to logstash.
# dpkg -i filebeat-7.12.1-amd64.deb
# grep -v "#" /etc/filebeat/filebeat.yml | grep "^[^$]"
$ k" z& X( n x% A4 v. i: |1 Qfilebeat.inputs:4 O# v/ a. @8 ^2 o. D
- type: log
# K+ h$ g* A# X0 ?7 ]; y, o enabled: true- b+ z# a( e% E9 ]0 X( p! J
paths:
5 J3 E- _" o) D& Q - /var/log/syslog
/ y* X, ^1 x# l" C, [& H" H fields:
1 r+ v* w- S0 r* G$ u( f) Q' V, X project: filebeat-systemlog
' R) j" Y0 r9 Q h3 S$ {- type: log w. m9 N1 n, O! b5 x
enabled: true3 N) W# B( s2 @. z! n
paths:4 S! w9 l3 J8 S1 t7 D
- /usr/local/nginx/logs/access.log: j1 @, X4 @6 O! J5 @9 M2 w
fields:
- u M* k5 Y5 F7 ?9 _! j- v project: filebeat-nginx-accesslog
+ y0 X$ h8 k8 q- F" K. @- type: log
9 g9 x @% K, T enabled: true9 _; i" a# g6 f3 X: S6 P! b
paths:
! K' |& S, L* y2 H* P( P5 r; N - /usr/local/nginx/logs/error.log4 ?+ {7 n6 m, c0 [" j! ]7 Y$ i
fields:
|6 h) O2 m4 g9 L- ^ project: filebeat-nginx-errorlog
' A" B7 g1 p5 o ?# [filebeat.config.modules:; @1 W* v" t. i- I8 Y. V
path: ${path.config}/modules.d/*.yml
* n% r4 m1 `+ T6 a9 e reload.enabled: false% I9 r2 v! L2 y" j! Y1 r: L3 Q1 K
setup.template.settings:
4 u( |- m0 x* p; p index.number_of_shards: 1) X0 c/ v3 O% f# H: A* r. u
setup.kibana:' U- t. ?4 X+ K4 Q4 G0 g
processors:9 N; R U3 p9 z# i
- add_host_metadata:7 j/ l4 e# B4 D# r: p- x
when.not.contains.tags: forwarded* e6 T V! O3 Y) q
- add_cloud_metadata: ~/ A% u! }3 b9 M
- add_docker_metadata: ~& u. E& j! d! N* `/ T( j" o- u
- add_kubernetes_metadata: ~
3 r7 ]1 Q* h8 \2 M+ g( `* e6 W5 {5 loutput.logstash:
0 o' _! V% n1 g$ z* V hosts: ["172.20.22.30:5044","172.20.22.30:5045"]
. l6 H' F% m o1 E5 u3 U0 Z; J enabled: true
w: b" _9 d4 j+ e worker: 2/ w! P+ j& u) N
compression_level: 39 o- t. d( Z* e# X' B, S
loadbalance: true2 d' L0 |/ w3 j) p. i
" y! u: v0 ^" B& [1 _# systemctl start filebeat
# }" G" @8 p. t# scp /etc/filebeat/filebeat.yml root@172.20.22.26:/etc/filebeat/ 0 j3 g& H5 X6 O8 u& r
logstash server configuration

logstash server 2: 172.20.22.23, which ships the logs buffered in redis to elasticsearch.
# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/redis-to-es.conf
input {
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-accesslog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-errorlog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-systemlog"
    host => "172.20.23.157"
    port => "6379"
    db => "0"
    password => "12345678"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-systemlog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-accesslog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-errorlog-%{+YYYY.MM.dd}"
  }}
}
# systemctl restart logstash.service
redis installation and configuration

redis server: 172.20.23.157
# yum install -y redis
# vim /etc/redis.conf
####modify the following settings
bind 0.0.0.0
....
save ""
....
requirepass 12345678
....
# systemctl start redis
###test the redis connection
# redis-cli
127.0.0.1:6379> auth 12345678
OK
127.0.0.1:6379> ping
PONG

###verify the collected log entries
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> keys *
1) "filebeat-redis-nginx-accesslog"
2) "filebeat-redis-nginx-errorlog"
127.0.0.1:6379[1]> select 0
OK
127.0.0.1:6379> keys *
1) "filebeat-redis-systemlog"
Verify the generated indices with the head plugin

Verify the collected log entries in Kibana