ELK Log Collection

Building ELK
ELK is a stack of three open-source components: Elasticsearch, Logstash, and Kibana. Together they form a complete enterprise-grade log collection, analysis, and visualization solution developed by Elastic (official site: elastic.co). Each component serves a different role. The main advantages of the ELK stack:

- Flexible processing: Elasticsearch provides real-time full-text indexing with powerful search
- Simple configuration: the Elasticsearch API is entirely JSON, Logstash uses modular configuration, and the Kibana configuration file is simpler still
- Efficient retrieval: thanks to its design, even real-time queries over tens of billions of documents can respond within seconds
- Linear cluster scaling: both Elasticsearch and Logstash scale out linearly
- Polished frontend: Kibana's UI is attractive and easy to operate

Elasticsearch
Elasticsearch is a highly scalable open-source full-text search and analytics engine. It provides real-time full-text search over data, supports distributed operation for high availability, exposes an API, and can handle log data at scale, such as Nginx, Tomcat, and system logs.

Elasticsearch features:

- Real-time search and real-time analytics
- Distributed architecture with real-time file storage
- Document-oriented: every object is a document
- High availability and easy scaling, with support for clustering, sharding, and replication
- Friendly API with JSON support

4 P" w7 D( J& d5 _" Y" A* C5 f) b3 [部署elasticsearch
: U( k) k1 [1 B5 dGitHub - elastic/elasticsearch: Free and Open, Distributed, RESTful Search Engine,基于java开发' }1 }. Z+ ~+ N1 t$ ^
+ b+ v2 T- _8 B0 U9 R* C
centos系统关闭服务器的防火墙和selinux,ubuntu关闭防火墙,保持各服务器时间同步# _: O% l2 y; _' }8 [  P' r4 Z
Server 1: 172.20.22.24
Server 2: 172.20.22.27
Server 3: 172.20.22.28

###Ubuntu
# apt install -y ntpdate
# rm -f /etc/localtime
# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# hwclock --systohc
# ntpdate -u ntp1.aliyun.com
###Set resource limits
# vim /etc/security/limits.conf
*                soft        nofile                500000
*                hard        nofile                500000
# vim /etc/security/limits.d/20-nproc.conf
*             soft    nproc     4096
elasticsearch soft    nproc     unlimited
root          soft    nproc     unlimited
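###After logging out and back in, the new limit can be checked with the ulimit shell builtin (the expected value assumes the settings above)
# ulimit -n
500000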
###Install the JDK
# apt install -y openjdk-8-jdk
###Install on every node
# ls -lrt elasticsearch-7.12.1-amd64.deb
# dpkg -i elasticsearch-7.12.1-amd64.deb
###Node 1 configuration file
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic        #cluster name
node.name: node1                 #this node's name within the cluster
path.data: /data/elasticsearch   #data directory
path.logs: /data/elasticsearch   #log directory
bootstrap.memory_lock: true      #lock enough memory at startup to keep data out of swap
network.host: 172.20.22.24       #listen IP
http.port: 9200                  #listen port
###Discovery list of cluster nodes
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27", "172.20.22.28"]
###Nodes eligible for master election when the cluster first forms
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27", "172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
###Node 2
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node2
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.27
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27", "172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27", "172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
###Node 3
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node3
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.28
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27", "172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27", "172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
Verify in a browser:
http://$IP:9200
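The same check can be done from the command line with the standard cluster-health API; with all three nodes up, expect "status" : "green" and "number_of_nodes" : 3:

# curl -s http://172.20.22.24:9200/_cluster/health?pretty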
Logstash

Logstash is a data collection engine with real-time pipelining. Through plugins it implements log collection and forwarding, supports log filtering, parses plain logs as well as custom JSON-format logs, and finally ships the processed logs to Elasticsearch.

Deploying Logstash

Logstash is an open-source data collection engine that can scale horizontally. It is the component with the most plugins in the whole ELK stack; it can receive data from many different sources and deliver the output to one or more destinations.

https://github.com/elastic/logstash #GitHub

Elastic Stack and Product Documentation | Elastic

Environment prep: disable the firewall and SELinux, and install a Java environment.
# apt install -y openjdk-8-jdk
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
###Start-up test
# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout {}}'   ##stdin in, stdout out
hello world!~
{
      "@version" => "1",
    "@timestamp" => 2022-04-13T06:16:32.212Z,
          "host" => "jenkins-slave",
       "message" => "hello world!~"
}
###Start via a configuration file
# cd /etc/logstash/conf.d/
# cat test.conf
input {
  stdin {}
}
output {
  stdout {}
}

###Start with the specified config file
# /usr/share/logstash/bin/logstash -f test.conf -t   ##check the config file syntax
# /usr/share/logstash/bin/logstash -f test.conf

####Output to Elasticsearch
# cat test.conf
input {
  stdin {}
}
output {
  #stdout {}
  elasticsearch {
    hosts => ["172.20.22.24:9200"]
    index => "magedu-m63-test-%{+YYYY.MM.dd}"
  }
}
# /usr/share/logstash/bin/logstash -f test.conf
version1
version2
version3
test1
test2
test3
####Inspect the collected data on the Elasticsearch server
# ls -lrt /data/elasticsearch/nodes/0/indices/
total 4
drwxr-xr-x 4 elasticsearch elasticsearch 4096 Apr 13 14:36 DyCv8w7mTleuAvlItAJlWA
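The indices can also be listed over the API instead of reading the data directory, using the standard cat endpoint:

# curl -s http://172.20.22.24:9200/_cat/indices?v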
& s# I0 Y% t* }4 Nkibana ) I5 |/ V3 {8 I$ p3 }
kibana为elasticsearch提供一个查看数据的web界面,其主要是通过elasticsearch的API接口进行数据查找,并进行前端数据可视化的展现,另外还可以针对特定格式的数据生成相应的表格、柱状图、饼图等$ f" c5 N7 L% J$ [
% _. ?# a5 O- N/ E6 z& n
Deploying Kibana

# ls -lrt kibana-7.12.1-amd64.deb
# dpkg -i kibana-7.12.1-amd64.deb
# grep "^[^$|#]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "172.20.22.24"
elasticsearch.hosts: ["http://172.20.22.27:9200"]
i18n.locale: "zh-CN"
# systemctl restart kibana
Open http://172.20.22.24:5601 in a browser.

Stack Management --> Index Patterns --> Create index pattern

Select the time field, then view the log entries under the newly created index.

  S" A: Z4 f! c; H$ n4 i: A. v收集tomcat日志 ) U0 ~& {4 K% I( L1 z3 d5 X2 L
收集tomcat服务器的访问日志以及tomcat错误日志进行实时统计,在kibana页面进行搜索展现,每台tomcat服务器要安装logstash负责收集日志,然后将日志转发给elasticsearch进行分析,再通过kibana在前端展现
* u7 F4 o; h% d: _6 k" m
Deploying Tomcat

####tomcat1, 172.20.22.30
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###Change the Tomcat access-log format to JSON
# vim conf/server.xml
....
....   (the Valve element was stripped by the forum rendering; see the sketch below)
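A typical JSON-style AccessLogValve, placed inside the <Host> section of conf/server.xml, looks roughly like the following. The prefix tomcat_access_log matches the tail command further down, but the JSON field names are illustrative, not recovered from the original post:

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="tomcat_access_log" suffix=".log"
       pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;method&quot;:&quot;%m&quot;,&quot;uri&quot;:&quot;%U&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;bytes&quot;:&quot;%b&quot;,&quot;agent&quot;:&quot;%{User-Agent}i&quot;}"/>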
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web1 172.20.22.30" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###Access test
# curl http://172.20.22.30:8080/myapp/
###Check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-13.log
####tomcat2, 172.20.22.26
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###Change the Tomcat access-log format to JSON (same Valve as on tomcat1)
# vim conf/server.xml
....
....
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web2 172.20.22.26" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###Access test
# curl http://172.20.22.26:8080/myapp/
###Check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-14.log
Deploying Logstash

Install Logstash on the Tomcat servers to collect the Tomcat and system logs.
####tomcat1, 172.20.22.30
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# cd /etc/logstash/conf.d
# cat tomcat.conf
input {
  file {
    path => "/usr/local/tomcat/logs/tomcat_access_log*.log"
    type => "tomcat-log"
    start_position => "beginning"
    stat_interval => "3"
  }
  file {
    path => "/var/log/syslog"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "3"
  }
}
output {
  if [type] == "tomcat-log" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-tomcat-%{+YYYY.MM.dd}"
  }}
  if [type] == "systemlog" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-syslog-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f tomcat.conf -t
# systemctl daemon-reload
# systemctl start logstash.service
# scp tomcat.conf root@172.20.22.26:/etc/logstash/conf.d/
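###Optional check: the new indices should appear on the cluster (standard cat API; the grep pattern assumes the index names above)
# curl -s http://172.20.22.24:9200/_cat/indices?v | grep elk-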
####tomcat2, 172.20.22.26
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# systemctl daemon-reload
# systemctl start logstash.service
Display in Kibana

Collecting Java logs

Use the codec multiline plugin to match multiple lines. The plugin merges several lines into a single event, and its what option controls whether a matching line is merged with the preceding lines or the following ones.

Multiline codec plugin | Logstash Reference [8.1] | Elastic
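As an illustration (these log lines are made up, not from the original post), every entry in logstash-plain.log starts with a timestamp in square brackets:

[2022-04-14T10:00:01,123][ERROR][logstash.agent] Failed to execute action
java.lang.IllegalStateException: something went wrong
        at org.logstash.execution.SomeClass.run(SomeClass.java:42)

With pattern => "^\[", negate => true, and what => "previous", any line that does not start with [ is folded into the preceding event, so the whole stack trace is indexed as one document.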
Add the Logstash configuration file

###Collect Logstash's own log, 172.20.22.26
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
   }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
###Collect Logstash's own log, 172.20.22.30
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
   }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
View the collected logs in Kibana

Collecting Nginx logs with Filebeat, Redis, and Logstash

Filebeat collects the logs and sends them to logstash1; logstash1 forwards them to Redis; finally logstash2 reads from Redis and sends them to Elasticsearch.

web1: 172.20.22.30, with nginx, filebeat, and logstash deployed

web2: 172.20.22.26, with nginx, filebeat, and logstash deployed

logstash server 2: 172.20.22.23; redis server: 172.20.23.157

Nginx server configuration

Deploying Nginx

# wget http://nginx.org/download/nginx-1.18.0.tar.gz
# tar xf nginx-1.18.0.tar.gz
# cd nginx-1.18.0
# ./configure --prefix=/usr/local/nginx --with-http_ssl_module
# make -j4 && make install
# /usr/local/nginx/sbin/nginx
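###Optional access test, to generate entries in access.log (assumes nginx serves on the default port 80)
# curl -I http://172.20.22.30/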
Deploying and configuring Logstash

Send the log entries collected by Filebeat on to Redis.
# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/beats-to-redis.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
  beats {
    port => 5045
    codec => "json"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-systemlog"
      host => "172.20.23.157"
      port => "6379"
      db => "0"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-accesslog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-errorlog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
}
# systemctl start logstash
# scp /etc/logstash/conf.d/beats-to-redis.conf root@172.20.22.26:/etc/logstash/conf.d/
9 w" `* `0 e6 ~, b# n% Y5 s* C部署配置filebeat
# v( A# S' }+ \3 c$ d, M+ N) M通过filebeat收集日志信息发送到logstash  h/ |. ^7 E. q3 X# @1 s& X) ^
( `1 w; b8 V' H/ j; |/ |0 u
# dpkg -i filebeat-7.12.1-amd64.deb
# grep -v "#" /etc/filebeat/filebeat.yml | grep "^[^$]"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  fields:
    project: filebeat-systemlog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  fields:
    project: filebeat-nginx-accesslog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/error.log
  fields:
    project: filebeat-nginx-errorlog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
output.logstash:
  hosts: ["172.20.22.30:5044","172.20.22.30:5045"]
  enabled: true
  worker: 2
  compression_level: 3
  loadbalance: true

# systemctl start filebeat
# scp /etc/filebeat/filebeat.yml root@172.20.22.26:/etc/filebeat/
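###Optional check: confirm events are queuing in Redis (redis-cli's -a, -n, and llen are standard; the key name comes from beats-to-redis.conf above)
# redis-cli -h 172.20.23.157 -a 12345678 -n 1 llen filebeat-redis-nginx-accesslog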
Logstash server configuration

logstash server 2 (172.20.22.23) sends the logs buffered in Redis on to Elasticsearch.
# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/redis-to-es.conf
input {
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-accesslog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-errorlog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-systemlog"
    host => "172.20.23.157"
    port => "6379"
    db => "0"
    password => "12345678"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-systemlog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-accesslog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-errorlog-%{+YYYY.MM.dd}"
  }}
}
# systemctl restart logstash.service
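###Optional check: the filebeat-* indices should appear once events flow through (standard cat API)
# curl -s http://172.20.22.28:9200/_cat/indices?v | grep filebeat-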
Installing and configuring Redis

Redis server: 172.20.23.157
# yum install -y redis
# vim /etc/redis.conf
####Change the following settings
bind 0.0.0.0
....
save ""
....
requirepass 12345678
....
# systemctl start redis
###Test the Redis connection
# redis-cli
127.0.0.1:6379> auth 12345678
OK
127.0.0.1:6379> ping
PONG
' q" q1 ^7 n( P2 w  n3 L( y' J# V
###验证收集到的日志信息
* s  I, R% T9 K4 y! R# ]2 j127.0.0.1:6379[1]> keys *6 L" b) I5 ~7 L9 G! s0 \
1) "filebeat-redis-nginx-accesslog"
' Q$ K: M) e5 i& a; w' q9 p2) "filebeat-redis-nginx-errorlog"8 ], r1 e6 D  u- ~% l  `" F0 i
127.0.0.1:6379[1]> select 0* N2 V( O% Z4 W( P( l  ^
OK
. f2 h1 N2 ]# ~8 x127.0.0.1:6379> keys *4 U# w) [1 g/ e) Z+ ~3 V
1) "filebeat-redis-systemlog"
Verify the generated indices with the head plugin

Verify the collected log entries in Kibana