ELK Log Collection

Setting up ELK
ELK is a combination of three open-source tools: Elasticsearch, Logstash and Kibana. It is a complete enterprise-grade log collection, analysis and visualization solution developed by Elastic (official site: elastic.co), with each of the three components covering a different part of the pipeline. The main advantages of the ELK stack:

- Flexible processing: Elasticsearch provides real-time full-text indexing with powerful search capabilities.
- Simple configuration: the Elasticsearch API is entirely JSON-based, Logstash is configured through modules, and the Kibana configuration file is simpler still.
- Efficient retrieval: thanks to an excellent design, queries are real-time yet can return within seconds even across tens of billions of records.
- Linear cluster scaling: both Elasticsearch and Logstash scale out linearly.
- Polished front end: Kibana's UI is attractive and easy to operate.
Elasticsearch
Elasticsearch is a highly scalable open-source full-text search and analytics engine. It offers real-time full-text search, a distributed architecture for high availability, and an API interface, and it can handle large volumes of log data such as nginx, tomcat and system logs.

Features of Elasticsearch:

- Real-time search and real-time analytics
- Distributed architecture with real-time file storage
- Document-oriented: every object is a document
- Highly available and easy to scale, with cluster, shard and replica support
- Friendly JSON-based interface
Deploying Elasticsearch
GitHub - elastic/elasticsearch: Free and Open, Distributed, RESTful Search Engine (developed in Java)

On CentOS, disable the firewall and SELinux; on Ubuntu, disable the firewall. Keep time synchronized across all servers.

Server 1: 172.20.22.24

Server 2: 172.20.22.27

Server 3: 172.20.22.28
###ubuntu
# apt install -y ntpdate
# rm -f /etc/localtime
# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# hwclock --systohc
# ntpdate -u ntp1.aliyun.com
###set kernel parameters
# vim /etc/security/limits.conf
*                soft        nofile                500000
*                hard        nofile                500000
# vim /etc/security/limits.d/20-nproc.conf
*          soft    nproc     4096
elasticsearch soft    nproc     unlimited
root       soft    nproc     unlimited
###install jdk
# apt install -y openjdk-8-jdk
###install on every node
# ls -lrt elasticsearch-7.12.1-amd64.deb
# dpkg -i elasticsearch-7.12.1-amd64.deb
###node 1 configuration file
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic        #cluster name
node.name: node1                 #this node's name within the cluster
path.data: /data/elasticsearch   #data directory
path.logs: /data/elasticsearch   #log directory
bootstrap.memory_lock: true      #lock enough memory at startup to keep data out of swap
network.host: 172.20.22.24       #listen IP
http.port: 9200                  #listen port
###discovery list of cluster nodes
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
###nodes eligible to be elected master during cluster bootstrap
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
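Note: with bootstrap.memory_lock: true the service may fail to start if the unit is not allowed to lock memory. A minimal sketch of the usual fix on each node via a systemd override (assumed here, adjust to your environment):

###allow the elasticsearch unit to lock memory
# systemctl edit elasticsearch
[Service]
LimitMEMLOCK=infinity
# systemctl daemon-reload
# systemctl restart elasticsearch.service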
###node 2
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node2
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.27
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
###node 3
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node3
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.28
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
Verify access in a browser:

http://$IP:9200

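The cluster state can also be checked from the shell; a quick look against any node:

###should report number_of_nodes: 3 once all nodes have joined
# curl http://172.20.22.24:9200/_cluster/health?pretty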
Logstash
Logstash is a data collection engine with real-time transport capability. Through its plugins it can collect and forward logs, filter them, and parse both plain logs and custom JSON-formatted logs, finally shipping the processed logs to Elasticsearch.

Deploying Logstash
Logstash is an open-source data collection engine that scales horizontally, and it is the ELK component with the most plugins. It can receive data from many different sources and send the unified output to one or more destinations.

https://github.com/elastic/logstash #GitHub

Elastic Stack and Product Documentation | Elastic
Environment preparation: disable the firewall and SELinux, and install a Java environment.

# apt install -y openjdk-8-jdk
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
###start-up test
# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout {}}'   ##stdin and stdout
hello world!~
{
      "@version" => "1",
    "@timestamp" => 2022-04-13T06:16:32.212Z,
          "host" => "jenkins-slave",
       "message" => "hello world!~"
}
###start from a configuration file
# cd /etc/logstash/conf.d/
# cat test.conf
input {
  stdin {}
}
output {
  stdout {}
}

###start with the specified configuration file
# /usr/share/logstash/bin/logstash -f test.conf -t   ##check the configuration file syntax
# /usr/share/logstash/bin/logstash -f test.conf
####output to elasticsearch
# cat test.conf
input {
  stdin {}
}
output {
  #stdout {}
  elasticsearch {
    hosts => ["172.20.22.24:9200"]
    index => "magedu-m63-test-%{+YYYY.MM.dd}"
  }
}
# /usr/share/logstash/bin/logstash -f test.conf
version1
version2
version3
test1
test2
test3

####check the collected data on the elasticsearch server
# ls -lrt /data/elasticsearch/nodes/0/indices/
total 4
drwxr-xr-x 4 elasticsearch elasticsearch 4096 Apr 13 14:36 DyCv8w7mTleuAvlItAJlWA
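The same data can be confirmed over the REST API; for example, listing the indices on node1:

###list indices via the API
# curl http://172.20.22.24:9200/_cat/indices?v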
Kibana
Kibana provides a web interface for viewing the data in Elasticsearch. It queries data mainly through the Elasticsearch API and visualizes it in the front end; it can also render data of particular formats as tables, bar charts, pie charts and so on.

Deploying Kibana
# ls -lrt kibana-7.12.1-amd64.deb
# dpkg -i kibana-7.12.1-amd64.deb
# grep "^[^$|#]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "172.20.22.24"
elasticsearch.hosts: ["http://172.20.22.27:9200"]
i18n.locale: "zh-CN"
# systemctl restart kibana
Browse to http://172.20.22.24:5601
Stack Management --> Index Patterns --> Create index pattern

Select the time field.

View the log entries for the created index pattern.
Collecting tomcat logs
Collect the tomcat access logs and error logs for real-time statistics, searched and displayed through the Kibana page. Each tomcat server has logstash installed to collect its logs and forward them to Elasticsearch for analysis, and Kibana then presents them in the front end.
Deploying tomcat
####tomcat1, 172.20.22.30
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###change the tomcat access-log format to json
# vim conf/server.xml
....
....
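The Valve element between the ellipses was stripped by the forum. A sketch of a typical JSON-style AccessLogValve (the field names here are assumed, not the post's original pattern; prefix and suffix match the log file name used below):

<!-- assumed example, adjust the JSON fields to your needs -->
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="tomcat_access_log" suffix=".log"
       pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;method&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;partner&quot;:&quot;%{Referer}i&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}"/>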
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web1 172.20.22.30" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###access test
# curl http://172.20.22.30:8080/myapp/
###check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-13.log

####tomcat2, 172.20.22.26
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###change the tomcat access-log format to json (same Valve element as on tomcat1)
# vim conf/server.xml
....
....
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web2 172.20.22.26" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###access test
# curl http://172.20.22.26:8080/myapp/
###check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-14.log
Deploying logstash
Install logstash on the tomcat servers to collect the tomcat and system logs.

####tomcat1, 172.20.22.30
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# cd /etc/logstash/conf.d
# cat tomcat.conf
input {
  file {
    path => "/usr/local/tomcat/logs/tomcat_access_log*.log"
    type => "tomcat-log"
    start_position => "beginning"
    stat_interval => "3"
  }
  file {
    path => "/var/log/syslog"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "3"
  }
}
output {
  if [type] == "tomcat-log" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-tomcat-%{+YYYY.MM.dd}"
  }}
  if [type] == "systemlog" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-syslog-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f tomcat.conf -t
# systemctl daemon-reload
# systemctl start logstash.service
# scp tomcat.conf root@172.20.22.26:/etc/logstash/conf.d/
####tomcat2, 172.20.22.26
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# systemctl daemon-reload
# systemctl start logstash.service
Displaying in Kibana
Collecting Java logs
Use the multiline codec plugin for multi-line matching; it merges multiple lines into a single event, and the what option specifies whether a matched line is merged with the preceding or the following line.

Multiline codec plugin | Logstash Reference [8.1] | Elastic
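With pattern => "^\[", negate => true and what => "previous" (as in the config below), any line that does not start with "[" is appended to the event before it, so a hypothetical stack trace such as:

[2022-04-14T10:00:01,234][ERROR][logstash.agent] action failed
java.lang.IllegalStateException: example
        at org.example.Foo.bar(Foo.java:42)

is shipped as a single event rather than three.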
Adding the logstash configuration file
###collect logstash's own logs, 172.20.22.26
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
   }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
###collect logstash's own logs, 172.20.22.30
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
   }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
Check the logs collected in Kibana.
Collecting nginx logs with filebeat, redis and logstash
Use filebeat to collect logs and send them to logstash1; logstash1 forwards them to redis, and logstash2 finally reads from redis and sends the logs to Elasticsearch.

web1: 172.20.22.30, with nginx, filebeat and logstash deployed

web2: 172.20.22.26, with nginx, filebeat and logstash deployed

logstash server 2: 172.20.22.23; redis server: 172.20.23.157

nginx server configuration
Deploying nginx
# wget http://nginx.org/download/nginx-1.18.0.tar.gz
# tar xf nginx-1.18.0.tar.gz
# cd nginx-1.18.0
# ./configure --prefix=/usr/local/nginx --with-http_ssl_module
# make -j4 && make install
# /usr/local/nginx/sbin/nginx
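A quick check that nginx is answering on each web server:

# curl -I http://172.20.22.30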
Deploying and configuring logstash
Send the log entries collected by filebeat to redis.

# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/beats-to-redis.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
  beats {
    port => 5045
    codec => "json"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-systemlog"
      host => "172.20.23.157"
      port => "6379"
      db => "0"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-accesslog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-errorlog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
}
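As with the earlier configs, the syntax can be checked before starting:

# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/beats-to-redis.conf -t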
# systemctl start logstash
# scp /etc/logstash/conf.d/beats-to-redis.conf root@172.20.22.26:/etc/logstash/conf.d/
Deploying and configuring filebeat
Collect log entries with filebeat and send them to logstash.

# dpkg -i filebeat-7.12.1-amd64.deb
# grep -v "#" /etc/filebeat/filebeat.yml | grep "^[^$]"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  fields:
    project: filebeat-systemlog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  fields:
    project: filebeat-nginx-accesslog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/error.log
  fields:
    project: filebeat-nginx-errorlog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
output.logstash:
  hosts: ["172.20.22.30:5044","172.20.22.30:5045"]
  enabled: true
  worker: 2
  compression_level: 3
  loadbalance: true

# systemctl start filebeat
# scp /etc/filebeat/filebeat.yml root@172.20.22.26:/etc/filebeat/
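filebeat can validate its configuration and the logstash output itself; a quick check on each web server:

###verify the config file and the logstash connection
# filebeat test config
# filebeat test output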
logstash server configuration
logstash server 2 (172.20.22.23) sends the logs buffered in redis on to Elasticsearch.

# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/redis-to-es.conf
input {
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-accesslog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-errorlog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-systemlog"
    host => "172.20.23.157"
    port => "6379"
    db => "0"
    password => "12345678"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-systemlog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-accesslog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-errorlog-%{+YYYY.MM.dd}"
  }}
}
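Again, test the syntax before restarting:

# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-to-es.conf -t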
# systemctl restart logstash.service
Installing and configuring redis
redis server: 172.20.23.157

# yum install -y redis
# vim /etc/redis.conf
####change the following settings
bind 0.0.0.0
....
save ""
....
requirepass 12345678
....
# systemctl start redis
###test the redis connection
# redis-cli
127.0.0.1:6379> auth 12345678
OK
127.0.0.1:6379> ping
PONG

###verify the collected log entries
127.0.0.1:6379[1]> keys *
1) "filebeat-redis-nginx-accesslog"
2) "filebeat-redis-nginx-errorlog"
127.0.0.1:6379[1]> select 0
OK
127.0.0.1:6379> keys *
1) "filebeat-redis-systemlog"
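Since logstash pops entries off these lists as it consumes them, the backlog can be watched with llen; for example:

127.0.0.1:6379> llen filebeat-redis-systemlog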
Verify the generated indices through the head plugin.

Verify the collected log entries in Kibana.
