ELK Log Collection

Building the ELK Stack
ELK is a stack composed of three open-source tools: Elasticsearch, Logstash, and Kibana. Developed by Elastic (official site: elastic.co), it is a complete enterprise-grade solution for log collection, analysis, and visualization, with each of the three components handling a different part of the pipeline. The main advantages of the ELK stack:

- Flexible processing: Elasticsearch provides real-time full-text indexing with powerful search capabilities
- Simple configuration: the Elasticsearch API is entirely JSON over HTTP, Logstash uses modular configuration, and the Kibana configuration file is simpler still
- Efficient retrieval: thanks to its design, queries run in real time yet can return results over tens of billions of records within seconds
- Linear cluster scaling: both Elasticsearch and Logstash scale out flexibly
- Polished front end: Kibana's UI is attractive and easy to operate
Elasticsearch
Elasticsearch is a highly scalable open-source full-text search and analytics engine. It offers real-time full-text search over data, supports distributed deployment for high availability, exposes an API, and can handle large volumes of log data from sources such as nginx, tomcat, and system logs.

Key characteristics of Elasticsearch:

- Real-time search and real-time analytics
- Distributed architecture with real-time file storage
- Document-oriented: every object is a document
- Highly available and easy to scale, with clustering, sharding, and replication
- Friendly API with JSON support
Deploying Elasticsearch
GitHub - elastic/elasticsearch: Free and Open, Distributed, RESTful Search Engine (written in Java)
On CentOS, disable the server's firewall and SELinux; on Ubuntu, disable the firewall. Keep the time synchronized across all servers.
Server 1: 172.20.22.24
Server 2: 172.20.22.27
Server 3: 172.20.22.28

###ubuntu
# apt install -y ntpdate
# rm -f /etc/localtime
# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# hwclock --systohc
# ntpdate -u ntp1.aliyun.com
###set resource limits
# vim /etc/security/limits.conf
*                soft        nofile                500000
*                hard        nofile                500000
# vim /etc/security/limits.d/20-nproc.conf
*          soft    nproc     4096
elasticsearch soft    nproc     unlimited
root       soft    nproc     unlimited
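The nofile limit only applies to new login sessions; after logging in again it can be verified as follows (the value shown assumes the settings above):

###verify after re-login
# ulimit -n
500000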
###install jdk
# apt install -y openjdk-8-jdk
###install on every node
# ls -lrt elasticsearch-7.12.1-amd64.deb
# dpkg -i elasticsearch-7.12.1-amd64.deb
###node 1 configuration file
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic        #cluster name
node.name: node1                 #this node's name within the cluster
path.data: /data/elasticsearch   #data directory
path.logs: /data/elasticsearch   #log directory
bootstrap.memory_lock: true      #lock enough memory at startup to keep data out of swap
network.host: 172.20.22.24       #listen IP
http.port: 9200                  #listen port
###discovery list of the cluster's nodes
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
###nodes eligible for master election during cluster bootstrap
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
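With bootstrap.memory_lock: true, the service can fail to start if systemd does not allow it to lock memory. A minimal sketch of the usual fix, assuming the stock elasticsearch.service unit shipped by the deb package (the drop-in file name is arbitrary):

# mkdir -p /etc/systemd/system/elasticsearch.service.d
# vim /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitMEMLOCK=infinity
# systemctl daemon-reload
# systemctl restart elasticsearch.service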
###node 2
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node2
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.27
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
###node 3
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node3
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.28
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
Verify access from a browser
http://$IP:9200
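The cluster can also be checked from the command line using the standard Elasticsearch REST endpoints, for example against node 1:

# curl http://172.20.22.24:9200
# curl http://172.20.22.24:9200/_cat/nodes?v
# curl 'http://172.20.22.24:9200/_cluster/health?pretty'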
Logstash
Logstash is a data-collection engine with real-time pipelining capabilities. Through its plugins it can collect, filter, and forward logs, parsing both plain-text and custom JSON-formatted logs, and finally ships the processed events to Elasticsearch.

Deploying Logstash
Logstash is an open-source data-collection engine that scales horizontally. It is the ELK component with the largest number of plugins; it can receive data from many different sources and deliver the unified output to one or more destinations.
https://github.com/elastic/logstash #GitHub

Elastic Stack and Product Documentation | Elastic
Environment preparation: disable the firewall and SELinux, and install a Java environment
# apt install -y openjdk-8-jdk
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
###startup test
# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout {}}'   ##stdin in, stdout out
hello world!~
{
      "@version" => "1",
    "@timestamp" => 2022-04-13T06:16:32.212Z,
          "host" => "jenkins-slave",
       "message" => "hello world!~"
}
###start from a configuration file
# cd /etc/logstash/conf.d/
# cat test.conf
input {
  stdin {}
}
output {
  stdout {}
}
###start with the specified configuration file
# /usr/share/logstash/bin/logstash -f test.conf -t   ##check the configuration syntax
# /usr/share/logstash/bin/logstash -f test.conf
####output to elasticsearch
# cat test.conf
input {
  stdin {}
}
output {
  #stdout {}
  elasticsearch {
    hosts => ["172.20.22.24:9200"]
    index => "magedu-m63-test-%{+YYYY.MM.dd}"
  }
}
# /usr/share/logstash/bin/logstash -f test.conf
version1
version2
version3
test1
test2
test3

####check the collected data on the elasticsearch server
# ls -lrt /data/elasticsearch/nodes/0/indices/
total 4
drwxr-xr-x 4 elasticsearch elasticsearch 4096 Apr 13 14:36 DyCv8w7mTleuAvlItAJlWA
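The same index can be confirmed over the REST API; its name follows the magedu-m63-test-%{+YYYY.MM.dd} pattern configured above:

# curl 'http://172.20.22.24:9200/_cat/indices?v'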
Kibana
Kibana provides a web interface for viewing the data in Elasticsearch. It queries data through the Elasticsearch API and renders it as front-end visualizations, and it can also generate tables, bar charts, pie charts, and other graphics for suitably formatted data.
Deploying Kibana
# ls -lrt kibana-7.12.1-amd64.deb
# dpkg -i kibana-7.12.1-amd64.deb
# grep "^[^$|#]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "172.20.22.24"
elasticsearch.hosts: ["http://172.20.22.27:9200"]
i18n.locale: "zh-CN"
# systemctl restart kibana
Open http://172.20.22.24:5601 in a browser
Stack Management --> Index Patterns --> Create index pattern

Select the time field

View the log entries for the created index
Collecting Tomcat Logs
Collect the Tomcat servers' access logs and error logs for real-time statistics, searchable and displayed in Kibana. Each Tomcat server runs Logstash to collect its logs and forward them to Elasticsearch for analysis, and Kibana then presents them in the front end.

Deploying Tomcat
####tomcat1, 172.20.22.30
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###change the tomcat access-log format to json
# vim conf/server.xml
....
....
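The post elides the actual server.xml change. For reference, a JSON-style AccessLogValve inside the <Host> section along these lines is the usual approach; a sketch only, since the exact pattern fields are an assumption (only the tomcat_access_log prefix is confirmed by the log file name used below):

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="tomcat_access_log" suffix=".log"
       pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;method&quot;:&quot;%m&quot;,&quot;uri&quot;:&quot;%U&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;sendbytes&quot;:&quot;%B&quot;}" />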
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web1 172.20.22.30" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###access test
# curl http://172.20.22.30:8080/myapp/
###check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-13.log

####tomcat2, 172.20.22.26
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###change the tomcat access-log format to json (same server.xml change as on tomcat1)
# vim conf/server.xml
....
....
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web2 172.20.22.26" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###access test
# curl http://172.20.22.26:8080/myapp/
###check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-14.log
Deploying Logstash
Install Logstash on the Tomcat servers to collect the Tomcat and system logs
8 @" Q0 C- O! x6 V( L7 D####tomcat1,172.20.22.308 ]) c+ K6 O6 ]3 k& j& {$ X; D
# ls -lrt logstash-7.12.1-amd64.deb% x1 d+ ^: X  [5 v; O* X! t- ]
# dpkg -i logstash-7.12.1-amd64.deb
; i9 m* a" n0 p* L- b3 r- u# vim /etc/systemd/system/logstash.service
1 Y4 v( q( P4 I3 t  d% @...
" j3 g% Q6 s0 Z5 @$ R+ W7 oUser=root
! ^5 `0 J6 r/ l* [; X, l1 R# S1 a5 kGroup=root/ [9 @' a, D( C
...+ D2 L2 v: P, B- v  g' T  Q
# cd /etc/logstash/conf.d
5 b7 I* Y3 a, }9 c3 l2 m6 S# cat tomcat.conf' |+ F  l% V- e, P
input { . [7 j: I1 f8 R" N8 \( G) W6 E
  file {3 I9 ]5 x% k& V2 z- `! T; ]
    path => "/usr/local/tomcat/logs/tomcat_access_log*.log"
8 H. n( v9 {5 Q8 J) y4 x    type => "tomcat-log"
. u4 h' E; o6 p$ i6 p4 B' @    start_position => "beginning"; q3 C& `8 v6 \& G$ k2 p0 ?# z
    stat_interval => "3"# r* p: A+ R6 ~5 z0 B! S. r
  }7 V3 O$ Q1 ^# U1 g0 ?
  file {
5 P: t- r1 o) r) d6 M" i8 U    path => "/var/log/syslog"/ O6 ]+ a9 E; b) B. ~
    type => "systemlog"# q* \- t4 n- d3 N4 f* z" Q3 W
    start_position => "beginning"- |6 C0 s* b( P, H+ o8 u
    stat_interval => "3"0 I% K) C9 q% E: F7 Y5 I
  }1 O8 C$ \5 Y  u- |6 W6 Y; e
}3 Y& [+ a  t* k$ N
output {# `/ J) {7 N9 c& r
  if [type] == "tomcat-log" {  ?- ]: H2 R8 A% z2 w$ P
  elasticsearch {5 _! e& g9 t# S8 w* r% o
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]; X3 o/ M4 R+ r8 V: ~* D/ j
    index => "elk-tomcat-%{+YYYY.MM.dd}"7 {; Z. C7 L! j# v+ O* S3 \
  }}
  if [type] == "systemlog" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-syslog-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f tomcat.conf -t
# systemctl daemon-reload
# systemctl start logstash.service
# scp tomcat.conf root@172.20.22.26:/etc/logstash/conf.d/
####tomcat2, 172.20.22.26
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# systemctl daemon-reload
# systemctl start logstash.service
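Before moving to Kibana, the new indices can be confirmed directly against Elasticsearch; the names follow the elk-tomcat-* and elk-syslog-* patterns configured above:

# curl 'http://172.20.22.24:9200/_cat/indices?v' | grep elk-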
Display in Kibana
Collecting Java Logs
Use the multiline plugin of codec to merge multiple physical lines into one event; its what option specifies whether a matched line is merged with the preceding lines or with the following ones.

Multiline codec plugin | Logstash Reference [8.1] | Elastic
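As an illustration (hypothetical log lines, not output from this setup): with pattern => "^\[", negate => true, and what => "previous", every line that does not start with "[" is appended to the event opened by the preceding "["-prefixed line, so a stack trace stays attached to its log entry:

[2022-04-14T10:00:01,000][ERROR][logstash.agent] pipeline error   <- starts a new event
java.lang.RuntimeException: boom                                  <- merged into the event above
        at com.example.Demo.main(Demo.java:10)                    <- merged into the event above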
Adding the Logstash configuration file
###collect logstash's own log, 172.20.22.26
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
   }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service

###collect logstash's own log, 172.20.22.30
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
   }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
" c1 H5 _1 v  g) ~$ F查看kibana收集到的日志8 J* @2 l' q# R" P' Q! P& x

- |+ ?* @, {. [3 x$ I6 f, l 1 E% G. D3 k( i& S6 N
1 x' r. A4 P0 b5 l/ ?1 R2 F
Collecting nginx logs with filebeat, redis, and logstash
Use filebeat to collect logs and send them to logstash1; logstash1 forwards them to redis; logstash2 then reads from redis and sends the events to elasticsearch.

web1: 172.20.22.30, with nginx, filebeat, and logstash deployed

web2: 172.20.22.26, with nginx, filebeat, and logstash deployed

logstash server 2: 172.20.22.23; redis server: 172.20.23.157
nginx server configuration
Deploying nginx
# wget http://nginx.org/download/nginx-1.18.0.tar.gz
# tar xf nginx-1.18.0.tar.gz
# cd nginx-1.18.0
# ./configure --prefix=/usr/local/nginx --with-http_ssl_module
# make -j4 && make install
# /usr/local/nginx/sbin/nginx
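A quick check that nginx is serving before wiring up filebeat:

# curl -I http://172.20.22.30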
Deploying and configuring logstash
Send the log entries collected by filebeat to redis

# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/beats-to-redis.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
  beats {
    port => 5045
    codec => "json"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-systemlog"
      host => "172.20.23.157"
      port => "6379"
      db => "0"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-accesslog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-errorlog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
}
# systemctl start logstash
# scp /etc/logstash/conf.d/beats-to-redis.conf root@172.20.22.26:/etc/logstash/conf.d/
Deploying and configuring filebeat
Collect log entries with filebeat and send them to logstash

# dpkg -i filebeat-7.12.1-amd64.deb
# grep -v "#" /etc/filebeat/filebeat.yml | grep "^[^$]"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  fields:
    project: filebeat-systemlog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  fields:
    project: filebeat-nginx-accesslog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/error.log
  fields:
    project: filebeat-nginx-errorlog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
output.logstash:
  hosts: ["172.20.22.30:5044","172.20.22.30:5045"]
  enabled: true
  worker: 2
  compression_level: 3
  loadbalance: true

# systemctl start filebeat
# scp /etc/filebeat/filebeat.yml root@172.20.22.26:/etc/filebeat/
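filebeat ships with built-in checks that are worth running before starting the service; both are standard subcommands:

# filebeat test config
# filebeat test output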
logstash server configuration
logstash server 2, 172.20.22.23: send the logs buffered in redis to elasticsearch
# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/redis-to-es.conf
input {
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-accesslog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-errorlog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-systemlog"
    host => "172.20.23.157"
    port => "6379"
    db => "0"
    password => "12345678"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-systemlog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-accesslog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-errorlog-%{+YYYY.MM.dd}"
  }}
}
# systemctl restart logstash.service
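Once events flow end to end, the filebeat-* indices should appear on the cluster:

# curl 'http://172.20.22.28:9200/_cat/indices?v' | grep filebeat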
redis installation and configuration
redis server: 172.20.23.157

# yum install -y redis
# vim /etc/redis.conf
####change the following settings
bind 0.0.0.0
....
save ""
....
requirepass 12345678
....
# systemctl start redis
###test the redis connection
# redis-cli
127.0.0.1:6379> auth 12345678
OK
127.0.0.1:6379> ping
PONG

###verify the collected log entries
127.0.0.1:6379[1]> keys *
1) "filebeat-redis-nginx-accesslog"
2) "filebeat-redis-nginx-errorlog"
127.0.0.1:6379[1]> select 0
OK
127.0.0.1:6379> keys *
1) "filebeat-redis-systemlog"
Verify the generated indices with the head plugin
Verify the collected log entries in Kibana
