ELK Log Collection

Setting up ELK
ELK is a stack of three open-source tools: Elasticsearch, Logstash, and Kibana. Developed by Elastic (official site elastic.co), it is a complete enterprise-grade solution for collecting, analyzing, and visualizing logs, with each of the three components handling a different part of the pipeline. The main strengths of the ELK stack:

- Flexible processing: Elasticsearch is a real-time full-text index with powerful search capabilities
- Simple configuration: the Elasticsearch API is entirely JSON over HTTP, Logstash uses modular configuration, and Kibana's configuration file is simpler still
- Efficient retrieval: thanks to a well-designed architecture, queries are real-time yet can return from tens of billions of documents within seconds
- Linear cluster scaling: both Elasticsearch and Logstash scale out linearly
- Polished front end: Kibana's UI is visually appealing and simple to operate
Elasticsearch
Elasticsearch is a highly scalable open-source full-text search and analytics engine. It provides real-time full-text search over data, supports distributed deployment for high availability, exposes an HTTP API, and can process large volumes of log data from sources such as nginx, tomcat, and the system logs.

Key features of Elasticsearch:

- Real-time search and real-time analytics
- Distributed architecture with real-time file storage
- Document-oriented: every object is a document
- Highly available and easy to scale, with support for clustering, sharding, and replication
- Friendly interface, with JSON support
Deploying Elasticsearch
GitHub - elastic/elasticsearch: Free and Open, Distributed, RESTful Search Engine (developed in Java)
On CentOS, disable the firewall and SELinux; on Ubuntu, disable the firewall. Keep the clocks of all servers synchronized.
Server 1: 172.20.22.24
Server 2: 172.20.22.27
Server 3: 172.20.22.28
### Ubuntu
# apt install -y ntpdate
# rm -f /etc/localtime
# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# hwclock --systohc
# ntpdate -u ntp1.aliyun.com
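To keep the clocks from drifting afterwards, a periodic sync can be scheduled; a minimal sketch (the interval and NTP server are arbitrary choices, not part of the original setup):
# crontab -e
*/30 * * * * /usr/sbin/ntpdate -u ntp1.aliyun.com && /sbin/hwclock --systohc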
" Z) V) o4 I- r7 z###设置内核参数
' M/ a, r/ ^. i5 n( w# vim /etc/security/limits.conf
: t2 p5 g3 r( p& k*                soft        nofile                5000001 h0 B5 _" C6 s- y2 D0 I
*                hard        nofile                500000
; I5 ?' p' ~6 g" L/ v. E# vim /etc/security/limits.d/20-nproc.conf
' q% I7 V: q+ @/ ~/ R! O! J9 ?*          soft    nproc     40965 ~- z0 L" k8 E. i
elasticsearch soft    nproc     unlimited
( d6 s3 N& L; Xroot       soft    nproc     unlimited: G' |/ X# k$ C
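After logging in again, the raised limits can be confirmed from the shell (the values should match the settings above):
# ulimit -n    ## max open files, expected 500000
# ulimit -u    ## max user processes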
### Install the JDK
# apt install -y openjdk-8-jdk
### Install on every node
# ls -lrt elasticsearch-7.12.1-amd64.deb
# dpkg -i elasticsearch-7.12.1-amd64.deb
### Node 1 configuration file
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic        # cluster name
node.name: node1                 # this node's name within the cluster
path.data: /data/elasticsearch   # data directory
path.logs: /data/elasticsearch   # log directory
bootstrap.memory_lock: true      # lock enough memory at startup to keep data out of swap
network.host: 172.20.22.24       # listen IP
http.port: 9200                  # listen port
### Discovery list of the cluster's nodes
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
### Nodes eligible for master election when the cluster initializes
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
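Note that with bootstrap.memory_lock: true, Elasticsearch refuses to start if it is not allowed to lock memory; on a deb/systemd install this usually means raising the memlock limit for the unit. A sketch, assuming the stock unit name:
# systemctl edit elasticsearch
[Service]
LimitMEMLOCK=infinity

# systemctl daemon-reload
# systemctl restart elasticsearch.service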
### Node 2
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node2
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.27
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
### Node 3
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node3
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.28
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
Verify in a browser:
http://$IP:9200
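A quick check from the shell works just as well; any of the three nodes can answer:
# curl http://172.20.22.24:9200
# curl http://172.20.22.24:9200/_cluster/health?pretty   ## expect "status" : "green" once all three nodes have joined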
Logstash
Logstash is a data-collection engine with real-time pipelining capabilities. Through its plugins it collects and forwards logs, supports filtering, and can parse plain logs as well as custom JSON-formatted ones, finally shipping the processed events to Elasticsearch.

Deploying Logstash
Logstash is an open-source data-collection engine that scales horizontally. It is the ELK component with the largest number of plugins; it can accept data from many different sources and send the unified output to one or more destinations.
https://github.com/elastic/logstash #GitHub
Elastic Stack and Product Documentation | Elastic
Environment preparation: disable the firewall and SELinux, and install a Java runtime.
# apt install -y openjdk-8-jdk
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
### Start-up test
# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout {}}'   ## standard input to standard output
hello world!~
{
      "@version" => "1",
    "@timestamp" => 2022-04-13T06:16:32.212Z,
          "host" => "jenkins-slave",
       "message" => "hello world!~"
}
8 R" T8 m& m- {' ^1 a###通过配置文件启动
9 h" y( t) p( ]  c# cd /etc/logstash/conf.d/
( h! w  M' ]/ s* M# cat test.conf
8 N4 o  X; Y$ }( q1 minput { ! }( d/ X& @, E% K/ y6 R" }
  stdin {}
7 f& |& E" R2 M: z, b) t}$ O( V# F& e4 J2 l
output {2 U; @2 ], b' M9 }* D- q! N- B
  stdout {}6 s' L7 }' {0 u- S8 @9 h/ S
}
0 r% [; n# V) t& v" R# ?: C
; ~5 x, K! U+ s% [###通过指定配置文件启动: z$ q+ R3 _) I
# /usr/share/logstash/bin/logstash -f test.conf -t   ##检查配置文件语法- G; h# [1 l" Y  u
# /usr/share/logstash/bin/logstash -f test.conf8 y6 W& W4 }) ~! p
- b6 W$ K5 z- F8 `6 y+ f. l9 n
#### Output to Elasticsearch
# cat test.conf
input {
  stdin {}
}
output {
  #stdout {}
  elasticsearch {
    hosts => ["172.20.22.24:9200"]
    index => "magedu-m63-test-%{+YYYY.MM.dd}"
  }
}
# /usr/share/logstash/bin/logstash -f test.conf
version1
version2
version3
test1
test2
test3
#### Check the collected data on the Elasticsearch server
# ls -lrt /data/elasticsearch/nodes/0/indices/
total 4
drwxr-xr-x 4 elasticsearch elasticsearch 4096 Apr 13 14:36 DyCv8w7mTleuAvlItAJlWA
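The new index can also be confirmed over the HTTP API, which is friendlier than inspecting the data directory:
# curl http://172.20.22.24:9200/_cat/indices?v   ## the magedu-m63-test-* index should appear in the listing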
Kibana
Kibana provides a web interface for browsing the data in Elasticsearch. It looks data up through the Elasticsearch API and visualizes it in the front end; for suitably formatted data it can also produce tables, bar charts, pie charts, and so on.
Deploying Kibana
# ls -lrt kibana-7.12.1-amd64.deb
# dpkg -i kibana-7.12.1-amd64.deb
# grep "^[^$|#]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "172.20.22.24"
elasticsearch.hosts: ["http://172.20.22.27:9200"]
i18n.locale: "zh-CN"
# systemctl restart kibana
Browse to http://172.20.22.24:5601
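Before opening the browser, an optional sanity check from the shell:
# ss -tnlp | grep 5601                          ## confirm kibana is listening
# curl -s http://172.20.22.24:5601/api/status   ## returns JSON including the overall service state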
0 g+ \2 ?. D* {1 R 7 H9 w; A+ ^' i6 C5 v
Stack Management-->索引模式-->创建索引模式
: s& C3 d0 o) p" f) D/ m; W3 j* ~: \4 Y/ {2 X! @7 K: N& t" |/ r5 w" p# ^

3 r  q5 ^# n6 F, q2 G) b+ Q3 Q' U0 o0 a( \选择时间字段' _' W8 C; W, ^# k1 H! g

. N( w. a0 J6 {2 }+ S8 f查看对应创建的索引日志信息
$ R( g: N. Z0 O
6 ~8 x/ e# n/ w5 R0 T6 B' t: I
. c" h6 q& E2 \  k
+ }% O3 z  V0 k* H3 \4 [" _7 d& m收集tomcat日志 ( Q) W4 {! S0 K  }7 x2 p3 t8 Z
收集tomcat服务器的访问日志以及tomcat错误日志进行实时统计,在kibana页面进行搜索展现,每台tomcat服务器要安装logstash负责收集日志,然后将日志转发给elasticsearch进行分析,再通过kibana在前端展现
" j7 O! z& {$ h: o2 U% c  t
Deploying Tomcat
#### tomcat1, 172.20.22.30
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
### Change the tomcat access-log format to JSON (see the Valve sketch after this session)
# vim conf/server.xml
....

....
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web1 172.20.22.30" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

### Access test
# curl http://172.20.22.30:8080/myapp/
### Check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-13.log
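The Valve element edited in server.xml is elided above; a minimal sketch of a JSON-style AccessLogValve whose prefix matches the file being tailed (the field names in the pattern are illustrative assumptions, not the author's exact format):
<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="logs" prefix="tomcat_access_log" suffix=".log"
       pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;method&quot;:&quot;%m&quot;,&quot;uri&quot;:&quot;%U&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;bytes&quot;:&quot;%b&quot;,&quot;elapsed&quot;:&quot;%T&quot;}" />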
#### tomcat2, 172.20.22.26
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
### Change the tomcat access-log format to JSON (same Valve as on tomcat1)
# vim conf/server.xml
....

....
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web2 172.20.22.26" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

### Access test
# curl http://172.20.22.26:8080/myapp/
### Check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-14.log
Deploying Logstash
Install Logstash on each Tomcat server to collect the tomcat and system logs

#### tomcat1, 172.20.22.30
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# cd /etc/logstash/conf.d
# cat tomcat.conf
input {
  file {
    path => "/usr/local/tomcat/logs/tomcat_access_log*.log"
    type => "tomcat-log"
    start_position => "beginning"
    stat_interval => "3"
  }
  file {
    path => "/var/log/syslog"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "3"
  }
}
output {
  if [type] == "tomcat-log" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-tomcat-%{+YYYY.MM.dd}"
  }}
  if [type] == "systemlog" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-syslog-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f tomcat.conf -t
# systemctl daemon-reload
# systemctl start logstash.service
# scp tomcat.conf root@172.20.22.26:/etc/logstash/conf.d/
#### tomcat2, 172.20.22.26
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# systemctl daemon-reload
# systemctl start logstash.service
& Z# S% z! P1 v1 O通过kibana展现
0 I, S" z# T4 N6 M$ [$ N2 E) [5 z( \; O* Z- Y, P

Collecting Java logs
Multi-line records are handled with the multiline codec plugin, which merges several physical lines into one event; its what option selects whether a matching line is merged with the preceding lines or the following ones.

Multiline codec plugin | Logstash Reference [8.1] | Elastic
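For example, each record in logstash-plain.log starts with a timestamp in square brackets, while the continuation lines of a Java stack trace do not. With pattern => "^\[", negate => true, and what => "previous", every line that does not begin with [ is appended to the event before it, so a whole stack trace becomes one event (illustrative lines):
[2022-04-13T14:23:01,123][ERROR][logstash.agent] Failed to execute action
java.lang.IllegalStateException: ...            <- merged into the event above
    at org.logstash.execution....               <- merged as well
[2022-04-13T14:23:02,456][INFO ][logstash.agent] ...   <- starts a new event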
Adding the Logstash configuration file
### Collect Logstash's own log, 172.20.22.26
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
   }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
### Collect Logstash's own log, 172.20.22.30
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
   }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
View the collected logs in Kibana
Collecting nginx logs with Filebeat, Redis, and Logstash
Filebeat collects the logs and sends them to logstash1; logstash1 forwards them to Redis; finally logstash2 reads from Redis and sends them on to Elasticsearch.

web1: 172.20.22.30, with nginx, filebeat, and logstash deployed

web2: 172.20.22.26, with nginx, filebeat, and logstash deployed

Logstash server 2: 172.20.22.23; Redis server: 172.20.23.157
nginx server configuration
Deploying nginx
# wget http://nginx.org/download/nginx-1.18.0.tar.gz
# tar xf nginx-1.18.0.tar.gz
# cd nginx-1.18.0
# ./configure --prefix=/usr/local/nginx --with-http_ssl_module
# make -j4 && make install
# /usr/local/nginx/sbin/nginx
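Since the Logstash beats input below decodes events with codec => "json", the nginx access log itself should be written as JSON. A sketch of a matching log_format for the http block of /usr/local/nginx/conf/nginx.conf (the field names are assumptions, not the author's exact format):
log_format json escape=json '{"clientip":"$remote_addr",'
                            '"timestamp":"$time_iso8601",'
                            '"request":"$request",'
                            '"status":"$status",'
                            '"bytes":"$body_bytes_sent"}';
access_log logs/access.log json;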
Deploying and configuring Logstash
Send the logs collected by filebeat on to Redis

# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/beats-to-redis.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
  beats {
    port => 5045
    codec => "json"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-systemlog"
      host => "172.20.23.157"
      port => "6379"
      db => "0"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-accesslog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-errorlog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
}
# systemctl start logstash
# scp /etc/logstash/conf.d/beats-to-redis.conf root@172.20.22.26:/etc/logstash/conf.d/
Deploying and configuring Filebeat
Filebeat collects the log entries and sends them to Logstash

# dpkg -i filebeat-7.12.1-amd64.deb
# grep -v "#" /etc/filebeat/filebeat.yml | grep "^[^$]"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  fields:
    project: filebeat-systemlog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  fields:
    project: filebeat-nginx-accesslog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/error.log
  fields:
    project: filebeat-nginx-errorlog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
output.logstash:
  hosts: ["172.20.22.30:5044","172.20.22.30:5045"]
  enabled: true
  worker: 2
  compression_level: 3
  loadbalance: true

# systemctl start filebeat
# scp /etc/filebeat/filebeat.yml root@172.20.22.26:/etc/filebeat/
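Filebeat can validate its configuration and output connectivity itself:
# filebeat test config   ## validate filebeat.yml
# filebeat test output   ## check that both logstash endpoints are reachable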
! \" v# n7 ^1 s" U" M+ M$ llogstash服务器配置
, ~! c' w$ \8 a6 Clogstash服务器2:172.20.22.23,把redis缓存的日志发送到elasticsearch
1 w# X' Y$ C, v! x' M. c9 v- Y 4 j+ `$ e" K8 q, A/ n
# apt install -y openjdk-8-jdk
7 [/ ~5 A, x7 P8 M# dpkg -i logstash-7.12.1-amd64.deb/ l; ~* |! }' g9 ]! K- B$ d7 u
# cat /etc/logstash/conf.d/redis-to-es.conf 7 u; g' a  `4 ^5 q
input {2 R$ B9 h/ p3 m1 t. a
  redis {
. J. B0 }, ~1 ^/ T( W# Q    data_type => "list"
* v: J. Z* m5 s& D/ T& f    key => "filebeat-redis-nginx-accesslog"/ X2 G1 {& b9 t7 |" I
    host => "172.20.23.157"
- ^) o' }, x' \" \9 o    port => "6379"
. K  d1 a  h3 v( _5 z( r+ P! U. ~    db => "1"
: k" G- T9 Q% R( \& s* G( C4 t    password => "12345678"1 @- T) Y& s& x* ~6 o# F4 p
  }
% o. ]- B$ {* o, W/ [  redis {
" t( `6 r- v4 L- o; ]    data_type => "list"& n( b3 z* l7 Q2 C0 [
    key => "filebeat-redis-nginx-errorlog": U5 @& O: p& }3 K
    host => "172.20.23.157"
  {* a3 M) T( d0 c# O8 S5 j' B    port => "6379"
4 e  T  j6 T' g" j  \9 W0 {    db => "1"; U& }. z$ P! w9 k
    password => "12345678". s- O. h( k# V6 g) b- X
  }1 K0 R% P! G" k* E% l9 B  j1 j
  redis {
* X4 A: V, J4 J    data_type => "list"+ F4 [7 S2 @- c5 F* q
    key => "filebeat-redis-systemlog"6 N5 E3 V( H2 q8 u. b" _0 A
    host => "172.20.23.157"+ Z( N6 P' G/ u1 M' C# q
    port => "6379"
9 |$ C* g. T9 g' ^/ r" A. I    db => "0"
7 T' t0 N+ y& g. p. e2 \) L" }4 t    password => "12345678"
! G. `1 E  g) y3 P+ [  }/ V6 l( M& h7 Q( v
}8 M  w7 @% Q3 X' e+ {0 v) w
output {: G  f/ s+ h( `8 q- P& }" [" A
  if [fields][project] == "filebeat-systemlog" {% j* M7 n8 q0 `1 j0 {1 T+ b$ v
    elasticsearch {
1 O6 b) `# \) V5 I5 ^# w+ s. ^      hosts => ["172.20.22.28:9200"]) M& I# M3 D' t- X: R
      index => "filebeat-systemlog-%{+YYYY.MM.dd}"5 B9 j' b+ u  o7 U0 q1 I9 V
  }}
& U3 V- Y* t. r- H  if [fields][project] == "filebeat-nginx-accesslog" {( U3 {1 D. S: D/ Z6 r, ^/ _
    elasticsearch {; b" i7 Z+ h4 r7 c! Q, K
      hosts => ["172.20.22.28:9200"]" U' g' k& K/ u! p" _
      index => "filebeat-nginx-accesslog-%{+YYYY.MM.dd}"" l- L7 E3 @* ]0 ^# t
  }}
2 S% Z4 {$ K# v, {# P6 U' g: j  if [fields][project] == "filebeat-nginx-errorlog" {
) Y9 i$ E$ }/ Z* W& j# y# f    elasticsearch {4 {  X" |% d% k* x
      hosts => ["172.20.22.28:9200"]
+ }1 P4 l/ e! h4 c$ ^- F. K# w3 w      index => "filebeat-nginx-errorlog-%{+YYYY.MM.dd}"  d% G1 [# ^( Z  H: @! ~. n8 |
  }}) k9 L" N2 J5 B7 ]( X! h7 u) ?, J. B
}% ^: O; @  ]3 ~
# systemctl restart logstash.service
Installing and configuring Redis
Redis server: 172.20.23.157

# yum install -y redis
# vim /etc/redis.conf
#### Change the following settings
bind 0.0.0.0
....
save ""
....
requirepass 12345678
....
# systemctl start redis
### Test the Redis connection
# redis-cli
127.0.0.1:6379> auth 12345678
OK
127.0.0.1:6379> ping
PONG

### Verify the collected log entries
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> keys *
1) "filebeat-redis-nginx-accesslog"
2) "filebeat-redis-nginx-errorlog"
127.0.0.1:6379[1]> select 0
OK
127.0.0.1:6379> keys *
1) "filebeat-redis-systemlog"
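The length of each list shows whether the downstream Logstash is draining the queue; a steadily growing list means events are piling up (the value shown is illustrative):
127.0.0.1:6379> llen filebeat-redis-systemlog
(integer) 20   ## illustrative value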
Verify the generated indices with the head plugin

Verify the collected log entries in Kibana