ELK Log Collection

Setting up ELK
ELK is a stack of three open-source tools: elasticsearch, logstash, and kibana. Developed by Elastic, it is a complete enterprise-grade solution for log collection, analysis, and visualization, with each of the three components serving a distinct role. The official site is elastic.co. The main advantages of the ELK stack:

- Flexible processing: elasticsearch performs real-time full-text indexing with powerful search capabilities
- Simple configuration: the elasticsearch API is entirely JSON-based, logstash is configured through modules, and the kibana configuration file is simpler still
- Efficient retrieval: thanks to its design, even real-time queries across tens of billions of documents can return within seconds
- Linear cluster scaling: both elasticsearch and logstash scale out flexibly
- Polished front end: kibana's UI is attractive and easy to operate
Elasticsearch
elasticsearch is a highly scalable open-source full-text search and analytics engine: it offers real-time full-text search over data, supports distributed operation for high availability, provides an API, and can process large volumes of log data such as nginx, tomcat, and system logs.

Characteristics of elasticsearch:

- Real-time search and real-time analytics
- Distributed architecture with real-time file storage
- Document-oriented: every object is a document
- High availability and easy scaling, with support for clustering, sharding, and replication
- Friendly interface, with JSON support

Deploying elasticsearch
GitHub - elastic/elasticsearch: Free and Open, Distributed, RESTful Search Engine (developed in Java)
On CentOS, disable the firewall and SELinux; on Ubuntu, disable the firewall. Keep the clocks of all servers in sync.
Server 1: 172.20.22.24

Server 2: 172.20.22.27

Server 3: 172.20.22.28

###ubuntu
# apt install -y ntpdate
# rm -f /etc/localtime
# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# hwclock --systohc
# ntpdate -u ntp1.aliyun.com
###set kernel parameters
# vim /etc/security/limits.conf
*                soft        nofile                500000
*                hard        nofile                500000
# vim /etc/security/limits.d/20-nproc.conf
*          soft    nproc     4096
elasticsearch soft    nproc     unlimited
root       soft    nproc     unlimited
###install jdk
# apt install -y openjdk-8-jdk
###install on every node
# ls -lrt elasticsearch-7.12.1-amd64.deb
# dpkg -i elasticsearch-7.12.1-amd64.deb
###node 1 configuration file
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic        #cluster name
node.name: node1                 #this node's name within the cluster
path.data: /data/elasticsearch   #data directory
path.logs: /data/elasticsearch   #log directory
bootstrap.memory_lock: true      #lock enough memory at startup so data is not written to swap
network.host: 172.20.22.24       #listen IP
http.port: 9200                  #listen port
###discovery list of the cluster's nodes
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
###nodes eligible to be elected master when the cluster is bootstrapped
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
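
Note: with bootstrap.memory_lock: true, the service may fail to start if systemd does not allow the process to lock enough memory. A common fix, assuming the packaged systemd unit is used (systemctl edit creates an override file):

# systemctl edit elasticsearch
[Service]
LimitMEMLOCK=infinity
# systemctl daemon-reload
# systemctl restart elasticsearch.service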
###node 2
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node2
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.27
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
###node 3
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node3
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.28
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
Verify in a browser:

http://$IP:9200
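
Cluster state can also be checked from the command line via the standard cluster health API on any node; a "green" status means all primary and replica shards are allocated:

# curl http://172.20.22.24:9200/_cluster/health?pretty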
Logstash
Logstash is a data collection engine with real-time pipelining capability. Through its plugins it collects and forwards logs, supports filtering, parses plain logs as well as custom JSON-formatted logs, and finally ships the processed logs to elasticsearch.
Deploying Logstash

Logstash is an open-source data collection engine that scales horizontally; it is also the ELK component with the most plugins in use. It can receive data from many different sources and output it uniformly to one or more specified destinations.
https://github.com/elastic/logstash #GitHub
Elastic Stack and Product Documentation | Elastic
Environment preparation: disable the firewall and SELinux, and install a Java runtime.
# apt install -y openjdk-8-jdk
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
###startup test
# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout {}}'   ##stdin in, stdout out
hello world!~
{
      "@version" => "1",
    "@timestamp" => 2022-04-13T06:16:32.212Z,
          "host" => "jenkins-slave",
       "message" => "hello world!~"
}
###start from a configuration file
# cd /etc/logstash/conf.d/
# cat test.conf
input {
  stdin {}
}
output {
  stdout {}
}

###start with the specified configuration file
# /usr/share/logstash/bin/logstash -f test.conf -t   ##check the configuration syntax
# /usr/share/logstash/bin/logstash -f test.conf
####output to elasticsearch
# cat test.conf
input {
  stdin {}
}
output {
  #stdout {}
  elasticsearch {
    hosts => ["172.20.22.24:9200"]
    index => "magedu-m63-test-%{+YYYY.MM.dd}"
  }
}
# /usr/share/logstash/bin/logstash -f test.conf
version1
version2
version3
test1
test2
test3
####check the collected data on the elasticsearch server
# ls -lrt /data/elasticsearch/nodes/0/indices/
total 4
drwxr-xr-x 4 elasticsearch elasticsearch 4096 Apr 13 14:36 DyCv8w7mTleuAvlItAJlWA
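
The test index can also be queried directly over the REST API, for example with a URI search (index name as configured above):

# curl 'http://172.20.22.24:9200/magedu-m63-test-*/_search?q=message:version1&pretty'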
kibana

kibana provides a web interface for viewing the data in elasticsearch. It looks data up mainly through the elasticsearch API and renders it in the front end; it can also turn data of a given format into tables, bar charts, pie charts, and so on.
Deploying kibana
# ls -lrt kibana-7.12.1-amd64.deb
# dpkg -i kibana-7.12.1-amd64.deb
# grep "^[^$|#]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "172.20.22.24"
elasticsearch.hosts: ["http://172.20.22.27:9200"]
i18n.locale: "zh-CN"
# systemctl restart kibana
Visit http://172.20.22.24:5601 in a browser

Stack Management --> Index patterns --> Create index pattern
Select the time field
View the log entries of the newly created index
Collecting tomcat logs

Collect the tomcat servers' access logs and error logs for real-time statistics, searched and displayed in kibana. Each tomcat server runs logstash to collect its logs and forward them to elasticsearch for analysis, and kibana presents the results in the front end.
Deploying tomcat
####tomcat1, 172.20.22.30
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###change the tomcat access log format to json
# vim conf/server.xml
....
....
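
The two elided lines above are the Valve definition in the <Host> section of server.xml. A sketch of a typical JSON-style AccessLogValve (field names are illustrative; the prefix matches the tomcat_access_log file name used below):

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="tomcat_access_log" suffix=".log"
       pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;ClientUser&quot;:&quot;%l&quot;,&quot;authenticated&quot;:&quot;%u&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;request&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;Query?string&quot;:&quot;%q&quot;,&quot;partner&quot;:&quot;%{Referer}i&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}"/>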
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web1 172.20.22.30" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###access test
# curl http://172.20.22.30:8080/myapp/
###check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-13.log
####tomcat2, 172.20.22.26
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###change the tomcat access log format to json (same Valve as on tomcat1)
# vim conf/server.xml
....
....
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web2 172.20.22.26" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###access test
# curl http://172.20.22.26:8080/myapp/
###check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-14.log
Deploying logstash

Install logstash on the tomcat servers to collect the tomcat and system logs.
####tomcat1, 172.20.22.30
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# cd /etc/logstash/conf.d
# cat tomcat.conf
input {
  file {
    path => "/usr/local/tomcat/logs/tomcat_access_log*.log"
    type => "tomcat-log"
    start_position => "beginning"
    stat_interval => "3"
  }
  file {
    path => "/var/log/syslog"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "3"
  }
}
output {
  if [type] == "tomcat-log" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-tomcat-%{+YYYY.MM.dd}"
  }}
  if [type] == "systemlog" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-syslog-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f tomcat.conf -t
# systemctl daemon-reload
# systemctl start logstash.service
# scp tomcat.conf root@172.20.22.26:/etc/logstash/conf.d/
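
To confirm that the elk-tomcat and elk-syslog indices are being created, query the _cat API on any elasticsearch node:

# curl 'http://172.20.22.24:9200/_cat/indices?v'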

####tomcat2, 172.20.22.26
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# systemctl daemon-reload
# systemctl start logstash.service
Display in kibana
Collecting Java logs

Use the codec multiline plugin for multi-line matching: it merges multiple physical lines into a single event, and its what option specifies whether a matched line is merged with the preceding or the following lines.

Multiline codec plugin | Logstash Reference [8.1] | Elastic
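
With pattern => "^\[", negate => true, and what => "previous", every line that does not begin with "[" is appended to the event opened by the most recent "[...]" line. A hypothetical logstash-plain.log stack trace such as:

[2022-04-14T10:15:01,123][ERROR][logstash.agent] failed to execute action
java.lang.IllegalStateException: something broke
    at some.package.SomeClass.someMethod(SomeClass.java:42)

is therefore shipped to elasticsearch as one event instead of three.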

Add the logstash configuration file
###collect logstash's own logs, 172.20.22.26
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
   }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
###collect logstash's own logs, 172.20.22.30
Create the same /etc/logstash/conf.d/java.conf as above, then:
# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
View the collected logs in kibana
Collecting nginx logs with filebeat, redis, and logstash

filebeat collects the logs and sends them to logstash1; logstash1 forwards them to redis; finally, logstash2 reads from redis and sends the logs to elasticsearch.

web1: 172.20.22.30, with nginx, filebeat, and logstash deployed

web2: 172.20.22.26, with nginx, filebeat, and logstash deployed

logstash server 2: 172.20.22.23; redis server: 172.20.23.157
nginx server configuration
Deploying nginx
# wget http://nginx.org/download/nginx-1.18.0.tar.gz
# tar xf nginx-1.18.0.tar.gz
# cd nginx-1.18.0
# ./configure --prefix=/usr/local/nginx --with-http_ssl_module
# make -j4 && make install
# /usr/local/nginx/sbin/nginx
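
A quick way to confirm that requests land in the access log filebeat will tail later (run on the nginx host itself):

# curl -s http://127.0.0.1/ >/dev/null
# tail -1 /usr/local/nginx/logs/access.log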
Deploying and configuring logstash

Send the logs collected by filebeat on to redis.
# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/beats-to-redis.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
  beats {
    port => 5045
    codec => "json"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-systemlog"
      host => "172.20.23.157"
      port => "6379"
      db => "0"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-accesslog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-errorlog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
}
# systemctl start logstash
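
Once logstash is running it should be listening on both beats ports; a quick check, assuming iproute2's ss is available:

# ss -tnlp | grep -E '5044|5045'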
# scp /etc/logstash/conf.d/beats-to-redis.conf root@172.20.22.26:/etc/logstash/conf.d/
Deploying and configuring filebeat

Collect logs with filebeat and send them to logstash.
# dpkg -i filebeat-7.12.1-amd64.deb
# grep -v "#" /etc/filebeat/filebeat.yml | grep "^[^$]"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  fields:
    project: filebeat-systemlog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  fields:
    project: filebeat-nginx-accesslog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/error.log
  fields:
    project: filebeat-nginx-errorlog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
output.logstash:
  hosts: ["172.20.22.30:5044","172.20.22.30:5045"]
  enabled: true
  worker: 2
  compression_level: 3
  loadbalance: true

# systemctl start filebeat
# scp /etc/filebeat/filebeat.yml root@172.20.22.26:/etc/filebeat/
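
filebeat can validate its configuration and the connection to the logstash endpoints before the pipeline is relied on:

# filebeat test config
# filebeat test output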
logstash server configuration

logstash server 2: 172.20.22.23, sends the logs buffered in redis on to elasticsearch.
# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/redis-to-es.conf
input {
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-accesslog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-errorlog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-systemlog"
    host => "172.20.23.157"
    port => "6379"
    db => "0"
    password => "12345678"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-systemlog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-accesslog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-errorlog-%{+YYYY.MM.dd}"
  }}
}
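
The pipeline can be syntax-checked before restarting, and once it is running the new filebeat-* indices should appear on the elasticsearch node:

# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-to-es.conf -t
# curl 'http://172.20.22.28:9200/_cat/indices?v'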
# systemctl restart logstash.service

redis installation and configuration
redis server: 172.20.23.157
# yum install -y redis
# vim /etc/redis.conf
####change the following settings
bind 0.0.0.0
....
save ""
....
requirepass 12345678
....
# systemctl start redis
###test the redis connection
# redis-cli
127.0.0.1:6379> auth 12345678
OK
127.0.0.1:6379> ping
PONG

###verify the collected log entries
127.0.0.1:6379[1]> keys *
1) "filebeat-redis-nginx-accesslog"
2) "filebeat-redis-nginx-errorlog"
127.0.0.1:6379[1]> select 0
OK
127.0.0.1:6379> keys *
1) "filebeat-redis-systemlog"
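
Each list element is one JSON-encoded event; to peek at the oldest queued entry for a key, something like:

127.0.0.1:6379> LRANGE filebeat-redis-systemlog 0 0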
Verify the generated indices through the head plugin

Verify the collected log entries in kibana