ELK Log Collection

Setting Up ELK
ELK is a stack of three open-source tools: Elasticsearch, Logstash, and Kibana. Developed by Elastic (official site: elastic.co), it is a complete enterprise-grade solution for log collection, analysis, and visualization, with each of the three components handling a different part of the job. The main advantages of the ELK stack:
" L- @& I' y5 U$ h$ d) A5 x2 M
8 L# L  i  W9 R/ R7 \  l; k处理方式灵活:elasticsearch是实时全文索引,具有强大的搜索功能配置相当简单:elasticsearch的API全部使用JSON接口,logstash使用模块配置,kibana的配置文件部分更简单检索性能高效:基于优秀的设计,虽然每次查询都是实时,但是也可以达到百亿数据的查询秒级响应。集群线性扩展:elasticsearch和logstash都可以灵活线性扩展前端操作绚丽:kibana的前端设计比较绚丽,而且操作简单! M3 ~$ f0 S# w% F% h2 a
Elasticsearch
Elasticsearch is a highly scalable open-source full-text search and analytics engine. It provides real-time full-text search, supports distributed deployment for high availability, and exposes an API, making it suitable for processing large volumes of log data from nginx, Tomcat, system logs, and so on.
Key features of Elasticsearch:

- Real-time search and real-time analysis
- Distributed architecture with real-time file storage
- Document-oriented: every object is a document
- High availability and easy scaling, with clustering, sharding, and replication
- Friendly API that speaks JSON
Deploying Elasticsearch
GitHub - elastic/elasticsearch: Free and Open, Distributed, RESTful Search Engine (written in Java)

On CentOS, turn off the firewall and SELinux; on Ubuntu, turn off the firewall. Keep the clocks of all servers synchronized.
Server 1: 172.20.22.24
Server 2: 172.20.22.27
Server 3: 172.20.22.28
###ubuntu
# apt install -y ntpdate
# rm -f /etc/localtime
# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# hwclock --systohc
# ntpdate -u ntp1.aliyun.com
###set kernel parameters
# vim /etc/security/limits.conf
*                soft        nofile                500000
*                hard        nofile                500000
# vim /etc/security/limits.d/20-nproc.conf
*             soft    nproc     4096
elasticsearch soft    nproc     unlimited
root          soft    nproc     unlimited
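These limits only apply to new login sessions; after reconnecting, a quick sanity check (an addition, not part of the original post) confirms they took effect:

###verify the new limits in a fresh session
# ulimit -n    ##open-files limit, should now print 500000
# ulimit -u    ##max user processes, per 20-nproc.conf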
8 b" Y# |8 E& g/ R$ ^( w6 s% J/ t###安装jdk
7 e: f$ t! u9 y, a# e# apt install -y openjdk-8-jdk$ Z0 M7 V4 J1 V# K0 g/ w  v( D2 |6 k
! Q5 O7 {, j! J& {2 M
###每个节点都安装! ]. x6 q. D- a. X$ ~/ g" d
# ls -lrt elasticsearch-7.12.1-amd64.deb1 j; w' w  x/ z$ B3 r6 F" n5 y& W2 k
# dpkg -i elasticsearch-7.12.1-amd64.deb+ s! A8 V5 {1 t8 y4 b5 U6 e0 A
###node 1 configuration file
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic        #cluster name
node.name: node1                 #this node's name within the cluster
path.data: /data/elasticsearch   #data directory
path.logs: /data/elasticsearch   #log directory
bootstrap.memory_lock: true      #lock enough memory at startup to keep data out of swap
network.host: 172.20.22.24       #listen IP
http.port: 9200                  #listen port
###discovery list of the cluster's nodes
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
###nodes eligible to be elected master when the cluster bootstraps
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
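Note (an addition, not from the original post): with bootstrap.memory_lock: true, the service may fail to start if systemd refuses to let it lock memory. A minimal sketch of the usual fix is a unit override:

# systemctl edit elasticsearch
[Service]
LimitMEMLOCK=infinity
# systemctl daemon-reload
# systemctl restart elasticsearch.service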
###node 2
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node2
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.27
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
###node 3
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node3
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.28
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
Verify in a browser:

http://$IP:9200
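Cluster formation can also be checked from the command line (a quick check added here, not in the original post):

# curl http://172.20.22.24:9200/_cluster/health?pretty   ##expect "number_of_nodes" : 3 and a green status
# curl http://172.20.22.24:9200/_cat/nodes?v             ##lists all three nodes and marks the elected master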
Logstash
Logstash is a data collection engine with real-time pipelining. Through its plugins it can collect, filter, and forward logs, parsing both plain-text and custom JSON-formatted logs, and finally ships the processed events to Elasticsearch.
Deploying Logstash
Logstash is an open-source data collection engine that scales horizontally. It is the ELK component with the richest plugin ecosystem: it can receive data from many different sources and output it to one or more destinations.
https://github.com/elastic/logstash #GitHub

Elastic Stack and Product Documentation | Elastic
Environment prep: turn off the firewall and SELinux, and install a Java environment.

# apt install -y openjdk-8-jdk
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
###startup test
# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout {}}'   ##stdin to stdout
hello world!~
{
      "@version" => "1",
    "@timestamp" => 2022-04-13T06:16:32.212Z,
          "host" => "jenkins-slave",
       "message" => "hello world!~"
}
###start from a configuration file
# cd /etc/logstash/conf.d/
# cat test.conf
input {
  stdin {}
}
output {
  stdout {}
}
###start with the specified configuration file
# /usr/share/logstash/bin/logstash -f test.conf -t   ##check the configuration file syntax
# /usr/share/logstash/bin/logstash -f test.conf
####output to Elasticsearch
# cat test.conf
input {
  stdin {}
}
output {
  #stdout {}
  elasticsearch {
    hosts => ["172.20.22.24:9200"]
    index => "magedu-m63-test-%{+YYYY.MM.dd}"
  }
}
# /usr/share/logstash/bin/logstash -f test.conf
version1
version2
version3
test1
test2
test3

####check the collected data on the Elasticsearch server
# ls -lrt /data/elasticsearch/nodes/0/indices/
total 4
drwxr-xr-x 4 elasticsearch elasticsearch 4096 Apr 13 14:36 DyCv8w7mTleuAvlItAJlWA
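The directory name above is the index UUID; the REST API gives a friendlier view (an added check, not from the original post):

# curl http://172.20.22.24:9200/_cat/indices?v   ##the magedu-m63-test-&lt;date&gt; index should appear with a docs.count of 6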
Kibana
Kibana provides a web interface for viewing the data in Elasticsearch. It looks data up through the Elasticsearch API and visualizes it on the front end, and it can also build tables, bar charts, pie charts, and other visualizations for data in specific formats.
Deploying Kibana
# ls -lrt kibana-7.12.1-amd64.deb
# dpkg -i kibana-7.12.1-amd64.deb
# grep "^[^$|#]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "172.20.22.24"
elasticsearch.hosts: ["http://172.20.22.27:9200"]
i18n.locale: "zh-CN"
# systemctl restart kibana
Browse to http://172.20.22.24:5601
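Before opening the browser, you can confirm Kibana is up from the shell (an added check, not in the original post):

# ss -tnlp | grep 5601                          ##Kibana should be listening on its port
# curl -s http://172.20.22.24:5601/api/status   ##the JSON response should report an overall green state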
Stack Management --> Index Patterns --> Create index pattern

Select the time field.

View the log entries of the index you just created.
Collecting Tomcat Logs
Collect the Tomcat access logs and error logs for real-time statistics, searchable and displayed in Kibana. Each Tomcat server runs Logstash to collect its logs and forward them to Elasticsearch for analysis, and Kibana then presents them on the front end.
Deploying Tomcat
####tomcat1, 172.20.22.30
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###change the Tomcat access-log format to JSON
# vim conf/server.xml
....

....
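The Valve definition itself was swallowed by the forum page. The snippet below is an illustrative reconstruction of a JSON-style AccessLogValve for the &lt;Host&gt; section; the field names and pattern are assumptions, not the original (only the prefix and suffix are implied by the log filename tailed below):

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="tomcat_access_log" suffix=".log"
       pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;method&quot;:&quot;%m&quot;,&quot;uri&quot;:&quot;%U&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;bytes&quot;:&quot;%b&quot;,&quot;agent&quot;:&quot;%{User-Agent}i&quot;}"/>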
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web1 172.20.22.30" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###access test
# curl http://172.20.22.30:8080/myapp/
###check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-13.log

####tomcat2, 172.20.22.26
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###change the Tomcat access-log format to JSON (same Valve as on tomcat1)
# vim conf/server.xml
....

....
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web2 172.20.22.26" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###access test
# curl http://172.20.22.26:8080/myapp/
###check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-14.log
Deploying Logstash
Install Logstash on the Tomcat servers to collect the Tomcat and system logs.

####tomcat1, 172.20.22.30
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# cd /etc/logstash/conf.d
# cat tomcat.conf
input {
  file {
    path => "/usr/local/tomcat/logs/tomcat_access_log*.log"
    type => "tomcat-log"
    start_position => "beginning"
    stat_interval => "3"
  }
  file {
    path => "/var/log/syslog"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "3"
  }
}
output {
  if [type] == "tomcat-log" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-tomcat-%{+YYYY.MM.dd}"
  }}
  if [type] == "systemlog" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-syslog-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f tomcat.conf -t
# systemctl daemon-reload
# systemctl start logstash.service
# scp tomcat.conf root@172.20.22.26:/etc/logstash/conf.d/

####tomcat2, 172.20.22.26
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# systemctl daemon-reload
# systemctl start logstash.service
Display in Kibana
Collecting Java Logs
Use the codec's multiline plugin for multi-line matching: it merges multiple lines into a single event, and its what option controls whether a matched line is merged with the lines before it or after it.

Multiline codec plugin | Logstash Reference [8.1] | Elastic
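To make the merge behavior concrete, here is a sketch (log lines invented for illustration) of how a pattern of "^\[" with negate => true and what => "previous" folds a Java stack trace into one event:

[2022-04-13T14:00:01,123][ERROR][logstash.agent] pipeline error   <-- starts with "[", begins a new event
java.lang.RuntimeException: boom                                  <-- no leading "[", appended to the event above
    at com.example.Main.run(Main.java:42)                         <-- appended as well
[2022-04-13T14:00:02,456][INFO ][logstash.agent] restarting       <-- next event begins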
Adding the Logstash configuration file
###collect Logstash's own log, 172.20.22.26
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
   }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
###collect Logstash's own log, 172.20.22.30
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
   }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
View the collected logs in Kibana
Collecting nginx Logs with Filebeat, Redis, and Logstash
Filebeat collects the logs and sends them to logstash1; logstash1 forwards them to Redis; finally, logstash2 reads from Redis and sends them on to Elasticsearch.

web1: 172.20.22.30, with nginx, Filebeat, and Logstash deployed

web2: 172.20.22.26, with nginx, Filebeat, and Logstash deployed

Logstash server 2: 172.20.22.23; Redis server: 172.20.23.157
" H1 |0 z/ k# n" N5 i# tnginx服务器相关配置 % b) e+ G/ P7 S- P  m+ g
部署nginx . [: ?2 h5 I: b
# wget http://nginx.org/download/nginx-1.18.0.tar.gz" }5 e  l" K7 r
# tar xf nginx-1.18.0.tar.gz
/ I$ \7 z5 o: j  H! F# F4 ?# cd nginx-1.18.0
: w) o' b4 `6 \$ J4 {* {# ./configure --prefix=/usr/local/nginx --with-http_ssl_module4 l# I* p, P9 z( `5 C- q7 N$ ]
# make -j4 && make install
; t/ s5 G3 |! r: U! L* Q# /usr/local/nginx/sbin/nginx ' W, ]7 }: k% E: ]& i; D
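A quick smoke test (added here, not in the original post) confirms nginx is serving and writing the access log that Filebeat will tail later:

# curl -sI http://172.20.22.30/ | head -1    ##expect HTTP/1.1 200 OK
# tail -1 /usr/local/nginx/logs/access.log   ##the request above should show up here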
Deploying and configuring Logstash
Forward the log entries collected by Filebeat to Redis.

# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/beats-to-redis.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
  beats {
    port => 5045
    codec => "json"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-systemlog"
      host => "172.20.23.157"
      port => "6379"
      db => "0"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-accesslog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-errorlog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
}
# systemctl start logstash
# scp /etc/logstash/conf.d/beats-to-redis.conf root@172.20.22.26:/etc/logstash/conf.d/
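Once Logstash is running, confirm that both beats listeners are bound (an added check, not from the original post):

# ss -tnlp | grep -E '5044|5045'   ##both beats input ports should show LISTEN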
Deploying and configuring Filebeat
Filebeat collects the log entries and sends them to Logstash.

# dpkg -i filebeat-7.12.1-amd64.deb
# grep -v "#" /etc/filebeat/filebeat.yml | grep "^[^$]"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  fields:
    project: filebeat-systemlog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  fields:
    project: filebeat-nginx-accesslog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/error.log
  fields:
    project: filebeat-nginx-errorlog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
output.logstash:
  hosts: ["172.20.22.30:5044","172.20.22.30:5045"]
  enabled: true
  worker: 2
  compression_level: 3
  loadbalance: true

# systemctl start filebeat
# scp /etc/filebeat/filebeat.yml root@172.20.22.26:/etc/filebeat/
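Filebeat ships its own self-checks; running them verifies that the YAML parses and that the Logstash endpoints are reachable (an added check, not in the original post):

# filebeat test config   ##validates /etc/filebeat/filebeat.yml
# filebeat test output   ##dials each configured logstash host:port and reports the result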
Logstash server configuration
Logstash server 2: 172.20.22.23, which sends the logs buffered in Redis on to Elasticsearch.

# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/redis-to-es.conf
input {
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-accesslog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-errorlog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-systemlog"
    host => "172.20.23.157"
    port => "6379"
    db => "0"
    password => "12345678"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-systemlog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-accesslog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-errorlog-%{+YYYY.MM.dd}"
  }}
}
# systemctl restart logstash.service
Installing and configuring Redis
Redis server: 172.20.23.157

# yum install -y redis
# vim /etc/redis.conf
####change the following settings
bind 0.0.0.0
....
save ""
....
requirepass 12345678
....
# systemctl start redis
###test the Redis connection
# redis-cli
127.0.0.1:6379> auth 12345678
OK
127.0.0.1:6379> ping
PONG

###verify the collected log entries
127.0.0.1:6379[1]> keys *
1) "filebeat-redis-nginx-accesslog"
2) "filebeat-redis-nginx-errorlog"
127.0.0.1:6379[1]> select 0
OK
127.0.0.1:6379> keys *
1) "filebeat-redis-systemlog"
Verify the generated indices with the head plugin.

Verify the collected logs in Kibana.