Setting up ELK

ELK is a suite of three open-source products: Elasticsearch, Logstash, and Kibana. Developed by Elastic, it is a complete enterprise-grade solution for collecting, analyzing, and visualizing logs, with each of the three components serving a different role. The official site is elastic.co. Main advantages of the ELK stack:

- Flexible processing: Elasticsearch is a real-time full-text index with powerful search capabilities.
- Simple configuration: the Elasticsearch API is entirely JSON, Logstash is configured through modules, and Kibana's configuration file is simpler still.
- Efficient retrieval: thanks to its well-engineered design, queries run in real time yet can return second-level responses over tens of billions of documents.
- Linear cluster scaling: both Elasticsearch and Logstash scale out linearly.
- Polished front end: Kibana's UI is attractive and easy to operate.
Elasticsearch

Elasticsearch is a highly scalable open-source full-text search and analytics engine. It offers real-time full-text search, runs distributed for high availability, exposes an API, and can process large volumes of log data from sources such as nginx, tomcat, and the system logs.

Features of Elasticsearch:

- Real-time search and real-time analytics
- Distributed architecture with real-time document storage
- Document-oriented: every object is a document
- Highly available and easily scalable, with clustering, sharding, and replication
- Friendly API with JSON support
Deploying Elasticsearch

GitHub - elastic/elasticsearch: Free and Open, Distributed, RESTful Search Engine (written in Java)

On CentOS, disable the firewall and SELinux; on Ubuntu, disable the firewall. Keep time synchronized across all servers.

Server 1: 172.20.22.24

Server 2: 172.20.22.27

Server 3: 172.20.22.28

###ubuntu
# apt install -y ntpdate
# rm -f /etc/localtime
# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# hwclock --systohc
# ntpdate -u ntp1.aliyun.com
###set kernel parameters
# vim /etc/security/limits.conf
* soft nofile 500000
* hard nofile 500000
# vim /etc/security/limits.d/20-nproc.conf
* soft nproc 4096
elasticsearch soft nproc unlimited
root soft nproc unlimited
###install the JDK
# apt install -y openjdk-8-jdk

###install on every node
# ls -lrt elasticsearch-7.12.1-amd64.deb
# dpkg -i elasticsearch-7.12.1-amd64.deb
###node 1 configuration file
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic #cluster name
node.name: node1 #this node's name within the cluster
path.data: /data/elasticsearch #data directory
path.logs: /data/elasticsearch #log directory
bootstrap.memory_lock: true #lock enough memory at startup to keep data out of swap
network.host: 172.20.22.24 #listen IP
http.port: 9200 #listen port
###discovery list of the cluster's nodes
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
###nodes eligible to be elected master at cluster bootstrap
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
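Because bootstrap.memory_lock is enabled above, systemd must allow the service to lock memory or the node may refuse to start. A minimal drop-in override (a sketch; the override file name is arbitrary):

# mkdir -p /etc/systemd/system/elasticsearch.service.d
# vim /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitMEMLOCK=infinity
# systemctl daemon-reload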
###node 2
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node2
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.27
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
###node 3
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node3
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.28
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
Verify in a browser

http://$IP:9200
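The same check can be scripted; _cluster/health is a standard Elasticsearch API (shown here against node 1 as an example):

# curl http://172.20.22.24:9200/_cluster/health?pretty

A healthy three-node cluster should report "status" : "green" and "number_of_nodes" : 3.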
Logstash

Logstash is a data-collection engine with real-time pipelining. Through plugins it collects and forwards logs, supports filtering, and can parse both plain-text and custom JSON-formatted logs, finally shipping the processed events to Elasticsearch.

Deploying Logstash

Logstash is an open-source data-collection engine that scales horizontally. It is the ELK component with the most plugins; it can accept data from many different sources and output it to one or more destinations.

https://github.com/elastic/logstash #GitHub

Elastic Stack and Product Documentation | Elastic

Environment preparation: disable the firewall and SELinux, and install a Java runtime.

# apt install -y openjdk-8-jdk
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
###smoke test
# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout {}}' ##stdin in, stdout out
hello world!~
{
      "@version" => "1",
    "@timestamp" => 2022-04-13T06:16:32.212Z,
          "host" => "jenkins-slave",
       "message" => "hello world!~"
}
###start from a configuration file
# cd /etc/logstash/conf.d/
# cat test.conf
input {
  stdin {}
}
output {
  stdout {}
}

###start with the given configuration file
# /usr/share/logstash/bin/logstash -f test.conf -t ##check the configuration syntax
# /usr/share/logstash/bin/logstash -f test.conf
####output to elasticsearch
# cat test.conf
input {
  stdin {}
}
output {
  #stdout {}
  elasticsearch {
    hosts => ["172.20.22.24:9200"]
    index => "magedu-m63-test-%{+YYYY.MM.dd}"
  }
}
# /usr/share/logstash/bin/logstash -f test.conf
version1
version2
version3
test1
test2
test3

####check the collected data on the elasticsearch server
# ls -lrt /data/elasticsearch/nodes/0/indices/
total 4
drwxr-xr-x 4 elasticsearch elasticsearch 4096 Apr 13 14:36 DyCv8w7mTleuAvlItAJlWA
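The directory name is the index's internal UUID; the index itself is easier to inspect through the standard _cat API (any node will do):

# curl http://172.20.22.24:9200/_cat/indices?v

The magedu-m63-test-* index should appear in the listing along with its document count.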
Kibana

Kibana provides a web interface for the data in Elasticsearch. It looks data up through the Elasticsearch API and renders it in the front end, and it can also build tables, bar charts, pie charts, and other visualizations from suitably formatted data.

Deploying Kibana

# ls -lrt kibana-7.12.1-amd64.deb
# dpkg -i kibana-7.12.1-amd64.deb
# grep "^[^$|#]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "172.20.22.24"
elasticsearch.hosts: ["http://172.20.22.27:9200"]
i18n.locale: "zh-CN"
# systemctl restart kibana
Open http://172.20.22.24:5601 in a browser

Stack Management --> Index Patterns --> Create index pattern

Select the time field (typically @timestamp)

View the log entries of the index you created
Collecting Tomcat logs

Collect the Tomcat servers' access logs and error logs for real-time statistics, searchable and displayed in Kibana. Each Tomcat server runs Logstash to collect its logs and forward them to Elasticsearch for analysis, and Kibana presents the results in the front end.

Deploying Tomcat

####tomcat1, 172.20.22.30
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###change the tomcat access-log format to json (Valve example shown below)
# vim conf/server.xml
....
....
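The Valve definition itself was stripped from the post; a typical JSON-style AccessLogValve for the Host section of conf/server.xml looks like the following (the field names are illustrative, but the prefix matches the log file name used below):

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="tomcat_access_log" suffix=".log"
       pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;ClientUser&quot;:&quot;%l&quot;,&quot;authenticated&quot;:&quot;%u&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;method&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;Query?string&quot;:&quot;%q&quot;,&quot;partner&quot;:&quot;%{Referer}i&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}"/>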
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web1 172.20.22.30" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###test access
# curl http://172.20.22.30:8080/myapp/
###check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-13.log

####tomcat2, 172.20.22.26
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###change the tomcat access-log format to json (same Valve as on tomcat1)
# vim conf/server.xml
....
....
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web2 172.20.22.26" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###test access
# curl http://172.20.22.26:8080/myapp/
###check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-14.log
Deploying Logstash

Install Logstash on the Tomcat servers to collect the Tomcat and system logs. Logstash runs as the logstash user by default, which cannot read the Tomcat logs, so the service unit is switched to root here.

####tomcat1, 172.20.22.30
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# cd /etc/logstash/conf.d
# cat tomcat.conf
input {
  file {
    path => "/usr/local/tomcat/logs/tomcat_access_log*.log"
    type => "tomcat-log"
    start_position => "beginning"
    stat_interval => "3"
  }
  file {
    path => "/var/log/syslog"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "3"
  }
}
output {
  if [type] == "tomcat-log" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-tomcat-%{+YYYY.MM.dd}"
  }}
  if [type] == "systemlog" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-syslog-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f tomcat.conf -t
# systemctl daemon-reload
# systemctl start logstash.service
# scp tomcat.conf root@172.20.22.26:/etc/logstash/conf.d/

####tomcat2, 172.20.22.26
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# systemctl daemon-reload
# systemctl start logstash.service

Display in Kibana
Collecting Java logs

Use the multiline codec plugin to merge related lines into a single event. Its what option controls whether a matching line is merged with the preceding line or the following one.

Multiline codec plugin | Logstash Reference [8.1] | Elastic
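With the settings used below (pattern => "^\[", negate => true, what => "previous"), any line that does not start with [ is appended to the previous event, so a Java stack trace stays attached to the log line that produced it. Illustrative input:

[2022-04-14T11:00:01,123][ERROR][logstash.agent] something failed
java.lang.RuntimeException: boom
        at some.Class.method(Class.java:42)
[2022-04-14T11:00:02,456][INFO ][logstash.agent] next event

The first three lines become one event; the fourth starts a new one.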
Adding the Logstash configuration file

###collect logstash's own log, 172.20.22.26
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service

###collect logstash's own log, 172.20.22.30
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service

View the collected logs in Kibana
Collecting nginx logs with filebeat, redis, and logstash

Use filebeat to collect the logs and send them to logstash1; logstash1 forwards them to redis; finally logstash2 reads from redis and sends the data on to elasticsearch.

web1: 172.20.22.30, with nginx, filebeat, and logstash deployed

web2: 172.20.22.26, with nginx, filebeat, and logstash deployed

logstash server 2: 172.20.22.23; redis server: 172.20.23.157
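The resulting pipeline, end to end:

filebeat (web1/web2) --> logstash1 (on the web hosts, ports 5044/5045) --> redis (172.20.23.157) --> logstash2 (172.20.22.23) --> elasticsearch (172.20.22.28)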
Nginx server configuration

Deploying nginx
# wget http://nginx.org/download/nginx-1.18.0.tar.gz
# tar xf nginx-1.18.0.tar.gz
# cd nginx-1.18.0
# ./configure --prefix=/usr/local/nginx --with-http_ssl_module
# make -j4 && make install
# /usr/local/nginx/sbin/nginx
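A quick request puts something in access.log for filebeat to pick up later (run against each web host; web1 shown, assuming the default port 80):

# curl -I http://172.20.22.30/
# tail -n1 /usr/local/nginx/logs/access.log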
Deploying and configuring Logstash

Send the log data collected by filebeat on to redis.

# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/beats-to-redis.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
  beats {
    port => 5045
    codec => "json"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-systemlog"
      host => "172.20.23.157"
      port => "6379"
      db => "0"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-accesslog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-errorlog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
}
# systemctl start logstash
# scp /etc/logstash/conf.d/beats-to-redis.conf root@172.20.22.26:/etc/logstash/conf.d/
Deploying and configuring filebeat

Collect the log data with filebeat and send it to logstash; the two logstash ports (5044/5045) pair with the loadbalance option below.

# dpkg -i filebeat-7.12.1-amd64.deb
# grep -v "#" /etc/filebeat/filebeat.yml | grep "^[^$]"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  fields:
    project: filebeat-systemlog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  fields:
    project: filebeat-nginx-accesslog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/error.log
  fields:
    project: filebeat-nginx-errorlog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
output.logstash:
  hosts: ["172.20.22.30:5044","172.20.22.30:5045"]
  enabled: true
  worker: 2
  compression_level: 3
  loadbalance: true

# systemctl start filebeat
# scp /etc/filebeat/filebeat.yml root@172.20.22.26:/etc/filebeat/
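To confirm events are flowing into redis before wiring up the consumer, the list lengths can be checked directly (LLEN is a standard redis command; the counts will vary):

# redis-cli -h 172.20.23.157 -a 12345678 -n 0 llen filebeat-redis-systemlog
# redis-cli -h 172.20.23.157 -a 12345678 -n 1 llen filebeat-redis-nginx-accesslog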
) r, z" m9 \, w& p( mlogstash服务器配置
! y. k$ |( U! ologstash服务器2:172.20.22.23,把redis缓存的日志发送到elasticsearch
+ W, y& U% p' a0 b
8 l% N* i! z$ t; r) X$ U. N# apt install -y openjdk-8-jdk/ U: d& z- Z- Q- X' G7 o! o; b5 H* R
# dpkg -i logstash-7.12.1-amd64.deb% \! G$ `1 R% K* o/ w
# cat /etc/logstash/conf.d/redis-to-es.conf ) e. T3 U( x# h3 W! G, K
input {6 C$ D$ C$ m* x X* ?
redis {
7 D6 o& s4 w# ]5 h9 g$ C& Z data_type => "list". r% u3 W& o! i; a+ c) M1 v
key => "filebeat-redis-nginx-accesslog"
2 J; v9 h1 y/ m1 P+ | host => "172.20.23.157"
8 i/ I! c' ^) w; M% i1 c3 ~ port => "6379"
7 ^ l4 N1 D0 `# P+ t db => "1"' d$ [6 [0 q; l3 a6 r: C( X
password => "12345678"7 P% M7 d8 E: @0 V
}
' z, X: n0 a2 e; `: L. _ redis {, t U9 u$ F5 e! k9 _: _- m
data_type => "list"6 E: h" W/ }( W; w
key => "filebeat-redis-nginx-errorlog"9 b z) Z& O6 ?3 T$ T, P2 L/ G
host => "172.20.23.157"/ Q) f) ?, q* i" x2 w* h2 k
port => "6379"" b0 @6 w6 @8 @9 s r1 f
db => "1"
: {+ K* D8 ~1 s% _ password => "12345678"
" g2 I2 I+ Y4 [, E) y5 w6 Y }
, T3 a& O4 w2 P) P# R3 @ redis {( O% u1 m$ p* G! |6 ~3 U
data_type => "list"
& T+ ]4 t7 j$ f U9 c/ D& ?9 s key => "filebeat-redis-systemlog"4 y1 N7 o4 d: L# H
host => "172.20.23.157"
& c1 P% v U. E% b1 o4 @- O port => "6379"% ]1 m8 |( u) _* i1 j
db => "0"0 S$ ~3 _( j { b# N3 i! P N
password => "12345678"+ G8 S$ m. @, P) t
}/ R% j3 f' ]+ _$ {4 I' B
}0 L9 o; u0 b5 `" ]! X
output {
; }( \# t- ^: Z if [fields][project] == "filebeat-systemlog" {
- j7 W% J& L1 O: ~) K/ @ elasticsearch { X# A' W. ? v+ N* T
hosts => ["172.20.22.28:9200"]
; Y" ]. |! T3 D: e index => "filebeat-systemlog-%{+YYYY.MM.dd}"
- O; v2 A. z4 x( K! T# V7 L }}! j) U" M t. }+ Q/ J
if [fields][project] == "filebeat-nginx-accesslog" {
B3 f0 Y, q' }/ H# j4 K1 [ elasticsearch {
7 a1 R4 [9 G- X4 a hosts => ["172.20.22.28:9200"]
- k1 x8 k1 U* s' C$ \- j0 r index => "filebeat-nginx-accesslog-%{+YYYY.MM.dd}"
& W/ W6 s S1 n6 Y/ F# V6 H }}9 e: @7 M5 R% m, P% h$ d, C: ~
if [fields][project] == "filebeat-nginx-errorlog" {4 o1 E3 F# v1 y* ^8 q1 G C* D
elasticsearch {, y" u- T8 s$ u+ D# l1 c
hosts => ["172.20.22.28:9200"]
" `+ V& q' T2 T+ g r5 A' X: P index => "filebeat-nginx-errorlog-%{+YYYY.MM.dd}"
/ ?; j- H6 q9 G# }- C) Y" N }}
( {; w' U% ?- z+ m" y4 Z `. l}
' m- a9 V9 s4 b- G# systemctl restart logstash.service
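Once logstash2 is running, the daily indices should appear on the elasticsearch side (checked here via the standard _cat API):

# curl http://172.20.22.28:9200/_cat/indices?v | grep filebeat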
Installing and configuring redis

redis server: 172.20.23.157

# yum install -y redis
# vim /etc/redis.conf
####change the following settings
bind 0.0.0.0
....
save ""
....
requirepass 12345678
....
# systemctl start redis
###test the redis connection
# redis-cli
127.0.0.1:6379> auth 12345678
OK
127.0.0.1:6379> ping
PONG

###verify the collected log data
127.0.0.1:6379[1]> keys *
1) "filebeat-redis-nginx-accesslog"
2) "filebeat-redis-nginx-errorlog"
127.0.0.1:6379[1]> select 0
OK
127.0.0.1:6379> keys *
1) "filebeat-redis-systemlog"
9 }/ @" N" b, O5 c! \; s0 g" N通过head插件验证生成的索引
5 b1 b5 C% v* G2 @ M. f
+ I. B* [) ~: n, U4 I+ q4 F4 L* N K : g. W! t3 ~/ D% h( \! `8 s" P
kibana验证收集到的日志信息
! i' W* p0 T/ p* J9 g) `1 Y) n |