To make it easy to view the business logs of multiple hosts in one place, we collect them with Filebeat, Redis, and Logstash:
(1) Filebeat watches the log files for changes and writes each new line into Redis; every log line becomes an element of a list stored under a designated key;
(2) Logstash watches the list under that key in Redis, reads the new entries, and persists them to files on disk.
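The flow above can be sketched with an in-memory stand-in for the Redis list (a real deployment would issue LPUSH/RPOP against a Redis server; the class name and event shape here are illustrative, not Filebeat's actual wire format):

```python
import json
from collections import deque

class FakeRedisList:
    """In-memory stand-in for one Redis list key (illustrative only)."""
    def __init__(self):
        self.items = deque()

    def lpush(self, value):
        # Filebeat side: push each newly appended log line
        self.items.appendleft(value)

    def rpop(self):
        # Logstash side: consume the oldest line first
        return self.items.pop() if self.items else None

# Filebeat tails the log file and pushes each new line as a JSON event
queue = FakeRedisList()
for line in ["app started", "request handled"]:
    event = {"message": line, "fields": {"log_ip": "192.168.1.100"}}
    queue.lpush(json.dumps(event))

# Logstash pops events in arrival order and persists them to disk
consumed = []
while (raw := queue.rpop()) is not None:
    consumed.append(json.loads(raw)["message"])

print(consumed)  # oldest line comes out first
```

The LPUSH/RPOP pairing is what makes the Redis list behave as a FIFO queue between the two processes.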
Installing Redis:
References:
(1) http://www.redis.cn/download.html (2) https://zhuanlan.zhihu.com/p/345272701
1. Download the version you need: wget http://download.redis.io/releases/redis-4.0.11.tar.gz
2. Unpack the archive: tar xvf redis-4.0.11.tar.gz
3. Build and install: run (1) make and then (2) make install. Note: make install simply copies the Redis command binaries into /usr/local/bin; you can verify with ll /usr/local/bin.
4. In redis.conf, set daemonize to yes.
5. Start the Redis server: cd /usr/local/bin, then ./redis-server /opt/apps/redis-4.0.11/redis.conf
6. Check the process: ps -ef | grep redis
7. Connect with the local client to confirm it is usable: cd /usr/local/bin, then ./redis-cli (if you changed the port: ./redis-cli -p port)
Installing Filebeat (on the hosts where the business logs live):
1. Download the version you need: wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.2-linux-x86_64.tar.gz
2. Unpack the archive: tar -xvf filebeat-6.4.2-linux-x86_64.tar.gz
3. In the unpacked directory, run: nohup ./filebeat -e -c filebeat.yml &
Installing Logstash:
(1) Download the package and unpack it;
(2) Prepare a config file to load; since it reads from Redis here, name it logstash_redis.conf;
(3) Start it with: nohup ./bin/logstash -f ./config/logstash_redis.conf &
Configuration:
1. Installation is straightforward; just follow the official docs. The important part is the configuration.
2. Filebeat configuration:
Purpose: read several log files on disk and push them into Redis under different keys, so Logstash can consume each stream separately.
Sample:
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  # Change to true to enable this prospector configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /opt/logs/service1/base.log
  fields:
    log_topics: M100_service1_baselog
    log_ip: 192.168.1.100
  scan_frequency: 1s
- type: log
  enabled: true
  paths:
    - /opt/logs/service2/base.log
  fields:
    log_topics: M101_service2_baselog
    log_ip: 192.168.1.101
  scan_frequency: 1s

#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#============================= File output =============================
#output.file:
#  path: "/tmp/logs"
#  filename: 'outputFile.txt'

#============================= Redis output =============================
output.redis:
  hosts: ["192.168.1.200:6379"]
  #password: ""
  key: "%{[fields.log_topics]}"
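The line key: "%{[fields.log_topics]}" is what routes each event to its own Redis list: the template is resolved against the event's fields at publish time. A minimal sketch of that substitution (the regex and lookup below are illustrative, not Filebeat's actual implementation):

```python
import re

def resolve_key(template, event):
    """Substitute %{[a.b]} references with values from a nested event dict."""
    def lookup(match):
        value = event
        for part in match.group(1).split("."):
            value = value[part]  # walk the nested dict, e.g. fields -> log_topics
        return str(value)
    return re.sub(r"%\{\[([^\]]+)\]\}", lookup, template)

event = {"message": "app started",
         "fields": {"log_topics": "M100_service1_baselog",
                    "log_ip": "192.168.1.100"}}
print(resolve_key("%{[fields.log_topics]}", event))  # M100_service1_baselog
```

Because each host sets a distinct log_topics value, events from different hosts and services land in different Redis lists.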
3. Logstash configuration:
input {
  ## read service1 logs
  redis {
    data_type => "list"
    key => "M100_service1_baselog"
    host => "192.168.1.100"
    port => 6380
    threads => 2
    type => "M100_service1_baselog"
  }
  redis {
    data_type => "list"
    key => "M101_service1_baselog"
    host => "192.168.1.101"
    port => 6380
    threads => 2
    type => "M101_service1_baselog"
  }
  ## read service2 logs
  redis {
    data_type => "list"
    key => "M110_service2_baselog"
    host => "192.168.1.110"
    port => 6380
    threads => 2
    type => "M110_service2_baselog"
  }
  redis {
    data_type => "list"
    key => "M111_service2_baselog"
    host => "192.168.1.111"
    port => 6380
    threads => 2
    type => "M111_service2_baselog"
  }
}
output {
  ## write service1 logs
  if [type] == "M100_service1_baselog" {
    file {
      path => "/opt/logs/logstash/service1_baselog-%{+YYYY.MM.dd}.log"
      codec => line { format => "[%{[fields][log_ip]}].%{message}" }
    }
  } else if [type] == "M101_service1_baselog" {
    file {
      path => "/opt/logs/logstash/service1_baselog-%{+YYYY.MM.dd}.log"
      codec => line { format => "[%{[fields][log_ip]}].%{message}" }
    }
  }
  ## write service2 logs
  else if [type] == "M110_service2_baselog" {
    file {
      path => "/opt/logs/logstash/service2_baselog-%{+YYYY.MM.dd}.log"
      codec => line { format => "[%{[fields][log_ip]}].%{message}" }
    }
  } else if [type] == "M111_service2_baselog" {
    file {
      path => "/opt/logs/logstash/service2_baselog-%{+YYYY.MM.dd}.log"
      codec => line { format => "[%{[fields][log_ip]}].%{message}" }
    }
  }
}
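The routing logic of the output section can be summarized as a small table: the event's type picks the per-service file, and the line codec prefixes each message with the originating host's IP. A sketch under those assumptions (the mapping and function below are illustrative, not Logstash internals):

```python
from datetime import date

# Mirrors the if/else if chain in the sample config:
# each Logstash event type maps to its service's daily log file.
TYPE_TO_SERVICE = {
    "M100_service1_baselog": "service1",
    "M101_service1_baselog": "service1",
    "M110_service2_baselog": "service2",
    "M111_service2_baselog": "service2",
}

def output_line(event, today):
    """Return the (path, line) pair the file output above would write."""
    service = TYPE_TO_SERVICE[event["type"]]
    path = f"/opt/logs/logstash/{service}_baselog-{today:%Y.%m.%d}.log"
    line = f"[{event['fields']['log_ip']}].{event['message']}"
    return path, line

event = {"type": "M100_service1_baselog",
         "message": "app started",
         "fields": {"log_ip": "192.168.1.100"}}
path, line = output_line(event, date(2023, 1, 2))
print(path)  # /opt/logs/logstash/service1_baselog-2023.01.02.log
print(line)  # [192.168.1.100].app started
```

Since several event types map to the same path, logs from multiple hosts of one service are merged into a single daily file, and the IP prefix keeps them distinguishable.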
Note: with this setup, the logs of all service1 hosts end up in a single file, and likewise for service2.