2017/12/18

Elastic Stack (ELK)


Elastic's ELK stack consists of E: Elasticsearch, L: Logstash, and K: Kibana. Logstash is the data-collection tool: it filters and formats incoming data, then ships the processed records to Elasticsearch for storage. Finally, Kibana, the front-end web UI, pulls the data back out of Elasticsearch for searching and charting. Logstash and Elasticsearch are written in Java; Kibana is built on Node.js.


Since version 5.0, ELK together with the Beats packages has been called the Elastic Stack. Beats are monitoring agents installed on the servers being monitored; they can ship data directly to Elasticsearch, or send it through Logstash to be transformed first and then forwarded to Elasticsearch.


When installing the Elastic Stack, the following installation order is recommended, and all components should use the same version:


  1. Elasticsearch

    • X-Pack for Elasticsearch
  2. Kibana

    • X-Pack for Kibana
  3. Logstash
  4. Beats
  5. Elasticsearch Hadoop

X-Pack is an Elastic Stack extension that bundles security, alerting, monitoring, reporting, machine learning, and graph capabilities into a single package.


The relationship between these components is as follows.


The simplest arrangement is to connect Beats directly to Elasticsearch and let the Kibana UI consume the data from there.


Adding Logstash brings data transformation into the pipeline and also improves the overall resilience of the platform.


ref: Deploying and Scaling Logstash


Testing with Docker


Start a test container based on CentOS 7 with sshd installed:


#elasticsearch TCP 9200
#logstash beats input TCP 5044
#kibana web TCP 5601

docker run -d \
  -p 10022:22 \
  -p 80:80 \
  -p 9200:9200 \
  -p 5044:5044 \
  -p 5601:5601 \
  --sysctl net.ipv6.conf.all.disable_ipv6=1 \
  -e "container=docker" --privileged=true -v /sys/fs/cgroup:/sys/fs/cgroup --name elktest centosssh /usr/sbin/init
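Assuming the custom centosssh image above, once the container is up you can confirm it is running and open a shell in it to carry out the installation steps that follow:

```shell
# Check that the elktest container is up
docker ps --filter name=elktest

# Open an interactive shell inside the container
docker exec -it elktest /bin/bash
```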

Elasticsearch


ref: Installing the Elastic Stack


ref: How to Install Elastic Stack on CentOS 7


Elasticsearch can be installed from any of the following package formats: zip/tar.gz, deb, rpm, msi, or docker.


First, install OpenJDK:


yum -y install java-1.8.0-openjdk

Set the environment variables:


vi /etc/profile


export JAVA_HOME=/usr/lib/jvm/java-openjdk
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin

source /etc/profile



Import the Elasticsearch PGP key:


rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

There are two ways to proceed: configure the RPM repository, or download the RPM directly.


  • RPM Repository

vi /etc/yum.repos.d/elasticsearch.repo


[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Install and start elasticsearch:


sudo yum install -y elasticsearch

systemctl daemon-reload
systemctl enable elasticsearch.service

systemctl start elasticsearch.service
systemctl stop elasticsearch.service

Check the startup log:


journalctl -f
journalctl --unit elasticsearch

  • RPM

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.6.3.rpm
sudo rpm --install elasticsearch-5.6.3.rpm

After startup, verify the server with netstat:


> netstat -napl|grep java
tcp        0      0 127.0.0.1:9200          0.0.0.0:*               LISTEN      439/java
tcp        0      0 127.0.0.1:9300          0.0.0.0:*               LISTEN      439/java

TCP 9200 accepts HTTP requests; it is the port Elasticsearch serves clients on.
TCP 9300 is used for communication between Elasticsearch nodes.
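Beyond netstat, a quick functional check is to hit port 9200 directly; Elasticsearch answers with its basic node info, and the health endpoint reports the cluster status:

```shell
# Basic node/cluster info over the HTTP port
curl -XGET 'http://localhost:9200/?pretty'

# Cluster health: expect "green" or "yellow" on a single-node setup
curl -XGET 'http://localhost:9200/_cluster/health?pretty'
```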




File locations after installation:


  1. /usr/share/elasticsearch/


    elasticsearch home directory

  2. /etc/elasticsearch/*.conf


    config directory

  3. /etc/sysconfig/elasticsearch


    environment variables, including heap size and file descriptors

  4. /var/lib/elasticsearch/


    data files directory

  5. /var/log/elasticsearch/*.log


    log files

  6. /usr/share/elasticsearch/plugins/


    Plugin files location

  7. /etc/elasticsearch/scripts/


    script files




The configuration file is located at /etc/elasticsearch/elasticsearch.yml.


As explained in ES節點memory lock重要性與實現方式 (on the importance of memory locking for ES nodes), memory swapping severely degrades a node's performance and stability — Java GC pauses can stretch from a few milliseconds to minutes — so swap must be avoided.


note: since this test runs in Docker, which has some limitations and quirks around ulimit settings, this step is skipped here; in a production environment, however, it must be handled.


Use the following command to check whether memory lock is enabled on each node:


# curl -XGET 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'
{
  "nodes" : {
    "AhjDVEQJQL6avw43nl3AFQ" : {
      "process" : {
        "mlockall" : false
      }
    }
  }
}

vim /etc/elasticsearch/elasticsearch.yml


# uncomment this line
bootstrap.memory_lock: true

The system limits must also be adjusted, otherwise startup fails with the error memory locking requested for elasticsearch process but memory is not locked:


vi /etc/security/limits.conf


* soft memlock unlimited
* hard memlock unlimited
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

ulimit -l unlimited
systemctl restart elasticsearch

Kibana


Import the Elasticsearch PGP key — this was already done during the Elasticsearch install, so it does not need to be repeated:


rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

  • Installing from the RPM repository

note: this repository is identical to the elasticsearch.repo above; there is no need to duplicate it — skip straight to the installation step below.


vi /etc/yum.repos.d/kibana.repo


[kibana-5.x]
name=Kibana repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Install kibana:


sudo yum install -y kibana

systemctl daemon-reload
systemctl enable kibana.service

systemctl start kibana.service
systemctl stop kibana.service

Check the startup log:


journalctl -f
journalctl --unit kibana

After startup, verify the server with netstat:


> netstat -napl|grep node
tcp        0      0 127.0.0.1:5601          0.0.0.0:*               LISTEN      340/node
tcp        0      0 127.0.0.1:43968         127.0.0.1:9200          ESTABLISHED 340/node
tcp        0      0 127.0.0.1:43970         127.0.0.1:9200          ESTABLISHED 340/node
unix  3      [ ]         STREAM     CONNECTED     19517    340/node

TCP port 5601 is the web port Kibana serves on.
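Kibana also exposes a status API on the same port, which is handy for scripted checks before opening the browser:

```shell
# Returns JSON describing Kibana's overall state and its plugins
curl -XGET 'http://localhost:5601/api/status'
```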




File locations after installation:


  1. /usr/share/kibana


    kibana home

  2. /etc/kibana/


    config directory

  3. /var/lib/kibana/


    data files directory

  4. /usr/share/kibana/optimize/


    Transpiled source code

  5. /usr/share/kibana/plugins/


    plugin directory


Kibana's web UI is available at http://localhost:5601/.




Nginx can also be installed as a reverse proxy so Kibana becomes reachable on port 80.


yum -y install epel-release
yum -y install nginx httpd-tools

cd /etc/nginx/
vim nginx.conf

Delete the server { } block.


vim /etc/nginx/conf.d/kibana.conf


server {
    listen 80;
    server_name elk-stack.co;
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.kibana-user;
    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

sudo htpasswd -c /etc/nginx/.kibana-user admin

Enter a password when prompted.


# test the nginx configuration
nginx -t

# enable and start nginx
systemctl enable nginx
systemctl start nginx

Check the nginx service:


> netstat -napltu | grep nginx
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      510/nginx: master p

Kibana is now served at http://localhost/.
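Since the proxy enforces basic auth, requests without credentials should now be rejected; admin is the account created with htpasswd above, and the password is whatever was entered there:

```shell
# Without credentials: expect HTTP/1.1 401 Unauthorized
curl -I http://localhost/

# With the htpasswd account (replace the password with the one you set)
curl -I -u admin:yourpassword http://localhost/
```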



Logstash


Import the Elasticsearch PGP key — this was already done during the Elasticsearch install, so it does not need to be repeated:


rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

  • Installing from the RPM repository

note: this repository is identical to the elasticsearch.repo above; there is no need to duplicate it — skip straight to the installation step below.


vi /etc/yum.repos.d/logstash.repo


[logstash-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Install Logstash:


sudo yum install -y logstash

  • RPM

wget https://artifacts.elastic.co/downloads/logstash/logstash-5.6.3.rpm
sudo rpm --install logstash-5.6.3.rpm



Modify the Logstash configuration to create a beats input. SSL is used here, though it is optional.


Set up openssl:


cd /etc/pki/tls
vim openssl.cnf

In the v3_ca section, add the server name:


[ v3_ca ]
# Server IP Address
subjectAltName = IP: 127.0.0.1

Generate the CA certificate into /etc/pki/tls/certs/ and /etc/pki/tls/private/:


openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/logstash-forwarder.key -out /etc/pki/tls/certs/logstash-forwarder.crt
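To verify what was generated, openssl can print the certificate's subject, validity period, and the subjectAltName entry added in openssl.cnf:

```shell
# Show subject and validity window of the self-signed certificate
openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -subject -dates

# Confirm the subjectAltName (IP: 127.0.0.1) made it into the certificate
openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -text | grep -A1 'Alternative Name'
```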

Configure Logstash's input, filter, and output:


vim /etc/logstash/conf.d/filebeat-input.conf


input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

Use the grok filter to parse syslog entries:


vim /etc/logstash/conf.d/syslog-filter.conf


filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
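To get a feel for what the grok pattern above extracts, here is a rough bash equivalent run against a made-up sample line (the regex only approximates %{SYSLOGTIMESTAMP}, %{SYSLOGHOST}, %{DATA}, %{POSINT}, and %{GREEDYDATA}):

```shell
# A made-up line in the classic syslog layout the grok pattern expects
line='Nov  3 05:58:10 elktest sshd[1234]: Accepted password for root from 10.0.0.1'

# Approximation of the grok captures: timestamp, hostname, program, pid, message
re='^([A-Z][a-z]{2}[[:space:]]+[0-9]+[[:space:]][0-9:]{8})[[:space:]]([^[:space:]]+)[[:space:]]([^:[]+)(\[([0-9]+)\])?:[[:space:]](.*)$'

if [[ $line =~ $re ]]; then
  echo "syslog_timestamp = ${BASH_REMATCH[1]}"
  echo "syslog_hostname  = ${BASH_REMATCH[2]}"
  echo "syslog_program   = ${BASH_REMATCH[3]}"
  echo "syslog_pid       = ${BASH_REMATCH[5]}"
  echo "syslog_message   = ${BASH_REMATCH[6]}"
fi
```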

Output to elasticsearch:


vim /etc/logstash/conf.d/output-elasticsearch.conf


output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
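Before starting the service, the files under /etc/logstash/conf.d can be validated with Logstash's built-in config test:

```shell
# Parse and validate the pipeline configuration, then exit (Logstash 5.x)
/usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
```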



Start the service:


systemctl daemon-reload
systemctl enable logstash.service

systemctl start logstash.service
systemctl stop logstash.service

Check the startup log:


journalctl -f
journalctl --unit logstash

After startup, verify the server with netstat:


# netstat -naptul |grep java
tcp        0      0 127.0.0.1:9600          0.0.0.0:*               LISTEN      788/java
tcp        0      0 0.0.0.0:5044            0.0.0.0:*               LISTEN      788/java
tcp        0      0 127.0.0.1:9200          0.0.0.0:*               LISTEN      196/java
tcp        0      0 127.0.0.1:9300          0.0.0.0:*               LISTEN      196/java
tcp        0      0 127.0.0.1:9200          127.0.0.1:43986         ESTABLISHED 196/java
tcp        0      0 127.0.0.1:44280         127.0.0.1:9200          ESTABLISHED 788/java
tcp        0      0 127.0.0.1:9200          127.0.0.1:44280         ESTABLISHED 196/java
tcp        0      0 127.0.0.1:9200          127.0.0.1:43988         ESTABLISHED 196/java

TCP port 5044 (SSL) is the port Logstash listens on for incoming beats connections.
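Because the beats input uses SSL, a bare TCP probe is not very informative; openssl s_client can confirm that port 5044 completes a TLS handshake and presents the certificate generated earlier:

```shell
# Expect the certificate subject from logstash-forwarder.crt in the output
echo | openssl s_client -connect localhost:5044
```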


Beats


Beats are agents that collect data on client machines and ship it to Logstash or Elasticsearch. There are currently five Beats:


  1. Packetbeat: analyzes network packets in real time; combined with Elasticsearch it serves as an application monitoring and performance analytics tool. It can currently decode the following protocols: ICMP (v4, v6), DNS, HTTP, AMQP 0.9.1, Cassandra, MySQL, PostgreSQL, Redis, Thrift-RPC, MongoDB, Memcache
  2. Metricbeat: collects metrics from the OS and various services; currently supports Apache, HAProxy, MongoDB, MySQL, Nginx, PostgreSQL, Redis, System, Zookeeper
  3. Filebeat: ships file-based log files
  4. Winlogbeat: ships Windows event logs, including application, hardware, security, and system events
  5. Heartbeat: periodically checks service status, reporting only whether a service is up or down


  • Installing from the RPM repository

Use the elasticsearch.repo configured earlier.


vi /etc/yum.repos.d/elasticsearch.repo


[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Install filebeat:


sudo yum install -y filebeat

  • Installing the RPM directly

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.6.3-x86_64.rpm
sudo rpm -vi filebeat-5.6.3-x86_64.rpm



filebeat defaults to output.elasticsearch, writing its data to localhost:9200. The configuration below changes it to monitor /var/log/secure (ssh) and /var/log/messages (server log) and to send output to Logstash instead:


vim /etc/filebeat/filebeat.yml


filebeat.prospectors:

- input_type: log
  paths:
    - /var/log/secure
    - /var/log/messages
  document_type: syslog

#--------- Elasticsearch output --------------
#output.elasticsearch:
# Array of hosts to connect to.
#  hosts: ["localhost:9200"]

#--------- Logstash output --------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
  bulk_max_size: 1024
  #ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
  template.name: "filebeat"
  template.path: "filebeat.template.json"
  template.overwrite: false

If the Logstash beats input above was configured with SSL, copy /etc/pki/tls/certs/logstash-forwarder.crt from the Logstash machine to the client machine and enable this setting:


  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

Test the configuration:


# /usr/bin/filebeat.sh -configtest -e
2017/11/03 05:58:10.538291 beat.go:297: INFO Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2017/11/03 05:58:10.538350 beat.go:192: INFO Setup Beat: filebeat; Version: 5.6.3
2017/11/03 05:58:10.538463 metrics.go:23: INFO Metrics logging every 30s
2017/11/03 05:58:10.539115 logstash.go:90: INFO Max Retries set to: 3
2017/11/03 05:58:10.539679 outputs.go:108: INFO Activated logstash as output plugin.
2017/11/03 05:58:10.539884 publish.go:300: INFO Publisher name: c0ba72624128
2017/11/03 05:58:10.540376 async.go:63: INFO Flush Interval set to: 1s
2017/11/03 05:58:10.540415 async.go:64: INFO Max Bulk Size set to: 1024
Config OK

Start filebeat:


sudo systemctl enable filebeat
sudo systemctl start filebeat

Check the startup log:


journalctl -f
journalctl --unit filebeat
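If the whole pipeline is working, Logstash should be creating daily filebeat-* indices in Elasticsearch (the name comes from the index => setting in output-elasticsearch.conf):

```shell
# A filebeat-YYYY.MM.dd index should appear once the first events arrive
curl -XGET 'http://localhost:9200/_cat/indices?v'

# Look at a couple of parsed documents
curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty&size=2'
```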

References


ELKstack 中文指南


Elastic Stack and Product Documentation


Logstash Reference Docs


logstash日誌分析的配置和使用


How To Install Elasticsearch, Logstash, and Kibana (ELK Stack) on Ubuntu 14.04


Elasticsearch 5.0和ELK/Elastic Stack指南




Elasticsearch 權威指南



用 ElasticSearch + FluentD 打造 Log 神器與數據分析工具


Collecting Logs In Elasticsearch With Filebeat and Logstash


ELK+Filebeat 集中式日誌解決方案詳解




Handling stack traces in Elasticsearch Logstash Kibana (ELK)


Handling Stack Traces with Logstash


