Goal:
In this tutorial we will cover installing the ELK Stack on a fresh Amazon EC2 Linux (CentOS) instance. We will install Elasticsearch 5.x, Logstash 5.x, and Kibana 5.x, and then configure Filebeat 5.x to forward Apache logs collected by a central rsyslog server to the ELK server.
ELK stack components:
Logstash: Transforms incoming logs.
Elasticsearch (ES): Stores the logs transformed by Logstash.
Kibana: Web interface for searching and visualizing the logs stored in Elasticsearch, proxied through Nginx.
Filebeat: Lightweight shipper that sends logs from clients to the Logstash server.
Prerequisites:
Minimum sizing to run your ES cluster
RAM --> 4 GB
CPU --> 2 cores
Disk --> 20 GB (varies greatly with log volume)
You may need to increase RAM, CPU, or disk depending on your log volume.
Let's start on our main goal: setting up the ELK server.
Install Java 8
sudo yum install java-1.8.0-openjdk
Set JAVA_HOME to the Java 8 JRE (the path below matches the OpenJDK package installed above)
sudo sh -c "echo export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk >> /etc/environment"
Set Java 8 as the system default Java (pick the Java 8 entry when prompted)
sudo alternatives --config java
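After picking the Java 8 entry in the alternatives menu, it is worth confirming which runtime is now the default; a quick check (the exact version string depends on your build):

```shell
# Should report a 1.8.x version if the alternatives switch took effect
java -version 2>&1 | head -n 1
```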
Install ELK packages
Import the Elasticsearch repo key
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Create a yum repo file for the Elasticsearch packages
sudo vi /etc/yum.repos.d/elasticsearch.repo
Paste the following into the file, then save and exit
[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Install Logstash
sudo yum install logstash
Install elasticsearch
sudo yum install elasticsearch
Install kibana
sudo yum install kibana
Install nginx
sudo yum install epel-release
sudo yum install nginx httpd-tools
Enable all services to start on system reboot
sudo chkconfig --add nginx
sudo chkconfig --add kibana
sudo chkconfig --add elasticsearch
sudo chkconfig --add logstash
Configure ELK stack
Configure Elasticsearch
Open the file
sudo vi /etc/elasticsearch/elasticsearch.yml
Change the 'network.host' value to the box's private IP
network.host: private_ip_of_box
Configure Kibana
Open the file
sudo vi /etc/kibana/kibana.yml
Change the 'elasticsearch.url' value
elasticsearch.url: "http://elasticsearch_ip:9200"
Configure Nginx
Create and open the file
sudo vi /etc/nginx/conf.d/kibana.conf
Paste the following into the file
server {
listen 80;
server_name PUBLIC_IP_OF_SERVER;
location / {
proxy_pass http://localhost:5601;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
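The server block above leaves Kibana open to anyone who can reach port 80. The httpd-tools package installed earlier provides htpasswd, so an optional hardening step (the credentials file path and the 'kibanaadmin' username here are just examples) is to add basic auth:

```shell
# Create a credentials file; you will be prompted for a password
sudo htpasswd -c /etc/nginx/htpasswd.users kibanaadmin

# Then add these two directives inside the server block above:
#   auth_basic "Restricted Access";
#   auth_basic_user_file /etc/nginx/htpasswd.users;

# Verify the nginx config parses before (re)starting the service
sudo nginx -t
```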
Configure Logstash
Create and open a file to configure Logstash to receive logs from Filebeat clients
sudo vi /etc/logstash/conf.d/logstash.conf
Paste the following to receive combined Apache logs from the central rsyslog server
input {
  beats {
    port => 5044
    host => "current_server_private_ip"
  }
}
filter {
  if [type] == "central-apache-ssl-access" {
    grok {
      match => { "message" => "%{SYSLOGBASE} %{COMBINEDAPACHELOG} %{NUMBER:resptime_ms} %{QS:proxy_ip}" }
    }
    geoip {
      source => "clientip"
    }
    mutate {
      remove_field => [ "beat", "day", "host", "month", "tags", "source" ]
    }
  }
  else if [type] == "central-apache-access" {
    grok {
      match => { "message" => "%{SYSLOGBASE} %{COMBINEDAPACHELOG} %{NUMBER:resptime_ms} %{QS:proxy_ip}" }
    }
    geoip {
      source => "clientip"
    }
    mutate {
      remove_field => [ "beat", "day", "host", "month", "tags", "source" ]
    }
  }
  else if [type] == "central-apache-error" {
    grok {
      match => { "message" => "%{SYSLOGBASE} \[(?<timestamp>%{DAY:day} %{MONTH:month} %{MONTHDAY} %{TIME} %{YEAR})\] \[.*:%{LOGLEVEL:loglevel}\] \[pid %{NUMBER:pid}] (?:\[client %{IPORHOST:clientip}:%{POSINT:port}\] ){0,1}(?<errormessage>(?:(?!, referer).)*)(?:, referer: %{GREEDYDATA:referer})?" }
    }
    geoip {
      source => "clientip"
    }
    mutate {
      remove_field => [ "beat", "day", "host", "month", "tags", "source" ]
    }
  }
  else if [type] == "central-apache-ssl-error" {
    grok {
      match => { "message" => "%{SYSLOGBASE} \[(?<timestamp>%{DAY:day} %{MONTH:month} %{MONTHDAY} %{TIME} %{YEAR})\] \[.*:%{LOGLEVEL:loglevel}\] \[pid %{NUMBER:pid}] (?:\[client %{IPORHOST:clientip}:%{POSINT:port}\] ){0,1}(?<errormessage>(?:(?!, referer).)*)(?:, referer: %{GREEDYDATA:referer})?" }
    }
    geoip {
      source => "clientip"
    }
    mutate {
      remove_field => [ "beat", "day", "host", "month", "tags", "source" ]
    }
  }
}
output {
  elasticsearch {
    hosts => [ "elasticsearch_ip:9200" ]
  }
}
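The two access-log branches above expect each line to be a syslog-prefixed combined Apache log with two extra trailing fields: the response time in milliseconds and a quoted proxy IP (that is what %{NUMBER:resptime_ms} and %{QS:proxy_ip} capture). A quick local sanity check of that layout, using a made-up sample line:

```shell
# Made-up sample: syslog prefix + combined Apache log + resptime_ms + quoted proxy IP
line='Oct 10 13:55:36 web01 apache: 203.0.113.7 - - [10/Oct/2017:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326 "http://example.com/" "curl/7.29.0" 154 "10.0.0.5"'

# Field 14 is the HTTP status; field 18 is what the grok pattern names resptime_ms
echo "$line" | awk '{print "status="$14, "resptime_ms="$18}'
# → status=200 resptime_ms=154
```

You can also syntax-check the whole pipeline before starting services; in Logstash 5.x the `-t` (`--config.test_and_exit`) flag does this, e.g. `sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t`.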
If everything looks good, start the services one by one.
Start Elasticsearch
sudo service elasticsearch start
Start Kibana
sudo service kibana start
Start Nginx
sudo service nginx start
Start Logstash
sudo initctl start logstash
If Logstash does not start and you get an 'unknown job logstash' error, create and open a new Upstart job file
sudo vi /etc/init/logstash.conf
description "logstash"
start on filesystem or runlevel [2345]
stop on runlevel [!2345]
respawn
umask 022
nice 19
limit nofile 16384 16384
chroot /
chdir /
#limit core <softlimit> <hardlimit>
#limit cpu <softlimit> <hardlimit>
#limit data <softlimit> <hardlimit>
#limit fsize <softlimit> <hardlimit>
#limit memlock <softlimit> <hardlimit>
#limit msgqueue <softlimit> <hardlimit>
#limit nice <softlimit> <hardlimit>
#limit nofile <softlimit> <hardlimit>
#limit nproc <softlimit> <hardlimit>
#limit rss <softlimit> <hardlimit>
#limit rtprio <softlimit> <hardlimit>
#limit sigpending <softlimit> <hardlimit>
#limit stack <softlimit> <hardlimit>
script
# When loading default and sysconfig files, we use `set -a` to make
# all variables automatically into environment variables.
set -a
[ -r "/etc/default/logstash" ] && . "/etc/default/logstash"
[ -r "/etc/sysconfig/logstash" ] && . "/etc/sysconfig/logstash"
set +a
exec chroot --userspec logstash:logstash / /usr/share/logstash/bin/logstash "--path.settings" "/etc/logstash" >> /var/log/logstash-stdout.log 2>> /var/log/logstash-stderr.log
end script
Save and exit the file, then start Logstash
sudo initctl start logstash
Check the logs for errors
sudo tail -f /var/log/logstash/logstash-plain.log
sudo tail -f /var/log/elasticsearch/elasticsearch.log
sudo tail -f /var/log/nginx/access.log
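Beyond tailing log files, a quick way to confirm Elasticsearch itself is healthy is its cluster-health API (replace elasticsearch_ip with the value you set for network.host):

```shell
# "status" should be green or yellow; red means shards failed to allocate
curl -s "http://elasticsearch_ip:9200/_cluster/health?pretty"
```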
Configure the client to send logs to the ELK server
Install and configure Filebeat
Create and open the repo file
sudo vi /etc/yum.repos.d/elastic.repo
[elastic-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Install Filebeat
sudo yum install filebeat
Enable Filebeat to start on system reboot
sudo chkconfig --add filebeat
Back up the original config file
sudo cp /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.bak
Configure filebeat
Open file
sudo vi /etc/filebeat/filebeat.yml
Add the following prospectors near 'input_type' (each log file gets its own prospector)
- input_type: log
  paths:
    - /var/log/apache_access_log
  document_type: central-apache-access
- input_type: log
  paths:
    - /var/log/apache_ssl_access_log
  document_type: central-apache-ssl-access
- input_type: log
  paths:
    - /var/log/apache_error_log
  document_type: central-apache-error
- input_type: log
  paths:
    - /var/log/apache_ssl_error_log
  document_type: central-apache-ssl-error
Save and exit, then check the Filebeat config syntax
sudo filebeat.sh -configtest
By default, Filebeat forwards logs to Elasticsearch; change the output to Logstash. Open the file
sudo vi /etc/filebeat/filebeat.yml
Search for 'Elasticsearch output', comment out the Elasticsearch output section, uncomment the Logstash output section, and replace the Logstash host IP with your ELK box's private IP. Then check the Filebeat syntax again.
sudo filebeat.sh -configtest
Start filebeat
sudo service filebeat start
Check the log for errors
sudo tail -f /var/log/filebeat/filebeat
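On the ELK server you can also confirm that events are arriving: with the output section configured earlier, Logstash writes to daily logstash-YYYY.MM.DD indices by default, which should show up in the index listing (replace elasticsearch_ip as before):

```shell
# Lists all indices; look for logstash-* entries with a growing docs.count
curl -s "http://elasticsearch_ip:9200/_cat/indices?v"
```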
If all is good, your pipeline has started shipping logs to the ELK server.
Happy ELK Stack..!!