Getting Started with ELK
22 Mar 2016

Introduction
When troubleshooting or debugging an incident, centralized logging is a key component because it is almost impossible to log into each server and check its logs individually. A centralized logging system lets you search through all your logs in a single place. It also makes it easier to identify issues that span multiple servers within a specific time frame.
As a centralized logging solution, the ELK stack provides an open source way to gather all the logs and offers visualization and analysis tools on top of the gathered data.
ELK stack consists of four main components:
- Logstash - The server component that processes incoming logs
- ElasticSearch - Database server that stores all the logs and indexes the log data
- Kibana - Web interface for searching and visualizing logs
- Filebeat - Low-footprint shipping agent that sends logs to the Logstash or ElasticSearch server
ElasticSearch
ElasticSearch is a search server based on Lucene. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. Elasticsearch is developed in Java and is released as open source under the terms of the Apache License. In the ELK stack, ElasticSearch is used to store and index all the logs.
Configuration
Update the network host setting to the IP address of the box:
elasticsearch.yml
network.host: <IPAddress>
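A few other settings are commonly adjusted in elasticsearch.yml as well; the values below are illustrative examples, not part of this setup:

```yaml
# elasticsearch.yml (illustrative values)
cluster.name: art-logging   # name shared by all nodes that should form one cluster
node.name: node-1           # human-readable name for this node
http.port: 9200             # HTTP port (9200 is the default)
```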
Install Plugins
bin/plugin install mobz/elasticsearch-head
bin/plugin install karmi/elasticsearch-paramedic
Default Port: 9200
Starting ElasticSearch
# Running from command line
./bin/elasticsearch
# Running as a daemon, writing the process ID to the file `pid`
./bin/elasticsearch -d -p pid
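Once ElasticSearch is up, you can verify it over the HTTP interface on port 9200. The requests below are a sketch in the console notation used by the ElasticSearch docs; the index name is hypothetical:

```
GET /_cluster/health?pretty

PUT /art-logs-2016.03.22/logs/1
{ "message": "sample log entry" }

GET /art-logs-2016.03.22/_search?q=message:sample
```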
LogStash
Logstash is an open source tool for collecting, parsing, and storing logs for future use.
Logstash requires a configuration file. Here's a sample configuration file:
input {
  file {
    path => "/home/vagrant/art/logs/*.log"
    start_position => "beginning"
    ignore_older => 0
    type => "logs"
  }

  # Get logs over TCP 9100 as JSON
  tcp {
    port => 9100
    codec => json
    type => "logs"
  }
}

filter {
}

output {
  # Send output to standard output device/interface
  stdout {}

  if [type] == "logs" {
    # Send output to ElasticSearch over the HTTP interface
    elasticsearch {
      hosts => ["10.66.164.116"]
      index => "art-logs-%{+YYYY.MM.dd}"
    }
  }
}
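The filter block above is intentionally empty; in practice this is where log lines get parsed into structured fields. As a sketch, assuming the incoming lines were Apache combined-format access logs (which is not necessarily the case for the logs above), a grok filter could look like this:

```
filter {
  if [type] == "logs" {
    grok {
      # Parse Apache combined-format access logs into structured fields
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    date {
      # Use the timestamp from the log line as the event timestamp
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  }
}
```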
Testing Logstash Configuration
bin/logstash -f logstash.conf --configtest
Running Logstash
bin/logstash -f logstash.conf
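A quick way to experiment with Logstash is a minimal configuration that reads events from stdin and pretty-prints them to stdout; this is a standalone sketch, not part of the setup above:

```
# minimal.conf - type a line at the prompt, see the parsed event
input  { stdin {} }
output { stdout { codec => rubydebug } }
```

Run it with `bin/logstash -f minimal.conf` and type a line to see the event structure Logstash produces.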
Kibana
Kibana is an open source data visualization plugin for Elasticsearch. It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster. Users can create bar, line and scatter plots, or pie charts and maps on top of large volumes of data.
Configuration
Update kibana.yml
with the ElasticSearch URL
elasticsearch.url: "http://<IPAddress>:<Port>"
Default Port: 5601
Beats
Beats is the platform for single-purpose data shippers. They install as lightweight agents and send data from hundreds or thousands of machines to Logstash or ElasticSearch.
- Beats Types
- Filebeat - Log Files
- Metricbeat - Metrics
- Packetbeat - Network Data
- Winlogbeat - Windows Event Logs
- Heartbeat - Uptime Monitoring
Configure Filebeat
The filebeat.yml
file contains the configuration options for Filebeat.
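A minimal filebeat.yml sketch, assuming logs live under the same path used in the Logstash example and that Logstash listens on the default Beats port 5044 (both values are assumptions, not taken from this setup):

```yaml
filebeat:
  prospectors:
    -
      # Which files to ship; path is assumed to match the Logstash example
      paths:
        - /home/vagrant/art/logs/*.log
      input_type: log
output:
  logstash:
    # Assumes Logstash is reachable here with a beats input on port 5044
    hosts: ["10.66.164.116:5044"]
```

Note that shipping to Logstash this way requires a `beats` input plugin in the Logstash configuration, rather than the `tcp` input shown earlier.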
Test Configuration File
./filebeat -configtest -e