In this guide, you will learn to install the Elastic stack on Ubuntu 18.04. The Elastic stack, formerly known as the ELK stack, is a collection of free and open-source software from Elastic designed for centralized logging.
It enables searching, analyzing, and visualizing logs from different sources in a myriad of formats. Centralized logging helps you identify server or application issues from a common point.
Elastic Stack Components
Elastic Stack comprises four main components.
- Elasticsearch: a RESTful search engine that stores all of the collected data
- Logstash: the data-processing component that parses incoming data and sends it to Elasticsearch
- Kibana: a web interface that visualizes logs
- Beats: lightweight data shippers that forward logs from hundreds or thousands of servers to the central server on which the ELK stack is configured
Let’s now see how you can install the Elastic stack on Ubuntu 18.04.
Prerequisites
Before you begin the installation, ensure you have the following infrastructure.
- An Ubuntu 18.04 LTS server with root access, a non-root user, and the UFW firewall. The minimum server requirements are:
- Ubuntu 18.04 LTS
- 4 GB RAM
- 2 CPUs
- Java 8 installed on your system, which is required by Elasticsearch and Logstash.
- NGINX installed on your server, which will later be configured as a reverse proxy for Kibana. Recommended Read: Install Nginx on Ubuntu 18.04
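If you are unsure whether Java 8 is already present, a quick check can save debugging later. The sketch below is a suggestion rather than part of the original guide; it assumes the OpenJDK 8 runtime package from the standard Ubuntu repositories.

```shell
# Check whether the installed Java is version 8 (Java 8 reports itself as 1.8.x)
ver=$(java -version 2>&1 | awk -F '"' '/version/ {print $2}')
case "$ver" in
  1.8.*) echo "Java 8 detected: $ver" ;;
  *)     echo "Java 8 not found; install it with: sudo apt install openjdk-8-jre-headless" ;;
esac
```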
With that said, let’s dive in and begin the installation of the Elastic stack on Ubuntu.
1. Install Elasticsearch on Ubuntu
First off, we are going to import Elasticsearch’s public GPG key into APT. Elastic stack packages are signed with the Elasticsearch signing key to protect your system against package spoofing. In addition, the package manager treats authenticated packages as trusted.
To import the GPG key run:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Next, add the Elastic repository to the sources.list.d directory using the command below.
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
Now update the system’s repository using the command below.
sudo apt update
Now, install Elasticsearch using the command below.
sudo apt install elasticsearch
2. Configure Elasticsearch on Ubuntu
Elasticsearch listens on port 9200. However, we are going to restrict outside access so that third parties cannot read data or shut down the Elastic cluster through the REST API. To do this, we will make a few modifications to the Elasticsearch configuration file as shown below.
sudo nano /etc/elasticsearch/elasticsearch.yml
Find the network.host attribute, uncomment it, and set its value to localhost. Also uncomment the http.port attribute.
network.host: localhost
http.port: 9200
Next, start and enable Elasticsearch service as shown.
sudo systemctl start elasticsearch
sudo systemctl enable elasticsearch
At this point, Elasticsearch should be up and running. You can verify this by running the command below.
systemctl status elasticsearch
You can also use the netstat command as shown.
netstat -pnltu
Also, you can run the curl command as shown.
curl -X GET "localhost:9200"
Great! We have finalized the installation and configuration of Elasticsearch. Next, we are going to install and configure Logstash.
3. Installing and configuring Logstash
The second component of the Elastic stack that we are going to install is Logstash. Logstash will be responsible for collecting and centralizing logs from various servers using the Filebeat data shipper. It will then filter and relay syslog data to Elasticsearch.
First, let’s confirm the OpenSSL version in use. To do that, run:
openssl version -a
To install Logstash, run the command below.
sudo apt install logstash -y
Next, edit the /etc/hosts file and append the following.
18.224.44.11 elk-master
Where 18.224.44.11 is the IP address of the ELK master server.
We are then going to generate an SSL certificate and key to secure log data transfer from the Filebeat client to the Logstash server.
To do this, first, create a new SSL directory under the logstash configuration directory ‘/etc/logstash’ and navigate into that directory.
sudo mkdir -p /etc/logstash/ssl
cd /etc/logstash/
Now you can generate the SSL certificate as shown below.
sudo openssl req -subj '/CN=elk-master/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout ssl/logstash-forwarder.key -out ssl/logstash-forwarder.crt
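As a sanity check, you can inspect what those openssl options produce. The sketch below is illustrative rather than part of the original flow: it regenerates a certificate with the same options into a scratch directory and prints its subject and validity window. On the real server, point openssl x509 at /etc/logstash/ssl/logstash-forwarder.crt instead.

```shell
# Generate a self-signed certificate with the same options as above, into a scratch dir
dir=$(mktemp -d)
openssl req -subj '/CN=elk-master/' -x509 -days 3650 -batch -nodes \
  -newkey rsa:2048 -keyout "$dir/logstash-forwarder.key" -out "$dir/logstash-forwarder.crt" 2>/dev/null

# -subject shows the CN (elk-master), -dates the 10-year validity window
openssl x509 -in "$dir/logstash-forwarder.crt" -noout -subject -dates
```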
Next, we are going to create new configuration files for Logstash: ‘filebeat-input.conf’ for input from Filebeat, ‘syslog-filter.conf’ for syslog processing, and lastly ‘output-elasticsearch.conf’ to define the Elasticsearch output.
Navigate to the Logstash directory and create ‘filebeat-input.conf’ in the ‘conf.d’ directory.
cd /etc/logstash/
sudo vim conf.d/filebeat-input.conf
Paste the following configuration.
input {
  beats {
    port => 5044
    type => syslog
    ssl => true
    ssl_certificate => "/etc/logstash/ssl/logstash-forwarder.crt"
    ssl_key => "/etc/logstash/ssl/logstash-forwarder.key"
  }
}
Save and exit the text editor.
For the syslog log data, we are using the filter plugin named ‘grok’ to parse the syslog messages into structured fields.
Create a new configuration ‘syslog-filter.conf’.
sudo vim conf.d/syslog-filter.conf
Paste the configuration below.
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
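To get a feel for what the grok pattern extracts, here is a rough shell approximation on a made-up syslog line (the sample line and field positions are illustrative only; grok itself does the real parsing):

```shell
# A made-up syslog line of the shape the grok pattern expects
line='Mar  1 12:00:01 elk-master sshd[1234]: Accepted password for ubuntu'

# syslog_hostname: the 4th whitespace-separated field
echo "$line" | awk '{print $4}'

# syslog_program: the 5th field with the [pid] suffix stripped
echo "$line" | awk '{print $5}' | cut -d'[' -f1
```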
Save and exit the text editor.
Finally, create a configuration file named ‘output-elasticsearch.conf’ for elasticsearch output.
sudo vim conf.d/output-elasticsearch.conf
Paste the following content.
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
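The index pattern %{[@metadata][beat]}-%{+YYYY.MM.dd} expands to the beat name plus the event date, so Logstash creates one index per day. A quick illustration with the date command (filebeat is the beat name this guide uses):

```shell
# Illustrate the daily index name Logstash will create, e.g. filebeat-2018.08.27
beat=filebeat
date +"${beat}-%Y.%m.%d"
```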
Save and exit the text editor.
When that is said and done, enable and start the Logstash service.
sudo systemctl enable logstash
sudo systemctl start logstash
To verify that Logstash is running, run the command.
sudo systemctl status logstash
You can also use the netstat command as shown.
netstat -pnltu
4. Install and configure Kibana on Ubuntu
Next, we are going to install Kibana using the command below.
sudo apt install kibana -y
Next, we are going to make a few modifications to the Kibana configuration file.
sudo vim /etc/kibana/kibana.yml
Locate and uncomment the following attributes.
server.port: 5601
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"
Save and exit the text editor.
Then enable and start the Kibana service:
sudo systemctl enable kibana
sudo systemctl start kibana
You can confirm that Kibana is running on its default port 5601 using the netstat command as shown.
netstat -pnltu
5. Installing and configuring NGINX as a reverse proxy for Kibana
We are using NGINX as a reverse proxy for the Kibana dashboard. You need to install Nginx and ‘apache2-utils’ as shown below.
sudo apt install nginx apache2-utils -y
Next, create a new virtual host file named kibana.
sudo vim /etc/nginx/sites-available/kibana
Paste the following content into the virtual host file
server {
  listen 80;
  server_name localhost;

  auth_basic "Restricted Access";
  auth_basic_user_file /etc/nginx/.kibana-user;

  location / {
    proxy_pass http://localhost:5601;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
  }
}
Save and exit the text editor. Recommended Read: NGINX location directive.
Next, create basic authentication credentials for the Kibana dashboard using the htpasswd command as shown.
sudo htpasswd -c /etc/nginx/.kibana-user elastic
Type the password for the elastic user when prompted.
In the above example, the username is elastic and the password is whatever you provide.
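If apache2-utils is unavailable for some reason, an equivalent htpasswd-compatible entry can be built with openssl alone. The username elastic and the password below are just examples; write the resulting line to /etc/nginx/.kibana-user.

```shell
# Build an htpasswd-style line: username, colon, apr1 (Apache MD5) hash --
# the same format that htpasswd -c produces
entry="elastic:$(openssl passwd -apr1 'YourStrongPassword')"
echo "$entry"   # append this line to /etc/nginx/.kibana-user
```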
Next, activate the Kibana virtual host configuration and test the Nginx configuration.
sudo ln -s /etc/nginx/sites-available/kibana /etc/nginx/sites-enabled/
sudo nginx -t
If there are no errors, enable and restart the Nginx server.
sudo systemctl enable nginx
sudo systemctl restart nginx
6. Installing and Configuring Filebeat
In this step, we are going to configure the Filebeat data shipper on our elk-master server. It will relay all syslog messages to Logstash, where they will be processed before being visualized in Kibana.
To install filebeat run:
sudo apt install filebeat
Next, open the filebeat configuration file.
sudo vim /etc/filebeat/filebeat.yml
We are going to use Logstash to perform additional processing on the data collected by Filebeat, so Filebeat does not need to send any data directly to Elasticsearch. Therefore, locate and comment out the Elasticsearch output section as shown.
#output.elasticsearch:
# Array of hosts to connect to.
#hosts: ["localhost:9200"]
Next, head to the Logstash section and uncomment it as shown.
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
Enable the filebeat prospectors by changing the ‘enabled’ line value to ‘true’.
enabled: true
Specify the system log files to be sent to the logstash server. In this example, we will add the ssh log file ‘auth.log’ and the syslog file.
paths:
  - /var/log/auth.log
  - /var/log/syslog
Save and Exit.
Finally, copy the Logstash certificate file, logstash-forwarder.crt, to the /etc/filebeat directory.
sudo cp /etc/logstash/ssl/logstash-forwarder.crt /etc/filebeat/logstash-forwarder.crt
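Because the Logstash beats input configured earlier has ssl => true, Filebeat must also be told to trust that certificate, or the connection will fail. A sketch of the relevant filebeat.yml lines, assuming the certificate path used above; note that the host Filebeat connects to should match the certificate's CN (elk-master, resolvable via the /etc/hosts entry added earlier):

```yaml
output.logstash:
  hosts: ["elk-master:5044"]
  # Trust the self-signed certificate copied into /etc/filebeat above
  ssl.certificate_authorities: ["/etc/filebeat/logstash-forwarder.crt"]
```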
Now start and enable filebeat.
sudo systemctl start filebeat
sudo systemctl enable filebeat
To check the status of Filebeat, run:
sudo systemctl status filebeat
7. Testing the Elastic Stack
To test our Elastic stack, open your browser and navigate to your server’s IP address. Nginx listens on port 80 and proxies your requests to Kibana, which itself listens only on localhost port 5601.
http://ip-address
Enter the username and password, and the following screen will be displayed.
Click on the ‘Discover’ tab and then click on ‘Filebeat’. The following interface will appear, giving you a live stream of visualized log data.
Congratulations! You have successfully installed and configured the Elastic Stack and the Elastic Beat Filebeat on your Ubuntu 18.04 system.