AWS Elasticsearch is a powerful way to search and analyze your CloudWatch logs. By streaming log groups from CloudWatch into an Elasticsearch domain, you can run complex queries across all of your servers’ logs and explore the results visually in Kibana. This guide walks through creating a domain, streaming your CloudWatch logs into it, and browsing the data.
Elasticsearch is an open source search engine. It’s commonly used in conjunction with log aggregation to make analyzing server logs easy. AWS offers it as a managed service along with Kibana, a visualization dashboard for Elasticsearch.
Why Bother with Log Files?
Everything in Linux is logged. System actions are logged to /var/log/syslog, and most applications create log files. Most notably, web servers create log entries for every request, making log analytics a very powerful tool. Nginx logs a bunch of info for every request including:
- The IP address of the connecting user
- A username, if using basic authentication (blank most of the time)
- The time of the request
- The request itself (for example, “GET /index.php?url=abc”)
- The status code returned
- The number of bytes sent, excluding HTTP headers (useful for tracking the actual size of traffic)
- The HTTP referer (that is, the site the user came from)
- The user agent of the user’s browser
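To get a feel for what those fields look like in practice, here’s a rough Python sketch that pulls them out of a line in Nginx’s default “combined” format. The regex and the sample line are only illustrative; adjust them if your log_format differs.

```python
import re

# Nginx "combined" log format: IP, user, time, request, status, bytes, referer, user agent.
# Illustrative only; tweak the pattern if your log_format is customized.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

sample = ('203.0.113.7 - - [12/Mar/2021:10:15:32 +0000] '
          '"GET /index.php?url=abc HTTP/1.1" 200 1532 '
          '"https://example.com/" "Mozilla/5.0"')

match = LOG_PATTERN.match(sample)
if match:
    print(match.group("status"), match.group("request"))
```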
For example, if you want to know which pages on your site are the most error-prone, you can search by error code and then check which pages show up most often for each code.
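As a rough sketch of what that query might look like against the cluster, the snippet below aggregates 4xx/5xx requests by page. The endpoint, the “cwl-*” index pattern, and the “status” and “request” field names are assumptions that depend on how your logs were streamed and parsed, and it also assumes your domain’s access policy allows your IP (otherwise requests must be SigV4 signed).

```python
import requests

# Hypothetical example: find the most error-prone pages by aggregating
# requests that returned a 4xx or 5xx status. Adjust the endpoint, index
# pattern, and field names to match your own setup.
ES_ENDPOINT = "https://search-my-domain.us-east-1.es.amazonaws.com"

query = {
    "size": 0,
    "query": {"range": {"status": {"gte": 400}}},
    "aggs": {
        "top_pages": {"terms": {"field": "request.keyword", "size": 10}}
    },
}

resp = requests.post(f"{ES_ENDPOINT}/cwl-*/_search", json=query)
for bucket in resp.json()["aggregations"]["top_pages"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```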
This isn’t really feasible to do manually, especially if you’ve got a lot of web servers. Elasticsearch (ES) solves this issue by enabling you to perform complex queries on aggregated log files. Kibana is a visualization dashboard, and serves as a frontend for ES. AWS provides both of these as one managed service with AWS Elasticsearch Service.
You are required to pay for the server that Elasticsearch runs on, though you are only charged a slight premium over standard EC2 rates. If you’re just doing analysis every once in a while, you don’t have to run this server all the time. However, if you are running it all the time, you can purchase reserved instances to lower the price.
To make this all work, you need to get your log files out of your EC2 instance and into CloudWatch. AWS makes this easy with the CloudWatch Logs Agent—you should read our guide on setting it up before proceeding with Elasticsearch.
If you just run a web server or two, you probably don’t need an entire service just for looking at your logs. CloudWatch Logs itself has great built-in search tools from the Insights tab, and can perform some simple visualizations.
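If you want to script those Insights searches rather than use the console, here is a minimal boto3 sketch; the log group name and query string are placeholders for your own setup.

```python
import time
import boto3

# Minimal sketch of running a CloudWatch Logs Insights query from code.
# The log group name and query string are placeholders.
logs = boto3.client("logs")

query_id = logs.start_query(
    logGroupName="/var/log/nginx/access.log",
    startTime=int(time.time()) - 3600,  # last hour
    endTime=int(time.time()),
    queryString="fields @timestamp, @message | sort @timestamp desc | limit 20",
)["queryId"]

# Poll until the query finishes, then print the matching rows.
results = logs.get_query_results(queryId=query_id)
while results["status"] in ("Running", "Scheduled"):
    time.sleep(1)
    results = logs.get_query_results(queryId=query_id)

for row in results["results"]:
    print({field["field"]: field["value"] for field in row})
```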
But, if you’ve got numerous servers and a lot of data to analyze, you may benefit from Elasticsearch and Kibana.
Getting Started with AWS Elasticsearch
Head over to the Elasticsearch console and create a new domain. An ES domain is a cluster of servers that operate as one search engine. The setup is fairly simple: give it a name, and specify which instance type you want it to run on. The default is r5.large, but you should probably start with a t2.small instance to try it out.
Give it a disk size large enough to store your logs (the default is 10 GB) and click create. New domains take around 10 minutes to initialize, so grab a cup of coffee and come back in a bit.
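If you’d rather script this step than click through the console, a rough boto3 sketch looks like the following. The domain name is a placeholder, and the Elasticsearch version is an assumption; check which versions your chosen instance type actually supports.

```python
import boto3

# Hedged sketch: create a small Elasticsearch domain roughly matching the
# walkthrough above (one t2.small node, 10 GB of gp2 storage).
# The domain name and version are placeholders.
es = boto3.client("es")

es.create_elasticsearch_domain(
    DomainName="web-logs",
    ElasticsearchVersion="7.10",
    ElasticsearchClusterConfig={
        "InstanceType": "t2.small.elasticsearch",
        "InstanceCount": 1,
    },
    EBSOptions={"EBSEnabled": True, "VolumeType": "gp2", "VolumeSize": 10},
)

# Domains take roughly ten minutes to come up; poll until "Processing" clears.
status = es.describe_elasticsearch_domain(DomainName="web-logs")
print(status["DomainStatus"]["Processing"])
```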
Next, head back to the CloudWatch console, pick the log group you want to analyze, and choose to stream it to Amazon Elasticsearch Service. Select the ES domain you just set up, and then create a new IAM role for the Lambda function that handles the streaming. The policy should be preconfigured, but if it isn’t, it needs permission to post to your Elasticsearch domain.
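If you do need to add that permission by hand, a hedged sketch of the inline policy looks like this. The role name, policy name, region, account ID, and domain name are all placeholders.

```python
import json
import boto3

# Hedged sketch: inline policy for the streaming Lambda's role, granting
# permission to post documents to the Elasticsearch domain. All names and
# ARNs below are placeholders.
iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "es:ESHttpPost",
            "Resource": "arn:aws:es:us-east-1:123456789012:domain/web-logs/*",
        }
    ],
}

iam.put_role_policy(
    RoleName="cwl-to-elasticsearch",
    PolicyName="post-to-es",
    PolicyDocument=json.dumps(policy),
)
```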
Click next, and you’ll be asked to specify the log format. You can enter a custom format here, or choose “Common Log Format,” the default for most web server logs.
Click through the setup and start the log streaming. You should see logs being ingested within a few seconds.
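Under the hood, the wizard wires your log group to a streaming Lambda function with a subscription filter. If you ever need to recreate that wiring yourself, a hedged sketch looks roughly like this; the log group name and function ARN are placeholders, and the Lambda also needs a resource-based permission allowing CloudWatch Logs to invoke it.

```python
import boto3

# Hedged sketch of what the console wizard sets up: a subscription filter
# that forwards every new log event to the streaming Lambda.
# The log group name and Lambda ARN are placeholders.
logs = boto3.client("logs")

logs.put_subscription_filter(
    logGroupName="/var/log/nginx/access.log",
    filterName="stream-to-elasticsearch",
    filterPattern="",  # an empty pattern forwards every event
    destinationArn="arn:aws:lambda:us-east-1:123456789012:function:LogsToElasticsearch",
)
```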
Head over to your Kibana instance (the endpoint URL is found in the domain info panel), and create the initial index pattern. All new log data streamed to the Elasticsearch cluster will show up there, and you can browse through your logs in the Discover tab.
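If Discover comes up empty, it can help to confirm that documents are actually landing in the cluster. One quick check, assuming your access policy allows your IP to reach the endpoint directly (otherwise requests must be signed), is to list the indices; the “cwl-” prefix shown is what the streaming Lambda typically uses for CloudWatch-sourced data.

```python
import requests

# Hedged check that log data is arriving: list the cluster's indices.
# The endpoint is a placeholder; look for indices named like "cwl-YYYY.MM.DD".
ES_ENDPOINT = "https://search-my-domain.us-east-1.es.amazonaws.com"

print(requests.get(f"{ES_ENDPOINT}/_cat/indices?v").text)
```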
From here, everything is configured, and you’re free to use Kibana how you’d like. The “Visualize” tab has options for creating and configuring graphs, which you can organize into dashboards.
You can monitor the overall health of your Elasticsearch domain from the info panel:
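The same health numbers are also published to CloudWatch under the AWS/ES namespace, so you can pull them programmatically. Here is a hedged sketch that reads free storage space; the domain name and ClientId (your account ID) are placeholders.

```python
from datetime import datetime, timedelta
import boto3

# Hedged sketch: read the domain's free storage space from the AWS/ES
# CloudWatch namespace. DomainName and ClientId values are placeholders.
cloudwatch = boto3.client("cloudwatch")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/ES",
    MetricName="FreeStorageSpace",
    Dimensions=[
        {"Name": "DomainName", "Value": "web-logs"},
        {"Name": "ClientId", "Value": "123456789012"},
    ],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Minimum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Minimum"])
```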
Elasticsearch and Kibana do take quite a bit of processing power, especially when working with huge datasets and complicated queries. ES itself can be configured to log its own queries to CloudWatch, under the “Logs” tab, which is useful for seeing which queries take the longest to process (and whether or not you need a bigger instance).
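Slow-query logging can also be switched on through the API rather than the console. A hedged sketch is below; the domain name and log group ARN are placeholders, and the target log group needs a resource policy that allows es.amazonaws.com to write to it.

```python
import boto3

# Hedged sketch: publish search slow logs to CloudWatch so you can see which
# queries take the longest. Domain name and log group ARN are placeholders.
es = boto3.client("es")

es.update_elasticsearch_domain_config(
    DomainName="web-logs",
    LogPublishingOptions={
        "SEARCH_SLOW_LOGS": {
            "CloudWatchLogsLogGroupArn": "arn:aws:logs:us-east-1:123456789012:log-group:/aws/es/web-logs/search-slow",
            "Enabled": True,
        }
    },
)
```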