Humio Library / Log Shippers / Elastic Beats / Filebeat

Filebeat

Filebeat is a lightweight, open-source program that can monitor log files and send data to servers. It has several properties that make it a great tool for sending file data to Humio:

- It uses limited resources, which is important because the Filebeat agent must run on every server where you want to capture data.
- It is easy to install and run, since Filebeat is written in the Go programming language and is built into one binary.
- Finally, it handles network problems gracefully. When Filebeat reads a file, it keeps track of the last point it has read. If there is no network connection, Filebeat waits to retry the data transmission, and it continues transmission when the connection is restored.

Check out Filebeat's official documentation for more information. You might also read the Getting Started Guide.

Humio supports parts of the ElasticSearch bulk ingest API. This API is served both as a sub-path of the standard Humio API and on its own port (defaulting to 9200). Data can therefore be sent to Humio by configuring Filebeat to use the built-in Elasticsearch output. Note that the Elastic non-OSS version of Filebeat does not work with Humio; other versions are compatible but require additional settings, such as allow_older_versions: true.

You can find configuration documentation for Filebeat at the Filebeat configuration page. On Linux, the Filebeat configuration file is located at /etc/filebeat/filebeat.yml. This section only aims to document the set of keys and values required to ship data to Humio; not all of the configuration options available in Filebeat are listed.

You must make the following changes to the configuration (see the Configuration Example):

- Insert a path section for each log file you want to monitor. It is possible to insert an input configuration (with paths and fields) for each file that Filebeat should monitor.
- If the log files use special, non-ASCII characters, specify the text encoding to use when reading the files with the encoding field.
- Add other fields in the fields section. These fields, and their values, will be added to each event.
- Insert the URL of your Humio installation and its port in the Elasticsearch output to match your configuration, for example where $YOUR_HUMIO_URL is the URL for your Humio installation.
- Set the username to a value as required. It will be logged in the access log of any proxy on the path, so using the hostname of the sender is a good option.
- Insert an ingest token from the repository as the password.

The default bulk_max_size of 200 is fine for most use cases. The Humio server does not limit the size of the ingest request, so if all your events are fairly small, you can increase bulk_max_size from the default of 200 to 300. But keep bulk_max_size low, as the requests may time out if they get too large. In case of timeouts, Filebeat will back off, thus getting worse performance than with a lower bulk_max_size. If Filebeat is not able to keep up with the inputs, you may want to increase the number of worker instances (worker) from the default of 1 to (say) 5 or 10 to achieve more throughput. To get higher throughput, also increase the internal queue size (to 32000, for example) to allow buffering for more workers.

The main configuration elements are: the output object, which contains all the configuration related to the output of the log shipper and where the data is sent; the source block, which configures the sources of data that will be sent to Humio; the fields section, which specifies fields and values to add to each event; and the queue settings, which control the number of events to store in the buffer, the minimum number of events to send to Humio when flushing the pipeline, and the maximum amount of time to allow for performing a flush.
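Putting the pieces described above together, a minimal filebeat.yml might look like the following sketch. The log paths, the field name and value, the sender hostname, and the token placeholder are assumptions for illustration; only bulk_max_size, worker, hosts, username, password, paths, encoding, and fields are settings named in this document or in Filebeat's own configuration reference.

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log      # assumption: example log file to monitor
    encoding: utf-8               # set if the logs use special, non-ASCII characters
    fields:
      service: myapp              # assumption: example field added to each event

output.elasticsearch:
  # $YOUR_HUMIO_URL is the URL of your Humio installation; 9200 is the
  # default port for Humio's ElasticSearch bulk ingest API.
  hosts: ["$YOUR_HUMIO_URL:9200"]
  username: "myhost.example.com"    # e.g. the hostname of the sender
  password: "${HUMIO_INGEST_TOKEN}" # an ingest token from the repository
  worker: 1                         # raise to 5 or 10 if Filebeat cannot keep up
  bulk_max_size: 200                # default; keep low to avoid request timeouts
```

The ingest token is read from an environment variable here rather than written into the file, which keeps the credential out of configuration management.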
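The buffer and flush behavior described above corresponds to Filebeat's in-memory queue settings. The keys below come from Filebeat's general configuration reference, and the values are assumptions sketching the higher-throughput setup mentioned in the text:

```yaml
queue.mem:
  events: 32000          # number of events to store in the buffer
  flush.min_events: 200  # minimum number of events to send when flushing (assumption: matched to bulk_max_size)
  flush.timeout: 1s      # maximum amount of time to allow for performing a flush (assumed value)
```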