Configuring your applications to write logs as container logs is the easy part. When you're running dozens or hundreds of containers in production, you need a centralized system to store and search those logs. One of the most popular approaches is the EFK stack: Elasticsearch, Fluentd and Kibana.
In this episode you’ll learn how to run Fluentd (and Fluent Bit) to collect all your container logs and forward them to Elasticsearch for storage. Then you have a central log store with indexed log entries, and you’ll see how to use Kibana to search and visualize log data.
Here it is on YouTube - ECS-V2: Logging with Elasticsearch, Fluentd and Kibana
Docker Desktop - with Kubernetes enabled (Linux container mode if you’re running on Windows).
The basic requirement here is to have your application logs written to stdout, so they’re available as container logs.
Try a simple app:
docker run diamol/ch12-timecheck:1.0
Docker has a plugin system so it can send logs to different collectors. Fluentd is supported out of the box.
Run a Fluentd container to collect container logs:
docker run -d --name fluentd `
 -p 24224:24224 `
 -v "$(pwd)/demo1:/fluentd/etc" `
 -e FLUENTD_CONF=stdout.conf `
 diamol/fluentd

docker logs fluentd
You’ll see some log entries from Fluentd itself. The collection config the container is using is in stdout.conf.
Fluentd is listening on port 24224, so Docker can send container logs to it.
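The config will look something like this - a minimal sketch, assuming the standard forward input and stdout output plugins rather than the exact contents of stdout.conf:

# minimal sketch of a config like stdout.conf
<source>
  @type forward       # listen on port 24224 for Docker's fluentd log driver
  port 24224
</source>

<match **>
  @type stdout        # write every incoming log record to Fluentd's own stdout
</match>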
Run the app container using Fluentd logging:
docker run -d --name timecheck `
 --log-driver=fluentd `
 diamol/ch12-timecheck:1.0

docker logs -f timecheck

docker logs -f fluentd
The app container logs are shown in the Fluentd container logs.
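The driver sends to localhost:24224 by default; if the collector is elsewhere you can set the address explicitly with the standard fluentd-address log option - a sketch, with the container name timecheck2 just for illustration:

docker run -d --name timecheck2 `
 --log-driver=fluentd `
 --log-opt fluentd-address=localhost:24224 `
 diamol/ch12-timecheck:1.0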
Previous versions of Docker wouldn't show container logs when you used the Fluentd driver; newer releases keep a local copy as well (dual logging), so docker logs still works.
The EFK stack uses Fluentd to collect logs and forward them to Elasticsearch for storage. Kibana is the front-end to visualize and search the logs.
Clean up and switch to Swarm mode:
docker rm -f $(docker ps -aq)

docker swarm init
We’ll deploy EFK as its own stack using this manifest - logging.yml.
The Fluentd configuration is in fluentd-es.conf.
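The output section is the interesting part - a sketch of what an Elasticsearch match typically looks like with the fluent-plugin-elasticsearch plugin, assuming the Elasticsearch service in the stack is named elasticsearch:

# hedged sketch of the Elasticsearch output in a config like fluentd-es.conf
<match **>
  @type elasticsearch
  host elasticsearch       # assumption: matches the service name in logging.yml
  port 9200
  logstash_format true     # daily indexes, easy to cover with one Kibana index pattern
</match>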
docker config create fluentd-es demo2/config/fluentd-es.conf

docker stack deploy -c demo2/logging.yml logging

docker service ls

docker service logs logging_fluentd

docker ps
Open Kibana at http://localhost:5601 and add an index pattern to load the Fluentd logs.
Now deploy the app as a separate stack, configured to use the Fluentd driver. With the global Fluentd service, every container will use the Fluentd collector running locally on the node.
Here’s the application manifest: timecheck.yml.
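The part that matters is the logging section on the service - a sketch of the shape it takes, where the tag template is an assumption (any template that identifies the source container will do):

# hedged sketch of the logging config in a compose file like timecheck.yml
services:
  timecheck:
    image: diamol/ch12-timecheck:1.0
    logging:
      driver: fluentd
      options:
        tag: "{{.ImageName}}/{{.Name}}"   # assumption: tag template identifying the container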
Deploy the app:
docker stack deploy -c demo2/timecheck.yml timecheck

docker stack ps timecheck

docker service logs timecheck_timecheck
Check in Kibana at http://localhost:5601 - apply a filter on the app_name field. All the replica logs are collected and stored.
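In the Kibana search bar, a query like this narrows the view to one app - app_name is the field from the Fluentd config, and the timecheck value is an assumption based on the app name:

app_name:timecheck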
You can run the same stack with Kubernetes - the architecture is the same, but the Fluentd collector configuration is different.
Kubernetes writes container logs to files on the nodes, so Fluentd will use those log files as the source.
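The runtime writes those files under a standard path, with names encoding the Pod, namespace and container - an illustrative (made-up) example:

/var/log/containers/timecheck-6f8d9c7b5d-x2kqp_default_timecheck-0123456789abcdef.log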
Clear down and check Kubernetes:
docker swarm leave -f

docker ps

kubectl get nodes

kubectl get ns
We're going to use Fluent Bit, which is lighter than Fluentd but has a similar config pipeline.
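The pipeline starts with a tail input reading the node's log files - a minimal sketch using the standard tail plugin and stock docker parser, not necessarily the demo's exact config:

# hedged sketch of a Fluent Bit input for Kubernetes container logs
[INPUT]
    Name    tail
    Path    /var/log/containers/*.log
    Parser  docker        # parse the JSON log format written by the runtime
    Tag     kube.*        # tag records so later stages can route by source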
The key specs are in the demo3/logging folder. Deploy the whole stack:
kubectl apply -f demo3/logging/

kubectl -n logging get pods

kubectl -n logging logs -l app=fluent-bit
Browse to the new Kibana at http://localhost:5602 and create an index pattern for the Kubernetes system logs.
All the Kubernetes system component logs are stored in this index - Fluent Bit uses separate indexes for different namespaces.
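One way to get there is an Elasticsearch output per namespace-based tag - an illustrative sketch where the match pattern and index name are assumptions, not the demo's actual values:

# hedged sketch: routing one namespace to its own index
[OUTPUT]
    Name   es
    Match  kube.kube-system.*   # assumes tags were rewritten to include the namespace
    Host   elasticsearch
    Port   9200
    Index  sys                  # index name is an assumption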
In the app manifest timecheck.yaml there's no logging setup. The app deploys to the default namespace, and Fluent Bit is configured to collect all logs from Pods in that namespace.
Deploy the timecheck app:
kubectl apply -f demo3/timecheck/
Refresh Kibana at http://localhost:5602 - create an index pattern for apps; back in the Discover tab, select the apps index and check the logs.
Any app deployed to the default namespace will have its logs collected.
Deploy the APOD app:
kubectl apply -f demo3/apod/

kubectl get po
Browse to the app at http://localhost:8014; refresh Kibana at http://localhost:5602 and check the apps index for the new log entries.
All the Kubernetes resources are labelled.
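A sketch of the shared label in the resource metadata - the value matches the selector in the delete command:

# hedged sketch of the label carried by each resource in the demo3 manifests
metadata:
  labels:
    ecs: v2

That label lets you clear everything down with one command: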
kubectl delete ns,svc,deploy,clusterrole,clusterrolebinding -l ecs=v2