# Logging
- Welcome to the Logging hands-on lab! In this tutorial, we will learn the essentials of logging in Kubernetes clusters.
- We will deploy a sample application, configure log collection, and explore logs using popular tools like Fluentd, Elasticsearch, and Kibana (the EFK stack).
## What will we learn?
- Why logging is important in Kubernetes
- How to deploy a sample app that generates logs
- How to collect logs using Fluentd
- How to store and search logs with Elasticsearch
- How to visualize logs with Kibana
- Troubleshooting and best practices
## Introduction
- Logging is critical for monitoring, debugging, and auditing applications in Kubernetes.
- Kubernetes does not provide a built-in, centralized logging solution, but it integrates with many logging stacks.
- We will set up the EFK stack (Elasticsearch, Fluentd, Kibana) to collect, store, and visualize logs from our cluster.
## Lab
### Step 01 - Deploy a Sample Application
- Deploy a simple Nginx application that generates access logs:
```shell
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
```
- Check that the pod is running:
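For example (assuming the default namespace; `kubectl create deployment` labels the pods `app=nginx`):

```shell
# List the Nginx pods created by the deployment; STATUS should be "Running"
kubectl get pods -l app=nginx
```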
### Step 02 - Deploy Elasticsearch
- Deploy Elasticsearch using Helm:
```shell
helm repo add elastic https://helm.elastic.co
helm repo update
helm install elasticsearch elastic/elasticsearch --set replicas=1 --set minimumMasterNodes=1
```
- Wait for the pod to be ready and check its status:
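With the `elastic/elasticsearch` chart, the release above typically creates a StatefulSet named `elasticsearch-master`; a sketch of the check, assuming the chart's default labels:

```shell
# Watch the Elasticsearch pod until it reports Ready (1/1)
kubectl get pods -l app=elasticsearch-master -w
```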
### Step 03 - Deploy Kibana
- Deploy Kibana using Helm:
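Assuming the `elastic` Helm repository added in Step 02, a minimal install looks like:

```shell
# Install Kibana from the same Elastic Helm repository
helm install kibana elastic/kibana
```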
- Forward the Kibana port:
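With a release named `kibana`, the `elastic/kibana` chart typically exposes a Service called `kibana-kibana` on port 5601; a sketch:

```shell
# Forward local port 5601 to the Kibana service inside the cluster
kubectl port-forward service/kibana-kibana 5601:5601
```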
If you are running this lab in Google Cloud Shell:
- After running the port-forward command above, click the Web Preview button in the Cloud Shell toolbar (usually at the top right).
- Enter port 5601 when prompted.
- This will open Kibana in a new browser tab at a URL like https://<cloudshell-id>.shell.cloud.google.com/?port=5601.
- If you see a warning about an untrusted connection, you can safely proceed.
- Access Kibana at http://localhost:5601 (if running locally) or via the Cloud Shell Web Preview, as explained above.
### Step 04 - Deploy Fluentd
- Deploy Fluentd as a DaemonSet to collect logs from all nodes and forward them to Elasticsearch:
```shell
kubectl apply -f https://raw.githubusercontent.com/fluent/fluentd-kubernetes-daemonset/master/fluentd-daemonset-elasticsearch-rbac.yaml
```
- Check that the Fluentd pods are running:
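This manifest deploys Fluentd into the `kube-system` namespace with the label `k8s-app=fluentd-logging`. Note that its `FLUENT_ELASTICSEARCH_HOST` environment variable defaults to a service named `elasticsearch-logging`, so you may need to edit it to point at your Elasticsearch service (for example `elasticsearch-master.default.svc.cluster.local`). A sketch of the check:

```shell
# List the Fluentd DaemonSet pods (one per node)
kubectl get pods -n kube-system -l k8s-app=fluentd-logging
```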
### Step 05 - Generate and View Logs
- Access the Nginx service to generate logs:
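One way to generate traffic, sketched here with hypothetical variable names, is to look up the assigned NodePort and curl it a few times:

```shell
# Resolve the NodePort assigned to the nginx service and an address of a node
NODE_PORT=$(kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}')
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}')

# Each request is written to the Nginx access log, which Fluentd picks up
for i in $(seq 1 10); do
  curl -s "http://$NODE_IP:$NODE_PORT/" > /dev/null
done
```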
In Kibana, configure an index pattern to view logs:
- Open Kibana in your browser (using the Cloud Shell Web Preview as described above).
- In the left menu, click Stack Management > Kibana > Index Patterns.
- Click Create index pattern.
- In the “Index pattern” field, enter fluentd-* (or logstash-* if your logs use that prefix).
- Click Next step.
- For the time field, select @timestamp and click Create index pattern.
- Go to Discover in the left menu to view and search your logs.
Explore the logs, search, and visualize traffic.
## Troubleshooting
### Pods not starting
- Check pod status and logs:
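For example (replace `<pod-name>` with the failing pod):

```shell
kubectl get pods --all-namespaces   # overall pod status across namespaces
kubectl describe pod <pod-name>     # events: scheduling, image pulls, probes
kubectl logs <pod-name>             # container output
```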
### Kibana not reachable
- Ensure port-forward is running and no firewall is blocking port 5601.
### No logs in Kibana
- Check Fluentd and Elasticsearch pod logs for errors.
- Ensure the index pattern is set up correctly in Kibana.
## Cleanup
- To remove all resources created by this lab:
```shell
helm uninstall elasticsearch
helm uninstall kibana
kubectl delete deployment nginx
kubectl delete service nginx
kubectl delete -f https://raw.githubusercontent.com/fluent/fluentd-kubernetes-daemonset/master/fluentd-daemonset-elasticsearch-rbac.yaml
```
## Next Steps
- Try deploying other logging stacks like Loki + Grafana.
- Explore log aggregation, alerting, and retention policies.
- Integrate logging with monitoring and alerting tools.
- Read more in the Kubernetes logging documentation.