FAQ

Created: 2024-11-05 | Last Modified: 2024-11-05

This document was translated by ChatGPT

# 1. Deployment

  1. What is the difference between all-in-one mode deployment and regular deployment?

    Answer: In all-in-one mode, the storage components ClickHouse and MySQL have no corresponding PVCs and are deployed with hostPath volumes. If the K8s cluster has multiple nodes, the deepflow-clickhouse/mysql Pods may be rescheduled to other nodes after a restart, making previously collected data unqueryable. All-in-one deployment is recommended only for a quick trial; use the regular deployment mode for testing/POC stages.
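
    For reference, here is a minimal sketch of what switching the storage components to PVC-backed volumes can look like at install time. The storageConfig/storageClass key names are assumptions; verify them against the default values.yaml of the deepflow chart for your version:

    ```bash
    # Hedged sketch: a regular (non-all-in-one) install backs ClickHouse and
    # MySQL with PVCs from a real StorageClass instead of hostPath.
    # The value keys below are assumptions; check the chart's values.yaml.
    helm install deepflow -n deepflow deepflow/deepflow --create-namespace \
      --set clickhouse.storageConfig.storageClass=my-storage-class \
      --set mysql.storageConfig.storageClass=my-storage-class
    ```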

  2. How long is the data generally retained, and can it be adjusted?

    Answer: The retention period varies by data type. You can check the default retention period for each type of data in server.yaml, and adjust it before the first deployment by overriding the default configuration during the helm installation.
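
    As a hedged example, a retention override could be passed at install time; the key below is a placeholder, so copy the real key names from server.yaml before use:

    ```bash
    # Hedged sketch: override a retention setting before the FIRST install.
    # 'deepflow-server.config.flow-log-ttl-hour' is a placeholder key; use the
    # real key names from the server.yaml referenced above.
    helm install deepflow -n deepflow deepflow/deepflow --create-namespace \
      --set deepflow-server.config.flow-log-ttl-hour=72
    ```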

  3. How do I use an external MySQL/ClickHouse?

    Answer: Refer to the sections Using Managed MySQL and Using Managed ClickHouse in the Production Deployment Recommendations.

  4. The deployment includes two storage components, MySQL and ClickHouse. What is the difference between them?

    Answer: MySQL stores metadata synchronized from the monitored clusters, such as virtual machines, K8s resources, and registered agent information. ClickHouse stores the continuously collected observability data, such as network flow logs from the cluster, and serves aggregation queries over it.
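
    To see this split first-hand, you can peek into both stores. A minimal sketch, assuming default in-cluster workload names and the deepflow / flow_log database names; verify the resource names with kubectl get and the table names with SHOW TABLES before relying on them:

    ```bash
    # Metadata side: list tables in MySQL (workload name and credentials
    # are assumptions; adjust them to your install).
    kubectl exec -it -n deepflow deploy/deepflow-mysql -- \
      mysql -uroot -p -e 'SHOW TABLES FROM deepflow;'

    # Data side: count collected flow logs in ClickHouse (table name assumed).
    kubectl exec -it -n deepflow sts/deepflow-clickhouse -- \
      clickhouse-client -q 'SELECT count() FROM flow_log.l4_flow_log'
    ```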

  5. Why is there no data in Grafana after deployment?

    Answer: Please troubleshoot by following these steps (a consolidated command sketch follows the list):

    • Check that all Pods are running: execute `kubectl get pods -n deepflow` and confirm every Pod is in the Running state.

    • Check that the DeepFlow Agent and DeepFlow Server are connected: confirm the domain has been created with `deepflow-ctl domain list`, and confirm each agent's STATE is NORMAL with `deepflow-ctl agent list`.

    • If there is no data in the Network - X dashboards, check whether the NIC names match the capture rules. You can view the default capture range with `deepflow-ctl agent-group-config example | grep tap_interface_regex`. If you use a custom CNI or have set up networking in another way, add matching rules for your NICs to tap_interface_regex and update the agent configuration to apply the change.

    • If there is no data in the Application - X dashboards, confirm that the application protocols used in the cluster are on the supported list.
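
    The checks above, gathered into one sketch (all commands are the ones referenced in the list; adjust names to your install):

    ```bash
    # 1) All Pods should be Running.
    kubectl get pods -n deepflow
    # 2) The domain should exist, and every agent's STATE should be NORMAL.
    deepflow-ctl domain list
    deepflow-ctl agent list
    # 3) Inspect the default capture-interface regex; extend tap_interface_regex
    #    in the agent-group configuration if you use a custom CNI.
    deepflow-ctl agent-group-config example | grep tap_interface_regex
    ```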

  6. I have configured OpenTelemetry integration, or want to use DeepFlow's eBPF and network tracing capabilities, but there is no data in the Distributed Tracing dashboard?

    Answer: Please troubleshoot by following these steps (a command sketch follows the list):

    • Using OpenTelemetry integration:

      • Confirm that the application has integrated the OTel SDK or started the OTel Agent.

      • Confirm that the configuration has been completed according to the steps in Configuring DeepFlow. On the container node where deepflow-agent runs, you can check that this feature has started with `netstat -alntp | grep 38086`. Once configured, you should also see flow logs with Server Port 38086 in Network - Flow Log.

      • In the Application - K8s Pod Map dashboard, check for traffic along the path from the application to the otel-agent and on to the container node, to ensure this network link is healthy and requests are flowing.

      • In the Application - Request Log dashboard, confirm whether the reported requests contain any anomalies.

    • Using eBPF capabilities:

      • Confirm that the server kernel version meets the requirements.

      • Check all replicas of deepflow-agent: verify the eBPF module started with `kubectl logs -n deepflow ds/deepflow-agent | grep 'ebpf collector'`, and confirm the eBPF tracer is running with `kubectl logs -n deepflow ds/deepflow-agent | grep TRACER`.
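
    The commands above, gathered into one sketch. Note that `kubectl logs ds/...` reads only one replica, hence the loop; the app=deepflow-agent label selector is an assumption, so adjust it to your install:

    ```bash
    # OpenTelemetry path: on the node running deepflow-agent, confirm the
    # OTLP receiving port (38086, from the steps above) is listening.
    netstat -alntp | grep 38086

    # eBPF path: check every deepflow-agent replica, not just one.
    # The label selector is an assumption; adjust it to your install.
    for p in $(kubectl get pods -n deepflow -l app=deepflow-agent -o name); do
      echo "== $p"
      kubectl logs -n deepflow "$p" | grep -E "ebpf collector|TRACER"
    done
    ```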

# 2. Product

  1. What should I do after installation and deployment? Are there any product cases or usage scenarios to refer to?

    Answer: You can read the cases we share in our Starting Observability blog series and our troubleshooting blog series, and review past talks on our Bilibili account.

  2. I think some features could be improved and want to make suggestions. How can I do that?

    Answer: You are welcome to submit a Feature Request as a GitHub Issue. If you already have a mature idea, you can also implement it directly and submit a GitHub PR.

  3. Where can I track the latest developments of DeepFlow?

    Answer: You can check the overview of each release in the Release Notes, or follow our latest blogs.

# 3. Contact Us

If none of the above solves your problem, you can file a GitHub Issue or contact us directly.