Log in to Grafana with your username and password. Click on the "Bell" icon on the left sidebar, then click on "Add Channel". This will open a form for adding a new notification channel.

Select Email from "Type", as we want to send notifications over email. Check "Send on all alerts" in case you want an email on all alerts. Select the "Include image" checkbox in case you want to include an image of the panel as the body of the notification email. Add the target email in the "Email addresses" text area. You can use multiple email addresses separated by " ". Click on "Send Test" if you want to verify your settings. This will send a sample email using the SMTP details we configured earlier.

Create an Alert if the Producer Node is not reachable

Now we create an alert to get an email if the Producer Node is not reachable. Please note that alerts can only be created for "Graph" panels! In the "Connected Peers" panel, go to Alerts. Define the rule "Connected Peer Alert": Evaluate every "1m", For "2m".

The cardano-cli query requires additional RAM; please refer to `query leadership-schedule` for more details. I needed 16 GB RAM + 8 GB swap, and it took several minutes to query the leadership schedule. The whole script can be copied from here:

In case the slot.csv file is on a different node, copy it to your Grafana monitoring node manually. This step could be automated, but I don't wish to open extra ports for this, so I just copy and paste the content of the slot.csv file.

Please follow the instructions under the section "Installing on a local Grafana". After the installation, the CSV Plugin should now be listed in Data Sources. Configure the CSV Plugin by specifying the location of the slot.csv file. Save & Test, and if all steps were followed correctly, you should get the green success message. The final step is to add the Slot Leader Panel to your dashboard.
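As a rough sketch of what the leadership-schedule query described above might run: the flag names below are assumptions based on recent cardano-cli releases, so verify them with `cardano-cli query leadership-schedule --help` for your version. The command is only printed here, not executed.

```shell
#!/bin/sh
# Sketch of the leadership-schedule invocation (flags are assumptions --
# check them against your cardano-cli version). The command is printed,
# not executed, so it can be reviewed before running on the producer node.
CMD='cardano-cli query leadership-schedule \
  --mainnet \
  --genesis shelley-genesis.json \
  --stake-pool-id "$(cat pool.id)" \
  --vrf-signing-key vrf.skey \
  --next'
echo "$CMD"
```

Redirecting the command's output to slot.csv (e.g. appending `> slot.csv`) would produce the file consumed by the CSV Plugin.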
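The manual copy-and-paste step could look like the following. The destination directory and the CSV column layout are hypothetical placeholders; the CSV datasource only needs the file to exist at the location you configure in the plugin.

```shell
#!/bin/sh
# Recreate slot.csv on the Grafana monitoring node by pasting its content.
# The destination path and the example rows are placeholders -- adjust them
# to match your setup and the actual output of the leadership schedule.
DEST=/tmp/grafana-csv
mkdir -p "$DEST"
cat > "$DEST/slot.csv" <<'EOF'
No,Slot,Time
1,94074190,2023-06-01 12:34:56
EOF
wc -l < "$DEST/slot.csv"   # header plus one data row
```

Pointing the CSV Plugin at this path and clicking Save & Test should then yield the green success message.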
You can use the following arguments to configure the exporter's behavior. Omitted fields take their default values.

- Address array (host:port) of the Kafka server (`kafka_uris`).
- The `instance` label for metrics; the default is the host:port of the first entry in `kafka_uris`. You must manually provide the `instance` value if there is more than one string in `kafka_uris`.
- The SASL SCRAM SHA algorithm, `sha256` or `sha512`, used as the mechanism. Only set the SASL handshake to false if you are using a non-Kafka SASL proxy.
- The optional certificate authority file, certificate file, and key file for TLS client authentication.
- If set to true, the server's certificate will not be checked for validity. This makes your HTTPS connections insecure.
- If set to true, use a group from ZooKeeper; an address array (hosts) of the ZooKeeper server must then be given.
- If set to true, all scrapes trigger Kafka operations.
- The maximum number of offsets to store in the interpolation table for a partition, and how frequently the interpolation table should be pruned, in seconds.
- Regex filter for the consumer groups to be monitored.

The following fields are exported and can be referenced by other components: `targets`, the targets that can be used to collect exporter metrics. For example, the targets can either be passed to a discovery.relabel component to rewrite the targets' label sets, or to a prometheus.scrape component that collects the exposed metrics. The exported targets use the configured in-memory traffic address specified by the run command.

The exporter is only reported as unhealthy if given an invalid configuration. In those cases, exported fields retain their last healthy values.
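Wiring the exported `targets` into a scrape could be sketched as follows, assuming Grafana Agent Flow's river syntax; the component labels, broker addresses, and remote-write URL are placeholders, not values from this document.

```river
// Kafka exporter: broker addresses and instance label are examples only.
prometheus.exporter.kafka "example" {
  kafka_uris = ["kafka-1:9092", "kafka-2:9092"]
  instance   = "my-kafka-cluster"
}

// Scrape the exporter's targets and forward the samples on.
prometheus.scrape "kafka_metrics" {
  targets    = prometheus.exporter.kafka.example.targets
  forward_to = [prometheus.remote_write.default.receiver]
}

// Placeholder remote-write endpoint.
prometheus.remote_write "default" {
  endpoint {
    url = "http://localhost:9090/api/v1/write"
  }
}
```

A discovery.relabel component could be inserted between the exporter and the scrape if the targets' label sets need rewriting first.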