
Showing posts with the label decanter

What's new in Apache Karaf Decanter 2.8.0?

Apache Karaf Decanter 2.8.0 has just been released. This release includes several fixes, improvements, and dependency updates. I encourage all Decanter users to update to the 2.8.0 release ;) In this blog post, I will highlight some important changes and fixes we did in this release.

Prometheus appender

The Prometheus appender has been improved to expose more gauges. As a reminder, the Decanter Prometheus appender is basically a servlet that exposes Prometheus compliant data, which Prometheus instances can poll to get the latest updated metrics. The Prometheus appender only looks at numeric data (coming from the Decanter collectors) to create and expose gauges. Unfortunately, in previous Decanter releases, the Prometheus appender only looked for numeric values in "first level" properties. It means that if a collected data property value was a Map, no inner data was considered by the Prometheus appender, even if the inner values were numeric. That's the first improvement
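The nested-numeric lookup can be sketched in plain Java. This is an illustrative sketch only (class and field names are mine, not Decanter's actual code): a recursive walk that keeps every numeric value, flattening inner Map values into dotted gauge names.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch (not Decanter's actual code): recursively walk collected
// data and keep every numeric value, flattening inner Maps into dotted names
// so they can be exposed as Prometheus gauges.
public class GaugeFlattener {

    public static Map<String, Double> flatten(Map<String, Object> data) {
        Map<String, Double> gauges = new LinkedHashMap<>();
        walk("", data, gauges);
        return gauges;
    }

    @SuppressWarnings("unchecked")
    private static void walk(String prefix, Map<String, Object> data, Map<String, Double> gauges) {
        for (Map.Entry<String, Object> e : data.entrySet()) {
            String name = prefix.isEmpty() ? e.getKey() : prefix + "." + e.getKey();
            Object value = e.getValue();
            if (value instanceof Number) {
                gauges.put(name, ((Number) value).doubleValue());
            } else if (value instanceof Map) {
                // Pre-2.8.0 behavior stopped at the first level; 2.8.0 also
                // considers numeric values nested inside Maps
                walk(name, (Map<String, Object>) value, gauges);
            }
        }
    }

    public static void main(String[] args) {
        Map<String, Object> heap = new HashMap<>();
        heap.put("used", 1024);
        heap.put("max", 4096);
        Map<String, Object> collected = new HashMap<>();
        collected.put("threadCount", 42);
        collected.put("HeapMemoryUsage", heap);
        collected.put("hostName", "karaf-1"); // non-numeric: ignored
        System.out.println(flatten(collected));
    }
}
```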

What's new in Apache Karaf Decanter 2.7.0?

Apache Karaf Decanter 2.7.0 is currently on vote. I'm anticipating the release a little bit to highlight what's coming ;) Karaf Decanter 2.7.0 is an important milestone as it brings new features, especially around big data and cloud.

HDFS and S3 appenders

Decanter 2.7.0 brings two new appenders: the HDFS and S3 appenders. The HDFS appender is able to store the collected data on HDFS (using CSV format by default). Similarly, the S3 appender stores the collected data as an object in an S3 bucket. Let's illustrate this with a simple use case using the S3 appender. First, let's create an S3 bucket on AWS. Now that we have our decanter-test S3 bucket ready, let's start a Karaf instance with the Decanter S3 appender enabled. Then, we configure the S3 appender in etc/org.apache.karaf.decanter.appender.s3.cfg :

###############################
# Decanter Appender S3 Configuration
###############################
# AWS credentials
accessKeyId=...
secr
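The excerpt cuts off in the middle of the configuration file. For orientation, a plausible shape of such a file might be the following; apart from accessKeyId and the decanter-test bucket name mentioned above, every property name and value here is an assumption to be checked against the Decanter documentation:

```properties
###############################
# Decanter Appender S3 Configuration
###############################
# AWS credentials (property names assumed, verify against the Decanter docs)
accessKeyId=AKIA...
secretKeyId=...
# Target bucket (created earlier in this post)
bucket=decanter-test
# AWS region hosting the bucket (assumed)
region=us-east-1
```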

Complete metrics collections and analytics with Apache Karaf Decanter, Apache Kafka and Apache Druid

In this blog post, I will show how to extend Karaf Decanter as a log and metrics collector, with storage and analytics powered by Apache Druid. The idea is to collect machine metrics (using the Decanter OSHI collector for instance), send them to a Kafka broker, and aggregate and analyze the metrics in Druid.

Apache Kafka

We can ingest data into Apache Druid using several channels (in streaming mode or batch mode). For this blog post, I will use streaming mode with Apache Kafka. For the purpose of the blog, I will simply start a ZooKeeper:

$ bin/zookeeper-server-start.sh config/zookeeper.properties

and a Kafka 2.6.1 broker:

$ bin/kafka-server-start.sh config/server.properties
...
[2021-01-19 14:57:26,528] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)

I create a decanter topic where we are going to send the metrics:

$ bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic decanter --partitions 2
$ bin/kafka-topics.sh --bootstrap-server localhost:9092 -
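By default, Decanter appenders marshal collected events to JSON before sending them, so what lands on the decanter topic are JSON documents. A minimal pure-Java sketch of producing such a payload (the field names and this hand-rolled marshaller are illustrative, not Decanter's actual code or a fixed schema):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: collected events are sent to the Kafka topic as JSON.
// This hand-rolled marshaller only handles flat maps of strings and numbers;
// the field names below are examples, not Decanter's fixed schema.
public class MetricJson {

    public static String toJson(Map<String, Object> event) {
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, Object> e : event.entrySet()) {
            if (!first) sb.append(",");
            first = false;
            sb.append("\"").append(e.getKey()).append("\":");
            Object v = e.getValue();
            if (v instanceof Number) {
                sb.append(v);          // numbers are emitted bare
            } else {
                sb.append("\"").append(v).append("\""); // everything else quoted
            }
        }
        return sb.append("}").toString();
    }

    public static void main(String[] args) {
        Map<String, Object> event = new LinkedHashMap<>();
        event.put("type", "oshi");
        event.put("cpu.load", 0.42);
        System.out.println(toJson(event)); // {"type":"oshi","cpu.load":0.42}
    }
}
```

Druid's Kafka ingestion can then parse these JSON documents and index the numeric fields as metrics.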

New collectors in Apache Karaf Decanter 2.4.0

Apache Karaf Decanter 2.4.0 will be released soon and includes a set of new collectors.

Oshi Collector

The oshi collector harvests a bunch of data about the hardware and the operating system. It’s a scheduled collector, executed periodically (every minute by default). You can get all the details about the machine thanks to this collector: motherboard, CPU, sensors, disks, etc. By default, the oshi collector retrieves all details, but you can filter what you want to harvest in the etc/org.apache.karaf.decanter.collector.oshi.cfg configuration file. It means we now have the system collector, which allows you to periodically execute scripts and shell commands, and the oshi collector, which harvests all details about the system.

ConfigAdmin Collector

The ConfigAdmin collector is an event-driven collector. It “listens” for any change to the Karaf configuration and sends an event for each change.

Prometheus Collector

Karaf Decanter 2.3.0 introduced the Prometheus appender to expose metrics on a P
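The event-driven pattern behind the ConfigAdmin collector can be mocked in a few lines. The interfaces below are simplified stand-ins for illustration, not the real OSGi ConfigurationListener API: a listener is registered, and every configuration change is turned into a collected event.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the listener pattern used by the ConfigAdmin
// collector: the collector registers a listener and emits one Decanter event
// per configuration change. All names here are illustrative.
public class ConfigChangeDemo {

    interface ConfigListener {
        void configurationChanged(String pid, String property, Object newValue);
    }

    static class ConfigRegistry {
        private final List<ConfigListener> listeners = new ArrayList<>();

        void addListener(ConfigListener l) {
            listeners.add(l);
        }

        void update(String pid, String property, Object value) {
            // Notify every registered listener of the change
            for (ConfigListener l : listeners) {
                l.configurationChanged(pid, property, value);
            }
        }
    }

    public static void main(String[] args) {
        ConfigRegistry registry = new ConfigRegistry();
        List<String> collected = new ArrayList<>();
        // The "collector": turns every change into a collected event
        registry.addListener((pid, prop, value) ->
                collected.add(pid + ": " + prop + "=" + value));
        registry.update("org.apache.karaf.decanter.collector.oshi", "period", 60);
        System.out.println(collected);
    }
}
```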

Apache Karaf Decanter 2.4.0, new processing layer

Up to Apache Karaf Decanter 2.3.0, the collection workflow was pretty simple: collect and append. In Karaf Decanter 2.4.0, we introduced a new optional layer in between: processing. It means that the workflow can now be collect, process, and append. A processor gets data from the collectors and applies processing logic before sending the event into the Decanter dispatcher, destined for the appenders. The purpose is to be able to apply any kind of processing before storing/sending the collected data. To use and enable this workflow, you just have to install a processor and change the appenders to listen for data from the processor.

Example of aggregate processor

A first processor is available in Karaf Decanter 2.4.0: the timed aggregator. By default, each piece of data collected by the collectors is sent directly to the appenders. For instance, it means that the JMX collectors will send one event per MBean every minute by default. If the appender used is a REST appender, it means that we will call the
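The idea of the timed aggregator can be sketched in plain Java. This is a simplified illustration, not Decanter's implementation: events are buffered, and on flush a single event carrying the average of each numeric property is emitted, so the appender is called once per window instead of once per event.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a timed aggregator processor (simplified, not
// Decanter's implementation): buffer collected events and, on flush, emit a
// single event with the average of each numeric property.
public class TimedAggregator {

    private final Map<String, Double> sums = new HashMap<>();
    private final Map<String, Integer> counts = new HashMap<>();

    // Called for each event coming from the collectors
    public void process(Map<String, ?> event) {
        for (Map.Entry<String, ?> e : event.entrySet()) {
            if (e.getValue() instanceof Number) {
                double v = ((Number) e.getValue()).doubleValue();
                sums.merge(e.getKey(), v, Double::sum);
                counts.merge(e.getKey(), 1, Integer::sum);
            }
        }
    }

    // In Decanter this would be triggered by a scheduler at the end of the
    // aggregation window; here we flush manually
    public Map<String, Double> flush() {
        Map<String, Double> aggregated = new HashMap<>();
        for (String key : sums.keySet()) {
            aggregated.put(key, sums.get(key) / counts.get(key));
        }
        sums.clear();
        counts.clear();
        return aggregated;
    }

    public static void main(String[] args) {
        TimedAggregator aggregator = new TimedAggregator();
        aggregator.process(Map.of("heap.used", 100));
        aggregator.process(Map.of("heap.used", 300));
        System.out.println(aggregator.flush()); // {heap.used=200.0}
    }
}
```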

Apache CXF metrics with Apache Karaf Decanter

Recently, I have been asked several times: how can I get metrics (number of requests, request time, …) for the SOAP and REST services deployed in Apache Karaf or Apache Unomi (which also runs on Karaf)? SOAP and REST services are often implemented with Apache CXF (either using CXF directly or using the Aries JAXRS whiteboard, which uses CXF under the hood). Apache Karaf provides examples of how to deploy SOAP/REST services using different approaches (depending on the one you prefer):

https://github.com/apache/karaf/tree/master/examples/karaf-soap-example
https://github.com/apache/karaf/tree/master/examples/karaf-rest-example

CXF Bus Metrics feature

Apache CXF provides a metrics feature that collects the metrics we need. Under the hood, it uses the Dropwizard library, and the metrics are exposed as JMX MBeans thanks to the JmxExporter . Let’s take a simple REST service. For this example, I’m using Blueprint, but it also works with CXF programmatically or using SCR. I have a very simple JAXRS class looki
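To give an idea of what the metrics feature tracks, here is a toy, pure-Java equivalent of a per-endpoint request counter and timer. The real feature relies on Dropwizard Metrics and exposes the values as JMX MBeans; everything below is only an illustration of "number of requests" and "request time".

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy equivalent of the request metrics the CXF metrics feature maintains per
// endpoint (the real implementation uses Dropwizard Metrics + JMX; this only
// illustrates "number of requests" and "total request time").
public class RequestMetrics {

    private final AtomicLong requestCount = new AtomicLong();
    private final AtomicLong totalTimeMillis = new AtomicLong();

    // Called once per completed request with its duration
    public void record(long durationMillis) {
        requestCount.incrementAndGet();
        totalTimeMillis.addAndGet(durationMillis);
    }

    public long getRequestCount() {
        return requestCount.get();
    }

    public double getMeanTimeMillis() {
        long count = requestCount.get();
        return count == 0 ? 0.0 : (double) totalTimeMillis.get() / count;
    }

    public static void main(String[] args) {
        RequestMetrics metrics = new RequestMetrics();
        metrics.record(120); // e.g. a first call took 120 ms
        metrics.record(80);  // a second call took 80 ms
        System.out.println(metrics.getRequestCount() + " requests, mean "
                + metrics.getMeanTimeMillis() + " ms"); // 2 requests, mean 100.0 ms
    }
}
```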

Apache Karaf Decanter 2.3.0, new Prometheus appender

As said in my previous post, Apache Karaf Decanter 2.3.0 is a major new release bringing fixes, improvements, and new features. We saw the new alerting service. In this blog post, we look at another new feature: the Prometheus Appender.

Prometheus ?

Prometheus ( https://prometheus.io/ ) is a popular metrics toolkit, especially in the cloud ecosystem. It’s open-source and part of the Cloud Native Computing Foundation. Since Karaf Decanter provides similar collecting and alerting features, it makes sense to use Decanter as a collector that Prometheus can request. The visualization and search can then be performed in Prometheus.

Decanter Prometheus Appender

The preferred approach with Prometheus is to “expose” an HTTP endpoint providing metrics that the Prometheus platform can retrieve. That’s what the Decanter Prometheus Appender does: it binds a Prometheus servlet that Prometheus can “poll”, gets the incoming data from the Decanter Collectors, detects the numbers in the event data, and creates Prometheus
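What the servlet ultimately serves is plain text in the Prometheus exposition format: one "name value" line per gauge. A minimal sketch of that rendering (illustrative only; Decanter's real appender builds on the Prometheus Java client rather than formatting by hand):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of the Prometheus text exposition format served by the
// appender's servlet: one "name value" line per gauge. Decanter's real
// appender uses the Prometheus Java client; this only shows the wire format.
public class PrometheusTextFormat {

    public static String render(Map<String, Double> gauges) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, Double> e : gauges.entrySet()) {
            // Prometheus metric names allow [a-zA-Z0-9_:], so dots become underscores
            String name = e.getKey().replace('.', '_');
            sb.append(name).append(' ').append(e.getValue()).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, Double> gauges = new LinkedHashMap<>();
        gauges.put("jvm.threadCount", 42.0);
        gauges.put("jvm.heap.used", 1024.0);
        System.out.print(render(gauges));
        // jvm_threadCount 42.0
        // jvm_heap_used 1024.0
    }
}
```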

Apache Karaf Decanter 2.3.0, the new alerting service

Apache Karaf Decanter 2.3.0 will be released soon. This release brings a lot of fixes, improvements, and new features. In this blog post, we will focus on one major refactoring done in this version: the alerting service.

Goodbye checker, welcome alerting service

Before Karaf Decanter 2.3.0, the alert rules were defined in a configuration file named etc/org.apache.karaf.decanter.alerting.checker.cfg . The configuration was simple. For instance:

message.warn=match:.*foobar.*

But the checker has three limitations:

it’s not possible to define a check on several attributes at the same time. For instance, it’s not possible to have a rule with something like if message == 'foo' and other == 'bar' .
it’s not possible to have a “time scoped” rule. For instance, I want an alert only if a counter is greater than a value for x minutes.
a bit related to the previous point, recoverable alerts are not perfect in the checker. It should be a configuration of the alert rul
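The "time scoped" behavior can be sketched generically in a few lines of Java. This is the underlying idea only, not the Decanter alerting API: the alert fires only once the value has stayed above the threshold for the whole period, and recovers as soon as it drops back.

```java
// Generic sketch of a "time scoped" check: alert only if the value has been
// above the threshold for the whole period. This illustrates the idea only;
// it is not the Decanter alerting service API.
public class TimeScopedCheck {

    private final double threshold;
    private final long periodMillis;
    private long breachStart = -1; // -1 means "not currently breaching"

    public TimeScopedCheck(double threshold, long periodMillis) {
        this.threshold = threshold;
        this.periodMillis = periodMillis;
    }

    // Returns true when the alert should fire; "now" is injected for testability
    public boolean check(double value, long now) {
        if (value <= threshold) {
            breachStart = -1; // recovered
            return false;
        }
        if (breachStart < 0) {
            breachStart = now; // breach just started
        }
        return now - breachStart >= periodMillis;
    }

    public static void main(String[] args) {
        // Alert if the counter stays above 100 for at least 60 seconds
        TimeScopedCheck check = new TimeScopedCheck(100, 60_000);
        System.out.println(check.check(150, 0));      // false: breach just started
        System.out.println(check.check(150, 30_000)); // false: only 30s so far
        System.out.println(check.check(150, 60_000)); // true: sustained for 60s
        System.out.println(check.check(50, 70_000));  // false: recovered
    }
}
```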

Monitoring and alerting with Apache Karaf Decanter

Some months ago, I proposed Decanter on the Apache Karaf Dev mailing list . Today, the first Apache Karaf Decanter release, 1.0.0, is on vote . It’s a good time to do a presentation 😉

Overview

Apache Karaf Decanter is a complete monitoring and alerting solution for Karaf and the applications running on it. It’s very flexible, providing ready-to-use features, and is also very easy to extend. The Decanter 1.0.0 release works with any Karaf version, and can also be used to monitor applications outside of Karaf. Decanter provides collectors, appenders, and SLA.

Collectors

Decanter Collectors are responsible for harvesting the monitoring data. Basically, a collector harvests the data, creates an OSGi EventAdmin Event, and sends it to the decanter/collect/* topic. A Collector can be:

Event Driven, meaning that it automatically reacts to an internal event
Polled, meaning that it’s periodically executed by the Decanter Scheduler

You can install multiple Decanter Collectors at the same time. In the 1.0.0 re
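The collect-and-dispatch flow can be mocked in a few lines. The classes below are simplified stand-ins for illustration, not the real OSGi EventAdmin API: a collector posts an event to a decanter/collect/* topic, and an appender subscribed to that prefix receives it.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Simplified stand-in for the EventAdmin-based flow: collectors post events to
// decanter/collect/* topics, appenders subscribe to the topic prefix. The real
// implementation uses OSGi EventAdmin; all names here are illustrative.
public class DecanterFlowDemo {

    static class Dispatcher {
        private final Map<String, List<Consumer<Map<String, Object>>>> subscribers = new HashMap<>();

        void subscribe(String topicPrefix, Consumer<Map<String, Object>> appender) {
            subscribers.computeIfAbsent(topicPrefix, k -> new ArrayList<>()).add(appender);
        }

        void post(String topic, Map<String, Object> event) {
            // Deliver the event to every appender whose prefix matches the topic
            subscribers.forEach((prefix, appenders) -> {
                if (topic.startsWith(prefix)) {
                    appenders.forEach(a -> a.accept(event));
                }
            });
        }
    }

    public static void main(String[] args) {
        Dispatcher dispatcher = new Dispatcher();
        List<Map<String, Object>> stored = new ArrayList<>();
        // An "appender" listening to everything collected
        dispatcher.subscribe("decanter/collect/", stored::add);
        // A "collector" harvesting one data point
        Map<String, Object> event = new HashMap<>();
        event.put("threadCount", 42);
        dispatcher.post("decanter/collect/jmx", event);
        System.out.println(stored.size() + " event(s) appended");
    }
}
```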