Posts

Apache Karaf Cellar 2.3.0 released

The latest Cellar release (2.2.5) didn't work with the new Karaf branch and release, 2.3.0. So the first purpose of Cellar 2.3.0 is to work with Karaf 2.3.x, but it actually brings more than that. Let's take a tour of the new Apache Karaf Cellar 2.3.0.

Apache Karaf 2.3.x support. Cellar 2.3.0 is fully compatible with the Karaf 2.3.x branch. Starting from Karaf 2.3.2, Cellar can be installed "out of the box". If you want to use Cellar with Karaf 2.3.0 or Karaf 2.3.1, you have to add the following property in etc/config.properties in order to avoid a Cellar bootstrap issue: org.apache.aries.blueprint.synchronous=true

Upgrade to Hazelcast 2.5. As you may know, Cellar is a clustered provisioning tool powered by Hazelcast. We made a big jump: from Hazelcast 1.9 to Hazelcast 2.5. Hazelcast 2.5 brings a lot of bug fixes and interesting new features. You can find more details here: http://www.hazelcast.com/docs/2.5/manual/multi_html/ch18s04.html . In Cellar, all Hazelcast configuration is performed …
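
For Karaf 2.3.0 or 2.3.1, the workaround is a single extra line appended to etc/config.properties; a minimal sketch (the rest of the file stays untouched):

# etc/config.properties
# Workaround for the Cellar bootstrap issue on Karaf 2.3.0 / 2.3.1:
# make Aries Blueprint process the Blueprint containers synchronously.
org.apache.aries.blueprint.synchronous=true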

Load balancing with Apache Karaf Cellar, and mod_proxy_balancer

Thanks to Cellar, you can deploy your applications, CXF services, Camel routes, … on several Karaf nodes. When you use Cellar with web applications, or CXF/HTTP endpoints, a "classic" need is to load balance the HTTP requests over the Karaf nodes. You have different ways to do that:
– using the Camel Load Balancer EIP: it's an interesting EIP, working with any kind of endpoint. However, it requires a Karaf instance running the load balancer routes, so it's not always possible depending on the user's security policy (for instance, putting it in a DMZ or so)
– using hardware appliances like F5, Juniper, Cisco: it's a very good, "classic" solution in network teams. However, it requires expensive hardware, which is not easy to buy and set up for a test or a "small" solution.
– using Apache httpd with mod_proxy_balancer: it's the solution that I'm going to detail. It's a very stable solution, powerful and easy to set up. And it costs nothing 😉
For instance, you have three Karaf nodes, exposing the following …
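
To give the flavour of the setup detailed in the post, a minimal httpd sketch could look like the following. The node names, the 8181 HTTP port and the /services context path are assumptions to adapt to your own topology, and mod_proxy, mod_proxy_http and mod_proxy_balancer have to be loaded (plus mod_lbmethod_byrequests on httpd 2.4):

# Balance HTTP requests over three Karaf nodes
<Proxy balancer://karafcluster>
    BalancerMember http://node1.local:8181
    BalancerMember http://node2.local:8181
    BalancerMember http://node3.local:8181
</Proxy>
ProxyPass        /services balancer://karafcluster
ProxyPassReverse /services balancer://karafcluster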

Apache Karaf Cellar 2.2.4

Apache Karaf Cellar 2.2.4 has been released. This is a major release, including a bunch of bug fixes and new features. Here's the list of key things included in this release.

Consistent behavior. Cellar is composed of two parts:
– the distributed resources: a datagrid maintained by each cluster node and containing the current cluster status (for instance the state of the bundles, features, etc)
– the cluster events: broadcasted from one node to the others
Cluster shell commands, cluster MBeans, synchronizers (called at startup) and listeners (called when a local event is fired, such as a feature installation) update the distributed resources and broadcast cluster events. To broadcast cluster events, we use an event producer. A cluster event is consumed by a consumer, which delegates the handling of the cluster event to a handler. We have a handler for features, bundles, etc. Now, all Cellar "producers" do the same checks:
– check if the cluster event producer is ON
– check if the resource is allowed, …
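
As a side note, the state of these switches can be checked from the shell; a minimal sketch, assuming the producer/consumer/handler status commands shipped with this Cellar version (the exact command names may vary between releases):

karaf@root> cluster:producer-status
karaf@root> cluster:consumer-status
karaf@root> cluster:handler-status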

Apache Karaf Cellar and central management

Introduction. One of the first purposes of Apache Karaf Cellar is to synchronize the state of each Karaf instance in the Cellar cluster. It means that any change performed on one node (install a feature, start a bundle, add a config, etc) generates a "cluster event" which is broadcasted by Cellar to all the other nodes. The target nodes handle the "cluster event" and perform the corresponding action (install a feature, start a bundle, add a config, etc). By default, the nodes have the same role: it means that you can perform actions on any node. But you may prefer to have one node dedicated to the management: it's what we name "central management".

Central management. With central management, one node is identified as the manager. It means that cluster actions will be performed only on this node. The manager is the only one able to produce cluster events. The managed nodes are only able to receive and handle events, not to produce them. With this approach, you can give access (for instance, …
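
A minimal sketch of how such a topology could be wired, assuming the cluster event producer can be switched per node (the cluster:producer-stop and cluster:producer-status commands are assumptions based on this Cellar version, and the prompts only illustrate on which node each command is run):

karaf@managed-node> cluster:producer-stop
karaf@managed-node> cluster:producer-status
karaf@manager-node> cluster:producer-status

The producer stays ON on the manager node only, so cluster events can originate only from there, which matches the "central management" behaviour described above.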

Apache Karaf Cellar 2.2.2 release

What's new. About one month ago, we released Karaf Cellar 2.2.1, the first "official" release of the Karaf clustering sub-project. This new Karaf Cellar 2.2.2 release includes bug fixes; one of them was a blocker, as it was not possible to install Cellar on a Karaf instance running on the Equinox OSGi framework. But it's not just a bug fix release: we merged two features from the Cellar trunk.

Bundle synchronization. In Karaf Cellar 2.2.1, we were able to synchronize features (including features repositories) and configuration between Karaf Cellar instances. It means that if you install a feature on one node (cluster:features-install group feature), the feature will be installed on each Karaf node. Karaf Cellar 2.2.2 includes the same behavior for pure OSGi bundles. You can install a bundle on one node, and the bundle will be installed on all the other nodes of the same cluster group:
karaf@root> osgi:install mybundle
mybundle will be installed on all nodes in the same cluster group. …
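
For instance, a minimal sketch of a round trip (the bundle URL is just an arbitrary example, and grep is only used to check the result on a second node):

karaf@root> osgi:install mvn:org.apache.commons/commons-lang3/3.1
karaf@root> osgi:list | grep -i commons-lang3

Running the second command on any other node of the same cluster group should show the bundle installed there as well.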

Apache Karaf Cellar 2.2.1 Released

Apache Karaf Cellar 2.2.1 has been released today. Cellar is a Karaf sub-project which aims to provide a clustering solution for Karaf.

Quick start. To enable Cellar in a Karaf instance, you just have to install the Cellar feature. First, register the Cellar features descriptor in your running Karaf instance:
karaf@root> features:addurl mvn:org.apache.karaf.cellar/apache-karaf-cellar/2.2.1/xml/features
Now, you can see the Cellar features available:
karaf@root> features:list|grep -i cellar
[uninstalled] [2.2.1 ] cellar              Karaf clustering
[uninstalled] [2.2.1 ] cellar-webconsole   Karaf Cellar Webconsole Plugin
To start Cellar, install the cellar feature:
karaf@root> features:install cellar
It's done: your Karaf instance is Cellar cluster ready. You can see your cluster node ID and, possibly, the other cluster nodes:
karaf@root> cluster:list-nodes
  No.   Host Name     Port   ID
* 1     node1.local   5701   node1.local:5701
  2     node2.local   5702   node2.local:5702
The * indicates your local node (on which you …
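
From there, cluster-wide provisioning is available; as a minimal sketch, a feature can be installed on every node of a cluster group with the cluster:features-install syntax mentioned in the 2.2.2 announcement above (default is the cluster group name, and eventadmin is just an arbitrary feature used as an example):

karaf@root> cluster:features-install default eventadmin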