
Confluent Blog

Data Products, Data Contracts, and Change Data Capture


Change data capture is a popular method to connect database tables to data streams, but it comes with drawbacks. The next evolution of the CDC pattern, first-class data products, provides resilient pipelines that support both real-time and batch processing while isolating upstream systems...


Introducing Confluent Cloud Freight Clusters


Learn how the latest innovations in Kora enable us to introduce new Confluent Cloud Freight clusters, which can save you up to 90% at GBps+ scale. Confluent Cloud Freight clusters are now available in Early Access.


Contributing to Apache Kafka®: How to Write a KIP


Learn how to contribute to open source Apache Kafka by writing Kafka Improvement Proposals (KIPs) that solve problems and add features! Read on for real examples.


Migrating Kafka to the Cloud: How Skai Went From 90K to 1.8K Topics


Skai completely revamped its interactive ad-campaign dashboard by adding Apache Kafka and an in-memory database—eventually moving the solution to Confluent Cloud. Once there, they devised an ingenious architecture for reducing the number of topics they needed.


Introducing the New Confluent Cloud Homepage UI: Enhancing User Experience


We are excited to announce the release of a new Confluent Cloud Homepage UI, inspired by many conversations and feature requests from our customers and field teams. In the past, many users bypassed the Homepage as just another click in the way of what they were trying to accomplish. This redesign...


Streaming BigQuery Data Into Confluent in Real Time: A Continuous Query Approach


Learn how Confluent Cloud and BigQuery Continuous Queries work together to enable real-time data processing, including the benefits of the integration and a step-by-step guide to streaming data from BigQuery Continuous Queries into Confluent Cloud to capture data...


How BT Group Built a Smart Event Mesh with Confluent

BT Group, a large telecoms company based in the UK, has been on a journey toward creating a ‘Smart Event Mesh’ with Confluent, making well-governed, real-time streams of data available across a hybrid cloud environment. Learn how they’ve navigated this journey so far.


Exploring Apache Flink 1.20: Features, Improvements, and More


The Apache Flink® community released Apache Flink 1.20 this week. In this blog post, we highlight some of the most interesting additions and improvements.


New with Confluent Platform: Enhanced security with OAuth Support, Confluent Platform for Apache Flink® (LA), a new HTTP Source Connector, and More


This blog announces the general availability of Confluent Platform 7.7 and its latest key features: enhanced security with OAuth support, Confluent Platform for Apache Flink® (Limited Availability), a new HTTP Source Connector, and more.


Introducing Apache Kafka® 3.8


We are proud to announce the release of Apache Kafka 3.8.0. This release contains many new features and improvements. This blog post highlights some of the more prominent features. For a full list of changes, be sure to check the release notes.


Flink AI: Real-Time ML and GenAI Enrichment of Streaming Data with Flink SQL on Confluent Cloud


Confluent Cloud for Apache Flink®️ supports AI model inference and enables the use of models as resources in Flink SQL, just like tables and functions. You can use a SQL statement to create a model resource and invoke it for inference in streaming queries.
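The pattern described above can be pictured with a short Flink SQL sketch. The model name, input table, and `WITH` options below are illustrative only; the exact options depend on your model provider and connection setup:

```sql
-- Register a remote model as a first-class Flink SQL resource
-- (names and options are hypothetical).
CREATE MODEL sentiment_model
  INPUT (review_text STRING)
  OUTPUT (sentiment STRING)
  WITH (
    'provider' = 'openai',
    'task' = 'classification'
  );

-- Invoke the model for inference in a streaming query.
SELECT review_text, sentiment
FROM reviews,
     LATERAL TABLE(ML_PREDICT('sentiment_model', review_text));
```

The model behaves like any other catalog resource: once created, it can be referenced from multiple streaming queries alongside tables and functions.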


Accelerate your data streaming journey with the latest in Confluent Cloud


The Q2 2024 Confluent Cloud launch introduces a suite of enhancements across the four key pillars of a data streaming platform (Stream, Connect, Process, and Govern), alongside significant work with our partner ecosystem to help customers unlock new possibilities.


City of Hope Redefines Predictive Sepsis Detection Using Kafka


As one of the largest cancer research and treatment organizations in the United States, City of Hope’s mission is to transform cancer care. Advancing this mission requires an array of cutting-edge technologies to fuel innovative treatments and services tailored for patients’ specific needs.


How Alexri Amplifies Tech Innovation as a Customer Marketing Manager


As a customer marketing manager on the brand marketing team, Alexri Patel-Sigmon helps bring our customers’ stories to life. Partnering with teams across marketing, sales, product, and more, her team provides customers with a stage to share their data streaming stories, amplify their thought lead...


Building a Full-Stack Application With Kafka and Node.js


A well-known debate: tabs or spaces? Let’s settle the debate, Kafka-style. We’ll use the new confluent-kafka-javascript client to build an app that produces the current state of the vote counts to a Kafka topic and consumes from that same topic to surface them to a JavaScript frontend.
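The produce-and-tally flow described above can be sketched as follows. The tally logic is a plain function; the Kafka wiring uses the KafkaJS-compatible API of `@confluentinc/kafka-javascript`, with the broker address and topic name as placeholder assumptions:

```javascript
// Tally raw vote messages ("tabs" or "spaces") into counts.
// Message values arrive as Buffers from the Kafka consumer.
function tallyVotes(messages) {
  const counts = { tabs: 0, spaces: 0 };
  for (const m of messages) {
    const vote = m.value.toString();
    if (vote in counts) counts[vote] += 1;
  }
  return counts;
}

// Kafka wiring sketch (assumes a reachable broker; not invoked here).
async function produceVote(brokers, vote) {
  const { Kafka } = require('@confluentinc/kafka-javascript').KafkaJS;
  const kafka = new Kafka({ kafkaJS: { brokers } });
  const producer = kafka.producer();
  await producer.connect();
  // Each vote is a single message on a hypothetical "votes" topic.
  await producer.send({ topic: 'votes', messages: [{ value: vote }] });
  await producer.disconnect();
}
```

A consumer subscribed to the same topic would collect messages and feed them to `tallyVotes`, pushing the running counts to the frontend.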


MiFID II: Data Streaming for Post-Trade Reporting

The Markets in Financial Instruments Directive II (MiFID II) came into effect in January 2018, aiming to improve the competitiveness and transparency of European financial markets. As part of this, financial institutions are obligated to report details of trades and transactions (both equity and...


Running Apache Kafka® at the Edge Requires Confluent’s Enterprise-Grade Data Streaming Platform


Deploying Apache Kafka at the edge brings significant challenges related to scalability, remote monitoring, and high management costs. Confluent’s enterprise-grade data streaming platform addresses these issues by providing comprehensive features that enhance Kafka’s capabilities, ensuring effici...

