Technology Solutions

Stelo + Apache Kafka / Confluent

Data lakes offer a cost-effective way to store data in any format and to go beyond simple data capture into deeper analysis. For destinations such as Confluent, Stelo publishes change data to the cloud over Apache Kafka, making it immediately available to Confluent consumers. Stelo streams data into delta lakes efficiently, with a focus on fidelity, scalability, and cost.

Stelo connects to Confluent and distributes change data so users can put their data to work for cutting-edge analytics, machine learning (ML), and artificial intelligence (AI). Compatible across data ecosystems, Stelo propagates changes into delta lakes efficiently while preserving the transaction sequence between source and destination. Use Confluent in tandem with your traditional data warehouse to scale your data pipeline into a cost-effective solution.
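For illustration, here is a minimal sketch of what a downstream Confluent consumer might look like, using the open-source confluent-kafka Python client. The topic name, consumer group, and event shape are assumptions for the example, not Stelo's documented output format:

    # A minimal sketch, not Stelo's actual wire format: the topic name,
    # group id, and event fields are assumptions for illustration.
    import json
    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "broker:9092",   # your Confluent/Kafka endpoint
        "group.id": "analytics",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["orders.changes"])    # hypothetical change-data topic

    try:
        while True:
            msg = consumer.poll(1.0)          # wait up to 1 s for a message
            if msg is None:
                continue
            if msg.error():
                raise RuntimeError(msg.error())
            event = json.loads(msg.value())   # one replicated change event
            print(event)
    finally:
        consumer.close()

Because Kafka preserves ordering within a partition, a consumer that applies these events sequentially sees them in the same order they were committed at the source.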

Related Resources

TECHNICAL DATA SHEET →

QUICK START GUIDE →

EVOLVING YOUR DATA MANAGEMENT STRATEGY →

PREPARING TO MOVE YOUR DATA TO DELTA LAKES →

SCHEDULE A DEMO

 

Connects From

Customizable

Anywhere-to-Anywhere

Avoid vendor lock-in. Stelo uses heterogeneous replication for bi-directional support across all source and destination types. Our open-standards approach allows us to remain vendor-agnostic while providing highly flexible deployment models.

Quick Setup

Rapid Deployment

Streamline your deployment plan without costly delays. Stelo typically deploys in less than a day and cuts production time down from months to only weeks.

Easy-to-Use

Set It and Forget It

Simple installation with a graphical interface, configuration wizard, and advanced tools makes product setup and operation straightforward, with no programming needed. Once running, Stelo operates reliably in the background without requiring dedicated engineering support. Schema changes (ALTER, ADD, and DROP) are replicated automatically.

Low Impact

Near-Zero Footprint

Our process imposes ultra-low CPU load (typically less than 1%) to minimize production impact and avoid operational disruption. No software installation is required on the source or destination. With Dataset Partitioning, you transfer only the data you need.

Cost-Efficient

Unlimited Connections

A single instance can support multiple sources and destinations without additional licensing. The Stelo license model is independent of the number of cores on either the source or destination, so you pay only for the capacity required to support your transaction volume. Your data ecosystem can change over time without additional cost.

Reliable

Automatic Recovery

If a connection is broken, no data is lost: Stelo automatically resumes replication after a connectivity failure, with no need to re-baseline.

Stelo and Penn Foster Partner to Create an
Adaptable Data Lakehouse

Penn Foster is an educational institution whose mission is to help students gain the knowledge and skills they need to advance in their field or start a new career. With growing enrollment, the institution decided it was time to transition from a traditional, relational data management solution to a cloud-based, big data solution that works for both their current structured data and their anticipated unstructured data.

After the initial files were dropped into Microsoft Azure Data Lake Storage (ADLS), it became clear that hand-coding individual files downstream would strain their resources. In anticipation of their future needs, Stelo offered a pre-release deployment of Stelo V6.1, allowing Penn Foster to leverage the software's new delta lakes support.

This functionality allowed Penn Foster to:

  • Prove their cloud-based architecture at scale
  • Combine technologies for faster access, faster updates, and improved reliability
  • Minimize the hands-on effort required to transfer and access data

READ THE CASE STUDY

FAQ

Do you support my replication pairing?

The short answer is yes.

Stelo takes full advantage of open standards such as DRDA, SQL, ODBC, and JDBC to maximize compatibility and interoperability within an enterprise network. We are an active member of The Open Group software industry consortium, which was responsible for the adoption of DRDA as an industry standard for database interoperability.

Currently, Stelo supports more than thirty ODBC databases and our Kafka interface can also be used to communicate with cloud-based streaming services such as Azure Event Hubs for Kafka, the Oracle Cloud Infrastructure Streaming service, Amazon Managed Streaming for Apache Kafka, and IBM Event Streams. Stelo can also populate Azure Data Lake Storage Gen2 (ADLSg2) and similar NoSQL data repositories.
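For example, because Azure Event Hubs exposes a Kafka-compatible endpoint, a standard Kafka client can reach it with ordinary SASL settings. A minimal sketch, in which the namespace, connection string, and topic are placeholders:

    # Sketch: connecting a standard Kafka client to Azure Event Hubs'
    # Kafka-compatible endpoint. Namespace, connection string, and topic
    # are placeholders, not real credentials.
    from confluent_kafka import Producer

    producer = Producer({
        "bootstrap.servers": "mynamespace.servicebus.windows.net:9093",
        "security.protocol": "SASL_SSL",
        "sasl.mechanism": "PLAIN",
        "sasl.username": "$ConnectionString",  # literal string, per Azure docs
        "sasl.password": "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=...",
    })
    producer.produce("demo-topic", value=b"hello from a plain Kafka client")
    producer.flush()

The same pattern applies to the other Kafka-compatible services listed above; only the endpoint and authentication settings change.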

Stelo continues to use our open-standards approach to meet emerging replication requirements. We continually add support for new technologies while maintaining support for legacy systems. Stelo is designed to grow with your organization rather than lock you into any specific database platform.

Do I need programming experience?

Stelo offers simple installation and GUI-based replication. Our user-friendly, browser-based GUI requires no programming background to set up or operate. The easy-to-use interface comes standard across all Stelo solutions for snapshot, incremental, and bi-directional replication. Once running, Stelo operates reliably in the background without requiring dedicated engineering support.

Data lake, delta lake, data lakehouse: what's the difference? And where do data warehouses fit in?

Data Lake vs Data Warehouse vs Delta Lake vs Data Lakehouse: The terms can get confusing, but understanding these underlying pieces is critical for ensuring you set up a cost-effective data integration architecture.

A data warehouse is a relatively limited-volume data repository and processor of aggregated structured data from relational sources. The replicated data mirrors the source database to provide traditional query processing. Common applications include data analytics and business intelligence (BI).

A data lake is a large-volume repository of aggregated structured and unstructured data from relational and non-relational sources. Key applications include ML and AI.

A data lakehouse is a big-data architecture that combines benefits of both data warehousing and data lakes, supporting data analytics, BI, ML, and AI applications. A delta lake is an open-source storage layer placed above a data lake to create a data lakehouse, providing critical data governance and scalability for future-proofing your organization.
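To make the delta lake idea concrete, here is a minimal sketch using the open-source deltalake (delta-rs) Python package; the local path and sample data are illustrative stand-ins for cloud object storage such as ADLS:

    # Sketch: a Delta table is ordinary files in a data lake plus a
    # transaction log, which is what adds ACID commits and versioning.
    # The path and data here are illustrative only.
    import pandas as pd
    from deltalake import DeltaTable, write_deltalake

    path = "/tmp/orders_delta"   # stand-in for a cloud storage URI
    write_deltalake(path, pd.DataFrame({"id": [1, 2], "status": ["new", "shipped"]}))
    write_deltalake(path, pd.DataFrame({"id": [3], "status": ["new"]}), mode="append")

    table = DeltaTable(path)
    print(table.version())    # -> 1: each commit creates a new table version
    print(table.to_pandas())  # reads a consistent snapshot of all rows

Each write is an atomic commit recorded in the transaction log, so readers always see a consistent snapshot: the governance and reliability layer the definition above describes.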

Stelo's delta lakes connector is compatible across your technology stack to efficiently populate your data lake. Our process can work in tandem with your traditional data warehouse to scale your data pipeline into a cost-effective data management solution. Read our "5 Questions to Answer Before You Start Moving Your Data to Delta Lakes" blog post to learn more about how to get started.

Support Features


Accessible Support

Quick support is available for training, troubleshooting, version updates, and data replication architecture. 24/7 Urgent Incident Support is included in annual subscriptions.


Highly Experienced Team

Stelo’s technologists have more than 30 years' experience developing reliable data software. Whether you need basic support or have a tricky technical challenge, we can work with you to solve any problem.


End-to-End Proficiency

Our team has detailed knowledge of every data platform we support and can troubleshoot end-to-end replication pairings in heterogeneous environments to ensure they are working properly.


Constant Evolution

Unlike some other solutions, Stelo won't go out of date. New source and target types are continuously added through active updates to stay compatible with emerging market requirements.

The Latest from Our Blog

How Stelo V6.3 Helps You Master Data Integration
Nov 28, 2023 · 2 min read

Sunsetting: What to Do When Your Data Replication Tool is No Longer Supported
Aug 29, 2023 · 3 min read

Unboxing Stelo V6.1: MERGE Support
Apr 25, 2023 · 4 min read

Unboxing Stelo V6.1: PowerShell Scripting, Support for Linux and Container-Based Deployment
Apr 18, 2023 · 3 min read

Get Started

These three steps will help you ensure Stelo works for your needs, then seamlessly deploy your solution.

1

Schedule a Demo

Our expert consultants will guide you through the functionality of Stelo, using your intended data stores.

2

Try Stelo

Test the full capability of the software in your own environment for 15 days. No obligations.

3

Go Live

When you're ready, we can deploy your Stelo instance in under 24 hours with no disruptions to your operations.

SCHEDULE A DEMO