Unboxing Stelo V6.1: High-Performance Support for Non-SQL Destinations
Stelo recently released Stelo Data Replicator V6.1. Like V5, V6.1 offers robust, real-time data replication but with added features to support evolving data infrastructure. Over our 30-year history, we’ve developed best practices for moving data that still guide us today.
Over a three-part blog series, we’ll break down important new features in V6.1. Here, we’ll cover how we’re providing high-performance support for non-SQL destinations.
In short, we can support non-SQL destinations like Azure Data Lake Storage Gen2 (ADLS Gen2) and AWS and Confluent connectors, and achieve an order-of-magnitude improvement in message delivery time, by:
- Switching from a language-oriented interface (e.g., ODBC) to a message-oriented mechanism (e.g., Kafka) or a native interface (e.g., Databricks or DataFrames)
- Implementing extensible Stelo Data Sink Connectors
- Changing the underlying transport methodology
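To make the first change concrete, here is a minimal Python sketch (names are illustrative, not Stelo's API) of a message-oriented handoff: instead of a consumer querying an intermediary database through a language-oriented interface, the producer publishes each change record as a self-describing message that any consumer can read without knowledge of the source.

```python
import json
import queue

# Message-oriented handoff: the producer serializes each change record as a
# self-describing JSON message; the consumer needs no intermediary staging
# database and no source-specific driver.
change_feed = queue.Queue()

def publish_change(table, op, row):
    """Producer side: encode a change event as a self-describing message."""
    message = json.dumps({"table": table, "op": op, "row": row})
    change_feed.put(message)

def consume_change():
    """Consumer side: decode the message without knowing its source."""
    return json.loads(change_feed.get())

publish_change("orders", "INSERT", {"id": 42, "total": 19.99})
event = consume_change()
print(event["table"], event["op"])  # orders INSERT
```

In a real deployment the queue would be a durable broker such as Kafka, but the decoupling principle is the same: producer and consumer share only the message format.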
Each of these changes represents a response to shifts in the way data is used and managed today. So, how did we get here?
The way we think about language- and message-oriented interfaces has evolved.
In V5, the language-oriented interface was strictly based on open database connectivity (ODBC). ODBC is a high-level application programming interface (API) that uses SQL to issue queries for data retrieval. Released in the early 1990s, ODBC is a well-established programming interface that accommodates different drivers for different databases while preserving a common programming interface. Previously, each database vendor typically had its own interface.
At the time, ODBC was a great solution and an ideal mechanism for change data capture applications to place data in a database for retrieval by another application, which would consume it. This allowed a variety of applications to be built without specialized knowledge of the source of change data. Now, new protocols eliminate the need for an intermediary database to host the data, giving the two processes greater independence from one another.
Moving forward, we started to see more and more programmers designing interfaces for modern databases with formats optimized for web servers, like JSON. While this created unnecessary work at times, something good came out of these efforts. We started to see a low coupling coefficient between data sources. In other words, companies wanted to make sure their information could be moved from place to place without concern for source or destination. As mentioned above, “self-defining” was the new ideal.
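To illustrate what "self-defining" means in practice, a change message can carry its own schema alongside the data, so any destination can interpret it without consulting the source. The layout below is a generic example, not Stelo's wire format:

```python
import json

# A self-defining change message: the schema travels with the payload, so a
# consumer can interpret the data without any source-specific metadata.
message = json.dumps({
    "schema": {"fields": [
        {"name": "id", "type": "int"},
        {"name": "email", "type": "string"},
    ]},
    "payload": {"id": 7, "email": "ada@example.com"},
})

decoded = json.loads(message)
field_types = {f["name"]: f["type"] for f in decoded["schema"]["fields"]}
print(field_types)  # {'id': 'int', 'email': 'string'}
```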
Today, we recognize that language-oriented interfaces can be challenging to work with, and the information they deliver can be equally hard to consume. As of V6.1, Stelo Data Replicator no longer relies on an ODBC interface; that said, when its capabilities are properly contextualized, ODBC can still offer efficiencies in modern data management.
There’s a growing need for data sink connectors.
Over time, it’s become more popular to move data into delta lakes, where messages are self-defining and can be encoded with JSON. To accommodate this shift, Stelo stopped using ODBC in V6.1. Instead, we’re using Stelo Data Sink Connectors. These custom connectors extend Stelo Data Replicator by allowing it to send data through a simple application, much like a web server, that reads the data and moves it to a repository. By design, data experiences very little transformation between the source and the delta lake.
In essence, Stelo Data Sink Connectors are lightweight software components that understand how to obtain change data and original data, move it across a communication link and transform it into the native interface. Each custom connector features a code fragment, written in Scala, that requires no modification on the customer side, so it’s still destination agnostic. Expanding on the sink metaphor, Stelo Data Sink Connectors act as a hose between Stelo Data Replicator and the data sink, which drains into a delta lake.
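Conceptually, a data sink connector has two responsibilities: read change messages off the communication link, and hand them to the destination in its native interface. The sketch below is illustrative Python, not Stelo's Scala connector code; the class and method names are hypothetical:

```python
import json

class DataSinkConnector:
    """Illustrative sink connector: decodes self-describing change messages
    and appends them to a destination with minimal transformation."""

    def __init__(self, destination):
        # destination is any object exposing append(record), which is what
        # keeps the connector destination-agnostic.
        self.destination = destination

    def deliver(self, raw_message):
        record = json.loads(raw_message)   # decode the self-describing message
        self.destination.append(record)    # hand off via the native interface

class InMemoryDeltaTable:
    """Stand-in for a delta lake table."""
    def __init__(self):
        self.rows = []
    def append(self, record):
        self.rows.append(record)

table = InMemoryDeltaTable()
connector = DataSinkConnector(table)
connector.deliver('{"table": "orders", "op": "INSERT", "row": {"id": 1}}')
print(len(table.rows))  # 1
```

Because the connector only depends on the message format and a narrow destination interface, swapping the delta lake for another repository requires no change on the producer side.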
There’s a better understanding of how certain underlying transport methodologies can cause inefficiencies.
Through trial and error, it became clear that while JSON was ideal for encoding the data, it was an inefficient format for transport. Through our custom connectors, we leverage Java Database Connectivity (JDBC) to deliver highly efficient data movement by contextualizing its capabilities. That efficiency translates to more communication bandwidth for data messaging.
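One way to see why verbose JSON is costly on the wire, and how batching and compression recover bandwidth, is the generic comparison below (an illustration of the transport trade-off, not Stelo's actual transport):

```python
import gzip
import json

# 1,000 change records sent one JSON message at a time vs. batched and
# gzip-compressed for transport. The payload stays JSON either way;
# only the transport representation changes.
records = [{"table": "orders", "op": "UPDATE", "row": {"id": i, "qty": i % 5}}
           for i in range(1000)]

per_message_bytes = sum(len(json.dumps(r).encode()) for r in records)
batched_bytes = len(gzip.compress(json.dumps(records).encode()))

# Repetitive field names compress extremely well, so the batched
# representation is a small fraction of the per-message total.
print(per_message_bytes, batched_bytes)
```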
With decades of expertise in data movement, Stelo uses best practices to help customers move change data to and from a wide variety of sources and destinations. In V6.1, we re-engineered how data is communicated and presented to achieve a level of performance that’s not available off the shelf.
Contact us to request a V6.1 demo. For more information on V6.1 features, check out Part 2 of our Unboxing Stelo V6.1 series.