Plans: Starter, Business, Enterprise, On-Premise, Add-on
Overview
The Change Data Capture (CDC) pipeline extracts changes in real time from databases that support CDC and loads them into Greenplum.
We use a heavily modified embedded Debezium engine for CDC.
Supported source databases
- MySQL
- SQL Server
- PostgreSQL
- Oracle
- DB2
- MongoDB
- AS400 (IBM i platforms)
When to use this pipeline
Use the pipeline described in this article to extract data from a CDC-enabled database and load it into the Greenplum database in near real-time.
Flows optimized for Greenplum
| Flow type | When to use |
| --- | --- |
| ETL data into Greenplum | When you need to extract data from any source, transform it, and load it into Greenplum. |
| Bulk load files into Greenplum | When you need to bulk-load files that already exist in server storage without applying any transformations. The flow automatically loads data into staging tables and MERGEs data into the destination. |
| Stream CDC events into Greenplum (you are here) | When you need to stream updates from a database that supports Change Data Capture (CDC) into Greenplum in real time. |
| Stream messages from a queue into Greenplum | When you need to stream messages from a message queue that supports streaming into Greenplum in real time. |
How it works
The end-to-end CDC pipeline extracts data from a CDC-enabled database and loads it into Greenplum.
There are two options when creating a pipeline:
1. A single flow that streams CDC events directly into Greenplum.
This flow streams CDC events into the server storage location in real-time and periodically (as often as every second) loads the data into Greenplum in parallel with the stream. The advantage of a single-flow pipeline is simplicity: there is just one flow to configure, schedule, and monitor. The disadvantage is that if the streaming fails, the load also fails, and vice versa. Note that the flow is fault-tolerant: after the issue is fixed, the stream and load resume from the last successful checkpoint.
2. A pipeline with independent Extract and Load flows
- Extract Flow: this Flow extracts data from a CDC-enabled database.
- Load Flow: this Flow loads data into Greenplum.
The extract and load Flows run in parallel, which ensures high processing speed and low latency. The actual pipeline can include multiple independent extract and load Flows, which allows it to scale horizontally across multiple processing nodes. A minimal sketch of both options follows this list.
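To make the two options concrete, here is a minimal Python sketch, an illustration under stated assumptions rather than the actual Etlworks implementation. The helpers read_cdc_events and load_into_greenplum are stubs standing in for the real CDC stream and the real bulk load, and the staging directory is a placeholder:

```python
import csv
import glob
import os
import threading
import time

EVENTS_DIR = "debezium_data/events"  # staging location (assumption)
os.makedirs(EVENTS_DIR, exist_ok=True)

def read_cdc_events():
    # Stub: the real engine yields insert/update/delete events from the
    # source transaction log. Here we fabricate two rows and stop.
    yield ["I", 1, "new order"]
    yield ["U", 2, "updated order"]

def load_into_greenplum(path):
    # Stub: the real flow bulk-loads the file and MERGEs it into Greenplum.
    print("loading", path)

def extract(stop):
    """Stream CDC events into rotating CSV chunk files."""
    chunk = 0
    while not stop.is_set():
        path = f"{EVENTS_DIR}/orders_cdc_stream_{chunk}.csv"
        with open(path, "w", newline="") as f:
            csv.writer(f).writerows(read_cdc_events())
        chunk += 1
        stop.wait(1)  # wait for more events

def load(stop, every_ms=300_000):
    """Periodically pick up chunk files and load them, in parallel with
    extract. (A real pipeline coordinates chunk rotation so it never
    reads a partially written file; that is skipped here.)"""
    while not stop.is_set():
        for path in sorted(glob.glob(f"{EVENTS_DIR}/*_cdc_stream_*.csv")):
            load_into_greenplum(path)
            os.remove(path)  # done; never load the same chunk twice
        stop.wait(every_ms / 1000)

# Option 1: run extract and load in one process (a single flow).
# Option 2: run them as two independently scheduled flows/processes.
stop = threading.Event()
threading.Thread(target=extract, args=(stop,), daemon=True).start()
threading.Thread(target=load, args=(stop, 1_000), daemon=True).start()
time.sleep(3)
stop.set()  # demo only: stop both sides after a few seconds
```

Either way, the load side only ever sees finished chunk files, which is what lets the stream and the load proceed independently.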
Prerequisites
- CDC is enabled for the source database.
- The Greenplum instance must be reachable from the Internet.
- The gpload utility must be installed on the same VM as Etlworks. Contact Etlworks support at support@etlworks.com if you need assistance installing gpload.
Install and configure Greenplum gpload utility
Read how to install and configure the Greenplum gpload utility and the command used to execute gpload.
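For orientation, invoking gpload generally amounts to running it against a YAML control file. A hedged sketch follows; the host, database, file, and table names are placeholders, and the control file mirrors the documented gpload layout rather than anything Etlworks generates:

```python
import subprocess
import tempfile

# Placeholder control file in the documented gpload YAML layout.
CONTROL = """\
VERSION: 1.0.0.1
DATABASE: sales
USER: gpadmin
HOST: gp-master.example.com
PORT: 5432
GPLOAD:
  INPUT:
    - SOURCE:
        FILE:
          - /data/staging/orders_cdc_stream_0.csv
    - FORMAT: csv
    - DELIMITER: ','
    - QUOTE: '"'
  OUTPUT:
    - TABLE: public.orders_stage
    - MODE: insert
"""

with tempfile.NamedTemporaryFile("w", suffix=".yml", delete=False) as f:
    f.write(CONTROL)

# gpload reads the control file and bulk-loads the file(s) into the target
# table using Greenplum's parallel (gpfdist-based) loading.
subprocess.run(["gpload", "-f", f.name], check=True)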
A pipeline with a single flow
Greenplum CDC flow streams CDC events into the designated server storage location in real-time and periodically (as often as every second) loads the data into Greenplum in parallel with the stream.
There is no need to create a separate Flow for the initial load. The first time it connects to a CDC-enabled source database, it reads a consistent snapshot of all of the included databases and tables. When that snapshot is complete, the Flow continuously reads the changes that were committed to the transaction log and generates the corresponding insert, update, and delete events.
Read more about CDC in Etlworks.
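For orientation, a simplified Debezium-style change event is shown below as a Python dict. The field names follow Debezium's event envelope; the values are made up:

```python
# A simplified Debezium-style change event (illustration only).
event = {
    "op": "u",  # c = insert, u = update, d = delete, r = snapshot read
    "before": {"id": 42, "status": "pending"},     # row image before the change
    "after":  {"id": 42, "status": "shipped"},     # row image after the change
    "source": {"table": "orders", "lsn": 901234},  # position in the log
    "ts_ms": 1700000000000,
}
# During the initial snapshot the engine emits op = "r" events for existing
# rows; afterwards it streams c/u/d events read from the transaction log.
```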
Step 1. Create a CDC Connection for the source database.
Read how to create a CDC connection.
When creating a CDC connection, enable Do not enclose null values in double quotes.
Step 2. Optionally configure storage location.
Read how to change the default location.
Step 3. Create a Greenplum connection for the destination.
Step 4. Create a connection for history and offset files.
Read how to create CDC Offset and History connection.
Etlworks CDC connectors store the history of DDL changes for the monitored database in the history file and the current position in the transaction log in the offset file.
Typical CDC extract flow starts by snapshotting the monitored tables (A) or starts from the oldest known position in the transaction (redo) log (B), then proceeds to stream changes in the source database (C). If the Flow is stopped and restarted, it resumes from the last recorded position in the transaction log. The connection created in this step can be used to reset the CDC pipeline and restart the process from scratch.
The connection, by default, points to the directory {app.data}/debezium_data.
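The role of the offset file can be sketched as follows. The file name, layout, and the lsn key are assumptions for illustration; the real connectors use Debezium's own offset storage:

```python
import json
import os

OFFSET_FILE = "debezium_data/offset.json"  # assumed name and layout

def last_position():
    # Resume from the last recorded transaction-log position, if any.
    if os.path.exists(OFFSET_FILE):
        with open(OFFSET_FILE) as f:
            return json.load(f)["lsn"]
    return None  # no offset yet -> the flow starts with a consistent snapshot

def checkpoint(lsn):
    # Record progress so a stopped flow resumes instead of re-snapshotting.
    with open(OFFSET_FILE, "w") as f:
        json.dump({"lsn": lsn}, f)

# Deleting the offset file (and the DDL history file) resets the pipeline:
# the next run starts from scratch with a new snapshot.
```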
Step 5. Create Greenplum CDC flow.
In Flows, click Add flow. Type in cdc in Select Flow type. Select Stream CDC events into Greenplum.
Left to right: select the CDC connection created in step 1, select tables to monitor in FROM, select the Greenplum connection created in step 3, and select or enter the Greenplum table name in TO. When streaming data from multiple source tables, set the destination table using a wildcard template in the following format: schema.prefix_*_suffix, where schema is the Greenplum schema to load data into.
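As an illustration, the wildcard substitution behaves roughly like the hypothetical helper below (the real flow performs this mapping internally):

```python
def destination_table(template: str, source_table: str) -> str:
    """Replace the * in a schema.prefix_*_suffix template with the source table name."""
    return template.replace("*", source_table)

assert destination_table("sales.raw_*_cdc", "orders") == "sales.raw_orders_cdc"
assert destination_table("sales.*", "customers") == "sales.customers"
```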
Step 6. Configure load parameters
Click the MAPPING button and select the Parameters tab.
If needed, modify the following Load parameters:
- Load data into Greenplum every (ms): by default, the flow loads data into Greenplum every 5 minutes (300000 milliseconds). The load runs in parallel with the CDC stream, which never stops. Decrease this parameter to load data into Greenplum more often.
- Wait (ms) to let running load finish when CDC stream stops: the CDC stream and load run in parallel, so when streaming stops, the flow executes the load one last time to finish loading the remaining data in the queue. It is possible that the load task is still running when the stream stops; use this parameter to configure how long the flow should wait before executing the final load. Clear this parameter to disable the wait. In this case, if the load task is still running, the flow will finish without executing the load one last time and will load the remaining data in the queue on the next run.
- Action: either MERGE (default) or INSERT. If the action is set to MERGE, the flow will INSERT records that do not exist in the destination table, UPDATE existing records, and DELETE records that were deleted in the source table.
- Lookup Fields: the MERGE action requires a list of columns that uniquely identify a record. By default, the flow attempts to predict the Lookup Fields by checking unique indexes in the source and destination tables; if there is no unique index in either table, the prediction is not guaranteed to be accurate. Use this parameter to define the Lookup Fields in the following format: fully.qualified.table1=field1,field2;fully.qualified.table2=field1,field2 (a parsing sketch follows this list).
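The Lookup Fields format can be read as a per-table list of key columns. The parser below is illustrative, not Etlworks code:

```python
def parse_lookup_fields(spec: str) -> dict[str, list[str]]:
    """Parse 'table1=f1,f2;table2=f1' into {table: [key columns]}."""
    tables = {}
    for entry in spec.split(";"):
        table, fields = entry.split("=")
        tables[table] = fields.split(",")
    return tables

spec = "sales.public.orders=order_id;sales.public.items=order_id,line_no"
assert parse_lookup_fields(spec) == {
    "sales.public.orders": ["order_id"],
    "sales.public.items": ["order_id", "line_no"],
}
```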
The other parameters are similar or the same as for the flow type Bulk load files into Greenplum.
Step 7. Schedule Greenplum CDC flow.
We recommend using a continuous run Schedule type. The idea is that the Flow runs until it is stopped manually, there is an error, or (if configured) there are no more new CDC events for an extended period of time. It restarts automatically after a configurable number of seconds.
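Conceptually, a continuous-run schedule behaves like the supervisor loop below. This is illustrative only; the scheduler, the restart delay, and the stop conditions are configured in Etlworks, not in code:

```python
import time

RESTART_AFTER_SECONDS = 30  # placeholder for the configurable restart delay

def run_flow():
    # Stand-in for one flow execution: streams and loads until the flow
    # stops (manual stop, error, or no new CDC events for a long time).
    pass

while True:
    try:
        run_flow()
    except Exception as exc:
        print("flow failed, will restart:", exc)
    time.sleep(RESTART_AFTER_SECONDS)  # then the flow is started again
```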
Monitor running CDC flow
Read how to monitor running CDC flow.
A pipeline with independent extract and load flows
Create and schedule Extract flow
CDC extract flow extracts data from a CDC-enabled database and creates CSV files with CDC events in the configured location. These files are loaded into Greenplum by Load Flow.
There is no need to create a separate Flow for the initial load. The first time it connects to a CDC-enabled source database, it reads a consistent snapshot of all of the included databases and tables. When that snapshot is complete, the Flow continuously reads the changes that were committed to the transaction log and generates the corresponding insert, update, and delete events.
Read more about CDC in Etlworks.
Step 1. Create a CDC Connection for the source database.
Read how to create a CDC connection.
When creating a CDC connection, enable Do not enclose null values in double quotes.
Step 2. Create a file storage connection for CDC events.
This connection will be used for staging files with CDC events.
Read how to create a connection for CDC events.
The connection, by default, points to the directory {app.data}/debezium_data/events.
Step 3. Create a connection for history and offset files.
Read how to create CDC Offset and History connection.
Etlworks CDC connectors store the history of DDL changes for the monitored database in the history file and the current position in the transaction log in the offset file.
Typical CDC extract flow starts by snapshotting the monitored tables (A) or starts from the oldest known position in the transaction (redo) log (B), then proceeds to stream changes in the source database (C). If the Flow is stopped and restarted, it resumes from the last recorded position in the transaction log. The connection created in this step can be used to reset the CDC pipeline and restart the process from scratch.
The connection, by default, points to the directory {app.data}/debezium_data.
Step 4. Create CSV format.
This format is used to create CSV files with CDC events.
Read how to create CSV format.
Step 5. Create CDC extract flow.
In Flows, click Add flow. Type in cdc in Select Flow type. Select Stream CDC events, create files.
Left to right: select the CDC connection created in step 1, select tables to monitor in FROM, select the connection created in step 2 to stage files with CDC events, and select the format created in step 4.
You can now execute the flow manually or schedule it to run continuously.
To stop the CDC Flow manually, click Stop / Cancel.
Note that, as configured, the CDC flow never stops automatically; this is the recommended configuration. You can configure the CDC Connection to stop when there are no more new CDC events for an extended period of time. Read more.
Step 6. Schedule CDC extract flow.
We recommend using a continuous run Schedule type. The idea is that the extract Flow runs until it is stopped manually, there is an error, or (if configured) there are no more new CDC events for an extended period of time. It restarts automatically after a configurable number of seconds.
Monitor running CDC extract flow
Read how to monitor running CDC extract flow.
Create and schedule Load flow
This Flow is used to load files created by the CDC extract flow into Greenplum.
Read more about Greenplum ETL flows.
Step 1. Create a new Greenplum connection.
Read how to create a Greenplum connection.
Step 2. Create a Flow to load data into Greenplum.
In Flows, click [+], type in file to greenplum into the search field, and select the Flow.
Step 3. Configure ETL transformation.
Select or enter the following attributes of the transformation (left to right):
1. Server Storage Connection for CDC events created in step 2 of the extract flow.
2. CSV Format created in step 4 of the extract flow.
3. A wildcard filename that matches the names of the files created by the CDC extract flow: *_cdc_stream_*.csv.
Note: if the source connection supports default wildcard templates (the parameter Contains CDC events is enabled or the source connection is created using the CDC Events connector), the wildcard filename can be selected in FROM.
4. Server Storage Connection created in step 2 of the extract flow.
5. CSV Format created in step 4 of the extract flow.
6. The wildcard destination table name in the fully qualified format: schema.*.
7. Select the Connection tab and select the Greenplum connection created in step 1.
Step 4. Configure MERGE into Greenplum.
By default, the bulk load flow INSERTs data into the Greenplum table. To configure MERGE (a sketch of the SQL a staged merge typically runs follows these steps):
1. Click the MAPPING button.
2. Set Action to MERGE and enable Predict lookup fields.
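For readers who want to picture what MERGE does, below is a hedged SQL sketch of a staged merge, driven from Python with psycopg2 (which works against Greenplum since it speaks the PostgreSQL protocol). The table names, the order_id lookup field, and the cdc_op change-type column are placeholders; the actual flow generates its own statements:

```python
import psycopg2  # assumption: PostgreSQL driver used against Greenplum

MERGE_SQL = """
-- apply deletes captured from the source
DELETE FROM public.orders d
 USING public.orders_stage s
 WHERE d.order_id = s.order_id AND s.cdc_op = 'd';

-- update rows that already exist in the destination
UPDATE public.orders d
   SET status = s.status
  FROM public.orders_stage s
 WHERE d.order_id = s.order_id AND s.cdc_op IN ('c', 'u');

-- insert rows that do not exist yet
INSERT INTO public.orders (order_id, status)
SELECT s.order_id, s.status
  FROM public.orders_stage s
 WHERE s.cdc_op IN ('c', 'u')
   AND NOT EXISTS (SELECT 1 FROM public.orders d
                   WHERE d.order_id = s.order_id);
"""

with psycopg2.connect("dbname=sales host=gp-master.example.com") as conn:
    with conn.cursor() as cur:
        cur.execute(MERGE_SQL)  # one transaction: delete, update, insert
```

This is also why the Lookup Fields matter: they supply the join keys (order_id above) for the update, delete, and existence checks.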
Step 5. Configure the flow to automatically handle source schema changes.
The load flow always creates a table in Greenplum if it does not exist. Enable Alter target table if the source has columns that the target table doesn't have to automatically add missing columns to the target table.
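Conceptually, this option compares column sets and issues ALTER TABLE for anything missing. The helper below is illustrative, not what the flow literally runs:

```python
def alter_statements(table: str, source_cols: dict[str, str],
                     target_cols: set[str]) -> list[str]:
    """Generate ALTER TABLE ... ADD COLUMN for source columns the target lacks."""
    return [
        f"ALTER TABLE {table} ADD COLUMN {col} {sql_type};"
        for col, sql_type in source_cols.items()
        if col not in target_cols
    ]

print(alter_statements(
    "public.orders",
    {"order_id": "bigint", "status": "text", "priority": "int"},
    {"order_id", "status"},
))  # -> ['ALTER TABLE public.orders ADD COLUMN priority int;']
```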
You can now execute the Load flow manually.
Step 6. Schedule load flow.
Schedule flow to run as often as needed. These are the options:
- Run flow periodically (as often as once a minute).
- Run flow continuously (as often as once a second).