- Starter
- Business
- Enterprise
- On-Premise
- Add-on
Overview
A Change Data Capture (CDC) pipeline extracts changes in real time from databases that support CDC and loads them into a relational database that supports bulk load.
We use a heavily modified embedded Debezium engine for CDC.
Supported source databases
- MySQL
- SQL Server
- PostgreSQL
- Oracle
- DB2
- MongoDB
- AS400 (IBM i platforms)
When to use this pipeline
Use the pipeline described in this article to extract data in real time from a CDC-enabled database and load it into a destination database that supports bulk load.
How it works
The end-to-end CDC pipeline extracts data from a CDC-enabled database and loads it into a destination database that supports bulk load.
There are two options when creating a pipeline:
1. A single flow that streams CDC events directly into the destination database.
This flow streams CDC events into the designated stage in the local or cloud storage in real time and periodically (as often as every second) loads the data into the destination database in parallel with the stream. The advantage of the pipeline with a single flow is simplicity. There is just one flow to configure, schedule, and monitor. The disadvantage is that if the streaming fails, the load also fails, and vice versa. Note that the flow is fault-tolerant. After fixing the issue, the stream and load resume from the last successful checkpoint.
2. A pipeline with independent Extract and Load flows
- Extract Flow: this Flow streams data from a CDC-enabled database into the stage.
- Load Flow: this Flow loads data into the destination database.
The extract and load Flows run in parallel, which guarantees very high processing speed and low latency. The actual pipeline can include multiple independent extract and load Flows, which allows it to scale horizontally across multiple processing nodes.
Prerequisites
1. CDC is enabled for the source database.
2. The destination database must support bulk load operations.
Below are examples for the most commonly used databases (a minimal sketch follows the list):
- SQL Server BULK INSERT statement
- RDS SQL Server BULK INSERT statement using Amazon S3 integration
- Postgres COPY command
- RDS Postgres COPY command using Amazon S3 integration
- MySQL LOAD DATA INFILE statement
- Amazon Aurora MySQL LOAD DATA INFILE statement using Amazon S3 integration
- Oracle Inline External Tables
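As a minimal illustration of both prerequisites, the sketch below shows how CDC is typically enabled on a SQL Server source and what a basic bulk load command can look like on a PostgreSQL destination. The database, schema, table, and file names are placeholders; consult your database documentation for the exact options.
-- Prerequisite 1: enable CDC on a SQL Server source (placeholder names).
USE my_source_db;
EXEC sys.sp_cdc_enable_db;
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'orders',
    @role_name     = NULL;
-- Prerequisite 2: a basic bulk load command on a PostgreSQL destination
-- (assumes the CSV file is accessible to the database server).
COPY public.orders FROM '/tmp/orders.csv' WITH (FORMAT csv, HEADER true);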
A pipeline with a single flow
Create and schedule CDC flow with bulk load
CDC flow streams CDC events into the designated stage in the local or cloud storage in real-time and periodically (as often as every second) loads the data into the destination database in parallel with the stream.
There is no need to create a separate Flow for the initial load. The first time it connects to a CDC-enabled source database, it reads a consistent snapshot of all of the included databases and tables. When that snapshot is complete, the Flow continuously reads the changes that were committed to the transaction log and generates the corresponding insert, update, and delete events.
Read more about CDC in Etlworks.
Step 1. Create a CDC Connection for the source.
Read how to create a CDC connection.
Step 2 (optional). Create a connection for staging files in cloud storage
If you are planning to stage files in AWS S3, Azure Storage, or Google Cloud Storage, you need to create one of the following connections:
- Amazon S3 - for staging files in S3.
- Google Cloud Storage - for staging files in GC storage.
- Microsoft Azure Storage - for staging files in Azure Blob.
This step is not required if the flow stages files locally.
Step 3. Create a database connection for the destination.
It is recommended to enable the auto-commit for the destination connection.
Step 4. Create a connection for history and offset files.
Read how to create CDC Offset and History connection.
Etlworks CDC connectors store the history of DDL changes for the monitored database in the history file and the current position in the transaction log in the offset file.
Typical CDC extract flow starts by snapshotting the monitored tables (A) or starts from the oldest known position in the transaction (redo) log (B), then proceeds to stream changes in the source database (C). If the Flow is stopped and restarted, it resumes from the last recorded position in the transaction log. The connection created in this step can be used to reset the CDC pipeline and restart the process from scratch.
The connection, by default, points to the directory {app.data}/debezium_data.
Step 5. Create CDC flow.
In Flows click Add flow. Type in cdc in Select Flow type. Select Stream CDC and bulk load CDC events into database.
Left to right: select the CDC connection created in step 1, select tables to monitor in FROM, select the database connection created in step 3, and select or enter the destination table name in TO. When streaming data from multiple source tables, set the destination table using a wildcard template, as in the hypothetical example below.
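For example, a transformation that monitors all tables in an inventory schema and writes them to the public schema of the destination could look like this (illustrative names only; the exact wildcard syntax depends on the source and destination connectors):
FROM: inventory.*
TO:   public.*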
Step 6 (optional). Set connection for staging files in the cloud storage.
If you are planning to stage files in AWS S3, Azure Storage, or Google Cloud Storage, select the Connections tab, select the connection created in step 2, and select the CSV format.
This step is not required if the flow stages files locally.
Step 7. Configure load parameters
Click the MAPPING button and select the Parameters tab.
If needed, modify the following Load parameters:
- Load data into database every (ms): by default, the flow loads data into the database every 5 minutes (300000 milliseconds). The load runs in parallel with the CDC stream, which never stops. Decrease this parameter to load data into the database more often or increase it to reduce the number of consumed database credits.
- Wait (ms) to let running load finish when CDC stream stops: by default, the flow loads data into the database every 5 minutes. The CDC stream and load run in parallel, so when streaming stops, the flow executes the load one more time to finish loading the remaining data in the queue. It is possible that the load is still running when the stream stops. Use this parameter to configure how long the flow should wait before executing the load the last time. Clear this parameter to disable the wait. In this case, if the load task is still running, the flow will finish without executing the load one last time. The flow will load the remaining data in the queue on the next run.
- Load into staging table: by default, the Flow will attempt to create and load data into a temporary table. Not all databases support bulk load into a temp table. When this parameter is enabled (it is disabled by default), the flow will create a staging table instead of a temporary one. It will automatically drop the staging table on exit.
- Bulk Load SQL: the Bulk Load SQL is used to load files in the cloud or file storage into the staging or temporary tables in the destination database. This is a required parameter. The following {tokens} can be used in the Bulk Load SQL:
  - {TABLE} - the table to load data into
  - {FILE_TO_LOAD} - the name of the file to load with path and extension
  - {FILE} - the name of the file to load without path
  - {FULL_FILE_NAME} - same as {FILE_TO_LOAD}
  - {FILE_NO_EXT} - the name of the file to load without path and extension
  - {EXT} - the extension of the file to load without '.'
  Example for Azure SQL Server:
  BULK INSERT {TABLE}
  FROM '{FILE_TO_LOAD}'
  WITH (
  DATA_SOURCE = 'BulkLoadDataSource',
  FIELDTERMINATOR = ',',
  FORMAT='CSV',
  FIRSTROW = 2,
  MAXERRORS = 10
  )
- Action: the action can be MERGE (default) or INSERT. If the action is set to MERGE, the flow will INSERT records that do not exist in the destination table, UPDATE existing records, and DELETE records that were deleted in the source table.
- Lookup Fields: the MERGE action requires a list of columns that uniquely identify the record. By default, the flow will attempt to predict the Lookup Fields by checking unique indexes in the source and destination tables, but if there is no unique index in either table it is not guaranteed that the prediction will be 100% accurate. Use this parameter to define the Lookup Fields in the following format: fully.qualified.table1=field1,field2;fully.qualified.table2=field1,field2
The other parameters are similar or the same as for the flow type Bulk load files into database.
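For example, if the destination is PostgreSQL and the staged files are accessible to the database server, the Bulk Load SQL could be a COPY command along the lines of the sketch below (an illustration only, not the only possible configuration; the options must match the CSV format used for staging):
COPY {TABLE}
FROM '{FILE_TO_LOAD}'
WITH (FORMAT csv, HEADER true);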
Step 8. Schedule CDC flow.
We recommend using a continuous run Schedule type. The idea is that the Flow runs until it is stopped manually, there is an error, or (if configured) there are no more new CDC events for an extended period of time. It restarts automatically after a configurable number of seconds.
Monitor running CDC flow
Read how to monitor running CDC flow.
A pipeline with independent extract and load flows
Create and schedule Extract flow
CDC extract flow extracts data from a CDC-enabled database and creates CSV files with CDC events in the configured location. These files are loaded into the destination database by the Load Flow.
There is no need to create a separate Flow for the initial load. The first time it connects to a CDC-enabled source database, it reads a consistent snapshot of all of the included databases and tables. When that snapshot is complete, the Flow continuously reads the changes that were committed to the transaction log and generates the corresponding insert, update, and delete events.
Read more about CDC in Etlworks.
Step 1. Create a CDC Connection for the source database.
Read how to create a CDC connection.
Step 2. Create file storage connection for CDC events.
Depending on how you prefer to stage files with CDC events for loading data into the destination database, create one of the following connections:
- The CDC Events connection for loading data from the local (server) storage. The connection, by default, points to the directory {app.data}/debezium_data/events.
- S3 connection for loading data from S3.
- Azure Storage connection for loading data from Azure Storage.
- Google Cloud Storage connection for loading data from Google Cloud Storage.
To improve performance when loading data from cloud storage such as S3, Azure Storage, and Google Cloud Storage, it is recommended that you enable GZip archiving. The bulk load command must support loading from gzipped files.
Step 3. Create a connection for history and offset files.
Read how to create CDC Offset and History connection.
Etlworks CDC connectors store the history of DDL changes for the monitored database in the history file and the current position in the transaction log in the offset file.
Typical CDC extract flow starts by snapshotting the monitored tables (A) or starts from the oldest known position in the transaction (redo) log (B), then proceeds to stream changes in the source database (C). If the Flow is stopped and restarted, it resumes from the last recorded position in the transaction log. The connection created in this step can be used to reset the CDC pipeline and restart the process from scratch.
The connection, by default, points to the directory {app.data}/debezium_data.
Step 4. Create CSV format.
This format is used to create CSV files with CDC events.
Read how to create CSV format.
Enable the following properties:
- Always enclose
- Escape double-quotes
- Save Metadata
Step 5. Create CDC extract flow.
In Flows click Add flow. Type in cdc in Select Flow type. Select Stream CDC events, create files.
Left to right: select the CDC connection created in step 1, select tables to monitor in FROM, select the connection created in step 2 to stage files with CDC events, and select the format created in step 4.
You can now execute the flow manually or schedule it to run continuously.
To stop the CDC Flow manually, click Stop / Cancel.
Note that, as configured, the CDC flow never stops automatically. This is the recommended configuration. You can configure the CDC Connection to stop when there are no more new CDC events for an extended period of time. Read more.
Step 6. Schedule CDC extract flow.
We recommend using a continuous run Schedule type. The idea is that the extract Flow runs until it is stopped manually, there is an error, or (if configured) there are no more new CDC events for an extended period of time. It restarts automatically after a configurable number of seconds.
Monitor running CDC extract flow
Read how to monitor running CDC extract flow.
Create and schedule Load flow
This Flow is used to bulk load files created by the CDC extract flow into the destination database.
Read more about bulk load flow.
Step 1. Create a database connection for the destination.
It is recommended to enable the auto-commit for the destination connection.
Step 2. Add new bulk load flow.
In Flows click [+], type in bulk load files into database, and select the Flow.
Step 3. Configure load transformation.
Select or enter the following attributes of the transformation (left to right):
1. The file storage Connection for CDC events created in step 2 of the extract flow.
2. The CSV Format created in step 4 of the extract flow.
3. A wildcard filename that matches the names of the files created by the CDC extract flow: *_cdc_stream_*.csv for uncompressed files or *_cdc_stream_*.gz for compressed files.
Note: If the source connection supports default wildcard templates (the parameter Contains CDC events is enabled or the source connection is created using the CDC Events connector), the wildcard filename can be selected in FROM.
4. Destination database Connection created in step 1.
5. The wildcard destination table name.
Step 4. Configure how to load data into the destination tables.
1. Click the MAPPING button and select the Parameters tab.
2. Configure Bulk Load SQL. The Bulk Load SQL is used to load files in the cloud or file storage into the staging or temporary tables in the destination database. This is a required parameter. The following {tokens} can be used in the Bulk Load SQL:
- {TABLE} - the table to load data into
- {FILE_TO_LOAD} - the name of the file to load with path and extension
- {FILE} - the name of the file to load without path
- {FULL_FILE_NAME} - same as {FILE_TO_LOAD}
- {FILE_NO_EXT} - the name of the file to load without path and extension
- {EXT} - the extension of the file to load without '.'
Example for Azure SQL Server:
BULK INSERT {TABLE}
FROM '{FILE_TO_LOAD}'
WITH (
DATA_SOURCE = 'BulkLoadDataSource',
FIELDTERMINATOR = ',',
FORMAT='CSV',
FIRSTROW = 2,
MAXERRORS = 10
)
3. Optionally set Action to CDC MERGE and enable Predict lookup fields.
4. Optionally configure How to MERGE and enable Load into staging table.
- How to MERGE: defines how the flow merges data in the temp or staging table with the data in the actual table. The default is DELETE/INSERT: DELETE all records in the actual table that also exist in the temp table, then INSERT all records from the temp table into the actual table. If this parameter is set to MERGE, the flow will execute native MERGE SQL if it is supported by the destination database. Note that not all databases support MERGE (UPSERT). For example, PostgreSQL and MySQL do not.
- Load into staging table: by default, the Flow will attempt to create and load data into a temporary table. Not all databases support bulk load into a temp table. When this parameter is enabled (it is disabled by default), the flow will create a staging table instead of a temporary one. It will automatically drop the staging table on exit.
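To illustrate the default DELETE/INSERT strategy, the flow effectively runs something like the following against hypothetical actual and staging tables keyed on id (a simplified sketch; the real SQL is generated by the flow and depends on the Lookup Fields):
DELETE FROM actual_table
WHERE id IN (SELECT id FROM staging_table);
INSERT INTO actual_table
SELECT * FROM staging_table;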
Step 5. Configure flow to automatically handle source schema changes.
The load flow always creates a table in the destination database if it does not exist.
Enable Alter target table if the source has columns that the target table doesn't have to automatically add missing columns to the target table.
You can now execute the Load flow manually.
Step 6. Schedule load flow.
Schedule flow to run as often as needed. These are the options:
- Run flow periodically (as often as once a minute).
- Run flow continuously (as often as once a second).