- Starter
- Business
- Enterprise
- On-Premise
- Add-on
Overview
The Change Data Capture (CDC) pipeline allows extracting changes in real time from databases that support CDC and loading them into Snowflake.
We use a heavily modified embedded Debezium engine for CDC.
Supported source databases
- MySQL
- SQL Server
- PostgreSQL
- Oracle
- DB2
- MongoDB
- AS400 (IBM i platforms)
When to use this pipeline
Use the pipeline described in this article to extract data from a CDC-enabled database and load it into Snowflake in real time.
Flows optimized for Snowflake
| Flow type | | When to use |
| --- | --- | --- |
| ETL data into Snowflake | | When you need to extract data from any source, transform it, and load it into Snowflake. |
| Bulk load files into Snowflake | | When you need to bulk-load files that already exist in the external Snowflake stage (S3, Azure Blob, Google Cloud Storage) or in the server storage without applying any transformations. The flow automatically generates the COPY INTO command and MERGEs data into the destination. |
| Stream CDC events into Snowflake | You are here | When you need to stream updates from a database that supports Change Data Capture (CDC) into Snowflake in real time. |
| Stream messages from a queue into Snowflake | | When you need to stream messages from a message queue that supports streaming into Snowflake in real time. |
| COPY files into Snowflake | | When you need to bulk-load data from file-based or cloud storage, an API, or a NoSQL database into Snowflake without applying any transformations. This flow requires providing a user-defined COPY INTO command. Unlike Bulk load files into Snowflake, this flow does not support automatic MERGE. |
How it works
The end-to-end CDC pipeline extracts data from a CDC-enabled database and loads it into Snowflake.
There are two options when creating a pipeline:
1. A single flow that streams CDC events directly into Snowflake.
This flow streams CDC events into the designated Snowflake stage in real time and periodically (as often as every second) loads the data into Snowflake in parallel with the stream. The advantage of the pipeline with a single flow is simplicity: there is just one flow to configure, schedule, and monitor. The disadvantage is that if the streaming fails, the load also fails, and vice versa. Note that the flow is fault-tolerant; after fixing the issue, the stream and load resume from the last successful checkpoint.
2. A pipeline with independent Extract and Load flows
- Extract Flow: this Flow streams data from a CDC-enabled database into the Snowflake stage.
- Load Flow: this Flow loads data into Snowflake.
The extract and load Flows run in parallel, which guarantees very high processing speed and low latency. The actual pipeline can include multiple independent extract and load Flows, which allows it to scale horizontally across multiple processing nodes.
Prerequisites
1. CDC is enabled for the source database.
2. The Snowflake data warehouse is active.
3. The Stage name is set for the Snowflake connection or Transformation (the latter overrides the stage set for the Connection). Etlworks uses the Snowflake COPY INTO command to load data into Snowflake tables. COPY INTO requires a named internal or external stage. The stage refers to the location where your data files are stored for loading into Snowflake. Read how Etlworks flow automatically creates the named Snowflake stage. A minimal stage and COPY INTO sketch is provided after this list.
4. For loading data from an external stage in AWS S3, Azure Blob, or Google Cloud Storage, the Amazon S3 bucket, Google Cloud Storage bucket, or Azure blob container must be created beforehand. Note that the Etlworks flow does not create the bucket or blob.
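For reference, here is a minimal Snowflake SQL sketch of a named internal stage, a matching CSV file format, and the kind of COPY INTO statement the flow generates. All names (ETL_STAGE, CDC_CSV_FORMAT, MYSCHEMA.CUSTOMERS) are hypothetical; in practice Etlworks can create the named stage automatically and builds the actual COPY INTO command for you.

```sql
-- Hypothetical names; Etlworks can create the named stage automatically.
-- A named internal stage used by COPY INTO:
CREATE STAGE IF NOT EXISTS ETL_STAGE;

-- A CSV file format matching the files produced by the CDC flow:
CREATE FILE FORMAT IF NOT EXISTS CDC_CSV_FORMAT
  TYPE = CSV
  FIELD_OPTIONALLY_ENCLOSED_BY = '"'
  SKIP_HEADER = 1;

-- Roughly what the generated load looks like (the flow builds this statement):
COPY INTO MYSCHEMA.CUSTOMERS
  FROM @ETL_STAGE
  FILE_FORMAT = (FORMAT_NAME = 'CDC_CSV_FORMAT')
  PATTERN = '.*_cdc_stream_.*[.]csv';
```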
A pipeline with a single flow
Create and schedule Snowflake CDC flow
The Snowflake CDC flow streams CDC events into the designated Snowflake stage in real time and periodically (as often as every second) loads the data into Snowflake in parallel with the stream.
There is no need to create a separate Flow for the initial load. The first time it connects to a CDC-enabled source database, it reads a consistent snapshot of all of the included databases and tables. When that snapshot is complete, the Flow continuously reads the changes that were committed to the transaction log and generates the corresponding insert, update, and delete events.
Read more about CDC in Etlworks.
Step 1. Create a CDC Connection for the source.
Read how to create a CDC connection.
Step 2 (optional). Create a connection for staging files in cloud storage
If you are planning to use an external Snowflake stage in AWS S3, Azure Storage, or Google Cloud Storage, you need to create one of the following connections:
- Amazon S3 - for staging files in S3.
- Google Cloud Storage - for staging files in GC storage.
- Microsoft Azure Storage - for staging files in Azure Blob.
This step is not required if you are using an internal Snowflake stage.
Step 3. Create a Snowflake connection for the destination.
When creating a Connection, set the Stage name. For loading files in cloud storage, the named external stage must be configured to read data from the storage type and location configured for the CDC connection (see the sketch below). You can override the stage name set for the Connection when configuring the CDC source-to-destination transformation. Read how Etlworks flow automatically creates the named Snowflake stage.
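Below is a minimal, hypothetical sketch of an external stage that reads from an S3 bucket. The stage name, bucket, path, and storage integration (CDC_S3_STAGE, my-etl-bucket, S3_INT) are assumptions for illustration; the URL must match the bucket and folder used by the storage connection created in step 2.

```sql
-- Hypothetical names and locations; align them with your cloud storage connection.
CREATE STAGE IF NOT EXISTS CDC_S3_STAGE
  URL = 's3://my-etl-bucket/cdc-events/'
  STORAGE_INTEGRATION = S3_INT   -- or CREDENTIALS = (AWS_KEY_ID = '...' AWS_SECRET_KEY = '...')
  FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"');
```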
Step 4. Create a connection for history and offset files.
Read how to create CDC Offset and History connection.
Etlworks CDC connectors store the history of DDL changes for the monitored database in the history file and the current position in the transaction log in the offset file.
Typical CDC extract flow starts by snapshotting the monitored tables (A) or starts from the oldest known position in the transaction (redo) log (B), then proceeds to stream changes in the source database (C). If the Flow is stopped and restarted, it resumes from the last recorded position in the transaction log. The connection created in this step can be used to reset the CDC pipeline and restart the process from scratch.
The connection, by default, points to the directory {app.data}/debezium_data.
Step 5. Create Snowflake CDC flow.
In Flows click Add flow. Type in cdc in Select Flow type. Select Stream CDC events into Snowflake.
Left to right: select the CDC connection created in step 1, select tables to monitor in FROM, select the Snowflake connection created in step 3, and select or enter the Snowflake table name in TO. When streaming data from multiple source tables, set the destination table using a wildcard template in the following format: SCHEMA.PREFIX_*_SUFFIX, where SCHEMA is the Snowflake schema to load data into (the wildcard is typically substituted with the source table name, so with the template MYSCHEMA.*, changes from the source table customers would land in MYSCHEMA.CUSTOMERS). You can also use a fully qualified table name: DATABASE.SCHEMA.*.
Step 6 (optional). Set the connection for staging files in cloud storage.
If you are planning to use an external Snowflake stage in AWS S3, Azure Storage, or Google Cloud Storage, select the Connections tab, select the connection created in step 2, and select the CSV format.
This step is not required if you are using an internal Snowflake stage.
Step 7. Configure load parameters
Click the MAPPING button and select the Parameters tab.
If needed, modify the following Load parameters:
- Load data into Snowflake every (ms): by default, the flow loads data into Snowflake every 5 minutes (300000 milliseconds). The load runs in parallel with the CDC stream, which never stops. Decrease this parameter to load data into Snowflake more often or increase it to reduce the number of consumed Snowflake credits.
- Wait (ms) to let running load finish when CDC stream stops: by default, the flow loads data into Snowflake every 5 minutes. The CDC stream and the load run in parallel, so when streaming stops, the flow executes the load one more time to finish loading the remaining data in the queue. It is possible that the load is still running when the stream stops. Use this parameter to configure how long the flow should wait before executing the load for the last time. Clear this parameter to disable the wait; in this case, if the load task is still running, the flow will finish without executing the load one last time, and the remaining data in the queue will be loaded on the next run.
- Action: the action can be MERGE (default) or INSERT. If the action is set to MERGE, the flow will INSERT records that do not exist in the destination table, UPDATE existing records, and DELETE records that were deleted in the source table. A hedged MERGE sketch is shown after this list.
- Lookup Fields: the MERGE action requires a list of columns that uniquely identify the record. By default, the flow will attempt to predict the Lookup Fields by checking unique indexes in the source and destination tables, but if there is no unique index in either table, the prediction is not guaranteed to be 100% accurate. Use this parameter to define the Lookup Fields in the following format: fully.qualified.table1=field1,field2;fully.qualified.table2=field1,field2.
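For context, this is a minimal sketch of the kind of MERGE the flow performs when Action is set to MERGE, with the Lookup Field driving the ON clause. The table and column names (MYSCHEMA.CUSTOMERS, CUSTOMER_ID, the staged-changes table, and the CDC_OP flag) are hypothetical; the actual statement is generated by the flow.

```sql
-- Hypothetical sketch: Lookup Field CUSTOMER_ID drives the ON clause.
MERGE INTO MYSCHEMA.CUSTOMERS AS t
USING MYSCHEMA.CUSTOMERS_CDC_CHANGES AS s   -- staged CDC changes (hypothetical)
  ON t.CUSTOMER_ID = s.CUSTOMER_ID
WHEN MATCHED AND s.CDC_OP = 'd' THEN DELETE
WHEN MATCHED THEN UPDATE SET t.NAME = s.NAME, t.EMAIL = s.EMAIL
WHEN NOT MATCHED AND s.CDC_OP <> 'd' THEN
  INSERT (CUSTOMER_ID, NAME, EMAIL) VALUES (s.CUSTOMER_ID, s.NAME, s.EMAIL);
```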
The other parameters are similar or the same as for the flow type Bulk load files into Snowflake.
Step 8. Schedule Snowflake CDC flow.
We recommend using a continuous run Schedule type. The idea is that the Flow runs until it is stopped manually, there is an error, or (if configured) there are no more new CDC events for an extended period of time. It restarts automatically after a configurable number of seconds.
Monitor running CDC flow
Read how to monitor running CDC flow.
A pipeline with independent extract and load flows
Create and schedule Extract flow
The CDC extract flow extracts data from a CDC-enabled database and creates CSV files with CDC events in the configured location. These files are loaded into Snowflake by the Load Flow.
There is no need to create a separate Flow for the initial load. The first time it connects to a CDC-enabled source database, it reads a consistent snapshot of all of the included databases and tables. When that snapshot is complete, the Flow continuously reads the changes that were committed to the transaction log and generates the corresponding insert, update, and delete events.
Read more about CDC in Etlworks.
Step 1. Create a CDC Connection for the source database.
Read how to create a CDC connection.
Step 2. Create a file storage connection for CDC events.
Depending on how you prefer to stage files with CDC events for loading data into Snowflake, create one of the following connections:
- The CDC events connection for loading data from the internal Snowflake stage. The connection, by default, points to the directory {app.data}/debezium_data/events.
- S3 connection for loading data from the S3 external stage.
- Azure Storage connection for loading data from the Azure external stage.
- Google Cloud Storage connection for loading data from the Google Cloud external stage.
To improve performance when loading data from cloud storage such as S3, Azure Storage, and Google Cloud Storage, it is recommended that you enable GZip archiving.
Step 3. Create a connection for history and offset files.
Read how to create CDC Offset and History connection.
Etlworks CDC connectors store the history of DDL changes for the monitored database in the history file and the current position in the transaction log in the offset file.
Typical CDC extract flow starts by snapshotting the monitored tables (A) or starts from the oldest known position in the transaction (redo) log (B), then proceeds to stream changes in the source database (C). If the Flow is stopped and restarted, it resumes from the last recorded position in the transaction log. The connection created in this step can be used to reset the CDC pipeline and restart the process from scratch.
The connection, by default, points to the directory {app.data}/debezium_data.
Step 4. Create CSV format.
This format is used to create CSV files with CDC events.
Read how to create CSV format.
Enable the following properties:
- Always enclose
- Escape double-quotes
- Save Metadata
Step 5. Create CDC extract flow.
In Flows click Add flow. Type in cdc in Select Flow type. Select Stream CDC events, create files.
Left to right: select the CDC connection created in step 1, select tables to monitor in FROM, select the connection created in step 2 to stage files with CDC events, and select the format created in step 4.
You can now execute the flow manually or schedule it to run continuously.
To stop the CDC Flow manually, click Stop / Cancel.
Note that, as configured, the CDC flow never stops automatically. This is the recommended configuration. You can configure the CDC Connection to stop when there are no more new CDC events for an extended period of time. Read more.
Step 6. Schedule CDC extract flow.
We recommend using a continuous run Schedule type. The idea is that the extract Flow runs until it is stopped manually, there is an error, or (if configured) there are no more new CDC events for an extended period of time. It restarts automatically after a configurable number of seconds.
Monitor running CDC extract flow
Read how to monitor running CDC extract flow.
Create and schedule Load flow
This Flow is used to bulk load files created by the CDC extract flow into Snowflake.
Read more about bulk load flow.
Step 1. Create a new Snowflake connection.
This Connection will be used as a destination.
When creating a Connection, set the Stage name. For loading files in cloud storage, the named external stage must be configured to read data from the bucket or blob configured for the cloud storage Connection created in Step 2 of the extract flow.
Step 2. Add new bulk load flow.
In Flows click [+], type in bulk load files into snowflake, and select the Flow.
Step 3. Configure load transformation.
Select or enter the following attributes of the transformation (left to right):
1. Storage Connection for CDC events created in Step 2 of the extract flow.
2. CSV Format created in Step 4 of the extract flow.
3. A wildcard filename that matches the names of the files created by the CDC extract flow: *_cdc_stream_*.csv for uncompressed files or *_cdc_stream_*.gz for compressed files.
Note: If the source connection supports default wildcard templates (the parameter Contains CDC events is enabled or the source connection is created using the CDC Events connector), the wildcard filename can be selected in FROM.
4. Snowflake Connection created in step 1.
5. The wildcard destination table name in the following format: SCHEMA.*, where SCHEMA is the Snowflake schema to load data into. You can also use a fully qualified table name: DATABASE.SCHEMA.*.
Step 4. Configure MERGE into the Snowflake tables.
By default, the bulk load flow INSERTs data into the Snowflake table. To configure MERGE:
1. Click the MAPPING button.
2. Set Action to CDC MERGE and enable Predict lookup fields.
Step 5. Configure flow to automatically handle source schema changes.
The load flow always creates a table in Snowflake if it does not exist.
Enable Alter target table if the source has columns that the target table doesn't have to automatically add missing columns to the target table (see the sketch below).
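For illustration, this is roughly the DDL that such schema-drift handling amounts to in Snowflake. The table and column names (MYSCHEMA.CUSTOMERS, PHONE) are hypothetical; the flow issues the appropriate statement automatically when it detects a new source column.

```sql
-- Hypothetical example: a new PHONE column appeared in the source table.
ALTER TABLE MYSCHEMA.CUSTOMERS ADD COLUMN PHONE VARCHAR;
```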
You can now execute the Load flow manually.
Step 6. Schedule load flow.
Schedule flow to run as often as needed. These are the options:
- Run flow periodically (as often as once a minute).
- Run flow continuously (as often as once a second).