1. Added option to automatically add the columns to the destination table
User suggestion: Alter destination table if the column doesn't exist
Prior to this release, if the source included columns that did not exist in the destination, those columns were ignored.
In this release, we added an option to automatically alter the destination table by adding new columns.
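As a sketch of what this option does, the missing columns can be derived by comparing the source and destination schemas and emitting the corresponding ALTER TABLE statements. The table name, column names, and types below are hypothetical, not the product's actual API:

```python
def missing_column_ddl(table, source_cols, dest_cols):
    """Return ALTER TABLE statements for columns present in the
    source schema but absent from the destination table.
    source_cols: dict of column name -> data type (in source order).
    dest_cols: set of column names that already exist in the destination."""
    return [
        f"ALTER TABLE {table} ADD COLUMN {name} {dtype}"
        for name, dtype in source_cols.items()
        if name not in dest_cols
    ]

# Hypothetical example: the source has a column the destination lacks.
source = {"id": "INTEGER", "name": "VARCHAR(255)", "created_at": "TIMESTAMP"}
dest = {"id", "name"}
ddl = missing_column_ddl("orders", source, dest)
```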
2. Added option to calculate the destination name (TO) when loading files or tables by a wildcard name
User suggestion: Allow table renaming when using wildcards
In this release, we added an option to calculate the destination name (file, table, etc.) when processing files or tables by a wildcard.
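One way to picture the calculation, assuming a single `*` wildcard in both the source pattern and the destination template (the patterns and names here are made up for illustration):

```python
def calc_destination(source_name, pattern, template):
    """If source_name matches the wildcard pattern, substitute the
    captured part of the name into the template's '*' placeholder.
    Assumes exactly one '*' in both pattern and template."""
    prefix, suffix = pattern.split("*", 1)
    if not (source_name.startswith(prefix) and source_name.endswith(suffix)):
        return None  # the source name does not match the wildcard
    captured = source_name[len(prefix):len(source_name) - len(suffix)]
    return template.replace("*", captured)

# Hypothetical example: file orders_2021.csv loads into table stg_orders_2021.
dest = calc_destination("orders_2021.csv", "orders_*.csv", "stg_orders_*")
```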
3. Performance improvements when loading files into Snowflake and Redshift
Prior to this update, the File to Snowflake and File to Redshift Flows were reading the source CSV files line by line, recreating them in the stage, and then executing the COPY command to load the staged files into Snowflake or Redshift.
In this update, we added the ability to skip the step where files are extracted, transformed, and re-created in the stage, and instead copy the files directly to the stage. When loading large files, this can provide up to a 10x performance boost.
When this option is enabled, the system cannot perform any transformations. It simply loads the existing files as-is.
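The difference between the two paths can be sketched as follows; the function and parameter names are illustrative, not the product's actual API:

```python
import os
import shutil

def load_to_stage(path, stage_dir, direct_copy=False, transform=None):
    """Sketch of the two loading paths: direct copy vs. line-by-line
    re-creation. Returns the path of the staged file."""
    dest = os.path.join(stage_dir, os.path.basename(path))
    if direct_copy:
        # Fast path: copy the file to the stage as-is.
        # No transformations are possible on this path.
        shutil.copyfile(path, dest)
    else:
        # Slow path: read the source line by line, optionally
        # transform each line, and re-create the file in the stage.
        with open(path) as src, open(dest, "w") as out:
            for line in src:
                out.write(transform(line) if transform else line)
    return dest
```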
4. Improvements for log-based change replication (CDC)
4.1 Upgraded Debezium to 1.2
4.2 Multithreading: the ability to load files for the same destination sequentially
Affected area: CDC Flows
In this update, we added the ability to load files for the same destination sequentially. This is especially important when loading CDC events (DELETE) serialized as CSV files in parallel: even when loading in parallel, the events for the same table must be replayed in the same order in which they were emitted by the source database.
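The scheduling described above can be sketched as: group files by destination, load the groups in parallel, and keep each group strictly sequential. The function names and the `load_one` callback are illustrative, not the product's actual API:

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def load_in_parallel(files, load_one):
    """Load staged files in parallel across destinations while keeping
    each destination's files sequential. 'files' is a list of
    (destination, path) pairs in the order the events were emitted;
    'load_one' is a hypothetical loader callback."""
    groups = defaultdict(list)
    for dest, path in files:
        groups[dest].append(path)

    def load_group(dest):
        # Sequential within one destination: events for the same table
        # are replayed in the order they were emitted.
        for path in groups[dest]:
            load_one(dest, path)

    # Parallelism only across different destinations.
    with ThreadPoolExecutor() as pool:
        for dest in groups:
            pool.submit(load_group, dest)
```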
4.3 New option for MongoDB CDC connector
Affected area: MongoDB CDC connector
Read how to set the new name of the Object (_id) field.
5. Added column length multiplier
Affected area: loading data in Redshift
In Redshift, there is no NVARCHAR datatype, and VARCHAR(1) is 1 byte, not 1 character. As a result, Unicode strings were being truncated when the Flow automatically created the Redshift table from the source.
You can now configure the destination Redshift Connection to multiply the original column length. Even with the multiplier, the maximum column length will not exceed 65535.
6. Added the ability to revoke the OAuth token for Connections that support OAuth2 authentication
7. The ability to recreate the target table when HWM is enabled
User suggestion: Allow "Recreate target table" for hwm transforms
Prior to this update, the Recreate target table if the source has columns that the target table doesn't have option was disabled if the transformation was configured with high watermark change replication (HWM).
In this update, we removed this restriction.
8. Usability improvements
8.1 New Import report
In this update, we redesigned the Flow import report, displayed right after the successful (or unsuccessful) import.
The same report is displayed when importing Connections, Formats, and Listeners.
It is now easier to understand the reason and take corrective action if something went wrong during the import.
8.2 The ability to delete remote Integration Agent
Affected area: configuring remote Integration Agents
It is now possible to remove previously configured remote Integration Agents used to run data integration Flows behind the firewall.
9. New connectors
10. Google BigQuery connector is now set to auto-commit by default
Affected area: Google BigQuery connector
Since the Google BigQuery connector doesn't support manual transactions (commit/rollback), it is now set to auto-commit by default.
1. Fixed SSH Connection leak when extracting data using a CDC Flow
Prior to this release, if the CDC Connection was using an SSH tunnel, the system was not closing the SSH session when it tried to rewind the MySQL binlog position or when the underlying database Connection was dropped by the server.
2. Fixed parsing of CSV files when the value of a column is a single double-quote character
Affected area: CSV Format
In this update, we fixed a bug causing an IndexOutOfBoundsException when the value of a column in the CSV file is a single double-quote character (").
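For reference, this is the CSV edge case in question: a field whose value is a single double-quote must be written quoted with the quote doubled, and a correct parser round-trips it. The snippet below demonstrates the rule with Python's csv module, independent of the product's own parser:

```python
import csv
import io

# A field whose value is a single double-quote character ( " ) is
# serialized as a quoted field with the quote doubled: """" in the file.
row = ["id1", '"']
buf = io.StringIO()
csv.writer(buf).writerow(row)

# Reading the serialized row back yields the original values.
parsed = next(csv.reader(io.StringIO(buf.getvalue())))
```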
3. Fixed importing macros referenced in nested Flows
In this update, we fixed a bug that caused an error when importing macros referenced in nested Flows.