1. Built-in change data capture (CDC) for MySQL, Postgres, SQL Server, and Oracle:
Read about other change replication options available in Etlworks Integrator:
2. New Flows for ingesting data into Amazon Redshift:
3. New Flow for directly loading files into Snowflake (without extract-load):
4. New Flow for directly loading files into Redshift (without extract-load):
5. It is now possible to configure the SQL action (MERGE) for Snowflake and Redshift data ingestion Flows.
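For context, a MERGE-style action upserts staged rows into the target table: matching rows are updated, new rows are inserted. The sketch below builds the kind of statement such a Flow might issue; it is not Etlworks' actual implementation, and the table and column names are hypothetical.

```python
# Illustrative only: builds an ANSI-style MERGE statement of the kind a
# Snowflake/Redshift ingestion Flow could run. Names are hypothetical.
def build_merge_sql(target: str, staging: str, key: str, columns: list) -> str:
    """Build a MERGE that updates matching rows and inserts new ones."""
    set_clause = ", ".join(f"t.{c} = s.{c}" for c in columns)
    col_list = ", ".join(columns)
    src_list = ", ".join(f"s.{c}" for c in columns)
    return (
        f"MERGE INTO {target} t USING {staging} s ON t.{key} = s.{key} "
        f"WHEN MATCHED THEN UPDATE SET {set_clause} "
        f"WHEN NOT MATCHED THEN INSERT ({key}, {col_list}) "
        f"VALUES (s.{key}, {src_list})"
    )

print(build_merge_sql("customers", "customers_stage", "id", ["name", "email"]))
```

Snowflake supports MERGE natively; how the equivalent action is executed on a given target may differ.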
7. Snowflake and Redshift Flows now automatically create an ad-hoc Format for loading data (previously, a Format had to be created as a separate object).
8. Snowflake Flows can now load data from the internal and Azure stages, in addition to the previously supported S3 stage.
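As background (standard Snowflake behavior, not Etlworks-specific), loading through an internal stage is typically a two-step PUT + COPY INTO sequence. A minimal sketch, with a hypothetical table and file path:

```python
# Background sketch of loading via a Snowflake internal table stage.
# These are standard Snowflake SQL commands; the table and file names
# are illustrative only.
table = "orders"
local_file = "/tmp/orders.csv"

# 1) Upload the local file to the table's internal stage (@%table).
put_sql = f"PUT file://{local_file} @%{table} AUTO_COMPRESS=TRUE"

# 2) Load the staged file into the table.
copy_sql = (
    f"COPY INTO {table} FROM @%{table} "
    "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
)

print(put_sql)
print(copy_sql)
```

External stages (S3, Azure) skip the PUT step: COPY INTO reads directly from the named external stage.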
8. It is now possible to use CDC when the target is:
9. Usability enhancements:
New metadata (tables, files, fields, etc.) selector:
New layout for the transformation editor. It is now possible to select TO objects from the list without entering the mapping editor:
10. Up to 10x performance improvement when using IfExist SQL actions.
11. It is now possible to split a large CSV file into smaller chunks while creating it. This is especially useful when ingesting large (millions of rows) datasets into Snowflake and Redshift:
12. It is now possible to test XSL transformations in Etlworks Explorer:
Fixed a bug that prevented creating an XML attribute with an empty value.
Other fixes and performance improvements.