1. New sign-up experience and life-cycle notifications
In this update, we completely revamped the end-user experience for adding a new account. The biggest change is that we no longer auto-generate a password. Instead, we send the user an invite containing a short-lived link that can be used to complete the registration.
We have also added more notifications for important life-cycle events, such as account activation and deactivation.
2. Ability to capture and set HTTP headers in user-defined APIs
Affected areas: user-defined PUSH APIs
In many cases, important information is passed in the HTTP headers.
It has always been possible to set HTTP headers when working with third-party APIs, but the ability to work with HTTP headers in user-defined APIs was previously limited.
In this update, we introduced the ability to capture and set HTTP headers in user-defined APIs.
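As a rough illustration, a user-defined API handler can now read inbound headers and set outbound ones. The `request` and `response` objects below are hypothetical stand-ins, not the product's actual scripting objects; consult the product documentation for the real names.

```javascript
// Sketch: capturing a request header and setting response headers in a
// user-defined API handler. `request` and `response` are hypothetical
// stand-ins. Inbound header keys are assumed to be lower-cased already.
function handle(request, response) {
  // Capture an inbound header.
  const apiKey = request.headers['x-api-key'];
  if (!apiKey) {
    // Set headers on the rejection response.
    response.status = 401;
    response.headers['WWW-Authenticate'] = 'ApiKey';
    return response;
  }
  // Set outbound headers on the success response.
  response.status = 200;
  response.headers['Content-Type'] = 'application/json';
  response.headers['X-Request-Id'] = request.headers['x-request-id'] || 'n/a';
  return response;
}
```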
3. New fields are available when working with captured and ignored exceptions
Prior to this update, it was possible to ignore all or specific exceptions during flow execution. All ignored exceptions were captured and stored in the etlConfig object and could be used later to save the error in a database or send a notification.
In this update, we added new fields to the exception object. The following example shows how to use the new fields when sending a notification about an ignored exception.
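A sketch of such a notification is below. The field names (`flowName`, `code`, `message`, `timestamp`) are illustrative assumptions about the exception object's shape; check the product documentation for the actual fields.

```javascript
// Sketch: building a notification message from an ignored exception.
// The field names used here are assumptions, not the documented shape.
function formatExceptionNotification(exception) {
  return [
    'Flow:      ' + exception.flowName,
    'Code:      ' + exception.code,
    'Message:   ' + exception.message,
    'Timestamp: ' + new Date(exception.timestamp).toISOString()
  ].join('\n');
}
```

The resulting string could then be passed to whatever email or messaging step the flow already uses.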
4. Automatic conversion of the SQL Server TIMESTAMP field to LONG
Prior to this update, the MS SQL Server binary TIMESTAMP and ROWVERSION fields were loaded as a byte array and mapped to the BINARY data type. Note that BINARY fields are automatically Base64-encoded when saved as part of a CSV file, which caused issues when loading tables with these fields into Snowflake or Amazon Redshift.
In this update, we introduced an automatic conversion of the MS SQL Server binary TIMESTAMP and ROWVERSION fields to LONG when creating CSV files. It is no longer necessary to use SQL to convert the TIMESTAMP or ROWVERSION to a number.
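Conceptually, the conversion decodes the 8-byte big-endian ROWVERSION value into an integer instead of Base64-encoding the raw bytes. The sketch below shows the idea; it is not the product's internal code.

```javascript
// Sketch: decoding an 8-byte SQL Server TIMESTAMP/ROWVERSION value
// (big-endian) into an integer. BigInt is used because a 64-bit value
// can exceed JavaScript's Number safe-integer range.
function rowversionToLong(bytes) {
  let value = 0n;
  for (const b of bytes) {
    value = (value << 8n) | BigInt(b);
  }
  return value;
}

// 0x00000000000007D1 -> 2001
rowversionToLong([0, 0, 0, 0, 0, 0, 0x07, 0xD1]); // 2001n
```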
5. Added option to wait for the completion of the multi-threaded transformations
Affected areas: all source-to-destination flows.
In this update, we introduced an option to set a synchronization point for parallel source-to-destination transformations. It allows the flow to wait for the completion of the parallel transformations before proceeding to the next step.
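The synchronization point behaves like a join on the parallel branches. A minimal sketch, modeling the transformations as async functions (the names here are placeholders, not the product's API):

```javascript
// Sketch: a synchronization point for parallel transformations, modeled
// with Promise.all. `transformations` and `nextStep` are placeholders.
async function runFlow(transformations, nextStep) {
  // Start every transformation in parallel...
  const running = transformations.map(t => t());
  // ...then wait at the synchronization point until all of them finish
  // before the flow proceeds to the next step.
  const results = await Promise.all(running);
  return nextStep(results);
}
```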
1. Better handling of partially matching filenames in Redshift and Snowflake flows
Prior to this update, when destination table names partially matched (for example, customerorders), the system created load files with partially matching names, which in some cases caused collisions when loading data into multiple Amazon Redshift or Snowflake tables in parallel.
In this update, we fixed this by ensuring that the generated filenames are unique enough to avoid collisions even when the table names partially match.