New
1. Improved Mapping in Redshift flows
Previously, the Redshift table name had to be entered manually as a transformation parameter, and it wasn't possible to select the actual Redshift column when mapping columns in the source to the columns in the destination.
In this update, we simplified mapping for Redshift flows.
First, it is now possible to select or enter the Redshift table name as the TO in the source-to-destination transformation. Entering the staged file name in the TO and the Redshift Table Name in the Parameters is no longer required but is still supported for backward compatibility.
Second, it is now possible to select the actual Redshift column name when mapping columns in the source to the columns in the destination.
2. Added ability to automatically adjust the list of columns to load based on the actual columns in the target Redshift table
It is quite common for the source (for example, a table in an OLTP database) and the destination (a Redshift table) to have a different number of columns. When that is the case, by default, the flow fails because the Redshift COPY command cannot load files that have more or fewer columns than the target table.
In this update, we introduced the ability to automatically adjust the list of columns to load based on the actual columns in the target Redshift table.
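Conceptually, the adjustment keeps only the columns that exist in both the source file and the target table, so the COPY column list always matches the table. Here is a minimal Python sketch of that idea; it is an illustration, not the product's implementation, and all table and column names are hypothetical:

```python
def adjust_columns(source_columns, target_columns):
    """Keep only the source columns that exist in the target table,
    preserving the target table's column order for the COPY column list."""
    source = {c.lower() for c in source_columns}
    return [c for c in target_columns if c.lower() in source]

# Hypothetical example: the source has an extra "legacy_id" column
# and is missing the target's "updated_at" column.
source_cols = ["id", "name", "legacy_id", "created_at"]
target_cols = ["id", "name", "created_at", "updated_at"]

copy_columns = adjust_columns(source_cols, target_cols)
# The adjusted list can then be used in an explicit COPY column list.
copy_sql = f'COPY my_table ({", ".join(copy_columns)}) FROM ...'
```

With the adjusted list, COPY no longer sees a column-count mismatch; columns missing from the source are simply left to their defaults.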
3. Added ability to set Snowflake Stage Name at the connection level
It is now possible to set the Snowflake Stage Name at the Snowflake connection level and use it in all transformations.
The Stage Name is no longer a required transformation parameter, but we kept it for backward compatibility.
4. Improved mapping for Snowflake flows
Previously, the Snowflake table name had to be entered manually as a transformation parameter, and it wasn't possible to select the actual Snowflake column when mapping columns in the source to the columns in the destination.
In this update, we simplified mapping for Snowflake flows.
First, it is now possible to select or enter the Snowflake table name as the TO in the source-to-destination transformation. Entering the staged file name in the TO and the Snowflake Table Name in the Parameters is no longer required but is still supported for backward compatibility.
Second, it is now possible to select the actual Snowflake column name when mapping columns in the source to the columns in the destination.
5. Added ability to automatically adjust the list of columns to load, based on the actual columns in the target Snowflake table
It is quite common for the source (for example, a table in an OLTP database) and the destination (a Snowflake table) to have a different number of columns. When that is the case, by default, the flow fails because the Snowflake COPY INTO command cannot load files that have more or fewer columns than the target table.
In this update, we introduced the ability to automatically adjust the list of columns to load based on the actual columns in the target Snowflake table.
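Snowflake's COPY INTO statement accepts an optional column list, which is what makes this adjustment possible: once the loadable columns are known, the statement names them explicitly. A minimal sketch of building such a statement in Python follows; the table, stage, and file names are hypothetical, and this is an illustration rather than the product's implementation:

```python
def build_copy_into(table, columns, stage, file_name):
    """Build a Snowflake COPY INTO statement that names only the columns
    present in both the staged file and the target table."""
    col_list = ", ".join(columns)
    return (f"COPY INTO {table} ({col_list}) "
            f"FROM @{stage}/{file_name} "
            f"FILE_FORMAT = (TYPE = CSV)")

# Hypothetical example: load only the two columns shared with the target.
stmt = build_copy_into("orders", ["id", "amount"], "my_stage", "orders.csv")
```

Because the column list is explicit, target columns absent from the file are left to their defaults instead of failing the load.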
6. Added support for HTTP PATCH
The HTTP connector now supports the HTTP PATCH method, which works similarly to PUT and POST.
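For reference, a PATCH request is constructed exactly like a PUT or POST; only the method name differs. A small sketch using Python's standard library, with a hypothetical URL and payload:

```python
import urllib.request

# Build a PATCH request; everything but method= is identical to PUT/POST.
# The URL and JSON body are placeholders for illustration.
req = urllib.request.Request(
    "https://api.example.com/items/42",
    data=b'{"status": "archived"}',
    headers={"Content-Type": "application/json"},
    method="PATCH",
)
# urllib.request.urlopen(req) would send it; here we only build the request.
```

PATCH conventionally applies a partial update to a resource, whereas PUT replaces it; the connector treats the request/response cycle the same way in both cases.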
7. User-defined PUSH APIs can now return a configurable response
Previously, calls to user-defined PUSH API endpoints were asynchronous and returned a pre-defined response immediately.
In this update, we introduced the ability to call PUSH API endpoints synchronously and return a user-configurable response.
8. Flow variables passed in "run flow by name" API are now propagated to global variables
The Run Flow by Name API endpoint accepts optional URL parameters.
Previously, these parameters were only propagated as Flow Variables and could only be referenced in the Source and Destination queries.
After this update, the optional URL parameters are also propagated to global variables. Global variables can be used in the FROM and TO fields of the source-to-destination transformation, as well as in connection parameters.
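To illustrate, each query-string parameter appended to the call becomes a flow variable and, after this update, a global variable as well. A minimal Python sketch of composing such a call; the base URL, path, and parameter names are hypothetical, and the exact REST path depends on your instance:

```python
from urllib.parse import urlencode

# Hypothetical endpoint and parameters for a "run flow by name" call.
# Every key/value pair below would be available inside the flow as a
# flow variable and (after this update) as a global variable.
base = "https://app.example.com/rest/flows/run"
params = {"flowName": "load_orders", "region": "us-east", "batch_size": "500"}
url = f"{base}?{urlencode(params)}"
```

Inside the flow, a value such as `region` could then be referenced not only in Source and Destination queries but also in FROM/TO fields and connection parameters.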
9. Added drag-and-drop and search to the History in Explorer
It is now possible to drag and drop SQL statements from the History in Explorer and to filter previously executed SQL statements.
10. Added ability to filter users by "currently online"
It is now possible to see who is currently online and filter users by "online only".
Fixes and Improvements
1. Fixed handling of fields with the BPCHAR data type in Redshift.
2. Fixed parsing of XML nodes that have both a value and attributes:
<images>
<image order="1">url</image>
<image order="2">url</image>
</images>
3. Fixed multiple bugs related to the Inbound Email connection.
4. Fixed downloading of binary files in Explorer.
5. Multiple fixes and usability improvements in the Mapping Editor and Explorer.
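As an illustration of fix 2, an XML node that carries both a text value and attributes (the `<images>` shape shown above, with the attribute syntax normalized) can be read with Python's standard ElementTree; the URLs here are placeholders:

```python
import xml.etree.ElementTree as ET

# Same structure as the example in fix 2: each <image> node has
# both a text value (the URL) and an "order" attribute.
xml_doc = """
<images>
  <image order="1">http://example.com/a.jpg</image>
  <image order="2">http://example.com/b.jpg</image>
</images>
"""

# Collect (attribute, value) pairs from every child node.
images = [(img.get("order"), img.text) for img in ET.fromstring(xml_doc)]
```

A correct parser must surface both pieces of data per node rather than discarding the attributes or the value.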