1. Permissions-driven UI
Affected areas: roles and permissions
Prior to this update, we relied on the backend to notify you that certain functions were not available. For example, if a user with the viewer role tried to update a Flow, the system generated an error.
In this update, we introduced the permission-driven UI, which adjusts itself based on the roles and permissions granted to the specific user. Using the example above, Save is now always disabled for users with the viewer role.
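The idea can be sketched as a simple lookup from role to allowed actions; the role and action names below are hypothetical illustrations, not the actual Etlworks API:

```python
# Illustrative sketch only: role and action names are hypothetical,
# not the actual Etlworks permission model.
PERMISSIONS = {
    "administrator": {"view", "edit", "save", "delete"},
    "editor": {"view", "edit", "save"},
    "viewer": {"view"},
}

def is_enabled(role: str, action: str) -> bool:
    """Return True if the UI control for `action` should be enabled."""
    return action in PERMISSIONS.get(role, set())

# Save is disabled for viewers before any request ever reaches the backend.
print(is_enabled("viewer", "save"))   # False
print(is_enabled("editor", "save"))   # True
```

With this approach the UI never has to wait for a backend error to learn that an action is off-limits.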
We also renamed the Executor role to a more logical name.
2. Working with on-premise data
Affected area: working with on-premise data
In this update, we made our remote Integration Agent generally available. Prior to this update, this technology was available to selected customers only. Agents can be used to work with on-premise data.
Integration Agent is a zero-maintenance, easy-to-configure, fully autonomous ETL engine running as a background service behind the company’s firewall.
It can be installed on Windows and Linux.
Remote Integration Agents allow you to run Flows that use on-premise applications and databases. Outbound communication between the Etlworks cloud instance and the Remote Integration Agent is fully secured, and data is not staged.
Note that this is the first public release of the Integration Agent. In the next couple of releases, we plan to dramatically improve integration between remote Agents and the cloud instance. Please stay tuned.
Read about using remote Integration Agents.
3. The ability to use SQL to view data in a specific Excel or Google Sheets worksheet in Explorer
Prior to this update, you had to configure a specific Excel Format or Google Sheets Connection to view the data in a specific worksheet in Etlworks Explorer.
In this update, we extended our built-in SQL engine with the ability to select data from a specific worksheet.
- Selecting data from a specific Excel worksheet in Etlworks Explorer
- Selecting data from a specific Google Sheets worksheet in Etlworks Explorer
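As a rough analogy for how a SQL engine can expose worksheet data as a queryable table (this is not the Etlworks implementation, and the table and column names are invented), here is the concept with Python's in-memory SQLite engine:

```python
import sqlite3

# Rough analogy only: rows that would come from a parsed worksheet.
orders_sheet = [("2023-01-01", "widget", 3), ("2023-01-02", "gadget", 5)]

con = sqlite3.connect(":memory:")
# Expose the worksheet as a table named after the sheet.
con.execute("CREATE TABLE orders (order_date TEXT, item TEXT, qty INTEGER)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)", orders_sheet)

# Standard SQL now works against the worksheet data.
rows = con.execute("SELECT item, qty FROM orders WHERE qty > 3").fetchall()
print(rows)  # [('gadget', 5)]
```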
4. The ability to read from and write into different worksheets using the same Google Sheets Connection
Affected area: working with Google Sheets
In Etlworks Integrator, the name or index of the Google Sheets worksheet is configured in the Google Sheets Connection.
Prior to this update, you had to create a separate Connection for each worksheet you wanted to read from or write into.
In this update, we introduced the ability to read from and write into different worksheets using the same Google Sheets Connection.
This functionality is similar to the one that we introduced earlier for Excel.
5. The ability to delete source files after loading them into the destination
Affected area: source-to-destination transformation when the source is a file
One of the typical use cases is when you want to extract and load a file and then delete it.
In this update, we added the ability to automatically delete the source file(s) after loading data into the destination.
The transformation can be configured to delete the source file(s) on success, on error, or both.
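The delete-on-success/on-error behavior can be sketched as follows; the function and its parameters are illustrative stand-ins, not the Etlworks API:

```python
import os
import tempfile

# Minimal sketch of "delete source files after load"; the function name
# and parameters are illustrative, not the actual Etlworks API.
def load_and_cleanup(path, load, delete_on="success"):
    """Load a file, then delete it per delete_on: 'success', 'error', or 'both'."""
    try:
        load(path)
        ok = True
    except Exception:
        ok = False
    if (ok and delete_on in ("success", "both")) or \
       (not ok and delete_on in ("error", "both")):
        os.remove(path)
    return ok

# Demo: the "load" succeeds, so the source file is removed afterwards.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
load_and_cleanup(tmp.name, lambda p: None, delete_on="success")
print(os.path.exists(tmp.name))  # False
```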
6. New After Load scripting transformation
Affected area: scripting transformations
In addition to the existing Before Extract, For Each Row, and After Extract scripting transformations, we added the After Load scripting transformation.
As the name suggests, it is executed at the last step of the extract-transform-load pipeline, after the data has been successfully loaded into the destination. The transformation can be used to set global and Flow variables, log messages, clean up, or for any other purpose.
You can use JavaScript (default) or Python in the scripting transformations.
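A hypothetical After Load script might look like the following; `flow_variables` and the logger are stand-ins for the objects Etlworks exposes to scripting transformations, not the actual API:

```python
import logging

# Hypothetical sketch of an After Load script. `flow_variables` and the
# logger stand in for whatever objects the platform exposes to scripts.
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("after_load")

flow_variables = {}

def after_load(rows_loaded):
    # Runs once, after data was successfully loaded into the destination:
    # record a Flow variable, log a message, clean up state, etc.
    flow_variables["last_rows_loaded"] = rows_loaded
    log.info("Loaded %d rows; cleaning up temporary state", rows_loaded)

after_load(1250)
print(flow_variables["last_rows_loaded"])  # 1250
```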
7. The ability to set the number of threads when copying files to the internal Snowflake stage
Affected area: loading data in Snowflake
When configuring a Flow to load data in Snowflake, it was always possible to use the internal named Snowflake stage. All you need to do is create a named stage and set the destination in the Snowflake source-to-destination transformation to the Server storage. It can save you some money since, most likely, you are already paying for the internal Snowflake storage.
The Flow then needs to copy the files being loaded from the local hard drive (the volume associated with the Server storage Connection) to the internal Snowflake stage. We use the Snowflake PUT command to copy the files into the internal storage.
One of the parameters supported by the PUT command is PARALLEL, the number of threads. The default number of threads is 4, but in this update we added the ability to change it. It can be anything from 1 (no parallel load) to 99.
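The helper below is an illustrative sketch (the function and file names are invented), but the statement it builds follows Snowflake's documented PUT syntax:

```python
def put_command(local_path, stage, parallel=4):
    """Build a Snowflake PUT statement; PARALLEL is the upload thread
    count: 1 (no parallel load) to 99, default 4."""
    if not 1 <= parallel <= 99:
        raise ValueError("PARALLEL must be between 1 and 99")
    return f"PUT file://{local_path} @{stage} PARALLEL={parallel}"

print(put_command("/tmp/orders.csv", "my_stage", parallel=8))
# PUT file:///tmp/orders.csv @my_stage PARALLEL=8
```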
8. The ability to limit the number of files processed by COPY/MOVE/DELETE/RENAME/ZIP/UNZIP Flows
Affected area: file management Flows
In this update, we introduced the ability to limit the number of files that can be processed by the COPY, MOVE, DELETE, RENAME, ZIP, and UNZIP Flows.
If the value of the property Maximum Number of Files to Process is greater than 0, the Flow will stop processing files once the number of processed files reaches the configured threshold.
Use it if the number of files is extremely large and you want to process them in chunks.
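The threshold behavior can be sketched like this; the function is an illustration of the concept, not the Etlworks implementation (0 is assumed to mean "no limit"):

```python
# Sketch of a "maximum number of files to process" threshold:
# stop after the configured limit; 0 means no limit.
def process_files(files, process, max_files=0):
    processed = 0
    for name in files:
        if max_files > 0 and processed >= max_files:
            break
        process(name)
        processed += 1
    return processed

batch = [f"file_{i}.csv" for i in range(10)]
print(process_files(batch, lambda f: None, max_files=3))  # 3
print(process_files(batch, lambda f: None))               # 10 (no limit)
```

Running the Flow repeatedly with such a limit processes a very large backlog of files in manageable chunks.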