New
1. Support for Parquet format: https://support.etlworks.com/hc/en-us/articles/360018660554-Parquet
2. Support for Avro format: https://support.etlworks.com/hc/en-us/articles/360018660234-Avro
3. Etlworks can now handle collisions when importing flows: https://support.etlworks.com/hc/en-us/articles/360014484874#CollisionPolicies
4. Ability to add tags when importing flows: https://support.etlworks.com/hc/en-us/articles/360014484874#Addingtags
5. The JSON parser can now handle arrays whose objects contain different sets of fields. The following is an example of an irregular JSON array that can now be parsed correctly (a sketch of the normalization follows the example):
[
{"first":"Joe","last":"Doe","dob":"01/01/2001"},
{"first":"Simba","height":"24","weight":"100","age":3},
{"first":"Some","last":"Body","age":17}
]
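For illustration only (this is not Etlworks code), the minimal sketch below shows the kind of normalization such a parser performs, assuming the column set becomes the union of all keys and missing fields are filled with empty values:

# Parse an irregular JSON array into a regular tabular structure:
# columns are the union of all keys, missing fields become None.
import json

raw = '''
[
{"first":"Joe","last":"Doe","dob":"01/01/2001"},
{"first":"Simba","height":"24","weight":"100","age":3},
{"first":"Some","last":"Body","age":17}
]
'''

records = json.loads(raw)

# Collect the union of all keys, preserving first-seen order
columns = []
for rec in records:
    for key in rec:
        if key not in columns:
            columns.append(key)

# Normalize every record to the full column set
rows = [[rec.get(col) for col in columns] for rec in records]

print(columns)
for row in rows:
    print(row)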
6. The Dimension is no longer required when configuring the Google Analytics connection. Previously, if no dimension was selected or entered, the default dimension (ga:Browser) was used.
7. Added the ability to configure Timeout and Auto-retry for the Google Analytics connection: https://support.etlworks.com/hc/en-us/articles/360013966634#Connectionparameters
8. Snowflake flows can now create Avro, Parquet, and XML files in the stage (in addition to the previously available CSV and JSON). Read about the data formats supported by Snowflake: https://docs.snowflake.net/manuals/user-guide/data-load-prepare.html A short Parquet example follows.
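As a hypothetical sketch (not Etlworks code; it assumes the pyarrow package, and the file name and column names are illustrative), this shows producing a Parquet file of the kind a Snowflake flow can now place in the stage:

# Write a small Parquet file that Snowflake can ingest from a stage,
# e.g. with PUT followed by COPY INTO ... FILE_FORMAT = (TYPE = PARQUET).
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "first": ["Joe", "Simba", "Some"],
    "age": [None, 3, 17],  # nullable column; pyarrow fills missing values
})

pq.write_table(table, "staged_data.parquet")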
Fixed
1. A bug that prevented using dynamically calculated file names when copying and moving files to S3 and Google Cloud Storage.
2. A bug that prevented viewing nested documents in Explorer under certain rare conditions.