Apache Avro defines a compact, schema-based data format designed for data-intensive applications, with libraries available in a variety of programming languages.
Etlworks can read and write Avro files, including nested Avro files. Files in Avro format can also be used to load data into Snowflake.
The following parameters are available when configuring the Avro format:
- Compression Codec - the compression algorithm used when creating Avro files. You do not need to select an algorithm if your files are uncompressed or if you are only reading Avro files.
- Normalize nested records with one field - if this option is enabled (it is disabled by default), the Avro parser will create fewer nested datasets when reading nested Avro files that contain an array with only one field.
- Column names compatible with SQL - converts column names to SQL-compatible names by removing all characters except alphanumeric characters and spaces.
- Treat 'null' as null - if this option is enabled, Integrator will treat string values equal to 'null' as actual nulls (no value).
- Trim Strings - if this option is enabled, Integrator will trim leading and trailing whitespace from values.
- Schema - the Avro schema used when creating Avro files. You can leave this field empty if you are only reading Avro files.
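For the Schema parameter, Avro schemas are written in JSON. Below is a minimal example of what such a schema might look like; the record and field names (`customer`, `id`, `name`, `email`) are illustrative, not taken from Etlworks defaults:

```json
{
  "type": "record",
  "name": "customer",
  "fields": [
    {"name": "id", "type": "long"},
    {"name": "name", "type": ["null", "string"], "default": null},
    {"name": "email", "type": ["null", "string"], "default": null}
  ]
}
```

Wrapping a field's type in a `["null", "string"]` union makes it optional, which pairs naturally with the Treat 'null' as null option described above.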
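The effect of the Column names compatible with SQL option can be sketched as a simple character filter. This is a hypothetical illustration of the rule as described above (keep only alphanumeric characters and spaces), not Etlworks' actual implementation:

```python
import re

def sql_compatible(name: str) -> str:
    # Keep only alphanumeric characters and spaces, dropping
    # everything else (punctuation, dashes, symbols, etc.).
    return re.sub(r"[^A-Za-z0-9 ]", "", name)

print(sql_compatible("order-id#1"))   # -> orderid1
print(sql_compatible("unit price"))   # -> unit price (spaces kept)
```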