When to use this Format
CSV (comma-separated values) is one of the most commonly used data exchange formats. In Etlworks Integrator, you can define which character is used as a separator between values and lines, as well as other parameters.
Use CSV Format when configuring a source-to-destination transformation that reads or writes CSV files.
Other use cases:
- Loading data into Snowflake
- Loading data into Amazon Redshift
- Loading data into Azure Synapse Analytics
- Loading data into Google BigQuery
- Loading data into Greenplum
- Bulk loading data into a database
To create a new CSV Format, go to Connections, select the Formats tab, click Add Format, and type in csv in the search field.
Below are the available parameters:
Delimiter: a character used as a delimiter between values. The default is the comma (,).
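The effect of a custom delimiter can be sketched with Python's csv module (an illustration of the concept, not Etlworks code):

```python
import csv
import io

# Write two rows using a pipe instead of the default comma.
rows = [["id", "name"], ["1", "Anna"]]

buf = io.StringIO()
csv.writer(buf, delimiter="|").writerows(rows)
print(buf.getvalue())  # id|name, then 1|Anna, each terminated with \r\n
```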
Enclosure Character: a character used to enclose the field value. By default, only the values that contain the Delimiter will be enclosed. Enable Always enclose to enclose all values.
Always enclose: if this option is enabled, the system will always enclose fields in quotes (assuming that the Enclosure Character is configured). The default behavior is to enclose fields only if they contain the delimiter character.
Enclose header: if this option is enabled, the system will enclose field names in the header in quotes (assuming that the Enclosure Character is configured). This option is disabled by default.
Escape double-quotes: if double-quotes are used to enclose fields, then a double-quote appearing inside a field will be escaped by preceding it with another double quote.
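Python's csv module follows the same doubling convention, so it can illustrate both behaviors (enclose-only-when-needed and escaped embedded quotes):

```python
import csv
import io

buf = io.StringIO()
# QUOTE_MINIMAL encloses only values that need it; doublequote=True
# escapes a double-quote inside a field by doubling it.
writer = csv.writer(buf, quoting=csv.QUOTE_MINIMAL, doublequote=True)
writer.writerow(["plain", 'He said "hi", then left'])
print(buf.getvalue())  # plain,"He said ""hi"", then left"
```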
Line Separator: a character used as a separator between lines.
Default Extension: the default extension is used when the file name doesn't have an extension. If not entered, dat is the default extension.
Has multiline records: enable this if you expect to have multi-line records in the source document. It only works if the multi-line values are enclosed with the Enclosure Character.
Ignore Byte order mark character (BOM): uncheck this option if you do not want the parser to ignore the byte order mark character (BOM). The default is enabled.
BOM size: the size of the BOM character. If Ignore Byte order mark character (BOM) is enabled for the Format, the connector will attempt to detect the BOM character in the first line and, if it exists, remove it by executing line.substring(bom_char_size). The actual size of the BOM character depends on the file encoding, but in some cases the wrong BOM is used with an otherwise correctly encoded file (very common for UTF-8 encoded files), so the connector truncates more than it should. If you know the size of the BOM character, set BOM size to a positive value equal to the size of the BOM. The most common size for UTF-8 encoded files is 1.
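The stripping step can be sketched in Python; the slice mirrors the line.substring(bom_char_size) call, and bom_size = 1 is an assumption matching the UTF-8 case mentioned above:

```python
# A UTF-8 file decoded without BOM handling often yields a first line
# that starts with the BOM as a single character: '\ufeff'.
line = "\ufeffid,name"

bom_size = 1  # configured BOM size (assumed here)
if line.startswith("\ufeff"):
    line = line[bom_size:]  # equivalent of line.substring(bom_size)

print(line)  # id,name
```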
Template: a template in the CSV Format. If this field is not empty, Etlworks Integrator will use it to populate column names and data types. It is an optional field.
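As a hypothetical illustration (the column names below are made up), a template in the CSV Format could be a header row plus one sample row whose values suggest the data types:

```
first_name,last_name,age
John,Doe,35
```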
Column names compatible with SQL: converts column names to SQL-compatible column names by removing all characters except alphanumeric characters and spaces.
Noname Column: the name of the column when the file does not have a header row. The column name will be the value of this field followed by the column index.
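For example (assuming, hypothetically, that the property is set to column), the generated names follow the pattern value + index:

```python
noname_column = "column"  # the configured Noname Column value (assumed)
num_fields = 3

# Generate names for a headerless file: column1, column2, column3.
names = [f"{noname_column}{i}" for i in range(1, num_fields + 1)]
print(names)
```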
Use First Row for Data: if checked, it is assumed that the file doesn't have a header row, and the first row contains data.
Skip First Row: if this option is enabled, the system skips the first row in a file. Typically, this is used together with Use First Row for Data.
Skip not Properly Formatted Rows: if this option is enabled, the system skips rows which do not conform to the CSV Format. For example, a row might have a different number of columns than other rows do.
Skip Empty Rows: sometimes CSV files contain completely empty rows with no values and delimiters. Etlworks Integrator can be configured to skip these rows. Otherwise, it will generate an exception when reading such a file.
Skip rows with fewer columns than header: if this option is enabled, the system skips rows that have fewer columns than a header row.
Skip rows with more columns than header: if this option is enabled, the system skips rows that have more columns than a header row.
Document has extra data columns: if this option is enabled, Etlworks Integrator will be able to read CSV documents, even if the number of header columns is less than the number of data columns.
Enforce number of data columns: if this option is enabled, Etlworks Integrator will enforce the number of data columns by setting it to the same value as the number of fields in the CSV header. Use it if the number of fields in the header is less than the number of data columns and you only want to parse the data columns which have the corresponding field in the header.
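The enforcement described above can be sketched as a simple truncation of each data row to the header length (a conceptual illustration, not Etlworks code):

```python
header = ["id", "name"]            # 2 fields in the header
row = ["1", "Anna", "extra", "x"]  # 4 data columns in the row

# Keep only the data columns that have a corresponding header field.
trimmed = row[:len(header)]
print(trimmed)  # ['1', 'Anna']
```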
Start row: if this value is not empty, the system will start reading the file from the specified 1-based row and will ignore previous rows.
End row: if this value is not empty, the system will stop reading the file after the specified 1-based row.
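Since both values are 1-based and inclusive, selecting the rows amounts to a simple slice (a sketch of the semantics, not Etlworks code):

```python
rows = ["r1", "r2", "r3", "r4", "r5"]
start_row, end_row = 2, 4  # 1-based, inclusive

# Read only rows 2 through 4; rows before Start row and after End row
# are ignored.
selected = rows[start_row - 1:end_row]
print(selected)  # ['r2', 'r3', 'r4']
```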
Transformation type: the default is header. Read more about using the preprocessor to modify the contents of the source document. If header is selected for Transformation type and Filter or Preprocessor or Header is not empty, the specified header will be added at the beginning of the file, followed by an end-of-line character, followed by the actual data.
All fields are strings: if this option is enabled (it is disabled by default), the system will create all fields with a string data type. Otherwise, it will parse the field's value and attempt to detect the data type.
Save Metadata: if this option is enabled, the system will create an XML file with the same name as the CSV file. The XML file contains information about the actual data types of the columns as detected during the extract from the database. If present, this information will be used during the load to set the data types of the columns in the destination. Enable this option if you want to preserve the exact data types when extracting data from a database and creating the CSV files. Read more about enabling this option for the CDC connector.
Date and Time Format: a Format for timestamps (date & time).
Date Format: a Format for date (date only, no time).
Time Format: a Format for time (time only, no date).
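The three Formats above can be illustrated with Python's strptime as an analogy (the patterns below are hypothetical examples, not Etlworks defaults):

```python
from datetime import datetime

date_and_time_format = "%Y-%m-%d %H:%M:%S"  # hypothetical pattern
date_format = "%Y-%m-%d"                    # hypothetical pattern

# Parse a timestamp and a date-only value with their respective formats.
ts = datetime.strptime("2023-04-01 12:30:00", date_and_time_format)
d = datetime.strptime("2023-04-01", date_format).date()
print(ts.hour, d.month)  # 12 4
```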
Parse Dates: if this option is enabled, and the date or time value is not recognized as one of the Formats defined above, Etlworks Integrator will try to parse it using one of the well-known date & time Formats.
Trim Strings: if this option is enabled, Etlworks Integrator will trim leading and trailing white-spaces from the value.
Treat 'null' as null: if this option is enabled, Etlworks Integrator will treat string values equal to null as actual nulls (no value).
Value for null: a string that will be used instead of a null value. A typical usage example is setting Value for null to \N so the Redshift COPY command can differentiate between an empty string and a NULL value. Read more about using this option to differentiate between SQL NULL and an empty string.
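The substitution can be sketched in Python (an illustration of the idea; \N as the marker matches the Redshift example above):

```python
import csv
import io

value_for_null = r"\N"  # marker for nulls, per the Redshift COPY convention

rows = [["1", None], ["2", ""]]  # row 1 has a null, row 2 an empty string
buf = io.StringIO()
writer = csv.writer(buf)
for row in rows:
    # Substitute the marker only for actual nulls, not empty strings.
    writer.writerow(value_for_null if v is None else v for v in row)
print(buf.getvalue())  # 1,\N then 2, (empty string left as-is)
```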
Encode CLOB fields using Base64: if this option is enabled (default), Etlworks Integrator will encode fields with the CLOB data type (large TEXT fields) using the Base64 algorithm.
Convert BIT: if this option is enabled, the system will convert the value of columns with the BIT data type to 1/0 when creating the file.
Remove EOL characters: if this option is enabled (default), the system will remove end-of-line (EOL) characters, such as \r, from the field's value when creating a file.
Reorder columns based on the order of columns in mapping: when this option is enabled (it is disabled by default), the CSV connector can create CSV files with a specific order of columns. Simply enable this option for the destination CSV Format and configure the order of fields in the mapping.
Strip non-printable characters: if this option is enabled (it is disabled by default), the system will strip non-printable characters, such as the null character (\0), from the data row when creating a file.
Maximum number of rows in file: the maximum number of rows in the file when creating new CSV files. Use it to split a large CSV document while creating it. It is extremely fast and efficient, compared to splitting the existing document.
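The splitting can be sketched as chunking the stream of rows as they are written, so no second pass over the finished file is needed (a conceptual illustration):

```python
rows = [f"row{i}" for i in range(1, 8)]  # 7 data rows
max_rows = 3  # Maximum number of rows in file

# Split into files of at most max_rows rows each: 3 + 3 + 1.
chunks = [rows[i:i + max_rows] for i in range(0, len(rows), max_rows)]
print(len(chunks))  # 3
```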
Encoding: character encoding used when reading and writing CSV files. No encoding means there will be no additional encoding.