Etlworks Integrator 2.4 has been released!
1. Added the ability to set up two-factor authentication for your Etlworks login
Affected areas: security.
Two-factor authentication adds an extra layer of security on top of your username and password when logging into Etlworks by requiring verification of the login through a second linked device, such as a phone running Google Authenticator.
2. Added the ability to change the First Name, Last Name and Email associated with your Etlworks login
Read how to:
3. Added the ability to automatically reorder columns when loading data in Snowflake and Redshift
By default, the Redshift COPY and Snowflake COPY INTO commands insert values into the target table's columns in the same order in which the fields occur in the data files.
Prior to this update, if the order of columns in the source differed from the order of columns in the target table, the system would throw an exception.
In this update, we introduced the ability to automatically reorder columns in the source to match the order of columns in the target. When this option is enabled, the system also automatically sets the data type for each column to match the data type in the target. This option is ignored if the target table does not exist yet.
The option can be enabled under MAPPING/Parameters/Handling schema changes:
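Conceptually, reordering matches source columns to target columns by name and emits each row in the target's order. The release notes don't show the implementation, so the following is only an illustrative sketch (the helper name and None-fill behavior for missing columns are assumptions, not Etlworks semantics):

```python
def reorder_columns(rows, source_columns, target_columns):
    """Rearrange each source row so its values follow the target table's column order.

    Columns present in the target but absent from the source are filled with None;
    a real loader would also coerce each value to the target column's data type.
    """
    position = {name: i for i, name in enumerate(source_columns)}
    return [
        [row[position[name]] if name in position else None for name in target_columns]
        for row in rows
    ]

# A source file with columns in a different order than the target table:
rows = [["ann@example.com", 1, "Ann"]]
reordered = reorder_columns(rows, ["email", "id", "name"], ["id", "name", "email"])
```

With this mapping in place, the COPY command receives the values in the order the target table expects, which is why the option is meaningless (and ignored) when the target table does not exist yet.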
4. Improved reliability and configurability of the CDC flows
Affected areas: CDC connectors
In this update, we improved the reliability and configurability of the CDC flows.
The following new configuration options were added:
- Use Internal Queue - If enabled, the system stores CDC events in an internal queue before sending them further down the pipeline for processing. The queue is used to reprocess events in case of failure. This option is ignored if Switch to Snapshot if Error is enabled.
- Extra columns - The columns specified in this field will be added to the end of the stream. The column value will be either empty or set to the value of the global variable with the same name.
- Maximum Queue Size - A positive integer value that specifies the maximum size of the blocking queue into which change events read from the database log are placed before they are written to the stream. This queue can provide back pressure to the transactional log reader when, for example, writes to the stream are slower. Events that appear in the queue are not included in the offsets periodically recorded by this connector. Defaults to 8192, and should always be larger than the maximum batch size.
- Maximum Batch Size - A positive integer value that specifies the maximum size of each batch of events that should be processed during each iteration of this connector. Defaults to 2048.
- Poll Interval in Milliseconds - A positive integer value that specifies the number of milliseconds the connector should wait during each iteration for new change events to appear. Defaults to 1000 milliseconds (1 second).
- Connection Timeout in Milliseconds - A positive integer value that specifies the maximum time in milliseconds this connector should wait when trying to connect to the database server before timing out. Defaults to 30,000 milliseconds (30 seconds).
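The interaction between Maximum Queue Size, Maximum Batch Size, and Poll Interval can be sketched with a bounded blocking queue: the log reader blocks when the queue is full (back pressure), and the writer drains up to one batch per iteration, waiting up to the poll interval for new events. This is a simplified illustration, not Etlworks code; the small sizes are for demonstration only (the real defaults are 8192 and 2048, and the queue must stay larger than the batch):

```python
import queue
import threading

def run_pipeline(events, max_queue_size=8, max_batch_size=4, poll_interval=0.05):
    """Move events from a reader thread to a writer through a bounded queue.

    Returns the list of batches the writer processed, in order.
    """
    q = queue.Queue(maxsize=max_queue_size)
    SENTINEL = object()  # marks the end of the change stream
    batches = []

    def reader():
        for ev in events:
            q.put(ev)  # blocks when the queue is full -> back pressure on the log reader
        q.put(SENTINEL)

    def writer():
        done = False
        while not done:
            batch = []
            while len(batch) < max_batch_size:  # Maximum Batch Size per iteration
                try:
                    item = q.get(timeout=poll_interval)  # Poll Interval: wait for new events
                except queue.Empty:
                    break  # no new events this iteration; process what we have
                if item is SENTINEL:
                    done = True
                    break
                batch.append(item)
            if batch:
                batches.append(batch)

    t = threading.Thread(target=reader)
    t.start()
    writer()
    t.join()
    return batches
```

Because events sitting in the queue have not yet been written to the stream, they are excluded from the connector's recorded offsets; after a failure they are simply read from the log again, which is what makes the queue safe to use for reprocessing.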