When to use this connector
Use this connector to create Flows that work with files in Microsoft Azure Storage.
You can also use it to create a Flow that loads data into Snowflake and Azure Synapse Analytics.
Create a Connection
Step 1. In the Connections window, click +, enter azure storage, and select Azure Storage SDK.
Step 2. Enter Connection parameters
Authentication

- Storage account: the storage account name.
- Authentication type: the following authentication types are available:
  - Access Key: authentication with an Access Key.
  - SAS token: authentication with a SAS token.
  - Client secret: authentication using a custom Azure app and a Client secret.
Authentication with Access Key
Step 1. Select Access key in Authentication type.
Step 2. Log in to the Azure portal.
Step 3. Select the storage account.
Step 4. Select Access keys.
Step 5. Click Show keys.
Step 6. Copy key1 and paste it into the field Access Key or SAS token.
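Etlworks handles the connection itself; for reference, the same key-based authentication outside of Etlworks looks roughly like the following Python sketch with the azure-storage-blob SDK. The account name and key are placeholders.

```python
# A minimal sketch of Access Key authentication with the azure-storage-blob
# SDK. The account name and key are placeholders.
from azure.storage.blob import BlobServiceClient

account = "mystorageaccount"           # storage account name
access_key = "<key1 from the portal>"  # the value pasted into the Connection

service = BlobServiceClient(
    account_url=f"https://{account}.blob.core.windows.net",
    credential=access_key,  # an account key string is accepted directly
)

# List containers to verify that the key works.
for container in service.list_containers():
    print(container.name)
```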
Authentication with SAS token
Step 1. Select SAS token in Authentication type.
Step 2. Log in to the Azure portal.
Step 3. Select the storage account.
Step 4. Select Shared access signature.
Step 5. Enable the services, resource types, and permissions, set an expiration date in the future, and click Generate SAS and connection string.
Step 6. Copy the SAS token and paste it into the field Access Key or SAS token.
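For reference, a SAS token generated this way can be passed to the azure-storage-blob SDK as the credential. A minimal sketch, with the account, container, and token values as placeholders:

```python
# A minimal sketch of SAS token authentication; account, container,
# and token values are placeholders.
from azure.storage.blob import BlobServiceClient

account = "mystorageaccount"
sas_token = "sv=...&ss=b&srt=sco&sp=rl&se=..."  # token generated in the portal

service = BlobServiceClient(
    account_url=f"https://{account}.blob.core.windows.net",
    credential=sas_token,  # a SAS token string is accepted directly
)

# List blobs in a container the token grants access to.
container = service.get_container_client("mycontainer")
for blob in container.list_blobs():
    print(blob.name)
```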
Authentication using custom Azure app and Client secret
Step 1. Select Client secret in Authentication type.
Step 2. Create an Azure app and give it access to the storage account(s).
Step 3. Paste the Client ID, Client Secret, and Azure Tenant ID into the corresponding fields.
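For reference, the same app-based (service principal) authentication outside of Etlworks looks roughly like this with the azure-identity and azure-storage-blob packages; all IDs and the account name are placeholders. Note that the app must also be granted a data-plane role on the storage account, such as Storage Blob Data Reader.

```python
# A minimal sketch of Client secret (service principal) authentication;
# all IDs and the account name are placeholders.
from azure.identity import ClientSecretCredential
from azure.storage.blob import BlobServiceClient

credential = ClientSecretCredential(
    tenant_id="<azure-tenant-id>",
    client_id="<app-client-id>",
    client_secret="<app-client-secret>",
)

service = BlobServiceClient(
    account_url="https://mystorageaccount.blob.core.windows.net",
    credential=credential,
)

for container in service.list_containers():
    print(container.name)
```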
Other parameters
- Container: the container for files. It is similar to an Amazon S3 bucket.
- Directory: the directory under the container. This parameter is optional.
- Files: the actual file name or a wildcard file name, for example, *.csv.
- Add Suffix When Creating Files in Transformation: you can select one of the predefined suffixes for the files created using this Connection. For example, if you select uuid as a suffix and the original file name is dest.csv, Etlworks will create files with the name dest_uuid.csv, where uuid is a globally unique identifier such as 21EC2020-3AEA-4069-A2DD-08002B30309D. This parameter works only when the file is created using a source-to-destination transformation. Read how to add a suffix to the files created when copying, moving, renaming, and zipping files.
- File Processing Order: specifies the order in which source files are processed when using wildcard patterns in ETL and file-based flows (e.g., copy, move, delete). The default setting is Oldest, meaning files are processed starting with the oldest by creation or modification time. Choose from the following criteria to determine the processing sequence (see the first sketch after this list):
  - Disabled: wildcard processing is disabled.
  - Oldest/Newest: process files based on their creation or modification time.
  - Ascending/Descending: process files in alphabetical order.
  - Largest/Smallest: process files based on their size.
- Archive file before copying to: Etlworks can archive files, using one of the supported algorithms (zip or gzip), before copying them to cloud storage. Since cloud storage is typically a paid service, archiving files can save money and time (see the second sketch after this list).
- Contains CDC events: when this parameter is enabled, Etlworks adds standard wildcard templates for CDC files to the list of available sources in the FROM selector.
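To make the wildcard and ordering semantics concrete, here is a rough Python equivalent of Files = *.csv combined with each File Processing Order option, using the azure-storage-blob SDK; the account, credential, and container names are placeholders.

```python
# Illustrative only: emulate "Files = *.csv" plus "File Processing Order"
# by sorting matching blobs. Account, credential, and container are placeholders.
import fnmatch

from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://mystorageaccount.blob.core.windows.net",
    credential="<access key or SAS token>",
)
container = service.get_container_client("mycontainer")

matching = [b for b in container.list_blobs() if fnmatch.fnmatch(b.name, "*.csv")]

matching.sort(key=lambda b: b.last_modified)                  # Oldest (the default)
# matching.sort(key=lambda b: b.last_modified, reverse=True)  # Newest
# matching.sort(key=lambda b: b.name)                         # Ascending
# matching.sort(key=lambda b: b.name, reverse=True)           # Descending
# matching.sort(key=lambda b: b.size, reverse=True)           # Largest
# matching.sort(key=lambda b: b.size)                         # Smallest

for blob in matching:
    print(blob.name, blob.last_modified)
```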
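Likewise, the archive-before-copy behavior corresponds roughly to compressing a file locally and uploading the compressed copy. A sketch using gzip, with all names as placeholders:

```python
# Illustrative only: gzip a file before uploading it, mirroring the
# "Archive file before copying to" option. All names are placeholders.
import gzip
import shutil

from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://mystorageaccount.blob.core.windows.net",
    credential="<access key or SAS token>",
)
container = service.get_container_client("mycontainer")

# Compress dest.csv into dest.csv.gz.
with open("dest.csv", "rb") as src, gzip.open("dest.csv.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

# Upload the compressed file instead of the original.
with open("dest.csv.gz", "rb") as data:
    container.upload_blob("dest.csv.gz", data, overwrite=True)
```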
Decryption
When an Azure Storage Connection is used as a source (FROM) in a source-to-destination transformation, it is possible to configure automatic decryption of encrypted source files using the PGP algorithm and a private key uploaded to the secure key storage.
If the private key is available, all source files processed by the transformation will be automatically decrypted using the PGP algorithm and the given key. Note that the private key requires a password.
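Etlworks performs the decryption internally; for reference, decrypting a PGP-encrypted file with a password-protected private key looks roughly like this with the python-gnupg package (file names and the passphrase are placeholders):

```python
# Illustrative only: PGP decryption with a password-protected private key,
# using the python-gnupg package. File names and passphrase are placeholders.
import gnupg

gpg = gnupg.GPG()

# Import the private key (the same kind of key uploaded to the secure key storage).
with open("private-key.asc") as key_file:
    gpg.import_keys(key_file.read())

# Decrypt the source file; the private key requires a password.
with open("dest.csv.pgp", "rb") as encrypted:
    result = gpg.decrypt_file(
        encrypted,
        passphrase="<private-key password>",
        output="dest.csv",
    )

print("decrypted" if result.ok else result.status)
```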