BimlFlex Settings Reference Documentation
This document outlines the metadata and framework Settings available in BimlFlex.
These settings drive the behavior of the BimlFlex product.
By changing these settings, the produced artifacts can be adapted to specific requirements for file locations, naming conventions, data conventions, etc.
Align these settings with the organization's best practices and environmental requirements.
Note
For additional information about using Settings in BimlFlex, please refer to the Settings Editor documentation.
AzCopy
Setting | Type | Description |
---|---|---|
Path | The file path to the local installation of the "AzCopy" Command. | |
Log Location | Sets the log location for AzCopy v.10 log files to the AZCOPY_LOG_LOCATION environment variable. | |
Version | The AzCopy version used. This should match the AzCopy command found in the AzCopyPath setting. Use a numeric such as 8 or 10 as values to specify the AzCopy version. | |
Log Level | The log level for AzCopy v.10 logs. Available log levels are: "NONE", "DEBUG", "INFO", "WARNING", "ERROR", "PANIC", and "FATAL" | |
Cap Mbps | The AzCopy v.10 transfer speed cap in Mbps. | |
Concurrency | Specify the number of concurrent AzCopy operations to start. To reuse the current environment variable without setting this value for each object, leave this blank when using AzCopy v.10. | |
Use AzCopy | Should the generated files be uploaded using "AzCopy" as part of the SSIS Packages. Disable if the solution uses an alternative file management process. | |
Create Container | Determines if the pipeline includes a step to create the destination container. Only enable this when needed, such as to allow load scenarios where the target container is removed between loads. | |
Set Environment Variables | Determines if the environment variables controlling AzCopy v.10 are set before each object is loaded. | |
Use SAS Token | Determines if AzCopy uses a SAS Token for access. This is the only supported authentication mechanism for AzCopy v.10. |
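The settings above map onto AzCopy v.10 command-line flags and environment variables (for example, the log location is passed via AZCOPY_LOG_LOCATION, as noted in the Log Location setting). As an illustrative sketch only — the command composition below is hypothetical example code, not BimlFlex output — the flags could be assembled like this:

```python
import shlex

def build_azcopy_command(source, destination_with_sas, log_level="INFO", cap_mbps=None):
    """Compose an 'azcopy copy' invocation from settings like those above.

    destination_with_sas: per the "Use SAS Token" setting, AzCopy v.10 access
    is via a SAS token appended to the destination URL (example value below).
    """
    cmd = ["azcopy", "copy", source, destination_with_sas,
           f"--log-level={log_level}"]          # "Log Level" setting
    if cap_mbps:                                # "Cap Mbps" setting
        cmd.append(f"--cap-mbps={cap_mbps}")
    return " ".join(shlex.quote(part) for part in cmd)

# Hypothetical account/container names for illustration:
cmd = build_azcopy_command(
    "data.csv",
    "https://myaccount.blob.core.windows.net/stage?sv=token",
    cap_mbps=100)
```

The Concurrency and Log Location settings similarly translate into the AZCOPY_CONCURRENCY_VALUE and AZCOPY_LOG_LOCATION environment variables before the copy runs, when "Set Environment Variables" is enabled.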
Azure
Setting | Type | Description |
---|---|---|
Data Factory Name | The default Azure Data Factory name to use. | |
Subscription Id | The default Azure Subscription Id to use. | |
Resource Group | The default Azure Resource Group name to use. | |
Key Vault | The default Azure Key Vault name to use for Linked Services. | |
Function Bridge Name | The default Azure Function Bridge Name to use. | |
Function Bridge Key | The default Azure Function Bridge Key to use for authentication between the Azure Data Factory and the Azure Function App. | |
Use Function Bridge Key Vault | Determines if the Azure Function Bridge uses the Azure Key Vault specified in the AzureFunctionBridgeKeyVault settings. | |
Function Bridge Key Vault Name | The name of the Azure Key Vault used by Azure Data Factory to access the function key for the Azure Function Bridge when the "AzureFunctionBridgeUseKeyVault" setting is enabled. | |
Function Bridge Key Vault Secret | The name of the Azure Key Vault secret used by Azure Data Factory to access the function key for the Azure Function Bridge when the "AzureFunctionBridgeUseKeyVault" setting is enabled. | |
Function Bridge Key Vault Secret Version | An optional Azure Key Vault secret version used by Azure Data Factory to access the function key for the Azure Function Bridge when the "AzureFunctionBridgeUseKeyVault" setting is enabled. If not specified, the current version of the secret will be used. | |
Integration Runtime Name | The default Azure Data Factory Self Hosted Integration Runtime name to use for Linked Services. | |
Integration Runtime Is Shared | Is the referenced Self Hosted Integration Runtime shared from another Data Factory? | |
Linked Resource Id | The Resource Id for the referenced Self Hosted Integration Runtime Linked Service that is shared to this Data Factory. | |
Deployment Container | The Storage Container to use when accessing Blob Storage for linked ARM template deployment. | |
Deployment Account Name | The Storage Account name to use when accessing Blob Storage for linked ARM template deployment. | |
Deployment Account Key | The Storage Access Key to use when accessing Blob Storage for linked ARM template deployment. | |
Deployment SAS Token | The Storage Access SAS Token to use when accessing Blob Storage for linked ARM template deployment. | |
Emit Powershell Environment Checks | Determines if the PowerShell deployment files include environment verification by calling Get-InstalledModule -Name Az. | |
File Compression Codec | The compression type (Codec) to use for the Azure Data Factory File Dataset. | |
File Compression Level | The compression level to apply for the Azure Data Factory target File Dataset. | |
File Encoding Name | The File Encoding Name for the Azure Data Factory target File Dataset. | |
File Null Value | The Null value definition to use for the Azure Data Factory target File Dataset. | |
Archive Stage | Determines if the staged Blob Storage files are moved to the defined archive container, once processing is completed. This will use AzCopy v.10 commands, so requires AzCopy v.10 and SAS Tokens for access. | |
Stage On Extract | Determines if the Azure-based Extract and Load process runs the stage process for the extracted data in the destination database. | |
Create Dummy File | Should the Staging package copy a "placeholder.dummy" file to accommodate the PolyBase limitation when no files exist. | |
Create External On Stage | Should the Staging process "DROP" and "CREATE EXTERNAL TABLE" before running the Staging Stored Procedure. | |
External File Conversion | By default, the extraction process from a source to blob storage applies several conversions to create files that are supported by the target storage type. This setting allows control of this conversion process. | |
Distribute Round Robin Temporary Tables | Enable to use Round Robin distribution in Azure Synapse temporary tables instead of the default Hash distribution. | |
Archive Landing Files | Determines if the landed files are moved to the defined archive container, once processing is completed. | |
OnError Landing Files | Determines if the landed files are moved to the defined error container on error, once processing is completed. | |
Delete Landing Files | Determines if the landed files are deleted once processing is completed. |
Azure Copy
Setting | Type | Description |
---|---|---|
Enable Staging | Determines if the Azure Data Factory Copy Activity uses Staging. Use this together with Copy Method "PolyBase" to load data to PolyBase supported targets. | |
Staging Settings | The staging settings to use when enabling Staging for the Copy Activity. Use "@@this" to automatically use the Linked Service associated with the PolyBase Landing connection. | |
Enable Logging | Enable to use logging in the Azure Data Factory Copy Command. | |
Log Settings | The settings for the logging in the Copy Activity, when logging is enabled. Use "@@this" to automatically use the Linked Service associated with the PolyBase Landing connection. | |
Copy Method | For the Copy Activity, specifies the Copy Method to use. Bulk Insert allows direct inserts to the target. PolyBase allows automatic staging in a Blob Container and loading through external tables and PolyBase to supported targets. | |
PolyBase Settings | The default Azure PolyBase settings to use when using the Copy Method "PolyBase" and enabling Staging for the Copy Activity. | |
Retry Attempts | Maximum number of retry attempts for Azure Copy. | |
Retry Interval | The number of seconds between each retry attempt for Azure Copy. | |
Timeout | Maximum amount of time the Azure Copy Activity can run. Default is 7 days. Format is in D.HH:MM:SS. | |
Secure Input | When enabled, input from the activity will not be captured in ADF logging. | |
Secure Output | When enabled, output from the activity will not be captured in ADF logging. | |
Data Integration Units | Specifies the amount of compute power for the copy execution. Value can be 2-256. When you choose "Auto", the Data Factory dynamically applies the optimal DIU setting based on your source-sink pair and data pattern. | |
Degree of Copy Parallelism | For the Copy Activity, specifies the degree of parallelism that data loading should use. | |
Data Consistency Verification | Determines if the Copy Activity validates data consistency for supported sources and sinks. |
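The Timeout format D.HH:MM:SS noted above follows the .NET TimeSpan convention used by Azure Data Factory, so the 7-day default is written as 7.00:00:00. A minimal sketch of how such a value breaks down (an illustrative parser, not part of BimlFlex):

```python
def parse_timeout(value):
    """Parse a 'D.HH:MM:SS' timeout into total seconds; the day part is optional."""
    if "." in value.split(":")[0]:
        days, rest = value.split(".", 1)   # e.g. "7.00:00:00" -> days "7"
    else:
        days, rest = "0", value            # e.g. "01:30:00" -> no day part
    hours, minutes, seconds = (int(p) for p in rest.split(":"))
    return int(days) * 86400 + hours * 3600 + minutes * 60 + seconds

assert parse_timeout("7.00:00:00") == 7 * 86400   # the documented default: 7 days
```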
Azure Storage
Setting | Type | Description |
---|---|---|
Archive Container | The Container Name to use for the archive process. This should be indicative of the purpose of the contents, such as "archive". | |
Archive Account Name | The Azure Blob Storage Account Name to use for archiving data as files in blob storage. | |
Archive Account Key | The storage Access Key to use when accessing the Blob storage. | |
Archive SAS Token | A storage access SAS Token to use when accessing the Blob storage. | |
Error Container | The Container Name to use for the error process. | |
Error Account Name | The Azure Blob Storage Account Name to use for error files in blob storage. | |
Error Account Key | A storage Access Key to use when accessing the error file Blob storage. | |
Error SAS Token | A storage access SAS Token to use when accessing the error file Blob storage. | |
Stage Container | The Container Name to use for the staging process. | |
Stage Account Name | The Azure Blob Storage Account Name to use for staging data as files in blob storage. | |
Stage Account Key | The storage Access Key to use when accessing the staging Blob storage. | |
Stage SAS Token | The storage access SAS Token to use when accessing the staging Blob storage. | |
Blob Storage Domain | The AzCopy domain to use. | |
External File Format | The default External File Format definition to use. | |
Folder Naming Convention | Naming convention to use for Azure Storage Folders. | |
File Naming Convention | Naming convention to use for Azure Storage Files. |
Core
Setting | Type | Description |
---|---|---|
Add SQL Defaults | Enable to add SQL Default constraints to tables created. SQL Defaults are always added to staging layer tables. | |
Global Default Date | The global default date to use for timelines. Please note that this value does not override the RowEffectiveFromDate. | |
Convert GUID To String | Determines if a source column of type "GUID"/"UniqueIdentifier" is automatically converted to a String data type. | |
Use BimlCatalog | Determines if BimlFlex uses the BimlCatalog database for logging, auditing and configuration variables. | |
Use Custom Components | Determines if BimlFlex uses the SSIS Custom Components to log RowCounts and generate Hash Keys. | |
Hash Null Value Replacement | The Null value replacement to be used when hashing. | |
SSIS Hash Null Value Replacement | The Null value replacement to be used when hashing using the Varigence BimlFlex SSIS Custom component. | |
SSIS Convert Date To String With Scale | Used to control the converted DateTime in the FlexToBk to ensure compatibility with the SQL code. | |
Use SSIS Compatible Date Format | Used to control the converted DateTime in the FlexToBk to ensure compatibility with the SQL code. | |
Integration Key To Upper | Determines if strings in the Integration Key will be upper-cased automatically. | |
Lookup Table Pattern | The table naming pattern for the Lookup Table. | |
String Concatenator | The string value used in concatenating Integration Keys and Hash values (sanding element). Defaults to "~". | |
Hash Algorithm | The hashing algorithm to use. ("MD5"/"SHA1"/"SHA2_256"/"SHA2_512"). | |
Hash Binary | Determines if the generated hash value is stored as a binary representation or as a string representation. | |
Hash Integration Key | Determines if the Integration Key is hashed. | |
Use SQL Compatible Hash | Determines if the SSIS custom inline hashing component uses a hashing approach compatible with the SQL Server "HASHBYTES()" function. | |
Use SQL Compatible Row Hash | Determines if the SSIS custom inline hashing component for Full Row Hashing uses a hashing approach compatible with the SQL Server "HASHBYTES()" function. | |
Cast Boolean to True False for Hash | Determines if the SQL inline hashing function for MSSQL, SQLDB and Synapse converts BIT (Boolean) values to True/False instead of 1/0. | |
Root Path | The default root path for any other BimlFlex related file operations. | |
Archive Path | The default path for file archive operations. | |
Export Path | The default export path for file exports. | |
Import Path | The default import path for file imports. | |
Lookup Cache Path | The default path for Cache files used in lookups. | |
Configuration Path | The folder where SSIS configuration files are stored. | |
7 Zip Path | The location of the 7-zip executable for zip operations requiring 7-zip. | |
Archive Import | Determines if imported files are archived after processing. | |
Zip Archive Import | Determines if imported files are compressed when they are archived. | |
Zip Output File | Determines if the created output file is zipped. | |
Zip Extract File In Memory | Determines if the file zip process runs in-memory or through file streams. Files larger than 2 GB are always zipped through file streams. |
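Several Core settings interact when hash values are produced: column values are joined with the String Concatenator (the "~" sanding element by default), NULLs are substituted with the Hash Null Value Replacement, and the result is hashed with the configured Hash Algorithm. A sketch of that interaction under assumed example values (the "N/A" null token and upper-cased hex output are illustrative assumptions, not BimlFlex defaults):

```python
import hashlib

def flex_hash(values, concatenator="~", null_replacement="N/A", algorithm="sha256"):
    """Illustrative key/row hashing: sand values together, replace NULLs, hash.

    algorithm maps loosely to the "Hash Algorithm" setting
    (e.g. SHA2_256 -> Python's "sha256").
    """
    joined = concatenator.join(
        null_replacement if v is None else str(v) for v in values)
    return hashlib.new(algorithm, joined.encode("utf-8")).hexdigest().upper()

# A NULL and its replacement token hash identically -- which is why the
# replacement value must not collide with real data:
assert flex_hash([None]) == flex_hash(["N/A"])
```

This also illustrates why the "Use SQL Compatible Hash" setting matters: for SSIS-computed and SQL-computed (HASHBYTES) hashes to match, both sides must agree on concatenation, null replacement, and encoding.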
Data Mart
Setting | Type | Description |
---|---|---|
Append Dimension | The string to append to Dimension object names. | |
Append Fact | The string to append to Fact object names. | |
Append Staging | The string to append to the Data Mart staging object names. | |
Infer Dimension Members | Determines if the Data Mart process infers Dimension Members. | |
Stage On Initial Load | Determines if the Data Mart stage process executes as part of an initial load. | |
Append Schema | Determines if the Schema name is appended to Object names in the Data Mart. | |
Apply Lookup Filter | Determines if the SSIS Lookup checks for existing rows and applies a filter condition when joining the source table to the destination table. This applies to Data Mart processing and aims to optimize memory usage. | |
Display Database Name | Determines if the Database name is included in Data Mart object names. | |
Display Schema Name | Determines if the Schema name is included in Data Mart object names. |
Data Vault
Setting | Type | Description |
---|---|---|
Use Hash Keys | Determines if the Data Vault uses Hash Keys. Alternatively, Natural Keys can be used by disabling this setting. | |
Accelerate Link Satellite | Determines if the BimlFlex Accelerator creates Link Satellites from source metadata, containing attributes and effectiveness attributes. | |
Accelerate Link Satellite Keys | Determines if the BimlFlex Accelerator adds the Integration Key to Link Satellites. | |
Append Link Satellite Record Source | Determines if the BimlFlex Accelerator appends the Record Source to the Link Satellite name. | |
Accelerate Hub Keys | Determines if the BimlFlex Accelerator adds source key columns to the Hub in addition to the Integration Key. | |
Accelerate Link Keys | Determines if the BimlFlex Accelerator adds source key columns to the Link in addition to the Integration Key. | |
Accelerate Link Integration Keys | ||
Accelerate Correct Key Names | Should the Accelerator correct Integration Key names based on the Object Business Name. | |
Apply Data Type Mapping DV | Determines if the Data Type Mappings are applied to the Data Vault. The Data Type Mappings function allow expansion of data types. | |
Naming Convention | Naming convention to use for Data Vault Accelerator. | |
Stage Reduce Link Keys | Enable this to reduce additional Link hash keys in the staging table. | |
Accelerate Show Columns | Enable to set the default Accelerator view to show all Columns. | |
Accelerate Show Expanded | Enable to set the default Accelerator view to show the Expanded view (Hubs, Links and Satellites) instead of the Data Vault Backbone (Only Hubs and Links). | |
Process On Stage | Determines if the Data Vault stored procedure is called after the Extract has been done. For Azure projects this must be combined with "AzureStageOnExtract". | |
Infer Link Hub | Determines if the Data Vault process loads all involved Hubs when a Link table is loaded, independent of Hub loads from referenced tables. Enabling this setting forces BimlFlex to always load all corresponding Hub tables when a Link is loaded from a given source, even though this could be redundant because the Hub information may be provided by referenced tables. This applies to scenarios where the source system reliably manages referential integrity. | |
Apply Lookup Filter DV | For Staging-to-Data Vault processes, determines if the SSIS Lookup checks for existing rows by applying a filter condition joining the staging table to the destination table. This is to optimize memory usage. | |
End Date Satellite | Determines if end dating is applied to the Data Vault Satellites. | |
Single Change Satellite | Enable if loading into Data Vault and the Delta only has single changes to each key. This will provide optimized ELT loads. | |
ELT Delta Is Derived | Enable if loading into Data Vault and the Delta has already been derived. This will provide optimized ELT loads for scenarios like streams or insert only transaction source tables. | |
Use Cache Lookup | Enables using the file based cache lookup feature of SSIS (for lookups). | |
Use Transactions | Enable to wrap Data Vault ELT load processes in transaction statements. | |
Use Where Exists | Enable to use WHERE EXISTS type SQL statements instead of LEFT JOIN type statements for lookups in ELT processing. | |
Pushdown Parallel Processing | ||
Stage Surrogate Keys | ||
Bridge Lag Days | Specify the number of days the Bridge process should go back and look for changes to reprocess. | |
Pit Lag Days | Specify the number of days the Point-In-Time process should go back and look for changes to reprocess. |
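The Use Where Exists setting above toggles the SQL shape of lookups in ELT processing. The contrast between the two forms can be sketched as follows (the table and column names are hypothetical examples, not BimlFlex-generated code):

```python
def lookup_sql(use_where_exists):
    """Return an illustrative Hub load statement in one of the two lookup styles."""
    if use_where_exists:
        # WHERE EXISTS style: anti-join expressed as NOT EXISTS
        return ("INSERT INTO dv.HubCustomer (CustomerKey) "
                "SELECT s.CustomerKey FROM stg.Customer s "
                "WHERE NOT EXISTS (SELECT 1 FROM dv.HubCustomer h "
                "WHERE h.CustomerKey = s.CustomerKey)")
    # LEFT JOIN style: anti-join expressed as a join with an IS NULL filter
    return ("INSERT INTO dv.HubCustomer (CustomerKey) "
            "SELECT s.CustomerKey FROM stg.Customer s "
            "LEFT JOIN dv.HubCustomer h ON h.CustomerKey = s.CustomerKey "
            "WHERE h.CustomerKey IS NULL")
```

Both forms load only keys not yet present in the target; which one performs better depends on the target platform's optimizer.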
Data Vault Naming
Setting | Type | Description |
---|---|---|
Append Hub | The string to append to Hub names. | |
Append Link | The string to append to Link names. | |
Append Link Satellite | The string to append to Link Satellite names. | |
Append Satellite | The string to append to Satellite names. | |
Append Point In Time | The string to append to Point In Time table names. | |
Append Bridge | The string to append to Bridge table names. | |
Append Surrogate Key | The string to append to Surrogate/Hash Key column names. | |
Default Schema | The default schema to use for the Data Vault. | |
Prefix Surrogate Key | Determines if the Hub and Link Surrogate Keys are prefixed. | |
Append Reference | The string to append to Reference table names. | |
Append Hierarchy Link | The string to append to Hierarchical Link names. | |
Append Same As Link | The string to append to Same-As Link names. | |
Append Schema | Determines if the schema is appended to the names of accelerated Data Vault objects. | |
Display Database Name | Determines if the database name is displayed in the Data Vault. | |
Display Schema Name | Determines if the schema name is displayed in the Data Vault. |
Delete Detection
Setting | Type | Description |
---|---|---|
Apply Delete Detection PSA | Use the default process to insert detected deletes into the Persistent Staging Area table. | |
Apply Delete Detection DV | Use the default process to insert detected deletes into the Data Vault Satellite tables. | |
Enable Delete Detection | Determines if BimlFlex applies a separate key load pattern that will enable detection of hard deletes in the source. |
Model
Setting | Type | Description |
---|---|---|
Naming Convention | ||
Use Short Names for Hubs | ||
Apply Naming Convention | Naming convention to use for objects and columns. | |
Infer Integration Key From | The convention to infer the Integration Key from. Case sensitive options are "None", "PrimaryKey", "UniqueKey", "FirstColumn", "IdentityColumn" and "ColumnName::[NameOfColumn]". | |
Retain Existing Metadata | Comma-separated list that contains the existing Metadata that should be retained when doing an import of existing data. | |
Pad Integration Key | Number of Characters to pad the Integration Key to. | |
Append Integration Key | The string to append to Integration Keys. | |
Key Ends With | The strings that BimlFlex interprets as key identifiers. | |
Add Record Source To Integration Key | Import Metadata will add "@@rs" to Integration Keys if true. | |
Change References To Integration Key | Determines if the Import Metadata feature adds derived Integration Keys based on source references, or use source columns for references. | |
Import Views | Determines if database Views are imported when importing Metadata. | |
Integration Key Concatenation Order | Determines the column order in the derived Integration Key. |
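The Infer Integration Key From setting above accepts a parameterized form, "ColumnName::[NameOfColumn]", alongside the fixed options. A sketch of how such a value could be interpreted (illustrative only, not BimlFlex internals):

```python
def parse_key_inference(setting):
    """Split an 'Infer Integration Key From' value into (strategy, parameter).

    Fixed options ("None", "PrimaryKey", ...) carry no parameter; the
    "ColumnName::[NameOfColumn]" form names a specific source column.
    """
    if setting.startswith("ColumnName::"):
        column = setting.split("::", 1)[1].strip("[]")
        return ("ColumnName", column)
    return (setting, None)   # options are case sensitive, so no normalization

assert parse_key_inference("ColumnName::[CustomerId]") == ("ColumnName", "CustomerId")
assert parse_key_inference("PrimaryKey") == ("PrimaryKey", None)
```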
Naming
Setting | Type | Description |
---|---|---|
Suffix Or Prefix Column | The "SuffixOrPrefixColumn" key defines the behavior when defining column names. | |
Suffix Or Prefix Object | The "SuffixOrPrefixObject" key defines the behavior when naming objects. | |
Append Procedure Name | The string to append to procedure names. | |
Append Batch Name | The string to append to Batch names. | |
Append Load From Psa Name | The string to append to the Load From PSA process name. | |
Stage Column With Business Name | When defining a Business Name for a Column in the Business Overrides section, setting this to true will use the Business Name as the staging column name. | |
Stage Object With Business Name | When defining a Business Name for an Object in the Business Overrides section, setting this to true will use the Business Name as the staging table name. | |
Use Record Source As Append | Specifies if the record source should be appended to object names. | |
Use Record Source As Schema | Determines if the Record Source is used as the schema name for Staging and Persistent Staging Area tables. |
Operations
Setting | Type | Description |
---|---|---|
Archive Retention Period | The archive data retention period to use for the BimlFlex database cleanup process. | |
Snapshot Retention Period | The snapshot data retention period to use for the BimlFlex database cleanup process. |
Orchestration
Setting | Type | Description |
---|---|---|
Constraint Mode STG | The table reference constraint mode to apply for the STG (Staging) tables in BimlStudio for diagram previewing. Allowed values are "DoNotCreate", "CreateAndNoCheck" and "CreateAndCheck". | |
Constraint Mode DV | The table reference constraint mode to apply for the Data Vault database. Allowed values are "DoNotCreate", "CreateAndNoCheck" and "CreateAndCheck". | |
Constraint Mode DM | The table reference constraint mode to apply for the Data Mart database. Allowed values are "DoNotCreate", "CreateAndNoCheck" and "CreateAndCheck". | |
Rollback STG | Determines if the Batch orchestration engine rolls back (delete) committed changes to the Staging database in case of a failed process. | |
Rollback PSA | Determines if the Batch orchestration engine rolls back (deletes) committed changes to the Persistent Staging database in case of a failed process. | |
Rollback DV | Determines if the Batch orchestration engine rolls back (deletes) committed changes to the Data Vault database in case of a failed process. |
Snowflake
Setting | Type | Description |
---|---|---|
Account | The Snowflake account name to use. | |
Region | The Snowflake region to use. | |
Warehouse | The Snowflake warehouse name to use. | |
Database | The Snowflake database name to use. | |
Schema | The Snowflake schema name to use. | |
Password | The Snowflake password to use. | |
User | The Snowflake user name to use. | |
SnowSQL Config | Location of the Snowflake SnowSQL configuration file. | |
SnowSQL Path | The path to the local installation of the Snowflake SnowSQL CLI Client tool. | |
SnowSQL Connection | The Snowflake SnowSQL connection to use. | |
File Format | The Snowflake file format to use. | |
Remove Stage | Determines if the Snowflake stage is removed prior to loading the new stage file. | |
Auto Suspend | Determines whether the Snowflake database can Auto Suspend. | |
Scale Up | Determines if the Snowflake processing should scale up the Snowflake Warehouse at the start of the Batch. | |
Scale Up Size | The size the Snowflake Warehouse can be scaled up to. | |
Scale Down | Determines if the Snowflake processing should scale down the Snowflake Warehouse at end of the Batch. | |
Scale Down Size | The size the Snowflake Warehouse can be scaled down to. | |
Output Path | The folder where SnowDT database files are created. |
SSDT
Setting | Type | Description |
---|---|---|
Include .NET Core Project Support | Determines if SSDT Project files and build script files are created with .NET Core support. | |
.NET Core Targets Path | The folder where the .NET Core Target and build support files are located. | |
Output Path | The folder where SSDT database projects are created. | |
Suppress TSql Build Warnings | Determines if T-SQL build warnings are suppressed in the generated SSDT Project. | |
Include External Tables | Determines if External Tables are included in the generated SSDT Project. | |
Overwrite External Table Defaults | Determines if existing external table-related files are overwritten. | |
Include Master Key | Determines if the Master Key statement is included in the SSDT Project. | |
Default Master Key | The default Master Key SQL Statement to use. | |
Include Credential | Determines if the Credential statement is included in the SSDT Project. | |
Default Credential | The default Credential SQL Statement to use. | |
Include External Data Source | Determines if the External Data Source statement is included in the SSDT Project. | |
Default External Data Source | The default External Data Source SQL Statement to use. | |
Include External File Format | Determines if the External File Format statement is included in the generated SSDT Project. | |
Default External File Format | The default External File Format SQL Statement to use. |
SSIS
Setting | Type | Description |
---|---|---|
Server | The SSIS Server name to use for generated deployment script files. | |
SSISDB | The SSISDB database name to use for generated deployment script files. | |
Folder | The SSIS Catalog folder name to use for generated deployment script files. | |
Create Folder | Add Create SSIS Catalog Folder in SSIS deployment script files. | |
SqlCmd Override | Override the sqlcmd connection in the Create SSIS Catalog folder in the deployment script. | |
BLOB Temp Storage Path | The Blob Temporary Storage Path that SSIS uses to spool temporary data to disk when it runs out of memory. | |
Buffer Temp Storage Path | The Buffer Temporary Storage Path that SSIS uses to spool temporary data to disk when it runs out of memory. | |
Command Timeout | SSIS Command Timeout to use. Override the value here to change the default SSIS behavior. | |
Auto Adjust Buffer Size | SSIS Auto Adjust Buffer Size configuration for supported SQL Server versions. | |
Check Constraints | SSIS Destination configuration for checking constraints. Defaults to "False", as that is recommended for data warehouse destinations. | |
Default Buffer Max Rows | SSIS Data Flow configuration for Default Buffer Max Rows for supported destinations. | |
Default Buffer Size | SSIS Data Flow configuration for Default Buffer Size for supported destinations. | |
Delay Validation | Determines if generated SSIS packages use delayed validation for metadata validation. | |
Engine Threads | Maximum number of SSIS engine threads to employ. | |
Max Concurrent Executables | Maximum number of concurrent SSIS executables to employ. | |
Maximum Insert Commit Size | SSIS Data Flow configuration for Maximum Insert Commit Size for supported destinations. | |
Process Subfolders | Determines if a flat file source loop loads files in subfolders, of the specified source folder. | |
Rows Per Batch | SSIS Data Flow configuration for Rows Per Batch for supported destinations. | |
Validate External Metadata | Determines if generated SSIS packages validate external metadata. |
Staging
Setting | Type | Description |
---|---|---|
Persist History | Provides an option to override the Connection level attribute "PersistHistory" for more granular control. | |
Apply Data Type Mapping Stg | Determines if the Data Type Mappings that are applied to source tables are used in the Staging and Persistent Staging databases. | |
Delete Import File | Determines if imported files are deleted after processing. | |
Add Row Hash Key Index | Enable to add a unique, non-clustered constraint on the FlexRowHashKey and EffectiveFromDate columns in staging tables. | |
Extract File Encoding | For an extracted file, specify a different encoding than the standard Unicode as produced by the BimlFlex source to file process. Valid choices are "ASCII", "BigEndianUnicode", "UTF32", "UTF7", "UTF8", "Unicode". | |
Extract File Split Size | The file size to split extracted files into multiple files for. | |
Use UTF8 Data Conversion | Determines if SSIS Express-based extract packages apply UTF8 data conversion. | |
Use TRY_CAST Conversion | Determines if the select-to-stage tables use TRY_CAST and TRY_CONVERT. | |
Select Blob Row Order By | Row Order definition to use for Blob source queries. | |
Select Stage Row Distinct | Determines if the select-to-stage tables apply a row number function based on the defined key, or use a distinct based on the full row including the hash. | |
Apply Extract Conversion In Data Flow | Determines if SSIS extracts-to-file apply data conversion for target files in the SSIS Data Flow instead of in the source select statement. |
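The Extract File Split Size setting above splits a single extract into multiple files once a size threshold is reached. A simplified sketch of that splitting rule (the byte accounting and in-memory row lists here are illustrative assumptions, not the BimlFlex implementation):

```python
def split_rows(rows, max_bytes):
    """Partition text rows into file-sized groups of at most max_bytes each.

    A row that would push the current file past the threshold starts a new file.
    """
    files, current, size = [], [], 0
    for row in rows:
        row_len = len(row.encode("utf-8")) + 1   # +1 for the newline terminator
        if current and size + row_len > max_bytes:
            files.append(current)
            current, size = [], 0
        current.append(row)
        size += row_len
    if current:
        files.append(current)
    return files
```

A real extract would stream rows to disk rather than buffer them, but the threshold check is the same.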
Staging Naming
Setting | Type | Description |
---|---|---|
Append Name External | The string to append to External tables when using PolyBase. | |
Append Name Landing | The string to append to Landing tables when deploying using Azure Data Factory Copy directly to the database. | |
Schema Name Pattern | Specific override behavior for the schema name for staging tables and other database assets. | |
Object Name Pattern | Specific override behavior for the object name for staging tables. | |
Delete Object Name Pattern | The name to use for the Delete Objects when using Delete Detection. | |
Delete Schema Name Pattern | The name to use for the Delete objects schema when using Delete detection. | |
Append Record Source | Determines if the Record Source Code from the connection is appended to the staging object name. | |
Append Schema | Determines if the source Schema is appended to the object name in the staging layer. | |
Display Database Name | Controls if the source database name should be included in the generated SSIS package name. | |
Display Schema Name | Controls if the source schema name should be included in the generated SSIS package name. |
Staging Persistent
Setting | Type | Description |
---|---|---|
Append Schema | The string to add to the PSA objects when Staging and Persistent Staging are co-located in the same database. | |
Temporal Table Pattern Name | The string to add to the Temporal PSA objects when Staging and Persistent Staging are co-located in the same database. | |
Enable End Date | Apply end dating in the PSA. This will allow timelines to be maintained in the PSA. Disable this to configure an insert-only approach for the PSA for optimized load performance. | |
Bypass Persistent Checks | Enable this to bypass lookups and directly apply all records to the Staging and Persistent Staging tables. | |
Use Cache Lookup | Determines if the PSA lookup caches the data to disk when using SSIS. Use this if it is not possible to use the normal in-memory lookup behavior due to memory constraints. | |
Disable Stage, Persist Only | Enable this to skip the staging layer and only persist changes directly in the PSA. This applies to SSIS output only. | |
Delta Collapse Rows | Should the PSA Processing apply row collapsing logic. | |
Delta Detection in SQL | Determines if the delta detection applied when loading changes to the PSA uses a SQL procedure that runs on the Staging Area table, and not as part of the PSA pattern. | |
Delta Is Late Arriving | Does the PSA load include late arriving deltas. | |
Delta Is Single Change | Enable this if loading into PSA and the delta only has single changes to each key. This will provide optimized ELT loads. | |
Delta Stage All Rows | Determines if the delta process stages all rows. Otherwise, the process will compress row changes into distinct change rows - removing full row duplicates. | |
Delta Is Derived | Determines if a PSA table already receives a data delta. Enable this if loading into PSA and the delta has already been derived earlier. | |
Merge All Rows | The Merge All Rows setting will replace the hash (Row Hash Type 1 and Row Hash Type 2) with a NULL value in the source-select of the merge statement. Only applies in the specific case when the source is an "Object" and the target is a "Table". | |
Truncate PSA If Has Rows | Determines if the PSA table should be truncated if it already contains records. This should be disabled for normal PSA behavior. Enable this for specific requirements where previously loaded rows should be discarded. |
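Settings such as Delta Collapse Rows and Delta Stage All Rows above govern whether consecutive identical row versions are compressed into distinct change rows. The underlying idea can be sketched as follows (the data shapes are hypothetical; BimlFlex implements this inside its generated ELT patterns):

```python
def collapse_deltas(rows):
    """Keep only genuine changes per key.

    rows: list of (key, row_hash, effective_from) tuples, ordered by key
    and then by effective date. A row whose hash matches the previous row
    for the same key is a full-row duplicate and is dropped.
    """
    collapsed = []
    last_hash = {}
    for key, row_hash, eff_from in rows:
        if last_hash.get(key) != row_hash:
            collapsed.append((key, row_hash, eff_from))
            last_hash[key] = row_hash
    return collapsed
```

With this compression disabled (Delta Stage All Rows), every landed row is staged and the duplicates survive into the PSA load.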