Preparing BimlFlex for Microsoft Fabric
Before working with Microsoft Fabric metadata, you should ensure that your environment is configured to use Fabric appropriately. This section will walk you through the required software and system configurations you will need to connect BimlFlex to your Microsoft Fabric environment.
Getting Started with Microsoft Fabric in BimlFlex
Prerequisites
Before configuring BimlFlex for Microsoft Fabric, ensure you have:
- An active Microsoft Fabric workspace with appropriate permissions
- Data Factory configured in your Microsoft Fabric workspace
- Appropriate authentication credentials (Service Principal or Managed Identity)
- OneLake or Azure Data Lake Storage Gen2 for landing and staging areas
Quick Start from Sample Metadata
The fastest way to get started with Microsoft Fabric is to load one of the pre-configured sample metadata sets. BimlFlex provides two Fabric-specific samples:
| Sample | Description | Use Case |
|---|---|---|
| Fabric Data Vault | Pre-configured for Data Vault implementation | Building a silver layer with Hub, Link, and Satellite patterns |
| Fabric Datamart | Pre-configured for dimensional modeling | Building bronze-to-gold layer data marts |
To load a sample:
- Navigate to the BimlFlex Dashboard
- Select from the Load Sample Metadata dropdown
- Choose either Fabric Data Vault or Fabric Datamart
The sample metadata includes pre-configured projects, connections, and objects that demonstrate best practices for Fabric implementations.
Configuring a Project for Microsoft Fabric
To configure an existing project for Microsoft Fabric:
- Navigate to the Projects editor in BimlFlex
- Change the Integration Template from Integration Services to Data Factory
- Configure a Landing Connection for data extraction
- Save the project
Once configured for Fabric, the project will display a Fabric icon indicating the target platform.
If you're starting from the SQL Server SSIS sample (Sample 01), you can convert it to Fabric by simply changing the Integration Template to Data Factory and configuring the appropriate connections.
Configuring BimlFlex Settings
BimlFlex uses Settings to adapt to specific requirements for file locations, naming conventions, data conventions, and similar environment-specific details.
Align these settings with your organization's best practices and environmental requirements.
Configuring a Fabric Connection
This section outlines the specific considerations for configuring BimlFlex to use Microsoft Fabric across the various Integration Stages.
| Field | Supported Values | Guide |
|---|---|---|
| Integration Stage | Source System, Staging Area, Persistent Staging Area, Data Vault, Data Mart | Details |
| System Type | Fabric Lakehouse, Fabric Warehouse | Details |
| Connection String | Fabric connection string | Details |
| External Location | OneLake path or ABFSS path to the storage location | Details |
| External Reference | Fabric connection ID (internal identifier) | Details |
Integration Stage
BimlFlex supports Microsoft Fabric both as a target warehouse platform and as a Source System.
You can use Fabric Lakehouse as a source going into another Fabric Lakehouse, enabling scenarios where you process data between multiple Fabric components.
Naming patterns and schemas can be used for separation as needed.
Landing Area
Microsoft Fabric is not currently supported as a Landing Area, but a Landing Area is required when using Data Factory to orchestrate data movement.
The recommendation is for the Landing Area to use:
- OneLake: Land data directly into the Fabric Lakehouse Files area
- Azure Data Lake Storage Gen2: Traditional ADLS Gen2 landing
In addition to the Landing Area, it is also important that the Settings for the Azure Blob Stage Container, Azure Blob Archive Container, and Azure Blob Error Container are populated correctly.
Additional resources are available in the Microsoft documentation.
Object File Path Configuration
In addition to the global storage settings, BimlFlex allows granular file path configuration at the Object level. These settings are configured in the Object Editor and override the default paths for individual objects.
Settings
| Setting Key | Setting Description |
|---|---|
| Source File Path | The path where source files are located. Used when the Object represents a file-based source. |
| Source File Pattern | The naming pattern for source files. Supports expressions for dynamic file matching. |
| Source File Filter | The filter expression applied in the Filter Activity (which follows the Metadata Activity) to select specific files. |
| Landing File Path | The path where extracted data files should be placed during the landing process. |
| Landing File Pattern | The naming pattern for landing files. Supports expressions including @@this for object name and pipeline().parameters for dynamic values. |
| Persistent Landing File Path | The file path where raw landing files are stored persistently for historical tracking and reprocessing. |
| Persistent Landing File Pattern | The naming pattern for files stored in the Persistent Landing File Path. May include date, time, or other variable elements. |
Examples
| Setting Key | Example Value |
|---|---|
| Source File Path | /mnt/source/sales/ |
| Source File Pattern | sales_*.parquet |
| Source File Filter | @greater(item().lastModified, pipeline().parameters.LastLoadDate) |
| Landing File Path | @@this/Landing/ |
| Landing File Pattern | @concat('@@this', '_', replace(replace(formatDateTime(pipeline().parameters.BatchStartTime, 'yyyy-MM-ddTHH:mm:ss'), ':', ''), '-', ''), '.parquet') |
| Persistent Landing File Path | @concat('@@this/Archive/', formatDateTime(pipeline().parameters.BatchStartTime, 'yyyy/MM/dd/')) |
| Persistent Landing File Pattern | @@this_@@timestamp.parquet |
The @@this placeholder is automatically replaced with the Object name at runtime. These settings apply per-object and take precedence over connection-level defaults.
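For example, assuming a hypothetical Object named SalesOrder and a BatchStartTime of 2024-01-15 10:30:00, the Landing File Pattern shown above would resolve to SalesOrder_20240115T103000.parquet at runtime.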
System Type
The System Type should be set according to the target Fabric component:
- Fabric Lakehouse - For Microsoft Fabric Lakehouse targets
- Fabric Warehouse - For Microsoft Fabric Warehouse targets
Connection String
For Fabric connections, the connection string is configured through the Connection. See the Connections Section for details on configuring connection string values and Azure Key Vault secrets.
External Location
The External Location field specifies the storage path for the connection. For Fabric connections, this can be:
- OneLake path: abfss://&lt;workspace&gt;@onelake.dfs.fabric.microsoft.com/&lt;lakehouse&gt;/Files/
- ABFSS path: for Azure Data Lake Storage Gen2 connections (e.g., abfss://&lt;container&gt;@&lt;storage-account&gt;.dfs.core.windows.net/&lt;path&gt;)
This field is used to define where data files are stored and accessed during data movement operations.
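For example, for a hypothetical workspace named SalesAnalytics and a lakehouse named SalesLakehouse, the OneLake path pattern above would resolve to abfss://SalesAnalytics@onelake.dfs.fabric.microsoft.com/SalesLakehouse/Files/.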
External Reference
The External Reference field stores the unique identifier associated with the connection. Within Microsoft Fabric, every connection has an internal ID that BimlFlex uses to reference the correct connection when executing pipelines.
This identifier can be found in the Fabric portal and should be entered in the External Reference field for each connection.
The External Reference is required for all Fabric connections. This ID enables BimlFlex to properly reference the Fabric connection when generating and deploying Data Factory pipelines.
For additional details on creating a Connection, refer to the Connections guide.
Configuring History Persistence (Bronze Layer)
For each source, you can configure whether to persist history in your bronze layer:
- Persist History enabled: Delta detection and full history tracking in the Persistent Staging Area
- Persist History disabled: Only maintains current state of source data
To enable history persistence:
- Navigate to the source connection in the Connections editor
- Enable the Persist History toggle
- Save the connection
When enabled, BimlFlex automatically implements delta detection and maintains a complete history of all data changes in your bronze layer.
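The delta-detection logic itself is generated by BimlFlex; the sketch below is only a simplified illustration of the general pattern, assuming hypothetical stg.Customer and psa.Customer tables keyed on CustomerKey with a precomputed RowHash column (this is not the code BimlFlex actually generates):

```sql
-- Illustration only: hypothetical tables and columns, not generated BimlFlex code.
-- Insert a new PSA version when a key is new or its row hash has changed.
INSERT INTO psa.Customer (CustomerKey, RowHash, LoadDateTime, CustomerName, CustomerCity)
SELECT src.CustomerKey, src.RowHash, src.LoadDateTime, src.CustomerName, src.CustomerCity
FROM stg.Customer AS src
LEFT JOIN (
    -- Most recent PSA version per business key
    SELECT CustomerKey, RowHash,
           ROW_NUMBER() OVER (PARTITION BY CustomerKey ORDER BY LoadDateTime DESC) AS rn
    FROM psa.Customer
) AS cur
    ON cur.CustomerKey = src.CustomerKey AND cur.rn = 1
WHERE cur.CustomerKey IS NULL      -- new key
   OR cur.RowHash <> src.RowHash;  -- changed attributes
```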
Deploying the Target Fabric Environment
When Microsoft Fabric is used as the target platform, BimlFlex will generate the required SQL scripts for the deployment of all the tables, stored procedures, and the Data Vault default inserts (Ghost Keys).
Once generated, the scripts can be deployed manually to the required Fabric workspace.
Generated Artifacts
BimlFlex generates all artifacts required for your Fabric solution. You do not need to write notebooks, stored procedures, or pipeline code manually. The generated artifacts include:
| Artifact Type | Description |
|---|---|
| Warehouse Tables | DDL scripts for creating all warehouse structures |
| Lakehouse Tables | DDL scripts for creating lakehouse tables |
| Notebooks | Spark notebooks for data processing in Lakehouse |
| Stored Procedures | T-SQL procedures for Warehouse transformations |
| Data Factory Pipelines | Complete pipeline orchestration including copy activities, notebook execution, and error handling |
Pipeline Features
Generated pipelines include sophisticated data movement logic:
- High watermark lookups for incremental loading
- Copy activities with proper connection settings
- Notebook execution for staging layer processing
- Automatic file handling (archive/error movement)
- Error handling and retry logic
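As a hedged illustration of the high-watermark pattern listed above (object, column, and activity names are hypothetical, and this is not the exact query text BimlFlex generates), a pipeline typically pairs a lookup query with a filtered source query:

```sql
-- Illustration only: hypothetical object and column names.
-- 1. Lookup activity: retrieve the highest value already loaded into staging.
SELECT MAX(LastModifiedDateTime) AS HighWatermark
FROM stg.SalesOrder;

-- 2. Copy activity source query: extract only rows newer than that watermark.
--    In a pipeline the literal below would be injected with an expression such as
--    @{activity('Lookup Watermark').output.firstRow.HighWatermark}.
SELECT *
FROM dbo.SalesOrder
WHERE LastModifiedDateTime > '2024-01-15T10:30:00';
```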
Generating Fabric SQL Scripts
Using Microsoft Fabric as the target platform requires generating scripts with the appropriate Table Script and Procedure Script options when using Generate Scripts in BimlStudio. Additionally, if Data Vault is being used, the standard Data Vault Default Insert Script can be used to generate the required Ghost Keys.
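A Ghost Key is a placeholder record with a zero hash key that the Data Vault default inserts create for each Hub and Link. The sketch below uses hypothetical names and values to illustrate the concept; it is not the exact script BimlFlex generates:

```sql
-- Illustration only: a Ghost Key (zero-key) default insert for a hypothetical Hub.
INSERT INTO dv.HUB_Customer (CustomerHashKey, LoadDateTime, RecordSource, CustomerBusinessKey)
VALUES ('00000000000000000000000000000000', '1900-01-01', 'GHOST', 'UNKNOWN');
```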
For additional details on generating DDL refer to the BimlFlex DDL generation guide.
Deploying Fabric SQL Scripts
Once BimlFlex generates the scripts, they can be executed against the target Fabric workspace.
For Fabric Lakehouse, scripts can be deployed through:
- Fabric Notebooks
- Spark SQL
For Fabric Warehouse, scripts can be deployed through:
- Fabric SQL Editor
- T-SQL scripts
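For example, a generated Lakehouse DDL script could be run from a Spark SQL cell in a Fabric notebook; the sketch below uses hypothetical table and column names:

```sql
-- Illustration only: executing generated DDL from a Fabric notebook cell (Spark SQL).
CREATE TABLE IF NOT EXISTS stg_Customer (
    CustomerKey   STRING,
    CustomerName  STRING,
    LoadDateTime  TIMESTAMP,
    RecordSource  STRING
);
```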
Ensure you are using the appropriate workspace and schema when executing the SQL scripts. The scripts may not contain explicit workspace references and rely on the executing user having selected the appropriate context.
For additional details on using Microsoft Fabric SQL capabilities, refer to the Microsoft documentation.
Orchestration
BimlFlex automatically generates the orchestration artifacts as part of the standard build process. The actual artifacts generated depend on the Integration Stage that is configured for the Project.
There is no difference in the process of deploying the environment and Data Factory orchestration compared to any other target infrastructure in BimlFlex.
As a final check, please ensure the following configurations were made:
- Create a Landing Area
- Provide a configured Connection with Secrets entered
- Configure the External Reference for each Fabric connection
- Configure and review the generic Data Factory environment settings:
  - Azure Blob Stage Container Settings
  - Azure Blob Archive Container Settings
  - Azure Blob Error Container Settings
For additional details, refer to the guides on generating and deploying Data Factory artifacts.
Benefits of Using BimlFlex with Fabric
BimlFlex provides significant advantages when building Microsoft Fabric solutions:
- No Code Required: Your team only needs to understand data modeling—BimlFlex generates all notebooks, stored procedures, and pipelines automatically
- Focus on Design: Concentrate on source-to-target mappings and transformations, not implementation details
- Automatic Updates: As Microsoft Fabric evolves, BimlFlex templates are updated to ensure optimal implementations
- Data Vault Accelerator: Full access to the Data Vault accelerator for modeling hubs, links, and satellites
- Transformation Support: Apply transformations directly in BimlFlex, including macros for reusable patterns (like PPI encryption)
- Data Lineage: Complete data lineage visualization for any object in your solution
- Schema Documentation: Automatic schema diagrams and documentation generation