Preparing BimlFlex for Microsoft Fabric

Before working with Microsoft Fabric metadata, ensure that your environment is configured to use Fabric appropriately. This section walks you through the software and system configuration required to connect BimlFlex to your Microsoft Fabric environment.

Getting Started with Microsoft Fabric in BimlFlex

Prerequisites

Before configuring BimlFlex for Microsoft Fabric, ensure you have:

  1. An active Microsoft Fabric workspace with appropriate permissions
  2. Data Factory configured in your Microsoft Fabric workspace
  3. Appropriate authentication credentials (Service Principal or Managed Identity)
  4. OneLake or Azure Data Lake Storage Gen2 for landing and staging areas

Quick Start from Sample Metadata

The fastest way to get started with Microsoft Fabric is to load one of the pre-configured sample metadata sets. BimlFlex provides two Fabric-specific samples:

| Sample | Description | Use Case |
| --- | --- | --- |
| Fabric Data Vault | Pre-configured for Data Vault implementation | Building a silver layer with Hub, Link, and Satellite patterns |
| Fabric Datamart | Pre-configured for dimensional modeling | Building bronze-to-gold layer data marts |

To load a sample:

  1. Navigate to the BimlFlex Dashboard
  2. Select from the Load Sample Metadata dropdown
  3. Choose either Fabric Data Vault or Fabric Datamart

The sample metadata includes pre-configured projects, connections, and objects that demonstrate best practices for Fabric implementations.

Configuring a Project for Microsoft Fabric

To configure an existing project for Microsoft Fabric:

  1. Navigate to the Projects editor in BimlFlex
  2. Change the Integration Template from Integration Services to Data Factory
  3. Configure a Landing Connection for data extraction
  4. Save the project

Once configured for Fabric, the project will display a Fabric icon indicating the target platform.

tip

If you're starting from the SQL Server SSIS sample (Sample 01), you can convert it to Fabric by simply changing the Integration Template to Data Factory and configuring the appropriate connections.

Configuring BimlFlex Settings

BimlFlex uses Settings to adapt to specific requirements for file locations, naming conventions, data conventions, and so on.

Align these settings with your organization's best practices and environmental requirements.


Configuring a Fabric Connection

This section outlines the specific considerations when configuring BimlFlex to use Microsoft Fabric across the various Integration Stages. Each field is described in the subsections below.

| Field | Supported Values |
| --- | --- |
| Integration Stage | Source System, Staging Area, Persistent Staging Area, Data Vault, Data Mart |
| System Type | Fabric Lakehouse, Fabric Warehouse |
| Connection String | Fabric connection string |
| External Location | OneLake path or ABFSS path to the storage location |
| External Reference | Fabric connection ID (internal identifier) |

Integration Stage

BimlFlex supports Microsoft Fabric both as a target warehouse platform and as a Source System.

You can use Fabric Lakehouse as a source going into another Fabric Lakehouse, enabling scenarios where you process data between multiple Fabric components.

Naming patterns and schemas can be used for separation as needed.

Landing Area

Microsoft Fabric is not currently supported as a Landing Area, but a Landing Area is required when using Data Factory to orchestrate data movement.

The recommendation is for the Landing Area to use one of the following:

  • OneLake: Land data directly into the Fabric Lakehouse Files area
  • Azure Data Lake Storage Gen2: Land data into a traditional ADLS Gen2 container

In addition to the Landing Area, it is also important that the Settings for the Azure Blob Stage Container, Azure Blob Archive Container, and Azure Blob Error Container are populated correctly.
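
For example (container names here are hypothetical, and whether each setting expects a bare container name or a full URL is covered in the BimlFlex Settings documentation), the three settings might point at dedicated containers in the same storage account:

  • Azure Blob Stage Container: stage
  • Azure Blob Archive Container: archive
  • Azure Blob Error Container: error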

danger

Ensure that all Azure Blob Container Settings are configured properly.

Object File Path Configuration

In addition to the global storage settings, BimlFlex allows granular file path configuration at the Object level. These settings are configured in the Object Editor and override the default paths for individual objects.

Settings

| Setting Key | Setting Description |
| --- | --- |
| Source File Path | The path where source files are located. Used when the Object represents a file-based source. |
| Source File Pattern | The naming pattern for source files. Supports expressions for dynamic file matching. |
| Source File Filter | The filter applied to the Filter Activity after the Metadata Activity to select specific files. |
| Landing File Path | The path where extracted data files should be placed during the landing process. |
| Landing File Pattern | The naming pattern for landing files. Supports expressions including @@this for the object name and pipeline().parameters for dynamic values. |
| Persistent Landing File Path | The file path where raw landing files are stored persistently for historical tracking and reprocessing. |
| Persistent Landing File Pattern | The naming pattern for files stored in the Persistent Landing File Path. May include date, time, or other variable elements. |

Examples

| Setting Key | Example Value |
| --- | --- |
| Source File Path | `/mnt/source/sales/` |
| Source File Pattern | `sales_*.parquet` |
| Source File Filter | `@greater(item().lastModified, pipeline().parameters.LastLoadDate)` |
| Landing File Path | `@@this/Landing/` |
| Landing File Pattern | `@concat('@@this', '_', replace(replace(formatDateTime(pipeline().parameters.BatchStartTime, 'yyyy-MM-ddTHH:mm:ss'), ':', ''), '-', ''), '.parquet')` |
| Persistent Landing File Path | `@concat('@@this/Archive/', formatDateTime(pipeline().parameters.BatchStartTime, 'yyyy/MM/dd/'))` |
| Persistent Landing File Pattern | `@@this_@@timestamp.parquet` |

note

The @@this placeholder is automatically replaced with the Object name at runtime. These settings apply per-object and take precedence over connection-level defaults.
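
As a worked example (the Object name is hypothetical): for an Object named SalesOrder and a BatchStartTime of 2024-03-15T08:30:00, the Landing File Pattern above resolves to `SalesOrder_20240315T083000.parquet`, and the Persistent Landing File Path resolves to `SalesOrder/Archive/2024/03/15/`.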

System Type

The System Type should be set according to the target Fabric component:

  • Fabric Lakehouse - For Microsoft Fabric Lakehouse targets
  • Fabric Warehouse - For Microsoft Fabric Warehouse targets

Connection String

For Fabric connections, the connection string is configured through the Connection. See the Connections Section for details on configuring connection string values and Azure Key Vault secrets.
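
The exact connection string depends on the System Type and authentication method. As an illustrative sketch only (the server, database, and credential values are hypothetical placeholders, and the precise format BimlFlex expects is documented in the Connections Section), a Fabric Warehouse connection using a Service Principal typically follows the familiar SQL Server pattern:

```
Server=<workspace-endpoint>.datawarehouse.fabric.microsoft.com;
Database=<warehouse-name>;
Authentication=Active Directory Service Principal;
User ID=<app-id>;Password=<client-secret>;Encrypt=True;
```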

External Location

The External Location field specifies the storage path for the connection. For Fabric connections, this can be:

  • OneLake path: abfss://<workspace>@onelake.dfs.fabric.microsoft.com/<lakehouse>/Files/
  • ABFSS path: For Azure Data Lake Storage Gen2 connections (e.g., abfss://<container>@<storage-account>.dfs.core.windows.net/<path>)

This field is used to define where data files are stored and accessed during data movement operations.
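
For example, following the OneLake template above with hypothetical names, a lakehouse named SalesLakehouse in a workspace named Analytics would use `abfss://Analytics@onelake.dfs.fabric.microsoft.com/SalesLakehouse/Files/`.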

External Reference

The External Reference field stores the unique identifier associated with the connection. Within Microsoft Fabric, every connection has an internal ID that BimlFlex uses to reference the correct connection when executing pipelines.

This identifier can be found in the Fabric portal and should be entered in the External Reference field for each connection.

note

The External Reference is required for all Fabric connections. This ID enables BimlFlex to properly reference the Fabric connection when generating and deploying Data Factory pipelines.

tip

For additional details on creating a Connection, refer to the Connections guide.

Configuring History Persistence (Bronze Layer)

For each source, you can configure whether to persist history in your bronze layer:

  • Persist History enabled: Delta detection and full history tracking in the Persistent Staging Area
  • Persist History disabled: Only maintains current state of source data

To enable history persistence:

  1. Navigate to the source connection in the Connections editor
  2. Enable the Persist History toggle
  3. Save the connection

When enabled, BimlFlex automatically implements delta detection and maintains a complete history of all data changes in your bronze layer.
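
Conceptually, persisting history amounts to inserting a new version of a row whenever its attribute hash differs from the most recent version already stored. The sketch below illustrates the pattern only; it is not the code BimlFlex generates, and the table and column names are hypothetical:

```sql
-- Illustrative delta detection: load new keys and changed rows into the
-- persistent staging area. Names are hypothetical.
INSERT INTO psa.SalesOrder (OrderId, CustomerName, OrderStatus, RowHash, LoadDatetime)
SELECT s.OrderId, s.CustomerName, s.OrderStatus, s.RowHash, s.LoadDatetime
FROM stg.SalesOrder AS s
LEFT JOIN (
    -- Most recent version of each key currently in the PSA
    SELECT OrderId, RowHash,
           ROW_NUMBER() OVER (PARTITION BY OrderId ORDER BY LoadDatetime DESC) AS rn
    FROM psa.SalesOrder
) AS p
    ON p.OrderId = s.OrderId AND p.rn = 1
WHERE p.OrderId IS NULL          -- new key
   OR p.RowHash <> s.RowHash;    -- changed attributes
```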

Deploying the Target Fabric Environment

When Microsoft Fabric is used as the target platform, BimlFlex generates the required SQL scripts for deploying all tables, stored procedures, and the Data Vault default inserts (Ghost Keys).

Once generated, the scripts can be manually deployed to the required Fabric workspace.

Generated Artifacts

BimlFlex generates all artifacts required for your Fabric solution. You do not need to write notebooks, stored procedures, or pipeline code manually. The generated artifacts include:

| Artifact Type | Description |
| --- | --- |
| Warehouse Tables | DDL scripts for creating all warehouse structures |
| Lakehouse Tables | DDL scripts for creating lakehouse tables |
| Notebooks | Spark notebooks for data processing in Lakehouse |
| Stored Procedures | T-SQL procedures for Warehouse transformations |
| Data Factory Pipelines | Complete pipeline orchestration including copy activities, notebook execution, and error handling |
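
As an indication of what these artifacts look like (this is a simplified, hypothetical sketch rather than actual BimlFlex output), a generated Warehouse staging table typically carries the source columns plus load metadata:

```sql
-- Hypothetical, simplified example of a generated staging table.
CREATE TABLE stg.SalesOrder
(
    OrderId        INT           NOT NULL,
    CustomerName   VARCHAR(100)  NULL,
    OrderStatus    VARCHAR(20)   NULL,
    LoadDatetime   DATETIME2(6)  NOT NULL,  -- load metadata added by the framework
    RecordSource   VARCHAR(100)  NOT NULL,
    RowHash        CHAR(32)      NOT NULL   -- change-detection hash
);
```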

Pipeline Features

Generated pipelines include sophisticated data movement logic:

  • High watermark lookups for incremental loading (sketched below)
  • Copy activities with proper connection settings
  • Notebook execution for staging layer processing
  • Automatic file handling (archive/error movement)
  • Error handling and retry logic
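
The high watermark pattern, for instance, boils down to looking up the last successfully loaded point and filtering the source extract against it. Illustratively (object and column names are hypothetical, not BimlFlex's exact implementation):

```sql
-- 1. Lookup activity: fetch the watermark recorded by the previous run.
SELECT MAX(WatermarkValue) AS LastLoadDate
FROM ctl.HighWatermark
WHERE ObjectName = 'SalesOrder';

-- 2. Copy activity source query: extract only rows changed since then.
SELECT *
FROM src.SalesOrder
WHERE LastModifiedDate > @LastLoadDate;  -- value injected by the pipeline
```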

Generating Fabric SQL Scripts

Using Microsoft Fabric as the target platform requires generating the appropriate Table Script and Procedure Script outputs when using Generate Scripts in BimlStudio. Additionally, if Data Vault is being used, the standard Data Vault Default Insert Script can be used to generate the required Ghost Keys.
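
A Ghost Key insert seeds each Data Vault table with a zero-key record so that outer joins and late-arriving relationships always resolve to a known row. Illustratively (the hash value, names, and column layout are hypothetical, not BimlFlex's exact output):

```sql
-- Hypothetical ghost record for a Customer hub.
INSERT INTO dv.HUB_Customer (CustomerHashKey, LoadDatetime, RecordSource, CustomerBusinessKey)
VALUES ('00000000000000000000000000000000',  -- zero hash key
        '1900-01-01',                         -- sentinel load date
        'GHOST',
        'Unknown');
```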

tip

For additional details on generating DDL refer to the BimlFlex DDL generation guide.

Deploying Fabric SQL Scripts

Once BimlFlex generates the scripts, they can be executed against the target Fabric workspace.

For Fabric Lakehouse, scripts can be deployed through:

  • Fabric Notebooks
  • Spark SQL (see the sketch below)
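
For example, a Lakehouse table script can be run from a Fabric notebook cell (an illustrative sketch; the table definition is hypothetical):

```sql
-- Spark SQL executed in a Fabric notebook against the attached Lakehouse.
CREATE TABLE IF NOT EXISTS stg_salesorder
(
    OrderId       INT,
    CustomerName  STRING,
    LoadDatetime  TIMESTAMP,
    RecordSource  STRING
)
USING DELTA;
```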

For Fabric Warehouse, scripts can be deployed through:

  • Fabric SQL Editor
  • T-SQL scripts

danger

Ensure you are using the appropriate workspace and schema when executing the SQL scripts. The scripts may not contain explicit workspace references and rely on the executing user having selected the appropriate context.

tip

For additional details on using Microsoft Fabric SQL capabilities, refer to the Microsoft Fabric documentation.

Orchestration

BimlFlex automatically generates the orchestration artifacts as part of the standard build process. The actual artifacts generated depend on the Integration Stage that is configured for the Project.

There is no difference in the process of deploying the environment and Data Factory orchestration compared to any other target infrastructure in BimlFlex.

As a final check, ensure that the configurations described above (the Integration Template, connections, and Settings) are in place.

tip

For additional details on generating and deploying Data Factory artifacts, refer to the Data Factory generation and deployment guides.

Benefits of Using BimlFlex with Fabric

BimlFlex provides significant advantages when building Microsoft Fabric solutions:

  • No Code Required: Your team only needs to understand data modeling; BimlFlex generates all notebooks, stored procedures, and pipelines automatically
  • Focus on Design: Concentrate on source-to-target mappings and transformations, not implementation details
  • Automatic Updates: As Microsoft Fabric evolves, BimlFlex templates are updated to ensure optimal implementations
  • Data Vault Accelerator: Full access to the Data Vault accelerator for modeling hubs, links, and satellites
  • Transformation Support: Apply transformations directly in BimlFlex, including macros for reusable patterns (like PII encryption)
  • Data Lineage: Complete data lineage visualization for any object in your solution
  • Schema Documentation: Automatic schema diagrams and documentation generation