Transformations
Container for child dataflow transformation definitions.
This is a collection of the child transformation and component definitions for this data flow task.
Permitted Collection Child Definitions
Child | API Type | Description |
---|---|---|
<AdoNetDestination /> | AstAdoNetDestinationNode | The ADO.NET destination loads data into an ADO.NET-compliant database using a database table or view. |
<AdoNetSource /> | AstAdoNetSourceNode | The ADO.NET Source reads data from an ADO.NET-compliant database using a database table or view. |
<Aggregate /> | AstAggregateNode | The Aggregate transformation aggregates and groups values in a data set. |
<Audit /> | AstAuditNode | The Audit transformation performs specified audits on the incoming dataflow rows and adds audit information to the outgoing dataflow rows. |
<AzureBlobDestination /> | AstAzureBlobDestinationNode | The Azure blob destination writes data to Azure blob storage. |
<AzureBlobSource /> | AstAzureBlobSourceNode | The Azure blob source reads data from Azure blob storage. |
<AzureDataLakeStoreDestination /> | AstAzureDataLakeStoreDestinationNode | The Azure Data Lake Store destination writes data to an Azure Data Lake Store. |
<AzureDataLakeStoreSource /> | AstAzureDataLakeStoreSourceNode | The Azure Data Lake Store Source reads data from an Azure Data Lake Store. |
<BalancedDataDistributor /> | AstBalancedDataDistributorNode | The Balanced Data Distributor Transformation splits its input rows into equal proportions and distributes them to its output paths. This is normally used to better take advantage of the parallelism features of your target server or to distribute rows across server shards. You will need to install this component on your server as it is published by Microsoft as a download. Please see http://blogs.msdn.com/b/sqlperf/archive/2011/05/25/the-balanced-data-distributor-for-ssis.aspx for more information. |
<Cache /> | AstCacheTransformNode | The Cache Transform transformation writes data from a connected data source to a specified cache. This data can later be used by the lookup transformation. Using a cache enables multiple lookups to share the same cache rather than separately loading duplicate data. |
<CdcSource /> | AstCdcSourceNode | The CDC Source reads data from a CDC-enabled SQL Server table and outputs rows that reflect the changes that have taken place in the specified processing range. |
<CdcSplitter /> | AstCdcSplitterNode | The CDC Splitter component partitions rows from a CDC Source component based on their status as inserted, updated, or deleted rows. |
<CharacterMap /> | AstCharacterMapNode | The Character Map transformation converts character data by applying string transformation operations. |
<ConditionalSplit /> | AstConditionalSplitNode | The Conditional Split transformation routes source data rows to different output paths by evaluating expressions. |
<ContactVerify /> | AstContactVerifyNode | The Melissa Data Contact Verify custom component is used to detect suspicious values, standardize, normalize, verify, correct, or supplement various properties of contact information, including names, addresses, geocoding, phone numbers, and email addresses. It uses a local or remote database that is regularly updated with the latest data from the U.S. and Canada. You will need to install this component on your server as it is published by Melissa Data. Please see http://www.melissadata.com/data-quality-ssis for more information. |
<CopyColumn /> | AstCopyColumnNode | The Copy Column transformation copies input columns and uses them to create new columns in the transformation output. |
<CustomComponent /> | AstCustomComponentNode | The Custom Component transformation represents a custom component in SSIS. |
<CustomComponentSource /> | AstCustomComponentSourceNode | Defines a custom source component. |
<DataConversion /> | AstDataConversionNode | The Data Conversion transformation converts the value in a column from its present type to a specified type and copies it to a new column. |
<DataMiningQuery /> | AstDataMiningQueryNode | The Data Mining Query transformation runs a Data Mining Extensions (DMX) query against a specified data mining model. |
<DataReaderDestination /> | AstDataReaderDestinationNode | The DataReader destination loads dataflow rows into an ADO.NET data reader for consumption by other applications or components. |
<DataStreamingDestination /> | AstDataStreamingDestinationNode | The Data Streaming destination exposes the output of the data flow so that it can be queried as a SQL Server view, using the SSIS data feed publishing components. |
<DerivedColumns /> | AstDerivedColumnListNode | The Derived Column transformation applies expressions to input columns to generate new column values. |
<DqsCleansing /> | AstDqsCleansingNode | The DQS Cleansing transformation replaces values of incoming data flow columns with corresponding values that have been corrected using rules specified in a Microsoft SQL Server Data Quality Services (DQS) instance. |
<ExcelDestination /> | AstExcelDestinationNode | The Excel destination loads data into Microsoft Excel workbooks. |
<ExcelSource /> | AstExcelSourceNode | The Excel source extracts data from Microsoft Excel workbooks. |
<ExportColumn /> | AstExportColumnNode | The Export Column transformation exports column values from rows in a data set to files. |
<FlatFileDestination /> | AstFlatFileDestinationNode | The Flat File destination writes data to a flat text file in a format specified by a FlatFileFormat definition. |
<FlatFileSource /> | AstFlatFileSourceNode | The Flat File Source reads data from a flat text file in a format specified by a FlatFileFormat definition. |
<FlexibleFileDestination /> | AstFlexibleFileDestinationNode | The Flexible File destination writes files to Azure Blob storage or Azure Data Lake Storage Gen2. |
<FlexibleFileSource /> | AstFlexibleFileSourceNode | The Flexible File source reads data from Azure Blob storage or Azure Data Lake Storage Gen2. |
<FuzzyGrouping /> | AstFuzzyGroupingNode | The Fuzzy Grouping transformation groups data set rows that contain similar values. |
<FuzzyLookup /> | AstFuzzyLookupNode | The Fuzzy Lookup transformation looks up values in a reference data set by using fuzzy matching. That is, matches can be close rather than exact. |
<GlobalVerify /> | AstGlobalVerifyNode | The Melissa Data Global Verify custom component is used to verify, correct, and supplement various properties of contact information across more than 240 countries, ensuring that address records are correct and complete. It uses a local or remote database that is regularly updated with the latest international data. You will need to install this component on your server as it is published by Melissa Data. Please see http://www.melissadata.com/data-quality-ssis for more information. |
<HdfsFileDestination /> | AstHdfsFileDestinationNode | The HDFS destination writes data to a Hadoop cluster. |
<HdfsFileSource /> | AstHdfsFileSourceNode | The HDFS Source reads data from a Hadoop cluster. |
<ImportColumn /> | AstImportColumnNode | The Import Column transformation loads data from files into columns in a data flow. |
<IsNullPatcher /> | AstIsNullPatcherNode | The Is Null Patcher will rewrite the specified columns to a default value if they have a null value. |
<Lookup /> | AstLookupNode | The Lookup transformation combines data in input columns with data in columns in a reference data set. It is the data flow equivalent of a SQL join. |
<MatchUp /> | AstMatchUpNode | The Melissa Data Match Up custom component is used to identify duplicate contact records. Even when contact properties are not exact matches, the component uses fuzzy matching and a variety of scoring algorithms to identify matches. You will need to install this component on your server as it is published by Melissa Data. Please see http://www.melissadata.com/data-quality-ssis for more information. |
<Merge /> | AstMergeNode | The Merge transformation combines data from two sorted data sets into a single data set. |
<MergeJoin /> | AstMergeJoinNode | The Merge Join transformation merges data from two data sets by using a join. |
<Multicast /> | AstMulticastNode | The Multicast Transformation creates multiple copies of source data and distributes them to multiple output paths. |
<MultipleHash /> | AstMultipleHashNode | The Multiple Hash custom component is used to produce hash values for a selection of input columns using a choice of hashing algorithms. It is capable of producing multiple hashed output columns with a single instance of the component. You will need to install this component on your server as it is published on CodePlex as a community project. Please see http://ssismhash.codeplex.com for more information. |
<OdbcDestination /> | AstOdbcDestinationNode | The ODBC Destination loads data into an ODBC-compliant database using a database table, a view, or an SQL command. |
<OdbcSource /> | AstOdbcSourceNode | The ODBC Source reads a data source using one of the ODBC adapters available on the host system. |
<OleDbCommand /> | AstOleDbCommandNode | The OLE DB command transformation executes SQL statements to access and transform external data. |
<OleDbDestination /> | AstOleDbDestinationNode | The OLE DB destination loads data into an OLE DB-compliant database using a database table, a view, or an SQL command. |
<OleDbSource /> | AstOleDbSourceNode | The OleDbSource reads a data source using one of the OLEDB adapters available on the host system. |
<OracleDestination /> | AstOracleDestinationNode | The Oracle Destination loads data into an Oracle database using a database table, a view, or an SQL command. This is done specifically using the Attunity Oracle Connector. |
<OracleSource /> | AstOracleSourceNode | The Oracle Source reads an Oracle data source using the Attunity Oracle Connector. |
<PercentageSampling /> | AstPercentageSamplingNode | The Percentage Sampling transformation generates a sample data set by randomly selecting a specified percentage of data flow rows from the input. |
<Personator /> | AstPersonatorNode | The Melissa Data Personator custom component is used to verify, correct, and supplement various properties of contact information, ensuring that records are correct and complete. It uses a local or remote database that is regularly updated with the latest international data. You will need to install this component on your server as it is published by Melissa Data. Please see http://www.melissadata.com/data-quality-ssis for more information. |
<Pivot /> | AstPivotNode | The Pivot transformation creates a less normalized representation of a data set by taking multiple rows and converting them into a single row with multiple columns. |
<RawFileDestination /> | AstRawFileDestinationNode | The Raw File destination loads dataflow rows into a raw data file using a file format that is native to this component. This destination and the accompanying source component are intended to improve performance by leveraging the native file format. |
<RawFileSource /> | AstRawFileSourceNode | The Raw File Source reads dataflow rows from a raw data file using a file format that is native to this component. This source and the accompanying destination component are intended to improve performance by leveraging the native file format. |
<RecordsetDestination /> | AstRecordSetDestinationNode | The Recordset Destination component creates an in-memory ADO recordset that is stored in a variable. |
<RowCount /> | AstRowCountNode | The Row Count transformation counts rows as the data flows through it and stores the total row count in a variable after the last row is counted. |
<RowSampling /> | AstRowSamplingNode | The Row Sampling transformation generates a sample data set by randomly selecting a specified number of data flow rows from the input. |
<Scd /> | AstSlowlyChangingDimensionNode | The Slowly Changing Dimension transformation checks for dimension attribute changes in incoming data, determines how related records are updated, and specifies how the updated records are inserted into data warehouse dimension tables. |
<ScriptComponentDestination /> | AstScriptComponentDestinationNode | The Script Component Destination type corresponds directly to a SQL Server Integration Services script component with an input buffer. |
<ScriptComponentSource /> | AstScriptComponentSourceNode | The Script Component Source type corresponds directly to a SQL Server Integration Services script component with output buffers. |
<ScriptComponentTransformation /> | AstScriptComponentTransformationNode | The Script Component Transformation type corresponds directly to a SQL Server Integration Services script component with both an input buffer and output buffers. |
<SmartMover /> | AstSmartMoverNode | The Melissa Data Smart Mover custom component is used to identify contacts who have relocated and automatically update their contact information. It uses a local or remote database that is regularly updated with the latest U.S. data. You will need to install this component on your server as it is published by Melissa Data. Please see http://www.melissadata.com/data-quality-ssis for more information. |
<Sort /> | AstSortNode | The Sort transformation sorts input data in the specified order and then copies the sorted data to the transformation output. |
<SqlServerPdwDestination /> | AstSqlServerPdwDestinationNode | The SQL Server PDW destination loads data into a Microsoft SQL Server Parallel Data Warehouse (PDW) appliance. In later versions, PDW is rebranded as the Microsoft Analytics Platform System. |
<TeradataDestination /> | AstTeradataDestinationNode | The Teradata Destination loads data into a Teradata database using a database table, a view, or an SQL command. This is done specifically using the Attunity Teradata Connector. |
<TeradataSource /> | AstTeradataSourceNode | The Teradata Source reads a Teradata data source using the Attunity Teradata Connector. |
<TermExtraction /> | AstTermExtractionNode | The Term Extraction transformation extracts terms from input text columns and directs the terms to output text columns. |
<TermLookup /> | AstTermLookupNode | The Term Lookup transformation extracts terms from text in an input column and compares these terms to terms in a reference table. |
<TheobaldXtractSapSource /> | AstTheobaldXtractSapSourceNode | The Theobald XTRACT Source will connect to an SAP database to extract records from a specified table. |
<UnionAll /> | AstUnionAllNode | The Union All transformation combines rows from multiple input paths into a single output path, using column mappings where necessary. |
<Unpivot /> | AstUnpivotNode | The Unpivot transformation creates a more normalized representation of a data set by taking values from multiple columns in the same row and breaking them up into multiple rows with a label column and a column containing the original data value. |
<XmlSource /> | AstXmlSourceNode | The XML source reads an XML data file, optionally validating it, and creates data flow output rows with the resulting data. |
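
As a usage sketch, the fragment below shows a Transformations collection inside a Dataflow task that chains an OleDbSource, a DerivedColumns transformation, and an OleDbDestination. The connection names, table names, and query are hypothetical placeholders, not part of this reference.

```xml
<Dataflow Name="LoadDimCustomer">
  <Transformations>
    <!-- Read rows from the source database (connection name is a placeholder) -->
    <OleDbSource Name="CustomerSource" ConnectionName="SourceDb">
      <DirectInput>SELECT CustomerID, CustomerName FROM dbo.Customer</DirectInput>
    </OleDbSource>
    <!-- Add an audit column computed with an SSIS expression -->
    <DerivedColumns Name="AddAuditColumns">
      <Columns>
        <Column Name="LoadDate" DataType="DateTime">GETDATE()</Column>
      </Columns>
    </DerivedColumns>
    <!-- Write the resulting rows to the target table (connection name is a placeholder) -->
    <OleDbDestination Name="CustomerDestination" ConnectionName="TargetDb">
      <ExternalTableOutput Table="dbo.DimCustomer" />
    </OleDbDestination>
  </Transformations>
</Dataflow>
```

By default each transformation consumes the output of the preceding sibling in the collection, so a linear data flow like this one needs no explicit input path definitions.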