Transformations

Container for child data flow transformation definitions. This collection holds the source, transformation, and destination component definitions for a data flow task.
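For context, the following is a minimal Biml sketch of a data flow task whose Transformations collection contains a source and a destination. The package, connection, and table names are illustrative placeholders, not part of this reference, and should be adapted to your own project.

```xml
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
  <Packages>
    <Package Name="LoadCustomers" ConstraintMode="Linear">
      <Tasks>
        <Dataflow Name="DFT Load Customers">
          <Transformations>
            <!-- Source component: reads from the source connection -->
            <OleDbSource Name="SRC Customers" ConnectionName="SourceDb">
              <ExternalTableInput Table="dbo.Customers" />
            </OleDbSource>
            <!-- Destination component: writes to the target connection -->
            <OleDbDestination Name="DST Customers" ConnectionName="TargetDb">
              <ExternalTableOutput Table="dbo.Customers" />
            </OleDbDestination>
          </Transformations>
        </Dataflow>
      </Tasks>
    </Package>
  </Packages>
</Biml>
```

By default, each transformation's input path attaches to the output path of the component declared immediately before it, so a simple source-to-destination flow needs no explicit path wiring.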
Permitted Collection Child Definitions
Child | API Type | Description
----- | -------- | -----------
<AdoNetDestination /> | AstAdoNetDestinationNode | The ADO.NET destination loads data into an ADO.NET-compliant database that uses a database table or view.
<AdoNetSource /> | AstAdoNetSourceNode | The ADO.NET Source reads data from an ADO.NET-compliant database that uses a database table or view.
<Aggregate /> | AstAggregateNode | The Aggregate transformation aggregates and groups values in a data set.
<Audit /> | AstAuditNode | The Audit transformation performs specified audits on the incoming dataflow rows and adds audit information to the outgoing dataflow rows.
<AzureBlobDestination /> | AstAzureBlobDestinationNode | The Azure blob destination writes data to Azure blob storage.
<AzureBlobSource /> | AstAzureBlobSourceNode | The Azure blob source reads data from Azure blob storage.
<AzureDataLakeStoreDestination /> | AstAzureDataLakeStoreDestinationNode | The Azure Data Lake Store destination writes data to an Azure Data Lake Store.
<AzureDataLakeStoreSource /> | AstAzureDataLakeStoreSourceNode | The Azure Data Lake Store Source reads data from an Azure Data Lake Store.
<BalancedDataDistributor /> | AstBalancedDataDistributorNode | The Balanced Data Distributor transformation splits its input rows into equal proportions and distributes them across its output paths. This is normally used to take better advantage of the parallelism features of your target server or to distribute rows across server shards. You will need to install this component on your server, as it is published by Microsoft as a separate download. Please see http://blogs.msdn.com/b/sqlperf/archive/2011/05/25/the-balanced-data-distributor-for-ssis.aspx for more information.
<Cache /> | AstCacheTransformNode | The Cache Transform transformation writes data from a connected data source to a specified cache. This data can later be used by the Lookup transformation. Using a cache enables multiple lookups to share the same cache rather than separately loading duplicate data.
<CdcSource /> | AstCdcSourceNode | The CDC Source reads data from a CDC-enabled SQL Server table and outputs rows that reflect the changes that have taken place in the specified processing range.
<CdcSplitter /> | AstCdcSplitterNode | The CDC Splitter component partitions rows from a CDC Source component based on each row's status as an inserted, updated, or deleted row.
<CharacterMap /> | AstCharacterMapNode | The Character Map transformation converts character data by applying string transformation operations.
<ConditionalSplit /> | AstConditionalSplitNode | The Conditional Split transformation routes source data rows to different output paths by evaluating expressions.
<ContactVerify /> | AstContactVerifyNode | The Melissa Data Contact Verify custom component is used to detect suspicious values and to standardize, normalize, verify, correct, or supplement various properties of contact information, including names, addresses, geocoding, phone numbers, and email addresses. It uses a local or remote database that is regularly updated with the latest data from the U.S. and Canada. You will need to install this component on your server, as it is published by Melissa Data. Please see http://www.melissadata.com/data-quality-ssis for more information.
<CopyColumn /> | AstCopyColumnNode | The Copy Column transformation copies input columns and uses them to create new columns in the transformation output.
<CustomComponent /> | AstCustomComponentNode | The Custom Component transformation represents a custom component in SSIS.
<CustomComponentSource /> | AstCustomComponentSourceNode | Defines a custom source component.
<DataConversion /> | AstDataConversionNode | The Data Conversion transformation converts the value in a column from its present type to a specified type and copies it to a new column.
<DataMiningQuery /> | AstDataMiningQueryNode | The Data Mining Query transformation runs a Data Mining Extensions (DMX) query against a specified data mining model.
<DataReaderDestination /> | AstDataReaderDestinationNode | The DataReader destination loads dataflow rows into an ADO.NET data reader for consumption by other applications or components.
<DataStreamingDestination /> | AstDataStreamingDestinationNode | The Data Streaming destination exposes data flow rows for consumption by external applications and queries.
<DerivedColumns /> | AstDerivedColumnListNode | The Derived Column transformation applies expressions to input columns to generate new column values.
<DqsCleansing /> | AstDqsCleansingNode | The DQS Cleansing transformation replaces values of incoming data flow columns with corresponding values that have been corrected using rules specified in a Microsoft SQL Server Data Quality Services (DQS) instance.
<ExcelDestination /> | AstExcelDestinationNode | The Excel destination loads data into Microsoft Excel workbooks.
<ExcelSource /> | AstExcelSourceNode | The Excel source extracts data from Microsoft Excel workbooks.
<ExportColumn /> | AstExportColumnNode | The Export Column transformation exports column values from rows in a data set to files.
<FlatFileDestination /> | AstFlatFileDestinationNode | The Flat File destination writes data to a flat text file in a format specified by a FlatFileFormat definition.
<FlatFileSource /> | AstFlatFileSourceNode | The Flat File Source reads data from a flat text file in a format specified by a FlatFileFormat definition.
<FlexibleFileDestination /> | AstFlexibleFileDestinationNode | The Flexible File destination writes files to Azure storage.
<FlexibleFileSource /> | AstFlexibleFileSourceNode | The Flexible File source reads data from a file located in Azure Data Lake Storage Gen2 or Azure blob storage.
<FuzzyGrouping /> | AstFuzzyGroupingNode | The Fuzzy Grouping transformation groups data set rows that contain similar values.
<FuzzyLookup /> | AstFuzzyLookupNode | The Fuzzy Lookup transformation looks up values in a reference data set by using fuzzy matching. That is, matches can be close rather than exact.
<GlobalVerify /> | AstGlobalVerifyNode | The Melissa Data Global Verify custom component is used to verify, correct, and supplement various properties of contact information across more than 240 countries, ensuring that address records are correct and complete. It uses a local or remote database that is regularly updated with the latest international data. You will need to install this component on your server, as it is published by Melissa Data. Please see http://www.melissadata.com/data-quality-ssis for more information.
<HdfsFileDestination /> | AstHdfsFileDestinationNode | The HDFS destination writes data to a Hadoop cluster.
<HdfsFileSource /> | AstHdfsFileSourceNode | The HDFS Source reads data from a Hadoop cluster.
<ImportColumn /> | AstImportColumnNode | The Import Column transformation loads data from files into columns in a data flow.
<IsNullPatcher /> | AstIsNullPatcherNode | The Is Null Patcher rewrites the specified columns to a default value if they have a null value.
<Lookup /> | AstLookupNode | The Lookup transformation combines data in input columns with data in columns in a reference data set. It is the data flow equivalent of a SQL join.
<MatchUp /> | AstMatchUpNode | The Melissa Data Match Up custom component is used to identify duplicate contact records. Even when contact properties are not exact matches, the component uses fuzzy matching and a variety of scoring algorithms to identify matches. You will need to install this component on your server, as it is published by Melissa Data. Please see http://www.melissadata.com/data-quality-ssis for more information.
<Merge /> | AstMergeNode | The Merge transformation combines data from two sorted data sets into a single data set.
<MergeJoin /> | AstMergeJoinNode | The Merge Join transformation merges data from two data sets by using a join.
<Multicast /> | AstMulticastNode | The Multicast transformation creates multiple copies of source data and distributes them to multiple output paths.
<MultipleHash /> | AstMultipleHashNode | The Multiple Hash custom component is used to produce hash values for a selection of input columns using a choice of hashing algorithms. It is capable of producing multiple hashed output columns with a single instance of the component. You will need to install this component on your server, as it is published on CodePlex as a community project. Please see http://ssismhash.codeplex.com for more information.
<OdbcDestination /> | AstOdbcDestinationNode | The ODBC Destination loads data into an ODBC-compliant database that uses a database table, a view, or an SQL command.
<OdbcSource /> | AstOdbcSourceNode | The ODBC Source reads a data source using one of the ODBC adapters available on the host system.
<OleDbCommand /> | AstOleDbCommandNode | The OLE DB Command transformation executes SQL statements to access and transform external data.
<OleDbDestination /> | AstOleDbDestinationNode | The OLE DB destination loads data into an OLE DB-compliant database that uses a database table, a view, or an SQL command.
<OleDbSource /> | AstOleDbSourceNode | The OLE DB Source reads a data source using one of the OLE DB adapters available on the host system.
<OracleDestination /> | AstOracleDestinationNode | The Oracle Destination loads data into an Oracle database that uses a database table, a view, or an SQL command. This is done specifically using the Attunity Oracle Connector.
<OracleSource /> | AstOracleSourceNode | The Oracle Source reads an Oracle data source using the Attunity Oracle Connector.
<PercentageSampling /> | AstPercentageSamplingNode | The Percentage Sampling transformation generates a sample data set by randomly selecting a specified percentage of data flow rows from the input.
<Personator /> | AstPersonatorNode | The Melissa Data Personator custom component is used to verify, correct, and supplement various properties of contact information, ensuring that records are correct and complete. It uses a local or remote database that is regularly updated with the latest international data. You will need to install this component on your server, as it is published by Melissa Data. Please see http://www.melissadata.com/data-quality-ssis for more information.
<Pivot /> | AstPivotNode | The Pivot transformation creates a less normalized representation of a data set by taking multiple rows and converting them into a single row with multiple columns.
<RawFileDestination /> | AstRawFileDestinationNode | The Raw File destination loads dataflow rows into a raw data file using a file format that is native to this component. This destination and the accompanying source component are intended to improve performance by leveraging the native file format.
<RawFileSource /> | AstRawFileSourceNode | The Raw File Source reads dataflow rows from a raw data file using a file format that is native to this component. This source and the accompanying destination component are intended to improve performance by leveraging the native file format.
<RecordsetDestination /> | AstRecordSetDestinationNode | The Recordset Destination component creates an in-memory ADO recordset that is stored in a variable.
<RowCount /> | AstRowCountNode | The Row Count transformation counts rows as the data flows through it and stores the total row count in a variable after the last row is counted.
<RowSampling /> | AstRowSamplingNode | The Row Sampling transformation generates a sample data set by randomly selecting a specified number of data flow rows from the input.
<Scd /> | AstSlowlyChangingDimensionNode | The Slowly Changing Dimension transformation checks for dimension attribute changes in incoming data, determines how related records are updated, and specifies how the updated records are inserted into data warehouse dimension tables.
<ScriptComponentDestination /> | AstScriptComponentDestinationNode | The Script Component Destination type corresponds directly to a SQL Server Integration Services script component with an input buffer.
<ScriptComponentSource /> | AstScriptComponentSourceNode | The Script Component Source type corresponds directly to a SQL Server Integration Services script component with output buffers.
<ScriptComponentTransformation /> | AstScriptComponentTransformationNode | The Script Component Transformation type corresponds directly to a SQL Server Integration Services script component with both an input buffer and output buffers.
<SmartMover /> | AstSmartMoverNode | The Melissa Data Smart Mover custom component is used to identify contacts who have relocated and automatically update their contact information. It uses a local or remote database that is regularly updated with the latest U.S. data. You will need to install this component on your server, as it is published by Melissa Data. Please see http://www.melissadata.com/data-quality-ssis for more information.
<Sort /> | AstSortNode | The Sort transformation sorts input data in the specified order and then copies the sorted data to the transformation output.
<SqlServerPdwDestination /> | AstSqlServerPdwDestinationNode | The SQL Server PDW destination loads data into a Microsoft SQL Server Parallel Data Warehouse (PDW) appliance. In later versions, PDW is rebranded as the Microsoft Analytics Platform System.
<TeradataDestination /> | AstTeradataDestinationNode | The Teradata Destination loads data into a Teradata database that uses a database table, a view, or an SQL command. This is done specifically using the Attunity Teradata Connector.
<TeradataSource /> | AstTeradataSourceNode | The Teradata Source reads a Teradata data source using the Attunity Teradata Connector.
<TermExtraction /> | AstTermExtractionNode | The Term Extraction transformation extracts terms from input text columns and directs the terms to output text columns.
<TermLookup /> | AstTermLookupNode | The Term Lookup transformation extracts terms from input text, places the terms in an input column, and compares these terms to terms in a reference table.
<TheobaldXtractSapSource /> | AstTheobaldXtractSapSourceNode | The Theobald XTRACT Source connects to an SAP database to extract records from a specified table.
<UnionAll /> | AstUnionAllNode | The Union All transformation combines rows from multiple input paths into a single output path, using column mappings where necessary.
<Unpivot /> | AstUnpivotNode | The Unpivot transformation creates a more normalized representation of a data set by taking values from multiple columns in the same row and breaking them up into multiple rows with a label column and a column containing the original data value.
<XmlSource /> | AstXmlSourceNode | The XML source reads an XML data file, optionally validating it, and creates data flow output rows with the resulting data.
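As a sketch of how several of the child definitions above combine within one Transformations collection, the fragment below chains a Flat File source into a Derived Column and a Conditional Split, then routes one split output to an OLE DB destination. The connection names, column names, and the OrderTotal expression are hypothetical placeholders; verify the exact child elements of each component against the corresponding API type's reference page before relying on them.

```xml
<Transformations>
  <!-- Read rows from a flat file connection (format comes from its FlatFileFormat) -->
  <FlatFileSource Name="SRC Orders" ConnectionName="OrdersFile" />
  <!-- Add a LoadDate column computed by an SSIS expression -->
  <DerivedColumns Name="DER Add LoadDate">
    <Columns>
      <Column Name="LoadDate" DataType="DateTime">GETDATE()</Column>
    </Columns>
  </DerivedColumns>
  <!-- Route rows to named output paths based on an expression -->
  <ConditionalSplit Name="CSPL Route Orders">
    <OutputPaths>
      <OutputPath Name="HighValue">
        <Expression>OrderTotal &gt; 1000</Expression>
      </OutputPath>
    </OutputPaths>
  </ConditionalSplit>
  <!-- Attach explicitly to the HighValue output path of the split -->
  <OleDbDestination Name="DST High Value Orders" ConnectionName="TargetDb">
    <InputPath OutputPathName="CSPL Route Orders.HighValue" />
    <ExternalTableOutput Table="dbo.HighValueOrders" />
  </OleDbDestination>
</Transformations>
```

Note the InputPath element on the destination: once a transformation has multiple output paths, downstream components name the path they consume rather than relying on the default linear attachment.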

© Varigence