Specifying Data Target Options

After the data targets have been added to the flow, two types of options govern the behavior of the data flow when copying data into the targets: data flow properties, which are set once for the flow and apply to every target in it, and target-specific properties, which are set individually for each target.

Load Options


Load options specify the way DataMigrator loads data into the targets. If you are moving data to more than one target, they will all use the same load type. The load type is displayed on all target objects and, if changed in one target, the change will be reflected in all the target objects.

Load options vary depending on the type of target. They are described in the reference topics that follow.



Reference: DB2 Target Bulk Load Options

DB2 targets have the following options:

Load Options window

Loading method

INSERT. Insert the rows only. This is the default.

REPLACE. Drops the existing destination table, and creates a new table.




Reference: DB2 Target Insert Records from Memory Options

The Bulk Insert API (FASTLOAD) can be in effect for DB2 targets that use the Insert records from memory load option. The command that changes the FASTLOAD setting for DB2 is:

ENGINE DB2 SET FASTLOAD [ON|OFF]

When this option is in effect, the default value for FASTLOAD is ON.

The SET command for DB2 that controls whether or not the FASTLOAD operations are recoverable is:

SQL DB2 SET FASTLOAD_OPTION RECOVERABLE/NONRECOVERABLE

The default value is RECOVERABLE.
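
For instance, to run bulk inserts without the overhead of recoverability, the two settings shown above could be issued together before the flow runs, typically in the server profile or in a procedure executed ahead of the flow (placement depends on your configuration; this sketch simply combines the commands documented above):

ENGINE DB2 SET FASTLOAD ON
SQL DB2 SET FASTLOAD_OPTION NONRECOVERABLE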



Reference: Greenplum Extended Bulk Load Options

Greenplum targets have the following options:

*Column Delimiter

Delimiter characters between fields in the intermediate flat file that is loaded into the database. The delimiter can be up to four characters and can be specified as the following:

  • TAB - A tab character. This value is the default.
  • a - A character string. For example, ~.
  • 0xnn - A hex code. For example, 0x2C (a comma), or 0x0D0A (a carriage return and a linefeed). The hex code uses ASCII for Windows or UNIX systems and EBCDIC for IBM mainframes.
Maximum Number of Load Sessions

The maximum number of simultaneous Load Sessions logged on to the database. This number depends on the features of the database that is used for bulk load processes.

Maximum Number of Load Session Restarts

The maximum number of restart attempts for each Load Session that ends with an error.

*Escape Character

Specifies the single character used to introduce escape sequences and to escape data characters that might otherwise be interpreted as row or column delimiters.

New Line Character(s) Replacement

Enable this option if required by the DBMS Load Utility.

  • OFF. No replacement required. This is the default value.
  • SPACE. Replace New Line Character(s) with the SPACE character.
  • a. An ASCII character, for example, ~. Replaces New Line Character(s) with the specified character.
  • 0xnn. A hex code, for example, 0x2C (a comma). Replaces New Line Character(s) with the specified hex code.

    The hex code uses ASCII for Windows or UNIX systems, and EBCDIC for IBM mainframes.

*Column Enclosure

Characters used to enclose each alphanumeric value in the intermediate flat file that is loaded into the database. The enclosure consists of up to four printable characters. The most common enclosure is one double quotation mark. Numeric digits, symbols used in numbers, such as a period (.), plus sign (+), or minus sign (-), and 0x09 (Horizontal Tab), 0x0A (Line Feed), or 0x0D (Carriage Return) cannot be used in the enclosure sequence. To specify a single quotation mark as the enclosure character, you must enter four consecutive single quotation marks.

Note: When a non-default value is used, the intermediate file is created in what Greenplum terms CSV format, instead of TEXT format.
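
In Greenplum terms, the delimiter, escape character, and enclosure settings above play the same role as the DELIMITER, ESCAPE, QUOTE, and CSV options of a COPY command or external table definition. Purely as an illustration of those database options, not the exact statement DataMigrator generates, and with placeholder table and file names:

COPY inventory FROM '/tmp/inventory.csv'
     WITH DELIMITER ',' CSV QUOTE '"';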

Global Settings

These options are only available when BULKLOAD=ON and apply to all new flows using Greenplum as the target.

Note: ALIAS_CASE controls the case of column names when new tables are created. Greenplum users may want to select Enforce lower case.



Reference: Hyperstage Extended Bulk Load Options

Hyperstage targets have the following options:

Bulk load utility via disk file has no additional options.

Maximum number of load sessions

Maximum number of load sessions logged on to the Hyperstage database.



Reference: Informix Target Bulk Load Options

Informix targets have the following options:

Load options window

Lock tables while load

Table locking prevents other users from changing data in a table that is being loaded. Locking is necessary for maintaining data integrity while concurrent users access database information.

Recognize HEX binary data in char

Recognizes hexadecimal data in character fields.

Check syntax only

Checks the syntax, but not the data being loaded.

First rows to ignore in input file

Number of rows to ignore before starting to load data.

Commit every row(s)

The row interval to commit or write transactions to the database.

Maximum number of rows with errors

The maximum number of errors allowed before loading stops. The default is the target RDBMS default.



Reference: MS SQL Server Extended Bulk Load Options

The Extended Bulk Load Utility for MS SQL Server has the following options:

*Commit every row(s)

The row interval to commit or write transactions to the database. This value should be a multiple of the Block size.

Maximum number of errors

The maximum number of errors allowed before loading stops.

Packet size to send

The packet size in bytes.

Maximum Number of Load Sessions

The maximum number of load sessions logged on to the database.

Maximum Number of Load Session Restarts

The maximum number of restart attempts after a recoverable load session error.



Reference: MySQL Bulk Load Options

There are no additional bulk load options for MySQL.



Reference: MySQL Extended Bulk Load Options

The Extended Bulk Load Utility for MySQL has the following options:

*Commit every row(s)

The row interval to commit or write transactions to the database. This value should be a multiple of the Block size.

Maximum Number of Load Sessions

The maximum number of load sessions logged on to the database.

Maximum Number of Load Session Restarts

The maximum number of restart attempts after a recoverable load session error.



Reference: Teradata Extended Bulk Load Utility

The Teradata Extended Bulk Load Utility uses TPT (Teradata Parallel Transporter) to load data.

The Teradata Extended Bulk Load Utility has the following options:

Teradata Extended Bulk Load Utility

Commit every row(s)

The row interval to commit or write transactions to the database.

Maximum number of sessions

The maximum number of TPT sessions logged on to the Teradata database.

Rejected records maximum number

The maximum number of rejected records that can be stored in one of the error tables before the job is terminated. If unspecified, the default is unlimited.

TDP Id

The Teradata Director Program Id.

Account Id

The Account Id used to access the database.

Data Error Table

Name of table to contain records that were rejected because of data conversion errors, constraint violations, or AMP configuration changes. This must be a new table name. The default is the database default.

Index Violation Table

Name of table to contain records that violated the unique primary index constraint. This must be a new table name. The default is the database default.



Reference: Nucleus Target Bulk Load Options

Nucleus targets have the following options:

Load options window

Version of ndl
Single-user

Starts the server in exclusive mode. No other database sessions may be established.

Multi-user

The server must already be started in multi-user mode and listening on the default port 1285.

Overwrite the error log file
Yes

Creates a new file.

No

Appends to existing log.ndl file.

Commit loaded rows if error occurs
Yes

Issues a commit.

No

Issues a rollback.

First input file bytes to ignore

Number of bytes to skip before loading data.

Null indicator

Specifies the character to represent null.

First input file rows to skip

Number of rows to skip before loading data.

Number of input file rows to process

Number of rows to process before stopping.

Disconnect all other connections

Disconnects connections.

Skip rows w/ unprintable characters

Skips rows with unprintable characters.

Server password

NDL server password. Required in Single-user (ndls) mode if a server password exists.



Reference: Oracle Target Bulk Load Options

Oracle targets have the following options:

Load Options window

Load Path

Has the following options:

Conventional

A conventional path load fills a bind array with rows and passes it to the database using ordinary SQL INSERT statements.

Direct

A direct path load builds blocks of data in memory and saves these blocks directly into the extents allocated for the table being loaded. A direct path load uses the field specifications to build whole Oracle blocks of data, and write the blocks directly to Oracle data files. Direct path load is much faster than conventional load, but entails some restrictions.

Direct Parallel

Uses ORACLE parallel processing for faster loads. This option is only available for target tables with no keys.

Bind size

Size (in bytes) of the bind array. When the array is full, the array is loaded into the target.

Read size

Size (in bytes) of the read buffer.

Unrecoverable

If yes, the transaction will not generate redo entries.

Loading method

APPEND. If data already exists in the table, new rows are appended to it. If data does not already exist, the new rows are simply loaded. This is the default.

REPLACE. Drops the existing destination table, and creates a new table.

INSERT. Insert the rows only.

Number of logical records to skip

Specifies the number of logical records from the beginning of the file that should not be loaded. By default, no records are skipped.

Number of logical records to load

Specifies the maximum number of logical records to load. By default all records are loaded. No error occurs if fewer than the maximum number of records are found.

Number of errors to allow

Specifies the maximum number of insert errors to allow. If the number of errors exceeds this value, the load is terminated. The default is 50.

Commit every row(s)

The row interval to commit or write transactions to the database.

Version of sqlldr

Version of the SQL*Loader utility.
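
Because these values are passed to the SQL*Loader (sqlldr) utility, they correspond to keywords in the control file that DataMigrator generates. The following is only a rough sketch of that correspondence; the table, data file, and column names (dmtarget, dmtarget.ftm, prod_num, and so on) are placeholders, not names produced by the product:

-- OPTIONS: skip and error limits, commit interval, bind and read buffer sizes, load path
OPTIONS (SKIP=0, ERRORS=50, ROWS=5000, BINDSIZE=262144, READSIZE=1048576, DIRECT=TRUE)
UNRECOVERABLE
LOAD DATA
INFILE 'dmtarget.ftm'
APPEND
INTO TABLE dmtarget
FIELDS TERMINATED BY ','
(prod_num, prod_name, price)

For a Direct Parallel load, PARALLEL=TRUE would also appear in the OPTIONS clause.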



Reference: Red Brick Target Bulk Load Options

Red Brick targets have the following options:

Load Options window

Start record number in input field

Row number to begin copying data to the target.

End record number in input field

Row number to end copying data to the target.

Locale

The combination of language and location.

Maximum number of discarded records

The maximum number of discarded records allowed before loading stops. The default is the target RDBMS default.

Discard file name

File where duplicate records are discarded. The records will be stored for possible reloading.

RI discard file name

File where discarded records based on referential integrity will be stored for possible reloading.

Loading mode

INSERT. Insert the rows only. If the table is not empty, the load operation ends. This is the default.

APPEND. Used to insert additional rows of data into an existing table. Each new row must have a primary-key value that does not already exist in the table. Otherwise, the record is discarded.

REPLACE. Replaces the entire contents of a table.

MODIFY. Used to insert additional rows or to update existing rows in a table. If the input row has the same primary-key value as an existing row, the new row replaces the existing row. Otherwise, it is added as a new row.

UPDATE. Updates existing rows in an existing table. Each new row must have a primary-key value that is already present in the table. Otherwise, the record is discarded.

MODIFY AGGREGATE. If the primary key of the input row matches an existing row in the table, the existing row is updated as defined for the specified aggregate operator. If the primary key of the input row does not match an existing row in the table, the row is inserted.

UPDATE AGGREGATE. If the primary key of the input row does not match the primary key of a row already in the table, the input row is discarded. If it does match an existing row, the existing row is updated as defined for the specified aggregate operator.

Optimize

OFF. Indexes are updated when each input row is inserted into the data file, which provides better performance when the data being loaded contains many duplicate rows. This is the default.

ON. Overrides the global optimize mode setting in the rbw.config file.



Reference: SQL Server Target Bulk Load Options

SQL Server targets have the following options:

Load Options window

Format file version

Version of the file used to store the format information for each field.

Maximum number of errors

The maximum number of errors allowed before loading stops. The default is the target RDBMS default.

First row to copy

Row number to begin copying data to the target.

Last row to copy

Row number to end copying data to the target.

Packet size to send

The packet size in bytes.

Commit every row(s)

The row interval to commit or write transactions to the database.
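
These options correspond closely to switches of the Microsoft bulk copy program (bcp). Assuming bcp, or an equivalent bulk interface, is the underlying loader (an assumption; the actual command is generated by DataMigrator), a comparable command might look like the following sketch, with placeholder server, database, table, and file names:

bcp mydb.dbo.mytarget in load.dat -c -S myserver -T -b 5000 -m 10 -F 2 -L 100000 -a 4096

Here -b is the commit interval, -m the maximum number of errors, -F and -L the first and last rows to copy, and -a the network packet size.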



Reference: Sybase Target Bulk Load Options

The Bulk Load Utility via a Disk File for both Sybase ASE and Sybase IQ has the following options:

Sybase Target Bulk Load Options

Column Delimiter

Is the delimiter character(s) used between fields in the intermediate flat file that is used to load the database.

The delimiter can be up to four characters and can be specified as:

TAB. A tab character. This is the default.

a. A character string, for example ~.

0xnn. A hex code, for example, 0x2C (a comma) or 0x0D0A (a carriage return and a linefeed).

Row Delimiter

Is the delimiter character used between records in the intermediate flat file that is used to load the database. The row delimiter can be specified in the same manner as the field delimiter, except that the comma (,) character is not permitted.

First row to copy from input file

Row number to begin copying data to the target.

Last row to copy from input file

Row number to end copying data to the target.

Activate Server Log

This parameter is available for Sybase IQ only.

Activates logging of information about integrity constraint violations and the types of violations. With this option, two types of LOG files can be written to a temporary space in the configuration directory: the MESSAGE LOG file etlblk.msg and the ROW LOG file etlblk.log.

For Sybase IQ:

  • Yes. Writes both MESSAGE LOG and ROW LOG files: etlblk.msg and etlblk.log.
  • No. The LOG files are not written. This is the default value.

Note: To use the Activate Server Log parameter, a Sybase logon ID should be used to connect to the DataMigrator server. This requirement applies to the Bulk Load Utility via a Disk File option only. For the Extended Bulk Load Utility, there is no need to use a Sybase logon ID to connect to the DataMigrator server.

For a Sybase Adaptive Server IQ target, the load will fail if the database returns more errors than the limit set in the flow.
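
For a Sybase ASE target, the disk-file loader is typically driven through the Sybase bcp utility (an assumption here; the actual command is generated by DataMigrator). As a rough sketch of how the options above surface as bcp switches, with placeholder names, file, and credentials:

bcp mydb..mytarget in load.dat -c -t '~' -r '\n' -F 2 -L 100000 -U myuser -P mypassword -S MYSYBASE

Here -t and -r are the column and row delimiters, and -F and -L are the first and last rows to copy from the input file.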



Reference: Sybase Extended Bulk Load Utility

The Extended Bulk Load Utility for both Sybase ASE and Sybase IQ has the following options:

Sybase properties

Commit every row(s)

The row interval to commit or write transactions to the database.

Column Delimiter

Is the delimiter character(s) used between fields in the intermediate flat file that is used to load the database.

The delimiter can be up to four characters and can be specified as:

TAB. A tab character. This is the default value.

a. A character string, for example ~.

0xnn. A hex code, for example, 0x2C (a comma) or 0x0D0A (a carriage return and a linefeed).

Row Delimiter

Is the delimiter character used between records in the intermediate flat file that is used to load the database. The row delimiter can be specified in the same manner as the field delimiter, except that the comma (,) character is not permitted.

Activate Server Log

This parameter is available for Sybase IQ only.

Activates logging of information about integrity constraint violations and the types of violations. With this option, two types of LOG files can be written to a temporary space in the configuration directory: the MESSAGE LOG file etlblk.msg and the ROW LOG file etlblk.log.

For Sybase IQ:

  • Yes. Writes both MESSAGE LOG and ROW LOG files: etlblk.msg and etlblk.log.
  • No. The LOG files are not written. This is the default value.
Maximum number of Load Sessions

The maximum number of Load sessions logged on to the database.

Maximum Number of Load Session Restarts

The maximum number of restart attempts after a recoverable load session error.



Reference: Teradata Target Bulk Load Options

Teradata targets have the following options:

Bulk Teradata Options

Note: Teradata has two bulk load programs that DataMigrator supports. FastLoad is used for new target tables, and MultiLoad is used for existing target tables. Not all options shown in the image above are available for new target tables.

UNIX only

Two environment variables, $FASTLOAD_EXE and $MLOAD_EXE, should be set to use bulk load.

$FASTLOAD_EXE. Specifies the location of the Teradata FastLoad utility.

$MLOAD_EXE. Specifies the location of the Teradata MultiLoad utility.

Two lines similar to the following should be added to the profile or to the server startup shell script (edastart):

export FASTLOAD_EXE=~teradata/fastload
export MLOAD_EXE=~teradata/mload
Rejected records maximum number

The maximum number of rejected records allowed before loading stops. The default is the target RDBMS default.

TDP Id

The Teradata Director Program Id.

Account Id

The account Id used to access the database.

Maximum number of sessions

The maximum number of MultiLoad or FastLoad sessions logged on to the Teradata database.

Start record number in source

Row number to begin copying data to the target.

End record number in source

Row number to end copying data to the target.

Work table

Name of the work table.

Acquisition phase errors table

This table provides information about all errors that occur during the acquisition phase of your Update operator job, as well as some errors that occur during the application phase if the Teradata RDBMS cannot build a valid primary index.

Application phase errors table

This table provides information about uniqueness violations, field overflow on columns other than primary index fields, and constraint errors.

Loading Method

INSERT. Inserts the rows only. This is the default.

UPSERT. Updates existing rows and inserts rows that do not already exist in the table.

Note: This option is available for existing target tables with keys.

AMP Check

The MultiLoad response to a down Access Module Processor (AMP) condition.

ALL. Pauses the MultiLoad job when an AMP is down.

NONE. Allows the MultiLoad job to start, restart, or continue as long as no more than one AMP is down in a cluster.

APPLY. Inhibits the MultiLoad job from entering or exiting the application phase when an AMP is down.

Note: This option is available for existing targets only.

Log Table

The restart log table for the MultiLoad checkpoint information. This option is available for existing target tables only.

Notes

Enter an annotation for this target.



Procedure: How to Set Options for Relational Targets
  1. In the data flow workspace, right-click the data target, and select Properties.

    The Target Properties pane opens.

  2. For existing targets, select whether to remove data prior to loading the data target in the Prior to Load Option drop-down menu. The options are:
    • No Changes. Does not delete the records already in a data target.
    • Delete all rows from table. Deletes all rows and creates a database log.
    • Truncate table. Deletes all rows from the table but does not generate a database log. This is a faster option than Delete all rows from table.

      Note: Truncate table is not supported by DB2 on IBM i.

  3. Select a Load Type. The options are:
    • Insert/Update. Allows you to set a behavior when loading records.
    • Insert records from memory. Speeds the loading of the data target by inserting a block of rows at once. You can set the row interval to commit or write transactions and the number of records to load in a block. This option is only available for relational databases that support it, including DB2 on i V6R1 for CLI, DB2 on z, Informix, MS SQL Server, MySQL, ORACLE, Sybase ASE, Teradata 13, and UDB.

      This option:

      • Requires clean input data. If any row in the block causes a data source constraint violation, such as not null or unique index, the entire block is rejected.
      • Does not provide row counts (the number of records inserted or rejected) in the detail log or statistics. NA (not available) will appear instead.
      • Does not provide record logging. For example, rejected records cannot be written to a flat file for review.
    • Bulk load utility via a disk file and Extended Bulk Load Utility. Use database bulk loaders instead of iWay to insert data into a target. DataMigrator automates bulk loading for Hyperstage, Ingres, Informix, Microsoft SQL Server, IBM DB2, Teradata, Nucleus, Oracle, Sybase Adaptive Server Enterprise, and Sybase Adaptive Server IQ. You can set the format version, maximum number of errors, first and last row to copy, packet size, and row interval to commit or write transactions.
    • Slowly Changing Dimensions. Enables you to load a data target with column values that change over time. There are two column types. Type I simply overwrites the value. Type II allows you to track activity before and after the change. Type II changes are handled using either an activation flag or an end date/begin date. When you update a Type II column, you do not actually change its value. Instead, you update the activation flag to inactive, or the end date to the current date by default. You then add a new record with the new value and the activation flag on by default, or the begin date set.
    • IUD Processing. Enables you to load a data target with only the records that have changed. This feature is an optional, add-on component to DataMigrator.

      Note: The IUD Processing option is only available for existing relational targets.

    • IUD/Slowly Changing Dimension. Is a combination of the previous two load types. The Insert/Update/Delete indications in the CDC logs are not applied directly to the target. Instead, the SCD Types of the target are used.


Reference: Target Properties Pane for Relational Targets

To access the Target Properties pane, right-click a target object and select Properties.

Relational target

This shows the Insert/Update Load option.

The Insert records from memory Load option looks like this:

Load Options window

The Bulk load utility via a disk file Load option looks like this for SQL Server. Other relational targets will have different options.

Load Options window

The Slowly Changing Dimensions Load option looks like this:

SCD load type

The IUD Processing Load option looks like this:

Load Options window

The Target Options of the Target Properties pane for relational targets contains the following target options:

Notes

Enter an annotation for this target.

Type

Is the target type.

Adapter

Is the target adapter.

Connection

Is the connection for the data target. For a relational data target, this is a database server. For ODBC, this is a data source.

Synonym

Is the synonym name.

Table

Is the name of data target table.

Keys (only available for new targets)

Are the names of the key columns, in order.

Prior to Load Options: (only available for existing targets)

No changes

Does not delete the records already in a data target.

Delete all rows from table

Deletes all rows and creates a database log.

Truncate table

Deletes all rows from the table, but does not generate a database log. This is a faster option than using Delete all rows from table.

The Target Load Options include the following load options:

Load Type:

Note: Although Load Type appears in every data target, it is set on a per flow basis. Changing the type in one target will reset it for all targets.

Insert/Update

Specifies the behavior of DataMigrator while loading data.

Insert/Update options can be set on a per target basis.

If you select this option, you can set a behavior when duplicate records are found from the If the record exists drop-down menu.

  • Include the record. Allows the relational database target to handle duplicates.

    Note: If you select Include the record, the record is passed directly to the relational database, which determines whether to accept it or not. If inserting a record would result in a duplicate key, the RDBMS will reject it due to a Unique Index constraint violation and return an error. Processing continues even if such errors occur, up to the number of errors specified under Stop processing after __ DBMS errors in the General properties of the flow.

  • Reject the record. Issues a SELECT command against the target table to see if a record exists with the key values. If there are no key columns, DataMigrator screens all columns. If the record is found, the record is rejected.
  • Update the existing record. Updates the record if the key value on the input record (or the entire record if there are no keys) is found in the table. All non-key values are updated in the target.
  • Delete the existing record. Deletes the record if the key on the input record (or entire record if no keys) is found in the table.

    If you select Update the existing record or Delete the existing record from the If the record exists drop-down menu, you can also set behavior when the record does not exist using the If the record does not exist drop-down menu:

  • Include the record. Includes the key in the data target.
  • Reject the record. Does not include the key in the data target.

    Note: The Reject, Update, and Delete options can adversely affect performance because they determine the existence of a key value on an incoming record before performing the specified action. This is done by issuing a SELECT command against the target table, then waiting for the response (conceptually similar to the statements sketched after this list). If one of these actions is required, try to limit the volume of records in the incremental change. These actions will perform best if there are unique keys on the table.

  • Commit every row(s). Specifies the row interval to commit or write transactions to the database.
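
To make the existence check concrete, the statements issued for the Reject, Update, and Delete behaviors are conceptually similar to the following sketch. The table and column names are hypothetical, and the SQL that DataMigrator actually generates may differ:

SELECT COUNT(*) FROM target_tbl WHERE order_id = ?;                 -- does a record with this key exist?
UPDATE target_tbl SET status = ?, amount = ? WHERE order_id = ?;    -- Update the existing record
DELETE FROM target_tbl WHERE order_id = ?;                          -- Delete the existing record
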
Insert records from memory

When you specify this load type, you need to specify:

  • Commit every row(s). Specifies the row interval to commit or write transactions to the database.
  • Block size. Specifies how many records you want to process at a time.

Note: The value for Commit every row(s) should be a multiple of the Block size, for example, 10000 and 2000.

Bulk load utility via a disk file

These options will depend on your target type.

Extended bulk load utility

This is available for databases such as Greenplum, Hyperstage, MS SQL Server, MySQL, Sybase, and Teradata. The options depend on the target type.

Slowly Changing Dimensions

This option is only available for existing relational targets.

Enables you to load a data target with column values that change over time. There are two column types. Type I simply overwrites the value. Type II allows you to track activity before and after the change. Type II changes are handled using either an activation flag or an end date/begin date. When you update a Type II column, you do not actually change its value. Instead, you update the activation flag to inactive, or the end date to the current date. You then add a new record with the new value, and the activation flag on by default or the begin date set, as sketched after the options below.

  • Updates for Type I. Enables you to specify how updates will affect Type I fields. The options are Change all rows or Change only active rows. Change all rows is the default value.
  • Commit every row(s). Specifies the row interval to commit or write transactions to the database.
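
For example, an end-date style Type II change is conceptually equivalent to the following pair of statements. This is only an illustrative sketch; the dimension table and column names are hypothetical, and the actual processing is generated by DataMigrator:

UPDATE dim_customer SET end_date = CURRENT_DATE
 WHERE cust_id = ? AND end_date IS NULL;                            -- close out the currently active row

INSERT INTO dim_customer (cust_id, city, begin_date, end_date)
VALUES (?, ?, CURRENT_DATE, NULL);                                  -- add the new version of the row
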
IUD Processing

Enables you to load a data target with only the records that have changed. This feature is an optional, add-on component to DataMigrator. When this option is selected, the Prior to Load option is eliminated.

Commit every row(s) specifies the row interval to commit or write transactions to the database.

IUD/Slowly Changing Dimensions

This load type is a combination of the previous two load types. The Insert/Update/Delete indications in the CDC logs are not applied directly to the target. Instead, the SCD Types of the target are used and deleted records in the source are marked as inactive in the target.



Procedure: How to Set Options for FOCUS/FDS or XFOCUS Targets
  1. In the data flow workspace, right-click the data target and select Properties.

    The Target Properties pane opens.

  2. For existing targets, select whether to remove data prior to loading the data target in the Prior to Load Options section. The options are:
    • No changes. Does not delete the records already in a data target.
    • Drop Table. Drops and recreates the data target.
  3. Select a behavior for loading records using the If the record exists and If the record does not exist drop-down menus. These options can be set on a per target basis. They are:
    • Include the record. Includes the duplicate record in the data target.
    • Reject the record. Issues a SELECT command against the target table to see if a record exists with the key values. If there are no key columns, DataMigrator screens all columns. If the record is found, the record is rejected.
    • Update the existing record. Updates the record if the key value on the input record (or the entire record if there are no keys) is found in the table. All non-key values are updated in the target.
    • Delete the existing record. Deletes the record if the key on the input record (or entire record if no keys) is found in the table.
  4. If you select Update the existing record or Delete the existing record from the If the record exists drop-down menu, you can also set behavior when the record does not exist using the If the record does not exist drop-down menu:

    Include the record includes the key in the data target. Reject the record does not include the key in the data target.

  5. Click OK.

    Note:

    • For FOCUS/FDS or XFOCUS targets, the only Load Type available is Insert/Update.
    • Synonym names for FOCUS/FDS files cannot be longer than eight characters.



Reference: Target Properties Pane for FOCUS/FDS or XFOCUS Targets

To access the Target Properties pane, right-click a target object and select Properties.

Focus target

The Target Load Options of the Target Properties pane for FOCUS/FDS or XFOCUS targets contains the following fields and options:

Notes

Enter an annotation for this target.

Type

Is the target type.

Adapter

Is the target adapter.

Synonym

Is the synonym name.

Note: Synonym names cannot be longer than eight characters.

Data File

Is the name of the data file pointed to by the synonym.

On IBM z/OS, to create a file in HFS (hierarchical file system), enter the name. To create a dataset, enter the name as

//'qualif.tablename.FOCUS'

where:

qualif

Is a fully qualified location.

tablename

Should match the synonym name.

Keys (only available for new targets)

Is the number of key columns.

Prior to Load Options: (only available for existing targets)

No changes

Does not drop (delete) the data target.

Drop Table

Drops and recreates the data target.

Load Type

Specifies the method DataMigrator uses to load data.

Insert/Update

Since DataMigrator uses Insert/Update to load FOCUS/FDS or XFOCUS targets, you can set a behavior when duplicate records are found from the If the record exists drop-down menu. These options can be set on a per target basis.

  • Include the record. Includes the duplicate record in the data target.
  • Reject the record. Issues a SELECT command against the target table to see if a record exists with the key values. If there are no key columns, DataMigrator screens all columns. If the record is found, the record is rejected.
  • Update the existing record. Updates the record if the key value on the input record (or the entire record if there are no keys) is found in the table. All non-key values are updated in the target.
  • Delete the existing record. Deletes the record if the key on the input record (or entire record if no keys) is found in the table.

    If you select Update the existing record or Delete the existing record from the If the record exists drop-down menu, you can also set behavior when the record does not exist using the If the record does not exist drop-down menu:

    • Include the record. Includes the key in the data target.
    • Reject the record. Does not include the key in the data target.
    • Commit every row(s). Specifies the row interval to commit or write transactions to the database.


Procedure: How to Set Options for Flat, Delimited Flat, Formatted, and XML File Targets
  1. In the data flow workspace, right-click the data target and select Properties.

    The Target Properties pane opens.

  2. For existing targets, select whether to remove data prior to loading the data target in the Prior to Load Options section. The options are:

    No changes does not delete the data target. New records are appended.

    Delete File drops and recreates the data target.

  3. Click OK.

    Note:

    • For flat files and XML files, the Load Type cannot be changed from Insert/Update, and the data is always appended.
    • For formatted file type EXL2K, the only Load Type available is Loading Flat File using HOLD.



Reference: Target Properties for New XML Targets

DataMigrator can create a new XML document with name/value pairs. For more information on any other structures, see Creating a Data Flow Using a Target Based on a Predefined XML Schema.

Note: In order to create an XML document, a Server Administrator must first configure an adapter for XML.

To access the Target Properties window, right-click a target object and select Properties.

The Target Options of the Target Properties pane for XML Targets contains the following fields and options:

Notes

Enter an annotation for this target.

Type

Is the target type.

Adapter

Is the target adapter.

Synonym

Is the directory where the synonym is located.

Data File

Is the name of the XML document described by the synonym. DataMigrator will also create an XML Schema Definition with the same name as the data file and an extension of .xsd.

Load Type

Is set to Insert/Update for XML documents. Note that no updates are currently performed.

Records

Is the name of the top level element.

Record

Is the name of the row level element.

Column

Is the name of the column level element.



Example: New XML Targets

The following example uses the same table ibisamp/dminv as a source with all the actual columns, and then creates a new XML target called dminvx. The element names are entered as shown:

Records - Inventory

Record - Item

Column - Detail

The first three rows of the resulting XML document would look like the following example when viewed in a browser:

XML Target Example
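
One plausible rendering of such a document, using hypothetical column names and values and assuming a simple name/value layout (the actual element structure is determined by DataMigrator and the dminv synonym, so treat this only as an illustration of how the Records, Record, and Column element names are used):

<Inventory>
  <Item>
    <Detail name="PROD_NUM">1001</Detail>
    <Detail name="QTY_IN_STOCK">100</Detail>
  </Item>
</Inventory>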



Reference: Target Properties Window for Delimited Flat File Targets

To access the Target Properties window, right-click a target object and select Properties.

The Target Options of the Target Properties pane for Delimited Flat File Targets contains the following fields and options:

Notes

Enter an annotation for this target.

Type

Is the target type.

Adapter

Is the target adapter.

Note: Delimited Flat File will only appear as a target type if it is configured on the server.

Synonym

Is the directory where the synonym is located.

Data File

Is the name of the data file pointed to by the synonym.

On IBM z/OS, to create a file in HFS (hierarchical file system), enter the name. To create a dataset, enter the name as

//'qualif.tablename.DATA'

where:

qualif

Is a fully qualified location.

tablename

Should match the synonym name.

Code Page

Specifies a code page for the target data.

Field Delimiter

Is the delimiter character(s) used between fields. The delimiter can be up to 30 characters. It can be specified as:

TAB. A tab character. This is the default.

a. A character string, for example ~.

0xnn. A hex code, for example, 0x2C (a comma), or 0x0D0A (a carriage return and a linefeed). The hex code uses ASCII for Windows or UNIX systems and EBCDIC for IBM mainframes.

Header

Inserts column headings as the first row of data and surrounds the column names with the character specified in the Enclosure field.

Enclosure

This character is used to surround the column headings when Header is set to Yes.



Reference: Target Properties Pane for Flat File Targets

To access the Target Properties pane, right-click a target object and select Properties.

The Target Options of the Target Properties pane for Flat File Targets contains the following fields and options:

Notes

Enter an annotation for this target.

Type

Is the target type.

Adapter

Is the target adapter.

Synonym

Is the synonym name.

Data File

Is the name of the data file pointed to by the synonym.

On IBM z/OS, to create a file in HFS (hierarchical file system), enter the name. To create a dataset, enter the name as

//'qualif.tablename.DATA'

where:

qualif

Is a fully qualified location.

tablename

Should match the synonym name.

Code Page

Specifies a code page for the target data.

Prior to Load Options: (only available for existing targets)

No changes

Does not delete the data target.

Delete File

Deletes and recreates the data target.



Reference: Target Properties Window for Formatted File Targets

To access the Target Properties window, right-click a target object and select Properties.

The Target Options of the Target Properties pane for Formatted File Targets contains the following fields and options:

Notes

Enter an annotation for this target.

Type

Is the target type.

Adapter

Is the target adapter.

Format

Is the format of the target data.

Synonym

Is the directory where the synonym is located.

Data File

Is the name of the data file pointed to by the synonym.



Procedure: How to Set the Data Flow Record Logging Options

Setting the data flow record logging options allows you to write particular types of transactions to log files.

Although Record Logging options appear in every data target, they are set on a per flow basis. Changing the options in one target will reset them for all targets.

  1. On the Flow tab, in the Tools group, click Properties, or right-click anywhere in the workspace and select Properties.

    The flow properties pane opens.

  2. Expand the Record Logging attribute.
  3. Select the check box next to the options that you want to log.

    Record logging is optional and is not available when the data target is a formatted file.

  4. Click Save in the Quick Access Toolbar.

    For more information on record logging options for a data flow, see Flow Properties Pane - Record Logging Attribute.

