This document gives the step-by-step procedure to develop a DataStage job. The job reads data from a Complex Flat File (CFF) and loads it into a table.
- Step 1 – Import COBOL File Layout
- Step 2 – Import Table Definitions
- Step 3 – Create DS Job
- Step 4 – Job Compilation
- Step 5 – Job Execution
Step 1 – Import COBOL File Layout
- Open the WebSphere DataStage Designer window.
- Select Import → Table Definitions → COBOL File Definitions (as shown in the figure below).
- Provide the path of the COBOL layout file in the ‘COBOL file pathname’ field as shown below, and click Import.
- After clicking Import, the ‘Tables’ and ‘To source’ fields are updated automatically. The table definition is saved in the folder specified in the ‘To folder’ field.
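For reference, the file being imported here is an ordinary COBOL record layout (copybook). A minimal illustrative example, with hypothetical field names not tied to any particular system, might look like this:

```
       01  CUSTOMER-REC.
           05  CUST-ID        PIC 9(8).
           05  CUST-NAME      PIC X(30).
           05  CUST-BALANCE   PIC S9(7)V99 COMP-3.
```

The importer turns each elementary item into a column of the table definition; usages such as COMP-3 (packed decimal) are recorded so that the CFF stage can decode the file correctly.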
Step 2 – Import Table Definitions
- Open the WebSphere DataStage Designer window.
- Select Import → Table Definitions → Start Connector Import Wizard.
- In the Connector metadata import window, select the desired connector from the ‘Connector selection’ list and click Next. In the figure below, the ODBC connector is selected.
- On the ‘Connection details’ page, provide the appropriate data source and credentials, then click Test.
- On the Filter page, select the appropriate schema name, provide the table name in the ‘Name filter’ text box, and click ‘Next’.
- On the Selection page, select the appropriate table name and click ‘Next’.
- After reviewing the table information, click ‘Next’.
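For context, the table definition imported here describes an existing database table. Continuing the illustrative example from Step 1, a matching DB2 target table could be defined as follows (hypothetical schema, table, and column names):

```
CREATE TABLE DEVSCH.CUSTOMER (
    CUST_ID      INTEGER       NOT NULL,
    CUST_NAME    CHAR(30),
    CUST_BALANCE DECIMAL(9,2)
);
```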
Step 3 – Create DS Job
- Open DataStage Designer and select File → New → Parallel Job, then click ‘OK’.
- Drag and drop the required stages (CFF stage, Transformer stage, and DB2 Connector stage) from the palette.
- Right-click on one stage and drag to the next stage to create a link.
- Rename the stages and links with meaningful names.
Please note that a file-to-table load job requires a Transformer stage only when column mappings or transformations are needed between the source file and the destination table; for a direct one-to-one load, the Transformer stage can be omitted.
- Set the required parameters for the job (database name, username, password, table name, and schema name).
- After setting the parameters, open the CFF_table_input stage properties. On the File options tab, provide the file name of the Complex Flat File.
On the File options tab, you specify the type of file from which the stage reads or to which it writes.
When the stage is used as a source, you can select one of the following options:
- File(s) – A single file or multiple files. This option is the default.
- File Pattern – When you have a group of files. This option is available only for source stages.
- Source – One or more programs and arguments that provide source data to the import operator.
- Source List – A file which contains the names of multiple programs that will provide source data to the import operator.
When the stage is used as a destination, you can select one of the following options:
- File(s) – A single file or multiple files. This option is the default.
- Destination – One or more programs and arguments that provide destinations for the exported data.
- Destination List – A file that contains the names of multiple programs that provide destinations for the exported data.
- In the CFF_table_input stage properties, on the Records tab, load the COBOL layout imported in Step 1, then click ‘OK’.
- The table definitions are now loaded into the stage, as shown in the screenshot.
- Double-click the TRNS_table_mapping stage and map all the source file columns to the output target, as shown in the screenshot below. (You can apply any logic in the Transformer stage based on the business requirement, or omit this stage if it is a direct mapping; an example mapping follows.)
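For instance, the output column derivations in the Transformer might look like the following, continuing the hypothetical link and column names from the earlier examples (Trim is a standard Transformer function):

```
Target column    Derivation
CUST_ID          lnk_src.CUST_ID
CUST_NAME        Trim(lnk_src.CUST_NAME)
CUST_BALANCE     lnk_src.CUST_BALANCE
```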
- In the DB2_table_output stage, on the Properties tab, set the parameters for the database name, username, password, and table name.
Once you have given the appropriate connection details, test the database connection by clicking ‘Test’ and view the data by clicking ‘View Data’. Then click ‘OK’.
- In the DB2_table_output stage, on the Columns tab, load the table definition that you imported in Step 2. Select the fields that are required to load into the target table.
- If the Write mode is set to Bulk load, set the associated properties as well:
- Message file – Set this to a directory where the DB2 loader writes diagnostic messages. The database instance must have read/write privileges on the file. (Note: give the right path for the message file; the file name defaults to the one shown in the screenshot below.)
Since this is a non-logged insert, the errors need to be logged somewhere in case of abends; that is why a file name is given in the ‘Message file’ option.
- Non-recoverable load – Set to Yes. With this option, table spaces are not put in backup-pending state following the load operation, and a copy of the loaded data does not have to be made during the load operation. (A stand-alone equivalent of these two options is sketched after this list.)
- Data Buffer Size – Set depending on the resources available.
- Sort Buffer Size – Set depending on the resources available.
- CPU Parallelism – Set depending on the CPUs available.
- In addition to the properties mentioned above, you can parameterize the Record count and Array size options so that different values can be entered each time the job is run, as required. These values should also be tuned to suit the size of the tables involved.
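For reference, the Message file and Non-recoverable load settings above correspond to the MESSAGES and NONRECOVERABLE options of DB2's own LOAD command. A rough stand-alone equivalent, using the illustrative file and table names from the earlier examples, would be:

```
db2 "LOAD FROM /data/customer.del OF DEL
     MESSAGES /tmp/cff_load_msgs.txt
     INSERT INTO DEVSCH.CUSTOMER
     NONRECOVERABLE"
```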
- Save the Job.
Step 4 – Job Compilation
- Click the Compile icon to compile the job.
- Once it is successfully compiled, you will see the window as shown. Then click ‘Close’.
Step 5 – Job Execution
- Click the Run icon to run the job.
- The table is loaded successfully.
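Alternatively, compiled jobs can be started from the engine's command line with the dsjob utility. A minimal sketch, assuming a project named DEVPROJ and a job named CFF_to_DB2_load (both hypothetical names), passing the parameters defined in Step 3:

```
# Run the job and wait for its completion status
dsjob -run -jobstatus \
      -param DBName=SAMPLEDB \
      -param UserName=db2user \
      -param Password=secret \
      DEVPROJ CFF_to_DB2_load

# Summarize the log entries for the run
dsjob -logsum DEVPROJ CFF_to_DB2_load
```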