Thursday, February 27, 2014

NLS settings for the Trademark symbol in DataStage


Changes made in the job:

Step 1:
Added the job parameter NLS_LANG with the default value ‘AMERICAN_AMERICA.AL32UTF8’

Step 2:

Changed the NLS map from the project default (ISO_8859) to ‘ASCL_MS1252’

Step 3:

Changed the column metadata to ‘Unicode’


Target:
 The TM (trademark) symbol should now be visible in the target.
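
For reference, here is a small Python sketch (run outside DataStage, purely illustrative) of why these changes matter: the trademark symbol has a single-byte representation in Windows-1252 (which is what ASCL_MS1252 maps to) and a multi-byte sequence in UTF-8 (AL32UTF8 / Unicode column metadata), but no representation at all in ISO-8859-1, the old project default.

    # Byte representations of the trademark symbol in the three character sets involved
    tm = "\u2122"                     # the (TM) trademark symbol

    print(tm.encode("utf-8"))         # b'\xe2\x84\xa2' -> AL32UTF8 / Unicode column metadata
    print(tm.encode("cp1252"))        # b'\x99'         -> Windows-1252, i.e. the ASCL_MS1252 map

    try:
        tm.encode("iso-8859-1")       # the old project default (ISO_8859)
    except UnicodeEncodeError:
        print("(TM) is not representable in ISO-8859-1, so it gets lost or replaced")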

Sunday, February 9, 2014

DataStage Debugging

This article covers DataStage debugging techniques. They can be applied to a job that
  • is not producing the expected output data, or
  • is aborting or generating warnings.
  1. Use the Data Set Management utility, which is available in the Tools menu of the DataStage Designer or the DataStage Manager, to examine the schema, look at row counts, and delete a Parallel Data Set. You can also view the data itself.
  2. Check the DataStage job log for warnings or abort messages. These may indicate an underlying logic problem or an unexpected data type conversion. Check all the messages: PX jobs almost always generate many warnings in addition to the one from the actual problem area.
  3. Run the job with message handling (both job level and project level) disabled to find out whether any warnings are being demoted to information messages or dropped from the log.
  4. Enable APT_DUMP_SCORE to see how the different stages are combined into operators. Some log messages only report that the error is in an APT_CombinedOperatorController stage; the stages that make up that APT_CombinedOperatorController can be identified from the dump score produced once this environment variable is enabled.
    This environment variable causes DataStage to add a log entry showing how stages are combined into operators, which virtual datasets are used, how the operators are partitioned, and how many partitions are created.
  5. One can also enable the APT_RECORD_COUNTS environment variable, and enable OSH_PRINT_SCHEMAS to check that the runtime schema of the job matches the design-time schema that was expected.
  6. Sometimes the underlying data contains special characters (such as null characters) in the database or files, and this can also cause trouble during execution. If the data is in a table or dataset, export it to a sequential file (using a DS job), then use the command “cat -tev” or “od -xc” to find the special characters (a small Python sketch after this list shows the same check).
  7. One can also use “wc -lc filename”, which displays the number of lines and characters in the specified ASCII text file. Sometimes this is also useful.
  8. Modular approach: If the job is very bulky, with many stages, and you are unable to locate the error, one option is the modular approach, in which the job is executed step by step. E.g. if a job has 10 stages, create a copy of the job, keep only, say, the first 3 stages, and run it. Check the result; if it is fine, add a few more stages (maybe one or two) and run the job again. Repeat this until the error is located.
  9. Partitioned approach with data: This approach is very useful if the job runs fine for one set of data but fails for another, or fails only for a large number of rows. Here, one runs the job on a selected number of rows and/or partitions using the DataStage @INROWNUM (and, in PX, @PARTITIONNUM) variables. E.g. a job works fine with 10K rows but fails with 1M rows. One can use @INROWNUM to run the job for, say, the first 0.25 million rows; if those are fine, then for rows 0.25 million to 0.5 million, and so on (see the row-range sketch after this list).
    Please note that if the job is a parallel job, one also has to consider the number of partitions in the job.
  10. Another option in such a case is to run the job on only one node (for example, by setting APT_EXECUTION_MODE to sequential or by using a configuration file with a single node).
  11. Execution mode: Sometimes, if the partitions are confusing, one can run the job in sequential mode. There are two ways to achieve this:
    1. Use the environment variable APT_EXECUTION_MODE and set it to sequential mode.
    2. Use a configuration file with only one node (a sample single-node configuration file appears after this list).
  12. A parallel job fails and the error does not tell which row it failed for: In this case, if the job is simple, one can try to build the equivalent server job and run it. Server jobs can report the errors along with the rows that are in error. This is very useful when DB errors such as primary/unique key violations, or any other DB error, are reported by a PX job.
  13. Sometimes, when dealing with a DB, if the rows are not getting loaded as expected, adding reject links to the DB stages can help locate the rows with issues.
  14. In a big job, adding some intermediate datasets/Peek stages to find out the data values at certain points can help. E.g. suppose there are 10 stages and the output then goes to a dataset, with different operations done at different stages. After every 2 or 3 stages, add Peek stages or send the data to datasets using Copy stages, and check the values at these intermediate points to narrow down where the data goes wrong.
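
As mentioned in items 6 and 7, a quick way to hunt for troublesome bytes in an exported sequential file is “cat -tev”, “od -xc” or “wc -lc”. Below is a small Python equivalent for cases where those commands are inconvenient; the file name is only a placeholder.

    # Scan a sequential file for NULs and other control characters (item 6)
    # and report line/character counts similar to "wc -lc" (item 7).
    def scan_for_special_bytes(path):
        lines = chars = 0
        with open(path, "rb") as f:
            for lineno, raw in enumerate(f, start=1):
                lines += 1
                chars += len(raw)
                for col, b in enumerate(raw.rstrip(b"\r\n"), start=1):
                    if b == 0 or (b < 32 and b != 9):   # NUL or control character; tabs are allowed
                        print(f"line {lineno}, col {col}: control byte 0x{b:02x}")
        print(f"{lines} lines, {chars} characters")

    scan_for_special_bytes("exported_data.txt")         # placeholder file name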
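For the partitioned approach in item 9, the work is mostly arithmetic: decide the row windows to test one at a time and feed each window to the job through an @INROWNUM-based constraint. This small sketch only prints the windows; the figures are the example numbers from item 9, and the exact constraint syntax should be checked against your Transformer.

    # Print consecutive row windows to test one at a time (item 9).
    def row_windows(total_rows, window_size):
        start = 1
        while start <= total_rows:
            end = min(start + window_size - 1, total_rows)
            print(f"test rows {start} to {end}")        # drive an @INROWNUM constraint with these bounds
            start = end + 1

    row_windows(total_rows=1_000_000, window_size=250_000)
    # test rows 1 to 250000
    # test rows 250001 to 500000
    # ...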
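And for the single-node configuration file mentioned in items 10 and 11, a minimal example is sketched below. The host name and resource paths are placeholders and must be adjusted to your installation; point APT_CONFIG_FILE at this file when running the job.

    {
        node "node1"
        {
            fastname "your_server_hostname"
            pools ""
            resource disk "/path/to/Datasets" {pools ""}
            resource scratchdisk "/path/to/Scratch" {pools ""}
        }
    }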