You may encounter many errors in DataStage while compiling or running jobs. Some of the common errors are as follows:
a) Source file not found.
This occurs when you try to read a file that does not exist under the given name.
b) Sometimes you may get fatal errors.
c) Data type mismatches.
This occurs when the data types used in the job do not match.
d) Field size errors.
e) Metadata mismatch.
f) Data type sizes differ between source and target.
g) Column mismatch.
h) Process timeout.
This error can occur when the server is busy.
Some of the errors in detail:
ds_Trailer_Rec: When checking operator: When binding output schema variable "outRec": When binding output interface field "TrailerDetailRecCount" to field "TrailerDetailRecCount": Implicit conversion from source type "ustring" to result type "string[max=255]": Possible truncation of variable length ustring when converting to string using codepage ISO-8859-1.
Solution: Resolved by changing the extended attribute of the column, under the transformer's column metadata, to Unicode.
Syntax error: Error in "group" operator: Error in output redirection: Error in output parameters: Error in modify adapter: Error in binding: Could not find type: "subrec", line 35
Solution: This is an issue with the level number of the columns being added in the Transformer. Their level number was blank, while the columns taken from the CFF file had level number 02. Adding the level number made the job work.
Out_Trailer: When checking operator: When binding output schema variable "outRec": When binding output interface field "STDCA_TRLR_REC_CNT" to field "STDCA_TRLR_REC_CNT": Implicit conversion from source type "dfloat" to result type "decimal[10,0]": Possible range/precision limitation.
CE_Trailer: When checking operator: When binding output interface field "Data" to field "Data": Implicit conversion from source type "string" to result type "string[max=500]": Possible truncation of variable length string.
Implicit conversion from source type "dfloat" to result type "decimal[10,0]": Possible range/precision limitation.
Solution: Use the transformer function 'DFloatToDecimal', since the target field is Decimal. By default the output from the Aggregator stage is dfloat (double), which triggers the warning above; converting explicitly with this function resolves it.
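For example, the output derivation in the Transformer could be written as follows (the link and column names here are illustrative, not from the original job):
DFloatToDecimal(lnk_agg.TRLR_REC_CNT)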
When binding output schema variable "outputData": When binding output interface field "RecordCount" to field "RecordCount": Implicit conversion from source type "string[max=255]" to result type "int16": Converting string to number.
Problem: Jobs that process a large amount of data in a column can abort with this error:
the record is too big to fit in a block; the length requested is: xxxx, the max block length is: xxxx.
Resolution: Increase the block size to accommodate the record size:
- Log into Designer and open the job.
- Open Job Properties --> Parameters --> Add Environment Variable and select APT_DEFAULT_TRANSPORT_BLOCK_SIZE.
- You can set this as high as 256 MB, but you really shouldn't need to go over 1 MB.
NOTE: the value is specified in bytes; the default is 128 KB. For example, to set the value to 1 MB:
APT_DEFAULT_TRANSPORT_BLOCK_SIZE=1048576
When setting APT_DEFAULT_TRANSPORT_BLOCK_SIZE, use the smallest value that works, since it applies to every link in the job. For example, if your job fails with APT_DEFAULT_TRANSPORT_BLOCK_SIZE set to 1 MB and succeeds at 4 MB, do further testing to find the smallest value between 1 MB and 4 MB that allows the job to run, and use that value. Using 4 MB could cause the job to use more memory than needed, since all links would use a 4 MB transport block size.
NOTE: If this error appears for a dataset, use APT_PHYSICAL_DATASET_BLOCK_SIZE instead.
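As a quick sketch, the same value can also be exported in the engine environment (for example in dsenv) so that every job picks it up — this assumes you really want an engine-wide default rather than the per-job parameter described above:
export APT_DEFAULT_TRANSPORT_BLOCK_SIZE=1048576   # 1 MB, value in bytes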
1. While connecting via Remote Desktop: "The terminal server has exceeded the maximum number of allowed connections."
SOL: In the command prompt, type mstsc /v:<ip address of server> /admin
OR mstsc /v:<ip address> /console
2. SQL20521N. Error occurred processing a conditional compilation directive near string. Reason code=rc.
The following link has the issue description:
http://pic.dhe.ibm.com/infocenter/db2luw/v9r7/index.jsp?topic=%2Fcom.ibm.db2.luw.messages.sql.doc%2Fdoc%2Fmsql20521n.html
3. SK_RETAILER_GROUP_BRDIGE,1: runLocally() did not reach EOF on its input data set 0.
SOL: The warning disappears after regenerating the SK (surrogate key) file.
4. While connecting to the DataStage client there is no response, and while restarting WebSphere services the following errors occurred:
[root@poluloro01 bin]# ./stopServer.sh server1 -user wasadmin -password Wasadmin0708
ADMU0116I: Tool information is being logged in file
/opt/ibm/WebSphere/AppServer/profiles/default/logs/server1/stopServer.log
ADMU0128I: Starting tool with the default profile
ADMU3100I: Reading configuration for server: server1
ADMU0111E: Program exiting with error: javax.management.JMRuntimeException:
ADMN0022E: Access is denied for the stop operation on Server MBean
because of insufficient or empty credentials.
ADMU4113E: Verify that username and password information is on the command line
(-username and -password) or in the <conntype>.client.props file.
ADMU1211I: To obtain a full trace of the failure, use the -trace option.
ADMU0211I: Error details may be seen in the file:
/opt/ibm/WebSphere/AppServer/profiles/default/logs/server1/stopServer.log
SOL: The wasadmin and xmeta passwords need to be reset; the commands are below.
[root@poluloro01 bin]# cd /opt/ibm/InformationServer/ASBServer/bin/
[root@poluloro01 bin]# ./AppServerAdmin.sh -was -user wasadmin
-password Wasadmin0708
Info WAS instance /Node:poluloro01/Server:server1/ updated with new user information
Info MetadataServer daemon script updated with new user information
[root@poluloro01 bin]# ./AppServerAdmin.sh -was -user xmeta -password Xmeta0708
Info WAS instance /Node:poluloro01/Server:server1/ updated with new user information
Info MetadataServer daemon script updated with new user information
5. “The specified field doesn’t exist in view adapted schema”
SOL: Most of the time, "The specified field: XXXXXX does not exist in the view adapted schema" occurs when a field has been missed in the mapping. Every stage used in the middle of a job has an Output tab; make sure you have mapped every single field required by the next stage.
Sometimes this error occurs even after mapping the fields; one possible reason is that the view adapter has not linked the input and output fields. In that case the affected field mapping should be dropped and recreated.
To give some insight: the view adapter is an operator responsible for mapping the input and output fields. DataStage creates an instance of APT_ViewAdapter, which translates the components of the operator's input interface schema to matching components of the interface schema. So if the interface schema does not have the same columns as the operator's input interface schema, this error is reported.
1) When we use Same partitioning in the DataStage Transformer stage, we get the following warning in version 7.5.2:
TFCP000043 2 3 input_tfm: Input dataset 0 has a partitioning method other than entire specified; disabling memory sharing.
This is a known issue, and you can safely demote the warning to informational by adding it to the project-specific message handler.
2) Warning: A sequential operator cannot preserve the partitioning of input data set on input port 0
Resolution: Clear the preserve partition flag before Sequential file stages.
3) DataStage parallel job fails with: fork() failed, Resource temporarily unavailable.
On AIX, execute the following command to check the maxuproc setting, and increase it if you plan to run multiple jobs at the same time.
lsattr -E -l sys0 | grep maxuproc
maxuproc 1024 Maximum number of PROCESSES allowed per user True
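If the limit is too low, you can raise it with chdev (run as root; the value 2048 here is just an illustration — size it to your concurrent workload):
chdev -l sys0 -a maxuproc=2048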
4)TFIP000000 3 Agg_stg: When checking operator: When binding input interface field “CUST_ACT_NBR” to field “CUST_ACT_NBR”: Implicit conversion from source type “string[5]” to result type “dfloat”: Converting string to number.
Resolution: Use the Modify stage to explicitly convert the data type before sending data to the Aggregator stage.
5)Warning: A user defined sort operator does not satisfy the requirements.
Resolution: Check the order of the sort columns, and make sure the Join stage that follows the Sort uses the same key order when joining the two inputs.
6)TFTM000000 2 3 Stg_tfm_header,1: Conversion error calling conversion routine timestamp_from_string data may have been lost
TFTM000000 1 xfmJournals,1: Conversion error calling conversion routine decimal_from_string data may have been lost
Resolution: Check for the correct date or decimal format, and also for null values in the date or decimal fields, before passing them to the DataStage StringToDate, DateToString, DecimalToString, or StringToDecimal functions.
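For example, a guarded derivation in the Transformer might look like the following (the link and column names are illustrative, the format string assumes dates arrive as e.g. 2014-01-31, and the target column must be nullable for SetNull() to be valid):
If IsNull(lnk_in.ORDER_DT) Or Trim(lnk_in.ORDER_DT) = "" Then SetNull() Else StringToDate(lnk_in.ORDER_DT, "%yyyy-%mm-%dd")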
7)TOSO000119 2 3 Join_sort: When checking operator: Data claims to already be sorted on the specified keys the ‘sorted’ option can be used to confirm this. Data will be resorted as necessary. Performance may improve if this sort is removed from the flow
Resolution: Sort the data before sending it to the Join stage, and check the order of the sort keys and join keys to make sure both are in the same order.
8)TFOR000000 2 1 Join_Outer: When checking operator: Dropping component “CUST_NBR” because of a prior component with the same name.
Resolution: If you are using Join, Difference, Merge, or Compare stages, make sure the two input links have different column names for everything except the key columns.
9)TFIP000022 1 oci_oracle_source: When checking operator: When binding output interface field “MEMBER_NAME” to field “MEMBER_NAME”: Converting a nullable source to a non-nullable result;
Resolution: If you are reading from an Oracle database, or in any processing stage where the incoming column is defined as nullable, and you define the metadata in DataStage as non-nullable, you will get the above issue. If you want to convert a nullable field to non-nullable, make sure you apply the available null-handling functions in DataStage or in the extract query.
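For example, nulls can be handled either in the extract query (the table name and default value here are illustrative):
SELECT NVL(MEMBER_NAME, 'UNKNOWN') AS MEMBER_NAME FROM member_dim
or in a Transformer derivation, with a null-handling function such as NullToValue (available in recent parallel Transformer versions):
NullToValue(lnk_in.MEMBER_NAME, "UNKNOWN")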
DATASTAGE COMMON ERRORS/WARNINGS AND SOLUTIONS – 2
1. No jobs or logs showing in IBM DataStage Director Client, however jobs are still accessible from the Designer Client.
SOL: The SyncProject command installed with DataStage 8.5 can be run to analyze and recover projects:
SyncProject -ISFile islogin -project dstage3 dstage5 -Fix
2. CASHOUT_DTL: Invalid property value /Connection/Database (CC_StringProperty::getValue, file CC_StringProperty.cpp, line 104)
SOL: Change the Data Connection properties manually in the generated DB2 Connector stage.
A patch (JR35643) is available for this issue.
3. Import .dsx file from command line
SOL: DSXImportService -ISFile dataconnection -DSProject dstage -DSXFile c:\export\oldproject.dsx
4. Generate Surrogate Key without Surrogate Key Stage
SOL: Use the following formula in a Transformer stage derivation to generate a surrogate key:
@PARTITIONNUM + (@NUMPARTITIONS * (@INROWNUM - 1)) + 1
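To see why this formula yields unique keys, consider a job running on four partitions (@NUMPARTITIONS = 4): partition 0 generates 1, 5, 9, ..., partition 1 generates 2, 6, 10, ..., partition 2 generates 3, 7, 11, ..., and partition 3 generates 4, 8, 12, ..., so no two rows ever receive the same key. Note that, unlike the Surrogate Key stage, this expression keeps no state between runs; to continue from an earlier load you would need to add an offset (for example, the current maximum key passed in as a job parameter).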
5. Failed to authenticate the current user against the selected Domain: Could not connect to server.
RC: The client has an invalid entry in the hosts file;
the server listening port might be blocked by a firewall;
or the server is down.
SOL: Update the hosts file on the client system so that the server hostname can be resolved from the client.
Make sure the WebSphere TCP/IP ports are opened by the firewall.
Make sure the WebSphere application server is running. (OR)
Restart Websphere services.
6. The connection was refused or the RPC daemon is not running (81016)
RC: The dsrpcd process must be running in order to be able to log in to DataStage.
If you restart DataStage, but the socket used by the dsrpcd (default is 31538) was busy, the dsrpcd will fail to start. The socket may be held by dsapi_slave processes that were still running or recently killed when DataStage was restarted.
SOL: Run “ps -ef | grep dsrpcd” to confirm the dsrpcd process is not running.
Run “ps -ef | grep dsapi_slave” to check if any dsapi_slave processes exist. If so, kill them.
Run “netstat -a | grep dsrpc” to see if any processes have sockets that are ESTABLISHED, FIN_WAIT, or CLOSE_WAIT. These will prevent the dsrpcd from starting. The sockets with status FIN_WAIT or CLOSE_WAIT will eventually time out and disappear, allowing you to restart DataStage.
Then restart the DSEngine. If the above doesn't work, reboot the system.
7. To save DataStage logs in Notepad or another readable format
SOL: a) cd /opt/ibm/InformationServer/server/DSEngine (go to this directory), then run:
./bin/dsjob -logdetail project_name job_name > /home/dsadm/log.txt
b) In the Director client: Project tab --> Print --> select the 'Print to file' option and save it to a local directory.
8. “Run time error ’457′. This Key is already associated with an element of this collection.”
SOL: Need to rebuild the repository objects.
a) Login to the Administrator client
b) Select the project
c) Click on Command
d) Issue the command ds.tools
e) Select option ‘2’
f) Keep clicking next until it finishes.
g) All objects will be updated.
9. To stop DataStage jobs at the Linux level
SOL: ps -ef | grep dsadm
(to check the process IDs and phantom jobs), then
kill -9 process_id
10. To run DataStage jobs from the command line
SOL: cd /opt/ibm/InformationServer/server/DSEngine
./dsjob -server $server_nm -user $user_nm -password $pwd -run $project_nm $job_nm
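For example, with illustrative values (adding -jobstatus makes dsjob wait for the job and return its finishing status):
./dsjob -server dshost:31538 -user dsadm -password secret -run -jobstatus devproject job_load_customers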
11. Failed to connect to JobMonApp on port 13401.
SOL: Need to restart the jobmoninit script (in /opt/ibm/InformationServer/Server/PXEngine/Java):
sh jobmoninit start $APT_ORCHHOME
Also add a “127.0.0.1 localhost” entry to the /etc/hosts file
(without the localhost entry, the Job Monitor will be unable to use the ports correctly).
12. SQL0752N. Connecting to a database is not permitted within a logical unit of work when the CONNECT type 1 setting is in use.
SOL: Issue a COMMIT or ROLLBACK statement before requesting a connection to another database.
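For example, in a DB2 CLP script (the database and table names here are purely illustrative):
CONNECT TO salesdb;
INSERT INTO staging_orders SELECT * FROM orders;
COMMIT;
CONNECT TO financedb;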
1. While running the ./NodeAgents.sh start command, the following error occurs: “LoggingAgent.sh process stopped unexpectedly”
SOL: Need to kill the LoggingAgentSocketImpl process:
ps -ef | grep LoggingAgentSocketImpl (OR)
ps -ef | grep Agent (to check the process ID of the above)
2. Warning: A sequential operator cannot preserve the partitioning of input data set on input port 0
SOL: Clear the preserve partition flag before Sequential file stages.
3. Warning: A user defined sort operator does not satisfy the requirements.
SOL: Check the order of the sort columns, and make sure the Join stage that follows the Sort uses the same key order when joining the two inputs.
4. Conversion error calling conversion routine timestamp_from_string data may have been lost. xfmJournals,1: Conversion error calling conversion routine decimal_from_string data may have been lost
SOL: Check for the correct date or decimal format, and also for null values in the date or decimal fields, before passing them to the DataStage StringToDate, DateToString, DecimalToString, or StringToDecimal functions.
5. To display all the jobs in command line
SOL:
cd /opt/ibm/InformationServer/Server/DSEngine/bin
./dsjob -ljobs <project_name>
6. “Error trying to query dsadm[]. There might be an issue in database server”
SOL: Check XMETA connectivity.
db2 connect to xmeta (A connection to or activation of database “xmeta” cannot be made because of BACKUP pending)
7. “DSR_ADMIN: Unable to find the new project location”
SOL: The Template.ini file might be missing in /opt/ibm/InformationServer/Server.
Copy the file from another server.
8. “Designer LOCKS UP while trying to open any stage”
SOL: Double-click on the stage that locks up DataStage,
press ALT+SPACE,
a window menu will pop up; select Restore,
and the Properties window will now be shown.
Click “X” to close this window.
Now double-click again and check whether the Properties window appears.
9. “Error setting up internal communications (fifo RT_SCTEMP/job_name.fifo)”
SOL: Remove the locks and try to run (OR)
Restart DSEngine and try to run (OR)
Go to /opt/ibm/InformationServer/server/Projects/proj_name/
ls RT_SCT* then
rm -f RT_SCTEMP
then try to restart it.
10. While attempting to compile job, “failed to invoke GenRunTime using Phantom process helper”
RC: /tmp space might be full
Job status is incorrect
Format problems with projects uvodbc.config file
SOL: a) clean up /tmp directory
b) DS Director --> Job --> Clear Status File
c) confirm uvodbc.config has the following entry/format:
[ODBC SOURCES]
<local uv>
DBMSTYPE = UNIVERSE
Network = TCP/IP
Service = uvserver
Host = 127.0.0.1
ERROR: Phantom error in jobs
Resolution: The DataStage services have to be restarted. Follow these steps:
Log in to the server through PuTTY using the dsadm user.
Check whether active or stale sessions are present:
ps -ef | grep slave
Ask the application team to close the active or stale sessions running under the application's user.
If they have closed the sessions but the sessions are still there, kill those sessions.
Make sure no jobs are running; if any are, ask the application team to stop them:
ps -ef | grep dsd.run
Check the output of the command below before stopping the DataStage services:
netstat -a | grep dsrpc
If any connections are in ESTABLISHED state, check that no job, stale or active session, or osh session is still running.
If any connections are in CLOSE_WAIT, wait for some time; those processes will disappear.
Stop the DataStage services:
cd $DSHOME
. ./dsenv
cd $DSHOME/bin
./uv -admin -stop
Check whether the DataStage services are stopped:
netstat -a | grep dsrpc
No output should appear for the above command.
Wait 10 to 15 minutes for shared memory to be released by the processes holding it.
Start the DataStage services:
./uv -admin -start
If the command asks for the dsadm password, enable impersonation through the root user:
${DSHOME}/scripts/DSEnable_impersonation.sh