Category: Troubleshooting

Ben Admin Snapshot Error

I ran into this error with the Ben Admin Snapshot process. It was the second time I had hit it, so I figured I had better get it into my notes so I would remember it.

Here’s the error…

Costs Not Found For Pgm/Plan/Optn: <benefit program>/<plan type>/<option code>.

Application Program Failed
 In Pgm Section  : MOVE-TO-PDEFN

Application Program Failed
 In Pgm Section  : TABLE-ACCESS(PSPBATBL)

Application Program Failed
 In Pgm Section  : MAIN(PSPBASCH)


HP_RTBC:RTBCMain/HP_RTBC_INSTALL_DEPT_SEL Issues

After installing a series of updates (852825, 872267, 872523, 874418), I ran into a SQL error.  I am not sure which update caused it, and actually, Find Definition References pointed to 808749, which is HRMS 9.1 Bundle #1.

As I am not able to open a ticket on this issue currently, this is what I did to fix the problem …

I noticed this error in one of my CI logs.

SQLExec: 3 select list items in parameter list, but SQL has 1 select columns. (2,124) HP_RTBC.RTBCMain.OnExecute  Name:Job_RTBC  PCPC:4183  Statement:89
Called from:HP_RTBC.RTBCMain.OnExecute  Name:MainProcess  Statement:9
Called from:JOB.LASTUPDOPRID.SaveEdit  Statement:3(2,124)

As a result, I couldn’t hire anyone.  The problem appeared to be this line:

SQLExec(SQL.HP_RTBC_INSTALL_DEPT_SEL,
        &rsJobTmp(&i).JOB.SETID_DEPT.Value,
        &rsJobTmp(&i).JOB.DEPTID.Value,
        &rsJobTmp(&i).JOB.EFFDT.Value,
        &DeptUseEncumbrances,
        &ProcOption,
        &HP_AUTO_RTBC);

I decided to inspect the SQL, and I found this code:

SELECT USE_ENCUMBRANCES
FROM PS_DEPT_TBL A
WHERE A.SETID=:1
AND A.DEPTID =:2
AND A.EFFDT = (
SELECT MAX(B.EFFDT)
FROM PS_DEPT_TBL B
WHERE B.SETID=A.SETID
AND B.DEPTID = A.DEPTID
AND B.EFFDT<=%DateIn(:3))

That’s the problem.  The SQL has only 1 field in the select list (USE_ENCUMBRANCES), but the PeopleCode expects 3 to fill its 3 output variables (&DeptUseEncumbrances, &ProcOption, and &HP_AUTO_RTBC).
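The mismatch is easy to reproduce outside of PeopleSoft.  Here is a minimal Python/sqlite3 sketch of the same situation (illustrative only; the table and values are made up, and Python tuple unpacking stands in for SQLExec's output binding):

```python
import sqlite3

# Toy stand-in for PS_DEPT_TBL with only one selectable column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dept_tbl (use_encumbrances TEXT)")
conn.execute("INSERT INTO dept_tbl VALUES ('Y')")

row = conn.execute("SELECT use_encumbrances FROM dept_tbl").fetchone()

# SQLExec-style: try to fill three output variables from one select column.
try:
    use_encumbrances, proc_option, auto_rtbc = row
except ValueError as e:
    print("Bind mismatch:", e)  # row has 1 item, but we asked for 3
```

One select column cannot fill three output variables, which is exactly what the (2,124) error message is complaining about.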

I did a little research, and as best I could tell, this PeopleCode is the only place that uses the SQL definition.

I couldn’t do a Find Definition References on the Application Package PeopleCode, but I could do a SQL query to determine which projects it was involved in:

SELECT * FROM PSPROJECTITEM
WHERE OBJECTVALUE1 = 'HP_RTBC'
AND OBJECTVALUE2 = 'RTBCMain';

The projects it returned were some of the updates that I had just installed, so I bet that was my problem.

So, I updated the HP_RTBC_INSTALL_DEPT_SEL SQL definition and added the two extra fields as best I could figure.

SELECT USE_ENCUMBRANCES
 , HP_RTBC_OPTION
 , HP_AUTO_RTBC
FROM PS_DEPT_TBL A
WHERE A.SETID=:1
AND A.DEPTID =:2
AND A.EFFDT = (
SELECT MAX(B.EFFDT)
FROM PS_DEPT_TBL B
WHERE B.SETID=A.SETID
AND B.DEPTID = A.DEPTID
AND B.EFFDT<=%DateIn(:3))

And, that fixed my problem!
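To sanity-check the corrected SQL, here is a sketch of the same effective-dated pattern in Python/sqlite3 (a toy table with made-up values; %DateIn is PeopleSoft meta-SQL, so plain date strings stand in for it here):

```python
import sqlite3

# Toy PS_DEPT_TBL with the two extra columns and two effective-dated rows.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE ps_dept_tbl
    (setid TEXT, deptid TEXT, effdt TEXT,
     use_encumbrances TEXT, hp_rtbc_option TEXT, hp_auto_rtbc TEXT)""")
conn.executemany("INSERT INTO ps_dept_tbl VALUES (?,?,?,?,?,?)", [
    ("SHARE", "10000", "2000-01-01", "N", "D", "N"),
    ("SHARE", "10000", "2011-01-01", "Y", "D", "Y"),  # current row
])

# Correlated MAX(EFFDT) subquery picks the row in effect as of the given date.
row = conn.execute("""
    SELECT use_encumbrances, hp_rtbc_option, hp_auto_rtbc
    FROM ps_dept_tbl A
    WHERE A.setid = ? AND A.deptid = ?
      AND A.effdt = (SELECT MAX(B.effdt) FROM ps_dept_tbl B
                     WHERE B.setid = A.setid AND B.deptid = A.deptid
                       AND B.effdt <= ?)
""", ("SHARE", "10000", "2012-06-01")).fetchone()

use_encumbrances, proc_option, auto_rtbc = row  # three columns, three variables
print(row)  # ('Y', 'D', 'Y')
```

With three columns in the select list, the three output variables bind cleanly.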

Installing Global Payroll Rules

This post is a follow-up to this thread that I opened.  Unfortunately, at this point, I haven’t figured out what I am doing wrong.  If you see something, please comment either here or on the thread at Oracle Forums.

I am trying to install the Global Payroll Rules for 856737 and 857845 (both Non-Rule + Rule).

I found that I do this at this navigation: Set Up HRMS > Product Related > Global Payroll & Absence Mgmt > Elements > Manage Global Payroll Packages > Apply Rule Package:

Applying Rule Package

I found a little documentation in PeopleBooks.

Then, I put in the path to my scripts directory, saved, and clicked process.

Adding Scripts Directory

It said the scripts were not found:

Script Not Found Message

But, as far as I can tell, the scripts are in the directory!

Scripts directory

Just for kicks and grins, I copied the DAT files into the same directory:

Scripts Directory with Dat files

Still no luck!

My next attempt is to try to see if case matters.  The files are in lowercase in the directory, but on the page the Package ID is uppercase.

Using Lowercase Package ID

That still doesn’t work because it must be an uppercase field:

Rule Package ID made Uppercase

Update:

My problem was the missing “/” at the end of the path.  Once I added the “/”, it worked!

Missing Slash
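If you ever script around a setting like this, it is worth normalizing the path defensively.  A generic Python sketch (not PeopleCode; the function name is mine):

```python
def ensure_trailing_sep(path, sep="/"):
    """Append the separator if the path does not already end with one."""
    return path if path.endswith(sep) else path + sep

print(ensure_trailing_sep("/opt/psoft/scripts"))   # adds the trailing slash
print(ensure_trailing_sep("/opt/psoft/scripts/"))  # already there; unchanged
```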

Then, when I clicked process, it ran the data mover script:

Data Mover Script Running


Troubleshooting Report Repository Error

I thought I had my environment built, but I keep finding little problems.  Well, this time it was the report repository.  I got the reports posting, and I thought I was home free.  The solution ended up being simpler than I thought.

Let me just walk you through some of the things I checked, in case it helps with any problems you may be having.

My specific error was: “Site name is not valid”.

The cause was that my Report Repository path was not configured (PeopleTools > Web Profile > Web Profile Configuration).

Blank Report Repository Path

The catch was that somehow I got mixed up on which Profile I was using.  You can check your profile easily from the Web Profile History:

Determining Current Web Profile from Last Loaded

On the search page of the Web Profile History, you can see which profile was loaded last.  You can sort the list by clicking on the “Profile was Loaded” heading.  If you click a second time, it will sort it descending.  Notice that previously I was using the Dev profile on port 8000.  When I last installed PIA, I must have configured it for the Prod profile on Port 80.  So now, I need to configure things on the Prod profile.

Here are some other things you can check.  First, make sure your distribution node is connected to your server:

Report Node connecting to Server

Then, check the Report Node.  Make sure the URL matches the URL you use to connect to the Portal: the domain name and port should match.  If there is no port number, as in mine, you are using port 80.  Then, check the portal domain name.  For me, I am using the default “ps”.

According to this post, you also need to check the local node.  In PeopleTools > Portal > Node Definitions, look for the node with Local Node set to “Y”:

Default Local Node Search

Now, make sure the node is set to password authentication.  I am not sure what else to check here.

Node Options

Finally, check your configuration.properties file for the path.  I am not sure what this setting controls because mine was wrong just now, yet it seemed to be working.

Look in <PS Home>\webserv\<domain>\applications\PORTAL.war\WEB-INF\psftdocs\<domain>\configuration.properties.

configuration properties location

The Repository path is in that file:

Repository Setting in Configuration.Properties


Tracing Tips and Resources

I had an issue recently where I needed to trace a process to try to determine where a bug was occurring.  So, I decided to document how I could set up tracing on a single process.

To enable tracing, you need to open the process definition.  Online, navigate to PeopleTools > Process Scheduler > Processes.  Then, search for the process that you want to trace.  Make sure that you change the search page so that you are searching by process name and not the process type.  Once you have the Process Definition open, go to the Override Options tab.  Then, change the parameter list to “Append” and enter the desired trace settings.

Applying the Trace Settings

This is the setting recommended by Ketan on PeopleSoft Support & Tips:

-TRACE 135 -TOOLSTRACESQL 31 -TOOLSTRACEPC 1984

I think I would prefer these settings instead:

-TRACE 7 -TOOLSTRACESQL 3 -TOOLSTRACEPC 3596

My options aren’t meant to say that Ketan’s are wrong.  I just cut out some of the options I wouldn’t use, to trim the extra info in the log file.  I also normally use traces for troubleshooting, so my options are tuned for that.

You can read below for details on what each of these does.

The -TRACE option, generally speaking, sets the tracing for the App Engine program itself.  For example, it can turn on trace output that shows which steps run in which order.  Here are the different options (from the psappsrv.cfg file):

  • 1 = Trace STEP execution sequence to AET file
  • 2 = Trace Application SQL statements to AET file
  • 4 = Trace Dedicated Temp Table Allocation to AET file
  • 8 = not yet allocated
  • 16 = not yet allocated
  • 32 = not yet allocated
  • 64 = not yet allocated
  • 128 = Timings Report to AET file
  • 256 = Method/BuiltIn detail instead of summary in AET Timings Report
  • 512 = not yet allocated
  • 1024 = Timings Report to tables
  • 2048 = DB optimizer trace to file
  • 4096 = DB optimizer trace to tables

If you’ll notice, 135 is “Trace STEP execution sequence to AET file” (1), “Trace Application SQL statements to AET file” (2), “Trace Dedicated Temp Table Allocation to AET file” (4), and “Timings Report to AET file” (128).  My preference just leaves off the Timings Report.  The timings report would be good for performance tuning, but if you are troubleshooting, it is just overhead.
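Since these are bit flags, any -TRACE value can be decoded mechanically.  Here is a small Python sketch using the table above (the flag names come from psappsrv.cfg; the decode helper is mine):

```python
# -TRACE bit flags, as listed in psappsrv.cfg.
TRACE_FLAGS = {
    1: "Trace STEP execution sequence to AET file",
    2: "Trace Application SQL statements to AET file",
    4: "Trace Dedicated Temp Table Allocation to AET file",
    128: "Timings Report to AET file",
    256: "Method/BuiltIn detail instead of summary in AET Timings Report",
    1024: "Timings Report to tables",
    2048: "DB optimizer trace to file",
    4096: "DB optimizer trace to tables",
}

def decode(value, flags):
    """Return the names of the flags set in value, lowest bit first."""
    return [name for bit, name in sorted(flags.items()) if value & bit]

print(decode(135, TRACE_FLAGS))  # 1 + 2 + 4 + 128
print(decode(7, TRACE_FLAGS))    # same, minus the Timings Report
```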

The -TOOLSTRACESQL option specifies the SQL tracing beyond the SQL executed by steps in an App Engine program.  This includes any PeopleTools system database communication and, more importantly, any database access caused by a component interface driving an online screen.

Here are the options:

  • 1 = SQL statements
  • 2 = SQL statement variables
  • 4 = SQL connect, disconnect, commit and rollback
  • 8 = Row Fetch (indicates that it occurred, not data)
  • 16 = All other API calls except ssb
  • 32 = Set Select Buffers (identifies the attributes of columns to be selected)
  • 64 = Database API specific calls
  • 128 = COBOL statement timings
  • 256 = Sybase Bind information
  • 512 = Sybase Fetch information
  • 1024 = SQL Informational Trace

Again, the recommended setting is 31, which includes: “SQL statements” (1), “SQL statement variables” (2), “SQL connect, disconnect, commit and rollback” (4), “Row Fetch” (8), and “All other API calls except ssb” (16).  My choice would be just the SQL statements and the variables because that is the most important information.  Commits and rollbacks can be important, but you generally know when those happen and don’t need the extra information.  The fetches tell you how many rows you will receive but none of the data, so they just clutter the trace file.  Those options can be helpful, but unless you need them, I would leave them off.
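As a quick arithmetic check on the two -TOOLSTRACESQL settings, using the bit values above (Python sketch):

```python
# -TOOLSTRACESQL bit values, as listed above.
recommended = 1 + 2 + 4 + 8 + 16  # Ketan's setting: statements, variables,
                                  # connects/commits, fetches, other API calls
mine = 1 + 2                      # just the SQL statements and bind variables

print(recommended, mine)  # 31 3
```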

Finally, the -TOOLSTRACEPC option turns on tracing for PeopleCode.  This could be PeopleCode steps in an App Engine, or it could be PeopleCode in a component accessed by a Component Interface.  Here are the options:

  • 1 = Trace Evaluator instructions (not recommended)
  • 2 = List Evaluator program (not recommended)
  • 4 = Show assignments to variables
  • 8 = Show fetched values
  • 16 = Show stack
  • 64 = Trace start of programs
  • 128 = Trace external function calls
  • 256 = Trace internal function calls
  • 512 = Show parameter values
  • 1024 = Show function return value
  • 2048 = Trace each statement in program (recommended)

The recommended option is 1984, which includes: “Trace start of programs” (64), “Trace external function calls” (128), “Trace internal function calls” (256), “Show parameter values” (512), and “Show function return value” (1024).  For me, the most important option is 2048, tracing each statement, because I want to see what PeopleCode is running.  The variable assignments (4), fetched values (8), parameter values (512), and function return values (1024) all show the data as it flows through the PeopleCode, so I find those helpful in troubleshooting.  For me, that adds up to 3596.
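And the same arithmetic check for the -TOOLSTRACEPC values (Python sketch):

```python
# -TOOLSTRACEPC bit values, as listed above.
recommended = 64 + 128 + 256 + 512 + 1024  # Ketan's setting
mine = 2048 + 1024 + 512 + 8 + 4           # each statement, return values,
                                           # parameters, fetches, assignments

print(recommended, mine)  # 1984 3596
```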


More Memory Leak Troubleshooting

Recently, I have been doing a lot of Memory troubleshooting.

Here are some windbg commands that helped:

Load the SOS module (for .NET 4.0):

.loadby sos clr

List all of the objects specific to my program:

!dumpheap -stat -type <parent namespace>

List all the instances of a particular object:

!dumpheap -type <namespace>.<object class name>

List references to the object:

!gcroot <address>

Break on garbage collection:

!findroots gen any

List references to the object (must break in garbage collection):

!findroots <address>

Show sizes in memory:

!eeheap

List large strings in memory:

!dumpheap -type System.String -min 10000

Then, I also found that you can use the SOS module from within Visual Studio.  First, in the project settings, I had to go to the Debug tab and check “Enable unmanaged code debugging” under “Enable Debuggers”.  Then, in the Immediate window, I could run the following command:

.load sos

I thought my problem seemed to be related to the Data Grid view, just like this forum post.  The post lists a code fix, but my problem was finding out where to place the code.  This blog mentions putting the code in the Form Close event, but my problem occurs during the execution of the program.

Actually, it ended up being that I was loading a hidden column into the grid that was taking a lot of memory.  I had a SELECT * SQL statement, and I had to change it to list only the fields that I wanted in memory.
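The general effect is easy to demonstrate with any database driver.  A Python/sqlite3 sketch (toy table; the original code was VB.Net with a data grid, so this only illustrates the column-list idea):

```python
import sqlite3

# A table with a wide column that the grid never needs to display.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT, big_blob TEXT)")
conn.executemany("INSERT INTO t VALUES (?,?,?)",
                 [(i, f"row{i}", "x" * 100_000) for i in range(50)])

all_cols = conn.execute("SELECT * FROM t").fetchall()        # hidden column too
needed   = conn.execute("SELECT id, name FROM t").fetchall()  # just what we show

size_all = sum(len(str(v)) for row in all_cols for v in row)
size_needed = sum(len(str(v)) for row in needed for v in row)
print(size_all > 100 * size_needed)  # True: the wide column dominates memory
```

Naming only the columns you need keeps the unused wide column out of memory entirely.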

Resources

UserPreferenceChangedEventHandler Resources

Memory Leak Troubleshooting

This is a tad off topic, but I don’t want to lose all of these reference links for tracking down memory leaks in a VB.Net program.

Here is the article that helped out a lot.
Rico Mariani’s Performance Tidbits: Tracking down managed memory leaks (how to find a GC leak)

As far as I can tell, you download the Vadump program from here:
Windows 2000 Resource Kit Tool: Vadump.exe

Since I have Visual Studio 2010 installed, I found it here:
<Visual Studio Home>\Common7\Tools\Bin\winnt

I didn’t find WinDbg on my system, so I had to download it with the Windows SDK from here:
Download and Install Debugging Tools for Windows

The CLR Profiler is another tool that I tried:
CLR Profiler

I haven’t finished, so maybe I will have to write a follow-up article.