Tuesday, March 01, 2005

Analyzing Your Application’s Performance

To improve an application’s performance, you must be able to measure it in a controlled environment before and after you make changes. It’s imperative that you have a repeatable runtime environment, a set of input data, and a backup of that data.
Before each measured run, you should restore the input data to disk and remove any residual data from main storage so the number of physical disk operations isn’t skewed by data cached from a previous run. To remove the database files from main storage, use the SETOBJACC (Set Object Access) command, specifying the file name and the *PURGE pool option.
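For example, to purge a hypothetical customer master file CUSTMAST in library APPLIB from main storage (the library and file names here are placeholders; check the SETOBJACC command help on your release for the exact parameters):

    SETOBJACC OBJ(APPLIB/CUSTMAST) OBJTYPE(*FILE) POOL(*PURGE)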
Two criteria establish batch job performance. One is the amount of completed work versus the elapsed time required to complete it; the other is the amount of completed work versus the amount of system resources required to complete it. With batch jobs that process database data, you determine the amount of work done by counting the number of logical disk I/O operations that occur during the job’s lifetime. Logical disk I/O operations are OS/400 Data Management’s way of counting database file accesses.
To collect the data you need to evaluate your application’s performance, you can use either the OS/400 Performance Monitor or (as of V4R4) Collection Services. For each job, you need
• the job name
• the user ID
• the job number
• the job’s elapsed time
• the CPU time required
• the number of physical disk I/O operations
• the number of logical disk I/O operations (if you know the number of primary input database records, e.g., the number of customer account records processed, use that value instead of the logical disk I/O count)
Additionally, you need the total CPU time used by all jobs and tasks in the system during the runtime of the job(s) in question, the number of collection intervals, and the duration of each collection interval. The system collects this information and stores it in the Performance Monitor’s QAPMJOBS file or Collection Services’ QAPMJOBL file in the performance data collection library.
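To roll the per-interval rows up into the per-job totals listed above, you can sum them by job identity. Here is a minimal Python sketch under the assumption that the rows have already been pulled out of QAPMJOBS or QAPMJOBL into plain tuples; the sample values and the tuple layout are illustrative, not the files’ actual column names or data:

    from collections import defaultdict

    # Illustrative per-interval rows; the real QAPMJOBS/QAPMJOBL column
    # names vary by release -- check the file description on your system.
    intervals = [
        # (job name, user ID, job number, CPU seconds, physical I/O, logical I/O)
        ("NIGHTRUN", "BATCHUSR", "123456", 310.0, 18000, 260000),
        ("NIGHTRUN", "BATCHUSR", "123456", 295.5, 17400, 251000),
    ]

    totals = defaultdict(lambda: [0.0, 0, 0])  # keyed by job identity
    for name, user, number, cpu, phys, logical in intervals:
        t = totals[(name, user, number)]
        t[0] += cpu      # CPU seconds consumed during the interval
        t[1] += phys     # physical disk I/O operations
        t[2] += logical  # logical disk I/O operations

    for (name, user, number), (cpu, phys, logical) in totals.items():
        print(f"{name}/{user}/{number}: CPU={cpu:.1f}s, "
              f"physical I/O={phys}, logical I/O={logical}")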
Finally, you also need to know your system’s CPU model and feature code and the number of CPUs in use. This data appears in the report header lines of the reports either tool produces.
These numbers can help you determine what level of performance to expect after you make changes. For simplicity, assume your batch job(s) will run standalone so the machine’s total CPU capacity is available to the application. If that isn’t the case, estimate what percentage of the total capacity other jobs use and adjust your application’s performance expectations downward accordingly; for example, if other work consumes 30 percent of the CPU, plan on roughly 70 percent of the standalone throughput.
If you know the number of application records processed in your test, use that value to establish your throughput and resource-cost figures. For example, the application may consist of a single job (or a string of individual jobs running one at a time) that processes 100,000 customer account (CA) records per hour. You might want to normalize the throughput and resource usage rates: normalized to one second, the processing rate is roughly 27.8 CA records per second (100,000/3,600). If the job used 1,000 seconds of CPU time to process the records, the CPU resource cost is 10 milliseconds per CA record (1,000/100,000).
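The same arithmetic in a few lines of Python, as a sanity check (the input figures are the example’s, not measured data):

    # Normalize throughput and CPU cost using the example figures.
    records_processed = 100000   # customer account (CA) records
    elapsed_seconds = 3600       # one hour of elapsed time
    cpu_seconds = 1000           # CPU time the job consumed

    throughput = records_processed / elapsed_seconds            # ~27.8 CA/second
    cpu_ms_per_record = cpu_seconds / records_processed * 1000  # 10.0 ms/record

    print(f"Throughput: {throughput:.1f} CA records per second")
    print(f"CPU cost:   {cpu_ms_per_record:.1f} ms per CA record")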
