Tuesday, March 01, 2005

SMAPP

The System Managed Access Path Protection (SMAPP) function provides access-path recovery capability for users who want it but don’t want to manage the recovery setup themselves. You can tell SMAPP your database recovery time objective (or accept the SMAPP default value shown on the EDTRCYAP (Edit Recovery for Access Paths) command display), and SMAPP will ensure that the appropriate database access paths are journaled to meet that objective.
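You can also set the objective from the command line instead of the EDTRCYAP display. This is only a sketch, assuming the CHGRCYAP (Change Recovery for Access Paths) command and its SYSRCYTIME parameter, which takes the target in minutes; the 90-minute value is purely illustrative:

    CHGRCYAP SYSRCYTIME(90)  /* Target roughly 90 minutes of access-path rebuild time after an abnormal IPL */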
When you turn SMAPP on, it periodically estimates the database access-path recovery time for the running applications. If that estimate exceeds your recovery objective, SMAPP starts journaling additional access paths beyond those it is already protecting. If you collect Performance Monitor or Collection Services data, you can see the estimated recovery times and the number of system-managed (journaled) access paths in the journaling section of the Performance Tools Component Report.
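If you prefer the printed report, something like the following could produce it. This assumes the Performance Tools PRTCPTRPT (Print Component Report) command with its MBR and LIB parameters; PERFDATA is a placeholder for your collection member, and QPFRDATA is the usual performance data library:

    PRTCPTRPT MBR(PERFDATA) LIB(QPFRDATA)  /* Component Report; the journaling figures appear in its journaling section */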
Where Do Implicit Journal Records Go?
Two types of application-throughput slowdown can occur when you use SMAPP. The first occurs when access paths are journaled into the system ASP (which may be RAID protected) rather than into a separate, mirrored ASP. This happens when the access path’s underlying physical file is not being journaled: by default, SMAPP puts its journal receiver into the system ASP. If the underlying physical file is being explicitly journaled, however, SMAPP’s access-path journal records go to the same journal receiver that the physical-file journaling uses.
Note that you control whether the physical file is journaled and where its journal receiver resides. If you want SMAPP to turn access-path journaling on and off automatically, and you also want to optimize access-path journal write performance, consider journaling the physical file to a journal receiver located in a separate, mirrored ASP. The other option is to take control from SMAPP and explicitly journal the largest (and most costly to rebuild) access paths to the journal receiver in the mirrored journal ASP.
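As a rough sketch of both options: the commands below assume a mirrored user ASP number 2, a library APPLIB, a physical file CUSTMAST, and one of its larger keyed logical files CUSTIDX. All of those names and numbers are placeholders for your own environment, not anything SMAPP requires:

    CRTJRNRCV JRNRCV(APPLIB/RCV0001) ASP(2)              /* Receiver lives in the mirrored journal ASP        */
    CRTJRN    JRN(APPLIB/APPJRN) JRNRCV(APPLIB/RCV0001)  /* Journal that SMAPP's entries will piggyback on    */
    STRJRNPF  FILE(APPLIB/CUSTMAST) JRN(APPLIB/APPJRN) IMAGES(*AFTER)  /* Journal the underlying physical file */

    /* Or take control from SMAPP and journal the costly access path explicitly: */
    STRJRNAP  FILE(APPLIB/CUSTIDX) JRN(APPLIB/APPJRN)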
Purging Database Pages
The other type of slowdown is related to the purging of large numbers of changed database physical and logical file records from main storage. System functions monitor the number of changed pages in memory, and the system starts writing them to disk when a certain threshold is reached. Some customer benchmark tests have shown batch throughput degradation as high as 60 to 70 percent when this happens.
Whether you encounter this situation depends on several factors. For example, if you run for a long time in a very large storage pool, you’re more likely to reach the threshold than if you run in a smaller, restricted pool. In the smaller pool you won’t accumulate as many changed pages, because the storage allocation functions keep "stealing" pages to satisfy memory requests for other data. Also, if the jobs don’t run for a long time but instead finish and close their files, the changed pages don’t linger in main storage; they’re written out when the file is closed.
When you’re using SMAPP, journaling the underlying physical file, and using a separate ASP for the journal receivers, you should also specify RCVSIZOPT(*RMVINTENT) on the CHGJRN command to optimize performance. This option basically puts a "fork in the road" and routes the onerous (unnecessary for database recovery) SMAPP journal entries to a separate set of disk drives (still in the separate journal ASP). The system journaling functions split the use of the journal disk drives and set aside a separate set of disk arms for the journal receivers that receive the SMAPP entries.
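Using the same placeholder journal name as in the sketch above, the change could look like this; JRNRCV(*GEN) simply attaches a newly generated receiver so the new receiver size option takes effect:

    CHGJRN JRN(APPLIB/APPJRN) JRNRCV(*GEN) RCVSIZOPT(*RMVINTENT)  /* Route removable internal (SMAPP) entries to their own disk arms */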
