Changing Your PS Database Platform: The Design Phase

Posted by: Brent Martin in PeopleTools

Tagged in: Replatform


In my previous article I described how you might approach planning and requirements gathering for a PeopleSoft database replatforming project.  You would exit that phase with solid RICE object inventories and an idea of any new PeopleTools functionality you will deploy.

I’d like to take you through some thoughts and considerations for the remaining phases (Design, Build, Test, Deploy).  This article will focus on the Design phase, and I'll publish articles on the other phases in the near future.  For this discussion, I’m going to assume we’re doing a PeopleTools upgrade along with the replatforming effort, and we’re going to keep existing functionality/features/customizations the same as the current state.  That’s usually a good idea, because the technology changes will be challenging enough.

The Design Phase

You might think a basic replatforming project doesn’t require a lot of design.  Once you have the object inventories from the planning/requirements gathering phase, you have enough information to start modifying the SQL to make it work on the new database platform.  The one thing I would suggest, though, is to bundle the raw object lists into logical chunks of work.  For example, if a component, page, and process work together to execute a business process, bundle them together so a developer can unit test all of them at the same time.  If you want to deploy new PeopleTools features, you’ll want to spend some time deciding which ones will be useful and how you will configure, test, and deploy them.

But there’s a bit more work to do in this phase.  First, you’ll want to identify any external systems that might need to be updated as well.  Any system that uses database links or has dependencies on your current PeopleTools environment (think Component Interface libraries) will need to be investigated to determine the impact and the appropriate action to take.
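
If your current platform is Oracle, one quick way to start that inventory is a data dictionary query against DBA_DB_LINKS, which lists the outbound database links defined in your PeopleSoft database.  (Links pointing the other way, from external systems into your database, will still have to come from interface inventories and conversations with those teams.)  A minimal sketch, assuming an Oracle source and DBA access:

    -- Outbound database links defined in the current (Oracle) PeopleSoft database.
    -- Each HOST value points at an external system that may be affected by the
    -- platform change.
    SELECT owner, db_link, username, host, created
    FROM   dba_db_links
    ORDER  BY owner, db_link;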

Another decision you’ll need to make involves reports and queries.  You probably have a LOT of public queries, and you may have a lot of reports.  nVisions in particular seem to multiply if you don’t have strong governance processes limiting users to specific standard reports.

So how do you deal with this situation?  It’s not always cost-effective to upgrade and test every one.  Here are a few suggestions for managing this problem:

1)   Ask your users to provide their business-critical reports and queries.  This becomes the list you “certify” will work correctly and perform well on the new platform.  You’ll spend whatever time is necessary during development and the testing phases to make very sure that these queries and reports are defect-free.

2)   Identify all of the reports and queries that have been run in the last couple of years via the process scheduler and query audit tables (see the example queries after this list).  These will be your priority 2 set.  Scan this set using automated techniques to identify problems, correct anything that falls out, and unit test everything that is modified.  Be sure a good percentage of these are tested in the later test phases, and give users an opportunity to test them during user acceptance testing.

3)   Other reports and queries won’t be touched.  Breaks here will be handled by your post go-live break-fix process.
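
For item 2, the process scheduler request table and (on recent PeopleTools releases) the query statistics table are the usual starting points.  A rough sketch, assuming an Oracle source; PSQRYSTATS and its columns vary by Tools release, so treat the names below as assumptions to verify:

    -- Processes and reports actually run in the last two years,
    -- from process scheduler history.
    SELECT DISTINCT prcstype, prcsname
    FROM   psprcsrqst
    WHERE  rundttm >= ADD_MONTHS(SYSDATE, -24);

    -- Queries executed recently, if query statistics are captured on your release.
    SELECT qryname, oprid, execcount, lastexecdttm
    FROM   psqrystats
    WHERE  lastexecdttm >= ADD_MONTHS(SYSDATE, -24)
    ORDER  BY execcount DESC;

Keep in mind that process scheduler purge settings may limit how far back the request history actually goes.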

The Design Phase is also when you should prepare your testing plan and your communication plan.

While this phase is progressing, your DBA team should execute an initial replatform from the database on the old platform to the database on the new platform.  For this exercise, we’ll just use Data Mover to extract every table on the source database platform to flat files, and to import them into the new database platform.  Once on the new DB platform you’ll need to manually adjust tables like PSDBOWNER, PSOPTIONS, etc.  Execute the PeopleTools upgrade (if necessary) and you’re done.  Don’t expect this to go quickly the first time around – allow yourself 2-4 weeks in the schedule.  And capture all of the steps in detail, because they will be the start of your cutover plan.  The environment this exercise produces will become your new development environment, so that you can start your build phase.
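
In its simplest form, the Data Mover piece is a pair of scripts like the sketch below.  File names and paths here are placeholders, and in practice a full-database export is usually run in bootstrap mode and split into several scripts so tables can be exported and imported in parallel:

    REM export_all.dms - run in Data Mover against the source database
    SET OUTPUT c:\replatform\psfull.dat;
    SET LOG    c:\replatform\psfull_export.log;
    EXPORT *;

    REM import_all.dms - run in Data Mover against the empty target database
    SET INPUT  c:\replatform\psfull.dat;
    SET LOG    c:\replatform\psfull_import.log;
    IMPORT *;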

Also during this phase you should make decisions about the technologies and tools you’ll use to make your job easier.  One is Oracle GoldenGate.  GoldenGate is a Change Data Capture tool that supports multiple database platforms.  It gives you some amazing capabilities around extracting table data from one platform in parallel, shipping the extracted files to a target file server, and importing them in parallel while the extract is still running.  One of my clients was able to replatform a 1.2TB database in about 8 hours using GoldenGate.

And Oracle GoldenGate lets you take this to the next level.  After the initial copy is in sync with your production database, GoldenGate can read the source database’s redo logs and keep the target database in sync in real time.  This feature enables you to achieve a “zero-downtime” cutover, or as close as it gets.  If you don’t want to commit to Oracle GoldenGate for the long term, Oracle does offer licenses for a specific period of time (say, 1 year).  I can provide pricing if you’re interested.

SwissSQL has some promising tools that could make the development phase easier.  I don’t have any specific experience with the tools they offer, but they’re worth a look.  You might be able to automate some of your replatform coding with these tools and your knowledge of the PeopleSoft metadata.
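
Whatever tool you use, the PeopleSoft metadata makes it easy to pull the SQL that needs scanning in the first place.  For example, a query along these lines against PSSQLTEXTDEFN (column meanings are worth verifying on your release) lists the SQL definitions that already carry a platform-specific variant, which are the first candidates for rework:

    -- SQL definitions that have a platform-specific variant (non-blank DBTYPE).
    -- These are the first candidates to review and convert for the new platform.
    SELECT DISTINCT sqlid, sqltype, dbtype
    FROM   pssqltextdefn
    WHERE  TRIM(dbtype) IS NOT NULL
    ORDER  BY sqlid;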

PeopleSoft Performance Monitor is a promising tool for a number of reasons:

1)   It tracks what code is executed.  With that data, you could create a report from your test database that shows which items in your RICE object list are actually executed (see the example query below).  That kind of feedback about your code coverage allows you to add or remove test cases as needed.

2)   It can show you the performance of SQL and PeopleCode.  Most replatforming projects will have a goal of improving performance (or at least not making it worse), and if you have baseline performance data from your current system you can show that you’re meeting that mark despite what users might say to the contrary.  And having empirical data can put a lot of concerns to rest for your executive sponsors.
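
As a rough sketch of the coverage idea, Performance Monitor history tables such as PSPMTRANSHIST can be summarized to show which transactions (and the component, page, or PeopleCode context they carry) were exercised during a test cycle.  The table and column names below are from memory and their meaning varies by transaction type, so verify them against your Tools release:

    -- Which monitored transactions were actually exercised, and how often.
    -- PM_CONTEXT_VALUE1 typically carries context such as the component or
    -- page name, depending on the transaction type.
    SELECT pm_trans_defn_id,
           pm_context_value1,
           COUNT(*)               AS executions,
           AVG(pm_trans_duration) AS avg_duration
    FROM   pspmtranshist
    GROUP  BY pm_trans_defn_id, pm_context_value1
    ORDER  BY executions DESC;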

The downside to PS Performance Monitor is that it adds a fair amount of overhead, and if your current platform is already strapped for resources it might not be practical to run it against your current production system.

Those are my thoughts at the moment about the Design phase.  I'll give you some insight into the Build phase in my next article.
