Best Practices in QA and Migrations
Today’s post was suggested by @lenotooleBI via Twitter. Whenever you mention “Best Practices”, the first thing that usually comes to mind is Object Management and Quality Assurance. Any software development workflow is going to involve the same basic concepts of Dev / QA / Production, sometimes in the form of multiple projects or even multiple servers. As always, “Best Practices” are ideas and recommendations, by no means the law. Use what works best for your environment and what strikes a balance between staying out of your way and providing the protections you need. What you need will usually depend on the size of your project and team. I’ve worked on teams of all sizes, so here are some of my experiences and opinions.
Why Separate Environments?
Production is the project that your end users are going to use. This is the live project where the real data resides and business decisions are made. Common sense says that you shouldn’t be developing directly in this project, but aside from the obvious “what if it doesn’t work?”, there’s also the risk that while you’re midway through changing objects, existing reports may return incorrect results because those objects are only half changed. At the very minimum, you’ll want separate Dev and Production projects for this reason alone. This allows you to implement and test your changes in Dev and then promote them in total to your Production project once approved. MicroStrategy Object Manager is the tool used to perform these migrations.
How Object Manager Works
MicroStrategy Object Manager determines which objects to move based on matching the Version ID on each object. In order for this to work, you have to have the discipline not to change objects directly in Production, no matter how small the change. Setting up security to prevent Developers from doing this is easy enough, but Administrators need to be extra careful. Even if you apply the exact same change in both environments directly, they’ll have different Version IDs. The next time you go to do a migration, it will want to overwrite this object. Though the change won’t technically matter (since the content will be the same), it can still confuse you or even mask other migration problems you’d otherwise have noticed.
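To make the “same change, different Version ID” pitfall concrete, here’s a minimal sketch of that matching logic. This is not Object Manager’s actual code; the object names, version strings, and `needs_migration` helper are all invented for illustration:

```python
# Hypothetical object catalogs: object name -> (version_id, definition).
# The names and version IDs below are made up for illustration only.
dev  = {"Metric: Revenue": ("v-1f3a", "SUM(revenue)"),
        "Filter: 2023":    ("v-88c2", "Year = 2023")}
prod = {"Metric: Revenue": ("v-1f3a", "SUM(revenue)"),
        "Filter: 2023":    ("v-90d1", "Year = 2023")}  # same fix applied by hand

def needs_migration(dev, prod):
    """Flag every object whose Version ID differs -- the way Object
    Manager decides -- even when the definitions are identical."""
    return [name for name, (ver, _) in dev.items()
            if name not in prod or prod[name][0] != ver]

print(needs_migration(dev, prod))  # ['Filter: 2023'] despite identical content
```

Note that the filter is flagged for overwrite purely because the Version IDs diverged, which is exactly the noise that a hand-applied hotfix in Production creates.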
When you drag a single Report object across Projects in Object Manager, the tool searches all of its dependents, and those objects’ dependents, and so on, so that it grabs all of the necessary new objects and changes for the things that support your Report. A keen eye will notice that while Public Objects always seem pretty straightforward in terms of what’s getting moved, you commonly see seemingly unrelated Schema Objects in every migration. If you query the DSSMDOBJDEPN table in your Metadata, you’ll find a simple (and incredibly useful) table that shows which objects depend on each other. Each row equates to a Parent Object and a Child Object, for example, a Metric (Parent) and its Fact (Child). What’s interesting is when you look at Attributes and Tables. An Attribute is a parent of a Table, and a Table is a parent of that same Attribute. Since Object Manager’s selection method continues searching through every dependency, this allows Tables to “climb” out to other Attributes, and their Tables, and so on. I call this “Schema Spidering”, and it’s why you pretty much get the entire schema with every migration. Sometimes that can mean there are schema changes unrelated to your report that you don’t want to move. Unfortunately, since Object Manager determines that these are necessary components, excluding them isn’t an option. In that case, you’ll need to identify them and manually remove them from Production post-migration.
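The spidering above is just a graph walk over those parent/child rows. Here’s a toy sketch, assuming a handful of invented rows in the style of DSSMDOBJDEPN (the object names and the `migration_set` helper are mine, not MicroStrategy’s), that shows how the Attribute↔Table back-links drag in schema the report never touches:

```python
from collections import deque

# Hypothetical rows in the style of DSSMDOBJDEPN:
# each tuple is (parent, child) -- the parent depends on the child.
DEPENDENCIES = [
    ("Report: Revenue", "Metric: Revenue"),
    ("Metric: Revenue", "Fact: Revenue"),
    ("Fact: Revenue", "Table: FACT_SALES"),
    ("Table: FACT_SALES", "Attribute: Day"),       # table -> attribute...
    ("Attribute: Day", "Table: FACT_SALES"),       # ...and attribute -> table
    ("Attribute: Day", "Table: FACT_RETURNS"),     # Day also lives in another table,
    ("Table: FACT_RETURNS", "Attribute: Reason"),  # which drags in a new attribute
    ("Attribute: Reason", "Table: FACT_RETURNS"),
]

def migration_set(root):
    """Walk the dependency graph the way Object Manager does: keep
    following children of children until nothing new turns up."""
    children = {}
    for parent, child in DEPENDENCIES:
        children.setdefault(parent, []).append(child)
    seen, queue = {root}, deque([root])
    while queue:
        for child in children.get(queue.popleft(), []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

# FACT_RETURNS and its Reason attribute ride along even though the
# Revenue report never references them -- that's the schema spidering.
print(sorted(migration_set("Report: Revenue")))
```

Because Day is a parent of both tables, the walk “climbs” from FACT_SALES through Day into FACT_RETURNS and on to Reason, which is exactly why a one-report migration tends to carry the whole schema.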
To aid in this process, I gave a presentation at MicroStrategy World 2010 entitled, Improving Object Migration Management with Ease. It outlined a simple utility that we built to make managing large migrations very easy for both the Developer and the Administrator. I encourage you to check out that presentation. The slides are verbose and all of the SQL queries you need are provided at the end. It was very successful in that implementation and I’ve received lots of positive feedback from different organizations that have used this process.
QAing reports can actually be tricky in large or complex environments. Usually, your development Data Warehouse doesn’t contain all of the data that Production does (or at least not data as recent), so it can be difficult to QA against a test Data Warehouse. However, there may be table or ETL changes in that test environment that your new report needs in order to run. In general, I like to point my Dev projects directly to the Production warehouse so that we can test reports with real data, but it can become a nuisance if you have to keep flipping between a Dev and a Prod warehouse. Here are 2 tips to make that easier:
- Instead of changing the DSN of the project (which requires a project reload or IServer restart), you can edit the DSN and change the Connection it’s using. This will apply immediately with no restart needed, but it applies for the entire project. If multiple developers are working on different things, this could still cause some issues.
- You can use User Connection Mapping to point individual users to separate Data Warehouses. Using this method, User A can log in and have their reports hit the Production Warehouse, while User B can log in and have their reports hit the Development Warehouse. For more information on configuring this feature, see this technote.