Envisn's IBM Cognos Blog

What’s Wrong with IBM Cognos Audit Data

Written by Rick Ryan | February 28, 2014

By Paul Hausser, Envisn, Inc.
The title of this article has to be taken in the context of how Cognos administrators would like to use Cognos audit data. Many would like to be able to use the activity data in multiple ways to better manage their environment and know what’s of value to their users as measured by actual usage. So our focus here is on understanding what’s available out of the box and as compared to the ultimate vision of what may be possible.


Purpose and Structure

While one may never know all the reasons for the creation of these tables and their inclusion in the product, it appears that the primary reasons were debugging and ongoing customer-support issues. The 21 Cognos audit tables are:

  1. COGIPF_ACTION
  2. COGIPF_AGENTBUILD
  3. COGIPF_AGENTRUN
  4. COGIPF_ANNOTATIONSERVICE
  5. COGIPF_HUMANTASKSERVICE
  6. COGIPF_HUMANTASKSERVICE_DETAIL
  7. COGIPF_MIGRATION
  8. COGIPF_NATIVEQUERY
  9. COGIPF_PARAMETER
  10. COGIPF_POWERPLAY_DIM_USAGE
  11. COGIPF_POWERPLAY_LEVEL_USAGE
  12. COGIPF_POWERPLAY_MEASURE_USAGE
  13. COGIPF_RUNJOBSTEP
  14. COGIPF_SYSPROPS
  15. COGIPF_THRESHOLD_VIOLATIONS
  16. COGIPF_USERLOGON
  17. COGIPF_RUNREPORT
  18. COGIPF_RUNJOB
  19. COGIPF_POWERPLAY
  20. COGIPF_EDITQUERY
  21. COGIPF_VIEWREPORT

The focus of these 21 Cognos audit tables falls into the following categories:

  • Tables 1 through 9 are concerned with troubleshooting, administration and development.
  • Tables 10 through 12 provide detail on the use of dimensions, levels and measures within PowerPlay.
  • Tables 13 through 15 deal with ongoing monitoring of job steps, system properties and threshold violations.
  • Table 16 keeps track of user logon data.
  • Tables 17 through 21 are focused on activity data for different classes of objects within Cognos. These five tables along with table 16 are the primary ones most Cognos administrators will be concerned with since they relate to tracking and monitoring usage within their environment.

Audit Tables Need to be Decoded

The raw event data from these primary tables is not very useful as is. The reason is that there are a number of things you need to know about these tables, and they don't come with any instructions.

If accuracy, precision and broadly dimensioned data are your ultimate goal then you’re taking on a bit of work, but the payoff from this can be huge. The major steps here include the following:

  1. Determine how the tables relate to each other. While some of this can be seen at a basic level, the rest involves decoding the tables and the data in them.
  2. You need to perform activity, both interactive and batch, in Cognos Connection and the various studios that will be logged into the tables. This activity should cover all possible permutations that can occur.
  3. Then determine how all of this activity gets logged into the audit tables.
  4. You have to interpret the log data and determine what it means and how it can be used.
  5. Create a rule-based, automated system for reading the logs, dealing with data anomalies, normalizing the data for a star schema and storing it in newly defined tables.
  6. Identify the reference data that needs to be used with the log data to maximize its value. Performing the steps above will give you insight into what reference data will be useful. See Figure 1 for how this all comes together into a Framework Manager Model.
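The normalization in step 5 can be sketched in miniature. The table and column names below are hypothetical simplifications of the real COGIPF_* columns (which vary by Cognos version), and SQLite stands in for whatever database holds the audit data; the point is only the shape of the transform, raw rows into dimension and fact tables.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Raw audit rows as they might arrive from a table like COGIPF_RUNREPORT
# (simplified, illustrative columns).
cur.execute("CREATE TABLE raw_runreport (user_name TEXT, report_path TEXT, run_ms INTEGER)")
cur.executemany(
    "INSERT INTO raw_runreport VALUES (?, ?, ?)",
    [
        ("alice", "/content/folder[@name='Sales']/report[@name='Q1']", 1200),
        ("alice", "/content/folder[@name='Sales']/report[@name='Q1']", 900),
        ("bob",   "/content/folder[@name='HR']/report[@name='Headcount']", 450),
    ],
)

# Star schema: one dimension per decoded entity, plus a fact table of runs.
cur.execute("CREATE TABLE dim_user (user_id INTEGER PRIMARY KEY, user_name TEXT UNIQUE)")
cur.execute("CREATE TABLE dim_report (report_id INTEGER PRIMARY KEY, report_path TEXT UNIQUE)")
cur.execute("CREATE TABLE fact_run (user_id INTEGER, report_id INTEGER, run_ms INTEGER)")

for user_name, report_path, run_ms in cur.execute(
    "SELECT user_name, report_path, run_ms FROM raw_runreport"
).fetchall():
    cur.execute("INSERT OR IGNORE INTO dim_user (user_name) VALUES (?)", (user_name,))
    cur.execute("INSERT OR IGNORE INTO dim_report (report_path) VALUES (?)", (report_path,))
    user_id = cur.execute(
        "SELECT user_id FROM dim_user WHERE user_name = ?", (user_name,)).fetchone()[0]
    report_id = cur.execute(
        "SELECT report_id FROM dim_report WHERE report_path = ?", (report_path,)).fetchone()[0]
    cur.execute("INSERT INTO fact_run VALUES (?, ?, ?)", (user_id, report_id, run_ms))

# The fact table now joins back to the dimensions for usage reporting.
rows = cur.execute(
    "SELECT u.user_name, COUNT(*), AVG(f.run_ms) "
    "FROM fact_run f JOIN dim_user u ON u.user_id = f.user_id "
    "GROUP BY u.user_name ORDER BY u.user_name"
).fetchall()
print(rows)  # [('alice', 2, 1050.0), ('bob', 1, 450.0)]
```

In the real system this loop would also apply the anomaly-handling rules from step 5 before any row reaches the fact table.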

Really?

If all of this seems like a lot of work it’s because it is. But once you’ve done it successfully there are no limits to how the audit data can be used.

An obvious question to ask is, “Why not just use the audit data ‘as is’ along with the sample reports that come with Cognos 10?” You can do this, but at some point you may find that the data does not hang together. Some of the issues with the raw audit data and the sample reports include:

Data anomalies abound in the audit tables. Some records need to be trimmed to get usable data, and in other cases what's actually provided is not what the table column says it is, e.g., PACKAGEPATH is sometimes actually the model path. You can't build a useful reporting system on usage when key elements can have different meanings.
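Handling these anomalies means defensive cleaning rules like the one sketched below. The rule for spotting a model path inside PACKAGEPATH is hypothetical; the real rules have to be worked out empirically from your own audit data.

```python
def clean_package_path(raw):
    """Trim padding and flag values that look like a model path instead.

    Illustrative only: the '/model[' heuristic is an assumed rule, not the
    actual Cognos behavior, which must be determined from real log data.
    """
    value = raw.strip()
    # A model search path includes a model segment after the package segment;
    # treat that as "not actually a package path".
    is_model_path = "/model[" in value
    return value, is_model_path

value, flagged = clean_package_path("  /content/package[@name='GO Sales']  ")
print(value, flagged)  # /content/package[@name='GO Sales'] False
```

A rule-based loader would apply a battery of such checks to every record before it is accepted into the star schema.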

Narrow dimensionality – The standard dimensions provided limit the sample reports from Cognos to some very basic views of the data. By decoding the audit data and combining different data items it's possible to create new dimensions, for example user, package and content hierarchies.
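One way such hierarchy dimensions can be derived is by splitting a Cognos search path into its named levels. The path format below is a simplified illustration of a Content Store search path:

```python
import re

def path_levels(search_path):
    """Return the @name value of each path segment, outermost first."""
    return re.findall(r"@name='([^']*)'", search_path)

levels = path_levels(
    "/content/folder[@name='Sales']/folder[@name='EMEA']/report[@name='Q1 Revenue']"
)
print(levels)  # ['Sales', 'EMEA', 'Q1 Revenue']
```

Each position in the resulting list becomes a level in a content-hierarchy dimension, which is what makes drill-down usage views possible.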

Depth – The sample reports are essentially list reports, which limits the ability to add any depth to the analyses you may want to do. For example, it's difficult to create crosstab reports with the basic audit data.

Reference data is in many ways the secret sauce here. The right reference data can unlock multiple creative ways to use the Cognos audit tables, because it provides context for understanding the audit data. An example is reference data on inactive or deleted objects. The audit data persists in the audit tables after an object has been deleted from the Content Store; having a list of inactive objects enables you to filter these objects out of your usage analysis and narrow your focus.
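The inactive-object filter amounts to a simple set lookup once the reference data exists. The paths and counts below are hypothetical sample data:

```python
# Reference data: objects known to have been deleted from the Content Store.
inactive_objects = {
    "/content/folder[@name='Old']/report[@name='Legacy KPI']",
}

# Audit-derived usage rows (illustrative).
audit_rows = [
    {"report_path": "/content/folder[@name='Sales']/report[@name='Q1']", "runs": 12},
    {"report_path": "/content/folder[@name='Old']/report[@name='Legacy KPI']", "runs": 3},
]

# Exclude stale rows before any usage analysis.
active_rows = [r for r in audit_rows if r["report_path"] not in inactive_objects]
print(len(active_rows))  # 1
```

The hard part is not the filter itself but building and maintaining the reference set, which is exactly why the decoding work described earlier pays off.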

The Art of the Possible

We discussed above the need to understand how data gets logged around certain activities and what data gets logged each time. This may seem like a total waste of time, but it's the only way to build the rule-based system that gives you the precision you need. It also makes it possible to create additional dimensions and measures for such things as average run times, average session time, view count, etc. The examples in Figures 2 – 4 below show how some of these can be used.
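A derived measure like average session time is a good example of what this decoding buys you. The sketch below pairs logon and logoff events by session id, roughly the kind of pairing one would do against COGIPF_USERLOGON; the event layout and timestamps are invented for illustration:

```python
from datetime import datetime

# Illustrative logon/logoff events: (session_id, operation, local timestamp).
events = [
    ("s1", "logon",  "2014-02-28 09:00:00"),
    ("s1", "logoff", "2014-02-28 09:45:00"),
    ("s2", "logon",  "2014-02-28 10:00:00"),
    ("s2", "logoff", "2014-02-28 10:30:00"),
]

starts, durations = {}, []
for session_id, op, ts in events:
    t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
    if op == "logon":
        starts[session_id] = t
    elif op == "logoff" and session_id in starts:
        # Pair each logoff with its session's logon; unmatched events are skipped.
        durations.append((t - starts.pop(session_id)).total_seconds() / 60)

avg_minutes = sum(durations) / len(durations)
print(avg_minutes)  # 37.5
```

In practice sessions that never log off (timeouts, crashes) are one of the anomalies the rule-based system has to decide how to treat.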

In our next blog we will focus on the Cognos Audit Extensions and their use.

© 2014 Envisn, Inc. – All rights reserved - Cognos Tools