Measuring IT Service Management (ITSM) Processes

Many organisations, particularly those that provide technology outsourcing, have a compelling need to measure their IT Service Management (ITSM) processes to demonstrate conformance to best practice standards, whether to differentiate themselves from competitors or to comply with legislation and customers’ organisational policies.

A best practice is a technique, methodology or framework that has proven to reliably lead to desired results, usually formulated through a mixture of common sense, experience and research.

Best practices are generally accepted as superior to alternatives because of their potential to produce better results, or because they have become the standard way of doing things. They can be a means of complying with legal, ethical or quality requirements, as well as an alternative to mandatory or legislated standards.

When a “one size fits all” best practice is not appropriate for an organisation’s needs, it may be blended and tailored to fit the organisation’s requirements, culture and history.

Compliance with best practice can be based on self-assessment or benchmarking, a feature of several accredited standards such as ISO 9000 for quality management and ISO/IEC 20000 for Information Technology (IT) Service Management.

There are complementary best practice frameworks associated with IT Service Management, such as the widely adopted ITIL framework.

Benchmarking and self-assessment difficulties

Carrying out a best practice compliance self-assessment or benchmark has its challenges.

You could be forgiven for thinking that benchmarking well-documented best practice would be simple. However, experience has shown a variety of inconsistencies, indicating this is not always the case. Despite assessment scores indicating a high standard of compliance, anecdotal and business evidence has shown that best practice does not always achieve the expected and desired Return on Investment (ROI).

These challenges include:

  • Lip Service implementation – All necessary documentation and artefacts can be shown to be in place, ticking all the compliance boxes. However, the practice may be confined to a small subset of people and not exploited organisation-wide, with most staff unaware of its existence or of where to find the artefacts.
  • Expensive – Having subject matter experts conduct the benchmark can be an expensive undertaking, both in terms of the cost of the engagement and the time and resource costs for staff to attend workshops and interviews, plus the travel costs associated with a widely dispersed organisation. The cost of the assessment has, at times, proven counter-productive to funding the recommended improvements, with the outcomes ultimately relegated to “shelf-ware”.
  • Time delays – Capturing the data takes time, due to busy staff and scheduling complexities, as does analysis that produces outcomes worthy of the investment made in the assessment. Reported outcomes may need to meet strategic deadlines such as board meetings or budget cycles, and any delays and missed deadlines can defeat the objective of the assessment.
  • Consistency – Different subject matter experts have different methods of capturing the data, scoring it and analysing the outcomes. The “benchmark”, in effect, becomes a one-off, with no genuine relation to any follow-up benchmark unless conducted by the same consultant; even different consultants from the same consultancy can produce differing results. There is a high reliance on the knowledge, skills and experience of the consultant (see the scoring sketch after this list).
  • Cookie Cutting – Subject matter experts have been known to produce suspiciously similar outcomes in a follow-up benchmark, or even outcomes that closely match another organisation’s assessment results.
  • Subjectivity – Outcomes can be highly subjective and skewed, especially when rewards are offered to stakeholders for meeting process KPIs, or due to pressure from senior managers, leading to outcomes that reflect vested interests. Self-assessments are particularly prone to this, with captured data more easily “massaged” to produce the required outcomes rather than a reflection of “the truth”.
    Suspicions that a consultant’s use of proprietary methodologies may have skewed the outcomes to up-sell new technology, training or professional services can reduce confidence in the outcomes, even if unfounded.
  • Garbage-In, Garbage-Out – For self-assessments, there may be a lack of suitable local understanding of the best practice framework, or use may be made of “free”, dubiously obtained question sets. This can result in incorrect interpretation of the framework, flawed analysis of the scores, and misleading outcomes.
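One mitigation for the consistency and garbage-in, garbage-out problems is to fix the scoring method itself, so that the same answers always yield the same result, whoever conducts the assessment. The following is a minimal, hypothetical sketch in Python; the process names, question counts and 0–5 maturity scale are illustrative assumptions, not part of any particular framework or accredited standard.

```python
# Minimal, hypothetical maturity-scoring sketch.
# Assumptions: each ITSM process is assessed with answers on a
# 0-5 scale (0 = absent, 5 = optimised), and the overall score is
# a simple unweighted mean. Real question sets, weightings and
# scales vary by framework and assessor.
from statistics import mean

def score_process(answers: list[int]) -> float:
    """Average the 0-5 answers for one process."""
    if not all(0 <= a <= 5 for a in answers):
        raise ValueError("answers must be on the 0-5 maturity scale")
    return mean(answers)

def score_assessment(responses: dict[str, list[int]]) -> dict[str, float]:
    """Score every process, then add an unweighted overall figure."""
    scores = {process: score_process(answers)
              for process, answers in responses.items()}
    scores["overall"] = mean(scores.values())
    return scores

# Example run with illustrative processes and answers:
if __name__ == "__main__":
    responses = {
        "Incident Management": [4, 3, 4, 5],
        "Change Management":   [2, 3, 2, 2],
        "Problem Management":  [1, 2, 1, 1],
    }
    for process, score in score_assessment(responses).items():
        print(f"{process}: {score:.2f}")
```

Because the calculation is deterministic and can be version-controlled alongside the question set, a follow-up benchmark scored the same way is directly comparable with the original, regardless of who captures the answers.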

Without asking the appropriate questions of the appropriate people, you cannot expect outcomes with a trusted level of integrity, and managers will be reluctant to base important decisions on them.

Our Business Assessment Tools can help you easily validate your compliance. Click here to find out more.