
Wednesday, 30 March 2011

Is your organisation ready for 360? Think again...

There is no doubt that a well-implemented 360-degree review and feedback process can make a significant and cost-effective contribution to your organisation. But is now the time? Here we share what we consider the main considerations for maximising the effectiveness of 360 feedback programmes. If you come up short on any of these, do think carefully about the timing.

1. Ask yourself – is it still a step too far?

360 should be approached as an evolutionary way to capture feedback. If giving and receiving feedback is not already a fundamental and accepted part of the culture, a 360 programme may encounter significant obstacles. It may be that formalised review processes and one-to-one feedback need to be introduced first, to lay the foundations for full 360 and to realise the value it can add to an open, honest culture with a genuine desire to improve performance. Perhaps a pilot in a certain part of the business (usually the top) might be a better starting point?

2. Can I create a ‘What’s in it for them’?

When positioning the 360 with the end-user it is imperative that a clear purpose is defined. Is the overall outcome designed to support Management Development, Coaching, Career Development or Performance Management? Are you introducing new competencies, ways of working or bonus schemes? It may be some or all of these. Exploring this with the users helps to sell the “what’s in it for me”, gains buy-in and provides clarity on how the organisation will use the results.

3. Can I deal with ‘emotional’ objections?

Explain how the 360 will be administered, who will ensure it happens, who will collate the results and how, when individuals will receive the feedback and from whom. Ensure that the process is transparent and that everyone can see what the desired outcomes are. It is useful to show at this stage that the 360 process will be revisited, allowing individuals to see how they have improved based on feedback captured over time.

4. Can I create a ‘What’s in it for the Management population’?

Is there an overarching strategy or goal that the organisation is working towards? 360 can be extremely effective when clear links can be seen between the outcomes and the future vision of an organisation. Are there values or a core mission statement that the behaviours link to?

5. Can I enlist their support as opposed to just agreement? 

Identify the key stakeholders to act as “Champions”, supporting the pilot of the process and promoting its worth and usefulness as a management tool. These may be a Senior Management Group or well-respected members of specific business areas. This group would then define and promote the organisational need for the 360, for example to identify current skills against those required for future growth and to develop training plans accordingly.

6. Are the questionnaires fit for purpose and considered relevant? 

Where possible, ensure that the questions reflect the desired competencies. If the organisation does not use competencies, ensure that the language used is common across the organisation, appropriate to the respondents and able to elicit the desired responses clearly. It is useful to discuss the design of the questions and the format with a pilot group drawn from differing levels, to check that your format will deliver what is required and provide one clear, consistent message of its worth. Do see our other resources on this subject.

7. Are these outputs aligned with other core Talent Management processes? 

Wherever possible, align the feedback to the Personal Development and Career Planning process within the organisation (you do have those, don't you?!). Formalising action plans based on the feedback and reviewing them quarterly shows commitment to the users. It also ensures the feedback is revisited and discussed regularly, keeping the process alive and helping to embed it into the organisation's culture. Individuals should be given the choice of discussing their action plans with their manager or with a mentor. Sharing by choice in this way can help to naturally encourage a feedback-rich team that adopts the process into everyday operations.

8. Are those tasked with delivering and receiving the feedback ready, willing and able?

Be specific about when and how the feedback will be delivered. Ensure that individuals are briefed on the stages of feedback - shock, anger, rejection and acceptance. This helps them to mentally prepare for the sessions and understand that their emotions are natural and expected. In our experience, individuals move more quickly to acceptance (and therefore action) when they understand the stages and the reasons for their feelings, which helps the feedback to be digested and understood more fully. Ensure each individual understands that, as a willing participant in a 360 feedback process, they own the feedback: only they can act upon it and use it to gain deeper self-insight. Before anyone agrees to be involved, make sure the ground rules are laid out in advance and that choosing to decline is 'ok'. Also explain that what they receive is in no way altered or edited - it is the views of their chosen respondents, exactly as provided on the forms.

Saturday, 8 January 2011

How fine should the fine print in competency frameworks be?

Much of the work we do with clients involves, at some level, job analysis or competency frameworks. Working with large numbers of these you begin to see patterns and consistent themes, but you also see great variation in the depth, breadth and structure of competency models. When developing a framework to underpin your talent management efforts, it can be fiendishly difficult to strike a balance between something that applies to most roles and something so generalised that it ceases to be useful. The potential application, and therefore the level of detail and specificity required in a framework, is something well worth debating before you embark on a competency development project.

Research by James Meachin and Stephan Lucks (recently reported in the BPS’s Assessment and Development Matters, Vol. 2 No. 3, 2010) explored the optimal level of ‘granularity’ for competency frameworks when used as predictor measures and assessment criteria. Research into the effectiveness of various personality constructs in predicting job performance suggests that some of the broad measures, such as the Big Five, have limited predictive validity, but that this might improve when job performance is correlated with some of the finer-grain sub-traits, such as ‘dependability’. This would suggest that better predictions of job performance are made by fine-grain, or more specific, behavioural criteria.

Based on the literature, Meachin and Lucks hypothesised that assessment centre ratings based on a fine-grain competency framework would correlate more strongly with conceptually matched job performance measures (line manager ratings). In other words, they would result in a more accurate prediction of high performance on the job. Interestingly, what they actually found is that the predictor measures showed stronger correlations with line manager performance ratings as they became broader, not narrower. Aggregating the competency scores into a general, overall measure of performance seemed to be a more reliable way of predicting high-performing individuals than focusing on their performance in specific competency areas.
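If you want to check whether the same pattern holds in your own assessment data, a minimal sketch is given below. It uses Python with pandas and purely hypothetical, randomly generated ratings (the competency names, sample size and effect are illustrative only), and simply compares the correlation of each individual competency with a line-manager rating against the correlation of the aggregated overall score:

```python
import numpy as np
import pandas as pd

# Hypothetical data: assessment-centre ratings on six competencies plus a
# line-manager performance rating for each of 200 people.
rng = np.random.default_rng(42)
n = 200
competencies = [f"competency_{i}" for i in range(1, 7)]
ratings = pd.DataFrame(
    rng.normal(loc=3.0, scale=0.7, size=(n, len(competencies))),
    columns=competencies,
)
# A noisy manager rating loosely related to overall competency level.
manager_rating = ratings.mean(axis=1) + rng.normal(scale=0.5, size=n)

# Fine-grain view: correlate each competency with the manager rating.
fine_grain = ratings.corrwith(manager_rating)

# Coarse-grain view: correlate the aggregated (mean) score with the manager rating.
coarse_grain = ratings.mean(axis=1).corr(manager_rating)

print(fine_grain.round(2))
print(f"Aggregated overall score: r = {coarse_grain:.2f}")
```

With real appraisal or assessment-centre data the comparison is the same: substitute your own competency columns and performance criterion, and see whether the aggregate or the individual competencies track the criterion more closely.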

For practitioners, this is useful information. In order to create robust assessment processes which differentiate between higher- and lower-performing candidates, a job analysis or competency framework has to provide depth and a level of detail that makes explicit the behaviours and competencies which are important to success or which demonstrate effectiveness. Undoubtedly, in the arena of assessment for development purposes, the value is in the detail – in helping people understand the specific aspects of their performance or behaviour which make them more or less effective. But in recruitment, by being overly reliant on the detail and by homing in on one or two areas deemed to be crucial to the job, we could be missing the bigger picture.

So, perhaps the optimal situation is to have a detailed, granular competency framework which sets out the specific behavioural indicators across a number of competencies (no fewer than six, and no more than twelve?). Having collected assessment data against your framework (through performance appraisal, assessment processes or 360 degree feedback), you can then perform a factor analysis on the competency scores to determine whether there are any higher-order (or coarse-grained) factors underlying them; this may yield a general, overall performance construct or perhaps two or three clusters of competencies. Aggregating competency scores in line with these underlying factors, and making decisions based on the broader measures, is likely to improve the reliability of the framework and ensure that you are not letting potentially good candidates slip through the net.
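As a rough illustration of that factor-analysis step, here is a minimal Python sketch. It assumes your competency scores sit in a pandas DataFrame called `ratings`, with one row per person and one column per competency; the function name, the two-factor default and the "assign each competency to its strongest factor" rule are our own illustrative choices, not a prescribed method:

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis

def aggregate_by_factors(ratings: pd.DataFrame, n_factors: int = 2) -> pd.DataFrame:
    """Fit a simple factor model to competency scores, group competencies by the
    factor they load on most strongly, and aggregate each person's scores within
    those clusters."""
    fa = FactorAnalysis(n_components=n_factors, random_state=0)
    fa.fit(ratings)

    # Loadings: one row per competency, one column per factor.
    loadings = pd.DataFrame(
        fa.components_.T,
        index=ratings.columns,
        columns=[f"factor_{k + 1}" for k in range(n_factors)],
    )
    # Assign each competency to the factor it loads on most strongly (by magnitude).
    cluster = loadings.abs().idxmax(axis=1)

    # Aggregate: mean of the competencies belonging to each cluster, per person.
    return pd.DataFrame({
        factor: ratings[cluster[cluster == factor].index].mean(axis=1)
        for factor in sorted(cluster.unique())
    })
```

In practice the number of factors is itself a judgement call: inspect the loadings (and how much variance each factor accounts for) before settling on a structure, and sanity-check that the resulting clusters make sense against the framework's own competency groupings before basing any selection decisions on them.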

Wednesday, 12 August 2009

Assessing Values as part of the Appraisal Process

Having just seen a post from a 360 service provider suggesting that there should be a separate section on the assessment of Values in an appraisal, I felt compelled to offer our thoughts on this.

Frankly, we don't agree with the idea of a separate section that prompts a manager and their employee to talk about 'values' so explicitly - how crass! It prompts rather theoretical discussions about personal values; people find this overly invasive and fear being exposed, which is hardly conducive to a productive appraisal meeting and could easily derail the process completely. Multiply that by the number of people who 'go through this process' and it is a potential disaster.

We do, however, think that values, and more explicitly how values manifest themselves in the context of the work someone performs, are a legitimate area of assessment in an appraisal or 360 review. The skill is in developing the right questions, or combination of questions, that tease out how these values play out in the workplace and the impact they have on the individual's colleagues, and this is the approach we always take.

I suspect that a 'simple values section' hints at the right direction, but it is the wrong implementation and a shortcut that avoids doing the job properly.

If your provider simply says, 'we'll add a values section at the end', ask them why the values aren't entwined into the rest of the assessment or questionnaire.

After all, that's how values appear and are observed in the work place.

If your service provider 'doesn't get it' or can't or won't do this, then our genuine advice is to walk away before any damage is done.