“Best Practices” Refresher for HR and Leadership Assessments
If you’re reading this, chances are that you or your organization has used assessments at some point for employee selection or development. Chances are also that an assessment was used incorrectly in some way at a critical juncture. Too often, assessments are either OVER-emphasized or UNDER-emphasized by hiring managers, which further undermines standard operating procedures and the efficacy of the tools themselves.
HR pros can attest, however, that the winning formula for evaluating role fit is equal parts technical competency (the necessary skill set) and cultural compatibility (the necessary alignment with the company’s core values). Assessments can provide uniquely objective insights about an individual’s role fit, but only when the tools are used in a targeted way and balanced with other information collected as part of a comprehensive due diligence process.
This simple view has been borne out in our Executive Search practice, which boasts a 98% successful placement rate, and in our decades of academic work designing and validating psychometric assessments(A). This article therefore combines our research and expertise with the rigorous criteria(B) set by the Standards for Educational and Psychological Testing and the Uniform Guidelines on Employee Selection Procedures to offer a quick refresher on the three basic issues defining best practice for assessments – (1) what tool to choose, (2) when to administer it, and (3) how to use the results.
First, choose the right tool…
- Measure cognitive ability, not just personality: “Hire for attitude, train for skill” is a common mantra, but it is a misguided perspective on hiring. Many so-called attitudes, motivational factors or temperament factors are in fact learnable skills themselves. Further, studies show that General Mental Ability (GMA) is the strongest single predictor of job success. Think about it: cognitive and emotional intelligence underpin the ability and speed to learn new things, as well as the capacity to adapt to new circumstances and handle stress. Personality, on the other hand, has been shown by independent research to be an inconsistent predictor of professional performance. It seems counterintuitive, but personality factors per se do not drive behaviours in the workplace.
- Industry relevance: It doesn’t make sense to use a hammer when the task calls for a screwdriver. Likewise, it’s important to ensure that your assessment was designed to measure the competencies relevant to the industry for which you are hiring. Assessments are often overly general rather than specific to an industry or even a role. Some general, transferable skill sets do exist across industries and employment levels, but best practice is to consider role-specific competencies. The O*NET database is an invaluable resource for HR practitioners and hiring managers who want readily defined competency models for various roles across various industries. These can help you judge whether a given tool covers the right skill sets and attitudes needed to assess applicants or incumbents.
- Normative benchmarks: Which approach gives the most honest information about your strengths and weaknesses – looking into a proverbial mirror, or asking someone who has observed your performance over time for a candid appraisal? Many assessments are self-referential, i.e., they provide scores or feedback based solely on how people perceive themselves. This is not as meaningful as a normative assessment, whereby a person’s attitudes, knowledge and skill set are compared to independent benchmarks or a peer group. After all, people may believe themselves to be the smartest or most capable individuals in the room, but that belief is quickly tested when their actual performance is compared to known, bias-free standards. This is what proper normative tools do – they show where someone ranks on a scientific, objective scale (see the sketch following this list).
This is a critical complement to other aspects of due diligence, which tend to be subjective in nature and amount to individuals relying on subtle cues or “gut instincts.” For example, researchers have noted interviewers’ strong tendency to make decisions based on superficial observations. One simulation found that interviewers rated applicants more highly if they showed greater amounts of eye contact, head movement, smiling and other non-verbal behaviour. Such physical cues accounted for 80% of the variance in candidate ratings, which is consistent with other research showing that interviewers tend to be significantly biased and give candidates artificially higher ratings due to eye contact and social status.
- Legal defensibility: A thermometer for measuring “temperature,” a meter stick for measuring “length,” and a scale for measuring “weight” should all work reliably and accurately regardless of who is using them. Likewise, psychometric assessments for measuring attributes like “personality, knowledge areas, and skill sets” should function in exactly the same way. Sadly, this basic criterion for quality is often assumed but rarely tested properly in the world of tests and assessments.
Speaking to this issue, Aethos™ recently published a peer-reviewed article in the Employee Relations Law Journal addressing the critical need for screening and selection tools to be demonstrably “fair” to all test-takers, i.e., they must have strong psychometric quality – meaning they give scores and feedback that are uniformly reliable, valid in what they purport to measure, and valid in their ability to predict meaningful, performance-related outcomes. Moreover, professional testing standards dictate that scores should be “immune” to response biases related to test-taker demographics like age, gender, or country of origin. This latter aspect typically goes unaddressed by assessment creators, and it puts organizations at risk of adverse impact claims by test-takers; the sketch following this list also includes a basic adverse-impact check.
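To make the last two points concrete – ranking a score against a norm group, and screening selection rates for possible adverse impact – here is a minimal Python sketch. The figures, function names and score scale are illustrative assumptions rather than features of any particular assessment product; the 80% threshold reflects the “four-fifths” rule of thumb described in the Uniform Guidelines.

```python
from statistics import NormalDist

def normative_percentile(raw_score: float, norm_mean: float, norm_sd: float) -> float:
    """Convert a raw score into a percentile rank against a norm (peer) group,
    assuming the norm group's scores are approximately normally distributed."""
    z = (raw_score - norm_mean) / norm_sd
    return round(NormalDist().cdf(z) * 100, 1)

def four_fifths_check(group_rate: float, highest_group_rate: float) -> bool:
    """Rough adverse-impact screen: the Uniform Guidelines treat a group's selection
    rate below 80% of the highest group's rate as possible evidence of adverse impact.
    Returns True if the ratio clears the four-fifths threshold."""
    return (group_rate / highest_group_rate) >= 0.8

# Hypothetical candidate: raw score of 62 on a scale where the peer norm group
# averages 50 with a standard deviation of 10.
print(normative_percentile(62, 50, 10))   # -> 88.5 (roughly the 88th percentile)

# Hypothetical selection rates: 30% for one demographic group vs. 45% for the
# highest-selected group. 0.30 / 0.45 = 0.67, which falls below the 0.8 threshold.
print(four_fifths_check(0.30, 0.45))      # -> False (flag for closer review)
```

A result like the second one would not prove unfairness on its own, but it is exactly the kind of demographic pattern that assessment publishers should test for and document before a tool is put into use.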
Next, administer it at the right time…
- Include as part of a broader due diligence process: No hiring or promotion decision should be based on a single piece of paper – be it a resume, reference letter or assessment report. Like a 360-degree appraisal process that aims for comprehensiveness and corroboration among data points, organizations should use a standardized due diligence process in which objective assessment, behavioural interviewing and proper reference checking are aligned and working in tandem. This approach does require more time and money up front, but it significantly reduces bad hires and wasted money on the back end (see the rough calculation following this list). For example, research reveals that even for low-level positions a failed hire costs a company double the person’s salary; at higher levels, the cost can be six times the salary. Hiring people is arguably the most important thing any organization does – the cost of failure is significant. So is the cost of retaining mediocre performers and less-than-optimal “fits” with a company’s culture.
- Use before interviews to guide direction: Roadmaps and GPS help travellers stay on course to reach a destination, especially in unfamiliar territory. Similarly, a good interview process helps you stay focused on the factors most relevant to assessing role fit. Here is where assessments can help structure interviews. There is no universal process or definition for structured interviews, but at a minimum a structured interview involves asking candidates standard questions organized around a set of job requirements. When a structured approach is used, some findings suggest the outcomes can be as powerful as proven techniques such as ability tests and assessment centres. It is therefore not surprising that some research suggests candidates view the employment interview as the most suitable and obvious measure of job-related abilities, and therefore as a mutual exchange of relevant information predictive of future performance.
The most useful objective assessments provide feedback about candidates that helps facilitate this exchange of job-related information. So, while objective assessment and behavioural interviewing can each be effective when used separately, they are even more powerful when used in conjunction. The best assessments arguably do not substitute for human decision-making by giving wholesale appraisals such as “fit-unfit” or “hire-don’t hire.” Instead, we have found that the most effective assessments give feedback that (1) provides information on what can be known and (2) offers guidance on where to probe further – in interviews or reference checks – about issues or nuances in an individual’s results that require clarity or corroboration.
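The cost figures cited above lend themselves to a quick back-of-the-envelope comparison. The sketch below uses the multiples mentioned in this article (roughly 2x salary for low-level roles, up to 6x for senior roles), but the failure rates, salary and per-candidate assessment cost are purely hypothetical assumptions.

```python
def expected_cost_of_failed_hires(n_hires: int, failure_rate: float,
                                  salary: float, cost_multiple: float) -> float:
    """Expected total cost of failed hires for a hiring cohort, with the cost of
    each failure expressed as a multiple of the role's salary."""
    return n_hires * failure_rate * salary * cost_multiple

# Hypothetical senior-level scenario: 10 hires at a $150,000 salary, 6x cost multiple.
without_assessment = expected_cost_of_failed_hires(10, 0.25, 150_000, 6)  # assumed 25% failure rate
with_assessment    = expected_cost_of_failed_hires(10, 0.10, 150_000, 6)  # assumed 10% failure rate
upfront_cost       = 10 * 2_000   # assumed per-candidate cost of assessment and structured process

print(without_assessment - with_assessment - upfront_cost)  # -> 1330000.0 net expected savings
```

Even if the assumed failure rates are off by half, the asymmetry between the front-end investment and the back-end cost of failure is what makes the broader due diligence process worthwhile.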
Lastly, use the results correctly…
- Use with everyone: Aethos™ uses the 20|20 Skills™ assessment as a core component of its executive search work. This is done to bolster due diligence on candidates and to prevent real or perceived instances of preferential treatment. Indeed, administering assessments to some candidates but not others amounts to positive or negative preferential treatment, which creates legal risk of claims grounded in bias and prejudice. Therefore, use assessments consistently.
Consistent use has the added benefit of helping to screen a candidate pool. For instance, some research shows that about 20% of candidates will self-select out of the application process when they learn that a standardized assessment is part of a company’s due diligence. This probably happens for many reasons – test anxiety, a perceived lack of the required skills or a perceived burden on one’s time – but whatever the reason, that is roughly 20% of the candidate pool that did not need an interview (see the quick illustration following this list). The lesson is clear – assessments help screen out people as much as they assist with selecting the right candidate.
- Nuances matter, look beyond generalizations: Two people standing side by side can look very similar when viewed from a distance, but marked differences emerge when one is close enough to see the details. The same applies to comparing people based solely on scores versus understanding the details of their response patterns on assessments. Scores are useful for screening and making quick comparisons, but knowledge of response patterns is invaluable during the selection phase for avoiding conclusions drawn in ignorance of key details that affect a candidate’s role fit. Responses to assessment questions are often affected by situational variables, within or outside a person’s control, in the workplace or at home. We recommend that HR practitioners and hiring managers take time to study the nuances in response patterns and in the interpretations of results provided by the assessment process.
Psychologically speaking, humans are complex machines operating in complex environments. Nuances, idiosyncrasies and mitigating circumstances can and do occur; they can affect response patterns, and they can therefore make all the difference when evaluating role fit among a short list of seemingly equally qualified candidates. Take time to be trained in understanding the nuances captured by an assessment, and then learn to use these uncovered nuances and idiosyncrasies to your benefit in screening and selection.
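As a quick illustration of the self-selection effect mentioned under “Use with everyone,” the short sketch below applies the roughly 20% figure cited above to a hypothetical applicant pool; the pool size and per-interview time are assumptions for illustration only.

```python
def expected_self_selection(pool_size: int, opt_out_rate: float = 0.20) -> int:
    """Applicants expected to withdraw once they learn that a standardized
    assessment is part of the company's due diligence process."""
    return round(pool_size * opt_out_rate)

pool = 150                                   # hypothetical applicant pool
withdrawn = expected_self_selection(pool)    # -> 30 applicants self-select out
print(withdrawn, "fewer first-round interviews,",
      withdrawn * 45, "minutes of interviewer time saved")  # assumes 45-minute screens
```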
Studies generally show that assessment outperforms traditional behavioural interviews by roughly 4:1 in predicting job performance, so a targeted and valid assessment can give organizations a major competitive edge in identifying and retaining top talent. But that is only part of the equation; the other part is choosing and using these tools correctly. We encourage interested readers to consult the “sample publications” section for more detailed information and guidelines. In closing, we also recommend that HR practitioners and hiring managers spend as much time researching and choosing the right assessment as they do conducting due diligence on their hires. Simply put, proper due diligence is always the name of the game.
(A) Sample Aethos™ publications
Houran, J., & Lange, R. (2007). State-of-the-art measurement in Human Resource assessment. International Journal of Tourism and Hospitality Systems, 1, 78-92.
Lange, R., & Houran, J. (2009). Perceived importance of employees’ traits in the service industry. Psychological Reports, 104, 567-578.
Lange, R., & Houran, J. (2015). “Quality of measurement” – the implicit legal cornerstone of HR assessments. Employee Relations Law Journal, 40, 46-60.
(B) References
Standards for Educational and Psychological Testing (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 2002)
Uniform Guidelines on Employee Selection Procedures (Equal Employment Opportunity Commission, Department of Labor, Department of Justice, & the Civil Service Commission, 1978)