What is bias?

Bias can be understood as any systematic measurement error. In the social and cognitive sciences, we talk about mental shortcuts that often lead to judgment errors. These shortcuts manifest as prejudices, tendencies, or preferences in favour of one thing, person, or group over another. We are often not aware that we think and act in a biased way, but several techniques allow researchers and practitioners to measure and counteract biased thoughts and behaviours.

In decision making, bias is related to heuristic thinking: an efficient, experience-based strategy for making decisions that often, but not always, leads to a correct outcome. When a heuristic thinking process leads to an incorrect judgment, we say that the judgment is biased. The use of simple decision rules instead of careful scrutiny of information can be a good strategy when decisions must be taken fast and errors are relatively inexpensive. However, this strategy is costly when the impact of the judgment is high (e.g. a decision to hire or promote).

In social perception, several forms of cognitive bias or heuristic fallacy may cause individuals to judge others incorrectly: people can be drawn to rely on false information or fail to use valid information. In self-assessment, we often encounter response bias, the tendency to adjust one’s answers to questions depending on one’s motivation or on the context.

Understanding bias in professional interviews

A professional interview is not biased in itself, but the interviewer may allow his or her judgment to become biased. This happens almost automatically and outside of the interviewer’s awareness. It is especially likely in zero-acquaintance interactions such as a job interview, where first impressions are particularly strong and sensitive to heuristic thinking and bias. The tendency is further exacerbated when there is limited time to reach a decision.

Below we explain and illustrate some of the most common biases documented in the social sciences that can greatly, and negatively, impact professional decision making.

Primacy effect or first-impression bias

Facts or impressions that are presented first tend to be better learned or remembered than material presented later. In a social context, the first information gained about a person has an inordinate influence on later impressions and evaluations of that person.

Example: The first minutes of a first encounter, such as a job interview, carry disproportionate weight in the hiring decision. The interviewee’s appearance, handshake, and other relatively job-irrelevant information will be more salient to the interviewer than most of the job-relevant information the interviewee presents later (e.g. professional experience). In addition, first impressions are hard to change once they have been formed. Structured interviews are often suggested as a way to minimise this bias, or at least to equalise it across candidates.

Confirmation bias

This is the tendency to gather evidence that confirms pre-existing expectations, typically by emphasizing or searching for supporting evidence while dismissing, or failing to seek, contradictory evidence.

Example: The interviewer’s intuitions, based on inaccurate or incomplete first impressions (see above), often feel “right” and become consolidated because of confirmation bias. Early hunches about a person’s abilities or personality set expectations, and only information in line with those expectations will be sought. For example, if the interviewer reads on a CV that the candidate comes from a renowned college or company, that candidate has a higher chance of being perceived as particularly intelligent or competent. Even if contradictory evidence is presented later, such as objective test results, the interviewer may ignore it and stick to his or her first opinion, producing a false sense of confirmation. This is what we call a self-fulfilling prophecy.

Recency effect

The most recently presented facts or impressions are learned or remembered better than material presented earlier. In social contexts, it can result in inaccurate ratings of a person’s abilities due to the inordinate influence of the most recent information received about that person.

Example: The interviewer will better remember the most recent information obtained about a candidate, as well as the most recent candidates in a series of job interviews. Evaluations of a person’s job fit will thus be influenced more heavily by a limited number of recent facts that do not form a complete, objective picture.

Halo effect

This is a response bias in which a general evaluation of a person (positive or negative), or an evaluation of a person on a specific trait (e.g. as attractive), influences judgments of that person on other specific dimensions.

Example: A candidate who is liked will be judged as more intelligent, competent, and honest than he or she actually is.

Similar-to-me effect

Individuals usually get along better with people who look and think the same way they do, and tend to favour them accordingly.

Example: Employees or candidates who speak the same language or have other things in common with their manager or interviewer (e.g. the same hobbies or origin) will be liked more than other candidates or team members, and will therefore have a higher chance of being hired or receiving a positive evaluation. Moreover, the manager or interviewer will show more positive and encouraging behaviours towards these individuals, who will in turn perform better.

In-group bias

Individuals tend to favour their own group, its members, its characteristics, and its products, particularly in reference to other groups. The favouring of the in-group tends to be more pronounced than the rejection of the out-group, but both tendencies become more pronounced during periods of inter-group contact. Any social category can be affected, such as age, gender, religion, or sexual orientation. At the regional, cultural, or national level, this bias is often termed ethnocentrism: the tendency to base perceptions and understandings of other groups or cultures on one’s own.

Example: Similarly to the above, employees or candidates who belong to the same group as the interviewer, or as other high-performing employees in a company, will automatically benefit and have a higher chance of being promoted or hired (e.g. young Caucasian males) compared with “atypical” candidates (e.g. non-white, female, or older adults).

Contrast and assimilation effect

Two seemingly unrelated stimuli (e.g. personal characteristics) can be perceived as more different (contrast) or more similar (assimilation) when they are encountered together or when one immediately follows the other.

Example: These effects occur, for instance, in a series of consecutive interviews, where they may bias the interviewer’s evaluation in favour of a candidate who comes right after a negatively evaluated candidate.

Fundamental attribution error

Individuals tend to overestimate the degree to which someone else’s behaviour is determined by his or her enduring internal personal characteristics (personality, attitudes, or beliefs) and, correspondingly, to minimise the influence of external factors, i.e. the surrounding situation, on that behaviour.

Example: A candidate arriving late for an interview due to unforeseeable circumstances will unfortunately tend to be evaluated negatively, because arriving late is more likely to be seen as a sign of the candidate’s inability to be on time (low conscientiousness) than as a consequence of the situation.

How does Vima limit bias?

AI can help to reduce biased or subjective judgments of people. Vima actively limits bias in two main ways: by focusing on behaviour measurement and by training its algorithms on high-quality labelled data.

First, Vima’s algorithms learn to consider objective and dynamic behaviours that correlate with certain traits and skills, as perceived by a reliable group of expert observers. Compared with verbal responses to self-report questionnaires, behavioural responses in a spontaneous video are much harder to distort or manipulate. Furthermore, by measuring and integrating behaviours from multiple modalities (face, voice, speech, and gesture), the model’s predictions are robust and comprehensive, and as a consequence less affected by faking attempts or introspection error. In sum, you are what you do, not just what you say you do.
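To make the idea of multimodal integration concrete, here is a minimal sketch of one common approach, feature-level (early) fusion, in which per-modality descriptors are concatenated before a single model predicts a trait score. This is an illustrative assumption, not Vima’s documented architecture; all feature names, dimensions, and the choice of regressor are hypothetical.

```python
# Minimal sketch of multimodal early fusion for trait prediction.
# Illustrative only: feature sets, dimensions, and the regressor are
# hypothetical assumptions, not Vima's actual pipeline.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_videos = 200

# Hypothetical per-modality feature vectors extracted from each video.
face    = rng.normal(size=(n_videos, 32))   # e.g. facial action descriptors
voice   = rng.normal(size=(n_videos, 16))   # e.g. prosodic statistics
speech  = rng.normal(size=(n_videos, 64))   # e.g. verbal-content embeddings
gesture = rng.normal(size=(n_videos, 8))    # e.g. movement descriptors

# Early fusion: concatenate modalities so the model sees behaviour as a whole.
X = np.hstack([face, voice, speech, gesture])

# Target: expert-annotated trait scores (placeholder values here).
y = rng.normal(size=n_videos)

# A regularised linear regressor stands in for the prediction model.
model = Ridge(alpha=1.0).fit(X, y)
print("Fused feature dimensionality:", X.shape[1])
```

Because every modality contributes to the fused representation, no single cue (such as appearance alone) can dominate the prediction, which is one reason multimodal measurement is harder to game than a single-channel signal.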

Second, Vima relies on trained experts to label its large pools of video data using standardized annotation techniques. Standardized annotation involves several practices that contribute to limiting bias and subjectivity (a sketch of one reliability check follows the list), including but not limited to:

  • Expert annotators have no personal knowledge of the recorded person’s background (reducing the similar-to-me effect and confirmation bias).
  • They follow a strict protocol with clearly defined, research-driven scales and video presentation conditions (reducing the halo effect, the contrast and assimilation effect, and the recency effect).
  • They are instructed to focus on the assessed person’s verbal and nonverbal behaviour only (not on appearance or non-relevant demographic variables).
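As one concrete illustration of such quality control, the sketch below computes a simple inter-annotator agreement score: the mean pairwise correlation between annotators’ ratings on one scale. This is a generic, assumed check; the document does not specify which reliability statistics Vima uses, and all ratings below are placeholders.

```python
# A plausible annotation-quality check (an assumption, not Vima's documented
# method): verify that expert annotators agree with one another before their
# labels are used for training.
import numpy as np

def mean_pairwise_correlation(ratings: np.ndarray) -> float:
    """Average Pearson correlation between every pair of annotators.

    ratings: array of shape (n_annotators, n_videos) for one trait scale.
    """
    n = ratings.shape[0]
    corrs = [np.corrcoef(ratings[i], ratings[j])[0, 1]
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(corrs))

# Hypothetical ratings from 3 annotators on 5 videos (1-7 scale).
ratings = np.array([
    [4, 6, 2, 5, 3],
    [5, 6, 1, 5, 3],
    [4, 7, 2, 4, 2],
])
agreement = mean_pairwise_correlation(ratings)
print(f"Mean pairwise correlation: {agreement:.2f}")
# Batches falling below an agreed threshold would be re-annotated or discarded.
```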

Furthermore, extreme care is taken with every other part of the process of building Vima’s technology in order to meet the highest standards of data quality, notably:

  • Verified technical quality: Vima’s recording tool includes a face and voice verification step to ensure that a valid video is uploaded. Furthermore, each collected video is checked against a list of technical quality criteria so that noise does not affect the computational analyses.
  • Language matching: Traits and skills can be expressed and perceived in different ways by speakers of different languages. Moreover, difficulty in speaking a certain language may unduly lower how a person’s skills and traits are perceived. Vima actively prevents language fluency and cultural bias from slipping into its algorithms by using only quality data for training. Concretely, this means that language-specific prediction models are built on data from native or highly fluent speakers from specific countries where that language is a mother tongue. It also means that these data were labelled by annotators who speak the same native language and have the same nationality as the individuals they evaluated (e.g. a native English-speaking annotator from the UK or US viewed and rated videos of native or highly fluent English speakers from the US or UK).
  • Cultural diversity: While language-specific models are necessary, culturally singular models would not reflect the reality of a culturally mixed world. That is why Vima tries to represent this mixed reality in its models and communicates with full transparency on the rich cultural metadata of each prediction model (country of origin, country of residence, other languages spoken, etc.). This valuable information helps Vima give expert guidance and select the most appropriate solution (model, application) for a company’s needs.
  • Gender-balanced dataset: Vima reduces gender bias by training its technology on a perfectly balanced (50% male, 50% female) dataset of videos evaluated by an equally gender-balanced pool of annotators; a sketch of how such balancing can be done follows this list.
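The sketch below shows one simple way a gender-balanced training set can be assembled: downsampling the over-represented group before training. It is an illustrative assumption; the document does not describe Vima’s actual sampling procedure, and the catalogue entries are made up.

```python
# Minimal sketch of gender balancing by stratified downsampling.
# Illustrative assumption only; not Vima's documented procedure.
import random

def balance_by_gender(videos: list[dict], seed: int = 0) -> list[dict]:
    """Downsample the majority group so each gender contributes 50%."""
    rng = random.Random(seed)
    by_gender = {"male": [], "female": []}
    for v in videos:
        by_gender[v["gender"]].append(v)
    n = min(len(group) for group in by_gender.values())
    balanced = []
    for group in by_gender.values():
        balanced.extend(rng.sample(group, n))  # keep n videos per gender
    rng.shuffle(balanced)
    return balanced

# Hypothetical catalogue: 60 male and 30 female videos.
videos = [{"id": i, "gender": "male" if i % 3 else "female"} for i in range(90)]
balanced = balance_by_gender(videos)
print(len(balanced), "videos,",
      sum(v["gender"] == "female" for v in balanced), "female")  # 60 videos, 30 female
```

The same stratification idea extends to the annotator pool: assigning equal numbers of videos to male and female annotators keeps the labels, not just the videos, gender-balanced.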

Thus, by carefully selecting and annotating the data on which its algorithms are trained, Vima works constantly to reduce the biases its algorithms could otherwise reproduce at scale, avoiding the pitfalls that result from poor training data (“garbage in, garbage out”). In the end, Vima’s quality AI technology empowers people with actionable insights into personality traits, soft skills, and emotions, enabling better person assessment and, ultimately, improved decision making and fairness overall.
