Understand the Science behind the Innovation.

Quick Facts

Results in less than 5 minutes

Accuracy benchmarked against 24 expert annotators

No Individual Bias

No Gender Bias


Average variation of only ±5%


From billions to thousands


Ethics and Compliance

At Vima we consider data protection and privacy as part of our core duties and therefore ensure that our methodologies and technology are GDPR-compliant and follow the Swiss Federal Act on Research Involving Human Beings as well as the Swiss Federal Act on Data Protection.

The Science Behind

From video CV to personality traits & skills

Starting from a self-presentation video that contains billions of raw features, our model uses state-of-the-art machine learning algorithms to compress the audio, text and visual channels into fewer than 15’000 meaningful multimodal features. This feature set is then passed through different regressors to infer the traits and skills. Multimodality is handled by performing fusion at different levels of the predictive model, which boosts its generalization capacity.
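The idea of fusing modalities at different levels of the model can be sketched in a few lines. The example below is purely illustrative (the feature dimensions, regressor, and data are invented, not Vima's actual pipeline): it contrasts early fusion (concatenating the audio, text, and visual feature blocks before a single regressor) with late fusion (one regressor per modality, with predictions averaged), using a closed-form ridge regressor in NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature blocks for 100 video CVs
# (dimensions are illustrative only).
X_audio = rng.normal(size=(100, 40))
X_text = rng.normal(size=(100, 30))
X_visual = rng.normal(size=(100, 50))
y = rng.normal(size=100)  # e.g. an annotated trait score

def ridge_fit_predict(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return X @ w

# Early fusion: concatenate modalities before a single regressor.
pred_early = ridge_fit_predict(np.hstack([X_audio, X_text, X_visual]), y)

# Late fusion: one regressor per modality, predictions averaged.
pred_late = np.mean(
    [ridge_fit_predict(X, y) for X in (X_audio, X_text, X_visual)], axis=0
)

print(pred_early.shape, pred_late.shape)  # (100,) (100,)
```

Fusing at several levels (as the text describes) would combine both strategies, e.g. concatenating some modalities early while merging others at the prediction stage.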

Ground Truth

A cornerstone of good AI is the quality of its ground truth. Vima’s algorithms for assessing traits and soft skills meet the best possible standards of data quality: they are built on human annotation of short videos following a scientifically validated protocol, performed by a carefully selected group of 24 annotators specialized in human behavior.

ICC (Intraclass Correlation Coefficient) & R²

Vima builds its machine learning solutions on a reliable and valid benchmark. The ground truth data used for training its models passes the statistical test for good reliability of the labels provided by a large group of expert raters: average ICC = 0.68. The performance of the models predicting personality traits from the visual, audio, and text features extracted from the video CV is quantified with the R² metric. Vima’s prediction models perform with an accuracy that is state-of-the-art in the field: average R² = 0.4.
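For readers unfamiliar with the two metrics, here is a minimal NumPy sketch of both. The exact ICC variant Vima uses is not stated, so the one-way random-effects ICC(1,1) below is an assumption chosen for simplicity; the R² helper is the standard coefficient of determination.

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def icc_1_1(ratings):
    """One-way random-effects ICC(1,1) for an n-targets x k-raters matrix."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    target_means = x.mean(axis=1)
    grand_mean = x.mean()
    # Between-target and within-target mean squares from one-way ANOVA.
    msb = k * np.sum((target_means - grand_mean) ** 2) / (n - 1)
    msw = np.sum((x - target_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Sanity checks: perfect prediction gives R^2 = 1; perfect rater
# agreement gives ICC = 1.
print(r_squared([1, 2, 3, 4], [1, 2, 3, 4]))       # 1.0
print(icc_1_1([[1, 1], [2, 2], [3, 3]]))           # 1.0
```

Intuitively, ICC measures how consistently different raters score the same videos, while R² measures how much of the variance in those scores the model's predictions explain.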

No Gender Bias

Together with our academic partners (Idiap & UNIL), we tested whether our trait- and skill-prediction algorithm is biased against women, i.e., whether it evaluates them more negatively than men. Analyses of our fully balanced dataset showed that women are actually rated slightly more positively than men on certain skills and traits, both by human annotators and by the algorithm. We therefore found no evidence of an overall negative gender bias.

References


* https://hbr.org/2019/06/will-ai-reduce-gender-bias-in-hiring
* https://hbr.org/2019/11/4-ways-to-address-gender-bias-in-ai
* Eagly, A. H., Makhijani, M. G., & Klonsky, B. G. (1992). Gender and the evaluation of leaders: A meta-analysis. Psychological Bulletin, 111(1), 3–22. https://doi.org/10.1037/0033-2909.111.1.3
* Heilman, M. E. (2012). Gender stereotypes and workplace bias. Research in Organizational Behavior, 32, 113–135. https://doi.org/10.1016/j.riob.2012.11.003

Test / Retest Reliability

We tested the stability of our predictions with more than 800 individuals who recorded their video up to five times, with 2–6 weeks between takes. The results showed that ILA predictions were stable: the average variation was as low as ±5%. In sum, natural fluctuations in behaviour and expression do not overshadow the overall stability of our personality and skill predictions.
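One simple way to quantify this kind of test–retest variation is the average absolute deviation of each person's repeated scores from their own mean. The sketch below uses invented scores on a hypothetical 0–100 scale (the source does not specify Vima's scale or exact stability statistic):

```python
import numpy as np

# Hypothetical repeated predictions (rows: individuals, cols: takes),
# on an assumed 0-100 score scale; values are illustrative only.
scores = np.array([
    [62.0, 64.5, 61.0],
    [48.0, 50.0, 47.5],
    [75.0, 72.0, 74.0],
])

# Each person's spread around their own mean across takes.
per_person_dev = np.abs(scores - scores.mean(axis=1, keepdims=True))
avg_dev = per_person_dev.mean()
print(f"average variation: +/- {avg_dev:.1f} points")  # +/- 1.1 points
```

A small average deviation relative to the scale, as in the ±5% reported above, indicates that retaking the video weeks later does not meaningfully change the predicted profile.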