Frequently Asked Questions
QUESTIONS / ANSWERS
- Which browsers can I use to access my Virtual Presence Assessment?
- When I reach the Camera/Mic setup page, my Chrome browser says “Permission denied by system”
- Can I do my Virtual Presence Assessment on my mobile device?
- The system is slow… is this normal?
- When I start my assessment, it doesn’t move past “Loading AI Models”… what should I do?
- My email isn’t recognized…what can I do?
- Assessment Design
Which browsers can I use to access my Virtual Presence Assessment?
Ensure you are using Chrome or Edge only (Safari and Firefox are not currently supported). If you have access issues:
- Please check our Frequently Asked Questions page.
- If your issue cannot be resolved, please contact [email protected]. Be sure to describe the issue and include the browser type and version you are using.
When I reach the Camera/Mic setup page, my Chrome browser says “Permission denied by system”
This can happen if your Chrome browser is set to block the camera/mic and not allow sites to request access. This is not a default setting in Chrome, but here’s how to change it so the Virtual Presence Assessment will work:
- In Chrome, type the following in your URL bar: chrome://settings/content
- Scroll down to the “Permissions” area and look for Camera and Microphone. Make sure each is set to “Sites can ask to use your camera/mic”. If it is set to “Don’t allow sites to use your camera/mic”, the Virtual Presence Assessment won’t be able to ask you for access.
- Once updated, restart the Virtual Presence Assessment or refresh the page where we ask for Mic/Camera access. You should then be prompted for access.
Can I do my Virtual Presence Assessment on my mobile device?
No. The Virtual Presence Assessment is optimized for desktop, since we assume that is where your most important video interactions take place.
The system is slow… is this normal?
All of our AI runs locally in your browser, and loading it can take a little time depending on your bandwidth and system resources. Give it up to a minute to load.
When I start my assessment, it doesn’t move past “Loading AI Models”… what should I do?
Some institutions* or devices may have web-filtering/security software running locally (one example is Zscaler) that blocks the download of our tools. If this happens to you, we recommend either disabling the blocker while you take the Virtual Presence Assessment or adding assessment.virtualsapiens.co to the whitelisted domains in your web-filtering tool.

*If you believe your organization is blocking our web address and models, you can try a non-work laptop or a different Wi-Fi network. We are always here to help troubleshoot access issues.
My email isn’t recognized… what can I do?
Ensure you are using the email address you originally registered with. If this still does not fix the issue, please reach out to [email protected] and include your email address.
How many times should I take the Virtual Presence Assessment?
We recommend taking the assessment 2 to 3 times. The tool provides detailed recommendations on how to improve, and practice is a proven method of behavior change.
Plus, our data shows that taking at least two assessments led to improvement for a whopping 93% of professionals.
Do I have to respond to the questions in English?
No. While the tool text is in English and the questions asked in the videos are in English, the responses can be in any language listed in the language dropdown.
However, the videos themselves are in English. This would mean listening in English and responding in the language of your choice. The results would also be in English unless the user manually opts to translate the page within their Chrome browser.
Is there an age limit to use the Virtual Presence Assessment?
Yes – you must be at least 18 years old to take the assessment.
How do Virtual Sapiens AI models and insights accommodate different cultural norms of communication?
We have based all of our current thresholds on Western cultural communication norms for video. Interestingly, there is not yet enough research on how cultures in Asia or elsewhere behave differently on video, since the medium is so new. That being said, we encourage the use of our software within contexts of Western-dominant professional communication over video.
What about racial and gender bias?
We focus specifically on universally supported, research-backed behaviors, rather than culturally dependent ones, in order to steer clear of the racial/gender bias one might normally see with certain emotion AI. For example, we detect ‘facial expression variation’ as a metric for expressivity and for developing the ability to express our intention effectively. We never suggest specific emotions one should display, nor do we bias toward one emotion over another.
As the company develops, adding cultural specifications would be an interesting direction, but for now we focus on behaviors that translate across the world due to their strong evolutionary inheritance.
How are metrics of virtual presence formulated, weighted, and scored?
Our metrics reflect elements of virtual presence with a focus on visual and vocal communication cues over video. In other words, our AI analyzes the nonverbal communication aspects of presence on video.
All metrics, thresholds, and insights are vetted by behavioral science and nonverbal communication experts, and reference peer-reviewed research from a cross-section of behavioral science, neuroscience, and anthropological perspectives.
We measure ‘behaviors as a factor of time’, along with whether the participant is in the role of active speaker or listener. In this sense, no one behavior in isolation will result in a poor score or metric flag, but specific behaviors repeated over a certain amount of time will result in an area of improvement being detected and shared with the user.
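As an illustration only (this is a hypothetical sketch, not Virtual Sapiens’ actual model, metric names, or thresholds), a time-windowed flagging rule like the one described above might look like this: a behavior is flagged for a given role only when it persists across a sizable fraction of that role’s frames, never from a single occurrence.

```python
# Hypothetical sketch of time-windowed behavior flagging.
# Behavior names, frame format, and the 50% threshold are illustrative.

def flag_behaviors(frames, min_fraction=0.5):
    """frames: list of dicts like
         {"role": "speaker", "behaviors": {"gaze_away": True, ...}}
    Returns (role, behavior) pairs present in at least `min_fraction`
    of that role's frames."""
    counts = {}   # (role, behavior) -> number of frames where present
    totals = {}   # role -> total frames observed in that role
    for frame in frames:
        role = frame["role"]
        totals[role] = totals.get(role, 0) + 1
        for behavior, present in frame["behaviors"].items():
            if present:
                key = (role, behavior)
                counts[key] = counts.get(key, 0) + 1
    return [
        (role, behavior)
        for (role, behavior), n in counts.items()
        if n / totals[role] >= min_fraction
    ]

# A brief off-camera glance is not flagged, but sustained gaze-away is:
frames = (
    [{"role": "speaker", "behaviors": {"gaze_away": True}}] * 6
    + [{"role": "speaker", "behaviors": {"gaze_away": False}}] * 4
)
print(flag_behaviors(frames))  # [('speaker', 'gaze_away')]
```

The design point this sketch captures is the one in the answer above: isolated behaviors are ignored, and only behaviors sustained over time, evaluated per speaker/listener role, surface as areas of improvement.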
What are some examples of research that informed the assessment?
- Eye gaze/framing and its effects on impression formation
- Negative associations with neutral facial expressions on video
- Use of hand gestures in influence and information retention
- The CANDOR corpus: Insights from a large multimodal dataset of naturalistic conversation
- A neural mechanism of first impressions
- First Impressions: Making Up Your Mind After a 100-Ms Exposure to a Face
- The Benefit of Power Posing Before a High-Stakes Social Evaluation
- Body posture effects on self-evaluation: A self-validation approach