Virtual Human Interaction Lab

Field of research: Virtual reality
Location: Stanford, CA, USA
Website: Official Website
Operating agency: Stanford University
Established: 2003
Director: Jeremy Bailenson
Nickname: VHIL
Phone: +1 650-736-8848
Address: McClatchy Hall, Room 411, Department of Communication, Stanford, CA 94305-2050, United States

The Virtual Human Interaction Lab (VHIL) at Stanford University was founded in 2003 by Jeremy Bailenson, associate professor of communication at Stanford University. The lab conducts research within the Department of Communication. VHIL's mission statement reads: "The mission of the Virtual Human Interaction Lab is to understand the dynamics and implications of interactions among people in immersive virtual reality simulations (VR), and other forms of human digital representations in media, communication systems, and games. Researchers in the lab are most concerned with understanding the social interaction that occurs within the confines of VR, and the majority of our work is centered on using empirical, behavioral science methodologies to explore people as they interact in these digital worlds. However, oftentimes it is necessary to develop new gesture tracking systems, three-dimensional modeling techniques, or agent-behavior algorithms in order to answer these basic social questions. Consequently, we also engage in research geared towards developing new ways to produce these VR simulations."

Faculty and research staff

  • Jeremy Bailenson, professor of communication, VHIL founder
  • Shawnee Baughman, lab manager, B.S. and M.S. in communication from Stanford University

Research

    Digital anonymity

    Digital media, and avatars more specifically, have made it increasingly easy for users to interact anonymously. In digital worlds our avatars may differ from our physical-world selves on a variety of characteristics, ranging from name and physical appearance to demographics and attitudes. We are studying how digital media users who anonymize themselves via their avatars may be perceived differently from media users whose avatars resemble their physical-world selves. We are asking questions such as: Is ostracism more aversive when it comes from an anonymous or an identified digital media user? And are media users who choose to be anonymous treated differently from media users who are merely assigned anonymous avatars?

    Mediators and mimicry

    A mediator's success hinges on two important factors: impartiality and rapport. Ironically, the process of establishing rapport can undermine the mediator's ability to convey a sense of impartiality. Thus, mediators face a dilemma – a dilemma that we believe digital media might be able to help solve. We are now exploring how the affordances of online dispute resolution (ODR) may help mediators strike a delicate balance between developing rapport and maintaining impartiality. One area of particular interest to us is digital mimicry. Mimicry is known to elicit a wide variety of favorable responses; using tracking technology and computer algorithms, we can make virtual mediators subtly yet perfectly mimic disputants' head movements.
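
    As a rough illustration of how such mimicry could be automated, the minimal sketch below replays a disputant's tracked head rotation on a virtual mediator after a fixed delay. The tracker interface, the four-second delay, and the yaw/pitch/roll representation are assumptions made for illustration only, not details published by the lab.

        from collections import deque

        class DelayedMimic:
            """Replay tracked head rotations on an avatar after a fixed delay."""

            def __init__(self, delay_frames):
                self.buffer = deque()
                self.delay_frames = delay_frames

            def update(self, head_rotation):
                """Record the disputant's head rotation (yaw, pitch, roll) for this frame
                and return the rotation the mediator's avatar should display now."""
                self.buffer.append(head_rotation)
                if len(self.buffer) > self.delay_frames:
                    return self.buffer.popleft()  # pose from delay_frames ago
                return (0.0, 0.0, 0.0)            # neutral pose until the buffer fills

        # Example: at 60 frames per second, a 4-second delay is 240 frames.
        mimic = DelayedMimic(delay_frames=240)
        mediator_pose = mimic.update((12.5, -3.0, 0.8))  # tracked yaw, pitch, roll in degrees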

    Out-of-body experience

    What if the virtual self could "feel" in a virtual world the same way the physical self can feel in the physical world? Navigating virtual 3D environments, performing remote surgery, and tanning on a virtual island would become second nature at this level of full immersion. We are studying ways to create and measure this phenomenon, known as self-presence, or an out-of-body experience. Current questions we are asking in this research area include what stimuli are necessary to induce digital body ownership and what modifications of avatars and virtual environments increase self-presence.

    Augmented perspective taking

    Perspective taking is the ability to mentally put oneself in the shoes of another, to imagine what the other person might be thinking and feeling in a certain situation. Immersive virtual environments allow people to vividly share the perceptual experiences of others as if they were in the heat of the moment. In essence, our ability to take the perspective of another person can be augmented by viscerally sharing their experiences - seeing, hearing, and feeling what the other person did in a particular situation. We can now literally climb into the skin of another person and fully embody their body and senses. Current projects explore how novel affordances of interactive digital media, such as immersion and interactivity, can enhance the ability to understand other minds and how the virtual experience can influence our attitudes and behaviors.

    Self-endorsing

    Self-endorsing is a novel persuasion strategy made possible by the advancement of interactive digital media. The self is no longer just a passive receiver of information, but can simultaneously partake in the formation and dispersion of persuasive messages, persuading the self with the self. What may once have sounded like the plot of a futuristic science fiction movie can now be done easily and rapidly using simple graphics software. Tapping into the framework of self-referencing, research on self-endorsing explores how using the self as the source of persuasive messages can powerfully influence attitudes and behaviors in various persuasive contexts.

    Automatic facial feature detection and analyses

    While most prior research on facial expressions involves some form of manual coding by human coders based on established facial coding systems (e.g., FACS), this methodology uses just a small webcam and computer software to predict an individual's errors and performance quality based only on facial features that are tracked and logged automatically. Using just the first five to seven minutes of facial feature data, researchers were able to predict a participant's performance on a 30-minute experimental task with up to 90% accuracy. There are countless applications for this methodology that would facilitate research on other media effects. For instance, it can predict purchasing decisions based on facial expressions (e.g., a "buying" face vs. a "not-buying" face) while participants engage in an online shopping task. Researchers can also monitor emotional fluctuations in real time as people select media content and verify, based on their facial expressions, whether the choices are contributing toward maintaining a good mood (i.e., mood management theory; Zillmann). In addition, advertisers could benefit by receiving real-time data on participants' responses to advertisements. Automatic facial feature analysis is not yet a perfect 'looking glass' into a person's mind, but its advantages are obvious and promising.
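
    The paragraph above reports only the outcome; as a hedged sketch of how such a predictor might be built, the code below trains a simple classifier on facial-feature vectors assumed to have been logged during the first few minutes of a session. The synthetic data, the feature layout, and the choice of logistic regression are illustrative assumptions, not the lab's actual pipeline.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # Hypothetical data: one row per participant, containing facial-feature statistics
        # (e.g., averaged landmark positions) from the first 5-7 minutes of the task.
        rng = np.random.default_rng(0)
        features = rng.normal(size=(60, 20))          # 60 participants, 20 tracked features
        performed_well = rng.integers(0, 2, size=60)  # 1 = good performance on the 30-minute task

        # A plain linear classifier stands in for whatever model the researchers actually used.
        model = LogisticRegression(max_iter=1000)
        accuracy = cross_val_score(model, features, performed_well, cv=5).mean()
        print(f"cross-validated accuracy: {accuracy:.2f}")

        # Once fit, the model can score a new participant from their early facial-feature data.
        model.fit(features, performed_well)
        new_participant = rng.normal(size=(1, 20))
        print("predicted outcome:", model.predict(new_participant)[0])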

    Proteus effect

    Researchers discovered that varying the attractiveness or height of a subject's avatar affected how the subject acted in a virtual environment: people adapted to the role they felt their avatar played.

    Transformed social interaction

    Research on transformed social interaction explores what occurs when behaviors in collaborative virtual environments are augmented or decremented. The lab hopes to see how permitting normally impossible behaviors in virtual environments alters, and ultimately enhances, the way people perform in learning settings and business meetings.

    Facial Identity Capture and Presidential Candidate Preference

    Through this line of research, it was found that when a subject's face was morphed in a 40:60 ratio with that of John Kerry or George W. Bush, the subject was more likely to prefer the candidate who shared their features. This study has implications concerning the use of voters' images and face morphing during national elections to sway voters' decisions.
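
    The 40:60 weighting can be pictured as a weighted blend of two aligned face images. The toy sketch below uses Pillow's pixel-level cross-dissolve and placeholder file names; the actual study relied on face-morphing software that also warps facial landmark geometry before blending.

        from PIL import Image

        # Placeholder file names; both images are assumed to be pre-aligned face photographs.
        voter = Image.open("voter.jpg").convert("RGB")
        candidate = Image.open("candidate.jpg").convert("RGB").resize(voter.size)

        # Image.blend(a, b, alpha) computes a*(1-alpha) + b*alpha for every pixel.
        # alpha=0.6 keeps 60% of the candidate's face and 40% of the voter's.
        morph = Image.blend(voter, candidate, alpha=0.6)
        morph.save("morph_40_60.jpg")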

    Virtual aging's effect on financial decisions

    Researchers found that when subjects were presented with digitally aged versions of themselves, they subsequently adapted their spending behavior to save more for the future.

    Eyewitness testimony and virtual police lineups

    In collaboration with the Research Center for Virtual Environments and Behavior, the National Science Foundation, and the Federal Judicial Center, VHIL examined the use of virtual environments for police lineups in which eyewitnesses identify suspects. VR gives witnesses the opportunity to examine lineup members in a 3D environment, view them from different distances, and even observe the suspect at a recreated scene of the crime.

    Diversity simulation

    Using virtual reality allows people to truly experience the proverbial "walk a mile" in someone else's shoes. By allowing participants to experience another race or gender, researchers at VHIL hoped to raise awareness about ongoing issues with diversity.
