As cool as augmented and virtual reality might be, we have barely scratched the surface of their potential. This team seeks to change that.

TL;DR

With a fundamental focus on understanding human perception and cognitive and social functions, faculty members from optics, engineering, natural sciences, medicine, and the libraries seek to awaken the awesome potential of augmented and virtual reality. They aim to do it through a center that catalyzes fundamental research, advances in hardware, technology translation, enhancement of computing capabilities, and the development of applications.

Before we dive in, let’s establish some fundamental definitions.

Augmented reality (AR) overlays digital content on our physical world, changing how we see and interact with it in real time. A classic example is Pokémon Go, but day to day, many of us experience it through Snapchat and Instagram filters.

Virtual reality (VR) is a fully immersive experience. A VR headset cuts you off from the physical world and replaces it with a wholly new, simulated environment—such as a stream of music notes you need to slash with lightsabers in the rhythm-based game Beat Saber.

Extended reality (XR) is an umbrella term used to refer to AR, VR, and everything in between.

(If this has you itching to experience XR, visit the Mary Ann Mavrinac Studio X, Rochester’s XR hub, on the first floor of the Carlson Science and Engineering Library. There, you can try the Microsoft HoloLens 2 (AR), Meta Quest 3 (VR), and other XR equipment.)

Now that we’ve covered the basics, we can get to the good stuff.

In the mid-1960s, computer scientist Ivan Sutherland envisioned a world with AR and VR in his essay “The Ultimate Display.” (You can read the essay in full via Wired, which calls it a “seed-bomb for emergent technology.”) Despite this technology being on our innovation radar for decades, we’re still waiting for it to live up to its potential and transform how we access information and interact with the world.

Immature optics, a lack of powerful computational hardware, and insufficient digital content have slowed progress in the visual realm. Similarly, audio AR/VR systems—from basic stereo to more sophisticated object-based, multichannel setups—have been the subject of intensive research but remain far from the ceiling of possibility. Finally, seamless integration with human perception, cognition, and behavior is still limited. The proposed Center for Human-Centric Augmented and Virtual Reality (CHAVR) aims to advance the state of AR/VR on all of these fronts.

The team

Co-leads:

  • Nick Vamivakas
    Marie C. Wilson and Joseph C. Wilson Professor of Optical Physics
    Dean, Graduate Education and Postdoctoral Affairs


  • Duje Tadin
    Professor, brain and cognitive sciences, ophthalmology, neuroscience, Center for Visual Science
    Director of training, Center for Visual Science

  • Meg Moody
    Assistant director, Studio X

  • Mujdat Cetin
    Professor, electrical and computer engineering, computer science
    Robin and Tim Wentworth Director, Goergen Institute for Data Science and Artificial Intelligence
    Director, New York State Center of Excellence in Data Science and Artificial Intelligence

  • Jannick Rolland
    Brian J. Thompson Professor of Optical Engineering
    Professor, optics, biomedical engineering, Center for Visual Science
    Director, Center for Freeform Optics
    Director, R.E. Hopkins Center for Optical Design and Engineering

  • Susana Marcos
    Nicholas George Endowed Professor in Optics
    Professor, ophthalmology
    David R. Williams Director, Center for Visual Science

  • Benjamin Suarez-Jimenez
    Associate professor, neuroscience, Center for Visual Science
