Week 1

Introduction to Virtual Reality and Demonstrations


Information about the Course


How this class relates to other similar / related CS courses


CS 422 - User Interface Design - Focus on developing effective user interfaces - every spring
CS 424 - Visualization & Visual Analytics - Focus on visualizing and interacting with different kinds of large data sets - every fall
CS 426 - Video Game Programming - Focus on creating complete audio-visual interactive (and fun) experiences - every spring
CS 488 - Computer Graphics I - Focus on the basics of how computers create images on screens, OpenGL - every fall
CS 522 - Human Computer Interaction - Focus on interaction and evaluation of interactive environments - once every other year
CS 523 - Multi-Media Systems - Focus on the creation of Educational Worlds - once every other year
CS 524 - Visualization & Visual Analytics II - Focus on visualizing and interacting with 3D data sets - once every other year
CS 525 - GPU Programming - Focus on shaders and parallel processing - once every other year
CS 526 - Computer Graphics II - Focus on current trends in computer graphics - once every other year
CS 527 - Computer Animation - Focus on creating realistic motion - once every other year
CS 528 - Virtual Reality - Focus on immersion - once every other year


Warning about jargon


General Definitions

'Virtual Reality' is a nice buzzword that can mean a lot of different things depending on who you talk to.

The key element of virtual reality is immersion ... the sense of being surrounded.

A good novel is immersive without any fancy graphics or audio hardware. You 'see' and 'hear' and 'touch' and 'taste' and 'smell'.

A good play or a film or an opera can be immersive using only sight and sound.

But they aren't interactive, which is another key element.

Older textual computer games from the late 70s and early 80s - such as Adventure, Zork, and the Scott Adams (not the Dilbert guy) adventures - are immersive and interactive and place the user within a computer-generated world, though that world was created only through text. You can play Adventure online at http://www.astrodragon.com/zplet/advent.html. You can play Zork online at http://thcnet.net/zork/index.php or http://www.xs4all.nl/~pot/infocom/ or at several other sites. The Scott Adams adventures are playable at http://www.freearcade.com/Zplet.jav/Scottadams.html

video: https://www.youtube.com/watch?v=TNN4VPlRBJ8

Games in the early 80s started to incorporate primitive graphics to go along with the text, such as Mystery House below.
video: https://www.youtube.com/watch?v=asOhTnQv8PE

and even simple first-person graphics, in games such as Akalabeth and Wizardry below, though the screen refresh rate was something less than real-time. The screen took a long time (up to several seconds) to redraw, so these games tended to be strategy-based, using a turn-taking model.
video: https://www.youtube.com/watch?v=P0jSh_MKM1M

If we move towards modern computer games, they are immersive and interactive. These also have the advantage of being real-time, running at 30 to 60 frames per second, another key element.

Another key element of VR is a viewer-centered perspective, where you 'see' through your own eyes as you move through a computer-generated space, interact with objects there, and more often than not kill everyone you meet. In a conventional game the way you see the environment is limited to a screen with a narrow angle of view, and you use a keyboard / joystick / gamepad to change your view of that scene.

VR adds the concepts of head tracking, wide field of view, and stereo vision.

Head tracking allows the user to look around the computer generated world by naturally moving his/her head. A wide field of view allows the computer generated world to fill the user's vision. Stereo vision gives extra cues to depth when objects in the computer generated world are within a few feet.
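To make the head-tracking idea concrete, here is a minimal sketch (in C++; the field names and update logic are hypothetical, not any real tracker's API) of how a tracked head pose might drive the viewpoint each frame:

    #include <cmath>
    #include <cstdio>

    struct Pose {
        float x, y, z;          // head position in meters
        float yaw, pitch, roll; // head orientation in radians
    };

    // Compute the direction the user is looking from yaw and pitch.
    void gazeDirection(const Pose& head, float dir[3]) {
        dir[0] = std::cos(head.pitch) * std::sin(head.yaw);
        dir[1] = std::sin(head.pitch);
        dir[2] = -std::cos(head.pitch) * std::cos(head.yaw);
    }

    int main() {
        Pose head = {0.0f, 1.7f, 0.0f, 0.3f, -0.1f, 0.0f}; // a sample tracker reading
        float dir[3];
        gazeDirection(head, dir);
        // A renderer would place the camera at (x, y, z) looking along dir;
        // a CAVE instead uses the same pose to drive an off-axis projection
        // for each fixed screen.
        std::printf("eye at (%.2f, %.2f, %.2f) looking along (%.2f, %.2f, %.2f)\n",
                    head.x, head.y, head.z, dir[0], dir[1], dir[2]);
        return 0;
    }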

As Dan Sandin likes to say, this gives us the first re-definition of perspective since the Renaissance (16th century).

Albrecht Dürer, Draughtsman Drawing a Recumbent Woman (1525). Woodcut illustration from 'The Teaching of Measurements.'

Audio also plays a very important role in immersion (try listening to a modern Hollywood film without its musical score) and haptic (touch) feedback can provide important cues while in smaller immersive spaces.

And there is some work on dealing with smell (the HITLab in the late 90s, and Yasuyuki Yanagi, Advanced Telecommunications Research Institute, Kyoto, more recently) and taste (Hiroo Iwata, University of Tsukuba).


So here is a picture that puts a lot of this together: Randy Smith of General Motors in their CAVE. Randy is real. The car seat Randy is sitting in is real. The rest is computer generated.


VR Hardware

HMD, BOOM, and Fish Tank VR systems

For large-format display systems, some companies that sell these things are:

For Head Mounted Displays, the previous generation of $10,000 - $20,000 displays from companies like NVIS has mostly been supplanted by a new generation of low-cost gaming-related displays:

and there are other interesting solutions that have been in development for a couple of decades, such as the Virtual Retinal Display.


Current Uses

There is quite a bit of work going on in various research labs in VR. New devices are being created, new application areas being worked on, new interaction techniques being explored, and user studies being performed to see if any of these are valuable. What is much harder is getting the technology and the applications out of the research lab and into real use at other sites - getting beyond the 'demo' stage to the 'practical use' stage is still very difficult.


A Bit of History

1960 - Morton Heilig - http://www.sensomatic.com/sensorama/

Sensorama - https://www.youtube.com/watch?v=vSINEBZNCks

(image from http://www.mortonheilig.com/InventorVR.html)

patent for first HMD

(image from http://accad.osu.edu/~waynec/history/lesson17.html)

1965 - Ivan Sutherland - University of Utah

1966 - Ivan Sutherland


(image from http://accad.osu.edu/~waynec/history/tree/images/hmd.JPG)

1971 - Fred Brooks

1975 - Myron Krueger


(image from http://resumbrae.com/ub/dms424/05/01.html)

1982 - Thomas Furness III

1984 - Michael McGreevy and friends

1985 - Jaron Lanier & VPL research

1986 - Ascension Technologies founded from former Polhemus employees

1989 - Autodesk

1989 - Fake Space Labs

1992 - Electronic Visualization Laboratory

1993 - GMD - German National Research Center for Information Technology

1993 - SensAble Technology

1993 - HITLab at University of Washington

1996 - Intersense founded

1998 - TAN / Royal Institute of Technology in Stockholm

1999 - Reachin Technologies

2003 - University of Arizona

2003 - Electronic Visualization Laboratory

2009 - KAUST

2009 - UCSD Calit2 / KAUST

2014 - Oculus


VR Components


I'm going to give a brief overview here, and then we will go into each of these areas in more detail in the coming weeks.


Display

It is important to note here that although the field is called 'virtual reality', the goal is not always to recreate reality.

Computers are capable of creating very realistic images, but it can take a lot of time to do that. In VR we want at least 15 frames per second and preferably 20 in stereo.

For comparison: film runs at 24 frames per second, and broadcast television at 25 (PAL) or 30 (NTSC) frames per second.

The tradeoff is image quality (especially in the areas of smoothness of polygons, anti-aliasing, lighting effects, and transparency) vs. speed.

Though in some cases, as at General Motors, frame rate (frames per second) is sacrificed for visual quality.
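To make the budget concrete, here is a small worked example; the numbers simply follow the frame rates mentioned above, and assume one machine renders both eye views in sequence:

    // Sketch: the time budget implied by a target frame rate.
    #include <cstdio>

    int main() {
        const double targetFps = 20.0;                   // "preferably 20 in stereo"
        const double frameBudgetMs = 1000.0 / targetFps; // 50 ms per frame
        const double perEyeMs = frameBudgetMs / 2.0;     // two views per frame
        std::printf("%.0f fps -> %.1f ms per frame, %.1f ms per eye\n",
                    targetFps, frameBudgetMs, perEyeMs);
        // Simulation, tracking, audio, and both eye renders all have to
        // fit inside that budget, which is why quality is traded for speed.
        return 0;
    }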


If we want stereo visuals then we need a way to show a slightly different image to each eye simultaneously. The person's brain then fuses these two images into a stereo image.
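As a rough sketch of where those two images come from: the renderer offsets the tracked head position by half the interocular distance to each side and draws the scene from each eye. The 6.5 cm value is a common average, and the yaw-only math here is a simplification:

    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    // Offset the tracked head position half the interocular distance to
    // each side, along the head's "right" direction (yaw-only for brevity).
    void eyePositions(const Vec3& head, float yaw, Vec3& left, Vec3& right) {
        const float ipd = 0.065f;  // ~6.5 cm, a common average
        Vec3 rightDir = { std::cos(yaw), 0.0f, std::sin(yaw) };
        left  = { head.x - rightDir.x * ipd / 2, head.y, head.z - rightDir.z * ipd / 2 };
        right = { head.x + rightDir.x * ipd / 2, head.y, head.z + rightDir.z * ipd / 2 };
    }

    int main() {
        Vec3 head = {0.0f, 1.7f, 0.0f}, l, r;
        eyePositions(head, 0.0f, l, r);
        // The scene is rendered twice, once from each position.
        std::printf("left (%.4f, %.2f, %.2f)  right (%.4f, %.2f, %.2f)\n",
                    l.x, l.y, l.z, r.x, r.y, r.z);
        return 0;
    }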

One way is to isolate the user's eyes (as in an HMD or BOOM) and feed a separate signal to each eye using two display devices. Each eye watches its own independent TV.

Another way is to use two display devices and filter what each eye sees. There are several different ways to do this.

We can use polarization (linear or circular) - polarization was used in 3D theatrical films in the 1950s and 1980s, and in the current generation. One projector is polarized in one direction to show images to the left eye, and the other projector is polarized in the other direction to show images to the right eye. Both images are shown on the same screen, and the user wears lightweight glasses to disambiguate them.

This same technology can be used on televisions by adding a polarized film in front of the display, where even lines are polarized in one direction and odd lines are polarized in the other direction. The user then sees only half the resolution of the display with each eye.


We can use colour - this has been done for cheaper presentation of 3D theatrical films since the 50s with red and blue (cyan) glasses, as you only need a single projector or a standard TV. It doesn't work well with colour and is somewhat headache-inducing after an hour.

We can use time - this was common in VR in the 90s and the 00s but is rapidly falling out of favor today. Here we show the left-eye image for a given frame, then the right-eye image for the same frame, then move on to the next frame. The user wears LCD shutter glasses which ensure that each eye sees only the correct image, by going opaque over the eye that should be seeing nothing. These glasses used to cost over $1000 each in the early 90s. They were the basis for the early 3D televisions at around $100 per pair. Now they are down to $30 per pair.
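A minimal sketch of that frame-sequential pattern (drawScene and the 120 Hz figure are illustrative, not from any real display API):

    #include <cstdio>

    enum Eye { LEFT, RIGHT };

    void drawScene(Eye eye, int frame) {
        std::printf("frame %d: %s eye\n", frame, eye == LEFT ? "left" : "right");
    }

    int main() {
        // A 120 Hz display alternating eyes yields 60 Hz per eye.
        for (int refresh = 0; refresh < 6; ++refresh) {
            Eye eye = (refresh % 2 == 0) ? LEFT : RIGHT;
            int frame = refresh / 2;   // both eyes show the same frame
            drawScene(eye, frame);
            // The shutter glasses go opaque over the other eye here.
        }
        return 0;
    }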

In all these cases both of the eyes are focusing at a specific distance - wherever the screen is located. There is no way for the user to change focus and bring parts of the scene into focus while letting others go out of focus as in the real world.


"people hate helmets, but people like sunglasses"

ergonomics and health issues of various displays

Typically in museums and other places with many visitors it is necessary either to give the glasses away to the user (with the paper ones) or to wash them (with the polarizing ones) to keep things sanitary. This is more difficult with HMDs, where people have tried using alcohol wipes.


Image Generator

You need a computer capable of driving the display device at a fast enough rate to maintain the illusion.

In the past (i.e. the 90s) that usually meant either simple scenes, very specialized graphics hardware, or a lot of work in optimizing the software. This is less true today, where scenes are getting more complex, the hardware more commonplace, and the software more capable, mostly thanks to the video-game industry.

Benchmarks on CPUs and graphics cards aren't really very meaningful. They can give ballpark figures, but a lot of factors combine to give the overall speed/quality of the virtual environment.

Multiple processors are usually required, since there tend to be multiple simultaneous jobs to be performed, e.g. generating the graphics, handling the audio, and synchronizing with network events.

Multiple graphics engines are pretty much required if you have multiple display surfaces.

The ability to 'pipeline' the graphics is pretty much required.
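Here is a rough sketch of that kind of decoupling: a simulation thread publishes scene states while a render thread consumes the most recent one. All names are illustrative, and a real system would also fan out to one render pipe per display surface:

    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <mutex>
    #include <thread>

    struct SceneState { int frame = 0; };

    std::mutex stateLock;
    SceneState shared;               // latest completed simulation state
    std::atomic<bool> running{true};

    void simulate() {
        for (int f = 1; f <= 5; ++f) {
            std::this_thread::sleep_for(std::chrono::milliseconds(20)); // fake work
            std::lock_guard<std::mutex> g(stateLock);
            shared.frame = f;        // publish the new state
        }
        running = false;
    }

    void render() {
        while (running) {
            SceneState local;
            {
                std::lock_guard<std::mutex> g(stateLock);
                local = shared;      // grab a snapshot, then draw without blocking sim
            }
            std::printf("rendering frame %d\n", local.frame);
            std::this_thread::sleep_for(std::chrono::milliseconds(16)); // ~60 Hz
        }
    }

    int main() {
        std::thread sim(simulate), draw(render);
        sim.join();
        draw.join();
        return 0;
    }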


Tracking System

At minimum you want to track the position (x, y, z) and orientation (roll, pitch, yaw) of the user's head - 6 degrees of freedom.

You often want to track more than that - one hand, the other hand, legs? full body?

An important factor is how far the user can move - what size area must the tracker cover?

Can line of sight be guaranteed between the tracker and the sensors?

What kinds of latencies are acceptable?
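As a sketch of what a single tracker report might carry: the six degrees of freedom plus a timestamp, so the application can reason about latency. The struct and the 50 ms threshold are illustrative, not from any real tracker SDK:

    #include <cstdio>

    struct TrackerSample {
        double t;               // time the sample was taken, in seconds
        float x, y, z;          // position
        float roll, pitch, yaw; // orientation
    };

    bool tooStale(const TrackerSample& s, double now, double maxLatency = 0.050) {
        return (now - s.t) > maxLatency;  // e.g. reject samples older than 50 ms
    }

    int main() {
        TrackerSample head = {10.000, 0.1f, 1.7f, 0.3f, 0.0f, -0.1f, 0.3f};
        double now = 10.030;    // 30 ms later
        std::printf("stale? %s\n", tooStale(head, now) ? "yes" : "no");
        return 0;
    }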


Input Device

Input devices are perhaps the most interesting area in VR research. While the user can move their head 'naturally' to look around, how does the user navigate through the environment or interact with the things found there?
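One common answer is joystick-driven 'flying', where stick deflection moves the user through the world along the gaze direction. A minimal sketch, with illustrative names and values:

    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    // Move the user's virtual position by stick deflection (-1..1)
    // along the direction the head is facing (yaw-only for brevity).
    Vec3 navigate(Vec3 pos, float yaw, float stickForward, float dt) {
        const float speed = 2.0f;  // meters per second at full deflection
        pos.x += std::sin(yaw) * stickForward * speed * dt;
        pos.z += -std::cos(yaw) * stickForward * speed * dt;
        return pos;
    }

    int main() {
        Vec3 pos = {0, 0, 0};
        pos = navigate(pos, 0.0f, 1.0f, 0.016f);  // one 60 Hz frame, full forward
        std::printf("(%.3f, %.3f, %.3f)\n", pos.x, pos.y, pos.z);
        return 0;
    }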


Audio System

Ambient sounds are useful to increase the believability of a space

Sounds are useful as a feedback mechanism

Important in collaborative applications to relay voice between the various participants

Spatialized sound can be useful
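A minimal sketch of the simplest form of spatialization: attenuate a source by distance and pan it between the left and right channels by direction. A real system would use proper HRTFs; the rolloff curve and the assumption that +x is the listener's right are illustrative:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    void spatialize(float srcX, float srcZ, float listX, float listZ,
                    float& leftGain, float& rightGain) {
        float dx = srcX - listX, dz = srcZ - listZ;
        float dist = std::sqrt(dx * dx + dz * dz);
        float gain = 1.0f / (1.0f + dist);             // simple distance rolloff
        float pan = (dist > 0.0f) ? dx / dist : 0.0f;  // -1 = left, +1 = right
        leftGain  = gain * (1.0f - std::max(0.0f, pan));
        rightGain = gain * (1.0f - std::max(0.0f, -pan));
    }

    int main() {
        float l, r;
        spatialize(2.0f, 0.0f, 0.0f, 0.0f, l, r);  // source to the listener's right
        std::printf("left %.2f, right %.2f\n", l, r);
        return 0;
    }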


Networking

Often useful to network a VR world to other computers.

We need high-bandwidth networking for moving large amounts of data around, but even more important than that, we need Quality of Service guarantees, especially in regards to latency and jitter.
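As a sketch of why jitter matters: rather than snapping a remote participant's avatar to each update as it arrives (jitter then shows up as stutter), a receiver can render a fixed delay in the past and interpolate between timestamped updates. Everything here is illustrative:

    #include <cstdio>

    struct TimedPose { double t; float x; };  // one axis, for brevity

    // Linearly interpolate between the two samples bracketing renderTime.
    float poseAt(const TimedPose& a, const TimedPose& b, double renderTime) {
        double u = (renderTime - a.t) / (b.t - a.t);
        return a.x + static_cast<float>(u) * (b.x - a.x);
    }

    int main() {
        TimedPose p0 = {1.00, 0.0f}, p1 = {1.10, 0.5f};  // received updates
        double now = 1.15, delay = 0.08;                 // render 80 ms behind
        std::printf("x = %.3f\n", poseAt(p0, p1, now - delay)); // at t = 1.07
        return 0;
    }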


Authoring Tools

Most VR programming is done 'from scratch' or using lab-based software. There are a few major commercial products in use, but they come and go fairly regularly.



What we want



 
classic CAVE from the early 90s

To put this hardware into context, in 1991 we had:



CAVE2


VR has gone through several hype phases, with the biggest being in the mid 80s and mid 90s. With the release of low-cost headsets we are now in the midst of another hype phase.


Coming Next Time

Tools we are going to use in the class


Before Next Class

Make sure you have a current copy of Chrome and Firefox on your laptop

Make sure you have a working Wi-Fi connection to UIC WiFi

Create an image (jpg, png, pdf) with a photo of yourself, your name, and your interests related to this course

Next time at the beginning of class everyone will connect to the CAVE2, drag and drop their image onto the screen, and give a brief 1-minute introduction so we can get to know each other a little bit.


So, for example, I could show something like this:


We will all be using the SAGE software regularly in the class so be sure to bring your laptop with you to class each day.


last revision 1/12/15