ATLAS in silico: System

Implementation of ATLAS in silico on the Varrier™ auto-stereographic virtual reality display and on a passive stereo 3D rear-projection display system is described in this section.

The installation has undergone multiple phases of development. It was originally developed for the one-of-a-kind Varrier™ 60 LCD tile, semi-circular, 100-million pixel autostereographic display located at the UC San Diego Calit2 Immersive Visualization Laboratory. It was later translated to a stereoscopic 3D rear-projection display system with rectilinear geometry to enable the artwork to tour to exhibition venues. Access to the Varrier™ display, developed at UIC EVL by Sandin et al. (2005) and subsequently moved to UCSD, was generously provided by Tom DeFanti, Dan Sandin, Jurgen Schulze, Greg Dawe, Larry Smarr (founding director, Calit2) and Ramesh Rao (director, Calit2/Qualcomm Institute) while the author was at UCSD/Calit2.

 
 
Figures:
  1. Block diagram: software and system
  2. Diagram of the ATLAS in silico Varrier™ installation: compute, audio and tracking/input
  3. Audio spatialization strategies for each display geometry
  4. Varrier™ auto-stereographic display with tracking and audio systems
  5. Two camera optical markerless tracking module
  6. Video: Optical markerless head/hand tracking based interaction (33 seconds)
  7. Video: Tetherless IR reflective marker based head and hand tracked interaction (22 seconds)
  8. Diagram of ATLAS in silico stereo 3D rear-projection display installation: compute, audio and electromagnetic tracking (Flock of Birds)
  9. Wireless 3D mouse for use with Vicon IR camera tracking system for stereo 3D rear projection
  10. Visitors interacting with ATLAS in silico via Kinect body, head/hand tracking in stereo 3D rear projection system
  11. Floor diagram and technical and power overview for the system in Figure 10
  12. Gestures for Kinect-based interaction on stereo 3D rear-projection system
  13. Video: Gesture-based interaction with ATLAS in silico on stereo 3D rear-projection system (2:55)
 
     
System and Block Diagram
  Infrared (IR) motion tracking, custom computer vision, multichannel spatialized audio, 3D graphics, networking, SQL database, and stereoscopic display systems combine to create the installation and participant experience.  
 
Figure 1: Software and system block diagram for the installation. The generative virtual world and interactive experience are created through the integration of several interrelated software modules running on a computer network with output to a stereoscopic display and multichannel audio system.
 
     
 
Varrier™ display: The semi-circular autostereographic display comprises 60 custom LCD panels. An image of the display is shown in Figure 2 below. The display surrounds an active participant in a tracked volume of approximately 10 feet x 10 feet in front of the display surface, with additional space beyond the tracked volume for multiple viewers. A 16-node (1 master, 15 slave) dual-Opteron Linux cluster connected by a 1-Gigabit network performs the computation for stereo 3D graphics output to the display. Fifteen groups of 4 displays are each driven by a compute node with two GeForce 7900 GPUs.
 
     
 
Figure 2: ATLAS in silico Varrier™ installation: compute, audio, tracking/input.
     
 
Display
Autostereographic, 100M-pixel, semi-circular 60 LCD-tile, 1 node per 4 LCDs driven by 2 GeForce 7900 GPUs, cylindrical; simulated virtual barrier in world coordinates distributed and coordinated with physical barrier strip on LCD surface.
Computation
16-node (1 master / 15 slave), dual-Opteron Linux, 1-Gigabit backplane; vision, audio, and tracking servers.
Graphics, Audio, Data Modules
COVISE VR framework, OpenSceneGraph, OpenGL; Pure Data, MySQL, OpenCOVER (graphics subsystem)
Audio
(10) Meyer MM-4 + Crown XLS 202 amp, 1 UltraCompact Sub, cylindrical orientation
Tracking/Interaction
Head/hand; ARTtrack2 IR cameras, reflective markers on headband/wand; secondary optical markerless; VRCO Trackd (tracking input to application)
 
     
     
Graphics, Tracking/Input, Audio, Data and Displays
 
Graphics Module (Varrier): Graphics is the main software module for the system. It uses multiple software libraries, including COVISE (developed by HLRS, the High-Performance Computing Center at the University of Stuttgart) as its VR framework, OpenSceneGraph (OpenSceneGraph.org), and OpenGL (OpenGL.org). At runtime, the head (master) compute node spawns child processes on each of the 15 rendering nodes. Once graphics is running, it connects to the sound server through a network socket connection and to the tracking server. Graphics takes input from the physics sub-module and the shape grammar generator sub-module, both of which are driven by GOS and social and environmental data stored in a MySQL database. The graphics module performs state control for the integration of all system modules. Graphics receives tracking/input information and in response updates the stereo 3D imagery that is output to the display. The system stays synchronized through communication between the head node and the 15 rendering nodes, each of which runs a copy of the COVISE application for the installation. Each render node receives event data from the head node and waits for commands from the head node to update the stereo imagery on its corresponding 4 display tiles, as sketched below.
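As a rough illustration of this master/render synchronization pattern, the following sketch shows a head node broadcasting a per-frame state message (a frame counter plus the tracked head position) to render nodes over TCP, with each render node blocking until an update arrives. This is only a sketch of the pattern: in the actual system the synchronization is handled inside COVISE/OpenCOVER, and the host names, port, and JSON message format below are illustrative assumptions.

# Minimal sketch of head-node -> render-node frame synchronization.
# Assumptions: COVISE/OpenCOVER handles this internally in the real system;
# the hosts, port, and JSON message format below are illustrative only.
import json
import socket

RENDER_NODES = [("render01", 7000), ("render02", 7000)]  # hypothetical hosts

def connect_render_nodes(nodes=RENDER_NODES):
    """Head node: open one TCP connection per render node."""
    return [socket.create_connection(addr) for addr in nodes]

def broadcast_state(frame, head_pos, connections):
    """Send one frame's state (frame counter + tracked head position) to all render nodes."""
    msg = (json.dumps({"frame": frame, "head": head_pos}) + "\n").encode()
    for conn in connections:
        conn.sendall(msg)

def render_node_loop(listen_port=7000):
    """Render node: block until the head node sends a state update, then redraw."""
    srv = socket.create_server(("", listen_port))
    conn, _ = srv.accept()
    buf = b""
    while True:
        buf += conn.recv(4096)
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            state = json.loads(line)
            draw_stereo_tiles(state)  # placeholder for the node's 4-tile stereo render

def draw_stereo_tiles(state):
    print("rendering frame", state["frame"], "for head at", state["head"])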
Tracking/Input: Several versions of the tracking code have been developed for the input/interaction systems corresponding to the display type used in each phase of the project. These include the Varrier™ display's ART IR cameras, custom optical markerless tracking, and three types of tracking for the rear projection system (electromagnetic (Flock of Birds), Vicon IR cameras, and Kinect). Each is described in more detail below. Irrespective of which display type or tracking system is used, tracking information is relayed to the graphics module using a trackd® server.
Trackd® server: Trackd is an application from Mechdyne (https://www.mechdyne.com/) that functions as an interface for tracking and input devices. It passes the position (x, y, z coordinates) and orientation (roll, pitch, yaw) of tracked markers or other tracked content (head, hand, gestures, etc.) to the graphics module, for each of the tracking methods used across all phases of the installation's development.
 
     
 
Audio: A multichannel audio monitor array is attached to each display system. For the Varrier this comprises 10 Meyer Sound MM-4 speakers with corresponding amplifiers and a sub, arranged in a cylindrical geometry (see Figure 2). For the rear-projection system it comprises either 8 Genelec 8040 speakers in a rectangular arrangement without a sub, or 8 KRK Rokit 6 speakers in a rectangular arrangement plus two KRK Rokit 10 subs (see Figures 8 and 10).
Sound is generated in real time by a custom PD (Pure Data, https://puredata.info/, by Miller Puckette) patch running on a dedicated Linux server. The sound server receives messages from both the graphics and tracking modules. In response to these tracking and graphics control messages, the PD server renders output to the multichannel audio system. Messages control the position of sounds, relay GOS data and contextual metadata values required to parameterize the sonification, and trigger multiple layers of sound elements, textures, and events. Sound rendering uses a blend of delay and amplitude panning. Panning is spread across the channels with per-channel delays derived from the display system geometry rather than from head-related transfer function values. This increases control over the perceived sound source location across a listening area that extends beyond the display configuration's sweet spot for the central interacting participant, out to the space in which the larger group of additional viewers/listeners is positioned. A given sound signal's delay is proportional to the distance between its virtual location (yellow circle in Figure 3 below) and a loudspeaker location.
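The per-channel delay and amplitude computation described above can be summarized in a short sketch. It assumes an inverse-distance amplitude roll-off and a 343 m/s speed of sound, and the half-cylinder speaker coordinates below are illustrative rather than the installation's exact layout or the PD patch's actual implementation.

# Sketch of distance-based delay/amplitude panning (cf. Figure 3).
# Assumptions: inverse-distance gain and a 343 m/s speed of sound;
# the speaker layout below is illustrative, not the installation's exact geometry.
import math

SPEED_OF_SOUND = 343.0  # m/s

# Hypothetical loudspeaker positions (metres) on a half-cylinder (Varrier case).
SPEAKERS = [(3.0 * math.cos(a), 3.0 * math.sin(a))
            for a in [math.radians(d) for d in range(0, 181, 20)]]

def panning(source_xy, speakers=SPEAKERS):
    """Return (delay_seconds, gain) per channel for a virtual source position."""
    channels = []
    for sx, sy in speakers:
        dist = math.hypot(source_xy[0] - sx, source_xy[1] - sy)
        delay = dist / SPEED_OF_SOUND   # t_n: proportional to source-speaker distance
        gain = 1.0 / max(dist, 0.1)     # a_n: inverse-distance roll-off
        channels.append((delay, gain))
    return channels

# Example: a virtual source slightly right of centre, 1 m in front of the display plane.
for n, (t_n, a_n) in enumerate(panning((0.5, 1.0))):
    print(f"channel {n}: delay {t_n * 1000:.1f} ms, gain {a_n:.2f}")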
 
     
 
Figure 3: Audio spatialization strategies for each display geometry: The figure shows speaker positions and distance based generation of delays (tn) and amplitude (an) values for the different display geometries (Left: Varrier, Right: rear-projection). Yellow circle represents a potential virtual sound source. The virtual sound source is moved along the plane of the speakers. For the Varrier (left) this is a half-cylinder, and for the rear-projection system it is a rectangle.
 
     
 
Data: GOS data, metadata, and contextual socio-environmental data are accessed at runtime via a series of SQL queries. The virtual environment is initialized with the complete first release of the GOS metagenomics dataset.
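For illustration, a runtime query might follow the pattern below. The table and column names (gos_sequences, read_id, sample_site, and so on) and the connection settings are hypothetical placeholders; only the general pattern of pulling GOS records and contextual metadata from MySQL at runtime reflects the description above.

# Sketch of a runtime metadata query against the MySQL database.
# The schema (table/column names) and connection settings are hypothetical;
# only the query pattern is meant to reflect how the installation pulls GOS records.
import mysql.connector  # assumes the mysql-connector-python package

def fetch_records(sample_site, limit=50):
    conn = mysql.connector.connect(host="localhost", user="atlas",
                                   password="...", database="gos")
    cur = conn.cursor(dictionary=True)
    cur.execute(
        "SELECT read_id, sequence_length, habitat, water_depth, temperature "
        "FROM gos_sequences WHERE sample_site = %s LIMIT %s",
        (sample_site, limit),
    )
    rows = cur.fetchall()
    cur.close()
    conn.close()
    return rows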
 
     
     
Varrier™ - Auto-Stereographic Display
     
 
Figure 4: Varrier™ 60 LCD tile, semicircular, 100-million pixel auto-stereographic display with ART IR tracking cameras, IR illuminators, and Meyer 10.1 audio system. Computing for the display is performed by a 16-node cluster (specifications are detailed in the table above).
 
     
 
Autostereographic display: A physical parallax barrier strip on the surface of each LCD panel is combined with a computed virtual barrier to create the autostereographic effect, so that users do not have to wear 3D glasses (active or passive) to see stereo 3D imagery. This combination produces a wide field of view by interleaving left- and right-eye perspectives and simulating the action of the LCD's physical barrier screen in virtual world coordinates, in combination with distributing and correlating perspectives through the physical line screen applied to the surface of each of the 60 LCD display panels. For additional development and technical details see Sandin et al. (2005).
Registration of the physical line screen on the surface of the custom LCD panels with the computed virtual line screen, the eye projection points, and the participant's tracked eye positions is accomplished via IR markers on a headband worn by the participant. Participants experience autostereographic imagery while within the tracked interactive volume (approximately 10 ft x 10 ft) in front of the display (see Sandin et al. (2005) for additional information).
The combination of the custom LCD panels' physical and virtual parallax barrier technology with precision tracking enables the user to experience stereoscopic 3D without the need for specialized 3D glasses (e.g. polarized or active shutter). See the video in Figure 7 below for a demonstration of IR marker based tracking.
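The role of the tracked head in forming the stereo projection can be sketched roughly as follows: the headband markers give a head position and orientation plus an approximate interpupillary distance, from which left- and right-eye projection points are offset along the head's lateral axis. The sketch below is a simplification (yaw-only rotation, 63 mm default IPD) and is not the installation's actual Varrier projection code, which is described in Sandin et al. (2005).

# Simplified sketch: derive left/right eye positions from a tracked head pose.
# Assumptions: yaw-only head rotation and a 0.063 m default IPD; the real
# Varrier projection (Sandin et al., 2005) is considerably more involved.
import math

def eye_positions(head_pos, yaw_deg, ipd=0.063):
    """head_pos: (x, y, z) in metres; yaw_deg: head yaw in degrees."""
    yaw = math.radians(yaw_deg)
    # Lateral (right-pointing) axis of the head in the horizontal plane.
    right = (math.cos(yaw), 0.0, -math.sin(yaw))
    half = ipd / 2.0
    left_eye = tuple(h - half * r for h, r in zip(head_pos, right))
    right_eye = tuple(h + half * r for h, r in zip(head_pos, right))
    return left_eye, right_eye

# Example: head at (0.2, 1.6, 1.5) m, turned 15 degrees.
print(eye_positions((0.2, 1.6, 1.5), 15.0))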
 
     
     
Tracking and Interaction
 
Tetherless head and hand tracking for full-body interaction and input is accomplished with ARTtrack2 IR cameras (ART Advanced Realtime Tracking) and IR reflective markers on a controller (hand position) and headband (approximates interpupillary distance). (Note: ARTtrack2 cameras are the predecessors of the current ARTtrack5 cameras.) See Figure 2 above for the location of the IR cameras.
In addition to the ART IR camera based tracking and interaction, a custom optical markerless tracking software module was developed. Both tracking modules ran in parallel for interaction on the Varrier.
 
     
 
Figure 5: Two-camera optical markerless tracking module: The Varrier display installation configuration utilized two tracking and activity detection methods and software modules. One module processed input from the ARTtrack2 cameras, which tracked IR reflective markers on the user's headband (interpupillary distance, head and body position) and an IR-reflective controller (see yellow squares (ART IR cameras) in Figure 2 above). The second processed input from overhead cameras 1 (angled) and 2 (top down) using OpenCV (https://opencv.org/) to create an optical markerless, gesture-based interaction mode. See the video below in Figure 6.
Both tracking systems and software modules ran simultaneously on the Varrier. The first enabled the autostereography and interaction with a wand controller. The second, camera-based module enabled gesture-based interaction without IR markers, but did not affect the autostereography of the display. This optical markerless system was developed prior to the widespread use of commodity tracking sensors in gaming such as the Kinect. It has since been superseded by a Kinect implementation for tracking and gesture-based interaction on the rear-projection display system. (See Figures 10-13.)
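A minimal sketch of the overhead-camera markerless approach: background subtraction on a camera feed followed by contour extraction yields the 2D position of the participant, which can then be relayed to the tracker interface. The camera index, thresholds, and the largest-blob heuristic below are illustrative assumptions, not the installation's actual module.

# Sketch of optical markerless tracking from an overhead camera using OpenCV.
# Illustrative only: camera index, thresholds, and the "largest blob = participant"
# heuristic are assumptions, not the installation's actual tracking code.
import cv2

def track(camera_index=0, min_area=500):
    cap = cv2.VideoCapture(camera_index)
    bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = bg.apply(frame)                 # foreground mask from background model
        mask = cv2.medianBlur(mask, 5)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x signature
        blobs = [c for c in contours if cv2.contourArea(c) > min_area]
        if blobs:
            # Treat the largest blob as the participant; report its centroid.
            c = max(blobs, key=cv2.contourArea)
            m = cv2.moments(c)
            cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            print("participant centroid (pixels):", cx, cy)
    cap.release()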
 
     
 
Video: Optical markerless interaction with ATLAS in silico stereoscopic 3D graphics on Varrier display. (33 seconds duration. Note: this video is silent.)
 
     
 
Video: Use of tetherless head and hand tracking using IR reflective markers on a headband and within a hand-held controller wand with ARTtrack2 IR cameras for interacting with ATLAS in silico on the Varrier™ display.
 
     
     
Translation from Varrier™ to Passive Stereo Rear Projection
  ATLAS in silico was translated to a stereoscopic 3D rear-projection display system with rectilinear geometry to enable the artwork to tour to exhibition venues.  
 
Compute systems: Running the ATLAS in silico modules on the Varrier display requires a 16-node cluster in addition to audio and tracking servers. As described above in the modules section, the head node of the cluster spawns child processes on each of the additional 15 rendering nodes. The main difference between this and the rear-projection display version is that the projection-based system has only one graphics node, which functions as both the head node and the render node. As shown in the block diagram and described in the modules section, the graphics node both renders the stereo images, one for each eye, and manages state control via message passing to the tracking and audio nodes in the three-workstation cluster. Figure 8 below shows the first translation of the installation to a projection system, which utilized an electromagnetic Flock of Birds tracking system for interaction/input.
 
   
 
Figure 8: Rear projection display configuration utilizing a Flock-of-Birds electromagnetic tracking system and multi-channel audio.
     
 
Display
Passive stereo; Da-Lite 3D Virtual Black polarization-preserving rear-projection screen, 144" x 84"; circular polarization filters and glasses.
Computation
Linux: 1 graphics, 1 tracking, 1 audio
Graphics, Audio, Data
COVISE VR framework, OpenSceneGraph, OpenGL; Pure Data, MySQL, OpenCOVER (graphics subsystem)
Audio
(8) Genelec 8040, no sub, rectangular orientation
Tracking/Interaction
Head/hand; Flock of Birds or Vicon + 6DOF mouse; VRCO Trackd (tracking input to application)
 
     
 
Graphics (rear-projection): In contrast to the Varrier implementation, which has a head node and multiple rendering nodes, graphics on the rear-projection system has one head node that also renders the images. This single graphics workstation renders both the left- and right-eye images to generate the stereo 3D effect, in contrast to the tiled display implementation, where each node renders stereo images on four LCD tiles.
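To illustrate the single-workstation stereo rendering, the sketch below computes an off-axis (asymmetric) viewing frustum for each eye relative to a fixed flat screen, the standard approach for head-tracked passive stereo on a projection wall. The screen dimensions, near-plane distance, and IPD are illustrative values; in the installation this projection setup is handled by COVISE/OpenCOVER.

# Sketch of off-axis (asymmetric) frustum computation for head-tracked passive
# stereo on a flat screen. Screen size, near plane, and IPD are illustrative values;
# the real projection setup is handled by COVISE/OpenCOVER.
SCREEN_W, SCREEN_H = 3.66, 2.13   # roughly a 144" x 84" screen, in metres
NEAR = 0.1

def off_axis_frustum(eye, screen_w=SCREEN_W, screen_h=SCREEN_H, near=NEAR):
    """eye: (x, y, z) relative to the screen centre, screen in the z = 0 plane,
    viewer at z > 0. Returns (left, right, bottom, top) at the near plane
    (the far plane is omitted here)."""
    ex, ey, ez = eye
    scale = near / ez  # project the screen edges onto the near plane
    left = (-screen_w / 2.0 - ex) * scale
    right = (screen_w / 2.0 - ex) * scale
    bottom = (-screen_h / 2.0 - ey) * scale
    top = (screen_h / 2.0 - ey) * scale
    return left, right, bottom, top

def stereo_frustums(head, ipd=0.063):
    """Left/right eye frustums from a tracked head position (IPD assumed 63 mm)."""
    hx, hy, hz = head
    left_eye = (hx - ipd / 2.0, hy, hz)
    right_eye = (hx + ipd / 2.0, hy, hz)
    return off_axis_frustum(left_eye), off_axis_frustum(right_eye)

# Example: viewer standing 2 m from the screen, head slightly left of centre.
print(stereo_frustums((-0.2, 0.0, 2.0)))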
 
     
 
Tracking/Input (rear-projection): The rear-projected display system has used three types of input/tracking methods since the initial port. The first was a Flock of Birds electromagnetic tracking system (http://www.ascension-tech.com/ - previous company site that redirects to https://www.ndigital.com/), as seen in Figure 8 above. This is a tethered head/hand tracking implementation. The second input system used tetherless head and hand tracking via 4 Vicon cameras to track IR reflective markers on the headband (interpupillary distance, head and body position) and a custom IR-reflective controller (3D mouse) (see Figure 9 below). This is very similar to the tracking/input on the Varrier display implementation. The third, and current, input system uses a Kinect for body tracking (activity detection) and for tetherless head and hand tracking/interaction. (See Figures 10 and 13 and the video below.)
 
     
  Audio and Data modules are identical for both display configurations.  
     
 
Figure 9: Wireless 3D mouse with IR reflective markers for use with Vicon IR cameras.
Custom controller developed at UCSD Calit2 by Greg Dawe and Jurgen Schulze for the installation.
 
     
     
Floorplan - Stereoscopic Rear-projection Installation
     
 
Figure 10: Visitors interacting with ATLAS in silico via Kinect body, head/hand tracking in the stereo 3D rear-projection system. The installation uses a circular polarization preserving rear-projection screen, two stacked projectors with circular polarizing filters, a tracked volume of 10 feet x 10 feet in front of the display, tetherless head, hand, and activity tracking via the Kinect body tracking SDK, and an 8.2 audio system.
  Figure 10 (above) shows ATLAS in silico as installed in the author's research space, the xREZ Art + Science Lab during a public open house event in 2019. The floorplan diagram in the figure below corresponds to the setup as seen in the image above.  
     
 
Figure 11: Floorplan diagram of the stereoscopic rear projection system/installation.
 
Installation components include:
  1. Projection: two stacked projectors with circular polarized glass filters behind the screen, and a polarization-preserving rear-projection screen.
  2. Truss: truss system to mount the audio system and cabling.
  3. Audio: 8 audio monitors and 2 subwoofers.
  4. Tracking/interaction: full-body interaction is accomplished by tetherless activity/head/hand tracking via a Kinect sensor within a 10-foot by 10-foot tracked volume in front of the display screen.
  5. Computing: a networked tracking workstation (Windows), audio workstation (Linux), and graphics workstation (Linux).
  6. Networking: other components include an audio interface, network switch, and assorted audio, network, and power cabling.
  7. Power: total power consumption is 5031 Watts (equivalent to ~3 to 4 standard 15 A circuits; see the note below).
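A rough check on the power figure, assuming a standard 120 V supply: 5031 W ÷ 120 V ≈ 42 A of total draw; with 15 A branch circuits derated to about 12 A for continuous loads, that works out to roughly 3.5 circuits, consistent with the ~3 to 4 circuit estimate above.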
Stereoscopic 3D: Participants use circular polarized 3D glasses to view and interact with the virtual environment (graphics) in stereoscopic 3D. The participant experience is facilitated through full-body interaction enabled by head, hand, and activity tracking via the Kinect sensor.
 
     
     
 
Figure 12: Four interactive gestures. Top left: scroll/point/highlight. Top right: clap to click/select. Bottom left: close fist to select, following the open palm shown at bottom right. Bottom right: open palm to explore and attract objects in the VR environment. These gestures are combined with modifiers, such as the right arm with closed fist extended overhead combined with the open palm of the left hand, to fly through or rotate the virtual world (see the video in Figure 13 below).
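The gesture set in Figure 12 can be approximated as simple rules over tracked skeleton joints. The sketch below assumes head and hand joint positions and hand open/closed states are already supplied by the tracking module (for example, from the Kinect skeleton data); the joint names, thresholds, and classification order are illustrative assumptions, not the installation's actual gesture recognizer.

# Sketch of rule-based gesture classification over tracked skeleton joints.
# Assumes the tracking module already supplies joint positions (metres) and
# hand open/closed states; all names and thresholds here are illustrative.
from dataclasses import dataclass

@dataclass
class Skeleton:
    head: tuple          # (x, y, z)
    left_hand: tuple
    right_hand: tuple
    left_hand_open: bool
    right_hand_open: bool

def distance(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def classify(s: Skeleton, clap_dist=0.12):
    """Map one skeleton frame to a gesture from Figure 12 (illustrative rules)."""
    if distance(s.left_hand, s.right_hand) < clap_dist:
        return "clap_to_select"
    # Modifier: closed right fist raised overhead while the left palm is open.
    if not s.right_hand_open and s.right_hand[1] > s.head[1] and s.left_hand_open:
        return "fly_or_rotate"
    if not s.right_hand_open:
        return "fist_select"
    if s.right_hand[1] > s.head[1]:
        return "point_scroll"       # raised open hand used for pointing/scrolling
    return "open_palm_attract"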
 
     
 
Figure 13: Video of gesture-based interaction with the stereoscopic 3D rear-projection display setup of the installation. This video shows how a user can combine multiple gestures in sequence for selection actions as well as for navigating/exploring the virtual environment.
 
     
     
References
 
     
     
     
     