Virtual Reality and Real Virtuality

Completed under the supervision of Slobodan Radosavljevic, Lecturer at UQTR. Developed from January to April 2020 during COVID-19, the project was created with limited access to campus resources and ultimately premiered online when galleries closed.
Overview
In my capstone project exhibited at Galerie R3 (UQTR), I brought media technology and theory into an installation to ask two questions: how can we confirm we are in reality, and what distinguishes reality from virtuality? Inside a semi-enclosed space, I performed within a room rebuilt in both physical and virtual form, making that boundary perceivable.
Project Facts
Role: Artist / VR Implementation
Team: Individual project (faculty supervision)
Context: Capstone project for Galerie R3 (UQTR); presented online due to COVID-19 closures.
Tools: SketchUp, Unity, Microsoft Visual Studio, Oculus Quest hand tracking; cameras; projections; TV
Outputs: Installation + performance; three-perspective video system; post-performance visitor mode
The UX Question
How might we help an audience assess reality when perception is split across headset view, physical-camera view, and recorded view?
How do we keep the work understandable when the live performance is absent?
Audience & Journey
- Viewers watch from outside while the performance in 3D space is partially blocked, so they cannot see the whole image at once.
- They reconstruct the event through external 2D projections from multiple viewpoints.
- After the performance, visitors enter the installation and interpret the work through the environment and the three videos.
- They leave with a question: should virtuality replace reality or coexist with it, and how should we manage that relationship today?
Key Design Decisions
- Built a room that exists in both worlds: a real room and its virtual counterpart.
- Used hands as the bridge: what is seen in VR is tied to embodied touch and presence.
- Projected the VR headset screen view in real time to externalize the virtual perspective.
- Projected a headset-mounted camera view in real time as a parallel reality channel.
- Recorded the full performance with a third camera and looped it afterward for visitors.
- Planned the build in three steps—virtual scene, real scene, then hand tracking/performance—to keep the system testable.
Build / Prototype / Iterate
I modeled a room matching the gallery space in SketchUp, imported it into Unity, and implemented the scene and hand tracking. Under a tight timeline, the hardest parts were learning two new tools quickly (including programming) and completing the localization step: aligning the headset's virtual scene with the physical installation so the two rooms coincide.
- Model → import → implement interaction/hand tracking.
- Calibrate localization between virtual and physical space.
- Tune the three-video system so meaning holds even without live performance.
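The calibration step can be illustrated with a minimal sketch: measure two anchor points (for example, two room corners) in both physical and virtual floor-plane coordinates, then solve for the rotation and translation that maps one onto the other. The function names and anchor coordinates below are hypothetical, for illustration only; the project's actual alignment was done inside Unity.

```python
import math

def rigid_align_2d(phys, virt):
    """Find the rotation (radians) and translation that map virtual
    floor-plane (x, z) coordinates onto physically measured ones.
    phys, virt: lists of two (x, z) anchor points each."""
    # Direction vector between the two anchors in each space.
    dpx, dpz = phys[1][0] - phys[0][0], phys[1][1] - phys[0][1]
    dvx, dvz = virt[1][0] - virt[0][0], virt[1][1] - virt[0][1]
    # Rotation that turns the virtual direction onto the physical one.
    theta = math.atan2(dpz, dpx) - math.atan2(dvz, dvx)
    c, s = math.cos(theta), math.sin(theta)
    # Translation so the first virtual anchor lands on its physical twin.
    tx = phys[0][0] - (c * virt[0][0] - s * virt[0][1])
    tz = phys[0][1] - (s * virt[0][0] + c * virt[0][1])
    return theta, (tx, tz)

def apply_align(theta, t, p):
    """Map one virtual point into physical coordinates."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + t[0], s * p[0] + c * p[1] + t[1])
```

With two anchors matched this way, every point of the virtual room lands on its physical counterpart (assuming the model is built at 1:1 scale, as it was here by modeling the gallery dimensions in SketchUp).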
Outcome
The final work supports two phases: a live performance and a post-performance visitor experience. Visitors can still interpret the work through the installation and three video perspectives when the performance is absent.
Original French essay