Shifting Planes: Vertical Wayfinding Through Augmented Reality Assisted Navigation

Rebecca Rhee
14 min read · Dec 18, 2021


Masters of Design Thesis — Research Phase

Overview

How do we navigate the environment around us? At a high level, navigating your environment consists of a few simple steps: deciding where you want to go, figuring out how to get there, recognizing that you’ve arrived, and finding your way back (Kim et al., 2015). In a familiar environment, we complete these steps instinctively. Even in unfamiliar places, while we may not get there as smoothly, with a little assistance we can still reach our destination. In the end, we focus on the path before us and pay little attention to how we move. We don’t attend to our sense of balance, nor do we consciously think about the placement of our feet as we walk. Rock climbing expands on this and adds new complexities to our relationship with navigation. A climber still needs to work through those same simple tasks, but with additional steps layered on top. The simple act of wayfinding becomes more challenging: we are no longer moving along a horizontal plane but must fight gravity to reach a destination on a vertical one. Our sense of orientation is flipped, and we must move strategically with our whole bodies, right down to our fingers and toes.

I had been climbing for about two to three years, drawn in by the thrill of scaling high walls and finishing challenging routes. It was cardio without feeling like exercise. This past spring, I found out some friends were also interested in the sport. Some were more experienced than others, but we all wanted to try it. I hadn’t gone since moving to Washington, so it was great to start climbing again with other people, and I wanted to push myself and see how much further I could go. Convincing friends to join me in this pursuit took no effort at all. During the summer my buddies and I climbed nearly every day, determined to work our way from beginner-level routes to more advanced ones, with the goal of eventually becoming lead certified. Through this endeavor, I was reminded once more that climbing any route successfully requires more than just the climber.

Climbing is a complex sport that includes many different forms, such as bouldering, top rope, lead, and free climbing. There are many more forms beyond these four, but they are the best known. While these forms differ from one another, they all require some problem solving to reach the top. Routes can be complex, and at times climbers get stuck. In top rope and lead, a second person, the belayer, manages the climber’s safety and also provides beta (tips) when needed.

In gyms, you’ll often hear climbers yelling ‘take’ or ‘falling’ to prepare the belayer to pull the rope tight or to catch a fall. This usually happens when a climber has failed to advance or is pausing to work out a solution to a problem. Climber and belayer confer to solve the problem and navigate this vertical puzzle together. Occasionally, outside disruption makes this difficult, especially as the distance between the two partners grows. A climber might need beta from their belayer but be unable to hear them over the noise; a friend of mine described the belayer’s voice as blending into the ambient noise. In other cases, the path can be undefined and hard to navigate because of a lack of indicators. These frictions led me to ask: what could a more seamless interaction between a climber and belayer look like? How do climbers navigate a vertically complex environment, and how could this be approached?

With the recent introduction of the Metaverse and broader industry adoption, augmented reality (AR) and virtual reality (VR) have shown that they could be major players in interaction design. Research into AR/VR user experience and development has only scratched the surface of what could be possible. Features like speech-based commands, gaze tracking, and line-of-sight displays are areas of interaction that researchers and designers utilize, study, and build upon. Mobile phones, tablets, and head-mounted displays (HMDs) are the devices used to interact in AR/VR. Mobile phones are highly accessible, and industries favor them for spaces like marketing; however, they are not ideal for activities that limit hand use. HMDs are useful here because they do not rely on touching a screen to manipulate an object. Instead, people can use speech and gaze tracking to navigate, or motion gestures to activate features. HMDs also enable 3D collaboration: projects like machine development, where people in different locations asynchronously interact with shared virtual models (NASA Jet Propulsion Laboratory, 2020), are among the implementations in use today.

These are the areas that intrigue me. How can we improve the AR/VR interaction model to allow for synchronous collaboration? How can we design an experience built on seamless, intuitive, multi-modal interactions? Rather than relying on 2D conventions like menu selection, how can we use 3D space to react to and interact with the environment around us? In this thesis, I aim to use rock climbing as a space to bring new considerations to the AR/VR experience, such as assisted vertical navigation of a complex environment and real-time collaboration. It will require me to contemplate multiple facets of interaction to make a seamless experience for climbers and belayers. To start, I needed to gather insight that could lay down a starting point for answering these questions.

Secondary Research

In the space of mixed reality and climbing, current products focus on enhancing the climbing experience through gamification. By adding competition, spectacle, and/or customization, users can exercise while having fun, all within the convenience of their home or climbing gym. But this leans away from the classic climbing experience. Other research has explored rock climbing with a virtual overlay on a physical wall (Kosmalla et al., 2017). Kosmalla et al. turned to physical proxies, since most augmented walls had focused on enhancing the boards themselves or on using remote controls to manipulate the climber’s body. They used physical holds as markers mapped into a virtual environment (VE) that was projected onto a wall, providing tactile feedback while giving the impression of climbing a realistic rock wall. While the project still fell toward the gamification end of the spectrum, combining a physical climbing wall with a VE is a viable direction and provides a basis worth exploring.

I also needed to understand whether vertical wayfinding would be possible with the technology available today. Current studies look at the use of AR/VR to design wayfinding aids through complex environments, extending to navigation across multiple floors (Kim et al., 2015). Additional studies have looked into wayfinding through speech-based navigation and the perception of distance (Feng et al., 2006). This gave insight into how directive communication between two people should be considered. However, these studies only considered horizontal navigation, or at most moving from one horizontal plane up or down to the next.

Bock et al. look at vertical navigation through a different lens. Their research tested whether vertical navigation was possible and whether a person’s sense of orientation was disrupted when navigating a virtual maze. They found that participants could still navigate a vertical maze almost as well as a horizontal one (Bock et al., 2020). The possibility of true vertical navigation wasn’t far off.

Primary Research

With these insights gathered, I conducted several field experiments over the following weeks. These experiments would help me understand the space of rock climbing and how to approach designing an AR experience.

Two Person Camera View

In the initial phase of the field research, participants strapped GoPro cameras to their heads so I could observe the difference between the climber’s and the belayer’s perspectives. I wanted each camera to match its wearer’s line of sight (LoS) as closely as possible, to mimic what the climber and the belayer were seeing.

Both views were time-synced and laid out side by side (a minimal sketch of this step follows the observations below). Comparing them revealed the following discrepancies between the two perspectives:

Climber:

  • The climber sometimes tests their grip on each hold before proceeding. A seemingly good hold can turn out to be terrible once the climber has received tactile feedback from their fingers or toes. A hold is considered good when it offers relatively easy placement and some resistance to grip, whereas a bad hold can be slippery, shallow, flat, or smooth.
  • The severe upward angles can make holds look deceptively good until the climber places their hand or foot onto the hold.
  • The position of a hold can stump the climber and require testing out different body positions. Distance can also be an issue: from far enough away, a hold can be perceived as closer than it actually is until the climber attempts to reach for it.

Belayer:

  • Viewing from below at a steep angle tends to skew the belayer’s ability to accurately estimate the proximity of a hold to the climber.
  • The provided beta can be quite difficult for the climber to follow because a foothold is too high or a hold is too far away. It isn’t until the climber attempts the beta that the belayer gets a better indication of distances.
  • The belayer’s ability to distinguish a good hold from a bad one is skewed by the severe angle of their LoS. Only once the climber places their hands or feet on the hold can the belayer properly assess its quality. This falls in line with Knapp and Loomis’ work on perceived egocentric distance from the observer’s perspective (Knapp & Loomis, 2003): the distance and angle are not favorable for the belayer to accurately judge an object.
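As a note on method, the side-by-side review footage was assembled offline. Below is a minimal sketch of one way two clips like these could be trimmed to a common start point and stacked for comparison, assuming the sync offsets are found manually (for example, from a call audible in both recordings); the file names and offsets are placeholders, not the actual footage.

```python
# Hypothetical sketch: trim two GoPro clips to a shared start point and lay them
# out side by side with ffmpeg. File names and offsets are placeholders found by
# hand (e.g., by lining up a "climb on" call heard in both recordings).
import subprocess

def side_by_side(climber_clip, belayer_clip, climber_offset_s, belayer_offset_s, out_path):
    cmd = [
        "ffmpeg",
        "-ss", str(climber_offset_s), "-i", climber_clip,  # trim climber view
        "-ss", str(belayer_offset_s), "-i", belayer_clip,  # trim belayer view
        "-filter_complex",
        # scale both streams to the same height, then stack them horizontally
        "[0:v]scale=-2:720[a];[1:v]scale=-2:720[b];[a][b]hstack=inputs=2[v]",
        "-map", "[v]", "-an", out_path,
    ]
    subprocess.run(cmd, check=True)

side_by_side("climber_head.mp4", "belayer_head.mp4", 12.4, 3.1, "side_by_side.mp4")
```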

Five Camera View

Expanding on the insight from the initial exploration, I attached five cameras to the climber to capture the LoS from the hands and feet. As discussed above, the climber relies on tactile feedback to assess the quality of a hold or to gauge distance, so it seemed to make sense to record from the perspective of the hands and feet. The fifth camera was mounted on the head.

Each video alone did not reveal much, and when all of them were viewed at the same time the result was completely overwhelming to watch and impossible to analyze. However, when the videos were curated to show hand or foot movement in sync with the field of view (FOV) from the head’s LoS, possible opportunities emerged. The following insights were gathered from the curated recording:

  • The climber is only able to move when they perceive with their hands and feet that their placement is steadfast and that their balance won’t be compromised.
  • The climber used their eyes to anticipate action while their core, hands and feet were used to follow the action through.
  • The cameras and rope obstructed each other, forcing the climber to reassess hand placement. Any camera-based system would need to be lightweight and extremely compact for climbers to even consider wearing the extra gear.
  • The FOV would need to be wider; the videos were cropped too tightly, preventing the viewer from garnering any additional information from the recordings.

Laser Pointer Beta

Some of the friction in climbing comes from the belayer’s inability to clearly relay beta, for a number of reasons. Gaps in climbing jargon (matching, gaston, dyno) and the level of noise often make communicating with the climber difficult. To address this, a simple laser pointer test was conducted to see the effect of having a direct mode of pointing.

This is not a new method; climbing instructors use laser pointers to help young students figure out a solution to a problem. But I felt it necessary to try it out with participants to make firsthand observations. The exploration revealed the following:

  • The belayer was able to concentrate mostly on the safety of the climber and less on describing moves to them.
  • The laser pointer was quite effective in directly communicating beta to a climber. The belayer could quickly point to a hold, and the climber would immediately follow through.
  • It cut down the time the belayer spent describing the appearance and proximity of a hold; the climber no longer needed to process and translate what the belayer was relaying and could simply look for the indicator.
  • While it doesn’t eliminate all the friction, it showed that pointing directly at the wall alleviated some of the indirection in communication.
  • However, the belayer must keep both hands, or at least the brake hand, on the rope at all times, so holding the laser pointer while managing the rope was inefficient. This points to the necessity of a hands-free interaction.

Lead Class

The final exploration was advancing from top rope climbing and belaying to lead climbing and belaying. This involved a three-day course spread over three weeks to learn how to clip in correctly, fall safely, belay correctly, and catch a fall. At the end of the course, I had to pass a test to become lead certified.

From the lessons, I learned that on both ends the list of roles and responsibilities grows compared to top rope. The climber always has to be aware of how they clip into a quickdraw. Should they accidentally clip the wrong side of the rope, or run it the wrong direction through the clip, serious issues can follow: if the climber falls, the rope could release from the quickdraw, making the fall longer or making it harder for the belayer to lower the climber safely. The climber must also adjust their technique, since clipping requires taking one hand off the wall, which means finding a balanced position to do so safely.

The belayer’s cognitive load is also greater. In top rope, while it is important for the belayer to always watch the climber, they can take a more relaxed approach; the risk of a long fall is much lower than in lead. In lead, the belayer is mostly feeding rope to the climber, but they must pay attention to how much slack is out at all times. Since there must almost always be some slack, the height of a fall will be greater. Belayers must constantly assess the position of the last clip relative to the climber’s harness to judge how much or how little slack to give in case of a fall. Both hands must stay on the rope at all times; the brake hand is especially crucial, as there is little else preventing the rope from sliding through the belay device.

Discussion

The four explorations provided insights that will shape the direction of this thesis to varying degrees, and the observations from these recordings opened up a number of possible design opportunities. Some opportunities focus on the climber’s role. One possible design considers an experience that lets a climber perceive their route effectively and rehearse it before attempting to climb. This relates to techniques climbers already use: climbers always assess a route before they actually attempt it. Some simply assess by looking, while others move their hands in the air to simulate a test run and create a mental map of their body placement. With augmented reality, the display could provide a visual map, using speech, gaze, and hand tracking to pre-map the route before climbing. This map could disappear or reappear when the climber calls ‘take’ or verbally recalls it. It could also be shared with the belayer, so they can assess the mapping and help the climber along when they are stuck on part of the problem, collaborating over the map to figure out the best beta (a small sketch of this idea follows). The key point would be to complete this interaction without distracting either party.
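As a loose illustration of that pre-mapping idea, here is a minimal sketch of how such a route map and its voice-toggled visibility could be represented; the move format, hold IDs, and command phrases are assumptions made for this sketch, not a specified design.

```python
# Hypothetical sketch of a pre-mapped route: the climber records an ordered plan of
# (limb, hold) moves before starting, and simple voice commands hide or recall the
# overlay. Hold IDs and commands are placeholders for illustration only.
from dataclasses import dataclass, field

@dataclass
class RouteMap:
    moves: list = field(default_factory=list)   # e.g. [("right_hand", "hold_03"), ...]
    visible: bool = False

    def plan(self, limb: str, hold_id: str):
        """Add one rehearsal step (captured via gaze/hand tracking in practice)."""
        self.moves.append((limb, hold_id))

    def on_voice(self, command: str):
        # "take" hides the overlay while the climber rests; "show map" recalls it.
        if command == "take":
            self.visible = False
        elif command == "show map":
            self.visible = True

route = RouteMap()
route.plan("right_hand", "hold_03")
route.plan("left_foot", "hold_05")
route.on_voice("show map")
print(route.visible, route.moves)
```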

Other considerations pertained to hands-free interactions for both the climber and belayer. This is especially crucial, since both participants need their hands and/or feet to complete their tasks. The five-camera recording made it obvious that multiple camera views can be overwhelming, and we do not want to add to the belayer’s load. Should this direction be pursued, superimposing the climber’s perspective for the belayer needs to be unobtrusive and intuitively designed. It could potentially allow the belayer to perceive the quality and distance of holds more accurately if their LoS could mimic the climber’s. The question, then, is how the belayer could view this without being overwhelmed. A combination of speech-based commands and motion tracking, like shaking the head, could shuffle between camera positions (a small sketch follows). But before this could work, the cameras would need a wider angle so the belayer could see the holds around each limb; the climber’s proximity to the wall made the LoS too narrow for the belayer to gain much information. Additionally, the placement of the view within the HMD would be another space to consider.
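As a rough illustration of that shuffle interaction, below is a minimal sketch of the kind of view-selection logic an HMD app might run; the feed names, command phrases, and gesture event are assumptions for this sketch, not an existing headset API.

```python
# Hypothetical sketch: cycle the belayer's superimposed overlay between the
# climber's camera feeds using a voice command or a head-shake gesture.
# Feed names and event strings are assumptions for illustration only.
from itertools import cycle

FEEDS = ["head", "left_hand", "right_hand", "left_foot", "right_foot"]

class ViewSelector:
    def __init__(self, feeds):
        self._cycle = cycle(feeds)
        self.active = None                          # overlay hidden until requested

    def on_event(self, event: str):
        if event in ("next view", "head_shake"):    # speech or motion input
            self.active = next(self._cycle)
        elif event == "hide view":                  # clear the overlay entirely
            self.active = None
        return self.active

selector = ViewSelector(FEEDS)
for event in ["next view", "head_shake", "hide view", "next view"]:
    print(f"{event!r} -> showing {selector.on_event(event)}")
```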

Haptic feedback, hands-free laser projection, and drone-based cameras were also considered, each a viable approach to other modes of communication. Haptic feedback would allow for hands-free communication by relying on vibration patterns to cue directions for the climber. A pattern for each limb could signal which hand or foot should move, and the strength or speed of the vibration could convey direction (sketched below).
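To make that mapping concrete, here is a minimal sketch of one way such a cue vocabulary could be encoded; the pulse counts, durations, and intensities are illustrative assumptions rather than values derived from the field work.

```python
# Hypothetical sketch of a haptic cue vocabulary: the number of pulses identifies
# the limb, while pulse duration and intensity roughly encode direction.
# All values are illustrative assumptions, not results from the explorations.

LIMB_PULSES = {"left_hand": 1, "right_hand": 2, "left_foot": 3, "right_foot": 4}

# direction -> (pulse duration in ms, intensity 0.0-1.0); shorter/stronger = reach up
DIRECTION_CODE = {
    "up":   (120, 1.0),
    "side": (250, 0.6),
    "down": (400, 0.3),
}

def encode_cue(limb, direction):
    """Return a pulse train as (duration_ms, intensity) pairs for a wearable motor."""
    duration_ms, intensity = DIRECTION_CODE[direction]
    return [(duration_ms, intensity)] * LIMB_PULSES[limb]

print(encode_cue("right_foot", "up"))  # four short, strong pulses: move right foot up
```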

The laser pointer could be mounted on the belayer’s head or foot and powered on and off using simple verbal commands. Alternatively, the laser could be mounted on the climber, with the belayer controlling its direction, but this might add physical obstruction, and the climber might not want to wear additional gear on their body.

The last consideration was providing a third-person perspective with a drone. However, this was quickly dropped, as climbers said emphatically that drones would be an annoyance and would not be welcome. Another option could be a simpler system with a balloon-mounted camera controlled by voice and motion tracking, which would be silent and possibly less intrusive to both parties. Overall, certain opportunities either would not work or would demand so much prototyping time that they could derail the timeline; that effort would be better directed elsewhere.

Further exploration is needed to settle on a more focused direction. Based on the research conducted, the next step is to isolate and map out a crucial moment during a climb, breaking it down into specific steps and storyboarding it to consider where, why, and how AR intervention could be valuable or unnecessary, with a focus on an experience built around collaborative navigation. By the end of this work, I hope to have a seamless interaction for vertical wayfinding and to contribute to spatial interaction research in the AR/VR experience.

References:

Bock, O., Fricke, M., & Koch, I. (2020). Human wayfinding in the horizontal versus vertical plane. Journal of Environmental Psychology, 70, 101446. https://doi.org/10.1016/j.jenvp.2020.101446

Feng, J., Sears, A., & Karat, C. (2006). A longitudinal evaluation of hands-free speech-based navigation during dictation. International Journal of Human-Computer Studies, 64, 553–569. https://doi.org/10.1016/j.ijhcs.2005.12.001

Kim, M. J., Wang, X., Han, S., & Wang, Y. (2015). Implementing an augmented reality-enabled wayfinding system through studying user experience and requirements in complex environments. Visualization in Engineering, 3(1), 14. https://doi.org/10.1186/s40327-015-0026-2

Knapp, J., & Loomis, J. (2003). Visual perception of egocentric distance in real and virtual environments. In Virtual and Adaptive Environments (pp. 21–46). https://doi.org/10.1201/9781410608888.pt1

Kosmalla, F., Zenner, A., Speicher, M., Daiber, F., Herbig, N., & Krüger, A. (2017). Exploring rock climbing in mixed reality environments. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (p. 1793). https://doi.org/10.1145/3027063.3053110

NASA Jet Propulsion Laboratory. (2020, September 17). Visualizing Space Exploration: AR, VR & Emerging Tech (live public talk). https://www.youtube.com/watch?v=Xv_9C5Mms4w
