Back in November 2017, Sammie de Vries (an up-and-coming entrepreneur from Amsterdam) approached me with his idea to set up Virtual Reality (VR) sessions for groups. He was about to collaborate with Storytrooper to found a start-up specializing in multiplayer VR concepts, and he asked me to join the initiative.
I initiated this project because of my fascination with VR and my love for game design, which I discovered in my third year during the Applied Game Design minor in 2017, where I first started to experiment with VR. At the beginning of this course, I set myself the goal of creating a multiplayer VR game and solving all the challenges that might occur along the way.
Because I now have over a year of experience working with different VR headsets and with Unity in general, I was able to build a demo for Storytrooper within the relatively short timespan of the graduation project, as opposed to having to learn a completely new platform. For example, the Internet of Things could also be an interesting platform for making people interact with each other while experiencing a story, but getting that to work requires heavy back-end coding, which is not my area of expertise. For this project I collaborated with several others who supported me in setting up the project and maintaining quality during development. Some undertakings were successful, while others were not. Sprints were used to validate all the ideas and assumptions during development.
Working with the Gear VR and HTC Vive headsets during my internship
3.2. Context
Storytrooper is a storytelling company owned by Christina Mercken that organizes immersive workshops for anybody who is interested in a good story, whether based on true events or entirely fictional. Christina can often be found telling her stories at Mezrab ‘The house of stories’ during the special storytelling evenings on Fridays. This is a creative centre located in the basement of Amsterdam’s Pakhuis Wilhelmina, where up-and-coming talents and established names in the areas of comedy, poetry, spoken word and storytelling come together. Besides these special Friday evening events, Christina can be hired to speak at private events. She also travels to cultural events like Oerol with her act as a freelance storyteller/performer.
As technology advances and our attention spans get shorter, Storytrooper would like to create new storytelling experiences that intrigue participants and keep them engaged for longer – stronger storytelling experiences built with technology, sound and visuals.
Enriching her act with new experimental technology also offers opportunities for expanding her audience and revenue. She noticed upcoming Virtual Reality initiatives like the VR Arcade and Virtuality, where groups of people can come together to fully immerse themselves in a virtual environment. However, most of these places are either focused on experiencing the newest AR and VR hardware (like the recently opened Virtuality) or only offer a selection of experiences from existing digital stores (e.g. VR Gamehouse). There is one exception to this trend: the VR Arcade in Amsterdam. They offer their visitors a tailor-made multiplayer VR experience involving wireless free-roaming and shooting at zombies/aliens. Also, all of these places are bound to a set location and boast pretty hefty price tags: most tickets start at €25-30 for half an hour (Unbound VR, 2018).
Because the third-party software that’s available doesn’t fit Storytrooper’s repertoire (the experiences are fixed and cannot be altered to tell one of her own stories), she needed a tailor-made application and somebody to help her create one. This is where this project came to life; together with Christina and Sammie, I want to explore the possibilities that story-driven VR can offer for groups. Their goal is to have a proof of concept that excites potential investors in the coming year.
Christina speaking at Mezrab – Photos courtesy of Mezrab.nl
Christina telling one of her stories at the weekly storytelling night
(Video footage by Martin van Houwelingen)
Hardware
Because Christina often works at remote locations, it’s important for her VR act to be mobile enough to take with her. It also shouldn’t be too much of a hassle to set up at the location where she’ll be performing.
I chose the combination Gear VR + Samsung Galaxy S8 because this setup is mobile and the phones can be quickly swapped out when the battery is low or in case the system crashes. My full considerations on this choice can be read in Appendix 2: Hardware research at the end of this documentation.
Samsung Gear VR and Galaxy S8 phone
Software
To develop the VR application I used Unity 3D, a game development platform for Windows and MacOS. It can be used to develop many different applications for a wide variety of devices and is not limited to the conventional definition of a video game *. It can even be used to create non-gaming apps that are normally built in programs that require more hands-on coding (Celada, Jan. 2015).
* “A videogame is a game which we play thanks to an audiovisual apparatus and which can be based on a story.” (Esposito, Nicolas. 2005)
Appendix 3: Developing for Gear VR contains more information about Unity, how I used a cloud-based solution for wireless multiplayer, and how to set up the Android development environment.
4. Concepting
4.1. Design Challenge
My design challenge for this project is as follows:
‘How can a virtual reality experience
create immersive storytelling sessions for groups,
keeping them intrigued and engaged* ?’
*Within this project’s context, by intrigued and engaged I mean the player’s interest in moving along with the story and their urge to explore how the story develops.
Main research questions
– How can cooperation be stimulated within a multiplayer VR environment?
– What gameplay, narrative, and social elements cause players to stay engaged?
Sub-question
What forms of interaction can be utilized to make the gameplay feel intuitive
and immediately understandable?
4.2. Target audience
Although Storytrooper’s audiences vary greatly, I’ve focused my research for this project on early VR adopters. These are young (20-30), open-minded people who love storytelling and are curious about new technologies.
Early VR adopters are people who are already interested in or familiar with VR. They are eager to try innovations and tend to accept new technology more readily than the average consumer. The product I’m working on is still a proof of concept and is therefore positioned at the very start of the adoption lifecycle. Targeting this specific group has the benefit that participants often already have experience with VR and are better able to look past the flaws that come with a proof of concept (since they are aware of the newest innovations by full-grown companies and know what a fully developed product can be like).
Position of my product on the Technology Adoption Lifecycle model – image courtesy of Wikimedia
4.3. Scenario & Script
Close your eyes for a moment. Now imagine walking around at a festival with your friends when suddenly you all get involved in a mission to space. Drifting among the stars, you’ll soon feel like you’re part of something bigger than yourself, and you must rely on your friends to succeed in this wacky quest to retrieve some weird object you’ve never heard of before…
For the story, we first had a brainstorm with the whole team and, after some iterations, the concept and story became as follows:
You and some friends are walking around at a festival when suddenly you discover a giant spaceship made out of cardboard, sheets and the like. Next to the ship, there is an actor dressed as a child playing spaceship. He asks you to join him on an adventure to space to retrieve a weird object he has lost, which you’ve never heard of before. He shows you around the inside of the spaceship and hands you a cardboard helmet with a pair of VR goggles inside of it; when you put this on, it transforms you into a virtual crew member. From here, you have to work together with the rest of the players to get the spaceship up in the air. After a while, the ship crashes on an alien planet and the players meet a group of aliens who turn out to react to music. By combining scrap from the crashed spaceship, the players can build either instruments or weapons. If they decide to play along with the aliens, this results in a massive rave and the aliens transport the players back to Earth. If they decide to attack the aliens, after a while they are crushed by one of the aliens’ giant friends.
During the first two weeks of the project, I started by setting up small demos featuring possible interaction methods that could be used for the experience. I had developed for HTC’s Vive and Microsoft’s HoloLens before, but apart from a simple video player I had never built anything on this scale for the Gear VR. So this phase was about testing out interactions, but also about getting into a workflow and recalling the syntax of Android’s SDK.
Because the Gear VR doesn’t include motion tracking of the hands (which was included in the first concept), I was limited to using gaze-input and the included Gear VR controller. Therefore I decided to first explore some of the possibilities gaze input offers. This was important because I first needed to know which interaction methods I would want to implement in the final product.
Sketch of all gaze interactions that I could think of
Prototyping different types of interaction
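To give an impression of how such a gaze interaction can be wired up in Unity, here is a minimal sketch of dwell-based selection; it is not the project's actual code, and the GazeButton component and the dwell time are illustrative assumptions.

```csharp
// A minimal dwell-based gaze selection sketch (illustrative, not the project's actual code).
using UnityEngine;

public class GazeSelector : MonoBehaviour
{
    public float dwellTime = 1.5f;      // how long the user must keep looking at a target
    private float gazeTimer;
    private GazeButton currentTarget;

    void Update()
    {
        // Cast a ray straight out of the centre of the headset every frame.
        Ray gaze = new Ray(Camera.main.transform.position, Camera.main.transform.forward);
        RaycastHit hit;
        GazeButton target = null;

        if (Physics.Raycast(gaze, out hit, 10f))
            target = hit.collider.GetComponent<GazeButton>();

        if (target == null)                 // looking at nothing interactive: reset
        {
            currentTarget = null;
            gazeTimer = 0f;
            return;
        }

        if (target != currentTarget)        // switched to a new target: restart the timer
        {
            currentTarget = target;
            gazeTimer = 0f;
        }

        gazeTimer += Time.deltaTime;
        if (gazeTimer >= dwellTime)         // dwelled long enough: treat it as a 'click'
        {
            target.Activate();
            gazeTimer = 0f;
        }
    }
}

// Hypothetical target component: anything that can be 'clicked' by looking at it.
public class GazeButton : MonoBehaviour
{
    public void Activate() { Debug.Log(name + " activated by gaze"); }
}
```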
After playing with all the different methods of interaction, I had a pretty good overview of everything that was possible for the final product and which elements I thought could work out best. Ultimately, I decided to use curiosity and playfulness as key components to get users immersed in the situation and encourage them to behave as I intended with my design. I went along with gaze input alongside controller input for a while, but eventually dropped the former completely during design sprint 3 because combining them felt clumsy at some points and using a motion controller felt more intuitive. Furthermore, demanding two different types of input could even be confusing to inexperienced users.
5.2. Methods
Because the nature of the product I worked on differs from traditional 2D User Experience (UX) design, it was important for me to have a minimum viable product ready as soon as possible. Concepts in Virtual Reality are really hard to imagine without wearing the hardware, especially if they involve multiplayer. It is something you must first experience yourself in order to fully understand its effect. The most valuable results for me came from testing the product ‘in action’ with participants, rather than having them fill in a survey.
Playtesting / Speaking out Loud
The method I used the most is playtesting the experience with test subjects and having them speak their observations out loud. I would record these playtests on video and analyse the behaviour of the players, so any possible issues or uncertainties could be resolved in the next iteration of the demo. I have included a detailed log of these tests in Appendix 6: Prototyping log of the product biography.
The Octalysis framework by Yu-Kai Chou
To find out how to make the experience as much fun as possible, I looked into (video) game design, as the experience is essentially set up like a multiplayer VR game. One of the most popular resources at the moment for learning about the fundamentals of gamification and behavioral design is Yu-Kai Chou’s Gamification & Behavioral Design.
This framework lays out a structure for analyzing the driving forces behind users’ motivation to keep using your product or service. The core of his theory is that gamification means much more than just taking game elements and cramming them into a product. According to his model, gamification is a design principle that mainly concerns the process of human motivation.
The model is called Octalysis because of its octagonal shape (from the ancient Greek oktá-, “eight”), which depicts 8 Core Drives that together form the motivation behind everything you do.
A fully filled-in Octalysis framework by Yu-Kai Chou. Image courtesy of yukaichou.com
The game elements (on the outside of the model) each correspond to one of the Core Drives. For example, when the user unlocks a milestone in your product, this adds to the Core Drive “Empowerment” because this game element makes the user feel their efforts lead to results. And allowing users to design their own Avatar boosts the “Ownership” Drive because they feel the newly created virtual identity is now theirs.
I used these two Drives in my product:
Empowerment (of Creativity & Feedback)
The Empowerment Core Drive is often found in products that encourage the use of creativity in the process of accomplishing a product’s goal. Not only is this expression of creativity important when applying the Drive, but so is the constant feedback a user receives from his actions. Good examples of activities that utilize this Drive are clay modelling and building Lego sets. These activities are fun in and of themselves and don’t need a constant stream of newly designed input to stay fun and engaging. In game design, the perfect example of this Drive is the sandbox game genre.
Unpredictability (& Curiosity)
Human beings by nature always want to find out what will happen next. We are curious beings, and this Drive makes use of that trait. When you don’t know exactly what will happen next, your brain gets engaged in finding out and you will keep thinking about it. A textbook example of a company that has mastered this Drive is Netflix, a video subscription service that offers series spanning entire seasons and keeps audiences engaged by traditionally ending each episode with a nerve-racking cliffhanger. This Drive is also the primary factor behind gambling addiction and can be used to run lottery programs that keep users engaged with a product or service. This behaviour is demonstrated perfectly in the Skinner box experiment, where an animal keeps pressing a lever because of the unpredictable outcome.
Task Analysis
One of the tools I used during the UX design is Task Analysis. A Task Analysis consists of a diagram that shows the actions taken by the users to achieve their goals. The main goal of laying these steps out is to find out the nuances, motivations and reasons behind each action taken. This way any unclear or unnecessary steps can be removed from the final design (Interaction Design Foundation, 2018). I performed a task analysis for every user test in the Appendix 6: Prototyping log.
Example of a Task Analysis: texting a hospital system – Author/Copyright holder: Andreas Komninos, The Interaction Design Foundation. Copyright terms and licence: CC BY-SA 3.0
The Lean Startup method
The Lean Startup is a method for developing a new product through continuous innovation using a build-measure-learn feedback loop. I would make an assumption or have an insight. Then I’d sketch or propose a fitting solution, decide on what to implement and how, build it and try it out with one or more test subjects. I went through these sprints over and over, each time improving my product bit by bit. This method is very valuable when you’re working on a product where you don’t really know what you’ll run into, which was very much the case with my project.
A diagram of the Lean Startup method. Image courtesy of theleanstartup.com
5.3. Design Sprints / Minimum Viable Product (MVP)
During development I went with a 2-player set-up, because I had 2 headsets available and this allowed me to test faster. The product could potentially be scaled up to groups of 16 players.
Sequence structuring sprint: ‘The Lobby’
The ‘lobby’ where players log into the game
In his presentation at the Game Developers Conference 2017, Colin Foran (Creative Lead at HBO entertainment) talks about his team’s experience combining linear and interactive content to tell a character-centric story for the Westworld VR installation. He describes a lobby space that allowed their users to familiarize themselves with all of the interactions before the experience started. This way the users would pay more attention to the narrative elements, since they’re less distracted by the sudden appearance of new interactive possibilities.
I found this a great idea to implement. Not only is this a proven way to familiarize new players with the experience, but by adding this introduction section I could also sneak in the first sync-point, making sure the game waits for every player to be fully set up before transporting everybody to the main scene and beyond.
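A minimal sketch of what such a sync-point can look like with Photon follows below (written against the PUN 2 API; the scene name and class names are assumptions, not the project's actual code): every player reports 'ready' once logged in, and the master client only loads the main scene when everyone has done so.

```csharp
// Minimal lobby sync-point sketch (assumes the PUN 2 API and that
// PhotonNetwork.AutomaticallySyncScene is enabled so all clients follow the scene load).
using Photon.Pun;
using UnityEngine;

public class LobbySyncPoint : MonoBehaviourPun
{
    private int readyCount;

    // Call this locally when the player has finished the log-in step.
    public void ReportReady()
    {
        photonView.RPC("PlayerReady", RpcTarget.MasterClient);
    }

    [PunRPC]
    private void PlayerReady()
    {
        // Runs only on the master client, which keeps the tally.
        readyCount++;
        if (readyCount >= PhotonNetwork.CurrentRoom.PlayerCount)
        {
            PhotonNetwork.LoadLevel("MainScene");   // everybody is set up: move on together
        }
    }
}
```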
MVP 0.1 – Proof of Concept
Main scene in sprint 1
Summary
The first prototype was a setup with two players in different places of a spaceship. The player in the cockpit has to communicate the colors on his dashboard to the player in the back. When the player in the back presses the right buttons, the spaceship engine starts. I took this game mechanic from the games Keep Talking and Nobody Explodes and Spaceteam. These cooperative games revolve around each player having different visual cues that can only be seen by that particular player. Each player has to communicate these cues, and the other player(s) need to respond by performing certain actions that are only available within their own play space. This prototype was intended as a first proof of concept of how teamwork in VR could work.
With this first prototype I wanted to test if:
The participant understands the event triggers within the scene
If so: in what way he expects to interact with the triggers
Overall impression on flow and narrative
Results
I tested this first prototype and the results were:
The player at first doesn’t understand the gaze-controls for logging in and tries to log in by touching the button with the provided hand.
The volume was too low; the player didn’t have a chance to adjust it until the voice-over started playing.
Starting right away with a teamwork puzzle might not be the best way to get inexperienced players immersed in the game. It can be hard to understand for players who are still figuring out what kinds of input they can give to the game.
For the next iteration, it’s better to move this to the next scene and start with a simple sequence that involves pushing buttons to make players familiar with this input mechanic.
Testing sprint 1
MVP 0.2 – Setting the scene
Main scene in sprint 2
Summary
This second prototype included the first story elements and had players interact more intuitively with their environment, which triggered their curiosity and playfulness. I found that allowing players to interact with their environment in a direct way (mashing buttons by moving the controller) and rewarding them with amusing feedback for doing so kept them more engaged. Players were now seated next to each other so that they could see the other player’s head move around. This added a social element, which also kept them engaged.
Changes in Sprint 2:
Exit gaze input
I decided to focus solely on controller input, since it’s a more direct and intuitive way of interacting with elements in the immediate environment. At this point, only logging in at the lobby is still performed by gaze input.
Position of the players
The players were now seated in a circle around a dashboard, instead of one player in the front of the ship and the rest of the crew in the back. This strengthens the social presence, because fellow players’ movements are easier to spot.
New gameplay elements
I added a sequence for getting the ship up in the air by creating music together. The buttons in front of the player can each be pressed to create a rhythm with odd sounds. This sequence is meant to make players aware of how the buttons work, triggering their curiosity and playfulness while doing so.
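A minimal sketch of such a button in Unity is shown below (component, tag and audio set-up are illustrative assumptions, not the actual project code): each button carries its own odd sound and plays it whenever the player's virtual hand touches it.

```csharp
// Minimal 'musical button' sketch: the button's collider is set as a trigger and
// the hand object (driven by the Gear VR controller) carries a collider + rigidbody.
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class MusicButton : MonoBehaviour
{
    private AudioSource source;

    void Awake()
    {
        source = GetComponent<AudioSource>();   // each button has its own quirky clip assigned
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("PlayerHand"))     // hypothetical tag on the player's hand object
        {
            source.Play();                      // immediate audio feedback rewards the playfulness
        }
    }
}
```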
Summary
The goal of the second prototype was to solidify the feeling of social presence and to add a fun factor to the experience. Allowing users to play around with their surroundings should get them more immersed. I wanted to test the following elements:
Is the function of the buttons clear?
If so: how does the player interact with them?
Overall impression and remarks on possible issues or uncertainties while playing the experience
Results
I tested this new prototype and the results were:
The voice-over starts too soon; participants don’t have a chance to adjust the volume level before the audio starts.
One of the test players doesn’t recognize his own avatar and thinks he’s looking at an NPC (non-playable character).
The players really enjoy being able to hit the buttons and receiving audio feedback while doing so.
Testing sprint 2
MVP 0.3 – Introducing multiplayer mechanics
The final act in sprint 3 – visual instructions are shown on the screens, accompanied by audio instructions from the narrator
The final act’s mechanics
For the final act of the demo (working together to avoid getting the spaceship hit), I expanded upon my mechanic from design sprint 1. In this scene, buttons appear on each player’s own display, which they have to shout out to the other players at the table who are within reach of that particular button. If the button is pushed in time, the ship avoids colliding with the object; if not, the object crashes into the ship. The players can’t die or get a ‘Game Over’ while doing this, so groups of players won’t get stuck during the experience, but they still receive feedback from the narrator. After a while, the game moves on to the next scene automatically because an unavoidable drifting toilet seat hits the front of the spaceship.
Changes in Sprint 3:
Log-in room overhaul
I removed the voice-over and gaze input from the log-in room and replaced it with visual instructions and controller input. This way the log-in sequence can be timed by the player himself. There is background music playing to give feedback about the volume of the headset so players can adjust the audio level before any important voice-over starts playing.
Old way (left) VS new way (right) of logging in
‘Selfie-display’ – feedback about the look of the player
Because players were unaware of their own avatar’s appearance, I added player-cams to the dashboards of the players in the main scene. This display shows live footage of the player moving during the experience, like a mirror. This way the player gets clear feedback about what he looks like in the game.
Old way of giving feedback on what the player looks like (left) VS new way (right)
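A straightforward way to build such a display in Unity is a second camera that renders into a RenderTexture shown on the dashboard; the sketch below illustrates the idea (field names and the 256×256 resolution are assumptions, not the project's actual code).

```csharp
// Minimal 'selfie display' sketch: a camera pointed at the player's avatar renders
// into a texture, and a screen on the dashboard shows that texture like a mirror.
using UnityEngine;

public class SelfieDisplay : MonoBehaviour
{
    public Camera selfieCamera;      // camera aimed at the player's avatar
    public Renderer displayScreen;   // quad on the dashboard that shows the feed

    void Start()
    {
        RenderTexture feed = new RenderTexture(256, 256, 16);
        selfieCamera.targetTexture = feed;            // the camera now renders into the texture
        displayScreen.material.mainTexture = feed;    // the screen shows the live feed
    }
}
```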
Summary
This build includes all of the scenes for the demo, so the story could be played from beginning to end. From here on I was able to start refining and validating the entire experience; therefore I wanted to test:
Is the mechanic of the final act clear?
If so: can the player keep up with the desired tempo of gameplay?
Will the prototype hold up technically during a complete playthrough in the ‘wild’?
Overall impression and remarks on possible issues or uncertainties while playing the experience
Results
I tested this new prototype and the results were as follows:
The multiplayer part in the final sequence is not working as intended; it feels chaotic and the mechanic remains unclear.
The only way to communicate with other players is to take off the headphones or lower their volume, which greatly breaks the immersion.
The how-and-why of suddenly having an avatar at the start of the experience remains unclear; the narrative of players transforming into members of the space crew (as described in Appendix 4: Script and storyboard) is missing.
Besides some minor bugs and the chaotic final act, the experience could be played through in its entirety without game-breaking issues.
MVP 0.4 – Rethinking multiplayer and onboarding elements
Changes in Sprint 4:
New login instructions
While playing Fail Factory on my Oculus Go, I stumbled upon an even better way to provide instructions on recentering the controller.
Fail Factory’s instructions VS my instructions using this format
Voice chat!
I added voice chat to the experience, so players can now hear each other talk without having to remove the earphones. This should increase the social presence and the overall ease of communication.
New multiplayer mechanics for the final act
The final act felt chaotic to players because they thought the buttons on their screens corresponded to the buttons on their own panel, as they looked too similar. Also, they were expected to act within a short time frame, which didn’t help either. I decided to look around for other mechanics to stimulate teamwork within the environment I had created.
In his presentation at the Game Developers Conference 2015, Alistair Aitcheson (an independent developer of silly and chaotic room-scale multiplayer games) puts it as follows:
My job as a designer is not to create elegant systems.
It is to engineer interesting social situations.
Alistair Aitcheson with some of his quirky inventions. Images courtesy of Alistair Aitcheson.
This point of view was very interesting to me personally and for my development process, as I intend to stimulate fun social interaction rather than build a perfectly streamlined VR game. My concept lends itself to being a lot more ‘outside the box’, and this led me to come up with some new concepts for the mechanics in the final act of the demo to stimulate teamwork and a feeling of social presence.
Concept 1: Sing along!
The newly added voice chat also unlocks gameplay possibilities. This idea involves a SingStar-style setup where players have to sing along to a tune when it’s their turn in order to please the rock(et)ship. It is based on the phenomenon of karaoke bars and the natural human behaviour of humming along.
Concept 2: Keep her steady!
This concept uses the same mechanic as sprint 3 but has more distinct buttons to activate, making the mechanic clearer and easier to understand right away. It also provides more context as to why the actions are needed (the autopilot fails and the crew has to take over).
Concept 3: Mash THE button!
During the Ubicomp course in 2016, I came up with the idea for a physically active game that involves running between colored blocks and hitting the corresponding block once a lamp on top of a pole changes to that color. I took the idea from the Pokémon Stadium 2 minigame ‘Pichu’s Power Plant’ and the Guitar Hero franchise. This mechanic is also found in the Test Your Strength games often seen at funfairs. I thought this was a great idea to reuse, because we had a lot of fun playing around with the lo-fi prototype back then and it matches the enthusiasm players showed for playing around with the buttons during the first 3 sprints. For this sprint I decided to go with this concept.
In the concept, there is a progress meter spanning the entire vertical axis of the lamp post. Players need to mash the button matching the lamp’s color, but the lamp keeps changing color over time, and mashing the wrong button or not hitting anything at all causes the meter to slowly decrease. The narrative from concept 2 can be used here: the ship has been hit and is out of power, and the players need to generate enough electricity to move on.
Prototype of this concept built with Arduino microcontrollers (2016)
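The sketch below shows a minimal way the lamp-and-meter logic could look in Unity (class names and tuning values are illustrative assumptions, not the code of the actual build): the lamp cycles through colours, matching hits charge the meter and everything else lets it drain.

```csharp
// Minimal 'Mash THE button!' sketch: buttons call OnButtonMashed with their colour index.
using UnityEngine;

public class PowerLamp : MonoBehaviour
{
    public Color[] colours;               // possible lamp colours, matching the buttons
    public float changeInterval = 4f;     // seconds between colour changes
    public float decayPerSecond = 0.05f;  // the meter slowly drains when nothing happens
    public float gainPerHit = 0.1f;       // a correct hit charges the meter

    private int currentColour;
    private float meter;                  // 0..1 progress towards 'enough electricity'
    private float timer;

    void Update()
    {
        timer += Time.deltaTime;
        if (timer >= changeInterval)
        {
            timer = 0f;
            currentColour = Random.Range(0, colours.Length);   // the lamp picks a new colour
        }

        meter = Mathf.Clamp01(meter - decayPerSecond * Time.deltaTime);
    }

    // Called by a button when the player mashes it, passing that button's colour index.
    public void OnButtonMashed(int colourIndex)
    {
        if (colourIndex != currentColour) return;         // wrong colour: no gain, decay continues
        meter = Mathf.Clamp01(meter + gainPerHit);        // right colour: charge the ship
        if (meter >= 1f)
            Debug.Log("Enough power generated, move on to the next scene");
    }
}
```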
New way of receiving your avatar
Because the how-and-why of suddenly having an avatar at the start of the experience remained unclear up to this sprint, I thought of some mechanics for transforming players into members of the space crew (as described in Appendix 4: script and storyboard).
Concept 1: Build-A-Vatar!
This concept is based on the avatar creation modules often seen in video games. With it, players are able to customize their own playable character and this adds to the Ownership & Possession Core Drive as described in the Octalysis framework. I discarded this idea because it would take too much time for all players to finish their avatar.
Concept 2: Connect the dots to create avatar!
This mechanic, taken from the connect-the-dots puzzles for kids, utilizes the Unpredictability & Curiosity Core drive to slowly reveal the player’s avatar while he connects the dots on a paper in front of him. When the drawing is finished, the player suddenly transforms into the character in front of him. Like the first concept, this sequence would take too much time for all players to finish their drawing and I discarded the idea.
Concept 3: The avatar toy dispenser
Personally, I’m quite fond of toy dispensers where you put in a coin and a plastic ball with a random toy rolls out of the machine, or of opening a Kinder Surprise egg. I thought this concept of receiving a different avatar every time you enter the experience and push the dispenser’s mystery button would be a great idea to implement in the onboarding process. This concept again thrives on the Unpredictability & Curiosity Core Drive, making players come back to see which avatar they’ll receive this playthrough. I decided to use this final concept in my product since it’s the quickest of the three concepts for onboarding a player, the toy-like assets fit it really well, and the randomness factor should theoretically give the replayability of the final product a boost.
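A minimal sketch of the dispenser logic in Unity (the prefab pool and spawn point are illustrative assumptions, not the actual assets): pressing the mystery button simply instantiates a random avatar from a pool, which is what makes every playthrough start differently.

```csharp
// Minimal avatar-dispenser sketch: the mystery button calls Dispense().
using UnityEngine;

public class AvatarDispenser : MonoBehaviour
{
    public GameObject[] avatarPrefabs;   // the pool of cartoony crew members
    public Transform spawnPoint;         // where the dispensed avatar appears

    public void Dispense()
    {
        int pick = Random.Range(0, avatarPrefabs.Length);   // the Unpredictability & Curiosity drive at work
        Instantiate(avatarPrefabs[pick], spawnPoint.position, spawnPoint.rotation);
    }
}
```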
This build should do a better job of stimulating teamwork and add to the feeling of a joint victory. To validate this assumption, I wanted to know:
Is the mechanic of the final act clearer than before?
If so: can the players keep up with the desired tempo of gameplay?
Do participants communicate to each other about the mechanic?
If so: what kinds of things do they say to each other? Instructions, uncertainties, rallying cries etc.?
Results
I tested the new prototype and the results were as follows:
One participant has some difficulties with the pacing of the color change interval.
One participant has difficulties hitting the right button, he doesn’t feel in control of which button he touches.
The lamp feels positioned a bit too far from the buttons for two of the participants; they can’t see both at the same time.
The red and purple, and the light and dark blue, look too much alike to the participants. The lamp causes the hue of the buttons to shift.
The feeling of teamwork is present. One participant suggests also implementing teamwork right at the start, before the first scene begins.
The final mechanic runs a lot smoother than the previous version.
This demo is the final version for this graduation project and includes the last refinements and feedback from playtests 5, 6 and 7.
Adjustments in the last 2 project weeks:
Slower change interval of lamp colors
The lamp is a bit smaller so it can be viewed from better angles
The buttons are further apart and the hitboxes of the hands are made smaller
Narrative is added to the start
Some panels were added at the start to better sync player states
This version was tested three final times to validate the last refinements. The final comments were:
Participants still try to wave to the other player, but I didn’t manage to implement visuals of the other players’ controllers. They don’t really see it as a problem, because they still feel like the other player can see their controller.
The experience feels like having fun on your own while still experiencing a shared world.
The final mechanic is immediately understood by all players.
This is the final iteration of the demo, as seen by Player 1:
This is a video of the experience with 2 players logged in to showcase the demo’s context:
7. Conclusion
7.1. Conclusion
During my research, I found that allowing players to play around with interactive elements in their environment created a social element, because players would talk to each other about it. Using direct input, by giving players a hand to hit brightly colored buttons with, felt intuitive to them and is immediately understandable. Rewarding them with amusing feedback for playing around with the interaction points kept them more engaged.
The final MVP is a great starting point for a story-driven group experience to be fleshed out by Storytrooper once more advanced hardware is released. Christina was completely unfamiliar with VR at the start of this project and was pleased to see how her story would translate to this platform when the project was finished. Although the final experience doesn’t require very intensive cooperation, players mostly enjoy having the freedom to try things out themselves and remarking on it to each other. The biggest value of the final demo turned out to lie not necessarily in the focus on teamwork, but rather in having each individual have fun on their own while still experiencing a shared world. Using a comical narrative was also a key element in keeping players engaged and intrigued to find out what the next objective would be.
The biggest challenges I faced during development were the restrictions of the underdeveloped hardware and my inexperience with synchronizing online gameplay. I’m especially disappointed that the hand tracking within the Photon network didn’t work out; being able to wave to each other would really have added to the social part.
Had I started this project one to one-and-a-half years later with the experience I have now, I would have gone with the newly announced Oculus Quest headset. This would have allowed me to implement a lot more of the initial ideas, because it allows players to walk around using positional inside-out tracking. Also, the better hand tracking would mean players could actually pick up objects and move them around the spaceship, which would really have been a game changer.
Application Package A computer file containing the entire Android application. The file extension is .apk.
Build A build is a compiled and executable pre-release version of a program. During development, several builds are created for testing purposes on the desired device.
Compiler/compiling (error) A compiler is a program that translates human-written code into machine language (1’s and 0’s). When compiling, this program checks the code to make sure it’s written correctly according to the programming language’s rules. If this is not the case, the code won’t run and the project cannot be built (this is called a compiling error).
Oculus Signature File When developing VR applications for Oculus platforms, the builds must be signed with an Oculus-issued Oculus Signature File, or osig. This signature consists of a file that you include in the application in order to access protected low-level VR functionality on the device. Each signature file is tied to a specific device.
Remote Procedure Call A Remote Procedure Call (abbreviation: RPC) is used to execute code from one device to another device within a shared network, which is executed as if it were a local procedure call.
Sideloading Sideloading is a term used for installing programs from an unofficial source, getting around the official distribution channels for that platform.
Software Development Kit Software Development Kits (abbreviation: SDKs) are sets of software development tools that are critical for developing a platform-specific app. They must first be installed in order for code targeting these SDKs to work.
Unity Unity is a game development platform for Windows and macOS. Unity supports three scripting languages: C#, UnityScript (a scripting language with the syntax of JavaScript) and Boo. The program consists of a drag-and-drop editor environment and works by adding scripts to components within the game. The program can be used to develop applications for a wide variety of devices and is not limited to the conventional definition of a video game.
User Experience User Experience focuses on the overall experience of a person using a product. User Experience design is about optimizing the way users interact with and experience the product.
Virtual Reality Virtual Reality (abbreviation: VR) is the term used to describe a computer-generated (three-dimensional) environment which is experienced through simulation of sights and sounds. This technology convinces the user that he exists in the virtual environment by utilizing electronic equipment such as headsets with built-in screens and spatial audio, along with devices like gloves and suits that provide physical feedback when interacting with the simulation.
Lamkin, Paul (2018). Best VR headsets 2018: HTC Vive, Oculus, PlayStation VR compared. Wareable. Consulted May 2018, last updated July 2018, via www.wareable.com/vr/best-vr-headsets-2017
LaValle, Steven M. (2016). Virtual Reality. Cambridge University Press. Chapter 12.2 – Recommendations for Developers. Consulted between May and October 2018 via http://vr.cs.uiuc.edu/
For this project I collaborated with several others who supported me in setting up the project and maintaining quality during development.
Sammie de Vries – Project founder / Entrepreneur
Sammie is the ‘man with a plan’. Contacting me back in November 2017, he came up with the original plan for Virtual Reality sessions for groups involving silent disco elements and motion tracking. For this project he has been arranging the general contact between the parties involved. He has been working on the story details together with Christina and has also established a start-up company with her to develop more of these multiplayer Virtual Reality concepts in the near future.
Christina Mercken – Client (Storytrooper) / Storyteller
Performs regularly at Mezrab ‘The house of stories’ and also freelance on events like Oerol with storytelling workshops and performances.
She now wants to explore the possibilities that new interactive ways of storytelling can offer for her company Storytrooper.
Glenn Wustlich – 3D artist
Glenn is currently studying Communication and Multimedia Design and, with a background in game design, likes to deliver cool and pleasant experiences. Whether it’s a website, app, video or video game, he focuses on presenting a great user experience through visual style and UX principles.
For this project he has mainly been working on designing aesthetically pleasing and technically well-optimized 3D assets. When I required a specific asset, I’d ask Glenn to model it.
Jasper Boonstra – DevOps engineer / Storyteller
Jasper is a Psychology graduate with 6 years of experience in creating interactive narratives. He is now employed full-time as a DevOps Engineer at ING and combines the narrative and technical sides to create unique experiences with the use of technology. For this project he elaborated on the concept and helped with some technical difficulties.
10.2. Appendix 2: Hardware research
Conclusion
Choosing one of the headsets that run on a smartphone is the best option, because they are mobile and the phones that run them can be quickly swapped out when the battery is low or in case the system crashes. The Galaxy S8, S8+ and S9 pack the most power and battery life for the best price. I chose the combination of Gear VR + Samsung Galaxy S8. Both could be borrowed from the academy for the length of the project, which was the deciding factor in my choice.
Hardware research
Because Christina often works at remote locations, it’s important for her VR act to be mobile enough to take with her. It also shouldn’t be too much of a hassle to set up at the location where she’ll be performing. I made a comparison of the most popular headsets available at the moment using the summaries provided by Wareable and PCMag.com.
After looking at some wired headsets that need powerful computers to drive them, it became evident that I needed to go for a lower-end solution for practical and financial reasons.
The high-end HTC Vive models can be equipped with adapters to make them wireless, but the battery life is too short to provide power throughout a day of performing on location. This means it would be necessary to buy a few spare adapters, on top of the hefty price tags (> €10.000 in total) of the headsets and the computers that power them. If the project were a single-player experience, this could be considered. But having to buy, transport, power and set up multiple computers and headsets, each with their own trackers, is not desirable for this project. The hardware needs to be as portable as possible.
With all wired high-end solutions ruled out, the most obvious choice would now be the Oculus Go, for its high-resolution screen and lightweight design. I happen to own one myself, so I looked into how difficult it is to put custom apps on it. Its operating system is based on the same one as the Gear VR’s, with the exception that the Go is locked in Oculus Library mode. This is tricky: on an Android phone you can just drag and drop your application into one of the folders on the phone, install it and launch it from the phone’s home screen. On the Oculus Go this is not possible; you can still transfer files back and forth via USB, but you cannot launch applications directly from the Oculus Library as an app. To get your app running, you need to transfer it using command lines in Android Debug Bridge (adb), then find and select it in a list within the headset’s developer tab. When developing and testing an application, this is not a fluid process if you just want to check some minor adjustments. The other downside of the Go is the short battery life, and since it’s impossible to swap batteries it’s the same situation as with the wireless Vive.
It is wiser to choose one of the headsets that run on a smartphone, because the phones can be quickly swapped out when the battery is low or in case the system crashes. Also, because phones get faster processors each year, the system can easily be upgraded with the newest model without having to buy a new headset for it. I was able to borrow some Gear VRs from the academy, so I went with this model. In my comparison I wasn’t able to pinpoint the screen resolution and exact weight because these depend on the phone used, so I made another chart depicting the different features of the most popular Gear VR phones, using this article by VRHeads.com to get me started.
Comparing popular Android phones that have capabilities to run VR applications
The Galaxy S8, S8+ and S9 pack the most power and battery life for the best price. I chose the combination of Gear VR + Samsung Galaxy S8. Both could be borrowed from the academy for the length of the project, which was the deciding factor in my choice.
10.3. Appendix 3: Developing for Gear VR
The three most commonly used programs for developing games are Unreal Engine, CryEngine and Unity 3D. Each has its own distinct features and qualities: Unreal and CryEngine are more targeted towards graphically demanding games, while Unity has the advantage of offering cross-platform development. TechnoByte (2017) and Thinkwik (2018) published articles on how to choose between them. I went with Unity 3D because it is a program I’m already familiar with, and because I’ve chosen to use relatively low-end hardware, I don’t need the top-tier graphics and visual effects that Unreal and CryEngine are known for.
Unity is a game development platform for Windows and macOS. Unity supports three scripting languages: C#, UnityScript (a scripting language with the syntax of JavaScript) and Boo. The program consists of a drag-and-drop editor environment and works by adding scripts to components within the game. The program can be used to develop many different applications for a wide variety of devices and is not limited to the conventional definition of a video game *. It can even be used to create non-gaming apps that are normally built in programs that require more hands-on coding (Celada, Jan. 2015).
* “A videogame is a game which we play thanks to an audiovisual apparatus and which can be based on a story.” (Esposito, Nicolas. 2005)
Unity Editor and the built-in script editor MonoDevelop on the right side
Setting up the Android development environment
In order to make the Unity scene compatible with the Android platform, the Android and Java Software Development Kits (SDK’s) need to be installed first. SDK’s are sets of software development tools that are critical for developing a platform-specific app. They must be installed in order for code targeting these SDK’s to work.
Unity generates an application package (APK) which contains the entire Android application. Normally, all experiences on the Gear VR are accessed through the official Oculus app; here you can buy experiences in the Store and add them to your library, from where you can launch them. But in order to easily test the application on a local device before publishing it, it’s necessary to sideload it. Sideloading is a term used for installing APKs from an unofficial source. It is done by enabling Developer Mode on the phone and then generating an Oculus Signature File (osig) in the Developers’ section of the Oculus website. A generated osig file contains that particular phone’s device ID. The app can then be transferred over USB and installed on the device. When launched, the Oculus environment is loaded just like with apps from the official library. The IDs associated with the included osig files are compared with the ID of the device the app is running on to check whether one of them matches. Each signature file is tied to a specific device, so for every unique device you want to test on, you’ll have to generate and include a separate osig file in the build.
Oculus Android app
Photon Unity Networking framework
To make wireless communication between players possible, I had to set up a server to connect the players’ game worlds with each other. There are a few cloud-based solutions available for this (Unity Multiplayer Network, Photon Cloud, Google Cloud), all with different pricing plans depending on how many players are active in the game at the same time. Since this demo won’t have more than 3 players interacting at the same time (thus never exceeding the bandwidth provided in the free tier), the choice of networking platform didn’t really matter that much. Especially because in the final concept, all devices will connect over local Wi-Fi using a computer server. This is to make the system independent of a working internet connection when performing on location. The local Wi-Fi connection works at close range, because the router (and optional signal repeaters) can be hidden within the decor. This ensures I can work with the most stable wireless connection possible. However, during the testing phase a cloud-based solution is easier to work with, because that way I don’t have to carry around the hardware for the server; I can just connect the phones to Wi-Fi and the system is ready to go.
The intended set-up to make sure internet won’t be a requirement for the experience
I ended up using Photon Cloud (by Exit Games) because this was very easy to set up, especially using this tutorial by SDKboy.com.
The way I implemented Photon’s system in my project was by triggering Remote Procedure Calls (RPCs) through code and by syncing the state and location of decor pieces that move around for all the players.
Photon View, the core script that drives Photon networking, as seen in the Unity Editor
Illustration of the logic behind Remote Procedure Calls
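As an illustration of that pattern, here is a minimal sketch of how a moving decor piece could be kept in sync (written against the PUN 2 API; class and method names are assumptions rather than the project's actual code): one client triggers the call and Photon executes the same method on every client in the room.

```csharp
// Minimal RPC sketch for a networked decor piece: a PhotonView on the same object
// routes the call to all clients, each of which applies the same state locally.
using Photon.Pun;
using UnityEngine;

public class DecorPiece : MonoBehaviourPun
{
    // Called on the client that initiates the movement (e.g. the master client).
    public void MoveEverywhere(Vector3 newPosition)
    {
        photonView.RPC("MoveTo", RpcTarget.All, newPosition);   // broadcast to every player
    }

    [PunRPC]
    private void MoveTo(Vector3 newPosition)
    {
        transform.position = newPosition;   // every client ends up with the same decor state
    }
}
```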
More information on how Photon was used during this project can be found in the paragraph Sync-points within Appendix 6: Prototyping log
Optimizing assets for Mobile VR
One of the goals was to deliver a virtual world richly filled with small details and objects that players can interact with. To achieve this, a couple of things had to be considered. Unlike regular VR, which usually depends on powerful high-end PCs (e.g. the minimal Oculus Rift PC requirements, 2018), mobile VR is much more limited in available resources due to weaker hardware. We had to be extra picky in distributing these resources to deliver a smooth experience for our players. One way of doing this is by optimizing our game assets.
We need a low polygon count for each of our assets, but fewer polygons means less detailed objects. There are several tricks and methods to output a low-poly object while maintaining the details. One of these is ‘baking’ a high-polygon mesh onto a low-polygon mesh, which results in a normal map.
A normal map contains data on how the engine will catch and reflect light on the surface it is applied to. This way we can achieve an illusion of depth on flat surfaces! Combined with carefully crafted textures, we were able to realize our goal.
Example of a normal map:
The panel on the left has a completely flat surface, while the panel on the right has many polygons forming a pattern on the surface. The relief of the right panel is ‘stamped’ onto the left panel without adding any polygons to it, creating the illusion that it no longer has a flat surface.
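On the Unity side, the baked map only needs to be plugged into the material's normal map slot. The sketch below shows how that could be done in code for the Standard shader (field names are assumptions; in practice this is usually set up in the editor rather than at runtime).

```csharp
// Minimal sketch: assign a baked normal map to a Standard-shader material so the
// flat low-poly surface picks up the detail of the original high-poly mesh.
using UnityEngine;

public class ApplyBakedNormalMap : MonoBehaviour
{
    public Texture2D bakedNormalMap;   // the map baked from the high-poly version

    void Start()
    {
        Material mat = GetComponent<Renderer>().material;
        mat.SetTexture("_BumpMap", bakedNormalMap);   // the Standard shader's normal map slot
        mat.EnableKeyword("_NORMALMAP");              // make sure normal mapping is actually enabled
    }
}
```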
10.4. Appendix 4: Script and Storyboard
The original script (translated from Dutch):
LET’S PLAY SPACESHIP!
SETTING
At a festival, a big cardboard spaceship – a cross between something a child would have made and something cartoonish. The inside is just as cartoonish and childlike, with seats for the visitors to sit in and improvised props that we associate with travelling, such as fold-out tables in the seat in front of you, an ashtray in the armrest, and all kinds of buttons above your head.
INTRO – Come into my space ship… + prototype part
Goals:
For the visitors to:
– get used to VR / learn what they can do,
– be drawn into a world of childlike fantasy in which anything is possible,
– understand who they are (space travellers),
– what they are to each other (the last survivors),
– where they are (crashed on an unknown planet),
– and feel a sense of connectedness (us against the unknown).
Story:
Passers-by are invited by a childishly playful person to come and play in their spaceship. Upon entering, the visitors get a cardboard badge stuck on and are quickly sworn in to Starfleet *name to be decided* (Spaceboats, Lightyear trips?). All of it as if it were one big children’s game, but of a child who takes his/her game very seriously. He/she straps the visitors in and talks enthusiastically about the ship and everything it can do (this contains a lot of hints for later, for example: don’t make the old cigarette angry!). Once strapped in, the visitors get a cardboard space helmet with the VR goggles built into it (with that helmet we solve the problem that they can’t see each other’s facial expressions).
Once the goggles are on, they are sitting in an exact copy of the spaceship, but now they can interact with it virtually, for example opening the little ashtray and seeing a cigarette that angrily asks whether they “really still smoke!?”. And they get their avatars: all cartoonish characters with cardboard space helmets on. The avatars differ from each other in colour, shape and texture, for example a big purple fluffy one, an eight-armed pink one and a green semi-transparent one. This is the learning moment in which the visitors can get used to VR and play with the objects.
Then the countdown begins and the game becomes “real”. The seats start to shake and tip back a little, there is a roaring sound, the ship (the seats) vibrates and through the windows they see a big flame, then the ever-shrinking Earth, and then… weightless in space… out among the galaxies, while inside the ship the objects float around (they can play with those too).
Then shaking again, the alarm goes off, lights flash, the ship crashes. The guide is gone. Only a computer voice: Crash landing. Crash landing. Survivors: 5. Planet: unknown. Inhabitants: unknown. Threat level: unknown. Dangers: unknown. Warning: inhabitants are known to… krrrrrrr…. The straps unlock and now it is time to leave the spaceship…
STORY – Fight or dance?
Setting: an unknown world, with debris from the crashed cardboard spaceship everywhere and, beyond that, strange plants and insects… sounds and movements everywhere, unclear what is safe and what is hostile. Here and there things happen that can startle visitors but that turn out to be sweet or funny: a strange insect running past, a bubble of air rising up. All these little events are accompanied by musical sounds.
Goals:
– strengthen the mutual connection through music and playing together
– challenge visitors to solve puzzles and play with the space and the “creatures”
– let the outcome of the story be guided by the visitors’ interpretations
Story:
Visitors get used to the space and notice that nothing is threatening, until suddenly there are aliens all around them. Are they friendly or not? You can’t tell right away. What they do is mirroring: copying the movements of the visitors. The visitors can combine things around them, for example a broken piece of spaceship with a flower. They can make different variations: musical instruments, but also weapons. When two different objects come together, they make a sound. When an instrument is assembled, a bit of music plays to which the aliens make a movement, a small bit of dance that could also be interpreted as an attacking move. When a weapon is assembled a sound plays too, but the aliens react with fear. The more instruments there are, the happier the aliens. The more weapons, the more afraid they become.
Tips: if it takes too long before visitors start assembling things, the aliens can demonstrate it and so encourage the visitors to do the same.
Friend or foe? Characteristic of the aliens is that it must be unclear whether they are friendly or hostile. Their movements are intense but can be interpreted as either enthusiastic and moved or hostile and aggressive. For example, they find part of an instrument and hold it above their heads while screaming.
THE END – Back home again
Goals:
– bring the story to an end
– let visitors experience different endings, so they exchange experiences with each other and want to come back to experience the other one
– bring the visitors back to the festival
Story ending 1:
With every instrument that is added, the aliens get happier. With every instrument, a bit more of a dance emerges. Until they are all being played and a real rave breaks out, together with the aliens. The aliens then encircle the space travellers and encourage them to take each other’s hands. As soon as they do, a bright light flashes, the goggles switch off and they are standing in a tent again with a festival going on outside.
Jasper: with enough instruments, pieces of music could become visible that the visitors can play, Guitar Hero style. With enough instruments they can then play a well-known song, which the aliens of course think is fantastic. This Guitar Hero Band edition should have a bonding effect between the visitors, and also with the aliens, who completely lose themselves in it. As a grand finale there is a bass drop with the aliens.
Story ending 2:
With every additional weapon the aliens get more frightened and the noise gets louder. Until, from the distance, an enormous creature approaches and stamps everyone flat with its foot, a flash of light, and the warmongers are simply standing in a tent again with the festival outside.
Story arc
P1-P2 of the storyboard : INTRO – Come into my space ship… [DEMO]
The demo for my graduation project contains this first arc of the story
Goals
For users to:
– Get familiar with VR / learn what user input is possible.
– Get immersed in a childish fantasy where everything can happen.
– Identify who they are (space travellers).
– Identify where they are (crashed on an alien planet).
– Understand who they are to each other (the last survivors).
– Feel connected with each other (the last survivors VS the unknown).
P3-P4 (first 2 images of each page) of the storyboard : STORY – Fight or dance?
Goals
– To create a connection between players by utilizing music and teamwork.
– To challenge users to solve puzzles, play around with the environment and the creatures.
– To let the final arc of the story depend on the interpretations of the users.
P3-P4 (last 3 images of each page) of the storyboard : THE END – Back home again
Goals
– To wrap up the story
– To make visitors experience different endings of the story, so they’ll want to trigger another ending next time (after discussing the story with other players).
– To create a logical transition from Virtual Reality back to the ‘real world’.
10.5. Appendix 5: Interaction test environments
Conclusions of the tests
After playing with all the different methods of interaction, I had a pretty good overview of everything that was possible for the final product and which elements I thought could work out best.
Ultimately, I decided to use curiosity and playfulness as key components to get users immersed in the situation and encourage them to behave as I intended with my design. I went along with gaze input alongside controller input for a while, but eventually dropped the former completely because combining them felt clumsy at some points. Furthermore, two different types of input could even be confusing to inexperienced users.
Testscenes 1
During the first two weeks of the project I set up small demos featuring possible interaction methods that could be used for the experience. I had developed for HTC's Vive and Microsoft's HoloLens before, but apart from a simple video player I had never built anything on this scale for the Gear VR. So this phase was about testing out interactions, but also about getting into a workflow and recalling the syntax.
Because the Gear VR doesn't include motion tracking of hands (which was part of the first concept), I was limited to gaze input and the included Gear VR controller. So I decided to first explore some of the possibilities these offer. This was important because I needed to know which interaction methods I would want to implement in the final product.
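To give an impression of how such gaze selection works in Unity, here is a minimal sketch (the class and field names are my own and not taken from the project's code): a ray is cast from the center of the camera every frame, and an object is activated after the user has looked at it for a short dwell time.

```csharp
using UnityEngine;

// Minimal gaze-selection sketch (illustrative names): cast a ray from the center
// of the camera every frame and 'activate' an object after a short dwell time.
public class GazeSelector : MonoBehaviour
{
    public float dwellTime = 1.5f;   // seconds the user has to keep looking
    public float maxDistance = 10f;

    private GazeTarget currentTarget;
    private float gazeTimer;

    void Update()
    {
        Transform cam = Camera.main.transform;
        RaycastHit hit;

        if (Physics.Raycast(cam.position, cam.forward, out hit, maxDistance))
        {
            GazeTarget target = hit.collider.GetComponent<GazeTarget>();
            if (target != null)
            {
                // Keep counting while the same object stays in the center of view.
                if (target == currentTarget) gazeTimer += Time.deltaTime;
                else { currentTarget = target; gazeTimer = 0f; }

                if (gazeTimer >= dwellTime)
                {
                    currentTarget.Activate();   // e.g. play a sound in the 'Galaxy' scene
                    gazeTimer = 0f;
                }
                return;
            }
        }

        // Looked away: reset the timer.
        currentTarget = null;
        gazeTimer = 0f;
    }
}

// Attach to anything that should respond to being looked at.
public class GazeTarget : MonoBehaviour
{
    public void Activate() { Debug.Log(name + " activated by gaze"); }
}
```

A controller click can be handled the same way by replacing the dwell timer with a button check.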
I started by sketching out all the possible Gaze-selection interactions that I could think of on a single sheet:
I then thought of three test scenes that used elements from this sheet. I built a demo of them and took it to the first meeting with the team to show them the possibilities. After trying them out, they liked Scene 1 the most because of the playfulness of the interaction, but they also remarked that Scene 2 had greater potential for storytelling elements.
The sketches for the first testscenes
The main hub of the test scenes; players can switch between scenes by entering doors with different colors and themed icons on them
1. ‘Galaxy’ – making use of a user’s playfulness and curiosity
‘Galaxy’, a soundboard where you can make music by looking at animated buttons
2. ‘Toys’ – making use of blind spots in a user’s field of view
2. ‘Toys’, a room in which nothing odd seems to happen until, after some time, an alarm clock rings behind the user. When the user turns their head to see where the sound comes from, the toys on the table in front of them come to life, and spooky music is heard along with sound effects of children having joyful conversations.
3. ‘Garden’ – making use of action-reaction within a scene
3. ‘Garden’, which I didn’t completely finish because I had to move on to testing new insights. The idea is that by looking first at the watering can, then at the water and finally at the gardening pot, you set them in motion and the watering can waters the pot. After this, giant flowers grow from the pots and hover above you, spinning in the air.
Testscene 2
During these weeks I also experimented with letting the user walk around a level by making a walking motion. I eventually dropped this because it made my test participants disoriented. It was still a fun experiment to do, though.
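The exact detection method isn't documented here, but a common way to build this kind of 'walk in place' locomotion (the approach, names and thresholds below are assumptions for illustration) is to measure the vertical bobbing of the headset and move the camera rig forward while the user is stepping in place:

```csharp
using UnityEngine;

// Rough 'walk in place' sketch (assumed approach, not the project's actual code):
// detect the vertical bobbing of the headset and translate the rig forward while
// the user keeps making a walking motion.
public class WalkInPlaceLocomotion : MonoBehaviour
{
    public Transform head;               // the VR camera
    public Transform rig;                // the object to move (camera parent)
    public float bobThreshold = 0.015f;  // vertical movement per frame (m) that counts as a step
    public float moveSpeed = 1.2f;       // movement speed (m/s) while 'walking'

    private float previousHeadY;

    void Start() { previousHeadY = head.localPosition.y; }

    void Update()
    {
        float deltaY = Mathf.Abs(head.localPosition.y - previousHeadY);
        previousHeadY = head.localPosition.y;

        // If the head is bobbing enough, move the rig in the gaze direction (flattened).
        if (deltaY > bobThreshold)
        {
            Vector3 forward = head.forward;
            forward.y = 0f;
            rig.position += forward.normalized * moveSpeed * Time.deltaTime;
        }
    }
}
```

Because the forward movement is not driven by real displacement of the body, this kind of locomotion easily causes the disorientation mentioned above.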
I also thought of a multiplayer mechanic using this discarded method of moving around. In the situation below, a group needs to solve a block puzzle by walking onto the blocks to move them around. The team captain is the only one who can see the pictures on top of the blocks because he is located in a higher place. He needs to guide his team by giving orders to the players below.
Conclusions of the tests
After playing with all the different methods of interaction, I had a good overview of what was possible for the final product and which elements I thought would work out best. Ultimately, I decided to use curiosity and playfulness as key components to get users immersed in the situation and encourage them to behave as intended by my design. I went along with gaze input alongside controller input for a while, but eventually dropped the former because combining the two felt clumsy at some points. Furthermore, two different types of input could be confusing to inexperienced users.
10.6.Appendix 6: Prototyping log
Preparation
I started by sketching out the set-up of the scene and setting up a Photon test scene using this tutorial by SDKboy.com. This was my framework to build upon in the coming weeks.
Videofootage of the very first scene with three players connected over Wi-Fi (15 May 2018):
Sync-points
In order to make sure each player stays synced during major events within the demo, I built in moments where the application waits for the other players to arrive at that point in the timeline. I call them ‘sync-points’ or ‘check-points’. They work by sending the sync-point’s ID to the other players over the network when a player arrives at that point. From there, each application waits until the number of arrivals for that ID matches the number of participants. When that is the case, every application triggers the next event locally.
This was especially handy during the log-in sequence, because not all players get loaded into the game at the same time. In fact, the entire log-in sequence was built into the game for this purpose only.
I illustrated my method of syncing events with an example of three players entering a scene and triggering local events at different times. They enter the next event at the same time because the sync-point in each player’s game waits for the rest of the players to reach that point:
My method of triggering local events using Photon Networking
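A minimal sketch of such a sync-point, written against Photon PUN 2 (the project uses Photon, but the class and method names below are my own and PUN Classic's API differs slightly):

```csharp
using System.Collections.Generic;
using Photon.Pun;
using UnityEngine;

// Sync-point sketch: each client reports that it has reached a sync-point;
// when the number of reports for that ID equals the number of players in the
// room, every client triggers the next event locally.
public class SyncPoint : MonoBehaviourPun
{
    // How many players have reached each sync-point ID, as seen by this client.
    private readonly Dictionary<int, int> arrivals = new Dictionary<int, int>();

    // Call this when the local player reaches a sync-point in the timeline.
    public void ReachSyncPoint(int syncPointId)
    {
        // Tell everyone (including ourselves) that we arrived at this point.
        photonView.RPC("RegisterArrival", RpcTarget.All, syncPointId);
    }

    [PunRPC]
    private void RegisterArrival(int syncPointId)
    {
        if (!arrivals.ContainsKey(syncPointId)) arrivals[syncPointId] = 0;
        arrivals[syncPointId]++;

        // Everyone is here: each client continues the experience locally.
        if (arrivals[syncPointId] >= PhotonNetwork.CurrentRoom.PlayerCount)
        {
            Debug.Log("Sync-point " + syncPointId + " reached by all players");
            // TriggerNextEvent(syncPointId);  // local event, e.g. start the take-off sequence
        }
    }
}
```

Each client keeps its own arrival count, so no single machine has to act as an authority; everyone simply continues once it has seen an arrival from every player.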
Method of logging the playtests
To capture my playtests I used Photo Booth on my Mac, or my iPhone if I didn’t have my laptop with me. For the screen captures on the Android phones I first tried out DU Recorder, but it turned out this app gets stuck while the phone is in VR mode.
I found that the YouTube Gaming app does a decent job of recording the footage while in VR mode, but it is still very difficult to get all the footage synced because the framerates differ.
I started recording my playtests as soon as the (audial) narrative was added so the setting could be clear for participants.
The Lobby
In his presentation at the Game Developers Conference 2017, Colin Foran (Creative Lead at HBO entertainment) talks about his team’s experience with combining linear and interactive content to tell a character-centric story for the Westworld VR installation. He describes a lobby space that allowed their users to familiarize themselves with all of the interactions before the experience started. This way, users pay more attention to the narrative elements because they are less distracted by newly appearing interactive possibilities.
I found this a great idea to implement. Not only is it a proven way to familiarize new players with the experience, but by adding this introductory section I could also sneak in the first sync-point to make sure the game waits for every player to be fully settled in before transporting everybody to the main scene.
*This playtest happened spontaneously; I sadly did not have the screen-capture software ready and installed on the phones at the time. Therefore I did my best to re-enact all of the interactions afterwards. Future playtests will have the captured footage of what participants see as an overlay of the live-action footage.*
Goals of this test:
– To see if the participant understands the event triggers within the scene
– If so, in what way he expects to interact with the trigger
– To see if the narrative and setting are clear
– Overall impression and remarks on possible issues or uncertainties while playing the experience
Videofootage of the playtest:
Videofootage of the demo:
Task analysis Playtest 1 / Build V0.1
What I’ve learned from this test & other notes:
– The participant at first doesn’t understand the gaze controls for logging in and tries to log in by touching the button with the provided hand.
– The volume was too low; the participant didn’t get a chance to adjust it until the voice-over started playing.
– Starting right away with a teamwork puzzle might not be the best way to get inexperienced players immersed in the game. It can be hard to understand for players who are still finding out what kinds of input they can give to the game. For the next iteration it’s better to move this to the next scene and start with a simple sequence that involves pushing buttons to familiarize players with this mechanic.
Feedback from the story writers
Having seen a video of this first version, Christina and Sammie had some minor feedback on elements of the story that they found distracting. Giving the players stereotypes like ‘Captain’ and ‘Seductress’ shaped a hierarchy between players. They found it makes more sense to give players a weird avatar but provide no narrative about who or what they are exactly. Also, there should be something about making music early on, since it’s part of the experience’s theme.
After reading this feedback I proposed to them a slightly different flow of the story and some new game mechanics for my demo.
I’m dropping the whole character hierarchy and archetypes element in the introduction. Everybody still has a distinct (genderless) appearance but no backstory, and they spawn sitting together at a round table to really get that team feeling going early on. Also, because the experience should be focused on music (it’s a ROCKship!), the only way to get the ship running is by starting a jam together by mashing buttons.
After flying through space for a while, the ship enters a space-garbage belt. In this scene the players see buttons appear on their own display that they have to communicate to the player at the table who is within reach of that button. If the button is pushed in time, the ship avoids colliding with objects; if not, the objects crash into the ship. The players can’t die or ‘Game Over’ while doing this, so groups of players won’t get stuck during the experience, but they’ll still receive feedback from the AI. After a while the crash sequence starts automatically because an unavoidable drifting toilet seat hits the front of the spaceship.
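To make the intended flow of this garbage-belt sequence concrete, here is a rough sketch of the mechanic (all class names and timings below are illustrative assumptions, not the project's actual code):

```csharp
using System.Collections;
using UnityEngine;

// Sketch of the garbage-belt teamwork mechanic: a prompt appears on a random
// player's display, and the player sitting within reach of that button has to
// press it before the timer runs out; otherwise the garbage hits the ship.
public class GarbageBeltSequence : MonoBehaviour
{
    public DashboardButton[] buttons;   // one per seat around the dashboard
    public float reactionTime = 4f;     // seconds before the object hits the ship

    public IEnumerator RunWave(int prompts)
    {
        for (int i = 0; i < prompts; i++)
        {
            DashboardButton button = buttons[Random.Range(0, buttons.Length)];
            button.ShowPromptOnRandomOtherDisplay();   // another player sees which button must be hit

            float timer = 0f;
            while (timer < reactionTime && !button.WasPressed)
            {
                timer += Time.deltaTime;
                yield return null;
            }

            if (button.WasPressed) Debug.Log("Garbage avoided");
            else Debug.Log("Garbage crashes into the ship (no game over, just feedback)");

            button.ResetPrompt();
            yield return new WaitForSeconds(1f);
        }
    }
}

// Placeholder so the sketch is self-contained.
public class DashboardButton : MonoBehaviour
{
    public bool WasPressed { get; private set; }
    public void Press() { WasPressed = true; }
    public void ShowPromptOnRandomOtherDisplay() { /* highlight on another player's display */ }
    public void ResetPrompt() { WasPressed = false; }
}
```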
Changes for V0.2 :
Position of the players
The players are now seated in a circle around a dashboard, as opposed to one player in the front of the ship and the rest of the crew in the back.
New gameplay elements
I added a sequence for getting the ship up in the air by creating music together. This sequence is meant to make players aware of how the buttons work and to get their ‘playfulness’ going. Like I wrote in the test results from V0.1, this sequence had to be simple. I also started on the teamwork sequence where the ship needs to avoid objects in space.
First notes on how scene 2 should work
Build V0.2
Playtest 2 – Danny de Vries – 18-06-2018 – Build V0.2
Goals of this test:
– To see if the participant understands what user input is expected from him
– Overall impression and remarks on possible issues or uncertainties while playing the experience
– How the participant reacts to the first draft of the mechanic in the final act
Videofootage of the playtest:
*The footage is not in sync due to different framerates and performance drops during VR gameplay*
What I’ve learned from this test & other notes:
– The voice-over starts too soon; the participant doesn’t get the chance to adjust the volume level before the audio starts.
– The participant doesn’t recognize his own avatar and thinks he’s looking at an NPC (non-player character).
– The multiplayer part in the final sequence is not working as intended; it feels chaotic and the mechanic remains unclear.
– At the end of the playtest both players got stuck between different waves. In my scene I was positioned at the end of wave 3, while in the other player’s world wave 3 was never triggered. This was because I forgot to change the value for this trigger back from 1 player (used during development) to 2 players.
Task analysis Playtest 2 / Build V0.2
Changes for Build V0.3:
Log-in room
The participant did not recognize himself in the mirror during the log-in sequence (“What is that fish doing here?” and afterwards: “Was I looking at myself in another dimension?”). He also remarked that the voice-over in this room starts too soon; he was not fully settled in yet but the audio for the instructions was already playing.
Therefore I removed the voice-over altogether and replaced it with visual instructions. This way the log-in sequence can be timed by the player himself. Background music is playing to give feedback about the volume of the headset, so players can adjust the audio level before any important voice-over starts.
Old log-in / New log-in
‘Selfie-display’ – feedback about the look of the player
I also added a player-cam to the dashboard of the player in the main scene: a display that shows live footage of the player’s movements during the experience. This way the player gets feedback about what he looks like in the game.
Old way of giving feedback on what the player looks like / New situation with a ‘selfie-display’
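In Unity, this kind of selfie-display can be set up with a second camera that renders into a RenderTexture shown on the dashboard display. A minimal sketch (the set-up and names are my own assumption; the project files may do it differently):

```csharp
using UnityEngine;

// Selfie-display sketch: a second camera films the player's avatar and renders
// into a RenderTexture that is shown on a display quad mounted on the dashboard.
public class SelfieDisplay : MonoBehaviour
{
    public Camera selfieCamera;    // small camera pointed at the player's avatar
    public Renderer displayQuad;   // the screen on the dashboard

    void Start()
    {
        // Create a small texture for the display and let the selfie camera render into it.
        RenderTexture feed = new RenderTexture(512, 512, 16);
        selfieCamera.targetTexture = feed;
        displayQuad.material.mainTexture = feed;
    }
}
```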
New models of players and dashboard displays
Glenn Wustlich sent me some of his new models for the players, the dashboard and the displays.
Some of the new assets for V.2
Build V0.3
This build includes all of the scenes for the demo, so the story can be played from beginning to end.
Videofootage of the demo:
Game logic of Build V0.3
This flowchart displays the general logic behind the code of this version:
Build v0.3 flowchart
Interesting fact: The spaceship never moves
During the experience it seems the spaceship tilts up and down and thrusts forward in order to take off, evade and crash down. I had to simulate these effects, because if the ship actually moved around it would seriously mess up the physics of the objects and the players inside it. That’s why the spaceship never actually moves; it’s the objects and the camera on the outside that move around the ship to create this effect.
During take-off the world moves and tilts around the ship
Evading the garbage is accomplished by manipulating the dashcamera and moving the garbage at the same time
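A stripped-down sketch of this trick (names are illustrative, not the project's actual scripts): keep the ship static and move and rotate the outside world in the opposite direction, so from inside the ship it looks like the ship is flying.

```csharp
using UnityEngine;

// 'The ship never moves' sketch: instead of translating the spaceship, move and
// rotate everything outside of it in the opposite direction. Attach to the ship.
public class FakeShipMotion : MonoBehaviour
{
    public Transform outsideWorld;                              // parent of all objects outside the ship
    public Vector3 apparentVelocity = new Vector3(0f, 0f, 8f);  // how the ship should appear to move (m/s)
    public float apparentTiltSpeed = 5f;                        // apparent pitch, in degrees per second

    void Update()
    {
        // Move the outside world backwards so the ship seems to fly forwards...
        outsideWorld.position -= transform.TransformDirection(apparentVelocity) * Time.deltaTime;

        // ...and rotate the world around the ship so the ship seems to pitch up,
        // while the ship (and the physics inside it) stays perfectly still.
        outsideWorld.RotateAround(transform.position, transform.right, -apparentTiltSpeed * Time.deltaTime);
    }
}
```

Because the ship's transform never changes, the loose objects and seated players inside it keep behaving normally.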
Playtest 3 – Eva Reussien – 27-06-2018 – Build V0.3
Goals of this test:
– To see if the participant understands what user input is expected from him
– Overall impression and remarks on possible issues or uncertainties while playing the experience
– How the participant reacts to the first draft of the mechanic in the final act
Videofootage of the playtest:
*The footage is not in sync due to different framerates and performance drops during VR gameplay*
What I’ve learned from this test & other notes:
– The multiplayer part in the final sequence is not working as intended, it feels chaotic and the mechanic remains unclear.
– The only way to communicate with the other players is to take the headphones off or lower their volume, which greatly breaks the immersion
– The model for the octopus was still turned around 180 degrees from the other player’s perspective
Task analysis Playtest 3 / Build V0.3
Issues with recentering the controller
Recentering the controller is still not working ideally: the Home button is difficult for users to find with their fingers, and pressing the back button next to it prompts users to quit the application. I tried remapping the recenter function to the bigger clickpad on top of the controller through OVRInput.RecenterController(), but this didn’t seem to work.
On further inspection, it turned out this function has been deprecated from the mobile SDK. On the official Oculus developer forum somebody had the same problem: they wanted to implement this function because their application was to be experienced in a theme-park-like environment.
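For reference, the remap I attempted looked roughly like the sketch below (reconstructed from memory; the touchpad button name is the one I believe the Oculus Utilities expose). It compiled, but the recenter call no longer had any effect on the mobile SDK.

```csharp
using UnityEngine;

// Attempted remap (sketch, not working in the end): listen for a click on the
// controller's touchpad and call the recenter function from the Oculus Utilities.
public class RecenterOnTouchpad : MonoBehaviour
{
    void Update()
    {
        // The big clickpad on top of the Gear VR controller.
        if (OVRInput.GetDown(OVRInput.Button.PrimaryTouchpad))
        {
            OVRInput.RecenterController();   // deprecated on mobile, see the note above
        }
    }
}
```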
Because I couldn’t solve this issue through software for now, I decided to implement a little tweak in the hardware to better guide the user to the location of the button and prevent them from accidentally quitting the experience.
This holder slips around the controller and prevents accidental backbutton-pressing
Intermediate change:
New model for the spaceship
Glenn Wustlich sent me a new version of the spaceship. After reviewing the model I asked him to also provide cardboard textures, so the players would start in the same environment as the one they are in before putting on the headset. After logging in, the players travel from the actual world to the fantasy world; this is made clear within the experience by mimicking the decor before completely changing the look of the surroundings.
The cardboard decor before travelling to the fantasy world
The environment of the ship after logging in to the fantasy world
The ship as seen from the outside during the main scene
Playtest 4 – Matz Schütz – 3-09-2018 – Build V0.3
Goals of this test:
– Overall impression and remarks on possible issues or uncertainties while playing the experience
– How the participant reacts to the first draft of the mechanic in the final act
Videofootage of the playtest:
*The videos are presented separately due to different framerates and performance drops during VR gameplay*
Test footage:
Screencap:
What I’ve learned from this test & other notes:
– The how-and-why of suddenly having an avatar at the start of the experience remains unclear; the narrative of players transforming into members of the space crew (as described in Appendix 4: script and storyboard) is missing.
– Besides some minor bugs and the chaotic final act, the experience could be played through in its entirety without game-breaking issues.
– The model for the octopus was still turned around 180 degrees from the other player’s perspective
Task analysis Playtest 4 / Build V0.3
Build V0.4
Changes for V0.4 :
Utilizing a player’s playfulness to increase the feeling of teamwork
Most participants keep playing with the buttons during the first sequence of the demo because they can keep hitting them to make weird sounds. They don’t really pay attention to the narrator’s instructions and enjoy hitting them because they are fun in and of themselves (this corresponds to the Empowerment Core Drive from the Octalysis framework described earlier). But this way the goal of this section doesn’t become clear right away, and the feeling of completing the task together isn’t really present either. That’s why I came up with a way to make activating the spaceship feel more like a mini-game. I tried out a feedback loop with a meter that fills up while the player is playing music and decreases when the player stops. The goal is for both players to fill up their meters, which triggers the spaceship to launch. This way the players feel like they’ve accomplished the task together, and this should increase the feeling of teamwork early on.
The ‘music-meter’
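A minimal sketch of this feedback loop (illustrative names and tuning values, not the project's exact code):

```csharp
using UnityEngine;

// 'Music-meter' sketch: the meter fills while the player keeps making music and
// slowly drains when they stop. When every player's meter is full, the ship launches.
public class MusicMeter : MonoBehaviour
{
    public float fillPerHit = 0.15f;     // progress added per button hit
    public float drainPerSecond = 0.1f;  // progress lost per second of silence

    public float Progress { get; private set; }               // 0..1
    public bool IsFull { get { return Progress >= 1f; } }

    // Called by a dashboard button when the player hits it.
    public void OnNotePlayed()
    {
        Progress = Mathf.Clamp01(Progress + fillPerHit);
    }

    void Update()
    {
        // Drain while the meter isn't full yet, so the player has to keep jamming.
        if (!IsFull)
            Progress = Mathf.Clamp01(Progress - drainPerSecond * Time.deltaTime);
    }
}
```

A separate script can then check whether every player's meter reports IsFull and start the take-off sequence.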
New login instructions
While playing Fail! Factory on my Oculus Go I stumbled upon an even better way to provide instructions on recentering the controller.
Fail Factory’s instructions VS my instructions using this format
Playtest 5 – Tom Vonk – 4-10-2018 – Build V0.4
Goals of this test:
– To see how the participant reacts to the new character selection element
– To see how the participant reacts to the overhauled final mechanic; is it now clearer what the goal is and can the player anticipate it?
– Does the game perform more reliably during the final act?
Videofootage of the playtest:
*The videos are presented separately due to different framerates and performance drops during VR gameplay*
Test footage:
Screencap:
What I’ve learned from this test & other notes:
– The participant has difficulties with the pacing of the color change interval
– The participant has difficulties hitting the right button; he doesn’t feel in control of which button he touches
– The participant nonetheless has a feeling of teamwork. He suggests also implementing this right at the start, before the first act begins
– The final mechanic runs a lot smoother than the previous version
– The model for the octopus was still turned around 180 degrees from the other player’s perspective
Playtest 6 – Build V0.4
Goals of this test:
– To see how the participant reacts to the new character selection element
– To see how the participant reacts to the overhauled final mechanic; is it now clearer what the goal is and can the player anticipate it?
– Does the game perform more reliably during the final act?
Videofootage of the playtest:
What I’ve learned from this test & other notes:
– The red/purple and light/dark blue buttons look too much alike. The lamp causes the hue of the buttons to change.
– The model for the octopus was still turned around 180 degrees from the other player’s perspective
Task analysis Playtest 6 / Build V0.4
Playtest 7 – Bas John – 9-10-2018 – Build V0.4
Goals of this test:
– To see how the participant reacts to the new character selection element
– To see how the participant reacts to the overhauled final mechanic; is it now clearer what the goal is and can the player anticipate it?
– Does the game perform more reliably during the final act?
Videofootage of the playtest:
What I’ve learned from this test & other notes:
– The red/purple and light/dark blue buttons look too much alike. The lamp causes the hue of the buttons to change.
– The lamp is positioned too far from the buttons for the participant; he can’t see both at the same time.
– The model for the octopus was still turned around 180 degrees from the other player’s perspective
Task analysis Playtest 7 / Build V0.4
Build V0.5
Changes for V0.5:
– Slower change interval of the lamp colors
– The lamp is a bit smaller so it can be viewed from better angles
– The buttons are spaced further apart and the hitboxes of the hands are made smaller
– Narrative is added to the start
– Some panels were added at the start to better sync player states
Playtest 8 – Build V0.5
Goal of this test:
– Final walkthrough of the product. Are the last refinements working?
Videofootage of the playtest:
What I’ve learned from this test & other notes:
I made a mistake in the camera settings, this caused the other player’s model to disappear. But the participant didn’t even notice this because he was paying attention to his own actions.
The participant didn’t like the sudden ending.
Task analysis Playtest 8 / Build V0.5
Playtest 9 – Danny de Vries – 18-10-2018 – Build V0.5
Goal of this test:
– Final walkthrough of the product. Are the last refinements working?
Videofootage of the playtest:
What I’ve learned from this test & other notes:
– The player gets a bit nauseous from the framerate drop between scene 0 and scene 1.
– The camera needs a small adjustment, as the model clips through it at a certain angle.
– The other player’s controller not being visible was not an issue for the participant; he didn’t notice until I mentioned it to him.
Playtest 10 – Build V0.5
Goal of this test:
– Final walkthrough of the product. Are the last refinements working?
Videofootage of the playtest:
What I’ve learned from this test & other notes:
– No big insights from this test.
Task analysis Playtest 10 / Build V0.5
Game logic of Build V0.5
This flowchart displays the general logic behind the code of this final version:
Build v0.5 flowchart
10.7.Appendix 7: The Octalysis framework
The Octalysis framework by Yu-Kai Chou
To find out how to make the experience as much fun as possible, I looked into (video) game design, as the experience is essentially set up like a multiplayer VR game. One of the most popular places at the moment to learn about the fundamentals of gamification and behavioral design is Yu-Kai Chou’s Gamification & Behavioral Design course on Udemy, which is given away for free on a regular basis. He does this in order to recruit more subscribers for the monthly premium model of the course. Normally, the Udemy course costs €194,99 (last checked: August 27th, 2018).
Yu-Kai Chou with his book on the Octalysis framework theory. Image courtesy of yukaichou.com
Yu-Kai Chou is an expert with over 15 years of experience in the field of gamification and motivational design (YuKaiChou.com, 2018). His widely praised work includes the Octalysis framework, which lays out a structure for analyzing the driving forces behind users’ motivation to keep using your product or service. The core of his theory is that gamification means much more than just taking game elements and cramming them into a product. According to his model, gamification is a design principle that mainly concerns the process of human motivation.
The model is called Octalysis because of the model’s octagon shape (ancient Greek : oktá-, “eight”) which depicts 8 Core Drives that together form the motivation for everything you do.
A fully filled-in Octalysis framework by Yu-Kai Chou. Image courtesy of yukaichou.com
The game elements (on the outside of the model) each correspond to one of the Core Drives. For example, when the user unlocks a milestone in your product, this adds to the Core Drive “Empowerment” because this game element makes the user feel their efforts lead to results. And allowing users to design their own Avatar boosts the “Ownership” Drive because they feel the newly created virtual identity is now theirs.
The 8 Core Drives of the model:
Meaning (Epic Meaning & Calling)
This Core Drive makes the user feel like they’re doing something greater than themselves. It motivates the user to keep going ‘for the greater good’, and may even get users to perform unpaid work. A great example of a product thriving on this Core Drive is Wikipedia and other Open Source products alike. The people who update and maintain these systems do this because they feel their work is read and appreciated by other people all over the world, and this motivates them enough to spend hours upon hours of their free time contributing to the product in question.
Empowerment (of Creativity & Feedback)
The Empowerment Core Drive is often found in products that encourage the use of creativity in the process of accomplishing a product’s goal. Not only is this expression of creativity important when applying the Drive, but also the constant feedback a user receives from his actions. Good examples of activities that utilize this Drive are clay modelling and building Lego sets. These activities are fun in-and-of themselves and don’t need constant newly designed input to keep them fun and engaging. In game design, the perfect example of this Drive is the sandbox game genre.
Social Influence (& Relatedness)
The Social Influence Drive draws its effectiveness from the social elements that drive people. These can be traits like mentorship, acceptance, social responses, competition or envy. It also relates to people’s need to draw closer to other people, places, or events they can relate to. A good example of the latter is nostalgia: when a product reminds you of childhood memories, this greatly increases the chance that you will buy or use the product because you already have a positive association with it. In marketing strategies, this Drive is often used to make people think a product is the ‘next big thing’ everybody is talking about and that they really shouldn’t miss out on it.
Unpredictability (& Curiosity)
Human beings naturally want to find out what will happen next. We are curious creatures, and this Drive makes use of that attribute. When you don’t know exactly what will happen next, the brain gets engaged to find out and you will keep thinking about it. A textbook example of a company that has mastered this Drive is Netflix, a video subscription service that offers series spanning entire seasons and keeps audiences engaged by traditionally ending each episode with a nerve-racking cliffhanger. This Drive is also the primary factor behind gambling addiction and can be used to run lottery programs that engage users to keep using a product or service. This behavior is perfectly demonstrated in the Skinner Box experiment, where an animal irrationally presses a lever frequently because of unpredictable outcomes.
Avoidance (& Loss)
Avoidance & Loss is the Drive that plays into the human need to avoid negative things from happening. This can be something simple like avoiding the loss of work a user has done up to a certain point, but after more effort is put in, this Drive can ‘lock’ a user into continuing to use a product or service. They have passed a point of ‘no return’ where quitting makes them feel everything they’ve done so far was useless, and finishing the task feels like the wiser thing to do. Situations with opportunities that have to be acted upon within a short timespan really activate this Drive, because they make people feel like if they don’t act immediately, they’ll never get the chance to act again.
Scarcity (& Impatience)
This is the Drive that makes you want to own or do something because you can’t. The harder it becomes to have or do it, the more interesting it becomes, and not being able to receive it right away makes users think about it even more obsessively. Many free-to-play games have based their revenue model on this Drive. In Farmville, for example, when you plant your crops you have to wait several hours for them to grow before you can harvest them. Or you can buy some fertilizer with real-world money to make them grow right away. This business model works even more effectively on children and young adults, because they have a very impatient nature and can’t relate in-game items to monetary value as well as adults can (the newest addition to this trend is Fortnite, the multi-billion-dollar money machine aimed at children).
Ownership (& Possession)
The Ownership & Possession Drive revolves around being motivated because you feel like you own or control something. When you have this feeling of ownership, you also develop the need to improve its value or quality. In game design, it’s usually found in the form of virtual goods and currencies. Being able to customize what you’ve collected or earned also adds to the feeling of ownership; this is why virtual avatars and customizable profiles work so well to get users engaged. This Drive is also the reason we like to collect things like figurines, puzzle pieces or foreign coins.
Accomplishment (& Development)
This Core Drive has everything to do with the human nature of making progress, developing skills and overcoming challenges. It is easily triggered by being able to earn (virtual) badges, trophies and achievements, often to show off to others. It’s important that earning these ‘prizes’ requires a form of challenge, as a reward without a serious challenge doesn’t really mean much at all to the user. It’s one of the most commonly used Drives as it is easy to include in the form of points, badges, leaderboards and leveling systems.
White Hat & Black Hat design
The Core Drives in the top half of the model are positive motivators, which are called ‘white hat’ gamification techniques. The techniques in the bottom half are negative motivators and are called ‘black hat’.
Left brain VS right brain
Yu-Kai Chou divides the Core Drives between logical and emotional. For this he uses the metaphor of the left and right brain. The Core Drives on the left of the Octalysis framework are associated with ownership, logic and calculation. The Core Drives on the right are based on social aspects, creativity and self-expression.
Rock(et)ship Octalysis Framework
I used Yu-Kai Chou’s Octalysis generator to visualise my product within the Octalysis framework:
Product mapped within the Octalysis framework
The generator reviews my product as follows:
“Your experience is heavily focused on White Hat Core Drives, which means users feel great and empowered. The drawback is that users do not have a sense of urgency to commit the desired actions. Think about implementing light Black Hat Techniques to add a bit more thrill to the experience.
Also, your Right Brain Core Drives are much stronger than Left Brain Ones, which means your experience is much more intrinsic in nature. This is great because users genuinely enjoy your experience. You can also consider adding in more Left Brain Game Techniques to add more feeling of accomplishment, more ingrained ownership, and more controlled limitations to spice up the experience.” (Octalysis tool, 2018)
Udemy’s certificate of completing the Gamification course
10.8.Appendix 8: Other Deliverables
10.8.1.Project proposal
The original project proposal from March 2018 can be viewed here (in Dutch):
Problem statement
Storytrooper is a storytelling company founded by Christina Mercken that gives fun workshops for anybody who is interested in a good story. Christina can be hired to speak at private events and she travels to cultural events like Oerol and Mezrab as a freelance storyteller/performer.
In the past years, as technology advances and our attention span gets shorter, Storytrooper would like to create new storytelling experiences that intrigue participants and keep them engaged for longer timeframes – thus creating a stronger storytelling experience using technology, sound and visuals.
Design challenge
How can a virtual reality experience
create immersive storytelling sessions for groups,
keeping them intrigued and engaged?
Vision
Using virtual reality, I create gamified experiences for groups on location, allowing them to solve fun and challenging quests in a unique and immersive way.
Planning:
– May 14 / Pinpointing the concept: scenario & script, illustration of the product, storyboard of the customer journey
– June 4 / First testing environment (prototype 0.1): low-fi Scene 1 demo with sounds and narrative
– June 11 / First test results: comments of participants after playing the demo, gathered using participant observation
– June 18 / Second testing environment (prototype 0.2): more polished Scene 1 demo using information gathered in the empathy map
– June 25 / Second round of test results: results on the understandability of the desired behaviour
Just before the summer stop began, our group had a session in which we presented our projects to the other students and gave each other feedback on our work.
Post-its with feedback I received on my presentation:
The most important feedback I got was:
– The Design Challenge is not very clear, what is the exact problem you’re solving?
– Do participants have any influence on the story? (E.g. is the story set or is the experience set up like a sandbox game?)
– How exactly do participants get immersed into the experience?
– How many participants can join?
– Look into game heuristics to back up the (social) interactions.
– Look into ABN Amro’s The Lockdown (a mobile AR escape room) as an example for a mobile solution.