We were very fortunate to have the opportunity to learn directly from Jake Knapp, pioneer of the Design Sprint process, honed during his time at Google Ventures, and author of ‘Sprint’. We participated in a highly intensive, hands-on workshop in Bologna earlier this year and came back to Dublin excited about this innovative design process we had just experienced first-hand. Now it was time to put our new learnings into practice, so we decided to run the full Sprint process on one of our R&D projects. But what is a Design Sprint?
A Design Sprint is a ‘battle-tested’ process to define a problem, compare and contrast competing ideas, create a quick prototype, and get users’ feedback - all in 5 days.
This 5-day, 5-step structure would give us enough time to test our product and answer critical questions, with a dedicated day for each of the following:
We decided on wayfinding as the topic for our internal Sprint. We've had plenty of experience working with mapping and location in mobile apps, but wanted to build a complete location-based system to push our ideas even further. For this purpose, we believed in the potential of using a blend of new technologies to design and build a wayfinding solution.
To narrow it down, we focused on a university campus wayfinding solution. As we are currently based on the DCU Alpha campus, an innovation hub for companies in Dublin, we felt it would be the perfect setting for an initial test of our concepts.
We are passionate about technology at Marino and we will take any chance to venture into new technologies, so as well as implementing the Sprint process we took the opportunity to explore emerging technology, hardware and devices we were excited about. We were thinking in terms of augmented reality, IoT, wearables, voice interfaces, beacons or computer vision. So much potential and many things to try out, right? So, where to start?
We started by setting a long-term goal for the Sprint: for a ‘person to be at the right place at the right time, no matter their ability’, achieving this by effectively routing people around the campus.
We continued by turning fears into questions, writing ‘What If?’ notes that would surface possible issues we might encounter. These notes got us thinking about potential technology limitations, app discovery and adoption, last-minute location changes, facilities discovery, accessibility for people with disabilities and, overall, the need to design for everyone, whatever their ability.
Once we had listed all our initial questions, we mapped out a basic user flow: how users would start interacting with our product and the steps they would follow to get to their destination.
From the beginning, we considered our audience, and discussed the benefits of designing for ‘residents’ (‘students’, in a normal university setting) vs ‘visitors’. This was tricky to decide since, in our view, both audiences would have similar needs in some contexts but very different ones in others, depending on the type of campus. In our case, as we are based on a university innovation campus occupied by companies, there are no classes or students as such. Since we wanted to test at our current location, we decided our primary audience would be ‘visitors’: people who might be looking for the office of a company they are about to meet, a meeting room or any other place on campus where an event could be happening.
‘Lightning Demos’ are a great way to get a good understanding of what’s already out there. Before we got started, we decided to carry out some individual research and come back with a few ideas each, so we could discuss them as a group. We didn’t limit ourselves to competitors’ products but looked at a range of concepts that solved similar kinds of problems in any context.
The range of ideas we came up with was broader than we imagined. We took inspiration not only from other products and apps, but also from bowling, Hotter/Colder games, research papers, spatial sound videos, film scenes and even Geiger counters. Some interesting examples we discussed were:
We drew one idea per sheet and presented them to the team. After discussing all the ideas, we organised them by topic: orientation, tactile navigation, spatial positioning, companion, speech, computer vision and context.
We found this part of the process very valuable. The ideas captured by the team provided creative solutions that contributed to solving our problem from different perspectives. This was the final step in gathering all the inspirational materials we needed to start sketching a solution.
After a full day of understanding the problem, deciding on a goal for the Sprint and getting some inspiration from our Lightning Demos, we began focusing on solutions.
We started generating ideas individually - written or drawn, it didn’t matter. These ideas would feed into our ‘Crazy 8’ sketches.
For those of you new to the Design Sprint process, the Crazy 8 exercise consists of producing 8 different sketches on a sheet of paper folded into 8 sections. You can experiment by drawing your initial ideas, then exploring their opposites, thinking of alternatives or working on a flow. The challenge is that you have 1 minute to sketch each idea - no pressure!
At this stage we had a stack of solutions. That was great, but it was also a problem, since we couldn’t prototype and test them all. To decide which ideas to focus on, everyone was given small dot stickers to mark the ideas they thought should be included in our prototype. This created a heatmap that would guide us towards the ideas to include in our storyboard. For this process to work, one person has to act as the Decider and make the final call when voting - so our Decider received extra dot stickers to vote with.
Once we decided on our focus, it was time to create the storyboard - a plan for our prototype. We chose to sketch out a story for a user to find their way to a destination being led by 3D spatial sound and audio cues. We drew the whole flow we were thinking of, which included voice recognition, AR and computer vision.
We carried out research on how to design for voice and wrote a script to suit the storyboard. However, this wasn’t really the core of our user testing since we wanted to firstly confirm if 3D spatial sound guidance would work so we focused on prototyping how to guide a person on a journey from A to B.
We also sketched secondary interactions, such as checking your current location, how arrow directions might change or how a user might switch between views. We wouldn’t be using these yet, but it was useful to put those thoughts down for now.
For our initial prototype, there were a few elements we were wondering how to ‘mock up’ to quickly test our concept, such as 3D spatial sound, haptics, warnings or 'happy jingle' sounds. We decided to only choose some elements for our first few tests and went for the 3D spatial sound, warning elements and audio cues. It was time to make it real and create a 'fake' model of the final solution. As Google Ventures says:
"[Day 4] is about illusion. You've got an idea for a great solution. Instead of taking weeks, months, or, heck, even years building that solution, you're going to fake it. In one day, you'll make a prototype that appears real... And on [day 5], your customers—like a movie audience—will forget their surroundings and just react."
Prototyping a solution certainly presented a challenge for our design team, since visual interfaces weren’t the core element we had to plan and prototype. In this case, the core task lay in 3D sound and voice interfaces - a more abstract design to represent and communicate, as you might imagine.
We also had to prototype this fast; it wouldn’t be realistic to build a fancy 3D sound app in a day, so we had to be creative and find ways of ‘faking’ the experience so our testers would feel they were being guided by an app. We went out and measured the number of steps a user might take to complete our test route, created a constant sound that would guide users in the right direction and found a soundscape app to mimic how 3D sound might work.
Since we wanted to test our 3D sound concept first, we decided not to start prototyping the UI yet but focus on guiding a person by the sole use of sound.
Our user testing goal was to guide people to their destination without the use of their vision. Our hypothesis was that we could achieve this by using 3D spatial sound, combined with voice guidance in some cases. Even though we had started thinking of a UI solution within the storyboard, we decided to leave this dimension out of our user testing scope.
For this first test, we decided to test the path between the DCU Alpha campus entrance and our office. Our core research question was: What is the best way of guiding a person to their destination?
A few different approaches arose from the Discovery phase and we wanted to investigate what would work best. Some of our questions were:
After we established our user testing goals and research questions, we recruited a few volunteers in the office to test our concept - they didn’t know what they were getting into!
To get started, we wanted to test our hypothesis on distance representation. Our initial idea was that it might be easier for people to understand distances as a number of steps rather than as metres to be walked. We were curious to test this idea, even though we were concerned about the number of steps not being calibrated for each person.
For this first test, we obstructed our tester’s vision and attempted to guide them to their destination by calling out the number of steps they had to take for every segment of the path. We complemented this by giving them clock directions (meaning 9 o’clock to turn left, etc.) when it was time to make turns.
The use of clock directions was based on research that indicated that:
‘Blind people usually use a clockwise notation to orient themselves’.
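As a minimal sketch of how this clock-direction guidance could be automated, the helper below maps a relative turn angle to the nearest ‘clock hour’ (12 = straight ahead, 3 = right, 9 = left). The function name and the 30-degree-per-hour mapping are our own illustrative assumptions, not part of the test script we used.

```python
def angle_to_clock(angle_deg: float) -> int:
    """Map a relative turn angle (degrees, clockwise positive,
    0 = straight ahead) to the nearest clock direction:
    12 = ahead, 3 = right, 6 = behind, 9 = left."""
    # Normalise the angle into [0, 360)
    angle = angle_deg % 360
    # Each clock 'hour' spans 30 degrees; round to the nearest hour
    hour = round(angle / 30) % 12
    return 12 if hour == 0 else hour
```

With this mapping, a 90-degree right turn becomes ‘3 o’clock’ and a 90-degree left turn (-90 degrees) becomes ‘9 o’clock’, matching the notation we called out to testers.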
Our findings:
For this second test, we also obstructed our tester’s vision and used an external sound source to guide testers to their destination. For this purpose, we created a constant sound that testers would need to follow. To complement this constant sound, we introduced ‘spatial’ warning sounds to alert testers if they were straying from the path. Testers would hear these warnings from either their right or their left side, depending on their deviation from the path.
Our findings:
In our third test, we also obstructed our tester’s vision and tried to remotely guide them to their destination. In this case, we were not talking to the tester directly but creating the illusion they were interacting with the sound only - getting instructions through their headphones.
We used the SoundShade app to guide people to their destination. We gave headphones to our testers so they could listen to a constant sound marking the way, and a secondary sound, played on either the right or the left side, warning them when they were going off the path. For this we used a constant fireplace sound marking the way and a storm sound indicating deviations from the path and warning of obstacles.
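The left/right decision we made manually during this test can be sketched in a few lines: the sign of the 2D cross product between the path segment’s direction and the tester’s offset tells us which side of the path they have drifted to. This is an illustrative sketch (the function name and coordinate convention are our assumptions), not code from the actual test.

```python
def warning_side(seg_start, seg_end, position):
    """Return 'left', 'right', or None if the tester is on the path.
    Points are (x, y) tuples in a planar coordinate system with y up.
    The sign of the 2D cross product between the segment direction
    and the tester's offset gives the side of the path."""
    dx = seg_end[0] - seg_start[0]
    dy = seg_end[1] - seg_start[1]
    px = position[0] - seg_start[0]
    py = position[1] - seg_start[1]
    cross = dx * py - dy * px
    if abs(cross) < 1e-9:
        return None          # on the path: play only the guide sound
    return "left" if cross > 0 else "right"
```

The returned side would then pick which headphone channel plays the storm warning sound.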
Our findings:
We successfully guided people to their destination by using 3D spatial sound (binaural audio), which validated our initial hypothesis. The Sprint process was a cost-effective way to test our concept. It allowed our team to have enough time to come up with ideas, test and learn from them, but not enough time to overthink those concepts.
This process gave us a shortcut to explore a range of new technologies without the time and expense of launching a fully functioning product. Testing early was vital for us to learn about what worked and what limitations we might encounter in future steps.
This project also allowed us to expand our knowledge on emerging technologies and explore new approaches to the design process. As we mentioned, it certainly presented a challenge for our design team, since visual interfaces weren’t the core element but 3D sound and voice interfaces - a more abstract design to represent and communicate.
This Sprint has made us certain of current technology limitations and how to work within them. It has also served as a starting point for a range of ideas we would like to investigate further in future sprints.
As we confirmed the possibility of guiding a person to their destination by using 3D spatial sound (binaural audio), our next step to turn this concept into a reality is to build an audible wayfinding interface that guides a person to their destination.
We have started working on a 3D spatial sound prototype that guides people by using 3D sound and a complementary visual compass that defines waypoints to direct people to their destination. Our concept is that guidance wouldn’t rely on maps but on audible cues, supported by a simplified interface representing position, directions and the final destination. We are investigating some additional challenges that arose from our user testing sessions, such as how to represent not only direction but also distance through 3D spatial sound, and how to communicate which direction a 3D sound is coming from. We are working on refining this experience so that distance is represented effectively through sound and the position of 3D sounds is clearer to users.
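To make the direction-plus-distance idea concrete, here is a minimal sketch of one way a waypoint could be turned into an audio cue: the bearing relative to the user’s heading becomes a stereo pan, and the distance scales the volume. The function, the panning curve and the 10-metre half-gain constant are all illustrative assumptions, not our final design.

```python
import math

def guidance_cue(user_pos, user_heading_deg, waypoint):
    """Turn a waypoint into a simple audio cue.
    Returns (pan, gain, distance): pan in [-1, 1] (-1 = full left,
    +1 = full right), gain in (0, 1] that grows as the waypoint
    gets closer, and the straight-line distance in metres."""
    dx = waypoint[0] - user_pos[0]
    dy = waypoint[1] - user_pos[1]
    distance = math.hypot(dx, dy)
    # Bearing measured clockwise from north (the +y axis)
    bearing = math.degrees(math.atan2(dx, dy))
    # Relative bearing in [-180, 180): 0 = dead ahead
    relative = (bearing - user_heading_deg + 180) % 360 - 180
    pan = math.sin(math.radians(relative))   # left/right placement
    gain = 1.0 / (1.0 + distance / 10.0)     # louder when closer
    return pan, gain, distance
```

Note that a simple sine pan like this cannot distinguish front from rear (0 and 180 degrees both pan to centre) - exactly the front/rear ambiguity we are designing sounds to resolve.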
This process will also involve the design of meaningful sounds that successfully guide people and communicate the difference between front and rear sounds.
Alongside this, we are undertaking research on geolocation technology - investigating everything from beacons to wifi signals, GPS and electromagnetic fields. We are currently running experiments with beacons and looking for the best solution to accurately detect location both indoors and outdoors. Unfortunately, our findings so far have shown that beacons are currently unlikely to offer anywhere close to the accuracy we would need for our initial 3D spatial sound wayfinding scenario.
We will therefore need to consider slight changes to our scenario in a second phase of this project, to adapt to current technology limitations. Given these limitations, we are looking at alternatives that don’t rely on beacons for positioning but instead use them as proximity landmarks.
For this purpose, our next step will be to build a beacon proximity grid to improve location accuracy. We have started developing a sample application that places beacons throughout our office and maps out the nearest beacon - making it possible to track a person’s movement through the room and identify when they approach a beacon.
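The nearest-beacon logic can be sketched roughly as follows, assuming a scanning library delivers (beacon_id, RSSI) readings. Because RSSI is noisy, each beacon’s signal is smoothed with an exponential moving average before picking the strongest (i.e. closest) one. The class name, smoothing factor and API are our own illustrative assumptions, not the actual sample application.

```python
class BeaconGrid:
    """Track smoothed signal strength per beacon and report the
    nearest one. RSSI is in dBm, so 'strongest' means least negative."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha   # EMA smoothing factor, 0..1
        self.smoothed = {}   # beacon_id -> smoothed RSSI (dBm)

    def update(self, beacon_id, rssi):
        # Seed with the first reading, then smooth subsequent ones
        prev = self.smoothed.get(beacon_id, rssi)
        self.smoothed[beacon_id] = prev + self.alpha * (rssi - prev)

    def nearest(self):
        """Return the id of the beacon with the strongest smoothed
        signal, or None before any reading arrives."""
        if not self.smoothed:
            return None
        return max(self.smoothed, key=self.smoothed.get)
```

Feeding scan results through `update` and polling `nearest` is enough to follow a person from beacon to beacon across a room, which is the proximity-landmark behaviour described above.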
We have seen through this process that, working as a cross-functional team, we can quickly get up and running with a new idea, each of us bringing our strengths and experience to creating something completely new. And now we’re looking forward to building on our initial findings. As Rick Farrell, Google’s Senior UX Designer, says:
‘Each phase of the (Sprint) process generates more than concepts, it actually generates motivation to get things done in the future.’
And this initial Sprint has certainly given us the motivation to keep learning and experimenting with emerging technologies at Marino. Now it’s time to take what we learnt and make it a reality.
Got a big opportunity, problem or idea? Need answers quickly? Our Design Sprint Package is a practical method for answering critical business questions. It works for companies or organisations of any size. Talk to us about a Design Sprint.
We’re ready to start the conversation however best suits you - on the phone at
+353 (0)1 833 7392 or by email