Facing Reality: Taking a Visitor-Centered Approach to Augmented Reality

During this 60-minute session, we will share details of how we took a visitor-centered approach to the three-year development of Lumin, the Detroit Institute of Arts’ augmented reality tour. Topics will include the museum’s collaboration with mobile tour development company GuidiGo, the process of selecting objects and developing experiences to foster meaningful connections with art, and the evaluation findings that guided us along the way. We will end the session with a reflection about lessons learned and what’s next for AR in museums.

Transcript

Unknown Speaker 00:00
All right, I think we can finally get started. Thank you so much for your patience. Donnie Nelson did a wonderful job; I hope you got some sweet notes out of that. I'm really, really happy to finally be here with my colleagues. Unfortunately, one of our team members is not present: that's Richard Scott, who is pretty critical to this team. But the team that I have here are people that I very much care about: I have Megan DiRienzo, Elisa Vieira, and David from GuidiGO, and my name is Andrea Montiel de Shuman. We are going to be discussing three years' worth of research on augmented reality and how it makes sense for our communities. I hope that you enjoy it, and we're going to jump right into it. I'd advise keeping questions to the end, since we have a lot of content to go through, but do ask all of the questions that you want; after this session I'm happy to meet with whoever has further questions.

All right, let's get into it. So what the heck is Lumin? This is where I'm really hoping that this video actually works, but let's give it a shot. What you are seeing on the screen is not the wonderfully produced version of the video, but it shows that this is a handheld device. What it does is let you select a stop within the museum and then follow directions: this is indoor wayfinding. You have a bunch of blue dots that get you from wherever you are within our very complex museum to a stop. Here, one of my colleagues is walking through some galleries, and you can see that the device sees the corners and can take you in a way that is not going to make you trip. You can see people walking in front of you, meaning you probably are not going to be stumbling, which is a good thing. And here you are arriving at a room where we have a mummy. This room is actually quite dark, so I hope you can see it on the screen. The other mode not only can take you to different places, it can also help you get to know a little bit more about art and contextualize it in really interesting ways. For example, here at the mummy we're using archival material from our conservation department, and in collaboration with them we put together the X-ray overlay.

Another thing that you can do with this device is go around the piece, and you will notice something: this is not like Pokémon GO, where the AR asset is just floating in the air. This one uses persistence technology; it is anchored within the space. As you can see, people are going around, they're moving around, and the AR layer is staying where it's supposed to stay. If you really like the AR mode, that's awesome, but you can also tap on Learn More, and you're going to get snippets that dive deeper into further content. This was actually a really fun collaboration with the conservation department, because they had a great opportunity to offer information from the science community that is often not shared within art institutions.

All right, that's basically how it works, and I'm going to skip to the end. All right, come on, maybe aim it that way. I'm just going to do this. Okay, we're back. We're good. So this has been an ongoing collaboration with our partners at GuidiGO. David, do you want to share a little bit more about how the collaboration with Google itself happened?

Unknown Speaker 03:50
Yeah, sure. Thank you. Hi, everyone. So we started to work with Google and Tango in 2015, actually; we had worked with them previously on Google Glass. They reached out to say, hey, we have a cool new device, would you like to work with us? We had had that kind of good-and-bad experience with them on Google Glass, so we said yes, but no. And they said, yeah, but you should have a look at it, because it's an amazing tech. So I remember walking through the Computer History Museum in Mountain View with one of the product managers on Tango, and he was showing me a wayfinding experience, kind of like the one we have here with the DIA, and I was like, wow, how is it possible to have those blue dots appearing on the floor like magic, you know, and you look around, you move, and they hold? For the first time, a company had designed hardware that was aware of the environment. Now you have iPhone and Android and you can play with some cool ARCore or ARKit stuff, but back in 2015 AR was basically: you look at a QR code, and then you get a Yoda or a mini statue of something. There was no persistent AR at the time. So we looked at the Tango device, and we were like, wow, we can do a lot of things with this in the museum space, in terms of educational content. And that's when we reached out. I mean, we were in touch with Richard, and in the end we reached out to the DIA to say, hey, do you think we could do something with this technology? And they said, yeah, we have a project. And that's how it started.

Unknown Speaker 05:32
And one of the things, for those who are not fully familiar with it: persistence technology really is an extension of the capabilities of your device, letting it understand its position within space. This brings wonderful opportunities. Well, let's talk about how we set up the project. Okay.
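To make the idea concrete, here is a minimal sketch of world-space anchoring, using Apple's ARKit as an example stack; this is an illustration under that assumption, not Lumin's actual code, and the function and anchor names are hypothetical:

```swift
import ARKit

// Sketch: tie digital content to a pose the device has recognized in the
// room, rather than to the screen, so it stays put as visitors move around.
func placeOverlayAnchor(in session: ARSession, at transform: simd_float4x4) {
    // An ARAnchor fixes a position and orientation in the tracked world;
    // a renderer can then attach content (say, an X-ray layer) to it.
    let anchor = ARAnchor(name: "xray-overlay", transform: transform)
    session.add(anchor: anchor)
}
```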

Unknown Speaker 05:49
Do I need this? Yes. How does this work? Oh, there it goes. Okay, project background. So the DIA had a really interesting thing happen: a private donor gave us a nice chunk of money to create an app, and that was the direction. And that's like a nightmare, right? Everybody wants a museum to have an app. So we explored different options. We explored creating our own content management system so we could build our own tours in house; we explored a few other options; and we finally landed on AR, partly because of our connection with GuidiGO, but also because we, including Andrea, did a lot of research to figure out whether AR was actually even a sustainable thing to experiment with. We decided it could be sustainable; we were going to take a chance, believe it was sustainable, and move forward. And from there we started interpretive planning.

This is a little different from how the DIA usually does interpretive planning. Usually we start with content: we shape raw content into a relatable storyline, we think about the visitor experience, what we want the visitor to think, feel, and do as they experience the works of art, and how we can support that meaning making. Then you choose the appropriate tool based on that, right? You decide you want to share a certain piece of content, and then you look for the best way to share it. Here, we did it kind of backwards: we decided we were going to experiment with augmented reality in the museum, so we were trying to figure out ways to use that technology and find the content that the technology could best convey.

So we did a lot of exploring in our building. We have more than 600,000 square feet; we're huge. It's basically three buildings in one: we have the old building in the center that was built in 1927, a wing that was built in the 1960s, and another wing that was built in the 1970s. As you can imagine, like any large encyclopedic museum, it's nearly impossible to navigate. Well, it is possible, but it's really hard; it's a big challenge that we've been facing for our whole existence. So one of the first things that came to mind, especially in conjunction with the wayfinding possibilities, was that this was an opportunity to take visitors into the dark and hidden corners of the museum. You see here on the screen... sorry, I didn't change the slide, because I was looking at my computer. Okay: how can we connect people to art? Interpretive planning, wayfinding. On the one side here we have our Arts of the Islamic World gallery, the darker gallery with the carpet, and over on the other side we have Rivera Court, which is a very public, visible space. This is a very dark corner of the museum that doesn't necessarily see a ton of traffic, so we targeted this area as an opportunity to take people to a part of the museum that's rarely visited. That was the first thing we did: we scouted out the places in the museum that we could take people to that would be new for them.

And then we started to just wander the galleries and think about what we could do. We wandered the whole building and looked at different options. We were looking at empty spaces like this one next to a mural in our Indigenous American galleries, and we were thinking, you know, you could project an AR map there, you could explore all these objects and then carry them back to the map and place them on it. We were really having crazy ideas. And then here we have a shrine, a Shango shrine, meant to connect with the god Shango. These are the poles, and we were thinking about a game experience where visitors could connect these poles to a photograph of the real shrine that they came from. So we were thinking about games, we were thinking about overlays, we were thinking about interactivity, and we were really trying to figure out how AR could do different things in the museum.

Unknown Speaker 10:10
Um, this is an example of an idea that worked out very well and set the interpretive planning in a really tight direction. These two objects here are called kilgas. If you do manage to get back to this dark corner of the museum, and you're a middle schooler, you look at that and you think it's a toilet, right? Kids are like, I don't know what that is. It's not a toilet; it's actually a really important part of a water filtration system that was used in the Middle East to purify and cool water. These little stands, as you can see here, would have had a jar on top of them, and the jar would be made of a porous material that would filter the water, and as it filtered, it would cool it. We do have a picture showing the kilga with a jar on top of it; you can see it up there on the label. But that really wasn't an effective way of communicating the purpose of the object. So David and the great animation team at GuidiGO worked from a bunch of primary source documents and images of the jars, which really don't exist anymore because they're made of porous material and are very fragile, and rebuilt a digital working kilga that shows up next to the actual kilga. Show instead of tell, right? You can see exactly what it's doing, it's filtering water, and there are even water drips, so you can hear it. This was a moment where we were like, okay, AR is going to be really good for helping people see things that you can't see, and also a good opportunity to show people how some of these objects actually worked. So, here we go. See, it works.

So then we started looking for other objects in the museum that used to be used. Right here we have a teeny tiny Sumerian cylinder seal. In real life it's about this big, about an inch, and we have a lot of them in our collection. What they were used for is signing packages and important documents, contracts, that sort of thing. If you were somebody in the ancient Middle East, you would have your cylinder seal, and every time you had to sign an important document, leave your mark on something, or approve something, this is what you would use to do it. You would roll it across clay, which in the ancient Middle East was basically your paper. On display, you can see it's in this little case here; there's a ton of them, they're little tiny objects, and the imprints from them are displayed next to them. But it's not as effective as seeing it in action. So, Andrea, can you play this? Okay. This is how a cylinder seal works, and we made it huge, too. There it is, and then you can explore the rest of the seals up close and really appreciate these tiny, detailed carvings that were part of everyday life thousands of years ago.

Another opportunity with AR, as you saw with the mummy, is that you can overlay animations directly onto objects. Here we have another object from our ancient Middle East galleries. This is a palace frieze that would have been seen on the walls of this fellow here (where's the pointer? this fellow here), this king's palace. When most people think of ancient sculpture, they usually think of this, right? Gray, beige, white, those natural carving materials. But in reality, if you had seen this thousands of years ago, it would have been brightly colored. So we worked very closely with our curator to get colors reminiscent of the colors they think these looked like. It's a really impactful, simple experience: you hold up the device, you tap, and the colors show up. So it's a really great way to engage people with what these objects were in their time.

Unknown Speaker 14:50
And then there are a lot of really great interactive opportunities for AR, so we came up with an opportunity to play a game; there are a couple of games in the Lumin tour. This one works really well. This is the mushhushshu dragon (cute name, right?), and he is part of the Gate of Babylon. Actually, I don't know why I gendered him as a he; it's a dragon, it probably has no gender. Anyway, this dragon is actually made up of four predators, so we ask users to try to connect the predators to the parts of the body: it's made out of a snake, a scorpion, a lion, and a falcon, and you connect those. And then your reward is a fully immersive experience of walking through the gate, the Ishtar Gate, into ancient Babylon. And you can see here, oh, the mushhushshu dragon is on here. This is really a jewel in the experience; David's animation team is incredible. It's an opportunity for visitors to walk around a space that doesn't exist anymore. This gate is, you know, in a museum, not in the Middle East, so this is a moment when the gate is placed back in its original context, and people in Detroit, Michigan can explore it. It's really great.

All right. So after we tested all these different experiences (we were doing evaluation the whole time), it went pretty well. We'll talk a little bit later about some of the challenges with the user experience, but in terms of overall engagement: nine out of ten users said that it helped them engage with the art; there was 85% overall satisfaction, with people really happy with the content and finding it really engaging; and 88% of people said they learned something new. For a prototype, we looked at those stats as a huge success and as a good signal to continue developing the project. When we started, we had, what was it, seven stops on the first floor, and since the evaluation went so well, the Knight Foundation granted us some more money to further develop the project to the second and third floors. The prototype for Lumin also won a Gold MUSE Award at AAM, so that was even more encouragement to move forward. Okay, what's next? Elisa, yeah.

Unknown Speaker 17:47
Hello, hi. I'm Elisa Vieira. So I'm going to go a little bit deeper now into our evaluation and what we learned from visitors. At this stage of the process, as Megan just mentioned, evaluation is part of DIA practice, so Lumin wasn't going to be any different: we needed to do user testing, and we needed to hear from visitors to be able to move forward. After the first iteration of Lumin, which ran pretty much from January to April 2017, we started conducting user testing. We have the luxury at the DIA of having an evaluation department, so we had an internal evaluator, Ashley Mirasol is her name, and she collected data from visitors so we could learn from the feedback she gathered. As Megan just mentioned, we were happy; it was fairly successful. There were some interesting findings, though, and some surprising things that we found. Overall, the first seven stops that we tested, the first iteration of Lumin, were all throughout the first floor of the museum (we have three floors at the DIA); that's what was tested at that point. At the same time, we were already starting to develop new stops and thinking about how we were going to move forward.

As for the methods the evaluator used, and this is something we have continued using: some very short survey cards, I think with three questions on them, that we collect close to the pickup location for the device. Ashley also did some follow-up through email with some of the visitors who agreed to continue talking and give us more feedback. And she did observations and some short interviews with visitors, walking around and following them and seeing how they were interacting. As we moved forward, like I said, we used the results of this initial evaluation to make more informed decisions, and we ended up remediating some of the content for the existing stops. And the new ones we developed also became a little bit

Unknown Speaker 20:31
different with time. Let me see. So, for example, some of the unfamiliar things... I mean, we could talk for many hours

Unknown Speaker 20:47
about what we've learned, but we'll select a few of the surprising aspects that we discovered through the evaluation. First of all, we realized the device was heavy. We knew it, but you don't realize it until you have to hold it for a while, and it wasn't until we saw visitors going through the entire experience, and this was only one floor of the museum, that we realized that, you know, people put it down, even though it has a handle. It's still something that we need to figure out as the technology evolves: how to make it a little lighter for them to experience Lumin.

We also noticed that visitors were using the usual behaviors they use on their own phones. People are used to pinching and zooming on the screen, and this is not the way it works with AR, because you are using the screen, but the object is an AR object in front of you. You have to get closer to the object; you cannot zoom it on the screen, because it is not on the screen, it is actually a 3D virtual object. Because AR objects behave like real objects, you physically have to move closer, and we realized it was not necessarily intuitive for visitors to do this.

The other thing was exploring their surroundings. This is an example, by the way, from the video Megan showed you: this is one of the places where people would see the virtual platform and try to zoom on the screen instead of getting closer to the object. With time, people started adapting their own behavior, but you cannot expect people to automatically switch the way they behave to some new type of behavior.

Then the other example I was going to show you: you saw the Ishtar Gate in the video (well, in the picture; it wasn't in a video). We used it as a reward for visitors to see in the context of the museum as they walked through the gate. We pushed the boundary a little with some experiences, and this is one of them, in which visitors had to look up and sometimes look behind them to see certain things. This is an example of an altarpiece, actually a double-sided altarpiece. It's entirely virtual; it's not a physical object, it's based on an idea from the collection, but it appears in the center of the room, so you have to look at the three paintings on one wall, turn around, and then you see this immense altarpiece that you can actually walk around to see the back of. So that was a lot for people

Unknown Speaker 24:04
to get, you know, very quickly. This is an example of how a very complex game we created, where people had to find the paintings on this interface and then also see the back of it and walk around, got kind of confusing, and we realized the gaming experience was too much. Based on the evaluation, this was one of the places where we wanted to keep the surprise factor and the wow factor, which is definitely still there, but we had to simplify the gaming experience: fewer steps, clearer instructions, and not assuming people are going to look at places where there is really nothing physical in the museum, just in the virtual experience.

Another thing we did, and all of this is based on the results of the evaluation: one of the solutions we came up with was training people. We said, okay, we're going to add one new stop, and this is going to be like a playroom, which ended up being really cool, because GuidiGO did beautiful work with it. In the playroom we replicated some of the actions visitors were going to have to go through when they got to the other stops, but with a virtual object. It proved to be successful, but it made the experience a little longer as well; it was one more stop. So that was an initial solution, and it has definitely helped. We also realized we needed more written directions, but clear ones and short ones, so we had to simplify the content as well as add some of the missing links to help people navigate the experience. But the playroom also simplified the rest of the experience by anticipating what visitors were going to go through. So these are some of the solutions that we tested.

Let me see what else. In terms of what I mentioned, where you have to look in unfamiliar places, like looking up or looking behind you, we added these little blue arrows, which you'll see in a video later, that also help people point at the right place, because in some cases it wasn't intuitive to find what we needed them to see. Basically, at this point we realized we do need to respect natural human behavior a little bit more, to make it more of an effortless kind of experience and not an exhausting one. So, as I mentioned, we stripped away some of the complexity of the steps that were too gamified; basically, there was too much to do. We were also more intentional in aligning the objects with the obvious sightlines in the museum as you go into a room, so that things show up more in front of you and you don't have to look too hard to find them. We also worked some more on the content, like I said, but being careful in sizing and placing the digital objects was very important, so that people could appreciate the details without having to walk too far or get too close; we had to find the middle point to make it more effortless for visitors.

And I have this quick slide, although David is going to talk a lot about technology: at this point we also discovered that we still had some glitches that we needed to work on. That was something we definitely knew had to be addressed, and some of them were related to occlusion and navigation issues, which David is going to go into in detail.

Unknown Speaker 28:28
Moving forward, as I mentioned, while we were learning from the visitor feedback, we were also already adding several new stops to the tour, especially on the second floor, and later starting on the content for the third. We ended up rewriting most of the interpretive content and a lot of the instructions, sort of taking a step back on things that for us were already second nature, because we had been testing it ourselves, but that were rather too hard for visitors encountering it for the first time. We needed to guide them a little bit more, and of course continue addressing the technology issues that we had.

We would have loved to show you videos of a lot of the other stops, which are pretty cool, but we've selected one place on the second floor that we started developing content for at this point, and that was rather unique. Megan showed you a photo of Rivera Court. As you can see here, as opposed to the other experiences, where you go to a place in the museum and interact with one object, here you are interacting with an entire room, full of panels around you, that looks like this. This was a totally unique object, because it is an entire room, and you have to look up, behind you, everywhere. Because of how big this space is and how much interesting content AR could help visitors connect with, we ended up adding three stops in this room, in Rivera Court.

The first stop was more of an immersive experience, and this is it. Basically, we're helping visitors visualize how the court looked before the murals were there. There are 27 murals around you, and before, the walls were all blank. So we used archival material, and GuidiGO did beautiful work, as you can see, recreating the huge fountain that was in the center of this court. You can hear the water dripping in the fountain, and you can see the plants. The architectural details were recreated, and you're basically immersed in this space the way it looked in, like, 1920-something, before Rivera came to the DIA.

So for the other stops, you can imagine that developing this was a little bit different, because, like I said, the other stops were about engaging with one object, usually in front of you. It was also a learning experience to apply what we learned in that room to the other experiences we were creating or remediating. Just to show you very quickly the two other stops that we added to Rivera Court: in one of them, we take visitors through preparing the walls before the artist started painting. You can see how AR is so good for this kind of content, because there's no way people can know the layers that are behind the paint if you don't take them through something like this. Basically, visitors can go through the steps to become pretty much an assistant to the artist: prepare the walls with the plaster and all the layers that go into it, and then transfer the drawings, until they see the full murals as they are now. We do that with one of the walls. And also, as part of this experience, on the Learn More, visitors can learn who the people represented in the murals are, because many of them are

Unknown Speaker 32:59
the many assistants who worked with Diego Rivera. People automatically assume one artist created this, but it was a bunch of people who worked hard to make the pigments and prepare the walls; so many jobs happened there. So we also highlight these invisible people, their names and who they are, as far as we can find information, while at the same time not overwhelming the visitor with too much. We pretty much call that the Rivera "during" experience, covering the construction of the murals. And then the Rivera "after" experience, which is after the murals had already been created, explores some of the symbolism, the iconography of the murals. This is one of our favorite experiences, in which we had an AR virtual statue that suddenly appears for visitors to discover when they click on the figure behind it. Diego Rivera was inspired by a Mesoamerican sculpture of the goddess Coatlicue, and we're pretty much bringing it to life so people can compare how he modified the figure to create a machine, a piece of machinery, for the mural. At the same time, we have Mesoamerican music playing as you see the sculpture, and you can walk around it. In a big space like this, we could do something that size, so it was perfect for the space. So all of the learning that we had after coming up with these three stops, and the rest that we did on the second floor, helped us move forward also with the remediation. And now David is going to dive into more technology stuff.

Unknown Speaker 35:16
Thank you. So I'm going to take just five to ten minutes to go through what it is possible to do in AR in 2019. We are at the end of the decade, yeah, ten years already, and there are a lot of things that we can do, and areas where AR can still make some progress. AR has come a long way in the last four to five years. In 2015, Tango really opened, paved the way to these new AR capabilities. Once again, before that, AR was mainly based on a QR code, on a marker, to display something. Since Google created Tango, we have had Apple launch ARKit in 2017, then Google launched ARCore, then Facebook, Snap, even Amazon, and Microsoft with HoloLens. All those huge companies invested billions of dollars in AR. I would say that today a designer or developer who wants to do something in AR will use maybe 15, 20, 30% of the capabilities that all those companies offer us, because there are so many that it's hard to use them all. So I want to run through the main features and share with you what is possible to do or not today.

The first thing you need when you create an AR experience is plane detection. This is basic: you need to know where the floor is, where the desk is, where all the walls are, to position an object in reality. That, I would say, works really well, except in very dark environments. It's kind of the same with the human eye: if I ask you to place a glass on a table in a dark environment, we might miss it. So dark is never good for AR. We also need texture: this surface is perfect for AR, a lot of motifs, a lot of colors; this one is not great for AR. But I would say it works 95% of the time.

The second thing you need, when we say AR we mean persistent AR like at the DIA: typically, with the mummy that you can see here, the skeleton has to be inside the sarcophagus. Otherwise we defeat the purpose; if the skeleton is beside it, it won't work. What we need to anchor objects is a point cloud solution: basically, something where the device knows where it is by capturing what are called feature points in the surroundings. Like the human eye, you know where you are because you know the door is here, the table is here. We have several options for that: Apple has ARWorldMap, Google has Cloud Anchors, Microsoft has Azure Spatial Anchors, and a couple of other vendors have different solutions. I would say that anchoring an object in 2019 is possible, and it works perfectly at a room-scale or object-scale level, so it would work well here, in this room. In a large venue like the DIA, and we'll talk about it, it's more challenging to have always-on localization. That's the point.

Then another feature that we need for AR is tracking: motion tracking, world tracking, and other types of tracking. World tracking or motion tracking is what you see on the left: when you move with your device, the rendering of the object adapts, which is what you need for it to feel real. That is something that works perfectly now, without any extra sensors; with Tango you needed an infrared sensor and two cameras, and now it works with just a regular phone. Face tracking: ARKit and ARCore offer that option, so you can do cool things with face tracking. We might do something with the DIA at some point, maybe with an Egyptian mask or whatever. Object tracking and image tracking work pretty well too.
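As an illustration of the point-cloud anchoring options just named, here is a hedged sketch using Apple's ARWorldMap, one assumed stack among the several mentioned; the function names and storage location are hypothetical:

```swift
import ARKit

// Sketch: persist the session's feature-point map plus anchors to disk,
// then relocalize a later session against it so content reappears in place.
func saveWorldMap(from session: ARSession, to url: URL) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else { print(error ?? "no map yet"); return }
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                        requiringSecureCoding: true) {
            try? data.write(to: url)
        }
    }
}

func restoreWorldMap(into session: ARSession, from url: URL) {
    guard let data = try? Data(contentsOf: url),
          let map = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                            from: data) else { return }
    let config = ARWorldTrackingConfiguration()
    config.planeDetection = [.horizontal, .vertical] // floors and gallery walls
    config.initialWorldMap = map // relocalize against the stored point cloud
    session.run(config, options: [.resetTracking, .removeExistingAnchors])
}
```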
This is a key challenge in AR: occlusion management. You can see with these two pictures what we mean by occlusion. Without people occlusion, you get the image on the left; with people occlusion management, you get the result on the right, which is what you want. What happens with AR is that a digital object always appears in front of the real world, so in this case, if somebody comes between the device and the wall, you get the impression on the left. If you have occlusion management on the device, you get the same impression as the human eye. When we started the project with the DIA, this did not exist, and it's not implemented yet, so you have to think about occlusion, and you have to think about where people could come between you and the actual experience; there are ways to avoid that. But the good news is that in 2019 there has been a lot of progress in occlusion management. One company

Unknown Speaker 40:58
that has made huge progress is called 6D.ai, and they have real-time occlusion management, which is impressive: if I move this glass here, the device will track it and occlude the image, which is pretty impressive. Apple has launched human occlusion; it's kind of AI AR, so it works only with humans, which is what you have the most of in a museum, normally speaking. And there will be a lot of progress in 2020 too. So I would say occlusion was something really new in 2019 that did not exist before.

Then you have real-world effects. Typically these are great, though they consume a lot of CPU power. You can have light adaptation, so the AR experience will adapt to the darkness of the room; it's not bright when the room is dark. That's great. And reflections can also be handled now, which is amazing: what you see here, of course, the sphere is a digital object and the glass is a real one, and you can see the reflection in the digital object. That works only on ARKit, so Apple, for the moment, but ARCore will catch up soon, I guess.

Almost the last point: interaction. We talked about it with Andrea, Megan, and Elisa. You can add almost any type of interaction in AR, directly to the digital object or also with the physical world. That's why you need the device to understand the world: when you tap on a specific point, the device understands what action has to be triggered. The new thing in, I would say, 2018 is that you can have a multi-user experience, so different users can see the same thing. The example you see often is a ping-pong or tennis table where several people share the same AR experience.

And the last thing, this is where I think AR should make progress: always-on localization. At the DIA, it works, I would say, 80% of the time, or 70, 60%. We had a lot of help from Google when we created this experience, because I think it was the first time they really saw Tango used at that scale, and it's still one of the biggest venues using Tango. But believe it or not, since Google deprecated Tango in 2017, there has not been any replacement for that technology yet. We are two years on, and we are working with different vendors, Microsoft being one of them, to try to find a replacement technology for Tango, and it's pretty hard, actually. I'm sure we'll make it in 2020, but always-on localization, so that wherever you are in the museum the device knows where you are, means the device has to capture feature points all the time, in real time. That is a challenge. Typically, the challenge we have at the DIA is that sometimes people just go out to drink a coffee, and what do they do? They take their device and put it down flat, so the device is looking at the table and trying to find a feature point, but there is no feature point, it's just a table. So it drains the battery, and then they finish their coffee, they go back, and Lumin doesn't work, because the device is totally lost. That's the kind of detail we have to think about; it's real life in the museum: people stop, people chat, people wave their device around or put it in a pocket. The real enemy of AR is the unplanned motion and movement of the visitors.
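To ground the occlusion and lighting features described here, a minimal sketch using ARKit 3's people occlusion and light estimation (an assumed stack for illustration; the capability check matters because these features need recent hardware):

```swift
import ARKit

// Sketch: a world-tracking configuration with the 2019-era niceties
// discussed above, guarded by a runtime capability check.
func makeGalleryConfiguration() -> ARWorldTrackingConfiguration {
    let config = ARWorldTrackingConfiguration()

    // People occlusion: visitors stepping between the device and the digital
    // object are composited in front of it, matching what the eye expects.
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        config.frameSemantics.insert(.personSegmentationWithDepth)
    }

    // Light estimation: virtual content dims to match a dark gallery.
    config.isLightEstimationEnabled = true

    // Environment texturing approximates reflections on shiny virtual objects.
    config.environmentTexturing = .automatic
    return config
}
```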
And the photo on the right is taken from Google. Google has something that you can experience today in some cities in the US: AR in Google Maps, using VPS, the Visual Positioning Service. When you are using Google Maps, you can see AR panels that show you where to go in the space. It's based on the same kind of point cloud technology, but also on Street View and AI. So

Unknown Speaker 45:48
a lot of companies are working on making it possible for the device to know where it is at every moment in the space, indoors and outdoors, but this is the next frontier for AR. So that's a quick overview of what's possible today. A lot of things are possible, but it's always the same question: what do you want to do with AR? It's just a tool, a great tool, but at the end of the day it's a tool to serve a purpose, and that's the most difficult thing to solve.

Unknown Speaker 46:21
Thank you so much, David. Okay, I have the clicker, wherever it is, for the next part. We are running completely out of time, so what we're going to do is go through a really fast retrospective. I'm going to ask my panelists to spend literally about 10 or 15 seconds on each slide, ignite style. We have a lot of learnings to go through, and I want to make sure that we get at least a couple of questions. So

Unknown Speaker 46:51
there we go. Okay. Is it me? Elisa, yeah.

Unknown Speaker 46:55
Okay, well, I'll be clicking, but I will be clicking away from you.

Unknown Speaker 47:02
So okay: evaluation. Just a quick recap: talk to people. If you don't have an evaluator, it's important to be on the floor; talk to people, observe people, see how they are using the device and how they're experiencing your product. And I have

Unknown Speaker 47:21
to change. Sorry, we're only gonna spend...

Unknown Speaker 47:26
Oh, it's me. Okay. So: your audience might not be who you expect. This was my favorite finding of the whole thing. We had an average user age of something like 47 years old, which was really, really interesting, and the big takeaway was that age isn't really a barrier to tech. I think older adults have such self-conscious feelings about technology and being able to use it, but inaccessible design is the real problem.

Unknown Speaker 47:53
AR can be absolutely exhausting, so do keep this in mind and focus on simplifying the interface and listening to natural human behavior. You can create incredible experiences, but they need to be meaningful.

Unknown Speaker 48:08
Also, less is more: use AR for what AR is best for. Don't default to using it for additional content if it's not a good fit for AR.

Unknown Speaker 48:21
That's what all the curators wanted. They were like, oh my gosh, you can put up a virtual book! No. Okay, oh, this is me; this is another favorite one: you might not need AR. It's just a fact, you might not need it. One of the things about this project is that we tested a lot of options, and a lot of those options told us that we might not need AR in a given case. This is particularly interesting in terms of photographs, because a lot of people might think, oh, well, if you have a photograph, you don't need to make a reproduction in augmented reality, and sometimes the photograph simply works better.

Unknown Speaker 49:03
But if you do jump into AR, you can create incredible experiences, though planning for sustainability is absolutely key. So focus on creating meaningful experiences that can be scalable, reusable, adaptable; technology is changing, so adapt to that. David, consider limitations? I'm just going to go through them: do consider the limitations of AR. Once again, be intentional, using AR where the limitations of reality result in specific, addressable visitor needs.

Unknown Speaker 49:43
Okay, so quickly on this one: yeah, there are still some limitations, and we have been through them. Dark environments: not good. People crossing everywhere in the gallery: not good. So there are still limitations, but that's true of kind of every technology, actually. Considering the limitations right from the get-go is better.

Unknown Speaker 50:08
And even despite those technical difficulties... go ahead, Andrea.

Unknown Speaker 50:13
you can absolutely build meaningful connections with it. I totally encourage you to experiment, to see if this is something that can, once again, respond to a legitimate visitor need. But what is crucial here, and probably the biggest outcome from all of this: be intentional in engaging your audiences to co-create; they will help you make it better.

So, what the heck happened and what are we doing right now? I have less than one minute to tell you. The project has been a three-year experiment so far. We've gotten incredible data out of it; we've gotten to know how human behavior happens around augmented reality; and it's been an absolute pleasure to collaborate with GuidiGO. I think we've been able to collaborate in the learning, too: there are some things we've learned from GuidiGO, and some things GuidiGO has learned from us, and we are evolving together. As for next steps, as we mentioned, there are some limitations with the current technology that we've been using, the specific persistence technology, and we're adapting to them. We recognize that it might take a few months to get to where it's going to be a satisfactory, normal experience. In the meantime, we're focusing on reducing the moments of confusion, exhaustion, or, all in all, dissatisfaction and frustration. One way to do that was to reduce the number of steps and target the areas where we know the technology works best, and that has really helped improve the experience so far. So limitations in technology do not mean that your project needs to end.

I hope that this has been a helpful conversation. We were able to write a chapter on this a few months back, so do check it out if you haven't heard about it: Humanizing the Digital is a publication that a number of colleagues in MCN released in the past few months. While you can buy a hard copy on Amazon, you can also access it for free on the MCN website, so go in there; there are all kinds of essays that are definitely worth exploring. And if you want to get in touch with us, do keep our contact information; we are more than happy to continue developing these ideas, and we love talking about how augmented reality is changing the world, in a good way, when you are meaningful with it. So we are able to take, I'm going to say, a few questions, but we do want to leave a couple of minutes before the next session. Does anybody have a question? Yes, and do project, please.

Unknown Speaker 52:43
Thank you for that. I was wondering if you could speak a little bit more to how people were using it. I saw the blue dots; were people kind of sucked into the screens, using it to navigate the whole time, or was it more of a getting-to-a-place thing, where I might put it down and pick it up again? What was that behavior around navigating like?

Unknown Speaker 53:03
That's a really excellent question, and we would have gotten into it; we had quite a few notes on each one of our findings. I think we learned that visitors were using it a little bit differently than we expected, but in a way it was super helpful for taking them from point A to point B. At the beginning of the project the technology was a little bit different, and it was a very unfamiliar behavior for users. Now it's evolving, because once again we're not dealing with the same type of users that we were a few years back. Megan, I know that you might want to talk a little bit about the realization of... yeah.

Unknown Speaker 53:39
Yeah. So the wayfinding actually made people very aware of how much they were missing, which was really interesting, because if you ask visitors in general what they need, they'll be like, oh, I want a tour that takes me from place to place to place. But once they're on a path, they start to realize everything else that's around them. We didn't necessarily see that as a negative finding; we actually liked it. This was an opportunity for visitors to focus, but it also allowed them to realize how much more there was at the museum.

Unknown Speaker 54:15
Great project, thank you for sharing. I'm just curious: I know that often when we think about AR and VR, the focus is on the visual. I'm curious if you're thinking about experimenting more with audio and haptics. The device itself can often be a distraction if somebody gets so locked into it, but I'm also thinking about accessibility and inclusion, at the expense of just...

Unknown Speaker 54:49
Did you want to cover the Kosmos stop, which would probably be a good example of that, or David?

Unknown Speaker 54:56
I can talk really briefly about the 3D and binaural audio experience. We tried it; basically, it's super complicated to design. Just as you can place objects, we can also place 3D audio sources in the space, so it makes sense. But we found it was really difficult to design the experience and to make our partner at the museum aware of what's possible or not, and there were a couple of projects we had to abandon because the result was not what we, or they, were expecting; it was really hard to imagine six different audio sources in a room, for example. So the technology is here, but I would say, for us at least, the challenge is how you design a 3D, spatial sound experience.
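For a sense of the mechanics, a minimal sketch of positional 3D audio using SceneKit (an assumed stack for illustration; Lumin's actual audio pipeline isn't shown in the talk):

```swift
import SceneKit

// Sketch: attach a positional source to a node so the sound appears to come
// from the object's location in the room, with distance attenuation and panning.
func attachPositionalAudio(named filename: String, to node: SCNNode) {
    guard let source = SCNAudioSource(fileNamed: filename) else { return }
    source.isPositional = true // note: mono files are required for 3D rendering
    source.loops = true
    source.load()
    node.addAudioPlayer(SCNAudioPlayer(source: source))
}
```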

Unknown Speaker 55:45
One stop where it worked really well for the DIA is called the Kosmos stop: you can collect sounds related to an object, because sounds were very important in correlation to that object, and I'd say it was effective there. So there are definitely interesting ways to use it, and also accessibility checkpoints for the future; there are some exciting experiments happening around helping visually impaired visitors move around the galleries.

Unknown Speaker 56:22
Yes, so the device management and distribution to visitors has been a challenge. You know, our frontline staff are great, they're very patient, but they have been taking the brunt of the technical difficulties, which has caused a lot of frustration around distributing the device. That was one of the big reasons why we reduced the stops and tried to find places where localization wouldn't drop out: to improve not only the visitor experience but also the experience of our staff. It's really embarrassing to have a visitor come back with a device that isn't working, and then to give them another one, and then they come back and it isn't working either, you know. So it has been a process, and we're still working very closely with our frontline staff to keep moving this forward. We have become much better at involving them in the planning for distribution: we've done surveys with them, we've done group chats with them, and we've really collected their feedback to figure out how to make it a good experience for them too, so they can have fun.

Unknown Speaker 57:21
And to add to that, for the content creation we have been absolutely engaging with the frontline staff, because at the end of the day, they're the people doing the engaging. I don't know if that's quite where the question was going, but as a sidetrack I did want to mention that. We could probably take one more question, and then we

Unknown Speaker 57:43
can. Yes, 100%. So up to now, and the statistics might be evolving, we cannot assume that people have a $600 phone in order to experience our museum. We had a grant to support that, so we were able to offer this for free. I definitely encourage you to consider the distribution aspect as an accessibility checkpoint as well. We are considering it not only in terms of whether people have to download something or need to use Bluetooth, but also from the frontline staff side. I think we're getting to a way better position; our frontline staff is very, very engaged with us. So yeah, hopefully that answers your question.

Unknown Speaker 58:30
And the other reason was because there weren't devices at that time that could read the world the way the devices that we use did.

Unknown Speaker 58:38
And, I mean, maybe David could even talk about that, but I know that devices are evolving as well. We have one more question back there, real quickly. David, I mean, the device?

Unknown Speaker 58:52
I would say, if you take the 14 stops... sorry: half a gigabyte. Yeah,

Unknown Speaker 58:59
thank you, everybody.