Unknown Speaker 00:00
Hello, welcome to our talk today, the last session of MCN. We're so glad you're here with us. My name is Becca Shreckengast, and I'm Director of Exhibitions at the Carnegie Museum of Natural History. These are my creative colleagues, David Lerman from GuidiGO and Robin White Owen from MediaCombo. Our talk today is We Are Nature: How a Physical Exhibition Became a Virtual Experience. So I want to tell you a bit about We Are Nature. It was open from 2017 to 2018, for about a year, and it was a flagship exhibition for us. The Carnegie Museum of Natural History is changing the way we approach interpreting natural history and human history, from treating them as very separate to treating them as interconnected. The gallery it occupied is funded in a way that gives us money to turn over exhibitions, so we knew this exhibition could only be open for one year. What we were trying to do was invite visitors in, give them resources about the evidence of environmental change, and at the same time motivate and empower them in their experience of that content. The exhibition was about two weeks away from coming down. David was on site, because we were working on an augmented reality project together, and as we walked through We Are Nature I was lamenting: oh, it stinks when exhibitions are temporary, because we put a lot of effort into it. It was testing very well with visitors, we had our educational programs honed, we were cruising. And he said, you know, I have a colleague who could probably scan this for us. I don't know what we'll do with it, but we could scan it. It seemed like a great idea, even though we didn't know exactly how we would apply it. So we managed to get the contract scope changed in time to invite Robin and her group to come to the museum and scan the exhibition two days before we took it down. So this is just a film.
Sorry, this is a film. We used a lot of the assets we created for the exhibition, so we already had labels and content and everything, but we had to create some new pieces so that it had the right context in a virtual environment. This is just to give you a sense of the content of the show; it's something MediaCombo produced for us. The sound, yeah, sounds nice. Thanks for your patience.
Unknown Speaker 03:12
Welcome to We Are Nature: Living in the Anthropocene, an exhibition at the Carnegie Museum of Natural History in Pittsburgh.
Unknown Speaker 03:24
What the heck is the Anthropocene? The Anthropocene is a newly proposed epoch, or geological time period, defined by humans' effects on the environment. What makes the Anthropocene especially powerful is not only the amount of change, but the condensed period of time in which these changes are taking place. Changes that normally take thousands or even millions of years to manifest are now happening in centuries. We are not separate from nature; we are nature, and our decisions affect all life on Earth. Welcome to the Anthropocene. Sorry.
Unknown Speaker 04:30
So this is the web version. We created two versions of this virtual experience. The first is a web-accessible version, which we weren't sure we were going to make when we scanned it, and then also an entirely immersive VR experience on the Oculus. There are a couple of different ways you can start the experience. One is to freely explore the gallery space and see the exhibits in detail, and the other is to open a guided tour. For that guided tour we used some — okay, sorry, it'll play, I don't need to go over here.
Unknown Speaker 05:34
So let me rewind a little bit. There are a couple of different ways you can enter the exhibition. Beyond the free-explore mode, the guided tour had an audio companion, so if you aren't able to see the exhibition, there was an accessible audio companion to the show, which we can put on our website and at the museum. But here's the map version, where you see the major theme areas of the exhibition. We also had a list view, where you can see photographs and eye-level views — different ways to get into the content and decide where you want to navigate. It's a little faster here than it was. What happens is, when you select where you're going to go — it's a little slower than this — you zoom in, you land, and an audio clip starts. We hired a voice actor to read and record all the labels. So the clip starts for that area and you can hear it, but you're still able to free-explore; you're not stuck looking at one thing at this point, but you're hearing it. We wanted that audio asset to be part of the experience. We have lots of assets — we have specimens, we had labels — so we were able to lay these in and make them available, and they exist in three-dimensional space. We also had a lot of media in the exhibition, and the scope of our contract was pretty limited; it wasn't a very expensive project. There were some media experiences that tested really positively and had a bigger impact, so we decided to recreate those as interactives in the virtual space. For the other ones we already had the assets, so we video-recorded them and laid the video into those places. You can see that here.
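The "land and hear the label" behavior described above could be sketched roughly like this. This is a hypothetical illustration, not the project's actual code: each theme area keeps its recorded narration, and arriving at an area starts its clip once without locking the user in place.

```typescript
// Sketch of per-area narration triggering. Names (ThemeArea,
// TourNarration, onArrive) are illustrative assumptions; the real
// project was built in Unity, not in web code like this.

interface ThemeArea {
  id: string;
  audioUrl: string; // voice-actor recording of that area's labels
}

class TourNarration {
  private currentAreaId: string | null = null;
  public played: string[] = []; // playback log (stand-in for real audio output)

  /** Called when navigation lands the user in a theme area. The clip
   *  plays once per arrival; moving around inside the same area does
   *  not restart it, so free exploration keeps working. */
  onArrive(area: ThemeArea): void {
    if (this.currentAreaId === area.id) return;
    this.currentAreaId = area.id;
    this.played.push(area.audioUrl);
  }
}
```

The point of the `currentAreaId` check is exactly what the talk describes: the narration frames the area you just entered, but you're never "stuck looking at something" while it plays.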
Unknown Speaker 07:46
…three-story building, Building 19…

Unknown Speaker 07:49
This is a way to interpret the things we had. But here's an interactive piece about how you feel about anthropogenic change. And here —
Unknown Speaker 08:00
Yes, no, we're not sure.
Unknown Speaker 08:03
And that allows you to do a kind of polling activity within the experience itself. So this is the Oculus version of the same experience. It's much more immersive — we were able to use the same assets for it, but it's a little more dramatic than the web version.
Unknown Speaker 08:44
So why did we do it? What was the purpose of the work? Well, first, we wanted to honor what we had created, because exhibits are so complex to make. A lot of passion goes into the research, design, and physical production. Even though the exhibition was only able to be open for one year, the VR is available for as long as we want, so we can extend the life of that moment. Low environmental impact was a pillar of the physical exhibition design. We reused just about everything we had in there — we painted a bunch of pallets black, used old risers, and painted everything. But even then, when it closed, most of that exhibition went into the trash or back into storage cabinets. So even though the exhibition itself was not sustained, the VR is living on. From a sustainability standpoint, it works. One of the things I'm finding very useful is that it's available for reference now — it's an archive. When you take exhibitions down, the specimens all go back into their cubbies and holes, your label copy gets filed away on the server, and it never comes together again. It can never be whole; it only remains in your memory. This is a way for it to be archived in its wholeness. The other idea is that we're extending beyond the walls of the museum, because the building, as a physical space, has limited reach. We want to share our story fluidly — through something easy, like email, or through the Oculus app. Visitors and colleagues all over the world are given a portal to our physical space and to our ideas, so access is big. And because we developed so much educational programming around the We Are Nature exhibition, and we're still using it in our school interactions, this was something we could use to extend the experience our school groups and education teams were having at the museum into pre- and post-visit activities.
And now, David and Robin can talk some about that.
Unknown Speaker 11:16
So, on the next slide — now we're going to take you through the production flow. The first thing we had to do was capture the experience. We knew when we started, as Becca said — well, I knew, anyway — that we wanted to have both a virtual headset experience and a web-based experience. We also figured out that we would want to give people the opportunity to explore all the way through on their own, and also maybe give them a highlights tour. So that was how we structured the way we were going to work. The first thing we did was organize the scanning and photogrammetry of the exhibition space, which was about 7,000 square feet. It took about four hours to capture the space. We used a device that allowed us to create 2D, high-res 360° panoramic photos, which were ultimately stitched together to create the environment through which people can move. And basically, the way they move is pretty much the way you move through Google Street View — jumping from point to point — which works both on the web and in the headset. Every time you stop, you can look around a full 360 degrees and there's something to see, so it really is an immersive experience, and it works in a headset. At every stop there's audio and/or video and/or labels, and occasionally something interactive — there's always something to help you better understand what you're looking at. Because I think, especially with natural history exhibits, when you're confronted with a stuffed wolf, you want to know why you're looking at that wolf. So we knew right from the start that interpretation was going to be an important component.
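The Street View-style, point-to-point movement described here can be sketched as follows. This is a minimal illustration under assumed names (`nextPanorama`, a flat `Vec3` type); the actual project implemented this in Unity, and the geometry details here are guesses about the general technique, not the team's code.

```typescript
// Hypothetical sketch of hop-to-the-next-capture-point movement:
// given the user's position and gaze direction, pick the nearest
// panorama capture center that lies roughly "ahead" of them.

type Vec3 = { x: number; y: number; z: number };

const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;
const len = (a: Vec3): number => Math.sqrt(dot(a, a));

/** Index of the closest capture center within ~60° of the gaze
 *  direction, or -1 if nothing lies ahead. */
function nextPanorama(pos: Vec3, gaze: Vec3, centers: Vec3[]): number {
  let best = -1;
  let bestDist = Infinity;
  for (let i = 0; i < centers.length; i++) {
    const toCenter = sub(centers[i], pos);
    const d = len(toCenter);
    if (d === 0) continue; // already standing at this center
    const cosAngle = dot(toCenter, gaze) / (d * len(gaze));
    if (cosAngle < 0.5) continue; // outside the forward-facing cone
    if (d < bestDist) {
      bestDist = d;
      best = i;
    }
  }
  return best;
}
```

Because movement only ever targets capture centers, the view at every stop is a real photographed panorama rather than an interpolated image — the same trade-off Street View makes.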
But as Becca mentioned, this wasn't a big-budget project, and we knew from the beginning that rather than write a whole script to guide people through, we were going to work with the labels that existed. And that's what we did, because it's just easier to listen to something than to read something while you're looking at something else. Especially in VR, audio is a really good way to direct people's attention — although we didn't actually shoot, document, or produce any Ambisonic sound for this. Anyway, as you saw a little earlier in the short movie, when you arrive at a location, if you see the audio icon you can tap on it, either with the controller or on your screen, and it plays. All the label copy was adapted slightly for voice recording, because you tend to speak differently than you write, but the labels are all very short and pretty much to the point. And rather than simply recording a voice speaking these things, we tried to animate that experience a little by adding room tone and sound effects. Room tone is very subtle — it just makes you feel like you're in a space — which I think was important here. The sound effects we added went along with the images and the things the narrator was talking about: the sound of rain, the sound of birds, and sometimes man-made sources like traffic or heavy machinery being operated, just to emphasize the significance of the message and create a more sensory experience.
The one thing we didn't plan on was the experience people would have when they first arrived at the opening of the exhibition. When you walk into a physical exhibition space, there's introductory text, and most people have no problem reading it and find it useful for framing whatever they're about to see. But in this experience, it's really — what's the word — not pleasant to just be staring at a wall trying to read text. And we thought the introductory text was really important for people to understand why they're in this space and what they might get out of it.
Unknown Speaker 16:04
So we did something really simple, which was to record the text of the introductory label and essentially put pictures up with it, to create that little introductory video you saw. It helps people visualize what the words are saying without having to read them — it was just a better introductory experience. We understood that if people skipped the introduction, they would really miss the point of what they were doing, because it's the connective tissue for everything you see after that. So yeah, we definitely were able to meet the goals that were set out. As I said, this version will be available in the Oculus Go store in about three weeks, but meanwhile it's available on the web right now, so it serves multiple audiences and multiple purposes. And because we're running a little out of time, I'm just going to hand it over to David.
Unknown Speaker 17:07
I'm going to go quickly through how we built it, based on the capture and creation that MediaCombo did. What MediaCombo gave us is the 3D model you can see on the left. It's a real photogrammetry 3D model, but not a high-res one, because that would be too heavy to handle in Unity, in WebGL, or in VR. So it's a combination — and that's the magic of it — a combination of a real 3D model and 150 panoramas. You may see those bubbles on the right; that's Unity. The 150 panoramas were precisely centered inside Unity, inside the 3D model. You can also see a small camera at the bottom right of the screen, with a label — that's how we positioned the content inside the 3D model. So you have the best of both worlds: you have a 3D model, so we can make it as interactive as we want, but you have high-res panoramas, so when you're inside the Oculus version or the web version, you can read the labels. That's why we didn't have to put a magnifier on every label — you could read the actual labels of the exhibition, which you couldn't do in pure photogrammetry. Then the third point was how you move in the exhibition. We all know about Street View, so we copied what works pretty well, going in that direction with the example of the video. You had three options to move. One is teleportation: you see the exhibition from above, you click, and you are teleported to any part of the exhibition. The second one was a list. And the third is that you can move with the controller — your "fishing rod," or whatever you call it — from one point to another; actually, from one panorama center to another. You can't move a meter or a foot away — you have to jump from one center to the next.
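The "best of both worlds" idea — a lightweight 3D model for interaction, with the high-res image always coming from the nearest panorama's capture center — could be sketched like this. Everything here (`Panorama`, `Hotspot`, `activePanorama`) is an illustrative assumption; the project's real implementation lives in Unity.

```typescript
// Sketch of the hybrid rendering approach: hotspots (labels, audio,
// video) are anchored at fixed coordinates in the low-res 3D model,
// while the image the visitor actually sees is whichever high-res
// panorama was captured closest to the camera.

type Point3 = [number, number, number];

interface Hotspot {
  id: string;
  position: Point3;
  kind: "label" | "audio" | "video";
}

interface Panorama {
  id: string;
  center: Point3;     // capture position inside the 3D model
  textureUrl: string; // high-res 360° image
}

const dist2 = (a: Point3, b: Point3): number =>
  (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 + (a[2] - b[2]) ** 2;

/** The panorama rendered at any moment is simply the one whose
 *  capture center is nearest to the current camera position. */
function activePanorama(camera: Point3, panos: Panorama[]): Panorama {
  return panos.reduce((best, p) =>
    dist2(p.center, camera) < dist2(best.center, camera) ? p : best
  );
}
```

Because label text comes from the panorama photo rather than the low-res geometry, it stays sharp enough to read in the headset — which is why no magnifier UI was needed.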
But because there are so many panoramas, it feels like you're actually walking. We only had to fine-tune the pace and the acceleration, because if it's too fast you can get motion sickness inside the VR headset. I think we got it right: it's not too slow, so you believe you're moving, and not too fast, so you don't get sick. The fourth point was how we display information. We wanted the information to be visible, but not too visible, because otherwise it would get in the way of the experience. So hotspots appeared only when needed and, as I was saying, no magnifier. Three kinds of content: text, video, as you have seen, and audio. At the beginning we had this big audio player, and it was really in your face. At some point we realized we didn't need fast-forward or play/pause; we just needed play/stop, because the content was short enough. So we simplified the controls, and now they're really subtle and not obvious — I mean, not permanent.
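The pacing tuning described here — fast enough to feel like movement, slow enough to avoid motion sickness — might look roughly like the following one-dimensional sketch (position measured along the path between two centers). The constants and names are illustrative guesses, not the project's actual values.

```typescript
// Hypothetical sketch of comfort-tuned hops between panorama centers:
// smoothstep easing gives zero velocity at both ends of the hop, and
// the duration is derived from a capped comfortable speed.

const MAX_SPEED = 3.0;    // meters per second; illustrative comfort cap
const MIN_DURATION = 0.4; // seconds, so very short hops still ease in/out

function transitionDuration(distanceMeters: number): number {
  return Math.max(MIN_DURATION, distanceMeters / MAX_SPEED);
}

/** Smoothstep easing: starts and ends with zero velocity, which feels
 *  gentler in a headset than a constant-speed slide. */
function ease(t: number): number {
  const x = Math.min(1, Math.max(0, t));
  return x * x * (3 - 2 * x);
}

/** Position at time t (seconds) into a hop from a to b along the path. */
function positionAt(a: number, b: number, t: number): number {
  const duration = transitionDuration(Math.abs(b - a));
  return a + (b - a) * ease(t / duration);
}
```

Capping the speed (rather than fixing the duration) means long hops across the gallery take proportionally longer, so the apparent velocity — the thing that actually triggers motion sickness — never exceeds the comfort limit.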
Unknown Speaker 20:52
And the last part: interaction. Because we have the 3D model, we can basically do anything we want — any kind of interaction, like in a video game. I think that's the key takeaway of this project: what you have seen here is just the beginning. If tomorrow the team wants to do another room — a totally virtual room — we can add it to the model. If we want to do a quiz, we can do it. We have the foundation to build anything we want based on reality and digital experience. What we did as a start is this simple experience with visitors: they could choose what emotion the exhibition inspired in them, and we would randomly show them three posters that were actually written during the exhibition. So it was part digital, part real, with handwritten notes — that was pretty cool. And the poll: in the actual exhibition it was a cork board right there, so you wouldn't have exact figures, but you could get a sense of who answered what. Here in VR, we reproduced it with an actual digital poll, which is pretty neat, I think. We have five minutes for any questions. Do you have any?
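The digital version of the cork-board poll could be as simple as the sketch below — visitors pick an answer, and the display shows proportions rather than exact figures, mirroring the "sense of who answered what" the physical board gave. Class and method names here are assumptions for illustration.

```typescript
// Minimal sketch of the in-VR poll: three answers, tallied, and
// reported as rounded percentage shares for the display.

type Answer = "yes" | "no" | "not sure";

class Poll {
  private counts: Record<Answer, number> = { yes: 0, no: 0, "not sure": 0 };

  vote(answer: Answer): void {
    this.counts[answer] += 1;
  }

  /** Share of each answer as a whole percentage, so the UI conveys
   *  a rough sense of the split rather than raw counts. */
  shares(): Record<Answer, number> {
    const total = this.counts.yes + this.counts.no + this.counts["not sure"];
    const pct = (n: number): number =>
      total === 0 ? 0 : Math.round((100 * n) / total);
    return {
      yes: pct(this.counts.yes),
      no: pct(this.counts.no),
      "not sure": pct(this.counts["not sure"]),
    };
  }
}
```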
Unknown Speaker 22:34
So right now you're using it as a way of preserving an exhibit that has already happened. In the future, do you foresee creating something like this to exist concurrently with an exhibit, so that people in the physical exhibit can see how digital users are interacting with it?
Unknown Speaker 22:52
For this exhibit? Yeah, we're sharing it around a lot by email — that's probably the most fluid way it's making its way around currently. But we do think about a long-term play for this. I was just talking to some colleagues about connecting this exhibition to other exhibitions on the Anthropocene — you could really create a virtual museum of the Anthropocene by scanning exhibitions and creating these experiences. I don't think I answered your question about doing this again at the same time as a physical show. One of the things we learned was that having it be so back-loaded really forced us to think and act quickly. If we had started earlier, and it had been part of our planning process, we would have thought more about the story and how we might have composed it specifically for this platform. But from my perspective, if we're creating these physical things, and you can go in and scan for a few hours and keep all that information, it seems silly to do any temporary show that just goes in the bin — and they really do go in the bin — when this is a way for us to access it later. And the technology at this point is pretty good at capturing high-res imagery. Some of the things we learned about scanning: lighting is tricky. Maybe you'd do it several times — our lighting was blasted bright, and we couldn't really change that or turn it down in the model. If we had spent more time with it, we might have made things a little more dramatic and jewel-like. Also, our ceiling is just as detailed as everything else in the show, so we might have toned down some of those things, the way you would in a 3D virtual model you're creating to preview an exhibition. But for us — exhibitions are expensive to make, for one, and they're very time-consuming to make.
So if you're budgeting, add in — I don't know — 2% of the cost, or less, just to scan it. If nothing else, it's an accessible archive of the show. And if somebody comes in and goes, oh, I loved that — you know, I love Monet, I want you to do another exhibition, I'll give you a million dollars — you can share it. From a fundraising perspective, you're able to share your entire catalogue of things, not just what's open at the museum at any given time, and they're all there in their entirety. So that's exciting.
Unknown Speaker 25:44
I have a question about something you mentioned: keeping this as an archival thing. What does archival preservation look like for this VR?
Unknown Speaker 25:52
Yeah. I think it's TBD right now. Online, it could have a long life sitting there. The Oculus Go — I think that technology will probably age and change faster.
Unknown Speaker 26:12
But the platform, which is Unity — or sometimes people use Unreal — is the basis for almost all virtual reality projects, and it's going to be around for quite a while. Obviously software evolves, like everything else, so it's really hard to predict how long it will last. But as a method for providing the experience of what it was like to be in that exhibition — even if all you do is make 360° spatial captures and stitch them together into an immersive experience — it's much more compelling than looking at a bunch of stills, or even reading a catalogue. It really gives you a better sense of what it was like to be in the exhibition, without taking all those assets, putting them into Unity, doing CGI, and totally recreating everything — all the things you could do to really make you feel like you're in the actual museum galleries. You don't need to go that far to still have a pretty good experience, especially if you include the label content — the interpretation. Audio and video really help a lot. If you don't want to walk people through the exhibition yourself, then you need to have some sort of interpretation. Questions?
Unknown Speaker 27:53
So have you done projects like this before? And if you have, were there any unexpected challenges you encountered in this one?
Unknown Speaker 28:02
This was the first one we did, and I think we learned quite a few things — we tried to learn on the fly and improve what we could within the timeframe we had. Some things have to do with budget, and it depends on how much time and effort you want to put into it. But usually that takes advance planning. So instead of waiting until the exhibition is about to come down, say, okay, we have this exhibition coming up, whenever it is, and — as Becca was suggesting — build the kind of thing you want to do into the budget.
Unknown Speaker 28:38
We do a lot of AR projects, so it's based on Unity each time. I think the tricky part here was to optimize all the content so it loads quickly — when you walk, you don't have to wait. Because there is a lot of it: 150 panoramas plus the big 3D model. It has to be smooth, and that was the main challenge. Other than that, it's a pretty simple VR project. Once everything is set up, you have to set the right markers to get the content, and there was a little bit of UI work too. But the experience is not super complex, so it was really mainly optimization and adjustment.
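One common way to achieve that "you don't have to wait when you walk" goal — loading panoramas near the user instead of all 150 up front — is a small least-recently-used cache with neighbor prefetching. The sketch below is an assumption about the general technique, not the project's actual implementation.

```typescript
// Hypothetical sketch of panorama texture caching: keep the current
// panorama plus its neighbors warm, evicting the least recently used
// entry when the cache is full. A string stands in for a loaded texture.

class PanoramaCache {
  private cache = new Map<string, string>(); // id -> loaded texture

  constructor(
    private capacity: number,
    private load: (id: string) => string // stand-in for an async texture load
  ) {}

  get(id: string): string {
    const hit = this.cache.get(id);
    if (hit !== undefined) {
      // Refresh recency: Map iteration follows insertion order.
      this.cache.delete(id);
      this.cache.set(id, hit);
      return hit;
    }
    const texture = this.load(id);
    this.cache.set(id, texture);
    if (this.cache.size > this.capacity) {
      // Evict the least recently used entry (first key in the Map).
      const oldest = this.cache.keys().next().value as string;
      this.cache.delete(oldest);
    }
    return texture;
  }

  has(id: string): boolean {
    return this.cache.has(id);
  }

  /** Warm the cache with the panoramas adjacent to the user, so the
   *  next hop renders without a loading pause. */
  prefetch(neighborIds: string[]): void {
    for (const id of neighborIds) this.get(id);
  }
}
```

Prefetching only the reachable neighbors keeps memory bounded while making every individual hop feel instantaneous, which matters even more in a headset than on the web.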
Unknown Speaker 29:22
And the VR headset is here if anybody wants to see it — check it out. It'll be on Oculus in a couple of weeks. The Go is a $200 headset, so as headsets go, it's not expensive. Well, thank you.
Unknown Speaker 29:43
Thank you so much for coming