Chris Barr 00:00
I'm Chris Barr, I'm the director of Arts and Technology Innovation at an organization called the Knight Foundation. It's a private philanthropy focused on a few things: journalism, communities, and the arts. Within our arts program I run all of our art and technology work, and a bunch of that has to do with museums. What we want to get into today is a program we ran, was it a year ago, or longer? Around prototyping. To give you some ancient history: I started at Knight seven years ago, and I was brought in to run something called the Knight Prototype Fund. At the time, that particular funding opportunity was coming out of our journalism and media innovation program, really addressing the crisis that was happening, and is still happening, within journalism around the transition to digital: what does digital publishing look like, and how do we need to change the practice of journalism? Through my work at the foundation I did a number of technology and innovation projects on the journalism team, but also around civic technology and arts and culture. A couple of years ago we were thinking at the foundation: how do we spur innovation within the field of arts, and specifically with the large cultural institutions that we work with, largely in eight communities but now nationally? And someone said, hey, Chris has an MFA, he knows the technology stuff, maybe he should run some of that. Out of that was born a couple of rounds of prototyping in the same spirit as what we were doing in the journalism field, but aimed more at cultural institutions, largely museums. And one of the reasons that I think prototyping is important,
and this kind of experimental work in the field is important in general, is because when we focus on technology, we tend to focus on the results of innovation; we focus on the nouns rather than the verbs. As I've looked at organizations and institutions, I've been thinking a lot more about how we enable institutions to build the muscles for repeatable innovation within their walls. How do we learn the processes of not just having a one-off great idea, but having systems within our walls to come to good ideas reliably, and to move them through rounds of testing, verification, and validation? The Knight Prototype Fund focuses on these kinds of questions. It asks: what sort of idea do you have about how technology might impact audiences of the future, and what can we do to verify and validate the assumptions within it? If you look at the kind of innovation organizations that get cited, like IDEO, we're looking at a few things. One: is it technically feasible? Often that's the easiest one to answer. Yes, we can probably make it; the tech is probably the easiest part, though these folks will tell you differently. Two, desirability: if we make this thing, do people actually want to use it? Often that question gets answered too late in the game. Usually we wait until after we've made the thing to ask whether people want it, and this is why prototyping becomes really important. How can we get to the answer of whether people might want the thing we want to make before we spend all the time, money, and resources actually making it? And how can we quickly figure out the better thing to make if we are wrong in our assumptions about the needs and desires of the people we serve? And then finally, the third being
Chris Barr 04:47
around viability. Sure, we can make it; sure, people want it. But can we sustain it? Is there a business model behind it? Because if you get a yes on the first two answers and a no on the third, then we have some problems: we're creating value, but we're not finding ways to recoup the value that we're putting into the world to sustain our institutions. So these are some of the things behind the spirit of the Prototype Fund. Folks from these three organizations have worked on projects in the last year as part of that program. The program itself centers pretty heavily on human-centered design; we bring folks in and do some training. Actually, one of our trainers was Dana. I am not Dana; she was going to moderate this panel and something else came up. We think about those assumptions really heavily. We think about what we are testing, what we are trying to learn through the act of making. What can we learn by doing? That's really the spirit of the Prototype Fund, and the spirit of these projects. So what we're going to do today: these are three of, I think, a dozen projects we had in this cohort of grants. And because we only have 30 minutes in here, and I've already talked too long, we're going to do a quick show and tell of what folks made through this very fast grant process. We also asked people to work much faster than a typical museum timeline, because that's what technology often demands. And then we'll do some questions and answers. So thanks.
Shane Richie 06:39
Ah, the Bob Barker mic; makes me feel special. So my name is Shane Richie, I'm the creative director of experimentation and development at Crystal Bridges Museum of American Art in Bentonville, Arkansas. I'm going to give a very, very quick summary of what this project was and how we went about it. Our evaluation report is available at the link on the screen right now; it will come back up later, and if anybody wants to learn more about it, go check it out or come find me. I'd love to ramble about it for a while. So this project started out of a frustration with screens being an imperfect delivery platform for interactive content. If the content you're delivering is visual in nature, then a screen is great; but if you need to deliver information or content that's not visual, then the screen shouldn't necessarily be the default. So this project started as a combination of a couple of things. First of all: how can we deliver audio in the gallery without any use of a screen at all? How do we build the interface? The second question behind that was: how would it change the gallery experience if we could create an immersive, directional soundscape with that interpretive content? And then a much later addition, which will become clearer in a bit, was the idea that if we're providing directional audio in the gallery, it has a lot of potential to be used as an accessibility tool in the galleries. So this idea of a non-screen interface really led us to thinking about the gallery itself as the interface for the way you interact with and browse art. If the gallery is designed well, then a visitor walks in and does a quick scan of the space.
They're quickly told which direction the designer or the curator wants them to go, and they go that way; or they browse and wait for something that catches their eye, and they move in toward it. So we started thinking: how can we use that behavior the visitor already brings to deliver the audio in a way that really makes sense? We had this idea to use augmented reality to accomplish this, but instead of delivering graphics or visuals anchored in space, we would use audio anchored in space. For this to work, we really had to create the effect that the audio was coming from a direction. The ultimate effect we wanted was for the painting, the work of art, to sound like it had audio coming from it, but without putting speakers in the space. So we really started looking at binaural, or directional, immersive audio. I'm sure most of y'all are familiar with this, but if you're not, really quickly: the sound pans from side to side in your headphones, and the location the sound seems to come from is based on where you are in relation to the painting. If you're standing at a painting, you hear it head on; if you turn to your right, it sounds like it's coming from your left. There's just loads of information about this; it gets really deep, and it's actually really fun to play with. But that's the direction we were going with this audio: how can we deliver directional, immersive audio in a way that is really effective? When we started this project, to what Chris was saying earlier, I really had it in my mind that this was going to be a technology development project. But the technology part ended up being really easy; it was already there. We developed in Unity.
We mapped a very basic skeleton of our gallery and put the audio assets in the space. That happened really quickly. But what we realized, as soon as we had that, was that it immediately became an exercise in prototyping around content: how do you actually deliver the content? As some of y'all may have already picked up on, delivering multiple audio sources in one space can very quickly turn into a cacophony of sounds, so a lot of design iteration went into that, and that's where we spent most of our time on this project. Since we knew there was going to need to be a lot of tweaking, we built it in an environment that made it really easy and quick for us to tweak and redeploy. We built in Unity, and all the audio assets were brought into Unity as objects. Then we had this process of going into the gallery with a few people who were not involved in the project: they would use it for the first time and give us their feedback, we would make tweaks right there in the gallery with the Unity developer and audio producer, and then they were able to try it again very quickly. Each iteration only took us a minute or two to get back onto the hardware to test.
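The team built theirs in Unity, but the underlying geometry of "audio anchored in space" can be sketched in plain Python. This is a hypothetical helper, not their implementation: given where the listener stands and faces and where the artwork hangs, it derives left/right channel gains with constant-power panning plus a simple distance rolloff. Real binaural rendering also uses head-related transfer functions; this captures only the left/right and loudness cues.

```python
import math

def stereo_gains(listener_xy, heading_deg, artwork_xy, ref_dist=1.0):
    """Approximate directional audio gains for one artwork.

    listener_xy / artwork_xy: (x, y) positions on the gallery floor.
    heading_deg: direction the listener faces; 0 = the +y axis, clockwise.
    Returns (left_gain, right_gain), each in [0, 1].
    """
    dx = artwork_xy[0] - listener_xy[0]
    dy = artwork_xy[1] - listener_xy[1]
    dist = math.hypot(dx, dy)
    # Bearing of the artwork relative to where the listener is facing.
    bearing = math.degrees(math.atan2(dx, dy)) - heading_deg
    rel = math.radians((bearing + 180) % 360 - 180)  # wrap to [-180, 180)
    # Constant-power pan: rel = 0 -> centered, +90 degrees -> fully right.
    pan = max(-1.0, min(1.0, math.sin(rel)))         # -1 left .. +1 right
    left = math.cos((pan + 1) * math.pi / 4)
    right = math.sin((pan + 1) * math.pi / 4)
    # Simple inverse-distance rolloff so nearer works sound louder.
    atten = min(1.0, ref_dist / max(dist, ref_dist))
    return left * atten, right * atten
```

A painting straight ahead yields equal gains in both ears; turn so it sits to your right and the right channel dominates, which is exactly the "sound seems to come from the painting" effect described above.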
Shane Richie 11:40
During this process it was really quick, and it was actually great to build and test it in the space itself. So we were able to achieve this effect of directional, immersive audio pretty quickly and effectively. What we still have left to do: the prototype right now is a device that hangs around your neck and uses a camera to tell where you are, with an attached pair of headphones. It's clunky; it's not a very good user experience. So that's what we're looking to improve as we move forward. And now we're looking at pushing the content further: how can we tell stories more effectively through this? And then also developing hardware, which is something that's new for us, but that I'm kind of excited about, honestly. And that's me.
Chris Barr 12:32
We're gonna just keep moving and hold questions, because this is a short session.
Patty Reeves 12:39
I just want to, yes, do you want to just drive, and I'll tell you when to go next? So hi, my name is Patty Reeves. I am the principal UX developer at Alley Interactive. We are a web agency, and we worked with Sina Bahram of Prime Access Consulting and with Cooper Hewitt to create an Alexa skill. Next slide, please, Chris. "Alexa, open Cooper Hewitt." The idea being that the skill would be a way to let users explore a gallery using only a voice interface. Next slide. So, Cooper Hewitt's visitors: one in four visitors are 18 to 24; nearly eight in ten are first-time visitors; 50% are local to New York City; and the 47% who are involved in design account for two-thirds of repeat visits. Next slide. We assumed that the person we were developing this project for, you can go to the next slide, might be someone who wouldn't be able to visually access the gallery, and that the skill would give them a way to experience it. An important aspect of this project was the freedom to move around the gallery by choice, based on what sounded interesting, rather than just having a narrative read to you. But as we went through the human-centered design process, we also saw that students in a classroom could benefit from a skill like this, or an individual at home who wants something relaxing to do, or a family at dinner. By creating visual descriptions of the objects in the gallery, it really opened up all these wonderful alternate use cases for why someone might want to explore a gallery using voice. Next slide. So that was the idea: that you could go through a gallery, a physical space, using only a voice skill.
And to reiterate Chris's point and Shane's point: the technology, next slide, was really, I would say, maybe 5% of what was hard about this project. We did so many iterations and so much testing through writing. This was our very first crazy, insane diagram; we wrote a lot of scripts, and we would test them that way, with human beings reading back and forth. And next slide. What we learned was that the original idea, being able to, quote unquote, walk through a gallery using your voice, "I would like to look at the painting on my left," or "I would like to look at the object in front of me", was actually not the important idea. The important idea was that this is a thematic collection of curated objects, and I'm interested in this; the relationships among the objects were much more important than mapping them out in a physical space. Next slide. So we started looking at what these objects are and how they are thematically connected to each other. Another important thing about a skill is that it's time-based, just like this talk is right now: you're stuck wherever I am in the talking, and you can't speed things up or slow them down. So what are ways to get people to efficiently discover the things that are interesting? Next slide. So we, next slide, went back to the questions: what is the metadata about this object, and what are the questions that someone going through a gallery would ask an interpreter? That was how we built the user interface: the idea that you could ask questions about these objects as if you were with an interpreter who is telling you about them. Next slide.
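The interpreter idea can be sketched very simply: a voice intent routes a visitor's question to a metadata field on the object under discussion. The object records, intent phrases, and function names below are invented for illustration; this is not Cooper Hewitt's actual skill, just the shape of the idea.

```python
# Hypothetical object metadata, keyed by object id.
OBJECTS = {
    "sidewall": {
        "description": "A wallpaper panel with a repeating botanical motif.",
        "designer": "Unknown designer",
        "date": "about 1900",
        "materials": "block-printed paper",
        "related": ["poster"],
    },
    "poster": {
        "description": "A lithograph poster with bold geometric lettering.",
        "designer": "Attribution uncertain",
        "date": "1928",
        "materials": "lithograph on paper",
        "related": ["sidewall"],
    },
}

# Each intent maps a question a visitor would ask an interpreter
# to the metadata field that answers it.
INTENTS = {
    "who made it": lambda o: o["designer"],
    "when is it from": lambda o: o["date"],
    "what is it made of": lambda o: o["materials"],
    "tell me more": lambda o: o["description"],
    "what is related": lambda o: "You might also like: " + ", ".join(o["related"]),
}

def answer(object_id, utterance):
    """Route a visitor's question about one object to a metadata field."""
    obj = OBJECTS[object_id]
    handler = INTENTS.get(utterance.lower().strip("?"))
    if handler is None:
        return "You can ask who made it, when it is from, or what it is made of."
    return handler(obj)
```

Note how the "what is related" intent follows thematic links between objects rather than positions on a floor plan, which is the shift the team describes above.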
So we built the skill, which, again, was really only a small part of the work; the skill might have been the prototype we delivered, but it was really the third or fourth prototype within the project. We tested it with visitors, students, and low-vision members of the community to get their feedback. And then we learned, next slide, next slide, that this was a really exciting project. One of the things that really struck me in the process of working on it was that
Patty Reeves 17:35
my daughter, who's five and can't read, and my mother-in-law, who's 65 and won't use a computer, could both experience this in a way that neither of them would ever have been able to by sitting at a computer reading a website. I think this technology is really exciting. But as for the idea that someone is going to go on amazon.com and download this skill, which you can do, search for Cooper Hewitt, there are still, I think, some gaps in the technology before we'll see lots of museums doing this. Thank you. Over to you, Brian.
Brian 18:18
I think this one might work. Can everybody hear me? Okay. No? Can you hear me? Okay. All right. So I have the unenviable task of going last in a 30-minute time slot, and it takes me a while just to say my name. So my name is Brian Krishna Center. I'm from the Broad Art Museum at Michigan State University. Next. This is just a quick shout-out to the team that I assembled for our Smart Label initiative. One fun fact here that I think is important: the people who actually created the Smart Label were my students, with an average age of about 22 years. I think that's important because the space where they reside between dreams and reality is very tight, so they really got it very quickly. Next. This is our museum. It's a very challenging Zaha Hadid-designed building; it's sleek, it's pointy. We're very young, at seven-plus years, and we're a people-centered academic museum at a tier-one research institution. Next. What we wanted to create, in a nutshell: we wanted to reinvent the age-old object label paradigm. We wanted to create something that appears as much like a standard label as possible, but is so much more, a veritable portal of discovery. The questions we are exploring, and will always explore, are probably common questions for all of us. How can museums provide information with enhanced access to the widest possible range of visitors? How can we increasingly incorporate multiple learning styles within our exhibitions? And how can we effectively invite visitors into the conversations, into the stories that we're telling? Next. So, where to start? For me personally, the idea goes back some 20-odd years, when I was really lucky to be working with some visitor studies gurus, including Randi Korn, who I'm sure many of you know of.
So we were looking at some of the very earliest iterations of interactive kiosks in the museum space, so early, in fact, that the touchscreen displays we were using came from the cashier industry; there weren't really touchscreen displays as we know them today. One thing we noticed was that the computers obviously had a great deal of capacity, more than we could ever make use of even then, but people were a little circumspect about using them. At the same time, because it was Randi Korn, and she's still very thoughtful, we were studying labels themselves, the 100-, 200-year-old paradigm of interpretation delivery. And we noticed that labels were the most effective means of providing interpretation to people, and probably still are, in some ways. I remember thinking in that moment that there would come a time when we could combine the two into one, utilizing the strengths of each and capitalizing on the point of delivery of an object label. So prior to receiving funding from the Knight Foundation, I had students cracking open Kindles and doing all kinds of things for a couple of years. Then we kicked off the project with a think tank, where we assembled a number of different people from our university and beyond, and a number of people from our museum, and, maybe not a surprise, there was a lot of inside-the-box thinking going on there. Next. Being an academic museum, a lot of research went along with this project, and our evaluation started with some surveys in the gallery space. To paraphrase what we have here: we surveyed over 100 different people in our gallery space, and the vast majority, over 90%, said that they would indeed use interactive labels. And we started to see the beginnings of a very common refrain that you'll see again and again in these slides.
So people wanted to learn more, they wanted to dive deeper into the information, they wanted to take the experience home with them, and they wanted multimedia modes of learning within the label: animations, videos, audio, potentially images. And they wanted much more interaction, not just interfacing with the label itself, but also having conversations, being able to reach out to the museum and its staff. Next.
Brian 23:07
This was fun. One of the things we learned with Dana and Chris and the bunch in our human-centered design training was in-gallery lo-fi prototyping, just using paper. I don't know if you can see there, but what this allowed people to do was have a free flow of thought about what they would like to see. This lo-fi prototype was hung right next to a piece of artwork. And, like I said, I'm not sure if you can see, but I think this particular one in the photo is written in Mandarin, so it kind of helps prove some of the elements we were after. Again, we started to see that people wanted more multimedia, they wanted more information, they wanted to dive deeper, and they wanted much more interaction. Next. These are the components, essentially three different parts, and I brought one for show and tell. There's a 3D-printed mount; my brother and I actually designed it last Thanksgiving, which was a lot of fun, one of the most fun Thanksgivings, and we're actually designing the next one this coming Thanksgiving. Then we used a Raspberry Pi 3 computer and a touchscreen display. So, pretty simple. Next. So I took that team of young people and I said: guess what, we've got a 12-month project, and I'm going to condense it down into about five months. In five months' time we designed everything, we sourced everything, we tested everything, and we assembled it all into a fine package. And we went live in the form of this Op Art exhibition, which ran in our experimental satellite lab location. Everything came together so beautifully, I can't even tell you how excellent it was. We worked with an honors psychology course to create content for this exhibition.
And they were interpreting somewhat difficult concepts: they were trying to illustrate Gestalt theories of perception in this art, which is a little bit difficult, and these labels lent themselves particularly well to that exhibition and that need. Next. Once we went live, the show ran, I think, for about six months, from December until April of this year. We wanted to test out the hardware and make sure we were working on a solid platform, and I can tell you, honestly, this worked flawlessly. It works better than my smart thermostat at home; it's actually, in some ways, better than this computer. I've worked on prototyping projects for over 20 years, and I can't remember one that worked as well as this one did. What we did was push content to these labels; we wanted to make sure that at the push of a button you could completely recraft the interpretive content, and it worked beautifully. Utilizing the built-in Wi-Fi chip, Google Drive, and Google folders, it worked very easily. Next. What we did next was tinker a little with Bluetooth, because, as you'll see in a second, we knew we wanted the smart labels to be able to connect with people's phones. We used, I think, a BlueCats beacon, and it worked beautifully. Next. And again, more testing. While we were live in the gallery with these labels, we conducted just over 100 surveys with people, and again our assumptions and what we were seeing were spot on. People wanted to learn more, they wanted to take the experience beyond the gallery walls, they wanted to dive deeper; we saw that people wanted to download bibliographies of the exhibition, and they really enjoyed the multimedia.
In fact, I had people coming up to me saying this was one of the best, most engaging exhibitions they had been to, even though it was a small, kind of diminutive presentation; they said it was because of these interactive labels, and they felt they were able to have a much more cohesive, deeper dive. But we also realized that because this is a new medium, we need to spend a good deal of time tailoring the user interface to how visitors will actually work with it, and that looks like a lot of fun for us in the future. Next. I included this slide just to show we've got a whole lot of data, and if anybody wants any of it, if you'd like me to share it with you, please let me know.
Brian 28:04
So, what's next? More and more interactivity. We want to integrate more modes of learning. And this last one that I put up here is what I'm particularly interested in and excited about: creating a chat feature for this label and integrating it with the phone, so that people can actually be a meaningful part of the conversation. Next. Here's a little glimpse into what we're working on with our user interface on people's mobile phones. Next. And so that's the big idea of where we're going next: we're crafting a progressive web app that works in tandem with the smart labels. I should also say that what we're designing works in a modular way: you can have smart labels, you can have no smart labels, you can have the progressive web app, and you can have everything working together. We realized we wanted to do this because we started talking about whether we should incorporate audio and so on into these labels. You can certainly do that, but because most people are carrying a smartphone with them, why not use the functionality native to their phones? One of the things we're also interested in doing is embedding analytics; our friends at Crystal Bridges, I think, tinkered with this a little bit, and I'm anxious to learn more about that. Essentially, we could craft a label that allows us to observe visitor behavior and respond to it in real time, which I think is pretty exciting. And this is just some contact information. I should say that everything we've crafted, we want, in the spirit of open source, to share widely. We wanted to create something replicable with fairly low cost and a fairly low experience level, and we've created a step-by-step guide that we'd like to share with anyone interested.
Chris Barr 30:04
All right, thank you. This was a short presentation, and I see the folks who have the next slot. We're going to do something called the California hot swap; this is something Brian introduced to us, it's actually how he takes those screens in and out of the walls, a drywall technique. Thank you for coming to the presentation. Really, here we're just thinking about iteration: how do we test, how do we iterate, rather than just thinking about finished projects, and how do we make that part of the culture of the way we do work within museums, especially with technology? Thanks so much.