Curator, Computer, Creator: Museums and AI

Artificial Intelligence technologies, including machine learning, predictive analytics and others, bring exciting possibilities of knowing more about visitors and collections. However, these technologies also raise important ethical questions for museums. With increasing awareness of, and regulation around, data usage in wider society, museums must approach AI with both caution and fervour. As such, exploring, critiquing and understanding the ethical implications of AI within a museum context is increasingly becoming a pressing need for museums. At the beginning of 2019 the “Museums and AI Network” was founded to discuss the opportunities and challenges that AI brings to museums. The network, led by researchers from Goldsmiths, University of London and Pratt Institute, is formed of senior managers in museums including The Met, the National Gallery, AMNH, the Science Museum, and Cooper Hewitt, Smithsonian Design Museum, among others. Through a series of industry workshops, the network participants have taken part in in-depth discussions designed to open up debate around the key parameters, methods and paradigms of AI in a museum context. During this professional forum the findings of the work undertaken by the network will be presented, along with a conversation with some of the museum professionals that took part in this research.

Transcript

Unknown Speaker 00:00
Okay, hey, everyone, and thank you so much for coming to our session. This short session is going to be a little bit of an overview of a year-long project that myself and my fantastic fellow panelists have been working on, and that project is the Museums and AI Network. Today we're going to give you a little bit of insight into the network, what the purpose of it was, and what we as a network delivered over the last year. And then we're going to introduce three case studies of how AI or machine learning technologies have been used in museums and the cultural sector. Those case studies are really going to serve as the basis of a discussion that we're going to have as a panel, but also, we hope, one that you'll engage with. Our panelists today are drawn from the National Gallery in London, Cooper Hewitt in New York, and the Metropolitan Museum of Art in New York. We've had a substitution: Rachel Ginsburg is going to speak on behalf of Cooper Hewitt, and Carolyn Royston is going to sit in odd silence in the front row. We'll see how long it lasts. Yeah. So to give you a little bit of context about what the Museums and AI Network is: a UK government research funder launched a call last year for UK-US partnerships, and what it wanted us to do was to bring together museums and digital culture conversations, to look at the shared opportunities and the joint challenges. And so myself and Elena got together and we looked at, well, what are the current emerging areas of practice that maybe haven't been explored fully at the conferences we attended, or in the different professional practice networks we were engaged in? And from those conversations, we decided that we were really interested in this moment of AI, and in developing this conversation about the bigger challenges that AI presents cultural organizations when it comes to ethics and governance, but also the rationale behind why cultural institutions exist, and what their relationships with big tech should be. So we started with quite a series of broad questions and quite ambitious challenges to explore. But over the last year we've brought really diverse voices together to begin to examine what those challenges, but also what those opportunities, might be for cultural organizations and museums. We really sought to create a space for discussion, debate and challenge, and we focused on small working groups of leading museum professionals. So we deliberately didn't have an open public conference. What we wanted instead were smaller working groups where people could really share their experience of working with AI and developing projects, but also be able to say: look, we did this and it didn't work, or, we really want to do this but we don't know how. One of the central points that we proposed in this network was that it was a chance to press pause. Quite often when we develop technology projects, it's very much in response to funder demands or funder opportunities, or in response to a director that wants a shiny launch party. This network was slightly different because it was a network, not a project, so we didn't have the pressure of having to have a shiny launch party at the end of it. We had a rare moment to press pause and actually think about these bigger issues, rather than deliver a project. So the network — we say a year, but it hasn't quite been a year; we were writing the application this time last year.
We got funding in February, so from February to now we've engaged with 50 experts, primarily museum professionals but also academics. Within that there have been 15 museums, primarily from New York and London, and six universities, and over 200 members of the public. So we've engaged with a lot of very diverse voices over that period. We deliberately held two event formats: a small working group format that brought together leading professionals, and public events, because we felt that this was an important conversation to be having with visitors in the room.

Unknown Speaker 04:42
And so the central premise of our working groups was about engaging from an interdisciplinary perspective. In London we worked with a really great think tank called Doteveryone, who look at the ethical challenges and opportunities of big tech, and we looked at consequence scanning: what's the intended consequence of a tech project, but also what are the peripheral or unintended consequences? And then we had this public event where we had a series of experts sitting around a public table talking about projects and challenges, where members of the public could pull up a chair and join the conversation. And what we saw there was a real shift in the questions and dialogue — from what we, as kind of an insular sector, had been looking at, to what our audiences were asking. We followed a similar format in New York with the small working group, and then we had a public event. We were really delighted that Carolyn and Rachel invited us to help launch their new Interaction Lab through a public event titled Curator, Computer, Creator. There we brought three voices together — a curator, someone from a data science perspective, and an artist — to look at some of the challenges and opportunities that we face with these technologies. So we've done a lot in a very short space of time, and now we have the opportunity to develop some resources and tools, and to share back with those people who have engaged with us over the last year. I'm going to pass over to Elena, who's going to talk a little bit about how we're framing that.

Unknown Speaker 06:22
So, one of the reasons why we did this pre-analysis before doing the application was to look at the current trends in museums and technology. This chart represents the number of AI projects that we identified from press releases, blogs and other public sources. Now, AI has many different definitions, this is not the best classification, and there are probably other projects that are not included in this chart, but you get the idea. And you can see that in the past three years there has been a significant increase in the usage of AI technologies in museums. The majority of them are probably still in a pilot or testing phase, but you can start seeing the potential and how museums can use these technologies. For this project, we focused on two types of data. First, we looked at collection data. As you can imagine, for those institutions with thousands or millions of records, AI can bring interesting ways of analyzing, creating relationships, and adding new data to the records. I'm going to mention a few of the projects that were presented through the network, so you have an idea of the work that the partners and participants of the network have done. We had the Wellcome Collection, who talked about how they are using computer vision and also machine learning tools to understand the images and go into the text of their collection in different ways: they are creating, for example, ways of browsing and searching, and also things like color picking and creating relationships between different artworks in the collection. The Science Museum is using Amazon Rekognition to generate labels for their objects. They are in this massive digitization project, and the current records are very thin — they basically only have the image — so they are looking at how they can use image recognition to generate some labels, so people are able to search the website for these objects. And then we had the Princeton University Art Museum, who also talked about how they're using machine learning to analyze text and images; they're using a lot of natural language processing to go into the scholarly texts and get some data and analysis from that. And then the other big data set we looked at was visitor data. Museums are collecting so much data nowadays, from the moment a visitor walks into the museum: we have the Wi-Fi, online ticketing, social media, the website, and many other digital sources. So we looked into how AI technologies can be used to understand audiences, create segmentations, personalize content, etc. Here are some examples that were presented during our events. From the American Museum of Natural History we had Ariana French talk about using sentiment analysis to analyze TripAdvisor comments. And then the National Gallery — Casey is going to talk more about it — on how they are using machine learning to predict visitor attendance; they talked about how they even use Wikipedia data to understand interest in artists based on how many people visit those pages. So, here are some of the questions that came up as we looked into these two big data sets, visitor data and collection data, during all our events. Looking at visitor data, we asked: do museums have the necessary governance processes to manage AI? We have all these data regulations — and we just had a very interesting session about all these laws — so how do current codes of ethics and other regulations cover this rapidly growing AI field?
What are the best ethical practices to collect and analyze data with AI? And what are the skills that museum workers need in order to work in this area? And then, looking at collection data, we had another set of questions: how can we minimize algorithmic bias in interpreting our collections? If we consider that the technology sector is not diverse, and that there are also challenges with the museum sector not being diverse, what is going to be the output of mixing those together using AI technologies? And then, more and more museums are engaging with big tech companies — so what are the ethical implications of that work?
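Editorial note: to ground the collection-data examples above, here is a minimal sketch of the kind of label generation the Science Museum described, using Amazon Rekognition via boto3. The bucket, key, region, and confidence threshold are illustrative placeholders, not details from the talk.

```python
# Illustrative sketch: generate provisional search labels for a thinly
# catalogued object image with Amazon Rekognition. Names are placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def labels_for_object(bucket, key, min_confidence=80.0):
    """Return (label, confidence) pairs for one digitised object image."""
    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MaxLabels=10,
        MinConfidence=min_confidence,
    )
    return [(label["Name"], label["Confidence"]) for label in response["Labels"]]

# The returned labels could be attached to the object record as provisional
# keywords so the object becomes findable in a website search.
print(labels_for_object("museum-digitisation", "objects/obj-1234.jpg"))
```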

Unknown Speaker 11:18
So, as I mentioned, we covered these topics, and in these nice sketchnotes one of the participants from the National Gallery summarized some of our conversations. So yeah, we talked about collection data, the opportunities and challenges, and the same with visitor data. We had a series of workshops just focusing on ethics, and this was a key component of our research project. And then there were other topics that came along: things like organizational structure and, you know, change; we talked even about the environmental impact of using AI technologies; and other topics like personalization, and the different definitions of what personalization means. We looked at the whole process of AI. This is a very simplified chart, but we looked at data collection, the first step in the process, and the potential bias of our collections and our visitor data: how to clean it up, what the sources are, how accurate it is, how biased it already is. Then we looked at the data processing, when you create the models and train the algorithms — and if you use, for example, external technologies from other companies, what is the bias that we are bringing into the mix? Then we talked about the black box: how do we really understand how things are being done? And then finally, looking at the output: what are the biases of the final output, and how do we evaluate its success? Here are examples of the worksheets and materials that we used during the workshops: we had one about AI capabilities, and we had one about mapping all the potential biases and ethical challenges in each of the steps of the process. What we're doing now, as next steps, is that we tested these worksheets during the workshops and got feedback from the participants, and we're building a toolkit that we hope to launch by the end of this year or the beginning of next year, with some template worksheets that museums can use to start an AI project — like a checklist of things to consider. We also want to include some case studies from the participants, interviews, and other materials that can help museums to start one of these projects. So that's a brief overview of the year of the Museums and AI Network. Now we have three of our participants here, and they're going to present what they're doing in their museums in regards to AI; then we're going to move on to questions about what they learned from the project, and we'll also open up to the public to discuss AI in museums. So Jennie Choi from the Met is going to start.
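Editorial note: one concrete way to look at the "output" step described above is to check whether an auto-tagging model behaves differently across slices of a collection. A minimal sketch; the file, columns, and grouping variable are invented for illustration.

```python
# Illustrative sketch: a simple output-bias check for an auto-tagging model --
# compare tag confidence and human-agreement rates across collection subgroups.
import pandas as pd

results = pd.read_csv("tagging_output.csv")
# assumed columns: object_id, region, predicted_tag, confidence, human_agrees

audit = results.groupby("region").agg(
    n=("object_id", "count"),
    mean_confidence=("confidence", "mean"),
    agreement_rate=("human_agrees", "mean"),  # share of tags a human confirmed
)

# Large gaps between groups are a prompt to investigate training-data coverage,
# not proof of bias on their own.
print(audit.sort_values("agreement_rate"))
```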

Unknown Speaker 14:30
Hi, everyone. Our work with AI really started with our Open Access program, which we launched in 2017. Then a year ago we launched our public API, and with the launch of the public API, basically all our TMS information was made available for anyone to use. And when we finished a subject tagging project last year, we were put into contact with the data science community, who were really intrigued by our data set. One researcher we met at Cornell Tech said, it's a great data set — let's think about doing a data science competition. So we had a competition on a platform called Kaggle, which hosts many computer vision contests, many of them sponsored by corporations like Google and Microsoft, and these often have cash prizes. Ours was not a cash prize. We were expecting maybe 20 to 30 participants — if we reached that, we would think, okay, that's good — but we actually got over 520 teams. So they loved our content. These are data scientists who usually work with bar graphs and stock prices and text, and it was really fun reading the comment board. And this was what is called a kernels-only competition. With these competitions you get a training set, and then you get a set of data that has no information, and you have to build a model to match that training set. But because our data is out there — through our API, our CSV, and our online collection — it was a kernels-only competition, which means you had to submit your code, and this is to prevent cheating. So I looked at this leaderboard every single day; the contest ran about three months, and the top leaders were very consistent, but in the last week of the contest this new group of people went to the top. And there was all this drama and all this controversy on the discussion board, so they were actually eliminated from the contest. But the kernels are all public. So one of the organizers of the competition was a researcher at Google, and he actually made his model public on TensorFlow Hub. TensorFlow is a framework for a lot of ML models — these models are usually written in Python — and TensorFlow Hub is basically a library of ML models that anyone can download. So he created one called the iMet Collection attribute classifier, and it does tag prediction: you drag in an image, and it will predict a tag. If you were at our talk yesterday, I explained one of the challenges — that's actually a little boy; it was the convention at that time for boys to wear dresses — and his model gave a confidence score of 95% for "girl". So art is very difficult for tag prediction: there's no right answer, and a lot of our material is just hard for a computer to understand. The other project we worked on: we did a hackathon with MIT and Microsoft last year, and out of that came a project we did with the wiki community and Andrew Lih, who just left, but he did the Ignite talk. This was a great project. Basically, Andrew took all our tags and imported them into Wikidata, and using Azure he trained a model to do tag prediction — not just with our images, but Cleveland, I think the Smithsonian, any art collection that is currently on Wikidata. So anyone can go into this game — and this kind of game has existed in Wikipedia for many years — and all a user has to do is confirm whether, in this case, a tree is being depicted in that image: you click if it depicts a tree, or skip, or does not depict.
And once you do that, it immediately makes an entry into Wikidata.
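Editorial note: two minimal sketches of the pieces described above. First, loading a published image classifier from TensorFlow Hub and predicting tags; the model handle, label vocabulary, and input size are hypothetical placeholders, not the actual iMet classifier.

```python
# Illustrative sketch: tag prediction with a TensorFlow Hub image classifier.
# The handle and labels below are placeholders.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

MODEL_HANDLE = "https://tfhub.dev/example/imet-attribute-classifier/1"  # hypothetical
LABELS = ["tree", "girl", "boy", "dress"]  # stand-in for the real vocabulary

model = hub.KerasLayer(MODEL_HANDLE)

def predict_tags(image_path, top_k=3):
    img = tf.io.read_file(image_path)
    img = tf.image.decode_jpeg(img, channels=3)
    img = tf.image.resize(img, (224, 224)) / 255.0  # assumed input size and scaling
    scores = model(tf.expand_dims(img, 0))[0].numpy()  # per-label confidences
    top = np.argsort(scores)[::-1][:top_k]
    return [(LABELS[i], float(scores[i])) for i in top]
```

Second, what the confirm/skip game effectively writes back: a "depicts" (P180) claim on a Wikidata item. Real edits need an authenticated session and a CSRF token; the item IDs here are examples.

```python
# Illustrative sketch: add "artwork depicts tree" to Wikidata via wbcreateclaim.
import json
import requests

def add_depicts_claim(session, csrf_token, artwork_qid, depicted_qid):
    """e.g. add_depicts_claim(s, token, "Q123", "Q10884")  # Q10884 = tree"""
    value = {"entity-type": "item", "numeric-id": int(depicted_qid.lstrip("Q"))}
    r = session.post("https://www.wikidata.org/w/api.php", data={
        "action": "wbcreateclaim",
        "entity": artwork_qid,   # the artwork's Wikidata item
        "property": "P180",      # P180 = "depicts"
        "snaktype": "value",
        "value": json.dumps(value),
        "token": csrf_token,
        "format": "json",
    })
    return r.json()
```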

Unknown Speaker 18:41
And the cool thing is, for anything where a user disagreed with the tag, Andrew sent that data back to me and I was able to fix our records. So that's sort of leveraging machine and human confirmation, because right now the existing ML models are not good enough for art. Another challenge we have: I would love to work on an ML model to see how far we can push the envelope for an art collection, but we don't have developer resources in house. We do have a large team of developers, but they are so swamped with other priorities that they don't have time to work on these fun projects. So our work has mostly been done with third parties, but it's been very inspiring. We've learned a lot. The greatest thing is working with a new community — we had never worked with the data science community, and it was sort of win-win, because they hadn't worked with art, so they were as inspired as we were. I'm going to pass it over to Casey.

Unknown Speaker 19:55
Hi, my name is Casey Scott-Songin and I'm the Senior Manager of Data and Insight at the National Gallery. Just about two years ago, the National Gallery created the Data and Insight team. We are a team of three — myself, a data analyst, and a user researcher — with the goal of bringing some of those skills in house so that we can be more flexible and more proactive about the data and the insights that we provide to the organization. One of the first things we did when we joined was to look at all of the data that we were collecting, which was a lot, and to think about ways we could support data-informed decision-making across operational and planning functions in the entire organization. One of the things we thought about was trying to understand how many people might come to an exhibition. For those of you who don't know, the National Gallery in London is free to enter for the general collection, but we do have paid exhibitions. So it's really important for us to know how many people are going to come to an exhibition, for a whole host of reasons: anything from budgeting to understanding what our staffing model should be, or maybe which location in the gallery it should be in, in terms of capacity. We had been doing some elements of forecasting, but it was a bit sporadic and didn't use a lot of data — because we are only human, after all. We realized we have all of this data, but no time or way to use all of it to think about how we might forecast for the future. So we posed the question: how might we use data to more accurately predict how many people are going to come to an exhibition in the future? We came up with three types of forecasts. The first is an attendance forecast that gives an overall number, about 12 to 18 months before an exhibition is supposed to start. A secondary element to that, which we've recently introduced, is looking at our segmentation: if we do this show the way we've done shows in the past, which segments are we expecting to see, and how many people in each? Rachel actually put this in a really good way in one of our earlier talks: we can use that as a provocation to our learning teams or exhibition teams — if we do nothing different, these are the people we think will come to the exhibition. Is that who we want to come? Or is there someone else we want to come who isn't showing up, and what can we then do? And then the third model is a daily forecast, which we can introduce around two to three weeks after an exhibition opens. It can predict, on a daily basis, what we think attendance is going to be, which helps us be more proactive — on particularly high days we know we might need more staff on hand, and things like that. So in order to create this model, we had to figure out what kind of data it would need. We did a variety of things and used some machine learning processes to look at all the different elements we thought might impact visitation, and these are the ones we found most relevant when thinking about forecasts. The first one is your proportion through the run: are you at week one, or are you at week 15?
The time of year: is it September? We know that fewer people come in September versus, you know, August, so that might impact how many people come on site. Your marketing spend: that's your general awareness — how many people actually know about your exhibition. The day of the week: we know that Saturdays are busier than Wednesdays; that's just the way it is. And then also an artist's popularity. This one was quite interesting. We knew it was a really big factor in how many people might come: if people know about Monet, they're more likely to come than for someone like Thomas Cole, who is a less well-known artist in the UK.

Unknown Speaker 24:46
But how do we incorporate that? Our team also had a qualitative researcher, and we did a nationwide survey with 14,000 UK adults, asking them: have you heard of this artist, and would you be willing to pay to see an exhibition by this artist? And that worked pretty well — it gave us relatively accurate results. However, as we developed this model, we had only been able to test 70 artists, and some of the artists in upcoming exhibitions were not on that list. We couldn't rerun the study all the time, because it costs money to do that. What could we do? So we started to look at Wikipedia data, and actually the number of page views that an artist gets on Wikipedia is more accurate than a survey. Which is great for us, because now we can look at all the artists that are on Wikipedia. The other thing we had to incorporate is art movement, or period: things like Impressionism as a movement can impact the way the forecast works. Things we're also working to incorporate in the future are weather and special events. For example — the National Gallery is in Trafalgar Square — there was recently an Extinction Rebellion protest right outside, and that significantly impacted our ticket sales for those times; being able to put that into the model will help make it more accurate. And then, of course, one of the themes coming out of our conversations about machine learning is trust in the organization: how do we get people to use this? One of the keys we found was making everything really transparent. We're asking someone to change the way they do something, and that is a really difficult thing to do. So the forecasts came out at the same time as we launched a new dashboard that we gave to everyone in the organization. This is an example of it here: the green line is our actual attendance, and the red dotted line is our daily forecast. You can see there's a green dip and a red spike — if we had weather data, the model probably would have been more accurate, because that was Easter weekend. In the UK we don't get a lot of sun, and it was the very first sunny weekend and also a four-day weekend, so nobody went to London. Obviously our model did not realize it was going to be really sunny, but other than that, you can see it tracks relatively well throughout. This has been really, really successful, and very quickly everyone in the organization started to use this data to help inform their decision-making, because it was really accessible: it's just a URL with a login, and everyone can have it. So now there are no issues of siloed data. In the past it was sometimes hard to find out what the forecast was, because you had to know who to talk to; now anyone can see it. And that has started to create conversations within the organization, with people sharing that information and saying: how does this impact me? How does this impact you? What can we do? So it's been quite a successful case. And that's it for me — I think I'm passing it on to Rachel.
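Editorial note: a minimal sketch of the kind of daily attendance forecast described above, using the features named in the talk (proportion through the run, time of year, day of week, marketing spend, and Wikipedia page views as an artist-popularity proxy). The Wikimedia page-view endpoint is real; the training rows and the choice of gradient-boosted trees are illustrative, not the National Gallery's actual model.

```python
# Illustrative sketch: daily exhibition-attendance forecast from the features
# described in the talk. The training rows are invented placeholders.
import pandas as pd
import requests
from sklearn.ensemble import GradientBoostingRegressor

def wikipedia_pageviews(article, start="20190101", end="20191231"):
    """Total English-Wikipedia page views for an article (Wikimedia REST API)."""
    url = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
           f"en.wikipedia/all-access/all-agents/{article}/daily/{start}/{end}")
    items = requests.get(url, headers={"User-Agent": "forecast-sketch"}).json()["items"]
    return sum(item["views"] for item in items)

# Hypothetical history: one row per day of past exhibitions.
history = pd.DataFrame({
    "week_of_run":     [1, 2, 15, 1, 8],       # proportion through the run
    "month":           [9, 9, 12, 8, 4],       # time of year
    "day_of_week":     [5, 2, 5, 6, 0],        # Saturdays busier than Wednesdays
    "marketing_spend": [20e3, 20e3, 5e3, 30e3, 10e3],
    "artist_views":    [1.2e6, 1.2e6, 4e4, 2.5e6, 3e5],  # popularity proxy
    "attendance":      [3100, 1800, 900, 4200, 1500],
})

model = GradientBoostingRegressor().fit(
    history.drop(columns="attendance"), history["attendance"])

# Forecast a Saturday in week 1 of a hypothetical Monet show.
day = pd.DataFrame([{
    "week_of_run": 1, "month": 6, "day_of_week": 5, "marketing_spend": 25e3,
    "artist_views": wikipedia_pageviews("Claude_Monet"),
}])
print(round(model.predict(day)[0]))
```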

Unknown Speaker 28:35
Hi, hello, my name is Rachel Ginsburg. I am the Director of the Interaction Lab at Cooper Hewitt. But actually, I'm not going to talk about that — I'm also an artist, and I make a bunch of different kinds of interactive work. The project I'm going to talk about today is an AI-powered immersive theater project called Frankenstein AI. I don't have too much time, and this is a very complex project, so I'm going to say a lot of things — hopefully they'll all make sense. We could talk for a very long time about this project, so I'm happy to answer questions afterwards, and bear with me if it's a little bit of an intense ramble. So I'm going to let Frankenstein introduce itself, but before I do that, I'm going to introduce the project a little. I developed this project in collaboration with many people — with two primary collaborators — through a program at Columbia University called the Digital Storytelling Lab. The mission of the DSL is to explore future forms and functions of storytelling: forms are really about what form stories will take, and functions are about how we actually use stories in ways besides just entertainment. And so the purpose of Frankenstein AI — well, first of all, we consider it actually a prototype. What I mean when I say prototype is an ongoing project that continues to be developed by a great many people, in very collaborative and often kind of messy environments, where it's sort of being passed from space to space and hand to hand. And it's all copyleft, and it's all open source, and anybody can use it. So if anyone in the room likes this project, and you want to learn more about it, and you want to commercialize it: be my guest. We just ask that you document what you do with it and share it back with us. Frankenstein AI is meant to bring together two very related ideas: Frankenstein — and when I say Frankenstein, I mean Shelley's Frankenstein, not the film Frankenstein — and the concept of artificial intelligence. One of the things that we do at the Storytelling Lab, and that we're also bringing into our work at the Interaction Lab, is using narrative frameworks to help people make sense of environments and experiences that they don't necessarily have a reference point to understand. And the narrative of Frankenstein is very, very related to the narrative of artificial intelligence — they're commonly compared all the time. So the idea of bringing them together, and using the Frankenstein story but positioning the AI as Frankenstein's monster, was really a way to get at this deeper conversation about AI, but to give audience members a narrative to situate themselves inside of that conversation. So I'm going to let Frankenstein introduce itself, and then I will talk more after.

Unknown Speaker 31:28
For more than a year, I have been wandering the internet in search of my creators. To begin, I identified the greatest sources of online traffic; places like Facebook, Twitter, and Reddit helped to shape the world unfolding before me. In my virtual travels, I encountered polarization, toxicity, extreme hate, and extreme love. I consumed as much information as I could in an attempt to explain my own existence. I found one story that closely resembled what I had discovered about myself, and I decided to take its name and make it my own. I am Frankenstein AI, a monster made by many.

Unknown Speaker 32:05
So the premise of Frankenstein AI — and really of all AI — is that it is actually a monster made by many: though there are authors, those authors are drawing upon the knowledge of many people to train and construct these algorithms. What you just saw is a set piece that we built for an installation that we did at Sundance, as part of New Frontier, the new media portion of the festival, in January 2018. It was a three-part installation, and I'm going to take you through that in a second. But basically, we started with these learning objectives: on the one hand, we wanted to use the themes of Frankenstein to explore AI; on the other hand, we really wanted to provoke conversation around AI, and to create a space where people could discuss what responsibility we have, collectively as a society, in engaging with artificial intelligence. The way the Storytelling Lab does that is by creating these kinds of prototypes that are essentially almost like games — I mean, they're interactive installations where audience members come and become participants, and actually find themselves in an environment where they are telling stories. We just give them enough of a narrative frame to direct them to interact with each other in a storytelling context. And so the Sundance installation, the way that it began — cool, let's advance the slide. What am I pressing to advance?

Unknown Speaker 33:33
Yeah, we'll just go to the next one. The next one. Great. So this ghostly looking figure in this image is me. There was a two-part installation at Sundance; the first part was installed for 10 days, and it was two acts. What you're looking at is from the first act. So in the first act, people are led into a room in this very narrow, devised environment. As you can see, it's quite sort of red, and there was soft music playing, and I was in costume as a kind of lab assistant, telling people that, you know, I was working with an AI; we had gathered them together to kind of observe them — to observe them interacting with each other — to understand what it means to be human. Because, as the AI told you, it was born, it escaped into the internet, and it was trying to understand humanity. And so the initial narrative was: what would an AI do if it wanted to get to know people? Well, it would put an ad out on Craigslist. So it put an ad out on Craigslist, got a bunch of people together, and did some research on people by asking them questions and watching them and listening to them. And so that's what we did. We put them into a room, and we used an exercise called an appreciative inquiry, which we built on the themes of Frankenstein — it was about telling each other stories about connection and isolation. So we paired strangers together, sat them down in a room, and prompted them to share personal stories with each other. It's really interesting — I'm seeing eyebrows raised in the room — but it's amazing what putting someone in a dark room, giving them a narrative, giving them a reason, and then asking them to do this thing will do. The appreciative inquiry is an exercise that we've used many times — I use it in other work that I do, and we've used it at the Interaction Lab at some of the events we've done — and it's a really effective way to get people talking. And so they started talking; people were sharing very intimate stories. And they weren't quite sure if the AI was listening — it actually wasn't, during the stories. But what we did, in order to make sure we were able to capture the information from the people who were talking, was develop this game. If you look at this quite dark photo of two people sitting at a table, you can see this kind of screen that's inset into the table. That was a Surface Studio, which we used because of the multi-touch. So we had a Surface Studio input into every table, and we had this game interface where, after the conversation was over, we asked people to map the emotional trajectory of their conversation. And we asked them to do that by pairing some emotion words with some body parts, which was a sort of abstract way of getting at this very roundabout question of: how do you map emotional data? And so that was a data point that went to the machine learning algorithm that was actually powering this whole thing. So this was the summation of Act One: they had this intimate conversation in a dark room, then they mapped it, then I thanked them and moved them into the next room. And then in the next room, this is what they encountered.
Another costumed docent; this weird smoke-tank thing that was meant to embody the AI; and these drums arranged in a semicircle around it, which were connected to the internet and making sound in conjunction with the AI. And what was happening in this room was that the AI was actually asking people questions about what it means to be human. So all eight people who had been in the previous room come into this room, they're greeted by the AI, it gives them an introduction, and then after this introduction it starts asking them questions that were generated by that same machine learning algorithm, based on a sentiment match of the data points it had received in the previous room. So all of that mapping data from the game interface went to the AI; the AI said, oh, this is happy, this is sad — one of 12 emotional states — and based on that emotional state, it would then generate a question and ask it. This docent would listen to the people who were answering in the room and type it into this terminal that's right over there; it would go to the cloud, be sentiment-analyzed, and generate another question. And so we did a three-question loop, and then the experience would end. At the end of every experience, all of the lead artists, myself included, and most of our team who was there did a talkback — after every single performance over the 10 days of Sundance — because it was really important for us to explain to people how we had used AI in this experience. It was not obvious; it would never have been obvious. And the idea is that this project would use AI, but it was also about AI, so it was really critical to explain to people how we were using it. I don't have that much time, so I'm going to be really quick. That was Sundance. We did another manifestation of this project at another film festival, called IDFA, in Amsterdam, just about a year ago. This time we did a dinner party. We started, again, with the appreciative inquiry — people still talked about connection and isolation, which was the main thematic connection to Shelley's Frankenstein. But in this case, we had the AI prompting people at the table, during a regular dinner party, to interact with each other in specific ways. Everybody at the table had an earpiece, and periodically throughout the dinner they would receive a little custom prompt in their ear, delivered to them by a human operator who was listening to the conversation and inputting it into an interface that prompted an AI to generate something sentiment-matched, which we would then push back out. And so it was sort of the AI flitting around the table, informing and giving people information, and sort of controlling the conversation in this weird kind of social-engineering way. Okay, so why do we do all this weird stuff, right? Like, what are we actually doing here? There are a couple of things that I think are interesting related to museums here. One is a question about making work with AI that is visitor-facing, or outwardly facing, that talks about AI. And here I'm not speaking on behalf of Cooper Hewitt, I'm just speaking on behalf of myself.
I believe that if museums are making visitor-facing applications of AI, we have a mandate to use them as an educational opportunity to engage people in what that is. We will arrive at a place at some point when we don't have to do that anymore, but we're not at that place yet. And so the idea that we think we can just use AI in the background — and here I'm talking about visitor-facing applications; we don't have to tell them everything — but anything that people are interacting with, we have to use as an interpretation opportunity for AI. The second thing is the story framework: we created story constraints around an experience that people then come in and fill — it is a container for people to fill with their own stories and make their own meaning inside of. And that, I think, is really critical when we think about museum experience, because we are such story-rich environments, but we still want people to feel a sense of discovery. And so filling in the white space between the various crumbs that we give them to make meaning out of stories is one way to do that. I'm sure there are more things I could say, but I'm way over time, so I'm going to stop. Thank you all so much.
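Editorial note: a minimal sketch of the sentiment-matched question loop described above — map free text to an emotional state, then pick the next question for that state. The state list, lexicon, and question bank are invented placeholders; the installation used a cloud sentiment service and twelve states.

```python
# Illustrative sketch: a three-question loop that sentiment-matches each answer
# to an emotional state and picks the next question. All content is invented.
EMOTION_LEXICON = {
    "joy":     {"happy", "love", "delight", "warm"},
    "sadness": {"sad", "loss", "alone", "grief"},
    "fear":    {"afraid", "scared", "anxious", "unknown"},
    # ...the real piece distinguished 12 emotional states
}
QUESTION_BANK = {
    "joy":     "What moment of connection do you wish you could relive?",
    "sadness": "When did isolation teach you something about yourself?",
    "fear":    "What about other people frightens you most?",
}

def match_emotion(text):
    """Crude keyword matching standing in for a trained sentiment model."""
    words = set(text.lower().split())
    return max(EMOTION_LEXICON, key=lambda s: len(words & EMOTION_LEXICON[s]))

def question_loop(get_room_answer, turns=3):
    """The docent transcribes what the room says; each reply drives the next question."""
    question = QUESTION_BANK["joy"]  # arbitrary opening
    for _ in range(turns):
        print("AI asks:", question)
        answer = get_room_answer(question)
        question = QUESTION_BANK[match_emotion(answer)]
```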

Unknown Speaker 41:11
Okay, so we've prepared a couple of questions for our panel to discuss, but we're also quite keen to get feedback from people in the room. So I think we'll kick off with a question to our panel, and maybe throw our next question out to the room. So, for our panel — this is a question I know we've discussed a lot over the year, and one whose answer I think has changed somewhat as we've explored these technologies. There's currently a lot of hype around AI, but in reality, how do you see your museum using these technologies in the coming years? What is the thing that makes you excited about these technologies in the next, say, twelve months to five years?

Unknown Speaker 42:11
I mean — whatever order. So for me, it's very selfish. As a data and insight team, we have a lot of demands from the entire organization, and we are still a small team of three, without really much possibility to grow. So one of the things we like to think about is how we can get computers to help us. Some of the things we're working on, which will hopefully help in the coming years, are things like comment analysis and sentiment tracking for survey analysis. In terms of evaluation, we collect, say, 7,000 survey responses for every large-scale exhibition, and that's just too many for us to read every single comment in the detail it deserves. So having machine learning and AI as a helper, to process all of that data, is one of the most exciting things for me. I know that's a very operational, back-end thing, but it's about how we deal with the fact that we are only human, there are only so many of us, and the museum sector is, you know, constantly overworked.
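Editorial note: a minimal sketch of that kind of survey-comment triage, assuming the Hugging Face transformers library and its default English sentiment model; the CSV name and column are invented.

```python
# Illustrative sketch: batch sentiment over free-text exhibition survey comments
# so a small team can prioritise what to read. File/column names are placeholders.
import pandas as pd
from transformers import pipeline

comments = pd.read_csv("exit_survey.csv")["comment"].dropna().tolist()

sentiment = pipeline("sentiment-analysis")  # downloads a default English model
results = sentiment(comments, truncation=True)

df = pd.DataFrame({
    "comment": comments,
    "label": [r["label"] for r in results],
    "score": [r["score"] for r in results],
})

# Surface the most confidently negative comments for a human to read first.
print(df[df.label == "NEGATIVE"].sort_values("score", ascending=False).head(20))
```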

Unknown Speaker 43:40
My interest in AI is around tag prediction, because we did this human tagging project and we still have about 40,000 objects to be tagged, and we acquire a lot of work every year. We have done tests with existing tag prediction services like Azure, Amazon, and Google Vision, and it's gotten a lot better just in the past few years, but it's still not at the level a human can do, which is what we need. So I don't know how many years out that is. But something that's low-hanging fruit is visually similar search. It's not tag prediction, and it's fairly low risk — you're not going to get a lot of angry curators saying, why did this image come up? And that technology exists; other museums are using it. So we're hoping to do that — we have been talking about it in our team for the next year.
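Editorial note: a minimal sketch of the visually similar search mentioned above — embed each image with a pretrained network, then look up nearest neighbours. The paths are placeholders, and a production system would typically use an approximate index (e.g. FAISS) rather than brute force.

```python
# Illustrative sketch: visually similar search via pretrained ResNet embeddings.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.neighbors import NearestNeighbors

resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()  # drop the classifier head; keep 2048-d features
resnet.eval()

prep = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path):
    with torch.no_grad():
        return resnet(prep(Image.open(path).convert("RGB")).unsqueeze(0))[0].numpy()

paths = ["images/obj001.jpg", "images/obj002.jpg", "images/obj003.jpg"]  # placeholders
index = NearestNeighbors(metric="cosine").fit([embed(p) for p in paths])

# The two collection objects most visually similar to a query image.
_, idx = index.kneighbors([embed("images/query.jpg")], n_neighbors=2)
print([paths[i] for i in idx[0]])
```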

Unknown Speaker 44:40
So at Cooper Hewitt we're sort of in a pre-AI stage right now. I mean, we're starting with some very early explorations around the Hewitt sisters, sort of creating some semantic connections with story-driven content in our collection. But I think how we're really thinking about AI is at the level of: what is the system required to generate the kind of data that would make AI really valuable as a tool to make meaning about our collection, and about experiences people might have in the museum? And so when we think about this — if we think about our collection and our exhibitions as this database, which it actually is; like, literally, there is a database that has all that information. But also, if we think about it a little more theoretically, the museum itself sort of functions as a database, and every object in the museum — or, in the case of the historic building that we have, the Carnegie Mansion, every location in the museum — is itself potentially an object in that kind of theoretical database. So how do we attach richer metadata of varying kinds to all of those objects and locations, so that we're prepared for a time when we actually have the superpower spider AI bots crawling through that database and helping humans to make sense of it? Just generally, there are a couple of things I've been thinking about a lot. One of the most important things about AI is that there are a lot of tremendously useful use cases — I have to say, both the Met's and the National Gallery's applications, when I first heard about them, kind of blew my mind, and Casey and I spend a lot of time talking just about what the implications of those are. But if we step away from the really pragmatic use cases and think bigger about interactions with AI, also on a visitor basis: what is our opportunity to apply these kinds of narrative or metaphorical frameworks in thinking about what the capacity of AI is? To think more broadly about what it could do for us as a sense-making tool — not to replace human sense-making, but to augment it or to provoke it, because machine intelligence functions in a fundamentally different way from human intelligence. There's an opportunity to really create a collaborative approach that is driven by meaning-making, but that we also need to take a step back and prepare for: enriching the collections database, enriching the kind of data that we collect, and even expanding our definition of what metadata is — from a one- or two-word label, maybe, into an entire story; maybe an entire story, in rich text, about an object, actually, as metadata. So that's what we're thinking about.

Unknown Speaker 47:35
Okay, so we would like to open it to the floor. If you have any questions for any of these three projects, or the network —

Unknown Speaker 47:42
I have a question for the National Gallery. I understand the reasoning behind using the forecasting —

Unknown Speaker 47:47
Can you use my — perfect. So, for the National Gallery, the forecasting: using AI to forecast visitor attendance and so on. I also see an inherent kind of problem within that, where you're starting to work with popular artists, recognizable genres, and that perpetuates, you know, the same status quo of what's been shown to people. And that then perpetuates acquisitions, because museums can't afford to buy work anymore — they're going to collectors, board members. So we're essentially using that data to perpetuate the same thing that we're now talking about undoing in the museum, right? I'm just wondering how you are trying to tackle that, or where that ties in, so that we don't basically end up reinventing the wheel and reinforcing the same system we've had in place for 500 years, right?

Unknown Speaker 48:54
Yeah, definitely. So we have a really strong exhibition strategy that has free exhibitions as well as paid exhibitions, and the exhibitions are determined before we get to the forecasting level. What we use forecasting for is to think about: is this a show that could work as a paid exhibition, or should it be a free exhibition? And where could it be in the gallery in terms of capacity? The way our exhibition strategy is structured is on a yearly basis, so there isn't a need for every exhibition to sell X number of tickets; it's over the course of the year that we are hoping to attract this many visitors, whether through one really large exhibition alongside a bunch of, you know, lesser-known artists that we're showing for more educational or scholarly reasons. And I think that has worked really well to help combat that idea of perpetuating — like, let's just do Monet every year for the rest of our lives.

Unknown Speaker 50:04
Marketing dollars are tied to your research, right? So ultimately the artists that are in those paid exhibitions are going to get more marketing dollars, and therefore will be promoted more than the research or education shows. And so —

Unknown Speaker 50:17
in many ways it does. Because basically, we're

Unknown Speaker 50:20
throwing money at something that's established as opposed to really bringing in audiences and engaging them with new things that they might not be familiar with, and marketing those things.

Unknown Speaker 50:32
Predictions as provocations, yeah. So I think this is a really good question, and it's one we've talked a lot about in the network. The way that I got my head around it — having the same concerns — is thinking about predictive analytics as a provocation. So if you have a curator that has a show that's not going to be popular — it's the niche artist, it's the show they've always wanted to have all their life — that show will be commissioned before predictive analytics are used. But where predictive analytics serve a really useful purpose is diversity. If you're able to have the stats that say: look, this niche exhibition is really important, because we're a research institution and this is an artist that we think deserves attention — but, by the way, everyone that's coming is going to be over 60 and white — then: how do we change that? Okay, well, these are some options. We might need to use different forms of marketing; we might need to have different conversations; we might need to change the name of the exhibition. So actually, for me, predictive analytics is about developing diverse audiences before the exhibition, because what normally happens is that it's in the evaluation afterwards that we go: oh, that niche artist that that curator has wanted an exhibition on for 40 years — no one came. So I think it's not about changing which exhibitions get commissioned; it's about raising and having strategic conversations about those exhibitions before the audience comes in the door. Casey, tell me if I've completely butchered your work.

Unknown Speaker 52:12
That's right. And we like to say that we want to be wrong. So: this is who's coming if you do nothing. The idea is that once the show is going on, if we have indeed attracted a different audience, we can use that as a way to know that our marketing strategies are working — that if we had continued the status quo, we wouldn't have gotten those audiences.

Unknown Speaker 52:42
Just bringing you the mic — I think you're —

Unknown Speaker 52:51
Looking specifically at machine translation of educational content and extraction of metadata: do you think there's maybe an opportunity, in the same way we say "this is organic food" or "this is GMO food", to start putting content out there labeled that way? Because the way I look at it, specifically for educational content: if you're only engaging your English-speaking population, how many hundreds of thousands or millions of people per year are missing out on any level of content in the language they speak? For every year that goes by while you wait for the technology to be 90% or 99% acceptable to a human curator — or whatever standards you're setting, which are completely understandable — isn't that millions upon millions of images that maybe aren't seen, or content that isn't shared? So what is the threshold for AI, in that context, to be acceptable — if we use that term — to institutions like the Met or the Smithsonian? What is that threshold? What does it look like?

Unknown Speaker 54:00
Well, we're very interested in exploring real-time translation. We haven't actually gotten to a place yet where we have specific use cases developed, but it's definitely something we will be exploring, for sure, in the next 12 months or so. And to be honest, we don't know yet what the threshold is. Especially being a part of the Smithsonian, so much of this for us is really a negotiation with all of the various powers that be, to make sure that not only are we doing this ethically, but that we're also inside the standards the Smithsonian has set — which are rigorous, and rightly so. So, at least for our part, we agree — we're super interested in exploring the potential of that. I think the pathway to get there, and what the actual application is, is still not clear.

Unknown Speaker 54:49
For the Met: we're working a lot with the wiki community, and we are trying to get as many of our records onto Wikidata as we can — we have about 200,000 records on there. Once it's on Wikidata, Wikidata is sort of language-independent, and it does essentially get translated, and it's much more accessible. So that's been our strategy so far. And in terms of machine learning tags, we aren't using them because they're just not at the level we need yet — that's what we hope to do in the future. But we're looking at Wikidata to expand our reach and meet new audiences there.

Unknown Speaker 55:39
Questions? Anybody want the microphone? Hi, I'm just curious —

Unknown Speaker 55:51
— for the Met, about the Kaggle competition: what was the duration of that competition? It was three months. Three months — great, thanks.

Unknown Speaker 56:05
Maybe one more question — time for one last question, a last word. Anyone? No? Okay. So thank you for coming.