Unknown Speaker 00:00
Okay, hi, everyone. Welcome to Curator Brain Meets AI Brain. I'm Nathan Adkisson, and I am your emcee for this session. To quickly introduce myself, I work at Local Projects, a design firm in New York City, and I'm on the board of directors of MCN. I'm just starting my third year, and it's been a fantastic experience; I recommend it to everyone. You probably know this, but if you're brand new to MCN, MCN is a nonprofit, volunteer-run organization that is committed to growing the digital capacity of museum professionals. MCN has a deep, active, and awesome community that's engaged 365 days a year in conversations, webinars, and resource sharing. So I hope you all stay active and involved beyond our extended virtual three-week conference this year. If you become a member, you can join the special interest groups, which we call SIGs, participate in the mentorship program, and help us all shape MCN's future. So if you're not already a member, I hope you join us. And I would love to thank Microsoft, our Registration Assistance Fund sponsor; Axiell, the Ignite sponsor; and all the other sponsors. Moving on to today's session: this is a Q&A, so I hope you have all watched the pre-recorded presentation. We'll be using the chat box for questions, so you can lob a question my way one of two ways. You can put your name in the chat box, and I will call on you, and you can ask your question that way. Or, if you just want to go ahead and type it, I will work through the questions in order. And with that, I will pass it over to our panelists and let them introduce themselves.
Unknown Speaker 02:09
Excellent. I guess I'm up first in the introduction line. So thank you, everyone, for attending this virtual Q&A; we appreciate it. My name is Matt Elliott, and I work at The Henry Ford. I'm the head of creative and digital experience. So any questions around the Henry Ford side, the design, and that process of how we worked hand in hand with Bluecadet on the project, I'll be happy to field.
Unknown Speaker 02:42
And I'm Ellice Engdahl. I am also at The Henry Ford; I am our digital collections and content manager. I oversee our collections digitization process, and I'm our SME for collections content and data. On this project, I was also coordinating a lot of the content work that the curators were doing. So I'm happy to answer any questions about our collections data (the more technical questions should go to Ben), or how that integrated with our collection stories and our work with the curators.
Unknown Speaker 03:20
Hi, everybody. I'm Brett Renfer, creative director at Bluecadet. I've led the creative strategy on most of the work we've done with The Henry Ford; we've been working together for, I don't know, Matt, four years now or something like that. I feel like I increase it every time we're at MCN. So I'm happy to answer any questions around the creative intent, what we're excited about for AI now and in the future, or any questions around the interaction, in concert with Ben; Ben and I did a lot of the prototyping you saw in the video. I'm just excited to talk with everybody. There's a lot to talk about.
Unknown Speaker 04:00
Hi, I'm Ben, the tech director at Bluecadet in the New York office. And like Brett was saying, I was also working on the table, both on the machine learning side and on the architectural side, trying to integrate all the data and make all the pieces fit together. So I'd be excited to talk about anything in the weeds on the technical or data-pipeline side, or really just the project; if you have any other questions related to that, I'm happy to answer them. It's a pleasure to be here.
Unknown Speaker 04:33
All right. Do you want to give any kind of intro to the project or should we dive in with some questions?
Unknown Speaker 04:43
I can give a quick intro while we're waiting for people to queue up questions. I like to talk over a slide because it makes me more comfortable, so I'm just going to share this GIF here. You know, I think for us, really, we spent a lot of time, and you can see it in the talk, trying to take a lot of data, a lot of stories, a lot of connections, and this really amazing, diverse, kind of crazy collection at The Henry Ford, and distill it down to something that was technically powerful and experientially playful and simple. And this is, like, the craziest thing you can do with it, which is make this endless world of connections that spans most of the digital collection at The Henry Ford. But most of it is spent making and breaking and tapping and playing, and AI is a piece of that puzzle, but it's really the curator stories. I know a lot of people, if you've worked on digital experiences at museums, you always have all these aspirations and ideas for making an experience that, once you put pen to paper, becomes so complicated and crazy. And the story that we told in the presentation is about whittling down and finding a focus and finding a way to stay simple, in a way. So I think that's a piece we're really excited about with where we landed here. But it was not a linear process, as everyone, I'm sure, understands. And I think for us, we do feel like it was the tip of the iceberg as far as using AI as a piece of an experience like this. But we strongly believe that now, in a year, in a couple of years, it's still imperative that this is a human and collection and narrative driven story rather than a core AI story, because The Henry Ford remains not an AI museum.
Unknown Speaker 06:44
Great. I actually have a question. In your talk, you use a quote that I love, which can kind of be a brief, I think, for all of us and all of our work: to amplify the best of technology and the best of humanity. So I'd like to know, for this experience specifically, what do you think those were? Meaning, what do you think was the best use of technology, and what was the best of humanity?
Unknown Speaker 07:18
Ben, you first?
Unknown Speaker 07:20
Sure, sounds good. It's always a little bit of a hands-up moment. That's a great question, and I think there are so many different facets to it that I probably won't be able to answer it right away. But for me, being the tech director, I think one of the exciting things about AI in the beginning, and this was a very naive way to go into it, was thinking you just feed it a bunch of data, and it somehow magically sorts it and provides you with something meaningful. And that was just not true. So it took us a while to actually chisel away and figure out what the most interesting thing AI can do is. And I see we're already getting one question from David going into a similar realm: that conflict of really trying to figure out what's valuable about applying AI, and what we want to retain about the curatorial process. One thing that we did realize is that a lot of what we were trying to do with AI was replicating these curatorial processes, whereas what it was actually really good at, from our point of view, was surfacing new connections that didn't necessarily have a historical context, but allowed us to really branch out and deal with a lot more data than humans actually could. For example, if you look at the ratio of the number of collection objects we have on the table versus the ones that are manually curated to have a narrative that's meaningful in the historical context, or in the way the museum collected them, that's fairly small. But they kind of go hand in hand. Just because there are more connections that the AI can create doesn't necessarily mean they're more meaningful, but it does create this nice balance where you can oscillate between the vastness of the collection and all these interesting small stories.
So from my perspective, that's where both of those really started to shine on this table experience: the breadth and the depth.
Unknown Speaker 09:34
Um, go ahead, Ellice.
Unknown Speaker 09:35
I was going to follow on with that, and I think this might get to David's question a little bit too. The thing that we really had from the start, and then kind of meandered and tried some different things and then came back around to, is that we do have this really broad collection. Again, for people who watched the presentation, you saw the slide: we have a historical village with 80 buildings in it, we have trains, we have cars, we have design, we have glass artwork; pretty much American history is what we have. It's just this insanely broad collection. So a question we get a lot is, well, why would the Henry Ford Museum of American Innovation have insert-name-of-thing-here? So I think, to me, that's kind of the best of humanity. We wanted to sort of explain that without a didactic paragraph that tells you why all these things go together. Because we honestly have those conversations internally as well: why does this thing fit into the collection? As a 90-year-old collection, things accrete over time. So that's one output that everybody's really pleased about: showing how our collections really do fit together, even though they seem so disparate when you look at them.
Unknown Speaker 11:00
And I think some of our initial concepts around having AI were that, you know, we would try to do, like, a 50/50 split, right? Like, in the table experience, we want to tell these curated stories, but we also want to see what stories AI can tell. And as Ben alluded to earlier, obviously, that wasn't the case. So as we figured out how AI plays into this, we knew that our curatorial angle would be strong and solid. And in talking to the curators, we were able to walk them through the experience as we went through it. You know, first the approach was, hey, we're thinking it's this 50/50 thing, and then the more we dug in and were transparent with them in the process, we knew that wouldn't be the case. It would have these amazing and magical AI and ML moments, but a lot of it is also driven by the fact that our curators put all of this stuff together, and then we have these really great, amazing threads that we could pull further using AI.
Unknown Speaker 12:20
Yeah, I think the only thing I have to add is that we knew from the start that we wanted it to be highly interactive and let people play a part in that story. And that was a great challenge to all the really cool AI-based visualizations we made, which were not interactive, right? They just look cool, and you don't get to that depth; you kind of barely even get the breadth story. One thing I was thinking about is, there's this electronic musician from the 70s, Laurie Spiegel, who would take issue with me even calling her an electronic musician, because she talked about how she made folk music. She's like, I make folk music; I just happen to use synthesizers. So she's all about: it's not about the tool, it's about the output. And I think that was us getting past the shininess and excitement of the technology and getting to, great, this is a tool to augment this bigger story we're trying to tell that can only be told by people.
Unknown Speaker 13:17
That's great. I think wrapped into that were three or four answers to David's question. So let's move on to this one from Kelsey. She says the interface you came up with is really beautiful and feels really artistic, and I totally agree. So how did you balance the aesthetic design with the collection content and functionality, and were there any concerns it would be more fun to play with versus purely educational?
Unknown Speaker 13:48
I can start, and then, Matt, if you want to jump in as well. I think there are a few pieces to this one. We had the luxury of this being part of a suite of experiences, both here and throughout the museum, that we've been working on together. So we were able to say, hey, we're doing some areas that are pretty heavily didactic, some that are more of a mix of a physical exhibit with linear media, some things that are really doing more telling than showing, and so this is the place to really dial up the fun. But because it was so much about starting with one object and exploding out to the collection, seeing all the different ways this could go, we wanted there to be no wrong way to experience it. So it works if you tap, if you drag your hand, if you smash all your fingers down. And that was before we even really knew where it was going; this was an experiential principle: there's no wrong way to interact with this. And because it's so much about showcasing the beautiful mess that Ellice is talking about, we felt like we needed to lean into that. So it was never a worry that it was going to be too fun, because we were allowed to be too fun, and we felt like everything we were doing was pushing toward that bigger story. And then all the other aesthetic and interactive flourishes you see are just guiding you. No matter what you do, no matter your level of ability, you're getting some kind of reward for your interaction. So if you drag your hand along, and maybe you can't read, or you don't want to dive in, you just get this beautiful color that's attracting and rewarding you, giving you this kind of complement to the objects and connections you're uncovering.
Unknown Speaker 15:30
I don't know if my answer is quite as robust as Brett's, but on the Henry Ford side, we were all in on this being fun. We knew the content was going to be cool; we had seen some early ideas from Bluecadet around design. But we were definitely all in on this just being a fun, interactive experience that you could participate in easily on that fun level, or you could hang out and dive in and get more and more into the experience. I hope that answers your question.
Unknown Speaker 16:04
And just one flavor to add to that: all that data was there, and actually a lot of it was presented on the table, so there are still opportunities for visitors to drill down. I think we even went so far, Ellice, as to add some object keys, right, that very advanced users might be able to use to refer to collection items. So it's sort of a way for us to focus on the playful aspect to lure people in, and then let people dive deeper and deeper as they like.
Unknown Speaker 16:38
Ben referenced in the beginning the number of objects that are curated by actual curators versus the number that are just there. It's something like under 800 that actually have those connections, and then there are 25,000-plus on the table. So that kind of shows you the scale: the AI can handle 25,000, but the curators could handle 800, which again gets back to, like, what's the value of each.
Unknown Speaker 17:08
On the subject of data, actually, Rigoberto asks what kind of data was built into your AI. I think you did have a slide about this, but maybe someone can just recap it briefly. And I'm interested in knowing how you excluded the data that you decided not to use, or maybe it was the other way around: the data that really attracted you, where you said, okay, this is where it's at, we're going to focus on this data.
Unknown Speaker 17:35
Let's see, Ellice, what do you want to take on? Or do you want me to go a little first? Sure, yeah. So there was a lot of debate about this. First of all, what you can see on the table are two sorts of lenses into the collection. One is, you start with an object, and then you land at another object, and in between, it shows you objects that are visually similar. So if you have a chair and you have a tractor, it'll show you objects that sort of morph from the chair slowly into the tractor, and that builds a visual connection through the collection. That data is all based on the images that are coming in. And then we added another, much more playful and superficial lens, which is just based on color: you start with something red and end at a blue object, and you have that gradient of colors. That actually sounds super simple, but there was a lot of debate about what we use for these visual threads, because not all artifacts are visually that easy to decipher. So there was a lot of debate with Ellice and the team. Do we include photographs? Do we include 2D objects in general? There are a lot of beautiful design objects that are prints and textiles and things like that, but they don't necessarily read as well in these kinds of scenarios. So that was a careful conversation that happened over, I want to say, even months, Ellice, right? To iterate and see what yields interesting results in the delivered experience, but also what maintains the authenticity of the collection.
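The visual-similarity threads Ben describes can be sketched in code. This is only an illustration, not the project's implementation: the table's actual model and feature extractor aren't specified in the talk, so random vectors stand in for per-image embeddings, and `visual_thread` is a hypothetical helper that walks the feature space from one object to another by picking the nearest unused neighbor at each interpolation step.

```python
import numpy as np

def visual_thread(embeddings, start, end, steps=5):
    """Return object indices that visually 'morph' from start to end.

    embeddings: (n, d) array of per-object image-feature vectors
    start, end: indices of the two anchor objects
    """
    path = [start]
    used = {start, end}
    # Walk interior points on the line between the two anchor embeddings.
    for t in np.linspace(0, 1, steps + 2)[1:-1]:
        target = (1 - t) * embeddings[start] + t * embeddings[end]
        dists = np.linalg.norm(embeddings - target, axis=1)
        for idx in np.argsort(dists):  # nearest object not already shown
            if idx not in used:
                path.append(int(idx))
                used.add(int(idx))
                break
    path.append(end)
    return path

# Toy demonstration with random stand-in embeddings (100 objects, 64 dims).
rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 64))
thread = visual_thread(emb, start=0, end=1)
```

The color lens mentioned above could reuse the same walk with a 3-dimensional color vector per object instead of a learned embedding.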
Unknown Speaker 19:22
Yeah, and to me, it's really interesting, too, because it wasn't just answer the question, dump the data, and then we're done. It was this back and forth. I'd be like, well, what if we did this, and then Ben would take it off and play with it and start tweaking algorithms, start tweaking search. So, you know, it was not acceptable for us to not have any black and white photos, just because, again, we're a 90-year-old institution; most of our photos are black and white, and most of what we have is photos. So it wasn't acceptable to exclude all of those, but then how do you make it so that not three quarters of what you see is black and white photos, but raise those when they need to be raised, and give them something interesting to do with the AI, instead of just, hey, a black and white photo looks an awful lot like this next black and white photo, which looks an awful lot like the next black and white photo. So it was this back and forth, and that's the interesting part. We have 100,000 objects digitized, and a quarter of them are on the table, because the other three quarters are mostly black and white photos. So there's still a sort of manual process. Basically, what we've been doing is: for all of our digitized objects, if it's a three-dimensional object, those are all on the table. But if it's two-dimensional, then it kind of depends. Does it have some color? Is it a black and white photo? Is it one of, like, 1,000 racing photos from the same race? Yeah, maybe we wouldn't include all of those. So there still is a little bit of a manual process to figure out which things are appropriate for this experience. It's not just dump it all in and let the AI mix it up.
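The kind of black-and-white triage Ellice describes could rest on a simple heuristic like the sketch below. This is an assumption-laden illustration, not The Henry Ford's actual pipeline: `is_effectively_grayscale` is a hypothetical check that flags an RGB image whose three channels barely differ, which a team could use as a first pass before any manual review.

```python
import numpy as np

def is_effectively_grayscale(img, tol=8.0):
    """Heuristic: an RGB image reads as black-and-white when its three
    channels barely differ. img is an (h, w, 3) uint8 array; tol is the
    mean per-pixel channel spread (on a 0-255 scale) below which we call
    the image grayscale."""
    img = img.astype(np.float64)
    spread = img.max(axis=2) - img.min(axis=2)  # per-pixel channel range
    return float(spread.mean()) < tol

# Toy check: a flat gray image versus a saturated red one.
gray = np.full((10, 10, 3), 128, dtype=np.uint8)
color = np.zeros((10, 10, 3), dtype=np.uint8)
color[..., 0] = 255  # pure red channel
```

Such a flag would only sort candidates; the judgment calls about which photos still belong on the table stay manual, as described above.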
Unknown Speaker 21:03
Yeah, and that last piece, I mean, that's what's so different about doing something for an in-museum experience that's part of this larger story about how the collection is interconnected. The decisions we made to kind of curate the AI, I guess, to continue to play off that term, were in the service of the experience in this space. I think we would do something different and more open-ended if this were an online digital collection, where you can tweak and tune. Here, you kind of have one shot to say, hey, we're nearing the end of this curator strand, or we're at this moment to provide a nice fork in the road for the end user; let's make this really beautiful, or really compelling, or really stretch across the collection in a way that's unexpected. Whereas if you're online, you're like, I don't like that result, I'm going to change this and change this, doing that kind of back and forth, and we might be able to give more raw results.
Unknown Speaker 21:55
And to that point, too, again, we have a really robust digital collections site. We didn't want to just have that on the floor; you already can have that on the floor with your phone, right? It's mobile-friendly. So this was a different thing. And maybe, again, David, this speaks to your recent question: unfortunately, we don't really have analytics, because we didn't get to open it, because there was this pandemic, and it's a touch experience. I know that's a thing we could talk about for half an hour just by itself. But I think the thought was always that this is not something people are going to want to stand there on their feet with and spend tons of time on, when they're in an 800,000-square-foot museum with many other interesting things to look at. If you really want to dig into our digital collections, online is the best place to do it. This was supposed to be a fun, light experience where you're not digging too deep. And this was another conversation we had, as far as what metadata to show: we didn't want people to be frustrated that they didn't understand what a thing was, but we also didn't necessarily want to say, here's every piece of knowledge that we have about each artifact.
Unknown Speaker 23:08
I'm scanning the questions here, and actually, Mitch is going to take us back to the beginning. What was the brief for this? And was AI built into it from the beginning, or did that come later?
Unknown Speaker 23:25
I'm trying to remember, Brett, I don't know. To be honest, I know that it's something we definitely had very early on as an idea. So, the Henry Ford Museum of American Innovation: in this larger project, we've designated this space to be the intersection of innovation. It's physically in the middle of the museum, and we've focused it on innovation. And speaking to the connectedness of our collection, as Ellice mentioned earlier: why do we have all these seemingly disparate things? How do they all connect, and what kinds of innovation stories do they tell? So we knew we wanted AI to be a part of that, because it's innovative, right? And we also knew that we didn't want to chase tech. You know, we say this over and over, and then you sometimes find yourself doing it anyway; I think early on, we did a little bit. But AI was definitely involved early on; we just didn't know how, and at what levels.
Unknown Speaker 24:35
Yeah, I think the AI was more of a hypothesis. It was a piece of, as Matt's describing, you know, we're at this kind of crossroads of the museum. So it's a chance to say, before you go into these different areas that are about a specific collection or a specific theme, here's this moment that tries to tie everything together. The other experiences around it are, some of them, more linear; it's like, let me explain this one connection to you and dive deep on, you know, prototypes across fashion, across design, across agriculture, etc. And this became this idea of, well, let's have one experience within this puzzle that is more open-ended, that says, hey, if this is about individual connections, here's a demonstration or a playground to see the entire collection connected all at once. That was our jumping-off point, and we said, well, we need to do something algorithmically to make this happen. We did a really early prototype where Ellice just sent me a huge cache of images, and we started playing with them to see if it was interesting. That was really the start of the process. We had a visualization made, actually the one that's in the presentation, made with AI, and we said, this is really interesting. It's not a cool interactive experience; it doesn't do what we want. But it is really provocative. And that was our beginning, to then take a huge step back into the underlying strategy, knowing that we had something compelling to start with, with a little machine learning and AI in the mix.
Unknown Speaker 26:03
All right, time flies; we're approaching the end of the session. But you knew you wouldn't be able to get out of this without a question about analytics. So we have one about comparing the analytics for deep dives versus the connections along the chain. How did that play out?
Unknown Speaker 26:22
Yeah, absolutely. I tried to get ahead of a couple of questions in the chat, and I'll just sort of paraphrase here: we do have that data, and we're hoping to collect a lot of it. But since the table was installed right before the lockdown, or actually finished install on the first day of lockdown, when the museum was closed, we haven't been able to collect meaningful data. Our hypothesis is that the majority of people are going to be using this on a shallow basis, and the deeper dive is going to be for the folks who really want to drill down and might be looking for very specific object info.
Unknown Speaker 27:03
Great. Before we wrap up: Brett, in the talk, I think you gave some kind of mysterious mention of something about a donut, right? Can you tell us what that's about?
Unknown Speaker 27:17
Ellice can probably do a better job of telling you, but it was an object that we kept coming back to, even in the curator-generated maps. How old is it? Is it a 200-year-old donut that's in the collection? Is that correct, Ellice? Or 150? About 100, I think. So it's a 100-year-old donut that was donated in memoriam of a person. And so we had a lot of these connections about death, connections about humanity, and a lot of roads lead to the donut, believe it or not.
Unknown Speaker 27:49
Wow, this is a food donut? This is a food product?
Unknown Speaker 27:52
It is. And I think, just for me, I would never guess that, right? So there's more meaning than just an image or tags with this one.
Unknown Speaker 28:05
I just put a link to the donut in the chat if you want to check it out. It's actually a really touching story, so it's worth reading the little blurb on it.
Unknown Speaker 28:15
I mean, I think that's maybe one of the best examples of what can happen with AI that's unexpected. It shows you something that you maybe didn't know was there. It suggests that there's a deeper story to dig into. So really, I think donuts are a great place to end. Thanks, everybody, for tuning in. I don't want to keep you late, but thanks to all of our presenters. So go forth and AI.
Unknown Speaker 28:44
Thank you. Thanks, everybody. Yeah, thank you.