Unknown Speaker 00:00
Thank you very much for coming to our talk, which we have titled "Don't Burn Out: Effectively Using Existing Platforms for In-Gallery Experiences." Over the next 40 minutes or so we will discuss a project we've been working on at the Adler Planetarium in Chicago, Illinois, called Mapping Historic Skies. Many of us in the museum world have experienced putting projects together with, shall we say, limited resources. This has been a frequent topic over the past few days, and I believe it will continue into this afternoon's and tomorrow's sessions. So we really hope that by sharing this experience, we can show how it's possible to create high-impact projects on a low budget through departmental collaboration, use of open-source tech, and a general willingness to adapt our goals. Though many people have worked with us on the project, this is our core team. My name is Sam Blickhan; I'm a postdoctoral fellow and the digital humanities lead for zooniverse.org. Jessica is the Adler's digital collections and asset access manager. Michael is the Adler's guest experience project manager. And Becky Rother, who isn't with us here today but helped put this presentation together, is the lead designer for the Zooniverse, which I will discuss momentarily.
Unknown Speaker 01:23
All right, so we're gonna jump in. In 2014, the Adler's collections team proposed an early version of a digital project to the NEH for funding; it was called Digital Historic Skies. The goal was to create an interactive mobile app that would teach the general public about art, history, science, and cultures throughout the world through the comparison of historic celestial maps and images of the current night sky. The ultimate goal was to create an app for smartphones that would use GPS to pull associated images from the Adler's celestial cartography collection, which would allow users to look at any region of the sky and easily access the relevant historical and cultural constellation depictions. So the idea was that you would hold up your phone, you would see Orion, and then you could swipe through and see how it had been depicted in Chinese, Islamic, or Western cultures throughout time. Which seemed really cool, until they realized they didn't know how to do that. The Digital Historic Skies proposal did receive funding from the NEH for a prototype; research was done and it resulted in a white paper. But as you can guess, we do not have that application. The white paper offered suggestions for continued development toward the project goal, and it set in motion a lot of the actions that ultimately formed the current effort I was tasked with. Simultaneous to that project, we also received funding for something called the Celestial Cartography Digitization Project, a multi-year digitization project that was actually going to create the assets for the application. Running those simultaneously also probably was not our best idea. The collections team members at the Adler were able to digitize the institution's collection of about 4,000 historical constellation depictions, covering about 600 years and about 15 different cultures.
When I joined the Adler, we had six months left on the grant term and about 1,000 objects left to do, so the burnout was already happening. The initial concept was to use these images to show how the constellations we know today, in a very Western aspect, have been depicted over time and have changed across cultures. The biggest challenge, though, was just the sheer size of our dataset. With only one curator and myself as the only digital person at the Adler, an undertaking of this size would probably have taken us years to turn into that application. So the project stalled due to the need for restructuring in order to make the goals achievable for such a small team. For example, the original proposal did not foresee the amount of time needed to comb through the actual celestial cartography collection; we didn't even know which objects had constellation depictions in them, so we were figuring that out while we were digitizing them. And not only did each image need to be cataloged, but each individual constellation within the images needed to be identified, cropped out, associated with the original digital image, and formatted for the app. This is still the hope; we still want this app one day because we think it would be really cool. But we've had to reimagine it and scale back to what we can accomplish in house on a $0 budget. Our focus is now on a smaller-scale, achievable goal, which can then be used to support a longer-term project that could possibly one day evolve into the original goals of Digital Historic Skies. Our new project is called Mapping Historic Skies, and that's what we're here to talk about. The end goal is no longer the app, but a database that would hold all of these constellation images, each associated with its constellation name.
So you could have somebody browse across our collection and see how these things have changed, and how everyone has always looked up and seen something in the stars. The images housed in the database are being created with the help of the Zooniverse platform, so I don't have to go through 4,000 images and crop out every constellation myself; there are about 80 constellations across those 4,000 images, so that would be quite a lot for me to crop through. The interactive workflows on the Zooniverse break this down into something that people who don't have a lot of astronomy knowledge can still participate in. The intended audience for this interactive is visitors to the Adler's new Chicago's Night Sky exhibit, which opens in two weeks, as well as online participants from all over the world who use the Zooniverse platform. And although these two groups of volunteers have different options of tasks they can participate in, over time the resulting identifications will be added to a database of constellations, meaning that online volunteers as well as our on-site visitors are contributing to real research and are really co-creators of this repository of public knowledge. The creation of the database is the next project phase after what we're going to talk about today.
Unknown Speaker 06:26
But that's fine. So, as promised, a quick introduction to the Zooniverse. Are any of you here actually familiar with the Zooniverse? Totally fine if not. The Zooniverse is the world's largest platform for online crowdsourced research. It was founded in 2007 with a single project, Galaxy Zoo, which invited members of the public to help classify a dataset of a million galaxy images based on visual characteristics. Since then, it has grown to a platform with more than 1.9 million registered volunteers, and in the roughly 12 years since, we've launched more than 190 projects across the disciplines, including astronomy, ecology, climate science, biomedical research, history, and literature. You can explore all of our current projects by visiting the URL that's currently on the screen, which is zooniverse.org.
Unknown Speaker 07:52
Sorry about that, everyone. I'm going to tell you a bit more about how the Zooniverse works, because you don't actually need the image on the slide for me to convey this information. The way the Zooniverse works is through what we call people-powered research. That is basically a scenario in which volunteers come to the website, choose a project, and then perform a task or series of tasks on a dataset with a specific research goal. These tasks can include things like, as I said with Galaxy Zoo, classifying images of galaxies based on visual characteristics; identifying animals in camera trap images; or transcribing historical handwritten documents. Almost half a billion (that's billion with a B) classifications have been made in the decade-plus that the Zooniverse has existed as a platform, and the resulting data from these projects have been used in more than 150 peer-reviewed publications. In the first six or so years that the platform existed, all of our projects were built in house, meaning we were only able to launch about seven projects per year. But in 2015, we launched the Zooniverse Project Builder, which is a web-based application that allows anyone to build and run a crowdsourcing project on our platform free of charge.
Unknown Speaker 10:21
There we go. There's my incredibly effective graph. As you can see here, the availability of this tool has affected the rate of project launches on the platform quite significantly. It's a graph of project launches over the years, and there is a very steep incline just after 2015, which is when the Project Builder was launched. For the past two and a half years, we've been averaging one new project launch per week. And this is what the basic Project Builder interface looks like. It's a plug-and-play app, so you don't have to set anything up; you just go to the website, and once you have a Zooniverse account, for which all you need is an email address (you don't have to give any more information than that), you are set to start building a project. It doesn't require any experience with web development or coding. You just need a computer with internet access, a dataset, and a research question. And I think the current record among our team members for building a complete project is something like five minutes.
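To make "a dataset" a little more concrete: a Project Builder project ingests its subjects (images) along with a spreadsheet of metadata, usually a CSV manifest uploaded alongside the files. The sketch below is only a hypothetical illustration of that preparation step; the column names are invented, so check the Project Builder's own help pages for the exact conventions your project needs.

```python
# Hypothetical sketch of preparing a subject manifest for a Zooniverse
# project: a CSV listing each image plus its metadata. Column names here
# are invented for illustration, not the platform's required fields.

import csv

subjects = [
    {"filename": "atlas_plate_01.jpg", "creator": "Hevelius", "year": 1690},
    {"filename": "atlas_plate_02.jpg", "creator": "Bode", "year": 1801},
]

with open("manifest.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["filename", "creator", "year"])
    writer.writeheader()
    writer.writerows(subjects)
```

You would then upload the images and this manifest together when creating a subject set in the Project Builder interface.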
Unknown Speaker 11:34
All right, so I forgot about that one. Before starting the design and development process, the collections team, the Zooniverse team, and our guest experience team, which is pretty much this panel, worked together to create internal project objectives to direct the development of the in-museum interactive. We knew we had $0, we knew we had a tight timeline, and we knew that having no goals wasn't going to end well for us. When the original Digital Historic Skies project was created in 2014, very little thought was given to project objectives; it was all about the end product. So as we created Mapping Historic Skies, we were more cognizant about aligning our goals with the Adler's mission and vision, resulting in the main internal goals listed here: collaboration, prototyping with visitors, and interactivity with collections. We're focusing less on a single end product this time, and instead on the experience for our guests.
Unknown Speaker 12:33
So based on that information, we had to ask ourselves how to build a project in line with these departmental objectives while also being aware of, and working toward, the specific project objectives, including the creation of this database and the ways in which researchers might be able to use the resulting images to explore, to create, etc. The answer, on the Zooniverse platform, is by breaking certain of these objectives down into specific workflows. On the platform, you can create as many workflows as you like within a single project based on those goals, so you can have concurrent paths with different tasks on a single project.
Unknown Speaker 13:18
And that was important for us internally as well, because it makes things a little easier to manage when it's a team of one person uploading all of these images into a database. As we started this project, it became clear where the previous iteration had failed: many examples in the Adler collection feature multiple constellations on them. So if you're trying to upload an image into an app so someone can see Sagittarius, but there are also four other constellations in there, how does somebody see how that one constellation changed? That makes it difficult for a team of one staff member to show each of these individual constellations. As you can see here, there's Sagittarius, the Southern Crown, the Microscope, and the Telescope. If we place these online and ask people to identify what constellation they're looking at, they are technically going to be right even if they provide four different answers, but any single answer is incomplete, because there are four constellations here. So when we were looking at workflows on the Zooniverse, we realized we couldn't go straight to crowdsourcing by asking people what they're looking at, because there's more than one thing here. We decided to first implement different workflows, as Sam mentioned, and the first one is cropping these out. But before we came to that decision, we originally came up with the idea of a very easy, entry-level workflow that was going to be sorting images. We thought this would save me time, because previous Zooniverse research has shown that people are really only willing to crop out between four and five separate boxes before it gets very overwhelming, and some of our images range from 2 to 37 constellations in a single map. Not only are the constellations on a 37-constellation map extremely small, to the point that you cannot physically draw the boxes, but people weren't going to be willing to do that many. So I was going to have to crop those into smaller sets.
The problem was that I don't necessarily have the time to crop all of those into smaller sets. We thought this would be an easy workflow, because all we were asking people to do was say whether there were more than four constellations in an image. If they said yes, it would prompt me to crop that image down; if they said no, it would go straight to the segmentation workflow. In the end, we tested it on the floor during open hours and during some events, and we realized it was not a very impactful way to connect with our collections. People didn't know what they were looking at, and they ran out of steam very quickly with just saying yes or no. Ultimately, it was pretty unnecessary, because in order to get these onto the Zooniverse platform to present like this, I had to resize all of them anyway, so I was already going into them, and I can tell if there are more than four or five. It was just giving our guests menial work that didn't actually let them make any impact on the collection. So we ended up scrapping this workflow and moving straight into the segmentation workflow, with me cropping the images into smaller groups of four or five as we went. In this workflow, participants are invited to draw boxes around individual constellations. As was the case with the yes/no workflow, the boxes are aggregated later to give us coordinate data for where our guests are marking out these constellation depictions. The shape is always a box; we can't do something like a lasso tool, and a circle wasn't going to work, so we stuck with a box. But we've seen some complications arise as to overlap: guests are very uncomfortable with the idea that their boxes are touching and sometimes overlapping, and when you squeeze 37 constellations into a single Southern Hemisphere map, they almost all touch. Overall, though, it's a good first step to getting individual constellations for the identification workflow.
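To give a sense of what "aggregated later" involves: many volunteers draw slightly different boxes around the same constellation, and those need to be reduced to one consensus crop per constellation. The Zooniverse provides its own aggregation tooling, so the following is only a hypothetical sketch of the general idea (greedy clustering of overlapping boxes, then averaging each cluster), not the project's actual pipeline.

```python
# Hypothetical sketch of reducing volunteers' drawn boxes to consensus
# boxes. Not the Zooniverse's real aggregation code; just the idea.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    ix = max(0, min(ax2, bx2) - max(a[0], b[0]))
    iy = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def aggregate_boxes(boxes, threshold=0.5):
    """Greedily cluster boxes that overlap enough, then average each cluster."""
    clusters = []
    for box in boxes:
        for cluster in clusters:
            if iou(box, cluster[0]) >= threshold:
                cluster.append(box)
                break
        else:
            clusters.append([box])
    return [
        tuple(sum(vals) / len(vals) for vals in zip(*cluster))
        for cluster in clusters
    ]
```

Two volunteers marking roughly the same constellation produce one averaged box, while a box drawn elsewhere on the map stays its own cluster.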
Unknown Speaker 17:10
And so this is our identification workflow, and it's set up very much like a Buzzfeed quiz; we've learned people are pretty happy with that. Once the multiple-constellation images are cropped down to this individual size, they get sent on to the individual constellation workflow, which is only on zooniverse.org. Unfortunately, it is not available on the app. So this is kind of a nice take-home project for people: if they've enjoyed doing the cropping workflow on site in the interactive, they can go online and start to identify some of the things they've cropped out. Volunteers are shown an image of a single constellation and are asked to name the constellation if they know it, which lets them bypass the questions. If they don't know it, though, and have zero knowledge of astronomy, we offer questions ranging from very basic down to very specific. So they start with: does this constellation represent a human, an animal, or an object? From there, the questions get a little more specific, and they finally result in a recommendation of either a single constellation or maybe two or three that tend to resemble each other, and the volunteer picks the best option from there. Again, these responses are aggregated and then shared with me so that I can see how people are tagging these assets.
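The branching "Buzzfeed quiz" structure described above can be modeled as a small decision tree: each answer either leads to another question or to a short list of candidate constellations. The real workflow is configured in the Zooniverse Project Builder rather than coded by hand, and the questions and candidate lists below are invented for illustration, not the project's actual ones.

```python
# Hypothetical sketch of a branching identification workflow.
# Questions and candidates are illustrative only.

QUESTION_TREE = {
    "question": "Does this constellation represent a human, an animal, or an object?",
    "answers": {
        "human": {
            "question": "Is the figure holding a weapon?",
            "answers": {
                "yes": ["Orion", "Hercules"],  # candidates shown to the volunteer
                "no": ["Virgo"],
            },
        },
        "animal": {
            "question": "Does the animal have wings?",
            "answers": {
                "yes": ["Cygnus", "Aquila"],
                "no": ["Ursa Major"],
            },
        },
        "object": ["Lyra", "Libra"],
    },
}

def walk(tree, choices):
    """Follow a volunteer's answers down the tree to a candidate list."""
    node = tree
    for choice in choices:
        node = node["answers"][choice]
        if isinstance(node, list):  # reached a recommendation
            return node
    return node
```

A volunteer who answers "animal" and then "yes" ends at a short list of winged-animal candidates and picks the best match from there.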
Unknown Speaker 18:29
So, just to unpack the creation of these different workflows a little more. One of the great things about the Project Builder as a tool is that it's built for iteration. You're meant to experiment, to try different things out and make changes as you go based on that experimentation, and it's very easy to edit and update the things that you're building within the Project Builder interface. As Jessica noted, for example, we abandoned the image-sorting workflow after the testing period, based on volunteer response and the necessity of the task. Similarly, we realized that the segmentation and constellation identification workflows had some specificity regarding the context in which they were used: this idea of having an experience in an exhibit versus at home on your personal computer. More specifically, the Project Builder tools we needed to set up the decision-tree workflow that Jessica just described, and the image identification, weren't available on the Zooniverse mobile app, and the mobile app was the basis for the in-exhibit workflow, which runs on an iPad touchscreen. Basically, what we did was take the existing Zooniverse mobile app and make some very, very minor changes in order to create what we are now referring to as museum mode. So, based on the multiple experiences we had with testing, and also on running into walls in terms of app development and availability of tools, we made the decision to keep the identification workflow as an online-only option rather than have it be part of the exhibit. The distinction between the desktop version of the project and this museum mode also brought some new issues to light, particularly in regard to the resources that are available for volunteers through the project's desktop interface.
The desktop version of every project includes message boards, for example, where volunteers can communicate with one another, as well as with members of the research team if they have questions about the project they're participating in. We also want to acknowledge that when you're working on a project at home, you have in-browser search capacity that you wouldn't necessarily have at an interactive in an exhibit. Museum mode has resources like a tutorial and a field guide, but no capability for online search, nor the opportunity to communicate with other volunteers as one would be able to do on a message board. So we really had to think about how to help ensure that our volunteers would be confident in the tasks they were completing, and one way we did this is through exhibit signage, which Michael will talk more about momentarily. The final result for this exhibit is museum mode, a lightly adapted version of the mobile app. We designed the format for widespread use in museums, particularly for institutions that might wish to have in-gallery interactive projects but are concerned about the idea of having a browser-based application within an exhibit. Museum mode essentially removes the risk of an exhibit guest navigating away from the project and checking their email or going on other websites or whatever. When using this version of the app, guests can only access the workflows that you've set as accessible. However, those same workflows are still available on the desktop version of the site, so there's an opportunity for guests to continue to participate in projects from home or from their mobile devices as well. If you're at all interested in learning more about this, or in potentially adapting something that you're doing within your institution to make use of museum mode, please do reach out to us; we'd love to hear from you.
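Conceptually, museum mode is a whitelist: the kiosk build of the app only exposes the workflows the team has marked as accessible, while the desktop site continues to serve every active workflow. The sketch below is purely a hypothetical illustration of that design choice; the field names are invented and this is not the Zooniverse app's actual code.

```python
# Hypothetical illustration of the museum-mode idea: a kiosk client
# filters workflows down to an admin-chosen whitelist, while the
# desktop client shows everything active. Not real Zooniverse code.

ALL_WORKFLOWS = [
    {"id": "segmentation", "active": True},
    {"id": "identification", "active": True},  # online-only decision tree
    {"id": "image-sorting", "active": False},  # the scrapped yes/no workflow
]

MUSEUM_WHITELIST = {"segmentation"}

def visible_workflows(workflows, museum_mode):
    """Return the workflows a given client should offer."""
    active = [w for w in workflows if w["active"]]
    if museum_mode:
        return [w for w in active if w["id"] in MUSEUM_WHITELIST]
    return active
```

The same project data backs both clients; only the in-gallery view is restricted, which is why on-site guests can later pick up the remaining workflows from home.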
With the caveat that this is very much still a beta version of the tool. There's an email address up here on the slide that you probably can't see, but the email address is just [email protected]
Unknown Speaker 23:12
Okay, I'm going to talk a little bit more about incorporating the project into the exhibit. In 2018, the Adler contracted with Cygnus Applied Research, Inc. to conduct a survey of 3,492 visitors and supporters. The data collected showed that 1) 32.8% of our guests came to the Adler to learn specifically about artifacts, 2) 62% reported attending for sky shows, and 3) 50.9% of guests reported wanting a hands-on experience. In addition to these insights, this was an important survey for us because it revealed that our perceived demographics didn't really come close to aligning with our actual visitor demographics. Our assumption was that the majority of our guests were families with young children, when in fact 80% of our guests were coming in two-person adult groups. Because of these past assumptions, our current exhibitions aren't always reflective of our audience's interests: 62.6% of our visitors found that there was more content at the Adler for children than for adults. So next I'm going to do an overview of our goals for the new Chicago's Night Sky exhibit. Our director of experience development was the project lead, and we started by having her draft a core concept and exhibition goals that stemmed from that concept. Then our content team, which consisted of our Vice President of Astronomy and Collections, our curator, and our Director of Public Observing, reviewed and recommended additions to these exhibition goals. It's also important to note that the exhibit was being developed during a two-year brand relaunch that we're still in the midst of; our new brand promise was unveiled internally in April of this year, and the exhibit and brand helped to inform one another. So some things, like our brand promise, which is "sparking real connections," and some of our brand identity were retrofitted into the existing exhibit goals through some light rewording.
And simultaneously, some of our brand documentation, like our new mission, was inspired by language initially drafted for this exhibit. For example, the core idea for the exhibit is that our sky above connects us all as humans and in our communities, and our new Adler mission is to connect people to the universe and each other under the sky we all share, which may sound a little similar. So I'm gonna run through our goals quickly. The first goal is that Chicago's Night Sky will encourage guests to connect with the sky: guests may not initially feel how they are connected to the sky, but after their visit, they will see how our sky above connects us all as humans across our different communities. Goal two is to spark a connection (remember that brand promise) with the stars, planets, and our beautiful moon in our own night sky in Chicago, to encourage a feeling of inspiration from the sky above, and to encourage people to seek to protect it. Goal three is to highlight the real work of Adler staff, teens, students, and citizen scientists in protecting and restoring dark skies in Chicago, and to create a connection to those taking action. And finally, goal four is to explore the historic communities of people who have connected to the sky through myth, storytelling, art, science, and literature. So now, connecting the Chicago's Night Sky goals to the Mapping Historic Skies project.
Unknown Speaker 27:15
As you can see, the images present in the Mapping Historic Skies project dataset directly connect to historic communities, myths, storytelling, and art. By making guests aware of how historic communities have connected with constellations, our guests too can see those same constellations, and they will hopefully begin to understand how the sky connects us all as humans. Finally, the project utilizes crowdsourcing to perform real research, which was something we desperately wanted to display, since almost none of the actual science or research being conducted at the Adler was present in our exhibits prior to this new one. As Sam said earlier, interactive citizen science allows visitors to take ownership of the collection in a very real and interactive way. So we had this sort of serendipitous connection between the Chicago's Night Sky exhibit goals and the Mapping Historic Skies project. The goals on the screen are the ones I'm going to talk about aligning with the exhibit. For this project, and for all of the exhibit components, each had its own subset of goals that ladder up to the overarching exhibit goals. We as the exhibit team needed the help of the content experts, which in this case were our collections team and our Zooniverse team (Jessica, Becky, and Sam), to advance our understanding of how this project could integrate into the exhibit. To achieve this collaboration, our project lead created a framework for the audience goals, our curator and Jessica drafted the goals, and then we had a commenting period for the rest of the team to give feedback, which we implemented and updated into our final goals. After going through that process, we arrived at these three final audience goals.
One, the audience will become aware of the Adler's collection of constellation maps. Two, the audience will learn to differentiate constellations based on artistic attributes. And three, the audience will learn that constellation depictions have changed over time and across cultures. Next, for a little additional context, I'm going to cover our exhibit's look and feel and show some of our design work. Here you can see our logo, and next is the mood board, to give a sense of the colors, materials, and ideas we were working with. Our lead exhibit designer actually worked closely with Becky, who unfortunately is not here, on the designs for the exhibit, and not just the Mapping Historic Skies designs but the exhibit designs in general; Becky reviewed all of them and would sometimes be a thought partner to our exhibit design lead. This partnership also helped to evolve the look and feel of the Zooniverse app: we introduced dark mode to the Zooniverse app as a feature specifically for this project, so that it would fit into the exhibit's look and feel. The next slide has evolving sketches. The top two and the one on the left are the same wall at different angles, to show how our ideas iterated; the fundamental interactives in the space are consistent, and only their execution changes from picture to picture. The one on the bottom right is actually our final graphic layout, which is shown in greater detail here, so you can see where the iPads will live on the wall. User testing, workflow prototyping, and evaluation helped to inform the content on the walls here, which we will dive into in a little bit. Jessica, will you hit that one more time? This is a zoom-in of that last image, just so you can hopefully read a little of the text. And I'm going to hand it off to Jessica here to discuss our evaluation process and how some of this text was drafted.
Unknown Speaker 31:28
Yeah, so as Michael has mentioned, we had to fit this Zooniverse app into the exhibition. And as Sam mentioned, we were using the Zooniverse out of the box; we did not pay for customizations for things like a glossary of all the terms for all the constellations that you could look through super easily. We have a field guide that kind of does that, but you've got to click in, and you've got to know what you're looking for. This wasn't an interactive that had a bunch of pretty graphics or was designed perfectly for the space; it's out of the box. And we realized very quickly, testing on the floor, that guests needed a lot more context. Either they didn't know what a constellation was, and there was a lot of that, or they had just maybe forgotten and needed a gentle nudge. There was a perceived barrier to doing the project because they're not astronomy experts. We learned from testing that when our guests knew why they were being asked for help (aka, there is a huge dataset and only me; it was actually floated by a colleague that we put up a picture of me with the 4,000 constellation maps, possibly crying, and then maybe people would do it), they were far more likely to help, because they realized they were actually doing something to help the Adler. The text panels were added to connect that context to the app. So we did bring in instructions: this is a very simple thing, you're just going to draw a box. We brought in some text about why we're doing this and what it could result in, and the fact that there's one staff member and 4,000 images, so please help us. This all served as a reminder of what a constellation is, and that you actually can do this; you don't have to be scared.
Unknown Speaker 33:12
So as Jessica mentioned earlier, the evaluation process was really important to us as part of how we iterated the design of the app and the workflow, because in order to appropriately assess our questions about the functionality and the enjoyment of the app, we really needed to first place it into the context of the audience's wider knowledge base. As Jessica said, we carried out multiple rounds of testing on the museum floor, first at special members' night events and then during museum hours with weekday visitors. Through this testing process, one of the really interesting things we learned was that staff presence influenced app use, and also evaluation of the app, which isn't a huge surprise. But we really wanted to acknowledge that the experience for guests would be different when it was just them and a touchscreen, without staff intervention. And second, we really needed to find a way to communicate the context of the project quickly and effectively, particularly the underlying research questions, to ensure that people knew the results were actually going to be used, and that it wasn't just busy work or something to play around with.
Unknown Speaker 34:42
Yeah, and as Sam mentioned, we tested quite a bit. This is how we learned to cut the image-sorting workflow, because people were really awkward about standing in front of me as I'm holding an iPad asking them, "Is there more than four?" And they're like, "Well, yeah." And then we just stare at each other. So we cut that one. But we also learned that guests had much more confidence in their ability to participate after that context was given. They loved talking to the staff, and they loved getting to hear about why we're doing this: "Oh my god, one day I could hold up my phone and see all this stuff." But we can't have a staff member stand next to this for the entire opening hours and explain why we want you to interact with it; that's where that graphic from earlier came from. What we did learn from testing is that not a single one of our guests stopped at a single constellation classification; most of them performed at least three, and we saw as many as 20 to 30 at a single time. We were getting a lot of comments that it was addicting and fun, and people were really enjoying it, which is not usually what I hear when I'm bringing collections onto the floor. But we did find out this was not the case for some of our younger guests. We didn't realize that children were going to struggle with this workflow: they're native tech users, so they were really good at drawing the boxes, but they were very unsure of what a constellation was. And once we explained it to them, the problem was that they just wanted to look at their favorite one. If you gave them an example, like Orion or a zodiac sign, they just wanted to flip through the images to find the one they were looking for. And you can't do that, because we need you to crop them first.
And so it kind of became difficult for them to grasp the idea of a figurative constellation as well. They would either draw boxes around every single thing they saw, so if it's Pisces, they drew around each single fish, each single, you know, tree branch. Or they just drew one giant box around the entire picture, which is also maybe not so helpful. But we also learned it wasn't just our younger visitors; the older end of our audience did not participate well either. However, it wasn't from a lack of knowing what a constellation is. Theirs was a perceived lack of knowledge and fluency in technology. The second we presented them with an iPad, they would immediately tell us, like, oh no, that's my granddaughter's type of thing. They shut down immediately, and they did not even want to try the technology. Once the whole project was explained to them, they were much more apt to try it, but they had to have multiple reassurances that they were not going to be the only person cropping this image, and that the project was not contingent on their knowledge and ability to draw a box. That's something we can do during testing, but again, I can't have a staff member stand next to this at all times and tell people, like, don't worry, 500 other people have to make the same mistakes as you before it is, you know, official. So we were kind of surprised. We didn't realize some of these things were going to be issues. Zooniverse hadn't had an app on the floor before, and neither had we had any kind of collections interactive, so we just didn't even think of these problems. And as Michael has mentioned before, we envisioned that most of our guests were these young children coming in in family units. And they're not; they're pairs of adults, which serendipitously worked with our results from this project, because that's who does this project best.
So we accidentally created an application for the people who are coming on site.
Unknown Speaker 38:16
So just coming back to this idea of "I'm not sure" or "I don't know if I know enough about this." We hadn't had that experience on the exhibit floor, but that response is actually really frequent when you work with crowdsourced research. It's just part of inviting the public to participate in what you're doing. And particularly, I think it's an outcome of a discipline that has traditionally barred supposedly unqualified people from participating in research in this way. It's also why each data point in a project receives multiple independent classifications that are then aggregated together. So the task isn't dependent upon a single volunteer, but instead is dependent upon a group of volunteers reaching consensus about a question or a task. For each workflow, project builders, if you're using this platform for your own work, are able to choose the number of volunteers who will classify a subject before it's considered complete and then retired from the project. And I'll return to that process in a few slides.
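The classification-and-retirement mechanic described above can be sketched in a few lines. This is an illustrative simulation, not the actual Zooniverse project builder API: the `RETIREMENT_LIMIT` value, the `record_classification` and `consensus` helpers, and the dictionary-based subject are all hypothetical names invented for the example.

```python
# Sketch of consensus-by-multiple-classifications: a subject collects
# independent answers until it reaches a configurable retirement limit,
# and the consensus answer is the majority vote across all of them.
from collections import Counter

RETIREMENT_LIMIT = 10  # classifications needed before a subject retires

def record_classification(subject, answer):
    """Add one volunteer's answer; retire the subject once enough arrive."""
    subject["answers"].append(answer)
    if len(subject["answers"]) >= RETIREMENT_LIMIT:
        subject["retired"] = True

def consensus(subject):
    """Majority vote across all independent classifications."""
    answer, _count = Counter(subject["answers"]).most_common(1)[0]
    return answer

subject = {"answers": [], "retired": False}
for vote in ["orion"] * 7 + ["pegasus"] * 3:
    record_classification(subject, vote)

print(subject["retired"])   # True: the 10th classification retired it
print(consensus(subject))   # "orion": no single volunteer decides alone
```

The point of the design is the last line: three volunteers answering "pegasus" do not derail the result, so no individual has to be right on their own.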
Unknown Speaker 39:32
So I previously spoke about how we had this, you know, fortunate connection between the exhibit goals and the mapping historic skies project. Now I'm going to talk about how, despite that, the project still needed some interpretation and adaptation to fit into the exhibit. One of the most challenging parts of developing this component for the exhibit was determining how to provide context for our guests. We needed to ensure that guests were aware of the concept of citizen science, aware of the Adler's collection of historic objects, and then also provide instructions for how to use the interactive, which quickly becomes a lot of copy, and also a question as to how to create a hierarchy for that information. So we considered using a video originally, and we got as far as drafting specific goals for what the video needed to do, as well as a storyboard, and we even sent an email to our members asking them to provide feedback that we could use within the video, before that was cut for budget reasons. Shocking. So next we pursued an infographic, and due to our designer's time constraints, because we only had one working on the project, this never materialized either. So finally, we were resigned to just instructional text. But ultimately, the app is pretty intuitive for our target audience's age range, which we uncovered from the Cygnus Applied Research survey, since our adult attendees are usually digitally fluent. Also, in the gallery there are two cases of collections objects adjacent to the mapping historic skies interactive, which give context as to what the guest is classifying. So ultimately, through good design, both in the app and also in the exhibit space that complements the interactive, we were still able to make it understandable and intuitive to engage with. The next slide is the current state of the exhibition. As was mentioned earlier, the exhibit does not open until November 22, so Michael took this photo on Monday.
Yes, so we don't know exactly how successful this venture is going to be. In this picture, on the left there is a kiosk and a projection screen. That was the first interactive installed in the exhibit, and it was installed on Monday. The wall space on the right is where the mapping historic skies project will be installed next week. And last month we actually had a preliminary meeting about evaluating the exhibit, and this component will without a doubt be one of the aspects that we evaluate. So if you're curious to follow along with our journey and hear how amazingly successful this is going to be, please reach out to us. We're happy to chat, or hand out cards, or whatever afterwards, and follow up.
Unknown Speaker 42:39
So the final topic I just want to address in the next couple of minutes, to make sure we have plenty of time for questions, is the resulting data from the project. We really try to accentuate, and hit really hard, how the data are going to be used, and I think part of that is showing what we're actually going to do with them and how we're going to make sure they're useful. Something that is a little bit different about this project from our online projects is that we need to ensure it's actually going to last a certain amount of time. We want to make sure visitors can participate in the long term, rather than having this be an interactive that's only available to early visitors to the exhibit. And to that end, the goals differ from what we would expect from an online-only project, in which efficient completion of a dataset is a main part of how we set up projects. For an online project, we would determine the number of classifications we need to get reliable results and then set the retirement rate to something very close to that number. Whereas with mapping historic skies, we want to strike a balance between making sure we get the amount of classification data we need for accurate results, taking into consideration little kids who just want to see the pictures, or people who aren't entirely sure how things work, while also making sure we're not retiring things from the data pipeline so quickly that we run out of data and people don't actually have anything to do. So we'll keep our retirement limit higher than what we'd set it to online, to ensure long-term engagement in the exhibit. And second, we wanted to make sure we had the data pipeline set up before the project began, to ensure useful results. These are some of the questions we asked ourselves as we were setting things up.
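The trade-off above is easy to see with a back-of-the-envelope calculation. All the specific numbers here are hypothetical (the two retirement limits and the daily classification rate are invented for illustration); only the roughly 4,000-image collection size comes from the talk.

```python
# Why an in-gallery project sets its retirement limit higher than an
# online one: with a fixed pool of subjects, the limit directly controls
# how long the pool lasts before everything is retired.

def days_of_content(num_subjects, retirement_limit, classifications_per_day):
    """Days until every subject has collected its retirement limit."""
    total_needed = num_subjects * retirement_limit
    return total_needed / classifications_per_day

# Same collection, same hypothetical visitor throughput:
online_limit, gallery_limit = 10, 30   # illustrative retirement limits
rate = 500                             # illustrative classifications/day

print(days_of_content(4000, online_limit, rate))   # 80.0 days
print(days_of_content(4000, gallery_limit, rate))  # 240.0 days
```

Tripling the retirement limit triples the runway of available subjects, at the cost of collecting more classifications than strictly needed for accuracy, which is exactly the balance the speakers describe.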
These include: classifications from the exhibit workflow will need to be aggregated and used to create subjects, which will then be used in the identify constellations workflow, so, a data pipeline. Once subjects are cropped and classified, where do they live within the Adler's content management system? That was another very practical consideration we had to think about. And then finally, how do members of the public access those outcomes? So these are sort of medium- and then longer-term goals in terms of how this data is going to be used. Here you can see an original image on your left, which contains two constellations, and on the right is a visualization of the raw classification data. You see these sort of blurry boxes, which represent that individual annotations aren't necessarily going to conform to exactly the same height and width. So we use some Python code to cluster the raw markings together, which, if we've used the correct numbers of classifications, will result in a single box drawn for each constellation in the image, allowing for some cleanup. And if you look at the next slide, you can see what some tests look like in terms of how those annotations would look when mapped over the original image. These were created using data from beta testing that we did, so it's actual data provided by guests. These coordinates will then be used to crop the original images, and those cropped versions will be uploaded to the identification workflow. Just a quick note, if you go back: these slides are going to be up in the MCN shared folder, and there is a link. We do have some in-house aggregation code that works with specific tools in the project builder. So if you are interested in using the project builder, you don't necessarily have to be a data scientist; we do have some in-house scripts that you can play with that were created for those specific tools.
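Here is a minimal sketch of the kind of clustering described above, assuming annotations are stored as (x, y, width, height) tuples. This is not the Adler's actual in-house aggregation code; the greedy center-distance grouping and the `max_center_dist` threshold are simplifications chosen for illustration.

```python
# Cluster raw bounding-box annotations from many volunteers into one
# consensus box per constellation: group boxes whose centers are close,
# then average each group into a single box usable as crop coordinates.

def cluster_boxes(boxes, max_center_dist=50):
    """Greedy clustering: a box joins the first cluster whose seed box
    has a center within max_center_dist pixels on both axes."""
    clusters = []
    for box in boxes:
        x, y, w, h = box
        cx, cy = x + w / 2, y + h / 2
        for cluster in clusters:
            x0, y0, w0, h0 = cluster[0]
            c0x, c0y = x0 + w0 / 2, y0 + h0 / 2
            if abs(cx - c0x) <= max_center_dist and abs(cy - c0y) <= max_center_dist:
                cluster.append(box)
                break
        else:
            clusters.append([box])
    # Average each cluster's coordinates into one consensus box.
    return [
        tuple(sum(vals) / len(vals) for vals in zip(*cluster))
        for cluster in clusters
    ]

# Two constellations on one plate: volunteers' boxes scatter around each.
raw = [(10, 10, 100, 80), (14, 12, 96, 84), (300, 40, 120, 90), (296, 44, 124, 86)]
for x, y, w, h in cluster_boxes(raw):
    print(f"crop at ({x:.0f}, {y:.0f}), size {w:.0f}x{h:.0f}")
```

Each consensus tuple can then be handed to an image library's crop routine to cut the constellation out of the original plate; real aggregation code would typically use a proper clustering algorithm (e.g. DBSCAN) rather than this greedy pass.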
So we chose "don't burn out" as part of our title to address the ways that limited resources can and do cause a lot of stress within our field. And a lot of the process that we discussed today was, in essence, "because we couldn't do X, we did Y," which I think is probably not unfamiliar to those of you who've worked in museums. While we certainly don't mean to imply that the methods we described today will alleviate all stress from the process of creating this type of project, we do think that being transparent about the process
Unknown Speaker 47:38
will allow others to find that process slightly easier, or at least have some solidarity, and hopefully avoid the common problem of paying a lot of money for tech that can't be adapted or reused down the line. And to that end, we also wanted to show why the Zooniverse team is committed to creating this type of reusable and adaptable low-cost tech. So it looks like we've got about 10 minutes if anyone has any questions. Thank you so much for coming at this late hour of the day.
Unknown Speaker 48:12
we still have 10 minutes before the happy hour starts. So if anyone has questions,
Unknown Speaker 48:18
And I can come around with a mic. Actually, I'm going to use one because it's a little bit louder.
Unknown Speaker 48:33
Hi. So 4,000 images doesn't seem like a lot, does it? Like, I wonder if at some point you guys thought about, well, what if we just get, you know, 20 of our friends together and order some pizza and spend all night doing this, or something like that? And then sort of the follow-up question to that is: do you worry that the guests may decide on crops that ultimately, like, it would have been better if it was a little more like this? Like, I could see almost, in that one, people were trying to avoid the horse a little bit, whereas it might be better if you didn't try to avoid the horse and tried to make it more centered in the box. Those are my questions.
Unknown Speaker 49:26
Yeah, so that's definitely stuff we've thought about. One, I'm flattered that you think I have 20 friends, but unfortunately we just don't have that kind of capability at the Adler. Like most museums, everybody's already doing more than they're supposed to be doing. My department is four people in collections: a collections manager, a curator, and a librarian, as well as myself. And those three are so terrified of technology that if I asked them to learn Photoshop to crop these out, they might cry. So it was just going to be me, and we did talk about the fact that if we wanted to invest about five years, on top of everything else I do for the Adler digital-tech-wise, we could probably get this done and make this an application. And that was floated for quite a while, until we were looking at the exhibit. They wanted to make an interactive, but they didn't want to spend any money. And so that was part of where this project came from: they wanted to highlight collections in an interactive way, and they wanted to highlight Zooniverse on the floor, because, as Michael mentioned, Zooniverse was not currently represented on the floor. Even though it's a huge part of our staff, and quite a big part of what the Adler does, most of our guests had no idea it was there. And even with our collections, most people ran through the galleries to get to sky shows, and, like, a telescope only attracts so much attention. So part of this was for the exhibition goal of having an interactive experience using collections and highlighting Zooniverse. Had we not had the exhibit on the horizon, we probably wouldn't be doing this. Well, that's untrue. We would probably be doing this, but I don't know if it would be on the floor. I think this was planned as, like, a Zooniverse project online.
Because when Sam came in as the postdoc fellow, she and I set up this project as something that we needed to get done.
Unknown Speaker 51:21
Yeah. I also think it's really important to acknowledge that while crowdsourcing can seem like a solely opportunistic goal, in terms of, you know, low-cost, high-impact outcomes, I do think there is something really powerful to be said for having a research experience as part of an educational process. And I do think that the idea of being able to say, hey, there's this database here, and I helped to make that, is also really important. So not only are we meeting the goals of, you know, highlighting the really cool objects in our collections, we're allowing people's participation with those objects to not simply be mediated via a curator. It's this idea of directed discovery that I really think is important for us to be focusing on as well. From the crowdsourcing side of things, Trevor Owens, at the Library of Congress and formerly IMLS, has written a lot about the idea of crowdsourcing as a path to engaging with digital collections rather than a means to a specific outcome. And I think this actually fits really well with some of the stuff he's written about that.
Unknown Speaker 52:48
Does anybody else have questions? Yeah, thank you guys so much for coming. This is great.