Transcript
Ethan Holda 00:00
Hi, everybody. Good afternoon. My name is Ethan Holda. I'm the Director of Technology at the Cleveland Museum of Art. This is Cal Al-Dhubaib; he is the founder and chief data scientist at a Cleveland data science company called Pandata, and we've been doing a lot of work with them over the last year, year and a half. Before I get into what we've been working on, I want to see a show of hands, since there are a lot of decision makers in the room. How many of you, in your process of making decisions, when you're trying to implement something, step back and say, "I don't even know, is this working?" Anyone? Most of you. And how many of you feel confident that you have the information to answer that question? That didn't go quite so quickly that time.

How we've tried to arrive at the answer to that question is by hiring a data scientist to help sift through all of the data we're collecting at the museum. Our mission is transformative experiences through art, for the benefit of all people forever. It's written into the deed from when Jeptha Wade donated the property for the museum in 1892. To be successful at a mission like that using your data, you really need support from the highest level. That's why we're super grateful that our director, Bill Griswold, here posing in our ArtLens Gallery, is really supportive and, as he has said, committed to the use of technology as an interpretive tool. He's also the driving force behind our strategic plan. In 2017 we announced a strategic plan covering the period from 2018 to 2027, and one of its main focuses was to make the museum a data-driven institution. That would have been impossible without the support of the director and the Board of Trustees.

In 2019 we had a record year, with more than 850,000 visitors. So two years into our strategic plan, we were well on our way to our ultimate goal of a million visitors a year. A lot of that, and I glossed over this part, was because we had the huge Kusama exhibition, which sold out every single show. With over 850,000 visitors, we collected a lot of data. But the question is: what is this data telling us?

We've been collecting data from the very beginning on all sorts of things. Most notably, in 2012 we opened Gallery One, which a lot of you have maybe heard of: an iterative digital space built to grow audiences, provide a fun and engaging way to learn about art, highlight featured artworks, and propel visitors out into the galleries. As an iterative space, it has had a couple of major iterations. In 2017 we completely redesigned the whole space, got rid of almost all the touchscreens, and moved everything to gesture-based projections. In 2019 we did a complete rotation of the gallery, which is now called ArtLens Gallery; there was even a joke about that name change last night at the Ignite event. The whole idea is to intertwine the art with the technology: most of the content featured in these interactives is actually there in the galleries, and we're encouraging people to look at artworks in a different way using this technology.
And we're rotating the gallery every 18 to 24 months. For those of you who are not familiar with it, I'm going to play the short version of our video about what I'm talking about.
Ethan Holda 05:05
So, since 2012 we've made a lot of iterative changes: we've changed some of our learning and interpretive goals; we've changed the way we work with vendors, how we hire them and how we partner with them; how we train the staff and the role of the staff in that space; we've changed our interactives and developed our back ends; and we included photogrammetry content in this latest version. But one of the big things we're trying to figure out is: how are we evaluating all of this? How are we deciding whether this is a successful space?

To back up a little, all of this requires a lot of data on our back end. We have a lot of interoperable systems: our collection online, our Open Access project (you may have heard of it), our CMS, our DAMS. All of it has to be completely tied together, scalable, modular, and, most of all, up to date. By tying all of this data together and integrating it, the most important thing we achieve is making sure everything is correct and current: all of our interactives update within 15 minutes. It could be even less if we wanted; we arbitrarily decided 15 minutes was good.

With all this data and all these systems, we're really committed to being transparent about what we're doing with our data. Taking ArtLens Gallery as an example, we're putting things like visitor-created tours on the wall you saw, and top-10 and top-50 favorites from what people are favoriting through the app or through the wall. It's really just another way of visualizing the data that users are putting into our system. We're also collecting straight-up analytics on a lot of things: What are people playing? Which artworks are they interacting with? Are they playing games more than once? How long are they spending? And from some of the games we take photos and put them up on the Beacon, which is kind of like our Jumbotron.

So, with all of this, the museum's digital strategy mandates that we become a data-driven organization: that we collect, share, and analyze this data as best we can so that our leadership has the tools to make educated decisions on things like: How do visitors use the museum? What are their needs? What do they enjoy? Are those the same thing or not? And are we doing those things? Now that we have access to all the data, we can begin to answer those questions.

The original Gallery One was a huge success back when it was Gallery One. We had directors from museums all over the world coming to visit, and our early metrics showed increases in attendance. I actually worked on it at my previous job, at Local Projects, starting in 2011, so I can tell you: it's really great when people like stuff, but is it working? We had these goals; was it working? We didn't know. When the space first opened we didn't really have any concrete way of telling. Our in-house evaluation team did a survey in 2013, but the results didn't come out until 2015, and being an iterative space, we had already iterated past most of what would have been relevant.
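[Editor's note: to make that 15-minute refresh concrete, here is a minimal sketch of a polling loop, assuming a hypothetical collections endpoint (`COLLECTION_API`) and an in-memory cache; the museum's actual CMS/DAMS integration is not described in the talk.]

```python
import time
import requests

# Hypothetical internal endpoint; stands in for the real CMS/DAMS integration.
COLLECTION_API = "https://collections.example.org/api/artworks"
REFRESH_SECONDS = 15 * 60  # the arbitrarily-chosen 15-minute window

def refresh_interactive_cache(cache: dict) -> None:
    """Pull the latest record for every artwork the interactives display."""
    params = {"updated_since": cache.get("last_sync")}  # None on first run
    resp = requests.get(COLLECTION_API, params=params)
    resp.raise_for_status()
    for record in resp.json()["artworks"]:  # assumed response shape
        cache[record["id"]] = record  # newest metadata wins
    cache["last_sync"] = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())

cache: dict = {}
while True:
    refresh_interactive_cache(cache)
    time.sleep(REFRESH_SECONDS)
```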
But there were some interesting takeaways from that early evaluation. In trying to measure engagement, we found that a lens about 1930s art had the longest period of engagement, with a median of two minutes. That was an interactive where you pressed a button and watched a two-minute video, so you can figure that one out. Whereas the thing people really seemed to love was the sculpture game that matched your pose with a sculpture, and the interaction time there was about 10 seconds, so not as much engagement. But what is that telling us?
Ethan Holda 09:39
In 2017, we were fortunate enough to receive an NEA grant to have our same evaluation team do a deeper dive into the success of this space and the impact of spaces like it in general. Hannah Ridenour and Elizabeth Bolander from CMA have a great paper on this that's available on our website, and I encourage you all to check it out.

One of the things we found is that 36% of the people who participated in the evaluation went into ArtLens, but we couldn't really tell what they were doing next, if anything. Were they only going into ArtLens? Did they accomplish any of our educational goals? Self-selecting, they said they did have a greater understanding of art and a greater interest in the collection. But were they going out into the galleries? We had no way of knowing.

In the old days, we used to have guards count people at the entrance of each gallery: click, click, click. But guards have more to do than just count people, so we decided we needed something a little more rigorous. We use a system called Traf-Sys: infrared sensors that measure traffic flow in and out of spaces. All of that data gets captured into a back end with an API that lets us generate dashboards and integrate it with all of our other data. And what we found was that we had been undercounting by 75,000 visitors a year. The response to that was, "The technology must be broken." Because the essence of doing anything with data is trusting the data, we took our own staff and had them sit in these galleries doing nothing all day; they weren't security guards, they were just there to click, click, click. The variance was something like one or two people, so we could verify that this was good data, and now all of our internal and external reports are generated using the traffic data from these sensors.

So we spent a lot of time and effort on counting people, but that's not really all we need to know. We need to know where people are going, how long they're staying, where they'll go next, and whether they're hitting our educational goals. Our next iteration on this kind of study was to implement Cisco Meraki, a cloud-based system of wireless access points that picks up any wireless antenna, whether a phone, a laptop, or a tablet, and whether it's on our network or not. Since this creates millions of data points every day, we hired a local data science firm, Pandata, to tell us what it all means, because we knew we had a lot of data and we didn't know what to do with it. And the question was: do we have big data?
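[Editor's note: as an illustration of that validation step, here is a small sketch with made-up numbers comparing staff clicker counts to sensor counts per gallery; per the talk, the observed variance came out to roughly one or two people.]

```python
# Hypothetical one-day counts for three galleries; not real CMA figures.
sensor_counts = {"gallery_101": 412, "gallery_102": 387, "gallery_103": 598}
manual_counts = {"gallery_101": 410, "gallery_102": 388, "gallery_103": 597}

for gallery, sensed in sensor_counts.items():
    clicked = manual_counts[gallery]
    diff = sensed - clicked
    pct = 100 * diff / clicked
    print(f"{gallery}: sensor={sensed} manual={clicked} diff={diff:+d} ({pct:+.1f}%)")
    # A discrepancy of only a couple of people per gallery is what justified
    # switching official reporting over to the sensor data.
```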
Cal Al-Dhubaib 12:51
So, as a data science consulting firm, we get asked a lot, or told a lot, "We have some big data, can you help us?" And it's 100,000 rows that just doesn't fit into Excel. In this case, though, they were generating 10 to 15 million data points a week, representing the roughly 15,000-plus visitors coming into the space. To give you a sense of the scale, the system has over 100 different Meraki routers throughout the museum, each viewing the same device multiple times as it traverses the space. So it's creating lots of data, but this data is inherently noisy. After the system was rolled out, there was a lot of internal objection from the evaluation team, simply because there was a lot of noise and a lack of trust. Some of the questions we had to address were: How do we translate these noisy estimates of where devices are into true paths through the museum? How do we understand what a visitor is from a count of observed devices? And how do we build enough trust in the system that we can adopt it as a means to answer these questions?

One of the first things we looked at was that we were seeing way more devices than we knew there were visitors in the space. Here's a chart showing the count of unique devices, just a random cut from one week, based on how long they spent in the space. All the way on the left there's a large column of more than 10,000 devices that spent less than a tenth of a minute, and that just doesn't make any sense. Well, the Cleveland Museum of Art is on a campus shared with a private university, Case Western Reserve University, and these were students walking to class, getting picked up in passing by whichever Wi-Fi router saw them. There's also this red line where quite a few "visitors" spent 10 minutes at the museum. If any of you have seen the space, it's very large; who spends 10 minutes at a museum? Those same students, using the free museum to go to the bathroom. And I would know, because I was one of those students at one point. What we found was that about 20% of all the devices we were picking up were actually visitors in the space, and about 1%, based on which days they were there and how long they were staying, were staff. So we were able to rule out which devices to ignore and which devices to focus on.

I have a cartoon image here of what this process looks like. At any given time, two or more routers will spot a person; for now, let's just say a device is a person. Each router gives you, "I think this person is here, and I'm this confident in my estimate of where the person is." We then take that data and turn it into our own estimate: okay, based on these three different observations, we're pretty sure this person is here, with our own confidence around it. What we build up over time is a series of points and observations with different degrees of certainty, and that allows us to build a path. Now, something in here doesn't look quite right, and I know this because I've been in this space a lot. Going from point four to point five like that is impossible, because that's a giant wall. In fact, you can't get to point four at all, because that is outside.
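[Editor's note: a minimal sketch of the two ideas described here, with assumed thresholds and field names (this is not Pandata's actual pipeline): triaging devices by dwell time, and fusing several routers' position guesses with inverse-variance weighting so that more confident routers count for more.]

```python
from dataclasses import dataclass

@dataclass
class Observation:
    x: float      # one router's estimated position (meters)
    y: float
    sigma: float  # that router's uncertainty about its estimate

def fuse(observations: list[Observation]) -> tuple[float, float, float]:
    """Combine several routers' guesses into one position estimate."""
    weights = [1.0 / (o.sigma ** 2) for o in observations]  # inverse variance
    total = sum(weights)
    x = sum(w * o.x for w, o in zip(weights, observations)) / total
    y = sum(w * o.y for w, o in zip(weights, observations)) / total
    sigma = (1.0 / total) ** 0.5  # fused estimate is tighter than any single one
    return x, y, sigma

def classify(dwell_minutes: float, days_seen: int) -> str:
    """Dwell-time triage; thresholds are illustrative, not the real ones."""
    if dwell_minutes < 0.1:
        return "pass-by"   # students walking past on the shared campus
    if days_seen > 20:
        return "staff"     # devices present nearly every day (~1%)
    return "visitor"       # the ~20% that matters

print(fuse([Observation(3.0, 4.0, 2.0),
            Observation(3.5, 4.2, 1.0),
            Observation(2.8, 3.9, 3.0)]))
```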
And so we had to start to deal with some of these issues in the data, using the relative confidence of the different observation points to say: no, this is how the person likely traveled. We had, of course, maps of the various galleries in the museum, and we were able to take the path we inferred and turn it into gallery-level intelligence, pinpointing that this person was in this space at this time. We can then start to treat it a little like you would treat website analytics.

The next question was: what is a visitor? At a free museum with limited ticketed events, it's very difficult to get at that count. So one of the things we did was take advantage of the MIX event, which happens once a month on a Friday and is ticketed. We looked at the number of unique devices showing up versus the number of tickets scanned, and I love it when things work out like this: it was a very simple, straightforward relationship, and the magic number is two. Two visitors for every one device observed. It's fascinating that it was that easy.

So now, how do we get this data into the hands of decision makers? We created this awesome dashboard. It was really exciting to finally be able to say, hey, we can count the number of people in the spaces and how long they were spending there. It was even more exciting when we could see these moving dots and actually track visitor paths throughout the space. Except no one was able to make decisions off of these dashboards.
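[Editor's note: here is a sketch, with invented numbers, of how one could back that visitors-per-device factor out of ticketed events: a least-squares fit through the origin of scanned tickets against unique devices, which in the talk landed on a factor of about two.]

```python
# Hypothetical ticket scans vs. unique devices for four MIX nights.
tickets = [1200, 950, 1430, 1100]
devices = [605, 470, 720, 545]

# Least-squares fit through the origin: tickets ≈ k * devices.
k = sum(t * d for t, d in zip(tickets, devices)) / sum(d * d for d in devices)
print(f"visitors per observed device ≈ {k:.2f}")  # ≈ 2.0 with these numbers

def estimate_visitors(unique_devices: int) -> int:
    """Scale a device count up to an estimated visitor count."""
    return round(k * unique_devices)

print(estimate_visitors(15000))
```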
Cal Al-Dhubaib 17:40
Data solutions fail without trust, and we had skipped that first trust-building step. Whenever we talk about using data science, there are two processes. The first is experimentation, which addresses questions like: What is a visitor? How do we deal with this noise? What is a path? What is dwell time? The other half is operationalization: taking the results of an experiment and translating them into a meaningful tool that is actually used by the relevant decision makers. You know it's working when business decisions are being made as a result.

We started to address this right after the Kusama exhibition. We knew 36% of visitors go into the ArtLens space, but did that change during a special exhibition that drew a huge number of people who came just for that one exhibit? Could we find that in the Meraki data? So we looked, and in the yellow-highlighted period there was a notable dip in the percentage of people going to ArtLens Gallery. We shared this with the evaluation team, and for the first time we got back the response, "Well, we already knew that." We had told them something very boring, but it was also the first time they did not object to the data being used in this tool. Then we were able to go to the next step and show them something new that we could identify with the Meraki data: the average amount of time people were spending in ArtLens Gallery. So we built trust by giving them something boring, then took it to the next level and showed them something with some really interesting patterns; there are clearly some days when a lot more time is being spent in the space.

I want to come back to the study Ethan just talked about: people who went to ArtLens Gallery reported that their perception of what they learned about art was greater than people who didn't. We wanted to see if there was a story corroborating that within the Meraki data. We took a cross-section of visitors from July 2018 through December 2018, and we found that 177,000 visitors did not spend any meaningful time in ArtLens Gallery; on average, they spent about 2.2 hours in the museum. The next chart down shows, of all the unique devices captured, how many went into each specific space. What you'll notice is that they weren't going to that many spaces, and a handful of spaces are very, very red. That was largely dominated by the Kusama exhibition: they were going to the special exhibition space and spending well over an hour there, but they weren't really going to many other spaces. Interesting.

Then we looked at the people who did go into ArtLens Gallery: 80,000 visitors, who spent about 12 minutes longer on average in the museum, which is interesting, but I don't know that it's that meaningful. However, they interacted with almost four times as many spaces. That's fascinating. We wanted to take this to the next level: what about the people spending a significant amount of time in ArtLens Gallery? There were 37,000 visitors who spent more than five minutes.
And these individuals were very interesting: they spent almost 36 minutes longer in the museum on average, they were going to even more spaces, and they were spending even more time in them. There's just one little box I'd like to highlight: that's the café. They're probably spending more money, too. I want to repeat that: visitors who spent more than five minutes on average in ArtLens Gallery were spending 35 minutes longer in the museum, and we've since refined this estimate to be closer to an hour. That's really meaningful.

Now we can start asking some really interesting questions, like: does the selection of the objects in ArtLens Gallery influence behavior in the museum? We're able to connect the Google Analytics data we have on the different gameplays happening within ArtLens, relate them back through the Open Access dataset to the home department they come from, and correlate that with the data we have on people going to the different spaces and the amount of time they're spending in the respective galleries. The Cleveland Museum of Art has one of the best collections of Asian art in the world; unfortunately, it is one of the least trafficked areas in the museum. So we can start to look at experiments like: does selecting Asian art and putting it in ArtLens Gallery influence people's choices about which galleries they go to? We haven't done this experiment yet, but now that we can connect these two datasets together, we can really start to answer some powerful questions.

So I'm going to make a prediction here and guess what's on all of your minds: what about privacy?
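[Editor's note: a hedged sketch of that cohort comparison in pandas, with invented rows and column names, just to show the shape of the analysis: bucket devices by ArtLens dwell time, then compare average museum time and spaces visited across the buckets.]

```python
import pandas as pd

# One row per visit; all values are invented for illustration.
visits = pd.DataFrame({
    "device_id":       ["a", "b", "c", "d"],
    "artlens_minutes": [0.0, 1.5, 6.0, 12.0],
    "museum_minutes":  [130, 145, 170, 175],
    "spaces_visited":  [3, 11, 14, 16],
})

def cohort(row) -> str:
    """Bucket each visit by time spent in ArtLens Gallery."""
    if row.artlens_minutes == 0:
        return "skipped ArtLens"
    if row.artlens_minutes > 5:
        return "ArtLens > 5 min"
    return "ArtLens <= 5 min"

visits["cohort"] = visits.apply(cohort, axis=1)
print(visits.groupby("cohort")[["museum_minutes", "spaces_visited"]].mean())
```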
Cal Al-Dhubaib 22:39
Especially with regulations like GDPR and, more recently, the California Consumer Privacy Act, with other states seeming likely to follow suit, personally identifiable information is top of mind. As it turns out, a MAC address, the unique identifier of a device, is how we're able to connect all this data together. Built into this process is an encryption algorithm that basically takes the device identifier and scrambles it. Because we don't care; we don't need to know it was you or your device that we observed. All we need to know is that a device spent five minutes in ArtLens Gallery and then spent an hour longer in the museum, and that there were 100 more devices like it. So we're never able to reverse the data back out to a specific device. And there are processes and controls in place for someone to opt out, whether through the Cisco Meraki website or by letting us know their MAC address: we can run it through the same algorithm, find all the records related to it, and delete the data. So, what comes next?
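[Editor's note: the speakers describe a one-way scramble, stable enough that an opt-out request can be matched but not reversible to a device. A keyed hash has exactly those properties, so here is a minimal sketch under that assumption; the actual algorithm Meraki and CMA use is not specified in the talk.]

```python
import hashlib
import hmac

# The key must stay stable so the same device always maps to the same token,
# and stay secret so outsiders can't brute-force MAC addresses. Hypothetical.
SECRET_KEY = b"replace-with-a-real-secret"

def pseudonymize(mac: str) -> str:
    """One-way token for a MAC address; cannot be reversed to the device."""
    normalized = mac.lower().replace(":", "").replace("-", "")
    return hmac.new(SECRET_KEY, normalized.encode(), hashlib.sha256).hexdigest()

def opt_out(mac: str, records: list[dict]) -> list[dict]:
    """Honor an opt-out: hash the provided MAC and drop matching rows."""
    token = pseudonymize(mac)
    return [r for r in records if r["device_token"] != token]

print(pseudonymize("AA:BB:CC:DD:EE:FF")[:16], "...")
```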
Ethan Holda 23:40
All right, next. So Cal mentioned open access very briefly, and in a nutshell, that's what comes next. In January of this year we launched our Open Access initiative, which encompasses a whole lot of things. I'm going to play a quick video just to give an overview of what we did with it.
Ethan Holda 25:35
So, I debated whether to include that or not, because it's long, and also I do not like the song. But the song is released under CC0, and we're purists, so.

Our director, Bill Griswold, said at the livestream of our launch that this is the logical and exciting outgrowth of CMA's inclusive mission, which we mentioned earlier: to create transformative experiences through art for the benefit of all people forever. And having a somewhat established history of being a leader in digital technology and multimedia for museums, we're really trying to model a best practice for museums going into open access.

So what are we trying to solve with open access? We have a great collection; we know we have a great collection. The world doesn't know that Cleveland has a great art museum, and even if they did, they might never find themselves in Cleveland, in our art museum. So how do we reach a broader audience, and how do we get all of our information out there? This is all of the data we're releasing through open access, and some of it is really exciting to us, because it's things like citations and provenance that we weren't previously releasing to the public, and that a lot of museums aren't releasing either.

When we launched in January, as you saw in the video, we partnered with a lot of companies: Microsoft, American Greetings, Pandata. Pandata did something really interesting; I wanted to have Cal talk about it, but we're super short on time. They applied machine learning to the didactics and scholarship in our collection to create similarity groupings, using a text-clustering algorithm to explore the language of how we talk about art. We incorporated all of this information into a redesign of our collection online, making it available to people who don't know how to use APIs; and for people who do know how to use APIs, we made an API available. We immediately had people doing things like creating great Twitter bots and games. This is from a data hackathon in Houston, where people did things like finding the predominant colors in our print collection.

It also allowed us to integrate with Wikimedia. What we found, over a four-month period for instance, is that over 10 times as many people were looking at artworks from our collection on Wikimedia as were coming to our website, and they were looking at different things. This is the top 10 from our website: very predictable, kind of our greatest hits, with the Caravaggio and the Water Lilies. This is the top 10 from the English-language Wikipedia: I didn't even know we had that hedgehog. They're also looking at different aspects of our collection. This is a relative (not to scale) bubble chart of which departments they're looking at. That's Decorative Arts and Design, by far the most popular part of our collection on Wikimedia, and it doesn't even register a number in the chart on the right, which represents our collection online. Same thing with Egyptian art.

So, as we said, the Kusama exhibition helped us get closer to this goal of one million visitors a year.
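[Editor's note: for people who do want to try the API, here is a minimal sketch of a query against the CMA Open Access API; the endpoint is public, but the parameter and response field names shown here should be checked against the current documentation.]

```python
import requests

# CMA Open Access API (public; no key was required at launch).
API = "https://openaccess-api.clevelandart.org/api/artworks/"

resp = requests.get(API, params={"q": "hedgehog", "limit": 5})
resp.raise_for_status()
for artwork in resp.json().get("data", []):
    # Field names assumed from the public docs; verify before relying on them.
    print(artwork.get("title"), "|", artwork.get("department"))
```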
And what we're really hoping to learn is: how do we use open access, or how do we even see whether open access is affecting physical attendance? We know people are engaging with it; they're using the API, they're seeing this material. But is it actually affecting physical attendance and engagement in the museum? We don't know yet, and that will probably be our topic for next year. So, any questions? Good, in the back.
Audience Member 29:50
Going back to the ArtLens data: how are you going to get at the element of causation there? How do you know it's not just the people who want the overall breadth and depth anyway?
Unknown Speaker 30:08
So that's a really good question. Hello; okay, this works.

Unknown Speaker 30:13
Because I asked him that question, he gave us that information.
Cal Al-Dhubaib 30:17
So it's a really good question, and what we did as a comparison, right, is ask: does spending five minutes on average in any space mean that you're going to spend longer in the museum overall? We took some of the most popular departments and benchmarked what that looked like there. If you spent more than five minutes in another department or another culture's galleries versus ArtLens Gallery, what was the effect on staying longer in the museum? And yes, there was some impact, but nowhere near as significant as for those who spent five minutes or longer in ArtLens Gallery; we found it to be one of the most impactful spaces. You're right, though, to ask about correlation versus causation. We can never assert that this is what caused it, but there's something very interesting to be said about the behavior that's related to that correlation.
Audience Member 31:04
Were you able to measure the time that people spend in ArtLens versus the retail shops?
Ethan Holda 31:15
That's a great question. Were we able to? No, we didn't, because... or do you think we could? I think when we were initially analyzing that, we found that people in the store and people in the hallway behind the store couldn't be distinguished, and the hallway behind the store is where the elevators are. So we were getting both as one sort of blob of data, and we just didn't have the confidence to do anything with it. Am I remembering that correctly?
Cal Al-Dhubaib 31:46
So one of our biggest limitations is the physical properties of the museum: walls and things that you can't get rid of. The Wi-Fi sometimes gets interference, and that forced a trade-off between granularity, really trying to map out every small nuanced space, and statistical reliability. Yes, we can tell you with a lot of accuracy that someone was here, but we had to draw some boundaries that were not ideal, and so some of those questions we couldn't address.
Audience Member 32:16
How is this data helping you understand your visitor from an empathy point of view?
Ethan Holda 32:33
Right, and it's a great question, and one that we're still trying to address, because what we've been doing up to this point is getting the numbers. Except for things like what Cal was talking about, looking at which collections people are engaging with, there's not really a whole lot that we've been able to do so far with intent, which I think is what you're getting at. So yeah, we're still figuring out how to do that.
Audience Member 33:05
How have you prepped your decision makers to use this data? And are they into it? Are they comfortable using it?
Ethan Holda 33:12
Well, it's a long process, because, as we mentioned at certain points throughout, it's about getting them to trust the data. And then, yes, it is a question of getting them to use it. I think once they trust the data, they're more willing to see these relationships. There's not always a simple A-to-B, where "we saw this great dashboard, so we're going to do this exhibition and not that one"; it's not always that simple. But they're definitely coordinating with us. We have so many requests in my department for this type of data that I've had to start telling my staff: we need to do this in a more organized way. As much as you want transparency, you have to be efficient with how you're distributing this data.
Cal Al-Dhubaib 34:16
One of the lessons learned that I'd like to share, related to that, is about that very beautiful dashboard that was useless. What made a big difference was getting to the point where we really started to understand the nature of the decisions being made around this data. We set up a monthly meeting where, literally, we would open up the dashboard, look at it together, and start asking questions to understand how it could be more useful. That's how we arrived at the ultimate design.
Audience Member 34:45
Were you able to correlate the things that people were engaging with in ArtLens with where they went in the museum?
Cal Al-Dhubaib 34:52
So, the analytics surrounding the ArtLens plays is basic Google Analytics. It's not tracking an individual person; it's tracking that at this lens, an object was played with, along with a session number (thank you) that you can then attribute back to the object's culture or department. Then we had to bridge the gap: we have analytics on one side that are linked to galleries, which are fortunately tied to very specific departments, and we correlated those two datasets together and started looking at that. So it's not "a specific person played with this and then their phone ended up in the Asian collection," but rather: there was an uptick of people playing with objects from the Asian collection in ArtLens Gallery, and we also observed more people going to the Asian department within the same period. We looked at it in week-long increments.
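[Editor's note: a small sketch of that week-level join, with invented aggregates: count plays of a department's objects in ArtLens per week, count device-visits to that department's galleries per week, and correlate the two series.]

```python
import pandas as pd

# Invented weekly aggregates: plays of Asian-collection objects in ArtLens
# vs. device-visits to the Asian galleries, per ISO week.
weekly = pd.DataFrame({
    "week":         ["2018-W31", "2018-W32", "2018-W33", "2018-W34"],
    "asian_plays":  [140, 210, 180, 260],
    "asian_visits": [900, 1150, 1020, 1300],
})

# A simple Pearson correlation at week granularity; upticks in plays
# co-occurring with upticks in visits is the pattern described in the talk.
r = weekly["asian_plays"].corr(weekly["asian_visits"])
print(f"week-level correlation: {r:.2f}")
```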
Audience Member 35:56
I mean, clearly you guys are killing it; super impressive talk. I just want to know: this clearly relies on a lot of hardware and software and connected systems. How long has it taken to get to this point?
Ethan Holda 36:14
A long time. I don't know if you're referring just to the ArtLens stuff or to the Meraki stuff. Yes, both? ArtLens Gallery launched in December of 2012, and for the last seven and a half years we've been iterating on it, and every iteration costs money and takes time. A lot of the efficiency gains come from what I mentioned about iterating on how we collaborate and partner with vendors, making those partnerships better and more efficient. And when it comes to something like the Meraki, that's really just a capital and operating decision: well, we need to have Wi-Fi in here, so let's do it this way, and then we can get all this other capability. Over there, in the back?
Audience Member 37:16
On the slide that showed the favorites: isn't that self-reinforcing, in that you end up showing people everyone's favorites?
Ethan Holda 37:33
Everyone's favorites, yeah. That's a really great point: creating that feedback loop. You show the top 10, then people favorite the stuff that's in the top 10, and it stays in the top 10. That is actually something we thought about. On the wall, for instance, the themes rotate through, and we made conscious decisions in developing those themes. Obviously the top 10 is the top 10, but for everything else, let's not put Monet's Water Lilies in a theme, because that would create exactly that sort of feedback loop. Hope that answers your question. In the middle of the room?
Audience Member 38:16
I appreciate you bringing up the PII stuff. I wanted to ask: my assumption is that with this methodology, using the MAC address in its scrambled form, you're able to identify repeat visitors? If someone comes back a week later, you can tell that device is back again?
Unknown Speaker 38:36
Yes, that's correct. But we don't know who they are; it's just a 16-digit-or-something hash.
Cal Al-Dhubaib 38:44
But related to that, one limitation is that devices change over time. What we've noticed is that the average turnover is somewhere between 12 and 18 months, and then we won't see that device again. So one of the things we've struggled with is identifying true repeat visitors. However, first-time devices have distinctively different patterns, so there's a lot of fascinating analysis you can do comparing those behavior groups.
Audience Member 39:10
We're starting on that journey, but I've been told they're changing the technology, so there will be no more MAC addresses. What are you thinking about as a replacement?
Ethan Holda 39:21
I'm not sure yet, because a lot of it relies on what the technology does. For instance, we had a similar issue when we first rolled out our mobile app: there was a wayfinding component that depended on programmatically getting the MAC address from the phone, and then Apple decided they didn't want to let you do that anymore, so we had to figure something else out. We're pretty confident that, especially since this is a big product for Meraki, if there's some sort of end-user change, they'll figure it out.
Cal Al-Dhubaib 40:00
Something similar that's on the marketplace: instead of just passively monitoring, you use social login permissions to get onto the Wi-Fi network. It requires a little more engagement from the user. It says, okay, you want to use our Wi-Fi network, please give us your email address, or please log in with Facebook. What that does is bypass the need to detect the MAC address. You get fewer connections, or observations, that way, but there are still ways and tools around this that involve consent.
Audience Member 40:35
Are you able to put algorithms together with this data to project visitation numbers?
Cal Al-Dhubaib 40:45
That is a fascinating question. The data does exist to do that; we have not yet done it in this context. We should do it.
Audience Member 40:55
I have a question about the research questions you started generating once you saw this big data. A couple of questions, I guess: how did you not become overwhelmed by all the different questions you could ask and all the different rabbit holes to go down? And how did you decide, okay, these are the questions we want to start asking, or these are the ones we'll research next? How did you grapple with all of that?
Cal Al-Dhubaib 41:30
So, the way this whole project started was with "Is it working?" That has been the constant question throughout the project, and there have been a lot of false starts on the questions we've tried to answer with this data, so the entire project has been iterative in its nature. One of the most impactful things we started doing, once we built trust in the system, was our monthly meeting. We haven't been doing that the past couple of months while preparing for this, but once we get back to it, what happens is we start asking: Can we look at that? Does the data exist? What's the value in answering that question? Having the data science voice in the room with the decision-maker voice, the people who understand the "so what," is really what drives what we can do with these research questions. So it's an ongoing process.
Unknown Speaker 42:22
All right, well, thanks, everybody.
Unknown Speaker 42:24
Thanks.