Using Smart Cameras with Analytics for Public Places


Watch this 50-minute video to learn how Meraki Smart Cameras with Analytics can help organizations safely re-open public places.

Please view the transcript below:


Kait Miller:                    All right. We're going to go ahead and get started. Welcome everybody to today's webinar on Using Meraki Smart Cameras with Analytics for Public Places. We'll go through some introductions here for all the folks that you'll be hearing from today. My name is Kait Miller and I am a business development manager with World Wide Technology.

John Koebel:                 Hello everyone. My name is John Koebel and I'm a sales engineer with Cisco Meraki.

David Owens:               Hi everyone. My name is David Owens. I'm the CEO and founder of EveryAngle and we're a Meraki technology partner specializing in computer vision applications.

Kait Miller:                    All right. For today's agenda, we are first going to do an overview of the Meraki MV Smart Camera, and then we are going to talk about utilizing the smart camera as a sensor to accomplish analytics, and we're going to talk about the API capabilities of the camera. We're going to talk about our approach to use cases and to achieving your business outcomes, and then we're going to talk about all of the analytics and how applying that technology can help your business in the current environment that we live in and in the future and beyond. One of the things that we've done today is we've sprinkled a few questions for the audience throughout. You'll notice at the bottom of your screen, there's an ask a question box there.

                                    You can type your questions into that question box, and we will answer them, but also as these questions pop up throughout the presentation, feel free to submit your response right in that ask a question box, and we will gather those. If we have time at the end, we'll talk about some of those responses as well. All right, let's get started.

John Koebel:                 Thank you Kait. I'd like to start by talking about what a traditional security camera system or deployment looks like. Now of course, you're going to have cameras, but those cameras are going to feed video and other types of data directly into some concentration point. Normally, we refer to those as an NVR. Now, those NVRs will communicate with servers or other types of appliances, and they'll catalog the video. However, in order to then go back and take a look at that video, you would normally need some management workstation or workstations. As we start layering more and more devices, this system gets more complex. Now in these traditional systems as well, it was rather uncommon that all of these pieces would be made or produced from the same manufacturer.

                                    Of course, in order to certify that these will all work together, each piece usually needed to be within a specific set of firmware or software versions, and this was usually quite challenging. Once systems would get to this happy place where everything is working, they would usually just get left in that state. Now the problem with that is that vulnerabilities would not get patched, additional features would not get realized, and bugs would not get fixed. The system would usually sit for at least months, if not years, and people would not necessarily be able to take full advantage of that system.

Kait Miller:                    All right. Here's our first question if you can enter your responses into the ask question box. What equipment is part of your security camera system?

John Koebel:                 All right. Meraki set out to fix a lot of these challenges, and we wanted to design a security system, specifically a camera security system, that also enhanced the physical security department and even took those systems to the next level. First off, we wanted to design a system that was extremely bandwidth conscious. Instead of utilizing heavy network resources from every single camera, we wanted to make those cameras only use bandwidth when they absolutely needed to. Our cameras by default will use less than 50 Kbps of bandwidth per camera, and a lot of times it's much, much less than that. That allows for communication with our cloud dashboard system, along with sending metadata about the video that we are capturing.

                                    The other thing that we've done is allowed for an intelligent streaming capability. You don't need any specialized management station or workstation in order to view this video. You simply need a web browser from anywhere in the world, and in conjunction with the web browser and our dashboard, we are able to determine if you are local to a network with that camera or if you're somewhere outside of that network. This allows us to dynamically bounce that video off of our cloud proxy systems to securely allow you to view that video and make configuration changes from anywhere. Also, instead of using racks of servers and NVRs to process that video, we're able to move that processing power out to each individual camera.

                                    This allows us to scale much more than with very expensive servers all within a central location. The idea is that the more cameras you add, there's not really any more burden put on the entire system. What does the Meraki camera lineup look like? All of our cameras are essentially a video surveillance system built in, all in one, in each individual camera. If we take a look at our MV12 series, we've got fixed focal length cameras with 128 or 256 gig of storage and processing power to handle all of those edge compute tasks. We then move to our MV22 series. These have varifocal lenses. These allow for 256 gig up to 512 gig of storage.

                                    What's great about these cameras is you've got flexibility from an installation standpoint because of the optical zoom that is included with these. The MV72 is very similar to the MV22, except it's hardened for harsh outdoor environments where you may have extreme cold, high heat, and even some areas where you might see additional condensation or vibration. Now the MV32, which rounds out our portfolio and is the newest of our cameras, is really, really exciting, not only because it's a fisheye, but because it allows some additional viewing capabilities. It's VR enabled and it also allows us to turn that camera into a digital pan-tilt-zoom camera.

                                    Think of this as being able to view everything within a specific area and not missing anything, and we can dewarp that video on the fly to allow you to naturally view that video instead of what you would normally see from a fisheye camera.

Kait Miller:                    Hey John, so we do have a question come in, and they are asking are there any storage options available besides the ones on the camera.

John Koebel:                 Absolutely, and that's a great question. Actually, next up here, I wanted to talk about our cloud archive. Besides the onboard storage that is available with all of our cameras, we also have the ability for you to offload that footage and that information to our cloud archive services, which we have in partnership with Microsoft Azure. This allows us to store in increments of 30 days, 90 days, 180 days, and all the way up to a year, and that allows us to not only store the video on the camera, but also up in Azure, and keep all the metadata and analytics that we're able to extract from that footage synced up across the entire timeframe of that license for cloud archive.

                                    Now what's not listed on this slide which is something that we've introduced recently this year is also the ability for customers to stream that video directly to a local storage device. You can configure within the cameras, the ability for another device to consume an RTSP stream, so that if you've got an existing video management system, if you've got an existing NVR that you would like to subscribe to individual camera feeds, we also allow you to do that as well.
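For readers who want to try the RTSP option described above, here is a minimal sketch of pulling a camera's stream with ffmpeg. The `rtsp://<camera-ip>:9000/live` URL shape and port are assumptions based on what the dashboard typically shows when external RTSP is enabled; confirm the exact URL for your camera in your own dashboard.

```python
import subprocess

def rtsp_url(camera_ip):
    # Assumed URL format; the dashboard displays the real URL per camera
    return f"rtsp://{camera_ip}:9000/live"

def build_record_cmd(camera_ip, out_path, seconds=60):
    # ffmpeg copies the H.264 stream for `seconds` seconds without re-encoding
    return ["ffmpeg", "-i", rtsp_url(camera_ip),
            "-c", "copy", "-t", str(seconds), out_path]

cmd = build_record_cmd("192.0.2.10", "clip.mp4", seconds=30)
# subprocess.run(cmd, check=True)  # uncomment on a host with ffmpeg installed
print(cmd[2])  # rtsp://192.0.2.10:9000/live
```

An existing NVR or video management system would subscribe to that same URL directly rather than going through ffmpeg.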

Kait Miller:                    I'm really stoked for this slide, John. Sorry to interrupt, but I think the flexibility here is just really cool, so just wanted to say. Okay.

John Koebel:                 Yeah. Absolutely, and what Kait is talking about here is the ability for us to be extremely flexible from a deployment standpoint. A lot of times, older security camera systems would be installed using specialized cabling such as coax or Siamese power cables. These are the older analog systems, and a lot of times it can be cost prohibitive to replace those systems with a new IP camera system because of the cabling costs, or the cameras might be up very high. You might be required for certain environments to have conduit, but you may have a situation where you've got existing power. With our cameras, every single one across the line has 802.11 wireless built in.

                                    We're able to configure those cameras initially to connect to at least two SSIDs. This allows for primary connectivity to a wireless network, and it also allows for backup connectivity to a network. Now what this affords us is the ability to install these in some interesting locations. If you've got parking lots where you've only got power on a pole, we can configure one of our DC adapters to provide power to the camera, and then allow those cameras to connect to a wireless network that is available. We've also seen a lot of recent interest in this for temporary site stand-ups, where you need to get a system or a location set up within hours or days, and usually the best and easiest way to get network connectivity to a location is using Wi-Fi.

                                    We can connect up to an existing Wi-Fi network, and then allow those cameras to reach out to the cloud, and then you can start viewing footage. Now next, I want to just reiterate that our cameras at their heart are a security camera. Okay. They've got all these awesome features for providing additional capabilities as a security camera. We've got video access logs. We can export footage from our cameras, and we can reorder that and build a timeline. A lot of these are really useful features of the camera as a security system, but we wanted to take it to the next level when we first started designing these cameras. Because people usually have so many cameras in an area, it would be really cool to have them do something else as well.

                                    Since our cameras have all this great processing power in them, we're able to harness that and use the camera as a sensor. Now what we're doing with these cameras is we're actually able to layer on machine learning algorithms that will be able to analyze the footage that a camera is looking at, and determine certain things about that scene. Then what we can do is we can provide that as metadata up to our dashboard to allow you to consume and provide additional context into what is happening in that particular field of view. Now to talk about a couple of examples of this, think of the idea of instead of just looking at this from a security camera standpoint, the cameras are able to detect several things.

                                    They're able to detect the light level that we are currently able to gather from a specific scene. We're also able to detect objects. Instead of just being able to tell you that there's motion within a scene, we can also tell you that well, that motion is a person, and we can detect the people that are within the entire scene or within specific areas of that scene. We're also able to do the same with vehicles. Now what this allows us to do is we can start providing this information as different data points and analytics, so the number of people, the number of vehicles. Think of this as if a vehicle is detected, we want to be able to allow for a snapshot to be taken, so that we can do something with that with other third-party services to actually analyze what's in that scene.

                                    This is what really makes a standard security camera something special. We're able to layer on these additional capabilities for you to consume when you're ready.
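The snapshot workflow described above (take a still when a vehicle is detected, then hand it to a third-party service) maps to a Dashboard API call. The sketch below only builds the request; the endpoint path and header name follow the v1 Dashboard API documentation at the time of writing, and the serial and key are placeholders — verify against the current API reference before relying on it.

```python
import json

API_BASE = "https://api.meraki.com/api/v1"  # v1 Dashboard API base URL

def snapshot_request(serial, api_key, timestamp=None):
    # Build (url, headers, body) for a POST to the camera snapshot endpoint.
    # The response (when actually sent) contains a URL to the captured image.
    url = f"{API_BASE}/devices/{serial}/camera/generateSnapshot"
    headers = {"X-Cisco-Meraki-API-Key": api_key,
               "Content-Type": "application/json"}
    body = json.dumps({"timestamp": timestamp} if timestamp else {})
    return url, headers, body

url, headers, body = snapshot_request("Q2XX-XXXX-XXXX", "your-api-key")
print(url)
```

A motion or detection webhook would typically be the trigger that invokes this request.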

Kait Miller:                    Okay, here is our next question. The most important thing to me with a smart camera deployment is, and if you can go ahead and just drop your answers into the ask a question box at the bottom of your screen. Like I said early on, if there's time at the end, we can review some of these, but we're very interested to hear, are you looking for security, are you looking for analytics? Let us know.

John Koebel:                 All right, and where we take this to even the next level is with Meraki MV Sense. This is an additional capability that you can add to your cameras, and this allows us to go much deeper and be much more proactive with what's happening in a scene. We allow and open up some additional API capability that allows you to query our dashboard and get information such as how many people did you detect or how many vehicles did you detect at a given point in time, or what was the average. We can also do that in more real-time, which is tell me how many people are in this particular zone right now. That could be useful if you're trying to determine how many people are in line at a register, how many people are maybe unattended in a specific area in a showroom, how many cars are in line at a drive-through.
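The "how many people are in this zone right now" query described above corresponds to the MV Sense live analytics endpoint. The response shape below (a `zones` map keyed by zone ID, with zone `"0"` as the full frame) reflects the documentation at the time of writing; the zone ID and counts are made-up sample data.

```python
# Sample shaped like a GET /devices/{serial}/camera/analytics/live response
sample = {
    "ts": "2020-06-01T17:00:00Z",
    "zones": {
        "0": {"person": 4},          # zone "0" is typically the full frame
        "715230032": {"person": 1},  # a user-defined zone, e.g. a register line
    },
}

def people_in_zone(live, zone_id):
    # Missing zones or counts read as zero rather than raising
    return live["zones"].get(zone_id, {}).get("person", 0)

print(people_in_zone(sample, "0"))           # people anywhere in the scene
print(people_in_zone(sample, "715230032"))   # people in the register-line zone
```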

                                    What we're also able to do is send this information in real time, sub-second, to a broker, and this broker could be something such as a building control system. This could be a life safety system, so we can give you a constant feed of information such as: this is the light level, this is the number of people that we have detected within a zone, this is the number of people that we have detected overall within a scene. We can even provide that information with XY coordinates with respect to the field of view, and that allows for some additional capabilities and computations even around directionality, and the ability to create an array that could explain the movement of someone as they cross through the scene of one of our cameras.
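The directionality computation mentioned above can be sketched from those per-message XY coordinates: track a detection's bounding-box centroid across MQTT messages and compare start to end. The normalized box coordinates and the 0.05 movement threshold here are illustrative; check the MV MQTT documentation for the exact payload schema your firmware emits.

```python
def centroid(box):
    # box is (x0, y0, x1, y1) in normalized 0..1 field-of-view coordinates
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2, (y0 + y1) / 2)

def net_direction(track, threshold=0.05):
    # Net horizontal movement of a sequence of bounding boxes over time
    (sx, _), (ex, _) = centroid(track[0]), centroid(track[-1])
    if ex - sx > threshold:
        return "left-to-right"
    if sx - ex > threshold:
        return "right-to-left"
    return "stationary"

# A person crossing the scene from left to right over three messages
track = [(0.10, 0.4, 0.20, 0.9), (0.40, 0.4, 0.50, 0.9), (0.70, 0.4, 0.80, 0.9)]
print(net_direction(track))  # left-to-right
```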

David Owens:               Thanks very much John. David here from EveryAngle, and just to pick up on the really comprehensive overview that John's provided there, talking about the existence of the Meraki API and the MQTT broker, as a technology partner, what EveryAngle does is we use the API to extract data off all of the different sensors onboard the MV smart cameras, and we then take this information which can be a mixture of raw data or images. We then process this using additional proprietary computer vision algorithms, with the view to being able to deliver specific outcomes, answer specific questions that are relevant to you in the particular nature of the business or organization that you have.

                                    What you're seeing on screen here is really just a simplified visual representation of how we do that, starting at the left and moving to the right, where you can see the MV device. We have the MV Sense API, which is what is letting us extract the information. We're taking anonymized metadata. That's very important to note in terms of being data-privacy friendly. We're then processing this, as I said, through one or more different machine learning algorithms, whether that's our own proprietary algorithms or other third-party algorithms on an as-needed basis, and then off the other side of this, what we're delivering back is a series of productized applications that deliver very specific outcomes.

                                    As an end customer, what you get to consume from those applications is a mixture of real-time alerts, business intelligence dashboards, virtual assistants, as well as integrations with other third-party systems. Kait, not sure if you want to jump in and say anything at this point on this slide or like keep on powering on.

Kait Miller:                    Yes, so we are going to discuss here some of the approach to how we take a look at use cases that our customers are looking at, and how that leads us into the business outcomes that they are looking for. So this is actually a real-world example, a picture that I took Monday evening around 5:30 p.m. I went out shopping and upon entering the location, I saw this sign. I noticed that it had been updated as of noon that day, so five and a half hours earlier, with some manual effort going in here. It really was just one of those moments where I stopped and thought, this is exactly why we need to rely on technology to automate some of these things, right?

                                    When we're looking at a use case in the current environment we're in, with the pandemic and physical distancing and limiting the amount of people that we have in a given location, how can we do that in an automated fashion and track those numbers? But also, once you have that data, what's the best way to apply it so that it's actually usable and consumable for the customer? Not only do we need accuracy on the data that's being collected, but we need accuracy on the data that we're showing to our customers and to our employees as well. I think after five and a half hours, those numbers are probably just a little out of date there. We also want to take a look at the camera density.

                                    There may be a very good way to accomplish an outcome, but if it's going to require 200 or 300 cameras, maybe there's a better way that we can do that. We can look at deploying fewer cameras, simplifying that solution, and still being able to have a very significant impact on that outcome that you're trying to achieve. That leads us into deployability as well. If you have a ton of cameras and a highly complex system that you're looking to deploy, it's going to happen very slowly versus a simple deployment with fewer cameras and less complexity is going to be very rapid. By the same token, that's going to give you a better return on your investment as well.

                                    Again, we want to reduce the amount of hardware, reduce the complexity, make sure that these things can be up and running as quickly as possible, as simply as possible, and delivering those outcomes to you as well.

David Owens:               I think Kait, one of the other things, just to jump in there, that occurred to me after you shared that image: there is having a solution in place which technically helps keep people safe, but there is then also people's perception of whether they actually feel safe or not. You could have the whiteboard example there where somebody is every 30 seconds wiping out the number and writing in the new number, but people aren't going to feel safe from that. It's not just a question of delivering something that works. It's also being very transparent with customers and colleagues so that they have confidence that they can actually see a solution in operation as well.

Kait Miller:                    Yeah, that's a great point. I think with the pandemic response and with limiting occupancy, it really comes down to two things you and I have talked about often, right? It's employee safety and customer confidence, so exactly what you just said. You need to deliver that feeling, so that they have the confidence that you are taking the appropriate steps, both for their safety and for their employees' safety as well. Once we move on beyond this pandemic response, and I do believe we'll be on the other side of this hopefully sooner rather than later, we want to ensure that this is a future-proofed investment. There's quite a few other analytics that we can capture, some insights that we can capture.

                                    Just as an example, some accurate people counting, looking at and understanding wait times in line, how often are people spending a lengthy amount of time in front of a different display or a kiosk, capturing your customer's emotions as they engage with certain areas of your locations, some predictive analysis, and also that security that we discussed. As we move forward, we're going to talk about different applications that can give you these insights, and then the business outcomes that those insights deliver.

                                    You're looking at dynamic allocation of resources, and when you get into some of the predictive analysis, you can take a look and say, "Hey, every third Thursday of the month, we need to reduce staff because for whatever reason, the customers aren't coming in as often on Thursdays, but on Fridays, we need to increase staff." You can start to do that in an automated fashion. You can look at how successful a marketing campaign was. We can take a look at customers' emotions as they engage with a self-service piece of equipment within a location and determine how you can improve it, so that you can deploy it further. And then there's risk assessment.

                                    Really what it comes down to from a risk assessment perspective is we are establishing a baseline of data, and then we are alerting anytime there's an outlier outside of that baseline. That outlier could be good and it could be something that you want to replicate, or that outlier could be bad and it could be something that needs to be addressed. All right, so now we're going to get into these use cases and how customers are applying this technology. I think this is the fun stuff, but just before we get started, I'd like to hear from you again. You can drop those responses into the question box at the bottom of your screen. True or false: my colleagues and customers feel safe and protected returning to public places.
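The baseline-plus-outlier alerting just described can be sketched with a simple standard-deviation rule. The sample counts and the three-sigma threshold are illustrative choices, not values from the talk.

```python
from statistics import mean, stdev

def is_outlier(baseline, value, z=3.0):
    # Flag values more than z standard deviations from the baseline mean
    m, s = mean(baseline), stdev(baseline)
    return abs(value - m) > z * s

daily_counts = [210, 195, 205, 220, 199, 208, 215]  # a week of entry counts
print(is_outlier(daily_counts, 212))  # a typical day -> False
print(is_outlier(daily_counts, 420))  # flag for review -> True
```

Whether a flagged day is "good" (replicate it) or "bad" (address it) is the human judgment layered on top of the alert.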

David Owens:               Great. Thanks very much Kait. As I mentioned a few minutes ago, and Kait's right, this is the fun stuff, right? This is the cool stuff, but it's all built upon the Meraki MV platform, and we could not do this without MV. I mentioned a few minutes ago that we have a series of productized applications, so applications that are designed to deliver specific outcomes that you can deploy rapidly, as Kait mentioned, with fixed cost. You can do it whether it's on one camera, or on every camera in one location, or in a hundred locations. Looking at physical distance control, this is an application that was first brought to market really within the past two months to try and help with the COVID-19 return-to-work and return-to-operation challenge.

                                    Essentially, what we're doing here is we are, with precision, counting people in real time and recording this against a maximum safe occupancy in a space or a place. What we're doing there is taking the overall square footage or the size of the building or particular physical entity, and we're then using the ordinance or guidance in place where you are, whether that's 6-foot separation between people. We're using that to calculate a maximum safe occupancy, and let's say, for example, that's 50 people. Then what we're doing is counting people in real time, and you can see an image here where you've got multiple different MV cameras from a real-life retail store, where we are automatically excluding any double-counting.

                                    Even though you may have two cameras overlooking the same person, we're only taking a count from one of those. We're delivering that with 98%-plus accuracy, and what that lets us do is then be able to report accurately in real time, unlike that whiteboard marker example that Kait took a picture of: report in real time the total number of people that are in that space, and whether that is above or below that max safe occupancy level.
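The square-footage-to-occupancy step described above can be sketched as arithmetic. The one-person-per-36-square-feet divisor (a 6 ft x 6 ft square per person) is an illustrative assumption; the actual divisor should come from local ordinance or guidance.

```python
import math

def max_safe_occupancy(floor_area_sqft, separation_ft=6.0):
    # One simple model: one person per separation x separation square.
    # The divisor is a policy choice, not a fixed rule.
    return math.floor(floor_area_sqft / (separation_ft ** 2))

print(max_safe_occupancy(1800))  # 1800 sq ft at 6 ft separation -> 50 people
```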

Kait Miller:                    How are we helping customers apply this? Again, not on a whiteboard. Some of the things that we at WWT are seeing work for customers: first, there's a really low-tech solution, just creating a directional flow within a location. If you have an entry and an exit door in the same spot, or maybe there's one on each side of the location, use one for entry, one for exit. You have a strict entry area and a strict exit area. Utilizing tape on the floor or on the walls, whether it's a retail location, an office building, manufacturing, warehouse floor, any of those types of situations, creating directional flow keeps people moving one way to reduce the amount of times that they cross each other's paths.

                                    The next thing is digital signage, so again automating those numbers instead of writing them on a whiteboard. We can have that updating constantly. Then there's smart speaker integration. What we see here is it's really an interruption. It's an alarm, it's an alert. It's something that audibly draws the attention of people to the digital signage. When we go back and review the video, we actually start to see that this interruption draws a person's attention not only to the digital signage, but to all of the other visual cues within a location. Without that interruption, a lot of those visual cues might be overlooked. A lot of people are like me, with their heads in their cellphones as they're walking around, so that interruption and that speaker integration becomes very important.

                                    We can also integrate this data into your mobile app. You might have a customer that might want to go on to your app and check and see what your density level of your store is before they head over there. I think that could be really powerful for customers as well, is that integration with the mobile application, and we can do the mobile application across any of the apps that we're going to discuss today, but I think you can have a really big impact with the physical density controls application as well.
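The signage integration described above reduces to folding entry/exit events from the cameras into a live count and a go/stop state. The event format and the max-occupancy figure here are illustrative.

```python
def occupancy_state(events, max_occupancy):
    # Fold a stream of "in"/"out" events into a live count and the state a
    # digital sign (or mobile app) would display
    count = 0
    for e in events:
        count += 1 if e == "in" else -1
        count = max(count, 0)  # guard against missed entry events
    status = "STOP" if count >= max_occupancy else "ENTER"
    return count, status

events = ["in"] * 23 + ["out"] * 5 + ["in"] * 2
print(occupancy_state(events, max_occupancy=20))  # (20, 'STOP')
```

The same count could feed a smart speaker trigger or a mobile-app density indicator.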

David Owens:               Perfect. Thanks for that Kait, and actually one real-world insight to share from a number of deployments of that application: you may have a space that you're setting at a maximum occupancy of, say, 20 people, for example. Getting back to this point as to whether people feel safe, ceiling height has a big impact upon perception of whether a place is crowded or not. You could have the same exact floor area in two different facilities and two different buildings. If you have a higher ceiling in one than the other, people will actually feel as though it is less occupied.

                                    Actually having something, whether it's digital signage or smart lights, to show an all clear, to show that it is green, even if you have lower ceiling heights, just to help with that reassurance, simple visual cues like that can actually be very, very powerful. Another example of an application that we have is next-generation footfall. This is a demographic footfall analysis tool. Essentially, it's designed, much like physical density control, to have MV cameras deployed at points of entrance or points of exit.

                                    What we're doing is we're replacing the need to use traditional footfall counting technology, like infrared beams, pressure mats, thermal imaging cameras and instead, we're recording people entering using a high precision counting, but we're also anonymously analyzing for emotional state, for gender, for age, generation, object detection. What's interesting about this is if you have a look at deploying this application on just one MV device at an entrance, that gives you rich profile information about who's entering.

                                    If you then have a look at deploying that application elsewhere, either in store in a retail environment or hospitality or sports or entertainment venue, this is where you can then start to understand impression and engagement, what people are interacting with, what they're avoiding, all the way up through to point-of-sale and so on.

Kait Miller:                    As far as applying some of this information, we can take a floor plan, and we can take a look at the most intriguing areas that we want to look at. We can look at self-service areas, and we can adjust the camera so that we are picking up the emotion and the demographic of the user of that self-service area. You can start to identify maybe millennial women are using the self-service area far more often than other demographics or other age groups are using that. You may start to see that different age groups are having a less desirable experience with that self-service area. You can make adjustments to those items, so that you're ensuring customers are happier with them.

                                    Dwell times by zone. If you're running a specific marketing campaign in a specific area of a location, we can take a look at the dwell time of customers in that location. If they're approaching it and exiting very quickly, maybe that marketing campaign needs to be adjusted. If it's increasing your dwell time and the time customers spend in that area, that may be something that you want to run again, as it may have been very successful. We can look at wait times in line, and again these are things that we can expose with your mobile app integration. We can do an integration with a POS system. Again, you can start to look at the demographics of persons that are buying specific objects within your location. And then there's predictive analytics.

                                    This is what's really cool is after about six to eight weeks of data is collected, we can start to get very predictive with some of this information, predicting how many people are going to be shopping in a store on a given day of the week, a given time of the day, and allow you to make some really smart decisions about staffing levels, especially in the current environment, but this is important beyond the current pandemic as well, being able to really adjust staffing levels based on predicting how many customers are going to be in a store on a given day. Then you can tie some of those predictive analytics into the marketing campaigns as well. We can become very precise with these integrations as well.
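A deliberately simple stand-in for the day-of-week staffing prediction described above: average historical visitor counts per weekday. Real deployments would use richer models and far more data; the numbers here are made up.

```python
from collections import defaultdict
from statistics import mean

def forecast_by_weekday(history):
    # history: (weekday, visitor_count) pairs collected over several weeks.
    # Returns the mean count per weekday as a naive forecast.
    buckets = defaultdict(list)
    for weekday, count in history:
        buckets[weekday].append(count)
    return {day: mean(counts) for day, counts in buckets.items()}

history = [("Thu", 120), ("Fri", 300), ("Thu", 130), ("Fri", 320)]
print(forecast_by_weekday(history))  # {'Thu': 125, 'Fri': 310}
```

A forecast like this is what would drive the "reduce staff Thursday, increase staff Friday" decision in an automated fashion.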

David Owens:               I think what's really interesting about that and we can touch on here in the warehouse intelligence app as well Kait is all the market research shows, I think it's, yeah, more than 76% on average, 76% of customers in store will abandon a queue that they perceive to be longer than five minutes in wait time. Understanding dwell times by zone if that zone is over a queuing area can have a huge impact in terms of opportunity cost on revenue, but then also looking at particular key areas of interest. There's a large federal customer as you know that we're both working with and for them, what they're very interested in is understanding as they deploy more automated kiosks in different stores, what's the level of engagement with the view to understanding do they deploy more of these elsewhere.

                                    There is real meaning that can actually be brought to help the customer and the organization make better decisions.

Kait Miller:                    Yeah. I'd have to say I'm one of those that will abandon a queue that I think is going to be more than five minutes.

David Owens:               Yeah. I think at five minutes, you're being patient. I don't know if I'd even wait five minutes, right? That's just the nature of the way things are. Being able to report on that and understand how that varies across your network of locations I think is very valuable. On the warehouse intelligence side of things, there are a couple of key points to point out here. This is an application obviously that we've labeled as warehouse, but it is highly relevant for construction, logistics, and manufacturing. Essentially, what you're seeing here on the right-hand side of the screen is around velocity.

                                    It's around understanding the movement of people and objects in the space, whether that's trying to understand whether staff members are having to walk three or four warehouse aisles away from the picking and packing area for fast-moving goods that should really be stored in the first one or two aisles closest to those areas. It's really time and motion and saving money; it's around efficiency. Then on the left-hand side, what you're seeing here is just a couple of collated points to give you an idea of some of the capabilities we can bring to bear. From a health and safety standpoint, it could be around detecting whether individuals are using cellphones, for example, when they're inside the warehouse area, because they may be distracted.

                                    They may not see oncoming vehicles, so that could be an important health and safety outcome to deliver. Then with regards to security, a number of warehouse operators that we work with take on seasonal staff at different times throughout the year, so they don't know all of their colleagues by face. They may have a policy where they ban staff or prohibit staff using personal bags as an anti-theft measure. Being able to automatically identify any individual who's carrying a personal bag as a preemptive way of identifying potential theft again can be very valuable in terms of loss prevention, shrinkage, et cetera.

Kait Miller:                    Applying some of this intelligence, we can help you integrate this information with your access control system. The warehouse intelligence app is able to do some object detection here, as David mentioned, and one of the things we can do is set safety protocols for any personal gear a person on that floor may need to be wearing. If they're supposed to have on a helmet, a reflective vest, and boots, we can tie right into access control when they badge in. The camera checks for all of these safety items and will actually deny access until they have the appropriate safety gear for work. We can also integrate this with any Wi-Fi analytics or contact tracing data that's already being captured, and we can enhance that contact tracing because again, these are cameras.

                                    They're sensing all these objects, but they're cameras at their heart. You actually will have video evidence of any contact tracing events. You can go back and review that video and see how closely two people passed by each other. We can do alerts for a break in safety protocol. Let's say there is a piece of machinery that should have three people using it, maybe one person driving and two people directing traffic. If only two people are attempting to use that machine, we can send an alert to a floor manager and ensure that all safety protocols continue to be followed.
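As a rough illustration of that last scenario, the alerting rule could look something like the following. This is a sketch only: the zone names, the shape of the per-zone person counts, and how the alert reaches a floor manager are all assumptions, not a documented API:

```python
def crew_alerts(zone_counts, required_crew):
    """Flag zones where machinery is in use with fewer operators than required.

    zone_counts: current people count per camera zone, e.g. {"forklift_bay": 2}
    required_crew: minimum crew per zone, e.g. {"forklift_bay": 3}
    A count of zero means the machine is idle, so no alert is raised.
    """
    alerts = []
    for zone, count in sorted(zone_counts.items()):
        need = required_crew.get(zone)
        if need is not None and 0 < count < need:
            alerts.append(f"{zone}: {count} of {need} required operators present")
    return alerts
```

A real deployment would feed `zone_counts` from the camera's people-detection output and push each alert string to whatever notification channel the floor manager uses.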

David Owens:               I think one of the things I'd also just call out there, Kait, that we've discussed many times, is that there's a real conscious effort collectively on the part of WWT, Meraki, and EveryAngle not to bring solutions to market that, while they may deliver very good outcomes, are very complex or time-consuming for a customer to deploy and use. With all of the applications that we've been showcasing today, you can literally get an MV device, deploy it, have one of these applications live within a couple of minutes, and start getting value from that application. In this context here with warehouse intelligence, or detecting personal protective equipment, that can mean you just get an alert.

                                    There's a violation, somebody's not wearing the hard hat or the high-vis vest, but then, as Kait said, you can lever up on that value by taking it to the next level and integrating with door access control. The key point is you get to choose how to deploy the application in the way that makes the best sense for you. It doesn't have to be overbearing. You can deploy it very quickly, but you can continue to go back and lever up more and more value over time.
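The badge-in PPE check described above reduces to a simple gate once the camera has classified what the person is wearing. Here is a minimal sketch; the item labels and the return format are illustrative assumptions, and the real integration would sit between the camera's detections and the door controller:

```python
def ppe_gate(detected_items, required_items):
    """Decide whether to release the door based on detected safety gear.

    detected_items: gear the camera classified on the person at badge-in.
    required_items: site policy for that door, e.g. helmet, vest, boots.
    """
    missing = sorted(set(required_items) - set(detected_items))
    return {"access_granted": not missing, "missing": missing}
```

If `missing` is non-empty, the door stays locked and the list can be shown on a display or routed to the person's supervisor.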

Kait Miller:                    Yeah, and actually, we did get a question in. It said, "Earlier, lux lighting levels were mentioned. Is that for fire detection, and if so, is it better or faster than a smoke detector?" Go ahead on that.

David Owens:               Do you want me to have a crack at that one?

Kait Miller:                    Yeah, sure.

David Owens:               Okay. I think to be clear, when we're talking about fire detection using MV and EveryAngle, it's not designed to be a replacement for your mandated regulatory smoke alarms or fire control systems. The genesis for it was a large utility company who had said, "Look, we've got lots of locations that are unmanned, or we have fire control or fire alarm systems that are not monitored." Generally speaking, what happens is when we get some form of an alert, we have to send a human being, typically in a car, to that location, and one of two things happens. Either it's a waste of their time, which is a waste of resources, or there is actually an issue there, and we've put them in harm's way.

                                    Being able to use the fire detection application as a means of triangulation and confirmation, being able to say, "Okay, some generally analog system has sent an alert and is saying, right, we think there is a fire here, and then we can in real time report back and say yes there is, or no there isn't, beyond a certain level of confidence," that can be a great time saver, but it's also something that helps promote safety in terms of not putting people in harm's way. I can see we've got one other question there, Kait, just coming in. Are we seeing any differences between Europe or other countries versus the US as it pertains to popular go-back-to-work solutions, any practices from other regions that we can learn from?

                                    I think just to take a moment on that, honestly, my summary view on that would be that there's a great willingness amongst customers, irrespective of geography, to invest resources in a technology solution that will meaningfully help keep people safe. By people, I mean customers and colleagues and students, et cetera. But I would say there is a real reluctance, definitely a real reluctance, to invest in any solution that is really just a point solution: it's only going to do one thing, and it cannot be leveraged to do anything else.

                                    That was one of the things we were conscious of when we were thinking about the session today: trying to make it very clear that if you make an investment in Meraki MV and EveryAngle, then collectively with WWT, what we will actually be delivering is a platform. It's a platform where you can get meaningful, valuable outcomes on day one, but there's no real hard limit to the value you can continue to get from it. I think that would be my primary point on that. It's not a point solution; it has to be multi-purpose. Speaking of which, looking at suspicious person detection, I suppose this is a good example of that.

Kait Miller:                    Let me just chime in just real quickly a little bit on that question as well.

David Owens:               Sure.

Kait Miller:                    Yeah. We're hearing very, very similar responses here as well, whether it's in the US or elsewhere in the world. I think as WWT, one of the things that we're really focusing on is helping a customer narrow down all of the processes that they need to put in place to adhere to either CDC guidelines or state and local guidelines, and anything that's being put in place across all verticals. Whether it's a singular office, a shared office space, a warehouse, a retail store, or even a stadium, what do we need to account for? We want to create a list of essentially all the processes that account for those things, and then we go through and see where technology can have the biggest impact on improving, automating, and giving you ROI on each process.

                                    That will also again go beyond the pandemic and continue to deliver return on investment and business outcomes as we move past this. We're really working in that consulting space to help customers wrap their heads around all the things that they need to do.

David Owens:               Yup, and I think a good way to make good on that is by demonstrating a number of different outcomes, as we've spoken about, that can be delivered out of the box, but then there's obviously also the capability to do lots more beyond. We really are only scratching the surface here, but it's just to try and get you excited and interested and thinking about the whole operation of your organization, and how you could potentially apply computer vision, through Meraki MV and EveryAngle with WWT's expertise, to help improve many different aspects of your business.

                                    One of the applications we see is suspicious person detection. There's been fantastic interest in this in education, K through 12 and higher ed, also local government, and retail of course as well. This essentially is an application deployed on the exact same MV device that you will have at a door doing your next generation footfall, your demographic footfall analysis, your physical density control for COVID-19 return to work. It's the same device, different outcome, and what this does essentially is detect any individual who's wearing any object that conceals their face, or carrying any obvious unconcealed weapons.

                                    You'll be able to toggle the objects that you're searching for and classifying as suspicious on or off on a per-camera basis, and I'll just move on to the next slide here in terms of actually applying this.

Kait Miller:                    Yeah, absolutely. When we take a look at this, we can integrate this solution with virtual assistants to alert the different persons within a location that may need to be aware of it. We've actually seen some customers that have a close relationship with local law enforcement tie these applications into an automated response from that local law enforcement department. We can tie in and automatically initiate a lockdown protocol based on how suspicious the detection is for a given person, and we can also integrate this with alarm systems. Not only is the lockdown protocol automatically initiated, but we can also set off a series of alarms and interruptions that can potentially stop or interrupt the person that's coming in to potentially do some harm.

David Owens:               License plate recognition, and beyond license plate recognition, is another really, really valuable area to have a look at in terms of what we can do. John touched on this earlier with vehicle detection and the native capabilities within MV to detect vehicles, and what we're talking about here is taking that to the next level, the next order of magnitude in terms of value. What you're seeing here is a floor plan that was created for a large carpark, in this case for a healthcare facility. There can be a real challenge in large campus environments trying to identify vacant car parking spots and in the travel time it can take for individuals to find them, and in some cases, missing appointments, or in a retail or hospitality context.

                                    It can just be general frustration. What we're able to do is essentially use optical character recognition to effectively turn the characters on a license plate into machine readable text, and then, using virtual assistants like Kait showed in the previous slide, you can query that information on demand. I know Kait's going to talk more about the application of this in a moment, but I think a key point to note here is that this capability in the application is not limited to just reading the license plate. It's also any other information within the scene of the camera that we can turn into machine readable text, and that can be information that is actually on the pavement.

                                    It could be signage. It could be telling you you're in the green zone or the red zone in the carpark, et cetera. There's an awful lot more you can do there to help contextualize where the license plate information is actually being read from.

Kait Miller:                    One of the use cases that we see for this license plate recognition is really a way to enhance curbside pickup. It's very true that many retail locations had curbside pickup capabilities before COVID-19, but we've really seen that expand: just about any store that sells something is now offering curbside pickup. Every one that I have engaged with and every app that I researched online actually requires a manual effort from the customer as they pull up to obtain their goods. Either they're going to pull up and see a sign that says, "Hey, call this number and let us know you're here," or they're going to have to take out their phone, log in to that retailer's app, let them know that they've arrived, and then wait for their goods.

                                    Where we see license plate recognition enhancing and automating this is when a customer arrives, we can detect the vehicle, read the license plate, and then integrate that into a queuing system so the employee can see something like the view on the right-hand side there. They're going to see a visual image of the vehicle. Our family vehicle is a Kia Telluride, and often we get "What's that?" when we pull up to pick up our order. There's a visual reference of what the vehicle looks like and the license plate on there, so that when the goods are brought out, there is a way to confirm very specifically that the person who paid for those goods is receiving them, and not somebody else by mistake.
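The arrival flow just described could be sketched roughly like this. The plate normalization, order lookup table, and queue structure are all illustrative assumptions about how an integration between the LPR output and a store's order system might look:

```python
def enqueue_arrival(plate_text, pending_orders, pickup_queue):
    """Match a recognized license plate to a pending curbside order.

    plate_text: raw OCR output from the camera, e.g. "abc 123".
    pending_orders: pending orders keyed by normalized plate.
    pickup_queue: list the staff-facing screen renders in arrival order.
    Returns the new queue entry, or None if the plate matches no order.
    """
    plate = plate_text.upper().replace(" ", "")
    order = pending_orders.get(plate)
    if order is None:
        return None  # unknown vehicle: fall back to manual check-in
    entry = {"plate": plate,
             "order_id": order["order_id"],
             "vehicle": order["vehicle"]}
    pickup_queue.append(entry)
    return entry
```

The `vehicle` field is what would let staff confirm, say, a Kia Telluride against the captured image before handing the order over.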

                                    Another application of this that has come up recently: with folks not wanting to go into stores, there's a lot more drive-through takeout. One restaurant in particular was getting some complaints from the neighborhood about littering. We're talking to them about integrating this with a printer, so that they can print the license plate onto the bag of that to-go order to try to discourage and reduce the littering. So we've got an environmental use case for this as well. What's next? We would like for you to go to wwt.com, and if you create an account, you will be able to download a Meraki Smart Camera Quick Guide, a two-pager that's really cool and has a lot of great information in there. You can also request a briefing.

                                    If you have more interest in the Meraki Smart Cameras, you can use this second link here to request a briefing on wwt.com, and we will provide you with an in-depth, deep-dive demo of the Meraki dashboard and how to utilize the cameras in it. There's some additional reading at wwt.com as well that you can take a look at. That wraps us up for today. I just want to thank everybody for attending, and thank those of you that were able to submit questions. We're really limited on time, but we really appreciate everybody that was able to submit answers to the questions we posed throughout, and we look forward to speaking with you again soon. Thank you very much.