
TEC37 E12: Key Trends in Storage & Data Protection


This episode of TEC37, in partnership with Dell Technologies, covers the key challenges that customers face today related to storage and data protection, considerations for choosing the right products, and a recommended approach to addressing holistic infrastructure needs.

Please view transcript below:

 

Rob Boyd:                     Well, welcome to TEC37, the podcast covering technology, education and collaboration from Worldwide Technology and, today, Dell Technologies. My name is Rob Boyd. Today's organizations are nothing without their data. And as we've seen, a lot of change can happen in a really short period of time. As we're all doing our best to plan for what is a very uncertain future, there are emerging trends along with some new challenges in the storage and data protection space.

                                    Today, we have experts from Dell Technologies and Worldwide Technology, all with diverse experience in data storage, protection and resilience. And so, gentlemen, it is good to see you. I appreciate you joining us. Let's start with some introductions if you don't mind. Emmett, I'll have you lead us off. What's your full name? What do you do? What are you responsible for?

Emmett Kaczmarek:     Yeah. Thank you. Emmett Kaczmarek. I'm the Global Director of Presales Strategy for the Data Protection Division here at Dell Technologies.

Rob Boyd:                     Excellent. Excellent. Andrew, how about you?

Andrew Braverman:     Thanks, Rob. Andrew Braverman. I lead presales for Dell Technologies' unstructured data solutions for the Americas.

Rob Boyd:                     Excellent. Thank you. And Dom from Worldwide Technology.

Dominic Greco:             Dominic Greco with Worldwide Technology. I work on our Global Engineering Team, specializing in data protection.

Rob Boyd:                     Excellent. And then finally, but of course I don't mean this in any other way, Todd, how are you doing?

Todd Bolton:                 Thank you. My name is Todd Bolton. I also work on the global engineering team and I specialize in primary storage covering Dell Technologies.

Rob Boyd:                     Excellent. Okay. So we do have a good split then on kind of the data protection side and storage and where those things will come up. So let's start with you Emmett, just kind of lay the groundwork here. How would you describe the current environment? Which includes, I mean, really our current environment, like since March, maybe. With regards to storage and data protection, what's the best way to characterize it for today's conversation?

Emmett Kaczmarek:     Absolutely. Yeah. So if you look at what's been happening in the industry as a whole, organizations' digital transformation is really being shaped by the current times we're living in. And there's this huge push now with data being distributed to the edge more than it ever has been in the past. And there's the need not only to support all the new work-from-home requirements that organizations have, but also to protect that data and to be able to identify when the data has shifted and it's no longer being accessed in the appropriate manner.

                                    There's this security aspect that comes along with this to make sure that as we are pushing that to the edge, as we are being able to stand up and support the people at the edge that we're able to also protect that data and identify when the change rate has skyrocketed or when the file name no longer... Or the file type no longer matches the extension. So it's not just about being able to stand up and support it and protect it, but also recover in the event that there is some sort of a ransomware breach, or there is some bad guys that are getting into the network holistically.

                                    The other big thing that we're seeing is a shift here as organizations are really looking to make sure that they can stand up and support that consumption-type model as they're going into this digital transformation. They're going through application rationalization, identifying: does it make sense for us to rehost the application? Does it make sense for us to replatform, or refactor, or even go out and just straight up repurchase and move it into a SaaS model? And so as they're going through this, they're making sure that they're able to not just stand up and support those environments, but also protect them across the board. Simultaneously, though, they still have the traditional applications that they're going to need to support as well. So there's this need on both sides to be able to provide proven technologies while also providing modern innovation as customers are embarking on these journeys holistically.

Rob Boyd:                     You know, when we were talking earlier, there was a term that you used that I thought made a lot of sense in this conversation. For one, I like the fact that you guys wanted to talk about the relationship between data protection and storage as an overall focus on resilience. To me, being someone who overly focused on security in the past, it feels like there's a distinction between simply trying to protect and assuming that you're always going to be successful with that, versus facing the reality that things are going to happen, and you need to be thinking in terms of your knowledge that something has happened and then recovery to an active state. Because it feels like, as I mentioned in the open, organizations are so much more dependent on their data than ever.

                                    I mean, literally you can disappear with your data. So I'm curious kind of, as we look at storage specifically, could one of you weigh in on what's happened with the change in storage? Let me do it this way. Andrew, we were joking earlier because I made a joke that you had been focused on unstructured data for quite a period of time in your background. And I was upset that we haven't solved that problem yet. Why is it still unstructured? Seems like we would have structured it by now.

Andrew Braverman:     And it's actually getting worse, right? You talk about businesses that are really built around their data. You know, we've had this idea of data capital, data as capital, and the fact that the business data that our customers have is fundamental to their business. It is so important for them to be able to capture the data and monetize it. So, to your question of why we haven't wrangled this: well, unstructured data is actually getting less wrangled. You look at the data growth across the industry, greater than 80% of new data is unstructured.

                                    And when we say unstructured, that means it doesn't fit into a database. Traditionally, that means it's going to be stored in a file system as files or stored in an object system as objects. But it's increasingly meaning things like streaming data. How do we record what's going on, learn from those recordings as we play them back, and then actually start to act on data in real time or near real time? The challenge of course is that it's so widespread; there's data coming from everywhere, right? So what's unstructured data? It's genome sequences, as we're trying to figure out how to combat COVID. It's creating movies. You know, recently I had a very interesting conversation with one of my colleagues: how do we deal with creating movies in this time when you can't put all these actors in the same place?

                                    So movie studios are looking at using things like the Unreal Engine that powers Fortnite to make movies with individuals sitting in the same... They look like they're sitting in the same scene, but they're physically completely separate from each other. All of that video data, that's all unstructured. All of the tweet data, all of the sensor data that's coming off of systems in our factories and whatnot. But I think the most relevant one that we have to our own lives today, as we try to open the economy back up, is how do we start looking at individuals as they're moving through places and determine sick versus not sick? That kind of binary approach of what that looks like. And we've worked with a lot of ISVs on building video surveillance for a long time. In fact, video surveillance data is absolutely unstructured.

                                    So we work with ISVs that do that. But most of the ISVs, and a few in particular, are starting to take advantage of thermal imaging cameras, which are not anything new, but thermal imaging cameras that are accurate within half a degree. Which means now, instead of saying, "Go scan that person with a thermometer as they enter the store," we don't have to do that. We can look at the cameras, and we can have software and AI look at that data and determine, "Well, should I allow this individual into my store or my business or not?" And then we can capture and learn from that data going forward. And of course, we'll have to keep that for a long time for any kind of future concerns or litigation that might come from any of that.

Rob Boyd:                     See, that's interesting. And I want to get Todd to weigh in on this, because although I enjoy this part of the conversation, there's a real-time nature to what you're saying as well in terms of processing. Where's that processing happening, and how fast do we get an answer back? Because now I'm thinking of a brick-and-mortar business that may be depending on thermal cameras regarding admittance. Suddenly, if they've got connectivity issues or processing issues, they may put someone in jeopardy by letting in people they shouldn't let in, or may have to shut down because they just don't know what they're dealing with. But Todd, what's your feel in terms of today's environment? You're specialized from a storage perspective, correct? So as you deal with primary storage, what are people missing these days? What's important to understand when it comes to resilience and storage?

Todd Bolton:                 Well, I think that the conversation around storage has changed in general, right? You know, when I first started, it was pretty simple and straightforward. You had a server, everybody was happy. Today, much like Andrew was saying, the data is scattered everywhere, and it doesn't always fit into these nice, neat databases where everything is nicely structured and ordered. It's changed. So not only do we have to [inaudible] in all the different fields, right?

                                    You still have your core, but now you've got that edge piece, and now you've got the cloud coming in. And you've got to be able to communicate across all of those zones as quickly as possible. So storage has changed. Yes, there is still traditional block storage, but we're seeing this explosive growth, and you can think of things like phones, right? All of that has added to this explosive growth. People are streaming things all the time. They're doing their email from their phone. Well, that all needs to reside somewhere. So how do you get all these things to seamlessly integrate across all those different planes? And then have the conversation over in Dom and Emmett's space of how do we protect all of that? How do we ensure that all that data that we've got and are putting on some form of storage...

Rob Boyd:                     All right.

Todd Bolton:                 [inaudible] So the conversation has shifted a little bit, I think. Go ahead.

Rob Boyd:                     Well, I was going to cut you off a little bit. We were having a little bit of connection issues. Although I want to hold on to that because I think we're still capturing your point. And Dom, he's kind of teeing you up there at this point. We're kind of raising some questions that need to be considered. And before we go back over maybe to some of the rest of the team here, especially on the Dell side and start talking solutions and then pitfalls is something I'd like to visit. Myths, things that we run into. But Dom, in regards to your area of concern, when we talk about data resilience, perhaps what are the important questions that need to be asked in that area?

Dominic Greco:             Yeah. I think with the remote work uptick, we're also seeing an uptick in ransomware attacks and destructive cyber attacks. And we know that data protection is a big part of that, so being able to provide an air-gapped copy of your data in the event that you are hit with ransomware is key. A lot of times when we think about ransomware and cyber resiliency, there's so much focus spent on endpoint security and network security, but data protection really plays a key part in that. And really, my message always to customers is: don't forget to invest in response and recovery.

Rob Boyd:                     Gotcha. And I would assume... The classic one, which is probably old by now but is something I still struggle with, is maybe something as basic as testing my backups, or simulating a failure and seeing how well that goes. Because there can be an assumption that all the lights are green and that we're going to be just fine. All right, well, let's just move this to... Oh, go ahead.
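Rob's point about actually testing backups can be made concrete. The sketch below is illustrative only, not any vendor's tooling, and the function names are hypothetical: after a test restore, it walks the source tree and flags any file that is missing or whose checksum differs in the restored copy.

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in 1 MB chunks so large backups don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source_dir: Path, restored_dir: Path) -> list[str]:
    """Return relative paths that are missing or corrupt in a test restore."""
    problems = []
    for src in source_dir.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source_dir)
        restored = restored_dir / rel
        if not restored.is_file() or sha256(src) != sha256(restored):
            problems.append(str(rel))
    return problems
```

Running something like this on a schedule, and alerting whenever the returned list is non-empty, turns "all the lights are green" from an assumption into a checked fact.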

Emmett Kaczmarek:     There's actually an interesting aspect there, and I think it ties back to what Andrew was mentioning earlier: there is this new shift in how we're doing day-to-day life. Here in the States, pretty much everybody, from a school district perspective, is doing virtual learning right now. And one of the things that actually just happened with one of our customers out West is they had a ransomware event. They got in; they got past that endpoint security. And that's really why it's important, when you're building your new DR strategy, that you're taking into account what your cyber resiliency is going to be. Because when you look at the NIST framework, the final pillar of that NIST framework is recovery. So you have to think about not if they get in, but when they get in.

                                    And luckily this customer, the school district out in California, had actually taken this into consideration, and they had put in an air gap vault, leveraging our technology, back in February. So even though their production data center was a crime scene, as they put it, their vault was completely untouched, and they were able to start coming back from this attack. Now, they had to bring in all the devices that they provided to their students to wipe them and make sure they were clean before they resumed activity. But what could have been a devastating event that completely crippled them for the remainder of the year is now allowing them to resume classes again, get those devices back out and get operations back up.

                                    Because they had taken into account the question of, "Okay, what happens when the bad guys get in?" DR strategies previously had really just thought, "Okay, if there is an event, I'm going to have asynchronous replication, and I'm going to have access to all my machines at the DR site." In a cyber event, we have to assume that we're coming back from nothing, that we're having to completely wipe the machines and come back from bare metal. That really needs to be taken into consideration as organizations are going through this digital transformation, in the new world we're living in where cyber attacks are up nearly 4,000%.

Andrew Braverman:     Yeah. Let me just add to that. How you get back from an event is a really, really important consideration. I think it's also important to think about how you get into that event, or long before the event. A lot of the challenge that we see with our customers is not just not having data after the fact, but what happens when you've neglected to capture that data, or perhaps you said, "I only need this for the short term," and now you need it again and it's gone. So I think it's this long continuum of: what do I capture up front? How long do I keep that for? And how do I protect it? And making those decisions can be difficult. And certainly I would suggest, and it's not just because I work for a company that makes this stuff...

                                    I certainly suggest: store more than you need and make those decisions later. Be really careful when you decide to throw things away. Because the thing that we've learned in the past several months, several years, is that the data that we have from the past can continue to be useful. Things that seemingly were not useful, we can learn from. And if we don't learn from our past, it's very difficult for us to progress. And I see this as really being the case both for how we capture and store data, the policies and decision making we have there, and then of course for the protection from all the different kinds of events, like Emmett was talking about.

Emmett Kaczmarek:     Oh [inaudible 00:16:10], and learning from these types of events as well is how we can continue to improve our technology. And that kind of goes to the key of this around making sure your environment's hardened and protected. One of the things that we've recently done on the data protection side, learning from the new attack vectors that have been going on in the market, is respond to this rise of attacks specifically on NTP servers. Because at the end of the day, your final line of defense, once there is this breach, this attack, is your backups. Now, what has been happening is these bad guys, or the ransomware itself, had been targeting the NTP servers in customers' environments. All backup applications, all backup appliances are tied into an NTP server so that they can handle their internal clock management.

                                    And what these bad guys have been doing is either spoofing the IP address of the NTP server, or the ransomware is actually speeding up and corrupting the NTP server itself. That creates this time drift, so that when the system goes and pings the server, instead of saying that it's September 2020, now it's saying that it's December 2020, or it's February 2021. And even if these backup vendors claim that they've got immutable copies, if the system itself thinks that it's a later date and that, "Oh, I've got to expire off all these copies that I have here," it goes and does that process. And what we've done in hardening our technology is put in place actual gates that prevent this clock drift. So now you can say, "Okay, what is acceptable: an 18-hour drift or a 48-hour drift?"

                                    And if it hits that wall, then it actually requires human intervention to make any changes to the system. So taking that advanced step, seeing what's going on in the industry as a whole and building it into our technology for our customers to provide them those fail safes, is just one other area that we're going above and beyond to make sure that we're addressing this for our customers out there holistically.
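The clock-drift gate Emmett describes can be sketched in a few lines. This is an illustration only, not Dell's actual implementation; the 18-hour threshold and the function name are hypothetical:

```python
from datetime import datetime, timedelta, timezone

MAX_DRIFT = timedelta(hours=18)  # hypothetical policy threshold

def apply_time_update(system_clock: datetime, ntp_time: datetime,
                      operator_approved: bool = False) -> datetime:
    """Accept an NTP time update only if the jump is within policy.

    A spoofed or corrupted NTP source that tries to fast-forward the
    clock (to age out retention-locked backup copies) exceeds the gate
    and is refused until a human operator approves the change.
    """
    drift = abs(ntp_time - system_clock)
    if drift > MAX_DRIFT and not operator_approved:
        return system_clock  # refuse the jump; flag for human review
    return ntp_time
```

A small correction within policy is applied normally; a jump like September 2020 to February 2021 is refused and held for human approval, so retention-locked backups can't be silently expired.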

Rob Boyd:                     That was interesting, because sometimes I look up and I see... I have surveillance cameras running here at home with a HomeMate system that I bought, and I have scripts that run that delete the footage after a certain period of time. So I can easily see how this kind of thing could bite somebody. And I would think 48 hours of allowed drift feels like quite a broad amount, at least thinking back to doing log reading and trying to figure out what happened in an event. I guess if they've drifted, they've all drifted together, if they're all feeding off the same drift source. But these are interesting areas. I'm curious what other types of things... Let me throw this one out. Do you ever have customers that feel like this kind of stuff is not that critical for them, because they've begun moving, or have moved, most of their critical infrastructure to the cloud, and the cloud just handles this for them automatically?

Todd Bolton:                 Yeah. There are a lot of conceptions and misconceptions about cloud, and that's part of the conversation you have to have, right? Because everybody thinks that you're going to get all of these things automatically like you did in the past, right? When you bought storage from us or from Dell, you knew it was going to be protected. You would install some form of DPS, and you would have everything backed up, and you'd set up all these policies, and maybe you're shipping things off. But what they don't tell you is they're basically a repository of information unless you request or ask for the extra services, right? So those are things that people don't always consider. You know, the cloud is this massive area, but you have to think about all the things you take for granted in a typical data center or on premises. And I think that's what, in some cases, doesn't get brought up in conversations.

                                    Because people forget about those things, right? They just take them for granted. But that's not necessarily the way the providers all work, that they all include those services. Unless you ask, you may or may not get those services.

Andrew Braverman:     And those services, Todd, don't come for free, which I think is really important to think about. Storage costs in the cloud are very interesting. When you talk about cloud economics, the real value of cloud is the scale-up, scale-down, automatic bursting ability: the ability to rent what you need at the time you need it and not pay for it when you're not using it. The challenge, though, and I'll ask Emmett to chime in as well, but I think it's pretty true on both the unstructured side and the protection side, is that the data we deal with tends to be long lived.

                                    We're not dealing with data that lives over weeks or days. We're dealing with data that potentially can live on for years. So think about things, if I'm making a movie and I need to go and leverage some of those scenes in future movies, which a lot of the studios are doing, I can't delete that. I need to keep that for five years, seven years, 10 years. The same thing is true for data that is being used for machine learning or for artificial intelligence. These are things that tend to use large data sets that last over a large period of time. So the challenge that we have is how do you deal with that in a cloud environment where the economics really work very well for things that are bursty and that change, but don't necessarily work for those longer term type of environments.

                                    And for us, we've made some investments in specifically doing that. How do we deal with taking what we would normally do inside a data center, providing the right level of cloud connectivity, and allowing the customer to arbitrage between the cloud providers? Take artificial intelligence, for example. You may want to go into one cloud provider that does a really good job of running GPUs for certain workloads, but perhaps you write an algorithm that is better suited to TensorFlow, and Google has the TPU, and there's a really interesting way of taking advantage of that. Well, if your data is in one particular cloud provider, it's very difficult to get it somewhere else. So you have to think of strategies for how you not just collect all that data from the edge and from all of those sites, but how you make it cloud accessible, so that you can take advantage of those services in ways that are both economically viable and really doing the best thing with technology. Finding the best tool for the job, which is sometimes a challenge.

                                    So that frequently means that we have to talk about, well, do you have an on premise copy that you've now replicated into the cloud? That's a possibility. Do you have a multi-cloud or a hybrid cloud strategy where you need to take advantage of a vendor solution that provides connectivity into all three major American cloud providers with direct layer two access to the storage. That's an option as well. So there's a lot of things that you have to think about rather than just picking up everything and moving into the cloud and operating just like you did in your own data center.

Rob Boyd:                     So I'm hearing two things, one of which obviously is: don't be assumptive about what you think is supposed to be happening or is happening; be sure and confirm. And it sounds like you're just raising a lot of things that we need to be considering. I want to try something different here. Obviously, Worldwide Technology is my go-to source for strategic advisement that's independent, despite the fact that I know some of these guys here work directly with you guys at Dell. But go to WWT to potentially get larger strategic value.

                                    I think the value that Dell provides is extremely strategic and tactical. And I wanted to allow you guys to kind of take off your hat that says, "I'm trying not to sell anything specific," because I would like to get specific. So I wanted to ask, Emmett: today, what are the technologies and the solutions that you and your team are providing for customers where you've seen the lights go on and people go, "I'm so glad we're doing this now. We didn't know about it"? What kind of stuff is really getting traction, to put it that way?

Emmett Kaczmarek:     Absolutely. So there are really three key areas that we're focusing in on, and that our customers are really aligning with from their digital transformation as a whole. Going back to what we were just talking about with cloud, right, and the importance of protecting data in the cloud: there was just an article the other day about how a former Cisco employee, five months after he was let go...

Rob Boyd:                     Deleted all his voicemails.

Emmett Kaczmarek:     [crosstalk] ...the AWS account and deleted 16,000 virtual machines that were housing the WebEx environment. The net of that was over $2 million in downtime costs to Cisco as a whole: 1 million from internal employee hours and then a million in what they had to pay out to customers. So you just look at that right there. That clearly identifies the need to protect data in the public cloud. And Andrew was talking about the cost of storing things in the cloud, and one of the things I love about the public cloud is that it's the great equalizer, right?

                                    So everybody's consumption up there in the cloud has a cost associated with it. And one of the great things that we've done is take our proven IP, our de-duplication and reduction algorithms, and leverage it for our customers in the cloud. It's one of the reasons that Gartner raised us in the recent Magic Quadrant, which had us in the Leaders quadrant for the 15th year: our ability to provide those reductions greater than anyone else out there. That's one of the reasons we've got over 12,000 customers that leverage our technology across the hyperscalers, across AWS, GCP and Azure, and it's why we're protecting over three exabytes of data in the public cloud.

                                    So that's one key area right there. The second key area that we're really focusing in on for customers goes back to what we're doing around ransomware: really focusing on being able to provide customers the ability to recover when there is an event, providing them that air-gapped copy of their data, with AI and machine learning capabilities that are actually looking at the entropy of the files themselves, to identify when the change rate goes from an average of 3% up to 98%, when the file type no longer matches the extension, and looking for common attack vectors that are known from the different ransomware strains out there.

                                    And the final aspect is going deeper and wider in integration with VMware, specifically with what's happening on the cloud native side with Tanzu, and our integration work with the VMware team to integrate Velero into our technology stacks, so that we can take that open source Kubernetes protection capability and give it enterprise-grade speed, performance and scheduling capabilities. So when you look at those areas, those are really the three key ones that we're focusing on, and they're really resonating with our customers because they align directly with what customers are doing from a transformation side of the house.
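The entropy signal Emmett mentions can be illustrated with a short sketch. Encrypted or compressed data approaches 8 bits of Shannon entropy per byte, while ordinary documents score far lower, so a sudden fleet-wide jump in entropy is a strong ransomware indicator. This is not Dell's algorithm, and the 7.5 cutoff is a hypothetical threshold chosen for illustration:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: near 8 for encrypted/compressed data, much lower for text."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    # Hypothetical cutoff. Real detection combines entropy with change-rate
    # and file-type/extension mismatch signals, not a single score.
    return shannon_entropy(data) > threshold
```

Plain English text typically measures around 4 to 5 bits per byte, so a file that suddenly scores above 7.5 after a change is a candidate for having been encrypted in place.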

Rob Boyd:                     Yeah. I think storage costs in the cloud are one of the things that people get most surprised by. It's almost like that water bill where you didn't realize a sprinkler was broken, except magnified by a hundred thousand. Suddenly you get this bill and you're like, "Oh, what are we paying for?" Because I've heard customers are even paying consultants to help them figure out what they're paying for. And I think what you're saying there, if I deciphered it correctly, in one of your first points, is this notion of: make sure that you're not wasting storage, which can be expensive, especially at a hot, faster-tier accessible level. Make sure you're not putting something up that's simply a lot of duplicates of itself, or wasted; make sure it's valuable data that you actually need access to, and that you're not saving the cruft. And I would imagine that takes some insight.

Emmett Kaczmarek:     And even using technology that reduces down the actual physical footprint in the cloud, right? Your most expensive thing in the cloud shouldn't be your protection technology. That should be driving down cost significantly, so you can invest in the things that are cool and really drive the business and give you that competitive advantage: being able to invest in microservices, in AI. And what we're doing by, one, running primarily on object storage, the native cloud storage up in the cloud, and two, leveraging our de-duplication and reduction algorithms to shrink the physical consumption up there as well, helps to drive down those costs substantially.

                                    ESG did a report and a study, and they found that we're anywhere between 40% and 85% less expensive compared to competing technologies out there, and that's in cloud consumption utilization. So again, going back to that point, the cloud is the great equalizer. By being able to drive down those costs and significantly reduce what you're having to pay, we allow customers to actually go and invest in areas that are important.
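As a rough illustration of why de-duplication drives down cloud consumption, here is a minimal fixed-block dedup sketch. Commercial engines, including Dell's, use variable-length chunking, compression and much more; this only shows the core idea of storing each unique block once and keeping a recipe to rebuild the original:

```python
import hashlib

def dedupe(data: bytes, block_size: int = 4096):
    """Fixed-block de-duplication: store each unique block exactly once.

    Returns (store, recipe): unique blocks keyed by fingerprint, plus the
    ordered fingerprints needed to reassemble the original byte stream.
    """
    store: dict[str, bytes] = {}
    recipe: list[str] = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)  # duplicates cost nothing extra
        recipe.append(fp)
    return store, recipe

def rehydrate(store: dict[str, bytes], recipe: list[str]) -> bytes:
    return b"".join(store[fp] for fp in recipe)
```

Ten identical 4 KB blocks plus one distinct block rehydrate to the full file, but the store holds only two physical blocks, which is the effect that shrinks the bill for billed-by-the-byte cloud object storage.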

Todd Bolton:                 I want to kind of dovetail on that and bring it back on premises. Dell recently launched the PowerStore system, which now leads in data reduction across all the other systems we've tested. Now, as much as I have a Dell background and I push Dell products, at Worldwide I also have to be fair. What we do at Worldwide is, when we bring those systems in house, we test them, and we run the same tests across all OEMs. Well, Dell, with PowerStore, is now number one when it comes to data reduction. And that's on premises; that's even before sending it up. They're guaranteeing a four-to-one reduction, which is a pretty damn good number to hit.

Rob Boyd:                     Yeah. Andrew, you were mentioning, before we got connected here, that you guys are constantly helping Worldwide Technology outfit their Advanced Technology Center, so that they've got the latest stuff and that they're testing it, as mentioned. What was it you were speaking to, and what are you seeing on the same question I'd asked Emmett earlier?

Andrew Braverma...:     Sure. Let me answer the second part first and then I'll roll into the first part. You know, we really focus on four key areas of the business. First thing, scale-out file systems. The OneFS file system that you've seen on Isilon is now available on a product called PowerScale. PowerScale is essentially OneFS, that same operating system from Isilon, running on industry-standard PowerEdge servers. The power of that is that we've now taken advantage of the entire Dell supply chain, which is really, really powerful for us, to create a physically smaller, lower-cost type of solution. In fact, we'll get into the lab in just a second, but we have some PowerScale gear coming into the Advanced Technology Center as well. And then we also focus on the object side of the house. So Emmett mentioned object being the cloud-native storage medium.

                                    The challenge with object in general is that when you put data in object in the cloud provider of your choice, it's generally stuck there. So it's very difficult to arbitrate between cloud providers, because getting the data back out, egress costs, all of those things can be very expensive. So we have the ECS object platform. It can run on premises, and it can also run in the cloud. And we have, in fact, a Dell Technologies cloud partner that allows us to put that in a space that is accessible to all the cloud providers. Same thing for OneFS, by the way: we can run OneFS in the cloud and allow connectivity from Azure, from Google Cloud Platform and from AWS. But we also now have the OneFS operating system running directly inside Google Cloud. So we have connectivity. It is running on our hardware.

                                    So it is very, very high performance, very scalable, with really interesting opportunities for us to take data into the cloud without having to deal with all of those inefficient cloud storage costs. And then we've recently moved into this area of streaming data. We've introduced the Streaming Data Platform for doing data analytics in real time or near real time. And then finally, we have DataIQ, which is our data analytics program that allows us to scan metadata and figure out some of the historical trends. The really nice thing about DataIQ is that it's designed to be pointed both at on-premises storage, whether it's our storage or not, and also into cloud storage. So you can get a lot of really good information about what you have inside your data center, but also about the data that's being stored inside those object stores inside the cloud. Now, as far as PowerScale is concerned, the F200 platform is a one-U, all-flash box. The idea is that within four rack units, you can have an entire cluster, including all of the backend networking, and get started with OneFS.
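
DataIQ itself is a product, but the kind of metadata scan Andrew describes, walking a tree and aggregating size and age without ever reading file contents, can be sketched with the standard library (a hypothetical helper, not DataIQ's actual implementation):

```python
import os
import time
from collections import defaultdict

def scan_metadata(root: str):
    """Aggregate bytes per file extension and track the oldest mtime,
    reading only metadata via stat(), never file contents."""
    by_ext = defaultdict(int)
    oldest_mtime = time.time()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip files that vanish mid-scan
            ext = os.path.splitext(name)[1].lower() or "(none)"
            by_ext[ext] += st.st_size
            oldest_mtime = min(oldest_mtime, st.st_mtime)
    return dict(by_ext), oldest_mtime
```

A report built this way ("how much .bam data is older than three years?") is exactly the kind of historical trend that informs tiering and cleanup decisions.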

                                    We are in the process of shipping a PowerScale system to the Advanced Technology Center. And you talk about the outcome-driven nature of Worldwide Technology; that's certainly one of the reasons we like working together. Certainly for me, driving the outcome is the most important thing. And I'm more than happy to say, if we can't solve that problem, I'd much rather go outside, work with even a competitor, to solve a customer's problem.

                                    The real power that I see, though, in Dell Technologies, especially working inside the ATC with Worldwide, is that we've got the portfolio solution. So if you're talking about artificial intelligence, for example, PowerScale is a fantastic platform for the storage, but we're not going to do any of the compute there. But we have PowerEdge C4140 with NVIDIA GPUs, and we can connect that all together. And then we work with Emmett's team on how to make sure that data is protected going forward. So we are all better together when we stop thinking about each of the individual areas that we focus on as just a silo, and focus on the customer's outcome, really driving a business outcome for them, leveraging the technology that we have across the portfolio.

Rob Boyd:                     So just to get this straight, you don't like to represent technology that you don't believe in.

Andrew Braverma...:     Absolutely. That's definitely true.

Rob Boyd:                     I like that. In terms of the ATC, as we kind of wrap things up here, I actually want to go to Dom. Dom, can you give us an idea of the resources and how accessible they are remotely? You guys get a lot of toys to play with, way beyond Dell, but it's how these things work together so you can simulate customer environments, provide guidance and strategy. How are you guys doing that in this space? What kind of stuff would be good to know?

Dominic Greco:             Yeah, I would say there are two ways that our customers can engage. They can go on our digital platform at wwt.com and create an account, and there's a ton of content out there, labs that they can launch themselves, whether it's data protection or storage. So we've got the IDPA DP4400, we've got a PowerStore appliance, XtremIO, Data Domain DD9900. So they can engage there. Certainly our customers also engage us in POCs, and I think that's really critical, because we're able to do these POCs at scale. So customers know that if they're going to buy something, it's actually going to work.

                                    So when I think about some POCs that we've done, we did a very large POC for a customer that had a 36 terabyte Oracle database that they needed to back up. They couldn't protect it in their current environment; it was taking over 24 hours to back up. We were able to simulate that in the ATC and show that they could back up this workload in, I think, like three and a half hours. And they were able to make a decision and buy it. So we give that kind of assurance that if you do buy something, it is going to work, by testing it out in the ATC.
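
Dom's numbers imply the throughput uplift that POC had to demonstrate: moving 36 TB in 24 hours needs roughly 0.42 GB/s sustained, while a 3.5-hour window needs about 2.86 GB/s (illustrative arithmetic in decimal units, not measured figures):

```python
def required_throughput_gbps(dataset_tb: float, window_hours: float) -> float:
    """Sustained throughput (GB/s, decimal units) to move a dataset in a window."""
    total_bytes = dataset_tb * 10**12
    return total_bytes / (window_hours * 3600) / 10**9

print(round(required_throughput_gbps(36, 24), 2))   # 0.42 GB/s (original window)
print(round(required_throughput_gbps(36, 3.5), 2))  # 2.86 GB/s (POC result)
```

That roughly 7x gap between the two rates is why testing the actual workload at scale, rather than trusting a data sheet, matters before buying.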

Rob Boyd:                     Very interesting. So, Andrew, excuse me. No, I want to go to Todd. As a final word here: the ATC and the resources you guys provide, yes, on the technology side, but you guys also provide the ability for customers, it sounds like, to present their issues, present their bigger, seemingly intractable problems, and help them establish a strategy that may or may not include certain sets of technology.

Todd Bolton:                 Well, it's all that and then some, is kind of how I look at it. To kind of follow on to what Dom was saying, we usually have gear in house with which we can create, or reconstruct, or recreate, I should say, a customer's environment. But let's say the customer is undecided and they want to see what Data Domain does against, pick another backup vendor. Okay. Or they want to see PowerStore against something Pure sells. Or they want to see some sort of object store, where it's PowerScale on the Dell side and something from another vendor. We can create those kinds of environments where, because we understand how the technology works, we can set up and control the environment so that it's apples to apples. And then it's up to us to kind of say, "Well, here are the advantages," and do the pluses and minuses. But it gives them the ability to see it in real life, to see whether what we're saying is true, one way or the other.

Rob Boyd:                     No, that's perfect.

Todd Bolton:                 [inaudible] What works in their environment? Because remember, an app that accelerates on one platform may not do so well on another. So it's a learning process for everybody involved. And that's one of the things I think we bring to the table: we try to stay on the forefront, like Dom said, with all the articles, right? So you've got all the latest information, articles written by engineers, for engineers, so that people understand what's going on out there. It's no holds barred. It's, this is what we saw, this is how it works. And I think it shows how willing Worldwide is to work with a customer, to work with our partners, to show off and showcase things. And that's what I think the platform is really about.

Rob Boyd:                     That's perfect. Okay. Well, thank you for that. And I also want to thank all of our guests. Guys, you represent Dell very well. As always, I've learned a lot, and thank you for taking the time to share your knowledge with us. We will have more information, of course, in the notes section underneath wherever you're watching this video, so that you can get more information from Dell, as well as engage@wwt.com. But I want to thank our audience for hanging out with us as well. This is one of the reasons I love Worldwide Technology: they're very educationally focused. And I think all good decisions first come with a level of awareness, which we could all use a little bit more of. And I've never seen better partnerships in terms of how they bring everybody in, customers and vendors included, to make sure that we've got a diversity of input as these kinds of decisions are being made. Anyway, I hope you guys enjoyed it. Thank you for hanging out with us. We'll see you on the next TEC37.

Emmett Kaczmare...:     Thanks for having us.

Andrew Braverma...:     Thank you.