Future of Business: Palo Alto Networks’ Nikesh Arora on Managing Risk in the Age of AI

ALISON BEARD: Welcome to the HBR IdeaCast from Harvard Business Review. I’m Alison Beard.

Throughout this month, we’ve been sharing conversations with CEOs and founders about artificial intelligence and other critical issues facing leaders today. We wrap up our Future of Business series with Nikesh Arora, CEO of Palo Alto Networks, the leading global cybersecurity company. Over his six-year tenure, Nikesh has expanded and reorganized the business to meet the full suite of enterprise and government client needs, including safely incorporating GenAI into all of its products. I was excited to speak with him at our recent virtual Future of Business conference about how he’s managing new opportunities and risks, and about his strategy for innovation when new technologies are developing so rapidly. Here’s our conversation, which includes a few questions from the audience.

Let me just start with talking about the threat and the opportunity of AI in the world of cybersecurity. You said that the only way to fight AI from bad actors is with AI from good ones. So what tools are corporate cyber attackers developing and what tools are you building to stop them? How do you stay ahead?

NIKESH ARORA: If we all believe, which I think we do, that this is going to have some sort of transformative power — how and when it fully manifests itself remains to be seen — then at the most fundamental level, we expect that things will happen faster. I’ve never seen a technology trend move so fast, with so much excitement around it. As development accelerates, we will end up creating more and more connectivity across the entire landscape. Our cars will be connected, our homes will be connected, our street lamps will be connected. You’ll have technology everywhere. In that environment, when everything gets connected, in cybersecurity parlance we say the attack surface magnifies or amplifies. That means bad actors can attack you anywhere, impact your connectivity, impact your technology, and create unwanted outcomes. So as that speed of connectivity and development increases, you have to increase the pace of defense from a cybersecurity perspective. And we’re already noticing that bad actors are trying to figure out ways of entering people’s infrastructure, getting in and out, extracting data, and using it for economic purposes. Their speed is increasing, which means the only way to fight them is to increase our speed of defense. And the sad truth is that they only have to be right once; we have to be right every time. So we have to try 100 times harder. There’s a lot of effort that needs to go into fighting these potential attacks. Bad actors have recently been using generative AI to figure out how to get into people’s infrastructure, and we’re all arming up to do the same.

ALISON BEARD: You’ve now incorporated GenAI across your entire suite of products. It is such a new technology. So how did you manage to incorporate it so quickly? Just talk us through the process of investigating and then applying use cases. And again, at a speed faster than the bad actors.

NIKESH ARORA: In our business we have to get it right. So what we have done is package a whole bunch of AI, machine learning, neural networks, and reinforcement learning into what we politely call “precision AI.” We’ve been working at it for years. The fundamental premise it rests on is a project to collect all the data in the enterprise, collect all the behaviors, and understand all that data. We are connected to every security system that exists in a company. We analyze that data in real time using machine learning and precision AI. Based on the analysis, we can spot anomalous patterns. And if we believe an anomalous pattern is a bad actor, we stop it mid-flight. The whole idea is to make security more real-time: take large amounts of data, find the needle in the haystack, look for bad behavior, and block it when we find it. But this needs to happen for every company, every entity, every piece of critical infrastructure over the next few years for us to live in a new environment where bad actors are going to move faster and faster.
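
To make that concrete, the following is a minimal sketch of the kind of pipeline Arora describes: train a model on normal telemetry, score each incoming event in real time, and block the anomalous ones mid-flight. It is not Palo Alto Networks’ actual system; the event fields, the thresholds, and the choice of an isolation-forest model are illustrative assumptions.

```python
# A hypothetical real-time anomaly detector, not any vendor's product.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=7)

# Train on "normal" telemetry. Each row is one event:
# [bytes_sent_kb, hour_of_day, failed_auth_count] -- hypothetical fields.
normal_events = np.column_stack([
    rng.normal(500, 150, 5000),  # typical transfer sizes
    rng.normal(13, 3, 5000),     # activity clustered around working hours
    rng.poisson(0.2, 5000),      # occasional failed logins are normal
])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_events)

def handle_event(event):
    """Score one incoming event; IsolationForest returns -1 for anomalies."""
    verdict = model.predict(np.asarray(event, dtype=float).reshape(1, -1))[0]
    return "BLOCK" if verdict == -1 else "ALLOW"

# A 2 a.m. bulk transfer after repeated failed logins should stand out.
print(handle_event([25_000, 2, 6]))  # likely BLOCK
print(handle_event([480, 14, 0]))    # likely ALLOW
```

In production the “needle in the haystack” step would run over far richer features and the blocking would happen at the enforcement point, but the detect-then-block loop has the same shape.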

ALISON BEARD: So under your tenure, Palo Alto Networks has developed this full suite of solutions. You’ve been on a massive acquisition spree to do it. So how do you look at the balance between that and organic growth?

NIKESH ARORA: When you’re building a business, and as we just discussed, we are one of the most innovative businesses in the world. Cybersecurity is something where people are constantly trying to find a new way into your infrastructure. To protect against that, we have to make sure we’re always one step ahead, always thinking ahead: if this new technology arrives, what are going to be the potential back doors into it, or ways to attack it, or ways to misuse it? We cannot be the only innovative company in the world. There are other people innovating in cybersecurity. And our whole thesis is that we have to be humble. We cannot be too proud. We have to believe that innovation will come from everywhere, and if we can’t build it, we embrace it and buy it. So we’ve bought about 19 companies in the last six years, with the premise that somebody else has figured out a new solution to a potential upcoming cyber threat or cyberattack. Let’s embrace them. Let’s make them part of Palo Alto so that we can stitch it all together and provide that real-time, high-innovation capability to our customers.

ALISON BEARD: And so then once you’ve found those innovative companies that you want to bring into the fold, what is the secret to effectively integrating them, especially when you have sort of small startup cultures that are coming into this much larger organization?

NIKESH ARORA: That’s a great question, Alison. Look, I think M&A is an oft-talked-about area, and a lot of people get it wrong. When we acquire these companies or partner with them, we say, “Listen, you’ve done so well, come do this for us.” What we do is make our teams work for them; that’s rule number one. Take the smartest people, the people who are really passionate about this, and get them to lead these things. Two, we actually double down and give them more resources, because we say, “Listen, the whole point of getting you to come be part of our fold is that we might be bigger and slightly slower, but we have more resources.” So we give them more resources, which means they don’t have to worry about raising money. They don’t worry about hiring people or thinking about their budgets. It’s like: here are more people, you drive this thing. The only thing we do is make sure we sit down and spend hours and hours discussing what the north star of the product strategy needs to be. We do that before we buy these companies, because we want to make sure the people leading them are fully aligned with our view of the world. And we, of course, will listen to their view of the world and try to find a happy middle ground, which means we can go solve the customer’s problems together in a much more effective, faster fashion.

ALISON BEARD: If the end goal is to make Palo Alto this one-stop shop cybersecurity platform — to have everyone using everything of yours — how are you bringing your clients on board? What are those customer conversations like?

NIKESH ARORA: I look at it slightly differently, and it may result in a one-stop shop, but I think the motive is slightly the other way around. I think about it this way: if the bad actor is going to move at this pace and we have to analyze things in real time, we have to have a handle on all that’s happening in the organization. To have that handle, we have to be at every sensor to understand what behavior is happening and what data is coming in. We have to be the AI analytical engine that understands how to analyze that behavior, and we have to be able to connect back to the enforcement points and stop bad behavior from happening. Now, if you fragment that across 30 different security companies, the risk is that you, the customer, have to build all that capability. It hasn’t been done yet, and I just think it’s unfathomable that our customers are going to get all of this right, 2,000 times over. Why not let us do it once, and then you can deploy it 2,000 times? So we’re trying to slowly consolidate and drive in that direction. And by the way, this is not new news. I started my technology career 30-plus years ago, and I used to work for a company where 25 systems made up our customer relationship management system. That was the way the world worked. Today, you go buy from Oracle, Salesforce, or Microsoft, and they do all of it for you, end to end. So it’s not like this hasn’t been done before. Platforms exist. They exist for planning and HR, they exist for finance, they exist for IT services. It’s time one exists for cybersecurity.

ALISON BEARD: You’ve talked so much about the pace of change, how quickly you need to adapt. So how do you as the leader of the whole enterprise think about short-term versus long-term planning? Can you do long-term planning anymore?

NIKESH ARORA: Of course. Look, all good things take time. All good things take a lot of considered effort. Whether you’re trying to make a cake, a beautiful one, a wonderful one, or build a product, or run a company, you have to think about what the north star is. You have to think about where you’re trying to get to. You have to be constantly learning, be paranoid about how this thing’s going to play out, and you have to take bigger bets. Look, you guys teach us in finance theory: higher risk, higher return. As leaders, our job is to figure out how to take managed risk. There’s very little possibility that you take no risk and get a lot of return. So our job — me and my team, the leadership team here — is to think about how we get to something that is differentiated from everybody else. How do we do that in a managed-risk fashion? How do we make sure we don’t compromise the long term for the short term? Our long-term goal is to make our customers more secure.

ALISON BEARD: Let’s talk a little bit more about leadership and talent. So you got some recent press for clapping back at someone on social media who questioned your engineering bona fides.

NIKESH ARORA: This was just fun and games, come on.

ALISON BEARD: For the record, you have a degree in electrical engineering from the Indian Institute of Technology, BHU, as well as a master’s from Boston College and an MBA from Northeastern.

NIKESH ARORA: Yes.

ALISON BEARD: So talk about the rise of the engineer CEO and how you balance oversight of both the technical and the business sides of Palo Alto Networks.

NIKESH ARORA: Business aside, at the end of the day, we are all product companies. We all solve a problem. We all deliver either a product or a service. And if you don’t understand your product or your service, it’s very hard to build a great business around it. The key is that leaders have to be product savvy. They have to have a point of view and a vision for the product: where it’s going and what its future looks like. If you can do that, then you can build a great business around it. If you get too focused on efficiently running the current product portfolio, the risk is that you miss the train or the boat, or whichever form of transportation you’d like to pick, in terms of where your product needs to be. A leader’s job is not just to execute on the current strategy but also to define the strategy for the future and build new products.

ALISON BEARD: There is a war for talent in Silicon Valley and in technology in general. So how do you think about recruiting and retaining the best people and, as AI evolves, the human-tech mix?

NIKESH ARORA: We’re lucky: we’re a mission-driven business. We exist to make our customers safer. Being mission-driven helps, because there are people in the world who actually want to work for a mission-driven company versus, say, a social media company. Being the largest cybersecurity company also helps — if you have an aspiration to build a career in cybersecurity, we are one of the better places to come work, because we have proven through our track record that we’re on the bleeding edge of innovation. Customers like us; we are working on very, very cool stuff. So I think the demand function’s there. We attract good people. Once they come here, the key is: do they like how we work? Do they like our culture? So we spend a lot of time making sure our culture is a place where people have autonomy and alignment, from a communication and north-star perspective, and where people feel this is a comfortable workplace wherever they come from. So we’re lucky we get great people to come work for us, and hopefully they’re all happy here. In terms of how AI and humans are going to evolve in the future: to me, AI is a productivity tool, and we’ve had productivity tools in the past. We’ve used email. Some argue email may not be a productivity tool, that it may take away from productivity. But that notwithstanding, we’ve had productivity tools that have made us better and better at what we do, whether it’s automation or industrialization. Now it’s AI. So yes, will the nature of people’s work change because AI will do some part of it? Definitely, it sounds like it. But my early belief is that it’s going to be net positive. It’s going to be useful, because the first thing AI is going to do is take away the boring, repetitive tasks. And I’m pretty sure none of us came out of Harvard or college saying, “I’m going to go do repetitive tasks for the rest of my life because that’s really cool.” No, I want AI to do it. Take the repetitive tasks away. So I think the quality of work is going to increase, and the shape of the organizational chart might have to change in the future. We’ll see how it works.

ALISON BEARD: So I do want to get to some audience questions because we have a lot coming in. First, Nuno asks, “What are the key elements of a successful zero trust security framework? And how can companies, especially small to medium-sized businesses, balance the need for strong cybersecurity with budget constraints?”

NIKESH ARORA: There are two parts. First, the zero trust question. Fundamentally, what zero trust means is that you have to treat everybody who’s accessing your infrastructure the same way, wherever and however they come in. We have products, and I’m sure others in the market do too, that deliver that capability. But the key goes back to what we talked about: do not fragment your infrastructure on the network side from a security perspective, because if you do, you have to manage multiple “security panes of glass,” in industry parlance. That just becomes more complicated and harder to do. So try to find a single vendor that solves that problem for you. Now, as it relates to budgets, I know it’s a tough question. If you are in a digital business dealing with end-customer data, you have to be careful, because there is a possibility that if you leave the door open, somebody’s going to walk in. And it’s going to be very expensive for your business down the road if you’re not secure. From that perspective, my view is: focus on remediation, focus on detecting security events. Don’t focus as much on security controls. Focus on making sure that if something happens, you have the ability to remediate as quickly as you can and bring things back up as quickly as you can. Make sure remediation and recovery are stitched up. Do not compromise on that, because bad stuff can happen. And then spend whatever else you can to make sure you have robust defenses.
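
To put the first half of that answer in concrete terms, here is a minimal sketch of the zero-trust principle: every request is verified the same way, and network location earns no implicit trust. The data model, the checks, and the entitlement table are illustrative assumptions, not any vendor’s API.

```python
# A hypothetical zero-trust authorization check, for illustration only.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_passed: bool
    device_compliant: bool  # patched, encrypted, managed
    resource: str
    network: str            # "office", "home", "cafe" -- deliberately unused

# Least-privilege entitlements; in practice these come from an identity provider.
ENTITLEMENTS = {"alice": {"billing-db"}, "bob": {"wiki"}}

def authorize(req: Request) -> bool:
    """Never trust, always verify: identity, device posture, least privilege.
    Note that req.network plays no role -- being 'inside' earns nothing."""
    return (
        req.mfa_passed
        and req.device_compliant
        and req.resource in ENTITLEMENTS.get(req.user, set())
    )

print(authorize(Request("alice", True, True, "billing-db", "cafe")))     # True
print(authorize(Request("alice", True, False, "billing-db", "office")))  # False
```

The design point is the deliberately unused network field: under zero trust, coming from the office grants nothing that coming from a cafe does not.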

ALISON BEARD: Okay, so we have a couple of questions about bad actors and GenAI. Kim asks, “How would you differentiate between a bad actor that is a person, a human, versus a GenAI model taking control of decisions and acting as a bad actor?” And William asks, “Are you more concerned about cyberattacks or the misuse of GenAI resources?”

NIKESH ARORA: So far, the way I think about GenAI is that they’re all building a brain. They’re trying to create reasoning and inferencing, so this thing can even think on its own. Now, if you hire a really smart person to work with you, the first thing you do is give them a lot of context: what you do, how you do it. You give them your own information to make that brain adaptable to your business. So I think the world will evolve such that these people build these large brains, these large LLM models, and all of us spend time explaining our domain to them, trying to get them to understand what we do so they can be smart about our work, not just generally smart. If you have that, then there are two questions: do you use that brain in a consultative capacity, where it tells you things but the doing is done by you, or do you give this brain control of the doing part? I think the fear right now, when you talk about the misuse or abuse of GenAI, is: what if I gave it the controls and it misbehaves? What happens then? The good news is we haven’t given LLMs or AI models control of things just yet. But if we do, the risk is that they could do bad things. So if you take a very smart brain and explain cybersecurity to it, explain all the bad techniques, could it develop superior hacking techniques? For sure. And we’ve seen early evidence: there’s something called WormGPT on the dark web that helps you look at CVEs and build capabilities that can attack companies. Yes, bad actors are going to use these tools just the way good actors do. So as I said right at the beginning, it’s incumbent upon us to make sure we build defenses that block these things in as close to real time as we can, because people will use GenAI for these things. In terms of GenAI models being abused, we have to be careful. The abuse will happen in places where we give it control. Take an LLM driving your car: robotaxis like Waymo’s and Tesla’s are giving control to AI to drive the car. So you’ve got to make sure that, A, whatever’s driving your car is smart and has a lot of guardrails and controls so it can’t go rogue on you, and B, nobody can hack into it and make it do different things. So we’re going to see lots of interesting work on blocking hackers from getting into LLMs and making sure LLMs have enough guardrails that they can’t go off the rails.
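
The consultative-versus-control distinction maps naturally onto guardrail code: let the model recommend, but gate the “doing” behind a closed set of actions and human approval for the high-impact ones. In this sketch the action names, the allowlist, and the propose_action function are hypothetical stand-ins, not a real LLM API.

```python
# A hypothetical guardrail wrapper: the model advises, humans keep control.
ALLOWED_ACTIONS = {"quarantine_host", "reset_password", "open_ticket"}
REQUIRES_HUMAN = {"quarantine_host"}  # high-impact actions need sign-off

def propose_action(alert: str) -> str:
    """Stand-in for an LLM call that recommends a response to an alert."""
    return "quarantine_host" if "malware" in alert else "open_ticket"

def execute(action: str, approved_by_human: bool) -> str:
    # Guardrail 1: the model can only choose from a closed set of actions.
    if action not in ALLOWED_ACTIONS:
        return f"REFUSED: {action!r} is not an allowed action"
    # Guardrail 2: destructive actions still require a human in the loop.
    if action in REQUIRES_HUMAN and not approved_by_human:
        return f"PENDING: {action!r} awaits human approval"
    return f"EXECUTED: {action!r}"

suggestion = propose_action("malware beacon detected on host-17")
print(execute(suggestion, approved_by_human=False))  # PENDING
print(execute(suggestion, approved_by_human=True))   # EXECUTED
```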

ALISON BEARD: So this is a follow-up question from Rava Shantar, and I hope I’m pronouncing that right. Should we agree on a universal standard for a kill switch for AI, if something goes off-script in the way that we’re all thinking about? Are Silicon Valley honchos meeting to discuss this?

NIKESH ARORA: Well, first of all, standards are hard. Most likely, bad actors will follow the path of least resistance, so the risk is that if everybody doesn’t implement it, your weakest link is your biggest problem. The flip side is that if you put a kill switch in an AI that’s running a nuclear power plant and you kill it, there’s a risk that you interrupt a process in a way that has unintended consequences. So there’s a big debate. The bigger question, and my personal view, is that as we march toward AGI, we have to be very careful about who gets control, or which model can get control, of a critical process. I think that’s where the discussion is going to be: yes, you can build all the AI you want, but let’s make sure we’re not giving it control of critical processes — that we understand how you’re managing it, how you’re regulating it — because we really don’t want AGI or a superior intelligence taking control of critical systems that we cannot intercept or, to use the listener’s words, have a kill switch for.

ALISON BEARD: As a global company, how are you working with regulators around the world to make sure that they’re educated on all of these issues, which can seem difficult because they’re not trained engineers, and that they’re formulating the right policies?

NIKESH ARORA: We’re all very aware of AI, and we’re all debating its pros and cons. I don’t think there’s a regulator, or possibly a nation-state, in the world that isn’t debating, A, how to build and use AI for progress, and B, how to make sure it cannot be misused for bad acts. We’re going to see some regulation arrive. I think most of it will be around transparency: around understanding how these models work, how guardrails are put around them, and how these models are going to get, or not get, control of critical processes. And I think we’re going to take some great steps, and possibly some steps we’ll have to revisit in time.

ALISON BEARD: What is the biggest cybersecurity risk on your radar right now? Is it GenAI or AGI?

NIKESH ARORA: It’s fascinating, Alison. A lot of the hacks you’ve seen in the last six or 12 months are not complicated. They’re hacks that happen because companies haven’t bolted their doors and windows carefully, or because an insider’s credentials were hijacked because they weren’t careful with their password. So the hacks are not complicated. The real challenge with GenAI, or AI generally, is that once they get in, the actors are moving way faster. Six years ago when I started, I used to hear about dwell times of eight or 10 days, and the longest was 47 days, for somebody to get into your infrastructure and take the data out. Now I hear about hours, and I think that’s going to go down to under an hour. So the biggest threat is speed. The biggest threat is that large enterprises are not ready to react, to block bad things quickly and recover in under an hour. So we are going to see possible business interruptions if we don’t get our act together.

ALISON BEARD: That was Nikesh Arora, CEO of Palo Alto Networks, at our recent virtual Future of Business conference. That’s it for the Future of Business series, but I hope you go back and listen to all four of the episodes. You can also check out all of the episodes we have on the HBR IdeaCast about leadership, strategy, and the future of work. Find us at hbr.org/podcasts or search HBR on Apple Podcasts, Spotify, or wherever you listen. And if you haven’t already subscribed to HBR, please do. It’s the best way to support our show. Go to hbr.org/subscribe to learn more. And finally, look out for more HBR events in 2025. Thanks to our team: senior producers Anne Saini and Mary Dooe, associate producer Hannah Bates, audio product manager Ian Fox, and senior production specialist Rob Eckhardt. And thanks to you for listening to the HBR IdeaCast. I’m Alison Beard.
