[TRANSCRIPT] IT Visionaries Podcast with Malcolm Harkins

caseyjohnellis
Jul 6, 2021

--

Albert: You don't get paid for shipping security, and that's a problem. In our thirst for developing the newest technologies, our business practices have enabled and opened huge gaps in our product and data security. If it seems like these cyber attacks are increasing, it's because they are, and it's a business problem, not a cybersecurity problem. We keep hearing of new technologies, such as AI, that are going to further enable cybersecurity, but our experts are skeptical.

Casey Ellis: The human without the suit is weak. The suit without the human is dumb. AI and machine learning, these different computer learning things that we've got to work with now in cybersecurity, right across the board, they're levers; they're not a replacement in my mind for human intelligence. If and when that happens, we're going to be worried about Skynet, not these conversations. And I'm going to be thinking about how to hack that stuff, to make sure that humans stay safe.

Albert: On this episode of IT Visionaries, we explore the impact AI and technology are having on society and cybersecurity with Casey Ellis, the founder and CTO of Bugcrowd, and Malcolm Harkins, a cybersecurity advisor, coach, and board member. The two discuss why you'll never be able to eliminate risk and why the lack of financial incentives is leaving most companies vulnerable to nefarious attacks. Enjoy this episode.

Speaker 3: IT Visionaries is brought to you by the Salesforce platform, the world's most trusted low-code platform. Enhance trust, compliance, and governance across all your apps with Salesforce security. Learn more at salesforce.com/datasecurity.

Albert: Welcome, everyone, to another episode of IT Visionaries. Today we're continuing our security series with Casey Ellis and Malcolm Harkins. This topic is going to be the impact of AI and technology on society and cybersecurity. But I'd do these gentlemen a disservice by not letting them introduce themselves. Casey, let's start with you: tell our audience who you are, where you work, and what you do, and then we'll kick it over to Malcolm.

Casey Ellis: Thanks, Albert. It's great to be on, great to be chatting about this, and good to see you again, Malcolm. My name's Casey Ellis, as you called out there, so I don't need to do that part again. I'm the founder, chairman, and CTO of Bugcrowd. Basically, what Bugcrowd did was introduce the concept of connecting the white hat community, this crowd of creative individuals that can answer security questions that have been waiting on the table, I think at this point, for a number of decades, but had been disconnected from the people that need help. We started off that industry back in 2012, so that's commonly referred to, or known as, bug bounty programs and vulnerability disclosure programs. In truth, we're finding all sorts of different ways to take advantage of that resource and plug it into the problems that exist. That's what I do, and that's why I'm here.

Albert: Malcolm, how about you introduce yourself?

Malcolm Harkins: Thanks, and great to be on with both of you, Albert and Casey. Currently I'm a board member, advisor, coach, and mentor; I dabble with a lot of security startups and a few others in the industry. Prior to that, I was chief security and trust officer with a web application security defense platform company for a little over a year. Prior to that, I was chief security and trust officer with Cylance, and before that I spent my life at Intel, where I was chief security and privacy officer. So I've spanned a wide range of fun and eclectic activities.

Albert: So Malcolm, you have a big swath of history in the cyber and security space. Casey, you're building, or how would you describe it? Are you building a marketplace, almost, of people that are good at… It's like I come to you to access the best talent for short-term projects? How should we best think of Bugcrowd?

Casey Ellis: In a sense, it's talent, but it's also just answers to questions. We're talking about AI and its role: computing, and the way that we extract creativity from the computer solutions we've built, has improved and continues to improve, but there's still this element of necessity behind access to human creativity. So really, what Bugcrowd does, it's almost like a knowledge platform. People have security questions that require a creative answer, and we've got access to all the different places that answer could exist. Our job is to put the question in the right place. That's summing it up, I think, on a conceptual level. Practically, what that looks like is vulnerability discovery in systems.

Casey Ellis: That's a lot of what we do, ranging from web applications right through to critical infrastructure. We've done work with the Department of Defense, and systems that have gone to the Air Force. It's this very broad smattering of different technologies where we apply humans to come in and basically help people understand what risks they need to mitigate next, because at this point in time, I feel like security is a dinner table conversation. I think there's consensus that we need to do something. The question is what? How do you get that creative input to help you prioritize and understand that risk so you can make that sort of decision?

Albert: All right, so that's a perfect segue. Let's dive in. Right before we started this round table discussion, I talked about how one of the biggest, most interesting things to most of our audience members is how the landscape of attacks is changing. When we held our first episode, we talked about the Colonial Pipeline hack. The next thing that's happened more recently is we now see… I saw this on the news, and while this hasn't occurred yet, this is, to me, an indicator of where things are going. Anonymous, of course, recently put out a video. For those listening, the time of recording was today; they put out a video today or yesterday that basically threatened Elon Musk and Tesla, and potentially SpaceX.

Albert: It’s not clear whether they’re going to do something, but it does seem like foreign actors are attacking more often. And that is a big thing that is now happening. Of course, we’re coming right off the heels of the SolarWinds attack. Different people are getting sued now for this. This doesn’t seem to be changing. I’m going to start with Malcolm. You’ve seen this over your time and career in your history. Is it just being reported more or are there more and more organized, sophisticated efforts that are now attacking companies? I’d love to hear your perspective on why this continues to increase.

Malcolm Harkins: It's a little bit of both. So going back to my Intel days, 20 years ago, I drew a picture that I called the perfect storm of risk, to explain how threats and vulnerabilities would affect information assets that were changing physically, logically, and in their usage model, exposing risks. My job was to put controls in place. And then I drew this hidden hand on this picture, starting with geopolitical, because nation states are both threat actors and threat agents adding to the cycle, but there are also legal and regulatory bodies driving additional levels of compliance and legal and regulatory risk. So you have this confluence of independent, yet interdependent, things that I theorized 20-plus years ago would create this perfect storm of risk. I think we're seeing it play itself out. The reality is, if it can execute code, it can execute malicious code.

Malcolm Harkins: Whether it be an app, a device, and because of that urban sprawl of technology and frankly, the job we’ve done in the security development life cycle to mitigate risk upfront, and then the sloppy job we’ve done in the back end to operationalized security, we frankly are living in the mess that we’ve created for ourselves because of the poor economic incentives that have existed for the creators of technology and the operators of it to do something the right way and manage the risks better.

Casey Ellis: I completely agree with that. I think there are two proof points of that being the root cause. One is that Anonymous threatened Elon Musk, and it's a viable threat. Why is that? Because they're not a nation state; that could be anyone doing that. But the fact that they've been emboldened to do it in the first place, and the fact that it's probably having an impact, is symptomatic of the fact that we're not very good at this stuff. There is enough attack surface out there, and there is enough debt kicking around, for that to be a viable thing for them to say even to begin with.

Casey Ellis: I think beyond that, one of the advisories that came out last year that was really interesting was from CISA and the NSA, talking about some nation state uses of [inaudible 00:08:20] in coordinated attacks. And SolarWinds and different things like that are examples of how much more pervasive and scattergun APT can be when they've got an objective. I think prior to those announcements, it was always seen as this very targeted, very unique, niche thing that you had to be a certain type of organization to have to worry about. What we're now learning is that, no, they're economically rational in the same way that we are, and if you don't have to burn a $2 million [inaudible 00:08:50] to get the job done, why not just download something from Exploit-DB, because that would probably still work, which is the point. There's so much debt kicking around out there that those kinds of strategies are still effective for those folks; that says something about where we're up to, I think, from a risk posture standpoint.

Albert: What are your opinions on how modern-day apps and companies are built, too? Because it feels like we are increasing the vulnerabilities every single time, because we rely on… If I think of Mission, which is a small company, the amount of SaaS products we use is actually quite substantial. And we have absolutely pushed data through open APIs into each of these toolkits. And so effectively, military style, you're only as strong as your weakest link. I don't know where it is, it's there for sure, but that's everybody. And we're just a small company; our company is only eight people. Think of a giant company, like a GE or something like that, where there's 10,000, 50,000, 100,000 employees. I'm curious on your perspective, because it seems like there are more gateways being opened. I would assume that there are more keys to shut these gateways down, but it doesn't seem like that's the case. It just seems like there's increasing vulnerability inside of every business. I'd love to hear your perspectives on this. We'll start with Malcolm.

Malcolm Harkins: Well again, you have this urban sprawl that occurred, and that's not only the data, the apps, the devices, and the locations, but also the APIs and the connections and the data flows. That's certainly part of the problem. On the app development side, though, this is where I think a lot of people have done better, particularly with things like Bugcrowd and vulnerability management programs on that upfront side of things. But in the world that we live in, most people are still focused on minimum viable product. Minimum viable product is going to get you maximum security exposure.

Albert: Well said.

Casey Ellis: Incredible for sure.

Malcolm Harkins: Like I said, it's an economic incentive issue that causes people to be sloppy, because they're not held accountable in the same way you would be in a physical product liability sense.

Casey Ellis: I completely agree with that. This is something that I've done talks on for a while now, where security's done itself a disservice as an industry: we go in to the people who develop stuff and say, "Hey, you're an idiot. You screwed up." And in the meantime, we haven't necessarily built a platform or built a business or maintained those things. We haven't actually experienced, from the offensive side, the same economic imperatives and incentives as the people that are deploying that code. So that position is a little bit under-informed to begin with. But that's the reality of the situation. Builders don't think like breakers; they're not incentivized to do the same things.

Casey Ellis: They're logically different, and that's just a fact. I think we can reconcile those things; it's a lot of what we do with Bugcrowd. And honestly, I think a root cause of a lot of this stuff in the first place is the idea that people that deploy code don't necessarily believe in the boogeyman. So when they see a security control, they can either suffer the inconvenience of implementing it, or push the feature out by the deadline. They're going to choose the latter, because that's their job. And honestly, you see that not just through [inaudible 00:12:07] an MVP lens in the startup space; I think that affects the enterprise as well, because everyone has those same motivations and those same incentives in place.

Malcolm Harkins: And to your point, I have this framework that I developed years ago that I called my nine box of controls. One of the dimensions of the nine box of controls is control friction, because controls are a drag coefficient. They slow down people, data, and business processes, and chew up compute cycles. But the design issue that we've got in the industry is that the practitioners in their enterprises are not held accountable for reducing that drag coefficient, hence it causes the users of the business to go around the control. And in the security software and device ecosystem, they just sell a product that also creates too much friction on the developers and on the users and stuff like that. And again, when there's friction and people go around the control, that generates risk. And since the security industry profits from the insecurity of computing, the more risk that occurs, the more the security industry grows. So again, getting back to an economic incentive, I would submit that the security industry doesn't have an economic incentive, by and large at a macro level, to solve the problem.

Albert: Interesting.

Malcolm Harkins: Some are trying hard, but at a broad level, the industry itself doesn't care about really solving the problem.

Casey Ellis: This is an inconvenient truth, because this is [inaudible 00:13:34]. So I think actually rebutting that as a vendor on this call is impossible because what you’ve just laid out is literally the math of the situation. Something that I throw out a lot with the team at Bugcrowd and with other people I speak to in the security solution space is the fact that this industry is a product of unintended consequence. We’re not actually really meant to be here. And I think it’s helpful to remember that.

Malcolm Harkins: Like I said, it's a by-product, it's an economic inefficiency. And unless we start dealing with it that way, we won't change the economic incentives across everybody who's touching it to do it the right way.

Casey Ellis: And that's where I keep bringing it back to this idea that to err is human. Why do we exist? It's because people make mistakes, and because there are things that can be exploited as a product of those mistakes. The adversary, the idea of someone leaving the front door open and another person taking advantage of that, predates the internet by a couple of thousand years. This is not an internet thing, it's a human incentive thing. It's really the nature of crime, the opportunity for crime. Cool.

Casey Ellis: That's what we've brought into our space. How do we create that awareness in a tech context? Because as you're building a product, as you're getting your MVP out to market, all you're thinking about is making that thing work and then getting it adopted. The fact that you can gloss over these inconvenient truths, from my perspective, that's the root cause of a lot of the problems that we actually see in the market. And it's not about being perfect. It's about being just that little bit better, ubiquitously, in a way that makes it economically irrational for an attacker.

Malcolm Harkins: If you're a developer, or even a hardware architect, going back to my roots, you don't get demoted for creating a product that has a security flaw. You get promoted by having the functionality that can sell, and getting a patent on it. That's how you become a principal engineer. That's how you run engineering. There's no penalty, as an engineering person, for the security flaws you might be creating in the product. Plus, risk is temporal. You might not find the security flaws till six months, a year, two years, three years down the road, or longer.

Albert: I read the book FairTax, and I was like, oh, we should go to the world of FairTax. It's pretty simple. And then someone told me, well, it'll never happen, because the system is incentivized to make the tax code as complicated as possible so that you can constantly, if I'm a politician, say, I'm going to help you, vote a certain way. And if you're in accounting, you need the law to change, or software needs it to change. It's constantly updating. TurboTax can sell you a new version every year. Why? Because the laws change. You fundamentally can't use the version you just had. So they described it that way.

Albert: And then someone said it's also, and you've probably heard it described the same way, in the pharmaceutical world: that they're not really that incentivized to solve chronic disease. They're more about having you just exist with a chronic disease. And so, in a way, in security and application development, the technology and advancement, the money rewards, as you suggested, Malcolm: if I can ship the best feature, if I can ship the best product, I win. I win more. And if I have a security flaw in it, maybe Casey can fix that for me.

Casey Ellis: I’ll find it.

Malcolm Harkins: But again, it gets back to the friction thing that we were talking about: if you can do it in a way that creates adherence to the business process. I use this analogy of a Formula One car, where everything is designed for velocity. Even the spoiler on the back creates friction, but that friction creates downforce that creates adhesion to the track. And the driver, if you think of them as the developer or the end user: how are we designing controls that create velocity towards their mission, their business objective? If we can do that, and reduce those frictions, and do it the right way, it becomes a heck of a lot easier.
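[Editor's note: a minimal sketch, in Python, of the friction dynamic Malcolm describes, where a control that creates more drag than users will tolerate gets routed around and stops reducing risk. The function, numbers, and tolerance threshold are hypothetical illustrations, not part of his actual nine box of controls framework.]

```python
# Illustrative only: a toy model of the point that high-friction
# controls get routed around. All numbers and the tolerance threshold
# are hypothetical assumptions.

def effective_risk_reduction(control_strength: float, friction: float,
                             tolerance: float = 0.5) -> float:
    """Estimate how much protection a control actually delivers.

    control_strength: 0..1 risk reduction if everyone uses the control.
    friction: 0..1 drag the control puts on users and processes.
    Once friction exceeds the tolerance, a growing fraction of users
    route around the control, and its effective protection collapses.
    """
    excess = max(0.0, friction - tolerance)
    bypass_fraction = min(1.0, excess / tolerance)  # share who go around it
    return control_strength * (1.0 - bypass_fraction)

print(effective_risk_reduction(0.9, friction=0.2))  # 0.9: full value kept
print(effective_risk_reduction(0.9, friction=0.9))  # ~0.18: mostly bypassed
```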

Albert: Let's take a quick pause to remind you that today's episode is brought to you by the Salesforce platform, the number one cloud platform for digital transformation of every experience. Now let's get back to the conversation. So let's get started down that path. How should products be developed? I'm curious, what is something that is fundamentally… That if you could redo it, and say, "Hey, someone's going to give you infinite money," or I don't know what you guys want, but infinite something. If you could just go back in time and fix this thing or these things, what are some things that you think fundamentally should change about product development? Because you were talking about how the incentive structure is upside down. Is it fixable? That's, I guess, the major question: is it fixable? And if it is, what needs to change?

Casey Ellis: I’m not sure that infinite money is the place that you want to start there because you end up with a bigger perverse incentive on the solution we came up with.

Albert: All right, infinite money, throw that out; whatever reward or incentive you want. If you got a chance to change how products are developed.

Casey Ellis: This is one of the things. So for the listeners that don't understand how bug bounties work, really the idea is that you go out to the open internet and say, okay, if you found something and you report it to us, we'll pay you for that. And the size of your payment is going to be correlated with the impact of what you've found. I think the thing that attracted me to this whole space in the first place is that, to me, that reflects the economic incentives of the adversary about as closely as you're going to get on the white hat side. It's not perfect, but it's getting pretty close. So there are those ideas of being able to build out feedback loops around how much it costs. If we pay out on five bugs and we're getting hosed, what does that say about the cost to attack our business?

Casey Ellis: How do we create that feedback loop as an organization? But then also, I think on the receiving side, make that not this dirty-laundry thing to be ashamed of. Going back to the whole to-err-is-human thing, it's not about hitting people over the head with a stick, it's about saying, "Hey, here's where you can improve. This is the cost of not improving." Some kid from halfway across the world just owned your stuff, which means his next door neighbor, who might be a bad guy, could do the same thing. This is real. And I think for engineers, that revelation, that moment of it just not being this theoretical thing that they can choose to ignore if they've got to push a product out by a day to hit a feature deadline; it's actually impactful to the user. That, to me, is a really powerful thing to begin with. The other thing that I would say is with respect to vulnerability disclosure.

Casey Ellis: The way that I talk about this is almost like a lightning rod. As a company that helps organizations do vulnerability disclosure, what we're not saying to them is that we can prevent lightning. Lightning is just a thing; it's going to do what it wants. So what you can do in response is acknowledge the fact that that's true, and then try to get that impact away from where it's going to create damage and route it down to where you can deal with it, which is how a lightning rod works. I think those sorts of ideas, in terms of getting that information back into the business itself, it's a really valuable process to be able to actually identify what you need to fix next.

Casey Ellis: But I think the broader thing is the idea that they need to put that thing up in the first place. The fact that you’ve even done that is an acknowledgement of the fact that we’re going to have issues. That’s not because we suck, it’s not because we’re not trying, it’s not because we’re terrible at security, or that we’re a bad company or anything like that. It’s just a product of the fact that people are building our stuff. So let’s acknowledge that and actually work with that. I think that baseline, it’s almost like the five stages of grief thing that [inaudible 00:21:19] talks about sometimes. The acceptance piece of that, where it’s like, this is just a thing that we’re going to have to deal with. If you can baseline around that, remove the stigma of when something goes wrong and just align around the fact that it’s going to be a thing.

Casey Ellis: So let’s deal with it and let’s learn from it. Let’s try to reduce its existence in the future. That, to me, is a good starting point, I think.

Malcolm Harkins: I would agree. And I think you said it well; another way to rephrase it is you can't eliminate risk, just like we can't physically or logically… Earthquakes, right? So you have a building code that's designed to mitigate certain wind shear and earthquakes and fires and stuff like that. We don't have the equivalent of that professionalism in the broad IT space. The other thing that I would say is there's a difference between incentives and motivations. You can incent somebody, but it doesn't mean they're motivated. It's the difference between somebody who's compliant and somebody who cares. I'd rather have somebody who cares than somebody who's compliant.

Malcolm Harkins: And I think we have to work on both sides: the incentives we were talking about on the economic side, but also the motivations. We've got to start putting more human stories on it and looking at the fact that people's lives are at stake. I posted something on this the other day on ransomware and pay-or-not-pay, because it's a tough dilemma, to pay or not pay. But I said, in the organizations that I've run, what I've always had is a set of principles defined ahead of time: would we pay or would we not pay, and under what circumstances, who's the decision maker, who has input, all that stuff. Just like you would in a physical kidnapping and ransom. This is just a logical one, but you have to recognize that if you're paying, you don't know who's on the other side of it.

Malcolm Harkins: And you might be aiding in human trafficking. You might be funding a terrorist. You might be funding a drug lord. And as for the reality of it, a lot of people think I overplay it, but it wasn't that long ago. I think it was in the book Future Crimes, which Marc Goodman, the former Interpol guy, wrote: there was some traceability for the Mumbai bombings in 2008 indicating that about $2 million of the funding for that physical terrorist event came from cybercrime.

Albert: Dang.

Malcolm Harkins: And so if we all looked at it and said our sloppiness, even in managing InfoSec at a mom-and-pop mid-market retail company, could lead to nickels and dimes that flow to an event like that, we would act differently. And we would demand more of the security industry, to sell us better products.

Casey Ellis: And in terms of the tolerance to those controls being put in place. Because I think that whole idea around, okay, is the solution for ransomware to make payment illegal? That's obviously a fragile proposition, but I think it's getting at the right part of the value chain for the attacker. I do feel like a lot of the conversations around how to mitigate the threat side of this are starting to happen in the right direction, but it does leave open the question: why is it so easy in the first place? And assuming that we can mitigate on that end of things, on the threat side, what will they come up with next?

Malcolm Harkins: And every press announcement on a breach: "a sophisticated actor." Really? It wasn't that flipping sophisticated. We did a sloppy job, and we then try and cast it off to deflect liability rather than say, let's go do attribution for the control that failed. Instead, we try and do attribution to the threat actor and threat agent. That's a distraction from what I'm responsible for as a chief security officer, which is managing the vulnerability of my organization, and anything that is not focused on that is a distraction from what I can manage.

Albert: You guys are starting to get fired up, which I love. It was a good discussion. I want to dive into something that you guys have both said independently a couple of different times: that, fundamentally, the incentive structure is what needs to change. How do you envision the incentive structure changing? You hinted at it just a moment ago, Casey, where it's like, hey, maybe it's a legal thing where the laws actually have to change to mitigate this type of behavior. Talk a little bit about how companies should get paid or not paid. That's usually incentives. When I think incentives, I usually think money, but it might not be money. Give me a flavor for what you're thinking: how, if we fundamentally, as a people, as humans, as business operators, thought differently, we could probably mitigate more risk. Like you said, not eliminate, but mitigate more, reduce it some.

Casey Ellis: No, definitely. And to your point around mitigate, not eliminate: I think legal changes are an aspect of it. They're not a silver bullet, though often they're treated like that by legislators. And it does come back to this fact that bad guys are innovative and they have to eat. So even if that law works, you're going to have to deal with something that comes after the thing that you've solved. So I don't see that as a solution that exists in perpetuity; it's going to be effectively an arms race for as long as crime exists and the internet can facilitate it. My personal belief on this: sometime between now and the heat death of the universe, the consumer is actually going to care about whether or not they're going to get hacked buying your products. That hasn't happened yet. I feel like we're trending in that direction.

Casey Ellis: There's enough narrative around that, I feel, at this point in time. An interesting thing about the Bugcrowd story is that we actually landed in the US the same month Snowden did his disclosures around the NSA. It might be colored a little bit in hindsight by what has happened since with Bugcrowd, but I've been in security as a career since high school, which is a long time before we landed in the US. To me, that was when people actually started to consider the possibility that they should care about this stuff. Prior to that, it wasn't really a thing. And then the next year you've got 60% of the US population getting their credit card breached, so now hacking happens to me. The year after that it's OPM, it's Ashley Madison, so now it hurts. 2016, you've got the attacks on the elections, so now my country's getting hacked. And it's just gotten progressively more dystopian since then. 2020, it's my five-year-old's responsible for my corporate attack surface; 2021, ransomware was eating the world, basically, and my five-year-old's gone off and basically progressed their influence on my corporate attack surface.

Casey Ellis: We're in this position now where that's not just a security conversation; that's something that the layperson is thinking about. Pat Gray was talking with Tim Watts, who is a politician here in Australia, around this idea of retail politics. Once you've got an issue to the point where the layperson is voting based on it, or their vote is at least partly influenced by it, at that point the overall incentive around the reason for that problem existing in the first place starts to get shifted by the political agenda. I think that's going to be the thing that ultimately shifts this. I don't know how long that's going to take, to be honest. There are different things that I think can be done to influence it. Going back to what I was saying before, around some errors I feel the security industry at large has made: we've made it all about the stick.

Casey Ellis: I think it should also be about the carrot. So how do we make it attractive to do this well? If someone's walking into a Best Buy and choosing between this router and that router, all other things being equal, if they know that one's more secure than the other, what gets them to the point where they choose the more secure one? It's those sorts of things that I think create… We need to make this about actual enablement of the business, not just being an insurance policy and trying to hold off a bad day.

Albert: Malcolm, I was thinking about that with the new Apple campaign. In this case it's not security, but they call it privacy. It's about privacy. That's the principal feature of their phone that they're touting, more so than anything else. Because like what you said, a better camera, a better screen, all these things are becoming, I don't know, relatively comparable, very comparable. So then they're saying, I'm more private than the other.

Casey Ellis: Definitely. I think with that, my view is the layperson confuses security and privacy in general when it comes to this space. It's interchangeable. That's not a dumb thing; it's just confusing. I can understand why that happens. And as an intrinsic value to a consumer, privacy is more relevant, more quickly, than security. Security is this abstract thing: what if nothing bad happens?

Albert: It’s like insurance.

Casey Ellis: Whereas with privacy, oh, this is my personal information. I don’t want that to go where I don’t want it to go. That’s a value add for me, I’m going to choose that thing.

Malcolm Harkins: I was going to say, it's interesting that you were talking about those differentiators for consumers, and I completely agree with you, Casey. A couple of years ago, I gave testimony at the US Senate commerce and technology committee on the promise and perils of emerging technology. And one of the senators shortly afterwards introduced a security bill to basically do the equivalent of Energy Star ratings for cybersecurity products, or the crash-safety-rating equivalent.

Malcolm Harkins: It couldn't get off the ground, and actually some of the big tech companies that were on that panel were against it. Why? Because it would mean more work for them. We've got to do those types of incentives to make it easier for consumers. And then I think we also have to look at, and I'm not big on regulation, but regulation done right will help with some of these things. So think of Sarbanes-Oxley, 15, 16 years ago: we had a bunch of financial integrity and financial reporting issues in the US, and then all of a sudden the CEO, the CFO, the general counsel, the executives had to attest to their internal controls on financial reporting and financial integrity.

Malcolm Harkins: Not that they wouldn't have errors in their numbers occasionally, but they wouldn't be material or impactful to the shareholders. We could do the same thing for cybersecurity and say: you have to attest to your processes in the design and development of the technology you're selling, and to the technology that you're operating as a business. If we did that, and held people accountable to that attestation with a level of personal liability, you'd create a lot of incentive differences to do it right. It doesn't necessarily mean spending more money. It just means spending it in the right places, in the right way, to effect change.

Albert: So, going back to what you described earlier in the conversation: the modern enterprise or modern business relies on so many different technology partners. That's just the reality of it. We rely tremendously on many, many interconnected technology partners. And if we were to increase the culpability, the responsibility, or however that works, you mentioned the rating systems; doesn't your entire chain have to be rated the same way? Otherwise, aren't you weakened by whoever… Back to that original point, whoever's the weakest link in your stack, isn't that the problem then?

Malcolm Harkins: Not necessarily. Being vulnerable and being exploitable are two different things. So it depends upon the context of the vulnerability, in the context of the enterprise, along with other mitigating controls, whether or not that's exploitable and can create material or significant harm. And if we did it the same way you do on the financial side, material and significant harm, well, we would clear the clutter of all these ankle-biter things, because just by dealing with better processes, we'd get rid of that stuff. And then we'd focus on the things that are going to really bite us, and bite us hard. And if we did that, maybe we would not have had Colonial Pipeline. Maybe we wouldn't have had SolarWinds. Maybe we wouldn't have some of these bigger things that are starting to create a systemic societal risk.

Casey Ellis: Agreed. To me, there's that part, that's talking about the stick and the regulatory side of things. I think the other piece of that is to be able to communicate that to the consumer, so it's not just, you've violated regulation. How do I explain to a layperson that I'm actually doing this well, and how is there accountability around the ways that I'm doing it, so that it's not just "I take security seriously"? There are things that are practical and that have some degree of consensus around them, which makes them understandable to someone that's making a choice between one product and the next. The more it can be simplified down at that point-

Malcolm Harkins: Cars are highly complex with a lot of integrated systems and parts.

Casey Ellis: Cars are a great example.

Malcolm Harkins: And airplanes, trains; there are so many different physical analogies of things that have a lot of complexity to them, but we've managed the risk reasonably well. Injury happens and unfortunately people die, but by and large, considering the amount of that stuff occurring, we're not seeing catastrophic things day in and day out. We were in the fifties before Nader pushed seatbelts and stuff like that. So we've got to think differently about how we do what I call being a choice architect. I was in Australia a few years ago, and somebody asked me how I'd define my role. I'm like, I'm a choice architect. I'm architecting choices for the business. Some of those choices I get to make; some the CIO makes, some the CTO, some the CEO, some the board. If I architect choices better, then we will end up with better decisions, which will result in lower risk.

Albert: All right, you've got me convinced on your morning star concept. I like it. I was thinking about that as you guys were talking, about how, like you mentioned for aviation or vehicles, we treat the vehicle as a whole: we hold Toyota responsible for its entire supply chain. If something failed on my car, am I like, well, who made that part? I don't think that way. I think, Toyota, you built this car. You're responsible. If there's a mechanical failure, it's your responsibility to fix this.

Malcolm Harkins: Or, to give an example, look at the airbag issue. A safety mechanism that had a problem, and [inaudible 00:35:15] got crucified for it. Well, look at what happens with the security industry: all of that has failed, and the stocks of the companies who sold product that didn't work go up. In what world does it make sense that we celebrate the security industry's failure by increasing their valuations?

Casey Ellis: That goes back, I think, Malcolm, to [inaudible 00:35:37] incentive around cybersecurity actually solving this problem. We want to do the things that we're asked to do, but with the whole idea of being able to actually go in and solve it at a fundamental level, there's almost this threshold that exists: we don't want to do it too well, from a pure economic standpoint. And that ultimately goes back to integrity. I'm very aware of that concept as a vendor; I work to execute on the solution in spite of it, and there are others that do the same thing. But yeah, the fact that that exists as a perverse incentive is not, I think, something that should be ignored.

Albert: All these things that you guys are saying, exactly. I'm thinking it's changed the framework of how I envision or think of other companies, because you've brought up some great elements during this conversation that get me thinking about it and relating it to other industries. And one thing I kept thinking about when you talk about mitigating risk is actually this concept of how retailers budget for shrinkage. They just budget for it, meaning they actively invest in stopping it, but at no point do retailers assume it's never going to happen. They always assume it's going to happen.

Casey Ellis: Yeah, exactly.

Malcolm Harkins: When I was in the retail credit industry before graduate school, I managed my department, Southern California credit operations. We maximized net income with 2% bad debt, and we would play with the credit ratings daily. If revenue was running light and bad debt was running low, we would loosen credit. Why? Because we maximized net income with a 2% bad debt. People who couldn't pay, and fraud, made up that 2% of bad debt.
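[Editor's note: the target-loss-rate control loop Malcolm describes, rendered as a toy daily adjustment rule. The 2% bad-debt target comes from the conversation; the decision rule and example inputs are hypothetical.]

```python
# Illustrative only: a toy version of the daily credit-policy loop
# Malcolm describes. The 2% bad-debt target comes from the interview;
# the decision rule and example inputs are hypothetical.

TARGET_BAD_DEBT = 0.02  # accept 2% bad debt to maximize net income

def adjust_credit_policy(bad_debt_rate: float, revenue_vs_plan: float) -> str:
    """Pick the day's move from loss and revenue run rates."""
    if bad_debt_rate > TARGET_BAD_DEBT:
        return "tighten"  # losses above target: pull back on approvals
    if revenue_vs_plan < 1.0:
        return "loosen"   # losses at/under target, revenue light: take risk
    return "hold"

print(adjust_credit_policy(bad_debt_rate=0.015, revenue_vs_plan=0.95))  # loosen
print(adjust_credit_policy(bad_debt_rate=0.025, revenue_vs_plan=1.05))  # tighten
```

The same budgeted-loss framing maps directly onto Albert's shrinkage point: the target is not zero loss, it is the loss rate that maximizes the overall outcome.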

Casey Ellis: You build it into your business model and plan ahead. I think about tying cybersecurity to automotive, because automotive is a great example of an industry with safety protocols, clear regulations around them, and clear grading and scores and different things like that. But I also think they have a really clear picture of what happens when you get it wrong. Tesla had a vulnerability come in, four or five years ago, that they were able to basically ingest, remediate, regression test, deploy into a test fleet, and then deploy into their entire fleet over the air, in 11 days from receiving that report.

Albert: Bananas.

Casey Ellis: That didn't happen by accident, you know what I mean? That's a whole lot of planning ahead that goes into that, which goes back, Malcolm, to what you were just saying around assumed risk and how you build that into your business model. That's a cost for them, so they've had to plan for it. Again, maintaining all of that infrastructure to make that work is expensive. But what they've been able to do is reduce the cyber risk to their users, and reduce the physical risk as a consequence of cyber risk, just in general. This vulnerability didn't actually speak to that, but hypothetically, if it did, the problem would have been solved in the same way.

Casey Ellis: I think as a result, and this is going back to my whole thing around how you make security sexy for the consumer: if you tell that story to someone who's had their vehicle recalled for a similar problem, they're going to start to think that Tesla's the most secure product. In that example, you've got almost a pattern of how to increase usability as you do this stuff, in a way that's relevant to the user, to solve the actual problem, but then also make it attractive enough that people want your stuff more.

Albert: To your point earlier, or both of your points about the consumer demand side: that's going to start happening, because I've noticed myself, I don't like updating products. I think it's annoying. And there was a wave of products, and you absolutely know this, that for some reason couldn't update your firmware over the air. You had to get out your USB cable to download the newest package from your computer.

Casey Ellis: That’s before they built that and put it in.

Albert: But it has gotten to the point where, as you mentioned before with Tesla, and we obviously experience it with our phones now, there's this idea that you need to be a business that's constantly innovating and delivering updates. I think that is absolutely a brand desire. Now, it hasn't quite, I don't think, transformed into, hey, this is more secure; like you mentioned before, people confuse security and privacy. But I wouldn't be surprised if, in the not-so-distant future, it becomes a major critical factor in purchasing decisions, where it's like, hey, how secure is this product?

Malcolm Harkins: I think some of the more adventurous folks who take a first-mover position on it will do well, because they'll be able to establish themselves as the security brand that cares about the consumer, cares about the consequences and impact to them. And then again, like you mentioned, there's the confusion between security and privacy. I've always said that security and privacy are like two magnets. Turned one way, they're perfectly binding, because you need good security to have privacy. But security can also encroach upon privacy if it's not done right. And privacy can sometimes stay too academic, too legal-

Casey Ellis: Subjective.

Malcolm Harkins: And not be practical enough. So when you do that, it's just like turning the magnets so they polarize each other. We've got to treat it as a design and architectural issue to bind them. And if you start with that as the goal you're trying to achieve, security and privacy, we will actually lower the risk on both sides of it.

Albert: There you go, gentlemen. I appreciate you guys joining us today on our security series. Thanks for sharing some of the insight and some of the vision you have for what's happening in the current marketplace, as well as what you think needs to happen beyond the present-day environment to further enhance mitigation in security. We talked about a lot of topics, but I've got to close with one thing. We keep hearing about artificial intelligence, and one of the things that has been postulated a lot, and I'd love to hear your perspective on this: is AI ever going to be so smart that it can stop any attack, where I don't need, no offense to Casey here, but where I don't need your team? Where I literally have an AI in my network system constantly looking for bugs, able to detect and fix irregularities?

Albert: I don't know. How far away is that, or is that never going to happen? Because the bad actors, as you said, in the history of time, as we also talked about in this conversation, there's always been a bad guy. Never have humans lived in peace, like, oh, everyone is kumbaya, we all love each other. There has always been a bad guy, and there are always going to be, we can assume, smart bad guys. Is AI ever going to be able to actually protect us from any bad actor? Or is it always just going to be that the race continues? We're always going to have someone trying to trump it?

Casey Ellis: The way that I talk about it, it's like the Iron Man suit. The human without the suit is weak; the suit without the human is dumb. And this is excluding Jarvis, which is your question. So if you go back to that construct, the idea is that AI and machine learning, these different computer learning things that we've got to work with now in cybersecurity and right across the board, they're levers. They're not a replacement, in my mind, for human intelligence. To your question around whether the singularity, which is how people talk about it, will replace human creativity when it comes to defense?

Casey Ellis: The answer that I give to that is that if and when that happens, we're going to be worried about Skynet, not these conversations. And I'm going to be thinking about how to hack that stuff, to make sure that humans stay safe. So that's, to me, this abstract thing. I do think it's possible just from a theoretical computer science standpoint; you look at quantum and different things that could enable that and actually change the game in terms of how that stuff works. I don't think it's outside the realm of possibility. I don't think we're anywhere close to it at this point in time. And that would be my answer.

Albert: There you go.

Malcolm Harkins: Mine would be: anything that executes code can execute malicious code. AI is made up of machines that have code, that have systems. So AI itself will also be manipulated and compromised; you can manipulate it. It wasn't that long ago, I think, that Microsoft had a bot that people were playing with by doing things that then turned around and put profanities and other things out there.

Albert: Started swearing at all the Xbox players or whatever.

Malcolm Harkins: Exactly. So I think for me, I'm a former finance guy, an IT finance guy, an IT procurement guy. Automation's job is to increase our effectiveness and efficiency. AI is the same thing. It's the same as that exoskeleton suit. If it's increasing your effectiveness and efficiency, do it; if it's not, don't buy the marketing.

Casey Ellis: And that would be the golden takeaway, I think. It still gets misread today. There was, I think, a spike in that as AI first got on the scene and people started talking about it, where it was oversold in that sense. People expected it to just take care of everything for them, and then, over time, that was proven not to be what it actually did. That's good. But it's still, I think, perceived all the time in that way, as a replacement for humans; artificial intelligence, it's in the name. To be able to double-click on that a little bit and say, what are we actually talking about here? What does it actually do? What is it useful for? Making sure that everyone's clear on that side of things, I think, is really important.

Casey Ellis: Going back to what you were saying, Malcolm, around the vulnerabilities in ML and AI itself, we're starting to really dig into educating people on how to do adversarial ML testing. So, like this idea: there was a really funny POC that happened in the Bay Area, seven or eight years ago at this point, where people tricked GPS on a certain traffic system to reroute traffic on the peninsula, and it worked. There's a more recent example where literally someone got a whole bunch of phones, put them in a little trolley, walked it really slowly across a bridge, and simulated a traffic jam in the same way, and was successful in that; it diverted traffic. And that was an attack. It's low tech, but in terms of the concept that it's proving, that's where some of the weaknesses exist in these types of things.

Albert: Got many, many machines to behave a different way.

Casey Ellis: Untrusted input.

Malcolm Harkins: You're exactly right. I used this as an analogy several years ago in a speech, because cement providers are starting to put sensors in cement to aid in maintenance and traffic routing and all that stuff. I talked to the CTO of the cement company. I'm like, how are you dealing with the security of this? And he's like, what do you mean? I'm like, what do you mean, what do I mean? Look, if I can play with the flow of that data, and this was after the Boston bombings, imagine I want to redirect traffic towards a kinetic event. I could do that, utilizing flaws in the logic or the technology, and move traffic towards something rather than what it's intended for, if I knew how to manipulate it.

Albert: It's like the principal technology every spy movie uses to reroute the red lights and green lights, the getaway routes.

Casey Ellis: It's the fire sale, absolutely. I look at ML from my viewpoint; what we're seeing with Bugcrowd is, call it the moment, that the degree to which that exists in a particular category of technology tends to be fairly proportional to how that category is adopted in market. And when you think about ML, we've just gone and slammed it across everything we can think of, and there's starting to be evidence that that wasn't necessarily a thoughtful process. Like the flash crash, 2008 I think it was, where high-speed traders, which are ultimately, this is early ML before Silicon Valley got a hold of it and started to market it, it's a similar concept. That got gamed and exploited, and tanked the NASDAQ. This stuff can go wrong because people just haven't thought through the unintended consequence; that's always going to exist. Anytime people design stuff, that possibility is always there. I think this is no exception to that.
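[Editor's note: a minimal sketch of why the phones-in-a-trolley trick Casey mentioned works against a system that trusts raw device pings, illustrating the "untrusted input" point above. The estimator and numbers are hypothetical, not any real mapping service's algorithm.]

```python
# Illustrative only: a naive congestion estimator that trusts raw
# device pings, and how a cart of slow-moving phones games it.
# The function and numbers are hypothetical, not a real maps system.

def congestion_score(pings: list[dict]) -> float:
    """Fraction of reporting devices moving below 10 km/h."""
    slow = sum(1 for p in pings if p["speed_kmh"] < 10)
    return slow / len(pings)

organic = [{"speed_kmh": 55}, {"speed_kmh": 60}, {"speed_kmh": 8}]
spoofed = organic + [{"speed_kmh": 3}] * 97  # a trolley full of phones

print(round(congestion_score(organic), 2))  # 0.33: road looks free-flowing
print(round(congestion_score(spoofed), 2))  # 0.98: fake "jam", traffic reroutes
```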

Albert: Casey, Malcolm, I appreciate you guys joining us today on our security series. Thanks for sharing your insight, your careers, and your perspectives on what's happening in the climate of cybersecurity today. I hope you had a good time on the show. Any last words for our audience members?

Malcolm Harkins: I always say a fool with the tool is still a fool.

Casey Ellis: That's a pretty good landing spot. Honestly, my inverse of that is that humans are integral to cybersecurity. I'm biased in saying that given what I do for work, but part of the reason I do what I do for work is because I believe that; that's why I founded the company.

Albert: I agree with both of you. Listen, I don't believe that AI machines can ever solve all the problems. I believe we'll always need a human trying to solve the puzzle, or whatever problem is presented to us. I look at it from the perspective of me just trying to change a flight through those automation systems. It's virtually impossible, for anyone who's ever done it. Like, what is your locator number? GDB4. It's like, I don't understand. It's like, damn it, we're so far away. We're so far away from something being predictive, analytical, you know what I mean? I think I agree with you guys. These technologies have come a long way, but there's still a long way to go. And I just don't see a space where the computer solves everything. I don't see it, because people will always find a way to think around it.

Casey Ellis: So in the meantime, people finding a way is part of the solution.

Albert: That’s it.

Casey Ellis: That’s where I was getting to with that, the whole idea of subjectivity, context, application of risk, prioritization, getting ahead of what the attackers are going to do and where they’re going to be most effective. That’s going to be a part of our day job, I think for a long time to come.

Albert: Exactly. Plus the incentives and value systems for people to find ways to close these loops. That's exactly it. Gents, I appreciate you joining me today on IT Visionaries. Thanks for being part of the security series.

Casey Ellis: Thanks for having us.

Malcolm Harkins: Thanks.

Speaker 3: IT Visionaries is created by the team at mission.org and brought to you by the Salesforce platform, the number one cloud platform for digital transformation of every experience. Build connected experiences, empower every employee, and deliver continuous innovation with the customer at the center of everything you do. Learn more at salesforce.com/platform.


caseyjohnellis

founder/chairman/cto @bugcrowd and co-founder of @disclose_io. troubleshooter and troublemaker. 0xEFC513EA