Quis custodiet ipsos custodes? asks the classical Roman maxim: “Who watches the watchmen?” But the security vendors who stand guard over today’s networked information systems are under considerable scrutiny from their competitors, their customers, hackers and, increasingly often, governments concerned about national security. Scientific American’s editor in chief John Rennie sat down in Palo Alto, Calif., this past May with representatives from the security industry—and from some of the industries that will rely on the protections they provide—to discuss the challenges they will confront. What follows is an edited transcript of some highlights of those proceedings; a more complete version is available here. —The Editors
THE PARTICIPANTS
Rahul Abhyankar: Senior director of product management, McAfee Avert Labs, McAfee
Whitfield Diffie: Vice president and fellow, chief security officer, Sun Microsystems
Art Gilliland: Vice president of product management, information risk and compliance, Symantec
Patrick Heim: Chief information security officer, Kaiser Permanente
John Landwehr: Director, security solutions and strategy, Adobe Systems
Steven Lipner: Senior director of security engineering strategy, Microsoft
Martin Sadler: Director, systems security lab, HP Labs, Hewlett-Packard
Ryan Sherstobitoff: Chief corporate evangelist, Panda Security US, Panda Security
Who Is Responsible?
The panelists agreed on certain priorities for maintaining or strengthening data security. Some of these were technological, but regulatory and legal frameworks were also crucial.
DIFFIE: The foremost influence on these things in the next decade is going to be Web services and what I call digital outsourcing. We’re going into a world where there will be a million computational services that somebody else can do for you better than you can do for yourselves. Ten years from now you’ll look around and see that what we call secure computing today will not exist. So what is going to be needed is a legal framework that obliges contractors to protect the security of the information. But they cannot respond to the obligation unless the technical machinery can be developed to allow them to protect that information.
GILLILAND: Yes, but if you look at how customers are actually implementing technology today, they’re already far behind what it can do. That’s not necessarily the problem now. It’s how do we make this technology practical so that customers can actually address their own privacy issues, their own auditing processes, and manage the protection of their data for themselves to current standards, which for the most part they’re not doing today.
LIPNER: For the business customers, you want the sort of things that Art and Whit are talking about: assurance about what will be done with your data, ways to describe the restrictions on it, and so on. For the consumers, you want an environment that they trust and that just works—because a lot of the growth of the Internet and Internet business is based on consumer confidence. We need to increase that confidence and ensure that it’s justified.
GILLILAND: The interesting balance that we have to figure out is, How do you enable businesses to continue to share information as rapidly as possible so they can make good decisions and yet make that sharing simple?
The Dangerous Human Element
Users themselves can be the Achilles’ heel of security systems because of their propensities for error and their tendency (however unwittingly) to trade data safety for ease of use. As such, it falls to technology to compensate for the potential failings of users.
HEIM: We should not underestimate the human element. I liken it to driving. The reason we have controls in place such as driver’s licenses is so that people at least have a basic understanding of the rules of the road and how to operate a vehicle safely, so that we can minimize those risks. I don’t think there’s been enough educational outreach to end users on how to use their systems safely. I’m not necessarily proposing there needs to be a “cyber driver’s license,” but you know, that probably wouldn’t be a bad idea because we see that many, many of the observed problems are behavioral in nature.
DIFFIE: See, that’s exactly what would be an utterly monstrous idea. Cyberspace is the world of the future. If you don’t have a right to be there, you don’t have a free society.
ABHYANKAR: The human element is something that we can’t ignore. We recently celebrated the 30th anniversary of spam. E-mail continues to be something that gets exploited. There is a dark underbelly to technology, and the rate of innovation that the bad guys have and the social engineering techniques they have to steal your data are that much further ahead of what the good guys have. That’s something that technology alone is not going to solve.
GILLILAND: If you look at the research that we’ve been doing, around 98 percent of the data loss is through mistakes of human error and process breakdown. Being in the security industry, we’re always going to be fighting the bad guys. But the bad guys are less of the problem around data loss. Being able to steal information is always going to be a business for somebody, and you can’t ever fight all of them 100 percent. But we can stop the large percentage that is human and process error.
HEIM: We see on a day-to-day basis that if the technology organization itself can’t anticipate the needs of the individuals, in many cases they will enable themselves to get their jobs done using consumer-grade technologies.
The Economics of Modern Hacking
Hacking is no longer the province of curious or bored programmers. The production of malicious software is now a business, and that fact profoundly changes the scope of the challenge.
ABHYANKAR: The economic model for hacking is so well established that if it were legitimate and you were a venture capitalist looking to put money into this business, you would get good returns, right? The cost of sending malicious e-mail just keeps getting driven down. And anonymity in the network makes it harder to track down the bad guys from a legal enforcement and prosecution perspective.
SHERSTOBITOFF: A lot of the activity is not really centered on the original hackers. They’re using middlemen. When you actually investigate, you end up getting to individuals—what they call “mules”—who had no awareness or knowledge that they were becoming victims of this whole scheme. We’re seeing that result as an upsurge from these Web sites that say, “I have a great job for you! Make $1,000 a week!” Law enforcement can’t get to the hacker who created the malicious software; the hacker or the attacker is long gone. The hackers don’t actually conduct the attacks; they sell these creations for money. There’s an underground economy just on sales of these attacks. You can now purchase something for $1,200 and be a cybercriminal.
SADLER: So, given that we all understand how sophisticated the bad guys have become, what level of cooperation do you think we should be employing? Because, essentially, we still all compete. We’re fragmented, and the bad guys are coordinated. And there’s plenty of evidence that these different organized criminal elements are actually trading this stuff among themselves. We don’t have that level of cooperation among ourselves.
SHERSTOBITOFF: That’s why I would advocate a vendor-agnostic approach here. To circumvent this threat takes not only a technological approach but also a community-sharing response, with research labs working together to share what they’ve seen. Because already, not all the malware samples in our labs come from our customers. We do get them from others in the industry. At the top, we’re not like bitter rivals. It’s a common problem that the industry as a whole needs to respond to.
Better Education? Or Better Design?
Perhaps surprisingly, the panelists generally foresaw few lasting improvements in data security from better educating end users: the nature of the threats changes too fast.
LIPNER: We need to take the burden of sophisticated education off the end user and get to the point where the technology is just helping the user be secure and you’re not imposing pop-up fatigue on users, because it’s counterproductive. A lot of building secure systems is about the user experience. And I think that’s gotten short shrift across the industry.
SADLER: I don’t think we should be putting emphasis on education at all. I think it’s only education in extremely general terms that will last more than six months. You look at many of the education programs around the globe, and they’re very, very short term in what they’re telling people to do. Put in place the latest antivirus, that sort of thing.
HEIM: If people really knew the consequences of installing that free animated screen-saver widget—that in essence, they are saying, “I trust the developer of this little widget with complete access to my system and all my data”—it might change the way people behave online.
SADLER: I think there is an answer, though. You train young children, when they go out, to pay attention to the neighborhoods. “These neighborhoods are kind of safe; these are not.” The equivalent on the Internet now is, we walk out with our entire bank account into the most unsafe neighborhoods, and then we’re surprised when we’re mugged. There has to be separation of concerns. You want people to be able to download the latest screen savers, but in a part of their environment that doesn’t affect their bank account.
HEIM: But when we’re dealing with large-scale infrastructures, you need to be able to rapidly apply new patches and to maintain the stability of your environment. And it’s not always clear-cut that if you apply a security patch, that you aren’t going to come crashing down.
GILLILAND: I agree there shouldn’t be some driver’s license–like certificate for using the Internet. But why wouldn’t we have basic end-user education when you walk into a company? “Here’s your laptop, here’s your PDA. I’m going to teach you the security principles for Symantec.”
SADLER: And how long do you think those principles would last?
GILLILAND: Principles can last for a long time.
DIFFIE: It depends on what they are.
GILLILAND: “Don’t open e-mail or don’t open attachments from people that you don’t know.”
DIFFIE: That’s a hopeless rule.
LIPNER: The only way you can address that is with underlying security and authentication. You give users a choice, but they have to know there are classes of things that are safe, whether it’s Web sites or attachments or executables. If you tell a user, “You have to read the code, or you have to interpret the SSL dialogue boxes,” that’s too hard. For end users, you have to provide an authenticated infrastructure that allows them to know whom they’re dealing with.
GILLILAND: End users will violate the trust, given the opportunity, without a certain amount of education. Even if a warning pops up and says, “Warning: this site appears to be dangerous,” but the site says, “Click here to see Britney Spears naked,” they will still do it. The most effective sort of virus dissemination is always social engineering. Always.
LANDWEHR: Isn’t there another way we can look at solving this? Instead of focusing so much on how to educate users about malware, we can change the rules of the game for the hackers so they’re less interested in attacking our computers, because we’re better at protecting the information that’s on them. Then if anybody steals the files that are on the disk, they’re encrypted. If someone accidentally e-mails something, it’s encrypted. If it goes anyplace that it shouldn’t, they don’t have the keys to open it.
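[Editors’ note: The principle Landwehr describes—that stolen ciphertext is worthless without the key—can be sketched in a few lines of Python. This is a toy one-time-pad illustration of the idea, not any vendor’s actual product; the function name and sample message are invented for the example, and real systems use vetted authenticated-encryption libraries rather than raw XOR.]

```python
import secrets

def xor_pad(data: bytes, key: bytes) -> bytes:
    """XOR every byte of data with the corresponding key byte.
    With a random key as long as the message (a one-time pad),
    the ciphertext reveals nothing without the key."""
    if len(key) != len(data):
        raise ValueError("key must match message length")
    return bytes(d ^ k for d, k in zip(data, key))

message = b"patient record draft"
key = secrets.token_bytes(len(message))  # stored separately from the data

ciphertext = xor_pad(message, key)       # all a thief would see
recovered = xor_pad(ciphertext, key)     # XOR is its own inverse

print(recovered == message)  # True: only the keyholder can read it
```

If the files on a lost laptop or a misdirected e-mail are in the ciphertext form, the attacker holds bytes, not information—exactly the change of rules Landwehr proposes.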
SHERSTOBITOFF: Agreed. In the financial community, they’re taking on the evolution of out-of-band authentication [joint authentication over two independent systems, such as a networked computer and a cell phone]. Some of the higher rolling traders are getting authentication devices: smart keys, RSA tokens. Some in the financial community are also putting anomaly detection in the back ends to detect suspicious patterns and localizations. Ultimately, financial institutions are adapting their technologies and authentication mechanisms so that they basically do not invite hackers.
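[Editors’ note: The out-of-band scheme Sherstobitoff mentions can be sketched as follows—a hypothetical illustration, not any bank’s actual protocol. The server generates a one-time code, delivers it over a second, independent channel such as a text message, and compares what the user types back; an attacker who controls only the computer never sees the code.]

```python
import secrets

def issue_code(send_via_second_channel) -> str:
    """Generate a six-digit one-time code and push it over an
    independent channel (for example, SMS to the user's phone)."""
    code = f"{secrets.randbelow(10**6):06d}"
    send_via_second_channel(code)
    return code

def verify_code(expected: str, entered: str) -> bool:
    # compare_digest avoids leaking, via timing, how many digits matched
    return secrets.compare_digest(expected, entered)

# Simulate the phone: capture whatever "arrives" on the second channel
phone_inbox = []
expected = issue_code(phone_inbox.append)

print(verify_code(expected, phone_inbox[0]))  # True: code came from the phone
```

The security comes from the independence of the two channels: malware on the PC cannot complete a login without also compromising the phone.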
LANDWEHR: We’re seeing a lot of activity around smart cards. I’ve got my smart card badge here, and it’s the same badge that I use to go into the buildings that we have around the world, but it also has a PKI [public-key infrastructure] credential on it that I can use to log on to applications, encrypt business documents and digitally sign PDF forms. There’s also a PIN code that protects it, just like an ATM card. If you steal the card from me, you get a couple of guesses on the PIN code, and then it stops working.
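[Editors’ note: The guess limit Landwehr describes is easy to model. The sketch below is a toy stand-in for the retry counter a real smart card enforces in tamper-resistant hardware; the class name and PIN are invented for illustration.]

```python
class PinLockout:
    """Toy model of a smart card's PIN retry counter: a few wrong
    guesses and the credential stops working entirely."""

    def __init__(self, pin: str, max_attempts: int = 3):
        self._pin = pin
        self._max = max_attempts
        self._remaining = max_attempts
        self.locked = False

    def verify(self, guess: str) -> bool:
        if self.locked:
            return False                 # a locked card rejects everything
        if guess == self._pin:
            self._remaining = self._max  # a correct PIN resets the counter
            return True
        self._remaining -= 1
        if self._remaining == 0:
            self.locked = True           # real cards then require reissue or an unblock key
        return False

card = PinLockout("4921")
card.verify("1111"); card.verify("2222"); card.verify("3333")
print(card.locked)          # True: three strikes
print(card.verify("4921"))  # False: even the right PIN no longer works
```

Because the counter lives on the card itself, a thief cannot simply copy the data and brute-force the PIN offline.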
The International Perspective
National perspectives on data security and privacy vary greatly. In many respects, the U.S. is lagging in its response to rising threats.
SADLER: I think there’s a much greater effort in France, Germany and the U.K. to educate small businesses than in the U.S. So despite my arguing against education, I think the U.S. probably has to get some basics in place for small businesses here. Also, there’s a much better dialogue among academia, government agencies and industry in Europe, particularly in the U.K. and in Germany, than in the U.S. I don’t think the U.S. shows anything like enough common dialogue among those parties.
SHERSTOBITOFF: We’re seeing task forces emerge in Europe that are dedicated to thwarting cybercrime. They’re taking an initiative far in advance. But from our talks with the FBI, it is still not there yet in this country.
LIPNER: Because there are usages and national purposes specific to Europe and the U.S. government, additional standards will be needed. I think they’ll have to be international.
GILLILAND: Obviously, there’s a ton of different privacy regulations that go on throughout Europe. Companies are trying to figure out how to adhere to some process or some policy framework that allows them to follow as many of the rules as they can. That’s the challenge that we haven’t spent a lot of time talking about here. How do people and companies that have been trying to comply with the privacy regulations prove that they have been doing it?
Note: This story was originally printed with the title, “Improving Online Security”.