Should Companies Be Required to Share Information About Cyberattacks?
May 22, 2016
Last year, losses or thefts of more than half a billion identities were reported, including the largest data breach in history, involving more than 191 million U.S. voter-registration records.
At the same time, more companies are choosing to keep the scope of the breaches they suffer secret.
Both trends are worrisome, and are likely the tip of the iceberg.
Underreporting of cyberattacks contributes to an incomplete understanding of the magnitude of the threat. It means we are relying on anecdotal information to determine effective defenses against cyberthreats.
Breaches that are disclosed generally involve loss of personally identifiable information or medical records, because reporting of this kind of attack is mandated by state and federal data-breach notification laws.
But the laws we have can best be described as a patchwork of requirements, inadequate to protect consumers and burdensome for companies. Indeed, requirements vary across the 47 states that have breach-notification laws, and federal rules are inconsistent across sectors and lack specificity. The Securities and Exchange Commission clarified in 2011 that “material” cyberrisks and intrusions must be disclosed to investors. But the SEC didn’t offer formal guidance on what counts as “material.” Such vagueness means that most public companies file generic statements about cyberrisk, and many still don’t disclose intrusions at all.
The Cybersecurity Act signed into law in December, as part of a broader spending bill, creates a framework for voluntary sharing of cyberthreat information. This is a significant step, but it, too, doesn't go far enough. Nothing in the law compels companies to disclose incidents or technical details about breaches. There is liability protection against suits resulting from efforts by companies to monitor their own networks and share threat information. But there is no liability protection for companies sued as a result of a breach.
Of course, even if there were liability protection in the case of a breach, companies could still suffer reputational harm by reporting it. But the benefits to society of required reporting would outweigh the costs to individual companies. Requiring companies to report not only cyber incidents but also the tactics and techniques used by hackers would create greater transparency, allowing businesses, policy makers and consumers to make more informed decisions about how to manage cyberrisk. It would enable decision makers in companies and government to assess both risk and progress.
Not all incidents should be disclosed, and not all the details should be made public. We wouldn't want to give malicious hackers a road map to conduct additional attacks. But major attacks—those that have significant consequences for the economy, public health and safety, or national security—should be disclosed to relevant government agencies and to downstream stakeholders that may be affected by the incident. Disclosure creates incentives for improvement. The alternative is to fly blind when it comes to cybersecurity—essentially the approach we take now, without much success.
After the congressional and presidential elections in the fall, Congress and the new administration should make a serious effort to overhaul state and federal data-breach-notification laws, harmonize requirements and toughen the standard for data-breach notification.
Congress should also enact new laws to encourage greater disclosure of incidents, along with relevant technical details, so that IT vendors and companies can close the gaps and vulnerabilities that attackers could use to conduct similar intrusions against others.