Bringing Trust Back into the Vulnerability Disclosure Process

Too many defensive security operations lack an offensive counterpoint: actively searching for the insecure defaults and vulnerabilities that an attacker could chain together to reach a target. Vulnerability disclosure policies and programs fill this gap.

By Naveen Sunkavally

The process of disclosing security vulnerabilities is an unpredictable quagmire.

It is a process that, by necessity, starts off on a confrontational note. The first message sent from a security researcher to an organization is almost always bad news of some sort, and the organization now has a problem to understand and respond to.

From that first report onwards, responses can vary widely. There’s no real script, and sometimes the outcome is negative. The recent back-and-forth between the Zero Day Initiative and Exim is a good example, as is this recent post from a researcher. In extreme cases, researchers get threatened with legal action.

Questionable Outcomes
Some of these cases are tracked on sites like threats.disclose.io. As a security researcher, I’ve personally never been on the receiving end of legal threats, but I have had questionable outcomes when working with organizations: issues fixed silently without acknowledgement, vulnerabilities rejected because they were in a third-party component, patches shipped but disclosure delayed significantly, and so on.

I have been on the other side too. As part of the app sec team at Horizon3, I see the reports that come in from other researchers, and a lot of them fall into the “beg bounty” category – closer to scam phone calls than real bug reports. I can also attest that the “bogus CVE” problem, where some researchers file no-impact CVEs to pad their resumes, is real. The prevalence of these false, no-impact disclosures makes it difficult for end-users to discern which issues are important to address in their environments.
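One practical way for defenders to cut through that noise is to cross-reference reported CVEs against a curated exploitation feed such as CISA’s Known Exploited Vulnerabilities (KEV) catalog. The sketch below is a minimal illustration of that idea in Python; the candidate CVE list is a placeholder, and the feed URL is CISA’s published JSON endpoint.

    # Minimal sketch: flag which CVEs from a report or scan appear in CISA's
    # Known Exploited Vulnerabilities (KEV) catalog.
    import json
    import urllib.request

    KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
               "known_exploited_vulnerabilities.json")

    def load_kev_ids() -> set:
        # The catalog is a JSON document with a "vulnerabilities" array,
        # each entry carrying a "cveID" field.
        with urllib.request.urlopen(KEV_URL, timeout=30) as resp:
            catalog = json.load(resp)
        return {item["cveID"] for item in catalog.get("vulnerabilities", [])}

    if __name__ == "__main__":
        candidate_cves = ["CVE-2023-27350", "CVE-2023-39143"]  # placeholder input
        kev_ids = load_kev_ids()
        for cve in candidate_cves:
            verdict = "known exploited, prioritize" if cve in kev_ids else "not in KEV"
            print(f"{cve}: {verdict}")

A feed like this is not a complete answer, but it helps separate issues with demonstrated real-world impact from resume-padding noise.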

Despite all this, from my experience, most of the time the vulnerability disclosure process works well. Most of the time security researchers and organizations alike work together in good faith to “do the right thing.” These positive cases just tend to be overshadowed by the extremes and don’t get talked about much. Here is one such story of a recent and positive vulnerability disclosure process that I went through.

Reporting a Vulnerability to PaperCut
Earlier this year, I decided to research PaperCut’s Print Management software. This was on the heels of CVE-2023-27350, an unauthenticated remote code execution vulnerability affecting PaperCut NG/MF that was observed being widely exploited in the wild. At Horizon3 we tend to prioritize what we research based on what our clients are running. We found that PaperCut NG/MF is a widely deployed product, especially among our SLED (state, local, and education) client base.

On May 30, I reported a few issues to PaperCut Software using the contact in their security.txt file. In my report, I detailed a path I discovered where an unauthenticated attacker could download or delete arbitrary files on the PaperCut NG/MF server. I thought there was a path to remote code execution but hadn’t quite pieced it together yet. I got a response within 24 hours confirming receipt of my email and that PaperCut had begun investigating.
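For readers unfamiliar with security.txt: it is a small, standardized file (RFC 9116) served from a site’s /.well-known/ path that tells researchers where to send reports. The sketch below, using a placeholder hostname, shows how a researcher might pull the disclosure contact out of it.

    # Minimal sketch: locate a vendor's disclosure contact via security.txt (RFC 9116).
    import urllib.request

    def fetch_security_txt(host: str) -> str:
        # RFC 9116 specifies the /.well-known/security.txt location
        url = f"https://{host}/.well-known/security.txt"
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")

    if __name__ == "__main__":
        for line in fetch_security_txt("example.com").splitlines():
            # Contact:, Policy: and Encryption: are the fields most useful to a reporter
            if line.split(":", 1)[0].strip() in ("Contact", "Policy", "Encryption"):
                print(line.strip())

Publishing a security.txt file is a small step, but it removes the first point of friction in any disclosure: figuring out who to tell.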

The following weekend, I figured out a path that could lead to remote code execution in certain common default configurations of PaperCut. On June 5, I sent a detailed follow-up email, which PaperCut promptly acknowledged. Shortly after, PaperCut confirmed the findings in my report, and asked for a call. To my surprise, I also got a note from the PaperCut CEO Chris Dance, who wanted to join the call along with Head of Engineering Anthony Radisich.

I was a bit taken aback and didn’t know what to expect. I accepted the call and recruited the co-founder of Horizon3, Anthony P., to join and provide backup, just in case. In the call, it was clear that both Chris and Anthony deeply understood the issues I had reported at a technical level, and we brainstormed a few viable solutions.

Over the next few weeks, we continued the conversation by phone and email. For us, it was important to give our own clients a heads up about the vulnerability as part of our Rapid Response program. We worked with PaperCut to craft an email to our clients and sent it out.

PaperCut sent me a development build with some of the fixes. I was pleased to see that PaperCut chose to completely fix all the other issues in the chain rather than just the entry point. I provided a few more suggestions, alongside code review feedback from Chris himself, which PaperCut promptly implemented.

On July 25, PaperCut released PaperCut NG/MF 22.1.3, with the full fix. Our advisory for the vulnerability, CVE-2023-39143, went out on Aug. 4. We wanted to provide enough details in the advisory so that users could detect if they were vulnerable, but not so much that a script kiddie could use it to attempt mass exploitation.
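For administrators, the simplest check is whether the installed PaperCut NG/MF version meets the 22.1.3 threshold (consult the vendor advisory for the full list of fixed releases). The sketch below is a generic version comparison, assuming the version string has been read from the admin interface; it does not probe the vulnerability itself, and the sample versions are placeholders.

    # Minimal sketch: compare a PaperCut NG/MF version string against 22.1.3.
    def parse_version(version: str) -> tuple:
        # "22.1.3" -> (22, 1, 3); non-numeric characters are ignored
        parts = []
        for piece in version.split("."):
            digits = "".join(ch for ch in piece if ch.isdigit())
            parts.append(int(digits) if digits else 0)
        return tuple(parts)

    def is_patched(installed: str, fixed: str = "22.1.3") -> bool:
        return parse_version(installed) >= parse_version(fixed)

    if __name__ == "__main__":
        for v in ["22.0.10", "22.1.2", "22.1.3"]:  # placeholder versions
            print(v, "patched" if is_patched(v) else "update required")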

Takeaways
The vulnerability disclosure process with PaperCut is an example of a best-case scenario. PaperCut was prompt in its responses, fixed the issues I reported completely, and was transparent and cordial throughout the process.

I believe this process was successful for two reasons. First, both parties acted in good faith to prioritize the interests of end-users of PaperCut software above anything else. It is in our interest at Horizon3 to protect our own clients and others who use PaperCut, and it’s in the interest of PaperCut to do the right thing for its own end-users.

Second, I believe the process went well because of the high-bandwidth personal communication we were able to establish from our first video call. Though I hesitated at first, I believe the first video call was an important element in establishing trust, and this trust formed the backbone for future communications.

Vulnerability disclosure in general seems to suffer from an incentive problem. While most researchers and organizations are well meaning, there is nothing inherently driving them to “do the right thing” other than the threat of “naming and shaming.”

Researchers deserve to be compensated for the demanding work they put in, but when a bounty or resume improvement becomes the focus rather than protecting end-users, it can lead to negative outcomes. Similarly, organizations themselves must resist the temptation to get defensive, withhold information, and downplay vulnerabilities rather than be transparent and straightforward.

As a developer myself, I know how hard it is to write secure code, and all complex software will inevitably have vulnerabilities.

I also believe that bad stories of vulnerability disclosures gone wrong have warped the ways researchers and organizations communicate. Both sides have “hunkered down” and established their own complex protocols and policies, such as having everything in writing only. On top of that, the widespread usage of intermediaries to facilitate communication between researchers and organizations has made the process impersonal and distant.

No Silver Bullet
I don’t have a silver bullet to repair the current system, but it would be interesting to figure out how to reform the vulnerability disclosure process so that it fosters more trust and inherently incentivizes researchers and organizations to do the right thing.

Chris Dance, CEO of PaperCut, said: “In the eyes of many software vendors, the security research industry is often seen as something that’s risen from the hacker culture, has a red team mindset, and let’s be honest, has an ‘adversarial brand’. This needs to change. Software security is a global concern. We have a common goal. If we automatically adopt a collaborative approach, keep things friendly, it not only helps us to collectively work together to improve security faster, it’s also much more fun.

“At PaperCut, we’ve interacted with dozens of security researchers over the last decade. The Horizon3 team stands out as unique for their collaborative approach. We’re grateful for their partnership and we’re excited to see what they do next.

“To all security researchers out there, we encourage you to reach out to us if you find any vulnerabilities in our products. We’ll take your report seriously and work with you to fix the vulnerability as quickly as possible. We’re all in this together.”

Naveen Sunkavally is chief architect at Horizon3.ai. He previously held a variety of roles at RSA, and most recently was senior architect/staff engineer with the RSA Labs team. He was previously a principal software engineer with Symantec.
