If I told you that a bunch of hackers had found a zero-day vulnerability in Microsoft Windows 8.1, you would probably be concerned.
Especially if details of the unpatched security bug had not only been made public, but actual working exploit code had also been released on the internet for anyone else to use.
So, how would you feel if I told you that the hackers work for Google?
In its questionable wisdom, Google’s Project Zero group has gone public with details and proof-of-concept exploit code after one of its researchers found a security hole in Windows 8.1 that can allow low-privileged users to elevate themselves to administrators. Once an unauthorised user has admin privileges, of course, they can cause all kinds of trouble.
Was Google right to release the proof of concept code? I don’t think so.
In July last year, when Google’s Project Zero was announced, I praised the company for what appeared to be a responsible stance regarding vulnerability disclosure:
I am encouraged by Google’s approach to disclosing the vulnerabilities. It says that it will responsibly report security bugs to the software vendor, not to third parties, and – once a patch is available – will provide a way for internet users to monitor how long it took a particular vendor to fix an issue, and other information.
To my mind that’s a better approach than that taken by some security researchers (including some, sadly, who work for Google) who have in the past publicized security holes before a patch which would protect users is available, giving malicious hackers an opportunity to exploit the vulnerability and cause damage.
If Google had waited until Microsoft released a patch for the problem, it could be argued that publishing details of the vulnerability was acceptable. But Google knew that no patch was yet available (heck, you can’t blame Microsoft for not rushing, given the buggy security patches that have come out of Redmond recently), and yet it felt it was reasonable to release proof-of-concept code which malicious hackers could use as a basis for their own attacks.
Fortunately, Microsoft has pointed out that the security flaw uncovered by Google’s researchers isn’t of the highest severity. To exploit the bug, an attacker would “need to have valid logon credentials and be able to log on locally to a targeted machine.”
But it’s still easy to imagine a disaffected employee using the bug to cause mayhem if they so wished.
In its defence, Google says that it reported the bug to Microsoft on September 30th, and that its 90-day disclosure deadline has now passed.
To which I say, so what? That’s no reason to publish exploit code for any Tom, Dick or Harry to pick up and run with.
If you want to apply pressure on a software vendor who you believe is taking too long to fix a security flaw, don’t publish a blueprint for exploiting the vulnerability on the internet. Instead, go to any technical journalist who works the security beat. They’ll be happy to have the flaw demonstrated to them, and can then responsibly report that the bug still hasn’t been fixed.
There’s a right way and a wrong way to raise awareness of zero-day security holes that haven’t been patched yet. Google – the company which famously has the policy of “Don’t be evil” – is going about it the wrong way, and potentially putting many of us at risk.