This week we released another report on the socially-engineered malware protection delivered by browsers. While most articles and blog posts interpreted the results properly, there were some inaccuracies that we wish to address. Security is a complex field, and its terminology can sometimes be misinterpreted. This is compounded when vendors who did poorly add their spin, or when the data challenges one’s own beliefs.
Stopping Malware vs. Security:
Some key security terms were clarified in a previous post, especially as they relate to the “most secure” browser. Some articles incorrectly stated that we found IE to be the “most secure” browser. What we tested was browser effectiveness at stopping malware from reaching users or their PCs, not the security of the browser itself or of its plug-ins. Modern browsers are a wonderful, free, additional layer of protection, and they work well alongside your favorite antivirus software. Browsers will not, however, stop malware arriving via email or USB drives.
The Malware Threat:
Malware is arguably the largest security threat facing users today, with more than 60,000 unique new samples entering circulation each day (source: McAfee), and a $14B industry has grown up to address the problem. These test results challenge the comfortable status quo of many vendors: the notion that a free product adds so much protection can easily upset the industry apple cart. The assertion, made by Google and some others, that the focus of the test was narrow flies in the face of generally accepted data. To say one’s browser was “built with security in mind” is nice, but it is marketing speak; what we have offered is hard data about malware protection. The exploitability of the browser is also a very important topic, but even there the data does not support Chrome being ‘more secure’ (i.e., less vulnerable) than other browsers (see CVEs, Secunia disclosures, etc.). We sincerely applaud the innovations and bug bounties, though, and encourage all vendors to build in more security.
Versions Tested:
The claim that we tested an “old” version of Chrome is patently false. As stated in the report, the test was run in late September, when version 6 was the current release. Since then Google has shipped two more so-called “major” releases, neither of which has claimed improvements to the tested SafeBrowsing functionality. Here is the timeline of Chrome version releases:
Stable Version    Release Date
5.0.375           2010-05-25
6.0.472           2010-09-02
7.0.517           2010-10-21
8.0.552           2010-12-02
Furthermore, Chrome’s sandboxing is designed for exploit protection; it does not protect against socially-engineered malware. You click it, it runs.
At the time of the test, Opera’s website marketed protection from malware as a feature, yet our results showed no protection was available. AVG officials have separately acknowledged that the integration of its technology was not yet complete, confirming our results. Features should only be marketed after they are actually in the product.
Application Reputation:
In a world of dizzying information, there is a rush to a sound bite about who “won”, and the most significant technological message of this test may have been overlooked: it benchmarked the world’s first implementation, within free web browsers, of an application reputation system that goes beyond simple blacklists. Nascent stand-alone security products such as SolidCore (acquired by McAfee), Bit9, and CoreTrace utilize what is commonly referred to as whitelisting, and commercial endpoint security products are starting to include some form of it as well. Apple uses a “walled garden” approach, pre-approving apps to limit exposure to malware on its tightly controlled platforms.
In web browsers, so far we have seen only blacklisting: a URL or application is either known to be bad, or unknown. What is unique about Microsoft’s approach in the IE9 browser is that applications have three states: known good (white), known bad (black), and unknown (grey). The combination of good and bad indicators is clearly powerful, stopping 99% of malware arriving via the web download vector. For now, the use of application reputation to identify both good and bad applications is unique to IE. Will other vendors follow Microsoft’s lead?
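To make the three-state model concrete, here is a minimal sketch in Python. Every name and value in it (Reputation, classify, handle_download, the sample hashes) is a hypothetical illustration of the general idea, not Microsoft’s actual SmartScreen implementation or API.

# A minimal, hypothetical sketch of a tri-state application-reputation
# check. Names and data below are illustrative only; they are not the
# real SmartScreen or SafeBrowsing implementation.

from enum import Enum

class Reputation(Enum):
    GOOD = "known good (white)"
    BAD = "known bad (black)"
    UNKNOWN = "unknown (grey)"

# Stand-ins for vendor-maintained reputation data, keyed by file hash.
KNOWN_GOOD = {"0f3a9c"}  # e.g. signed, widely downloaded applications
KNOWN_BAD = {"7d41be"}   # e.g. confirmed malware samples

def classify(file_hash: str) -> Reputation:
    """Look up a downloaded executable's reputation by its hash."""
    if file_hash in KNOWN_BAD:
        return Reputation.BAD
    if file_hash in KNOWN_GOOD:
        return Reputation.GOOD
    return Reputation.UNKNOWN

def handle_download(file_hash: str) -> str:
    """Map each reputation state to a browser action."""
    rep = classify(file_hash)
    if rep is Reputation.BAD:
        return "block"  # a pure blacklist stops here...
    if rep is Reputation.GOOD:
        return "allow"  # ...a whitelist lets known-good files run freely...
    return "warn"       # ...and the grey state triggers an extra warning

print(handle_download("7d41be"))  # -> "block" (blacklisted sample)
print(handle_download("c0ffee"))  # -> "warn"  (grey: unknown file)

The difference from a pure blacklist is the third branch: an unknown (“grey”) file draws an extra warning instead of running silently, which is what lets the combined approach flag newly minted socially-engineered malware that no blacklist has caught up with yet.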
Methodology and Open Invitations:
No vendor has influence over what or how we test, or over where we obtain malware. We constantly run our Live Test network with a variety of security products: antivirus, browsers, and other network devices.
Over the past two years of running these tests, NSS has discussed results and methodology (which is included in the report) with all of the browser vendors, even providing them with sample URLs for validation in past tests. Some have privately acknowledged issues.
Our past invitations to become more involved in the ongoing testing and review of results still stand.