Friday, November 2, 2007

Researchers dig for hidden links in spam

InfoWorld, November 1, 2007
Website: http://www.infoworld.com

San Francisco (IDGNS) - Filtering spam messages is a thankless job for software. For every 100 spam e-mails, one message usually gets through, an irritating pitch with links to Web sites selling questionable drugs or sketchy Rolexes.

The links contained in a spam message are one indicator of whether it should be blocked. Often, after a large spam run, the addresses of the spammy Web sites are added to blocklists, which antispam software uses to cull future messages carrying those links.

To get around those blocklists, spammers construct e-mails whose links can't be identified by filters but still work when the message is displayed, said Christopher Fuhrman, a professor in the University of Quebec's Department of Software and IT Engineering.

Spammers do this by "munging" the HTML -- adding backslashes, taking out tags -- so that the message and its links are still readable by the rendering engines of browsers or e-mail clients but appear as a garble of nonsense to filters. The technique is also known as obfuscation.
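
As a rough illustration of the kind of munging involved -- the article doesn't reproduce a sample, so the URL and the specific tricks below are hypothetical -- consider a link before and after obfuscation. The strings are shown in Java, the language of Fuhrman's prototype:

    public class MungingDemo {
        public static void main(String[] args) {
            // A clean link, the form a pattern-matching filter expects:
            String clean =
                "<a href=\"http://example.com/pills\">cheap meds</a>";

            // The same link munged: backslashes in place of slashes
            // (IE historically normalized them to forward slashes) and
            // an empty HTML comment splitting the anchor text. The
            // rendering engine still shows a working link, but the raw
            // text no longer matches a blocklisted URL or the visible
            // sales pitch.
            String munged =
                "<a href=\"http:\\\\example.com\\pills\">che<!-- -->ap meds</a>";

            System.out.println(clean);
            System.out.println(munged);
        }
    }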

It's a trial-and-error process because spammers don't read HTML Web standards. "Spammers just want to get the cash," Fuhrman said.

Tamper with the HTML too much, and the message won't render at all. Too little, and filters snare the message.

So spammers aim for a narrow gap: Most browsers and e-mail clients can render a certain amount of munged HTML, although the tolerances vary depending on the application.

Fuhrman theorizes that spammers test their messages using Microsoft's widely used Outlook program, which uses the same HTML rendering engine as its IE (Internet Explorer) browser.

So Fuhrman and one of his graduate students, Hicham El Alami, are writing a program that uses IE's rendering engine to "parse" messages -- that is, to extract their links.

Services such as SpamCop already do this. SpamCop -- part of IronPort Systems, a subsidiary of Cisco -- has a Web-based service that uses algorithms to parse links out of spam messages submitted by users.

Those algorithms are hard to write, although SpamCop's works well, Fuhrman said. He and El Alami are interested in an alternative that does the same parsing without the need to constantly tweak an algorithm to keep up with spammers' new tricks.
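
To see why such hand-written algorithms need constant tweaking, here is a sketch of a naive, regex-based extractor (the pattern and the munged sample are hypothetical, not SpamCop's actual code):

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class NaiveExtractor {
        // A pattern for well-formed links: quoted href, forward
        // slashes. Every new munging trick forces another revision.
        static final Pattern HREF =
            Pattern.compile("href=\"(https?://[^\"]+)\"");

        public static void main(String[] args) {
            String munged =
                "<a href=\"http:\\\\example.com\\pills\">che<!-- -->ap meds</a>";
            Matcher m = HREF.matcher(munged);
            System.out.println(m.find() ? m.group(1) : "no link found");
            // Prints "no link found": the backslashed URL slips past
            // the pattern even though IE would render it as a link.
        }
    }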

It's hard to write a parser that reads links exactly the way IE's rendering engine does, since Microsoft's source code is proprietary, Fuhrman said. A better idea, then, is simply to use that engine itself as part of a program that parses messages; a variety of tools exist for driving IE's rendering engine through its APIs.

The links that IE's engine finds would be reported to a blocklist service. Fuhrman wrote a working prototype of the idea in Java; El Alami is now building a version for .NET, Microsoft's application development framework.
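
Fuhrman's prototype itself isn't published, but the approach can be sketched with the forgiving HTML parser that ships with the JDK standing in for IE's engine (a real tool would drive IE's MSHTML component through its APIs instead):

    import java.io.StringReader;
    import javax.swing.text.MutableAttributeSet;
    import javax.swing.text.html.HTML;
    import javax.swing.text.html.HTMLEditorKit;
    import javax.swing.text.html.parser.ParserDelegator;

    public class RenderingEngineExtractor {
        public static void main(String[] args) throws Exception {
            String munged =
                "<a href=\"http:\\\\example.com\\pills\">che<!-- -->ap meds</a>";

            // ParserDelegator tolerates malformed markup much the way
            // a browser does, so it recovers the href that the naive
            // pattern missed.
            new ParserDelegator().parse(new StringReader(munged),
                new HTMLEditorKit.ParserCallback() {
                    @Override
                    public void handleStartTag(HTML.Tag tag,
                            MutableAttributeSet attrs, int pos) {
                        Object href = attrs.getAttribute(HTML.Attribute.HREF);
                        if (tag == HTML.Tag.A && href != null) {
                            // In the finished tool, this URL would be
                            // reported to a blocklist service.
                            System.out.println("found link: " + href);
                        }
                    }
                }, true);
        }
    }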

"I want to ultimately get it as a Web-based engine so that users can paste spam, and when it comes out, it will reveal the links," Fuhrman said.
