Defending Against Future Spambots and the Erosion of Privacy

When the internet was first becoming widely adopted, spam seemed like it might become a big problem. Luckily, spam filters got a lot better, and today very little of it reaches me.

But for a spam or bot filter to work, it has to be able to reliably tell the difference between a human and a non-human. And since we can expect bots to get progressively more human-like over time, I wonder whether certain filters are going to become overtaxed.

A recent article in Wired points out that we may finally be approaching a time when a chatterbot could pass a Turing Test. The article argues that we are going to have such a vast amount of data to draw upon that bots might be able to answer previously unanswerable questions.

“Suppose, for a moment, that all the words you have ever spoken, heard, written, or read, as well as all the visual scenes and all the sounds you have ever experienced, were recorded and accessible, along with similar data for hundreds of thousands, even millions, of other people. Ultimately, tactile and olfactory sensors could also be added to complete this record of sensory experience over time,” wrote cognitive scientist Robert French in Science, with a nod to MIT researcher Deb Roy’s recordings of 200,000 hours of his infant son’s waking development.

He continued, “Assume also that the software exists to catalog, analyze, correlate, and cross-link everything in this sea of data. These data and the capacity to analyze them appropriately could allow a machine to answer heretofore computer-unanswerable questions” and even pass a Turing test.

Keep in mind a spam bot does not need to pass a Turing Test wholesale to become a nuisance. It just has to be good enough to slip through filters.

The obvious solution is just to employ more advanced filters. However, filters can become a nuisance themselves. I still have important messages routed to my spam folder with some frequency. And sometimes it takes me an embarrassing three tries to pass a CAPTCHA.

More importantly, at a certain point there is a limit to what filters can do to weed out bots based on behavior alone. A functional equivalence between humans and bots means there are no salient differences for the filter to identify.
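To make that concrete, here is a minimal sketch in Python of the kind of behavioral scoring a filter might do. Every signal and threshold here is invented for illustration, not drawn from any real filter, but the underlying problem is general: each check is a behavioral difference, and a bot that matches human behavior on every signal leaves the filter nothing to measure.

```python
# Toy behavioral bot score: a hypothetical illustration, not a real filter.
# Each signal compares an observed behavior to what we expect of humans.

from dataclasses import dataclass

@dataclass
class Session:
    avg_keystroke_interval_ms: float  # humans vary; naive bots type uniformly fast
    links_per_message: float          # spam tends to be link-heavy
    seconds_to_first_post: float      # bots often post the instant a page loads

def bot_score(s: Session) -> float:
    """Return a score in [0, 1]; higher means more bot-like."""
    score = 0.0
    if s.avg_keystroke_interval_ms < 30:   # superhuman typing speed
        score += 0.4
    if s.links_per_message > 2:            # unusually link-dense
        score += 0.3
    if s.seconds_to_first_post < 2:        # posted faster than a human could read
        score += 0.3
    return score

# The catch: a bot that types at human speed, limits its links, and waits
# a few seconds before posting scores exactly like a person.
```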

Again, there is a clear solution: applications will increasingly need to rely on proprietary or public “lists of trusted humans.” This is nothing new. On Facebook, for example, everyone has been vouched for as “real” by the system. But there are plenty of other, more anonymous places on the web where such verification systems are not in place, and that is a large part of their charm. I suspect a trend toward bot-human equivalence will further endanger such havens of anonymity. Better bots are likely to be one more reason why privacy as we know it will have to vanish.
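As a rough sketch of what such a gate looks like (all names here are made up for illustration), the application stops asking “does this account behave like a human?” and starts asking “is this account on a list of verified humans?”:

```python
# Hypothetical "trusted humans" gate: identity replaces behavior.
# VERIFIED_HUMANS stands in for any registry where a real-world identity
# has vouched for the account; an invented example, not a real API.

VERIFIED_HUMANS = {"alice", "bob"}  # populated by some out-of-band identity check

def may_post(username: str) -> bool:
    # No behavioral analysis at all: membership on the list is the only
    # signal, which is exactly why such systems erode anonymity.
    return username in VERIFIED_HUMANS

may_post("alice")      # True: vouched for
may_post("throwaway")  # False: anonymous accounts are locked out
```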
