Twitter's Frankensteins: The spambots that look like people

I grew up in Tuolumne County in California’s Sierra Nevada foothill region. My old stomping grounds have been in the news lately, mostly because they’ve been on fire. As the Rim fire has grown to 400 square miles in size, making it the third largest fire in state history, I’ve been following along with reports on Twitter using the hashtag #rimfire.

Along the way, I accidentally discovered a creepy collection of fake Twitter accounts. These aren’t the Twitter accounts that you usually think of as belonging to spammers: pictures of attractive women (or generic Twitter egg icons) spewing out the same link to hundreds or thousands of people. Instead, they’re accounts that, on the surface, look like ones operated by real people, except that they keep tweeting the same out-of-date tweet over and over again, like a creepy Doctor Who monster.

School’s in session, so let the bots run free

During the height of the Rim fire, numerous Tuolumne County schools closed due to smoke from the fire or the threat of evacuation. But they’ve all been open since early September. At some point in mid- to late August, with smoke choking the county, someone helpfully tweeted: “#TuolumneCounty Summerville High & Elementary are closed as well as Soulsbyville Elementary due to the air quality of the #rimfire.”

And that’s when it happened. This tweet was captured (presumably because of its use of the trending hashtag #rimfire) and became fodder for automated reposting by endless fake Twitter accounts.

These tweets are clearly not the product of people, but computers. Just look at the ampersand. The original tweeter used an ampersand, but over time the bots tried to map it into HTML code, using the special-character entity &amp;, and then the ampersand in that entity was re-encoded into &amp;amp;. That’s a funny thing computers do—and people don’t.
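
Here’s a minimal sketch of that double-encoding using Python’s standard html module; it’s just an illustration of the effect, not anything the bots are known to run:

    import html

    original = "Summerville High & Elementary are closed"

    once = html.escape(original)   # "&" becomes "&amp;"
    twice = html.escape(once)      # the "&" in "&amp;" becomes "&amp;amp;"

    print(once)   # Summerville High &amp; Elementary are closed
    print(twice)  # Summerville High &amp;amp; Elementary are closed

Escape the text once and the ampersand becomes &amp;; escape the already-escaped text again and you get &amp;amp;, exactly the kind of fingerprint a human typist never leaves.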

When I first noticed the series of identical tweets, I was confused. The accounts sending this tweet over and over again seemed real enough. I had to drill into the accounts themselves, and look at the biographical information and tweet timeline, to notice that they’re utterly nonsensical. Every tweet in these accounts seems to have been real, once, made by someone else in a context where it made sense, but it was later processed and reposted in a melange of Twitter litter.

Here’s a user claiming to be a fellow named Kent Redding, and using the Twitter ID @HomesinAustinv:

The user’s bio says he’s a real-estate agent from Austin, Texas, but the location of his Twitter account is “Alaska.” His tweets include not just the one about school closures in California, but others about graphic design and financial aid—and then there are several tweets in Arabic.

If these tweets are all from one person, I really need to meet this guy. @HomesinAustinv probably has some amazing stories to tell.

But of course, that account isn’t run by a real person. The real Kent Redding is at the Twitter ID @HomesinAustin. The v was appended by a bot that appropriated the real Redding’s account information in order to add verisimilitude:

The real Kent’s tweets are about, as you might expect, Austin real estate. Clearly his Twitter bio and picture (but not location) were ripped off by the @HomesinAustinv bot.

This particular bot has tweeted 14 times, has 15 followers, and follows 7 other users. This is how spammers (or services that goose your Twitter follower count) sneak past Twitter’s anti-bot checks. They create accounts that seem real, cross-follow other real-seeming accounts, and then sit there until the time is right. To do...something.

When I looked at the accounts that @HomesinAustinv follows (and the accounts that follow it), I descended further into the morass of fakeness. There’s a retired white postal service worker from Oklahoma City who tweets about singing in a celebration of a Muslim holiday, plays the board game based on Joss Whedon’s TV show Firefly, and—most damningly—thanks his mom for cleaning his room and doing his laundry. (In this case, the fake account is @JoeScholesKOC while the ripped-off real account is @JoeScholesOKC.) There’s a tire-store manager from Washington whose Twitter location is inexplicably listed as Colorado and who is really excited about the signings English Premier League soccer club Arsenal will be making during the transfer window.

It goes on. A wall of fake people, Frankenstein creations assembled from avatars, names, bios, and tweets. All of it gets thrown into a blender (perhaps with a hank of hair and a jolt of electricity) and something vaguely human comes out the other end.

Who does it hurt?

Does it matter? Do these creations really hurt anyone? I’d say yes, for a few reasons. In the first place, I assume the real Kent Redding wouldn’t be happy that someone is using his name, bio, and avatar to do...something. The person who posted a helpful tweet about school closures is probably not happy that those words were hijacked.

And if I’m someone who relies on Twitter for news and information, I’m going to be steered wrong. Spam latches on to useful hashtags, like #rimfire, and even a moment of carelessness can send you in the wrong direction. A Sacramento Bee outdoor writer noticed one of the fake school-closure posts in the #rimfire hashtag and retweeted it to his followers, not noticing that it came from an account claiming to belong to a London-based all-natural soap company.

And of course, the school-closure information was just wrong.

Can anything be done about this? I’d hope that Twitter would be scanning its service for hundreds of identically worded tweets, user accounts that are almost identical to other user accounts, and accounts with tweets that match other tweets. The spammer arms race will always continue, but it seems to me that Twitter needs to do a better job of fighting it.
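
To make the first of those ideas concrete, here’s a rough sketch in Python of what scanning for identically worded tweets might look like. The (username, text) input format and the threshold of 50 accounts are my own assumptions for illustration, and say nothing about how Twitter’s actual systems work:

    from collections import defaultdict

    def group_identical_tweets(tweets):
        """Group (username, text) pairs by normalized text.

        Returns any wording that has been posted verbatim by an
        implausible number of distinct accounts.
        """
        groups = defaultdict(set)
        for user, text in tweets:
            # Collapse whitespace and ignore case so trivial edits don't hide a match.
            normalized = " ".join(text.lower().split())
            groups[normalized].add(user)
        return {text: users for text, users in groups.items() if len(users) >= 50}

Catching near-identical handles like @HomesinAustin and @HomesinAustinv would be a similarly simple string-comparison check.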

Except for horse_ebooks. That’s the one piece of Twitter spam I’ll gladly accept. If only these school-closure spammers were as entertaining as that account.
