Several major brands have suspended their marketing campaigns on Twitter after discovering that their ads were appearing alongside accounts trading in child pornography.
Affected brands. Ads for more than 30 brands reportedly appeared on the profile pages of Twitter accounts that peddle links to exploitative material. Among those brands are a children’s hospital and PBS Kids. Other verified brands include:
- Dyson
- Mazda
- Forbes
- Walt Disney
- NBCUniversal
- Coca-Cola
- Cole Haan
What happened. Twitter has not said what caused the problem. But according to a Reuters investigation, some tweets containing keywords related to “rape” and “teens” appeared alongside promoted tweets from corporate advertisers. In one instance, a promoted tweet from shoe and accessory brand Cole Haan appeared next to a tweet in which a user said they were trading “teen/child” content.
In another example, a user tweeted that they were searching for content of “Yung girls ONLY, NO Boys,” which was immediately followed by a promoted tweet from Texas-based Scottish Rite Children’s Hospital.
Brand reaction. “We’re horrified. Either Twitter is going to fix this, or we’ll fix it by any means we can, which includes not buying Twitter ads,” David Maddocks, Cole Haan’s brand president, told Reuters.
A Forbes spokesperson said, “Twitter needs to fix this problem ASAP, and until they do, we will stop any further paid activity on Twitter.”
“There is no place online for this type of content,” a spokeswoman for automaker Mazda USA said in a statement to Reuters, adding that the company now prohibits its ads from appearing on Twitter profile pages.
A Disney spokesperson called the content “reprehensible” and said the company is pressing the digital platforms it advertises on, and the media buyers it works with, to do everything they can to prevent such errors from recurring, adding that Disney is stepping up its own efforts as well.
Twitter’s reaction. Twitter spokesperson Celeste Carswell said in a statement that the company “has zero tolerance for the sexual exploitation of children” and is investing more resources in child safety, including hiring for new positions to write policies and implement solutions.
An ongoing problem. A cybersecurity group called Ghost Data identified more than 500 accounts that openly shared or solicited child sexual abuse material over a 20-day period. Twitter failed to remove 70% of them, and only suspended more after Reuters shared a sample of the explicit accounts with the company.
More than one million accounts were suspended last year for child sexual exploitation, according to a transparency report on Twitter’s website.
What Twitter does and doesn’t do. A team of Twitter employees concluded in a report last year that the company needs more time to identify and remove child exploitation content at scale. The report noted that the company has a backlog of cases to consider for possible reporting to law enforcement.
Traffickers often use codewords such as “cp” for child pornography, “intentionally being as obscure as possible” to avoid detection. The more Twitter cracks down on a particular keyword, the more it pushes users toward obfuscated text, which “tends to be harder for Twitter to automate,” the report says.
Ghost Data said such tricks complicate efforts to locate the material, but noted that its small team of five researchers, working without access to Twitter’s internal resources, was still able to find hundreds of accounts within 20 days.
It’s not just a Twitter problem. Child safety advocates say predators use Facebook and Instagram to groom victims and exchange explicit images. Predators then direct victims to Telegram or Discord to complete payment and receive the material, and the files are typically stored in cloud services such as Dropbox.
Why we care. Child pornography and explicit accounts on social media are everyone’s problem. Violators are constantly trying to game the algorithms with codewords and slang, so you can never be 100% sure your ads aren’t showing up where they shouldn’t. If you’re advertising on Twitter, review your placements as thoroughly as possible.
But Twitter seems slow to react. If a watchdog group like Ghost Data can find these accounts without access to Twitter’s internal data, it seems reasonable to assume that Twitter can do the same. Why isn’t Twitter deleting all of these accounts? What additional data does it need to justify a suspension?
It’s like whack-a-mole: every time an account is deleted, a few more pop up, and a suspended user can simply create a new account and mask their IP address. Is this an automation issue? A problem getting local law enforcement to respond? Twitter spokesperson Carswell said the information in a recent report “…is not an accurate reflection of the current situation.” That is probably an accurate statement, since the problem seems to be getting worse.