In late November, Twitter’s Elon Musk declared that “removing child exploitation is priority #1.” Since then, the platform’s record has included a video of a boy being sexually assaulted that drew 120,000 views, a recommendation engine suggesting that a user follow content related to exploited children, and users posting abusive material, along with delays in taking it down once detected and friction with the organizations that police it.

Twitter’s head of safety, Ella Irwin, said she had been moving rapidly to combat child sexual abuse material, which was prevalent on the site under its previous ownership, as it is on most tech platforms. “Twitter 2.0” would be different, the company promised.

Yet a review by The New York Times found that the imagery, commonly known as child pornography, persisted on the platform, including widely circulated material that the authorities consider the easiest to detect and eliminate.
After Musk took over in late October, Twitter eliminated or lost staff experienced with the problem and failed to prevent the spread of abusive images previously identified by the authorities. Twitter also stopped paying for some detection software considered key to its efforts. People on dark-web forums discuss how Twitter remains a platform where they can easily find the material while avoiding detection.

“If you let sewer rats in,” said Julie Inman Grant, Australia’s online safety commissioner, “you know that pestilence is going to come.”

Irwin and others at Twitter said their efforts under Musk were paying off. During the first full month of the new ownership, the company suspended nearly 300,000 accounts for violating “child sexual exploitation” policies, 57 percent more than usual. The effort accelerated in January, Twitter said, when it suspended 404,000 accounts.

“Our recent approach is more aggressive,” the company declared last week, saying it had also cracked down on people who search for the exploitative material and had reduced successful searches by 99 percent since December.