Jennifer Urban and I just published a preview of our work on notice and takedown in the Communications of the ACM (currently paywalled but accessible through most universities). Here’s the gist of it:
As automated systems became common, the number of takedown requests increased dramatically. For some online services, the number of complaints went from dozens or hundreds per year to hundreds of thousands or millions. In 2009, Google's search service received fewer than 100 takedown requests. In 2014, it received 345 million requests. Although Google is the extreme outlier, other services—especially those in the copyright "hot zones" around search, storage, and social media—saw order-of-magnitude increases. Many others—through luck, obscurity, or low exposure to copyright conflicts—remained within the "DMCA Classic" world of low-volume notice and takedown.
This split in the application of the law undermined the rough industry consensus about what services had to do to keep their safe harbor protection. As automated notices overwhelmed small legal teams, targeted services lost the ability to fully vet the complaints they received. Because ignoring valid complaints exposed companies to high statutory penalties, the safest path afforded by the DMCA was to remove all targeted material. Some companies did so. Others responded by developing automated triage procedures that prioritized high-risk notices for human review (most commonly, those sent by individuals).
Still others began to move beyond the statutory requirements in an effort to reach agreement with rights holder groups and, in some cases, to reassert some control over the copyright disputes on their services.