Discussions of digital rights management and copy protection sometimes talk about the analog hole—the lamentable fact (from the perspective of “content rights holders”) that there is always a moment where digital content escapes copy protection.

As digital data, culture can be encrypted and subject to license checks and other forms of rights protection. Once books, music, images, and movies become bits, they can be cryptographically verified and managed. Video games can check that you have access (i.e. are fully paid up) before letting you play; an eBook can check to see if it has “expired” before it displays its text. But no person reads, views, or listens to digital data directly. Some mediating software or hardware always transforms that data into legible text, audible sound, or viewable video. And in that moment of transformation, all those carefully managed ones and zeroes become perceptible, but unruly, sound or light. As those bits briefly, frailly escape the digital, they can be captured, copied, or otherwise recorded. Such copying might be crude: taping a DRM-protected music file onto a cassette as it plays; paging through the eBook and taking pictures of (or transcribing!) the text; aiming your phone at a screen to record the copy-protected video as it streams on your television. When the data is sensible, it is capturable.

That is the analog hole—the opening in the circuit of digital circulation when data escapes its binary representation and leaks into some analog format. In the circuit of cultural consumption, the necessity of the analog hole reflects the embarrassing fact that people cannot perceive the bits that make up a digital text. In some basic way, we cannot experience digital media without it being transformed into analog form by dull technologies we usually ignore—the technologies which push air and light around rather than (hushed silence and feeling of awe) data; the technologies we call peripheral. And yet this gap—this hole—in the system is also its sole justification; absent that moment of transformation, the data would be of no use, and the circuit would lose its motivation entirely.

I was reminded of this circuit, in which human perception is figured as a lamentable embarrassment (a hole to be closed—or at least narrowed as much as possible), when I read the story of Michael Smith, a North Carolina man who earned more than a million dollars with “fake” music streams (Millman). (Interestingly, Spotify claims that, despite its size as a streaming music platform, it accounted for only 0.6% of the money Smith earned; i.e., he was exploiting the other streaming services.)

The story has an “AI” angle, but I think its interest is in fact much more basic, because there is a sort of “hole” here that plays a structurally similar role, even if it is not exactly analog. Smith’s scheme worked by first creating songs that he owned (and so could collect royalties on). He then set up a network of accounts to automatically and continuously play those songs so that he could collect the royalties. At sufficient scale, it turns out, this can be profitable.

In order to work, these songs (here’s the hole) must be “played” in real time, even if only in a virtualized web browser. The song may never actually be materialized as analog sound (it almost certainly wasn’t!), but each audio frame must be dutifully unpacked, decoded, and processed, counted like beads on a faithless rosary. Only then is type incarnated as token, the spatialized block of data representing “a song” becoming the temporal fact of “a stream,” countable for purposes of monetization. (I have to assume there is some minimum song length for a stream to count, or that streams are scaled to song length in some way, to avoid incentivizing one-second songs; 30 seconds seems to be the likely minimum.)
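To make the shape of that hole concrete, here is a minimal sketch of the kind of accounting a platform might do, assuming the 30-second minimum guessed at above. The threshold, the function, and its name are illustrative only, not any platform’s documented logic.

```python
# Illustrative sketch only: how a playback session might (or might not) be
# tallied as a monetizable "stream." The 30-second threshold is the guess
# from the note above, not a documented platform rule.

MIN_STREAM_SECONDS = 30  # assumed minimum playback time for a play to count

def counts_as_stream(seconds_played: float, song_length_seconds: float) -> bool:
    """Return True if this playback session would be counted as one stream."""
    # A session can't be credited for more time than the song actually contains.
    effective = min(seconds_played, song_length_seconds)
    return effective >= MIN_STREAM_SECONDS

# The point: nothing here requires audible sound. A headless browser that
# dutifully "plays" 31 seconds of decoded audio clears the bar just as well
# as a human listener does.
print(counts_as_stream(seconds_played=31.0, song_length_seconds=135.0))  # True
print(counts_as_stream(seconds_played=12.0, song_length_seconds=135.0))  # False
```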

Smith used 1,040 accounts, each of which could stream “approximately 636 songs per day” (which sounds like an average song length of ~2.25 minutes), totaling more than a half-million streams a day. The indictment says that Smith “had 52 cloud services accounts, and each of those accounts had 20 Bot Accounts on the Streaming Platforms”; I suppose that means he had 52 instances of some kind (AWS, virtualized servers…) running, in effect, 20 browser tabs each, playing his songs day and night. Of course, if all those bots streamed a single song, his scheme would be detected, so Smith needed many songs (thousands of songs) that he could stream, and so collect a small royalty on each (shades of salami slicing). Smith ultimately began using AI to generate those songs, but AI seems incidental to the story. For one thing, many of Smith’s songs were uploaded in 2019 or earlier—too early for the current crop of AI song generation. When Smith started using AI to generate songs, he remarked on how different the new technology was: “Song quality is 10x-20x better now,” he marveled in an email (qtd. in United States District Court, Southern District of New York, sec. 24). More essentially, the scheme is enabled less by AI songs than by the fact that in the monetization process a “song” is merely a sort of empty space in a circuit of cultural consumption.
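For what it’s worth, the indictment’s numbers hang together. A quick back-of-envelope check (the royalty rate at the end is a made-up figure, purely for scale; the indictment doesn’t give one):

```python
# Back-of-envelope check of the figures reported above.
accounts = 52 * 20                 # 52 cloud accounts x 20 bot accounts = 1,040
songs_per_account_per_day = 636

streams_per_day = accounts * songs_per_account_per_day
print(streams_per_day)             # 661,440 -- "more than a half-million streams a day"

# Playing back to back around the clock, 636 plays a day implies an
# average track length of roughly two and a quarter minutes.
avg_song_minutes = 24 * 60 / songs_per_account_per_day
print(round(avg_song_minutes, 2))  # 2.26

# Hypothetical only: at a fraction of a cent per stream (an assumed rate,
# not from the indictment), the daily take is already in the thousands.
assumed_royalty_per_stream = 0.003  # dollars per stream; illustrative assumption
print(round(streams_per_day * assumed_royalty_per_stream, 2))  # 1984.32
```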

Like the “analog hole,” what Smith’s scheme exploits is the moment that escapes the otherwise closed circuit of data circulation. In a streaming scam, the music itself is utterly irrelevant but (for the would-be profiteer) frustratingly essential. Like the content protected by DRM, it is both essential (without content, no need for DRM; without music, nothing to stream) and irrelevant to the monetization scheme (so long as you’re streaming, we don’t care what!). Taste itself is certainly not outside algorithmic analysis and manipulation (Fry), but the ownership of digital data divorces the fact of ownership from the data itself, and the monetization of circulation (through both streaming and DRM) creates an antipathy between the object and its consumption. Music itself is incidental to the entire process. In an email, Smith wrote, “Keep in mind what we’re doing musically here… this is not ‘music,’ it’s ‘instant music’” (United States District Court, Southern District of New York, sec. 20). Simply posting silence or random noise would have been risky; Spotify, at least, was already cracking down on “functional noises” as a vector for fraud. But “instant music” isn’t quite right. Smith’s tracks are “music-ish”; they aren’t music, but they are shaped enough like it to bridge the small gap in the monetization circuit that lets the money flow.

Works Cited

Fry, Hannah. Hello World: Being Human in the Age of Algorithms. W. W. Norton & Company, 2019.
Millman, Ethan. “Feds Indict Musician on Landmark Massive Streaming Fraud Charges.” Rolling Stone, 4 Sept. 2024, https://www.rollingstone.com/music/music-news/feds-arrest-musician-on-massive-streaming-fraud-claims-1235095009/.
Seaver, Nick. Computing Taste: Algorithms and the Makers of Music Recommendation. University of Chicago Press, 2022.
United States District Court, Southern District of New York. U.S. v. Smith Indictment. 4 Sept. 2024, https://www.justice.gov/usao-sdny/media/1366241.