Sony Music has asked streaming platforms to take down more than 135,000 songs it says were created by fraudsters using generative AI to impersonate artists on its roster.
That was first reported by the BBC, which cited Dennis Kooker, President of Global Digital Business & US Sales at Sony Music Entertainment. The targeted artists reportedly include Beyoncé, Queen and Harry Styles, with Bad Bunny, Miley Cyrus and Mark Ronson also likely affected.
Sony’s disclosure came Wednesday (March 18) at the launch of the music industry’s Global Music Report by the International Federation of the Phonographic Industry (IFPI).
Kooker said, according to the BBC, that the “deepfakes” cause “direct commercial harm to legitimate recording artists… In the worst cases, [the deepfakes] potentially damage a release campaign or tarnish the reputation of an artist.
“The problem with deepfakes are they are a demand-driven event. They are taking advantage of the fact an artist is out there promoting their music.”
Sony says the 135,000 tracks it has identified so far are likely only a fraction of what has actually been uploaded. Since March 2025 alone, the company has flagged roughly 60,000 songs falsely attributed to artists on its roster, according to the report. In a submission to the government’s consultation on AI and copyright law, obtained by the Financial Times and The Sunday Times at the time, Sony flagged more than 75,000 AI-generated deepfakes.
“The problem with deepfakes are they are a demand-driven event. They are taking advantage of the fact an artist is out there promoting their music.”
Dennis Kooker, Sony Music Entertainment
The volume is rising as AI tools become cheaper and more accessible, the BBC said.
The IFPI reported that recorded music revenues grew 6.4% YoY in 2025 to $31.7 billion — an improvement on the 4.7% rate of growth posted in 2024 — marking the global industry’s eleventh consecutive year of growth.
The report highlighted two key themes shaping the industry’s next chapter: AI innovation and streaming fraud.
IFPI CEO Victoria Oakley is quoted by the BBC as saying: “I think we’ve seen a lot of governments really grappling with this issue because they are trying to square a circle: They are trying to protect creativity and at the same time encourage innovation.
“I’m very optimistic that… in the UK, they [have] decided to pause and think again.”
Streaming fraud, also called streaming manipulation, involves uploading songs under “fake” artists to platforms like Spotify, YouTube, Instagram and Apple Music, then artificially inflating play counts to collect royalty payments. The IFPI says AI has “supercharged” the practice, the BBC reported.
Unofficially, the industry believes up to 10% of content across all streaming platforms is fraudulent. Oakley says she wants streaming platforms to deploy tools that can detect AI-generated or fake music at the point of upload. “I hate to say it, but it’s very simple to fix,” she said.
“I think we’ve seen a lot of governments really grappling with this issue because they are trying to square a circle: They are trying to protect creativity and at the same time encourage innovation.”
Victoria Oakley, IFPI
She added: “The challenge of identifying and labelling AI material is absolutely the next critical challenge.”
Sony’s Kooker noted that French streaming company Deezer already has software doing this. The executive said Deezer now categorizes 34% of songs submitted to its platform as AI-generated.
“Transparency shouldn’t be optional, it’s the foundation of a fair and sustainable music ecosystem.”
Dennis Kooker, Sony Music Entertainment
“Is it perfect? I’m sure it’s not, but it’s open and it’s transparent, and it allows people to understand what is happening,” Kooker was quoted by the BBC as saying.
“Without proper identification, fans can’t distinguish between genuine human creativity versus unauthorized, AI‑generated content, which risks creating confusion, undermining trust, and impacting user experiences.”
“Transparency shouldn’t be optional, it’s the foundation of a fair and sustainable music ecosystem,” Kooker added.
Last year, Deezer filed two patents for an AI detection tool, which it said can detect “fully AI-generated tracks.” Deezer has since published periodic updates on how many tracks the tool has flagged.
Nearly two months ago, Deezer said it was now receiving over 60,000 fully AI-generated tracks every day, and is moving to license its AI detection tool to the wider music industry.
Separately, researchers at Sony Music Group‘s parent company, Japan-headquartered Sony Group, have reportedly developed technology to identify copyrighted music embedded in AI-generated tracks. Financial news outlet Nikkei Asia reported last month that the methods outlined in the research could open a path for songwriters to claim compensation when their work is used without authorization.
The deepfake takedowns highlight Sony’s stance against AI in music. Sony Music was among the major music companies that sued AI music generators Suno and Udio in 2024 for “mass infringement” of copyrighted material.
While Universal Music Group and Warner Music Group have settled lawsuits with Udio and partnered with the AI firm on licensing, Sony has remained notably silent. Warner Music also settled with Suno in November.
Music Business Worldwide
