Microsoft files lawsuit against service for creating illicit content using its AI technology

The service used undocumented APIs and other tricks to work around safety guardrails.

1/11/2025

Microsoft is accusing three individuals of running a "hacking-as-a-service" scheme designed to use the company's platform for AI-generated content to create harmful and illicit material. The foreign-based defendants developed tools specifically built to bypass safety guardrails Microsoft had put in place to prevent the creation of harmful content through its generative AI services, said Steven Masada, assistant general counsel at Microsoft's Digital Crimes Unit. They then compromised the credentials of legitimate paying customers and combined the two to create a fee-based platform that people could use.

A sophisticated scheme

Microsoft is also suing seven individuals the company alleges were customers of the service. All 10 defendants are named John Doe because Microsoft doesn't know their identities. "By this action, Microsoft seeks to disrupt a sophisticated scheme carried out by malicious actors who have developed tools specifically designed to bypass the safety guardrails of generative AI services provided by Microsoft and others," attorneys wrote in a complaint filed in US District Court for the Eastern District of Virginia and unsealed on Friday. The three people who operated the service allegedly compromised the accounts of legitimate Microsoft customers and sold access to them through a now-shuttered site, rentry[.]org/de3u. The service, which ran from last July until September, when Microsoft took action to shut it down, included "detailed instructions on how to use these custom tools to generate harmful and illicit content." The service included a proxy server that relayed traffic between its customers and the servers providing Microsoft's AI services, the complaint alleged. Among other things, the proxy service used undocumented Microsoft network application programming interfaces (APIs) to communicate with the company's Azure computers. The resulting requests were designed to mimic legitimate Azure OpenAI Service API requests and used compromised API keys to authenticate them. Microsoft attorneys included images in the complaint, the first illustrating the network infrastructure and the second showing the user interface presented to users of the defendants' service.
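The complaint doesn't reproduce the spoofed traffic, but the authentication model it describes is easy to picture. Below is a minimal sketch of what an ordinary Azure OpenAI Service REST call looks like; the resource name, deployment name, and API version are hypothetical placeholders, not details from the lawsuit. Because the api-key header is the only credential checked, anyone holding a stolen key can send requests that are indistinguishable from the account owner's:

```python
import requests

# Hypothetical placeholders -- these names are illustrative,
# not taken from the complaint.
RESOURCE = "example-resource"
DEPLOYMENT = "example-deployment"
API_VERSION = "2024-02-01"

url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
)

# Azure OpenAI authenticates REST calls with an "api-key" header. The key
# alone identifies the paying customer, which is why a key harvested from
# a public repo or a breached network is enough to bill requests to its
# rightful owner.
headers = {
    "api-key": "<stolen-or-legitimate-key>",
    "Content-Type": "application/json",
}

payload = {
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 16,
}

response = requests.post(url, headers=headers, json=payload, timeout=30)
print(response.status_code, response.json())
```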

Microsoft didn't say how the legitimate customer accounts were compromised, but it noted that hackers have been known to create tools that scan code repositories for API keys developers inadvertently include in the apps they build. Microsoft and others have long advised developers to strip credentials and other sensitive data from the code they publish, but the practice is routinely ignored. The company also raised the possibility that the credentials were stolen by attackers who compromised the networks where they were stored.
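Microsoft didn't describe the attackers' tooling, but the repository-scanning technique it alludes to is straightforward to approximate. The sketch below is illustrative only: the two regexes and the scan_repo helper are hypothetical, and real secret scanners rely on far larger rule sets plus entropy checks:

```python
import os
import re

# Simplified stand-in patterns: an OpenAI-style "sk-..." token and a
# generic 32-character hex string. Real scanners use many more rules.
PATTERNS = {
    "openai-style key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "generic hex secret": re.compile(r"\b[0-9a-f]{32}\b"),
}

def scan_repo(root: str) -> None:
    """Walk a checked-out repository and flag lines that look like API keys."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    for lineno, line in enumerate(fh, 1):
                        for label, pattern in PATTERNS.items():
                            if pattern.search(line):
                                print(f"{path}:{lineno}: possible {label}")
            except OSError:
                continue  # unreadable file; skip it

if __name__ == "__main__":
    scan_repo(".")
```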


Microsoft, like other AI providers, prohibits the use of its generative AI systems to create certain kinds of content. Forbidden categories include material that features or promotes sexual exploitation or abuse, is erotic or pornographic, or attacks, denigrates, or excludes people based on race, ethnicity, national origin, gender, sexual orientation, religion, age, disability status, or similar traits. The policy also bars content that threatens, advocates, or embodies violence, physical harm, or abuse. Beyond expressly banning such uses of its platform, Microsoft has developed guardrails that inspect both the prompts users submit and the resulting output for signs that the requested content violates any of these terms. Those code-based restrictions have been bypassed repeatedly in recent years, by benign researchers and malicious actors alike. Microsoft didn't explain in detail how the defendants' software was allegedly designed to bypass the guardrails the company had built.
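Microsoft hasn't published how its guardrails work, so the following is only a schematic sketch of the general pattern described above: screening the prompt before generation and the output after it. The BLOCKLIST and the violates_policy and moderate helpers are hypothetical stand-ins; production systems use trained classifiers rather than keyword lists:

```python
# Deliberately simplified illustration of two-sided content screening.
# Microsoft's actual guardrails are not public; this is not their design.
BLOCKLIST = {"example_banned_term_1", "example_banned_term_2"}

def violates_policy(text: str) -> bool:
    """Crude stand-in for a content classifier: flag blocklisted terms."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def moderate(prompt: str, generate) -> str:
    """Screen the prompt before generation and the output after it."""
    if violates_policy(prompt):
        return "Request refused: prompt violates content policy."
    output = generate(prompt)
    if violates_policy(output):
        return "Response withheld: output violates content policy."
    return output
```

Checking both sides matters because a prompt that passes the input filter can still coax a model into producing prohibited output, which is why services layer an output check on top and why bypassing both, as the defendants allegedly did, requires more than a cleverly worded prompt.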