OpenAI said Tuesday it banned several ChatGPT accounts suspected of links to Chinese government entities after users requested proposals for social media “listening” and monitoring tools [reuters.com#1]. The company also removed accounts tied to suspected Russian‑speaking criminal groups and Chinese‑language users seeking phishing and malware assistance, and said its models did not give threat actors novel offensive capabilities [reuters.com#1][theregister.com#1][thehackernews.com#1]. OpenAI’s latest threat report also flagged attempts to design a crawler “probe” for scanning major social platforms and a “High‑Risk Uyghur‑Related Inflow Warning Model” to track movements of “Uyghur‑related” individuals, adding it cannot verify any Chinese government use [engadget.com#1][firstpost.com#1].
Highlights:
- Threat report: OpenAI says it has disrupted more than 40 networks since it began public threat reporting in February last year, and its models refused overtly malicious prompts [reuters.com#1].
- Surveillance designs: A banned China‑origin account used ChatGPT to plan materials and project designs for a social media “probe” to crawl X, Facebook, Instagram, Reddit, TikTok and YouTube for political, ethnic or religious content [engadget.com#1].
- Uyghur tracking: Another banned account sought a proposal for a “High‑Risk Uyghur‑Related Inflow Warning Model” to track movements of “Uyghur‑related” individuals; OpenAI said it cannot verify any government client [engadget.com#1][firstpost.com#1].
- Malware attempts: OpenAI blocked activity including Russian‑speaking actors refining remote‑access trojans and credential stealers, plus Chinese‑language users probing phishing and malware automation, some of whom also asked about China’s DeepSeek models [thehackernews.com#1][reuters.com#1].
- Defense outweighs abuse: By OpenAI’s estimate, ChatGPT is used to detect scams three times as often as to create them [engadget.com#1].
“We found no evidence of new tactics or that our models provided threat actors with novel offensive capabilities.” – OpenAI
Perspectives:
- OpenAI: It found no evidence that its models gave threat actors novel offensive capabilities. (Reuters)
- OpenAI: It cannot independently verify whether the proposed social media probe was used by a Chinese government entity. (Engadget)
- Chinese embassy in the U.S.: Did not immediately respond to a request for comment on the report. (Reuters)
Sources:
- Is China using ChatGPT for Uyghur surveillance? OpenAI report fuels cybersecurity fears – firstpost.com
- OpenAI bans suspected Chinese accounts using ChatGPT to plan surveillance – theregister.com
- OpenAI bans suspected China-linked accounts for seeking surveillance proposals – reuters.com
- OpenAI has disrupted (more) Chinese accounts using ChatGPT to create social media surveillance tools – engadget.com
- OpenAI Bans Suspected China-Linked Accounts For Seeking Surveillance Proposals – slashdot.org
- OpenAI Disrupts Russian, North Korean, and Chinese Hackers Misusing ChatGPT for Cyberattacks – thehackernews.com
- OpenAI Gives Us a Glimpse of How It Monitors for Misuse on ChatGPT – gizmodo.com