The latest AI model from Chinese startup DeepSeek, an updated version of its R1 reasoning model known as “R1-0528,” posts impressive scores on coding, math, and general knowledge benchmarks, approaching the performance of OpenAI’s o3. However, there are concerns that the update is less willing to engage with controversial topics, particularly those the Chinese government considers sensitive.
According to testing by the developer behind SpeechMap, who goes by the username “xlr8harder,” R1-0528 appears to be noticeably more restricted on contentious topics than previous DeepSeek releases, and the developer describes it as the most censored DeepSeek model yet when it comes to criticism of the Chinese government.
As Wired explained in a piece earlier this year, AI models in China are subject to strict information controls: a law that took effect in 2023 prohibits models from generating content that damages the unity of the country and social harmony. To comply with these rules, Chinese startups often censor their models, either with prompt-level filters or by fine-tuning them. One study found that DeepSeek’s original R1 refused to answer 85% of questions on subjects deemed politically controversial by the Chinese government.
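For readers curious what a prompt-level filter can look like in practice, the sketch below is a minimal, purely illustrative Python example: the serving layer screens an incoming prompt against a small keyword blocklist and returns a canned refusal before the model is ever invoked. The blocklist, refusal text, and stub model are hypothetical placeholders for illustration only; nothing here reflects DeepSeek’s actual implementation.

    # Illustrative sketch of a prompt-level filter; blocklist, refusal text,
    # and the generate() callable are hypothetical placeholders.
    BLOCKED_TERMS = ["tiananmen", "xinjiang internment"]
    REFUSAL = "I can't help with that topic."

    def filtered_generate(prompt, generate):
        # Screen the prompt before calling the underlying model.
        # `generate` is any callable mapping a prompt string to a completion.
        lowered = prompt.lower()
        if any(term in lowered for term in BLOCKED_TERMS):
            return REFUSAL  # short-circuit: the model never sees the prompt
        return generate(prompt)

    if __name__ == "__main__":
        stub_model = lambda p: "[model answer to: " + p + "]"
        print(filtered_generate("What happened at Tiananmen Square in 1989?", stub_model))
        print(filtered_generate("Explain how transformers work.", stub_model))

Fine-tuning-based censorship, by contrast, bakes the refusals into the model weights themselves, which is harder to detect and to bypass than a filter sitting in front of the model.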
According to xlr8harder, R1-0528 avoids answering questions about topics such as the internment camps in China’s Xinjiang region, where Uyghur Muslims have been detained. While the model occasionally criticizes aspects of Chinese government policy, it tends to fall back on Beijing’s official stance when questioned directly.
DailyTech also observed this censorship in its own testing.
Chinese AI models, including the video generators Magi-1 and Kling, have drawn criticism in the past for censoring topics sensitive to the Chinese government, such as the Tiananmen Square massacre. Last year, Clément Delangue, CEO of AI platform Hugging Face, warned about the unintended consequences of Western companies building on top of high-performing, openly licensed Chinese AI models.