The digital age has fundamentally transformed how students learn, research, and unfortunately, how they cheat. But this June, as over 13 million Chinese students prepared for the gaokao (the nation’s make-or-break college entrance examination), something unprecedented happened in the artificial intelligence landscape.
China’s biggest tech companies voluntarily pulled the plug on their AI chatbots’ most powerful features.
The Great AI Shutdown of Exam Season
Between June 7th and 10th, as students across China sat down for the rigorous multi-day gaokao examinations, the country’s AI ecosystem went into deliberate hibernation. Alibaba’s Qwen chatbot disabled its photo recognition capabilities for test-related queries. ByteDance’s Doubao followed suit. Tencent’s Yuanbao and Moonshot’s Kimi went even further, suspending their image analysis services entirely during exam hours.
When users attempted to access these features, they were met with straightforward explanations: the functions had been disabled “to ensure the fairness of the college entrance examinations.”
This wasn’t a government mandate or regulatory crackdown. These were voluntary measures by private companies, reflecting a remarkable moment of corporate responsibility in the age of artificial intelligence.
Why the Gaokao Demands Such Extreme Measures
To understand why China’s tech giants took such drastic steps, you need to grasp what the gaokao represents in Chinese society. This isn’t just another standardized test (it’s a singular gateway that determines the educational and professional trajectory of millions of young people).
For the vast majority of students, the gaokao is China’s sole pathway to university admission. Unlike systems in the United States or Europe where students can demonstrate their capabilities through multiple channels (extracurricular activities, essays, interviews, or standardized tests taken multiple times), Chinese students effectively get one shot. One exam. One chance to secure their future.
The mathematical reality is stark: 13.3 million students competing for a limited number of university spots, with the most prestigious institutions accepting only a tiny fraction of applicants. This creates pressure that’s difficult for those outside the system to fully comprehend.
The Global Context: AI Cheating Goes Mainstream
China’s proactive approach stands in sharp contrast to the reactive measures we’re seeing elsewhere. In the United States, educational institutions are scrambling to address AI-enabled cheating through increasingly desperate measures. The Wall Street Journal reported that sales of traditional blue books (those paper exam booklets that seemed destined for obsolescence) have surged at universities across America over the past two years.
The reason? Professors are retreating to handwritten, offline testing as a defense against AI assistance. It’s a digital-age return to analog solutions, driven by the same concerns that motivated China’s tech companies to hit the pause button.
But here’s what makes the Chinese approach fascinating: rather than playing an endless game of cat-and-mouse between educators and students armed with AI tools, the companies that create these tools chose to self-regulate.
The Technical Challenge of Academic Integrity
The specific measures taken by Chinese AI companies reveal a sophisticated understanding of how students might exploit their technology. Disabling photo recognition during exam periods isn’t arbitrary (it directly targets the most likely cheating scenario).
A student could potentially photograph exam questions and submit them to an AI chatbot for instant analysis and answers. By removing image processing capabilities during testing windows, companies eliminated the most straightforward cheating pathway while still allowing text-based interactions for legitimate educational purposes.
DeepSeek, the AI tool that gained international attention earlier in 2025, went beyond temporary shutdowns, implementing time-based restrictions that automatically activate during examination periods. This suggests these companies have developed specific protocols for managing their technology during high-stakes educational events.
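The companies have not published how their restrictions are implemented, but the basic idea of a time-based feature gate is straightforward. The sketch below is purely illustrative: the exam windows, function names, and refusal message are assumptions (the message paraphrases the explanation users reportedly saw), not anything disclosed by DeepSeek or the other providers.

```python
from datetime import datetime

# Hypothetical exam windows for illustration only; a real provider would
# load these from configuration, not hard-code them.
EXAM_WINDOWS = [
    (datetime(2025, 6, 7, 9, 0), datetime(2025, 6, 7, 17, 0)),
    (datetime(2025, 6, 8, 9, 0), datetime(2025, 6, 8, 17, 0)),
    (datetime(2025, 6, 9, 9, 0), datetime(2025, 6, 9, 17, 0)),
    (datetime(2025, 6, 10, 9, 0), datetime(2025, 6, 10, 17, 0)),
]

# Paraphrase of the reported user-facing explanation.
REFUSAL = ("This feature is disabled to ensure the fairness of the "
           "college entrance examinations.")

def image_analysis_enabled(now: datetime) -> bool:
    """Return False while any exam window is in progress."""
    return not any(start <= now < end for start, end in EXAM_WINDOWS)

def handle_image_request(now: datetime, image_bytes: bytes) -> str:
    """Gate the image-analysis feature on the current time."""
    if not image_analysis_enabled(now):
        return REFUSAL
    return "analysis result"  # placeholder for the normal pipeline
```

Text-based interactions would simply bypass this check, which matches the reported behavior: image analysis suspended, ordinary chat left available.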
Corporate Responsibility in the AI Era
What we’re witnessing in China represents a broader question about the role of technology companies in society. When AI tools become powerful enough to fundamentally undermine established institutions (whether educational systems, professional examinations, or other merit-based evaluations), what responsibilities do their creators bear?
The Chinese approach suggests that companies can and should take proactive steps to prevent misuse of their technology, even when doing so temporarily reduces their service capabilities and potentially impacts revenue.
This stands in notable contrast to the typical Silicon Valley philosophy of “move fast and break things,” where companies often deploy technology first and address societal consequences later. The Chinese AI companies’ decision to voluntarily restrict their own services during exam periods reflects a different philosophy: that preserving institutional integrity can take precedence over unrestricted technological access.
The Broader Implications for Educational Technology
The temporary shutdown of AI features during the gaokao raises important questions about the future relationship between artificial intelligence and education. As AI capabilities continue to advance, educational institutions worldwide will need to grapple with similar challenges.
Some potential approaches are already emerging:
Adaptive Assessment: Educational systems might evolve toward forms of evaluation that account for AI assistance rather than trying to eliminate it entirely. This could include open-book, AI-assisted examinations that test higher-order thinking skills rather than information recall.
Authentication Technology: Development of more sophisticated proctoring systems that can detect AI usage in real-time, potentially through keystroke analysis, browser monitoring, or other digital forensics.
Institutional Partnerships: The Chinese model suggests that partnerships between educational institutions and AI companies could become a standard approach, with technology providers voluntarily implementing restrictions during critical evaluation periods.
Looking Beyond the Exam Room
The Chinese tech companies’ actions during gaokao season offer a glimpse into how societies might manage AI’s impact on established institutions. But this approach also raises questions about scalability and fairness.
If AI companies can disable features to preserve exam integrity, could they also be pressured to restrict access for other reasons? What happens when similar measures are requested by authoritarian governments for less benign purposes?
The gaokao represents a relatively clear-cut case where the stakes are well-defined and the timeline is limited. But as AI becomes more integrated into daily life, the decisions about when and how to restrict access will become more complex and consequential.
The Road Ahead
As the 2025 gaokao concludes and AI services return to full functionality, the precedent has been set. Chinese technology companies have demonstrated that self-regulation is possible, even when it comes at a cost to their services.
This approach may become a model for other high-stakes educational assessments worldwide. Professional licensing examinations, graduate school admissions tests, and other critical evaluations might see similar voluntary restrictions from AI providers.
The bigger question is whether this represents a sustainable long-term solution or merely a temporary measure while educational institutions adapt to an AI-enabled world. As artificial intelligence becomes more sophisticated and ubiquitous, the challenge of maintaining academic integrity will only intensify.
For now, China’s approach offers a compelling example of how technology companies can take responsibility for their tools’ societal impact. In an era where AI capabilities often outpace regulatory frameworks, voluntary corporate responsibility might be our best defense against unintended consequences.
The gaokao continues to shape millions of lives each year. Thanks to the unprecedented cooperation between China’s AI industry and educational system, it can continue to do so fairly (at least for now).
This article was rewritten with the aid of AI. At Techsoma, we embrace AI and understand our role in providing context, driving narrative and changing culture.