Business · Monday, April 20, 2026 · 2 min read

Chinese Tech Workers Push Back, Turning AI ‘Doubles’ Into a Win for Worker Voice

TL;DR

A GitHub project called Colleague Skill, along with employer requests to "distill" employees' skills into AI agents, sparked alarm among Chinese tech workers, who pushed back and forced a public conversation about consent, labor rights, and how AI should be deployed at work. The backlash is already shaping expectations: workers are asserting control over how their expertise and personas are used, prompting companies and communities to consider clearer norms and protections.

Key Takeaways

  • Some Chinese tech employees were asked to train AI agents modeled on their skills and personalities, prompting ethical concerns.
  • Open-source tools like Colleague Skill intensified the debate by making it easier to "distill" human expertise into models.
  • Workers have begun to resist such practices, creating pressure for clearer consent, contractual safeguards, and company policy changes.
  • The pushback is catalyzing a broader conversation about responsible workplace AI, potentially leading to new norms and better deployment practices.

Workers reclaim the conversation around workplace AI

Reports that some employers in China asked tech staff to help train AI agents modeled on their skills and personalities have touched off a wave of debate — and, importantly, resistance. A GitHub project called Colleague Skill, which claimed to let users "distill" colleagues' expertise into AI, helped focus attention on how quickly tools can enable the automation of tacit, human knowledge. That spotlight has prompted otherwise enthusiastic early adopters to pause and ask who benefits and who is protected.

The immediate upside is clear: tech workers are asserting agency. Rather than accepting such requests uncritically, many are pushing back — refusing to participate, publicly raising concerns, and demanding clearer terms. This grassroots response is already nudging employers to rethink how they introduce AI into teams and what consent, attribution, and compensation should look like when human know-how is captured by models.

Practical change and constructive outcomes

  • The controversy has accelerated conversations about contractual safeguards and informed consent when training workplace AIs.
  • Communities and open-source contributors are debating best practices, which could lead to shared standards and tool-level safeguards that protect contributors’ rights.
  • Employers seeing the backlash may adopt clearer policies, benefit-sharing, or opt for collaborative development approaches that preserve human agency.

While the incident highlights real risks — from job disruption to misuse of personal traits — the positive takeaway is that workers are not passive bystanders. Their pushback is shaping norms around responsible AI deployment in the workplace, offering a path toward tools that augment rather than exploit human talent.
