Meta will start tracking the way employees work, including their keystrokes and mouse clicks, to train its AI models.
The Model Capability Initiative (MCI) will run on Meta’s computers and internal apps, logging employee activity for use as training data for Meta’s AI models, according to an internal memo seen by Reuters.
“If we’re building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them – things like mouse movements, clicking buttons, and navigating dropdown menus,” a Meta spokesperson told LeadDev.
“To help, we’re launching an internal tool that will capture these kinds of inputs on certain applications to help us train our models. There are safeguards in place to protect sensitive content, and the data is not used for any other purpose,” they added.
Meta’s move comes as major tech companies such as OpenAI, Anthropic, and Google have recently introduced new tools that allow AI agents to take control of a user’s computer or web browser to complete specific tasks.
Big Brother is watching
However, not everyone is comfortable with ‘Big Brother’ watching their work activity.
“All this does is make employees feel uncomfortable and use other devices for most of the real computer usage they are likely trying to capture. It’s hard to imagine any employee anywhere being comfortable with their employer seeing exactly what they are doing on their computer; it’s effectively spying,” an anonymous source told LeadDev.
Legal or not?
Some have also questioned whether Meta’s approach is legally and ethically valid.
“Even if the logging was technically permissible under existing employment terms, the purpose has changed. Activity previously accessible for security, IT support, or compliance is now being repurposed as training data for commercial AI systems. That is a different category of processing, with different consent, transparency and data protection implications. In most jurisdictions with meaningful employment or privacy frameworks, a change of purpose usually requires a fresh legal basis, not just a fresh email to staff,” Jean Gan, head of legal compliance at Scan Global Logistics, wrote in a LinkedIn post.
“If an organization is building, or adopting AI agents trained on employee behaviour, the questions cannot sit in a single function. Data protection, employment, IP ownership of the training data, secondary use consent, and the ethical question of whether staff are being fairly told what their work is actually being used for. All of these are now live issues,” Gan added in their post.
The legality will depend on the jurisdiction and specific employment contracts, but the direction of travel is clear: companies are hungry to train their models on real-world employee behavior that can’t be accessed elsewhere.
It’s easy to see a world where more and more employees rebel against training their own replacements, and start to sabotage AI training efforts.