WeTransfer Reassures Users After AI Training Concerns
Popular file transfer platform WeTransfer spent a frantic day reassuring users that it has no intention of using uploaded files to train AI models, after an update to its terms of service suggested that anything sent through the platform could be used to build or improve machine learning tools. The company moved quickly to address the backlash, rewriting the offending clause in clearer language.
Users were outraged when the updated terms seemed to imply that their data would be used to train AI models. WeTransfer's swift clarification has eased tensions, but the episode highlights the growing distrust of tech companies and their handling of user data.
Clarification and Reassurance
The offending language in the terms of service stated that using WeTransfer gave the company the right to use the data “for the purposes of operating, developing, commercializing, and improving the Service or new technologies or services, including to improve performance of machine learning models that enhance our content moderation process, in accordance with the Privacy & Cookie Policy.” This broad and unclear language sparked concerns among users, who feared that their data could be used without their permission.
WeTransfer noted the growing furor and rushed to put out the fire. The company rewrote the section of its terms of service and published a blog post explaining the confusion, promising repeatedly that no one's data would be used without permission, least of all to train AI models. "From your feedback, we understood that it may have been unclear that you retain ownership and control of your content. We've since updated the terms further to make them easier to understand," WeTransfer wrote in the blog.
Lessons Learned
The incident underscores the importance of clear, transparent communication from tech companies about how they use customer data. That a similar controversy hit Dropbox about a year and a half ago shows how quickly ambiguous terms can spark backlash, and why companies must be proactive in addressing user concerns rather than waiting for an uproar. The stakes are especially high for creative professionals, who are sensitive to even the appearance of data misuse.
In an era when tools like DALL·E, Midjourney, and ChatGPT are trained on the work of artists, writers, and musicians, the importance of protecting user data cannot be overstated. WeTransfer's swift response and clarification demonstrate a commitment to user trust and transparency, and other companies would do well to follow its example.