LinkedIn Under Fire: Users Concerned Over Data Usage for AI Training
LinkedIn has recently come under scrutiny for its decision to use user-generated data to train artificial intelligence models. Many users are expressing their dissatisfaction with the platform's practice of auto-enrolling members into this initiative without their explicit consent. The professional networking site collects data from users, including their posts, activity frequency, language preferences, and feedback, with the stated aim of improving its services.
[See our previous report: Your “Personal” YouTube Videos Help Train the Next Tech Revolution?]
User Reactions and Concerns
Prominent voices in the online community, such as Rachel Tobac, chair of Women In Security and Privacy, have voiced strong objections to LinkedIn's auto-opt-in approach. Tobac argues that users should not be burdened with the responsibility of opting out of features they did not consciously agree to. She urges social media platforms to provide clear choices to users regarding data usage, advocating for more transparency in how organizations handle personal information.
[See our previous report: The Looming Threat of 'Model Collapse': How Synthetic Data Challenges AI Progress]
LinkedIn's Response and New Features
In response to the backlash, LinkedIn announced updates to its user agreement, effective November 20, that clarify its data usage practices and introduce a new opt-out setting specifically for AI model training. LinkedIn's Chief Privacy Officer, Kalinda Raina, stated that the data collected is essential for improving security and enhancing the company's generative AI products.
[See our previous report: Navigating Privacy: The Battle Over AI Training and User Data in the EU]
The Value of AI for Users
A LinkedIn spokesperson emphasized that many users are looking for AI tools to help with career-related tasks, such as crafting resumes or preparing messages for recruiters. The spokesperson noted that the goal of LinkedIn's generative AI services is to give users a competitive edge in their professional endeavors. At the same time, the platform maintains that users retain control over how their data is used.
[See our previous report: How OpenAI Improves AI with Your Help: A Transparent Look!]
How to Opt Out
For users who wish to prevent LinkedIn from using their data for AI training, the platform provides straightforward steps to opt out. Members can disable the feature via desktop or mobile app settings, ensuring they retain control over their personal information.
[See our previous report: Guarding Your Art: How to Opt Out of OpenAI DALL-E Training?]
Data Protection Measures
LinkedIn asserts that it employs privacy-enhancing technologies to anonymize personal data in datasets used for AI training. However, while opting out prevents future use of a member's personal data for training purposes, it does not affect data that has already been used in AI development.
[See our previous report: Your Daily Conversation at Home is Used to Train AI by Amazon?]
Source: USA Today