NSW Schools Caught in Microsoft Teams Biometric Data Collection Without Parental Consent

Image Credit: Dimitri Karastelev | Unsplash

In March 2025, Microsoft activated a "voice and face enrolment" feature in its Teams platform by default. The feature collected users' voice and facial biometric data to enhance meeting experiences, including improved audio quality and speaker identification through Microsoft's AI assistant, Copilot. The NSW Department of Education, unaware of the update, discovered the issue in early April 2025 and promptly disabled the feature, deleting all collected biometric profiles within 24 hours.

Impact on Students and Staff

The biometric data collection affected an unspecified number of students and staff in NSW public schools, where Teams is widely used for classes, video lessons, and collaboration. The department has not disclosed how many people were affected, nor confirmed that all parents have been notified.

Causes: AI Feature Activation and Oversight Gaps

The incident resulted from Microsoft's decision to enable the AI-driven feature by default, coupled with the department's failure to notice the update. Microsoft had announced the feature in October 2024 via the Microsoft 365 Message Center (MC912707), advising administrators to review and update their settings before mid-March 2025. However, the NSW Department of Education, managing thousands of Microsoft 365 notifications annually, overlooked the change.

Response: Disabling AI Feature and Data Deletion

Upon discovering the issue, the NSW Department of Education acted swiftly, disabling the feature and deleting all biometric profiles within 24 hours. A department spokesperson clarified, "The Department of Education does not collect student biometric data", emphasizing that the collection was unintentional.

Privacy Concerns and Criticisms

Shadow Education Minister Sarah Mitchell criticized the department for not informing all parents, stating, "It appears there are parents out there who are not even aware this occurred". She described the situation as a "complete breach of privacy and trust for every student and parent" across the state.

Privacy advocates, including Dr. Rys Farthing of Reset Tech Australia, raised concerns about the ethics of collecting children's biometric data without explicit consent. Dr. Farthing warned that exposure of biometric data poses lifelong risks for students and questioned whether the data had been used to train AI models or shared before it was deleted. Unlike a password, a compromised face or voiceprint cannot be reset.

Microsoft's Position on Data Usage

Microsoft stated that the collected biometric data was encrypted and stored securely, and denied that the data had been accessed or shared. A spokesperson said, "No one, including Microsoft, has access to individual users' biometric data – it's encrypted and stored securely per our compliance and privacy standards".

Global Context: AI and Biometric Data in Education

The incident reflects broader concerns about AI and biometric data in education. In 2023, New York State banned the use of facial recognition technology in schools following a report that concluded the risks to student privacy and civil rights outweigh potential security benefits.

The controversy may drive adoption of tools that track Microsoft 365 update announcements more proactively (see the sketch below). Commenters on platforms like X and Reddit have advocated open-source alternatives to reduce reliance on proprietary systems. Schools may also face increased pressure for transparency, such as notifying parents and offering opt-out options, and privacy advocates are pushing for stronger Australian regulations to protect children's data as AI integration in education grows.
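As a rough illustration of what such monitoring could look like, the sketch below polls the Microsoft Graph service communications API for Message Center posts and flags any Teams-related announcement that carries an "action required by" date, the kind of deadline attached to MC912707. This is a minimal sketch under stated assumptions, not a production tool: it assumes an access token with the ServiceMessage.Read.All permission has already been obtained (for example via an MSAL client-credentials flow), and the property names used here (services, actionRequiredByDateTime) should be verified against the current Microsoft Graph documentation.

```python
# Minimal sketch: flag Teams-related Message Center posts that require admin action.
# Assumes a valid Microsoft Graph access token with ServiceMessage.Read.All;
# token acquisition is out of scope here.
import requests

GRAPH_MESSAGES_URL = "https://graph.microsoft.com/v1.0/admin/serviceAnnouncement/messages"


def teams_messages_needing_action(access_token: str) -> list[dict]:
    """Return Message Center posts that affect Teams and set an 'action required by' date."""
    headers = {"Authorization": f"Bearer {access_token}"}
    flagged = []
    url = GRAPH_MESSAGES_URL
    while url:  # follow @odata.nextLink pagination until no pages remain
        response = requests.get(url, headers=headers, timeout=30)
        response.raise_for_status()
        payload = response.json()
        for message in payload.get("value", []):
            affects_teams = "Microsoft Teams" in message.get("services", [])
            act_by = message.get("actionRequiredByDateTime")
            if affects_teams and act_by:
                flagged.append(
                    {
                        "id": message.get("id"),        # e.g. "MC912707"
                        "title": message.get("title"),
                        "actionRequiredBy": act_by,
                    }
                )
        url = payload.get("@odata.nextLink")
    return flagged


if __name__ == "__main__":
    token = "<access token with ServiceMessage.Read.All>"
    for item in teams_messages_needing_action(token):
        print(f"{item['id']}: {item['title']} (act by {item['actionRequiredBy']})")
```

A check like this could run on a schedule and alert administrators well before a default-on change such as the Teams enrolment feature takes effect.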
