Lack of transparency and child safety: key concerns in Italy’s ChatGPT probe

OpenAI is under investigation by Italy’s data protection authority over suspicions that its AI chatbot, ChatGPT, violates European Union privacy law. The Italian authority, the Garante, has not disclosed specific details but has given OpenAI 30 days to respond to the allegations. The potential breaches relate to the General Data Protection Regulation (GDPR) and could attract fines of up to €20 million or 4% of global annual turnover.

Concerns previously raised by the Garante include the lack of a legal basis for data collection and processing, ‘hallucinations’ by the AI tool, and child safety issues. After addressing some of these issues, OpenAI resumed ChatGPT service in Italy last year, but the continued investigation has led to the current suspicion of EU law violations.

OpenAI’s reliance on legitimate interests as a legal basis for processing individuals’ data in AI model training remains a significant issue in the investigation. OpenAI has sought to establish a base in Ireland, potentially shifting its GDPR compliance oversight to the Irish regulator. The investigation is part of wider EU scrutiny of ChatGPT’s GDPR compliance.