Technology

Canadian Privacy Watchdogs Take Aim at OpenAI Over ChatGPT Data Practices

Federal and provincial investigators find the AI giant collected personal information without consent, but say the platform is now safer after promised reforms.


Canada's privacy regulators have slammed OpenAI for collecting vast amounts of Canadians' personal data without consent to train its ChatGPT artificial intelligence models, prompting calls for urgent legal reforms to protect citizens in the AI age.

Federal Privacy Commissioner Philippe Dufresne and his counterparts in Quebec, British Columbia, and Alberta launched a joint investigation after ChatGPT's explosive launch in late 2022. What they uncovered was troubling: OpenAI had scraped publicly available information across the internet to build its GPT-3.5 and GPT-4 models—all without transparency or consent from Canadians whose data was used.

Data Collection Without Guardrails

"OpenAI launched ChatGPT without having fully addressed known privacy issues," Dufresne said Wednesday at a press conference in Ottawa. "This exposed Canadians to potential risks of harm such as breaches and discrimination on the basis of information about them."

The investigation found that OpenAI's data harvesting practices violated the federal Personal Information Protection and Electronic Documents Act (PIPEDA). Canadians had no way to access, correct, or delete their information once it was collected. Even worse, the scraped data sometimes contained factual inaccuracies that made their way into the AI model's responses.

Perhaps most damning: OpenAI knew about these privacy gaps before launch but decided to push the product to market anyway, prioritizing speed over safety.

"We have some statements from leaders saying 'we felt we had to move, we knew that there were others out there and so we launched it... having done limited testing,'" Dufresne told reporters. "We found that problematic."

Commitments Made, But Questions Remain

The good news: OpenAI has agreed to implement stronger privacy protections, including data retention policies, transparency measures in both official languages, and significantly reduced data collection for training new models. The company has committed to ongoing compliance monitoring by Canadian regulators.

Dufresne confirmed that ChatGPT is now "safe to use" and that the identified issues have been "conditionally resolved."

However, British Columbia Privacy Commissioner Michael Harvey raised a red flag that could reshape the entire AI industry. He suggested that ChatGPT's fundamental design—relying on third-party data collection rather than direct user consent—may be inherently incompatible with Canada's privacy laws as currently written.

"I would say that we are also encouraged by many of the things that we've seen ChatGPT do over the course of the investigation, and we're also encouraged by a number of the commitments that they've committed to make," Harvey said, acknowledging both progress and ongoing concerns.

Canada Needs New AI Privacy Rules

The investigation underscores what privacy advocates have been warning: Canada's privacy legislation was written for an earlier era and is woefully unprepared for artificial intelligence. OpenAI was able to exploit legal gaps that shouldn't exist.

"This has reinforced the need to modernize Canada's privacy laws for the age of artificial intelligence," Dufresne and his provincial counterparts concluded, with clear implications for Parliament.

OpenAI disagreed with some of the commissioners' findings but agreed to the reforms anyway. The company published a bilingual blog post Wednesday outlining its updated privacy approach and the controls available to Canadian users.

This report is based on findings from Canada's federal Privacy Commissioner and provincial commissioners in Quebec, British Columbia, and Alberta, released following a joint investigation into OpenAI's data practices.
