Notifying Users of AI Data Use

…or don’t do what Zoom did.

Zoom tried to do the right thing: give users full, clear notification of what the company was doing with their data. Unfortunately, the notification was vague enough to cause a panic. Under the policy, any user data could be used in any way to train any and all AI to improve the services offered. A few service names were included as examples, “Zoom IQ or other tools,” which obviously does not limit anything. There is no possibility of opting out, either. Now that the tendency of LLMs to regurgitate text verbatim has become widely known, people are seizing on that as a reason to stop using Zoom.

Trying to Do the Right Thing

I’m willing to wager that Zoom really thought they were doing right by their users: they were obtaining informed consent for the use of AI with their services. Unfortunately, both halves of “informed consent” fell short.

Informed

The statement Zoom made is both vague and broad. It refers liberally to many technologies, listing a few as examples, which informs no one of anything in particular. The breadth comes from the word “content,” which the terms of service define as anything the user generates. The net effect of this language is that they are “informing” you that they can use anything and everything involved in your calls for whatever purpose they can think up.

The worst part of the disclosure is that it is not specific about which user data is implicated. Is it shared documents? Is it the video data stream itself? The privacy policy says “yes, all of it.” Although I doubt the development team will slurp down every bit of video transmitted from client to client, they are entirely within their rights to do so.

The legal team that drafted this policy needs to tighten up its relationship with the development team, or at least get better feedback about what the developers are actually working on. A blog post intended to clarify matters still falls short of the necessary clarity. Assertions in a blog post are neither legally binding nor even remembered a few weeks or months later, once the development team has moved on to something new.

Consent

I personally found out about the changes to the privacy policy through excited and anxious LinkedIn posts on the subject. Most of the posts referenced the Terms of Service, which didn’t even include the relevant language; it is in the Privacy Statement that the language can be found. I have most likely used Zoom since the updates took effect, as the document is dated June 30, 2023, yet noise about the implications of the change didn’t start reverberating until August. There is a lot to argue about whether a user meaningfully consents to every revision of a policy when the policy can be altered without users even being aware.

How to Actually Do the Right Thing

It’s really easy to point and scowl at people who are trying, yet failing, to do right by their users. It is far more productive to explain how to prevent this kind of backlash in the future.

Be Specific

I identified above how being vague about what you’re actually doing can be detrimental, so the obvious solution is to be more articulate about it. I realize this creates a bit of a push-pull relationship with the developers, but a little understanding goes a long way here. “In calls where advanced AI (or ML) features are enabled, the content of those calls will be used to train AI” is far more specific. Zoom’s clarifying blog post seems to gesture at this, but it reads as unmoored from any actual product. This language also has the advantage of not committing to explicitly named features while still guiding users on how to NOT participate, as the sketch below illustrates.
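
In software terms, that disclosure maps naturally onto a per-call gate: content flows to training only when the user has explicitly enabled an AI feature. Here is a minimal sketch under that assumption; none of these names are real Zoom APIs, they are purely illustrative.

```typescript
// Hypothetical sketch: gate training-data collection on an explicit,
// per-call feature flag rather than a blanket policy grant.

interface CallSettings {
  aiFeaturesEnabled: boolean; // user toggled an advanced AI/ML feature on
}

interface CallContent {
  transcript: string;
  sharedDocuments: string[];
}

function collectTrainingData(
  settings: CallSettings,
  content: CallContent
): CallContent | null {
  // Only calls where the user explicitly enabled AI features contribute
  // content to model training; everything else is excluded by default.
  if (!settings.aiFeaturesEnabled) {
    return null;
  }
  return content;
}
```

The point of structuring it this way is that the policy language and the code enforce the same boundary: a user who never turns the feature on never contributes data.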

Get Consent

The thing many people, especially attorneys, fail to recognize is that notifying users can be a marketing opportunity. You’re rolling out hot new AI. Boast to the users! Get them excited and on board with the latest technology, not just unwittingly subject to it.
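
One way to put that into practice is to surface an explicit opt-in prompt when the feature launches, instead of burying the grant in a policy document. The following is a minimal sketch of such a flow; every name here is hypothetical, not any real Zoom API.

```typescript
// Hypothetical sketch: ask for affirmative, recorded consent when a new
// AI feature rolls out. The prompt doubles as an announcement, so the
// notification itself becomes the marketing moment.

interface ConsentRecord {
  userId: string;
  feature: string;
  optedIn: boolean;
  timestamp: Date;
}

async function promptForAiConsent(
  userId: string,
  askUser: (message: string) => Promise<boolean> // e.g. an in-app dialog
): Promise<ConsentRecord> {
  const message =
    "We're launching Smart Summaries! If you turn it on, the content of " +
    "your calls will be used to improve the feature. Enable it?";
  const optedIn = await askUser(message);
  // Persisting the choice gives you an audit trail of affirmative consent,
  // which is far stronger than a silently updated privacy statement.
  return { userId, feature: "smart-summaries", optedIn, timestamp: new Date() };
}
```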