…or don’t do what Zoom did.
Zoom tried to do the right thing by giving users full, clear notice of what it was doing with their data. Unfortunately, the notice was vague enough to cause a panic: any user data could be used in any way to train any and all AI to improve the services offered. A few service names were included (“Zoom IQ or other tools”), which obviously limits nothing, and there was no way to opt out. Now that the possibility of LLMs regurgitating training text verbatim is widely known, people are seizing on that as a reason to stop using Zoom.
Trying to Do the Right Thing
I’m willing to wager that Zoom really thought it was doing right by its users: obtaining informed consent for the use of AI in its services. Unfortunately, both halves of that phrase fail. The notice does not inform, and without an opt-out there is no meaningful consent.
The statement Zoom made is both vague and broad. Vague, because liberally referencing many technologies while listing only a few as examples informs no one of anything in particular. Broad, because it covers “content,” which the terms of service define as essentially anything the user generates. The net effect of this language is to “inform” you that Zoom can use anything and everything involved in your calls for whatever purpose it can think up.
The legal team that drafted this policy needs a closer working relationship with the development team, or at least better feedback about what the developers are actually building. A blog post intended to clarify the terms still falls short of the necessary clarity, and assertions in a blog post are neither legally binding nor remembered a few weeks or months later when the development team has moved on to something new.
How to Actually Do the Right Thing
It’s really easy to point and scowl at people who tried, and failed, to do right by their users. It is far more productive to explain how to prevent this kind of backlash in the future.
I identified above how being vague about what you’re actually doing can be detrimental, so the obvious solution is to be more precise. I realize this is a bit of a push-pull relationship with the developers, but a little understanding goes a long way here. “In calls where advanced AI (or ML) features are enabled, the content of those calls will be used to train AI” is far more specific. Zoom’s blog post on the subject gestures at this, but it reads as unmoored from the actual products. This language has the added advantage of not naming explicit features, and it even guides users on how to NOT participate.
The thing many people, especially attorneys, fail to recognize is that notifying users can be a marketing opportunity. You’re rolling out hot new AI. Boast to your users! Get them excited and on board with the latest technology, not just unwittingly subject to it.