Responsibility
A letter from Morgan
It’s time for some real talk.
In our work, we measure the identity attributes of people we see and hear in media. These identity attributes are deeply personal and often impact people’s experiences as they move through the world. We strongly believe that to build products and services that are fair, equitable, and inclusive, Responsibility needs to be foundational to our work.
Whether identity attributes are measured by people or by AI, an irresponsible approach can produce inaccurate results. Our customers are counting on us to give them accurate insights and effective tools. If we get it wrong, the most likely outcome is that we underreport, or build less effective tooling for, the groups that show up less often in content: people with larger body sizes, people with darker skin tones, or older adults.
So, with that context, here are a few ways we strive to build our products and business with a responsibility-first approach.
I. We partner with experts and advocacy groups
Our partners bring research, resources, and groups of people that together form a representative perspective on the communities they work with. These partners co-created our annotation guides, helped deliver training to our teams, and will keep working with us as we build more insights into our offerings. I encourage you to take a look at our partners on our Partners page, visit their sites, and check out the resources they have available.
II. We strive to build robust scales
Let’s use an example to show what we mean by robust. If you look around, you might notice AI models that assign a gender to a picture or video of a person. These products typically return either “man” or “woman” (or “male” or “female”). There are two challenges with this approach:
1. These gender models are binary, but gender is not. If you don’t have an option for something other than “man” or “woman,” you won’t be able to measure and report on it.
2. You can’t tell someone’s gender by looking at them. Gender is an internally held identity dimension. What we can do, as observers, is describe how we perceive someone to be expressing a gender identity.
Our approach is different. We don’t annotate men and women (gender identities); instead, we annotate gender expressions (feminine, masculine, and gender nonconforming). Expressions are usually highly correlated with gender identities, but they are a more accurate and fair thing to measure, because they describe what an observer can actually see. Having a scale with additional options also lets us capture more nuance.
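As a minimal sketch of what that kind of scale looks like in practice (the names and fields here are hypothetical, not our production schema), an annotation record built around expression rather than identity might look like this:

```python
from dataclasses import dataclass
from enum import Enum


class GenderExpression(Enum):
    """An observable expression scale -- deliberately not an identity claim."""
    FEMININE = "feminine"
    MASCULINE = "masculine"
    GENDER_NONCONFORMING = "gender_nonconforming"


@dataclass
class PersonAnnotation:
    person_id: str
    # The field name says "perceived" on purpose: annotators record what
    # they observe, not what the person's internally held identity is.
    perceived_gender_expression: GenderExpression
```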
III. We are thoughtful about what can and cannot be estimated via people or technology
Let’s use another example. If a person can’t look at a picture of another person and know their sexual orientation, an AI model can’t either. In cases like this, we will not ask a human or AI annotator to look at a picture of a single person and tell us whether that person is gay, straight, bisexual, or any other sexual orientation.
What we can do instead is notice when a person is shown in the context of a romantic, intimate, or partnered interaction. We can observe the sexual orientation of that interaction and annotate that.
And even though we’re being thoughtful about how to make this type of annotation responsibly, we won’t always get it right! Are we seeing two female friends playing with one of their babies, or a lesbian couple playing with their baby? This type of annotation is hard for both people and AI models. But even if we get some fraction of them wrong, they’re wrong because the calls are hard, not because we’re using stereotypes to make them.
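Here’s a minimal sketch of what this constraint looks like as a data model (everything here, from the class name to the fields, is hypothetical and for illustration only): the annotation attaches to an observed interaction, and a lone person in a frame simply never gets one.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class InteractionAnnotation:
    """The unit of annotation is an observed interaction, never a lone person."""
    person_ids: List[str]          # everyone taking part in the interaction
    interaction_type: str          # e.g. "romantic", "intimate", "partnered"
    observed_orientation: str      # describes the interaction, not any individual


def maybe_annotate(person_ids: List[str],
                   interaction_type: Optional[str],
                   observed_orientation: str) -> Optional[InteractionAnnotation]:
    # No romantic/intimate/partnered interaction on screen -> no annotation,
    # and a picture of a single person never produces one at all.
    if interaction_type is None or len(person_ids) < 2:
        return None
    return InteractionAnnotation(person_ids, interaction_type, observed_orientation)
```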
IV. We are committed to building AI and ML models responsibly
The points above are really step 0 of building AI models responsibly. The next steps involve:
1. Building representative training datasets
2. Publishing data cards for the datasets
3. Choosing AI algorithms that promote equity and fairness
4. Measuring the performance of trained models (see the sketch after this list)
5. Publishing model cards for the models
6. Continuing to test and tweak performance over time
7. Being transparent about all of the above
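To make step 4 concrete, here is a minimal sketch of disaggregated evaluation: breaking a model’s accuracy out per group instead of hiding gaps inside one overall number. The function, group names, and data are invented for illustration; this is not our evaluation pipeline.

```python
from collections import defaultdict
from typing import Dict, List, Tuple


def accuracy_by_group(examples: List[Tuple[str, str, str]]) -> Dict[str, float]:
    """examples: (group_label, true_label, predicted_label) triples.

    One overall accuracy can hide poor performance on groups that appear
    less often in content, so we break the metric out per group.
    """
    correct: Dict[str, int] = defaultdict(int)
    total: Dict[str, int] = defaultdict(int)
    for group, truth, pred in examples:
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}


# A gap between groups is a signal to revisit the training data or the model.
results = accuracy_by_group([
    ("lighter_skin_tone", "a", "a"),
    ("darker_skin_tone", "a", "b"),
    ("darker_skin_tone", "a", "a"),
])
print(results)  # {'lighter_skin_tone': 1.0, 'darker_skin_tone': 0.5}
```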
As we incorporate our own AI and ML models, expect to see more from us on each of these steps.
V. We’re learning and growing every day and want to hear from you
We learn from our partners, we learn from our customers, we learn from so many different sources every day. When we make mistakes (and we do, and we will, we all do!), we seek out feedback and look for opportunities to grow. If you have any feedback on the work we do or the approaches we share, we would love to hear it. Please reach out!