Many marketers are eager to learn what generative artificial intelligence products can do for their brands.
But before everyone gets too preoccupied with “could,” The Brandtech Group believes it’s time to consider the “should.”
On Wednesday, the martech holding company announced a series of initiatives aimed at setting guidelines for the ethical use of generative AI.
The initiatives include a free ethical policy blueprint for businesses and the launch of “Bias Breaker” in Pencil, the generative AI ad-building platform The Brandtech Group acquired last year.
The Brandtech Group also plans to offer six-week ethical AI sprints to clients through several of its consulting businesses. During these sprints, businesses will be guided through developing brand-specific policies.
Tackling bias in artificial intelligence
Previously, AI-curious clients would come to The Brandtech Group with concerns about copyright infringement, training data origins and data privacy, Head of Emerging Technology Rebecca Sykes told AdExchanger.
Legal matters are still a concern, but now brands seem more worried about the court of public opinion, citing fears of getting things wrong and suffering reputational damage.
According to Sykes, weaker brands that already struggle to stand out against a “sea of sameness” are especially vulnerable to negative consequences from AI tools, which are inherently designed to reflect what’s already in their training data – including any human prejudices or biases present therein.
“The bias is so baked in, and the models are so opaque,” said Sykes, that if your brand doesn’t have a strong position around DEI and ethics to begin with, you might not notice the impact that AI-generated material can have on your business as it scales.
The Brandtech Group also advocates that its clients have an official position on transparency and disclosure, and think through what their “hard nos” will be regarding the use of certain techniques or models.
“Different brands will need to take a different stance and have a different point of view,” said Sykes of the policy-making process. “But it has led them to some clarity, to much more robust thinking about what they will make, what they won’t make, but most importantly, how they’re going to make it.”
Rolling the Representational Dice
In developing the “Bias Breaker” tool, The Brandtech Group worked with Pencil to generate thousands of AI images from text-based prompts and identify trends in what the images depicted.
“I think images, and particularly images of people, were the starting point, because it created the most friction and tension in the companies we were talking to,” said Sykes. “They were most divided over whether they should or shouldn’t develop synthetic people.”
What they found was that simple occupational prompts tended to align with existing stereotypes about the types of people who hold those jobs – typing “CEO” typically conjures a middle-aged white man, for example, while an image prompted by the word “nurse” will feature a pretty, young white woman in uniform.
To correct for this trend, Pencil’s new Bias Breaker tool uses random probability to inject more inclusive descriptive language into these prompts, creating a wider range of representational figures across age, gender, race, body type and even religion.
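The mechanics of that approach can be sketched in a few lines. The snippet below is a hypothetical illustration of random descriptor injection, not Pencil’s actual implementation; the descriptor pools and the `break_bias` function name are assumptions for the sake of the example.

```python
import random

# Hypothetical descriptor pools -- illustrative only, not Pencil's real lists.
DESCRIPTOR_POOLS = {
    "age": ["young", "middle-aged", "elderly"],
    "gender": ["male", "female", "non-binary"],
    "ethnicity": ["Black", "East Asian", "South Asian", "white", "Latina"],
    "body type": ["slim", "athletic", "plus-size"],
}


def break_bias(prompt, rng=None):
    """Prepend one randomly sampled descriptor from each pool, so repeated
    generations spread across a wider range of representations than the
    model's stereotyped defaults."""
    rng = rng or random.Random()
    chosen = [rng.choice(pool) for pool in DESCRIPTOR_POOLS.values()]
    return "a {} {}".format(" ".join(chosen), prompt)


# Each call yields a different augmented prompt, e.g. for "nurse" or "CEO".
print(break_bias("nurse"))
print(break_bias("CEO"))
```

Because the descriptors are drawn independently at random on every call, a batch of generated images ends up sampling across the full grid of combinations rather than collapsing to the single most statistically common depiction.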
That said, Bias Breaker still isn’t intended to be a complete solution, Sykes admitted. For one thing, it doesn’t yet account for the fact that AI depictions of marginalized identities can often dip into cultural stereotypes, too.
The Brandtech Group’s own policy requires that content intended for a particular group must include members of that group in the creation process. For example, content related to Pride Month celebrations in June should feature input from human members of the LGBTQ+ community. More importantly, companies should not swap in AI-generated content where they previously paid community members for actual photoshoots.
“If you care enough in your support of Pride Month to create content for it, have [the content] be about real people, focused on a real community,” Sykes said.
Pencil, The Brandtech Group’s existing AI product, also guards against copyright infringement. The company’s image-generating software doesn’t allow for proper names of any kind, making it impossible to generate content “in the style of” a particular artist, and it includes fully copyright-cleared data sets from models like Adobe Firefly and Getty Images for its more copyright-anxious clients to use.
A more ethical future
Generative AI is still an ethical minefield in many other ways. And The Brandtech Group doesn’t have any easy answers to complex problems, like AI’s reliance on labor exploitation or its negative environmental impact.
“I would hate for anyone to think that we think we’ve solved it and we’re walking away,” said Sykes. “This is step one, and then we want to keep building and moving forward.”
In the meantime, The Brandtech Group’s internal ethics policy follows an important guiding principle: A computer can never be responsible for making any decisions on its own.
“We have made a very conscious decision that nothing is fully automated,” said Sykes. “We’ll take automation to the point where it makes your life significantly easier, but not to the point where we’re running autonomous decision-making.”
Also published in: AdExchanger