Dangers of AI in Video Production Part II

How AI contributes to “representational harms” and does so with impunity

Matt Croak Code

--

Photo by Hitesh Choudhary on Unsplash

I came across this post on LinkedIn that described how AI was used to generate images of Barbie dolls when given countries as prompts.

The results were pretty alarming.

These results further highlight AI’s flaws with regard to awareness, empathy, and context. The images and videos generated by Midjourney are very convincing, showing how quickly AI has progressed at conjuring high-quality images and video.

This progress is accelerating much more rapidly than AI’s ability to understand more abstract and existential things. This makes AI particularly dangerous when it comes to something called representational harms.

Representational Harms

Representational harms occur when a system portrays people, or groups of people, in a negative or diminished light. These can include harms like denigration, stereotyping, misrecognition, and exnomination.

Exnomination is a practice where a certain category or way of being is framed as the norm by not giving it a name, or not specifying it as a category in itself (for example, “athlete” vs “female athlete”).

These harms can perpetuate preconceived notions about things like race, religion, gender, and socioeconomic status. The more they are repeated, the more they risk being normalized and eventually reinforced as truth.

This is a pain point for AI because one of the stated goals of AI and its pioneers is to provide a resource that operates without bias, considering and making decisions based only on empirical data.

In reality, however, AI programs are often quite biased, reproducing biases not only from their training data but also from their creators.

With regard to the LinkedIn post mentioned above, these biases were exhibited in a program being used to…
