Activities & Lessons | Social Justice & Student Voice | Students | 90+ minutes
Human Restoration Project, Creative Commons BY-SA. Inspired by a class activity by educators Jamie Mitchell and Ramiel Nassara.
May 2023
Students explore the ethical considerations of AI-generated art and the stereotypes and biases these tools can reproduce.
Practically all students are exposed to AI (through ChatGPT, Snapchat, Midjourney, DALL-E, Microsoft Office, and likely Google Docs) and will encounter its use in school- and work-related assignments. In this lesson, students explore the critical topics of AI ethics and bias, delving into the ethical considerations surrounding the development and deployment of AI while examining the inherent biases that can emerge in AI algorithms and their real-world consequences.
This lesson requires access to the internet for AI art generation. The activity can be done as a whole class on one computer, or across any student platform (e.g., phones, laptops).
AI ethics refers to the field of study and practice concerned with the moral principles and guidelines that govern the responsible development, deployment, and use of artificial intelligence systems.
Throughout 2022 and 2023, the chat AI platform ChatGPT experienced remarkable growth and advancement. It incorporated vast amounts of new data, expanding its knowledge base to cover a wide array of topics and domains. As a result, it has become increasingly proficient at generating human-like responses, demonstrating improved understanding, nuanced language usage, and enhanced coherence. With ongoing updates and improvements, AI chat frameworks have been implemented everywhere from coding (GitHub Copilot) to office use (Microsoft Office).
Alongside ChatGPT, image generation software such as Midjourney and DALL-E has seen massive improvements in generating graphics and art, including mimicking styles of various artists over thousands of years.
In this lesson, we’ll problematize AI image generation: that is, we’ll examine the dangers of AI in daily use and why we must take a critical look at how and why it’s being used.
Image models, like Midjourney, are built through a process called training. During training, the model is presented with images along with their corresponding labels or annotations, enabling it to learn the relationship between the visual input and the desired output. Through repeated mathematical adjustments to its internal parameters, training enables image models like Midjourney to generalize their understanding of images and perform various tasks, including image recognition, object detection, and even artistic style transfer.
Let's take the example of training Midjourney to recognize different species of flowers. The training process would begin with thousands to millions of labeled images containing various flowers, each image accompanied by a description. The images could include different angles, lighting conditions, and backgrounds to ensure the model learns to recognize the flowers in diverse settings.
As the training progresses over multiple iterations, the model fine-tunes its internal representations, learning to distinguish subtle differences between flower species and capture essential characteristics that define each type. By repeatedly exposing the model to diverse examples and adjusting its parameters based on the observed errors, Midjourney becomes increasingly proficient in recognizing and classifying different species of flowers accurately.
Once the training is complete, the trained image model can be used to classify new, unseen, stylized, or morphed images of flowers by leveraging its learned knowledge and generalization abilities. In other words, the model draws on patterns learned from hundreds of thousands of images of flowers and other subjects it was fed.
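For teachers or students who want to see the idea of training in miniature, here is a toy sketch in Python. A single-neuron classifier "learns" to separate two hypothetical flower species from just two numbers (petal length and width). All data and species names are invented for illustration; real image models like Midjourney learn from millions of labeled pictures and billions of parameters, but the core loop is the same: show examples, measure errors, adjust parameters.

```python
# Toy sketch of "training": a single-neuron classifier learns to separate
# two hypothetical flower species from (petal length, petal width) pairs.
# The data below is invented for illustration only.

# Hypothetical labeled training set: features -> label (0 = daisy, 1 = rose)
training_data = [
    ((1.0, 0.5), 0),
    ((1.2, 0.4), 0),
    ((1.1, 0.6), 0),
    ((4.0, 1.5), 1),
    ((4.5, 1.4), 1),
    ((4.2, 1.6), 1),
]

# The model's parameters start out uninformed.
weights = [0.0, 0.0]
bias = 0.0

def predict(features):
    """Classify: weighted sum of features, thresholded at zero."""
    score = weights[0] * features[0] + weights[1] * features[1] + bias
    return 1 if score > 0 else 0

# Training: repeatedly show examples and nudge parameters on each error.
for epoch in range(20):
    for features, label in training_data:
        error = label - predict(features)  # -1, 0, or +1
        weights[0] += 0.1 * error * features[0]
        weights[1] += 0.1 * error * features[1]
        bias += 0.1 * error

# After training, the model generalizes to an unseen measurement.
print(predict((4.3, 1.5)))  # rose-like input -> prints 1
```

Notice that the model never "understands" flowers; it only adjusts numbers until its answers match the labels it was given. That is also why the labels themselves matter so much, as the next section explores.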
Where do you think Midjourney and other AI-image tools find their initial images?
What limitations are there to generating art? What problems may arise?
The training data used to train image generation models may contain biases and reflect societal stereotypes. If the training dataset predominantly consists of biased or unrepresentative images, the model may learn and perpetuate those biases when generating new images. Dr. Meredith Broussard, data journalist and author of More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech writes,
“Tech is racist and sexist and ableist because the world is so. Computers just reflect the existing reality and suggest that things will stay the same - they predict the status quo. By adopting a more critical view of technology, and by being choosier about the tech we allow into our lives and our society, we can employ technology to stop reproducing the world as it is, and get us closer to a world that is truly more just.”
What does this quote mean to you? What do you notice and wonder about AI image creation and potential problems that may occur?
The choices made by the programmers of image generation models can also contribute to issues of stereotyping. The input and instructions provided to the model, as well as the filtering or selection of generated images, can introduce or amplify biases if not approached with care and awareness. According to Career Explorer, 72% of programmers are male and 49% are white, while only 8% identify as Hispanic and 6% as Black. Because of this discrepancy and lack of diverse representation, AI image generation has been criticized for perpetuating stereotypes.
Biases are a fact of life. Categorizing people is how humans organize our world and process our incredibly complex society. Everyone holds biases in some way. That said, biases are also how we can quickly create stereotypes leading to prejudice and discrimination. Counteracting these biases requires knowledge of how stereotypes are perpetuated, recognizing when they exist, and changing the narrative. AI, because it is fed images from the real world and lacks a critical understanding of bias, tends to reinforce dominant, potentially harmful stereotypes.
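To make concrete how a skewed dataset produces skewed outputs, here is a deliberately simplified Python sketch. The "model" just memorizes how often each caption appears and returns the most frequent match. The captions and their 90/10 split are invented for illustration, though they mirror the kind of imbalance researchers have documented in real web-scraped datasets.

```python
from collections import Counter

# Hypothetical toy "training set" of captions. The skew is invented
# for illustration: 90 of 100 doctor images are captioned as male.
captions = (
    ["a photo of a male doctor"] * 90
    + ["a photo of a female doctor"] * 10
)

# "Training" here is just counting how often each caption appears.
counts = Counter(captions)

def generate(prompt):
    """Return the most frequent training caption matching the prompt."""
    matches = {c: n for c, n in counts.items() if prompt in c}
    return max(matches, key=matches.get)

# The model reproduces the majority pattern every single time:
print(generate("doctor"))  # prints "a photo of a male doctor"
```

Real image generators are vastly more sophisticated, but the principle holds: a model with no critical understanding of bias will faithfully reproduce whatever pattern dominates its training data.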
As an example, let’s examine how DALL-E generates images that could reflect various biases. (Note that Midjourney can be used as well, but it requires access to Discord, which is often banned on school networks.)
Choose at least three prompts, and consider combining one or more prompts from the same column. Then, discuss the following:
Compare with a partner or small group. What were their findings? How were they similar to or different from your own?
Why do many AI ethicists claim that image generation software has the capacity to cause harm? Do you agree or disagree? Why?
Does this activity make you think any differently about AI-generated art?
Consider what steps could be taken to keep AI from generating stereotypes. What is at least one potential step that could lessen this problem?