
3 Strategies for Incorporating Text-to-Image AI Tools in the Classroom

Updated: Oct 2, 2023

The abundance of tools available today for creating images, diagrams, videos, and presentations seems endless.

But anyone who has tried even one or two of them can tell you that it is a process of experimentation and many drafts, sometimes even abandoning the tool before reaching the desired result.

Sometimes we also settle for what we got because we ran out of free trials and didn't want to pay.


When introducing these tools into the classroom, we should also take into account the development of our students' artificial intelligence literacy and the special characteristics of the generative text-to-image medium.

That is, to engage (at least) in these topics:

  1. Understanding the medium itself - how can the machine produce an image for us from a text (prompt)?

  2. Operating the medium correctly - how do we formulate a prompt suited to the application we have chosen?

  3. Limitations of the medium - how do you ensure copyright protection and fair use of works?

Now I will expand a little on three tips for integrating text-to-image artificial intelligence in the classroom.


To understand the first topic, I suggest showing the students an explanatory video like the following, referring to OpenAI's DALL-E application.


Another recommended activity is to write a prompt and enter it into two different applications, for example Stable Diffusion and DALL-E 2. If you enter the prompt: "Oil painting on canvas of a woman with a baby, inspired by the artist Elmer Borlongan"

In the DALL-E application, the images obtained are in the style of the artist, with the figures of the mother and the baby characterized by Asian facial features. The colors of the painting are warm, and the baby is wrapped in cloth. The textures of the painting convey a feeling of softness and warmth, but the facial features do not suggest joy in any of the four options the application gave us here.

Borlongan-inspired woman and baby prompt in DALL-E

However, when I entered the same prompt into the Stable Diffusion Online application, these three results were obtained:


A woman with a baby inspired by Borlongan in the Stable Diffusion Online application

A number of differences are evident in these images compared to the DALL-E 2 application. The woman's hair is drawn more realistically, but there is some distortion in the facial features. The brushstrokes are different in character, and there is much more care in the lighting of the painting, as if the artist dramatically illuminated the figures while working on the painting in the studio. The fingers of both figures are extremely distorted - a very strong hint that the drawing is not original, but was created using artificial intelligence.

The comparison between the machines can lead to a discussion about the data on which the models were trained.

It is possible that differences in the works on which the AI models were trained explain these differences, along with the different characteristics of the models themselves.


Looking at the artist's original painting, which appears on the Google Arts and Culture website, you can notice the background that provides the context of a period and additional information about the figures: the precise details of the clothing, the rocking chair, and the afternoon shadow cast on the wall behind the figures. You can also notice the facial expressions of the child and the father and imagine the family situation the painting tries to represent to us as viewers.

In the example above I used this painting as a source of inspiration, but gave only the title of the painting and the name of the artist. You can practice in class adding a detailed description of the background, characteristic details of the figures, the colors of the clothes, and different textures. The richer the description we request from the machine, and the more it uses the language of art, the closer the result will be to our request (a picture is worth a thousand words).

This is a visual literacy skill, but in reverse! We used to describe the artist's painting with (a thousand) words; today we need to describe a painting in words in the hope of getting the result we expected. And this is exactly part of artificial intelligence literacy: the ability to describe a desired image in words. It is very important to promote this skill in the classroom.
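For teachers who also teach a little programming, the idea of building a rich description from parts can even be shown as a tiny exercise. Here is a minimal sketch; the function name, field names, and example values are my own illustration, not the interface of any image application:

```python
# A minimal sketch of assembling a rich "language of art" prompt from parts.
# All names and example values here are illustrative only.

def build_art_prompt(subject, medium="", artist="", background="", lighting="", mood=""):
    """Join the recommended elements (medium, subject, inspiration,
    background, lighting, mood) into one comma-separated prompt,
    skipping any part that was left empty."""
    parts = [medium, subject]
    if artist:
        parts.append(f"inspired by the artist {artist}")
    parts += [background, lighting, mood]
    return ", ".join(p for p in parts if p)

prompt = build_art_prompt(
    subject="a woman with a baby in a rocking chair",
    medium="oil painting on canvas",
    artist="Elmer Borlongan",
    background="afternoon shadow on the wall behind the figures",
    mood="a feeling of softness and warmth",
)
print(prompt)
```

Students can compare the result of the short prompt (subject only) with the result of the full, layered one, and see for themselves how much the added "language of art" changes the image.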

I also argue that there is a gap between what artificial intelligence produces through this rapid process of mechanized reproduction and what the human artist creates. I will immediately explain why I think so.

The AI model primarily takes in a handful of distinct features (though we're responsible for providing these details) and then generates a replica for us. A classroom conversation about the noticeable distinctions between the creations of the machines and the artist can be enlightening. Such a discussion can highlight our understanding, insights, and an exploration of the benefits and challenges of coexisting with artificial intelligence.

Father and Son, by Elmer Borlongan

Regarding the second topic - operating the medium correctly - it is important to actually experience producing images within a classroom context, in a task aimed at achieving the goals of the lesson.

For example, in a lesson aimed at revealing social perceptions and perspectives on an event in the children's peer group (even one that happened yesterday during recess), the students can be asked to produce an image, using an application of your choice that does not require registration, describing the situation from their point of view. Conversation around the pictures can help express perceptions, feelings, and points of view directly, and give the educator new tools for creating a healthy conversation among the children.


Each of the applications that allow you to create an image from text has its own rules for formulating an appropriate prompt. A quick search on the net will yield dozens of explanatory videos and guides with suggestions for excellent formats. I suggest starting with clipdrop.co/stable-diffusion. Remember that first you need to know how to describe the image you want to create; such writing requires the ability to observe as well as to formulate the desired description in words.
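As a classroom aid, a draft prompt can be checked against a simple checklist of the elements mentioned above. The sketch below is my own suggestion for such an exercise; the checklist categories and keyword lists are illustrative, not rules taken from any application:

```python
# A rough sketch of a prompt "checklist" for classroom practice.
# The categories and keyword lists are my own illustrative suggestions.

CHECKLIST = {
    "medium": ["oil", "watercolor", "photo", "sketch", "painting", "drawing"],
    "lighting": ["light", "shadow", "illuminated", "sunlight", "dramatic"],
    "texture or color": ["warm", "soft", "rough", "vivid", "color"],
}

def missing_elements(prompt):
    """Return the checklist categories the prompt does not mention yet."""
    text = prompt.lower()
    return [category for category, words in CHECKLIST.items()
            if not any(word in text for word in words)]

print(missing_elements("a woman with a baby"))
print(missing_elements("oil painting, dramatic shadow, warm soft colors"))
```

A student whose draft comes back with missing categories knows exactly which part of the "language of art" to add before sending the prompt to the machine.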


Your students will really like this site (requires registration with an email): clipdrop.co/stable-doodle, because it also has sketch options for converting personal drawings into any style the creator chooses. Here I sketched two oranges surfing the waves, and the AI model converted them into a vivid picture...

two oranges surfing in the sea. doodle to image


The third topic delves into copyright concerns and the potential harm to individuals and to society at large (the topic of fake news can also be touched upon, though I'll address it in a separate article). Drawing on the example of the artist's painting influenced by classical art, I prompted two different machines to craft a new image. It's evident that this infringes on the copyrights of the artists involved, both contemporary creators and those of the past. Such matters are currently being debated in courts globally, and there's a significant likelihood that the approach to copyright will undergo a transformation. Link to an article relevant to the class discussion

Examining Spotify's approach suggests that copyright concerns regarding living artists can be addressed innovatively. Such solutions could significantly influence the realms of art, graphic design, and creativity, and it's valuable to bring these topics up in educational settings while they remain newsworthy. This could also reshape the way creative professionals operate; historically, these roles were viewed as the least susceptible to industrialization and technological advancement. The conversation can also touch upon the merits of human-made art and explore how artists can leverage this emerging medium to redefine artistic expression.


A wonderful recent example, which I encountered at the MoMA museum in Manhattan, New York about a month ago, belongs to the artist Refik Anadol, who took all the works in this huge and important museum and encoded their characteristics into a computer. All this enormous information was processed by the computer into an artistic language of dynamic performance.



I feel compelled to mention that my museum visit, particularly in light of the critique regarding the use of artists' works for machine training that infringes on copyrights, prompted me to ponder the innovative avenues artists could explore. If they adeptly harness the modern tools of this era, they can continue to produce aesthetically pleasing works that resonate with a broad audience, just as they always have. At the exhibition, numerous individuals were captivated by the artwork, unable to divert their gaze. Truly, it was an artistic journey in the era of artificial intelligence.


I invite you to log in to the site and join the conversation about this post.

In addition, remember that you can book an inspiring lecture on ChatGPT, technology, and innovation in education today.


Good luck 21st century teachers!
