Since the introduction of DALL-E 2 and ChatGPT, there has been a fair amount of hand-wringing about AI technology—some of it justified.
It’s true that the technology’s future is unclear. There is great debate about the ethics of using existing artwork, images, and content to train these AI products, and concern about which industries the technology will displace or change. And it seems as if an AI arms race between companies like Microsoft and Google is already underway.
And yet as an industrial designer and professor, I’ve found AI image-generation programs to be a fantastic way to improve the design process.
They don’t replace the valuable insights and critical thinking skills I’ve accumulated from years of experience. But they do spark creativity and expand the range of what’s possible with the products my students and I design.
A peek behind the design curtain
Industrial design involves creating everyday objects, with a particular focus on their form and function. Industrial designers have a hand in anything from furniture and consumer electronics to accessories and apparel.
A typical design process involves lots of research and talking to consumers about their needs. From there, designers brainstorm ideas and sketch them out, followed by the prototyping and fabrication stage. Finally, the objects get refined and manufactured.
During the early stages of brainstorming, designers spend a lot of time with their sketchbooks, getting inspired by their immediate environment, by history books, and by their own experiences. The Internet also plays a big role—it’s where designers collect many of the images they use to create inspiration boards. Famously, Jonathan Ive, who designed many iconic Apple products, looked at luxury watches as inspiration for the Apple Watch, using the “crown”—normally used to wind a mechanical watch and set the time—as an input device to allow users to scroll through content.
AI has given designers like me the ability to generate images from nothing more than a simple text prompt. Tools like DALL-E or Midjourney let us input abstract concepts and turn them into a flood of images.
Enter any sentence—no matter how crazy—and you’ll receive a set of unique images generated just for you. Want to design a teapot? Here, have 1,000 of them. Some may have a dinosaur shape; others may be made of mashed potatoes.
While only a small subset of them may be usable as actual teapots, they provide a seed of inspiration that the designer can nurture and refine into a finished product.
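For readers who would rather script this kind of brainstorming than click through a web interface, the request is simple to automate. Below is a minimal sketch, assuming the OpenAI Python SDK and an API key stored in the environment; the model name, prompt wording, and number of variations are only illustrative, not a prescription.

```python
# A minimal sketch of an image-generation brainstorming request, assuming the
# OpenAI Python SDK ("pip install openai") and an OPENAI_API_KEY environment
# variable. Model, prompt, and count are illustrative choices.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One abstract concept, many candidate forms: ask for several variations and
# collect the image URLs for an inspiration board.
prompt = "a teapot shaped like a dinosaur, studio product photograph"

response = client.images.generate(
    model="dall-e-2",   # this model accepts multiple images per request
    prompt=prompt,
    n=4,                # number of variations to return
    size="1024x1024",
)

for i, image in enumerate(response.data, start=1):
    print(f"Variation {i}: {image.url}")
```

Swapping in a new prompt and rerunning the loop is all it takes to fill a board with candidate forms to react to.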
From nostalgia to a tissue box
Perhaps a handful of those 1,000 teapot images allow a designer to conceive of a new, unexpected shape that is easier to hold, more economical to manufacture, or more beautiful to look at. Generative AI can facilitate the brainstorming process, but it’s still the designer’s responsibility to make the choices that ultimately lead to products that enrich people’s lives.
Recently, I have found myself using AI image generators like DALL-E and Midjourney to explore complex ideas that might be difficult or time-consuming to articulate and channel into a physical product. For example, for one project, I wanted to create objects that really connected people, in a deep way, to a place they had visited or lived in—as opposed to the refrigerator magnet souvenirs that tourists often end up buying.
So I decided to design a set of small household objects to be sold to tourists visiting the small colonial town of Antigua, Guatemala, just a few miles from where I grew up. I wanted the objects to elicit nostalgia about the town—and perhaps inspire those who purchased them to want to return.
I began by prompting DALL-E for tabletop objects that were nostalgic. The results were hilarious and unexpected. I received images of objects that looked sad, like erasers and a tissue box with a frown. It had taken my prompt very literally.