A picture may be worth a thousand words, but thanks to an artificial intelligence program called DALL-E 2, you can have a professional-looking image with far fewer.

DALL-E 2 is a new neural network algorithm that creates a picture from a short phrase or sentence that you provide. The program, which was announced by the artificial intelligence research laboratory OpenAI in April 2022, hasn't been released to the public. But a small and growing number of people, myself included, have been given access to experiment with it.

As a researcher studying the nexus of technology and art, I was keen to see how well the program worked. After hours of experimentation, it's clear that DALL-E, while not without shortcomings, is leaps and bounds ahead of existing image generation technology. It raises immediate questions about how these technologies will change how art is made and consumed. It also raises questions about what it means to be creative when DALL-E 2 seems to automate so much of the creative process itself.

OpenAI researchers built DALL-E 2 from an enormous collection of images with captions. They gathered some of the images online and licensed others.

It's staggering that an algorithm can do this. Each set of images takes less than a minute to generate. Not all of the images will look pleasing to the eye, nor do they necessarily reflect what you had in mind. But, even with the need to sift through many outputs or try different text prompts, there's no other existing way to pump out so many great results so quickly – not even by hiring an artist. And, sometimes, the unexpected results are the best.

In principle, anyone with enough resources and expertise can make a system like this. Google Research recently announced an impressive, similar text-to-image system, and one startup, HuggingFace, is publicly developing their own version that anyone can try right now on the web, although it's not yet as good as DALL-E or Google's system.