OpenAI, the San Francisco-based company best known for its massive GPT-3 natural language model, announced on Wednesday that it is releasing a second version of its text-to-image AI model.
Like its predecessor, the new DALL-E 2 is a neural network that creates images based on natural language phrases fed in by the user. But while the original DALL-E’s images were low-resolution and conceptually basic, images generated by DALL-E 2 are five times more realistic and accurate, OpenAI researchers tell Fast Company. What’s more, the second DALL-E is actually a smaller neural network. (OpenAI declined to give DALL-E 2’s size in parameters.)
DALL-E 2 is also a multimodal neural network, meaning it can process both natural language and images. You can show the model two different images, for example, and ask it to create images that combine aspects of the source images in various ways.
And the creativity the system seems to display while doing it is, well, a little unsettling. During a demonstration Monday, DALL-E 2 was given two images: one that looked like street art, the other something like Art Deco. It quickly created a set of 20 or so images arranged in a grid, each different from its neighbor. The system combined varying visual aspects of the source images in a number of ways. In some it seemed to allow the dominant style in one source image to be fully expressed while suppressing the style of the other. Taken together, the new images had a design language distinct from that of the source images.
“It’s really fascinating watching these images being generated with math,” OpenAI algorithms researcher Prafulla Dhariwal says. “And it’s very beautiful.”
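The article doesn’t detail how DALL-E 2 blends two source images, but a common approach in image-generation systems is to map each image to an embedding vector and interpolate between the two embeddings before decoding. The sketch below shows spherical interpolation between two such vectors; the `embed_image` and `decode_embedding` helpers in the usage comment are hypothetical stand-ins for model components the article does not describe.

```python
import numpy as np

def slerp(v0: np.ndarray, v1: np.ndarray, t: float) -> np.ndarray:
    """Spherical linear interpolation between two embedding vectors.

    Blends v0 and v1 along an arc, which tends to respect the geometry
    of learned embedding spaces better than a straight linear mix.
    """
    dot = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1))
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    sin_theta = np.sin(theta)
    if sin_theta < 1e-6:  # vectors (anti)parallel: fall back to a linear mix
        return (1.0 - t) * v0 + t * v1
    return (np.sin((1.0 - t) * theta) * v0 + np.sin(t * theta) * v1) / sin_theta

# Hypothetical usage, assuming the model exposes an image encoder and a
# decoder (names invented for illustration):
#   emb_a = embed_image(street_art_image)
#   emb_b = embed_image(art_deco_image)
#   grid = [decode_embedding(slerp(emb_a, emb_b, t))
#           for t in np.linspace(0.0, 1.0, 20)]
```

Sweeping `t` from 0 to 1 would yield a grid like the one in the demonstration, with each image weighting the two source styles differently.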
OpenAI engineers took pains to explain the steps they’re taking to prevent the model from creating untoward or harmful images. They removed all images containing nudity, violence, or gore from the training data set, OpenAI researcher Mark Chen says. With those images removed, Chen says, it’s “exceedingly unlikely” that DALL-E will produce such content accidentally. Human beings at OpenAI will also be monitoring the images users create with DALL-E. “Adult, violent, or political content won’t be allowed on the platform,” Chen says.
OpenAI says it plans to gradually roll out access to the new model to groups of “trusted” users. “Eventually we hope to offer access to DALL-E 2 through an API [application programming interface],” Dhariwal says. Developers will then be able to build their own apps on top of the AI model.
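No such interface existed at the time of the announcement, so the sketch below is purely illustrative of what building on a text-to-image API might look like: the endpoint URL, field names, and response shape are all assumptions, not OpenAI’s actual API.

```python
import requests

# Hypothetical sketch of a text-to-image API call. The endpoint,
# parameters, and response shape are assumptions for illustration;
# OpenAI had not published a DALL-E 2 API at the time of writing.
API_URL = "https://api.example.com/v1/images/generations"  # placeholder endpoint
API_KEY = "sk-..."  # credential presumably issued to approved developers

def generate_images(prompt: str, n: int = 4) -> list[str]:
    """Request n candidate images for a natural-language prompt,
    returning a list of image URLs."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "n": n, "size": "1024x1024"},
        timeout=60,
    )
    resp.raise_for_status()
    return [item["url"] for item in resp.json()["data"]]

# e.g. generate_images("a teapot in the style of Art Deco street art")
```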
Looking at practical applications of the model, Dhariwal and Chen both envision DALL-E 2 being helpful for graphic designers, who might use the tool to open new creative avenues. And the developers who eventually access DALL-E 2 via the API will likely find novel applications for the technology.
Chen says DALL-E 2 could be an important tool because while creating language feels natural to human beings, creating imagery doesn’t come quite as easily.
But DALL-E 2 would be worth building even without any immediate practical application. As a multimodal AI, it has foundational research value that may benefit other AI systems for years to come.
“Vision and language are both key parts of human intelligence; building models like DALL-E 2 connects these two domains,” Dhariwal says. “It’s a very important step for us as we try to teach machines to perceive the world the way humans do, and then eventually develop general intelligence.”