Photo credit: OpenAI
Finally, DALL-E 2, OpenAI’s image-generating AI system, is available as an API, allowing developers to integrate the system into their apps, websites, and services. In a blog post today, OpenAI announced that any developer can harness the power of DALL-E 2 – which more than three million people now use to produce over four million images a day – as soon as they sign up for an OpenAI API account as part of the public beta.
DALL-E 2 API pricing varies by resolution. For 1024×1024 images, the cost is $0.02 per image; 512×512 images cost $0.018 per image; and 256×256 images are $0.016 per image. Volume discounts are available to companies working with the OpenAI corporate team.
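Those per-image prices make back-of-the-envelope budgeting straightforward. As a minimal sketch, the table and helper below are illustrative (the function name is not part of any OpenAI SDK); only the sizes and dollar figures come from the pricing quoted above:

```python
# List prices per generated image, by resolution, as quoted for the
# DALL-E 2 API. The estimate_cost helper is a hypothetical convenience
# function, not part of the OpenAI SDK.
PRICE_PER_IMAGE = {
    "1024x1024": 0.020,
    "512x512": 0.018,
    "256x256": 0.016,
}

def estimate_cost(size: str, n_images: int) -> float:
    """Return the list-price cost in USD for n_images at the given size."""
    if size not in PRICE_PER_IMAGE:
        raise ValueError(f"Unsupported size: {size}")
    return round(PRICE_PER_IMAGE[size] * n_images, 4)

# A day at DALL-E 2's reported scale: four million images at full resolution.
print(estimate_cost("1024x1024", 4_000_000))  # 80000.0
```

At the reported four million images a day, full-resolution generation would list at $80,000 daily, which is presumably where the volume discounts for corporate customers come in.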
As with the DALL-E 2 beta, the API allows users to generate new images from text prompts (e.g. “a fluffy rabbit hopping through a field of flowers”) or edit existing images. Microsoft, a close OpenAI partner, uses it in Bing and Microsoft Edge with its Image Creator tool, which allows users to create images when the web results don’t return what they’re looking for. Fashion design app CALA leverages the DALL-E 2 API for a tool that lets customers refine design ideas based on text descriptions or images, while photography startup Mixtiles turns it into an artwork creation flow for its users.
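In practice, the generation flow described above amounts to a single JSON request against OpenAI’s image-generation endpoint. The sketch below only builds the payload rather than sending it, so no API key is needed; the endpoint URL and the `prompt`, `n`, and `size` fields follow OpenAI’s documented Images API, while the helper itself is illustrative:

```python
# Sketch of assembling a DALL-E 2 text-to-image request. The payload is
# constructed but not sent; the field names mirror OpenAI's Images API.
import json

API_URL = "https://api.openai.com/v1/images/generations"

def build_generation_request(prompt: str, n: int = 1,
                             size: str = "1024x1024") -> dict:
    """Assemble the JSON body for a text-to-image generation call."""
    if size not in {"256x256", "512x512", "1024x1024"}:
        raise ValueError(f"Unsupported size: {size}")
    return {"prompt": prompt, "n": n, "size": size}

body = build_generation_request(
    "a fluffy rabbit hopping through a field of flowers")
print(json.dumps(body))
```

A real call would POST this body to the endpoint with an `Authorization: Bearer <API key>` header and receive URLs (or base64 data) for the generated images in response.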
Not much is changing policy-wise with the API launch, which should reassure those who fear that generative AI systems like DALL-E 2 are being released without adequate attention to the ethical and legal issues involved. As before, users are bound by OpenAI’s terms of service, which prohibit using DALL-E 2 to generate overtly violent, sexual, or hateful content. OpenAI also continues to block the uploading of images of people without their consent, or of images users do not hold the rights to, using a mix of automated and human monitoring systems to enforce this.
A small change is that images generated with the API do not have to contain a watermark. OpenAI introduced the watermark during the DALL-E 2 beta to indicate which images came from the system, but chose to make it optional with the launch of the API.
“We encourage developers to disclose that images are AI-generated, but do not require them to contain the DALL-E 2 signature,” Luke Miller, a product manager at OpenAI who oversees development of DALL-E 2, told TechCrunch via email.
OpenAI also uses prompt- and image-level filters with DALL-E 2, though some customers have complained that the filters are overzealous and imprecise. And the company has focused some of its research efforts on diversifying the types of images DALL-E 2 produces, to combat the biases that text-to-image AI systems are known to fall prey to (such as generating images mostly of white men when given prompts like “CEOs”).
But these moves have not appeased every critic. In August, Getty Images banned the uploading and sale of illustrations created with DALL-E 2 and other such tools after sites like Newgrounds, PurplePort and FurAffinity made similar decisions. Getty Images CEO Craig Peters told The Verge that the ban was due to concerns about “unaddressed legal issues” because the training datasets for systems like DALL-E 2 contained copyrighted images found on the internet.
Striking a middle ground, Getty Images’ rival Shutterstock recently announced it would use DALL-E 2 to generate content, while also creating a “contributor fund” to compensate creators when the company sells their work to train text-to-image AI systems. Shutterstock will also ban AI artwork uploaded by third parties, to minimize the risk of copyrighted works entering the platform.
Technologists Mat Dryhurst and Holly Herndon are leading an initiative called Source+, which aims to let people prohibit their work or likeness from being used for AI training purposes. But participation is voluntary, and OpenAI hasn’t said whether it will take part — or whether it will ever roll out a self-service tool that lets rightsholders opt their work out of training or content generation.
In an interview, Miller gave little detail about new mitigations, other than saying that OpenAI has improved its techniques for preventing the system from generating biased, toxic, or otherwise objectionable content. He described the open API beta as an “iterative” process that will involve working with “users and artists” over the next few months as OpenAI scales the infrastructure powering DALL-E 2.
If the DALL-E 2 beta is any indication, the API program will surely evolve over time. Initially, OpenAI disabled the ability to edit people’s faces with DALL-E 2, but later enabled the feature after making improvements to its security system.
“We’ve done a lot of work in this regard, on both the images you upload and the prompts you send, to align with our content policies, incorporating various mitigations that filter at both the prompt level and the image level to ensure compliance. So, for example, if someone uploaded an image that contained hate symbols or gory, very, very violent content, it would be rejected,” Miller said. “We’re constantly thinking about how we can improve the system.”
But while OpenAI seems intent on avoiding the controversy surrounding Stable Diffusion, the open-source DALL-E 2 equivalent that has been used to create porn, gore, and celebrity deepfakes, it is leaving it up to API users to decide exactly how and where to use its technology. Some, like Microsoft, will no doubt take a measured approach and slowly roll out DALL-E 2-based products to gather feedback. Others will plunge in headlong, confronting both the technology and the ethical dilemmas that come with it.
If one thing is for sure, it’s that there is pent-up demand for generative AI — consequences be damned. Even before the API was officially available, developers released workarounds to integrate DALL-E 2 into apps, services, websites, and even video games. With the launch of the public beta, powered by OpenAI’s considerable marketing muscle, synthetic images are poised to truly break into the mainstream.