
Can I use AI Image Generators in my organization?

Author: Joren Verspeurt

AI-generated images have been used for everything from winning art competitions to protecting the identity of anti-government protesters, but are they usable in a business context? And what about the backlash these models have drawn from artists and other content creators? If you've thought about using AI-generated images in your products and services, or even in your marketing campaigns, this post is for you. We will explore the legal and ethical considerations you may want to take a closer look at when integrating image generators into your workflow, and provide practical advice on how to do so responsibly. We'll start with some general recommendations, move on to questions of intellectual property and copyright, and end with some advice for those of you who are interested in training or finetuning your own models.

Real-world impact: what's at stake?

Currently, the legality of using generated images is in a bit of a gray zone. Despite recent clarifications from the US Copyright Office, among others, the question of copyright infringement for both providers and users of image generation models doesn't seem to be settled yet. Several important legal and intellectual property issues currently depend on the outcome of high-profile lawsuits like the ones against Stability AI.

Beyond the purely legal aspects, there are also ethical considerations. Not only have popular image generators been shown to reinforce harmful biases and stereotypes in some cases, but they have also received backlash from artists whose work was used to train them without permission.

Clearly, organizations that use AI-generated images without caution may be exposing themselves to future risk, both financial and reputational.

How should I use image generators responsibly?

Diving into the vibrant world of AI Image Generators feels a bit like unlocking a treasure chest of endless creativity. But amidst the allure of effortlessly engineered visuals lies a labyrinth of responsibility. Like with any powerful tool, it's not just about what you can do, but what you should do. This section isn't here to curb your enthusiasm, but rather to guide you through the nuances of using these tools thoughtfully and ethically.

Consider which model to use

There's a growing number of options when it comes to image generation tools. You could go with a SaaS provider that trained a model in-house, you could host an open-source model yourself, … Regardless of the workflow, it's important to understand a model's origins. What kind of data was it trained on? Was the dataset diverse and representative?

Also consider the way the model is made available to you. Not every license will be compatible with every use case. Even models published under "open source" licenses may have restrictions that could make them unsuitable for your organization.

Consider copyright and intellectual property

Even if the license terms of the generation service grant you full rights over the images you generate, that isn't enough for the images to be considered your intellectual property. As we will explore in more detail later in this post, the images you generate aren't considered copyrightable as-is. In fact, you may be infringing on the copyright of an artist whose work was used to train the image generation model. 

On the one hand, such a copyright infringement claim is hard to argue for most individual generated images. It's generally not possible to trace which training images were decisive in setting the weights that contributed most to the image you just generated, despite efforts by some to approximate such "inspiration lists". On the other hand, it has been shown that these models sometimes recreate exact copies of elements from other images. Due to this and other factors, it may be best to treat generated images as a base to build on with elements of human creativity, or simply as inspiration.

Consider the impact on society

Regardless of whether it's legal to use an AI-generated image for commercial purposes, you may be understandably sensitive to the backlash these models and people using them have gotten from artists whose work has been used as training data. Initiatives exist to reward artists for their contributions and improve the fairness of certain image generation services, but we're still quite far from a universal system that's acceptable to everyone.

There is also legitimate concern about bias when it comes to using these models. For example, if most of the training images whose label includes the term "CEO" are of white middle-aged men, then asking the trained model for images of "a CEO" will usually produce images of white middle-aged men as well. Without special care taken to avoid harmful bias, these models tend to faithfully reproduce the biases present in their training data.

Use generators collaboratively

Perhaps the best way of using image generators is to use them in the way we recommend using most AI solutions: as a copilot, not in the driver's seat. For instance, use AI to draft the initial design and then have a human artist refine, modify, and finalize it. Use AI-generated images as a way to communicate concepts or ideas to designers. They can serve as a reference or starting point, making the collaboration smoother. For businesses looking to pitch ideas or present mockups, AI-generated images can be invaluable. They can help visualize concepts rapidly without committing significant resources. The final version can then be made with a human touch after the idea has been accepted. Another use may be in internal workshops or documentation. The possibilities are endless.

Actions to take

  • Review terms of service: Before using any AI image generator, read and understand the provider's terms of service related to image ownership.
  • Add a human touch: Add a layer of human creativity to the AI-generated images. This not only enhances the originality and uniqueness of the image, but also strengthens any copyright claim.
  • Document your process: Keep a record of how the AI was used in the creation process. Back up the original images. Keep track of alterations. This can be helpful in proving human involvement or creativity. Tools exist to help with this, including open source software from the Content Authenticity Initiative; a minimal sketch of such a record follows this list.
  • Look into artist compensation: If using an AI tool that was trained on artworks or images from specific artists and they weren't compensated already as part of the training process, consider a compensation model. It's a way of recognizing and rewarding the creative foundation upon which the AI model was built.
  • Be transparent: If using AI-generated images for commercial purposes, consider informing stakeholders or the audience that the images were generated by AI. This promotes transparency and trust. It will also be required under the new EU AI Act.
  • Seek legal advice: Especially if you plan to commercialize the images, consult with a legal expert familiar with copyright law in the digital age.
  • Stay updated: Copyright laws and regulations related to AI are evolving. Regularly review updates to ensure ongoing compliance.
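
As a minimal illustration of the documentation point above, the sketch below writes a simple provenance record as a JSON sidecar file next to each generated image. The field names and example values are hypothetical, not a standard schema; the Content Authenticity Initiative's open source tooling offers a more robust, cryptographically signed alternative.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def write_generation_record(image_path: str, prompt: str, model: str, seed: int) -> Path:
    """Write a simple provenance record next to a generated image (hypothetical schema)."""
    image_file = Path(image_path)
    record = {
        "prompt": prompt,
        "model": model,
        "seed": seed,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        # Hash of the raw output, so later human edits can be told apart from the original.
        "sha256": hashlib.sha256(image_file.read_bytes()).hexdigest(),
        "edits": [],  # append a short note for every human alteration
    }
    sidecar = image_file.with_name(image_file.name + ".provenance.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar


# Example call (hypothetical file and model name):
# write_generation_record("hero_banner.png", "a lighthouse at dawn, watercolor", "sdxl-1.0", seed=42)
```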

Who owns the images I create?

Traditionally, copyright laws were designed to protect human creators. However, with AI entering the fray, things have gotten a bit murkier. If a human uses a tool to create something, typically the human owns the copyright. But what happens when the tool, in this case an AI image generator, contributes significantly to the creation process?

Ownership according to AI providers

Most AI image generator providers include terms of service that specify the rights users have over generated images. For instance, some may grant users full rights to images, while others might retain certain rights or require attribution. As an example, OpenAI's Terms of Use state that users own their prompts and uploads, and are assigned all rights in any images DALL-E 2 generates for them. It's crucial to thoroughly review the terms of the tools you use (or have them reviewed by a legal advisor) before using the images commercially.

Do note that even if you're granted rights to an image generated by an AI, there might still be concerns stemming from the training data used by the AI. If an AI was trained on copyrighted images without proper licenses, there might be legal issues with the generated images, even if they appear original.

Legal precedents

While legislation is still catching up with technology, there have been several cases worldwide that can provide guidance. An important recent case was that of the graphic novel "Zarya of the Dawn", where all the images, featuring a main character with a remarkable resemblance to actress Zendaya, were AI-generated. In the end, the US Copyright Office ruled that the graphic novel in its entirety was copyrightable by the author, but not the individual AI-generated images.

Two other important cases are the Getty Images vs Stability AI case on the one hand, and Sarah Andersen and others vs Stability AI, Midjourney, and DeviantArt on the other. In the first, Getty alleges that Stability committed copyright and trademark infringement by using 12 million of their stock images to train the Stable Diffusion model without permission or compensation. Getty Images stock photos are an interesting data source because their catalog includes rich metadata about the contents of each image. Getty has licensed its images for AI model training in the past, but no such deal was made with Stability. It may be possible to argue that using the copyrighted images to train a model constitutes fair use, because the use is transformative and the images are not used for their original purpose. However, another requirement for that defense is that the market (most importantly here, the demand) for the originals is not affected, which is a stretch in this case.

The case brought by illustrator Sarah Andersen and two colleagues is relevant for a different reason: the plaintiffs argue not only that their copyrighted works were included in the training data for Stable Diffusion and other models without permission, but also that Stable Diffusion can be used to produce works that are substantially similar to their originals. The reason this is possible, according to them, is that the model contains "latent images" equivalent to their original works, and that when a user enters a description, some interpolation of these latent images is turned into the final result. Unfortunately for them, as Stability has been able to argue, this is a misinterpretation of how Stable Diffusion actually works, leading the judge to be inclined to dismiss the case.

What if I train my own image generator?

Even if you're in the enviable position where you can train or finetune your own image generator model, with a set of images that you fully own or have an appropriate license for, there are still some precautions to take. 

Watch out for leaks

Recent studies have shown that diffusion models typically memorize some of their training data, despite initial claims by some researchers that their published models are robust to this kind of overfitting. In some cases, memorized images can be extracted, reconstructed, or identified with only black-box access to the model (meaning a way to use it for text-to-image generation, without access to the weights). This can be mitigated during training, but if such attacks could expose intellectual property or other confidential training data, it's best to implement additional measures to prevent this kind of leak.
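
One coarse way to screen for this, assuming you have the training set at hand, is to compare a batch of generated samples against the training images with perceptual hashes and flag near-duplicates for manual review. The sketch below uses the open source ImageHash library; the directory names and distance threshold are assumptions to adjust for your own data, and this is a sanity check rather than a defense against the more sophisticated extraction attacks described in the literature.

```python
from pathlib import Path

import imagehash  # pip install ImageHash
from PIL import Image

# Hypothetical locations; point these at your own training set and generated samples.
TRAIN_DIR = Path("training_images")
GENERATED_DIR = Path("generated_samples")
MAX_HAMMING_DISTANCE = 5  # assumed threshold; tune it on known duplicates first

# Index the training set by perceptual hash.
train_hashes = {imagehash.phash(Image.open(p)): p for p in TRAIN_DIR.glob("*.png")}

# Flag generated images that are perceptually close to any training image.
for gen_path in GENERATED_DIR.glob("*.png"):
    gen_hash = imagehash.phash(Image.open(gen_path))
    for train_hash, train_path in train_hashes.items():
        if gen_hash - train_hash <= MAX_HAMMING_DISTANCE:
            print(f"Possible memorization: {gen_path} resembles {train_path}")
```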

Watch out for bias

As mentioned above, many of the image generation models available today exhibit some form of harmful bias. If this is a concern in the domain where you'd like to use image generation, you may want to account for it, either during training or at generation time. Assembling a diverse training dataset takes effort, but it's worth it both to avoid harmful bias and to improve performance in general. Various techniques can also be used to balance out datasets: a straightforward one is to increase the sampling weight of images in underrepresented classes, but more sophisticated techniques exist as well. Finally, it may be worth checking the prompts that are used to generate images, to make sure they don't produce harmful results.
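
To make the reweighting idea concrete, here is a minimal sketch using PyTorch's WeightedRandomSampler with inverse-frequency weights. The labels are a made-up example, not a real dataset; in practice you need attribute annotations for your own images, and resampling alone won't remove every form of harmful bias.

```python
from collections import Counter

import torch
from torch.utils.data import WeightedRandomSampler

# Hypothetical labels: one coarse attribute per training image.
# A real dataset would need proper annotations for the attributes you care about.
labels = ["group_a"] * 900 + ["group_b"] * 100

# Inverse-frequency weights: images from underrepresented groups are drawn
# more often, so each group contributes roughly equally per epoch.
counts = Counter(labels)
weights = torch.tensor([1.0 / counts[label] for label in labels], dtype=torch.double)

sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
# Pass `sampler=sampler` to your DataLoader instead of `shuffle=True`.
```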

Wrapping up

As we journey through the ever-evolving realm of AI image generators, it's clear that we're not just looking at a fleeting trend but a transformative force in the digital landscape. There are opportunities out there to grasp, as long as that's done in an informed way, backed by strong ethics and up-to-date knowledge.

We hope you learned something you can apply in your own organization, and don't hesitate to reach out if you have any follow-up questions about navigating the AI solution landscape.

About The Author

Joren Verspeurt

Joren is a Machine Learning Engineer and Security Officer at Radix. He brings experience as a Data Scientist and Software Engineer for companies working in domains as varied as video streaming, life sciences, and cloud telephony. What drives him is finding ways to improve people's interactions with technology by using AI and Machine Learning techniques.
