Stable Diffusion: Training Your Own Model in 3 Simple Steps (2024)

What Is Stable Diffusion?

Stable Diffusion is an open source machine learning model designed for generating high-quality images from textual descriptions. It uses a latent diffusion approach that combines a variational autoencoder with a diffusion model, enabling it to transform text prompts into intricate visual representations.

The development of Stable Diffusion represents a significant step forward in the field of generative AI, offering creatives, designers, and developers a free and open tool for image creation. By inputting simple text prompts, users can produce images ranging from realistic photographs to artworks in various styles.

Advanced users of Stable Diffusion might want to train their own, fine-tuned version of the model for specific use cases. Below, we provide a hands-on tutorial for achieving this with open source, no-code tools.

In this article:

  • Stable Diffusion Architecture and Concepts
  • What Can You Do with the Base Stable Diffusion Model?
  • Why Train Your Own Model?
  • Tutorial: Train Your Own Stable Diffusion Model Locally
  • Optimizing Your AI Infrastructure with Run:ai

Stable Diffusion Architecture and Concepts

The Stable Diffusion architecture is illustrated in the following diagram by Hugging Face. Let’s briefly review the key components.

(Diagram: the Stable Diffusion architecture, via Hugging Face)

Variational Autoencoder

The Variational Autoencoder (VAE) within the Stable Diffusion architecture is used to learn the distribution of training images. It works by encoding input images into a lower-dimensional latent space, capturing their essential features. This encoding process enables the model to generate new images by sampling from the latent space, effectively learning to recreate the diversity and complexity of the input data. The VAE's efficiency in data representation and generation is crucial for the model's ability to produce high-quality, varied images from text descriptions.
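To make this concrete, here is a minimal sketch of the VAE's role using the Hugging Face diffusers library (the library and the CompVis/stable-diffusion-v1-4 checkpoint are assumptions chosen for illustration, not requirements of the architecture): an image is compressed into a small latent tensor and decoded back.

import torch
from diffusers import AutoencoderKL

# Load only the VAE component of a Stable Diffusion checkpoint (example repo, assumed available).
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
image = torch.randn(1, 3, 512, 512)  # stand-in for a preprocessed RGB image scaled to [-1, 1]
with torch.no_grad():
    # Encode: 512x512x3 pixels -> 4x64x64 latents (roughly a 48x compression).
    latents = vae.encode(image).latent_dist.sample() * vae.config.scaling_factor
    # Decode: map the latents back to pixel space.
    reconstruction = vae.decode(latents / vae.config.scaling_factor).sample
print(latents.shape)         # torch.Size([1, 4, 64, 64])
print(reconstruction.shape)  # torch.Size([1, 3, 512, 512])

Because diffusion then runs in this compact latent space rather than on full-resolution pixels, Stable Diffusion is far cheaper to train and run than pixel-space diffusion models.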

Forward Diffusion

The forward diffusion process in Stable Diffusion gradually introduces noise into an image, moving it from a state of order to disorder. This step-by-step degradation of the image's details simulates the transition from a coherent picture to a random noise pattern. By carefully controlling this process, the model learns to recognize and understand the underlying structures of images. This knowledge is essential for the reverse diffusion phase, where the model reconstructs images from noise based on textual cues, ensuring the generated images are both diverse and aligned with the input descriptions.
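As an illustration (a sketch using the diffusers library, an assumption of ours rather than something the article's tools require), the noise schedule can be applied to a latent tensor in a single call:

import torch
from diffusers import DDPMScheduler

# A DDPM noise schedule with 1,000 timesteps, as used when training Stable Diffusion.
scheduler = DDPMScheduler(num_train_timesteps=1000)
clean_latents = torch.randn(1, 4, 64, 64)  # stand-in for the VAE latents of a training image
noise = torch.randn_like(clean_latents)    # Gaussian noise to mix in
timestep = torch.tensor([750])             # a late timestep: mostly noise, little image left
# The scheduler blends image and noise according to the schedule at this timestep.
noisy_latents = scheduler.add_noise(clean_latents, noise, timestep)

During training, a random timestep is drawn for each example, so the model sees images at every level of corruption.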

Reverse Diffusion

In the reverse diffusion phase, Stable Diffusion performs the inverse of the forward process. Starting from random noise, it progressively removes noise to synthesize an image that matches the provided text prompt. This stage is critical as it utilizes the learned representations to guide the transformation of noise back into coherent visual content. Through a series of iterations, the model fine-tunes the details, adjusting colors, shapes, and textures to align with the description, effectively bringing the textual prompt to visual life.

Noise Predictor (U-Net)

The noise predictor, based on the U-Net architecture, is a core component of Stable Diffusion that estimates the amount of noise to remove at each step of the reverse diffusion process. It acts as the model's intuition, determining how to refine the noisy image towards the final, detailed output that matches the text prompt. The U-Net's ability to handle both global structures and fine details is key to producing high-quality images that faithfully reflect the desired content, style, and mood indicated by the user.
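The following sketch (our own illustration built on the diffusers library and the CompVis/stable-diffusion-v1-4 checkpoint; the Web UI used later in this tutorial performs all of this internally) spells out the reverse diffusion loop described above and shows where the U-Net noise predictor fits in:

import torch
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to(device)
# Embed an empty (unconditional) prompt and the real prompt for classifier-free guidance.
tokens = pipe.tokenizer(["", "a watercolor painting of a red fox"], padding="max_length",
                        max_length=pipe.tokenizer.model_max_length, return_tensors="pt")
with torch.no_grad():
    embeds = pipe.text_encoder(tokens.input_ids.to(device))[0]
pipe.scheduler.set_timesteps(25)                     # 25 denoising steps
latents = torch.randn(1, pipe.unet.config.in_channels, 64, 64, device=device)
latents = latents * pipe.scheduler.init_noise_sigma  # start from pure noise
for t in pipe.scheduler.timesteps:
    latent_in = pipe.scheduler.scale_model_input(torch.cat([latents] * 2), t)
    with torch.no_grad():
        # The U-Net predicts the noise present in the current latents.
        noise_pred = pipe.unet(latent_in, t, encoder_hidden_states=embeds).sample
    uncond, cond = noise_pred.chunk(2)
    noise_pred = uncond + 7.5 * (cond - uncond)      # classifier-free guidance
    # The scheduler removes a little of the predicted noise: one reverse-diffusion step.
    latents = pipe.scheduler.step(noise_pred, t, latents).prev_sample
with torch.no_grad():
    decoded = pipe.vae.decode(latents / pipe.vae.config.scaling_factor).sample
pipe.image_processor.postprocess(decoded)[0].save("fox.png")

In everyday use you would simply call pipe("a watercolor painting of a red fox"), which wraps this same loop.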

Text Conditioning

Text conditioning in Stable Diffusion involves embedding the text prompt into a format that the model can understand and use to guide image generation. This process ensures that the output images are not just random creations but are closely aligned with the themes, subjects, and styles described in the input text. By effectively translating textual descriptions into visual cues, the model can produce images that accurately reflect the user's intentions, from specific objects and scenes to abstract concepts and artistic styles.
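A minimal sketch of this step, assuming the Hugging Face transformers library and the openai/clip-vit-large-patch14 encoder that Stable Diffusion v1.x uses:

import torch
from transformers import CLIPTokenizer, CLIPTextModel

# Stable Diffusion v1.x uses OpenAI's CLIP ViT-L/14 as its text encoder.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
# Every prompt is padded or truncated to a fixed length of 77 tokens.
tokens = tokenizer("a watercolor painting of a red fox", padding="max_length",
                   max_length=77, truncation=True, return_tensors="pt")
with torch.no_grad():
    embeddings = text_encoder(tokens.input_ids).last_hidden_state
print(embeddings.shape)  # torch.Size([1, 77, 768]): one 768-dimensional vector per token
# These per-token embeddings condition the U-Net through cross-attention at every denoising step.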

What Can You Do with the Base Stable Diffusion Model?

The base models of Stable Diffusion, such as SDXL 1.0 or the newer Stable Diffusion 3, are versatile tools capable of generating a broad spectrum of images across various styles, from photorealistic to animated and digital art. These models, designed to convert text prompts into images, offer general-purpose capabilities, making them suitable for a wide range of creative and practical applications.

Users can leverage these models to produce diverse content without the need for specific training in image creation techniques. For example, the base models can create artwork, design elements, and visual concepts straight from textual descriptions, offering an accessible entry point for users to explore AI-generated imagery.

Despite their versatility, base models have limitations in specialized tasks. While they can generate images in a wide array of styles, achieving high fidelity in specific genres or styles, like classic anime, may require extensive prompting and fine-tuning.

Why Train Your Own Model?

Training your own model on top of a Stable Diffusion base model allows for specialization and refinement in the generation of images tailored to specific needs or styles. A common method for teaching specialized styles to Stable Diffusion is Dreambooth.

For instance, by training a base model like SD XL 1.0 with an additional dataset focused on a particular subject, such as wild animals, the resulting fine-tuned model gains an enhanced ability to generate images that align closely with the desired outcomes, producing more accurate and stylistically consistent images with minimal effort.

This fine-tuning process transforms a generalist base model into a specialist, capable of understanding and replicating specific visual styles or subjects with high fidelity. The creation of fine-tuned models, including the use of techniques like LoRA (Low-Rank Adaptation) and LyCORIS, further narrows the focus, allowing for the generation of images in highly specific styles.

For example, using LoRA or LyCORIS, you might inject a fictional character into the visuals, modify the clothing of characters, add specific elements to the background, or add objects like cars or buildings.

Jake Dahn, for instance, illustrated how to use LoRA to generate detailed self-portraits in various styles by fine-tuning the model with images of himself.

Tutorial: Train Your Own Stable Diffusion Model Locally

Requirements

This tutorial is primarily based on a setup tested with Windows 10, though the tools and software we're going to use are compatible across Linux, Windows, and Mac platforms.

A critical hardware requirement is a GPU with at least 6–7GB of VRAM. While this might not suffice for pure Dreambooth training, incorporating LoRA (Low-Rank Adaptation) makes it feasible.

On the software front, the requirements include:

  • Python 3.10: Ensure you select Add Python to PATH and Support for TCL/Tk during installation.
  • Git: Necessary for cloning the required repositories.

Step 1: Inference

Inference with Stable Diffusion involves generating new images based on the model's understanding. The setup utilizes the open source Stable Diffusion Web UI. To start, clone the repository using the command:

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git

After cloning, navigate to the stable-diffusion-webui directory and execute webui-user.bat (for Windows) or webui-user.sh (for Linux). This action launches a command window that performs initial setup tasks and eventually displays a message indicating the local URL (e.g., http://127.0.0.1:7860) where the web UI is accessible.

By visiting this URL, you can explore the inference capabilities of Stable Diffusion, inputting text prompts to generate images according to your specifications.

Step 2: Training

This part uses the open source tool kohya_ss, which includes an implementation of Dreambooth, and will allow you to personalize your model. Installation involves cloning the repository with the following command:

git clone https://github.com/bmaltais/kohya_ss.git

Within the kohya_ss directory, initiate the setup by running setup.bat (Windows) or setup.sh (Linux), which guides you through a series of configuration options, including the choice of PyTorch version and whether to utilize GPU resources. The setup script may prompt you to make selections regarding the uninstallation of previous files, distributed training, and optimization options, among others.

To begin training, execute the following from the same kohya_ss directory:

# On Windows
.\gui.bat --listen 127.0.0.1 --server_port 7860 --inbrowser

# On Linux
./gui.sh --listen 127.0.0.1 --server_port 7860 --inbrowser

This command starts the training web UI. In the Dreambooth LoRA tab, set up the training by specifying the instance prompt (a unique identifier for your model's focus), class prompt (the general category of your focus, such as animals or people), and the paths to your training images and destination directory.
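For example (hypothetical values; your prompts, paths, and subject will differ), to teach the model a specific dog you might configure:

Instance prompt: photo of sks dog
Class prompt: photo of a dog
Image folder: C:\training\img, containing a subfolder such as 100_sks dog with your photos, following the <repeats>_<instance> <class> folder-naming convention kohya_ss typically expects
Output folder: C:\training\model

Here, the rare token sks serves as a unique identifier that the model learns to associate with your subject.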

Once configured, start the training process, which can vary in duration based on hardware capabilities.

Note: If you want the UI to be accessible over the public internet, use the following command to start the GUI: ./gui.sh --share

Step 3: Perform Inference on Your Personalized Model

After training, integrate your custom model with the Stable Diffusion Web UI for inference. This involves moving the last.safetensors file from your training output to the stable-diffusion-webui/models/Lora directory.

Relaunch the inference web UI. Instead of clicking Generate right away, click the small button below it: Show/hide extra networks. This reveals a new section with a Lora tab, where you should see your personalized model.

Click on it, and a tag like <lora:last:1> will be added to your prompt, telling the system to apply your specialized model on top of the base model.
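For example (a hypothetical prompt, reusing the sks instance token from the training step above):

photo of sks dog sitting on a beach at sunset, detailed fur, soft light <lora:last:1>

The trailing :1 is the LoRA weight; lowering it (for example, <lora:last:0.7>) blends your fine-tuned style more subtly with the base model.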

That’s it! You have trained a customized Dreambooth model on top of Stable Diffusion.

Optimizing Your AI Infrastructure with Run:ai

Run:ai automates resource management and orchestration and reduces cost for the infrastructure used to train LLMs and other computationally intensive models. With Run:ai, you can automatically run as many compute intensive experiments as needed.

Here are some of the capabilities you gain when using Run:ai:

  • Advanced visibility—create an efficient pipeline of resource sharing by pooling GPU compute resources.
  • No more bottlenecks—you can set up guaranteed quotas of GPU resources, to avoid bottlenecks and optimize billing.
  • A higher level of control—Run:ai enables you to dynamically change resource allocation, ensuring each job gets the resources it needs at any given time.

Run:ai simplifies machine learning infrastructure pipelines, helping data scientists accelerate their productivity and the quality of their models.

Learn more about the Run:ai GPU virtualization platform.

FAQs

How to train your own Stable Diffusion model?

Steps to Train a Stable Diffusion Model
  1. Step 1: Model fine-tuning. With a domain-specific dataset in place, the model can be customized. ...
  2. Step 2: Evaluation. After training the Stable Diffusion model, evaluate its performance and validate its accuracy. ...
  3. Step 3: Deployment.

How many steps should I use for Stable Diffusion?

Some noise is removed with each step, resulting in a higher-quality image over time. The repetition stops when the desired number of steps completes. Around 25 sampling steps are usually enough to achieve high-quality images. Using more may produce a slightly different picture, but not necessarily better quality.

What is the Stable Diffusion training process?

The central idea of Stable Diffusion training is to add random noise to the image latents at randomly sampled timesteps t, following a noise schedule, to obtain noisy latents. The U-Net's job is to learn to predict the noise that was added.
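As a rough sketch of that idea (our own illustration built on the diffusers library and the CompVis/stable-diffusion-v1-4 components; real training scripts add optimizers, data loading, and many more details):

import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, UNet2DConditionModel, DDPMScheduler
from transformers import CLIPTextModel, CLIPTokenizer

repo = "CompVis/stable-diffusion-v1-4"  # assumed example checkpoint
vae = AutoencoderKL.from_pretrained(repo, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(repo, subfolder="unet")
tokenizer = CLIPTokenizer.from_pretrained(repo, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(repo, subfolder="text_encoder")
scheduler = DDPMScheduler.from_pretrained(repo, subfolder="scheduler")
pixel_values = torch.randn(1, 3, 512, 512)  # stand-in for a training image scaled to [-1, 1]
input_ids = tokenizer(["a photo of a dog"], padding="max_length",
                      max_length=tokenizer.model_max_length, return_tensors="pt").input_ids
with torch.no_grad():
    # Encode the image to latents and embed the caption (VAE and text encoder stay frozen).
    latents = vae.encode(pixel_values).latent_dist.sample() * vae.config.scaling_factor
    encoder_hidden_states = text_encoder(input_ids).last_hidden_state
# Pick a random timestep and add noise according to the schedule.
noise = torch.randn_like(latents)
timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (latents.shape[0],))
noisy_latents = scheduler.add_noise(latents, noise, timesteps)
# The U-Net predicts the noise; the loss compares the prediction with the actual noise.
noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states=encoder_hidden_states).sample
loss = F.mse_loss(noise_pred, noise)
loss.backward()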

How long does it take to train a Stable Diffusion model?

Training Stable Diffusion on your own images is a long process that can take up to several days, especially if you are training a complex neural network on a large dataset. Afterwards, check and evaluate the model by generating test images.

How many images do you need to train Stable Diffusion?

You'll want at least 20 pictures or so for the model to analyze, in order to avoid generating generic-looking people or distorted results. For a personal model, that can be as simple as taking a set of selfies with your phone.

How do I set up my own Stable Diffusion?

  1. How to Run Stable Diffusion Locally.
  2. Step 1: Install Python & Git.
  3. Step 2: Create a GitHub and Hugging Face account.
  4. Step 3: Clone Stable Diffusion Web-UI to your PC.
  5. Step 4: Download the Latest Stable Diffusion model.
  6. Step 5: Setup the Web-UI.
  7. Step 6: Run Stable Diffusion.

How does Stable Diffusion work for dummies?

To generate images, Stable Diffusion starts with random noise and repeatedly de-noises it according to the text prompt, until a coherent image emerges. The model uses techniques like attention to focus on the most relevant parts of the text prompt when generating each part of the image.

How to try Stable Diffusion 3?

To try Stable Diffusion 3 models, you can use the API on the Stability Platform, sign up for a free three-day trial of Stable Assistant, or use Stable Artisan via Discord.

Can you make NSFW with Stable Diffusion?

After proper training, these models can generate photo-realistic NSFW portraits. Setting ethical considerations aside, if you are interested in this kind of artwork, you can use online Stable Diffusion NSFW generators. There are many pre-trained Stable Diffusion models available on the internet.

Is a diffusion model hard to train?

Diffusion models are very data hungry. A small dataset, without data augmentation, is not enough to train one from scratch, and you should not expect realistic images from so little data for something this complicated.

How many GPU hours to train Stable Diffusion?

Baseline pre-training requires over 200,000 A100 GPU hours on billions of images [5]. This highlights the inherent challenges of pre-training due to its large-scale and computationally intensive nature.

Is there a better model than Stable Diffusion?

Midjourney's strength lies in its ability to produce highly artistic and nuanced images. It excels in creating visuals that are rich in detail and texture. On the other hand, Stable Diffusion's open-source nature makes it incredibly flexible. It allows users to modify, improve, and adapt the tool to what they need.

How to train a Stable Diffusion XL model?

A Comprehensive Guide to Training a Stable Diffusion XL LoRA: Optimal Settings, Dataset Building, Captioning and Model Evaluation
  1. Building your dataset.
  2. Mastering captioning.
  3. Using the right training configuration.
  4. Testing your model.

How is a diffusion model trained?

During training, the scheduler takes a model output - or a sample - from a specific point in the diffusion process and applies noise to the image according to a noise schedule and an update rule.

How are Stable Diffusion models made?

In principle, diffusion models use Gaussian noise to encode an image. Then, they use a noise predictor together with a reverse diffusion process to recreate the image. Apart from having the technical differences of a diffusion model, Stable Diffusion is unique in that it doesn't use the pixel space of the image.

How much did it cost to train Stable Diffusion?

The model was trained using 256 Nvidia A100 GPUs on Amazon Web Services for a total of 150,000 GPU-hours, at a cost of $600,000. SD3 was trained at a cost of around $10 million.
