Stable Diffusion Webgpu - ai tOOler
Stable Diffusion Webgpu

Generate images in your web browser using Stable Diffusion.


Starting price: Free

Tool Information

The Stable Diffusion WebGPU demo is a handy online tool that helps you generate images with ease using just your web browser.

This web application is built on the create-react-app framework and relies on JavaScript to function. To get started, make sure you’re using the latest version of Chrome and that JavaScript is enabled. You'll also need to turn on a couple of experimental features: "Experimental WebAssembly" and "Experimental WebAssembly JavaScript Promise Integration (JSPI)" in your browser settings.
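Before loading the app, it can be useful to confirm your browser actually exposes the required capabilities. The sketch below is a hypothetical capability check, not part of the demo itself; in particular, the assumption that JSPI surfaces as `WebAssembly.Suspending` when the flag is enabled reflects the current proposal's JavaScript API and may differ across Chrome versions.

```javascript
// Hypothetical capability check (not part of the demo itself).
// Assumption: JSPI, when enabled, exposes WebAssembly.Suspending.
function missingFeatures(env = globalThis) {
  const missing = [];
  if (!env.navigator?.gpu) missing.push("WebGPU");
  if (typeof env.WebAssembly === "undefined") {
    missing.push("WebAssembly");
  } else if (typeof env.WebAssembly.Suspending !== "function") {
    missing.push("WebAssembly JSPI");
  }
  return missing;
}

console.log(missingFeatures()); // e.g. ["WebGPU", "WebAssembly JSPI"] if the flags are off
```

If the returned list is non-empty, the names tell you which browser flags or upgrades are still needed.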

Once you're all set up, the tool generates images through a series of inference steps. Each step takes around a minute, plus about 10 extra seconds for the VAE decoder to produce the final image. Just a heads-up: keeping DevTools open while the application runs can slow it down by roughly a factor of two.
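Those timings add up quickly for a full run. A back-of-the-envelope helper (my own sketch, not part of the tool) based on the figures above:

```javascript
// Rough wall-clock estimate from the stated figures:
// ~60 s per inference step, +10 s for the VAE decode,
// and about 2x slower with DevTools open.
function estimatedSeconds(steps, devToolsOpen = false) {
  const base = steps * 60 + 10;
  return devToolsOpen ? base * 2 : base;
}

console.log(estimatedSeconds(3));  // 190 s for a quick 3-step test run
console.log(estimatedSeconds(20)); // 1210 s (~20 min) for the recommended 20 steps
```

So a quick 3-step trial finishes in a few minutes, while a quality 20-step run is closer to twenty.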

The UNET model, which does the heavy lifting of creating your images, operates on the CPU. This choice is made for better performance and more accurate results compared to running it on the GPU. It’s recommended to go through at least 20 steps for decent results, but if you’re just trying it out, 3 steps will do the job.
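Conceptually, each run repeats the UNET step and then decodes once at the end. The schematic below is my own simplification: the dummy `unet` and `vaeDecode` callbacks stand in for the real ONNX sessions, and the update rule is a toy stand-in for the actual noise scheduler.

```javascript
// Schematic only: the unet/vaeDecode callbacks stand in for the real
// ONNX sessions, and the update rule is a toy stand-in for the scheduler.
function generate(steps, unet, vaeDecode, latents) {
  for (let t = steps; t > 0; t--) {
    const noisePred = unet(latents, t); // UNET noise prediction, run on the CPU
    latents = latents.map((x, i) => x - noisePred[i] / steps); // simplified denoise step
  }
  return vaeDecode(latents); // the final ~10 s decode from latents to pixels
}
```

The structure makes the timing intuitive: total cost is essentially `steps` UNET evaluations plus one VAE decode, which is why the step count dominates the runtime.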

This application features a user-friendly interface that lets you easily load the model, kick off the image generation, and view the results. If you run into any hiccups while using it, don’t worry; there’s an FAQ section with troubleshooting tips to help you out.

Keep in mind that the WebGPU implementation in onnxruntime is still at an early stage. Some operations are incomplete, so data has to be transferred back and forth between the CPU and GPU, which hurts performance. Multi-threading isn't supported yet, and limitations in WebAssembly mean you can't create 64-bit memory backed by a SharedArrayBuffer.

The good news is that the developer is aware of these issues and is actively working on solutions through proposed changes and patches. If you're interested in experimenting with the tool on your own, the source code is available on GitHub. There’s also a patched version of onnxruntime that allows you to work with large language models through transformers.js, although its reliability may vary depending on the situation. Plus, the developer plans to submit a pull request to the onnxruntime repository to help improve it further.

Pros and Cons

Pros

  • Runs entirely in the browser, with an option to run locally
  • Model files are cached, so no repeated downloads
  • User-friendly interface for loading the model, running generation, and viewing results
  • UNET runs on the CPU for better performance and more accurate results
  • FAQ section with troubleshooting tips
  • Patched version of onnxruntime provided; works with large language models via transformers.js
  • Open-source code available on GitHub
  • Built on the create-react-app framework
  • Developer actively addressing WebAssembly, multi-threading, and 64-bit memory limitations

Cons

  • Works only in the latest Chrome, with JavaScript enabled
  • Requires the 'Experimental WebAssembly' and 'Experimental WebAssembly JavaScript Promise Integration (JSPI)' flags
  • Slow inference: about a minute per step, plus 10 seconds for the VAE decoder
  • At least 20 steps recommended for good results
  • Roughly twice as slow with DevTools open
  • UNET runs only on the CPU
  • WebGPU support in onnxruntime is incomplete, forcing CPU-GPU data transfers
  • No multi-threading support

Reviews

No reviews yet.