Stable Diffusion Configuration Issues
Q7nl1s admin

Running the webui-user.bat file directly reports an error:

Cloning Taming Transformers into E:\DeepLearning\stable-diffusion-webui\repositories\taming-transformers...
Traceback (most recent call last):
  File "E:\DeepLearning\stable-diffusion-webui\launch.py", line 38, in <module>
    main()
  File "E:\DeepLearning\stable-diffusion-webui\launch.py", line 29, in main
    prepare_environment()
  File "E:\DeepLearning\stable-diffusion-webui\modules\launch_utils.py", line 289, in prepare_environment
    git_clone(taming_transformers_repo, repo_dir('taming-transformers'), "Taming Transformers", taming_transformers_commit_hash)
  File "E:\DeepLearning\stable-diffusion-webui\modules\launch_utils.py", line 147, in git_clone
    run(f'"{git}" clone "{url}" "{dir}"', f"Cloning {name} into {dir}...", f"Couldn't clone {name}")
  File "E:\DeepLearning\stable-diffusion-webui\modules\launch_utils.py", line 101, in run
    raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't clone Taming Transformers.
Command: "git" clone "https://github.com/CompVis/taming-transformers.git" "E:\DeepLearning\stable-diffusion-webui\repositories\taming-transformers"
Error code: 128
stderr: Cloning into 'E:\DeepLearning\stable-diffusion-webui\repositories\taming-transformers'...
fatal: unable to access 'https://github.com/CompVis/taming-transformers.git/': Failed to connect to github.com port 443 after 21089 ms: Couldn't connect to server

Solution:

Open a cmd window in the current folder, configure the proxy there, and then run the webui-user.bat file from that same window:

call ./webui-user.bat
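For example, the proxy can be set via environment variables in the same cmd window before launching. This is a sketch: the address 127.0.0.1:7890 is an assumed local proxy port, so substitute whatever port your proxy client actually listens on.

```shell
:: Hypothetical example: route the clone traffic through a local proxy client.
:: 127.0.0.1:7890 is an assumption -- replace it with your proxy's real port.
set HTTP_PROXY=http://127.0.0.1:7890
set HTTPS_PROXY=http://127.0.0.1:7890

:: Then launch the webui from this same window so it inherits the variables.
call ./webui-user.bat
```

Because the variables are set only in this cmd session, closing the window removes the proxy configuration again.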


OutOfMemoryError: CUDA out of memory. Tried to allocate 67.91 GiB (GPU 0; 11.99 GiB total capacity; 2.58 GiB already allocated; 7.09 GiB free; 2.62 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Reference: https://huggingface.co/spaces/stabilityai/stable-diffusion/discussions/21

How bounding boxes change when resizing images for object detection - CSDN blog

RuntimeError: CUDA out of memory (solved) - CSDN blog
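As the error message itself suggests, when reserved memory is much larger than allocated memory, fragmentation may be the cause, and `max_split_size_mb` can help; an allocation of 67.91 GiB also usually means the requested image resolution or batch size is far too large for the card. A sketch of both mitigations (the value 512 is an assumed starting point to tune, not a recommendation from this post):

```shell
:: Reduce fragmentation in PyTorch's CUDA caching allocator.
:: max_split_size_mb:512 is an assumed tuning value -- adjust for your card.
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512

:: Optionally launch stable-diffusion-webui with its lower-VRAM mode.
set COMMANDLINE_ARGS=--medvram
call ./webui-user.bat
```

Lowering the generation resolution or batch count in the web UI itself is usually the first thing to try before touching the allocator settings.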

User friendly error message: 
Error: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.. Check your schedules/ init values please. Also make sure you don't have a backwards slash in any of your PATHs - use / instead of \.
2023-06-19 15:55:17,788 - httpx - INFO - HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-06-19 15:55:17,791 - httpx - INFO - HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"

Solution:

(Screenshot: enabling the "Upcast cross attention layer to float32" option under Settings > Stable Diffusion)

Reference: Concise Stable Diffusion local deployment (Windows) - Zhihu (zhihu.com)
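The same fix can also be applied from webui-user.bat instead of the settings page, using the command-line flags that the NaN error message itself names. A sketch of the edited launch step:

```shell
:: In webui-user.bat: disable half precision, as the NaN error message suggests.
:: --no-half avoids fp16 NaNs on cards without proper half-type support;
:: --disable-nan-check would merely silence the check and is best left off.
set COMMANDLINE_ARGS=--no-half
call ./webui-user.bat
```

Note that --no-half increases VRAM usage, so it may need to be combined with --medvram on smaller cards.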

Error: 'A tensor with all NaNs was produced in Unet. Use --disable-nan-check commandline argument to disable this check.'. Check your schedules/ init values please. Also make sure you don't have a backwards slash in any of your PATHs - use / instead of \. Full error message is in your terminal/ cli.
Time taken: 2.55s  Torch active/reserved: 6265/6588 MiB, Sys VRAM: 8914/12282 MiB (72.58%)

