# Kohya's GUI
This repository provides a Windows-focused Gradio GUI for [Kohya's Stable Diffusion trainers](https://github.com/kohya-ss/sd-scripts). The GUI allows you to set the training parameters and generate and run the required CLI commands to train the model.
If you run on Linux and would like to use the GUI, there is now a port of it as a docker container. You can find the project [here](https://github.com/P2Enjoy/kohya_ss-docker).
### Table of Contents
- [Tutorials](https://github.com/bmaltais/kohya_ss#tutorials)
- [Required Dependencies](https://github.com/bmaltais/kohya_ss#required-dependencies)
- [Installation](https://github.com/bmaltais/kohya_ss#installation)
- [CUDNN 8.6](https://github.com/bmaltais/kohya_ss#optional-cudnn-86)
- [Upgrading](https://github.com/bmaltais/kohya_ss#upgrading)
- [Launching the GUI](https://github.com/bmaltais/kohya_ss#launching-the-gui)
- [Dreambooth](https://github.com/bmaltais/kohya_ss#dreambooth)
- [Finetune](https://github.com/bmaltais/kohya_ss#finetune)
- [Train Network](https://github.com/bmaltais/kohya_ss#train-network)
- [LoRA](https://github.com/bmaltais/kohya_ss#lora)
- [Troubleshooting](https://github.com/bmaltais/kohya_ss#troubleshooting)
- [Page File Limit](https://github.com/bmaltais/kohya_ss#page-file-limit)
- [No module called tkinter](https://github.com/bmaltais/kohya_ss#no-module-called-tkinter)
- [FileNotFoundError](https://github.com/bmaltais/kohya_ss#filenotfounderror)
- [Change History](https://github.com/bmaltais/kohya_ss#change-history)
## Tutorials
[How to Create a LoRA Part 1: Dataset Preparation](https://www.youtube.com/watch?v=N4_-fB62Hwk):
[![LoRA Part 1 Tutorial](https://img.youtube.com/vi/N4_-fB62Hwk/0.jpg)](https://www.youtube.com/watch?v=N4_-fB62Hwk)
[How to Create a LoRA Part 2: Training the Model](https://www.youtube.com/watch?v=k5imq01uvUY):
[![LoRA Part 2 Tutorial](https://img.youtube.com/vi/k5imq01uvUY/0.jpg)](https://www.youtube.com/watch?v=k5imq01uvUY)
## Required Dependencies
- Install [Python 3.10](https://www.python.org/ftp/python/3.10.9/python-3.10.9-amd64.exe)
- Make sure to tick the box to add Python to the `PATH` environment variable
- Install [Git](https://git-scm.com/download/win)
- Install [Visual Studio 2015, 2017, 2019, and 2022 redistributable](https://aka.ms/vs/17/release/vc_redist.x64.exe)
## Installation
### Runpod
Follow the instructions found in this discussion: https://github.com/bmaltais/kohya_ss/discussions/379
### Ubuntu
In the terminal, run
```
git clone https://github.com/bmaltais/kohya_ss.git
cd kohya_ss
bash ubuntu_setup.sh
```
Then, when prompted, configure accelerate with the same answers as in the Windows instructions.
### Windows
Give PowerShell unrestricted script access so the venv can work:
- Run PowerShell as an administrator
- Run `Set-ExecutionPolicy Unrestricted` and answer 'A'
- Close PowerShell
Open a regular (non-administrator) PowerShell terminal and run the following commands:
```powershell
git clone https://github.com/bmaltais/kohya_ss.git
cd kohya_ss
python -m venv venv
.\venv\Scripts\activate
pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
pip install --use-pep517 --upgrade -r requirements.txt
pip install -U -I --no-deps https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/f/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl
cp .\bitsandbytes_windows\*.dll .\venv\Lib\site-packages\bitsandbytes\
cp .\bitsandbytes_windows\cextension.py .\venv\Lib\site-packages\bitsandbytes\cextension.py
cp .\bitsandbytes_windows\main.py .\venv\Lib\site-packages\bitsandbytes\cuda_setup\main.py
accelerate config
```
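For reference, when `accelerate config` prompts you, answers along the following lines are commonly used for a local, single-GPU setup (treat this as a suggestion, not an authoritative list, and adjust for your hardware):

```
- This machine
- No distributed training
- NO
- NO
- NO
- all
- fp16
```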
### Optional: CUDNN 8.6
This step is optional, but for NVIDIA 30X0/40X0 owners it can improve learning speed: it allows for larger training batch sizes and faster training.
Due to the file size, I can't host the DLLs needed for CUDNN 8.6 on GitHub. I strongly advise you to download them [here](https://b1.thefileditch.ch/mwxKTEtelILoIbMbruuM.zip) for a speed boost in sample generation (almost 50% on a 4090 GPU).
To install, simply unzip the archive and place the `cudnn_windows` folder in the root of this repo.
Run the following commands to install:
```
.\venv\Scripts\activate
python .\tools\cudann_1.8_install.py
```
## Upgrading
When a new release comes out, you can upgrade your repo with the following commands in the root directory:
```powershell
git pull
.\venv\Scripts\activate
pip install --use-pep517 --upgrade -r requirements.txt
```
Once the commands have completed successfully, you should be ready to use the new version.
## Launching the GUI using gui.bat or gui.ps1
The script can be run with several optional command line arguments:
- `--listen`: the IP address to listen on for connections to Gradio
- `--username`: a username for authentication
- `--password`: a password for authentication
- `--server_port`: the port to run the server listener on
- `--inbrowser`: opens the Gradio UI in a web browser
- `--share`: shares the Gradio UI
These command line arguments can be passed to the UI function as keyword arguments. To launch the Gradio UI, run the script in a terminal with the desired command line arguments, for example:
`gui.ps1 --listen 127.0.0.1 --server_port 7860 --inbrowser --share`
or
`gui.bat --listen 127.0.0.1 --server_port 7860 --inbrowser --share`
## Launching the GUI using kohya_gui.py
To run the GUI, simply use this command:
```
.\venv\Scripts\activate
python.exe .\kohya_gui.py
```
## Dreambooth
You can find the Dreambooth-specific documentation here: [Dreambooth README](train_db_README.md)
## Finetune
You can find the finetune-specific documentation here: [Finetune README](fine_tune_README.md)
## Train Network
You can find the train network-specific documentation here: [Train network README](train_network_README.md)
## LoRA
Training a LoRA currently uses the `train_network.py` code. You can create a LoRA network by using the all-in-one `gui.cmd` or by running the dedicated LoRA training GUI with:
```
.\venv\Scripts\activate
python lora_gui.py
```
Once you have created the LoRA network, you can generate images via auto1111 by installing [this extension](https://github.com/kohya-ss/sd-webui-additional-networks).
## Troubleshooting
### Page File Limit
- If you encounter an error relating to the `page file`, increase the page file size limit in Windows.
### No module called tkinter
- Re-install [Python 3.10](https://www.python.org/ftp/python/3.10.9/python-3.10.9-amd64.exe) on your system.
### FileNotFoundError
This is usually related to an installation issue. Make sure you do not have any Python modules installed locally that could conflict with the ones installed in the venv:
1. Open a new PowerShell terminal and make sure no venv is active.
2. Run the following commands:
```
pip freeze > uninstall.txt
pip uninstall -r uninstall.txt
```
This will store a backup file with your currently installed local pip packages and then uninstall them. Then, redo the installation instructions within the kohya_ss venv.
## Change History
* 2023/03/19 (v21.3.0)
- Add a function to load training config with `.toml` to each training script. Thanks to Linaqruf for this great contribution!
- Specify `.toml` file with `--config_file`. `.toml` file has `key=value` entries. Keys are same as command line options. See [#241](https://github.com/kohya-ss/sd-scripts/pull/241) for details.
- All sub-sections are combined into a single dictionary (the section names are ignored).
- Omitted arguments take the default values of the command line arguments.
- Command line args override the arguments in `.toml`.
- With `--output_config` option, you can output current command line options to the `.toml` specified with `--config_file`. Please use as a template.
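As a minimal sketch (the keys shown are ordinary command line option names; the values are illustrative assumptions only), such a `.toml` might look like:

```toml
pretrained_model_name_or_path = "runwayml/stable-diffusion-v1-5"
output_dir = "./output"
learning_rate = 1e-4
max_train_steps = 1600
```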
- Add `--lr_scheduler_type` and `--lr_scheduler_args` arguments for custom LR scheduler to each training script. Thanks to Isotr0py! [#271](https://github.com/kohya-ss/sd-scripts/pull/271)
- Same as the optimizer.
- Add sample image generation with weight and no length limit. Thanks to mio2333! [#288](https://github.com/kohya-ss/sd-scripts/pull/288)
- `( )`, `(xxxx:1.2)` and `[ ]` can be used.
- Fix exception on training a model in diffusers format with `train_network.py`. Thanks to orenwang! [#290](https://github.com/kohya-ss/sd-scripts/pull/290)
- Add warning if you are about to overwrite an existing model: https://github.com/bmaltais/kohya_ss/issues/404
- Add `--vae_batch_size` for faster latents caching to each training script. This batches VAE calls.
- Please start with `2` or `4` depending on the size of VRAM.
- Fix the number of training steps when using `--gradient_accumulation_steps` and `--max_train_epochs`. Thanks to tsukimiya!
- Extract parser setup to external scripts. Thanks to robertsmieja!
- Fix an issue in training when `.npz` files are missing and `--full_path` is specified.
- Support image file extensions with upper case characters in non-Windows environments.
- Fix `resize_lora.py` to work with LoRA with dynamic rank (including `conv_dim != network_dim`). Thanks to toshiaki!
- Fix issue: https://github.com/bmaltais/kohya_ss/issues/406
* 2023/03/19 (v21.2.5):
- Fix basic captioning logic
- Add the possibility to not train the text encoder (TE) in Dreambooth by setting `Step text encoder training` to -1.
- Update linux scripts
* 2023/03/12 (v21.2.4):
- Fix issue with kohya locon not training the convolution layers
- Update LyCORIS module version
- Update LyCORIS locon extract tool
* 2023/03/12 (v21.2.3):
- Add validation that all requirements are met before starting the GUI.
* 2023/03/11 (v21.2.2):
- Add support for LoRA LoHa type. See https://github.com/KohakuBlueleaf/LyCORIS for more details.
* 2023/03/10 (v21.2.1):
- Update to latest sd-script code
- Add support for SVD based LoRA merge
* 2023/03/09 (v21.2.0):
- Fix issue https://github.com/bmaltais/kohya_ss/issues/335
- Add option to print LoRA trainer command without executing it
- Add support for samples during training via a new `Sample images config` accordion in the `Training parameters` tab.
- Added a new `Additional parameters` field under the `Advanced Configuration` section of the `Training parameters` tab to allow specifying parameters not handled by the GUI.
- Added sample generation support as a new accordion under the `Training parameters` tab. More info about the prompt options can be found here: https://github.com/kohya-ss/sd-scripts/issues/256#issuecomment-1455005709
- There may be problems due to major changes. If you cannot revert back to the previous version when problems occur, please do not update for a while.
- Minimum metadata (module name, dim, alpha and network_args) is recorded even with `--no_metadata`, issue https://github.com/kohya-ss/sd-scripts/issues/254
- `train_network.py` supports LoRA for Conv2d-3x3 (extended to conv2d with a kernel size not 1x1).
- Same as a current version of [LoCon](https://github.com/KohakuBlueleaf/LoCon). __Thank you very much KohakuBlueleaf for your help!__
- LoCon will be enhanced in the future. Compatibility for future versions is not guaranteed.
- Specify `--network_args` option like: `--network_args "conv_dim=4" "conv_alpha=1"`
- [Additional Networks extension](https://github.com/kohya-ss/sd-webui-additional-networks) version 0.5.0 or later is required to use 'LoRA for Conv2d-3x3' in Stable Diffusion web UI.
- __Stable Diffusion web UI built-in LoRA does not support 'LoRA for Conv2d-3x3' now. Consider carefully whether or not to use it.__
- Merging/extracting scripts also support LoRA for Conv2d-3x3.
- Free CUDA memory after sample generation to reduce VRAM usage, issue https://github.com/kohya-ss/sd-scripts/issues/260
- Empty caption doesn't cause error now, issue https://github.com/kohya-ss/sd-scripts/issues/258
- Fix sample generation crashing in Textual Inversion training when using templates, or if height/width is not divisible by 8.
- Update documents (Japanese only).
- Dependencies are updated; please [upgrade](#upgrading) the repo.
- Add detailed dataset config feature via an extra config file. Thanks to fur0ut0 for this great contribution!
- Documentation is [here](https://github-com.translate.goog/kohya-ss/sd-scripts/blob/main/config_README-ja.md) (only in Japanese currently.)
- Specify `.toml` file with `--dataset_config` option.
- The options supported under the previous release can be used as is instead of the `.toml` config file.
- There might be bugs due to the large scale of the update; please report any problems you find at https://github.com/kohya-ss/sd-scripts/issues.
- Add feature to generate sample images in the middle of training to each training script.
- `--sample_every_n_steps` and `--sample_every_n_epochs` options: frequency to generate.
- `--sample_prompts` option: a file containing prompts (each line generates one image).
- The prompt format is a subset of `gen_img_diffusers.py`. The prompt options `w, h, d, l, s, n` are supported.
- `--sample_sampler` option: sampler (scheduler) for generating, such as ddim or k_euler. See help for usable samplers.
- Add `--tokenizer_cache_dir` to each training and generation scripts to cache Tokenizer locally from Diffusers.
- Scripts will support offline training/generation after caching.
- Support latents upscaling for highres fix, and VAE batch size in `gen_img_diffusers.py` (no documentation yet).
- Sample image generation: a prompt file might look like this, for example:
```
# prompt 1
masterpiece, best quality, 1girl, in white shirts, upper body, looking at viewer, simple background --n low quality, worst quality, bad anatomy,bad composition, poor, low effort --w 768 --h 768 --d 1 --l 7.5 --s 28
# prompt 2
masterpiece, best quality, 1boy, in business suit, standing at street, looking back --n low quality, worst quality, bad anatomy,bad composition, poor, low effort --w 576 --h 832 --d 2 --l 5.5 --s 40
```
Lines beginning with `#` are comments. You can specify options for the generated image with flags like `--n` after the prompt. The following can be used:
* `--n` Negative prompt up to the next option.
* `--w` Specifies the width of the generated image.
* `--h` Specifies the height of the generated image.
* `--d` Specifies the seed of the generated image.
* `--l` Specifies the CFG scale of the generated image.
* `--s` Specifies the number of steps in the generation.
Prompt weighting such as `( )` and `[ ]` is not working.
Please read [Releases](https://github.com/kohya-ss/sd-scripts/releases) for recent updates.
* 2023/03/05 (v21.1.5):
- Add replace underscore with space option to WD14 captioning. Thanks @sALTaccount!
- Improve how custom presets are set and handled.
- Add support for the `--listen` argument. This allows gradio to listen for connections from other devices on the network (or internet). For example: `gui.ps1 --listen "0.0.0.0"` will allow anyone to connect to the gradio webui.
- Updated `Resize LoRA` tab to support LoCon resizing. Added new resize
* 2023/03/05 (v21.1.4):
- Removed the legacy and confusing `Use 8bit adam` checkbox. It is now configured using the `Optimizer` dropdown list, and will be set properly based on legacy config files.
* 2023/03/04 (v21.1.3):
- Fix progress bar being displayed when not required.
- Add support for Linux. Thank you @devNegative-asm!
* 2023/03/03 (v21.1.2):
- Fix issue https://github.com/bmaltais/kohya_ss/issues/277
- Fix issue https://github.com/bmaltais/kohya_ss/issues/278 introduced by the LoCon project switching to a pip module. Make sure to run upgrade.ps1 to install the latest pip requirements for LoCon support.
* 2023/03/02 (v21.1.1):
- Emergency fix for https://github.com/bmaltais/kohya_ss/issues/261
* 2023/03/02 (v21.1.0):
- Add LoCon support (https://github.com/KohakuBlueleaf/LoCon.git) to the Dreambooth LoRA tab. This allows creating a new type of LoRA that includes conv layers as part of the LoRA, hence the name LoCon. LoCon will work with the native Auto1111 implementation of LoRA. If you want to use it with the Kohya_ss additionalNetwork extension, you will need to install this other extension until Kohya_ss supports it natively: https://github.com/KohakuBlueleaf/a1111-sd-webui-locon
* 2023/03/01 (v21.0.1):
- Add warning to tensorboard start if the log information is missing
- Fix issue with 8bitadam on older config file load
* 2023/02/27 (v21.0.0):
- Add tensorboard start and stop support to the GUI
* 2023/02/26 (v20.8.2):
- Fix issue https://github.com/bmaltais/kohya_ss/issues/231
- Change default for seed to random
- Add support for --share argument to `kohya_gui.py` and `gui.ps1`
- Implement 8bit adam logic to help with the legacy `Use 8bit adam` checkbox, which is now superseded by the `Optimizer` dropdown selection. This field will eventually be removed; it is kept for now for backward compatibility.
* 2023/02/23 (v20.8.1):
- Fix instability training issue in `train_network.py`.
- `fp16` training is probably not affected by this issue.
- Training with `float` for SD2.x models will work now. Also training with bf16 might be improved.
- This issue seems to have occurred in [PR#190](https://github.com/kohya-ss/sd-scripts/pull/190).
- Add some metadata to LoRA model. Thanks to space-nuko!
- Raise an error if optimizer options conflict (e.g. `--optimizer_type` and `--use_8bit_adam`.)
- Support ControlNet in `gen_img_diffusers.py` (no documentation yet.)
* 2023/02/22 (v20.8.0):
- Add gui support for optimizers: `AdamW, AdamW8bit, Lion, SGDNesterov, SGDNesterov8bit, DAdaptation, AdaFactor`
- Add gui support for `--noise_offset`
- Refactor optimizer options. Thanks to mgz-dev!
- Add `--optimizer_type` option for each training script. Please see help. Japanese documentation is [here](https://github-com.translate.goog/kohya-ss/sd-scripts/blob/main/train_network_README-ja.md?_x_tr_sl=fr&_x_tr_tl=en&_x_tr_hl=en-US&_x_tr_pto=wapp#%E3%82%AA%E3%83%97%E3%83%86%E3%82%A3%E3%83%9E%E3%82%A4%E3%82%B6%E3%81%AE%E6%8C%87%E5%AE%9A%E3%81%AB%E3%81%A4%E3%81%84%E3%81%A6).
- `--use_8bit_adam` and `--use_lion_optimizer` options also work and will override the options above for backward compatibility.
- Add SGDNesterov and its 8bit.
- Add [D-Adaptation](https://github.com/facebookresearch/dadaptation) optimizer. Thanks to BootsofLagrangian and all!
- Please install D-Adaptation optimizer with `pip install dadaptation` (it is not in requirements.txt currently.)
- Please see https://github.com/kohya-ss/sd-scripts/issues/181 for details.
- Add AdaFactor optimizer. Thanks to Toshiaki!
- Extra lr scheduler settings (num_cycles etc.) are working in training scripts other than `train_network.py`.
- Add `--max_grad_norm` option for each training script for gradient clipping. `0.0` disables clipping.
- Symbolic link can be loaded in each training script. Thanks to TkskKurumi!