Updates

parent 3b93266aae
commit f213b15014

README.md
@@ -163,8 +163,12 @@ This will store a backup file with your current locally installed pip packages

## Change History

* 2023/02/24 (v20.8.2):
  - Fix issue https://github.com/bmaltais/kohya_ss/issues/231
  - Change the default for seed to random
  - Add support for the `--share` argument to `kohya_gui.py` and `gui.ps1`
  - Implement 8bit adam logic to back the legacy `Use 8bit adam` checkbox, which is now superseded by the `Optimizer` dropdown selection. The checkbox will eventually be removed; it is kept for now for backward compatibility.
* 2023/02/23 (v20.8.1):
  - Fix a training instability issue in `train_network.py`.
    - `fp16` training is probably not affected by this issue.
    - Training with `float` for SD2.x models will work now. Training with `bf16` might also be improved.
@@ -186,189 +190,3 @@ This will store a backup file with your current locally installed pip packages
  - Extra lr scheduler settings (num_cycles etc.) now work in training scripts other than `train_network.py`.
  - Add `--max_grad_norm` option to each training script for gradient clipping. `0.0` disables clipping. A sketch of what max-norm clipping does is shown below.
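    The following is a minimal, hypothetical PyTorch sketch of the max-norm gradient clipping that `--max_grad_norm` enables; the model, optimizer, and clipping value are illustrative stand-ins, not the training scripts' actual code:

    ```python
    import torch

    # Illustrative stand-ins for the real network and optimizer.
    model = torch.nn.Linear(4, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

    max_grad_norm = 1.0  # 0.0 would mean "no clipping"

    loss = model(torch.randn(8, 4)).pow(2).mean()  # dummy loss
    loss.backward()
    if max_grad_norm > 0.0:
        # Rescale all gradients so their combined L2 norm is at most max_grad_norm.
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
    optimizer.step()
    optimizer.zero_grad()
    ```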
  - Symbolic links can now be loaded in each training script. Thanks to TkskKurumi!
* 2023/02/19 (v20.7.4):
  - Add `--use_lion_optimizer` to each training script to use the [Lion optimizer](https://github.com/lucidrains/lion-pytorch).
    - Please install the Lion optimizer with `pip install lion-pytorch` (it is not in `requirements.txt` currently). A usage sketch is shown below.
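    As a hedged sketch of what the standalone `lion-pytorch` package looks like when used directly (the training scripts wire it up for you via `--use_lion_optimizer`; the model and hyperparameters below are illustrative only):

    ```python
    # pip install lion-pytorch
    import torch
    from lion_pytorch import Lion

    model = torch.nn.Linear(4, 1)  # illustrative model
    # Lion is typically run with a smaller learning rate than AdamW.
    optimizer = Lion(model.parameters(), lr=1e-4, weight_decay=1e-2)

    loss = model(torch.randn(8, 4)).pow(2).mean()  # dummy loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    ```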
  - Add `--lowram` option to `train_network.py`. Load models to VRAM instead of RAM (for machines that have more VRAM than RAM, such as Colab and Kaggle). Thanks to Isotr0py!
    - The default behavior (without `--lowram`) has been reverted to the same as before 14 Feb.
  - Fixed the git commit hash so it is set correctly regardless of the working directory. Thanks to vladmandic!
* 2023/02/15 (v20.7.3):
  - Update the upgrade.ps1 script
  - Integrate new kohya sd-scripts
  - Noise offset is recorded in the metadata. Thanks to space-nuko!
  - Show the moving average loss to prevent loss jumping in `train_network.py` and `train_db.py`. Thanks to shirayu!
  - Add support for multi-GPU training in `train_network.py`. Thanks to Isotr0py!
  - Add `--verbose` option for `resize_lora.py`. For details, see [this PR](https://github.com/kohya-ss/sd-scripts/pull/179). Thanks to mgz-dev!
  - Git commit hash is added to the metadata for LoRA. Thanks to space-nuko!
  - Add `--noise_offset` option to each training script.
    - Implementation of https://www.crosslabs.org//blog/diffusion-with-offset-noise
    - This option may improve the ability to generate darker/lighter images. It may also work with LoRA.
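    A hedged sketch of the offset-noise idea from the linked post: a small per-sample, per-channel constant is added to the usual Gaussian noise before the denoising objective is computed. The tensor shapes and the `0.1` value are illustrative:

    ```python
    import torch

    latents = torch.randn(4, 4, 64, 64)   # illustrative latent batch (B, C, H, W)
    noise_offset = 0.1                     # value that would be passed via --noise_offset

    noise = torch.randn_like(latents)
    # Offset noise: shift each (sample, channel) by its own random constant.
    noise += noise_offset * torch.randn(latents.shape[0], latents.shape[1], 1, 1)
    print(noise.shape)                     # torch.Size([4, 4, 64, 64])
    ```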
* 2023/02/11 (v20.7.2):
  - `lora_interrogator.py` is added in the `networks` folder. See `python networks\lora_interrogator.py -h` for usage.
    - For LoRAs where the activation word is unknown, this script compares the output of the Text Encoder with the LoRA applied to its output without the LoRA, to find out which tokens are affected. Hopefully you can figure out the activation word from this. LoRAs trained with captions do not seem to be interrogatable.
    - The batch size can be large (such as 64 or 128).
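    A toy, self-contained sketch of the idea behind the interrogator (random tensors stand in for real Text Encoder outputs; this is not the script's code):

    ```python
    import torch

    # Pretend per-token Text Encoder outputs, without and with a LoRA applied.
    tokens = ["a", "photo", "of", "sks", "dog"]
    base = torch.randn(len(tokens), 768)
    with_lora = base.clone()
    with_lora[3] += 0.5 * torch.randn(768)   # pretend the LoRA mostly shifts "sks"

    # The token whose embedding moved the most is likely the trigger word.
    diff = (with_lora - base).norm(dim=1)
    print(tokens[int(diff.argmax())])        # -> "sks"
    ```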
  - `train_textual_inversion.py` now supports multiple init words.
  - The following feature has been reverted to its previous behavior. Sorry for the confusion:
    > Now the number of data in each batch is limited to the number of actual images (not duplicated). Because a certain bucket may contain a smaller number of actual images, the batch may contain the same (duplicated) images.
  - Add a new tool to sort, group and average-crop images in a dataset
* 2023/02/09 (v20.7.1)
  - Caption dropout is supported in `train_db.py`, `fine_tune.py` and `train_network.py`. Thanks to forestsource!
    - The `--caption_dropout_rate` option specifies the dropout rate for captions (0~1.0; 0.1 means a 10% chance of dropout). If dropout occurs, the image is trained with an empty caption. Default is 0 (no dropout).
    - The `--caption_dropout_every_n_epochs` option specifies how often to drop all captions. If `3` is specified, in epochs 3, 6, 9, ..., images are trained with all captions empty. Default is None (no dropout).
    - The `--caption_tag_dropout_rate` option specifies the dropout rate for tags (comma-separated tokens) (0~1.0; 0.1 means a 10% chance of dropout). If dropout occurs, the tag is removed from the caption. If the `--keep_tokens` option is set, those tokens (tags) are not dropped. Default is 0 (no dropout).
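    An illustrative re-implementation of the two rate-based behaviors described above (not the scripts' actual code; `keep_tokens=1` and the sample caption are made up):

    ```python
    import random

    def apply_caption_dropout(caption, caption_dropout_rate=0.1,
                              caption_tag_dropout_rate=0.1, keep_tokens=1):
        # Whole-caption dropout: train this image with an empty caption.
        if random.random() < caption_dropout_rate:
            return ""
        # Tag dropout: drop individual comma-separated tags, never the first keep_tokens.
        tags = [t.strip() for t in caption.split(",")]
        kept = tags[:keep_tokens]
        kept += [t for t in tags[keep_tokens:] if random.random() >= caption_tag_dropout_rate]
        return ", ".join(kept)

    print(apply_caption_dropout("sks dog, sitting, outdoors, grass"))
    ```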
  - The bulk image downsampling script is added. Documentation is [here](https://github.com/kohya-ss/sd-scripts/blob/main/train_network_README-ja.md#%E7%94%BB%E5%83%8F%E3%83%AA%E3%82%B5%E3%82%A4%E3%82%BA%E3%82%B9%E3%82%AF%E3%83%AA%E3%83%97%E3%83%88) (in Japanese). Thanks to bmaltais!
  - A typo check is added. Thanks to shirayu!
  - Add an option to autolaunch the GUI in a browser and set the server_port. Use either `gui.ps1 --inbrowser --server_port 3456` or `gui.cmd -inbrowser -server_port 3456`
* 2023/02/06 (v20.7.0)
  - `--bucket_reso_steps` and `--bucket_no_upscale` options are added to the training scripts (fine tuning, DreamBooth, LoRA and Textual Inversion) and `prepare_buckets_latents.py`. A sketch of the `--bucket_no_upscale` rounding is shown below.
    - `--bucket_reso_steps` takes the step size for buckets in aspect ratio bucketing. Default is 64, same as before.
      - Any value greater than or equal to 1 can be specified; 64 is highly recommended, and a value divisible by 8 is recommended.
      - If less than 64 is specified, padding will occur within the U-Net. The result is unknown.
      - If you specify a value that is not divisible by 8, it will be truncated to a multiple of 8 inside the VAE, because the size of the latent is 1/8 of the image size.
    - If the `--bucket_no_upscale` option is specified, images smaller than the bucket size will be processed without upscaling.
      - Internally, a bucket smaller than the image size is created (for example, if the image is 300x300 and `bucket_reso_steps=64`, the bucket is 256x256). The image will be trimmed.
      - Implementation of [#130](https://github.com/kohya-ss/sd-scripts/issues/130).
      - Images with an area larger than the maximum size specified by `--resolution` are downsampled to the max bucket size.
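    A hypothetical sketch of the rounding behind the 300x300 example above (an illustrative helper, not the actual bucketing code):

    ```python
    def bucket_resolution_no_upscale(width, height, bucket_reso_steps=64):
        # Round each side down to a multiple of bucket_reso_steps; the image is
        # then trimmed to fit the resulting bucket.
        return ((width // bucket_reso_steps) * bucket_reso_steps,
                (height // bucket_reso_steps) * bucket_reso_steps)

    print(bucket_resolution_no_upscale(300, 300))  # (256, 256)
    ```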
  - Now the number of data in each batch is limited to the number of actual images (not duplicated). Because a certain bucket may contain a smaller number of actual images, the batch may contain the same (duplicated) images.
  - `--random_crop` now also works with buckets enabled.
    - Instead of always cropping the center of the image, the image is shifted left, right, up, and down to be used as the training data. This is expected to train the model on the edges of the image as well.
    - Implementation of discussion [#34](https://github.com/kohya-ss/sd-scripts/discussions/34).
* 2023/02/04 (v20.6.1)
  - Add new LoRA resize GUI
  - `--persistent_data_loader_workers` option is added to `fine_tune.py`, `train_db.py` and `train_network.py`. This option may significantly reduce the waiting time between epochs. Thanks to hitomi!
  - `--debug_dataset` option now works on non-Windows environments. Thanks to tsukimiya!
  - `networks/resize_lora.py` script is added. This can approximate a higher-rank (dim) LoRA model with a lower-rank LoRA model, e.g. 128 to 4. Thanks to mgz-dev! A sketch of the idea is shown below.
    - The `--help` option shows usage.
    - Currently the metadata is not copied. This will be fixed in the near future.
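    A simplified, hypothetical sketch of the kind of truncated-SVD approximation such a resize can use per module (random matrices stand in for real LoRA weights; this is not `resize_lora.py` itself):

    ```python
    import torch

    rank, new_rank, d_out, d_in = 128, 4, 320, 320
    lora_up, lora_down = torch.randn(d_out, rank), torch.randn(rank, d_in)

    # Approximate the full delta (up @ down) with a lower-rank factorization.
    U, S, Vh = torch.linalg.svd(lora_up @ lora_down, full_matrices=False)
    new_up = U[:, :new_rank] * S[:new_rank]
    new_down = Vh[:new_rank, :]
    print(new_up.shape, new_down.shape)   # torch.Size([320, 4]) torch.Size([4, 320])
    ```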
* 2023/02/03 (v20.6.0)
  - Increase the max LoRA rank (dim) size to 1024.
  - Update the finetune preprocessing scripts.
    - `.bmp` and `.jpeg` are supported. Thanks to breakcore2 and p1atdev!
    - The default weights of `tag_images_by_wd14_tagger.py` are now `SmilingWolf/wd-v1-4-convnext-tagger-v2`. You can specify another model id from `SmilingWolf` with the `--repo_id` option. Thanks to SmilingWolf for the great work.
      - To change the weights, remove the `wd14_tagger_model` folder, and run the script again.
    - `--max_data_loader_n_workers` option is added to each script. This option uses the DataLoader for data loading to speed it up by 20%~30%.
      - Please specify 2 or 4, depending on the number of CPU cores.
    - `--recursive` option is added to `merge_dd_tags_to_metadata.py` and `merge_captions_to_metadata.py`; it only works with `--full_path`.
    - `make_captions_by_git.py` is added. It uses [GIT microsoft/git-large-textcaps](https://huggingface.co/microsoft/git-large-textcaps) for captioning.
      - `requirements.txt` is updated. If you use this script, [please update the libraries](https://github.com/kohya-ss/sd-scripts#upgrade).
      - Usage is almost the same as `make_captions.py`, but the batch size should be smaller.
      - `--remove_words` option removes text such as `the word "XXXX" on it` as much as possible.
    - `--skip_existing` option is added to `prepare_buckets_latents.py`. Images with existing npz files are ignored by this option.
    - `clean_captions_and_tags.py` is updated to remove duplicated or conflicting tags, e.g. `shirt` is removed when `white shirt` exists. If `black hair` appears together with `red hair`, both are removed.
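    An illustrative toy version of the duplicated-tag rule above (the real script has more rules, e.g. for conflicting hair colors; this is not its code):

    ```python
    def drop_redundant_tags(tags):
        # Drop a tag that also appears inside a more specific tag in the same caption.
        return [t for t in tags if not any(t != other and t in other for other in tags)]

    print(drop_redundant_tags(["shirt", "white shirt", "smile"]))  # ['white shirt', 'smile']
    ```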
  - Tag frequency is added to the metadata in `train_network.py`. Thanks to space-nuko!
    - __All tags and the number of occurrences of each tag are recorded.__ If you do not want this, disable metadata storing with the `--no_metadata` option.
* 2023/01/30 (v20.5.2):
  - Add `--lr_scheduler_num_cycles` and `--lr_scheduler_power` options to `train_network.py` for the cosine_with_restarts and polynomial learning rate schedulers. Thanks to mgz-dev!
  - Fixed the U-Net `sample_size` parameter to `64` when converting from SD to Diffusers format in `convert_diffusers20_original_sd.py`
* 2023/01/27 (v20.5.1):
  - Fix [issue #70](https://github.com/bmaltais/kohya_ss/issues/70)
  - Fix [issue #71](https://github.com/bmaltais/kohya_ss/issues/71)
* 2023/01/26 (v20.5.0):
  - Add new `Dreambooth TI` tab for training of Textual Inversion embeddings
  - Add Textual Inversion training. Documentation is [here](./train_ti_README-ja.md) (in Japanese).
* 2023/01/22 (v20.4.1):
  - Add a new tool to verify LoRA weights produced by the trainer. It can be found under "Dreambooth LoRA/Tools/Verify LoRA"
* 2023/01/22 (v20.4.0):
  - Add support for `network_alpha` under the Training tab and support for `--training_comment` under the Folders tab.
  - Add `--network_alpha` option to specify an `alpha` value to prevent underflows for stable training. Thanks to CCRcmcpe!
    - Details of the issue are described [here](https://github.com/kohya-ss/sd-webui-additional-networks/issues/49).
    - The default value is `1`, which scales the weights by `1 / rank (or dimension)`. Set it to the same value as `network_dim` for the same behavior as the old version (see the sketch below).
    - LoRA with a large dimension (rank) seems to require a higher learning rate with `alpha=1` (e.g. 1e-3 for 128-dim; still investigating).
    - For generating images in the Web UI, __the latest version of the extension `sd-webui-additional-networks` (v0.3.0 or later) is required for models trained with this release or later.__
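    A hedged sketch of the usual LoRA scaling convention that `network_alpha` feeds into (the matrix sizes below are illustrative):

    ```python
    import torch

    # delta_W = (alpha / rank) * (lora_up @ lora_down)
    rank, alpha = 128, 1.0                  # --network_dim=128, --network_alpha=1 (new default)
    lora_up = torch.randn(320, rank)
    lora_down = torch.randn(rank, 320)
    delta_W = (alpha / rank) * (lora_up @ lora_down)
    print(alpha / rank)  # 0.0078125; alpha == rank reproduces the old scale of 1.0
    ```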
  - Add logging of the learning rate for the U-Net and Text Encoder independently, and of the running average epoch loss. Thanks to mgz-dev!
  - Add more metadata such as dataset/reg image dirs, session ID, output name etc... See [this pull request](https://github.com/kohya-ss/sd-scripts/pull/77) for details. Thanks to space-nuko!
    - __Now the metadata includes the folder name (the basename of the folder containing the image files, not the full path).__ If you do not want this, disable metadata storing with the `--no_metadata` option.
  - Add `--training_comment` option. You can specify an arbitrary string and refer to it from the extension.

  It seems that the Stable Diffusion web UI now supports image generation using the LoRA models trained in this repository.

  Note: At this time, it appears that models trained with version 0.4.0 are not supported. If you want to use the generation feature of the web UI, please continue to use version 0.3.2. Also, it seems that LoRA models for SD2.x are not supported.
* 2023/01/16 (v20.3.0):
  - Fix an issue where a part of the LoRA modules are not trained when `gradient_checkpointing` is enabled.
  - Add `--save_last_n_epochs_state` option. You can specify how many state folders to keep, separately from how many models to keep. Thanks to shirayu!
  - Fix an issue where Text Encoder training stops at `max_train_steps` even if `max_train_epochs` is set in `train_db.py`.
  - Added a script to check LoRA weights. You can check the weights with `python networks\check_lora_weights.py <model file>`. If some modules are not trained, the value is `0.0`, as in the examples below.
    - `lora_te_text_model_encoder_layers_11_*` is not trained with `clip_skip=2`, so `0.0` is okay for these modules.
  - Example result of `check_lora_weights.py` where the Text Encoder and a part of the U-Net are not trained:
    ```
    number of LoRA-up modules: 264
    lora_te_text_model_encoder_layers_0_mlp_fc1.lora_up.weight,0.0
    lora_te_text_model_encoder_layers_0_mlp_fc2.lora_up.weight,0.0
    lora_te_text_model_encoder_layers_0_self_attn_k_proj.lora_up.weight,0.0
    :
    lora_unet_down_blocks_2_attentions_1_transformer_blocks_0_ff_net_0_proj.lora_up.weight,0.0
    lora_unet_down_blocks_2_attentions_1_transformer_blocks_0_ff_net_2.lora_up.weight,0.0
    lora_unet_mid_block_attentions_0_proj_in.lora_up.weight,0.003503334941342473
    lora_unet_mid_block_attentions_0_proj_out.lora_up.weight,0.004308608360588551
    :
    ```

  - Example result where all modules are trained:
    ```
    number of LoRA-up modules: 264
    lora_te_text_model_encoder_layers_0_mlp_fc1.lora_up.weight,0.0028684409335255623
    lora_te_text_model_encoder_layers_0_mlp_fc2.lora_up.weight,0.0029794853180646896
    lora_te_text_model_encoder_layers_0_self_attn_k_proj.lora_up.weight,0.002507600700482726
    lora_te_text_model_encoder_layers_0_self_attn_out_proj.lora_up.weight,0.002639499492943287
    :
    ```
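    A hypothetical re-implementation of roughly what the check reports (the file name is a placeholder; the real script may compute its statistic differently):

    ```python
    from safetensors.torch import load_file

    state_dict = load_file("my_lora.safetensors")  # placeholder path
    up_keys = [k for k in state_dict if "lora_up.weight" in k]
    print(f"number of LoRA-up modules: {len(up_keys)}")
    for key in up_keys:
        # An all-zero lora_up weight means that module received no updates.
        print(f"{key},{state_dict[key].abs().mean().item()}")
    ```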
* 2023/01/16 (v20.2.1):
  - Merge the latest code update from kohya
  - Added `--max_train_epochs` and `--max_data_loader_n_workers` options to each training script.
    - If you specify the number of training epochs with `--max_train_epochs`, the number of steps is calculated from the number of epochs automatically.
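    As a rough, illustrative piece of arithmetic only (the real calculation also accounts for repeats, per-device batch sizes, gradient accumulation, etc.; all numbers below are made up):

    ```python
    import math

    num_train_images = 1200     # illustrative dataset size
    train_batch_size = 4
    max_train_epochs = 10

    steps_per_epoch = math.ceil(num_train_images / train_batch_size)
    max_train_steps = steps_per_epoch * max_train_epochs
    print(max_train_steps)      # 3000
    ```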
    - You can set the number of workers for the DataLoader with `--max_data_loader_n_workers`; the default is 8. A lower number may reduce main memory usage and the time between epochs, but may make data loading (training) slower.
  - Fix an issue where loading some VAEs or `.safetensors` files as VAE failed with the `--vae` option. Thanks to Fannovel16!
  - Add negative prompt scaling to `gen_img_diffusers.py`. You can set another conditioning scale for the negative prompt with the `--negative_scale` option, and the `--nl` option for the prompt. Thanks to laksjdjf!
  - Refactoring of GUI code and fixing mismatches... and possibly introducing bugs...
* 2023/01/11 (v20.2.0):
  - Add support for max token length
* 2023/01/10 (v20.1.1):
  - Fix issue with LoRA config loading
* 2023/01/10 (v20.1):
  - Add support for `--output_name` to trainers
  - Refactor code for easier maintenance
* 2023/01/10 (v20.0):
  - Update code base to match the [latest kohya_ss code upgrade](https://github.com/kohya-ss/sd-scripts)
* 2023/01/09 (v19.4.3):
  - Add vae support to the dreambooth GUI
  - Add gradient_checkpointing, gradient_accumulation_steps, mem_eff_attn, shuffle_caption to the finetune GUI
  - Add gradient_accumulation_steps, mem_eff_attn to the dreambooth lora GUI
* 2023/01/08 (v19.4.2):
  - Add find/replace option to the Basic Caption utility
  - Add resume training and save_state options to the finetune UI
* 2023/01/06 (v19.4.1):
  - Emergency fix for the new version of gradio causing issues with drop down menus. Please run `pip install -U -r requirements.txt` to fix the issue after pulling this repo.
* 2023/01/06 (v19.4):
  - Add new Utility to Extract a LoRA from a finetuned model
* 2023/01/06 (v19.3.1):
  - Emergency fix for dreambooth_ui no longer working, sorry
  - Add LoRA network merge tool to the GUI. Run `pip install -U -r requirements.txt` after pulling this new release.
* 2023/01/05 (v19.3):
  - Add support for `--clip_skip` option
  - Add missing `detect_face_rotate.py` to the tools folder
  - Add `gui.cmd` for easy start of the GUI
* 2023/01/02 (v19.2) update:
  - Finetune: add xformers, 8bit adam, min bucket, max bucket, batch size and flip augmentation support for dataset preparation
  - Finetune: add a "Dataset preparation" tab to group task-specific options
* 2023/01/01 (v19.2) update:
  - Add support for color and flip augmentation to "Dreambooth LoRA"
* 2023/01/01 (v19.1) update:
  - Merge kohya_ss upstream code updates
  - Rework the Dreambooth LoRA GUI
  - Fix a bug where LoRA network weights were not loaded to properly resume training
* 2022/12/30 (v19) update:
  - Support for LoRA network training in kohya_gui.py.
* 2022/12/23 (v18.8) update:
  - Fix for a conversion tool issue when the source was an sd1.x diffusers model
  - Other minor code and GUI fixes
* 2022/12/22 (v18.7) update:
  - Merge dreambooth and finetune into a common GUI
  - General bug fixes and code improvements
* 2022/12/21 (v18.6.1) update:
  - Fix issue with dataset balancing when the number of detected images in the folder is 0

* 2022/12/21 (v18.6) update:
  - Add optional GUI authentication support via: `python fine_tune.py --username=<name> --password=<password>`
@@ -24,6 +24,7 @@ from library.common_gui import (
    gradio_training,
    gradio_config,
    gradio_source_model,
    set_legacy_8bitadam,
)
from library.dreambooth_folder_creation_gui import (
    gradio_dreambooth_folder_creation_tab,
@@ -622,6 +623,11 @@ def dreambooth_tab(
        inputs=[color_aug],
        outputs=[cache_latents],
    )
    optimizer.change(
        set_legacy_8bitadam,
        inputs=[optimizer, use_8bit_adam],
        outputs=[optimizer, use_8bit_adam],
    )
    with gr.Tab('Tools'):
        gr.Markdown(
            'This section provide Dreambooth tools to help setup your dataset...'
@@ -18,6 +18,7 @@ from library.common_gui import (
    gradio_source_model,
    color_aug_changed,
    run_cmd_training,
    set_legacy_8bitadam,
)
from library.utilities import utilities_tab

@@ -616,6 +617,11 @@ def finetune_tab():
        inputs=[color_aug],
        outputs=[cache_latents],  # Not applicable to fine_tune.py
    )
    optimizer.change(
        set_legacy_8bitadam,
        inputs=[optimizer, use_8bit_adam],
        outputs=[optimizer, use_8bit_adam],
    )

    button_run = gr.Button('Train model')

@@ -51,12 +51,15 @@ def UI(**kwargs):
    password = kwargs.get('password')
    server_port = kwargs.get('server_port', 0)
    inbrowser = kwargs.get('inbrowser', False)
    share = kwargs.get('share', False)
    if username and password:
        launch_kwargs["auth"] = (username, password)
    if server_port > 0:
        launch_kwargs["server_port"] = server_port
    if inbrowser:
        launch_kwargs["inbrowser"] = inbrowser
    if share:
        launch_kwargs["share"] = share
    interface.launch(**launch_kwargs)


if __name__ == '__main__':
@@ -72,7 +75,8 @@ if __name__ == '__main__':
        '--server_port', type=int, default=0, help='Port to run the server listener on'
    )
    parser.add_argument("--inbrowser", action="store_true", help="Open in browser")
    parser.add_argument("--share", action="store_true", help="Share the gradio UI")

    args = parser.parse_args()

    UI(username=args.username, password=args.password, inbrowser=args.inbrowser, server_port=args.server_port)
    UI(username=args.username, password=args.password, inbrowser=args.inbrowser, server_port=args.server_port, share=args.share)
@@ -80,6 +80,14 @@ def remove_doublequote(file_path):

    return file_path


def set_legacy_8bitadam(optimizer, use_8bit_adam):
    # Keep the legacy 'Use 8bit adam' checkbox in sync with the Optimizer dropdown:
    # it is shown read-only and is checked only while 'AdamW8bit' is selected.
    if optimizer == 'AdamW8bit':
        # use_8bit_adam = True
        return gr.Dropdown.update(value=optimizer), gr.Checkbox.update(value=True, interactive=False, visible=True)
    else:
        # use_8bit_adam = False
        return gr.Dropdown.update(value=optimizer), gr.Checkbox.update(value=False, interactive=False, visible=True)


def get_folder_path(folder_path=''):
    current_folder_path = folder_path
@@ -444,7 +452,7 @@ def gradio_training(
            label='Number of CPU threads per core',
            value=2,
        )
        seed = gr.Textbox(label='Seed', value=1234)
        seed = gr.Textbox(label='Seed', placeholder='(Optional) eg:1234')
        cache_latents = gr.Checkbox(label='Cache latent', value=True)
    with gr.Row():
        learning_rate = gr.Textbox(
@@ -24,6 +24,7 @@ from library.common_gui import (
    gradio_config,
    gradio_source_model,
    run_cmd_training,
    set_legacy_8bitadam,
)
from library.dreambooth_folder_creation_gui import (
    gradio_dreambooth_folder_creation_tab,
@@ -229,6 +230,7 @@ def open_configuration(
        # Set the value in the dictionary to the corresponding value in `my_data`, or the default value if not found
        if not key in ['file_path']:
            values.append(my_data.get(key, value))

    return tuple(values)

@@ -721,6 +723,12 @@ def lora_tab(
        inputs=[color_aug],
        outputs=[cache_latents],
    )

    optimizer.change(
        set_legacy_8bitadam,
        inputs=[optimizer, use_8bit_adam],
        outputs=[optimizer, use_8bit_adam],
    )

    with gr.Tab('Tools'):
        gr.Markdown(
@@ -24,6 +24,7 @@ from library.common_gui import (
    gradio_training,
    gradio_config,
    gradio_source_model,
    set_legacy_8bitadam,
)
from library.dreambooth_folder_creation_gui import (
    gradio_dreambooth_folder_creation_tab,
@@ -697,6 +698,11 @@ def ti_tab(
        inputs=[color_aug],
        outputs=[cache_latents],
    )
    optimizer.change(
        set_legacy_8bitadam,
        inputs=[optimizer, use_8bit_adam],
        outputs=[optimizer, use_8bit_adam],
    )
    with gr.Tab('Tools'):
        gr.Markdown(
            'This section provide Dreambooth tools to help setup your dataset...'