Commit Graph

86 Commits

Author SHA1 Message Date
bmaltais
8d559ded18 * 2023/02/06 (v20.7.0)
- ``--bucket_reso_steps`` and ``--bucket_no_upscale`` options are added to training scripts (fine tuning, DreamBooth, LoRA and Textual Inversion) and ``prepare_buckets_latents.py``.
    - ``--bucket_reso_steps`` takes the steps for buckets in aspect ratio bucketing. Default is 64, same as before.
        - Any value of 1 or greater can be specified; 64 is highly recommended, and a value divisible by 8 is recommended.
        - If a value smaller than 64 is specified, padding will occur inside the U-Net; the effect on training is unknown.
        - If you specify a value that is not divisible by 8, it will be truncated to a multiple of 8 inside the VAE, because the latent is 1/8 the size of the image.
    - If ``--bucket_no_upscale`` option is specified, images smaller than the bucket size will be processed without upscaling.
        - Internally, a bucket smaller than the image size is created (for example, if the image is 300x300 and ``bucket_reso_steps=64``, the bucket is 256x256). The image will be trimmed.
        - Implementation of [#130](https://github.com/kohya-ss/sd-scripts/issues/130).
        - Images with an area larger than the maximum size specified by ``--resolution`` are downsampled to the max bucket size.
    - Now the number of items in each batch is limited to the number of actual (non-duplicated) images. Previously, when a bucket contained fewer actual images than the batch size, the batch could contain the same image more than once.
    - ``--random_crop`` now also works with buckets enabled.
        - Instead of always cropping the center of the image, the crop is shifted left, right, up, and down, so the edges of the image are also used as training data.
        - Implementation of discussion [#34](https://github.com/kohya-ss/sd-scripts/discussions/34).
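The ``--bucket_no_upscale`` behavior above can be sketched as follows. This is a minimal illustration, not the sd-scripts implementation; the function name is made up, and the clamp to the minimum bucket size is an assumption:

```python
def bucket_for_image(width: int, height: int, bucket_reso_steps: int = 64) -> tuple[int, int]:
    """Pick a bucket no larger than the image, as --bucket_no_upscale does:
    round each dimension down to the nearest multiple of bucket_reso_steps.
    Clamping to bucket_reso_steps as a floor is an assumption of this sketch."""
    bw = max(bucket_reso_steps, (width // bucket_reso_steps) * bucket_reso_steps)
    bh = max(bucket_reso_steps, (height // bucket_reso_steps) * bucket_reso_steps)
    return bw, bh

# The 300x300 example from the changelog: (300 // 64) * 64 = 256,
# so the bucket is 256x256 and the image is trimmed to fit.
print(bucket_for_image(300, 300))  # (256, 256)
```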
2023-02-06 11:04:07 -05:00
bmaltais
2486af9903 Update to latest dev code of kohya_s. WIP 2023-02-05 14:16:53 -05:00
bmaltais
2626214f8a Add support for LoRA resizing 2023-02-04 11:55:06 -05:00
bmaltais
20e62af1a6 Update to latest kohya_ss sd-script code 2023-02-03 14:40:03 -05:00
bmaltais
c8f4c9d6e8 Add support for lr_scheduler_num_cycles, lr_scheduler_power 2023-01-30 08:26:15 -05:00
bmaltais
2ec7432440 Fix issue 81:
https://github.com/bmaltais/kohya_ss/issues/81
2023-01-29 11:17:30 -05:00
bmaltais
d45a7abb46 Add reference to Linux docker port 2023-01-29 11:12:05 -05:00
bmaltais
bc8a4757f8 Sync with kohya 2023/01/29 update 2023-01-29 11:10:06 -05:00
bmaltais
a4957cfea7 Adding LoRA tutorial 2023-01-27 19:46:13 -05:00
bmaltais
202923b3ce Add support for --keep_token option 2023-01-27 07:33:44 -05:00
bmaltais
bf371b49bf Fix issue 71 2023-01-27 07:04:35 -05:00
bmaltais
03bd2e9b01 Add TI training support 2023-01-26 16:22:58 -05:00
bmaltais
511361c80b - Add new tool to verify LoRA weights produced by the trainer. It can be found under "Dreambooth LoRA/Tools/Verify LoRA". 2023-01-22 11:40:14 -05:00
bmaltais
2ca17f69dd v20.4.0:
Add support for `network_alpha` under the Training tab and support for `--training_comment` under the Folders tab.
2023-01-22 10:18:00 -05:00
bmaltais
31a1c8a71a Merge kohya Jan 19 updates 2023-01-19 15:47:43 -05:00
bmaltais
cb953d716f Update 2023-01-17 17:54:38 -05:00
bmaltais
7886dfe9c7 Update gui start instructions 2023-01-16 13:39:10 -05:00
bmaltais
95b9ab7c4d Update README 2023-01-16 13:33:17 -05:00
bmaltais
bfb0d18d4c Update install instructions 2023-01-16 10:28:20 -05:00
bmaltais
6aed2bb402 Add support for new arguments:
- max_train_epochs
- max_data_loader_n_workers
Move some of the code to the common GUI library.
2023-01-15 11:05:22 -05:00
bmaltais
43116feda8 Add support for max token 2023-01-10 09:38:32 -05:00
bmaltais
42a3646d4a Update readme 2023-01-09 17:59:11 -05:00
bmaltais
dc5afbb057 Move functions to common_gui
Add model name support
2023-01-09 11:48:57 -05:00
bmaltais
442eb7a292 Merge latest kohya code release into GUI repo 2023-01-09 07:47:07 -05:00
bmaltais
a4262c0a66 - Add vae support to dreambooth GUI
- Add gradient_checkpointing, gradient_accumulation_steps, mem_eff_attn, shuffle_caption to finetune GUI
- Add gradient_accumulation_steps, mem_eff_attn to dreambooth lora gui
2023-01-08 20:55:41 -05:00
bmaltais
f1d53ae3f9 Add resume training and save_state option to finetune UI 2023-01-08 19:31:44 -05:00
bmaltais
115ed35187 Emergency fix 2023-01-06 23:19:49 -05:00
bmaltais
aa0e39e14e Update readme 2023-01-06 18:38:24 -05:00
bmaltais
8ec1edbadc Update README 2023-01-06 18:33:07 -05:00
bmaltais
34f7cd8e57 Add new Utility to Extract a LoRA from a finetuned model 2023-01-06 18:25:55 -05:00
bmaltais
c20a10d7fd Emergency fix for dreambooth_ui no longer working, sorry
- Add LoRA network merge to the GUI. Run `pip install -U -r requirements.txt` after pulling this new release.
2023-01-06 07:13:12 -05:00
bmaltais
b8100b1a0a - Add support for --clip_skip option
- Add missing `detect_face_rotate.py` to tools folder
- Add `gui.cmd` for easy start of GUI
2023-01-05 19:16:13 -05:00
bmaltais
9d3c402973 - Finetune, add xformers, 8bit adam, min bucket, max bucket, batch size and flip augmentation support for dataset preparation
- Finetune, add "Dataset preparation" tab to group task specific options
2023-01-02 13:07:17 -05:00
bmaltais
1d460a09fd add support for color and flip augmentation to "Dreambooth LoRA" 2023-01-01 22:43:44 -05:00
bmaltais
af46ce4c47 Update LoRA GUI
Various improvements
2023-01-01 14:14:58 -05:00
bmaltais
0f75b7c2db Update readme 2022-12-30 20:50:01 -05:00
bmaltais
2cdf4cf741 - Fix for the conversion tool issue when the source was an SD 1.x diffusers model
- Other minor code and GUI fixes
2022-12-23 07:56:35 -05:00
bmaltais
fd10512bf4 Add revision info 2022-12-22 13:19:28 -05:00
bmaltais
6a7e27e100 fix issue with dataset balancing when the number of detected images in the folder is 0 2022-12-21 11:02:49 -05:00
bmaltais
aa5d13f9a7 Add authentication support 2022-12-21 09:05:06 -05:00
bmaltais
1bc5089db8 Create model and log folders when running the dreambooth folder creation utility 2022-12-20 10:07:22 -05:00
bmaltais
706dfe157f
Merge dreambooth and finetuning in one repo to align with kohya_ss new repo (#10)
* Merge both dreambooth and finetune back in one repo
2022-12-20 09:15:17 -05:00
bmaltais
69558b5951 Update readme 2022-12-19 21:51:52 -05:00
bmaltais
6987f51b0a Fix stop encoder training issue 2022-12-19 10:39:04 -05:00
bmaltais
c90aa2cc61 - Fix file/folder opening behind the browser window
- Add WD14 and BLIP captioning to utilities
- Improve overall GUI layout
2022-12-19 09:22:52 -05:00
bmaltais
0ca93a7aa7 v18.1: Model conversion utility 2022-12-18 13:11:10 -05:00
bmaltais
f459c32a3e v18: Save model as option added 2022-12-17 20:36:31 -05:00
bmaltais
fc22813b8f Fix README 2022-12-17 16:24:23 -05:00
bmaltais
e1d66e47f4
v17.2 (#8)
* Update youtube video

* Dataset balancing

* Fix typo
2022-12-17 16:22:34 -05:00
bmaltais
88561720e0 Update README for v17.1 2022-12-17 11:52:31 -05:00