Commit Graph

125 Commits

Author SHA1 Message Date
bmaltais
ba9e26a99d LoHa support 2023-03-09 07:49:50 -05:00
bmaltais
35c1d42570 Update readme 2023-03-08 19:38:54 -05:00
bmaltais
54a7537fdf
Merge pull request #327 from merlin-rtzr/master
Fix typo
2023-03-08 09:48:47 -05:00
bmaltais
3a5d491ff2 Add option to print LoRA trainer command without executing it 2023-03-08 08:49:12 -05:00
Merlin
cea9bbd956
Fix typo 2023-03-08 15:55:30 +09:00
bmaltais
7ed8f7c3c5 Add additional parameters field 2023-03-07 07:42:13 -05:00
bmaltais
3fe01f70bc Update readme 2023-03-06 19:42:46 -05:00
bmaltais
7249b0baa8 Update to latest sd-script release
Add GUI support for sample config
2023-03-06 19:15:02 -05:00
bmaltais
fccb1c3359 v21.1.5 2023-03-06 12:46:57 -05:00
bmaltais
414a98d100 Add --listen support 2023-03-05 21:57:06 -05:00
bmaltais
cc7aee2301 Improve custom preset handling 2023-03-05 21:10:24 -05:00
bmaltais
09939ff8a8 Remove legacy 8bit adam checkbox 2023-03-05 10:34:09 -05:00
bmaltais
3beeef4414 Add linux support 2023-03-04 18:56:22 -05:00
devdn
38bdcea3c5 Add setup script for ubuntu users 2023-03-03 18:17:54 -05:00
bmaltais
29bb8599bb Fix issue 278 2023-03-03 07:41:44 -05:00
bmaltais
d30abe5491 Fix issue 277 2023-03-03 07:11:15 -05:00
bmaltais
482d7834d1 Emergency fix 2023-03-02 17:51:17 -05:00
bmaltais
962628c89a Fix ToC links 2023-03-02 15:00:55 -05:00
bmaltais
c926c9d877 Update Readme 2023-03-02 14:36:07 -05:00
bmaltais
6105eb0279 Update readme 2023-03-02 14:25:11 -05:00
bmaltais
7f0e5683c6 v21.0.1 2023-03-01 19:02:04 -05:00
bmaltais
9d2e3f85a2 Add tensorboard support 2023-02-26 19:49:22 -05:00
bmaltais
f213b15014 Updates 2023-02-24 20:37:51 -05:00
bmaltais
fe4558633d Fix date 2023-02-23 19:25:51 -05:00
bmaltais
df6092a52b Update readme 2023-02-23 19:25:09 -05:00
bmaltais
bf0344ba9e Adding GUI support for new features 2023-02-22 20:32:57 -05:00
bmaltais
2a5fb346d5 Sync to latest code update on sd-script 2023-02-22 13:30:29 -05:00
bmaltais
611a0f3e76
Merge branch 'master' into dev 2023-02-19 20:16:44 -05:00
bmaltais
bb57c1a36e Update code to latest sd-script version 2023-02-19 06:50:33 -05:00
bmaltais
48122347a3
Merge pull request #189 from bmaltais/LR-Free
v20.7.3
2023-02-17 19:18:39 -05:00
bmaltais
674ed88d13 * 2023/02/16 (v20.7.3)
- Noise offset is recorded to the metadata. Thanks to space-nuko!
    - Show the moving average loss to prevent loss jumping in `train_network.py` and `train_db.py`. Thanks to shirayu!
2023-02-17 19:18:11 -05:00
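The "moving average loss" mentioned above smooths the displayed training loss so that a single noisy batch does not make the reported value jump. A minimal sketch of the idea, assuming a simple windowed running average (the actual `train_network.py`/`train_db.py` implementation may differ):

```python
class MovingAverageLoss:
    """Windowed running average of per-step losses (illustrative sketch only)."""

    def __init__(self, window: int = 100):
        self.window = window
        self.losses: list[float] = []

    def update(self, loss: float) -> float:
        self.losses.append(loss)
        if len(self.losses) > self.window:
            self.losses.pop(0)  # drop the oldest step outside the window
        return sum(self.losses) / len(self.losses)  # smoothed value to display
```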
bmaltais
641a168e55 Integrate new kohya sd-script 2023-02-14 18:52:08 -05:00
bmaltais
a1f6438f7b Update upgrade.ps1 script to fix reported issue:
https://github.com/bmaltais/kohya_ss/issues/165
2023-02-14 17:42:36 -05:00
bmaltais
261b6790ee Update tool 2023-02-12 07:02:05 -05:00
bmaltais
a008c62893
Merge pull request #147 from bmaltais/dev
v20.7.2
2023-02-11 12:00:17 -05:00
bmaltais
a49fb9cb8c 2023/02/11 (v20.7.2):
    - ``lora_interrogator.py`` is added in the ``networks`` folder. See ``python networks\lora_interrogator.py -h`` for usage.
        - For LoRAs where the activation word is unknown, this script compares the Text Encoder output with the LoRA applied against the output without it, to find which tokens the LoRA affects. Hopefully this helps you figure out the activation word (a conceptual sketch follows this entry). LoRAs trained with captions do not seem to be interrogatable.
        - Batch size can be large (like 64 or 128).
    - ``train_textual_inversion.py`` now supports multiple init words.
    - The following feature has been reverted to its previous behavior. Sorry for the confusion:
        > The number of items in each batch is now limited to the number of actual images (not duplicated). Previously, because a certain bucket could contain fewer actual images than the batch size, the batch could contain the same (duplicated) images.
    - Added a new tool to sort, group, and average-crop images in a dataset
2023-02-11 11:59:38 -05:00
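As a rough illustration of the interrogation idea described above (not the actual ``lora_interrogator.py`` code; ``encode_without_lora`` and ``encode_with_lora`` are hypothetical stand-ins for the real Text Encoder calls, and ``python networks\lora_interrogator.py -h`` shows the real interface):

```python
import torch

def rank_tokens_by_lora_effect(token_ids, encode_without_lora, encode_with_lora, top_k=10):
    """Score each token by how much the Text Encoder output shifts when the LoRA is applied."""
    scores = []
    for tok in token_ids:
        base = encode_without_lora(tok)  # Text Encoder output, LoRA not applied
        tuned = encode_with_lora(tok)    # Text Encoder output, LoRA applied
        scores.append((tok, torch.norm(tuned - base).item()))
    # Tokens whose embeddings shift the most are the best activation-word candidates.
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return scores[:top_k]
```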
bmaltais
9660d83612
Merge pull request #129 from jonathanzhang53/master
README documentation update
2023-02-09 19:19:01 -05:00
bmaltais
90c0d55457 2023/02/09 (v20.7.1)
- Caption dropout is supported in ``train_db.py``, ``fine_tune.py`` and ``train_network.py``. Thanks to forestsource!
        - ``--caption_dropout_rate`` option specifies the dropout rate for captions (0~1.0; 0.1 means a 10% chance of dropout). If dropout occurs, the image is trained with an empty caption. Default is 0 (no dropout).
        - ``--caption_dropout_every_n_epochs`` option specifies the epoch interval for dropping captions. If ``3`` is specified, then in epochs 3, 6, 9, ..., images are trained with all captions empty. Default is None (no dropout).
        - ``--caption_tag_dropout_rate`` option specifies the dropout rate for tags (comma-separated tokens) (0~1.0; 0.1 means a 10% chance of dropout). If dropout occurs, the tag is removed from the caption. If the ``--keep_tokens`` option is set, these tokens (tags) are not dropped. Default is 0 (no dropout). An example command using these options follows this entry.
        - The bulk image downsampling script is added. Documentation is [here](https://github.com/kohya-ss/sd-scripts/blob/main/train_network_README-ja.md#%E7%94%BB%E5%83%8F%E3%83%AA%E3%82%B5%E3%82%A4%E3%82%BA%E3%82%B9%E3%82%AF%E3%83%AA%E3%83%97%E3%83%88) (in Japanese). Thanks to bmaltais!
        - Typo check is added. Thanks to shirayu!
    - Add option to autolaunch the GUI in a browser and set the server_port. Use either `gui.ps1 --inbrowser --server_port 3456` or `gui.cmd -inbrowser -server_port 3456`
2023-02-09 19:17:17 -05:00
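As an example, a sketch of a training invocation with caption dropout enabled (the dropout flags are those documented above; the model and data paths are placeholders to be replaced with your own, combined with your usual training arguments):

```
accelerate launch train_network.py \
  --pretrained_model_name_or_path="model.safetensors" \
  --train_data_dir="./train" \
  --output_dir="./output" \
  --caption_dropout_rate=0.1 \
  --caption_dropout_every_n_epochs=3 \
  --caption_tag_dropout_rate=0.1 \
  --keep_tokens=1
```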
jonathanzhang53
6c4348233f README documentation update 2023-02-07 22:32:54 -05:00
bmaltais
8d559ded18 * 2023/02/06 (v20.7.0)
- ``--bucket_reso_steps`` and ``--bucket_no_upscale`` options are added to training scripts (fine tuning, DreamBooth, LoRA and Textual Inversion) and ``prepare_buckets_latents.py``.
    - ``--bucket_reso_steps`` takes the steps for buckets in aspect ratio bucketing. Default is 64, same as before.
        - Any value greater than or equal to 1 can be specified; 64 is highly recommended, and otherwise a value divisible by 8 is recommended.
        - If a value less than 64 is specified, padding will occur within the U-Net; the effect on results is unknown.
        - If you specify a value that is not divisible by 8, it will be truncated to a multiple of 8 inside the VAE, because the latent size is 1/8 of the image size.
    - If ``--bucket_no_upscale`` option is specified, images smaller than the bucket size will be processed without upscaling.
        - Internally, a bucket smaller than the image size is created (for example, if the image is 300x300 and ``bucket_reso_steps=64``, the bucket is 256x256). The image will be trimmed.
        - Implementation of [#130](https://github.com/kohya-ss/sd-scripts/issues/130).
        - Images with an area larger than the maximum size specified by ``--resolution`` are downsampled to the max bucket size.
    - The number of items in each batch is now limited to the number of actual images (not duplicated). Previously, because a certain bucket could contain fewer actual images than the batch size, the batch could contain the same (duplicated) images.
    - ``--random_crop`` now also works with buckets enabled.
        - Instead of always cropping the center of the image, the crop is shifted left, right, up, and down to produce the training data. This is expected to help the model learn the edges of the image.
        - Implementation of discussion [#34](https://github.com/kohya-ss/sd-scripts/discussions/34).
2023-02-06 11:04:07 -05:00
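A minimal sketch of the bucket rounding described above, using only the round-down-to-step arithmetic from the 300x300 example (not the actual sd-scripts bucketing code):

```python
def bucket_side(image_side: int, bucket_reso_steps: int = 64) -> int:
    """Round an image side down to the nearest multiple of the bucket step."""
    return (image_side // bucket_reso_steps) * bucket_reso_steps

# With --bucket_no_upscale, a 300x300 image lands in a 256x256 bucket and is trimmed.
assert bucket_side(300) == 256
```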
bmaltais
2486af9903 Update to latest dev code of kohya_ss. WIP 2023-02-05 14:16:53 -05:00
bmaltais
2626214f8a Add support for LoRA resizing 2023-02-04 11:55:06 -05:00
bmaltais
20e62af1a6 Update to latest kohya_ss sd-script code 2023-02-03 14:40:03 -05:00
bmaltais
c8f4c9d6e8 Add support for lr_scheduler_num_cycles, lr_scheduler_power 2023-01-30 08:26:15 -05:00
bmaltais
2ec7432440 Fix issue 81:
https://github.com/bmaltais/kohya_ss/issues/81
2023-01-29 11:17:30 -05:00
bmaltais
d45a7abb46 Add reference to Linux docker port 2023-01-29 11:12:05 -05:00
bmaltais
bc8a4757f8 Sync with kohya 2023/01/29 update 2023-01-29 11:10:06 -05:00
bmaltais
a4957cfea7 Adding LoRA tutorial 2023-01-27 19:46:13 -05:00
bmaltais
202923b3ce Add support for the --keep_tokens option 2023-01-27 07:33:44 -05:00
bmaltais
bf371b49bf Fix issue 71 2023-01-27 07:04:35 -05:00