Commit Graph

434 Commits

Author SHA1 Message Date
Ki-wimon
88a49df47e
set initial value to LoCon conv parameters 2023-03-01 18:57:02 +08:00
Ki-wimon
cdb8bb1182
update code of cloning locon scripts 2023-03-01 13:07:38 +08:00
Ki-wimon
6bcd52c9cc
update new locon args support 2023-03-01 12:22:11 +08:00
Ki-wimon
c07e3bba76
add new LoCon args 2023-03-01 12:19:18 +08:00
Ki-wimon
d76fe7d4e0
LoCon script auto upgrade feature 2023-02-28 22:58:35 +08:00
bmaltais
b1fb87a9e1 Merging PR into LoCon branch 2023-02-28 07:45:42 -05:00
bmaltais
dfd155a8e1 Undo LoCon commit 2023-02-28 07:37:19 -05:00
bmaltais
04f0f0cf4f
Merge pull request #245 from Ki-wimon/master
LoCon support feature
2023-02-28 07:04:27 -05:00
bmaltais
f6bec77eaa
Merge branch 'dev' into master 2023-02-28 07:04:08 -05:00
Ki-wimon
c32a99dad5
Update lora_gui.py 2023-02-28 01:38:05 +08:00
Ki-wimon
6e664f1176
support locon 2023-02-28 01:16:23 +08:00
bmaltais
9d2e3f85a2 Add tensorboard support 2023-02-26 19:49:22 -05:00
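For context, TensorBoard logs written during training are normally viewed with the standard TensorBoard CLI; the log directory name below is only an assumption, not a path defined by this commit:

```
# Assumes training was configured to write event files to .\logs (directory name is an assumption).
tensorboard --logdir .\logs --port 6006
```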
bmaltais
74cce23116
Merge pull request #238 from bmaltais/dev
v20.8.2
2023-02-26 15:14:02 -05:00
bmaltais
5e267b23af Adding activate script 2023-02-26 15:12:50 -05:00
bmaltais
6b5d6303cc Update 2023-02-26 15:11:21 -05:00
bmaltais
f213b15014 Updates 2023-02-24 20:37:51 -05:00
bmaltais
3b93266aae Bug: Fix issue: https://github.com/bmaltais/kohya_ss/issues/231 2023-02-24 07:30:37 -05:00
bmaltais
8775667fc7 Updates 2023-02-23 21:48:45 -05:00
bmaltais
c7e99eb54b
Merge pull request #227 from bmaltais/dev
v20.8.1
2023-02-23 19:26:16 -05:00
bmaltais
fe4558633d Fix date 2023-02-23 19:25:51 -05:00
bmaltais
df6092a52b Update readme 2023-02-23 19:25:09 -05:00
bmaltais
60ad22733c Update to latest code version 2023-02-23 19:21:30 -05:00
bmaltais
49bfd3a618
Merge pull request #223 from bmaltais/dev
v20.8.0
2023-02-22 20:33:41 -05:00
bmaltais
bf0344ba9e Adding GUI support for new features 2023-02-22 20:32:57 -05:00
bmaltais
2a5fb346d5 Sync to latest code update on sd-script 2023-02-22 13:30:29 -05:00
bmaltais
34ab8448fb Fix issue where dadaptation code was pushed by mistake 2023-02-20 08:26:45 -05:00
bmaltais
1807c548b5 Merge branch 'dev' of https://github.com/bmaltais/kohya_ss into dev 2023-02-20 07:56:40 -05:00
bmaltais
dfc9a8dd40 Fix issue with save config 2023-02-20 07:56:24 -05:00
bmaltais
39ac6b0086
Merge pull request #209 from bmaltais/dev
v20.7.4
2023-02-19 20:16:51 -05:00
bmaltais
611a0f3e76
Merge branch 'master' into dev 2023-02-19 20:16:44 -05:00
bmaltais
758bfe85dc Adding support for Lion optimizer in gui 2023-02-19 20:13:03 -05:00
bmaltais
bb57c1a36e Update code to latest sd-script version 2023-02-19 06:50:33 -05:00
bmaltais
48122347a3
Merge pull request #189 from bmaltais/LR-Free
v20.7.3
2023-02-17 19:18:39 -05:00
bmaltais
674ed88d13 * 2023/02/16 (v20.7.3)
- Noise offset is recorded to the metadata. Thanks to space-nuko!
    - Show the moving average loss to prevent loss jumping in `train_network.py` and `train_db.py`. Thanks to shirayu!
2023-02-17 19:18:11 -05:00
bmaltais
f9863e3950 add dadaptation to other trainers 2023-02-16 19:33:46 -05:00
bmaltais
655f885cf4 Add dadaptation to other trainers 2023-02-16 19:33:33 -05:00
bmaltais
641a168e55 Integrate new kohya sd-script 2023-02-14 18:52:08 -05:00
bmaltais
a1f6438f7b Upgrade upgrade.ps1 script to fix reported issue:
https://github.com/bmaltais/kohya_ss/issues/165
2023-02-14 17:42:36 -05:00
bmaltais
6129c7dd40 1st implementation 2023-02-13 21:20:09 -05:00
bmaltais
261b6790ee Update tool 2023-02-12 07:02:05 -05:00
bmaltais
a008c62893
Merge pull request #147 from bmaltais/dev
v20.7.2
2023-02-11 12:00:17 -05:00
bmaltais
a49fb9cb8c 2023/02/11 (v20.7.2):
- ``lora_interrogator.py`` is added in the ``networks`` folder. See ``python networks\lora_interrogator.py -h`` for usage (a usage sketch also follows this entry).
        - For LoRAs where the activation word is unknown, this script compares the Text Encoder output with the LoRA applied against the output without it, to find out which tokens are affected by the LoRA. Hopefully you can figure out the activation word. LoRAs trained with captions do not seem to be interrogatable.
        - Batch size can be large (like 64 or 128).
    - ``train_textual_inversion.py`` now supports multiple init words.
    - The following feature is reverted to behave the same as before. Sorry for the confusion:
        > Now the number of data in each batch is limited to the number of actual images (not duplicated). Because a certain bucket may contain a smaller number of actual images, the batch may otherwise contain the same (duplicated) images.
    - Add a new tool to sort, group, and average-crop images in a dataset
2023-02-11 11:59:38 -05:00
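A rough sketch of the interrogator workflow described above. Only ``-h`` is confirmed by the entry; the other flag names are assumptions, so check the help output for the script's actual interface:

```
# List the available options (documented above).
python networks\lora_interrogator.py -h

# Hypothetical invocation; the flag names below are assumptions, not the confirmed interface.
python networks\lora_interrogator.py --sd_model base_model.safetensors --model my_lora.safetensors --batch_size 64
```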
bmaltais
e5f8ba559f Add server_port and inbrowser support
- to all gui scripts
2023-02-10 08:22:03 -05:00
bmaltais
dff9710c81
Merge pull request #139 from bmaltais/dev
Reverting changes to startup commands
2023-02-09 20:25:48 -05:00
bmaltais
56d171c55b Reverting changes to startup commands 2023-02-09 20:25:42 -05:00
bmaltais
9660d83612
Merge pull request #129 from jonathanzhang53/master
README documentation update
2023-02-09 19:19:01 -05:00
bmaltais
9a7bb4c624
Merge pull request #138 from bmaltais/dev
v20.7.1
2023-02-09 19:18:08 -05:00
bmaltais
7bc93821a0 2023/02/09 (v20.7.1)
- Caption dropout is supported in ``train_db.py``, ``fine_tune.py`` and ``train_network.py``. Thanks to forestsource!
        - ``--caption_dropout_rate`` option specifies the dropout rate for captions (0~1.0, 0.1 means a 10% chance of dropout). If dropout occurs, the image is trained with an empty caption. Default is 0 (no dropout).
        - ``--caption_dropout_every_n_epochs`` option specifies the interval (in epochs) at which captions are dropped. If ``3`` is specified, then in epochs 3, 6, 9, ..., images are trained with all captions empty. Default is None (no dropout).
        - ``--caption_tag_dropout_rate`` option specifies the dropout rate for tags (comma-separated tokens) (0~1.0, 0.1 means a 10% chance of dropout). If dropout occurs, the tag is removed from the caption. If the ``--keep_tokens`` option is set, those tokens (tags) are not dropped. Default is 0 (no dropout). A combined usage sketch follows this entry.
        - The bulk image downsampling script is added. Documentation is [here](https://github.com/kohya-ss/sd-scripts/blob/main/train_network_README-ja.md#%E7%94%BB%E5%83%8F%E3%83%AA%E3%82%B5%E3%82%A4%E3%82%BA%E3%82%B9%E3%82%AF%E3%83%AA%E3%83%97%E3%83%88) (in Japanese). Thanks to bmaltais!
        - Typo check is added. Thanks to shirayu!
    - Add option to auto-launch the GUI in a browser and set the server_port. Use either `gui.ps1 --inbrowser --server_port 3456` or `gui.cmd -inbrowser -server_port 3456`
2023-02-09 19:17:24 -05:00
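For example, the caption dropout options above could be combined on a ``train_network.py`` invocation roughly as follows. This is a sketch only: the dropout flags are the ones named in this entry, while the usual model and dataset arguments are omitted and assumed to be supplied as normal:

```
# Sketch: drop whole captions 10% of the time, drop individual tags 20% of the time,
# and train with all captions empty every 3rd epoch; keep the first tag from dropout.
# The remaining required training arguments are omitted here.
python train_network.py --caption_dropout_rate 0.1 --caption_tag_dropout_rate 0.2 --caption_dropout_every_n_epochs 3 --keep_tokens 1
```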
bmaltais
90c0d55457 2023/02/09 (v20.7.1)
- Caption dropout is supported in ``train_db.py``, ``fine_tune.py`` and ``train_network.py``. Thanks to forestsource!
        - ``--caption_dropout_rate`` option specifies the dropout rate for captions (0~1.0, 0.1 means a 10% chance of dropout). If dropout occurs, the image is trained with an empty caption. Default is 0 (no dropout).
        - ``--caption_dropout_every_n_epochs`` option specifies the interval (in epochs) at which captions are dropped. If ``3`` is specified, then in epochs 3, 6, 9, ..., images are trained with all captions empty. Default is None (no dropout).
        - ``--caption_tag_dropout_rate`` option specifies the dropout rate for tags (comma-separated tokens) (0~1.0, 0.1 means a 10% chance of dropout). If dropout occurs, the tag is removed from the caption. If the ``--keep_tokens`` option is set, those tokens (tags) are not dropped. Default is 0 (no dropout).
        - The bulk image downsampling script is added. Documentation is [here](https://github.com/kohya-ss/sd-scripts/blob/main/train_network_README-ja.md#%E7%94%BB%E5%83%8F%E3%83%AA%E3%82%B5%E3%82%A4%E3%82%BA%E3%82%B9%E3%82%AF%E3%83%AA%E3%83%97%E3%83%88) (in Japanese). Thanks to bmaltais!
        - Typo check is added. Thanks to shirayu!
    - Add option to auto-launch the GUI in a browser and set the server_port. Use either `gui.ps1 --inbrowser --server_port 3456` or `gui.cmd -inbrowser -server_port 3456`
2023-02-09 19:17:17 -05:00
jonathanzhang53
6c4348233f README documentation update 2023-02-07 22:32:54 -05:00