3bf5591efe | 2022-12-24 21:35:29 +02:00 | Yuval Aboulafia | fix F541 f-string without any placeholders
c0355caefe | 2022-12-14 21:01:32 -05:00 | Jim Hays | Fix various typos
c9a2cfdf2a | 2022-12-03 10:19:51 +03:00 | AUTOMATIC1111 | Merge branch 'master' into racecond_fix
4d5f1691dd | 2022-11-30 10:33:42 -05:00 | brkirch | Use devices.autocast instead of torch.autocast
b48b7999c8 | 2022-11-27 12:19:59 +03:00 | AUTOMATIC | Merge remote-tracking branch 'flamelaw/master'
755df94b2a | 2022-11-27 00:35:44 +09:00 | flamelaw | set TI AdamW default weight decay to 0
ce6911158b | 2022-11-26 16:10:46 +03:00 | AUTOMATIC | Add support Stable Diffusion 2.0
89d8ecff09 | 2022-11-23 02:49:01 +09:00 | flamelaw | small fixes
5b57f61ba4 | 2022-11-21 10:15:46 +09:00 | flamelaw | fix pin_memory with different latent sampling method
bd68e35de3 | 2022-11-20 12:35:26 +09:00 | flamelaw | Gradient accumulation, autocast fix, new latent sampling method, etc
cdc8020d13 | 2022-11-19 12:01:51 +03:00 | AUTOMATIC | change StableDiffusionProcessing to internally use sampler name instead of sampler index
bb832d7725 | 2022-11-05 11:48:38 +07:00 | Muhammad Rizqi Nur | Simplify grad clip
39541d7725 | 2022-11-04 04:50:22 -04:00 | Fampai | Fixes race condition in training when VAE is unloaded (set_current_image can attempt to use the VAE when it is unloaded to the CPU while training)
237e79c77d | 2022-11-02 20:48:58 +07:00 | Muhammad Rizqi Nur | Merge branch 'master' into gradient-clipping
cffc240a73 | 2022-11-01 21:02:07 +01:00 | Nerogar | fixed textual inversion training with inpainting models
890e68aaf7 | 2022-10-31 10:07:12 -04:00 | Fampai | Fixed minor bug (when unloading vae during TI training, generating images after training will error out)
3b0127e698 | 2022-10-31 09:54:51 -04:00 | Fampai | Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into TI_optimizations
006756f9cd | 2022-10-31 07:26:08 -04:00 | Fampai | Added TI training optimizations (option to use xattention optimizations when training; option to unload vae when training)
cd4d59c0de | 2022-10-30 18:57:51 +07:00 | Muhammad Rizqi Nur | Merge master
3d58510f21 | 2022-10-30 00:54:59 +07:00 | Muhammad Rizqi Nur | Fix dataset still being loaded even when training will be skipped
a07f054c86 | 2022-10-30 00:49:29 +07:00 | Muhammad Rizqi Nur | Add missing info on hypernetwork/embedding model log (mentioned here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/1528#discussioncomment-3991513; also group the saving into one)
ab05a74ead | 2022-10-30 00:32:02 +07:00 | Muhammad Rizqi Nur | Revert "Add cleanup after training" (reverts commit 3ce2bfdf95bd5f26d0f6e250e67338ada91980d1)
3ce2bfdf95 | 2022-10-29 19:43:21 +07:00 | Muhammad Rizqi Nur | Add cleanup after training
ab27c111d0 | 2022-10-29 18:09:17 +07:00 | Muhammad Rizqi Nur | Add input validations before loading dataset for training
05e2e40537 | 2022-10-29 15:04:21 +07:00 | Muhammad Rizqi Nur | Merge branch 'master' into gradient-clipping
9ceef81f77 | 2022-10-28 20:48:08 +07:00 | Muhammad Rizqi Nur | Fix log off by 1
16451ca573 | 2022-10-28 17:16:23 +07:00 | Muhammad Rizqi Nur | Learning rate sched syntax support for grad clipping
1618df41ba | 2022-10-28 10:31:27 +07:00 | Muhammad Rizqi Nur | Gradient clipping for textual embedding
737eb28fac | 2022-10-26 17:38:08 +03:00 | DepFA | typo: cmd_opts.embedding_dir to cmd_opts.embeddings_dir
f4e1464217 | 2022-10-26 10:14:35 +03:00 | timntorres | Implement PR #3625 but for embeddings.
4875a6c217 | 2022-10-26 10:14:35 +03:00 | timntorres | Implement PR #3309 but for embeddings.
c2dc9bfa89 | 2022-10-26 10:14:35 +03:00 | timntorres | Implement PR #3189 but for embeddings.
cbb857b675 | 2022-10-26 09:44:02 +03:00 | AUTOMATIC | enable creating embedding with --medvram
18f86e41f6 | 2022-10-24 17:21:18 +02:00 | Melan | Removed two unused imports
7d6b388d71 | 2022-10-21 13:35:01 +03:00 | AUTOMATIC | Merge branch 'ae'
8f59129847 | 2022-10-20 22:37:16 +02:00 | Melan | Some changes to the tensorboard code and hypernetwork support
a6d593a6b5 | 2022-10-20 19:43:21 +02:00 | Melan | Fixed a typo in a variable
29e74d6e71 | 2022-10-20 16:26:16 +02:00 | Melan | Add support for Tensorboard for training embeddings
0087079c2d | 2022-10-20 00:10:59 +01:00 | DepFA | allow overwrite old embedding
1997ccff13 | 2022-10-18 08:55:08 +02:00 | MalumaDev | Merge branch 'master' into test_resolve_conflicts
62edfae257 | 2022-10-17 08:42:17 +03:00 | DepFA | print list of embeddings on reload
ae0fdad64a | 2022-10-16 17:55:58 +02:00 | MalumaDev | Merge branch 'master' into test_resolve_conflicts
0c5fa9a681 | 2022-10-16 09:09:04 +03:00 | AUTOMATIC | do not reload embeddings from disk when doing textual inversion
97ceaa23d0 | 2022-10-16 00:06:36 +02:00 | MalumaDev | Merge branch 'master' into test_resolve_conflicts
b6e3b96dab | 2022-10-15 17:23:39 +03:00 | DepFA | Change vector size footer label
ddf6899df0 | 2022-10-15 17:23:39 +03:00 | DepFA | generalise to popular lossless formats
9a1dcd78ed | 2022-10-15 17:23:39 +03:00 | DepFA | add webp for embed load
939f16529a | 2022-10-15 17:23:39 +03:00 | DepFA | only save 1 image per embedding
9e846083b7 | 2022-10-15 17:23:39 +03:00 | DepFA | add vector size to embed text
7b7561f6e4 | 2022-10-15 16:20:17 +02:00 | MalumaDev | Merge branch 'master' into test_resolve_conflicts