* 2023/02/06 (v20.7.0)
  - ``--bucket_reso_steps`` and ``--bucket_no_upscale`` options are added to the training scripts (fine tuning, DreamBooth, LoRA and Textual Inversion) and ``prepare_buckets_latents.py``.
  - ``--bucket_reso_steps`` sets the step size for bucket resolutions in aspect ratio bucketing. The default is 64, the same as before.
    - Any value greater than or equal to 1 can be specified; 64 is highly recommended, and the value should at least be divisible by 8.
    - If a value less than 64 is specified, padding occurs inside the U-Net; the effect on results is unknown.
    - If the value is not divisible by 8, it is effectively truncated to a multiple of 8 inside the VAE, because the latent is 1/8 of the image size (see the sketch below).
  - If the ``--bucket_no_upscale`` option is specified, images smaller than the bucket size are used without upscaling.
    - Internally, a bucket smaller than the image size is created (for example, for a 300x300 image with ``bucket_reso_steps=64``, the bucket is 256x256), and the image is trimmed to fit.
    - Implements [#130](https://github.com/kohya-ss/sd-scripts/issues/130).
    - Images whose area exceeds the maximum size given by ``--resolution`` are downsampled to the maximum bucket size.
  - The number of items in each batch is now capped at the number of distinct images. Previously, a bucket holding fewer actual images than the batch size could yield batches containing the same (duplicated) image.
  - ``--random_crop`` now also works with buckets enabled.
    - Instead of always cropping the center of the image, the crop window is shifted left, right, up, and down, so the edges of the image are also used as training data.
    - Implements discussion [#34](https://github.com/kohya-ss/sd-scripts/discussions/34).
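To make the rounding and truncation rules above concrete, here is a minimal sketch: a hypothetical standalone helper mirroring the ``round_to_steps`` method added in the diff below, not the library's public API.

```python
def round_to_steps(x: float, reso_steps: int = 64) -> int:
    x = int(x + .5)            # round half up to whole pixels
    return x - x % reso_steps  # then snap down to a multiple of reso_steps

# The changelog example: with --bucket_no_upscale and bucket_reso_steps=64,
# a 300x300 image gets a 256x256 bucket and is trimmed to fit.
assert round_to_steps(300, 64) == 256

# A step value not divisible by 8 is effectively truncated inside the VAE,
# because latents are 1/8 scale: a 300px side becomes a 37px latent (296px).
w = round_to_steps(300, 60)  # 300 (already a multiple of 60)
print(w, (w // 8) * 8)       # 300 296
```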
README.md
@@ -143,6 +143,20 @@ Then redo the installation instruction within the kohya_ss venv.
 ## Change history

+* 2023/02/06 (v20.7.0)
+  - ``--bucket_reso_steps`` and ``--bucket_no_upscale`` options are added to the training scripts (fine tuning, DreamBooth, LoRA and Textual Inversion) and ``prepare_buckets_latents.py``.
+  - ``--bucket_reso_steps`` sets the step size for bucket resolutions in aspect ratio bucketing. The default is 64, the same as before.
+    - Any value greater than or equal to 1 can be specified; 64 is highly recommended, and the value should at least be divisible by 8.
+    - If a value less than 64 is specified, padding occurs inside the U-Net; the effect on results is unknown.
+    - If the value is not divisible by 8, it is effectively truncated to a multiple of 8 inside the VAE, because the latent is 1/8 of the image size.
+  - If the ``--bucket_no_upscale`` option is specified, images smaller than the bucket size are used without upscaling.
+    - Internally, a bucket smaller than the image size is created (for example, for a 300x300 image with ``bucket_reso_steps=64``, the bucket is 256x256), and the image is trimmed to fit.
+    - Implements [#130](https://github.com/kohya-ss/sd-scripts/issues/130).
+    - Images whose area exceeds the maximum size given by ``--resolution`` are downsampled to the maximum bucket size.
+  - The number of items in each batch is now capped at the number of distinct images. Previously, a bucket holding fewer actual images than the batch size could yield batches containing the same (duplicated) image.
+  - ``--random_crop`` now also works with buckets enabled (see the crop sketch after this diff).
+    - Instead of always cropping the center of the image, the crop window is shifted left, right, up, and down, so the edges of the image are also used as training data.
+    - Implements discussion [#34](https://github.com/kohya-ss/sd-scripts/discussions/34).
 * 2023/02/04 (v20.6.1)
   - Add new LoRA resize GUI.
   - ``--persistent_data_loader_workers`` option is added to ``fine_tune.py``, ``train_db.py`` and ``train_network.py``. This option may significantly reduce the waiting time between epochs (see the DataLoader sketch below). Thanks to hitomi!
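The crop sketch referenced above, for illustration only; ``center_crop_box`` and ``random_crop_box`` are hypothetical helpers, not code from this commit:

```python
import random

def center_crop_box(img_w: int, img_h: int, crop_w: int, crop_h: int):
    # Always crops the middle, so image borders never enter a batch.
    x0, y0 = (img_w - crop_w) // 2, (img_h - crop_h) // 2
    return (x0, y0, x0 + crop_w, y0 + crop_h)

def random_crop_box(img_w: int, img_h: int, crop_w: int, crop_h: int):
    # Shifts the window left/right/up/down so edges are also trained on.
    x0 = random.randint(0, img_w - crop_w)
    y0 = random.randint(0, img_h - crop_h)
    return (x0, y0, x0 + crop_w, y0 + crop_h)

print(center_crop_box(640, 480, 512, 448))  # always (64, 16, 576, 464)
print(random_crop_box(640, 480, 512, 448))  # varies per sample
```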
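And the DataLoader sketch for ``--persistent_data_loader_workers``: the option presumably maps to PyTorch's ``persistent_workers`` flag, which keeps worker processes alive across epochs instead of re-spawning them. A minimal sketch under that assumption:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

if __name__ == '__main__':
    dataset = TensorDataset(torch.zeros(64, 3, 64, 64))  # placeholder data
    loader = DataLoader(
        dataset,
        batch_size=4,
        num_workers=2,
        persistent_workers=True,  # keep workers alive between epochs; needs num_workers > 0
    )
    for epoch in range(2):
        for batch in loader:  # no worker re-spawn cost at the epoch boundary
            pass
```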
@@ -557,12 +557,12 @@ def gradio_advanced_training():
             bucket_no_upscale = gr.Checkbox(
                 label="Don't upscale bucket resolution", value=True
             )
-            random_crop = gr.Checkbox(
-                label='Random crop instead of center crop', value=False
-            )
+            bucket_reso_steps = gr.Number(
+                label='Bucket resolution steps', value=64
+            )
             random_crop = gr.Checkbox(
                 label='Random crop instead of center crop', value=False
             )
         with gr.Row():
             save_state = gr.Checkbox(label='Save training state', value=False)
             resume = gr.Textbox(
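For orientation, the values of widgets like these are eventually turned into command-line flags for the training scripts. A hypothetical sketch of that wiring (``build_bucket_args`` is illustrative, not this repository's actual helper; ``gr.Number`` yields a float, hence the ``int()``):

```python
def build_bucket_args(bucket_reso_steps, bucket_no_upscale, random_crop):
    # Fold GUI widget values into CLI flags for the training scripts.
    cmd = f' --bucket_reso_steps={int(bucket_reso_steps)}'
    if bucket_no_upscale:
        cmd += ' --bucket_no_upscale'
    if random_crop:
        cmd += ' --random_crop'
    return cmd

print(build_bucket_args(64.0, True, False))
# ' --bucket_reso_steps=64 --bucket_no_upscale'
```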
@@ -123,6 +123,10 @@ class BucketManager():
       self.buckets.append([])
     # print(reso, bucket_id, len(self.buckets))

+  def round_to_steps(self, x):
+    x = int(x + .5)
+    return x - x % self.reso_steps
+
   def select_bucket(self, image_width, image_height):
     aspect_ratio = image_width / image_height
     if not self.no_upscale:

@@ -150,7 +154,24 @@ class BucketManager():
         resized_height = self.max_area / resized_width
         assert abs(resized_width / resized_height - aspect_ratio) < 1e-2, "aspect is illegal"

-        resized_size = (int(resized_width + .5), int(resized_height + .5))
+        # make the resized short or long side a multiple of reso_steps: pick whichever rounding deviates less from the aspect ratio
+        # (same logic as the original bucketing)
+        b_width_rounded = self.round_to_steps(resized_width)
+        b_height_in_wr = self.round_to_steps(b_width_rounded / aspect_ratio)
+        ar_width_rounded = b_width_rounded / b_height_in_wr
+
+        b_height_rounded = self.round_to_steps(resized_height)
+        b_width_in_hr = self.round_to_steps(b_height_rounded * aspect_ratio)
+        ar_height_rounded = b_width_in_hr / b_height_rounded
+
+        # print(b_width_rounded, b_height_in_wr, ar_width_rounded)
+        # print(b_width_in_hr, b_height_rounded, ar_height_rounded)
+
+        if abs(ar_width_rounded - aspect_ratio) < abs(ar_height_rounded - aspect_ratio):
+          resized_size = (b_width_rounded, int(b_width_rounded / aspect_ratio + .5))
+        else:
+          resized_size = (int(b_height_rounded * aspect_ratio + .5), b_height_rounded)
+        # print(resized_size)
       else:
         resized_size = (image_width, image_height)  # no resizing needed
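The width-rounded versus height-rounded choice above is easier to see standalone. The following sketch re-runs the same arithmetic outside the class; the input numbers are illustrative:

```python
def round_to_steps(x, reso_steps=64):
    x = int(x + .5)
    return x - x % reso_steps

def snap_resized_size(resized_width, resized_height, reso_steps=64):
    aspect_ratio = resized_width / resized_height
    # Candidate 1: round the width, derive the height from it.
    b_w = round_to_steps(resized_width, reso_steps)
    ar_w = b_w / round_to_steps(b_w / aspect_ratio, reso_steps)
    # Candidate 2: round the height, derive the width from it.
    b_h = round_to_steps(resized_height, reso_steps)
    ar_h = round_to_steps(b_h * aspect_ratio, reso_steps) / b_h
    # Keep whichever candidate deviates less from the true aspect ratio.
    if abs(ar_w - aspect_ratio) < abs(ar_h - aspect_ratio):
        return (b_w, int(b_w / aspect_ratio + .5))
    return (int(b_h * aspect_ratio + .5), b_h)

print(snap_resized_size(700, 450))  # (640, 411): width rounding wins here
```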
@@ -889,13 +910,14 @@ def debug_dataset(train_dataset, show_input_ids=False):
   k = 0
   for i, example in enumerate(train_dataset):
     if example['latents'] is not None:
-      print("sample has latents from npz file")
+      print(f"sample has latents from npz file: {example['latents'].size()}")
     for j, (ik, cap, lw, iid) in enumerate(zip(example['image_keys'], example['captions'], example['loss_weights'], example['input_ids'])):
-      print(f'{ik}, size: {train_dataset.image_data[ik].image_size}, caption: "{cap}", loss weight: {lw}')
+      print(f'{ik}, size: {train_dataset.image_data[ik].image_size}, loss weight: {lw}, caption: "{cap}"')
       if show_input_ids:
         print(f"input ids: {iid}")
       if example['images'] is not None:
         im = example['images'][j]
         print(f"image size: {im.size()}")
         im = ((im.numpy() + 1.0) * 127.5).astype(np.uint8)
         im = np.transpose(im, (1, 2, 0))  # c,H,W -> H,W,c
         im = im[:, :, ::-1]  # RGB -> BGR (OpenCV)

@@ -1696,4 +1718,4 @@ class ImageLoadingDataset(torch.utils.data.Dataset):
       return (tensor_pil, img_path)


 # endregion
 # endregion
@@ -67,7 +67,7 @@ def main():
     parser = argparse.ArgumentParser(description='Resize images in a folder to a specified max resolution(s)')
     parser.add_argument('src_img_folder', type=str, help='Source folder containing the images')
     parser.add_argument('dst_img_folder', type=str, help='Destination folder to save the resized images')
-    parser.add_argument('--max_resolution', type=str, help='Maximum resolution(s) in the format "512x512,384x384, etc, etc"', default="512x512,384x384,256x256,128x128")
+    parser.add_argument('--max_resolution', type=str, help='Maximum resolution(s) in the format "512x512,448x448,384x384, etc, etc"', default="512x512,448x448,384x384")
     parser.add_argument('--divisible_by', type=int, help='Ensure new dimensions are divisible by this value', default=1)
     args = parser.parse_args()

     resize_images(args.src_img_folder, args.dst_img_folder, args.max_resolution)
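The ``--divisible_by`` option shown as context in this hunk clamps output dimensions to a multiple of the given value. A minimal sketch of that kind of adjustment, assuming simple floor rounding (``make_divisible`` is a hypothetical helper, not the tool's actual code):

```python
def make_divisible(width: int, height: int, divisor: int = 1):
    # Floor each dimension to the nearest multiple of `divisor`;
    # divisor=1 (the default) leaves dimensions unchanged.
    return (width - width % divisor, height - height % divisor)

print(make_divisible(301, 451, 8))  # (296, 448)
```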