{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "view-in-github"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/bkkaggle/pytorch-CycleGAN-and-pix2pix/blob/master/CycleGAN.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "5VIGyIus8Vr7"
      },
      "source": [
        "Take a look at the [repository](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix) for more information."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "7wNjDKdQy35h"
      },
      "source": [
        "# Install"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "TRm-USlsHgEV"
      },
      "outputs": [],
      "source": [
        "!git clone https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "Pt3igws3eiVp"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "\n",
        "# Work from the repository root so the relative paths below resolve\n",
        "os.chdir('pytorch-CycleGAN-and-pix2pix/')"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "z1EySlOXwwoa"
      },
      "outputs": [],
      "source": [
        "!pip install -r requirements.txt"
      ]
    },
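    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A quick sanity check (not part of the original workflow): confirm that PyTorch is importable and whether a GPU is visible to this runtime."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import torch\n",
        "\n",
        "# Report the installed PyTorch version and whether CUDA is usable;\n",
        "# training works on CPU but is far slower\n",
        "print(torch.__version__)\n",
        "print('CUDA available:', torch.cuda.is_available())"
      ]
    },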
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "8daqlgVhw29P"
      },
      "source": [
        "# Datasets\n",
        "\n",
        "Download one of the official datasets with:\n",
        "\n",
        "- `bash ./datasets/download_cyclegan_dataset.sh [apple2orange, summer2winter_yosemite, horse2zebra, monet2photo, cezanne2photo, ukiyoe2photo, vangogh2photo, maps, cityscapes, facades, iphone2dslr_flower, ae_photos]`\n",
        "\n",
        "Or use your own dataset by creating the appropriate folders and adding your images (a folder-creation sketch follows the list below).\n",
        "\n",
        "- Create a folder for your dataset under `./datasets`.\n",
        "- Create subfolders `testA`, `testB`, `trainA`, and `trainB` under your dataset's folder. Place the images you want to transform from domain A to domain B (e.g. cat2dog) in the `testA` folder, the images you want to transform from domain B to domain A (e.g. dog2cat) in the `testB` folder, and do the same for the `trainA` and `trainB` folders."
      ]
    },
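    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A minimal sketch of the layout described above, using a hypothetical dataset name `mydataset` (replace it with your own):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import os\n",
        "\n",
        "# Create the four splits CycleGAN expects; 'mydataset' is a placeholder name\n",
        "for split in ['trainA', 'trainB', 'testA', 'testB']:\n",
        "    os.makedirs(os.path.join('datasets', 'mydataset', split), exist_ok=True)"
      ]
    },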
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "vrdOettJxaCc"
      },
      "outputs": [],
      "source": [
        "!bash ./datasets/download_cyclegan_dataset.sh horse2zebra"
      ]
    },
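    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Optionally, count the images in each split to confirm the download worked (a small sketch; the path matches the download cell above):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import os\n",
        "\n",
        "# Print how many images each split of horse2zebra contains\n",
        "for split in ['trainA', 'trainB', 'testA', 'testB']:\n",
        "    path = os.path.join('datasets', 'horse2zebra', split)\n",
        "    print(split, len(os.listdir(path)))"
      ]
    },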
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "gdUz4116xhpm"
      },
      "source": [
        "# Pretrained models\n",
        "\n",
        "Download one of the official pretrained models with:\n",
        "\n",
        "- `bash ./scripts/download_cyclegan_model.sh [apple2orange, orange2apple, summer2winter_yosemite, winter2summer_yosemite, horse2zebra, zebra2horse, monet2photo, style_monet, style_cezanne, style_ukiyoe, style_vangogh, sat2map, map2sat, cityscapes_photo2label, cityscapes_label2photo, facades_photo2label, facades_label2photo, iphone2dslr_flower]`\n",
        "\n",
        "Or add your own pretrained model to `./checkpoints/{NAME}_pretrained/latest_net_G.pth`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "B75UqtKhxznS"
      },
      "outputs": [],
      "source": [
        "!bash ./scripts/download_cyclegan_model.sh horse2zebra"
      ]
    },
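    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "If you later want to test a model you trained yourself, you can register its generator under the `_pretrained` naming scheme described above. A sketch, assuming a hypothetical model named `mymodel`:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Hypothetical example: expose your own generator weights to the test script.\n",
        "# Replace 'mymodel' with your model's name, then uncomment and run.\n",
        "# !mkdir -p ./checkpoints/mymodel_pretrained\n",
        "# !cp ./checkpoints/mymodel/latest_net_G_A.pth ./checkpoints/mymodel_pretrained/latest_net_G.pth"
      ]
    },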
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "yFw1kDQBx3LN"
      },
      "source": [
        "# Training\n",
        "\n",
        "- `python train.py --dataroot ./datasets/horse2zebra --name horse2zebra --model cycle_gan`\n",
        "\n",
        "Change `--dataroot` and `--name` to your own dataset's path and model's name. Use `--gpu_ids 0,1,...` to train on multiple GPUs and `--batch_size` to change the batch size. I've found that a batch size of 16 fits onto 4 V100s and can finish training an epoch in ~90s.\n",
        "\n",
        "Once training finishes, copy the latest generator checkpoint to the filename the test script expects (a cell for this follows the training run below):\n",
        "\n",
        "Use `cp ./checkpoints/horse2zebra/latest_net_G_A.pth ./checkpoints/horse2zebra/latest_net_G.pth` if you want to transform images from domain A to domain B, and `cp ./checkpoints/horse2zebra/latest_net_G_B.pth ./checkpoints/horse2zebra/latest_net_G.pth` if you want to transform images from domain B to domain A.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "0sp7TCT2x9dB"
      },
      "outputs": [],
      "source": [
        "# --display_id -1 disables the visdom web display, which isn't available here\n",
        "!python train.py --dataroot ./datasets/horse2zebra --name horse2zebra --model cycle_gan --display_id -1"
      ]
    },
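    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Once training has finished, copy the latest generator checkpoint to the filename `test.py` looks for, as described in the Training notes above. The uncommented line handles the A-to-B direction; use the commented one for B-to-A."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Copy the A->B generator so test.py can find it under the generic name\n",
        "!cp ./checkpoints/horse2zebra/latest_net_G_A.pth ./checkpoints/horse2zebra/latest_net_G.pth\n",
        "\n",
        "# For the B->A direction, use this instead:\n",
        "# !cp ./checkpoints/horse2zebra/latest_net_G_B.pth ./checkpoints/horse2zebra/latest_net_G.pth"
      ]
    },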
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "9UkcaFZiyASl"
      },
      "source": [
        "# Testing\n",
        "\n",
        "- `python test.py --dataroot datasets/horse2zebra/testA --name horse2zebra_pretrained --model test --no_dropout`\n",
        "\n",
        "Change `--dataroot` and `--name` to be consistent with your trained model's configuration.\n",
        "\n",
        "> From https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix:\n",
        ">\n",
        "> The option `--model test` is used for generating results of CycleGAN only for one side. This option will automatically set `--dataset_mode single`, which only loads the images from one set. On the contrary, using `--model cycle_gan` requires loading and generating results in both directions, which is sometimes unnecessary. The results will be saved at `./results/`. Use `--results_dir {directory_path_to_save_result}` to specify the results directory.\n",
        "\n",
        "> For your own experiments, you might want to specify `--netG`, `--norm`, `--no_dropout` to match the generator architecture of the trained model."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "uCsKkEq0yGh0"
      },
      "outputs": [],
      "source": [
        "!python test.py --dataroot datasets/horse2zebra/testA --name horse2zebra_pretrained --model test --no_dropout"
      ]
    },
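    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To test a model you trained yourself rather than the downloaded pretrained one, point `--name` at your own checkpoint directory. A sketch, assuming the training and checkpoint-copy cells above were run first:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Assumes the training and checkpoint-copy cells above completed; uncomment to run\n",
        "# !python test.py --dataroot datasets/horse2zebra/testA --name horse2zebra --model test --no_dropout"
      ]
    },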
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "OzSKIPUByfiN"
      },
      "source": [
        "# Visualize"
      ]
    },
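    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The output filenames depend on the input images, so the exact paths in the next two cells may differ for you. Listing the results directory first shows what `test.py` actually produced (a small sketch):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import os\n",
        "\n",
        "# Show a few of the real/fake image pairs written by test.py\n",
        "results_dir = './results/horse2zebra_pretrained/test_latest/images'\n",
        "print(sorted(os.listdir(results_dir))[:10])"
      ]
    },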
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "9Mgg8raPyizq"
      },
      "outputs": [],
      "source": [
        "import matplotlib.pyplot as plt\n",
        "\n",
        "# Show one of the generated (fake) zebra images; the filename depends on the input image\n",
        "img = plt.imread('./results/horse2zebra_pretrained/test_latest/images/n02381460_1010_fake.png')\n",
        "plt.imshow(img)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "0G3oVH9DyqLQ"
      },
      "outputs": [],
      "source": [
        "import matplotlib.pyplot as plt\n",
        "\n",
        "# Show the corresponding input (real) horse image for comparison\n",
        "img = plt.imread('./results/horse2zebra_pretrained/test_latest/images/n02381460_1010_real.png')\n",
        "plt.imshow(img)"
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "collapsed_sections": [],
      "include_colab_link": true,
      "name": "CycleGAN",
      "provenance": []
    },
    "environment": {
      "name": "tf2-gpu.2-3.m74",
      "type": "gcloud",
      "uri": "gcr.io/deeplearning-platform-release/tf2-gpu.2-3:m74"
    },
    "kernelspec": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.7.10"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 4
}