{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://colab.research.google.com/github/bkkaggle/pytorch-CycleGAN-and-pix2pix/blob/master/pix2pix.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "7wNjDKdQy35h"
},
"source": [
"# Install"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "TRm-USlsHgEV"
},
"outputs": [],
"source": [
"!git clone https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "Pt3igws3eiVp"
},
"outputs": [],
"source": [
"import os\n",
"os.chdir('pytorch-CycleGAN-and-pix2pix/')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "z1EySlOXwwoa"
},
"outputs": [],
"source": [
"!pip install -r requirements.txt"
]
},
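{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optional sanity check: confirm that PyTorch can see a GPU before training (a minimal sketch, assuming `torch` was installed by `requirements.txt` or is preinstalled on the Colab runtime)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"\n",
"# Should print True and the device name on a GPU runtime.\n",
"print(torch.cuda.is_available())\n",
"if torch.cuda.is_available():\n",
"    print(torch.cuda.get_device_name(0))"
]
},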
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "8daqlgVhw29P"
},
"source": [
"# Datasets\n",
"\n",
"Download one of the official datasets with:\n",
"\n",
"- `bash ./datasets/download_pix2pix_dataset.sh [cityscapes, night2day, edges2handbags, edges2shoes, facades, maps]`\n",
"\n",
"Or use your own dataset by creating the appropriate folders and adding your images (a minimal layout sketch follows below). Follow the instructions [here](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/docs/datasets.md#pix2pix-datasets)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "vrdOettJxaCc"
},
"outputs": [],
"source": [
"!bash ./datasets/download_pix2pix_dataset.sh facades"
]
},
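{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the folder layout for a custom dataset, assuming the aligned pix2pix format described in the linked docs (each image is an A|B pair concatenated side by side; `my_dataset` is a placeholder name)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"# pix2pix expects train/val/test splits under the dataset root.\n",
"for split in ['train', 'val', 'test']:\n",
"    os.makedirs(os.path.join('datasets', 'my_dataset', split), exist_ok=True)\n",
"\n",
"# Copy your combined A|B images into these folders, then pass\n",
"# --dataroot ./datasets/my_dataset to train.py / test.py."
]
},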
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "gdUz4116xhpm"
},
"source": [
"# Pretrained models\n",
"\n",
"Download one of the official pretrained models with:\n",
"\n",
"- `bash ./scripts/download_pix2pix_model.sh [edges2shoes, sat2map, map2sat, facades_label2photo, day2night]`\n",
"\n",
"Or add your own pretrained model to `./checkpoints/{NAME}_pretrained/latest_net_G.pth`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "GC2DEP4M0OsS"
},
"outputs": [],
"source": [
"!bash ./scripts/download_pix2pix_model.sh facades_label2photo"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "yFw1kDQBx3LN"
},
"source": [
"# Training\n",
"\n",
"- `python train.py --dataroot ./datasets/facades --name facades_pix2pix --model pix2pix --direction BtoA`\n",
"\n",
"Change `--dataroot` and `--name` to your own dataset's path and model name. Use `--gpu_ids 0,1,..` to train on multiple GPUs and `--batch_size` to change the batch size (a multi-GPU variant is sketched after the training run below). Add `--direction BtoA` if you want to train a model to transform from class B to A."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "0sp7TCT2x9dB"
},
"outputs": [],
"source": [
"!python train.py --dataroot ./datasets/facades --name facades_pix2pix --model pix2pix --direction BtoA --display_id -1"
]
},
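{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same run on multiple GPUs with a larger batch size might look like the cell below (a sketch; adjust `--gpu_ids` and `--batch_size` to the hardware you actually have)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: assumes two GPUs (ids 0 and 1) are available.\n",
"!python train.py --dataroot ./datasets/facades --name facades_pix2pix --model pix2pix --direction BtoA --gpu_ids 0,1 --batch_size 4 --display_id -1"
]
},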
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "9UkcaFZiyASl"
},
"source": [
"# Testing\n",
"\n",
"- `python test.py --dataroot ./datasets/facades --direction BtoA --model pix2pix --name facades_pix2pix`\n",
"\n",
"Change `--dataroot`, `--name`, and `--direction` to be consistent with your trained model's configuration and how you want to transform images.\n",
"\n",
"> From https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix:\n",
"> Note that we specified --direction BtoA as Facades dataset's A to B direction is photos to labels.\n",
"\n",
"> If you would like to apply a pre-trained model to a collection of input images (rather than image pairs), please use --model test option. See ./scripts/test_single.sh for how to apply a model to Facade label maps (stored in the directory facades/testB).\n",
"\n",
"> See a list of currently available models at ./scripts/download_pix2pix_model.sh\n",
"\n",
"A single-image sketch adapted from `./scripts/test_single.sh` follows the test run below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "mey7o6j-0368"
},
"outputs": [],
"source": [
"!ls checkpoints/"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "uCsKkEq0yGh0"
},
"outputs": [],
"source": [
"!python test.py --dataroot ./datasets/facades --direction BtoA --model pix2pix --name facades_label2photo_pretrained --use_wandb"
]
},
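{
"cell_type": "markdown",
"metadata": {},
"source": [
"A sketch of the single-image mode mentioned above, adapted from the flags in `./scripts/test_single.sh` (assumes the `facades_label2photo` pretrained model downloaded earlier, with label maps in `./datasets/facades/testB`)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Apply the generator to unpaired label maps rather than A|B pairs.\n",
"!python test.py --dataroot ./datasets/facades/testB/ --name facades_label2photo_pretrained --model test --netG unet_256 --direction BtoA --dataset_mode single --norm batch"
]
},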
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "OzSKIPUByfiN"
},
"source": [
"# Visualize"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "9Mgg8raPyizq"
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"\n",
"img = plt.imread('./results/facades_label2photo_pretrained/test_latest/images/100_fake_B.png')\n",
"plt.imshow(img)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "0G3oVH9DyqLQ"
},
"outputs": [],
"source": [
"img = plt.imread('./results/facades_label2photo_pretrained/test_latest/images/100_real_A.png')\n",
"plt.imshow(img)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "ErK5OC1j1LH4"
},
"outputs": [],
"source": [
"img = plt.imread('./results/facades_label2photo_pretrained/test_latest/images/100_real_B.png')\n",
"plt.imshow(img)"
]
},
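{
"cell_type": "markdown",
"metadata": {},
"source": [
"To compare the input labels, the generated photo, and the ground truth side by side, a small subplot grid works (a sketch reusing the result paths from the cells above)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"\n",
"base = './results/facades_label2photo_pretrained/test_latest/images/'\n",
"titles = ['real_A (input labels)', 'fake_B (generated)', 'real_B (ground truth)']\n",
"fig, axes = plt.subplots(1, 3, figsize=(15, 5))\n",
"for ax, name, title in zip(axes, ['100_real_A', '100_fake_B', '100_real_B'], titles):\n",
"    ax.imshow(plt.imread(base + name + '.png'))\n",
"    ax.set_title(title)\n",
"    ax.axis('off')\n",
"plt.show()"
]
}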
],
"metadata": {
"accelerator": "GPU",
"colab": {
"collapsed_sections": [],
"include_colab_link": true,
"name": "pix2pix",
"provenance": []
},
"environment": {
"name": "tf2-gpu.2-3.m74",
"type": "gcloud",
"uri": "gcr.io/deeplearning-platform-release/tf2-gpu.2-3:m74"
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.10"
}
},
"nbformat": 4,
"nbformat_minor": 4
}