
update with official repo

Niemes · 2 years ago
commit c366b1b80f

+ 146 - 0
.gitignore

@@ -0,0 +1,146 @@
+# Disco-specific ignores
+init_images/*
+images_out/*
+MiDaS/
+models/
+pretrained/*
+settings.json
+
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+#  Usually these files are written by a python script from a template
+#  before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+cover/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+.pybuilder/
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+#   For a library or package, you might want to ignore these files since the code is
+#   intended to run in multiple environments; otherwise, check them in:
+# .python-version
+
+# pipenv
+#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+#   However, in case of collaboration, if having platform-specific dependencies or dependencies
+#   having no cross-platform support, pipenv may install dependencies that don't work, or not
+#   install all needed dependencies.
+#Pipfile.lock
+
+# PEP 582; used by e.g. github.com/David-OConnor/pyflow
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
+
+# pytype static type analyzer
+.pytype/
+
+# Cython debug symbols
+cython_debug/

+ 2746 - 0
Disco_Diffusion.ipynb

@@ -0,0 +1,2746 @@
+{
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/alembics/disco-diffusion/blob/main/Disco_Diffusion.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "TitleTop"
+      },
+      "source": [
+        "# Disco Diffusion v5.2 - Now with VR Mode\n",
+        "\n",
+        "In case of confusion, Disco is the name of this notebook edit. The diffusion model in use is Katherine Crowson's fine-tuned 512x512 model\n",
+        "\n",
+        "For issues, join the [Disco Diffusion Discord](https://discord.gg/msEZBy4HxA) or message us on twitter at [@somnai_dreams](https://twitter.com/somnai_dreams) or [@gandamu](https://twitter.com/gandamu_ml)"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "CreditsChTop"
+      },
+      "source": [
+        "### Credits & Changelog \u2b07\ufe0f"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "Credits"
+      },
+      "source": [
+        "#### Credits\n",
+        "\n",
+        "Original notebook by Katherine Crowson (https://github.com/crowsonkb, https://twitter.com/RiversHaveWings). It uses either OpenAI's 256x256 unconditional ImageNet or Katherine Crowson's fine-tuned 512x512 diffusion model (https://github.com/openai/guided-diffusion), together with CLIP (https://github.com/openai/CLIP) to connect text prompts with images.\n",
+        "\n",
+        "Modified by Daniel Russell (https://github.com/russelldc, https://twitter.com/danielrussruss) to include (hopefully) optimal params for quick generations in 15-100 timesteps rather than 1000, as well as more robust augmentations.\n",
+        "\n",
+        "Further improvements from Dango233 and nsheppard helped improve the quality of diffusion in general, and especially so for shorter runs like this notebook aims to achieve.\n",
+        "\n",
+        "Vark added code to load in multiple Clip models at once, which all prompts are evaluated against, which may greatly improve accuracy.\n",
+        "\n",
+        "The latest zoom, pan, rotation, and keyframes features were taken from Chigozie Nri's VQGAN Zoom Notebook (https://github.com/chigozienri, https://twitter.com/chigozienri)\n",
+        "\n",
+        "Advanced DangoCutn Cutout method is also from Dango223.\n",
+        "\n",
+        "--\n",
+        "\n",
+        "Disco:\n",
+        "\n",
+        "Somnai (https://twitter.com/Somnai_dreams) added Diffusion Animation techniques, QoL improvements and various implementations of tech and techniques, mostly listed in the changelog below.\n",
+        "\n",
+        "3D animation implementation added by Adam Letts (https://twitter.com/gandamu_ml) in collaboration with Somnai. Creation of disco.py and ongoing maintenance.\n",
+        "\n",
+        "Turbo feature by Chris Allen (https://twitter.com/zippy731)\n",
+        "\n",
+        "Improvements to ability to run on local systems, Windows support, and dependency installation by HostsServer (https://twitter.com/HostsServer)\n",
+        "\n",
+        "VR Mode by Tom Mason (https://twitter.com/nin_artificial)"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "LicenseTop"
+      },
+      "source": [
+        "#### License"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "License"
+      },
+      "source": [
+        "Licensed under the MIT License\n",
+        "\n",
+        "Copyright (c) 2021 Katherine Crowson \n",
+        "\n",
+        "Permission is hereby granted, free of charge, to any person obtaining a copy\n",
+        "of this software and associated documentation files (the \"Software\"), to deal\n",
+        "in the Software without restriction, including without limitation the rights\n",
+        "to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n",
+        "copies of the Software, and to permit persons to whom the Software is\n",
+        "furnished to do so, subject to the following conditions:\n",
+        "\n",
+        "The above copyright notice and this permission notice shall be included in\n",
+        "all copies or substantial portions of the Software.\n",
+        "\n",
+        "THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n",
+        "IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n",
+        "FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n",
+        "AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n",
+        "LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n",
+        "OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n",
+        "THE SOFTWARE.\n",
+        "\n",
+        "--\n",
+        "\n",
+        "MIT License\n",
+        "\n",
+        "Copyright (c) 2019 Intel ISL (Intel Intelligent Systems Lab)\n",
+        "\n",
+        "Permission is hereby granted, free of charge, to any person obtaining a copy\n",
+        "of this software and associated documentation files (the \"Software\"), to deal\n",
+        "in the Software without restriction, including without limitation the rights\n",
+        "to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n",
+        "copies of the Software, and to permit persons to whom the Software is\n",
+        "furnished to do so, subject to the following conditions:\n",
+        "\n",
+        "The above copyright notice and this permission notice shall be included in all\n",
+        "copies or substantial portions of the Software.\n",
+        "\n",
+        "THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n",
+        "IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n",
+        "FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n",
+        "AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n",
+        "LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n",
+        "OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n",
+        "SOFTWARE.\n",
+        "\n",
+        "--\n",
+        "\n",
+        "Licensed under the MIT License\n",
+        "\n",
+        "Copyright (c) 2021 Maxwell Ingham\n",
+        "\n",
+        "Copyright (c) 2022 Adam Letts \n",
+        "\n",
+        "Permission is hereby granted, free of charge, to any person obtaining a copy\n",
+        "of this software and associated documentation files (the \"Software\"), to deal\n",
+        "in the Software without restriction, including without limitation the rights\n",
+        "to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n",
+        "copies of the Software, and to permit persons to whom the Software is\n",
+        "furnished to do so, subject to the following conditions:\n",
+        "\n",
+        "The above copyright notice and this permission notice shall be included in\n",
+        "all copies or substantial portions of the Software.\n",
+        "\n",
+        "THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n",
+        "IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n",
+        "FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n",
+        "AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n",
+        "LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n",
+        "OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n",
+        "THE SOFTWARE."
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "ChangelogTop"
+      },
+      "source": [
+        "#### Changelog"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "cellView": "form",
+        "id": "Changelog"
+      },
+      "source": [
+        "#@title <- View Changelog\n",
+        "skip_for_run_all = True #@param {type: 'boolean'}\n",
+        "\n",
+        "if skip_for_run_all == False:\n",
+        "  print(\n",
+        "      '''\n",
+        "  v1 Update: Oct 29th 2021 - Somnai\n",
+        "\n",
+        "      QoL improvements added by Somnai (@somnai_dreams), including user friendly UI, settings+prompt saving and improved google drive folder organization.\n",
+        "\n",
+        "  v1.1 Update: Nov 13th 2021 - Somnai\n",
+        "\n",
+        "      Now includes sizing options, intermediate saves and fixed image prompts and perlin inits. unexposed batch option since it doesn't work\n",
+        "\n",
+        "  v2 Update: Nov 22nd 2021 - Somnai\n",
+        "\n",
+        "      Initial addition of Katherine Crowson's Secondary Model Method (https://colab.research.google.com/drive/1mpkrhOjoyzPeSWy2r7T8EYRaU7amYOOi#scrollTo=X5gODNAMEUCR)\n",
+        "\n",
+        "      Noticed settings were saving with the wrong name so corrected it. Let me know if you preferred the old scheme.\n",
+        "\n",
+        "  v3 Update: Dec 24th 2021 - Somnai\n",
+        "\n",
+        "      Implemented Dango's advanced cutout method\n",
+        "\n",
+        "      Added SLIP models, thanks to NeuralDivergent\n",
+        "\n",
+        "      Fixed issue with NaNs resulting in black images, with massive help and testing from @Softology\n",
+        "\n",
+        "      Perlin now changes properly within batches (not sure where this perlin_regen code came from originally, but thank you)\n",
+        "\n",
+        "  v4 Update: Jan 2021 - Somnai\n",
+        "\n",
+        "      Implemented Diffusion Zooming\n",
+        "\n",
+        "      Added Chigozie keyframing\n",
+        "\n",
+        "      Made a bunch of edits to processes\n",
+        "  \n",
+        "  v4.1 Update: Jan  14th 2021 - Somnai\n",
+        "\n",
+        "      Added video input mode\n",
+        "\n",
+        "      Added license that somehow went missing\n",
+        "\n",
+        "      Added improved prompt keyframing, fixed image_prompts and multiple prompts\n",
+        "\n",
+        "      Improved UI\n",
+        "\n",
+        "      Significant under the hood cleanup and improvement\n",
+        "\n",
+        "      Refined defaults for each mode\n",
+        "\n",
+        "      Added latent-diffusion SuperRes for sharpening\n",
+        "\n",
+        "      Added resume run mode\n",
+        "\n",
+        "  v4.9 Update: Feb 5th 2022 - gandamu / Adam Letts\n",
+        "\n",
+        "      Added 3D\n",
+        "\n",
+        "      Added brightness corrections to prevent animation from steadily going dark over time\n",
+        "\n",
+        "  v4.91 Update: Feb 19th 2022 - gandamu / Adam Letts\n",
+        "\n",
+        "      Cleaned up 3D implementation and made associated args accessible via Colab UI elements\n",
+        "\n",
+        "  v4.92 Update: Feb 20th 2022 - gandamu / Adam Letts\n",
+        "\n",
+        "      Separated transform code\n",
+        "\n",
+        "  v5.01 Update: Mar 10th 2022 - gandamu / Adam Letts\n",
+        "\n",
+        "      IPython magic commands replaced by Python code\n",
+        "\n",
+        "  v5.1 Update: Mar 30th 2022 - zippy / Chris Allen and gandamu / Adam Letts\n",
+        "\n",
+        "      Integrated Turbo+Smooth features from Disco Diffusion Turbo -- just the implementation, without its defaults.\n",
+        "\n",
+        "      Implemented resume of turbo animations in such a way that it's now possible to resume from different batch folders and batch numbers.\n",
+        "\n",
+        "      3D rotation parameter units are now degrees (rather than radians)\n",
+        "\n",
+        "      Corrected name collision in sampling_mode (now diffusion_sampling_mode for plms/ddim, and sampling_mode for 3D transform sampling)\n",
+        "\n",
+        "      Added video_init_seed_continuity option to make init video animations more continuous\n",
+        "\n",
+        "  v5.1 Update: Apr 4th 2022 - MSFTserver aka HostsServer\n",
+        "\n",
+        "      Removed pytorch3d from needing to be compiled with a lite version specifically made for Disco Diffusion\n",
+        "\n",
+        "      Remove Super Resolution\n",
+        "\n",
+        "      Remove SLIP Models\n",
+        "\n",
+        "      Update for crossplatform support\n",
+        "\n",
+        "  v5.2 Update: Apr 10th 2022 - nin_artificial / Tom Mason\n",
+        "\n",
+        "      VR Mode\n",
+        "\n",
+        "      '''\n",
+        "  )"
+      ],
+      "outputs": [],
+      "execution_count": null
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "TutorialTop"
+      },
+      "source": [
+        "# Tutorial"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "DiffusionSet"
+      },
+      "source": [
+        "**Diffusion settings (Defaults are heavily outdated)**\n",
+        "---\n",
+        "Disco Diffusion is complex, and continually evolving with new features.  The most current documentation on on Disco Diffusion settings can be found in the unofficial guidebook:\n",
+        "\n",
+        "[Zippy's Disco Diffusion Cheatsheet](https://docs.google.com/document/d/1l8s7uS2dGqjztYSjPpzlmXLjl5PM3IGkRWI3IiCuK7g/edit)\n",
+        "\n",
+        "We also encourage users to join the [Disco Diffusion User Discord](https://discord.gg/XGZrFFCRfN) to learn from the active user community.\n",
+        "\n",
+        "This section below is outdated as of v2\n",
+        "\n",
+        "Setting | Description | Default\n",
+        "--- | --- | ---\n",
+        "**Your vision:**\n",
+        "`text_prompts` | A description of what you'd like the machine to generate. Think of it like writing the caption below your image on a website. | N/A\n",
+        "`image_prompts` | Think of these images more as a description of their contents. | N/A\n",
+        "**Image quality:**\n",
+        "`clip_guidance_scale`  | Controls how much the image should look like the prompt. | 1000\n",
+        "`tv_scale` | Controls the smoothness of the final output. | 150\n",
+        "`range_scale` | Controls how far out of range RGB values are allowed to be. | 150\n",
+        "`sat_scale` | Controls how much saturation is allowed. From nshepperd's JAX notebook. | 0\n",
+        "`cutn` | Controls how many crops to take from the image. | 16\n",
+        "`cutn_batches` | Accumulate CLIP gradient from multiple batches of cuts. | 2\n",
+        "**Init settings:**\n",
+        "`init_image` | URL or local path | None\n",
+        "`init_scale` | This enhances the effect of the init image, a good value is 1000 | 0\n",
+        "`skip_steps` | Controls the starting point along the diffusion timesteps | 0\n",
+        "`perlin_init` | Option to start with random perlin noise | False\n",
+        "`perlin_mode` | ('gray', 'color') | 'mixed'\n",
+        "**Advanced:**\n",
+        "`skip_augs` | Controls whether to skip torchvision augmentations | False\n",
+        "`randomize_class` | Controls whether the imagenet class is randomly changed each iteration | True\n",
+        "`clip_denoised` | Determines whether CLIP discriminates a noisy or denoised image | False\n",
+        "`clamp_grad` | Experimental: Using adaptive clip grad in the cond_fn | True\n",
+        "`seed`  | Choose a random seed and print it at end of run for reproduction | random_seed\n",
+        "`fuzzy_prompt` | Controls whether to add multiple noisy prompts to the prompt losses | False\n",
+        "`rand_mag` | Controls the magnitude of the random noise | 0.1\n",
+        "`eta` | DDIM hyperparameter | 0.5\n",
+        "\n",
+        "..\n",
+        "\n",
+        "**Model settings**\n",
+        "---\n",
+        "\n",
+        "Setting | Description | Default\n",
+        "--- | --- | ---\n",
+        "**Diffusion:**\n",
+        "`timestep_respacing` | Modify this value to decrease the number of timesteps. | ddim100\n",
+        "`diffusion_steps` || 1000\n",
+        "**Diffusion:**\n",
+        "`clip_models` | Models of CLIP to load. Typically the more, the better but they all come at a hefty VRAM cost. | ViT-B/32, ViT-B/16, RN50x4"
+      ]
+    },
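+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "SettingsExampleSketch"
+      },
+      "source": [
+        "# Hypothetical illustration only (not part of the original notebook): a minimal sketch of how a few\n",
+        "# of the settings documented in the tables above might be collected before a run. The names mirror\n",
+        "# the tables; the values are the listed defaults or simple placeholders, not recommendations.\n",
+        "example_settings = {\n",
+        "    'text_prompts': {0: ['A beautiful painting of a lighthouse, trending on artstation.']},\n",
+        "    'clip_guidance_scale': 1000,  # how strongly the image is pushed toward the prompt\n",
+        "    'tv_scale': 150,              # smoothness of the final output\n",
+        "    'range_scale': 150,           # penalty on out-of-range RGB values\n",
+        "    'sat_scale': 0,               # saturation penalty\n",
+        "    'cutn_batches': 2,            # accumulate CLIP gradients over several batches of cuts\n",
+        "    'init_image': None,           # URL or local path of an optional init image\n",
+        "    'skip_steps': 0,              # starting point along the diffusion timesteps\n",
+        "    'eta': 0.5,                   # DDIM hyperparameter\n",
+        "}\n",
+        "print(example_settings)"
+      ],
+      "outputs": [],
+      "execution_count": null
+    },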
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "SetupTop"
+      },
+      "source": [
+        "# 1. Set Up"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "cellView": "form",
+        "id": "CheckGPU"
+      },
+      "source": [
+        "#@title 1.1 Check GPU Status\n",
+        "import subprocess\n",
+        "simple_nvidia_smi_display = False#@param {type:\"boolean\"}\n",
+        "if simple_nvidia_smi_display:\n",
+        "  #!nvidia-smi\n",
+        "  nvidiasmi_output = subprocess.run(['nvidia-smi', '-L'], stdout=subprocess.PIPE).stdout.decode('utf-8')\n",
+        "  print(nvidiasmi_output)\n",
+        "else:\n",
+        "  #!nvidia-smi -i 0 -e 0\n",
+        "  nvidiasmi_output = subprocess.run(['nvidia-smi'], stdout=subprocess.PIPE).stdout.decode('utf-8')\n",
+        "  print(nvidiasmi_output)\n",
+        "  nvidiasmi_ecc_note = subprocess.run(['nvidia-smi', '-i', '0', '-e', '0'], stdout=subprocess.PIPE).stdout.decode('utf-8')\n",
+        "  print(nvidiasmi_ecc_note)"
+      ],
+      "outputs": [],
+      "execution_count": null
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "cellView": "form",
+        "id": "PrepFolders"
+      },
+      "source": [
+        "#@title 1.2 Prepare Folders\n",
+        "import subprocess, os, sys, ipykernel\n",
+        "\n",
+        "def gitclone(url):\n",
+        "  res = subprocess.run(['git', 'clone', url], stdout=subprocess.PIPE).stdout.decode('utf-8')\n",
+        "  print(res)\n",
+        "\n",
+        "def pipi(modulestr):\n",
+        "  res = subprocess.run(['pip', 'install', modulestr], stdout=subprocess.PIPE).stdout.decode('utf-8')\n",
+        "  print(res)\n",
+        "\n",
+        "def pipie(modulestr):\n",
+        "  res = subprocess.run(['git', 'install', '-e', modulestr], stdout=subprocess.PIPE).stdout.decode('utf-8')\n",
+        "  print(res)\n",
+        "\n",
+        "def wget(url, outputdir):\n",
+        "  res = subprocess.run(['wget', url, '-P', f'{outputdir}'], stdout=subprocess.PIPE).stdout.decode('utf-8')\n",
+        "  print(res)\n",
+        "\n",
+        "try:\n",
+        "    from google.colab import drive\n",
+        "    print(\"Google Colab detected. Using Google Drive.\")\n",
+        "    is_colab = True\n",
+        "    #@markdown If you connect your Google Drive, you can save the final image of each run on your drive.\n",
+        "    google_drive = True #@param {type:\"boolean\"}\n",
+        "    #@markdown Click here if you'd like to save the diffusion model checkpoint file to (and/or load from) your Google Drive:\n",
+        "    save_models_to_google_drive = True #@param {type:\"boolean\"}\n",
+        "except:\n",
+        "    is_colab = False\n",
+        "    google_drive = False\n",
+        "    save_models_to_google_drive = False\n",
+        "    print(\"Google Colab not detected.\")\n",
+        "\n",
+        "if is_colab:\n",
+        "    if google_drive is True:\n",
+        "        drive.mount('/content/drive')\n",
+        "        root_path = '/content/drive/MyDrive/AI/Disco_Diffusion'\n",
+        "    else:\n",
+        "        root_path = '/content'\n",
+        "else:\n",
+        "    root_path = os.getcwd()\n",
+        "\n",
+        "import os\n",
+        "def createPath(filepath):\n",
+        "    os.makedirs(filepath, exist_ok=True)\n",
+        "\n",
+        "initDirPath = f'{root_path}/init_images'\n",
+        "createPath(initDirPath)\n",
+        "outDirPath = f'{root_path}/images_out'\n",
+        "createPath(outDirPath)\n",
+        "\n",
+        "if is_colab:\n",
+        "    if google_drive and not save_models_to_google_drive or not google_drive:\n",
+        "        model_path = '/content/models'\n",
+        "        createPath(model_path)\n",
+        "    if google_drive and save_models_to_google_drive:\n",
+        "        model_path = f'{root_path}/models'\n",
+        "        createPath(model_path)\n",
+        "else:\n",
+        "    model_path = f'{root_path}/models'\n",
+        "    createPath(model_path)\n",
+        "\n",
+        "# libraries = f'{root_path}/libraries'\n",
+        "# createPath(libraries)"
+      ],
+      "outputs": [],
+      "execution_count": null
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "cellView": "form",
+        "id": "InstallDeps"
+      },
+      "source": [
+        "#@title ### 1.3 Install and import dependencies\n",
+        "\n",
+        "import pathlib, shutil, os, sys\n",
+        "\n",
+        "if not is_colab:\n",
+        "  # If running locally, there's a good chance your env will need this in order to not crash upon np.matmul() or similar operations.\n",
+        "  os.environ['KMP_DUPLICATE_LIB_OK']='TRUE'\n",
+        "\n",
+        "PROJECT_DIR = os.path.abspath(os.getcwd())\n",
+        "USE_ADABINS = True\n",
+        "\n",
+        "if is_colab:\n",
+        "  if google_drive is not True:\n",
+        "    root_path = f'/content'\n",
+        "    model_path = '/content/models' \n",
+        "else:\n",
+        "  root_path = os.getcwd()\n",
+        "  model_path = f'{root_path}/models'\n",
+        "\n",
+        "model_256_downloaded = False\n",
+        "model_512_downloaded = False\n",
+        "model_secondary_downloaded = False\n",
+        "\n",
+        "multipip_res = subprocess.run(['pip', 'install', 'lpips', 'datetime', 'timm', 'ftfy', 'einops', 'pytorch-lightning', 'omegaconf'], stdout=subprocess.PIPE).stdout.decode('utf-8')\n",
+        "print(multipip_res)\n",
+        "\n",
+        "if is_colab:\n",
+        "  subprocess.run(['apt', 'install', 'imagemagick'], stdout=subprocess.PIPE).stdout.decode('utf-8')\n",
+        "\n",
+        "try:\n",
+        "  from CLIP import clip\n",
+        "except:\n",
+        "  if not os.path.exists(\"CLIP\"):\n",
+        "    gitclone(\"https://github.com/openai/CLIP\")\n",
+        "  sys.path.append(f'{PROJECT_DIR}/CLIP')\n",
+        "\n",
+        "try:\n",
+        "  from guided_diffusion.script_util import create_model_and_diffusion\n",
+        "except:\n",
+        "  if not os.path.exists(\"guided-diffusion\"):\n",
+        "    gitclone(\"https://github.com/crowsonkb/guided-diffusion\")\n",
+        "  sys.path.append(f'{PROJECT_DIR}/guided-diffusion')\n",
+        "\n",
+        "try:\n",
+        "  from resize_right import resize\n",
+        "except:\n",
+        "  if not os.path.exists(\"ResizeRight\"):\n",
+        "    gitclone(\"https://github.com/assafshocher/ResizeRight.git\")\n",
+        "  sys.path.append(f'{PROJECT_DIR}/ResizeRight')\n",
+        "\n",
+        "try:\n",
+        "  import py3d_tools\n",
+        "except:\n",
+        "  if not os.path.exists('pytorch3d-lite'):\n",
+        "    gitclone(\"https://github.com/MSFTserver/pytorch3d-lite.git\")\n",
+        "  sys.path.append(f'{PROJECT_DIR}/pytorch3d-lite')\n",
+        "\n",
+        "try:\n",
+        "  from midas.dpt_depth import DPTDepthModel\n",
+        "except:\n",
+        "  if not os.path.exists('MiDaS'):\n",
+        "    gitclone(\"https://github.com/isl-org/MiDaS.git\")\n",
+        "  if not os.path.exists('MiDaS/midas_utils.py'):\n",
+        "    shutil.move('MiDaS/utils.py', 'MiDaS/midas_utils.py')\n",
+        "  if not os.path.exists(f'{model_path}/dpt_large-midas-2f21e586.pt'):\n",
+        "    wget(\"https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt\", model_path)\n",
+        "  sys.path.append(f'{PROJECT_DIR}/MiDaS')\n",
+        "\n",
+        "try:\n",
+        "  sys.path.append(PROJECT_DIR)\n",
+        "  import disco_xform_utils as dxf\n",
+        "except:\n",
+        "  if not os.path.exists(\"disco-diffusion\"):\n",
+        "    gitclone(\"https://github.com/alembics/disco-diffusion.git\")\n",
+        "  if os.path.exists('disco_xform_utils.py') is not True:\n",
+        "    shutil.move('disco-diffusion/disco_xform_utils.py', 'disco_xform_utils.py')\n",
+        "  sys.path.append(PROJECT_DIR)\n",
+        "\n",
+        "import torch\n",
+        "from dataclasses import dataclass\n",
+        "from functools import partial\n",
+        "import cv2\n",
+        "import pandas as pd\n",
+        "import gc\n",
+        "import io\n",
+        "import math\n",
+        "import timm\n",
+        "from IPython import display\n",
+        "import lpips\n",
+        "from PIL import Image, ImageOps\n",
+        "import requests\n",
+        "from glob import glob\n",
+        "import json\n",
+        "from types import SimpleNamespace\n",
+        "from torch import nn\n",
+        "from torch.nn import functional as F\n",
+        "import torchvision.transforms as T\n",
+        "import torchvision.transforms.functional as TF\n",
+        "from tqdm.notebook import tqdm\n",
+        "from CLIP import clip\n",
+        "from resize_right import resize\n",
+        "from guided_diffusion.script_util import create_model_and_diffusion, model_and_diffusion_defaults\n",
+        "from datetime import datetime\n",
+        "import numpy as np\n",
+        "import matplotlib.pyplot as plt\n",
+        "import random\n",
+        "from ipywidgets import Output\n",
+        "import hashlib\n",
+        "from functools import partial\n",
+        "if is_colab:\n",
+        "  os.chdir('/content')\n",
+        "  from google.colab import files\n",
+        "else:\n",
+        "  os.chdir(f'{PROJECT_DIR}')\n",
+        "from IPython.display import Image as ipyimg\n",
+        "from numpy import asarray\n",
+        "from einops import rearrange, repeat\n",
+        "import torch, torchvision\n",
+        "import time\n",
+        "from omegaconf import OmegaConf\n",
+        "import warnings\n",
+        "warnings.filterwarnings(\"ignore\", category=UserWarning)\n",
+        "\n",
+        "# AdaBins stuff\n",
+        "if USE_ADABINS:\n",
+        "  try:\n",
+        "    from infer import InferenceHelper\n",
+        "  except:\n",
+        "    if os.path.exists(\"AdaBins\") is not True:\n",
+        "      gitclone(\"https://github.com/shariqfarooq123/AdaBins.git\")\n",
+        "    if not os.path.exists(f'{PROJECT_DIR}/pretrained/AdaBins_nyu.pt'):\n",
+        "      createPath(f'{PROJECT_DIR}/pretrained')\n",
+        "      wget(\"https://cloudflare-ipfs.com/ipfs/Qmd2mMnDLWePKmgfS8m6ntAg4nhV5VkUyAydYBp8cWWeB7/AdaBins_nyu.pt\", f'{PROJECT_DIR}/pretrained')\n",
+        "    sys.path.append(f'{PROJECT_DIR}/AdaBins')\n",
+        "  from infer import InferenceHelper\n",
+        "  MAX_ADABINS_AREA = 500000\n",
+        "\n",
+        "import torch\n",
+        "DEVICE = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')\n",
+        "print('Using device:', DEVICE)\n",
+        "device = DEVICE # At least one of the modules expects this name..\n",
+        "\n",
+        "if torch.cuda.get_device_capability(DEVICE) == (8,0): ## A100 fix thanks to Emad\n",
+        "  print('Disabling CUDNN for A100 gpu', file=sys.stderr)\n",
+        "  torch.backends.cudnn.enabled = False"
+      ],
+      "outputs": [],
+      "execution_count": null
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "cellView": "form",
+        "id": "DefMidasFns"
+      },
+      "source": [
+        "#@title ### 1.4 Define Midas functions\n",
+        "\n",
+        "from midas.dpt_depth import DPTDepthModel\n",
+        "from midas.midas_net import MidasNet\n",
+        "from midas.midas_net_custom import MidasNet_small\n",
+        "from midas.transforms import Resize, NormalizeImage, PrepareForNet\n",
+        "\n",
+        "# Initialize MiDaS depth model.\n",
+        "# It remains resident in VRAM and likely takes around 2GB VRAM.\n",
+        "# You could instead initialize it for each frame (and free it after each frame) to save VRAM.. but initializing it is slow.\n",
+        "default_models = {\n",
+        "    \"midas_v21_small\": f\"{model_path}/midas_v21_small-70d6b9c8.pt\",\n",
+        "    \"midas_v21\": f\"{model_path}/midas_v21-f6b98070.pt\",\n",
+        "    \"dpt_large\": f\"{model_path}/dpt_large-midas-2f21e586.pt\",\n",
+        "    \"dpt_hybrid\": f\"{model_path}/dpt_hybrid-midas-501f0c75.pt\",\n",
+        "    \"dpt_hybrid_nyu\": f\"{model_path}/dpt_hybrid_nyu-2ce69ec7.pt\",}\n",
+        "\n",
+        "\n",
+        "def init_midas_depth_model(midas_model_type=\"dpt_large\", optimize=True):\n",
+        "    midas_model = None\n",
+        "    net_w = None\n",
+        "    net_h = None\n",
+        "    resize_mode = None\n",
+        "    normalization = None\n",
+        "\n",
+        "    print(f\"Initializing MiDaS '{midas_model_type}' depth model...\")\n",
+        "    # load network\n",
+        "    midas_model_path = default_models[midas_model_type]\n",
+        "\n",
+        "    if midas_model_type == \"dpt_large\": # DPT-Large\n",
+        "        midas_model = DPTDepthModel(\n",
+        "            path=midas_model_path,\n",
+        "            backbone=\"vitl16_384\",\n",
+        "            non_negative=True,\n",
+        "        )\n",
+        "        net_w, net_h = 384, 384\n",
+        "        resize_mode = \"minimal\"\n",
+        "        normalization = NormalizeImage(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])\n",
+        "    elif midas_model_type == \"dpt_hybrid\": #DPT-Hybrid\n",
+        "        midas_model = DPTDepthModel(\n",
+        "            path=midas_model_path,\n",
+        "            backbone=\"vitb_rn50_384\",\n",
+        "            non_negative=True,\n",
+        "        )\n",
+        "        net_w, net_h = 384, 384\n",
+        "        resize_mode=\"minimal\"\n",
+        "        normalization = NormalizeImage(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])\n",
+        "    elif midas_model_type == \"dpt_hybrid_nyu\": #DPT-Hybrid-NYU\n",
+        "        midas_model = DPTDepthModel(\n",
+        "            path=midas_model_path,\n",
+        "            backbone=\"vitb_rn50_384\",\n",
+        "            non_negative=True,\n",
+        "        )\n",
+        "        net_w, net_h = 384, 384\n",
+        "        resize_mode=\"minimal\"\n",
+        "        normalization = NormalizeImage(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])\n",
+        "    elif midas_model_type == \"midas_v21\":\n",
+        "        midas_model = MidasNet(midas_model_path, non_negative=True)\n",
+        "        net_w, net_h = 384, 384\n",
+        "        resize_mode=\"upper_bound\"\n",
+        "        normalization = NormalizeImage(\n",
+        "            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]\n",
+        "        )\n",
+        "    elif midas_model_type == \"midas_v21_small\":\n",
+        "        midas_model = MidasNet_small(midas_model_path, features=64, backbone=\"efficientnet_lite3\", exportable=True, non_negative=True, blocks={'expand': True})\n",
+        "        net_w, net_h = 256, 256\n",
+        "        resize_mode=\"upper_bound\"\n",
+        "        normalization = NormalizeImage(\n",
+        "            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]\n",
+        "        )\n",
+        "    else:\n",
+        "        print(f\"midas_model_type '{midas_model_type}' not implemented\")\n",
+        "        assert False\n",
+        "\n",
+        "    midas_transform = T.Compose(\n",
+        "        [\n",
+        "            Resize(\n",
+        "                net_w,\n",
+        "                net_h,\n",
+        "                resize_target=None,\n",
+        "                keep_aspect_ratio=True,\n",
+        "                ensure_multiple_of=32,\n",
+        "                resize_method=resize_mode,\n",
+        "                image_interpolation_method=cv2.INTER_CUBIC,\n",
+        "            ),\n",
+        "            normalization,\n",
+        "            PrepareForNet(),\n",
+        "        ]\n",
+        "    )\n",
+        "\n",
+        "    midas_model.eval()\n",
+        "    \n",
+        "    if optimize==True:\n",
+        "        if DEVICE == torch.device(\"cuda\"):\n",
+        "            midas_model = midas_model.to(memory_format=torch.channels_last)  \n",
+        "            midas_model = midas_model.half()\n",
+        "\n",
+        "    midas_model.to(DEVICE)\n",
+        "\n",
+        "    print(f\"MiDaS '{midas_model_type}' depth model initialized.\")\n",
+        "    return midas_model, midas_transform, net_w, net_h, resize_mode, normalization"
+      ],
+      "outputs": [],
+      "execution_count": null
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "cellView": "form",
+        "id": "DefFns"
+      },
+      "source": [
+        "#@title 1.5 Define necessary functions\n",
+        "\n",
+        "# https://gist.github.com/adefossez/0646dbe9ed4005480a2407c62aac8869\n",
+        "\n",
+        "import py3d_tools as p3dT\n",
+        "import disco_xform_utils as dxf\n",
+        "\n",
+        "def interp(t):\n",
+        "    return 3 * t**2 - 2 * t ** 3\n",
+        "\n",
+        "def perlin(width, height, scale=10, device=None):\n",
+        "    gx, gy = torch.randn(2, width + 1, height + 1, 1, 1, device=device)\n",
+        "    xs = torch.linspace(0, 1, scale + 1)[:-1, None].to(device)\n",
+        "    ys = torch.linspace(0, 1, scale + 1)[None, :-1].to(device)\n",
+        "    wx = 1 - interp(xs)\n",
+        "    wy = 1 - interp(ys)\n",
+        "    dots = 0\n",
+        "    dots += wx * wy * (gx[:-1, :-1] * xs + gy[:-1, :-1] * ys)\n",
+        "    dots += (1 - wx) * wy * (-gx[1:, :-1] * (1 - xs) + gy[1:, :-1] * ys)\n",
+        "    dots += wx * (1 - wy) * (gx[:-1, 1:] * xs - gy[:-1, 1:] * (1 - ys))\n",
+        "    dots += (1 - wx) * (1 - wy) * (-gx[1:, 1:] * (1 - xs) - gy[1:, 1:] * (1 - ys))\n",
+        "    return dots.permute(0, 2, 1, 3).contiguous().view(width * scale, height * scale)\n",
+        "\n",
+        "def perlin_ms(octaves, width, height, grayscale, device=device):\n",
+        "    out_array = [0.5] if grayscale else [0.5, 0.5, 0.5]\n",
+        "    # out_array = [0.0] if grayscale else [0.0, 0.0, 0.0]\n",
+        "    for i in range(1 if grayscale else 3):\n",
+        "        scale = 2 ** len(octaves)\n",
+        "        oct_width = width\n",
+        "        oct_height = height\n",
+        "        for oct in octaves:\n",
+        "            p = perlin(oct_width, oct_height, scale, device)\n",
+        "            out_array[i] += p * oct\n",
+        "            scale //= 2\n",
+        "            oct_width *= 2\n",
+        "            oct_height *= 2\n",
+        "    return torch.cat(out_array)\n",
+        "\n",
+        "def create_perlin_noise(octaves=[1, 1, 1, 1], width=2, height=2, grayscale=True):\n",
+        "    out = perlin_ms(octaves, width, height, grayscale)\n",
+        "    if grayscale:\n",
+        "        out = TF.resize(size=(side_y, side_x), img=out.unsqueeze(0))\n",
+        "        out = TF.to_pil_image(out.clamp(0, 1)).convert('RGB')\n",
+        "    else:\n",
+        "        out = out.reshape(-1, 3, out.shape[0]//3, out.shape[1])\n",
+        "        out = TF.resize(size=(side_y, side_x), img=out)\n",
+        "        out = TF.to_pil_image(out.clamp(0, 1).squeeze())\n",
+        "\n",
+        "    out = ImageOps.autocontrast(out)\n",
+        "    return out\n",
+        "\n",
+        "def regen_perlin():\n",
+        "    if perlin_mode == 'color':\n",
+        "        init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, False)\n",
+        "        init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, False)\n",
+        "    elif perlin_mode == 'gray':\n",
+        "        init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, True)\n",
+        "        init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, True)\n",
+        "    else:\n",
+        "        init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, False)\n",
+        "        init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, True)\n",
+        "\n",
+        "    init = TF.to_tensor(init).add(TF.to_tensor(init2)).div(2).to(device).unsqueeze(0).mul(2).sub(1)\n",
+        "    del init2\n",
+        "    return init.expand(batch_size, -1, -1, -1)\n",
+        "\n",
+        "def fetch(url_or_path):\n",
+        "    if str(url_or_path).startswith('http://') or str(url_or_path).startswith('https://'):\n",
+        "        r = requests.get(url_or_path)\n",
+        "        r.raise_for_status()\n",
+        "        fd = io.BytesIO()\n",
+        "        fd.write(r.content)\n",
+        "        fd.seek(0)\n",
+        "        return fd\n",
+        "    return open(url_or_path, 'rb')\n",
+        "\n",
+        "def read_image_workaround(path):\n",
+        "    \"\"\"OpenCV reads images as BGR, Pillow saves them as RGB. Work around\n",
+        "    this incompatibility to avoid colour inversions.\"\"\"\n",
+        "    im_tmp = cv2.imread(path)\n",
+        "    return cv2.cvtColor(im_tmp, cv2.COLOR_BGR2RGB)\n",
+        "\n",
+        "def parse_prompt(prompt):\n",
+        "    if prompt.startswith('http://') or prompt.startswith('https://'):\n",
+        "        vals = prompt.rsplit(':', 2)\n",
+        "        vals = [vals[0] + ':' + vals[1], *vals[2:]]\n",
+        "    else:\n",
+        "        vals = prompt.rsplit(':', 1)\n",
+        "    vals = vals + ['', '1'][len(vals):]\n",
+        "    return vals[0], float(vals[1])\n",
+        "\n",
+        "def sinc(x):\n",
+        "    return torch.where(x != 0, torch.sin(math.pi * x) / (math.pi * x), x.new_ones([]))\n",
+        "\n",
+        "def lanczos(x, a):\n",
+        "    cond = torch.logical_and(-a < x, x < a)\n",
+        "    out = torch.where(cond, sinc(x) * sinc(x/a), x.new_zeros([]))\n",
+        "    return out / out.sum()\n",
+        "\n",
+        "def ramp(ratio, width):\n",
+        "    n = math.ceil(width / ratio + 1)\n",
+        "    out = torch.empty([n])\n",
+        "    cur = 0\n",
+        "    for i in range(out.shape[0]):\n",
+        "        out[i] = cur\n",
+        "        cur += ratio\n",
+        "    return torch.cat([-out[1:].flip([0]), out])[1:-1]\n",
+        "\n",
+        "def resample(input, size, align_corners=True):\n",
+        "    n, c, h, w = input.shape\n",
+        "    dh, dw = size\n",
+        "\n",
+        "    input = input.reshape([n * c, 1, h, w])\n",
+        "\n",
+        "    if dh < h:\n",
+        "        kernel_h = lanczos(ramp(dh / h, 2), 2).to(input.device, input.dtype)\n",
+        "        pad_h = (kernel_h.shape[0] - 1) // 2\n",
+        "        input = F.pad(input, (0, 0, pad_h, pad_h), 'reflect')\n",
+        "        input = F.conv2d(input, kernel_h[None, None, :, None])\n",
+        "\n",
+        "    if dw < w:\n",
+        "        kernel_w = lanczos(ramp(dw / w, 2), 2).to(input.device, input.dtype)\n",
+        "        pad_w = (kernel_w.shape[0] - 1) // 2\n",
+        "        input = F.pad(input, (pad_w, pad_w, 0, 0), 'reflect')\n",
+        "        input = F.conv2d(input, kernel_w[None, None, None, :])\n",
+        "\n",
+        "    input = input.reshape([n, c, h, w])\n",
+        "    return F.interpolate(input, size, mode='bicubic', align_corners=align_corners)\n",
+        "\n",
+        "class MakeCutouts(nn.Module):\n",
+        "    def __init__(self, cut_size, cutn, skip_augs=False):\n",
+        "        super().__init__()\n",
+        "        self.cut_size = cut_size\n",
+        "        self.cutn = cutn\n",
+        "        self.skip_augs = skip_augs\n",
+        "        self.augs = T.Compose([\n",
+        "            T.RandomHorizontalFlip(p=0.5),\n",
+        "            T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "            T.RandomAffine(degrees=15, translate=(0.1, 0.1)),\n",
+        "            T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "            T.RandomPerspective(distortion_scale=0.4, p=0.7),\n",
+        "            T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "            T.RandomGrayscale(p=0.15),\n",
+        "            T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "            # T.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1),\n",
+        "        ])\n",
+        "\n",
+        "    def forward(self, input):\n",
+        "        input = T.Pad(input.shape[2]//4, fill=0)(input)\n",
+        "        sideY, sideX = input.shape[2:4]\n",
+        "        max_size = min(sideX, sideY)\n",
+        "\n",
+        "        cutouts = []\n",
+        "        for ch in range(self.cutn):\n",
+        "            if ch > self.cutn - self.cutn//4:\n",
+        "                cutout = input.clone()\n",
+        "            else:\n",
+        "                size = int(max_size * torch.zeros(1,).normal_(mean=.8, std=.3).clip(float(self.cut_size/max_size), 1.))\n",
+        "                offsetx = torch.randint(0, abs(sideX - size + 1), ())\n",
+        "                offsety = torch.randint(0, abs(sideY - size + 1), ())\n",
+        "                cutout = input[:, :, offsety:offsety + size, offsetx:offsetx + size]\n",
+        "\n",
+        "            if not self.skip_augs:\n",
+        "                cutout = self.augs(cutout)\n",
+        "            cutouts.append(resample(cutout, (self.cut_size, self.cut_size)))\n",
+        "            del cutout\n",
+        "\n",
+        "        cutouts = torch.cat(cutouts, dim=0)\n",
+        "        return cutouts\n",
+        "\n",
+        "cutout_debug = False\n",
+        "padargs = {}\n",
+        "\n",
+        "class MakeCutoutsDango(nn.Module):\n",
+        "    def __init__(self, cut_size,\n",
+        "                 Overview=4, \n",
+        "                 InnerCrop = 0, IC_Size_Pow=0.5, IC_Grey_P = 0.2\n",
+        "                 ):\n",
+        "        super().__init__()\n",
+        "        self.cut_size = cut_size\n",
+        "        self.Overview = Overview\n",
+        "        self.InnerCrop = InnerCrop\n",
+        "        self.IC_Size_Pow = IC_Size_Pow\n",
+        "        self.IC_Grey_P = IC_Grey_P\n",
+        "        if args.animation_mode == 'None':\n",
+        "          self.augs = T.Compose([\n",
+        "              T.RandomHorizontalFlip(p=0.5),\n",
+        "              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "              T.RandomAffine(degrees=10, translate=(0.05, 0.05),  interpolation = T.InterpolationMode.BILINEAR),\n",
+        "              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "              T.RandomGrayscale(p=0.1),\n",
+        "              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "              T.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1),\n",
+        "          ])\n",
+        "        elif args.animation_mode == 'Video Input':\n",
+        "          self.augs = T.Compose([\n",
+        "              T.RandomHorizontalFlip(p=0.5),\n",
+        "              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "              T.RandomAffine(degrees=15, translate=(0.1, 0.1)),\n",
+        "              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "              T.RandomPerspective(distortion_scale=0.4, p=0.7),\n",
+        "              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "              T.RandomGrayscale(p=0.15),\n",
+        "              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "              # T.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1),\n",
+        "          ])\n",
+        "        elif  args.animation_mode == '2D' or args.animation_mode == '3D':\n",
+        "          self.augs = T.Compose([\n",
+        "              T.RandomHorizontalFlip(p=0.4),\n",
+        "              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "              T.RandomAffine(degrees=10, translate=(0.05, 0.05),  interpolation = T.InterpolationMode.BILINEAR),\n",
+        "              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "              T.RandomGrayscale(p=0.1),\n",
+        "              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "              T.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.3),\n",
+        "          ])\n",
+        "          \n",
+        "\n",
+        "    def forward(self, input):\n",
+        "        cutouts = []\n",
+        "        gray = T.Grayscale(3)\n",
+        "        sideY, sideX = input.shape[2:4]\n",
+        "        max_size = min(sideX, sideY)\n",
+        "        min_size = min(sideX, sideY, self.cut_size)\n",
+        "        l_size = max(sideX, sideY)\n",
+        "        output_shape = [1,3,self.cut_size,self.cut_size] \n",
+        "        output_shape_2 = [1,3,self.cut_size+2,self.cut_size+2]\n",
+        "        pad_input = F.pad(input,((sideY-max_size)//2,(sideY-max_size)//2,(sideX-max_size)//2,(sideX-max_size)//2), **padargs)\n",
+        "        cutout = resize(pad_input, out_shape=output_shape)\n",
+        "\n",
+        "        if self.Overview>0:\n",
+        "            if self.Overview<=4:\n",
+        "                if self.Overview>=1:\n",
+        "                    cutouts.append(cutout)\n",
+        "                if self.Overview>=2:\n",
+        "                    cutouts.append(gray(cutout))\n",
+        "                if self.Overview>=3:\n",
+        "                    cutouts.append(TF.hflip(cutout))\n",
+        "                if self.Overview==4:\n",
+        "                    cutouts.append(gray(TF.hflip(cutout)))\n",
+        "            else:\n",
+        "                cutout = resize(pad_input, out_shape=output_shape)\n",
+        "                for _ in range(self.Overview):\n",
+        "                    cutouts.append(cutout)\n",
+        "\n",
+        "            if cutout_debug:\n",
+        "                if is_colab:\n",
+        "                    TF.to_pil_image(cutouts[0].clamp(0, 1).squeeze(0)).save(\"/content/cutout_overview0.jpg\",quality=99)\n",
+        "                else:\n",
+        "                    TF.to_pil_image(cutouts[0].clamp(0, 1).squeeze(0)).save(\"cutout_overview0.jpg\",quality=99)\n",
+        "\n",
+        "                              \n",
+        "        if self.InnerCrop >0:\n",
+        "            for i in range(self.InnerCrop):\n",
+        "                size = int(torch.rand([])**self.IC_Size_Pow * (max_size - min_size) + min_size)\n",
+        "                offsetx = torch.randint(0, sideX - size + 1, ())\n",
+        "                offsety = torch.randint(0, sideY - size + 1, ())\n",
+        "                cutout = input[:, :, offsety:offsety + size, offsetx:offsetx + size]\n",
+        "                if i <= int(self.IC_Grey_P * self.InnerCrop):\n",
+        "                    cutout = gray(cutout)\n",
+        "                cutout = resize(cutout, out_shape=output_shape)\n",
+        "                cutouts.append(cutout)\n",
+        "            if cutout_debug:\n",
+        "                if is_colab:\n",
+        "                    TF.to_pil_image(cutouts[-1].clamp(0, 1).squeeze(0)).save(\"/content/cutout_InnerCrop.jpg\",quality=99)\n",
+        "                else:\n",
+        "                    TF.to_pil_image(cutouts[-1].clamp(0, 1).squeeze(0)).save(\"cutout_InnerCrop.jpg\",quality=99)\n",
+        "        cutouts = torch.cat(cutouts)\n",
+        "        if skip_augs is not True: cutouts=self.augs(cutouts)\n",
+        "        return cutouts\n",
+        "\n",
+        "def spherical_dist_loss(x, y):\n",
+        "    x = F.normalize(x, dim=-1)\n",
+        "    y = F.normalize(y, dim=-1)\n",
+        "    return (x - y).norm(dim=-1).div(2).arcsin().pow(2).mul(2)     \n",
+        "\n",
+        "def tv_loss(input):\n",
+        "    \"\"\"L2 total variation loss, as in Mahendran et al.\"\"\"\n",
+        "    input = F.pad(input, (0, 1, 0, 1), 'replicate')\n",
+        "    x_diff = input[..., :-1, 1:] - input[..., :-1, :-1]\n",
+        "    y_diff = input[..., 1:, :-1] - input[..., :-1, :-1]\n",
+        "    return (x_diff**2 + y_diff**2).mean([1, 2, 3])\n",
+        "\n",
+        "\n",
+        "def range_loss(input):\n",
+        "    return (input - input.clamp(-1, 1)).pow(2).mean([1, 2, 3])\n",
+        "\n",
+        "stop_on_next_loop = False  # Make sure GPU memory doesn't get corrupted from cancelling the run mid-way through, allow a full frame to complete\n",
+        "TRANSLATION_SCALE = 1.0/200.0\n",
+        "\n",
+        "def do_3d_step(img_filepath, frame_num, midas_model, midas_transform):\n",
+        "  if args.key_frames:\n",
+        "    translation_x = args.translation_x_series[frame_num]\n",
+        "    translation_y = args.translation_y_series[frame_num]\n",
+        "    translation_z = args.translation_z_series[frame_num]\n",
+        "    rotation_3d_x = args.rotation_3d_x_series[frame_num]\n",
+        "    rotation_3d_y = args.rotation_3d_y_series[frame_num]\n",
+        "    rotation_3d_z = args.rotation_3d_z_series[frame_num]\n",
+        "    print(\n",
+        "        f'translation_x: {translation_x}',\n",
+        "        f'translation_y: {translation_y}',\n",
+        "        f'translation_z: {translation_z}',\n",
+        "        f'rotation_3d_x: {rotation_3d_x}',\n",
+        "        f'rotation_3d_y: {rotation_3d_y}',\n",
+        "        f'rotation_3d_z: {rotation_3d_z}',\n",
+        "    )\n",
+        "\n",
+        "  translate_xyz = [-translation_x*TRANSLATION_SCALE, translation_y*TRANSLATION_SCALE, -translation_z*TRANSLATION_SCALE]\n",
+        "  rotate_xyz_degrees = [rotation_3d_x, rotation_3d_y, rotation_3d_z]\n",
+        "  print('translation:',translate_xyz)\n",
+        "  print('rotation:',rotate_xyz_degrees)\n",
+        "  rotate_xyz = [math.radians(rotate_xyz_degrees[0]), math.radians(rotate_xyz_degrees[1]), math.radians(rotate_xyz_degrees[2])]\n",
+        "  rot_mat = p3dT.euler_angles_to_matrix(torch.tensor(rotate_xyz, device=device), \"XYZ\").unsqueeze(0)\n",
+        "  print(\"rot_mat: \" + str(rot_mat))\n",
+        "  next_step_pil = dxf.transform_image_3d(img_filepath, midas_model, midas_transform, DEVICE,\n",
+        "                                          rot_mat, translate_xyz, args.near_plane, args.far_plane,\n",
+        "                                          args.fov, padding_mode=args.padding_mode,\n",
+        "                                          sampling_mode=args.sampling_mode, midas_weight=args.midas_weight)\n",
+        "  return next_step_pil\n",
+        "\n",
+        "def do_run():\n",
+        "  seed = args.seed\n",
+        "  print(range(args.start_frame, args.max_frames))\n",
+        "\n",
+        "  if (args.animation_mode == \"3D\") and (args.midas_weight > 0.0):\n",
+        "      midas_model, midas_transform, midas_net_w, midas_net_h, midas_resize_mode, midas_normalization = init_midas_depth_model(args.midas_depth_model)\n",
+        "  for frame_num in range(args.start_frame, args.max_frames):\n",
+        "      if stop_on_next_loop:\n",
+        "        break\n",
+        "      \n",
+        "      display.clear_output(wait=True)\n",
+        "\n",
+        "      # Print Frame progress if animation mode is on\n",
+        "      if args.animation_mode != \"None\":\n",
+        "        batchBar = tqdm(range(args.max_frames), desc =\"Frames\")\n",
+        "        batchBar.n = frame_num\n",
+        "        batchBar.refresh()\n",
+        "\n",
+        "      \n",
+        "      # Inits if not video frames\n",
+        "      if args.animation_mode != \"Video Input\":\n",
+        "        if args.init_image == '':\n",
+        "          init_image = None\n",
+        "        else:\n",
+        "          init_image = args.init_image\n",
+        "        init_scale = args.init_scale\n",
+        "        skip_steps = args.skip_steps\n",
+        "\n",
+        "      if args.animation_mode == \"2D\":\n",
+        "        if args.key_frames:\n",
+        "          angle = args.angle_series[frame_num]\n",
+        "          zoom = args.zoom_series[frame_num]\n",
+        "          translation_x = args.translation_x_series[frame_num]\n",
+        "          translation_y = args.translation_y_series[frame_num]\n",
+        "          print(\n",
+        "              f'angle: {angle}',\n",
+        "              f'zoom: {zoom}',\n",
+        "              f'translation_x: {translation_x}',\n",
+        "              f'translation_y: {translation_y}',\n",
+        "          )\n",
+        "        \n",
+        "        if frame_num > 0:\n",
+        "          seed += 1\n",
+        "          if resume_run and frame_num == start_frame:\n",
+        "            img_0 = cv2.imread(batchFolder+f\"/{batch_name}({batchNum})_{start_frame-1:04}.png\")\n",
+        "          else:\n",
+        "            img_0 = cv2.imread('prevFrame.png')\n",
+        "          center = (1*img_0.shape[1]//2, 1*img_0.shape[0]//2)\n",
+        "          trans_mat = np.float32(\n",
+        "              [[1, 0, translation_x],\n",
+        "              [0, 1, translation_y]]\n",
+        "          )\n",
+        "          rot_mat = cv2.getRotationMatrix2D( center, angle, zoom )\n",
+        "          trans_mat = np.vstack([trans_mat, [0,0,1]])\n",
+        "          rot_mat = np.vstack([rot_mat, [0,0,1]])\n",
+        "          transformation_matrix = np.matmul(rot_mat, trans_mat)\n",
+        "          img_0 = cv2.warpPerspective(\n",
+        "              img_0,\n",
+        "              transformation_matrix,\n",
+        "              (img_0.shape[1], img_0.shape[0]),\n",
+        "              borderMode=cv2.BORDER_WRAP\n",
+        "          )\n",
+        "\n",
+        "          cv2.imwrite('prevFrameScaled.png', img_0)\n",
+        "          init_image = 'prevFrameScaled.png'\n",
+        "          init_scale = args.frames_scale\n",
+        "          skip_steps = args.calc_frames_skip_steps\n",
+        "\n",
+        "      if args.animation_mode == \"3D\":\n",
+        "        if frame_num > 0:\n",
+        "          seed += 1    \n",
+        "          if resume_run and frame_num == start_frame:\n",
+        "            img_filepath = batchFolder+f\"/{batch_name}({batchNum})_{start_frame-1:04}.png\"\n",
+        "            if turbo_mode and frame_num > turbo_preroll:\n",
+        "              shutil.copyfile(img_filepath, 'oldFrameScaled.png')\n",
+        "          else:\n",
+        "            img_filepath = '/content/prevFrame.png' if is_colab else 'prevFrame.png'\n",
+        "\n",
+        "          next_step_pil = do_3d_step(img_filepath, frame_num, midas_model, midas_transform)\n",
+        "          next_step_pil.save('prevFrameScaled.png')\n",
+        "\n",
+        "          ### Turbo mode - skip some diffusions, use 3d morph for clarity and to save time\n",
+        "          if turbo_mode:\n",
+        "            if frame_num == turbo_preroll: #start tracking oldframe\n",
+        "              next_step_pil.save('oldFrameScaled.png')#stash for later blending          \n",
+        "            elif frame_num > turbo_preroll:\n",
+        "              #set up 2 warped image sequences, old & new, to blend toward new diff image\n",
+        "              old_frame = do_3d_step('oldFrameScaled.png', frame_num, midas_model, midas_transform)\n",
+        "              old_frame.save('oldFrameScaled.png')\n",
+        "              if frame_num % int(turbo_steps) != 0: \n",
+        "                print('turbo skip this frame: skipping clip diffusion steps')\n",
+        "                filename = f'{args.batch_name}({args.batchNum})_{frame_num:04}.png'\n",
+        "                blend_factor = ((frame_num % int(turbo_steps))+1)/int(turbo_steps)\n",
+        "                print('turbo skip this frame: skipping clip diffusion steps and saving blended frame')\n",
+        "                newWarpedImg = cv2.imread('prevFrameScaled.png')#this is already updated..\n",
+        "                oldWarpedImg = cv2.imread('oldFrameScaled.png')\n",
+        "                blendedImage = cv2.addWeighted(newWarpedImg, blend_factor, oldWarpedImg,1-blend_factor, 0.0)\n",
+        "                cv2.imwrite(f'{batchFolder}/{filename}',blendedImage)\n",
+        "                next_step_pil.save(f'{img_filepath}') # save it also as prev_frame to feed next iteration\n",
+        "                continue\n",
+        "              else:\n",
+        "                #if not a skip frame, will run diffusion and need to blend.\n",
+        "                oldWarpedImg = cv2.imread('prevFrameScaled.png')\n",
+        "                cv2.imwrite(f'oldFrameScaled.png',oldWarpedImg)#swap in for blending later \n",
+        "                print('clip/diff this frame - generate clip diff image')\n",
+        "\n",
+        "          init_image = 'prevFrameScaled.png'\n",
+        "          init_scale = args.frames_scale\n",
+        "          skip_steps = args.calc_frames_skip_steps\n",
+        "\n",
+        "      if  args.animation_mode == \"Video Input\":\n",
+        "        if not video_init_seed_continuity:\n",
+        "          seed += 1\n",
+        "        init_image = f'{videoFramesFolder}/{frame_num+1:04}.jpg'\n",
+        "        init_scale = args.frames_scale\n",
+        "        skip_steps = args.calc_frames_skip_steps\n",
+        "\n",
+        "      loss_values = []\n",
+        "  \n",
+        "      if seed is not None:\n",
+        "          np.random.seed(seed)\n",
+        "          random.seed(seed)\n",
+        "          torch.manual_seed(seed)\n",
+        "          torch.cuda.manual_seed_all(seed)\n",
+        "          torch.backends.cudnn.deterministic = True\n",
+        "  \n",
+        "      target_embeds, weights = [], []\n",
+        "      \n",
+        "      if args.prompts_series is not None and frame_num >= len(args.prompts_series):\n",
+        "        frame_prompt = args.prompts_series[-1]\n",
+        "      elif args.prompts_series is not None:\n",
+        "        frame_prompt = args.prompts_series[frame_num]\n",
+        "      else:\n",
+        "        frame_prompt = []\n",
+        "      \n",
+        "      print(args.image_prompts_series)\n",
+        "      if args.image_prompts_series is not None and frame_num >= len(args.image_prompts_series):\n",
+        "        image_prompt = args.image_prompts_series[-1]\n",
+        "      elif args.image_prompts_series is not None:\n",
+        "        image_prompt = args.image_prompts_series[frame_num]\n",
+        "      else:\n",
+        "        image_prompt = []\n",
+        "\n",
+        "      print(f'Frame {frame_num} Prompt: {frame_prompt}')\n",
+        "\n",
+        "      model_stats = []\n",
+        "      for clip_model in clip_models:\n",
+        "            cutn = 16\n",
+        "            model_stat = {\"clip_model\":None,\"target_embeds\":[],\"make_cutouts\":None,\"weights\":[]}\n",
+        "            model_stat[\"clip_model\"] = clip_model\n",
+        "            \n",
+        "            \n",
+        "            for prompt in frame_prompt:\n",
+        "                txt, weight = parse_prompt(prompt)\n",
+        "                txt = clip_model.encode_text(clip.tokenize(prompt).to(device)).float()\n",
+        "                \n",
+        "                if args.fuzzy_prompt:\n",
+        "                    for i in range(25):\n",
+        "                        model_stat[\"target_embeds\"].append((txt + torch.randn(txt.shape).cuda() * args.rand_mag).clamp(0,1))\n",
+        "                        model_stat[\"weights\"].append(weight)\n",
+        "                else:\n",
+        "                    model_stat[\"target_embeds\"].append(txt)\n",
+        "                    model_stat[\"weights\"].append(weight)\n",
+        "        \n",
+        "            if image_prompt:\n",
+        "              model_stat[\"make_cutouts\"] = MakeCutouts(clip_model.visual.input_resolution, cutn, skip_augs=skip_augs) \n",
+        "              for prompt in image_prompt:\n",
+        "                  path, weight = parse_prompt(prompt)\n",
+        "                  img = Image.open(fetch(path)).convert('RGB')\n",
+        "                  img = TF.resize(img, min(side_x, side_y, *img.size), T.InterpolationMode.LANCZOS)\n",
+        "                  batch = model_stat[\"make_cutouts\"](TF.to_tensor(img).to(device).unsqueeze(0).mul(2).sub(1))\n",
+        "                  embed = clip_model.encode_image(normalize(batch)).float()\n",
+        "                  if fuzzy_prompt:\n",
+        "                      for i in range(25):\n",
+        "                          model_stat[\"target_embeds\"].append((embed + torch.randn(embed.shape).cuda() * rand_mag).clamp(0,1))\n",
+        "                          weights.extend([weight / cutn] * cutn)\n",
+        "                  else:\n",
+        "                      model_stat[\"target_embeds\"].append(embed)\n",
+        "                      model_stat[\"weights\"].extend([weight / cutn] * cutn)\n",
+        "        \n",
+        "            model_stat[\"target_embeds\"] = torch.cat(model_stat[\"target_embeds\"])\n",
+        "            model_stat[\"weights\"] = torch.tensor(model_stat[\"weights\"], device=device)\n",
+        "            if model_stat[\"weights\"].sum().abs() < 1e-3:\n",
+        "                raise RuntimeError('The weights must not sum to 0.')\n",
+        "            model_stat[\"weights\"] /= model_stat[\"weights\"].sum().abs()\n",
+        "            model_stats.append(model_stat)\n",
+        "  \n",
+        "      init = None\n",
+        "      if init_image is not None:\n",
+        "          init = Image.open(fetch(init_image)).convert('RGB')\n",
+        "          init = init.resize((args.side_x, args.side_y), Image.LANCZOS)\n",
+        "          init = TF.to_tensor(init).to(device).unsqueeze(0).mul(2).sub(1)\n",
+        "      \n",
+        "      if args.perlin_init:\n",
+        "          if args.perlin_mode == 'color':\n",
+        "              init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, False)\n",
+        "              init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, False)\n",
+        "          elif args.perlin_mode == 'gray':\n",
+        "            init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, True)\n",
+        "            init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, True)\n",
+        "          else:\n",
+        "            init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, False)\n",
+        "            init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, True)\n",
+        "          # init = TF.to_tensor(init).add(TF.to_tensor(init2)).div(2).to(device)\n",
+        "          init = TF.to_tensor(init).add(TF.to_tensor(init2)).div(2).to(device).unsqueeze(0).mul(2).sub(1)\n",
+        "          del init2\n",
+        "  \n",
+        "      cur_t = None\n",
+        "  \n",
+        "      def cond_fn(x, t, y=None):\n",
+        "          with torch.enable_grad():\n",
+        "              x_is_NaN = False\n",
+        "              x = x.detach().requires_grad_()\n",
+        "              n = x.shape[0]\n",
+        "              if use_secondary_model is True:\n",
+        "                alpha = torch.tensor(diffusion.sqrt_alphas_cumprod[cur_t], device=device, dtype=torch.float32)\n",
+        "                sigma = torch.tensor(diffusion.sqrt_one_minus_alphas_cumprod[cur_t], device=device, dtype=torch.float32)\n",
+        "                cosine_t = alpha_sigma_to_t(alpha, sigma)\n",
+        "                out = secondary_model(x, cosine_t[None].repeat([n])).pred\n",
+        "                fac = diffusion.sqrt_one_minus_alphas_cumprod[cur_t]\n",
+        "                x_in = out * fac + x * (1 - fac)\n",
+        "                x_in_grad = torch.zeros_like(x_in)\n",
+        "              else:\n",
+        "                my_t = torch.ones([n], device=device, dtype=torch.long) * cur_t\n",
+        "                out = diffusion.p_mean_variance(model, x, my_t, clip_denoised=False, model_kwargs={'y': y})\n",
+        "                fac = diffusion.sqrt_one_minus_alphas_cumprod[cur_t]\n",
+        "                x_in = out['pred_xstart'] * fac + x * (1 - fac)\n",
+        "                x_in_grad = torch.zeros_like(x_in)\n",
+        "              for model_stat in model_stats:\n",
+        "                for i in range(args.cutn_batches):\n",
+        "                    t_int = int(t.item())+1 #errors on last step without +1, need to find source\n",
+        "                    #when using SLIP Base model the dimensions need to be hard coded to avoid AttributeError: 'VisionTransformer' object has no attribute 'input_resolution'\n",
+        "                    try:\n",
+        "                        input_resolution=model_stat[\"clip_model\"].visual.input_resolution\n",
+        "                    except:\n",
+        "                        input_resolution=224\n",
+        "\n",
+        "                    cuts = MakeCutoutsDango(input_resolution,\n",
+        "                            Overview= args.cut_overview[1000-t_int], \n",
+        "                            InnerCrop = args.cut_innercut[1000-t_int], IC_Size_Pow=args.cut_ic_pow, IC_Grey_P = args.cut_icgray_p[1000-t_int]\n",
+        "                            )\n",
+        "                    clip_in = normalize(cuts(x_in.add(1).div(2)))\n",
+        "                    image_embeds = model_stat[\"clip_model\"].encode_image(clip_in).float()\n",
+        "                    dists = spherical_dist_loss(image_embeds.unsqueeze(1), model_stat[\"target_embeds\"].unsqueeze(0))\n",
+        "                    dists = dists.view([args.cut_overview[1000-t_int]+args.cut_innercut[1000-t_int], n, -1])\n",
+        "                    losses = dists.mul(model_stat[\"weights\"]).sum(2).mean(0)\n",
+        "                    loss_values.append(losses.sum().item()) # log loss, probably shouldn't do per cutn_batch\n",
+        "                    x_in_grad += torch.autograd.grad(losses.sum() * clip_guidance_scale, x_in)[0] / cutn_batches\n",
+        "              tv_losses = tv_loss(x_in)\n",
+        "              if use_secondary_model is True:\n",
+        "                range_losses = range_loss(out)\n",
+        "              else:\n",
+        "                range_losses = range_loss(out['pred_xstart'])\n",
+        "              sat_losses = torch.abs(x_in - x_in.clamp(min=-1,max=1)).mean()\n",
+        "              loss = tv_losses.sum() * tv_scale + range_losses.sum() * range_scale + sat_losses.sum() * sat_scale\n",
+        "              if init is not None and args.init_scale:\n",
+        "                  init_losses = lpips_model(x_in, init)\n",
+        "                  loss = loss + init_losses.sum() * args.init_scale\n",
+        "              x_in_grad += torch.autograd.grad(loss, x_in)[0]\n",
+        "              if torch.isnan(x_in_grad).any()==False:\n",
+        "                  grad = -torch.autograd.grad(x_in, x, x_in_grad)[0]\n",
+        "              else:\n",
+        "                # print(\"NaN'd\")\n",
+        "                x_is_NaN = True\n",
+        "                grad = torch.zeros_like(x)\n",
+        "          if args.clamp_grad and x_is_NaN == False:\n",
+        "              magnitude = grad.square().mean().sqrt()\n",
+        "              return grad * magnitude.clamp(max=args.clamp_max) / magnitude  #min=-0.02, min=-clamp_max, \n",
+        "          return grad\n",
+        "  \n",
+        "      if args.diffusion_sampling_mode == 'ddim':\n",
+        "          sample_fn = diffusion.ddim_sample_loop_progressive\n",
+        "      else:\n",
+        "          sample_fn = diffusion.plms_sample_loop_progressive\n",
+        "\n",
+        "\n",
+        "      image_display = Output()\n",
+        "      for i in range(args.n_batches):\n",
+        "          if args.animation_mode == 'None':\n",
+        "            display.clear_output(wait=True)\n",
+        "            batchBar = tqdm(range(args.n_batches), desc =\"Batches\")\n",
+        "            batchBar.n = i\n",
+        "            batchBar.refresh()\n",
+        "          print('')\n",
+        "          display.display(image_display)\n",
+        "          gc.collect()\n",
+        "          torch.cuda.empty_cache()\n",
+        "          cur_t = diffusion.num_timesteps - skip_steps - 1\n",
+        "          total_steps = cur_t\n",
+        "\n",
+        "          if perlin_init:\n",
+        "              init = regen_perlin()\n",
+        "\n",
+        "          if args.diffusion_sampling_mode == 'ddim':\n",
+        "              samples = sample_fn(\n",
+        "                  model,\n",
+        "                  (batch_size, 3, args.side_y, args.side_x),\n",
+        "                  clip_denoised=clip_denoised,\n",
+        "                  model_kwargs={},\n",
+        "                  cond_fn=cond_fn,\n",
+        "                  progress=True,\n",
+        "                  skip_timesteps=skip_steps,\n",
+        "                  init_image=init,\n",
+        "                  randomize_class=randomize_class,\n",
+        "                  eta=eta,\n",
+        "              )\n",
+        "          else:\n",
+        "              samples = sample_fn(\n",
+        "                  model,\n",
+        "                  (batch_size, 3, args.side_y, args.side_x),\n",
+        "                  clip_denoised=clip_denoised,\n",
+        "                  model_kwargs={},\n",
+        "                  cond_fn=cond_fn,\n",
+        "                  progress=True,\n",
+        "                  skip_timesteps=skip_steps,\n",
+        "                  init_image=init,\n",
+        "                  randomize_class=randomize_class,\n",
+        "                  order=2,\n",
+        "              )\n",
+        "          \n",
+        "          \n",
+        "          # with run_display:\n",
+        "          # display.clear_output(wait=True)\n",
+        "          for j, sample in enumerate(samples):    \n",
+        "            cur_t -= 1\n",
+        "            intermediateStep = False\n",
+        "            if args.steps_per_checkpoint is not None:\n",
+        "                if j % steps_per_checkpoint == 0 and j > 0:\n",
+        "                  intermediateStep = True\n",
+        "            elif j in args.intermediate_saves:\n",
+        "              intermediateStep = True\n",
+        "            with image_display:\n",
+        "              if j % args.display_rate == 0 or cur_t == -1 or intermediateStep == True:\n",
+        "                  for k, image in enumerate(sample['pred_xstart']):\n",
+        "                      # tqdm.write(f'Batch {i}, step {j}, output {k}:')\n",
+        "                      current_time = datetime.now().strftime('%y%m%d-%H%M%S_%f')\n",
+        "                      percent = math.ceil(j/total_steps*100)\n",
+        "                      if args.n_batches > 0:\n",
+        "                        #if intermediates are saved to the subfolder, don't append a step or percentage to the name\n",
+        "                        if cur_t == -1 and args.intermediates_in_subfolder is True:\n",
+        "                          save_num = f'{frame_num:04}' if animation_mode != \"None\" else i\n",
+        "                          filename = f'{args.batch_name}({args.batchNum})_{save_num}.png'\n",
+        "                        else:\n",
+        "                          #If we're working with percentages, append it\n",
+        "                          if args.steps_per_checkpoint is not None:\n",
+        "                            filename = f'{args.batch_name}({args.batchNum})_{i:04}-{percent:02}%.png'\n",
+        "                          # Or else, iIf we're working with specific steps, append those\n",
+        "                          else:\n",
+        "                            filename = f'{args.batch_name}({args.batchNum})_{i:04}-{j:03}.png'\n",
+        "                      image = TF.to_pil_image(image.add(1).div(2).clamp(0, 1))\n",
+        "                      if j % args.display_rate == 0 or cur_t == -1:\n",
+        "                        image.save('progress.png')\n",
+        "                        display.clear_output(wait=True)\n",
+        "                        display.display(display.Image('progress.png'))\n",
+        "                      if args.steps_per_checkpoint is not None:\n",
+        "                        if j % args.steps_per_checkpoint == 0 and j > 0:\n",
+        "                          if args.intermediates_in_subfolder is True:\n",
+        "                            image.save(f'{partialFolder}/{filename}')\n",
+        "                          else:\n",
+        "                            image.save(f'{batchFolder}/{filename}')\n",
+        "                      else:\n",
+        "                        if j in args.intermediate_saves:\n",
+        "                          if args.intermediates_in_subfolder is True:\n",
+        "                            image.save(f'{partialFolder}/{filename}')\n",
+        "                          else:\n",
+        "                            image.save(f'{batchFolder}/{filename}')\n",
+        "                      if cur_t == -1:\n",
+        "                        if frame_num == 0:\n",
+        "                          save_settings()\n",
+        "                        if args.animation_mode != \"None\":\n",
+        "                          image.save('prevFrame.png')\n",
+        "                        image.save(f'{batchFolder}/{filename}')\n",
+        "                        if args.animation_mode == \"3D\":\n",
+        "                          # If turbo, save a blended image\n",
+        "                          if turbo_mode and frame_num > 0:\n",
+        "                            # Mix new image with prevFrameScaled\n",
+        "                            blend_factor = (1)/int(turbo_steps)\n",
+        "                            newFrame = cv2.imread('prevFrame.png') # This is already updated..\n",
+        "                            prev_frame_warped = cv2.imread('prevFrameScaled.png')\n",
+        "                            blendedImage = cv2.addWeighted(newFrame, blend_factor, prev_frame_warped, (1-blend_factor), 0.0)\n",
+        "                            cv2.imwrite(f'{batchFolder}/{filename}',blendedImage)\n",
+        "                          else:\n",
+        "                            image.save(f'{batchFolder}/{filename}')\n",
+        "\n",
+        "                          if vr_mode:\n",
+        "                            generate_eye_views(TRANSLATION_SCALE, batchFolder, filename, frame_num, midas_model, midas_transform)\n",
+        "\n",
+        "                        # if frame_num != args.max_frames-1:\n",
+        "                        #   display.clear_output()\n",
+        "          \n",
+        "          plt.plot(np.array(loss_values), 'r')\n",
+        "\n",
+        "def generate_eye_views(trans_scale,batchFolder,filename,frame_num,midas_model, midas_transform):\n",
+        "   for i in range(2):\n",
+        "      theta = vr_eye_angle * (math.pi/180)\n",
+        "      ray_origin = math.cos(theta) * vr_ipd / 2 * (-1.0 if i==0 else 1.0)\n",
+        "      ray_rotation = (theta if i==0 else -theta)\n",
+        "      translate_xyz = [-(ray_origin)*trans_scale, 0,0]\n",
+        "      rotate_xyz = [0, (ray_rotation), 0]\n",
+        "      rot_mat = p3dT.euler_angles_to_matrix(torch.tensor(rotate_xyz, device=device), \"XYZ\").unsqueeze(0)\n",
+        "      transformed_image = dxf.transform_image_3d(f'{batchFolder}/{filename}', midas_model, midas_transform, DEVICE,\n",
+        "                                                      rot_mat, translate_xyz, args.near_plane, args.far_plane,\n",
+        "                                                      args.fov, padding_mode=args.padding_mode,\n",
+        "                                                      sampling_mode=args.sampling_mode, midas_weight=args.midas_weight,spherical=True)\n",
+        "      eye_file_path = batchFolder+f\"/frame_{frame_num-1:04}\" + ('_l' if i==0 else '_r')+'.png'\n",
+        "      transformed_image.save(eye_file_path)\n",
+        "\n",
+        "def save_settings():\n",
+        "  setting_list = {\n",
+        "    'text_prompts': text_prompts,\n",
+        "    'image_prompts': image_prompts,\n",
+        "    'clip_guidance_scale': clip_guidance_scale,\n",
+        "    'tv_scale': tv_scale,\n",
+        "    'range_scale': range_scale,\n",
+        "    'sat_scale': sat_scale,\n",
+        "    # 'cutn': cutn,\n",
+        "    'cutn_batches': cutn_batches,\n",
+        "    'max_frames': max_frames,\n",
+        "    'interp_spline': interp_spline,\n",
+        "    # 'rotation_per_frame': rotation_per_frame,\n",
+        "    'init_image': init_image,\n",
+        "    'init_scale': init_scale,\n",
+        "    'skip_steps': skip_steps,\n",
+        "    # 'zoom_per_frame': zoom_per_frame,\n",
+        "    'frames_scale': frames_scale,\n",
+        "    'frames_skip_steps': frames_skip_steps,\n",
+        "    'perlin_init': perlin_init,\n",
+        "    'perlin_mode': perlin_mode,\n",
+        "    'skip_augs': skip_augs,\n",
+        "    'randomize_class': randomize_class,\n",
+        "    'clip_denoised': clip_denoised,\n",
+        "    'clamp_grad': clamp_grad,\n",
+        "    'clamp_max': clamp_max,\n",
+        "    'seed': seed,\n",
+        "    'fuzzy_prompt': fuzzy_prompt,\n",
+        "    'rand_mag': rand_mag,\n",
+        "    'eta': eta,\n",
+        "    'width': width_height[0],\n",
+        "    'height': width_height[1],\n",
+        "    'diffusion_model': diffusion_model,\n",
+        "    'use_secondary_model': use_secondary_model,\n",
+        "    'steps': steps,\n",
+        "    'diffusion_steps': diffusion_steps,\n",
+        "    'diffusion_sampling_mode': diffusion_sampling_mode,\n",
+        "    'ViTB32': ViTB32,\n",
+        "    'ViTB16': ViTB16,\n",
+        "    'ViTL14': ViTL14,\n",
+        "    'RN101': RN101,\n",
+        "    'RN50': RN50,\n",
+        "    'RN50x4': RN50x4,\n",
+        "    'RN50x16': RN50x16,\n",
+        "    'RN50x64': RN50x64,\n",
+        "    'cut_overview': str(cut_overview),\n",
+        "    'cut_innercut': str(cut_innercut),\n",
+        "    'cut_ic_pow': cut_ic_pow,\n",
+        "    'cut_icgray_p': str(cut_icgray_p),\n",
+        "    'key_frames': key_frames,\n",
+        "    'max_frames': max_frames,\n",
+        "    'angle': angle,\n",
+        "    'zoom': zoom,\n",
+        "    'translation_x': translation_x,\n",
+        "    'translation_y': translation_y,\n",
+        "    'translation_z': translation_z,\n",
+        "    'rotation_3d_x': rotation_3d_x,\n",
+        "    'rotation_3d_y': rotation_3d_y,\n",
+        "    'rotation_3d_z': rotation_3d_z,\n",
+        "    'midas_depth_model': midas_depth_model,\n",
+        "    'midas_weight': midas_weight,\n",
+        "    'near_plane': near_plane,\n",
+        "    'far_plane': far_plane,\n",
+        "    'fov': fov,\n",
+        "    'padding_mode': padding_mode,\n",
+        "    'sampling_mode': sampling_mode,\n",
+        "    'video_init_path':video_init_path,\n",
+        "    'extract_nth_frame':extract_nth_frame,\n",
+        "    'video_init_seed_continuity': video_init_seed_continuity,\n",
+        "    'turbo_mode':turbo_mode,\n",
+        "    'turbo_steps':turbo_steps,\n",
+        "    'turbo_preroll':turbo_preroll,\n",
+        "  }\n",
+        "  # print('Settings:', setting_list)\n",
+        "  with open(f\"{batchFolder}/{batch_name}({batchNum})_settings.txt\", \"w+\") as f:   #save settings\n",
+        "    json.dump(setting_list, f, ensure_ascii=False, indent=4)"
+      ],
+      "outputs": [],
+      "execution_count": null
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "cellView": "form",
+        "id": "DefSecModel"
+      },
+      "source": [
+        "#@title 1.6 Define the secondary diffusion model\n",
+        "\n",
+        "def append_dims(x, n):\n",
+        "    return x[(Ellipsis, *(None,) * (n - x.ndim))]\n",
+        "\n",
+        "\n",
+        "def expand_to_planes(x, shape):\n",
+        "    return append_dims(x, len(shape)).repeat([1, 1, *shape[2:]])\n",
+        "\n",
+        "\n",
+        "def alpha_sigma_to_t(alpha, sigma):\n",
+        "    return torch.atan2(sigma, alpha) * 2 / math.pi\n",
+        "\n",
+        "\n",
+        "def t_to_alpha_sigma(t):\n",
+        "    return torch.cos(t * math.pi / 2), torch.sin(t * math.pi / 2)\n",
+        "\n",
+        "\n",
+        "@dataclass\n",
+        "class DiffusionOutput:\n",
+        "    v: torch.Tensor\n",
+        "    pred: torch.Tensor\n",
+        "    eps: torch.Tensor\n",
+        "\n",
+        "\n",
+        "class ConvBlock(nn.Sequential):\n",
+        "    def __init__(self, c_in, c_out):\n",
+        "        super().__init__(\n",
+        "            nn.Conv2d(c_in, c_out, 3, padding=1),\n",
+        "            nn.ReLU(inplace=True),\n",
+        "        )\n",
+        "\n",
+        "\n",
+        "class SkipBlock(nn.Module):\n",
+        "    def __init__(self, main, skip=None):\n",
+        "        super().__init__()\n",
+        "        self.main = nn.Sequential(*main)\n",
+        "        self.skip = skip if skip else nn.Identity()\n",
+        "\n",
+        "    def forward(self, input):\n",
+        "        return torch.cat([self.main(input), self.skip(input)], dim=1)\n",
+        "\n",
+        "\n",
+        "class FourierFeatures(nn.Module):\n",
+        "    def __init__(self, in_features, out_features, std=1.):\n",
+        "        super().__init__()\n",
+        "        assert out_features % 2 == 0\n",
+        "        self.weight = nn.Parameter(torch.randn([out_features // 2, in_features]) * std)\n",
+        "\n",
+        "    def forward(self, input):\n",
+        "        f = 2 * math.pi * input @ self.weight.T\n",
+        "        return torch.cat([f.cos(), f.sin()], dim=-1)\n",
+        "\n",
+        "\n",
+        "class SecondaryDiffusionImageNet(nn.Module):\n",
+        "    def __init__(self):\n",
+        "        super().__init__()\n",
+        "        c = 64  # The base channel count\n",
+        "\n",
+        "        self.timestep_embed = FourierFeatures(1, 16)\n",
+        "\n",
+        "        self.net = nn.Sequential(\n",
+        "            ConvBlock(3 + 16, c),\n",
+        "            ConvBlock(c, c),\n",
+        "            SkipBlock([\n",
+        "                nn.AvgPool2d(2),\n",
+        "                ConvBlock(c, c * 2),\n",
+        "                ConvBlock(c * 2, c * 2),\n",
+        "                SkipBlock([\n",
+        "                    nn.AvgPool2d(2),\n",
+        "                    ConvBlock(c * 2, c * 4),\n",
+        "                    ConvBlock(c * 4, c * 4),\n",
+        "                    SkipBlock([\n",
+        "                        nn.AvgPool2d(2),\n",
+        "                        ConvBlock(c * 4, c * 8),\n",
+        "                        ConvBlock(c * 8, c * 4),\n",
+        "                        nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),\n",
+        "                    ]),\n",
+        "                    ConvBlock(c * 8, c * 4),\n",
+        "                    ConvBlock(c * 4, c * 2),\n",
+        "                    nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),\n",
+        "                ]),\n",
+        "                ConvBlock(c * 4, c * 2),\n",
+        "                ConvBlock(c * 2, c),\n",
+        "                nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),\n",
+        "            ]),\n",
+        "            ConvBlock(c * 2, c),\n",
+        "            nn.Conv2d(c, 3, 3, padding=1),\n",
+        "        )\n",
+        "\n",
+        "    def forward(self, input, t):\n",
+        "        timestep_embed = expand_to_planes(self.timestep_embed(t[:, None]), input.shape)\n",
+        "        v = self.net(torch.cat([input, timestep_embed], dim=1))\n",
+        "        alphas, sigmas = map(partial(append_dims, n=v.ndim), t_to_alpha_sigma(t))\n",
+        "        pred = input * alphas - v * sigmas\n",
+        "        eps = input * sigmas + v * alphas\n",
+        "        return DiffusionOutput(v, pred, eps)\n",
+        "\n",
+        "\n",
+        "class SecondaryDiffusionImageNet2(nn.Module):\n",
+        "    def __init__(self):\n",
+        "        super().__init__()\n",
+        "        c = 64  # The base channel count\n",
+        "        cs = [c, c * 2, c * 2, c * 4, c * 4, c * 8]\n",
+        "\n",
+        "        self.timestep_embed = FourierFeatures(1, 16)\n",
+        "        self.down = nn.AvgPool2d(2)\n",
+        "        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)\n",
+        "\n",
+        "        self.net = nn.Sequential(\n",
+        "            ConvBlock(3 + 16, cs[0]),\n",
+        "            ConvBlock(cs[0], cs[0]),\n",
+        "            SkipBlock([\n",
+        "                self.down,\n",
+        "                ConvBlock(cs[0], cs[1]),\n",
+        "                ConvBlock(cs[1], cs[1]),\n",
+        "                SkipBlock([\n",
+        "                    self.down,\n",
+        "                    ConvBlock(cs[1], cs[2]),\n",
+        "                    ConvBlock(cs[2], cs[2]),\n",
+        "                    SkipBlock([\n",
+        "                        self.down,\n",
+        "                        ConvBlock(cs[2], cs[3]),\n",
+        "                        ConvBlock(cs[3], cs[3]),\n",
+        "                        SkipBlock([\n",
+        "                            self.down,\n",
+        "                            ConvBlock(cs[3], cs[4]),\n",
+        "                            ConvBlock(cs[4], cs[4]),\n",
+        "                            SkipBlock([\n",
+        "                                self.down,\n",
+        "                                ConvBlock(cs[4], cs[5]),\n",
+        "                                ConvBlock(cs[5], cs[5]),\n",
+        "                                ConvBlock(cs[5], cs[5]),\n",
+        "                                ConvBlock(cs[5], cs[4]),\n",
+        "                                self.up,\n",
+        "                            ]),\n",
+        "                            ConvBlock(cs[4] * 2, cs[4]),\n",
+        "                            ConvBlock(cs[4], cs[3]),\n",
+        "                            self.up,\n",
+        "                        ]),\n",
+        "                        ConvBlock(cs[3] * 2, cs[3]),\n",
+        "                        ConvBlock(cs[3], cs[2]),\n",
+        "                        self.up,\n",
+        "                    ]),\n",
+        "                    ConvBlock(cs[2] * 2, cs[2]),\n",
+        "                    ConvBlock(cs[2], cs[1]),\n",
+        "                    self.up,\n",
+        "                ]),\n",
+        "                ConvBlock(cs[1] * 2, cs[1]),\n",
+        "                ConvBlock(cs[1], cs[0]),\n",
+        "                self.up,\n",
+        "            ]),\n",
+        "            ConvBlock(cs[0] * 2, cs[0]),\n",
+        "            nn.Conv2d(cs[0], 3, 3, padding=1),\n",
+        "        )\n",
+        "\n",
+        "    def forward(self, input, t):\n",
+        "        timestep_embed = expand_to_planes(self.timestep_embed(t[:, None]), input.shape)\n",
+        "        v = self.net(torch.cat([input, timestep_embed], dim=1))\n",
+        "        alphas, sigmas = map(partial(append_dims, n=v.ndim), t_to_alpha_sigma(t))\n",
+        "        pred = input * alphas - v * sigmas\n",
+        "        eps = input * sigmas + v * alphas\n",
+        "        return DiffusionOutput(v, pred, eps)"
+      ],
+      "outputs": [],
+      "execution_count": null
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "DiffClipSetTop"
+      },
+      "source": [
+        "# 2. Diffusion and CLIP model settings"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "ModelSettings"
+      },
+      "source": [
+        "#@markdown ####**Models Settings:**\n",
+        "diffusion_model = \"512x512_diffusion_uncond_finetune_008100\" #@param [\"256x256_diffusion_uncond\", \"512x512_diffusion_uncond_finetune_008100\"]\n",
+        "use_secondary_model = True #@param {type: 'boolean'}\n",
+        "diffusion_sampling_mode = 'ddim' #@param ['plms','ddim']  \n",
+        "\n",
+        "\n",
+        "use_checkpoint = True #@param {type: 'boolean'}\n",
+        "ViTB32 = True #@param{type:\"boolean\"}\n",
+        "ViTB16 = True #@param{type:\"boolean\"}\n",
+        "ViTL14 = False #@param{type:\"boolean\"}\n",
+        "RN101 = False #@param{type:\"boolean\"}\n",
+        "RN50 = True #@param{type:\"boolean\"}\n",
+        "RN50x4 = False #@param{type:\"boolean\"}\n",
+        "RN50x16 = False #@param{type:\"boolean\"}\n",
+        "RN50x64 = False #@param{type:\"boolean\"}\n",
+        "\n",
+        "#@markdown If you're having issues with model downloads, check this to compare SHA's:\n",
+        "check_model_SHA = False #@param{type:\"boolean\"}\n",
+        "\n",
+        "model_256_SHA = '983e3de6f95c88c81b2ca7ebb2c217933be1973b1ff058776b970f901584613a'\n",
+        "model_512_SHA = '9c111ab89e214862b76e1fa6a1b3f1d329b1a88281885943d2cdbe357ad57648'\n",
+        "model_secondary_SHA = '983e3de6f95c88c81b2ca7ebb2c217933be1973b1ff058776b970f901584613a'\n",
+        "\n",
+        "model_256_link = 'https://openaipublic.blob.core.windows.net/diffusion/jul-2021/256x256_diffusion_uncond.pt'\n",
+        "model_512_link = 'https://v-diffusion.s3.us-west-2.amazonaws.com/512x512_diffusion_uncond_finetune_008100.pt'\n",
+        "model_secondary_link = 'https://v-diffusion.s3.us-west-2.amazonaws.com/secondary_model_imagenet_2.pth'\n",
+        "\n",
+        "model_256_path = f'{model_path}/256x256_diffusion_uncond.pt'\n",
+        "model_512_path = f'{model_path}/512x512_diffusion_uncond_finetune_008100.pt'\n",
+        "model_secondary_path = f'{model_path}/secondary_model_imagenet_2.pth'\n",
+        "\n",
+        "# Download the diffusion model\n",
+        "if diffusion_model == '256x256_diffusion_uncond':\n",
+        "  if os.path.exists(model_256_path) and check_model_SHA:\n",
+        "    print('Checking 256 Diffusion File')\n",
+        "    with open(model_256_path,\"rb\") as f:\n",
+        "        bytes = f.read() \n",
+        "        hash = hashlib.sha256(bytes).hexdigest();\n",
+        "    if hash == model_256_SHA:\n",
+        "      print('256 Model SHA matches')\n",
+        "      model_256_downloaded = True\n",
+        "    else: \n",
+        "      print(\"256 Model SHA doesn't match, redownloading...\")\n",
+        "      wget(model_256_link, model_path)\n",
+        "      model_256_downloaded = True\n",
+        "  elif os.path.exists(model_256_path) and not check_model_SHA or model_256_downloaded == True:\n",
+        "    print('256 Model already downloaded, check check_model_SHA if the file is corrupt')\n",
+        "  else:  \n",
+        "    wget(model_256_link, model_path)\n",
+        "    model_256_downloaded = True\n",
+        "elif diffusion_model == '512x512_diffusion_uncond_finetune_008100':\n",
+        "  if os.path.exists(model_512_path) and check_model_SHA:\n",
+        "    print('Checking 512 Diffusion File')\n",
+        "    with open(model_512_path,\"rb\") as f:\n",
+        "        bytes = f.read() \n",
+        "        hash = hashlib.sha256(bytes).hexdigest();\n",
+        "    if hash == model_512_SHA:\n",
+        "      print('512 Model SHA matches')\n",
+        "      model_512_downloaded = True\n",
+        "    else:  \n",
+        "      print(\"512 Model SHA doesn't match, redownloading...\")\n",
+        "      wget(model_512_link, model_path)\n",
+        "      model_512_downloaded = True\n",
+        "  elif os.path.exists(model_512_path) and not check_model_SHA or model_512_downloaded == True:\n",
+        "    print('512 Model already downloaded, check check_model_SHA if the file is corrupt')\n",
+        "  else:  \n",
+        "    wget(model_512_link, model_path)\n",
+        "    model_512_downloaded = True\n",
+        "\n",
+        "\n",
+        "# Download the secondary diffusion model v2\n",
+        "if use_secondary_model == True:\n",
+        "  if os.path.exists(model_secondary_path) and check_model_SHA:\n",
+        "    print('Checking Secondary Diffusion File')\n",
+        "    with open(model_secondary_path,\"rb\") as f:\n",
+        "        bytes = f.read() \n",
+        "        hash = hashlib.sha256(bytes).hexdigest();\n",
+        "    if hash == model_secondary_SHA:\n",
+        "      print('Secondary Model SHA matches')\n",
+        "      model_secondary_downloaded = True\n",
+        "    else:  \n",
+        "      print(\"Secondary Model SHA doesn't match, redownloading...\")\n",
+        "      wget(model_secondary_link, model_path)\n",
+        "      model_secondary_downloaded = True\n",
+        "  elif os.path.exists(model_secondary_path) and not check_model_SHA or model_secondary_downloaded == True:\n",
+        "    print('Secondary Model already downloaded, check check_model_SHA if the file is corrupt')\n",
+        "  else:  \n",
+        "    wget(model_secondary_link, model_path)\n",
+        "    model_secondary_downloaded = True\n",
+        "\n",
+        "model_config = model_and_diffusion_defaults()\n",
+        "if diffusion_model == '512x512_diffusion_uncond_finetune_008100':\n",
+        "    model_config.update({\n",
+        "        'attention_resolutions': '32, 16, 8',\n",
+        "        'class_cond': False,\n",
+        "        'diffusion_steps': 1000, #No need to edit this, it is taken care of later.\n",
+        "        'rescale_timesteps': True,\n",
+        "        'timestep_respacing': 250, #No need to edit this, it is taken care of later.\n",
+        "        'image_size': 512,\n",
+        "        'learn_sigma': True,\n",
+        "        'noise_schedule': 'linear',\n",
+        "        'num_channels': 256,\n",
+        "        'num_head_channels': 64,\n",
+        "        'num_res_blocks': 2,\n",
+        "        'resblock_updown': True,\n",
+        "        'use_checkpoint': use_checkpoint,\n",
+        "        'use_fp16': True,\n",
+        "        'use_scale_shift_norm': True,\n",
+        "    })\n",
+        "elif diffusion_model == '256x256_diffusion_uncond':\n",
+        "    model_config.update({\n",
+        "        'attention_resolutions': '32, 16, 8',\n",
+        "        'class_cond': False,\n",
+        "        'diffusion_steps': 1000, #No need to edit this, it is taken care of later.\n",
+        "        'rescale_timesteps': True,\n",
+        "        'timestep_respacing': 250, #No need to edit this, it is taken care of later.\n",
+        "        'image_size': 256,\n",
+        "        'learn_sigma': True,\n",
+        "        'noise_schedule': 'linear',\n",
+        "        'num_channels': 256,\n",
+        "        'num_head_channels': 64,\n",
+        "        'num_res_blocks': 2,\n",
+        "        'resblock_updown': True,\n",
+        "        'use_checkpoint': use_checkpoint,\n",
+        "        'use_fp16': True,\n",
+        "        'use_scale_shift_norm': True,\n",
+        "    })\n",
+        "\n",
+        "model_default = model_config['image_size']\n",
+        "\n",
+        "\n",
+        "\n",
+        "if use_secondary_model:\n",
+        "    secondary_model = SecondaryDiffusionImageNet2()\n",
+        "    secondary_model.load_state_dict(torch.load(f'{model_path}/secondary_model_imagenet_2.pth', map_location='cpu'))\n",
+        "    secondary_model.eval().requires_grad_(False).to(device)\n",
+        "\n",
+        "clip_models = []\n",
+        "if ViTB32 is True: clip_models.append(clip.load('ViT-B/32', jit=False)[0].eval().requires_grad_(False).to(device)) \n",
+        "if ViTB16 is True: clip_models.append(clip.load('ViT-B/16', jit=False)[0].eval().requires_grad_(False).to(device) ) \n",
+        "if ViTL14 is True: clip_models.append(clip.load('ViT-L/14', jit=False)[0].eval().requires_grad_(False).to(device) ) \n",
+        "if RN50 is True: clip_models.append(clip.load('RN50', jit=False)[0].eval().requires_grad_(False).to(device))\n",
+        "if RN50x4 is True: clip_models.append(clip.load('RN50x4', jit=False)[0].eval().requires_grad_(False).to(device)) \n",
+        "if RN50x16 is True: clip_models.append(clip.load('RN50x16', jit=False)[0].eval().requires_grad_(False).to(device)) \n",
+        "if RN50x64 is True: clip_models.append(clip.load('RN50x64', jit=False)[0].eval().requires_grad_(False).to(device)) \n",
+        "if RN101 is True: clip_models.append(clip.load('RN101', jit=False)[0].eval().requires_grad_(False).to(device)) \n",
+        "\n",
+        "normalize = T.Normalize(mean=[0.48145466, 0.4578275, 0.40821073], std=[0.26862954, 0.26130258, 0.27577711])\n",
+        "lpips_model = lpips.LPIPS(net='vgg').to(device)"
+      ],
+      "outputs": [],
+      "execution_count": null
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "SettingsTop"
+      },
+      "source": [
+        "# 3. Settings"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "BasicSettings"
+      },
+      "source": [
+        "#@markdown ####**Basic Settings:**\n",
+        "batch_name = 'TimeToDisco' #@param{type: 'string'}\n",
+        "steps = 250 #@param [25,50,100,150,250,500,1000]{type: 'raw', allow-input: true}\n",
+        "width_height = [1280, 768]#@param{type: 'raw'}\n",
+        "clip_guidance_scale = 5000 #@param{type: 'number'}\n",
+        "tv_scale =  0#@param{type: 'number'}\n",
+        "range_scale =   150#@param{type: 'number'}\n",
+        "sat_scale =   0#@param{type: 'number'}\n",
+        "cutn_batches = 4  #@param{type: 'number'}\n",
+        "skip_augs = False#@param{type: 'boolean'}\n",
+        "\n",
+        "#@markdown ---\n",
+        "\n",
+        "#@markdown ####**Init Settings:**\n",
+        "init_image = None #@param{type: 'string'}\n",
+        "init_scale = 1000 #@param{type: 'integer'}\n",
+        "skip_steps = 10 #@param{type: 'integer'}\n",
+        "#@markdown *Make sure you set skip_steps to ~50% of your steps if you want to use an init image.*\n",
+        "\n",
+        "#Get corrected sizes\n",
+        "side_x = (width_height[0]//64)*64;\n",
+        "side_y = (width_height[1]//64)*64;\n",
+        "if side_x != width_height[0] or side_y != width_height[1]:\n",
+        "  print(f'Changing output size to {side_x}x{side_y}. Dimensions must by multiples of 64.')\n",
+        "\n",
+        "#Update Model Settings\n",
+        "timestep_respacing = f'ddim{steps}'\n",
+        "diffusion_steps = (1000//steps)*steps if steps < 1000 else steps\n",
+        "model_config.update({\n",
+        "    'timestep_respacing': timestep_respacing,\n",
+        "    'diffusion_steps': diffusion_steps,\n",
+        "})\n",
+        "\n",
+        "#Make folder for batch\n",
+        "batchFolder = f'{outDirPath}/{batch_name}'\n",
+        "createPath(batchFolder)"
+      ],
+      "outputs": [],
+      "execution_count": null
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "AnimSetTop"
+      },
+      "source": [
+        "### Animation Settings"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "AnimSettings"
+      },
+      "source": [
+        "#@markdown ####**Animation Mode:**\n",
+        "animation_mode = 'None' #@param ['None', '2D', '3D', 'Video Input'] {type:'string'}\n",
+        "#@markdown *For animation, you probably want to turn `cutn_batches` to 1 to make it quicker.*\n",
+        "\n",
+        "\n",
+        "#@markdown ---\n",
+        "\n",
+        "#@markdown ####**Video Input Settings:**\n",
+        "if is_colab:\n",
+        "    video_init_path = \"/content/training.mp4\" #@param {type: 'string'}\n",
+        "else:\n",
+        "    video_init_path = \"training.mp4\" #@param {type: 'string'}\n",
+        "extract_nth_frame = 2 #@param {type: 'number'}\n",
+        "video_init_seed_continuity = True #@param {type: 'boolean'}\n",
+        "\n",
+        "if animation_mode == \"Video Input\":\n",
+        "  if is_colab:\n",
+        "      videoFramesFolder = f'/content/videoFrames'\n",
+        "  else:\n",
+        "      videoFramesFolder = f'videoFrames'\n",
+        "  createPath(videoFramesFolder)\n",
+        "  print(f\"Exporting Video Frames (1 every {extract_nth_frame})...\")\n",
+        "  try:\n",
+        "    for f in pathlib.Path(f'{videoFramesFolder}').glob('*.jpg'):\n",
+        "      f.unlink()\n",
+        "  except:\n",
+        "    print('')\n",
+        "  vf = f'select=not(mod(n\\,{extract_nth_frame}))'\n",
+        "  subprocess.run(['ffmpeg', '-i', f'{video_init_path}', '-vf', f'{vf}', '-vsync', 'vfr', '-q:v', '2', '-loglevel', 'error', '-stats', f'{videoFramesFolder}/%04d.jpg'], stdout=subprocess.PIPE).stdout.decode('utf-8')\n",
+        "  #!ffmpeg -i {video_init_path} -vf {vf} -vsync vfr -q:v 2 -loglevel error -stats {videoFramesFolder}/%04d.jpg\n",
+        "\n",
+        "\n",
+        "#@markdown ---\n",
+        "\n",
+        "#@markdown ####**2D Animation Settings:**\n",
+        "#@markdown `zoom` is a multiplier of dimensions, 1 is no zoom.\n",
+        "#@markdown All rotations are provided in degrees.\n",
+        "\n",
+        "key_frames = True #@param {type:\"boolean\"}\n",
+        "max_frames = 10000#@param {type:\"number\"}\n",
+        "\n",
+        "if animation_mode == \"Video Input\":\n",
+        "  max_frames = len(glob(f'{videoFramesFolder}/*.jpg'))\n",
+        "\n",
+        "interp_spline = 'Linear' #Do not change, currently will not look good. param ['Linear','Quadratic','Cubic']{type:\"string\"}\n",
+        "angle = \"0:(0)\"#@param {type:\"string\"}\n",
+        "zoom = \"0: (1), 10: (1.05)\"#@param {type:\"string\"}\n",
+        "translation_x = \"0: (0)\"#@param {type:\"string\"}\n",
+        "translation_y = \"0: (0)\"#@param {type:\"string\"}\n",
+        "translation_z = \"0: (10.0)\"#@param {type:\"string\"}\n",
+        "rotation_3d_x = \"0: (0)\"#@param {type:\"string\"}\n",
+        "rotation_3d_y = \"0: (0)\"#@param {type:\"string\"}\n",
+        "rotation_3d_z = \"0: (0)\"#@param {type:\"string\"}\n",
+        "midas_depth_model = \"dpt_large\"#@param {type:\"string\"}\n",
+        "midas_weight = 0.3#@param {type:\"number\"}\n",
+        "near_plane = 200#@param {type:\"number\"}\n",
+        "far_plane = 10000#@param {type:\"number\"}\n",
+        "fov = 40#@param {type:\"number\"}\n",
+        "padding_mode = 'border'#@param {type:\"string\"}\n",
+        "sampling_mode = 'bicubic'#@param {type:\"string\"}\n",
+        "\n",
+        "#======= TURBO MODE\n",
+        "#@markdown ---\n",
+        "#@markdown ####**Turbo Mode (3D anim only):**\n",
+        "#@markdown (Starts after frame 10,) skips diffusion steps and just uses depth map to warp images for skipped frames.\n",
+        "#@markdown Speeds up rendering by 2x-4x, and may improve image coherence between frames. frame_blend_mode smooths abrupt texture changes across 2 frames.\n",
+        "#@markdown For different settings tuned for Turbo Mode, refer to the original Disco-Turbo Github: https://github.com/zippy731/disco-diffusion-turbo\n",
+        "\n",
+        "turbo_mode = False #@param {type:\"boolean\"}\n",
+        "turbo_steps = \"3\" #@param [\"2\",\"3\",\"4\",\"5\",\"6\"] {type:\"string\"}\n",
+        "turbo_preroll = 10 # frames\n",
+        "\n",
+        "#insist turbo be used only w 3d anim.\n",
+        "if turbo_mode and animation_mode != '3D':\n",
+        "  print('=====')\n",
+        "  print('Turbo mode only available with 3D animations. Disabling Turbo.')\n",
+        "  print('=====')\n",
+        "  turbo_mode = False\n",
+        "\n",
+        "#@markdown ---\n",
+        "\n",
+        "#@markdown ####**Coherency Settings:**\n",
+        "#@markdown `frame_scale` tries to guide the new frame to looking like the old one. A good default is 1500.\n",
+        "frames_scale = 1500 #@param{type: 'integer'}\n",
+        "#@markdown `frame_skip_steps` will blur the previous frame - higher values will flicker less but struggle to add enough new detail to zoom into.\n",
+        "frames_skip_steps = '60%' #@param ['40%', '50%', '60%', '70%', '80%'] {type: 'string'}\n",
+        "\n",
+        "#======= VR MODE\n",
+        "#@markdown ---\n",
+        "#@markdown ####**VR Mode (3D anim only):**\n",
+        "#@markdown Enables stereo rendering of left/right eye views (supporting Turbo) which use a different (fish-eye) camera projection matrix.   \n",
+        "#@markdown Note the images you're prompting will work better if they have some inherent wide-angle aspect\n",
+        "#@markdown The generated images will need to be combined into left/right videos. These can then be stitched into the VR180 format.\n",
+        "#@markdown Google made the VR180 Creator tool but subsequently stopped supporting it. It's available for download in a few places including https://www.patrickgrunwald.de/vr180-creator-download\n",
+        "#@markdown The tool is not only good for stitching (videos and photos) but also for adding the correct metadata into existing videos, which is needed for services like YouTube to identify the format correctly.\n",
+        "#@markdown Watching YouTube VR videos isn't necessarily the easiest depending on your headset. For instance Oculus have a dedicated media studio and store which makes the files easier to access on a Quest https://creator.oculus.com/manage/mediastudio/\n",
+        "#@markdown \n",
+        "#@markdown The command to get ffmpeg to concat your frames for each eye is in the form: `ffmpeg -framerate 15 -i frame_%4d_l.png l.mp4` (repeat for r)\n",
+        "\n",
+        "vr_mode = False #@param {type:\"boolean\"}\n",
+        "#@markdown `vr_eye_angle` is the y-axis rotation of the eyes towards the center\n",
+        "vr_eye_angle = 0.5 #@param{type:\"number\"}\n",
+        "#@markdown interpupillary distance (between the eyes)\n",
+        "vr_ipd = 5.0 #@param{type:\"number\"}\n",
+        "\n",
+        "#insist turbo be used only w 3d anim.\n",
+        "if vr_mode and animation_mode != '3D':\n",
+        "  print('=====')\n",
+        "  print('VR mode only available with 3D animations. Disabling VR.')\n",
+        "  print('=====')\n",
+        "  turbo_mode = False\n",
+        "\n",
+        "\n",
+        "def parse_key_frames(string, prompt_parser=None):\n",
+        "    \"\"\"Given a string representing frame numbers paired with parameter values at that frame,\n",
+        "    return a dictionary with the frame numbers as keys and the parameter values as the values.\n",
+        "\n",
+        "    Parameters\n",
+        "    ----------\n",
+        "    string: string\n",
+        "        Frame numbers paired with parameter values at that frame number, in the format\n",
+        "        'framenumber1: (parametervalues1), framenumber2: (parametervalues2), ...'\n",
+        "    prompt_parser: function or None, optional\n",
+        "        If provided, prompt_parser will be applied to each string of parameter values.\n",
+        "    \n",
+        "    Returns\n",
+        "    -------\n",
+        "    dict\n",
+        "        Frame numbers as keys, parameter values at that frame number as values\n",
+        "\n",
+        "    Raises\n",
+        "    ------\n",
+        "    RuntimeError\n",
+        "        If the input string does not match the expected format.\n",
+        "    \n",
+        "    Examples\n",
+        "    --------\n",
+        "    >>> parse_key_frames(\"10:(Apple: 1| Orange: 0), 20: (Apple: 0| Orange: 1| Peach: 1)\")\n",
+        "    {10: 'Apple: 1| Orange: 0', 20: 'Apple: 0| Orange: 1| Peach: 1'}\n",
+        "\n",
+        "    >>> parse_key_frames(\"10:(Apple: 1| Orange: 0), 20: (Apple: 0| Orange: 1| Peach: 1)\", prompt_parser=lambda x: x.lower()))\n",
+        "    {10: 'apple: 1| orange: 0', 20: 'apple: 0| orange: 1| peach: 1'}\n",
+        "    \"\"\"\n",
+        "    import re\n",
+        "    pattern = r'((?P<frame>[0-9]+):[\\s]*[\\(](?P<param>[\\S\\s]*?)[\\)])'\n",
+        "    frames = dict()\n",
+        "    for match_object in re.finditer(pattern, string):\n",
+        "        frame = int(match_object.groupdict()['frame'])\n",
+        "        param = match_object.groupdict()['param']\n",
+        "        if prompt_parser:\n",
+        "            frames[frame] = prompt_parser(param)\n",
+        "        else:\n",
+        "            frames[frame] = param\n",
+        "\n",
+        "    if frames == {} and len(string) != 0:\n",
+        "        raise RuntimeError('Key Frame string not correctly formatted')\n",
+        "    return frames\n",
+        "\n",
+        "def get_inbetweens(key_frames, integer=False):\n",
+        "    \"\"\"Given a dict with frame numbers as keys and a parameter value as values,\n",
+        "    return a pandas Series containing the value of the parameter at every frame from 0 to max_frames.\n",
+        "    Any values not provided in the input dict are calculated by linear interpolation between\n",
+        "    the values of the previous and next provided frames. If there is no previous provided frame, then\n",
+        "    the value is equal to the value of the next provided frame, or if there is no next provided frame,\n",
+        "    then the value is equal to the value of the previous provided frame. If no frames are provided,\n",
+        "    all frame values are NaN.\n",
+        "\n",
+        "    Parameters\n",
+        "    ----------\n",
+        "    key_frames: dict\n",
+        "        A dict with integer frame numbers as keys and numerical values of a particular parameter as values.\n",
+        "    integer: Bool, optional\n",
+        "        If True, the values of the output series are converted to integers.\n",
+        "        Otherwise, the values are floats.\n",
+        "    \n",
+        "    Returns\n",
+        "    -------\n",
+        "    pd.Series\n",
+        "        A Series with length max_frames representing the parameter values for each frame.\n",
+        "    \n",
+        "    Examples\n",
+        "    --------\n",
+        "    >>> max_frames = 5\n",
+        "    >>> get_inbetweens({1: 5, 3: 6})\n",
+        "    0    5.0\n",
+        "    1    5.0\n",
+        "    2    5.5\n",
+        "    3    6.0\n",
+        "    4    6.0\n",
+        "    dtype: float64\n",
+        "\n",
+        "    >>> get_inbetweens({1: 5, 3: 6}, integer=True)\n",
+        "    0    5\n",
+        "    1    5\n",
+        "    2    5\n",
+        "    3    6\n",
+        "    4    6\n",
+        "    dtype: int64\n",
+        "    \"\"\"\n",
+        "    key_frame_series = pd.Series([np.nan for a in range(max_frames)])\n",
+        "\n",
+        "    for i, value in key_frames.items():\n",
+        "        key_frame_series[i] = value\n",
+        "    key_frame_series = key_frame_series.astype(float)\n",
+        "    \n",
+        "    interp_method = interp_spline\n",
+        "\n",
+        "    if interp_method == 'Cubic' and len(key_frames.items()) <=3:\n",
+        "      interp_method = 'Quadratic'\n",
+        "    \n",
+        "    if interp_method == 'Quadratic' and len(key_frames.items()) <= 2:\n",
+        "      interp_method = 'Linear'\n",
+        "      \n",
+        "    \n",
+        "    key_frame_series[0] = key_frame_series[key_frame_series.first_valid_index()]\n",
+        "    key_frame_series[max_frames-1] = key_frame_series[key_frame_series.last_valid_index()]\n",
+        "    # key_frame_series = key_frame_series.interpolate(method=intrp_method,order=1, limit_direction='both')\n",
+        "    key_frame_series = key_frame_series.interpolate(method=interp_method.lower(),limit_direction='both')\n",
+        "    if integer:\n",
+        "        return key_frame_series.astype(int)\n",
+        "    return key_frame_series\n",
+        "\n",
+        "def split_prompts(prompts):\n",
+        "  prompt_series = pd.Series([np.nan for a in range(max_frames)])\n",
+        "  for i, prompt in prompts.items():\n",
+        "    prompt_series[i] = prompt\n",
+        "  # prompt_series = prompt_series.astype(str)\n",
+        "  prompt_series = prompt_series.ffill().bfill()\n",
+        "  return prompt_series\n",
+        "\n",
+        "if key_frames:\n",
+        "    try:\n",
+        "        angle_series = get_inbetweens(parse_key_frames(angle))\n",
+        "    except RuntimeError as e:\n",
+        "        print(\n",
+        "            \"WARNING: You have selected to use key frames, but you have not \"\n",
+        "            \"formatted `angle` correctly for key frames.\\n\"\n",
+        "            \"Attempting to interpret `angle` as \"\n",
+        "            f'\"0: ({angle})\"\\n'\n",
+        "            \"Please read the instructions to find out how to use key frames \"\n",
+        "            \"correctly.\\n\"\n",
+        "        )\n",
+        "        angle = f\"0: ({angle})\"\n",
+        "        angle_series = get_inbetweens(parse_key_frames(angle))\n",
+        "\n",
+        "    try:\n",
+        "        zoom_series = get_inbetweens(parse_key_frames(zoom))\n",
+        "    except RuntimeError as e:\n",
+        "        print(\n",
+        "            \"WARNING: You have selected to use key frames, but you have not \"\n",
+        "            \"formatted `zoom` correctly for key frames.\\n\"\n",
+        "            \"Attempting to interpret `zoom` as \"\n",
+        "            f'\"0: ({zoom})\"\\n'\n",
+        "            \"Please read the instructions to find out how to use key frames \"\n",
+        "            \"correctly.\\n\"\n",
+        "        )\n",
+        "        zoom = f\"0: ({zoom})\"\n",
+        "        zoom_series = get_inbetweens(parse_key_frames(zoom))\n",
+        "\n",
+        "    try:\n",
+        "        translation_x_series = get_inbetweens(parse_key_frames(translation_x))\n",
+        "    except RuntimeError as e:\n",
+        "        print(\n",
+        "            \"WARNING: You have selected to use key frames, but you have not \"\n",
+        "            \"formatted `translation_x` correctly for key frames.\\n\"\n",
+        "            \"Attempting to interpret `translation_x` as \"\n",
+        "            f'\"0: ({translation_x})\"\\n'\n",
+        "            \"Please read the instructions to find out how to use key frames \"\n",
+        "            \"correctly.\\n\"\n",
+        "        )\n",
+        "        translation_x = f\"0: ({translation_x})\"\n",
+        "        translation_x_series = get_inbetweens(parse_key_frames(translation_x))\n",
+        "\n",
+        "    try:\n",
+        "        translation_y_series = get_inbetweens(parse_key_frames(translation_y))\n",
+        "    except RuntimeError as e:\n",
+        "        print(\n",
+        "            \"WARNING: You have selected to use key frames, but you have not \"\n",
+        "            \"formatted `translation_y` correctly for key frames.\\n\"\n",
+        "            \"Attempting to interpret `translation_y` as \"\n",
+        "            f'\"0: ({translation_y})\"\\n'\n",
+        "            \"Please read the instructions to find out how to use key frames \"\n",
+        "            \"correctly.\\n\"\n",
+        "        )\n",
+        "        translation_y = f\"0: ({translation_y})\"\n",
+        "        translation_y_series = get_inbetweens(parse_key_frames(translation_y))\n",
+        "\n",
+        "    try:\n",
+        "        translation_z_series = get_inbetweens(parse_key_frames(translation_z))\n",
+        "    except RuntimeError as e:\n",
+        "        print(\n",
+        "            \"WARNING: You have selected to use key frames, but you have not \"\n",
+        "            \"formatted `translation_z` correctly for key frames.\\n\"\n",
+        "            \"Attempting to interpret `translation_z` as \"\n",
+        "            f'\"0: ({translation_z})\"\\n'\n",
+        "            \"Please read the instructions to find out how to use key frames \"\n",
+        "            \"correctly.\\n\"\n",
+        "        )\n",
+        "        translation_z = f\"0: ({translation_z})\"\n",
+        "        translation_z_series = get_inbetweens(parse_key_frames(translation_z))\n",
+        "\n",
+        "    try:\n",
+        "        rotation_3d_x_series = get_inbetweens(parse_key_frames(rotation_3d_x))\n",
+        "    except RuntimeError as e:\n",
+        "        print(\n",
+        "            \"WARNING: You have selected to use key frames, but you have not \"\n",
+        "            \"formatted `rotation_3d_x` correctly for key frames.\\n\"\n",
+        "            \"Attempting to interpret `rotation_3d_x` as \"\n",
+        "            f'\"0: ({rotation_3d_x})\"\\n'\n",
+        "            \"Please read the instructions to find out how to use key frames \"\n",
+        "            \"correctly.\\n\"\n",
+        "        )\n",
+        "        rotation_3d_x = f\"0: ({rotation_3d_x})\"\n",
+        "        rotation_3d_x_series = get_inbetweens(parse_key_frames(rotation_3d_x))\n",
+        "\n",
+        "    try:\n",
+        "        rotation_3d_y_series = get_inbetweens(parse_key_frames(rotation_3d_y))\n",
+        "    except RuntimeError as e:\n",
+        "        print(\n",
+        "            \"WARNING: You have selected to use key frames, but you have not \"\n",
+        "            \"formatted `rotation_3d_y` correctly for key frames.\\n\"\n",
+        "            \"Attempting to interpret `rotation_3d_y` as \"\n",
+        "            f'\"0: ({rotation_3d_y})\"\\n'\n",
+        "            \"Please read the instructions to find out how to use key frames \"\n",
+        "            \"correctly.\\n\"\n",
+        "        )\n",
+        "        rotation_3d_y = f\"0: ({rotation_3d_y})\"\n",
+        "        rotation_3d_y_series = get_inbetweens(parse_key_frames(rotation_3d_y))\n",
+        "\n",
+        "    try:\n",
+        "        rotation_3d_z_series = get_inbetweens(parse_key_frames(rotation_3d_z))\n",
+        "    except RuntimeError as e:\n",
+        "        print(\n",
+        "            \"WARNING: You have selected to use key frames, but you have not \"\n",
+        "            \"formatted `rotation_3d_z` correctly for key frames.\\n\"\n",
+        "            \"Attempting to interpret `rotation_3d_z` as \"\n",
+        "            f'\"0: ({rotation_3d_z})\"\\n'\n",
+        "            \"Please read the instructions to find out how to use key frames \"\n",
+        "            \"correctly.\\n\"\n",
+        "        )\n",
+        "        rotation_3d_z = f\"0: ({rotation_3d_z})\"\n",
+        "        rotation_3d_z_series = get_inbetweens(parse_key_frames(rotation_3d_z))\n",
+        "\n",
+        "else:\n",
+        "    angle = float(angle)\n",
+        "    zoom = float(zoom)\n",
+        "    translation_x = float(translation_x)\n",
+        "    translation_y = float(translation_y)\n",
+        "    translation_z = float(translation_z)\n",
+        "    rotation_3d_x = float(rotation_3d_x)\n",
+        "    rotation_3d_y = float(rotation_3d_y)\n",
+        "    rotation_3d_z = float(rotation_3d_z)"
+      ],
+      "outputs": [],
+      "execution_count": null
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "ExtraSetTop"
+      },
+      "source": [
+        "### Extra Settings\n",
+        " Partial Saves, Advanced Settings, Cutn Scheduling"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "ExtraSettings"
+      },
+      "source": [
+        "#@markdown ####**Saving:**\n",
+        "\n",
+        "intermediate_saves = 0#@param{type: 'raw'}\n",
+        "intermediates_in_subfolder = True #@param{type: 'boolean'}\n",
+        "#@markdown Intermediate steps will save a copy at your specified intervals. You can either format it as a single integer or a list of specific steps \n",
+        "\n",
+        "#@markdown A value of `2` will save a copy at 33% and 66%. 0 will save none.\n",
+        "\n",
+        "#@markdown A value of `[5, 9, 34, 45]` will save at steps 5, 9, 34, and 45. (Make sure to include the brackets)\n",
+        "\n",
+        "\n",
+        "if type(intermediate_saves) is not list:\n",
+        "  if intermediate_saves:\n",
+        "    steps_per_checkpoint = math.floor((steps - skip_steps - 1) // (intermediate_saves+1))\n",
+        "    steps_per_checkpoint = steps_per_checkpoint if steps_per_checkpoint > 0 else 1\n",
+        "    print(f'Will save every {steps_per_checkpoint} steps')\n",
+        "  else:\n",
+        "    steps_per_checkpoint = steps+10\n",
+        "else:\n",
+        "  steps_per_checkpoint = None\n",
+        "\n",
+        "if intermediate_saves and intermediates_in_subfolder is True:\n",
+        "  partialFolder = f'{batchFolder}/partials'\n",
+        "  createPath(partialFolder)\n",
+        "\n",
+        "  #@markdown ---\n",
+        "\n",
+        "#@markdown ####**Advanced Settings:**\n",
+        "#@markdown *There are a few extra advanced settings available if you double click this cell.*\n",
+        "\n",
+        "#@markdown *Perlin init will replace your init, so uncheck if using one.*\n",
+        "\n",
+        "perlin_init = False  #@param{type: 'boolean'}\n",
+        "perlin_mode = 'mixed' #@param ['mixed', 'color', 'gray']\n",
+        "set_seed = 'random_seed' #@param{type: 'string'}\n",
+        "eta = 0.8#@param{type: 'number'}\n",
+        "clamp_grad = True #@param{type: 'boolean'}\n",
+        "clamp_max = 0.05 #@param{type: 'number'}\n",
+        "\n",
+        "\n",
+        "### EXTRA ADVANCED SETTINGS:\n",
+        "randomize_class = True\n",
+        "clip_denoised = False\n",
+        "fuzzy_prompt = False\n",
+        "rand_mag = 0.05\n",
+        "\n",
+        "\n",
+        " #@markdown ---\n",
+        "\n",
+        "#@markdown ####**Cutn Scheduling:**\n",
+        "#@markdown Format: `[40]*400+[20]*600` = 40 cuts for the first 400 /1000 steps, then 20 for the last 600/1000\n",
+        "\n",
+        "#@markdown cut_overview and cut_innercut are cumulative for total cutn on any given step. Overview cuts see the entire image and are good for early structure, innercuts are your standard cutn.\n",
+        "\n",
+        "cut_overview = \"[12]*400+[4]*600\" #@param {type: 'string'}       \n",
+        "cut_innercut =\"[4]*400+[12]*600\"#@param {type: 'string'}  \n",
+        "cut_ic_pow = 1#@param {type: 'number'}  \n",
+        "cut_icgray_p = \"[0.2]*400+[0]*600\"#@param {type: 'string'}"
+      ],
+      "outputs": [],
+      "execution_count": null
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "PromptsTop"
+      },
+      "source": [
+        "### Prompts\n",
+        "`animation_mode: None` will only use the first set. `animation_mode: 2D / Video` will run through them per the set frames and hold on the last one."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "Prompts"
+      },
+      "source": [
+        "text_prompts = {\n",
+        "    0: [\"A beautiful painting of a singular lighthouse, shining its light across a tumultuous sea of blood by greg rutkowski and thomas kinkade, Trending on artstation.\", \"yellow color scheme\"],\n",
+        "    100: [\"This set of prompts start at frame 100\",\"This prompt has weight five:5\"],\n",
+        "}\n",
+        "\n",
+        "image_prompts = {\n",
+        "    # 0:['ImagePromptsWorkButArentVeryGood.png:2',],\n",
+        "}"
+      ],
+      "outputs": [],
+      "execution_count": null
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "DiffuseTop"
+      },
+      "source": [
+        "# 4. Diffuse!"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "DoTheRun"
+      },
+      "source": [
+        "#@title Do the Run!\n",
+        "#@markdown `n_batches` ignored with animation modes.\n",
+        "display_rate =  50 #@param{type: 'number'}\n",
+        "n_batches =  50 #@param{type: 'number'}\n",
+        "\n",
+        "#Update Model Settings\n",
+        "timestep_respacing = f'ddim{steps}'\n",
+        "diffusion_steps = (1000//steps)*steps if steps < 1000 else steps\n",
+        "model_config.update({\n",
+        "    'timestep_respacing': timestep_respacing,\n",
+        "    'diffusion_steps': diffusion_steps,\n",
+        "})\n",
+        "\n",
+        "batch_size = 1 \n",
+        "\n",
+        "def move_files(start_num, end_num, old_folder, new_folder):\n",
+        "    for i in range(start_num, end_num):\n",
+        "        old_file = old_folder + f'/{batch_name}({batchNum})_{i:04}.png'\n",
+        "        new_file = new_folder + f'/{batch_name}({batchNum})_{i:04}.png'\n",
+        "        os.rename(old_file, new_file)\n",
+        "\n",
+        "#@markdown ---\n",
+        "\n",
+        "\n",
+        "resume_run = False #@param{type: 'boolean'}\n",
+        "run_to_resume = 'latest' #@param{type: 'string'}\n",
+        "resume_from_frame = 'latest' #@param{type: 'string'}\n",
+        "retain_overwritten_frames = False #@param{type: 'boolean'}\n",
+        "if retain_overwritten_frames is True:\n",
+        "  retainFolder = f'{batchFolder}/retained'\n",
+        "  createPath(retainFolder)\n",
+        "\n",
+        "\n",
+        "skip_step_ratio = int(frames_skip_steps.rstrip(\"%\")) / 100\n",
+        "calc_frames_skip_steps = math.floor(steps * skip_step_ratio)\n",
+        "\n",
+        "\n",
+        "if steps <= calc_frames_skip_steps:\n",
+        "  sys.exit(\"ERROR: You can't skip more steps than your total steps\")\n",
+        "\n",
+        "if resume_run:\n",
+        "  if run_to_resume == 'latest':\n",
+        "    try:\n",
+        "      batchNum\n",
+        "    except:\n",
+        "      batchNum = len(glob(f\"{batchFolder}/{batch_name}(*)_settings.txt\"))-1\n",
+        "  else:\n",
+        "    batchNum = int(run_to_resume)\n",
+        "  if resume_from_frame == 'latest':\n",
+        "    start_frame = len(glob(batchFolder+f\"/{batch_name}({batchNum})_*.png\"))\n",
+        "    if animation_mode != '3D' and turbo_mode == True and start_frame > turbo_preroll and start_frame % int(turbo_steps) != 0:\n",
+        "      start_frame = start_frame - (start_frame % int(turbo_steps))\n",
+        "  else:\n",
+        "    start_frame = int(resume_from_frame)+1\n",
+        "    if animation_mode != '3D' and turbo_mode == True and start_frame > turbo_preroll and start_frame % int(turbo_steps) != 0:\n",
+        "      start_frame = start_frame - (start_frame % int(turbo_steps))\n",
+        "    if retain_overwritten_frames is True:\n",
+        "      existing_frames = len(glob(batchFolder+f\"/{batch_name}({batchNum})_*.png\"))\n",
+        "      frames_to_save = existing_frames - start_frame\n",
+        "      print(f'Moving {frames_to_save} frames to the Retained folder')\n",
+        "      move_files(start_frame, existing_frames, batchFolder, retainFolder)\n",
+        "else:\n",
+        "  start_frame = 0\n",
+        "  batchNum = len(glob(batchFolder+\"/*.txt\"))\n",
+        "  while os.path.isfile(f\"{batchFolder}/{batch_name}({batchNum})_settings.txt\") is True or os.path.isfile(f\"{batchFolder}/{batch_name}-{batchNum}_settings.txt\") is True:\n",
+        "    batchNum += 1\n",
+        "\n",
+        "print(f'Starting Run: {batch_name}({batchNum}) at frame {start_frame}')\n",
+        "\n",
+        "if set_seed == 'random_seed':\n",
+        "    random.seed()\n",
+        "    seed = random.randint(0, 2**32)\n",
+        "    # print(f'Using seed: {seed}')\n",
+        "else:\n",
+        "    seed = int(set_seed)\n",
+        "\n",
+        "args = {\n",
+        "    'batchNum': batchNum,\n",
+        "    'prompts_series':split_prompts(text_prompts) if text_prompts else None,\n",
+        "    'image_prompts_series':split_prompts(image_prompts) if image_prompts else None,\n",
+        "    'seed': seed,\n",
+        "    'display_rate':display_rate,\n",
+        "    'n_batches':n_batches if animation_mode == 'None' else 1,\n",
+        "    'batch_size':batch_size,\n",
+        "    'batch_name': batch_name,\n",
+        "    'steps': steps,\n",
+        "    'diffusion_sampling_mode': diffusion_sampling_mode,\n",
+        "    'width_height': width_height,\n",
+        "    'clip_guidance_scale': clip_guidance_scale,\n",
+        "    'tv_scale': tv_scale,\n",
+        "    'range_scale': range_scale,\n",
+        "    'sat_scale': sat_scale,\n",
+        "    'cutn_batches': cutn_batches,\n",
+        "    'init_image': init_image,\n",
+        "    'init_scale': init_scale,\n",
+        "    'skip_steps': skip_steps,\n",
+        "    'side_x': side_x,\n",
+        "    'side_y': side_y,\n",
+        "    'timestep_respacing': timestep_respacing,\n",
+        "    'diffusion_steps': diffusion_steps,\n",
+        "    'animation_mode': animation_mode,\n",
+        "    'video_init_path': video_init_path,\n",
+        "    'extract_nth_frame': extract_nth_frame,\n",
+        "    'video_init_seed_continuity': video_init_seed_continuity,\n",
+        "    'key_frames': key_frames,\n",
+        "    'max_frames': max_frames if animation_mode != \"None\" else 1,\n",
+        "    'interp_spline': interp_spline,\n",
+        "    'start_frame': start_frame,\n",
+        "    'angle': angle,\n",
+        "    'zoom': zoom,\n",
+        "    'translation_x': translation_x,\n",
+        "    'translation_y': translation_y,\n",
+        "    'translation_z': translation_z,\n",
+        "    'rotation_3d_x': rotation_3d_x,\n",
+        "    'rotation_3d_y': rotation_3d_y,\n",
+        "    'rotation_3d_z': rotation_3d_z,\n",
+        "    'midas_depth_model': midas_depth_model,\n",
+        "    'midas_weight': midas_weight,\n",
+        "    'near_plane': near_plane,\n",
+        "    'far_plane': far_plane,\n",
+        "    'fov': fov,\n",
+        "    'padding_mode': padding_mode,\n",
+        "    'sampling_mode': sampling_mode,\n",
+        "    'angle_series':angle_series,\n",
+        "    'zoom_series':zoom_series,\n",
+        "    'translation_x_series':translation_x_series,\n",
+        "    'translation_y_series':translation_y_series,\n",
+        "    'translation_z_series':translation_z_series,\n",
+        "    'rotation_3d_x_series':rotation_3d_x_series,\n",
+        "    'rotation_3d_y_series':rotation_3d_y_series,\n",
+        "    'rotation_3d_z_series':rotation_3d_z_series,\n",
+        "    'frames_scale': frames_scale,\n",
+        "    'calc_frames_skip_steps': calc_frames_skip_steps,\n",
+        "    'skip_step_ratio': skip_step_ratio,\n",
+        "    'calc_frames_skip_steps': calc_frames_skip_steps,\n",
+        "    'text_prompts': text_prompts,\n",
+        "    'image_prompts': image_prompts,\n",
+        "    'cut_overview': eval(cut_overview),\n",
+        "    'cut_innercut': eval(cut_innercut),\n",
+        "    'cut_ic_pow': cut_ic_pow,\n",
+        "    'cut_icgray_p': eval(cut_icgray_p),\n",
+        "    'intermediate_saves': intermediate_saves,\n",
+        "    'intermediates_in_subfolder': intermediates_in_subfolder,\n",
+        "    'steps_per_checkpoint': steps_per_checkpoint,\n",
+        "    'perlin_init': perlin_init,\n",
+        "    'perlin_mode': perlin_mode,\n",
+        "    'set_seed': set_seed,\n",
+        "    'eta': eta,\n",
+        "    'clamp_grad': clamp_grad,\n",
+        "    'clamp_max': clamp_max,\n",
+        "    'skip_augs': skip_augs,\n",
+        "    'randomize_class': randomize_class,\n",
+        "    'clip_denoised': clip_denoised,\n",
+        "    'fuzzy_prompt': fuzzy_prompt,\n",
+        "    'rand_mag': rand_mag,\n",
+        "}\n",
+        "\n",
+        "args = SimpleNamespace(**args)\n",
+        "\n",
+        "print('Prepping model...')\n",
+        "model, diffusion = create_model_and_diffusion(**model_config)\n",
+        "model.load_state_dict(torch.load(f'{model_path}/{diffusion_model}.pt', map_location='cpu'))\n",
+        "model.requires_grad_(False).eval().to(device)\n",
+        "for name, param in model.named_parameters():\n",
+        "    if 'qkv' in name or 'norm' in name or 'proj' in name:\n",
+        "        param.requires_grad_()\n",
+        "if model_config['use_fp16']:\n",
+        "    model.convert_to_fp16()\n",
+        "\n",
+        "gc.collect()\n",
+        "torch.cuda.empty_cache()\n",
+        "try:\n",
+        "  do_run()\n",
+        "except KeyboardInterrupt:\n",
+        "    pass\n",
+        "finally:\n",
+        "    print('Seed used:', seed)\n",
+        "    gc.collect()\n",
+        "    torch.cuda.empty_cache()"
+      ],
+      "outputs": [],
+      "execution_count": null
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "CreateVidTop"
+      },
+      "source": [
+        "# 5. Create the video"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "CreateVid"
+      },
+      "source": [
+        "# @title ### **Create video**\n",
+        "#@markdown Video file will save in the same folder as your images.\n",
+        "\n",
+        "skip_video_for_run_all = True #@param {type: 'boolean'}\n",
+        "\n",
+        "if skip_video_for_run_all == True:\n",
+        "  print('Skipping video creation, uncheck skip_video_for_run_all if you want to run it')\n",
+        "\n",
+        "else:\n",
+        "  # import subprocess in case this cell is run without the above cells\n",
+        "  import subprocess\n",
+        "  from base64 import b64encode\n",
+        "\n",
+        "  latest_run = batchNum\n",
+        "\n",
+        "  folder = batch_name #@param\n",
+        "  run = latest_run #@param\n",
+        "  final_frame = 'final_frame'\n",
+        "\n",
+        "\n",
+        "  init_frame = 1#@param {type:\"number\"} This is the frame where the video will start\n",
+        "  last_frame = final_frame#@param {type:\"number\"} You can change i to the number of the last frame you want to generate. It will raise an error if that number of frames does not exist.\n",
+        "  fps = 12#@param {type:\"number\"}\n",
+        "  # view_video_in_cell = True #@param {type: 'boolean'}\n",
+        "\n",
+        "  frames = []\n",
+        "  # tqdm.write('Generating video...')\n",
+        "\n",
+        "  if last_frame == 'final_frame':\n",
+        "    last_frame = len(glob(batchFolder+f\"/{folder}({run})_*.png\"))\n",
+        "    print(f'Total frames: {last_frame}')\n",
+        "\n",
+        "  image_path = f\"{outDirPath}/{folder}/{folder}({run})_%04d.png\"\n",
+        "  filepath = f\"{outDirPath}/{folder}/{folder}({run}).mp4\"\n",
+        "\n",
+        "\n",
+        "  cmd = [\n",
+        "      'ffmpeg',\n",
+        "      '-y',\n",
+        "      '-vcodec',\n",
+        "      'png',\n",
+        "      '-r',\n",
+        "      str(fps),\n",
+        "      '-start_number',\n",
+        "      str(init_frame),\n",
+        "      '-i',\n",
+        "      image_path,\n",
+        "      '-frames:v',\n",
+        "      str(last_frame+1),\n",
+        "      '-c:v',\n",
+        "      'libx264',\n",
+        "      '-vf',\n",
+        "      f'fps={fps}',\n",
+        "      '-pix_fmt',\n",
+        "      'yuv420p',\n",
+        "      '-crf',\n",
+        "      '17',\n",
+        "      '-preset',\n",
+        "      'veryslow',\n",
+        "      filepath\n",
+        "  ]\n",
+        "\n",
+        "  process = subprocess.Popen(cmd, cwd=f'{batchFolder}', stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n",
+        "  stdout, stderr = process.communicate()\n",
+        "  if process.returncode != 0:\n",
+        "      print(stderr)\n",
+        "      raise RuntimeError(stderr)\n",
+        "  else:\n",
+        "      print(\"The video is ready and saved to the images folder\")\n",
+        "\n",
+        "  # if view_video_in_cell:\n",
+        "  #     mp4 = open(filepath,'rb').read()\n",
+        "  #     data_url = \"data:video/mp4;base64,\" + b64encode(mp4).decode()\n",
+        "  #     display.HTML(f'<video width=400 controls><source src=\"{data_url}\" type=\"video/mp4\"></video>')\n",
+        "  "
+      ],
+      "outputs": [],
+      "execution_count": null
+    }
+  ],
+  "metadata": {
+    "anaconda-cloud": {},
+    "accelerator": "GPU",
+    "colab": {
+      "collapsed_sections": [
+        "CreditsChTop",
+        "TutorialTop",
+        "CheckGPU",
+        "InstallDeps",
+        "DefMidasFns",
+        "DefFns",
+        "DefSecModel",
+        "DefSuperRes",
+        "AnimSetTop",
+        "ExtraSetTop"
+      ],
+      "machine_shape": "hm",
+      "name": "Disco Diffusion v5.2 [w/ VR Mode]",
+      "private_outputs": true,
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "display_name": "Python 3",
+      "language": "python",
+      "name": "python3"
+    },
+    "language_info": {
+      "codemirror_mode": {
+        "name": "ipython",
+        "version": 3
+      },
+      "file_extension": ".py",
+      "mimetype": "text/x-python",
+      "name": "python",
+      "nbconvert_exporter": "python",
+      "pygments_lexer": "ipython3",
+      "version": "3.6.1"
+    }
+  },
+  "nbformat": 4,
+  "nbformat_minor": 4
+}

+ 86 - 3
README.md

@@ -1,6 +1,89 @@
+# Disco Diffusion
 
+<a href="https://colab.research.google.com/github/alembics/disco-diffusion/blob/main/Disco_Diffusion.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a>
 
-# OVH Notebook DiscoDifusion : 
+A frankensteinian amalgamation of notebooks, models and techniques for the generation of AI Art and Animations.
 
-## Infos : 
-## Création automatique d’un notebook équivalent
+[to be updated with further info soon]
+
+
+
+
+## Changelog
+#### v1 Oct 29th 2021 - Somnai  
+* Initial QoL improvements added, including user friendly UI, settings+prompt saving and improved google drive folder organization.
+
+#### v1.1 Nov 13th 2021 - Somnai
+* Now includes sizing options and intermediate saves, and fixes image prompts and perlin inits. The batch option is left unexposed since it doesn't work.
+
+#### v2 Update: Nov 22nd 2021 - Somnai
+* Initial addition of Katherine Crowson's Secondary Model Method (https://colab.research.google.com/drive/1mpkrhOjoyzPeSWy2r7T8EYRaU7amYOOi#scrollTo=X5gODNAMEUCR)
+* Fix for incorrectly named settings files
+
+#### v3 Update: Dec 24th 2021 - Somnai
+* Implemented Dango's advanced cutout method
+* Added SLIP models, thanks to NeuralDivergent
+* Fixed issue with NaNs resulting in black images, with massive help and testing from @Softology
+* Perlin now changes properly within batches (not sure where this perlin_regen code came from originally, but thank you)
+
+#### v4 Update: Jan 2022 - Somnai
+* Implemented Diffusion Zooming
+* Added Chigozie keyframing
+* Made a bunch of edits to processes
+
+#### v4.1 Update: Jan 14th 2022 - Somnai
+* Added video input mode
+* Added license that somehow went missing
+* Added improved prompt keyframing, fixed image_prompts and multiple prompts
+* Improved UI
+* Significant under the hood cleanup and improvement
+* Refined defaults for each mode
+* Removed SLIP models for the time being due to import conflicts
+* Added latent-diffusion SuperRes for sharpening
+* Added resume run mode
+
+#### v5 Update: Feb 20th 2022 - gandamu / Adam Letts
+* Added 3D animation mode. Uses weighted combination of AdaBins and MiDaS depth estimation models. Uses pytorch3d for 3D transforms on Colab and/or Linux.
+
+#### v5.1 Update: Mar 30th 2022 - zippy / Chris Allen and gandamu / Adam Letts
+
+* Integrated Turbo+Smooth features from Disco Diffusion Turbo -- just the implementation, without its defaults.
+* Implemented resume of turbo animations in such a way that it's now possible to resume from different batch folders and batch numbers.
+* 3D rotation parameter units are now degrees (rather than radians)
+* Corrected name collision in sampling_mode (now diffusion_sampling_mode for plms/ddim, and sampling_mode for 3D transform sampling)
+* Added video_init_seed_continuity option to make init video animations more continuous
+
+#### v5.1 Update: Apr 4th 2022 - MSFTserver aka HostsServer
+
+* Removed pytorch3d from needing to be compiled with a lite version specifically made for Disco Diffusion
+* Remove Super Resolution
+* Remove Slip Models
+* Update for crossplatform support
+
+#### v5.2 Update: Apr 10th 2022 - nin_artificial / Tom Mason
+
+* VR Mode
+
+## Notebook Provenance 
+
+Original notebook by Katherine Crowson (https://github.com/crowsonkb, https://twitter.com/RiversHaveWings). It uses either OpenAI's 256x256 unconditional ImageNet or Katherine Crowson's fine-tuned 512x512 diffusion model (https://github.com/openai/guided-diffusion), together with CLIP (https://github.com/openai/CLIP) to connect text prompts with images.
+
+Modified by Daniel Russell (https://github.com/russelldc, https://twitter.com/danielrussruss) to include (hopefully) optimal params for quick generations in 15-100 timesteps rather than 1000, as well as more robust augmentations.
+
+Further improvements from Dango233 and nsheppard helped improve the quality of diffusion in general, and especially so for shorter runs like this notebook aims to achieve.
+
+Vark added code to load in multiple Clip models at once, which all prompts are evaluated against, which may greatly improve accuracy.
+
+The latest zoom, pan, rotation, and keyframes features were taken from Chigozie Nri's VQGAN Zoom Notebook (https://github.com/chigozienri, https://twitter.com/chigozienri)
+
+Advanced DangoCutn Cutout method is also from Dango233.
+
+--
+
+Somnai (https://twitter.com/Somnai_dreams) added 2D Diffusion animation techniques, QoL improvements and various implementations of tech and techniques, mostly listed in the changelog below.
+
+3D animation implementation added by Adam Letts (https://twitter.com/gandamu_ml) in collaboration with Somnai.

+ 1446 - 0
archive/Disco_Diffusion_v3_1_[w_SLIP_&_DangoCutn].ipynb

@@ -0,0 +1,1446 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "name": "Disco Diffusion v3.1 [w/ SLIP & DangoCutn].ipynb",
+      "private_outputs": true,
+      "provenance": [],
+      "collapsed_sections": [
+        "XTu6AjLyFQUq",
+        "otQKpqkGrF2r",
+        "CR6lPDOW7lxf",
+        "u1VHzHvNx5fd"
+      ],
+      "machine_shape": "hm"
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    },
+    "accelerator": "GPU"
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "1YwMUyt9LHG1"
+      },
+      "source": [
+        "# Disco Diffusion v3 - Now with Dango's Cutn method and SLIP\n",
+        "\n",
+        "Original notebook by Katherine Crowson (https://github.com/crowsonkb, https://twitter.com/RiversHaveWings). It uses either OpenAI's 256x256 unconditional ImageNet or Katherine Crowson's fine-tuned 512x512 diffusion model (https://github.com/openai/guided-diffusion), together with CLIP (https://github.com/openai/CLIP) to connect text prompts with images.\n",
+        "\n",
+        "Modified by Daniel Russell (https://github.com/russelldc, https://twitter.com/danielrussruss) to include (hopefully) optimal params for quick generations in 15-100 timesteps rather than 1000, as well as more robust augmentations.\n",
+        "\n",
+        "Further improvements from Dango233 and nsheppard helped improve the quality of diffusion in general, and especially so for shorter runs like this notebook aims to achieve.\n",
+        "\n",
+        "Vark added code to load in multiple Clip models at once, which all prompts are evaluated against, which may greatly improve accuracy.\n",
+        "\n",
+        "--\n",
+        "\n",
+        "I, Somnai (https://twitter.com/Somnai_dreams), have made QoL improvements and assorted implementations, mostly listed in the changelog below.\n"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "#@title <- View Disco Changelog\n",
+        "\n",
+        "skip_for_run_all = True #@param {type: 'boolean'}\n",
+        "\n",
+        "if skip_for_run_all == False:\n",
+        "  print(\n",
+        "      '''\n",
+        "  v1 Update: Oct 29th 2021\n",
+        "\n",
+        "      QoL improvements added by Somnai (@somnai_dreams), including user friendly UI, settings+prompt saving and improved google drive folder organization.\n",
+        "\n",
+        "  v1.1 Update: Nov 13th 2021\n",
+        "\n",
+        "      Now includes sizing options, intermediate saves and fixed image prompts and perlin inits. unexposed batch option since it doesn't work\n",
+        "\n",
+        "  v2 Update: Nov 22nd 2021\n",
+        "\n",
+        "      Initial addition of Katherine Crowson's Secondary Model Method (https://colab.research.google.com/drive/1mpkrhOjoyzPeSWy2r7T8EYRaU7amYOOi#scrollTo=X5gODNAMEUCR)\n",
+        "\n",
+        "      Noticed settings were saving with the wrong name so corrected it. Let me know if you preferred the old scheme.\n",
+        "\n",
+        "  v3 Update: Dec 24th 2021\n",
+        "\n",
+        "      Added Dango's advanced cutout method\n",
+        "\n",
+        "      Added SLIP models, thanks to NeuralDivergent\n",
+        "\n",
+        "      Worked with @Softology to fixed issue with NaNs resulting in black images\n",
+        "\n",
+        "      Perlin now changes properly within batches (not sure where this perlin_regen code came from originally, but thank you)\n",
+        "  \n",
+        "  v3.1 Update: Dec 31th 2021\n",
+        "\n",
+        "      Name changed to Disco since it was getting confusing with QoLs and MPs.\n",
+        "\n",
+        "      Improved UI and settings (e.g. simplefied timesteps and respacing into a single file)\n",
+        "\n",
+        "      Optional check for corrupted model downloads\n",
+        "\n",
+        "      '''\n",
+        "  )"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "qFB3nwLSQI8X"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "XTu6AjLyFQUq"
+      },
+      "source": [
+        "#Tutorial"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "YR806W0wi3He"
+      },
+      "source": [
+        "**Diffusion settings**\n",
+        "---\n",
+        "\n",
+        "Setting | Description | Default\n",
+        "--- | --- | ---\n",
+        "**Your vision:**\n",
+        "`text_prompts` | A description of what you'd like the machine to generate. Think of it like writing the caption below your image on a website. | N/A\n",
+        "`image_prompts` | Think of these images more as a description of their contents. | N/A\n",
+        "**Image quality:**\n",
+        "`clip_guidance_scale`  | Controls how much the image should look like the prompt. | 1000\n",
+        "`tv_scale` |  Controls the smoothness of the final output. | 150\n",
+        "`range_scale` |  Controls how far out of range RGB values are allowed to be. | 150\n",
+        "`sat_scale` | Controls how much saturation is allowed. From nshepperd's JAX notebook. | 0\n",
+        "`cutn` | Controls how many crops to take from the image. | 16\n",
+        "`cutn_batches` | Accumulate CLIP gradient from multiple batches of cuts  | 2\n",
+        "**Init settings:**\n",
+        "`init_image` |   URL or local path | None\n",
+        "`init_scale` |  This enhances the effect of the init image, a good value is 1000 | 0\n",
+        "`skip_timesteps` |  Controls the starting point along the diffusion timesteps | 0\n",
+        "`perlin_init` |  Option to start with random perlin noise | False\n",
+        "`perlin_mode` |  ('gray', 'color') | 'mixed'\n",
+        "**Advanced:**\n",
+        "`skip_augs` |Controls whether to skip torchvision augmentations | False\n",
+        "`randomize_class` |Controls whether the imagenet class is randomly changed each iteration | True\n",
+        "`clip_denoised` |Determines whether CLIP discriminates a noisy or denoised image | False\n",
+        "`clamp_grad` |Experimental: Using adaptive clip grad in the cond_fn | True\n",
+        "`seed`  | Choose a random seed and print it at end of run for reproduction | random_seed\n",
+        "`fuzzy_prompt` | Controls whether to add multiple noisy prompts to the prompt losses | False\n",
+        "`rand_mag` |Controls the magnitude of the random noise | 0.1\n",
+        "`eta` | DDIM hyperparameter | 0.5\n",
+        "\n",
+        "..\n",
+        "\n",
+        "**Model settings**\n",
+        "---\n",
+        "\n",
+        "Setting | Description | Default\n",
+        "--- | --- | ---\n",
+        "**Diffusion:**\n",
+        "`timestep_respacing`  | Modify this value to decrease the number of timesteps. | ddim100\n",
+        "`diffusion_steps` || 1000\n",
+        "**Diffusion:**\n",
+        "`clip_models`  | Models of CLIP to load. Typically the more, the better but they all come at a hefty VRAM cost. | ViT-B/32, ViT-B/16, RN50x4"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "_9Eg9Kf5FlfK"
+      },
+      "source": [
+        "# 1. Pre Set Up"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "qZ3rNuAWAewx",
+        "cellView": "form"
+      },
+      "source": [
+        "#@title 1.1 Check GPU Status\n",
+        "!nvidia-smi -L"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "yZsjzwS0YGo6",
+        "cellView": "form"
+      },
+      "source": [
+        "from google.colab import drive\n",
+        "#@title 1.2 Prepare Folders\n",
+        "#@markdown If you connect your Google Drive, you can save the final image of each run on your drive.\n",
+        "\n",
+        "google_drive = True #@param {type:\"boolean\"}\n",
+        "\n",
+        "#@markdown Click here if you'd like to save the diffusion model checkpoint file to (and/or load from) your Google Drive:\n",
+        "yes_please = True #@param {type:\"boolean\"}\n",
+        "\n",
+        "#@markdown The folder to output and save models to: (default is `/AI/Disco_Diffusion`)\n",
+        "google_drive_folder = '/AI/Disco_Diffusion' #@param {type:\"string\"}\n",
+        "\n",
+        "if google_drive is True:\n",
+        "  drive.mount('/content/drive')\n",
+        "  root_path = f'/content/drive/MyDrive{google_drive_folder}'\n",
+        "else:\n",
+        "  root_path = '/content'\n",
+        "\n",
+        "import os\n",
+        "from os import path\n",
+        "#Simple create paths taken with modifications from Datamosh's Batch VQGAN+CLIP notebook\n",
+        "def createPath(filepath):\n",
+        "    if path.exists(filepath) == False:\n",
+        "      os.makedirs(filepath)\n",
+        "      print(f'Made {filepath}')\n",
+        "    else:\n",
+        "      print(f'filepath {filepath} exists.')\n",
+        "\n",
+        "initDirPath = f'{root_path}/init_images'\n",
+        "createPath(initDirPath)\n",
+        "outDirPath = f'{root_path}/images_out'\n",
+        "createPath(outDirPath)\n",
+        "\n",
+        "if google_drive and not yes_please or not google_drive:\n",
+        "    model_path = '/content/models'\n",
+        "    createPath(model_path)\n",
+        "if google_drive and yes_please:\n",
+        "    model_path = f'{root_path}/models'\n",
+        "    createPath(model_path)\n",
+        "# libraries = f'{root_path}/libraries'\n",
+        "# createPath(libraries)\n",
+        "\n"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "otQKpqkGrF2r"
+      },
+      "source": [
+        "#2. Install\n",
+        "\n",
+        "Run this once at the start of your session and after a restart."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "JmbrcrhpBPC6",
+        "cellView": "form"
+      },
+      "source": [
+        "#@title ### 2.1 Install and import dependencies\n",
+        "\n",
+        "if google_drive is not True:\n",
+        "  root_path = f'/content'\n",
+        "  model_path = '/content/' \n",
+        "\n",
+        "model_256_downloaded = False\n",
+        "model_512_downloaded = False\n",
+        "model_secondary_downloaded = False\n",
+        "\n",
+        "!git clone https://github.com/openai/CLIP\n",
+        "!git clone https://github.com/facebookresearch/SLIP.git\n",
+        "!git clone https://github.com/crowsonkb/guided-diffusion\n",
+        "!git clone https://github.com/assafshocher/ResizeRight.git\n",
+        "!pip install -e ./CLIP\n",
+        "!pip install -e ./guided-diffusion\n",
+        "!pip install lpips datetime timm\n",
+        "import sys\n",
+        "sys.path.append('./SLIP')\n",
+        "sys.path.append('./ResizeRight')\n",
+        "from dataclasses import dataclass\n",
+        "from functools import partial\n",
+        "import gc\n",
+        "import io\n",
+        "import math\n",
+        "import timm\n",
+        "from IPython import display\n",
+        "import lpips\n",
+        "from PIL import Image, ImageOps\n",
+        "import requests\n",
+        "from glob import glob\n",
+        "import json\n",
+        "import torch\n",
+        "from torch import nn\n",
+        "from torch.nn import functional as F\n",
+        "import torchvision.transforms as T\n",
+        "import torchvision.transforms.functional as TF\n",
+        "from tqdm.notebook import tqdm\n",
+        "sys.path.append('./CLIP')\n",
+        "sys.path.append('./guided-diffusion')\n",
+        "import clip\n",
+        "from resize_right import resize\n",
+        "from models import SLIP_VITB16, SLIP, SLIP_VITL16\n",
+        "from guided_diffusion.script_util import create_model_and_diffusion, model_and_diffusion_defaults\n",
+        "from datetime import datetime\n",
+        "import numpy as np\n",
+        "import matplotlib.pyplot as plt\n",
+        "import random\n",
+        "from ipywidgets import Output\n",
+        "import hashlib\n",
+        "\n",
+        "import torch\n",
+        "device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')\n",
+        "print('Using device:', device)\n",
+        "\n",
+        "if torch.cuda.get_device_capability(device) == (8,0): ## A100 fix thanks to Emad\n",
+        "  print('Disabling CUDNN for A100 gpu', file=sys.stderr)\n",
+        "  torch.backends.cudnn.enabled = False"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "FpZczxnOnPIU"
+      },
+      "source": [
+        "#@title 2.2 Define necessary functions\n",
+        "\n",
+        "# https://gist.github.com/adefossez/0646dbe9ed4005480a2407c62aac8869\n",
+        "\n",
+        "def interp(t):\n",
+        "    return 3 * t**2 - 2 * t ** 3\n",
+        "\n",
+        "def perlin(width, height, scale=10, device=None):\n",
+        "    gx, gy = torch.randn(2, width + 1, height + 1, 1, 1, device=device)\n",
+        "    xs = torch.linspace(0, 1, scale + 1)[:-1, None].to(device)\n",
+        "    ys = torch.linspace(0, 1, scale + 1)[None, :-1].to(device)\n",
+        "    wx = 1 - interp(xs)\n",
+        "    wy = 1 - interp(ys)\n",
+        "    dots = 0\n",
+        "    dots += wx * wy * (gx[:-1, :-1] * xs + gy[:-1, :-1] * ys)\n",
+        "    dots += (1 - wx) * wy * (-gx[1:, :-1] * (1 - xs) + gy[1:, :-1] * ys)\n",
+        "    dots += wx * (1 - wy) * (gx[:-1, 1:] * xs - gy[:-1, 1:] * (1 - ys))\n",
+        "    dots += (1 - wx) * (1 - wy) * (-gx[1:, 1:] * (1 - xs) - gy[1:, 1:] * (1 - ys))\n",
+        "    return dots.permute(0, 2, 1, 3).contiguous().view(width * scale, height * scale)\n",
+        "\n",
+        "def perlin_ms(octaves, width, height, grayscale, device=device):\n",
+        "    out_array = [0.5] if grayscale else [0.5, 0.5, 0.5]\n",
+        "    # out_array = [0.0] if grayscale else [0.0, 0.0, 0.0]\n",
+        "    for i in range(1 if grayscale else 3):\n",
+        "        scale = 2 ** len(octaves)\n",
+        "        oct_width = width\n",
+        "        oct_height = height\n",
+        "        for oct in octaves:\n",
+        "            p = perlin(oct_width, oct_height, scale, device)\n",
+        "            out_array[i] += p * oct\n",
+        "            scale //= 2\n",
+        "            oct_width *= 2\n",
+        "            oct_height *= 2\n",
+        "    return torch.cat(out_array)\n",
+        "\n",
+        "def create_perlin_noise(octaves=[1, 1, 1, 1], width=2, height=2, grayscale=True):\n",
+        "    out = perlin_ms(octaves, width, height, grayscale)\n",
+        "    if grayscale:\n",
+        "        out = TF.resize(size=(side_y, side_x), img=out.unsqueeze(0))\n",
+        "        out = TF.to_pil_image(out.clamp(0, 1)).convert('RGB')\n",
+        "    else:\n",
+        "        out = out.reshape(-1, 3, out.shape[0]//3, out.shape[1])\n",
+        "        out = TF.resize(size=(side_y, side_x), img=out)\n",
+        "        out = TF.to_pil_image(out.clamp(0, 1).squeeze())\n",
+        "\n",
+        "    out = ImageOps.autocontrast(out)\n",
+        "    return out\n",
+        "\n",
+        "def regen_perlin():\n",
+        "    if perlin_mode == 'color':\n",
+        "        init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, False)\n",
+        "        init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, False)\n",
+        "    elif perlin_mode == 'gray':\n",
+        "        init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, True)\n",
+        "        init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, True)\n",
+        "    else:\n",
+        "        init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, False)\n",
+        "        init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, True)\n",
+        "\n",
+        "    init = TF.to_tensor(init).add(TF.to_tensor(init2)).div(2).to(device).unsqueeze(0).mul(2).sub(1)\n",
+        "    del init2\n",
+        "    return init.expand(batch_size, -1, -1, -1)\n",
+        "\n",
+        "def fetch(url_or_path):\n",
+        "    if str(url_or_path).startswith('http://') or str(url_or_path).startswith('https://'):\n",
+        "        r = requests.get(url_or_path)\n",
+        "        r.raise_for_status()\n",
+        "        fd = io.BytesIO()\n",
+        "        fd.write(r.content)\n",
+        "        fd.seek(0)\n",
+        "        return fd\n",
+        "    return open(url_or_path, 'rb')\n",
+        "\n",
+        "\n",
+        "def parse_prompt(prompt):\n",
+        "    if prompt.startswith('http://') or prompt.startswith('https://'):\n",
+        "        vals = prompt.rsplit(':', 2)\n",
+        "        vals = [vals[0] + ':' + vals[1], *vals[2:]]\n",
+        "    else:\n",
+        "        vals = prompt.rsplit(':', 1)\n",
+        "    vals = vals + ['', '1'][len(vals):]\n",
+        "    return vals[0], float(vals[1])\n",
+        "\n",
+        "def sinc(x):\n",
+        "    return torch.where(x != 0, torch.sin(math.pi * x) / (math.pi * x), x.new_ones([]))\n",
+        "\n",
+        "def lanczos(x, a):\n",
+        "    cond = torch.logical_and(-a < x, x < a)\n",
+        "    out = torch.where(cond, sinc(x) * sinc(x/a), x.new_zeros([]))\n",
+        "    return out / out.sum()\n",
+        "\n",
+        "def ramp(ratio, width):\n",
+        "    n = math.ceil(width / ratio + 1)\n",
+        "    out = torch.empty([n])\n",
+        "    cur = 0\n",
+        "    for i in range(out.shape[0]):\n",
+        "        out[i] = cur\n",
+        "        cur += ratio\n",
+        "    return torch.cat([-out[1:].flip([0]), out])[1:-1]\n",
+        "\n",
+        "def resample(input, size, align_corners=True):\n",
+        "    n, c, h, w = input.shape\n",
+        "    dh, dw = size\n",
+        "\n",
+        "    input = input.reshape([n * c, 1, h, w])\n",
+        "\n",
+        "    if dh < h:\n",
+        "        kernel_h = lanczos(ramp(dh / h, 2), 2).to(input.device, input.dtype)\n",
+        "        pad_h = (kernel_h.shape[0] - 1) // 2\n",
+        "        input = F.pad(input, (0, 0, pad_h, pad_h), 'reflect')\n",
+        "        input = F.conv2d(input, kernel_h[None, None, :, None])\n",
+        "\n",
+        "    if dw < w:\n",
+        "        kernel_w = lanczos(ramp(dw / w, 2), 2).to(input.device, input.dtype)\n",
+        "        pad_w = (kernel_w.shape[0] - 1) // 2\n",
+        "        input = F.pad(input, (pad_w, pad_w, 0, 0), 'reflect')\n",
+        "        input = F.conv2d(input, kernel_w[None, None, None, :])\n",
+        "\n",
+        "    input = input.reshape([n, c, h, w])\n",
+        "    return F.interpolate(input, size, mode='bicubic', align_corners=align_corners)\n",
+        "\n",
+        "class MakeCutouts(nn.Module):\n",
+        "    def __init__(self, cut_size, cutn, skip_augs=False):\n",
+        "        super().__init__()\n",
+        "        self.cut_size = cut_size\n",
+        "        self.cutn = cutn\n",
+        "        self.skip_augs = skip_augs\n",
+        "        self.augs = T.Compose([\n",
+        "            T.RandomHorizontalFlip(p=0.5),\n",
+        "            T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "            T.RandomAffine(degrees=15, translate=(0.1, 0.1)),\n",
+        "            T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "            T.RandomPerspective(distortion_scale=0.4, p=0.7),\n",
+        "            T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "            T.RandomGrayscale(p=0.15),\n",
+        "            T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "            # T.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1),\n",
+        "        ])\n",
+        "\n",
+        "    def forward(self, input):\n",
+        "        input = T.Pad(input.shape[2]//4, fill=0)(input)\n",
+        "        sideY, sideX = input.shape[2:4]\n",
+        "        max_size = min(sideX, sideY)\n",
+        "\n",
+        "        cutouts = []\n",
+        "        for ch in range(cutn):\n",
+        "            if ch > cutn - cutn//4:\n",
+        "                cutout = input.clone()\n",
+        "            else:\n",
+        "                size = int(max_size * torch.zeros(1,).normal_(mean=.8, std=.3).clip(float(self.cut_size/max_size), 1.))\n",
+        "                offsetx = torch.randint(0, abs(sideX - size + 1), ())\n",
+        "                offsety = torch.randint(0, abs(sideY - size + 1), ())\n",
+        "                cutout = input[:, :, offsety:offsety + size, offsetx:offsetx + size]\n",
+        "\n",
+        "            if not self.skip_augs:\n",
+        "                cutout = self.augs(cutout)\n",
+        "            cutouts.append(resample(cutout, (self.cut_size, self.cut_size)))\n",
+        "            del cutout\n",
+        "\n",
+        "        cutouts = torch.cat(cutouts, dim=0)\n",
+        "        return cutouts\n",
+        "\n",
+        "cutout_debug = False\n",
+        "padargs = {}\n",
+        "\n",
+        "class MakeCutoutsDango(nn.Module):\n",
+        "    def __init__(self, cut_size,\n",
+        "                 Overview=4, \n",
+        "                 InnerCrop = 0, IC_Size_Pow=0.5, IC_Grey_P = 0.2\n",
+        "                 ):\n",
+        "        super().__init__()\n",
+        "        self.cut_size = cut_size\n",
+        "        self.Overview = Overview\n",
+        "        self.InnerCrop = InnerCrop\n",
+        "        self.IC_Size_Pow = IC_Size_Pow\n",
+        "        self.IC_Grey_P = IC_Grey_P\n",
+        "        self.augs = T.Compose([\n",
+        "            T.RandomHorizontalFlip(p=0.5),\n",
+        "            T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "            T.RandomAffine(degrees=10, translate=(0.05, 0.05),  interpolation = T.InterpolationMode.BILINEAR),\n",
+        "            T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "            T.RandomGrayscale(p=0.1),\n",
+        "            T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "            T.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1),\n",
+        "        ])\n",
+        "\n",
+        "    def forward(self, input):\n",
+        "        cutouts = []\n",
+        "        gray = T.Grayscale(3)\n",
+        "        sideY, sideX = input.shape[2:4]\n",
+        "        max_size = min(sideX, sideY)\n",
+        "        min_size = min(sideX, sideY, self.cut_size)\n",
+        "        l_size = max(sideX, sideY)\n",
+        "        output_shape = [1,3,self.cut_size,self.cut_size] \n",
+        "        output_shape_2 = [1,3,self.cut_size+2,self.cut_size+2]\n",
+        "        pad_input = F.pad(input,((sideY-max_size)//2,(sideY-max_size)//2,(sideX-max_size)//2,(sideX-max_size)//2), **padargs)\n",
+        "        cutout = resize(pad_input, out_shape=output_shape)\n",
+        "\n",
+        "        if self.Overview>0:\n",
+        "            if self.Overview<=4:\n",
+        "                if self.Overview>=1:\n",
+        "                    cutouts.append(cutout)\n",
+        "                if self.Overview>=2:\n",
+        "                    cutouts.append(gray(cutout))\n",
+        "                if self.Overview>=3:\n",
+        "                    cutouts.append(TF.hflip(cutout))\n",
+        "                if self.Overview==4:\n",
+        "                    cutouts.append(gray(TF.hflip(cutout)))\n",
+        "            else:\n",
+        "                cutout = resize(pad_input, out_shape=output_shape)\n",
+        "                for _ in range(self.Overview):\n",
+        "                    cutouts.append(cutout)\n",
+        "\n",
+        "            if cutout_debug:\n",
+        "                TF.to_pil_image(cutouts[0].add(1).div(2).clamp(0, 1).squeeze(0)).save(\"/content/cutout_overview.jpg\",quality=99)\n",
+        "                \n",
+        "        if self.InnerCrop >0:\n",
+        "            for i in range(self.InnerCrop):\n",
+        "                size = int(torch.rand([])**self.IC_Size_Pow * (max_size - min_size) + min_size)\n",
+        "                offsetx = torch.randint(0, sideX - size + 1, ())\n",
+        "                offsety = torch.randint(0, sideY - size + 1, ())\n",
+        "                cutout = input[:, :, offsety:offsety + size, offsetx:offsetx + size]\n",
+        "                if i <= int(self.IC_Grey_P * self.InnerCrop):\n",
+        "                    cutout = gray(cutout)\n",
+        "                cutout = resize(cutout, out_shape=output_shape)\n",
+        "                cutouts.append(cutout)\n",
+        "            if cutout_debug:\n",
+        "                TF.to_pil_image(cutouts[-1].add(1).div(2).clamp(0, 1).squeeze(0)).save(\"/content/cutout_InnerCrop.jpg\",quality=99)\n",
+        "        cutouts = torch.cat(cutouts)\n",
+        "        if not skip_augs: cutouts = self.augs(cutouts)\n",
+        "        return cutouts\n",
+        "\n",
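+        "# Squared angular distance between L2-normalised embeddings: for unit vectors ||x - y|| = 2*sin(theta/2),\n",
+        "# so this evaluates to theta**2 / 2, i.e. a squared great-circle distance on the unit sphere.\n",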
+        "def spherical_dist_loss(x, y):\n",
+        "    x = F.normalize(x, dim=-1)\n",
+        "    y = F.normalize(y, dim=-1)\n",
+        "    return (x - y).norm(dim=-1).div(2).arcsin().pow(2).mul(2)     \n",
+        "\n",
+        "def tv_loss(input):\n",
+        "    \"\"\"L2 total variation loss, as in Mahendran et al.\"\"\"\n",
+        "    input = F.pad(input, (0, 1, 0, 1), 'replicate')\n",
+        "    x_diff = input[..., :-1, 1:] - input[..., :-1, :-1]\n",
+        "    y_diff = input[..., 1:, :-1] - input[..., :-1, :-1]\n",
+        "    return (x_diff**2 + y_diff**2).mean([1, 2, 3])\n",
+        "\n",
+        "\n",
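+        "# Penalises pixel values that stray outside the model's expected [-1, 1] range.\n",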
+        "def range_loss(input):\n",
+        "    return (input - input.clamp(-1, 1)).pow(2).mean([1, 2, 3])\n",
+        "\n",
+        "\n",
+        "def do_run():\n",
+        "    loss_values = []\n",
+        " \n",
+        "    if seed is not None:\n",
+        "        np.random.seed(seed)\n",
+        "        random.seed(seed)\n",
+        "        torch.manual_seed(seed)\n",
+        "        torch.cuda.manual_seed_all(seed)\n",
+        "        torch.backends.cudnn.deterministic = True\n",
+        " \n",
+        "    target_embeds, weights = [], []\n",
+        "    \n",
+        "    \n",
+        "    model_stats = []\n",
+        "    for clip_model in clip_models:\n",
+        "          \n",
+        "          model_stat = {\"clip_model\":None,\"target_embeds\":[],\"make_cutouts\":None,\"weights\":[]}\n",
+        "          model_stat[\"clip_model\"] = clip_model\n",
+        "          # model_stat[\"make_cutouts\"] = MakeCutouts(clip_model.visual.input_resolution, cutn, skip_augs=skip_augs) \n",
+        "\n",
+        "          for prompt in text_prompts:\n",
+        "              txt, weight = parse_prompt(prompt)\n",
+        "              txt = clip_model.encode_text(clip.tokenize(txt).to(device)).float()  # encode the parsed text, not the raw 'text:weight' string\n",
+        "\n",
+        "              if fuzzy_prompt:\n",
+        "                  for i in range(25):\n",
+        "                      model_stat[\"target_embeds\"].append((txt + torch.randn(txt.shape).cuda() * rand_mag).clamp(0,1))\n",
+        "                      model_stat[\"weights\"].append(weight)\n",
+        "              else:\n",
+        "                  model_stat[\"target_embeds\"].append(txt)\n",
+        "                  model_stat[\"weights\"].append(weight)\n",
+        "      \n",
+        "          # for prompt in image_prompts:\n",
+        "          #     path, weight = parse_prompt(prompt)\n",
+        "          #     img = Image.open(fetch(path)).convert('RGB')\n",
+        "          #     img = TF.resize(img, min(side_x, side_y, *img.size), T.InterpolationMode.LANCZOS)\n",
+        "          #     batch = model_stat[\"make_cutouts\"](TF.to_tensor(img).to(device).unsqueeze(0).mul(2).sub(1))\n",
+        "          #     embed = clip_model.encode_image(normalize(batch)).float()\n",
+        "          #     if fuzzy_prompt:\n",
+        "          #         for i in range(25):\n",
+        "          #             model_stat[\"target_embeds\"].append((embed + torch.randn(embed.shape).cuda() * rand_mag).clamp(0,1))\n",
+        "          #             weights.extend([weight / cutn] * cutn)\n",
+        "          #     else:\n",
+        "          #         model_stat[\"target_embeds\"].append(embed)\n",
+        "          #         model_stat[\"weights\"].extend([weight / cutn] * cutn)\n",
+        "      \n",
+        "          model_stat[\"target_embeds\"] = torch.cat(model_stat[\"target_embeds\"])\n",
+        "          model_stat[\"weights\"] = torch.tensor(model_stat[\"weights\"], device=device)\n",
+        "          if model_stat[\"weights\"].sum().abs() < 1e-3:\n",
+        "              raise RuntimeError('The weights must not sum to 0.')\n",
+        "          model_stat[\"weights\"] /= model_stat[\"weights\"].sum().abs()\n",
+        "          model_stats.append(model_stat)\n",
+        " \n",
+        "    init = None\n",
+        "    if init_image is not None:\n",
+        "        init = Image.open(fetch(init_image)).convert('RGB')\n",
+        "        init = init.resize((side_x, side_y), Image.LANCZOS)\n",
+        "        init = TF.to_tensor(init).to(device).unsqueeze(0).mul(2).sub(1)\n",
+        "    \n",
+        "    if perlin_init:\n",
+        "        if perlin_mode == 'color':\n",
+        "            init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, False)\n",
+        "            init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, False)\n",
+        "        elif perlin_mode == 'gray':\n",
+        "           init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, True)\n",
+        "           init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, True)\n",
+        "        else:\n",
+        "           init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, False)\n",
+        "           init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, True)\n",
+        "        # init = TF.to_tensor(init).add(TF.to_tensor(init2)).div(2).to(device)\n",
+        "        init = TF.to_tensor(init).add(TF.to_tensor(init2)).div(2).to(device).unsqueeze(0).mul(2).sub(1)\n",
+        "        del init2\n",
+        " \n",
+        "    cur_t = None\n",
+        " \n",
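+        "    # CLIP-guidance gradient: estimate the denoised image at the current timestep, score random\n",
+        "    # cutouts of it against the prompt embeddings with spherical_dist_loss, add TV / range /\n",
+        "    # saturation (and optional LPIPS-to-init) penalties, and return the negative gradient w.r.t. x.\n",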
+        "    def cond_fn(x, t, y=None):\n",
+        "        with torch.enable_grad():\n",
+        "            x_is_NaN = False\n",
+        "            x = x.detach().requires_grad_()\n",
+        "            n = x.shape[0]\n",
+        "            if use_secondary_model is True:\n",
+        "              alpha = torch.tensor(diffusion.sqrt_alphas_cumprod[cur_t], device=device, dtype=torch.float32)\n",
+        "              sigma = torch.tensor(diffusion.sqrt_one_minus_alphas_cumprod[cur_t], device=device, dtype=torch.float32)\n",
+        "              cosine_t = alpha_sigma_to_t(alpha, sigma)\n",
+        "              out = secondary_model(x, cosine_t[None].repeat([n])).pred\n",
+        "              fac = diffusion.sqrt_one_minus_alphas_cumprod[cur_t]\n",
+        "              x_in = out * fac + x * (1 - fac)\n",
+        "              x_in_grad = torch.zeros_like(x_in)\n",
+        "            else:\n",
+        "              my_t = torch.ones([n], device=device, dtype=torch.long) * cur_t\n",
+        "              out = diffusion.p_mean_variance(model, x, my_t, clip_denoised=False, model_kwargs={'y': y})\n",
+        "              fac = diffusion.sqrt_one_minus_alphas_cumprod[cur_t]\n",
+        "              x_in = out['pred_xstart'] * fac + x * (1 - fac)\n",
+        "              x_in_grad = torch.zeros_like(x_in)\n",
+        "            for model_stat in model_stats:\n",
+        "              for i in range(cutn_batches):\n",
+        "                  t_int = int(t.item())+1 #errors on last step without +1, need to find source\n",
+        "                  #when using SLIP Base model the dimensions need to be hard coded to avoid AttributeError: 'VisionTransformer' object has no attribute 'input_resolution'\n",
+        "                  try:\n",
+        "                      input_resolution=model_stat[\"clip_model\"].visual.input_resolution\n",
+        "                  except:\n",
+        "                      input_resolution=224\n",
+        "\n",
+        "                  cuts = MakeCutoutsDango(input_resolution,\n",
+        "                          Overview= cut_overview[1000-t_int], \n",
+        "                          InnerCrop = cut_innercut[1000-t_int], IC_Size_Pow=cut_ic_pow, IC_Grey_P = cut_icgray_p[1000-t_int]\n",
+        "                          )\n",
+        "                  clip_in = normalize(cuts(x_in.add(1).div(2)))\n",
+        "                  image_embeds = model_stat[\"clip_model\"].encode_image(clip_in).float()\n",
+        "                  dists = spherical_dist_loss(image_embeds.unsqueeze(1), model_stat[\"target_embeds\"].unsqueeze(0))\n",
+        "                  dists = dists.view([cut_overview[1000-t_int]+cut_innercut[1000-t_int], n, -1])\n",
+        "                  losses = dists.mul(model_stat[\"weights\"]).sum(2).mean(0)\n",
+        "                  loss_values.append(losses.sum().item()) # log loss, probably shouldn't do per cutn_batch\n",
+        "                  x_in_grad += torch.autograd.grad(losses.sum() * clip_guidance_scale, x_in)[0] / cutn_batches\n",
+        "            tv_losses = tv_loss(x_in)\n",
+        "            if use_secondary_model is True:\n",
+        "              range_losses = range_loss(out)\n",
+        "            else:\n",
+        "              range_losses = range_loss(out['pred_xstart'])\n",
+        "            sat_losses = torch.abs(x_in - x_in.clamp(min=-1,max=1)).mean()\n",
+        "            loss = tv_losses.sum() * tv_scale + range_losses.sum() * range_scale + sat_losses.sum() * sat_scale\n",
+        "            if init is not None and init_scale:\n",
+        "                init_losses = lpips_model(x_in, init)\n",
+        "                loss = loss + init_losses.sum() * init_scale\n",
+        "            x_in_grad += torch.autograd.grad(loss, x_in)[0]\n",
+        "            if torch.isnan(x_in_grad).any()==False:\n",
+        "                grad = -torch.autograd.grad(x_in, x, x_in_grad)[0]\n",
+        "            else:\n",
+        "              # print(\"NaN'd\")\n",
+        "              x_is_NaN = True\n",
+        "              grad = torch.zeros_like(x)\n",
+        "        if clamp_grad and x_is_NaN == False:\n",
+        "            magnitude = grad.square().mean().sqrt()\n",
+        "            return grad * magnitude.clamp(min=-clamp_max, max=clamp_max) / magnitude  #min=-0.02,\n",
+        "        return grad\n",
+        " \n",
+        "    if model_config['timestep_respacing'].startswith('ddim'):\n",
+        "        sample_fn = diffusion.ddim_sample_loop_progressive\n",
+        "    else:\n",
+        "        sample_fn = diffusion.p_sample_loop_progressive\n",
+        "  \n",
+        "    # batches_display = Output()\n",
+        "    # display.display(batches_display)\n",
+        "    # run_display = Output()\n",
+        "    # display.display(run_display)\n",
+        "    image_display = Output()\n",
+        "    \n",
+        "    # with batches_display:\n",
+        "    for i in range(n_batches):\n",
+        "        display.clear_output(wait=True)\n",
+        "        batchBar = tqdm(range(n_batches), desc =\"Batches\")\n",
+        "        batchBar.n = i\n",
+        "        batchBar.refresh()\n",
+        "        print('')\n",
+        "        display.display(image_display)\n",
+        "        gc.collect()\n",
+        "        torch.cuda.empty_cache()\n",
+        "        # display.clear_output(wait=True)\n",
+        "        cur_t = diffusion.num_timesteps - skip_timesteps - 1\n",
+        "        total_steps = cur_t\n",
+        "\n",
+        "        if perlin_init:\n",
+        "            init = regen_perlin()\n",
+        "\n",
+        "        if model_config['timestep_respacing'].startswith('ddim'):\n",
+        "            samples = sample_fn(\n",
+        "                model,\n",
+        "                (batch_size, 3, side_y, side_x),\n",
+        "                clip_denoised=clip_denoised,\n",
+        "                model_kwargs={},\n",
+        "                cond_fn=cond_fn,\n",
+        "                progress=True,\n",
+        "                skip_timesteps=skip_timesteps,\n",
+        "                init_image=init,\n",
+        "                randomize_class=randomize_class,\n",
+        "                eta=eta,\n",
+        "            )\n",
+        "        else:\n",
+        "            samples = sample_fn(\n",
+        "                model,\n",
+        "                (batch_size, 3, side_y, side_x),\n",
+        "                clip_denoised=clip_denoised,\n",
+        "                model_kwargs={},\n",
+        "                cond_fn=cond_fn,\n",
+        "                progress=True,\n",
+        "                skip_timesteps=skip_timesteps,\n",
+        "                init_image=init,\n",
+        "                randomize_class=randomize_class,\n",
+        "            )\n",
+        "        \n",
+        "        \n",
+        "        # with run_display:\n",
+        "        # display.clear_output(wait=True)\n",
+        "        for j, sample in enumerate(samples):    \n",
+        "          cur_t -= 1\n",
+        "          intermediateStep = False\n",
+        "          if steps_per_checkpoint is not None:\n",
+        "              if j % steps_per_checkpoint == 0 and j > 0:\n",
+        "                intermediateStep = True\n",
+        "          elif j in intermediate_saves:\n",
+        "            intermediateStep = True\n",
+        "          \n",
+        "          with image_display:\n",
+        "            if j % display_rate == 0 or cur_t == -1 or intermediateStep == True:\n",
+        "                for k, image in enumerate(sample['pred_xstart']):\n",
+        "                    # tqdm.write(f'Batch {i}, step {j}, output {k}:')\n",
+        "                    current_time = datetime.now().strftime('%y%m%d-%H%M%S_%f')\n",
+        "                    percent = math.ceil(j/total_steps*100)\n",
+        "                    if n_batches > 0:\n",
+        "                      #if intermediates are saved to the subfolder, don't append a step or percentage to the name\n",
+        "                      if cur_t == -1 and intermediates_in_subfolder is True:\n",
+        "                        filename = f'{batch_name}({batchNum})_{i:04}.png'\n",
+        "                      else:\n",
+        "                        #If we're working with percentages, append it\n",
+        "                        if steps_per_checkpoint is not None:\n",
+        "                          filename = f'{batch_name}({batchNum})_{i:04}-{percent:02}%.png'\n",
+        "                        # Or else, if we're working with specific steps, append those\n",
+        "                        else:\n",
+        "                          filename = f'{batch_name}({batchNum})_{i:04}-{j:03}.png'\n",
+        "                    image = TF.to_pil_image(image.add(1).div(2).clamp(0, 1))\n",
+        "                    image.save('progress.png')\n",
+        "                    if j % display_rate == 0 or cur_t == -1:\n",
+        "                      display.clear_output(wait=True)\n",
+        "                      display.display(display.Image('progress.png'))\n",
+        "                    if steps_per_checkpoint is not None:\n",
+        "                      if j % steps_per_checkpoint == 0 and j > 0:\n",
+        "                        if intermediates_in_subfolder is True:\n",
+        "                          image.save(f'{partialFolder}/{filename}')\n",
+        "                        else:\n",
+        "                          image.save(f'{batchFolder}/{filename}')\n",
+        "                    else:\n",
+        "                      if j in intermediate_saves:\n",
+        "                        if intermediates_in_subfolder is True:\n",
+        "                          image.save(f'{partialFolder}/{filename}')\n",
+        "                        else:\n",
+        "                          image.save(f'{batchFolder}/{filename}')\n",
+        "                    if cur_t == -1:\n",
+        "                      if i == 0:\n",
+        "                        save_settings()\n",
+        "                      image.save(f'{batchFolder}/{filename}')\n",
+        "                      display.clear_output()\n",
+        "        \n",
+        "        plt.plot(np.array(loss_values), 'r')\n",
+        "\n",
+        "def save_settings():\n",
+        "  setting_list = {\n",
+        "    'text_prompts': text_prompts,\n",
+        "    'image_prompts': image_prompts,\n",
+        "    'clip_guidance_scale': clip_guidance_scale,\n",
+        "    'tv_scale': tv_scale,\n",
+        "    'range_scale': range_scale,\n",
+        "    'sat_scale': sat_scale,\n",
+        "    # 'cutn': cutn,\n",
+        "    'cutn_batches': cutn_batches,\n",
+        "    'init_image': init_image,\n",
+        "    'init_scale': init_scale,\n",
+        "    'skip_timesteps': skip_timesteps,\n",
+        "    'perlin_init': perlin_init,\n",
+        "    'perlin_mode': perlin_mode,\n",
+        "    'skip_augs': skip_augs,\n",
+        "    'randomize_class': randomize_class,\n",
+        "    'clip_denoised': clip_denoised,\n",
+        "    'clamp_grad': clamp_grad,\n",
+        "    'clamp_max': clamp_max,\n",
+        "    'seed': seed,\n",
+        "    'fuzzy_prompt': fuzzy_prompt,\n",
+        "    'rand_mag': rand_mag,\n",
+        "    'eta': eta,\n",
+        "    'width': width_height[0],\n",
+        "    'height': width_height[1],\n",
+        "    'diffusion_model': diffusion_model,\n",
+        "    'use_secondary_model': use_secondary_model,\n",
+        "    'steps': steps,\n",
+        "    # 'diffusion_steps': diffusion_steps,\n",
+        "    'ViTB32': ViTB32,\n",
+        "    'ViTB16': ViTB16,\n",
+        "    'RN101': RN101,\n",
+        "    'RN50': RN50,\n",
+        "    'RN50x4': RN50x4,\n",
+        "    'RN50x16': RN50x16,\n",
+        "  }\n",
+        "  # print('Settings:', setting_list)\n",
+        "  with open(f\"{batchFolder}/{batch_name}({batchNum})_settings.txt\", \"w+\") as f:   #save settings\n",
+        "    json.dump(setting_list, f, ensure_ascii=False, indent=4)\n",
+        "  "
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "cellView": "form",
+        "id": "TI4oAu0N4ksZ"
+      },
+      "source": [
+        "#@title 2.3 Define the secondary diffusion model\n",
+        "\n",
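+        "# append_dims adds trailing singleton dimensions so a per-sample tensor can broadcast against an\n",
+        "# image batch; expand_to_planes then repeats it spatially into per-pixel conditioning planes.\n",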
+        "def append_dims(x, n):\n",
+        "    return x[(Ellipsis, *(None,) * (n - x.ndim))]\n",
+        "\n",
+        "\n",
+        "def expand_to_planes(x, shape):\n",
+        "    return append_dims(x, len(shape)).repeat([1, 1, *shape[2:]])\n",
+        "\n",
+        "\n",
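+        "# Continuous-time parameterisation used by the secondary model: t in [0, 1] with\n",
+        "# alpha = cos(t*pi/2) and sigma = sin(t*pi/2), so alpha**2 + sigma**2 == 1 and atan2 recovers t.\n",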
+        "def alpha_sigma_to_t(alpha, sigma):\n",
+        "    return torch.atan2(sigma, alpha) * 2 / math.pi\n",
+        "\n",
+        "\n",
+        "def t_to_alpha_sigma(t):\n",
+        "    return torch.cos(t * math.pi / 2), torch.sin(t * math.pi / 2)\n",
+        "\n",
+        "\n",
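+        "# The secondary model predicts v; the denoised estimate and the noise are recovered as\n",
+        "# pred = x*alpha - v*sigma and eps = x*sigma + v*alpha (see the forward passes below).\n",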
+        "@dataclass\n",
+        "class DiffusionOutput:\n",
+        "    v: torch.Tensor\n",
+        "    pred: torch.Tensor\n",
+        "    eps: torch.Tensor\n",
+        "\n",
+        "\n",
+        "class ConvBlock(nn.Sequential):\n",
+        "    def __init__(self, c_in, c_out):\n",
+        "        super().__init__(\n",
+        "            nn.Conv2d(c_in, c_out, 3, padding=1),\n",
+        "            nn.ReLU(inplace=True),\n",
+        "        )\n",
+        "\n",
+        "\n",
+        "class SkipBlock(nn.Module):\n",
+        "    def __init__(self, main, skip=None):\n",
+        "        super().__init__()\n",
+        "        self.main = nn.Sequential(*main)\n",
+        "        self.skip = skip if skip else nn.Identity()\n",
+        "\n",
+        "    def forward(self, input):\n",
+        "        return torch.cat([self.main(input), self.skip(input)], dim=1)\n",
+        "\n",
+        "\n",
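+        "# Random Fourier features: project the scalar timestep through a fixed Gaussian matrix and\n",
+        "# return concatenated cos/sin components as a smooth embedding of t.\n",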
+        "class FourierFeatures(nn.Module):\n",
+        "    def __init__(self, in_features, out_features, std=1.):\n",
+        "        super().__init__()\n",
+        "        assert out_features % 2 == 0\n",
+        "        self.weight = nn.Parameter(torch.randn([out_features // 2, in_features]) * std)\n",
+        "\n",
+        "    def forward(self, input):\n",
+        "        f = 2 * math.pi * input @ self.weight.T\n",
+        "        return torch.cat([f.cos(), f.sin()], dim=-1)\n",
+        "\n",
+        "\n",
+        "class SecondaryDiffusionImageNet(nn.Module):\n",
+        "    def __init__(self):\n",
+        "        super().__init__()\n",
+        "        c = 64  # The base channel count\n",
+        "\n",
+        "        self.timestep_embed = FourierFeatures(1, 16)\n",
+        "\n",
+        "        self.net = nn.Sequential(\n",
+        "            ConvBlock(3 + 16, c),\n",
+        "            ConvBlock(c, c),\n",
+        "            SkipBlock([\n",
+        "                nn.AvgPool2d(2),\n",
+        "                ConvBlock(c, c * 2),\n",
+        "                ConvBlock(c * 2, c * 2),\n",
+        "                SkipBlock([\n",
+        "                    nn.AvgPool2d(2),\n",
+        "                    ConvBlock(c * 2, c * 4),\n",
+        "                    ConvBlock(c * 4, c * 4),\n",
+        "                    SkipBlock([\n",
+        "                        nn.AvgPool2d(2),\n",
+        "                        ConvBlock(c * 4, c * 8),\n",
+        "                        ConvBlock(c * 8, c * 4),\n",
+        "                        nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),\n",
+        "                    ]),\n",
+        "                    ConvBlock(c * 8, c * 4),\n",
+        "                    ConvBlock(c * 4, c * 2),\n",
+        "                    nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),\n",
+        "                ]),\n",
+        "                ConvBlock(c * 4, c * 2),\n",
+        "                ConvBlock(c * 2, c),\n",
+        "                nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),\n",
+        "            ]),\n",
+        "            ConvBlock(c * 2, c),\n",
+        "            nn.Conv2d(c, 3, 3, padding=1),\n",
+        "        )\n",
+        "\n",
+        "    def forward(self, input, t):\n",
+        "        timestep_embed = expand_to_planes(self.timestep_embed(t[:, None]), input.shape)\n",
+        "        v = self.net(torch.cat([input, timestep_embed], dim=1))\n",
+        "        alphas, sigmas = map(partial(append_dims, n=v.ndim), t_to_alpha_sigma(t))\n",
+        "        pred = input * alphas - v * sigmas\n",
+        "        eps = input * sigmas + v * alphas\n",
+        "        return DiffusionOutput(v, pred, eps)\n",
+        "\n",
+        "\n",
+        "class SecondaryDiffusionImageNet2(nn.Module):\n",
+        "    def __init__(self):\n",
+        "        super().__init__()\n",
+        "        c = 64  # The base channel count\n",
+        "        cs = [c, c * 2, c * 2, c * 4, c * 4, c * 8]\n",
+        "\n",
+        "        self.timestep_embed = FourierFeatures(1, 16)\n",
+        "        self.down = nn.AvgPool2d(2)\n",
+        "        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)\n",
+        "\n",
+        "        self.net = nn.Sequential(\n",
+        "            ConvBlock(3 + 16, cs[0]),\n",
+        "            ConvBlock(cs[0], cs[0]),\n",
+        "            SkipBlock([\n",
+        "                self.down,\n",
+        "                ConvBlock(cs[0], cs[1]),\n",
+        "                ConvBlock(cs[1], cs[1]),\n",
+        "                SkipBlock([\n",
+        "                    self.down,\n",
+        "                    ConvBlock(cs[1], cs[2]),\n",
+        "                    ConvBlock(cs[2], cs[2]),\n",
+        "                    SkipBlock([\n",
+        "                        self.down,\n",
+        "                        ConvBlock(cs[2], cs[3]),\n",
+        "                        ConvBlock(cs[3], cs[3]),\n",
+        "                        SkipBlock([\n",
+        "                            self.down,\n",
+        "                            ConvBlock(cs[3], cs[4]),\n",
+        "                            ConvBlock(cs[4], cs[4]),\n",
+        "                            SkipBlock([\n",
+        "                                self.down,\n",
+        "                                ConvBlock(cs[4], cs[5]),\n",
+        "                                ConvBlock(cs[5], cs[5]),\n",
+        "                                ConvBlock(cs[5], cs[5]),\n",
+        "                                ConvBlock(cs[5], cs[4]),\n",
+        "                                self.up,\n",
+        "                            ]),\n",
+        "                            ConvBlock(cs[4] * 2, cs[4]),\n",
+        "                            ConvBlock(cs[4], cs[3]),\n",
+        "                            self.up,\n",
+        "                        ]),\n",
+        "                        ConvBlock(cs[3] * 2, cs[3]),\n",
+        "                        ConvBlock(cs[3], cs[2]),\n",
+        "                        self.up,\n",
+        "                    ]),\n",
+        "                    ConvBlock(cs[2] * 2, cs[2]),\n",
+        "                    ConvBlock(cs[2], cs[1]),\n",
+        "                    self.up,\n",
+        "                ]),\n",
+        "                ConvBlock(cs[1] * 2, cs[1]),\n",
+        "                ConvBlock(cs[1], cs[0]),\n",
+        "                self.up,\n",
+        "            ]),\n",
+        "            ConvBlock(cs[0] * 2, cs[0]),\n",
+        "            nn.Conv2d(cs[0], 3, 3, padding=1),\n",
+        "        )\n",
+        "\n",
+        "    def forward(self, input, t):\n",
+        "        timestep_embed = expand_to_planes(self.timestep_embed(t[:, None]), input.shape)\n",
+        "        v = self.net(torch.cat([input, timestep_embed], dim=1))\n",
+        "        alphas, sigmas = map(partial(append_dims, n=v.ndim), t_to_alpha_sigma(t))\n",
+        "        pred = input * alphas - v * sigmas\n",
+        "        eps = input * sigmas + v * alphas\n",
+        "        return DiffusionOutput(v, pred, eps)\n"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "CR6lPDOW7lxf"
+      },
+      "source": [
+        "# 3. Diffusion and CLIP model settings"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "z5irgNNZ7lxg",
+        "cellView": "form"
+      },
+      "source": [
+        "#@markdown ####**Models Settings:**\n",
+        "diffusion_model = \"512x512_diffusion_uncond_finetune_008100\" #@param [\"256x256_diffusion_uncond\", \"512x512_diffusion_uncond_finetune_008100\"]\n",
+        "use_secondary_model = True #@param {type: 'boolean'}\n",
+        "\n",
+        "timestep_respacing = '50' # param ['25','50','100','150','250','500','1000','ddim25','ddim50', 'ddim75', 'ddim100','ddim150','ddim250','ddim500','ddim1000']  \n",
+        "diffusion_steps = 1000 # param {type: 'number'}\n",
+        "use_checkpoint = True #@param {type: 'boolean'}\n",
+        "ViTB32 = True #@param{type:\"boolean\"}\n",
+        "ViTB16 = True #@param{type:\"boolean\"}\n",
+        "RN101 = False #@param{type:\"boolean\"}\n",
+        "RN50 = True #@param{type:\"boolean\"}\n",
+        "RN50x4 = False #@param{type:\"boolean\"}\n",
+        "RN50x16 = False #@param{type:\"boolean\"}\n",
+        "SLIPB16 = False #@param{type:\"boolean\"}\n",
+        "SLIPL16 = False #@param{type:\"boolean\"}\n",
+        "\n",
+        "#@markdown If you're having issues with model downloads, check this to compare SHAs:\n",
+        "check_model_SHA = False #@param{type:\"boolean\"}\n",
+        "\n",
+        "model_256_SHA = '983e3de6f95c88c81b2ca7ebb2c217933be1973b1ff058776b970f901584613a'\n",
+        "model_512_SHA = '9c111ab89e214862b76e1fa6a1b3f1d329b1a88281885943d2cdbe357ad57648'\n",
+        "model_secondary_SHA = '983e3de6f95c88c81b2ca7ebb2c217933be1973b1ff058776b970f901584613a'\n",
+        "\n",
+        "model_256_link = 'https://openaipublic.blob.core.windows.net/diffusion/jul-2021/256x256_diffusion_uncond.pt'\n",
+        "model_512_link = 'http://batbot.tv/ai/models/guided-diffusion/512x512_diffusion_uncond_finetune_008100.pt'\n",
+        "model_secondary_link = 'https://v-diffusion.s3.us-west-2.amazonaws.com/secondary_model_imagenet_2.pth'\n",
+        "\n",
+        "model_256_path = f'{model_path}/256x256_diffusion_uncond.pt'\n",
+        "model_512_path = f'{model_path}/512x512_diffusion_uncond_finetune_008100.pt'\n",
+        "model_secondary_path = f'{model_path}/secondary_model_imagenet_2.pth'\n",
+        "\n",
+        "# Download the diffusion model\n",
+        "if diffusion_model == '256x256_diffusion_uncond':\n",
+        "  if os.path.exists(model_256_path) and check_model_SHA:\n",
+        "    print('Checking 256 Diffusion File')\n",
+        "    with open(model_256_path,\"rb\") as f:\n",
+        "        bytes = f.read() \n",
+        "        hash = hashlib.sha256(bytes).hexdigest()\n",
+        "    if hash == model_256_SHA:\n",
+        "      print('256 Model SHA matches')\n",
+        "      model_256_downloaded = True\n",
+        "    else: \n",
+        "      print(\"256 Model SHA doesn't match, redownloading...\")\n",
+        "      !wget --continue {model_256_link} -P {model_path}\n",
+        "      model_256_downloaded = True\n",
+        "  elif os.path.exists(model_256_path) and not check_model_SHA or model_256_downloaded == True:\n",
+        "    print('256 Model already downloaded, check check_model_SHA if the file is corrupt')\n",
+        "  else:  \n",
+        "    !wget --continue {model_256_link} -P {model_path}\n",
+        "    model_256_downloaded = True\n",
+        "elif diffusion_model == '512x512_diffusion_uncond_finetune_008100':\n",
+        "  if os.path.exists(model_512_path) and check_model_SHA:\n",
+        "    print('Checking 512 Diffusion File')\n",
+        "    with open(model_512_path,\"rb\") as f:\n",
+        "        bytes = f.read() \n",
+        "        hash = hashlib.sha256(bytes).hexdigest()\n",
+        "    if hash == model_512_SHA:\n",
+        "      print('512 Model SHA matches')\n",
+        "      model_512_downloaded = True\n",
+        "    else:  \n",
+        "      print(\"512 Model SHA doesn't match, redownloading...\")\n",
+        "      !wget --continue {model_512_link} -P {model_path}\n",
+        "      model_512_downloaded = True\n",
+        "  elif os.path.exists(model_512_path) and not check_model_SHA or model_512_downloaded == True:\n",
+        "    print('512 Model already downloaded, check check_model_SHA if the file is corrupt')\n",
+        "  else:  \n",
+        "    !wget --continue {model_512_link} -P {model_path}\n",
+        "    model_512_downloaded = True\n",
+        "\n",
+        "\n",
+        "# Download the secondary diffusion model v2\n",
+        "if use_secondary_model == True:\n",
+        "  if os.path.exists(model_secondary_path) and check_model_SHA:\n",
+        "    print('Checking Secondary Diffusion File')\n",
+        "    with open(model_secondary_path,\"rb\") as f:\n",
+        "        bytes = f.read() \n",
+        "        hash = hashlib.sha256(bytes).hexdigest()\n",
+        "    if hash == model_secondary_SHA:\n",
+        "      print('Secondary Model SHA matches')\n",
+        "      model_secondary_downloaded = True\n",
+        "    else:  \n",
+        "      print(\"Secondary Model SHA doesn't match, redownloading...\")\n",
+        "      !wget --continue {model_secondary_link} -P {model_path}\n",
+        "      model_secondary_downloaded = True\n",
+        "  elif os.path.exists(model_secondary_path) and not check_model_SHA or model_secondary_downloaded == True:\n",
+        "    print('Secondary Model already downloaded, check check_model_SHA if the file is corrupt')\n",
+        "  else:  \n",
+        "    !wget --continue {model_secondary_link} -P {model_path}\n",
+        "    model_secondary_downloaded = True\n",
+        "\n",
+        "model_config = model_and_diffusion_defaults()\n",
+        "if diffusion_model == '512x512_diffusion_uncond_finetune_008100':\n",
+        "    model_config.update({\n",
+        "        'attention_resolutions': '32, 16, 8',\n",
+        "        'class_cond': False,\n",
+        "        'diffusion_steps': diffusion_steps,\n",
+        "        'rescale_timesteps': True,\n",
+        "        'timestep_respacing': timestep_respacing,\n",
+        "        'image_size': 512,\n",
+        "        'learn_sigma': True,\n",
+        "        'noise_schedule': 'linear',\n",
+        "        'num_channels': 256,\n",
+        "        'num_head_channels': 64,\n",
+        "        'num_res_blocks': 2,\n",
+        "        'resblock_updown': True,\n",
+        "        'use_checkpoint': use_checkpoint,\n",
+        "        'use_fp16': True,\n",
+        "        'use_scale_shift_norm': True,\n",
+        "    })\n",
+        "elif diffusion_model == '256x256_diffusion_uncond':\n",
+        "    model_config.update({\n",
+        "        'attention_resolutions': '32, 16, 8',\n",
+        "        'class_cond': False,\n",
+        "        'diffusion_steps': diffusion_steps,\n",
+        "        'rescale_timesteps': True,\n",
+        "        'timestep_respacing': timestep_respacing,\n",
+        "        'image_size': 256,\n",
+        "        'learn_sigma': True,\n",
+        "        'noise_schedule': 'linear',\n",
+        "        'num_channels': 256,\n",
+        "        'num_head_channels': 64,\n",
+        "        'num_res_blocks': 2,\n",
+        "        'resblock_updown': True,\n",
+        "        'use_checkpoint': use_checkpoint,\n",
+        "        'use_fp16': True,\n",
+        "        'use_scale_shift_norm': True,\n",
+        "    })\n",
+        "\n",
+        "secondary_model_ver = 2\n",
+        "model_default = model_config['image_size']\n",
+        "\n",
+        "\n",
+        "\n",
+        "if secondary_model_ver == 2:\n",
+        "    secondary_model = SecondaryDiffusionImageNet2()\n",
+        "    secondary_model.load_state_dict(torch.load(f'{model_path}/secondary_model_imagenet_2.pth', map_location='cpu'))\n",
+        "secondary_model.eval().requires_grad_(False).to(device)\n",
+        "\n",
+        "clip_models = []\n",
+        "if ViTB32 is True: clip_models.append(clip.load('ViT-B/32', jit=False)[0].eval().requires_grad_(False).to(device)) \n",
+        "if ViTB16 is True: clip_models.append(clip.load('ViT-B/16', jit=False)[0].eval().requires_grad_(False).to(device) ) \n",
+        "if RN50 is True: clip_models.append(clip.load('RN50', jit=False)[0].eval().requires_grad_(False).to(device))\n",
+        "if RN50x4 is True: clip_models.append(clip.load('RN50x4', jit=False)[0].eval().requires_grad_(False).to(device)) \n",
+        "if RN50x16 is True: clip_models.append(clip.load('RN50x16', jit=False)[0].eval().requires_grad_(False).to(device)) \n",
+        "if RN101 is True: clip_models.append(clip.load('RN101', jit=False)[0].eval().requires_grad_(False).to(device)) \n",
+        "\n",
+        "if SLIPB16:\n",
+        "  SLIPB16model = SLIP_VITB16(ssl_mlp_dim=4096, ssl_emb_dim=256)\n",
+        "  if not os.path.exists(f'{model_path}/slip_base_100ep.pt'):\n",
+        "    !wget https://dl.fbaipublicfiles.com/slip/slip_base_100ep.pt -P {model_path}\n",
+        "  sd = torch.load(f'{model_path}/slip_base_100ep.pt')\n",
+        "  real_sd = {}\n",
+        "  for k, v in sd['state_dict'].items():\n",
+        "    real_sd['.'.join(k.split('.')[1:])] = v\n",
+        "  del sd\n",
+        "  SLIPB16model.load_state_dict(real_sd)\n",
+        "  SLIPB16model.requires_grad_(False).eval().to(device)\n",
+        "\n",
+        "  clip_models.append(SLIPB16model)\n",
+        "\n",
+        "if SLIPL16:\n",
+        "  SLIPL16model = SLIP_VITL16(ssl_mlp_dim=4096, ssl_emb_dim=256)\n",
+        "  if not os.path.exists(f'{model_path}/slip_large_100ep.pt'):\n",
+        "    !wget https://dl.fbaipublicfiles.com/slip/slip_large_100ep.pt -P {model_path}\n",
+        "  sd = torch.load(f'{model_path}/slip_large_100ep.pt')\n",
+        "  real_sd = {}\n",
+        "  for k, v in sd['state_dict'].items():\n",
+        "    real_sd['.'.join(k.split('.')[1:])] = v\n",
+        "  del sd\n",
+        "  SLIPL16model.load_state_dict(real_sd)\n",
+        "  SLIPL16model.requires_grad_(False).eval().to(device)\n",
+        "\n",
+        "  clip_models.append(SLIPL16model)\n",
+        "\n",
+        "normalize = T.Normalize(mean=[0.48145466, 0.4578275, 0.40821073], std=[0.26862954, 0.26130258, 0.27577711])\n",
+        "lpips_model = lpips.LPIPS(net='vgg').to(device)"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "CzNe0Oyh72AX"
+      },
+      "source": [
+        "# 4. Settings"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "1ED8nq0E72AY",
+        "cellView": "form"
+      },
+      "source": [
+        "#@markdown ####**Basic Settings:**\n",
+        "batch_name = 'DiscoTime' #@param{type: 'string'}\n",
+        "steps = 250 #@param{type: 'number'}\n",
+        "width_height = [1280, 768]#@param{type: 'raw'}\n",
+        "# height = 512#@param{type: 'raw'}\n",
+        "\n",
+        "\n",
+        "clip_guidance_scale = 5000 #@param{type: 'number'}\n",
+        "tv_scale =  0#@param{type: 'number'}\n",
+        "range_scale =   150#@param{type: 'number'}\n",
+        "sat_scale = 0  #@param{type: 'number'}\n",
+        "cutn = 16  #param{type: 'number'}\n",
+        "cutn_batches = 1  #@param{type: 'number'}\n",
+        "\n",
+        "#@markdown ---\n",
+        "\n",
+        "#@markdown ####**Init Settings:**\n",
+        "\n",
+        "init_image = '' #@param{type: 'string'}\n",
+        "init_scale =   0#@param{type: 'number'}\n",
+        "skip_timesteps = 0  #@param{type: 'number'}\n",
+        "\n",
+        "\n",
+        "cut_overview = [35]*400+[5]*600     #Format: 35 overview cuts for the first 400/1000 steps, then 5 for the last 600/1000\n",
+        "cut_innercut = [5]*400+[35]*600\n",
+        "cut_ic_pow = 1\n",
+        "cut_icgray_p = [0.2]*400+[0]*600\n",
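+        "# These schedules are indexed as list[1000 - t] inside cond_fn, so the first entries apply to\n",
+        "# the earliest (noisiest) timesteps and the last entries to the final refinement steps.\n",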
+        "\n",
+        "if init_image == '':\n",
+        "  init_image = None\n",
+        "\n",
+        "side_x = (width_height[0]//64)*64\n",
+        "side_y = (width_height[1]//64)*64\n",
+        "\n",
+        "if side_x != width_height[0] or side_y != width_height[1]:\n",
+        "  print(f'Changing output size to {side_x}x{side_y}. Dimensions must be multiples of 64.')\n",
+        "\n",
+        "timestep_respacing = f'ddim{steps}'\n",
+        "diffusion_steps = (1000//steps)*steps if steps < 1000 else steps\n",
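+        "# Worked example: steps=250 gives timestep_respacing='ddim250' and diffusion_steps=(1000//250)*250=1000,\n",
+        "# i.e. the DDIM schedule subsamples the model's full 1000-step noise schedule.\n",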
+        "model_config.update({\n",
+        "    'timestep_respacing': timestep_respacing,\n",
+        "    'diffusion_steps': diffusion_steps,\n",
+        "})\n",
+        "\n",
+        "#Make folder for batch\n",
+        "batchFolder = f'{outDirPath}/{batch_name}'\n",
+        "createPath(batchFolder)\n"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Extra Settings (run at least once)\n",
+        " Partial Saves, Advanced Settings "
+      ],
+      "metadata": {
+        "id": "u1VHzHvNx5fd"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "#@markdown ####**Saving:**\n",
+        "\n",
+        "intermediate_saves =  [200, 225, 245]#@param{type: 'raw'}\n",
+        "intermediates_in_subfolder = True #@param{type: 'boolean'}\n",
+        "#@markdown Intermediate steps will save a copy at your specified intervals. You can either format it as a single integer or a list of specific steps \n",
+        "\n",
+        "#@markdown A value of `2` will save a copy at 33% and 66%. 0 will save none.\n",
+        "\n",
+        "#@markdown A value of `[5, 9, 34, 45]` will save at steps 5, 9, 34, and 45. (Make sure to include the brackets)\n",
+        "\n",
+        "\n",
+        "if type(intermediate_saves) is not list:\n",
+        "  steps_per_checkpoint = math.floor((steps - skip_timesteps - 1) // (intermediate_saves+1))\n",
+        "  steps_per_checkpoint = steps_per_checkpoint if steps_per_checkpoint > 0 else 1\n",
+        "  print(f'Will save every {steps_per_checkpoint} steps')\n",
+        "else:\n",
+        "  steps_per_checkpoint = None\n",
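+        "# Illustrative example: with steps=250, skip_timesteps=0 and intermediate_saves=2,\n",
+        "# steps_per_checkpoint = (250 - 0 - 1) // 3 = 83, i.e. partial saves near 33% and 66% of the run.\n",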
+        "\n",
+        "if steps_per_checkpoint != 0 and intermediates_in_subfolder is True:\n",
+        "  partialFolder = f'{batchFolder}/partials'\n",
+        "  createPath(partialFolder)\n",
+        "\n",
+        "  #@markdown ---\n",
+        "\n",
+        "#@markdown ####**Advanced Settings:**\n",
+        "#@markdown *There are a few extra advanced settings available if you double click this cell.*\n",
+        "\n",
+        "#@markdown *Perlin init will replace your init, so uncheck if using one.*\n",
+        "\n",
+        "perlin_init = False  #@param{type: 'boolean'}\n",
+        "perlin_mode = 'mixed' #@param ['mixed', 'color', 'gray']\n",
+        "set_seed = 'random_seed' #@param{type: 'string'}\n",
+        "eta = 1.0#@param{type: 'number'}\n",
+        "clamp_grad = True #@param{type: 'boolean'}\n",
+        "clamp_max = 0.05 #@param{type: 'number'}\n",
+        "\n",
+        "\n",
+        "### EXTRA ADVANCED SETTINGS:\n",
+        "\n",
+        "skip_augs = False #@param{type: 'boolean'}\n",
+        "randomize_class = True\n",
+        "clip_denoised = False\n",
+        "fuzzy_prompt = False\n",
+        "rand_mag = 0.05"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "lCLMxtILyAHA"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "iBoAkz6Q72Aa"
+      },
+      "source": [
+        "##Prompts"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "zfbk0vQE72Aa"
+      },
+      "source": [
+        "text_prompts = [\n",
+        "    \"A beautiful painting of a singular lighthouse, shining its light across a tumultuous sea of blood by greg rutkowski and thomas kinkade, trending on artstation.\"\n",
+        "]\n",
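+        "\n",
+        "# Each entry is parsed by parse_prompt (defined earlier) into the prompt text and a per-prompt\n",
+        "# weight for the CLIP loss; the weights are normalised per CLIP model inside do_run.\n",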
+        "\n",
+        "image_prompts = [ #currently disabled\n",
+        "    # 'mona.jpg',\n",
+        "]"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "0T2i11sl737J"
+      },
+      "source": [
+        "# 5. Diffuse!"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "cellView": "form",
+        "id": "G7r54h7I737K"
+      },
+      "source": [
+        "#@title Do the Run!\n",
+        "\n",
+        "display_rate =  50#@param{type: 'number'}\n",
+        "n_batches =  100#@param{type: 'number'}\n",
+        "batch_size = 1 \n",
+        "\n",
+        "batchNum = len(glob(batchFolder+\"/*.txt\"))\n",
+        "\n",
+        "while path.isfile(f\"{batchFolder}/{batch_name}({batchNum})_settings.txt\") is True or path.isfile(f\"{batchFolder}/{batch_name}-{batchNum}_settings.txt\") is True:\n",
+        "  batchNum += 1\n",
+        "\n",
+        "if set_seed == 'random_seed':\n",
+        "    random.seed()\n",
+        "    seed = random.randint(0, 2**32 - 1)  # keep within numpy's valid 32-bit seed range\n",
+        "    # print(f'Using seed: {seed}')\n",
+        "else:\n",
+        "  seed = int(set_seed)\n",
+        "\n",
+        "print('Prepping model...')\n",
+        "model, diffusion = create_model_and_diffusion(**model_config)\n",
+        "model.load_state_dict(torch.load(f'{model_path}/{diffusion_model}.pt', map_location='cpu'))\n",
+        "model.requires_grad_(False).eval().to(device)\n",
+        "for name, param in model.named_parameters():\n",
+        "    if 'qkv' in name or 'norm' in name or 'proj' in name:\n",
+        "        param.requires_grad_()\n",
+        "if model_config['use_fp16']:\n",
+        "    model.convert_to_fp16()\n",
+        "\n",
+        "gc.collect()\n",
+        "torch.cuda.empty_cache()\n",
+        "try:\n",
+        "    do_run()\n",
+        "except KeyboardInterrupt:\n",
+        "    pass\n",
+        "finally:\n",
+        "    print('Seed used:', seed)\n",
+        "    gc.collect()\n",
+        "    torch.cuda.empty_cache()"
+      ],
+      "execution_count": null,
+      "outputs": []
+    }
+  ]
+}

+ 2698 - 0
archive/Disco_Diffusion_v4_1_[w_Video_Inits,_Recovery_&_DDIM_Sharpen].ipynb

@@ -0,0 +1,2698 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "name": "Disco Diffusion v4.1 [w/ Video Inits, Recovery & DDIM Sharpen].ipynb",
+      "private_outputs": true,
+      "provenance": [],
+      "collapsed_sections": [
+        "1YwMUyt9LHG1",
+        "XTu6AjLyFQUq",
+        "_9Eg9Kf5FlfK",
+        "u1VHzHvNx5fd"
+      ],
+      "machine_shape": "hm"
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    },
+    "accelerator": "GPU"
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "1YwMUyt9LHG1"
+      },
+      "source": [
+        "# Disco Diffusion v4.1 - Now with Video Inits, Recovery, DDIM Sharpen and improved UI\n",
+        "\n",
+        "In case of confusion, Disco is the name of this notebook edit. The diffusion model in use is Katherine Crowson's fine-tuned 512x512 model\n",
+        "\n",
+        "For issues, message [@Somnai_dreams](https://twitter.com/Somnai_dreams) or Somnai#6855\n",
+        "\n",
+        "Credits & Changelog ⬇️\n"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Original notebook by Katherine Crowson (https://github.com/crowsonkb, https://twitter.com/RiversHaveWings). It uses either OpenAI's 256x256 unconditional ImageNet or Katherine Crowson's fine-tuned 512x512 diffusion model (https://github.com/openai/guided-diffusion), together with CLIP (https://github.com/openai/CLIP) to connect text prompts with images.\n",
+        "\n",
+        "Modified by Daniel Russell (https://github.com/russelldc, https://twitter.com/danielrussruss) to include (hopefully) optimal params for quick generations in 15-100 timesteps rather than 1000, as well as more robust augmentations.\n",
+        "\n",
+        "Further improvements from Dango233 and nsheppard helped improve the quality of diffusion in general, and especially so for shorter runs like this notebook aims to achieve.\n",
+        "\n",
+        "Vark added code to load in multiple CLIP models at once, which all prompts are evaluated against; this may greatly improve accuracy.\n",
+        "\n",
+        "The latest zoom, pan, rotation, and keyframes features were taken from Chigozie Nri's VQGAN Zoom Notebook (https://github.com/chigozienri, https://twitter.com/chigozienri)\n",
+        "\n",
+        "The advanced DangoCutn Cutout method is also from Dango233.\n",
+        "\n",
+        "--\n",
+        "\n",
+        "I, Somnai (https://twitter.com/Somnai_dreams), have added Diffusion Animation techniques, QoL improvements and various implementations of tech and techniques, mostly listed in the changelog below."
+      ],
+      "metadata": {
+        "id": "wX5omb9C7Bjz"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title Licensed under the MIT License\n",
+        "\n",
+        "# Copyright (c) 2021 Katherine Crowson \n",
+        "\n",
+        "# Permission is hereby granted, free of charge, to any person obtaining a copy\n",
+        "# of this software and associated documentation files (the \"Software\"), to deal\n",
+        "# in the Software without restriction, including without limitation the rights\n",
+        "# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n",
+        "# copies of the Software, and to permit persons to whom the Software is\n",
+        "# furnished to do so, subject to the following conditions:\n",
+        "\n",
+        "# The above copyright notice and this permission notice shall be included in\n",
+        "# all copies or substantial portions of the Software.\n",
+        "\n",
+        "# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n",
+        "# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n",
+        "# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n",
+        "# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n",
+        "# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n",
+        "# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n",
+        "# THE SOFTWARE."
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "wDSYhyjqZQI9"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "#@title <- View Changelog\n",
+        "\n",
+        "skip_for_run_all = True #@param {type: 'boolean'}\n",
+        "\n",
+        "if skip_for_run_all == False:\n",
+        "  print(\n",
+        "      '''\n",
+        "  v1 Update: Oct 29th 2021\n",
+        "\n",
+        "      QoL improvements added by Somnai (@somnai_dreams), including user friendly UI, settings+prompt saving and improved google drive folder organization.\n",
+        "\n",
+        "  v1.1 Update: Nov 13th 2021\n",
+        "\n",
+        "      Now includes sizing options, intermediate saves and fixed image prompts and perlin inits. Unexposed the batch option since it doesn't work.\n",
+        "\n",
+        "  v2 Update: Nov 22nd 2021\n",
+        "\n",
+        "      Initial addition of Katherine Crowson's Secondary Model Method (https://colab.research.google.com/drive/1mpkrhOjoyzPeSWy2r7T8EYRaU7amYOOi#scrollTo=X5gODNAMEUCR)\n",
+        "\n",
+        "      Noticed settings were saving with the wrong name so corrected it. Let me know if you preferred the old scheme.\n",
+        "\n",
+        "  v3 Update: Dec 24th 2021\n",
+        "\n",
+        "      Implemented Dango's advanced cutout method\n",
+        "\n",
+        "      Added SLIP models, thanks to NeuralDivergent\n",
+        "\n",
+        "      Fixed issue with NaNs resulting in black images, with massive help and testing from @Softology\n",
+        "\n",
+        "      Perlin now changes properly within batches (not sure where this perlin_regen code came from originally, but thank you)\n",
+        "\n",
+        "  v4 Update: Jan 2022\n",
+        "\n",
+        "      Implemented Diffusion Zooming\n",
+        "\n",
+        "      Added Chigozie keyframing\n",
+        "\n",
+        "      Made a bunch of edits to processes\n",
+        "  \n",
+        "  v4.1 Update: Jan 14th 2022\n",
+        "\n",
+        "      Added video input mode\n",
+        "\n",
+        "      Added license that somehow went missing\n",
+        "\n",
+        "      Added improved prompt keyframing, fixed image_prompts and multiple prompts\n",
+        "\n",
+        "      Improved UI\n",
+        "\n",
+        "      Significant under the hood cleanup and improvement\n",
+        "\n",
+        "      Refined defaults for each mode\n",
+        "\n",
+        "      Added latent-diffusion SuperRes for sharpening\n",
+        "\n",
+        "      Added resume run mode\n",
+        "\n",
+        "      '''\n",
+        "  )"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "qFB3nwLSQI8X"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "XTu6AjLyFQUq"
+      },
+      "source": [
+        "#Tutorial"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "YR806W0wi3He"
+      },
+      "source": [
+        "**Diffusion settings**\n",
+        "---\n",
+        "\n",
+        "This section is outdated as of v2\n",
+        "\n",
+        "Setting | Description | Default\n",
+        "--- | --- | ---\n",
+        "**Your vision:**\n",
+        "`text_prompts` | A description of what you'd like the machine to generate. Think of it like writing the caption below your image on a website. | N/A\n",
+        "`image_prompts` | Think of these images more as a description of their contents. | N/A\n",
+        "**Image quality:**\n",
+        "`clip_guidance_scale`  | Controls how much the image should look like the prompt. | 1000\n",
+        "`tv_scale` |  Controls the smoothness of the final output. | 150\n",
+        "`range_scale` |  Controls how far out of range RGB values are allowed to be. | 150\n",
+        "`sat_scale` | Controls how much saturation is allowed. From nshepperd's JAX notebook. | 0\n",
+        "`cutn` | Controls how many crops to take from the image. | 16\n",
+        "`cutn_batches` | Accumulate CLIP gradient from multiple batches of cuts  | 2\n",
+        "**Init settings:**\n",
+        "`init_image` |   URL or local path | None\n",
+        "`init_scale` |  This enhances the effect of the init image, a good value is 1000 | 0\n",
+        "`skip_steps` | Controls the starting point along the diffusion timesteps | 0\n",
+        "`perlin_init` |  Option to start with random perlin noise | False\n",
+        "`perlin_mode` |  ('gray', 'color') | 'mixed'\n",
+        "**Advanced:**\n",
+        "`skip_augs` |Controls whether to skip torchvision augmentations | False\n",
+        "`randomize_class` |Controls whether the imagenet class is randomly changed each iteration | True\n",
+        "`clip_denoised` |Determines whether CLIP discriminates a noisy or denoised image | False\n",
+        "`clamp_grad` |Experimental: Using adaptive clip grad in the cond_fn | True\n",
+        "`seed`  | Choose a random seed and print it at end of run for reproduction | random_seed\n",
+        "`fuzzy_prompt` | Controls whether to add multiple noisy prompts to the prompt losses | False\n",
+        "`rand_mag` |Controls the magnitude of the random noise | 0.1\n",
+        "`eta` | DDIM hyperparameter | 0.5\n",
+        "\n",
+        "..\n",
+        "\n",
+        "**Model settings**\n",
+        "---\n",
+        "\n",
+        "Setting | Description | Default\n",
+        "--- | --- | ---\n",
+        "**Diffusion:**\n",
+        "`timestep_respacing`  | Modify this value to decrease the number of timesteps. | ddim100\n",
+        "`diffusion_steps` || 1000\n",
+        "**Diffusion:**\n",
+        "`clip_models`  | Models of CLIP to load. Typically the more, the better but they all come at a hefty VRAM cost. | ViT-B/32, ViT-B/16, RN50x4"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "_9Eg9Kf5FlfK"
+      },
+      "source": [
+        "# 1. Set Up"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "qZ3rNuAWAewx",
+        "cellView": "form"
+      },
+      "source": [
+        "#@title 1.1 Check GPU Status\n",
+        "!nvidia-smi -L"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "yZsjzwS0YGo6",
+        "cellView": "form"
+      },
+      "source": [
+        "from google.colab import drive\n",
+        "#@title 1.2 Prepare Folders\n",
+        "#@markdown If you connect your Google Drive, you can save the final image of each run on your drive.\n",
+        "\n",
+        "google_drive = True #@param {type:\"boolean\"}\n",
+        "\n",
+        "#@markdown Click here if you'd like to save the diffusion model checkpoint file to (and/or load from) your Google Drive:\n",
+        "yes_please = True #@param {type:\"boolean\"}\n",
+        "\n",
+        "if google_drive is True:\n",
+        "  drive.mount('/content/drive')\n",
+        "  root_path = '/content/drive/MyDrive/AI/Disco_Diffusion'\n",
+        "else:\n",
+        "  root_path = '/content'\n",
+        "\n",
+        "import os\n",
+        "from os import path\n",
+        "#Simple create paths taken with modifications from Datamosh's Batch VQGAN+CLIP notebook\n",
+        "def createPath(filepath):\n",
+        "    if path.exists(filepath) == False:\n",
+        "      os.makedirs(filepath)\n",
+        "      print(f'Made {filepath}')\n",
+        "    else:\n",
+        "      print(f'filepath {filepath} exists.')\n",
+        "\n",
+        "initDirPath = f'{root_path}/init_images'\n",
+        "createPath(initDirPath)\n",
+        "outDirPath = f'{root_path}/images_out'\n",
+        "createPath(outDirPath)\n",
+        "\n",
+        "if google_drive and not yes_please or not google_drive:\n",
+        "    model_path = '/content/models'\n",
+        "    createPath(model_path)\n",
+        "if google_drive and yes_please:\n",
+        "    model_path = f'{root_path}/models'\n",
+        "    createPath(model_path)\n",
+        "# libraries = f'{root_path}/libraries'\n",
+        "# createPath(libraries)\n",
+        "\n"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "JmbrcrhpBPC6",
+        "cellView": "form"
+      },
+      "source": [
+        "#@title ### 1.3 Install and import dependencies\n",
+        "\n",
+        "if google_drive is not True:\n",
+        "  root_path = f'/content'\n",
+        "  model_path = '/content/models' \n",
+        "\n",
+        "model_256_downloaded = False\n",
+        "model_512_downloaded = False\n",
+        "model_secondary_downloaded = False\n",
+        "\n",
+        "!git clone https://github.com/openai/CLIP\n",
+        "# !git clone https://github.com/facebookresearch/SLIP.git\n",
+        "!git clone https://github.com/crowsonkb/guided-diffusion\n",
+        "!git clone https://github.com/assafshocher/ResizeRight.git\n",
+        "!pip install -e ./CLIP\n",
+        "!pip install -e ./guided-diffusion\n",
+        "!pip install lpips datetime timm\n",
+        "!apt install imagemagick\n",
+        "\n",
+        "\n",
+        "import sys\n",
+        "# sys.path.append('./SLIP')\n",
+        "sys.path.append('./ResizeRight')\n",
+        "from dataclasses import dataclass\n",
+        "from functools import partial\n",
+        "import cv2\n",
+        "import pandas as pd\n",
+        "import gc\n",
+        "import io\n",
+        "import math\n",
+        "import timm\n",
+        "from IPython import display\n",
+        "import lpips\n",
+        "from PIL import Image, ImageOps\n",
+        "import requests\n",
+        "from glob import glob\n",
+        "import json\n",
+        "from types import SimpleNamespace\n",
+        "import torch\n",
+        "from torch import nn\n",
+        "from torch.nn import functional as F\n",
+        "import torchvision.transforms as T\n",
+        "import torchvision.transforms.functional as TF\n",
+        "from tqdm.notebook import tqdm\n",
+        "sys.path.append('./CLIP')\n",
+        "sys.path.append('./guided-diffusion')\n",
+        "import clip\n",
+        "from resize_right import resize\n",
+        "# from models import SLIP_VITB16, SLIP, SLIP_VITL16\n",
+        "from guided_diffusion.script_util import create_model_and_diffusion, model_and_diffusion_defaults\n",
+        "from datetime import datetime\n",
+        "import numpy as np\n",
+        "import matplotlib.pyplot as plt\n",
+        "import random\n",
+        "from ipywidgets import Output\n",
+        "import hashlib\n",
+        "\n",
+        "#SuperRes\n",
+        "!git clone https://github.com/CompVis/latent-diffusion.git\n",
+        "!git clone https://github.com/CompVis/taming-transformers\n",
+        "!pip install -e ./taming-transformers\n",
+        "!pip install ipywidgets omegaconf>=2.0.0 pytorch-lightning>=1.0.8 torch-fidelity einops wandb\n",
+        "\n",
+        "#SuperRes\n",
+        "import ipywidgets as widgets\n",
+        "import os\n",
+        "sys.path.append(\".\")\n",
+        "sys.path.append('./taming-transformers')\n",
+        "from taming.models import vqgan # checking correct import from taming\n",
+        "from torchvision.datasets.utils import download_url\n",
+        "%cd '/content/latent-diffusion'\n",
+        "from functools import partial\n",
+        "from ldm.util import instantiate_from_config\n",
+        "from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like\n",
+        "# from ldm.models.diffusion.ddim import DDIMSampler\n",
+        "from ldm.util import ismap\n",
+        "%cd '/content'\n",
+        "from google.colab import files\n",
+        "from IPython.display import Image as ipyimg\n",
+        "from numpy import asarray\n",
+        "from einops import rearrange, repeat\n",
+        "import torch, torchvision\n",
+        "import time\n",
+        "from omegaconf import OmegaConf\n",
+        "import warnings\n",
+        "warnings.filterwarnings(\"ignore\", category=UserWarning)\n",
+        "\n",
+        "\n",
+        "import torch\n",
+        "device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')\n",
+        "print('Using device:', device)\n",
+        "\n",
+        "if torch.cuda.get_device_capability(device) == (8,0): ## A100 fix thanks to Emad\n",
+        "  print('Disabling CUDNN for A100 gpu', file=sys.stderr)\n",
+        "  torch.backends.cudnn.enabled = False"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "FpZczxnOnPIU",
+        "cellView": "form"
+      },
+      "source": [
+        "#@title 1.4 Define necessary functions\n",
+        "\n",
+        "# https://gist.github.com/adefossez/0646dbe9ed4005480a2407c62aac8869\n",
+        "\n",
+        "def interp(t):\n",
+        "    return 3 * t**2 - 2 * t ** 3\n",
+        "\n",
+        "def perlin(width, height, scale=10, device=None):\n",
+        "    gx, gy = torch.randn(2, width + 1, height + 1, 1, 1, device=device)\n",
+        "    xs = torch.linspace(0, 1, scale + 1)[:-1, None].to(device)\n",
+        "    ys = torch.linspace(0, 1, scale + 1)[None, :-1].to(device)\n",
+        "    wx = 1 - interp(xs)\n",
+        "    wy = 1 - interp(ys)\n",
+        "    dots = 0\n",
+        "    dots += wx * wy * (gx[:-1, :-1] * xs + gy[:-1, :-1] * ys)\n",
+        "    dots += (1 - wx) * wy * (-gx[1:, :-1] * (1 - xs) + gy[1:, :-1] * ys)\n",
+        "    dots += wx * (1 - wy) * (gx[:-1, 1:] * xs - gy[:-1, 1:] * (1 - ys))\n",
+        "    dots += (1 - wx) * (1 - wy) * (-gx[1:, 1:] * (1 - xs) - gy[1:, 1:] * (1 - ys))\n",
+        "    return dots.permute(0, 2, 1, 3).contiguous().view(width * scale, height * scale)\n",
+        "\n",
+        "def perlin_ms(octaves, width, height, grayscale, device=device):\n",
+        "    out_array = [0.5] if grayscale else [0.5, 0.5, 0.5]\n",
+        "    # out_array = [0.0] if grayscale else [0.0, 0.0, 0.0]\n",
+        "    for i in range(1 if grayscale else 3):\n",
+        "        scale = 2 ** len(octaves)\n",
+        "        oct_width = width\n",
+        "        oct_height = height\n",
+        "        for oct in octaves:\n",
+        "            p = perlin(oct_width, oct_height, scale, device)\n",
+        "            out_array[i] += p * oct\n",
+        "            scale //= 2\n",
+        "            oct_width *= 2\n",
+        "            oct_height *= 2\n",
+        "    return torch.cat(out_array)\n",
+        "\n",
+        "def create_perlin_noise(octaves=[1, 1, 1, 1], width=2, height=2, grayscale=True):\n",
+        "    out = perlin_ms(octaves, width, height, grayscale)\n",
+        "    if grayscale:\n",
+        "        out = TF.resize(size=(side_y, side_x), img=out.unsqueeze(0))\n",
+        "        out = TF.to_pil_image(out.clamp(0, 1)).convert('RGB')\n",
+        "    else:\n",
+        "        out = out.reshape(-1, 3, out.shape[0]//3, out.shape[1])\n",
+        "        out = TF.resize(size=(side_y, side_x), img=out)\n",
+        "        out = TF.to_pil_image(out.clamp(0, 1).squeeze())\n",
+        "\n",
+        "    out = ImageOps.autocontrast(out)\n",
+        "    return out\n",
+        "\n",
+        "def regen_perlin():\n",
+        "    if perlin_mode == 'color':\n",
+        "        init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, False)\n",
+        "        init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, False)\n",
+        "    elif perlin_mode == 'gray':\n",
+        "        init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, True)\n",
+        "        init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, True)\n",
+        "    else:\n",
+        "        init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, False)\n",
+        "        init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, True)\n",
+        "\n",
+        "    init = TF.to_tensor(init).add(TF.to_tensor(init2)).div(2).to(device).unsqueeze(0).mul(2).sub(1)\n",
+        "    del init2\n",
+        "    return init.expand(batch_size, -1, -1, -1)\n",
+        "\n",
+        "def fetch(url_or_path):\n",
+        "    if str(url_or_path).startswith('http://') or str(url_or_path).startswith('https://'):\n",
+        "        r = requests.get(url_or_path)\n",
+        "        r.raise_for_status()\n",
+        "        fd = io.BytesIO()\n",
+        "        fd.write(r.content)\n",
+        "        fd.seek(0)\n",
+        "        return fd\n",
+        "    return open(url_or_path, 'rb')\n",
+        "\n",
+        "def read_image_workaround(path):\n",
+        "    \"\"\"OpenCV reads images as BGR, Pillow saves them as RGB. Work around\n",
+        "    this incompatibility to avoid colour inversions.\"\"\"\n",
+        "    im_tmp = cv2.imread(path)\n",
+        "    return cv2.cvtColor(im_tmp, cv2.COLOR_BGR2RGB)\n",
+        "\n",
+        "def parse_prompt(prompt):\n",
+        "    if prompt.startswith('http://') or prompt.startswith('https://'):\n",
+        "        vals = prompt.rsplit(':', 2)\n",
+        "        vals = [vals[0] + ':' + vals[1], *vals[2:]]\n",
+        "    else:\n",
+        "        vals = prompt.rsplit(':', 1)\n",
+        "    vals = vals + ['', '1'][len(vals):]\n",
+        "    return vals[0], float(vals[1])\n",
+        "\n",
+        "def sinc(x):\n",
+        "    return torch.where(x != 0, torch.sin(math.pi * x) / (math.pi * x), x.new_ones([]))\n",
+        "\n",
+        "def lanczos(x, a):\n",
+        "    cond = torch.logical_and(-a < x, x < a)\n",
+        "    out = torch.where(cond, sinc(x) * sinc(x/a), x.new_zeros([]))\n",
+        "    return out / out.sum()\n",
+        "\n",
+        "def ramp(ratio, width):\n",
+        "    n = math.ceil(width / ratio + 1)\n",
+        "    out = torch.empty([n])\n",
+        "    cur = 0\n",
+        "    for i in range(out.shape[0]):\n",
+        "        out[i] = cur\n",
+        "        cur += ratio\n",
+        "    return torch.cat([-out[1:].flip([0]), out])[1:-1]\n",
+        "\n",
+        "def resample(input, size, align_corners=True):\n",
+        "    n, c, h, w = input.shape\n",
+        "    dh, dw = size\n",
+        "\n",
+        "    input = input.reshape([n * c, 1, h, w])\n",
+        "\n",
+        "    if dh < h:\n",
+        "        kernel_h = lanczos(ramp(dh / h, 2), 2).to(input.device, input.dtype)\n",
+        "        pad_h = (kernel_h.shape[0] - 1) // 2\n",
+        "        input = F.pad(input, (0, 0, pad_h, pad_h), 'reflect')\n",
+        "        input = F.conv2d(input, kernel_h[None, None, :, None])\n",
+        "\n",
+        "    if dw < w:\n",
+        "        kernel_w = lanczos(ramp(dw / w, 2), 2).to(input.device, input.dtype)\n",
+        "        pad_w = (kernel_w.shape[0] - 1) // 2\n",
+        "        input = F.pad(input, (pad_w, pad_w, 0, 0), 'reflect')\n",
+        "        input = F.conv2d(input, kernel_w[None, None, None, :])\n",
+        "\n",
+        "    input = input.reshape([n, c, h, w])\n",
+        "    return F.interpolate(input, size, mode='bicubic', align_corners=align_corners)\n",
+        "\n",
+        "class MakeCutouts(nn.Module):\n",
+        "    def __init__(self, cut_size, cutn, skip_augs=False):\n",
+        "        super().__init__()\n",
+        "        self.cut_size = cut_size\n",
+        "        self.cutn = cutn\n",
+        "        self.skip_augs = skip_augs\n",
+        "        self.augs = T.Compose([\n",
+        "            T.RandomHorizontalFlip(p=0.5),\n",
+        "            T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "            T.RandomAffine(degrees=15, translate=(0.1, 0.1)),\n",
+        "            T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "            T.RandomPerspective(distortion_scale=0.4, p=0.7),\n",
+        "            T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "            T.RandomGrayscale(p=0.15),\n",
+        "            T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "            # T.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1),\n",
+        "        ])\n",
+        "\n",
+        "    def forward(self, input):\n",
+        "        input = T.Pad(input.shape[2]//4, fill=0)(input)\n",
+        "        sideY, sideX = input.shape[2:4]\n",
+        "        max_size = min(sideX, sideY)\n",
+        "\n",
+        "        cutouts = []\n",
+        "        for ch in range(self.cutn):\n",
+        "            if ch > self.cutn - self.cutn//4:\n",
+        "                cutout = input.clone()\n",
+        "            else:\n",
+        "                size = int(max_size * torch.zeros(1,).normal_(mean=.8, std=.3).clip(float(self.cut_size/max_size), 1.))\n",
+        "                offsetx = torch.randint(0, abs(sideX - size + 1), ())\n",
+        "                offsety = torch.randint(0, abs(sideY - size + 1), ())\n",
+        "                cutout = input[:, :, offsety:offsety + size, offsetx:offsetx + size]\n",
+        "\n",
+        "            if not self.skip_augs:\n",
+        "                cutout = self.augs(cutout)\n",
+        "            cutouts.append(resample(cutout, (self.cut_size, self.cut_size)))\n",
+        "            del cutout\n",
+        "\n",
+        "        cutouts = torch.cat(cutouts, dim=0)\n",
+        "        return cutouts\n",
+        "\n",
+        "cutout_debug = False\n",
+        "padargs = {}\n",
+        "\n",
+        "class MakeCutoutsDango(nn.Module):\n",
+        "    def __init__(self, cut_size,\n",
+        "                 Overview=4, \n",
+        "                 InnerCrop = 0, IC_Size_Pow=0.5, IC_Grey_P = 0.2\n",
+        "                 ):\n",
+        "        super().__init__()\n",
+        "        self.cut_size = cut_size\n",
+        "        self.Overview = Overview\n",
+        "        self.InnerCrop = InnerCrop\n",
+        "        self.IC_Size_Pow = IC_Size_Pow\n",
+        "        self.IC_Grey_P = IC_Grey_P\n",
+        "        if args.animation_mode == 'None':\n",
+        "          self.augs = T.Compose([\n",
+        "              T.RandomHorizontalFlip(p=0.5),\n",
+        "              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "              T.RandomAffine(degrees=10, translate=(0.05, 0.05),  interpolation = T.InterpolationMode.BILINEAR),\n",
+        "              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "              T.RandomGrayscale(p=0.1),\n",
+        "              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "              T.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1),\n",
+        "          ])\n",
+        "        elif args.animation_mode == 'Video Input':\n",
+        "          self.augs = T.Compose([\n",
+        "              T.RandomHorizontalFlip(p=0.5),\n",
+        "              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "              T.RandomAffine(degrees=15, translate=(0.1, 0.1)),\n",
+        "              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "              T.RandomPerspective(distortion_scale=0.4, p=0.7),\n",
+        "              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "              T.RandomGrayscale(p=0.15),\n",
+        "              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "              # T.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1),\n",
+        "          ])\n",
+        "        elif  args.animation_mode == '2D':\n",
+        "          self.augs = T.Compose([\n",
+        "              T.RandomHorizontalFlip(p=0.4),\n",
+        "              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "              T.RandomAffine(degrees=10, translate=(0.05, 0.05),  interpolation = T.InterpolationMode.BILINEAR),\n",
+        "              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "              T.RandomGrayscale(p=0.1),\n",
+        "              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "              T.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.3),\n",
+        "          ])\n",
+        "          \n",
+        "\n",
+        "    def forward(self, input):\n",
+        "        cutouts = []\n",
+        "        gray = T.Grayscale(3)\n",
+        "        sideY, sideX = input.shape[2:4]\n",
+        "        max_size = min(sideX, sideY)\n",
+        "        min_size = min(sideX, sideY, self.cut_size)\n",
+        "        l_size = max(sideX, sideY)\n",
+        "        output_shape = [1,3,self.cut_size,self.cut_size] \n",
+        "        output_shape_2 = [1,3,self.cut_size+2,self.cut_size+2]\n",
+        "        pad_input = F.pad(input,((sideY-max_size)//2,(sideY-max_size)//2,(sideX-max_size)//2,(sideX-max_size)//2), **padargs)\n",
+        "        cutout = resize(pad_input, out_shape=output_shape)\n",
+        "\n",
+        "        if self.Overview>0:\n",
+        "            if self.Overview<=4:\n",
+        "                if self.Overview>=1:\n",
+        "                    cutouts.append(cutout)\n",
+        "                if self.Overview>=2:\n",
+        "                    cutouts.append(gray(cutout))\n",
+        "                if self.Overview>=3:\n",
+        "                    cutouts.append(TF.hflip(cutout))\n",
+        "                if self.Overview==4:\n",
+        "                    cutouts.append(gray(TF.hflip(cutout)))\n",
+        "            else:\n",
+        "                cutout = resize(pad_input, out_shape=output_shape)\n",
+        "                for _ in range(self.Overview):\n",
+        "                    cutouts.append(cutout)\n",
+        "\n",
+        "            if cutout_debug:\n",
+        "                TF.to_pil_image(cutouts[0].clamp(0, 1).squeeze(0)).save(\"/content/cutout_overview0.jpg\",quality=99)\n",
+        "                              \n",
+        "        if self.InnerCrop >0:\n",
+        "            for i in range(self.InnerCrop):\n",
+        "                size = int(torch.rand([])**self.IC_Size_Pow * (max_size - min_size) + min_size)\n",
+        "                offsetx = torch.randint(0, sideX - size + 1, ())\n",
+        "                offsety = torch.randint(0, sideY - size + 1, ())\n",
+        "                cutout = input[:, :, offsety:offsety + size, offsetx:offsetx + size]\n",
+        "                if i <= int(self.IC_Grey_P * self.InnerCrop):\n",
+        "                    cutout = gray(cutout)\n",
+        "                cutout = resize(cutout, out_shape=output_shape)\n",
+        "                cutouts.append(cutout)\n",
+        "            if cutout_debug:\n",
+        "                TF.to_pil_image(cutouts[-1].clamp(0, 1).squeeze(0)).save(\"/content/cutout_InnerCrop.jpg\",quality=99)\n",
+        "        cutouts = torch.cat(cutouts)\n",
+        "        if skip_augs is not True: cutouts=self.augs(cutouts)\n",
+        "        return cutouts\n",
+        "\n",
+        "def spherical_dist_loss(x, y):\n",
+        "    x = F.normalize(x, dim=-1)\n",
+        "    y = F.normalize(y, dim=-1)\n",
+        "    return (x - y).norm(dim=-1).div(2).arcsin().pow(2).mul(2)     \n",
+        "\n",
+        "def tv_loss(input):\n",
+        "    \"\"\"L2 total variation loss, as in Mahendran et al.\"\"\"\n",
+        "    input = F.pad(input, (0, 1, 0, 1), 'replicate')\n",
+        "    x_diff = input[..., :-1, 1:] - input[..., :-1, :-1]\n",
+        "    y_diff = input[..., 1:, :-1] - input[..., :-1, :-1]\n",
+        "    return (x_diff**2 + y_diff**2).mean([1, 2, 3])\n",
+        "\n",
+        "\n",
+        "def range_loss(input):\n",
+        "    return (input - input.clamp(-1, 1)).pow(2).mean([1, 2, 3])\n",
+        "\n",
+        "stop_on_next_loop = False  # Make sure GPU memory doesn't get corrupted from cancelling the run mid-way through, allow a full frame to complete\n",
+        "\n",
+        "def do_run():\n",
+        "  seed = args.seed\n",
+        "  print(range(args.start_frame, args.max_frames))\n",
+        "  for frame_num in range(args.start_frame, args.max_frames):\n",
+        "      if stop_on_next_loop:\n",
+        "        break\n",
+        "      \n",
+        "      display.clear_output(wait=True)\n",
+        "\n",
+        "      # Print Frame progress if animation mode is on\n",
+        "      if args.animation_mode != \"None\":\n",
+        "        batchBar = tqdm(range(args.max_frames), desc =\"Frames\")\n",
+        "        batchBar.n = frame_num\n",
+        "        batchBar.refresh()\n",
+        "\n",
+        "      \n",
+        "      # Inits if not video frames\n",
+        "      if args.animation_mode != \"Video Input\":\n",
+        "        if args.init_image == '':\n",
+        "          init_image = None\n",
+        "        else:\n",
+        "          init_image = args.init_image\n",
+        "        init_scale = args.init_scale\n",
+        "        skip_steps = args.skip_steps\n",
+        "\n",
+        "      if args.animation_mode == \"2D\":\n",
+        "        if args.key_frames:\n",
+        "          angle = args.angle_series[frame_num]\n",
+        "          zoom = args.zoom_series[frame_num]\n",
+        "          translation_x = args.translation_x_series[frame_num]\n",
+        "          translation_y = args.translation_y_series[frame_num]\n",
+        "          print(\n",
+        "              f'angle: {angle}',\n",
+        "              f'zoom: {zoom}',\n",
+        "              f'translation_x: {translation_x}',\n",
+        "              f'translation_y: {translation_y}',\n",
+        "          )\n",
+        "        \n",
+        "        if frame_num > 0:\n",
+        "          seed = seed + 1          \n",
+        "          if resume_run and frame_num == start_frame:\n",
+        "            img_0 = cv2.imread(batchFolder+f\"/{batch_name}({batchNum})_{start_frame-1:04}.png\")\n",
+        "          else:\n",
+        "            img_0 = cv2.imread('prevFrame.png')\n",
+        "          center = (1*img_0.shape[1]//2, 1*img_0.shape[0]//2)\n",
+        "          trans_mat = np.float32(\n",
+        "              [[1, 0, translation_x],\n",
+        "              [0, 1, translation_y]]\n",
+        "          )\n",
+        "          rot_mat = cv2.getRotationMatrix2D( center, angle, zoom )\n",
+        "          trans_mat = np.vstack([trans_mat, [0,0,1]])\n",
+        "          rot_mat = np.vstack([rot_mat, [0,0,1]])\n",
+        "          transformation_matrix = np.matmul(rot_mat, trans_mat)\n",
+        "          img_0 = cv2.warpPerspective(\n",
+        "              img_0,\n",
+        "              transformation_matrix,\n",
+        "              (img_0.shape[1], img_0.shape[0]),\n",
+        "              borderMode=cv2.BORDER_WRAP\n",
+        "          )\n",
+        "          cv2.imwrite('prevFrameScaled.png', img_0)\n",
+        "          init_image = 'prevFrameScaled.png'\n",
+        "          init_scale = args.frames_scale\n",
+        "          skip_steps = args.calc_frames_skip_steps\n",
+        "\n",
+        "      if  args.animation_mode == \"Video Input\":\n",
+        "        seed = seed + 1  \n",
+        "        init_image = f'{videoFramesFolder}/{frame_num+1:04}.jpg'\n",
+        "        init_scale = args.frames_scale\n",
+        "        skip_steps = args.calc_frames_skip_steps\n",
+        "\n",
+        "      loss_values = []\n",
+        "  \n",
+        "      if seed is not None:\n",
+        "          np.random.seed(seed)\n",
+        "          random.seed(seed)\n",
+        "          torch.manual_seed(seed)\n",
+        "          torch.cuda.manual_seed_all(seed)\n",
+        "          torch.backends.cudnn.deterministic = True\n",
+        "  \n",
+        "      target_embeds, weights = [], []\n",
+        "      \n",
+        "      if args.prompts_series is not None and frame_num >= len(args.prompts_series):\n",
+        "        frame_prompt = args.prompts_series[-1]\n",
+        "      elif args.prompts_series is not None:\n",
+        "        frame_prompt = args.prompts_series[frame_num]\n",
+        "      else:\n",
+        "        frame_prompt = []\n",
+        "      \n",
+        "      print(args.image_prompts_series)\n",
+        "      if args.image_prompts_series is not None and frame_num >= len(args.image_prompts_series):\n",
+        "        image_prompt = args.image_prompts_series[-1]\n",
+        "      elif args.image_prompts_series is not None:\n",
+        "        image_prompt = args.image_prompts_series[frame_num]\n",
+        "      else:\n",
+        "        image_prompt = []\n",
+        "\n",
+        "      print(f'Frame Prompt: {frame_prompt}')\n",
+        "\n",
+        "      model_stats = []\n",
+        "      for clip_model in clip_models:\n",
+        "            cutn = 16\n",
+        "            model_stat = {\"clip_model\":None,\"target_embeds\":[],\"make_cutouts\":None,\"weights\":[]}\n",
+        "            model_stat[\"clip_model\"] = clip_model\n",
+        "            \n",
+        "            \n",
+        "            for prompt in frame_prompt:\n",
+        "                txt, weight = parse_prompt(prompt)\n",
+        "                txt = clip_model.encode_text(clip.tokenize(prompt).to(device)).float()\n",
+        "                \n",
+        "                if args.fuzzy_prompt:\n",
+        "                    for i in range(25):\n",
+        "                        model_stat[\"target_embeds\"].append((txt + torch.randn(txt.shape).cuda() * args.rand_mag).clamp(0,1))\n",
+        "                        model_stat[\"weights\"].append(weight)\n",
+        "                else:\n",
+        "                    model_stat[\"target_embeds\"].append(txt)\n",
+        "                    model_stat[\"weights\"].append(weight)\n",
+        "        \n",
+        "            if image_prompt:\n",
+        "              model_stat[\"make_cutouts\"] = MakeCutouts(clip_model.visual.input_resolution, cutn, skip_augs=skip_augs) \n",
+        "              for prompt in image_prompt:\n",
+        "                  path, weight = parse_prompt(prompt)\n",
+        "                  img = Image.open(fetch(path)).convert('RGB')\n",
+        "                  img = TF.resize(img, min(side_x, side_y, *img.size), T.InterpolationMode.LANCZOS)\n",
+        "                  batch = model_stat[\"make_cutouts\"](TF.to_tensor(img).to(device).unsqueeze(0).mul(2).sub(1))\n",
+        "                  embed = clip_model.encode_image(normalize(batch)).float()\n",
+        "                  if fuzzy_prompt:\n",
+        "                      for i in range(25):\n",
+        "                          model_stat[\"target_embeds\"].append((embed + torch.randn(embed.shape).cuda() * rand_mag).clamp(0,1))\n",
+        "                          weights.extend([weight / cutn] * cutn)\n",
+        "                  else:\n",
+        "                      model_stat[\"target_embeds\"].append(embed)\n",
+        "                      model_stat[\"weights\"].extend([weight / cutn] * cutn)\n",
+        "        \n",
+        "            model_stat[\"target_embeds\"] = torch.cat(model_stat[\"target_embeds\"])\n",
+        "            model_stat[\"weights\"] = torch.tensor(model_stat[\"weights\"], device=device)\n",
+        "            if model_stat[\"weights\"].sum().abs() < 1e-3:\n",
+        "                raise RuntimeError('The weights must not sum to 0.')\n",
+        "            model_stat[\"weights\"] /= model_stat[\"weights\"].sum().abs()\n",
+        "            model_stats.append(model_stat)\n",
+        "  \n",
+        "      init = None\n",
+        "      if init_image is not None:\n",
+        "          init = Image.open(fetch(init_image)).convert('RGB')\n",
+        "          init = init.resize((args.side_x, args.side_y), Image.LANCZOS)\n",
+        "          init = TF.to_tensor(init).to(device).unsqueeze(0).mul(2).sub(1)\n",
+        "      \n",
+        "      if args.perlin_init:\n",
+        "          if args.perlin_mode == 'color':\n",
+        "              init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, False)\n",
+        "              init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, False)\n",
+        "          elif args.perlin_mode == 'gray':\n",
+        "            init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, True)\n",
+        "            init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, True)\n",
+        "          else:\n",
+        "            init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, False)\n",
+        "            init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, True)\n",
+        "          # init = TF.to_tensor(init).add(TF.to_tensor(init2)).div(2).to(device)\n",
+        "          init = TF.to_tensor(init).add(TF.to_tensor(init2)).div(2).to(device).unsqueeze(0).mul(2).sub(1)\n",
+        "          del init2\n",
+        "  \n",
+        "      cur_t = None\n",
+        "  \n",
+        "      def cond_fn(x, t, y=None):\n",
+        "          with torch.enable_grad():\n",
+        "              x_is_NaN = False\n",
+        "              x = x.detach().requires_grad_()\n",
+        "              n = x.shape[0]\n",
+        "              if use_secondary_model is True:\n",
+        "                alpha = torch.tensor(diffusion.sqrt_alphas_cumprod[cur_t], device=device, dtype=torch.float32)\n",
+        "                sigma = torch.tensor(diffusion.sqrt_one_minus_alphas_cumprod[cur_t], device=device, dtype=torch.float32)\n",
+        "                cosine_t = alpha_sigma_to_t(alpha, sigma)\n",
+        "                out = secondary_model(x, cosine_t[None].repeat([n])).pred\n",
+        "                fac = diffusion.sqrt_one_minus_alphas_cumprod[cur_t]\n",
+        "                x_in = out * fac + x * (1 - fac)\n",
+        "                x_in_grad = torch.zeros_like(x_in)\n",
+        "              else:\n",
+        "                my_t = torch.ones([n], device=device, dtype=torch.long) * cur_t\n",
+        "                out = diffusion.p_mean_variance(model, x, my_t, clip_denoised=False, model_kwargs={'y': y})\n",
+        "                fac = diffusion.sqrt_one_minus_alphas_cumprod[cur_t]\n",
+        "                x_in = out['pred_xstart'] * fac + x * (1 - fac)\n",
+        "                x_in_grad = torch.zeros_like(x_in)\n",
+        "              for model_stat in model_stats:\n",
+        "                for i in range(args.cutn_batches):\n",
+        "                    t_int = int(t.item())+1 #errors on last step without +1, need to find source\n",
+        "                    #when using SLIP Base model the dimensions need to be hard coded to avoid AttributeError: 'VisionTransformer' object has no attribute 'input_resolution'\n",
+        "                    try:\n",
+        "                        input_resolution=model_stat[\"clip_model\"].visual.input_resolution\n",
+        "                    except:\n",
+        "                        input_resolution=224\n",
+        "\n",
+        "                    cuts = MakeCutoutsDango(input_resolution,\n",
+        "                            Overview= args.cut_overview[1000-t_int], \n",
+        "                            InnerCrop = args.cut_innercut[1000-t_int], IC_Size_Pow=args.cut_ic_pow, IC_Grey_P = args.cut_icgray_p[1000-t_int]\n",
+        "                            )\n",
+        "                    clip_in = normalize(cuts(x_in.add(1).div(2)))\n",
+        "                    image_embeds = model_stat[\"clip_model\"].encode_image(clip_in).float()\n",
+        "                    dists = spherical_dist_loss(image_embeds.unsqueeze(1), model_stat[\"target_embeds\"].unsqueeze(0))\n",
+        "                    dists = dists.view([args.cut_overview[1000-t_int]+args.cut_innercut[1000-t_int], n, -1])\n",
+        "                    losses = dists.mul(model_stat[\"weights\"]).sum(2).mean(0)\n",
+        "                    loss_values.append(losses.sum().item()) # log loss, probably shouldn't do per cutn_batch\n",
+        "                    x_in_grad += torch.autograd.grad(losses.sum() * clip_guidance_scale, x_in)[0] / cutn_batches\n",
+        "              tv_losses = tv_loss(x_in)\n",
+        "              if use_secondary_model is True:\n",
+        "                range_losses = range_loss(out)\n",
+        "              else:\n",
+        "                range_losses = range_loss(out['pred_xstart'])\n",
+        "              sat_losses = torch.abs(x_in - x_in.clamp(min=-1,max=1)).mean()\n",
+        "              loss = tv_losses.sum() * tv_scale + range_losses.sum() * range_scale + sat_losses.sum() * sat_scale\n",
+        "              if init is not None and args.init_scale:\n",
+        "                  init_losses = lpips_model(x_in, init)\n",
+        "                  loss = loss + init_losses.sum() * args.init_scale\n",
+        "              x_in_grad += torch.autograd.grad(loss, x_in)[0]\n",
+        "              if torch.isnan(x_in_grad).any()==False:\n",
+        "                  grad = -torch.autograd.grad(x_in, x, x_in_grad)[0]\n",
+        "              else:\n",
+        "                # print(\"NaN'd\")\n",
+        "                x_is_NaN = True\n",
+        "                grad = torch.zeros_like(x)\n",
+        "          if args.clamp_grad and x_is_NaN == False:\n",
+        "              magnitude = grad.square().mean().sqrt()\n",
+        "              return grad * magnitude.clamp(max=args.clamp_max) / magnitude  #min=-0.02, min=-clamp_max, \n",
+        "          return grad\n",
+        "  \n",
+        "      if model_config['timestep_respacing'].startswith('ddim'):\n",
+        "          sample_fn = diffusion.ddim_sample_loop_progressive\n",
+        "      else:\n",
+        "          sample_fn = diffusion.p_sample_loop_progressive\n",
+        "    \n",
+        "\n",
+        "      image_display = Output()\n",
+        "      for i in range(args.n_batches):\n",
+        "          if args.animation_mode == 'None':\n",
+        "            display.clear_output(wait=True)\n",
+        "            batchBar = tqdm(range(args.n_batches), desc =\"Batches\")\n",
+        "            batchBar.n = i\n",
+        "            batchBar.refresh()\n",
+        "          print('')\n",
+        "          display.display(image_display)\n",
+        "          gc.collect()\n",
+        "          torch.cuda.empty_cache()\n",
+        "          cur_t = diffusion.num_timesteps - skip_steps - 1\n",
+        "          total_steps = cur_t\n",
+        "\n",
+        "          if perlin_init:\n",
+        "              init = regen_perlin()\n",
+        "\n",
+        "          if model_config['timestep_respacing'].startswith('ddim'):\n",
+        "              samples = sample_fn(\n",
+        "                  model,\n",
+        "                  (batch_size, 3, args.side_y, args.side_x),\n",
+        "                  clip_denoised=clip_denoised,\n",
+        "                  model_kwargs={},\n",
+        "                  cond_fn=cond_fn,\n",
+        "                  progress=True,\n",
+        "                  skip_timesteps=skip_steps,\n",
+        "                  init_image=init,\n",
+        "                  randomize_class=randomize_class,\n",
+        "                  eta=eta,\n",
+        "              )\n",
+        "          else:\n",
+        "              samples = sample_fn(\n",
+        "                  model,\n",
+        "                  (batch_size, 3, args.side_y, args.side_x),\n",
+        "                  clip_denoised=clip_denoised,\n",
+        "                  model_kwargs={},\n",
+        "                  cond_fn=cond_fn,\n",
+        "                  progress=True,\n",
+        "                  skip_timesteps=skip_steps,\n",
+        "                  init_image=init,\n",
+        "                  randomize_class=randomize_class,\n",
+        "              )\n",
+        "          \n",
+        "          \n",
+        "          # with run_display:\n",
+        "          # display.clear_output(wait=True)\n",
+        "          imgToSharpen = None\n",
+        "          for j, sample in enumerate(samples):    \n",
+        "            cur_t -= 1\n",
+        "            intermediateStep = False\n",
+        "            if args.steps_per_checkpoint is not None:\n",
+        "                if j % steps_per_checkpoint == 0 and j > 0:\n",
+        "                  intermediateStep = True\n",
+        "            elif j in args.intermediate_saves:\n",
+        "              intermediateStep = True\n",
+        "            with image_display:\n",
+        "              if j % args.display_rate == 0 or cur_t == -1 or intermediateStep == True:\n",
+        "                  for k, image in enumerate(sample['pred_xstart']):\n",
+        "                      # tqdm.write(f'Batch {i}, step {j}, output {k}:')\n",
+        "                      current_time = datetime.now().strftime('%y%m%d-%H%M%S_%f')\n",
+        "                      percent = math.ceil(j/total_steps*100)\n",
+        "                      if args.n_batches > 0:\n",
+        "                        #if intermediates are saved to the subfolder, don't append a step or percentage to the name\n",
+        "                        if cur_t == -1 and args.intermediates_in_subfolder is True:\n",
+        "                          save_num = f'{frame_num:04}' if animation_mode != \"None\" else i\n",
+        "                          filename = f'{args.batch_name}({args.batchNum})_{save_num}.png'\n",
+        "                        else:\n",
+        "                          #If we're working with percentages, append it\n",
+        "                          if args.steps_per_checkpoint is not None:\n",
+        "                            filename = f'{args.batch_name}({args.batchNum})_{i:04}-{percent:02}%.png'\n",
+        "                          # Or else, iIf we're working with specific steps, append those\n",
+        "                          else:\n",
+        "                            filename = f'{args.batch_name}({args.batchNum})_{i:04}-{j:03}.png'\n",
+        "                      image = TF.to_pil_image(image.add(1).div(2).clamp(0, 1))\n",
+        "                      if j % args.display_rate == 0 or cur_t == -1:\n",
+        "                        image.save('progress.png')\n",
+        "                        display.clear_output(wait=True)\n",
+        "                        display.display(display.Image('progress.png'))\n",
+        "                      if args.steps_per_checkpoint is not None:\n",
+        "                        if j % args.steps_per_checkpoint == 0 and j > 0:\n",
+        "                          if args.intermediates_in_subfolder is True:\n",
+        "                            image.save(f'{partialFolder}/{filename}')\n",
+        "                          else:\n",
+        "                            image.save(f'{batchFolder}/{filename}')\n",
+        "                      else:\n",
+        "                        if j in args.intermediate_saves:\n",
+        "                          if args.intermediates_in_subfolder is True:\n",
+        "                            image.save(f'{partialFolder}/{filename}')\n",
+        "                          else:\n",
+        "                            image.save(f'{batchFolder}/{filename}')\n",
+        "                      if cur_t == -1:\n",
+        "                        if frame_num == 0:\n",
+        "                          save_settings()\n",
+        "                        if args.animation_mode != \"None\":\n",
+        "                          image.save('prevFrame.png')\n",
+        "                        if args.sharpen_preset != \"Off\" and animation_mode == \"None\":\n",
+        "                          imgToSharpen = image\n",
+        "                          if args.keep_unsharp is True:\n",
+        "                            image.save(f'{unsharpenFolder}/{filename}')\n",
+        "                        else:\n",
+        "                          image.save(f'{batchFolder}/{filename}')\n",
+        "                        # if frame_num != args.max_frames-1:\n",
+        "                        #   display.clear_output()\n",
+        "\n",
+        "          with image_display:   \n",
+        "            if args.sharpen_preset != \"Off\" and animation_mode == \"None\":\n",
+        "              print('Starting Diffusion Sharpening...')\n",
+        "              do_superres(imgToSharpen, f'{batchFolder}/{filename}')\n",
+        "              display.clear_output()\n",
+        "          \n",
+        "          plt.plot(np.array(loss_values), 'r')\n",
+        "\n",
+        "def save_settings():\n",
+        "  setting_list = {\n",
+        "    'text_prompts': text_prompts,\n",
+        "    'image_prompts': image_prompts,\n",
+        "    'clip_guidance_scale': clip_guidance_scale,\n",
+        "    'tv_scale': tv_scale,\n",
+        "    'range_scale': range_scale,\n",
+        "    'sat_scale': sat_scale,\n",
+        "    # 'cutn': cutn,\n",
+        "    'cutn_batches': cutn_batches,\n",
+        "    'max_frames': max_frames,\n",
+        "    'interp_spline': interp_spline,\n",
+        "    # 'rotation_per_frame': rotation_per_frame,\n",
+        "    'init_image': init_image,\n",
+        "    'init_scale': init_scale,\n",
+        "    'skip_steps': skip_steps,\n",
+        "    # 'zoom_per_frame': zoom_per_frame,\n",
+        "    'frames_scale': frames_scale,\n",
+        "    'frames_skip_steps': frames_skip_steps,\n",
+        "    'perlin_init': perlin_init,\n",
+        "    'perlin_mode': perlin_mode,\n",
+        "    'skip_augs': skip_augs,\n",
+        "    'randomize_class': randomize_class,\n",
+        "    'clip_denoised': clip_denoised,\n",
+        "    'clamp_grad': clamp_grad,\n",
+        "    'clamp_max': clamp_max,\n",
+        "    'seed': seed,\n",
+        "    'fuzzy_prompt': fuzzy_prompt,\n",
+        "    'rand_mag': rand_mag,\n",
+        "    'eta': eta,\n",
+        "    'width': width_height[0],\n",
+        "    'height': width_height[1],\n",
+        "    'diffusion_model': diffusion_model,\n",
+        "    'use_secondary_model': use_secondary_model,\n",
+        "    'steps': steps,\n",
+        "    'diffusion_steps': diffusion_steps,\n",
+        "    'ViTB32': ViTB32,\n",
+        "    'ViTB16': ViTB16,\n",
+        "    'ViTL14': ViTL14,\n",
+        "    'RN101': RN101,\n",
+        "    'RN50': RN50,\n",
+        "    'RN50x4': RN50x4,\n",
+        "    'RN50x16': RN50x16,\n",
+        "    'RN50x64': RN50x64,\n",
+        "    'cut_overview': str(cut_overview),\n",
+        "    'cut_innercut': str(cut_innercut),\n",
+        "    'cut_ic_pow': cut_ic_pow,\n",
+        "    'cut_icgray_p': str(cut_icgray_p),\n",
+        "    'key_frames': key_frames,\n",
+        "    'max_frames': max_frames,\n",
+        "    'angle': angle,\n",
+        "    'zoom': zoom,\n",
+        "    'translation_x': translation_x,\n",
+        "    'translation_y': translation_y,\n",
+        "    'video_init_path':video_init_path,\n",
+        "    'extract_nth_frame':extract_nth_frame,\n",
+        "  }\n",
+        "  # print('Settings:', setting_list)\n",
+        "  with open(f\"{batchFolder}/{batch_name}({batchNum})_settings.txt\", \"w+\") as f:   #save settings\n",
+        "    json.dump(setting_list, f, ensure_ascii=False, indent=4)\n",
+        "  "
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "cellView": "form",
+        "id": "TI4oAu0N4ksZ"
+      },
+      "source": [
+        "#@title 1.5 Define the secondary diffusion model\n",
+        "\n",
+        "def append_dims(x, n):\n",
+        "    return x[(Ellipsis, *(None,) * (n - x.ndim))]\n",
+        "\n",
+        "\n",
+        "def expand_to_planes(x, shape):\n",
+        "    return append_dims(x, len(shape)).repeat([1, 1, *shape[2:]])\n",
+        "\n",
+        "\n",
+        "def alpha_sigma_to_t(alpha, sigma):\n",
+        "    return torch.atan2(sigma, alpha) * 2 / math.pi\n",
+        "\n",
+        "\n",
+        "def t_to_alpha_sigma(t):\n",
+        "    return torch.cos(t * math.pi / 2), torch.sin(t * math.pi / 2)\n",
+        "\n",
+        "\n",
+        "@dataclass\n",
+        "class DiffusionOutput:\n",
+        "    v: torch.Tensor\n",
+        "    pred: torch.Tensor\n",
+        "    eps: torch.Tensor\n",
+        "\n",
+        "\n",
+        "class ConvBlock(nn.Sequential):\n",
+        "    def __init__(self, c_in, c_out):\n",
+        "        super().__init__(\n",
+        "            nn.Conv2d(c_in, c_out, 3, padding=1),\n",
+        "            nn.ReLU(inplace=True),\n",
+        "        )\n",
+        "\n",
+        "\n",
+        "class SkipBlock(nn.Module):\n",
+        "    def __init__(self, main, skip=None):\n",
+        "        super().__init__()\n",
+        "        self.main = nn.Sequential(*main)\n",
+        "        self.skip = skip if skip else nn.Identity()\n",
+        "\n",
+        "    def forward(self, input):\n",
+        "        return torch.cat([self.main(input), self.skip(input)], dim=1)\n",
+        "\n",
+        "\n",
+        "class FourierFeatures(nn.Module):\n",
+        "    def __init__(self, in_features, out_features, std=1.):\n",
+        "        super().__init__()\n",
+        "        assert out_features % 2 == 0\n",
+        "        self.weight = nn.Parameter(torch.randn([out_features // 2, in_features]) * std)\n",
+        "\n",
+        "    def forward(self, input):\n",
+        "        f = 2 * math.pi * input @ self.weight.T\n",
+        "        return torch.cat([f.cos(), f.sin()], dim=-1)\n",
+        "\n",
+        "\n",
+        "class SecondaryDiffusionImageNet(nn.Module):\n",
+        "    def __init__(self):\n",
+        "        super().__init__()\n",
+        "        c = 64  # The base channel count\n",
+        "\n",
+        "        self.timestep_embed = FourierFeatures(1, 16)\n",
+        "\n",
+        "        self.net = nn.Sequential(\n",
+        "            ConvBlock(3 + 16, c),\n",
+        "            ConvBlock(c, c),\n",
+        "            SkipBlock([\n",
+        "                nn.AvgPool2d(2),\n",
+        "                ConvBlock(c, c * 2),\n",
+        "                ConvBlock(c * 2, c * 2),\n",
+        "                SkipBlock([\n",
+        "                    nn.AvgPool2d(2),\n",
+        "                    ConvBlock(c * 2, c * 4),\n",
+        "                    ConvBlock(c * 4, c * 4),\n",
+        "                    SkipBlock([\n",
+        "                        nn.AvgPool2d(2),\n",
+        "                        ConvBlock(c * 4, c * 8),\n",
+        "                        ConvBlock(c * 8, c * 4),\n",
+        "                        nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),\n",
+        "                    ]),\n",
+        "                    ConvBlock(c * 8, c * 4),\n",
+        "                    ConvBlock(c * 4, c * 2),\n",
+        "                    nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),\n",
+        "                ]),\n",
+        "                ConvBlock(c * 4, c * 2),\n",
+        "                ConvBlock(c * 2, c),\n",
+        "                nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),\n",
+        "            ]),\n",
+        "            ConvBlock(c * 2, c),\n",
+        "            nn.Conv2d(c, 3, 3, padding=1),\n",
+        "        )\n",
+        "\n",
+        "    def forward(self, input, t):\n",
+        "        timestep_embed = expand_to_planes(self.timestep_embed(t[:, None]), input.shape)\n",
+        "        v = self.net(torch.cat([input, timestep_embed], dim=1))\n",
+        "        alphas, sigmas = map(partial(append_dims, n=v.ndim), t_to_alpha_sigma(t))\n",
+        "        pred = input * alphas - v * sigmas\n",
+        "        eps = input * sigmas + v * alphas\n",
+        "        return DiffusionOutput(v, pred, eps)\n",
+        "\n",
+        "\n",
+        "class SecondaryDiffusionImageNet2(nn.Module):\n",
+        "    def __init__(self):\n",
+        "        super().__init__()\n",
+        "        c = 64  # The base channel count\n",
+        "        cs = [c, c * 2, c * 2, c * 4, c * 4, c * 8]\n",
+        "\n",
+        "        self.timestep_embed = FourierFeatures(1, 16)\n",
+        "        self.down = nn.AvgPool2d(2)\n",
+        "        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)\n",
+        "\n",
+        "        self.net = nn.Sequential(\n",
+        "            ConvBlock(3 + 16, cs[0]),\n",
+        "            ConvBlock(cs[0], cs[0]),\n",
+        "            SkipBlock([\n",
+        "                self.down,\n",
+        "                ConvBlock(cs[0], cs[1]),\n",
+        "                ConvBlock(cs[1], cs[1]),\n",
+        "                SkipBlock([\n",
+        "                    self.down,\n",
+        "                    ConvBlock(cs[1], cs[2]),\n",
+        "                    ConvBlock(cs[2], cs[2]),\n",
+        "                    SkipBlock([\n",
+        "                        self.down,\n",
+        "                        ConvBlock(cs[2], cs[3]),\n",
+        "                        ConvBlock(cs[3], cs[3]),\n",
+        "                        SkipBlock([\n",
+        "                            self.down,\n",
+        "                            ConvBlock(cs[3], cs[4]),\n",
+        "                            ConvBlock(cs[4], cs[4]),\n",
+        "                            SkipBlock([\n",
+        "                                self.down,\n",
+        "                                ConvBlock(cs[4], cs[5]),\n",
+        "                                ConvBlock(cs[5], cs[5]),\n",
+        "                                ConvBlock(cs[5], cs[5]),\n",
+        "                                ConvBlock(cs[5], cs[4]),\n",
+        "                                self.up,\n",
+        "                            ]),\n",
+        "                            ConvBlock(cs[4] * 2, cs[4]),\n",
+        "                            ConvBlock(cs[4], cs[3]),\n",
+        "                            self.up,\n",
+        "                        ]),\n",
+        "                        ConvBlock(cs[3] * 2, cs[3]),\n",
+        "                        ConvBlock(cs[3], cs[2]),\n",
+        "                        self.up,\n",
+        "                    ]),\n",
+        "                    ConvBlock(cs[2] * 2, cs[2]),\n",
+        "                    ConvBlock(cs[2], cs[1]),\n",
+        "                    self.up,\n",
+        "                ]),\n",
+        "                ConvBlock(cs[1] * 2, cs[1]),\n",
+        "                ConvBlock(cs[1], cs[0]),\n",
+        "                self.up,\n",
+        "            ]),\n",
+        "            ConvBlock(cs[0] * 2, cs[0]),\n",
+        "            nn.Conv2d(cs[0], 3, 3, padding=1),\n",
+        "        )\n",
+        "\n",
+        "    def forward(self, input, t):\n",
+        "        timestep_embed = expand_to_planes(self.timestep_embed(t[:, None]), input.shape)\n",
+        "        v = self.net(torch.cat([input, timestep_embed], dim=1))\n",
+        "        alphas, sigmas = map(partial(append_dims, n=v.ndim), t_to_alpha_sigma(t))\n",
+        "        pred = input * alphas - v * sigmas\n",
+        "        eps = input * sigmas + v * alphas\n",
+        "        return DiffusionOutput(v, pred, eps)\n"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "#@title 1.6 SuperRes Define\n",
+        "class DDIMSampler(object):\n",
+        "    def __init__(self, model, schedule=\"linear\", **kwargs):\n",
+        "        super().__init__()\n",
+        "        self.model = model\n",
+        "        self.ddpm_num_timesteps = model.num_timesteps\n",
+        "        self.schedule = schedule\n",
+        "\n",
+        "    def register_buffer(self, name, attr):\n",
+        "        if type(attr) == torch.Tensor:\n",
+        "            if attr.device != torch.device(\"cuda\"):\n",
+        "                attr = attr.to(torch.device(\"cuda\"))\n",
+        "        setattr(self, name, attr)\n",
+        "\n",
+        "    def make_schedule(self, ddim_num_steps, ddim_discretize=\"uniform\", ddim_eta=0., verbose=True):\n",
+        "        self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps,\n",
+        "                                                  num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose)\n",
+        "        alphas_cumprod = self.model.alphas_cumprod\n",
+        "        assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep'\n",
+        "        to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device)\n",
+        "\n",
+        "        self.register_buffer('betas', to_torch(self.model.betas))\n",
+        "        self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))\n",
+        "        self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev))\n",
+        "\n",
+        "        # calculations for diffusion q(x_t | x_{t-1}) and others\n",
+        "        self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu())))\n",
+        "        self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu())))\n",
+        "        self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu())))\n",
+        "        self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu())))\n",
+        "        self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1)))\n",
+        "\n",
+        "        # ddim sampling parameters\n",
+        "        ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(),\n",
+        "                                                                                   ddim_timesteps=self.ddim_timesteps,\n",
+        "                                                                                   eta=ddim_eta,verbose=verbose)\n",
+        "        self.register_buffer('ddim_sigmas', ddim_sigmas)\n",
+        "        self.register_buffer('ddim_alphas', ddim_alphas)\n",
+        "        self.register_buffer('ddim_alphas_prev', ddim_alphas_prev)\n",
+        "        self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. - ddim_alphas))\n",
+        "        sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt(\n",
+        "            (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * (\n",
+        "                        1 - self.alphas_cumprod / self.alphas_cumprod_prev))\n",
+        "        self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps)\n",
+        "\n",
+        "    @torch.no_grad()\n",
+        "    def sample(self,\n",
+        "               S,\n",
+        "               batch_size,\n",
+        "               shape,\n",
+        "               conditioning=None,\n",
+        "               callback=None,\n",
+        "               normals_sequence=None,\n",
+        "               img_callback=None,\n",
+        "               quantize_x0=False,\n",
+        "               eta=0.,\n",
+        "               mask=None,\n",
+        "               x0=None,\n",
+        "               temperature=1.,\n",
+        "               noise_dropout=0.,\n",
+        "               score_corrector=None,\n",
+        "               corrector_kwargs=None,\n",
+        "               verbose=True,\n",
+        "               x_T=None,\n",
+        "               log_every_t=100,\n",
+        "               **kwargs\n",
+        "               ):\n",
+        "        if conditioning is not None:\n",
+        "            if isinstance(conditioning, dict):\n",
+        "                cbs = conditioning[list(conditioning.keys())[0]].shape[0]\n",
+        "                if cbs != batch_size:\n",
+        "                    print(f\"Warning: Got {cbs} conditionings but batch-size is {batch_size}\")\n",
+        "            else:\n",
+        "                if conditioning.shape[0] != batch_size:\n",
+        "                    print(f\"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}\")\n",
+        "\n",
+        "        self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose)\n",
+        "        # sampling\n",
+        "        C, H, W = shape\n",
+        "        size = (batch_size, C, H, W)\n",
+        "        # print(f'Data shape for DDIM sampling is {size}, eta {eta}')\n",
+        "\n",
+        "        samples, intermediates = self.ddim_sampling(conditioning, size,\n",
+        "                                                    callback=callback,\n",
+        "                                                    img_callback=img_callback,\n",
+        "                                                    quantize_denoised=quantize_x0,\n",
+        "                                                    mask=mask, x0=x0,\n",
+        "                                                    ddim_use_original_steps=False,\n",
+        "                                                    noise_dropout=noise_dropout,\n",
+        "                                                    temperature=temperature,\n",
+        "                                                    score_corrector=score_corrector,\n",
+        "                                                    corrector_kwargs=corrector_kwargs,\n",
+        "                                                    x_T=x_T,\n",
+        "                                                    log_every_t=log_every_t\n",
+        "                                                    )\n",
+        "        return samples, intermediates\n",
+        "\n",
+        "    @torch.no_grad()\n",
+        "    def ddim_sampling(self, cond, shape,\n",
+        "                      x_T=None, ddim_use_original_steps=False,\n",
+        "                      callback=None, timesteps=None, quantize_denoised=False,\n",
+        "                      mask=None, x0=None, img_callback=None, log_every_t=100,\n",
+        "                      temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None):\n",
+        "        device = self.model.betas.device\n",
+        "        b = shape[0]\n",
+        "        if x_T is None:\n",
+        "            img = torch.randn(shape, device=device)\n",
+        "        else:\n",
+        "            img = x_T\n",
+        "\n",
+        "        if timesteps is None:\n",
+        "            timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps\n",
+        "        elif timesteps is not None and not ddim_use_original_steps:\n",
+        "            subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1\n",
+        "            timesteps = self.ddim_timesteps[:subset_end]\n",
+        "\n",
+        "        intermediates = {'x_inter': [img], 'pred_x0': [img]}\n",
+        "        time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps)\n",
+        "        total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0]\n",
+        "        print(f\"Running DDIM Sharpening with {total_steps} timesteps\")\n",
+        "\n",
+        "        iterator = tqdm(time_range, desc='DDIM Sharpening', total=total_steps)\n",
+        "\n",
+        "        for i, step in enumerate(iterator):\n",
+        "            index = total_steps - i - 1\n",
+        "            ts = torch.full((b,), step, device=device, dtype=torch.long)\n",
+        "\n",
+        "            if mask is not None:\n",
+        "                assert x0 is not None\n",
+        "                img_orig = self.model.q_sample(x0, ts)  # TODO: deterministic forward pass?\n",
+        "                img = img_orig * mask + (1. - mask) * img\n",
+        "\n",
+        "            outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,\n",
+        "                                      quantize_denoised=quantize_denoised, temperature=temperature,\n",
+        "                                      noise_dropout=noise_dropout, score_corrector=score_corrector,\n",
+        "                                      corrector_kwargs=corrector_kwargs)\n",
+        "            img, pred_x0 = outs\n",
+        "            if callback: callback(i)\n",
+        "            if img_callback: img_callback(pred_x0, i)\n",
+        "\n",
+        "            if index % log_every_t == 0 or index == total_steps - 1:\n",
+        "                intermediates['x_inter'].append(img)\n",
+        "                intermediates['pred_x0'].append(pred_x0)\n",
+        "\n",
+        "        return img, intermediates\n",
+        "\n",
+        "    @torch.no_grad()\n",
+        "    def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False,\n",
+        "                      temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None):\n",
+        "        b, *_, device = *x.shape, x.device\n",
+        "        e_t = self.model.apply_model(x, t, c)\n",
+        "        if score_corrector is not None:\n",
+        "            assert self.model.parameterization == \"eps\"\n",
+        "            e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs)\n",
+        "\n",
+        "        alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas\n",
+        "        alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev\n",
+        "        sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas\n",
+        "        sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas\n",
+        "        # select parameters corresponding to the currently considered timestep\n",
+        "        a_t = torch.full((b, 1, 1, 1), alphas[index], device=device)\n",
+        "        a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device)\n",
+        "        sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device)\n",
+        "        sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device)\n",
+        "\n",
+        "        # current prediction for x_0\n",
+        "        pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt()\n",
+        "        if quantize_denoised:\n",
+        "            pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0)\n",
+        "        # direction pointing to x_t\n",
+        "        dir_xt = (1. - a_prev - sigma_t**2).sqrt() * e_t\n",
+        "        noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature\n",
+        "        if noise_dropout > 0.:\n",
+        "            noise = torch.nn.functional.dropout(noise, p=noise_dropout)\n",
+        "        x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise\n",
+        "        return x_prev, pred_x0\n",
+        "\n",
+        "\n",
+        "def download_models(mode):\n",
+        "\n",
+        "    if mode == \"superresolution\":\n",
+        "        # this is the small bsr light model\n",
+        "        url_conf = 'https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1'\n",
+        "        url_ckpt = 'https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1'\n",
+        "\n",
+        "        path_conf = f'{model_path}/superres/project.yaml'\n",
+        "        path_ckpt = f'{model_path}/superres/last.ckpt'\n",
+        "\n",
+        "        download_url(url_conf, path_conf)\n",
+        "        download_url(url_ckpt, path_ckpt)\n",
+        "\n",
+        "        path_conf = path_conf + '/?dl=1' # fix it\n",
+        "        path_ckpt = path_ckpt + '/?dl=1' # fix it\n",
+        "        return path_conf, path_ckpt\n",
+        "\n",
+        "    else:\n",
+        "        raise NotImplementedError\n",
+        "\n",
+        "\n",
+        "def load_model_from_config(config, ckpt):\n",
+        "    print(f\"Loading model from {ckpt}\")\n",
+        "    pl_sd = torch.load(ckpt, map_location=\"cpu\")\n",
+        "    global_step = pl_sd[\"global_step\"]\n",
+        "    sd = pl_sd[\"state_dict\"]\n",
+        "    model = instantiate_from_config(config.model)\n",
+        "    m, u = model.load_state_dict(sd, strict=False)\n",
+        "    model.cuda()\n",
+        "    model.eval()\n",
+        "    return {\"model\": model}, global_step\n",
+        "\n",
+        "\n",
+        "def get_model(mode):\n",
+        "    path_conf, path_ckpt = download_models(mode)\n",
+        "    config = OmegaConf.load(path_conf)\n",
+        "    model, step = load_model_from_config(config, path_ckpt)\n",
+        "    return model\n",
+        "\n",
+        "\n",
+        "def get_custom_cond(mode):\n",
+        "    dest = \"data/example_conditioning\"\n",
+        "\n",
+        "    if mode == \"superresolution\":\n",
+        "        uploaded_img = files.upload()\n",
+        "        filename = next(iter(uploaded_img))\n",
+        "        name, filetype = filename.rsplit(\".\", 1) # split on the last dot so extra dots in the name are fine\n",
+        "        os.rename(f\"{filename}\", f\"{dest}/{mode}/custom_{name}.{filetype}\")\n",
+        "\n",
+        "    elif mode == \"text_conditional\":\n",
+        "        w = widgets.Text(value='A cake with cream!', disabled=True)\n",
+        "        display.display(w)\n",
+        "\n",
+        "        with open(f\"{dest}/{mode}/custom_{w.value[:20]}.txt\", 'w') as f:\n",
+        "            f.write(w.value)\n",
+        "\n",
+        "    elif mode == \"class_conditional\":\n",
+        "        w = widgets.IntSlider(min=0, max=1000)\n",
+        "        display.display(w)\n",
+        "        with open(f\"{dest}/{mode}/custom.txt\", 'w') as f:\n",
+        "            f.write(str(w.value))\n",
+        "\n",
+        "    else:\n",
+        "        raise NotImplementedError(f\"cond not implemented for mode {mode}\")\n",
+        "\n",
+        "\n",
+        "def get_cond_options(mode):\n",
+        "    path = \"data/example_conditioning\"\n",
+        "    path = os.path.join(path, mode)\n",
+        "    onlyfiles = [f for f in sorted(os.listdir(path))]\n",
+        "    return path, onlyfiles\n",
+        "\n",
+        "\n",
+        "def select_cond_path(mode):\n",
+        "    path = \"data/example_conditioning\"  # todo\n",
+        "    path = os.path.join(path, mode)\n",
+        "    onlyfiles = [f for f in sorted(os.listdir(path))]\n",
+        "\n",
+        "    selected = widgets.RadioButtons(\n",
+        "        options=onlyfiles,\n",
+        "        description='Select conditioning:',\n",
+        "        disabled=False\n",
+        "    )\n",
+        "    display.display(selected)\n",
+        "    selected_path = os.path.join(path, selected.value)\n",
+        "    return selected_path\n",
+        "\n",
+        "\n",
+        "def get_cond(mode, img):\n",
+        "    example = dict()\n",
+        "    if mode == \"superresolution\":\n",
+        "        up_f = 4\n",
+        "        # visualize_cond_img(selected_path)\n",
+        "\n",
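+        "        # Build the conditioning example: the PIL image becomes a [0,1] tensor with a batch dim,\n",
+        "        # a 4x-upscaled copy defines the output size, both are moved to channels-last, and the\n",
+        "        # low-res input is rescaled to [-1, 1] before being placed on the GPU.\n",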
+        "        c = img\n",
+        "        c = torch.unsqueeze(torchvision.transforms.ToTensor()(c), 0)\n",
+        "        c_up = torchvision.transforms.functional.resize(c, size=[up_f * c.shape[2], up_f * c.shape[3]], antialias=True)\n",
+        "        c_up = rearrange(c_up, '1 c h w -> 1 h w c')\n",
+        "        c = rearrange(c, '1 c h w -> 1 h w c')\n",
+        "        c = 2. * c - 1.\n",
+        "\n",
+        "        c = c.to(torch.device(\"cuda\"))\n",
+        "        example[\"LR_image\"] = c\n",
+        "        example[\"image\"] = c_up\n",
+        "\n",
+        "    return example\n",
+        "\n",
+        "\n",
+        "def visualize_cond_img(path):\n",
+        "    display.display(ipyimg(filename=path))\n",
+        "\n",
+        "\n",
+        "def sr_run(model, img, task, custom_steps, eta, resize_enabled=False, classifier_ckpt=None, global_step=None):\n",
+        "    # global stride\n",
+        "\n",
+        "    example = get_cond(task, img)\n",
+        "\n",
+        "    save_intermediate_vid = False\n",
+        "    n_runs = 1\n",
+        "    masked = False\n",
+        "    guider = None\n",
+        "    ckwargs = None\n",
+        "    mode = 'ddim'\n",
+        "    ddim_use_x0_pred = False\n",
+        "    temperature = 1.\n",
+        "    eta = eta\n",
+        "    make_progrow = True\n",
+        "    custom_shape = None\n",
+        "\n",
+        "    height, width = example[\"image\"].shape[1:3]\n",
+        "    split_input = height >= 128 and width >= 128\n",
+        "\n",
+        "    if split_input:\n",
+        "        ks = 128\n",
+        "        stride = 64\n",
+        "        vqf = 4  #\n",
+        "        model.split_input_params = {\"ks\": (ks, ks), \"stride\": (stride, stride),\n",
+        "                                    \"vqf\": vqf,\n",
+        "                                    \"patch_distributed_vq\": True,\n",
+        "                                    \"tie_braker\": False,\n",
+        "                                    \"clip_max_weight\": 0.5,\n",
+        "                                    \"clip_min_weight\": 0.01,\n",
+        "                                    \"clip_max_tie_weight\": 0.5,\n",
+        "                                    \"clip_min_tie_weight\": 0.01}\n",
+        "    else:\n",
+        "        if hasattr(model, \"split_input_params\"):\n",
+        "            delattr(model, \"split_input_params\")\n",
+        "\n",
+        "    invert_mask = False\n",
+        "\n",
+        "    x_T = None\n",
+        "    for n in range(n_runs):\n",
+        "        if custom_shape is not None:\n",
+        "            x_T = torch.randn(1, custom_shape[1], custom_shape[2], custom_shape[3]).to(model.device)\n",
+        "            x_T = repeat(x_T, '1 c h w -> b c h w', b=custom_shape[0])\n",
+        "\n",
+        "        logs = make_convolutional_sample(example, model,\n",
+        "                                         mode=mode, custom_steps=custom_steps,\n",
+        "                                         eta=eta, swap_mode=False , masked=masked,\n",
+        "                                         invert_mask=invert_mask, quantize_x0=False,\n",
+        "                                         custom_schedule=None, decode_interval=10,\n",
+        "                                         resize_enabled=resize_enabled, custom_shape=custom_shape,\n",
+        "                                         temperature=temperature, noise_dropout=0.,\n",
+        "                                         corrector=guider, corrector_kwargs=ckwargs, x_T=x_T, save_intermediate_vid=save_intermediate_vid,\n",
+        "                                         make_progrow=make_progrow,ddim_use_x0_pred=ddim_use_x0_pred\n",
+        "                                         )\n",
+        "    return logs\n",
+        "\n",
+        "\n",
+        "@torch.no_grad()\n",
+        "def convsample_ddim(model, cond, steps, shape, eta=1.0, callback=None, normals_sequence=None,\n",
+        "                    mask=None, x0=None, quantize_x0=False, img_callback=None,\n",
+        "                    temperature=1., noise_dropout=0., score_corrector=None,\n",
+        "                    corrector_kwargs=None, x_T=None, log_every_t=None\n",
+        "                    ):\n",
+        "\n",
+        "    ddim = DDIMSampler(model)\n",
+        "    bs = shape[0]  # batch size comes in as the first element of shape\n",
+        "    shape = shape[1:]  # cut batch dim\n",
+        "    # print(f\"Sampling with eta = {eta}; steps: {steps}\")\n",
+        "    samples, intermediates = ddim.sample(steps, batch_size=bs, shape=shape, conditioning=cond, callback=callback,\n",
+        "                                         normals_sequence=normals_sequence, quantize_x0=quantize_x0, eta=eta,\n",
+        "                                         mask=mask, x0=x0, temperature=temperature, verbose=False,\n",
+        "                                         score_corrector=score_corrector,\n",
+        "                                         corrector_kwargs=corrector_kwargs, x_T=x_T)\n",
+        "\n",
+        "    return samples, intermediates\n",
+        "\n",
+        "\n",
+        "@torch.no_grad()\n",
+        "def make_convolutional_sample(batch, model, mode=\"vanilla\", custom_steps=None, eta=1.0, swap_mode=False, masked=False,\n",
+        "                              invert_mask=True, quantize_x0=False, custom_schedule=None, decode_interval=1000,\n",
+        "                              resize_enabled=False, custom_shape=None, temperature=1., noise_dropout=0., corrector=None,\n",
+        "                              corrector_kwargs=None, x_T=None, save_intermediate_vid=False, make_progrow=True,ddim_use_x0_pred=False):\n",
+        "    log = dict()\n",
+        "\n",
+        "    z, c, x, xrec, xc = model.get_input(batch, model.first_stage_key,\n",
+        "                                        return_first_stage_outputs=True,\n",
+        "                                        force_c_encode=not (hasattr(model, 'split_input_params')\n",
+        "                                                            and model.cond_stage_key == 'coordinates_bbox'),\n",
+        "                                        return_original_cond=True)\n",
+        "\n",
+        "    log_every_t = 1 if save_intermediate_vid else None\n",
+        "\n",
+        "    if custom_shape is not None:\n",
+        "        z = torch.randn(custom_shape)\n",
+        "        # print(f\"Generating {custom_shape[0]} samples of shape {custom_shape[1:]}\")\n",
+        "\n",
+        "    z0 = None\n",
+        "\n",
+        "    log[\"input\"] = x\n",
+        "    log[\"reconstruction\"] = xrec\n",
+        "\n",
+        "    if ismap(xc):\n",
+        "        log[\"original_conditioning\"] = model.to_rgb(xc)\n",
+        "        if hasattr(model, 'cond_stage_key'):\n",
+        "            log[model.cond_stage_key] = model.to_rgb(xc)\n",
+        "\n",
+        "    else:\n",
+        "        log[\"original_conditioning\"] = xc if xc is not None else torch.zeros_like(x)\n",
+        "        if model.cond_stage_model:\n",
+        "            log[model.cond_stage_key] = xc if xc is not None else torch.zeros_like(x)\n",
+        "            if model.cond_stage_key =='class_label':\n",
+        "                log[model.cond_stage_key] = xc[model.cond_stage_key]\n",
+        "\n",
+        "    with model.ema_scope(\"Plotting\"):\n",
+        "        t0 = time.time()\n",
+        "        img_cb = None\n",
+        "\n",
+        "        sample, intermediates = convsample_ddim(model, c, steps=custom_steps, shape=z.shape,\n",
+        "                                                eta=eta,\n",
+        "                                                quantize_x0=quantize_x0, img_callback=img_cb, mask=None, x0=z0,\n",
+        "                                                temperature=temperature, noise_dropout=noise_dropout,\n",
+        "                                                score_corrector=corrector, corrector_kwargs=corrector_kwargs,\n",
+        "                                                x_T=x_T, log_every_t=log_every_t)\n",
+        "        t1 = time.time()\n",
+        "\n",
+        "        if ddim_use_x0_pred:\n",
+        "            sample = intermediates['pred_x0'][-1]\n",
+        "\n",
+        "    x_sample = model.decode_first_stage(sample)\n",
+        "\n",
+        "    try:\n",
+        "        x_sample_noquant = model.decode_first_stage(sample, force_not_quantize=True)\n",
+        "        log[\"sample_noquant\"] = x_sample_noquant\n",
+        "        log[\"sample_diff\"] = torch.abs(x_sample_noquant - x_sample)\n",
+        "    except:\n",
+        "        pass\n",
+        "\n",
+        "    log[\"sample\"] = x_sample\n",
+        "    log[\"time\"] = t1 - t0\n",
+        "\n",
+        "    return log\n",
+        "\n",
+        "sr_diffMode = 'superresolution'\n",
+        "sr_model = get_model('superresolution')\n",
+        "\n",
+        "\n",
+        "\n",
+        "\n",
+        "\n",
+        "\n",
+        "def do_superres(img, filepath):\n",
+        "\n",
+        "  if args.sharpen_preset == 'Faster':\n",
+        "      sr_diffusion_steps = \"25\" \n",
+        "      sr_pre_downsample = '1/2' \n",
+        "  if args.sharpen_preset == 'Fast':\n",
+        "      sr_diffusion_steps = \"100\" \n",
+        "      sr_pre_downsample = '1/2' \n",
+        "  if args.sharpen_preset == 'Slow':\n",
+        "      sr_diffusion_steps = \"25\" \n",
+        "      sr_pre_downsample = 'None' \n",
+        "  if args.sharpen_preset == 'Very Slow':\n",
+        "      sr_diffusion_steps = \"100\" \n",
+        "      sr_pre_downsample = 'None' \n",
+        "\n",
+        "\n",
+        "  sr_post_downsample = 'Original Size'\n",
+        "  sr_diffusion_steps = int(sr_diffusion_steps)\n",
+        "  sr_eta = 1.0 \n",
+        "  sr_downsample_method = 'Lanczos' \n",
+        "\n",
+        "  gc.collect()\n",
+        "  torch.cuda.empty_cache()\n",
+        "\n",
+        "  im_og = img\n",
+        "  width_og, height_og = im_og.size\n",
+        "\n",
+        "  #Downsample Pre\n",
+        "  if sr_pre_downsample == '1/2':\n",
+        "    downsample_rate = 2\n",
+        "  elif sr_pre_downsample == '1/4':\n",
+        "    downsample_rate = 4\n",
+        "  else:\n",
+        "    downsample_rate = 1\n",
+        "\n",
+        "  width_downsampled_pre = width_og//downsample_rate\n",
+        "  height_downsampled_pre = height_og//downsample_rate\n",
+        "\n",
+        "  if downsample_rate != 1:\n",
+        "    # print(f'Downsampling from [{width_og}, {height_og}] to [{width_downsampled_pre}, {height_downsampled_pre}]')\n",
+        "    im_og = im_og.resize((width_downsampled_pre, height_downsampled_pre), Image.LANCZOS)\n",
+        "    # im_og.save('/content/temp.png')\n",
+        "    # filepath = '/content/temp.png'\n",
+        "\n",
+        "  logs = sr_run(sr_model[\"model\"], im_og, sr_diffMode, sr_diffusion_steps, sr_eta)\n",
+        "\n",
+        "  sample = logs[\"sample\"]\n",
+        "  sample = sample.detach().cpu()\n",
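+        "  # Convert the sampled image from [-1, 1] NCHW floats to [0, 255] NHWC uint8 for PIL\n",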
+        "  sample = torch.clamp(sample, -1., 1.)\n",
+        "  sample = (sample + 1.) / 2. * 255\n",
+        "  sample = sample.numpy().astype(np.uint8)\n",
+        "  sample = np.transpose(sample, (0, 2, 3, 1))\n",
+        "  a = Image.fromarray(sample[0])\n",
+        "\n",
+        "  #Downsample Post\n",
+        "  if sr_post_downsample == '1/2':\n",
+        "    downsample_rate = 2\n",
+        "  elif sr_post_downsample == '1/4':\n",
+        "    downsample_rate = 4\n",
+        "  else:\n",
+        "    downsample_rate = 1\n",
+        "\n",
+        "  width, height = a.size\n",
+        "  width_downsampled_post = width//downsample_rate\n",
+        "  height_downsampled_post = height//downsample_rate\n",
+        "\n",
+        "  if sr_downsample_method == 'Lanczos':\n",
+        "    aliasing = Image.LANCZOS\n",
+        "  else:\n",
+        "    aliasing = Image.NEAREST\n",
+        "\n",
+        "  if downsample_rate != 1:\n",
+        "    # print(f'Downsampling from [{width}, {height}] to [{width_downsampled_post}, {height_downsampled_post}]')\n",
+        "    a = a.resize((width_downsampled_post, height_downsampled_post), aliasing)\n",
+        "  elif sr_post_downsample == 'Original Size':\n",
+        "    # print(f'Downsampling from [{width}, {height}] to Original Size [{width_og}, {height_og}]')\n",
+        "    a = a.resize((width_og, height_og), aliasing)\n",
+        "\n",
+        "  display.display(a)\n",
+        "  a.save(filepath)\n",
+        "  print('Processing finished!')\n",
+        "  return\n"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "NJS2AUAnvn-D"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "CQVtY1Ixnqx4"
+      },
+      "source": [
+        "# 2. Diffusion and CLIP model settings"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "Fpbody2NCR7w",
+        "cellView": "form"
+      },
+      "source": [
+        "#@markdown ####**Models Settings:**\n",
+        "diffusion_model = \"512x512_diffusion_uncond_finetune_008100\" #@param [\"256x256_diffusion_uncond\", \"512x512_diffusion_uncond_finetune_008100\"]\n",
+        "use_secondary_model = True #@param {type: 'boolean'}\n",
+        "\n",
+        "timestep_respacing = '50' # param ['25','50','100','150','250','500','1000','ddim25','ddim50', 'ddim75', 'ddim100','ddim150','ddim250','ddim500','ddim1000']  \n",
+        "diffusion_steps = 1000 # param {type: 'number'}\n",
+        "use_checkpoint = True #@param {type: 'boolean'}\n",
+        "ViTB32 = True #@param{type:\"boolean\"}\n",
+        "ViTB16 = True #@param{type:\"boolean\"}\n",
+        "ViTL14 = False #@param{type:\"boolean\"}\n",
+        "RN101 = False #@param{type:\"boolean\"}\n",
+        "RN50 = True #@param{type:\"boolean\"}\n",
+        "RN50x4 = False #@param{type:\"boolean\"}\n",
+        "RN50x16 = False #@param{type:\"boolean\"}\n",
+        "RN50x64 = False #@param{type:\"boolean\"}\n",
+        "SLIPB16 = False # param{type:\"boolean\"}\n",
+        "SLIPL16 = False # param{type:\"boolean\"}\n",
+        "\n",
+        "#@markdown If you're having issues with model downloads, check this to compare SHA's:\n",
+        "check_model_SHA = False #@param{type:\"boolean\"}\n",
+        "\n",
+        "model_256_SHA = '983e3de6f95c88c81b2ca7ebb2c217933be1973b1ff058776b970f901584613a'\n",
+        "model_512_SHA = '9c111ab89e214862b76e1fa6a1b3f1d329b1a88281885943d2cdbe357ad57648'\n",
+        "model_secondary_SHA = '983e3de6f95c88c81b2ca7ebb2c217933be1973b1ff058776b970f901584613a'\n",
+        "\n",
+        "model_256_link = 'https://openaipublic.blob.core.windows.net/diffusion/jul-2021/256x256_diffusion_uncond.pt'\n",
+        "model_512_link = 'https://v-diffusion.s3.us-west-2.amazonaws.com/512x512_diffusion_uncond_finetune_008100.pt'\n",
+        "model_secondary_link = 'https://v-diffusion.s3.us-west-2.amazonaws.com/secondary_model_imagenet_2.pth'\n",
+        "\n",
+        "model_256_path = f'{model_path}/256x256_diffusion_uncond.pt'\n",
+        "model_512_path = f'{model_path}/512x512_diffusion_uncond_finetune_008100.pt'\n",
+        "model_secondary_path = f'{model_path}/secondary_model_imagenet_2.pth'\n",
+        "\n",
+        "# Download the diffusion model\n",
+        "if diffusion_model == '256x256_diffusion_uncond':\n",
+        "  if os.path.exists(model_256_path) and check_model_SHA:\n",
+        "    print('Checking 256 Diffusion File')\n",
+        "    with open(model_256_path,\"rb\") as f:\n",
+        "        bytes = f.read() \n",
+        "        hash = hashlib.sha256(bytes).hexdigest();\n",
+        "    if hash == model_256_SHA:\n",
+        "      print('256 Model SHA matches')\n",
+        "      model_256_downloaded = True\n",
+        "    else: \n",
+        "      print(\"256 Model SHA doesn't match, redownloading...\")\n",
+        "      !wget --continue {model_256_link} -P {model_path}\n",
+        "      model_256_downloaded = True\n",
+        "  elif os.path.exists(model_256_path) and not check_model_SHA or model_256_downloaded == True:\n",
+        "    print('256 Model already downloaded, check check_model_SHA if the file is corrupt')\n",
+        "  else:  \n",
+        "    !wget --continue {model_256_link} -P {model_path}\n",
+        "    model_256_downloaded = True\n",
+        "elif diffusion_model == '512x512_diffusion_uncond_finetune_008100':\n",
+        "  if os.path.exists(model_512_path) and check_model_SHA:\n",
+        "    print('Checking 512 Diffusion File')\n",
+        "    with open(model_512_path,\"rb\") as f:\n",
+        "        bytes = f.read() \n",
+        "        hash = hashlib.sha256(bytes).hexdigest();\n",
+        "    if hash == model_512_SHA:\n",
+        "      print('512 Model SHA matches')\n",
+        "      model_512_downloaded = True\n",
+        "    else:  \n",
+        "      print(\"512 Model SHA doesn't match, redownloading...\")\n",
+        "      !wget --continue {model_512_link} -P {model_path}\n",
+        "      model_512_downloaded = True\n",
+        "  elif os.path.exists(model_512_path) and not check_model_SHA or model_512_downloaded == True:\n",
+        "    print('512 Model already downloaded, check check_model_SHA if the file is corrupt')\n",
+        "  else:  \n",
+        "    !wget --continue {model_512_link} -P {model_path}\n",
+        "    model_512_downloaded = True\n",
+        "\n",
+        "\n",
+        "# Download the secondary diffusion model v2\n",
+        "if use_secondary_model == True:\n",
+        "  if os.path.exists(model_secondary_path) and check_model_SHA:\n",
+        "    print('Checking Secondary Diffusion File')\n",
+        "    with open(model_secondary_path,\"rb\") as f:\n",
+        "        bytes = f.read() \n",
+        "        hash = hashlib.sha256(bytes).hexdigest();\n",
+        "    if hash == model_secondary_SHA:\n",
+        "      print('Secondary Model SHA matches')\n",
+        "      model_secondary_downloaded = True\n",
+        "    else:  \n",
+        "      print(\"Secondary Model SHA doesn't match, redownloading...\")\n",
+        "      !wget --continue {model_secondary_link} -P {model_path}\n",
+        "      model_secondary_downloaded = True\n",
+        "  elif os.path.exists(model_secondary_path) and not check_model_SHA or model_secondary_downloaded == True:\n",
+        "    print('Secondary Model already downloaded, check check_model_SHA if the file is corrupt')\n",
+        "  else:  \n",
+        "    !wget --continue {model_secondary_link} -P {model_path}\n",
+        "    model_secondary_downloaded = True\n",
+        "\n",
+        "model_config = model_and_diffusion_defaults()\n",
+        "if diffusion_model == '512x512_diffusion_uncond_finetune_008100':\n",
+        "    model_config.update({\n",
+        "        'attention_resolutions': '32, 16, 8',\n",
+        "        'class_cond': False,\n",
+        "        'diffusion_steps': diffusion_steps,\n",
+        "        'rescale_timesteps': True,\n",
+        "        'timestep_respacing': timestep_respacing,\n",
+        "        'image_size': 512,\n",
+        "        'learn_sigma': True,\n",
+        "        'noise_schedule': 'linear',\n",
+        "        'num_channels': 256,\n",
+        "        'num_head_channels': 64,\n",
+        "        'num_res_blocks': 2,\n",
+        "        'resblock_updown': True,\n",
+        "        'use_checkpoint': use_checkpoint,\n",
+        "        'use_fp16': True,\n",
+        "        'use_scale_shift_norm': True,\n",
+        "    })\n",
+        "elif diffusion_model == '256x256_diffusion_uncond':\n",
+        "    model_config.update({\n",
+        "        'attention_resolutions': '32, 16, 8',\n",
+        "        'class_cond': False,\n",
+        "        'diffusion_steps': diffusion_steps,\n",
+        "        'rescale_timesteps': True,\n",
+        "        'timestep_respacing': timestep_respacing,\n",
+        "        'image_size': 256,\n",
+        "        'learn_sigma': True,\n",
+        "        'noise_schedule': 'linear',\n",
+        "        'num_channels': 256,\n",
+        "        'num_head_channels': 64,\n",
+        "        'num_res_blocks': 2,\n",
+        "        'resblock_updown': True,\n",
+        "        'use_checkpoint': use_checkpoint,\n",
+        "        'use_fp16': True,\n",
+        "        'use_scale_shift_norm': True,\n",
+        "    })\n",
+        "\n",
+        "secondary_model_ver = 2\n",
+        "model_default = model_config['image_size']\n",
+        "\n",
+        "\n",
+        "\n",
+        "if secondary_model_ver == 2:\n",
+        "    secondary_model = SecondaryDiffusionImageNet2()\n",
+        "    secondary_model.load_state_dict(torch.load(f'{model_path}/secondary_model_imagenet_2.pth', map_location='cpu'))\n",
+        "secondary_model.eval().requires_grad_(False).to(device)\n",
+        "\n",
+        "clip_models = []\n",
+        "if ViTB32 is True: clip_models.append(clip.load('ViT-B/32', jit=False)[0].eval().requires_grad_(False).to(device)) \n",
+        "if ViTB16 is True: clip_models.append(clip.load('ViT-B/16', jit=False)[0].eval().requires_grad_(False).to(device) ) \n",
+        "if ViTL14 is True: clip_models.append(clip.load('ViT-L/14', jit=False)[0].eval().requires_grad_(False).to(device) ) \n",
+        "if RN50 is True: clip_models.append(clip.load('RN50', jit=False)[0].eval().requires_grad_(False).to(device))\n",
+        "if RN50x4 is True: clip_models.append(clip.load('RN50x4', jit=False)[0].eval().requires_grad_(False).to(device)) \n",
+        "if RN50x16 is True: clip_models.append(clip.load('RN50x16', jit=False)[0].eval().requires_grad_(False).to(device)) \n",
+        "if RN50x64 is True: clip_models.append(clip.load('RN50x64', jit=False)[0].eval().requires_grad_(False).to(device)) \n",
+        "if RN101 is True: clip_models.append(clip.load('RN101', jit=False)[0].eval().requires_grad_(False).to(device)) \n",
+        "\n",
+        "if SLIPB16:\n",
+        "  SLIPB16model = SLIP_VITB16(ssl_mlp_dim=4096, ssl_emb_dim=256)\n",
+        "  if not os.path.exists(f'{model_path}/slip_base_100ep.pt'):\n",
+        "    !wget https://dl.fbaipublicfiles.com/slip/slip_base_100ep.pt -P {model_path}\n",
+        "  sd = torch.load(f'{model_path}/slip_base_100ep.pt')\n",
+        "  real_sd = {}\n",
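+        "  # Re-key the checkpoint by dropping the first dot-separated component of each name\n",
+        "  # (e.g. a 'module.' prefix from (Distributed)DataParallel training) so it matches the bare SLIP model\n",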
+        "  for k, v in sd['state_dict'].items():\n",
+        "    real_sd['.'.join(k.split('.')[1:])] = v\n",
+        "  del sd\n",
+        "  SLIPB16model.load_state_dict(real_sd)\n",
+        "  SLIPB16model.requires_grad_(False).eval().to(device)\n",
+        "\n",
+        "  clip_models.append(SLIPB16model)\n",
+        "\n",
+        "if SLIPL16:\n",
+        "  SLIPL16model = SLIP_VITL16(ssl_mlp_dim=4096, ssl_emb_dim=256)\n",
+        "  if not os.path.exists(f'{model_path}/slip_large_100ep.pt'):\n",
+        "    !wget https://dl.fbaipublicfiles.com/slip/slip_large_100ep.pt -P {model_path}\n",
+        "  sd = torch.load(f'{model_path}/slip_large_100ep.pt')\n",
+        "  real_sd = {}\n",
+        "  for k, v in sd['state_dict'].items():\n",
+        "    real_sd['.'.join(k.split('.')[1:])] = v\n",
+        "  del sd\n",
+        "  SLIPL16model.load_state_dict(real_sd)\n",
+        "  SLIPL16model.requires_grad_(False).eval().to(device)\n",
+        "\n",
+        "  clip_models.append(SLIPL16model)\n",
+        "\n",
+        "normalize = T.Normalize(mean=[0.48145466, 0.4578275, 0.40821073], std=[0.26862954, 0.26130258, 0.27577711])\n",
+        "lpips_model = lpips.LPIPS(net='vgg').to(device)"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "kjtsXaszn-bB"
+      },
+      "source": [
+        "# 3. Settings"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "U0PwzFZbLfcy",
+        "cellView": "form"
+      },
+      "source": [
+        "#@markdown ####**Basic Settings:**\n",
+        "batch_name = 'TimeToDisco' #@param{type: 'string'}\n",
+        "steps = 250 #@param [25,50,100,150,250,500,1000]{type: 'raw', allow-input: true}\n",
+        "width_height = [1280, 768]#@param{type: 'raw'}\n",
+        "clip_guidance_scale = 5000 #@param{type: 'number'}\n",
+        "tv_scale =  0#@param{type: 'number'}\n",
+        "range_scale =   150#@param{type: 'number'}\n",
+        "sat_scale =   0#@param{type: 'number'}\n",
+        "cutn_batches = 4  #@param{type: 'number'}\n",
+        "skip_augs = False#@param{type: 'boolean'}\n",
+        "\n",
+        "#@markdown ---\n",
+        "\n",
+        "#@markdown ####**Init Settings:**\n",
+        "init_image = None #@param{type: 'string'}\n",
+        "init_scale = 1000 #@param{type: 'integer'}\n",
+        "skip_steps = 0 #@param{type: 'integer'}\n",
+        "#@markdown *Make sure you set skip_steps to ~50% of your steps if you want to use an init image.*\n",
+        "\n",
+        "#Get corrected sizes\n",
+        "side_x = (width_height[0]//64)*64\n",
+        "side_y = (width_height[1]//64)*64\n",
+        "if side_x != width_height[0] or side_y != width_height[1]:\n",
+        "  print(f'Changing output size to {side_x}x{side_y}. Dimensions must be multiples of 64.')\n",
+        "\n",
+        "#Update Model Settings\n",
+        "timestep_respacing = f'ddim{steps}'\n",
+        "diffusion_steps = (1000//steps)*steps if steps < 1000 else steps\n",
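+        "# Worked example of the two lines above, using this cell's default steps=250:\n",
+        "# steps=250 -> timestep_respacing='ddim250' and diffusion_steps=(1000//250)*250=1000;\n",
+        "# steps=150 would give 'ddim150' and (1000//150)*150=900, i.e. the largest multiple of steps not exceeding 1000.\n",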
+        "model_config.update({\n",
+        "    'timestep_respacing': timestep_respacing,\n",
+        "    'diffusion_steps': diffusion_steps,\n",
+        "})\n",
+        "\n",
+        "#Make folder for batch\n",
+        "batchFolder = f'{outDirPath}/{batch_name}'\n",
+        "createPath(batchFolder)\n"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "###Animation Settings"
+      ],
+      "metadata": {
+        "id": "CnkTNXJAPzL2"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "#@markdown ####**Animation Mode:**\n",
+        "animation_mode = \"None\" #@param['None', '2D', 'Video Input']\n",
+        "#@markdown *For animation, you probably want to set `cutn_batches` to 1 to make it quicker.*\n",
+        "\n",
+        "\n",
+        "#@markdown ---\n",
+        "\n",
+        "#@markdown ####**Video Input Settings:**\n",
+        "video_init_path = \"/content/training.mp4\" #@param {type: 'string'}\n",
+        "extract_nth_frame = 2 #@param {type:\"number\"} \n",
+        "\n",
+        "if animation_mode == \"Video Input\":\n",
+        "  videoFramesFolder = f'/content/videoFrames'\n",
+        "  createPath(videoFramesFolder)\n",
+        "  print(f\"Exporting Video Frames (1 every {extract_nth_frame})...\")\n",
+        "  try:\n",
+        "    !rm {videoFramesFolder}/*.jpg\n",
+        "  except:\n",
+        "    print('')\n",
+        "  vf = f'\"select=not(mod(n\\,{extract_nth_frame}))\"'\n",
+        "  !ffmpeg -i {video_init_path} -vf {vf} -vsync vfr -q:v 2 -loglevel error -stats {videoFramesFolder}/%04d.jpg\n",
+        "\n",
+        "\n",
+        "#@markdown ---\n",
+        "\n",
+        "#@markdown ####**2D Animation Settings:**\n",
+        "#@markdown `zoom` is a multiplier of dimensions, 1 is no zoom.\n",
+        "\n",
+        "key_frames = True #@param {type:\"boolean\"}\n",
+        "max_frames = 10000#@param {type:\"number\"}\n",
+        "\n",
+        "if animation_mode == \"Video Input\":\n",
+        "  max_frames = len(glob(f'{videoFramesFolder}/*.jpg'))\n",
+        "\n",
+        "interp_spline = 'Linear' #Do not change; other methods currently do not look good. param ['Linear','Quadratic','Cubic']{type:\"string\"}\n",
+        "angle = \"0:(0)\"#@param {type:\"string\"}\n",
+        "zoom = \"0: (1), 10: (1.05)\"#@param {type:\"string\"}\n",
+        "translation_x = \"0: (0)\"#@param {type:\"string\"}\n",
+        "translation_y = \"0: (0)\"#@param {type:\"string\"}\n",
+        "\n",
+        "#@markdown ---\n",
+        "\n",
+        "#@markdown ####**Coherency Settings:**\n",
+        "#@markdown `frames_scale` tries to guide the new frame to look like the old one. A good default is 1500.\n",
+        "frames_scale = 1500 #@param{type: 'integer'}\n",
+        "#@markdown `frames_skip_steps` will blur the previous frame - higher values will flicker less but struggle to add enough new detail to zoom into.\n",
+        "frames_skip_steps = '60%' #@param ['40%', '50%', '60%', '70%', '80%'] {type: 'string'}\n",
+        "\n",
+        "\n",
+        "def parse_key_frames(string, prompt_parser=None):\n",
+        "    \"\"\"Given a string representing frame numbers paired with parameter values at that frame,\n",
+        "    return a dictionary with the frame numbers as keys and the parameter values as the values.\n",
+        "\n",
+        "    Parameters\n",
+        "    ----------\n",
+        "    string: string\n",
+        "        Frame numbers paired with parameter values at that frame number, in the format\n",
+        "        'framenumber1: (parametervalues1), framenumber2: (parametervalues2), ...'\n",
+        "    prompt_parser: function or None, optional\n",
+        "        If provided, prompt_parser will be applied to each string of parameter values.\n",
+        "    \n",
+        "    Returns\n",
+        "    -------\n",
+        "    dict\n",
+        "        Frame numbers as keys, parameter values at that frame number as values\n",
+        "\n",
+        "    Raises\n",
+        "    ------\n",
+        "    RuntimeError\n",
+        "        If the input string does not match the expected format.\n",
+        "    \n",
+        "    Examples\n",
+        "    --------\n",
+        "    >>> parse_key_frames(\"10:(Apple: 1| Orange: 0), 20: (Apple: 0| Orange: 1| Peach: 1)\")\n",
+        "    {10: 'Apple: 1| Orange: 0', 20: 'Apple: 0| Orange: 1| Peach: 1'}\n",
+        "\n",
+        "    >>> parse_key_frames(\"10:(Apple: 1| Orange: 0), 20: (Apple: 0| Orange: 1| Peach: 1)\", prompt_parser=lambda x: x.lower())\n",
+        "    {10: 'apple: 1| orange: 0', 20: 'apple: 0| orange: 1| peach: 1'}\n",
+        "    \"\"\"\n",
+        "    import re\n",
+        "    pattern = r'((?P<frame>[0-9]+):[\\s]*[\\(](?P<param>[\\S\\s]*?)[\\)])'\n",
+        "    frames = dict()\n",
+        "    for match_object in re.finditer(pattern, string):\n",
+        "        frame = int(match_object.groupdict()['frame'])\n",
+        "        param = match_object.groupdict()['param']\n",
+        "        if prompt_parser:\n",
+        "            frames[frame] = prompt_parser(param)\n",
+        "        else:\n",
+        "            frames[frame] = param\n",
+        "\n",
+        "    if frames == {} and len(string) != 0:\n",
+        "        raise RuntimeError('Key Frame string not correctly formatted')\n",
+        "    return frames\n",
+        "\n",
+        "def get_inbetweens(key_frames, integer=False):\n",
+        "    \"\"\"Given a dict with frame numbers as keys and a parameter value as values,\n",
+        "    return a pandas Series containing the value of the parameter at every frame from 0 to max_frames.\n",
+        "    Any values not provided in the input dict are calculated by linear interpolation between\n",
+        "    the values of the previous and next provided frames. If there is no previous provided frame, then\n",
+        "    the value is equal to the value of the next provided frame, or if there is no next provided frame,\n",
+        "    then the value is equal to the value of the previous provided frame. If no frames are provided,\n",
+        "    all frame values are NaN.\n",
+        "\n",
+        "    Parameters\n",
+        "    ----------\n",
+        "    key_frames: dict\n",
+        "        A dict with integer frame numbers as keys and numerical values of a particular parameter as values.\n",
+        "    integer: Bool, optional\n",
+        "        If True, the values of the output series are converted to integers.\n",
+        "        Otherwise, the values are floats.\n",
+        "    \n",
+        "    Returns\n",
+        "    -------\n",
+        "    pd.Series\n",
+        "        A Series with length max_frames representing the parameter values for each frame.\n",
+        "    \n",
+        "    Examples\n",
+        "    --------\n",
+        "    >>> max_frames = 5\n",
+        "    >>> get_inbetweens({1: 5, 3: 6})\n",
+        "    0    5.0\n",
+        "    1    5.0\n",
+        "    2    5.5\n",
+        "    3    6.0\n",
+        "    4    6.0\n",
+        "    dtype: float64\n",
+        "\n",
+        "    >>> get_inbetweens({1: 5, 3: 6}, integer=True)\n",
+        "    0    5\n",
+        "    1    5\n",
+        "    2    5\n",
+        "    3    6\n",
+        "    4    6\n",
+        "    dtype: int64\n",
+        "    \"\"\"\n",
+        "    key_frame_series = pd.Series([np.nan for a in range(max_frames)])\n",
+        "\n",
+        "    for i, value in key_frames.items():\n",
+        "        key_frame_series[i] = value\n",
+        "    key_frame_series = key_frame_series.astype(float)\n",
+        "    \n",
+        "    interp_method = interp_spline\n",
+        "\n",
+        "    if interp_method == 'Cubic' and len(key_frames.items()) <=3:\n",
+        "      interp_method = 'Quadratic'\n",
+        "    \n",
+        "    if interp_method == 'Quadratic' and len(key_frames.items()) <= 2:\n",
+        "      interp_method = 'Linear'\n",
+        "      \n",
+        "    \n",
+        "    key_frame_series[0] = key_frame_series[key_frame_series.first_valid_index()]\n",
+        "    key_frame_series[max_frames-1] = key_frame_series[key_frame_series.last_valid_index()]\n",
+        "    # key_frame_series = key_frame_series.interpolate(method=intrp_method,order=1, limit_direction='both')\n",
+        "    key_frame_series = key_frame_series.interpolate(method=interp_method.lower(),limit_direction='both')\n",
+        "    if integer:\n",
+        "        return key_frame_series.astype(int)\n",
+        "    return key_frame_series\n",
+        "\n",
+        "def split_prompts(prompts):\n",
+        "  prompt_series = pd.Series([np.nan for a in range(max_frames)])\n",
+        "  for i, prompt in prompts.items():\n",
+        "    prompt_series[i] = prompt\n",
+        "  # prompt_series = prompt_series.astype(str)\n",
+        "  prompt_series = prompt_series.ffill().bfill()\n",
+        "  return prompt_series\n",
+        "\n",
+        "if key_frames:\n",
+        "    try:\n",
+        "        angle_series = get_inbetweens(parse_key_frames(angle))\n",
+        "    except RuntimeError as e:\n",
+        "        print(\n",
+        "            \"WARNING: You have selected to use key frames, but you have not \"\n",
+        "            \"formatted `angle` correctly for key frames.\\n\"\n",
+        "            \"Attempting to interpret `angle` as \"\n",
+        "            f'\"0: ({angle})\"\\n'\n",
+        "            \"Please read the instructions to find out how to use key frames \"\n",
+        "            \"correctly.\\n\"\n",
+        "        )\n",
+        "        angle = f\"0: ({angle})\"\n",
+        "        angle_series = get_inbetweens(parse_key_frames(angle))\n",
+        "\n",
+        "    try:\n",
+        "        zoom_series = get_inbetweens(parse_key_frames(zoom))\n",
+        "    except RuntimeError as e:\n",
+        "        print(\n",
+        "            \"WARNING: You have selected to use key frames, but you have not \"\n",
+        "            \"formatted `zoom` correctly for key frames.\\n\"\n",
+        "            \"Attempting to interpret `zoom` as \"\n",
+        "            f'\"0: ({zoom})\"\\n'\n",
+        "            \"Please read the instructions to find out how to use key frames \"\n",
+        "            \"correctly.\\n\"\n",
+        "        )\n",
+        "        zoom = f\"0: ({zoom})\"\n",
+        "        zoom_series = get_inbetweens(parse_key_frames(zoom))\n",
+        "\n",
+        "    try:\n",
+        "        translation_x_series = get_inbetweens(parse_key_frames(translation_x))\n",
+        "    except RuntimeError as e:\n",
+        "        print(\n",
+        "            \"WARNING: You have selected to use key frames, but you have not \"\n",
+        "            \"formatted `translation_x` correctly for key frames.\\n\"\n",
+        "            \"Attempting to interpret `translation_x` as \"\n",
+        "            f'\"0: ({translation_x})\"\\n'\n",
+        "            \"Please read the instructions to find out how to use key frames \"\n",
+        "            \"correctly.\\n\"\n",
+        "        )\n",
+        "        translation_x = f\"0: ({translation_x})\"\n",
+        "        translation_x_series = get_inbetweens(parse_key_frames(translation_x))\n",
+        "\n",
+        "    try:\n",
+        "        translation_y_series = get_inbetweens(parse_key_frames(translation_y))\n",
+        "    except RuntimeError as e:\n",
+        "        print(\n",
+        "            \"WARNING: You have selected to use key frames, but you have not \"\n",
+        "            \"formatted `translation_y` correctly for key frames.\\n\"\n",
+        "            \"Attempting to interpret `translation_y` as \"\n",
+        "            f'\"0: ({translation_y})\"\\n'\n",
+        "            \"Please read the instructions to find out how to use key frames \"\n",
+        "            \"correctly.\\n\"\n",
+        "        )\n",
+        "        translation_y = f\"0: ({translation_y})\"\n",
+        "        translation_y_series = get_inbetweens(parse_key_frames(translation_y))\n",
+        "\n",
+        "else:\n",
+        "    angle = float(angle)\n",
+        "    zoom = float(zoom)\n",
+        "    translation_x = float(translation_x)\n",
+        "    translation_y = float(translation_y)\n"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "djPY2_4kHgV2"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Extra Settings\n",
+        " Partial Saves, Diffusion Sharpening, Advanced Settings, Cutn Scheduling"
+      ],
+      "metadata": {
+        "id": "u1VHzHvNx5fd"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "#@markdown ####**Saving:**\n",
+        "\n",
+        "intermediate_saves = 0#@param{type: 'raw'}\n",
+        "intermediates_in_subfolder = True #@param{type: 'boolean'}\n",
+        "#@markdown Intermediate saves will write a copy of the image at your specified intervals. You can format this either as a single integer or as a list of specific steps.\n",
+        "\n",
+        "#@markdown A value of `2` will save a copy at 33% and 66%. 0 will save none.\n",
+        "\n",
+        "#@markdown A value of `[5, 9, 34, 45]` will save at steps 5, 9, 34, and 45. (Make sure to include the brackets)\n",
+        "\n",
+        "\n",
+        "if type(intermediate_saves) is not list:\n",
+        "  if intermediate_saves:\n",
+        "    steps_per_checkpoint = math.floor((steps - skip_steps - 1) // (intermediate_saves+1))\n",
+        "    steps_per_checkpoint = steps_per_checkpoint if steps_per_checkpoint > 0 else 1\n",
+        "    print(f'Will save every {steps_per_checkpoint} steps')\n",
+        "  else:\n",
+        "    steps_per_checkpoint = steps+10\n",
+        "else:\n",
+        "  steps_per_checkpoint = None\n",
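+        "# Worked example of the branch above: with steps=250, skip_steps=0 and intermediate_saves=2,\n",
+        "# steps_per_checkpoint = (250 - 0 - 1) // (2 + 1) = 83, so a partial image is saved roughly every third\n",
+        "# of the run (the 33% / 66% behaviour described above).\n",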
+        "\n",
+        "if intermediate_saves and intermediates_in_subfolder is True:\n",
+        "  partialFolder = f'{batchFolder}/partials'\n",
+        "  createPath(partialFolder)\n",
+        "\n",
+        "  #@markdown ---\n",
+        "\n",
+        "#@markdown ####**SuperRes Sharpening:**\n",
+        "#@markdown *Sharpen each image using latent-diffusion. Does not run in animation mode. `keep_unsharp` will save both versions.*\n",
+        "sharpen_preset = 'Off' #@param ['Off', 'Faster', 'Fast', 'Slow', 'Very Slow']\n",
+        "keep_unsharp = True #@param{type: 'boolean'}\n",
+        "\n",
+        "if sharpen_preset != 'Off' and keep_unsharp is True:\n",
+        "  unsharpenFolder = f'{batchFolder}/unsharpened'\n",
+        "  createPath(unsharpenFolder)\n",
+        "\n",
+        "\n",
+        "  #@markdown ---\n",
+        "\n",
+        "#@markdown ####**Advanced Settings:**\n",
+        "#@markdown *There are a few extra advanced settings available if you double click this cell.*\n",
+        "\n",
+        "#@markdown *Perlin init will replace your init, so uncheck if using one.*\n",
+        "\n",
+        "perlin_init = False  #@param{type: 'boolean'}\n",
+        "perlin_mode = 'mixed' #@param ['mixed', 'color', 'gray']\n",
+        "set_seed = 'random_seed' #@param{type: 'string'}\n",
+        "eta = 0.8#@param{type: 'number'}\n",
+        "clamp_grad = True #@param{type: 'boolean'}\n",
+        "clamp_max = 0.05 #@param{type: 'number'}\n",
+        "\n",
+        "\n",
+        "### EXTRA ADVANCED SETTINGS:\n",
+        "randomize_class = True\n",
+        "clip_denoised = False\n",
+        "fuzzy_prompt = False\n",
+        "rand_mag = 0.05\n",
+        "\n",
+        "\n",
+        " #@markdown ---\n",
+        "\n",
+        "#@markdown ####**Cutn Scheduling:**\n",
+        "#@markdown Format: `[40]*400+[20]*600` = 40 cuts for the first 400 /1000 steps, then 20 for the last 600/1000\n",
+        "\n",
+        "#@markdown cut_overview and cut_innercut are cumulative for total cutn on any given step. Overview cuts see the entire image and are good for early structure; innercuts are your standard cutn.\n",
+        "\n",
+        "cut_overview = \"[12]*400+[4]*600\" #@param {type: 'string'}       \n",
+        "cut_innercut =\"[4]*400+[12]*600\"#@param {type: 'string'}  \n",
+        "cut_ic_pow = 1#@param {type: 'number'}  \n",
+        "cut_icgray_p = \"[0.2]*400+[0]*600\"#@param {type: 'string'}  \n",
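+        "# Illustration of the schedule format described above: each string is a Python expression that\n",
+        "# evaluates to a 1000-entry list, one value per 1/1000th of the run; the run loop later in the\n",
+        "# notebook presumably indexes it by the current (rescaled) step, e.g.:\n",
+        "#   eval('[12]*400+[4]*600')  # -> 1000-entry list: 12 overview cuts early on, then 4\n",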
+        "\n"
+      ],
+      "metadata": {
+        "id": "lCLMxtILyAHA",
+        "cellView": "form"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "XIwh5RvNpk4K"
+      },
+      "source": [
+        "###Prompts\n",
+        "`animation_mode: None` will only use the first set. `animation_mode: 2D / Video` will run through them per the set frames and hold on the last one."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "BGBzhk3dpcGO"
+      },
+      "source": [
+        "text_prompts = {\n",
+        "    0: [\"A beautiful painting of a singular lighthouse, shining its light across a tumultuous sea of blood by greg rutkowski and thomas kinkade, Trending on artstation.\", \"yellow color scheme\"],\n",
+        "    100: [\"This set of prompts start at frame 100\",\"This prompt has weight five:5\"],\n",
+        "}\n",
+        "\n",
+        "image_prompts = {\n",
+        "    # 0:['ImagePromptsWorkButArentVeryGood.png:2',],\n",
+        "}"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "Nf9hTc8YLoLx"
+      },
+      "source": [
+        "# 4. Diffuse!"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "LHLiO56OfwgD",
+        "cellView": "form"
+      },
+      "source": [
+        "#@title Do the Run!\n",
+        "#@markdown `n_batches` ignored with animation modes.\n",
+        "display_rate =  50 #@param{type: 'number'}\n",
+        "n_batches =  1 #@param{type: 'number'}\n",
+        "\n",
+        "batch_size = 1 \n",
+        "\n",
+        "def move_files(start_num, end_num, old_folder, new_folder):\n",
+        "    for i in range(start_num, end_num):\n",
+        "        old_file = old_folder + f'/{batch_name}({batchNum})_{i:04}.png'\n",
+        "        new_file = new_folder + f'/{batch_name}({batchNum})_{i:04}.png'\n",
+        "        os.rename(old_file, new_file)\n",
+        "\n",
+        "#@markdown ---\n",
+        "\n",
+        "\n",
+        "resume_run = False #@param{type: 'boolean'}\n",
+        "run_to_resume = 'latest' #@param{type: 'string'}\n",
+        "resume_from_frame = 'latest' #@param{type: 'string'}\n",
+        "retain_overwritten_frames = False #@param{type: 'boolean'}\n",
+        "if retain_overwritten_frames is True:\n",
+        "  retainFolder = f'{batchFolder}/retained'\n",
+        "  createPath(retainFolder)\n",
+        "\n",
+        "\n",
+        "skip_step_ratio = int(frames_skip_steps.rstrip(\"%\")) / 100\n",
+        "calc_frames_skip_steps = math.floor(steps * skip_step_ratio)\n",
+        "\n",
+        "\n",
+        "if steps <= calc_frames_skip_steps:\n",
+        "  sys.exit(\"ERROR: You can't skip more steps than your total steps\")\n",
+        "\n",
+        "if resume_run:\n",
+        "  if run_to_resume == 'latest':\n",
+        "    try:\n",
+        "      batchNum\n",
+        "    except:\n",
+        "      batchNum = len(glob(f\"{batchFolder}/{batch_name}(*)_settings.txt\"))-1\n",
+        "  else:\n",
+        "    batchNum = int(run_to_resume)\n",
+        "  if resume_from_frame == 'latest':\n",
+        "    start_frame = len(glob(batchFolder+f\"/{batch_name}({batchNum})_*.png\"))\n",
+        "  else:\n",
+        "    start_frame = int(resume_from_frame)+1\n",
+        "    if retain_overwritten_frames is True:\n",
+        "      existing_frames = len(glob(batchFolder+f\"/{batch_name}({batchNum})_*.png\"))\n",
+        "      frames_to_save = existing_frames - start_frame\n",
+        "      print(f'Moving {frames_to_save} frames to the Retained folder')\n",
+        "      move_files(start_frame, existing_frames, batchFolder, retainFolder)\n",
+        "else:\n",
+        "  start_frame = 0\n",
+        "  batchNum = len(glob(batchFolder+\"/*.txt\"))\n",
+        "  while path.isfile(f\"{batchFolder}/{batch_name}({batchNum})_settings.txt\") is True or path.isfile(f\"{batchFolder}/{batch_name}-{batchNum}_settings.txt\") is True:\n",
+        "    batchNum += 1\n",
+        "\n",
+        "print(f'Starting Run: {batch_name}({batchNum}) at frame {start_frame}')\n",
+        "\n",
+        "if set_seed == 'random_seed':\n",
+        "    random.seed()\n",
+        "    seed = random.randint(0, 2**32)\n",
+        "    # print(f'Using seed: {seed}')\n",
+        "else:\n",
+        "    seed = int(set_seed)\n",
+        "\n",
+        "args = {\n",
+        "    'batchNum': batchNum,\n",
+        "    'prompts_series':split_prompts(text_prompts) if text_prompts else None,\n",
+        "    'image_prompts_series':split_prompts(image_prompts) if image_prompts else None,\n",
+        "    'seed': seed,\n",
+        "    'display_rate':display_rate,\n",
+        "    'n_batches':n_batches if animation_mode == 'None' else 1,\n",
+        "    'batch_size':batch_size,\n",
+        "    'batch_name': batch_name,\n",
+        "    'steps': steps,\n",
+        "    'width_height': width_height,\n",
+        "    'clip_guidance_scale': clip_guidance_scale,\n",
+        "    'tv_scale': tv_scale,\n",
+        "    'range_scale': range_scale,\n",
+        "    'sat_scale': sat_scale,\n",
+        "    'cutn_batches': cutn_batches,\n",
+        "    'init_image': init_image,\n",
+        "    'init_scale': init_scale,\n",
+        "    'skip_steps': skip_steps,\n",
+        "    'sharpen_preset': sharpen_preset,\n",
+        "    'keep_unsharp': keep_unsharp,\n",
+        "    'side_x': side_x,\n",
+        "    'side_y': side_y,\n",
+        "    'timestep_respacing': timestep_respacing,\n",
+        "    'diffusion_steps': diffusion_steps,\n",
+        "    'animation_mode': animation_mode,\n",
+        "    'video_init_path': video_init_path,\n",
+        "    'extract_nth_frame': extract_nth_frame,\n",
+        "    'key_frames': key_frames,\n",
+        "    'max_frames': max_frames if animation_mode != \"None\" else 1,\n",
+        "    'interp_spline': interp_spline,\n",
+        "    'start_frame': start_frame,\n",
+        "    'angle': angle,\n",
+        "    'zoom': zoom,\n",
+        "    'translation_x': translation_x,\n",
+        "    'translation_y': translation_y,\n",
+        "    'angle_series':angle_series,\n",
+        "    'zoom_series':zoom_series,\n",
+        "    'translation_x_series':translation_x_series,\n",
+        "    'translation_y_series':translation_y_series,\n",
+        "    'frames_scale': frames_scale,\n",
+        "    'calc_frames_skip_steps': calc_frames_skip_steps,\n",
+        "    'skip_step_ratio': skip_step_ratio,\n",
+        "    'calc_frames_skip_steps': calc_frames_skip_steps,\n",
+        "    'text_prompts': text_prompts,\n",
+        "    'image_prompts': image_prompts,\n",
+        "    'cut_overview': eval(cut_overview),\n",
+        "    'cut_innercut': eval(cut_innercut),\n",
+        "    'cut_ic_pow': cut_ic_pow,\n",
+        "    'cut_icgray_p': eval(cut_icgray_p),\n",
+        "    'intermediate_saves': intermediate_saves,\n",
+        "    'intermediates_in_subfolder': intermediates_in_subfolder,\n",
+        "    'steps_per_checkpoint': steps_per_checkpoint,\n",
+        "    'perlin_init': perlin_init,\n",
+        "    'perlin_mode': perlin_mode,\n",
+        "    'set_seed': set_seed,\n",
+        "    'eta': eta,\n",
+        "    'clamp_grad': clamp_grad,\n",
+        "    'clamp_max': clamp_max,\n",
+        "    'skip_augs': skip_augs,\n",
+        "    'randomize_class': randomize_class,\n",
+        "    'clip_denoised': clip_denoised,\n",
+        "    'fuzzy_prompt': fuzzy_prompt,\n",
+        "    'rand_mag': rand_mag,\n",
+        "}\n",
+        "\n",
+        "args = SimpleNamespace(**args)\n",
+        "\n",
+        "print('Prepping model...')\n",
+        "model, diffusion = create_model_and_diffusion(**model_config)\n",
+        "model.load_state_dict(torch.load(f'{model_path}/{diffusion_model}.pt', map_location='cpu'))\n",
+        "model.requires_grad_(False).eval().to(device)\n",
+        "for name, param in model.named_parameters():\n",
+        "    if 'qkv' in name or 'norm' in name or 'proj' in name:\n",
+        "        param.requires_grad_()\n",
+        "if model_config['use_fp16']:\n",
+        "    model.convert_to_fp16()\n",
+        "\n",
+        "gc.collect()\n",
+        "torch.cuda.empty_cache()\n",
+        "try:\n",
+        "  do_run()\n",
+        "except KeyboardInterrupt:\n",
+        "    pass\n",
+        "finally:\n",
+        "    print('Seed used:', seed)\n",
+        "    gc.collect()\n",
+        "    torch.cuda.empty_cache()"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "EZUg3bfzazgW"
+      },
+      "source": [
+        "# 5. Create the video"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ### **Create video**\n",
+        "#@markdown Video file will save in the same folder as your images.\n",
+        "\n",
+        "skip_video_for_run_all = True #@param {type: 'boolean'}\n",
+        "\n",
+        "if skip_video_for_run_all == False:\n",
+        "  # import subprocess in case this cell is run without the above cells\n",
+        "  import subprocess\n",
+        "  from base64 import b64encode\n",
+        "\n",
+        "  latest_run = batchNum\n",
+        "\n",
+        "  folder = batch_name #@param\n",
+        "  run = latest_run #@param\n",
+        "  final_frame = 'final_frame'\n",
+        "\n",
+        "\n",
+        "  init_frame = 1#@param {type:\"number\"} This is the frame where the video will start\n",
+        "  last_frame = final_frame#@param {type:\"number\"} You can change i to the number of the last frame you want to generate. It will raise an error if that number of frames does not exist.\n",
+        "  fps = 12#@param {type:\"number\"}\n",
+        "  view_video_in_cell = False #@param {type: 'boolean'}\n",
+        "\n",
+        "  frames = []\n",
+        "  # tqdm.write('Generating video...')\n",
+        "\n",
+        "  if last_frame == 'final_frame':\n",
+        "    last_frame = len(glob(batchFolder+f\"/{folder}({run})_*.png\"))\n",
+        "    print(f'Total frames: {last_frame}')\n",
+        "\n",
+        "  image_path = f\"{outDirPath}/{folder}/{folder}({run})_%04d.png\"\n",
+        "  filepath = f\"{outDirPath}/{folder}/{folder}({run}).mp4\"\n",
+        "\n",
+        "\n",
+        "  cmd = [\n",
+        "      'ffmpeg',\n",
+        "      '-y',\n",
+        "      '-vcodec',\n",
+        "      'png',\n",
+        "      '-r',\n",
+        "      str(fps),\n",
+        "      '-start_number',\n",
+        "      str(init_frame),\n",
+        "      '-i',\n",
+        "      image_path,\n",
+        "      '-frames:v',\n",
+        "      str(last_frame+1),\n",
+        "      '-c:v',\n",
+        "      'libx264',\n",
+        "      '-vf',\n",
+        "      f'fps={fps}',\n",
+        "      '-pix_fmt',\n",
+        "      'yuv420p',\n",
+        "      '-crf',\n",
+        "      '17',\n",
+        "      '-preset',\n",
+        "      'veryslow',\n",
+        "      filepath\n",
+        "  ]\n",
+        "\n",
+        "  process = subprocess.Popen(cmd, cwd=f'{batchFolder}', stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n",
+        "  stdout, stderr = process.communicate()\n",
+        "  if process.returncode != 0:\n",
+        "      print(stderr)\n",
+        "      raise RuntimeError(stderr)\n",
+        "  else:\n",
+        "      print(\"The video is ready\")\n",
+        "\n",
+        "  if view_video_in_cell:\n",
+        "      mp4 = open(filepath,'rb').read()\n",
+        "      data_url = \"data:video/mp4;base64,\" + b64encode(mp4).decode()\n",
+        "      display.HTML(\"\"\"\n",
+        "      <video width=400 controls>\n",
+        "            <source src=\"%s\" type=\"video/mp4\">\n",
+        "      </video>\n",
+        "      \"\"\" % data_url)"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "HV54fuU3pMzJ"
+      },
+      "execution_count": null,
+      "outputs": []
+    }
+  ]
+}

+ 1135 - 0
archive/QoL_MP_Diffusion_v2_[w_Secondary_Model_v2].ipynb

@@ -0,0 +1,1135 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "name": "QoL MP Diffusion v2 [w/ Secondary Model v2].ipynb",
+      "private_outputs": true,
+      "provenance": [],
+      "collapsed_sections": [
+        "XTu6AjLyFQUq"
+      ],
+      "machine_shape": "hm"
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    },
+    "accelerator": "GPU"
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "1YwMUyt9LHG1"
+      },
+      "source": [
+        "# Generates images from text prompts with CLIP guided diffusion.\n",
+        "\n",
+        "By Katherine Crowson (https://github.com/crowsonkb, https://twitter.com/RiversHaveWings). It uses either OpenAI's 256x256 unconditional ImageNet or Katherine Crowson's fine-tuned 512x512 diffusion model (https://github.com/openai/guided-diffusion), together with CLIP (https://github.com/openai/CLIP) to connect text prompts with images.\n",
+        "\n",
+        "Modified by Daniel Russell (https://github.com/russelldc, https://twitter.com/danielrussruss) to include (hopefully) optimal params for quick generations in 15-100 timesteps rather than 1000, as well as more robust augmentations.\n",
+        "\n",
+        "**Update**: Sep 19th 2021\n",
+        "\n",
+        "\n",
+        "Further improvements from Dango233 and nsheppard helped improve the quality of diffusion in general, and especially so for shorter runs like this notebook aims to achieve.\n",
+        "\n",
+        "Katherine's original notebook can be found here:\n",
+        "https://colab.research.google.com/drive/1QBsaDAZv8np29FPbvjffbE1eytoJcsgA\n",
+        "\n",
+        "Vark added code to load in multiple Clip models at once, which all prompts are evaluated against, which may greatly improve accuracy.\n",
+        "\n",
+        "--\n",
+        "\n",
+        "I, Somnai (https://twitter.com/Somnai_dreams), have made the following QoL improvements and assorted implementations:\n",
+        "\n",
+        "**Update**: Oct 29th 2021\n",
+        "\n",
+        "QoL improvements added by Somnai (@somnai_dreams), including user friendly UI, settings+prompt saving and improved google drive folder organization.\n",
+        "\n",
+        "**Update**: Nov 13th 2021\n",
+        "\n",
+        "Now includes sizing options, intermediate saves and fixed image prompts and perlin inits. unexposed batch option since it doesn't work\n",
+        "\n",
+        "**Update**: Nov 22nd 2021\n",
+        "\n",
+        "Initial addition of Katherine Crowson's Secondary Model Method (https://colab.research.google.com/drive/1mpkrhOjoyzPeSWy2r7T8EYRaU7amYOOi#scrollTo=X5gODNAMEUCR)\n",
+        "\n",
+        "Noticed settings were saving with the wrong name so corrected it. Let me know if you preferred the old scheme."
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "XTu6AjLyFQUq"
+      },
+      "source": [
+        "#Tutorial"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "YR806W0wi3He"
+      },
+      "source": [
+        "**Diffusion settings**\n",
+        "---\n",
+        "\n",
+        "Setting | Description | Default\n",
+        "--- | --- | ---\n",
+        "**Your vision:**\n",
+        "`text_prompts` | A description of what you'd like the machine to generate. Think of it like writing the caption below your image on a website. | N/A\n",
+        "`image_prompts` | Think of these images more as a description of their contents. | N/A\n",
+        "**Image quality:**\n",
+        "`clip_guidance_scale`  | Controls how much the image should look like the prompt. | 1000\n",
+        "`tv_scale` |  Controls the smoothness of the final output. | 150\n",
+        "`range_scale` |  Controls how far out of range RGB values are allowed to be. | 150\n",
+        "`sat_scale` | Controls how much saturation is allowed. From nshepperd's JAX notebook. | 0\n",
+        "`cutn` | Controls how many crops to take from the image. | 16\n",
+        "`cutn_batches` | Accumulate CLIP gradient from multiple batches of cuts  | 2\n",
+        "**Init settings:**\n",
+        "`init_image` |   URL or local path | None\n",
+        "`init_scale` |  This enhances the effect of the init image, a good value is 1000 | 0\n",
+        "`skip_timesteps` |  Controls the starting point along the diffusion timesteps | 0\n",
+        "`perlin_init` |  Option to start with random perlin noise | False\n",
+        "`perlin_mode` |  ('gray', 'color') | 'mixed'\n",
+        "**Advanced:**\n",
+        "`skip_augs` |Controls whether to skip torchvision augmentations | False\n",
+        "`randomize_class` |Controls whether the imagenet class is randomly changed each iteration | True\n",
+        "`clip_denoised` |Determines whether CLIP discriminates a noisy or denoised image | False\n",
+        "`clamp_grad` |Experimental: Using adaptive clip grad in the cond_fn | True\n",
+        "`seed`  | Choose a random seed and print it at end of run for reproduction | random_seed\n",
+        "`fuzzy_prompt` | Controls whether to add multiple noisy prompts to the prompt losses | False\n",
+        "`rand_mag` |Controls the magnitude of the random noise | 0.1\n",
+        "`eta` | DDIM hyperparameter | 0.5\n",
+        "\n",
+        "..\n",
+        "\n",
+        "**Model settings**\n",
+        "---\n",
+        "\n",
+        "Setting | Description | Default\n",
+        "--- | --- | ---\n",
+        "**Diffusion:**\n",
+        "`timestep_respacing`  | Modify this value to decrease the number of timesteps. | ddim100\n",
+        "`diffusion_steps` || 1000\n",
+        "**Diffusion:**\n",
+        "`clip_models`  | Models of CLIP to load. Typically the more, the better but they all come at a hefty VRAM cost. | ViT-B/32, ViT-B/16, RN50x4"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "_9Eg9Kf5FlfK"
+      },
+      "source": [
+        "# 1. Pre Set Up"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "qZ3rNuAWAewx",
+        "cellView": "form"
+      },
+      "source": [
+        "#@title 1.1 Check GPU Status\n",
+        "!nvidia-smi"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "yZsjzwS0YGo6",
+        "cellView": "form"
+      },
+      "source": [
+        "from google.colab import drive\n",
+        "#@title 1.2 Prepare Folders\n",
+        "#@markdown If you connect your Google Drive, you can save the final image of each run on your drive.\n",
+        "\n",
+        "google_drive = True #@param {type:\"boolean\"}\n",
+        "\n",
+        "#@markdown Click here if you'd like to save the diffusion model checkpoint file to (and/or load from) your Google Drive:\n",
+        "yes_please = True #@param {type:\"boolean\"}\n",
+        "\n",
+        "if google_drive is True:\n",
+        "  drive.mount('/content/drive')\n",
+        "  root_path = '/content/drive/MyDrive/AI/MP_Diffusion'\n",
+        "else:\n",
+        "  root_path = '/content'\n",
+        "\n",
+        "import os\n",
+        "from os import path\n",
+        "#Simple create paths taken with modifications from Datamosh's Batch VQGAN+CLIP notebook\n",
+        "def createPath(filepath):\n",
+        "    if path.exists(filepath) == False:\n",
+        "      os.makedirs(filepath)\n",
+        "      print(f'Made {filepath}')\n",
+        "    else:\n",
+        "      print(f'filepath {filepath} exists.')\n",
+        "\n",
+        "initDirPath = f'{root_path}/init_images'\n",
+        "createPath(initDirPath)\n",
+        "outDirPath = f'{root_path}/images_out'\n",
+        "createPath(outDirPath)\n",
+        "\n",
+        "if google_drive and not yes_please or not google_drive:\n",
+        "    model_path = '/content/models'\n",
+        "    createPath(model_path)\n",
+        "if google_drive and yes_please:\n",
+        "    model_path = f'{root_path}/models'\n",
+        "    createPath(model_path)\n",
+        "# libraries = f'{root_path}/libraries'\n",
+        "# createPath(libraries)\n",
+        "\n"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "otQKpqkGrF2r"
+      },
+      "source": [
+        "#2. Install\n",
+        "\n",
+        "Run this once at the start of your session and after a restart."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "JmbrcrhpBPC6",
+        "cellView": "form"
+      },
+      "source": [
+        "#@title ### 2.1 Install and import dependencies\n",
+        "\n",
+        "if google_drive is not True:\n",
+        "  root_path = f'/content'\n",
+        "  model_path = '/content/' \n",
+        "\n",
+        "!git clone https://github.com/openai/CLIP\n",
+        "!git clone https://github.com/crowsonkb/guided-diffusion\n",
+        "!pip install -e ./CLIP\n",
+        "!pip install -e ./guided-diffusion\n",
+        "!pip install lpips datetime\n",
+        "\n",
+        "from dataclasses import dataclass\n",
+        "from functools import partial\n",
+        "import gc\n",
+        "import io\n",
+        "import math\n",
+        "import sys\n",
+        "from IPython import display\n",
+        "import lpips\n",
+        "from PIL import Image, ImageOps\n",
+        "import requests\n",
+        "from glob import glob\n",
+        "import json\n",
+        "import torch\n",
+        "from torch import nn\n",
+        "from torch.nn import functional as F\n",
+        "import torchvision.transforms as T\n",
+        "import torchvision.transforms.functional as TF\n",
+        "from tqdm.notebook import tqdm\n",
+        "sys.path.append('./CLIP')\n",
+        "sys.path.append('./guided-diffusion')\n",
+        "import clip\n",
+        "from guided_diffusion.script_util import create_model_and_diffusion, model_and_diffusion_defaults\n",
+        "from datetime import datetime\n",
+        "import numpy as np\n",
+        "import matplotlib.pyplot as plt\n",
+        "import random\n",
+        "\n",
+        "import torch\n",
+        "device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')\n",
+        "print('Using device:', device)\n",
+        "\n",
+        "if torch.cuda.get_device_capability(device) == (8,0): ## A100 fix thanks to Emad\n",
+        "  print('Disabling CUDNN for A100 gpu', file=sys.stderr)\n",
+        "  torch.backends.cudnn.enabled = False"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "FpZczxnOnPIU",
+        "cellView": "form"
+      },
+      "source": [
+        "#@title 2.2 Define necessary functions\n",
+        "\n",
+        "# https://gist.github.com/adefossez/0646dbe9ed4005480a2407c62aac8869\n",
+        "\n",
+        "\n",
+        "def interp(t):\n",
+        "    return 3 * t**2 - 2 * t ** 3\n",
+        "\n",
+        "def perlin(width, height, scale=10, device=None):\n",
+        "    gx, gy = torch.randn(2, width + 1, height + 1, 1, 1, device=device)\n",
+        "    xs = torch.linspace(0, 1, scale + 1)[:-1, None].to(device)\n",
+        "    ys = torch.linspace(0, 1, scale + 1)[None, :-1].to(device)\n",
+        "    wx = 1 - interp(xs)\n",
+        "    wy = 1 - interp(ys)\n",
+        "    dots = 0\n",
+        "    dots += wx * wy * (gx[:-1, :-1] * xs + gy[:-1, :-1] * ys)\n",
+        "    dots += (1 - wx) * wy * (-gx[1:, :-1] * (1 - xs) + gy[1:, :-1] * ys)\n",
+        "    dots += wx * (1 - wy) * (gx[:-1, 1:] * xs - gy[:-1, 1:] * (1 - ys))\n",
+        "    dots += (1 - wx) * (1 - wy) * (-gx[1:, 1:] * (1 - xs) - gy[1:, 1:] * (1 - ys))\n",
+        "    return dots.permute(0, 2, 1, 3).contiguous().view(width * scale, height * scale)\n",
+        "\n",
+        "def perlin_ms(octaves, width, height, grayscale, device=device):\n",
+        "    out_array = [0.5] if grayscale else [0.5, 0.5, 0.5]\n",
+        "    # out_array = [0.0] if grayscale else [0.0, 0.0, 0.0]\n",
+        "    for i in range(1 if grayscale else 3):\n",
+        "        scale = 2 ** len(octaves)\n",
+        "        oct_width = width\n",
+        "        oct_height = height\n",
+        "        for oct in octaves:\n",
+        "            p = perlin(oct_width, oct_height, scale, device)\n",
+        "            out_array[i] += p * oct\n",
+        "            scale //= 2\n",
+        "            oct_width *= 2\n",
+        "            oct_height *= 2\n",
+        "    return torch.cat(out_array)\n",
+        "\n",
+        "def create_perlin_noise(octaves=[1, 1, 1, 1], width=2, height=2, grayscale=True):\n",
+        "    out = perlin_ms(octaves, width, height, grayscale)\n",
+        "    if grayscale:\n",
+        "        out = TF.resize(size=(side_y, side_x), img=out.unsqueeze(0))\n",
+        "        out = TF.to_pil_image(out.clamp(0, 1)).convert('RGB')\n",
+        "    else:\n",
+        "        out = out.reshape(-1, 3, out.shape[0]//3, out.shape[1])\n",
+        "        out = TF.resize(size=(side_y, side_x), img=out)\n",
+        "        out = TF.to_pil_image(out.clamp(0, 1).squeeze())\n",
+        "\n",
+        "    out = ImageOps.autocontrast(out)\n",
+        "    return out\n",
+        "\n",
+        "def fetch(url_or_path):\n",
+        "    if str(url_or_path).startswith('http://') or str(url_or_path).startswith('https://'):\n",
+        "        r = requests.get(url_or_path)\n",
+        "        r.raise_for_status()\n",
+        "        fd = io.BytesIO()\n",
+        "        fd.write(r.content)\n",
+        "        fd.seek(0)\n",
+        "        return fd\n",
+        "    return open(url_or_path, 'rb')\n",
+        "\n",
+        "\n",
+        "def parse_prompt(prompt):\n",
+        "    if prompt.startswith('http://') or prompt.startswith('https://'):\n",
+        "        vals = prompt.rsplit(':', 2)\n",
+        "        vals = [vals[0] + ':' + vals[1], *vals[2:]]\n",
+        "    else:\n",
+        "        vals = prompt.rsplit(':', 1)\n",
+        "    vals = vals + ['', '1'][len(vals):]\n",
+        "    return vals[0], float(vals[1])\n",
+        "\n",
+        "def sinc(x):\n",
+        "    return torch.where(x != 0, torch.sin(math.pi * x) / (math.pi * x), x.new_ones([]))\n",
+        "\n",
+        "def lanczos(x, a):\n",
+        "    cond = torch.logical_and(-a < x, x < a)\n",
+        "    out = torch.where(cond, sinc(x) * sinc(x/a), x.new_zeros([]))\n",
+        "    return out / out.sum()\n",
+        "\n",
+        "def ramp(ratio, width):\n",
+        "    n = math.ceil(width / ratio + 1)\n",
+        "    out = torch.empty([n])\n",
+        "    cur = 0\n",
+        "    for i in range(out.shape[0]):\n",
+        "        out[i] = cur\n",
+        "        cur += ratio\n",
+        "    return torch.cat([-out[1:].flip([0]), out])[1:-1]\n",
+        "\n",
+        "def resample(input, size, align_corners=True):\n",
+        "    n, c, h, w = input.shape\n",
+        "    dh, dw = size\n",
+        "\n",
+        "    input = input.reshape([n * c, 1, h, w])\n",
+        "\n",
+        "    if dh < h:\n",
+        "        kernel_h = lanczos(ramp(dh / h, 2), 2).to(input.device, input.dtype)\n",
+        "        pad_h = (kernel_h.shape[0] - 1) // 2\n",
+        "        input = F.pad(input, (0, 0, pad_h, pad_h), 'reflect')\n",
+        "        input = F.conv2d(input, kernel_h[None, None, :, None])\n",
+        "\n",
+        "    if dw < w:\n",
+        "        kernel_w = lanczos(ramp(dw / w, 2), 2).to(input.device, input.dtype)\n",
+        "        pad_w = (kernel_w.shape[0] - 1) // 2\n",
+        "        input = F.pad(input, (pad_w, pad_w, 0, 0), 'reflect')\n",
+        "        input = F.conv2d(input, kernel_w[None, None, None, :])\n",
+        "\n",
+        "    input = input.reshape([n, c, h, w])\n",
+        "    return F.interpolate(input, size, mode='bicubic', align_corners=align_corners)\n",
+        "\n",
+        "class MakeCutouts(nn.Module):\n",
+        "    def __init__(self, cut_size, cutn, skip_augs=False):\n",
+        "        super().__init__()\n",
+        "        self.cut_size = cut_size\n",
+        "        self.cutn = cutn\n",
+        "        self.skip_augs = skip_augs\n",
+        "        self.augs = T.Compose([\n",
+        "            T.RandomHorizontalFlip(p=0.5),\n",
+        "            T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "            T.RandomAffine(degrees=15, translate=(0.1, 0.1)),\n",
+        "            T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "            T.RandomPerspective(distortion_scale=0.4, p=0.7),\n",
+        "            T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "            T.RandomGrayscale(p=0.15),\n",
+        "            T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),\n",
+        "            # T.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1),\n",
+        "        ])\n",
+        "\n",
+        "    def forward(self, input):\n",
+        "        input = T.Pad(input.shape[2]//4, fill=0)(input)\n",
+        "        sideY, sideX = input.shape[2:4]\n",
+        "        max_size = min(sideX, sideY)\n",
+        "\n",
+        "        cutouts = []\n",
+        "        for ch in range(cutn):\n",
+        "            if ch > cutn - cutn//4:\n",
+        "                cutout = input.clone()\n",
+        "            else:\n",
+        "                size = int(max_size * torch.zeros(1,).normal_(mean=.8, std=.3).clip(float(self.cut_size/max_size), 1.))\n",
+        "                offsetx = torch.randint(0, abs(sideX - size + 1), ())\n",
+        "                offsety = torch.randint(0, abs(sideY - size + 1), ())\n",
+        "                cutout = input[:, :, offsety:offsety + size, offsetx:offsetx + size]\n",
+        "\n",
+        "            if not self.skip_augs:\n",
+        "                cutout = self.augs(cutout)\n",
+        "            cutouts.append(resample(cutout, (self.cut_size, self.cut_size)))\n",
+        "            del cutout\n",
+        "\n",
+        "        cutouts = torch.cat(cutouts, dim=0)\n",
+        "        return cutouts\n",
+        "\n",
+        "\n",
+        "def spherical_dist_loss(x, y):\n",
+        "    x = F.normalize(x, dim=-1)\n",
+        "    y = F.normalize(y, dim=-1)\n",
+        "    return (x - y).norm(dim=-1).div(2).arcsin().pow(2).mul(2)\n",
+        "\n",
+        "\n",
+        "def tv_loss(input):\n",
+        "    \"\"\"L2 total variation loss, as in Mahendran et al.\"\"\"\n",
+        "    input = F.pad(input, (0, 1, 0, 1), 'replicate')\n",
+        "    x_diff = input[..., :-1, 1:] - input[..., :-1, :-1]\n",
+        "    y_diff = input[..., 1:, :-1] - input[..., :-1, :-1]\n",
+        "    return (x_diff**2 + y_diff**2).mean([1, 2, 3])\n",
+        "\n",
+        "\n",
+        "def range_loss(input):\n",
+        "    return (input - input.clamp(-1, 1)).pow(2).mean([1, 2, 3])\n",
+        "\n",
+        "\n",
+        "def do_run():\n",
+        "    loss_values = []\n",
+        " \n",
+        "    if seed is not None:\n",
+        "        np.random.seed(seed)\n",
+        "        random.seed(seed)\n",
+        "        torch.manual_seed(seed)\n",
+        "        torch.cuda.manual_seed_all(seed)\n",
+        "        torch.backends.cudnn.deterministic = True\n",
+        " \n",
+        "    target_embeds, weights = [], []\n",
+        "    model_stats = []\n",
+        "    \n",
+        "    for clip_model in clip_models:\n",
+        "        model_stat = {\"clip_model\":None,\"target_embeds\":[],\"make_cutouts\":None,\"weights\":[]}\n",
+        "        model_stat[\"clip_model\"] = clip_model\n",
+        "        model_stat[\"make_cutouts\"] = MakeCutouts(clip_model.visual.input_resolution, cutn, skip_augs=skip_augs)\n",
+        "\n",
+        "        for prompt in text_prompts:\n",
+        "            txt, weight = parse_prompt(prompt)\n",
+        "            txt = clip_model.encode_text(clip.tokenize(prompt).to(device)).float()\n",
+        "\n",
+        "            if fuzzy_prompt:\n",
+        "                for i in range(25):\n",
+        "                    model_stat[\"target_embeds\"].append((txt + torch.randn(txt.shape).cuda() * rand_mag).clamp(0,1))\n",
+        "                    model_stat[\"weights\"].append(weight)\n",
+        "            else:\n",
+        "                model_stat[\"target_embeds\"].append(txt)\n",
+        "                model_stat[\"weights\"].append(weight)\n",
+        "    \n",
+        "        for prompt in image_prompts:\n",
+        "            path, weight = parse_prompt(prompt)\n",
+        "            img = Image.open(fetch(path)).convert('RGB')\n",
+        "            img = TF.resize(img, min(side_x, side_y, *img.size), T.InterpolationMode.LANCZOS)\n",
+        "            batch = model_stat[\"make_cutouts\"](TF.to_tensor(img).to(device).unsqueeze(0).mul(2).sub(1))\n",
+        "            embed = clip_model.encode_image(normalize(batch)).float()\n",
+        "            if fuzzy_prompt:\n",
+        "                for i in range(25):\n",
+        "                    model_stat[\"target_embeds\"].append((embed + torch.randn(embed.shape).cuda() * rand_mag).clamp(0,1))\n",
+        "                    weights.extend([weight / cutn] * cutn)\n",
+        "            else:\n",
+        "                model_stat[\"target_embeds\"].append(embed)\n",
+        "                model_stat[\"weights\"].extend([weight / cutn] * cutn)\n",
+        "    \n",
+        "        model_stat[\"target_embeds\"] = torch.cat(model_stat[\"target_embeds\"])\n",
+        "        model_stat[\"weights\"] = torch.tensor(model_stat[\"weights\"], device=device)\n",
+        "        if model_stat[\"weights\"].sum().abs() < 1e-3:\n",
+        "            raise RuntimeError('The weights must not sum to 0.')\n",
+        "        model_stat[\"weights\"] /= model_stat[\"weights\"].sum().abs()\n",
+        "        model_stats.append(model_stat)\n",
+        " \n",
+        "    init = None\n",
+        "    if init_image is not None:\n",
+        "        init = Image.open(fetch(init_image)).convert('RGB')\n",
+        "        init = init.resize((side_x, side_y), Image.LANCZOS)\n",
+        "        init = TF.to_tensor(init).to(device).unsqueeze(0).mul(2).sub(1)\n",
+        "    \n",
+        "    if perlin_init:\n",
+        "        if perlin_mode == 'color':\n",
+        "            init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, False)\n",
+        "            init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, False)\n",
+        "        elif perlin_mode == 'gray':\n",
+        "           init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, True)\n",
+        "           init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, True)\n",
+        "        else:\n",
+        "           init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, False)\n",
+        "           init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, True)\n",
+        "        \n",
+        "        # init = TF.to_tensor(init).add(TF.to_tensor(init2)).div(2).to(device)\n",
+        "        init = TF.to_tensor(init).add(TF.to_tensor(init2)).div(2).to(device).unsqueeze(0).mul(2).sub(1)\n",
+        "        del init2\n",
+        " \n",
+        "    cur_t = None\n",
+        " \n",
+        "    def cond_fn(x, t, y=None):\n",
+        "        with torch.enable_grad():\n",
+        "            x = x.detach().requires_grad_()\n",
+        "            n = x.shape[0]\n",
+        "            if use_secondary_model is True:\n",
+        "              alpha = torch.tensor(diffusion.sqrt_alphas_cumprod[cur_t], device=device, dtype=torch.float32)\n",
+        "              sigma = torch.tensor(diffusion.sqrt_one_minus_alphas_cumprod[cur_t], device=device, dtype=torch.float32)\n",
+        "              cosine_t = alpha_sigma_to_t(alpha, sigma)\n",
+        "              out = secondary_model(x, cosine_t[None].repeat([n])).pred\n",
+        "              fac = diffusion.sqrt_one_minus_alphas_cumprod[cur_t]\n",
+        "              x_in = out * fac + x * (1 - fac)\n",
+        "              x_in_grad = torch.zeros_like(x_in)\n",
+        "            else:\n",
+        "              my_t = torch.ones([n], device=device, dtype=torch.long) * cur_t\n",
+        "              out = diffusion.p_mean_variance(model, x, my_t, clip_denoised=False, model_kwargs={'y': y})\n",
+        "              fac = diffusion.sqrt_one_minus_alphas_cumprod[cur_t]\n",
+        "              x_in = out['pred_xstart'] * fac + x * (1 - fac)\n",
+        "              x_in_grad = torch.zeros_like(x_in)\n",
+        "            for model_stat in model_stats:\n",
+        "              for i in range(cutn_batches):\n",
+        "                  clip_in = normalize(model_stat[\"make_cutouts\"](x_in.add(1).div(2)))\n",
+        "                  image_embeds = model_stat[\"clip_model\"].encode_image(clip_in).float()\n",
+        "                  dists = spherical_dist_loss(image_embeds.unsqueeze(1), model_stat[\"target_embeds\"].unsqueeze(0))\n",
+        "                  dists = dists.view([cutn, n, -1])\n",
+        "                  losses = dists.mul(model_stat[\"weights\"]).sum(2).mean(0)\n",
+        "                  loss_values.append(losses.sum().item()) # log loss, probably shouldn't do per cutn_batch\n",
+        "                  x_in_grad += torch.autograd.grad(losses.sum() * clip_guidance_scale, x_in)[0] / cutn_batches\n",
+        "            tv_losses = tv_loss(x_in)\n",
+        "            if use_secondary_model is True:\n",
+        "              range_losses = range_loss(out)\n",
+        "            else:\n",
+        "              range_losses = range_loss(out['pred_xstart'])\n",
+        "            sat_losses = torch.abs(x_in - x_in.clamp(min=-1,max=1)).mean()\n",
+        "            loss = tv_losses.sum() * tv_scale + range_losses.sum() * range_scale + sat_losses.sum() * sat_scale\n",
+        "            if init is not None and init_scale:\n",
+        "                init_losses = lpips_model(x_in, init)\n",
+        "                loss = loss + init_losses.sum() * init_scale\n",
+        "            x_in_grad += torch.autograd.grad(loss, x_in)[0]\n",
+        "            grad = -torch.autograd.grad(x_in, x, x_in_grad)[0]\n",
+        "        if clamp_grad:\n",
+        "            magnitude = grad.square().mean().sqrt()\n",
+        "            return grad * magnitude.clamp(max=0.05) / magnitude\n",
+        "        return grad\n",
+        " \n",
+        "    if model_config['timestep_respacing'].startswith('ddim'):\n",
+        "        sample_fn = diffusion.ddim_sample_loop_progressive\n",
+        "    else:\n",
+        "        sample_fn = diffusion.p_sample_loop_progressive\n",
+        " \n",
+        "    for i in range(n_batches):\n",
+        "        cur_t = diffusion.num_timesteps - skip_timesteps - 1\n",
+        "        total_steps = cur_t\n",
+        " \n",
+        "        if model_config['timestep_respacing'].startswith('ddim'):\n",
+        "            samples = sample_fn(\n",
+        "                model,\n",
+        "                (batch_size, 3, side_y, side_x),\n",
+        "                clip_denoised=clip_denoised,\n",
+        "                model_kwargs={},\n",
+        "                cond_fn=cond_fn,\n",
+        "                progress=True,\n",
+        "                skip_timesteps=skip_timesteps,\n",
+        "                init_image=init,\n",
+        "                randomize_class=randomize_class,\n",
+        "                eta=eta,\n",
+        "            )\n",
+        "        else:\n",
+        "            samples = sample_fn(\n",
+        "                model,\n",
+        "                (batch_size, 3, side_y, side_x),\n",
+        "                clip_denoised=clip_denoised,\n",
+        "                model_kwargs={},\n",
+        "                cond_fn=cond_fn,\n",
+        "                progress=True,\n",
+        "                skip_timesteps=skip_timesteps,\n",
+        "                init_image=init,\n",
+        "                randomize_class=randomize_class,\n",
+        "            )\n",
+        "\n",
+        "        for j, sample in enumerate(samples):\n",
+        "            display.clear_output(wait=True)\n",
+        "            cur_t -= 1\n",
+        "            intermediateStep = False\n",
+        "            if steps_per_checkpoint is not None:\n",
+        "                if j % steps_per_checkpoint == 0 and j > 0:\n",
+        "                  intermediateStep = True\n",
+        "            elif j in intermediate_saves:\n",
+        "              intermediateStep = True\n",
+        "            if j % display_rate == 0 or cur_t == -1 or intermediateStep == True:\n",
+        "                for k, image in enumerate(sample['pred_xstart']):\n",
+        "                    tqdm.write(f'Batch {i}, step {j}, output {k}:')\n",
+        "                    current_time = datetime.now().strftime('%y%m%d-%H%M%S_%f')\n",
+        "                    percent = math.ceil(j/total_steps*100)\n",
+        "                    if n_batches > 0:\n",
+        "                      #if intermediates are saved to the subfolder, don't append a step or percentage to the name\n",
+        "                      if cur_t == -1 and intermediates_in_subfolder is True:\n",
+        "                        filename = f'{batch_name}({batchNum})_{i:04}.png'\n",
+        "                      else:\n",
+        "                        #If we're working with percentages, append it\n",
+        "                        if steps_per_checkpoint is not None:\n",
+        "                          filename = f'{batch_name}({batchNum})_{i:04}-{percent:02}%.png'\n",
+        "                        # Or else, iIf we're working with specific steps, append those\n",
+        "                        else:\n",
+        "                          filename = f'{batch_name}({batchNum})_{i:04}-{j:03}.png'\n",
+        "                    image = TF.to_pil_image(image.add(1).div(2).clamp(0, 1))\n",
+        "                    image.save('progress.png')\n",
+        "                    display.display(display.Image('progress.png'))\n",
+        "                    if steps_per_checkpoint is not None:\n",
+        "                      if j % steps_per_checkpoint == 0 and j > 0:\n",
+        "                        if intermediates_in_subfolder is True:\n",
+        "                          image.save(f'{partialFolder}/{filename}')\n",
+        "                        else:\n",
+        "                          image.save(f'{batchFolder}/{filename}')\n",
+        "                    else:\n",
+        "                      if j in intermediate_saves:\n",
+        "                        if intermediates_in_subfolder is True:\n",
+        "                          image.save(f'{partialFolder}/{filename}')\n",
+        "                        else:\n",
+        "                          image.save(f'{batchFolder}/{filename}')\n",
+        "                    if cur_t == -1:\n",
+        "                      if i == 0:\n",
+        "                        save_settings()\n",
+        "                      image.save(f'{batchFolder}/{filename}')\n",
+        " \n",
+        "        plt.plot(np.array(loss_values), 'r')\n",
+        "\n",
+        "def save_settings():\n",
+        "  setting_list = {\n",
+        "    'text_prompts': text_prompts,\n",
+        "    'image_prompts': image_prompts,\n",
+        "    'clip_guidance_scale': clip_guidance_scale,\n",
+        "    'tv_scale': tv_scale,\n",
+        "    'range_scale': range_scale,\n",
+        "    'sat_scale': sat_scale,\n",
+        "    'cutn': cutn,\n",
+        "    'cutn_batches': cutn_batches,\n",
+        "    'init_image': init_image,\n",
+        "    'init_scale': init_scale,\n",
+        "    'skip_timesteps': skip_timesteps,\n",
+        "    'perlin_init': perlin_init,\n",
+        "    'perlin_mode': perlin_mode,\n",
+        "    'skip_augs': skip_augs,\n",
+        "    'randomize_class': randomize_class,\n",
+        "    'clip_denoised': clip_denoised,\n",
+        "    'clamp_grad': clamp_grad,\n",
+        "    'seed': seed,\n",
+        "    'fuzzy_prompt': fuzzy_prompt,\n",
+        "    'rand_mag': rand_mag,\n",
+        "    'eta': eta,\n",
+        "    'width': width,\n",
+        "    'height': height,\n",
+        "    'diffusion_model': diffusion_model,\n",
+        "    'use_secondary_model': use_secondary_model,\n",
+        "    'timestep_respacing': timestep_respacing,\n",
+        "    'timestep_respacing': timestep_respacing,\n",
+        "    'diffusion_steps': diffusion_steps,\n",
+        "    'ViTB32': ViTB32,\n",
+        "    'ViTB16': ViTB16,\n",
+        "    'RN101': RN101,\n",
+        "    'RN50': RN50,\n",
+        "    'RN50x4': RN50x4,\n",
+        "    'RN50x16': RN50x16,\n",
+        "  }\n",
+        "  print('Settings:', setting_list)\n",
+        "  with open(f\"{batchFolder}/{batch_name}({batchNum})_settings.txt\", \"w+\") as f:   #save settings\n",
+        "    json.dump(setting_list, f, ensure_ascii=False, indent=4)\n",
+        "  "
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "cellView": "form",
+        "id": "TI4oAu0N4ksZ"
+      },
+      "source": [
+        "#@title 2.3 Define the secondary diffusion model\n",
+        "\n",
+        "def append_dims(x, n):\n",
+        "    return x[(Ellipsis, *(None,) * (n - x.ndim))]\n",
+        "\n",
+        "\n",
+        "def expand_to_planes(x, shape):\n",
+        "    return append_dims(x, len(shape)).repeat([1, 1, *shape[2:]])\n",
+        "\n",
+        "\n",
+        "def alpha_sigma_to_t(alpha, sigma):\n",
+        "    return torch.atan2(sigma, alpha) * 2 / math.pi\n",
+        "\n",
+        "\n",
+        "def t_to_alpha_sigma(t):\n",
+        "    return torch.cos(t * math.pi / 2), torch.sin(t * math.pi / 2)\n",
+        "\n",
+        "\n",
+        "@dataclass\n",
+        "class DiffusionOutput:\n",
+        "    v: torch.Tensor\n",
+        "    pred: torch.Tensor\n",
+        "    eps: torch.Tensor\n",
+        "\n",
+        "\n",
+        "class ConvBlock(nn.Sequential):\n",
+        "    def __init__(self, c_in, c_out):\n",
+        "        super().__init__(\n",
+        "            nn.Conv2d(c_in, c_out, 3, padding=1),\n",
+        "            nn.ReLU(inplace=True),\n",
+        "        )\n",
+        "\n",
+        "\n",
+        "class SkipBlock(nn.Module):\n",
+        "    def __init__(self, main, skip=None):\n",
+        "        super().__init__()\n",
+        "        self.main = nn.Sequential(*main)\n",
+        "        self.skip = skip if skip else nn.Identity()\n",
+        "\n",
+        "    def forward(self, input):\n",
+        "        return torch.cat([self.main(input), self.skip(input)], dim=1)\n",
+        "\n",
+        "\n",
+        "class FourierFeatures(nn.Module):\n",
+        "    def __init__(self, in_features, out_features, std=1.):\n",
+        "        super().__init__()\n",
+        "        assert out_features % 2 == 0\n",
+        "        self.weight = nn.Parameter(torch.randn([out_features // 2, in_features]) * std)\n",
+        "\n",
+        "    def forward(self, input):\n",
+        "        f = 2 * math.pi * input @ self.weight.T\n",
+        "        return torch.cat([f.cos(), f.sin()], dim=-1)\n",
+        "\n",
+        "\n",
+        "class SecondaryDiffusionImageNet(nn.Module):\n",
+        "    def __init__(self):\n",
+        "        super().__init__()\n",
+        "        c = 64  # The base channel count\n",
+        "\n",
+        "        self.timestep_embed = FourierFeatures(1, 16)\n",
+        "\n",
+        "        self.net = nn.Sequential(\n",
+        "            ConvBlock(3 + 16, c),\n",
+        "            ConvBlock(c, c),\n",
+        "            SkipBlock([\n",
+        "                nn.AvgPool2d(2),\n",
+        "                ConvBlock(c, c * 2),\n",
+        "                ConvBlock(c * 2, c * 2),\n",
+        "                SkipBlock([\n",
+        "                    nn.AvgPool2d(2),\n",
+        "                    ConvBlock(c * 2, c * 4),\n",
+        "                    ConvBlock(c * 4, c * 4),\n",
+        "                    SkipBlock([\n",
+        "                        nn.AvgPool2d(2),\n",
+        "                        ConvBlock(c * 4, c * 8),\n",
+        "                        ConvBlock(c * 8, c * 4),\n",
+        "                        nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),\n",
+        "                    ]),\n",
+        "                    ConvBlock(c * 8, c * 4),\n",
+        "                    ConvBlock(c * 4, c * 2),\n",
+        "                    nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),\n",
+        "                ]),\n",
+        "                ConvBlock(c * 4, c * 2),\n",
+        "                ConvBlock(c * 2, c),\n",
+        "                nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),\n",
+        "            ]),\n",
+        "            ConvBlock(c * 2, c),\n",
+        "            nn.Conv2d(c, 3, 3, padding=1),\n",
+        "        )\n",
+        "\n",
+        "    def forward(self, input, t):\n",
+        "        timestep_embed = expand_to_planes(self.timestep_embed(t[:, None]), input.shape)\n",
+        "        v = self.net(torch.cat([input, timestep_embed], dim=1))\n",
+        "        alphas, sigmas = map(partial(append_dims, n=v.ndim), t_to_alpha_sigma(t))\n",
+        "        pred = input * alphas - v * sigmas\n",
+        "        eps = input * sigmas + v * alphas\n",
+        "        return DiffusionOutput(v, pred, eps)\n",
+        "\n",
+        "\n",
+        "class SecondaryDiffusionImageNet2(nn.Module):\n",
+        "    def __init__(self):\n",
+        "        super().__init__()\n",
+        "        c = 64  # The base channel count\n",
+        "        cs = [c, c * 2, c * 2, c * 4, c * 4, c * 8]\n",
+        "\n",
+        "        self.timestep_embed = FourierFeatures(1, 16)\n",
+        "        self.down = nn.AvgPool2d(2)\n",
+        "        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)\n",
+        "\n",
+        "        self.net = nn.Sequential(\n",
+        "            ConvBlock(3 + 16, cs[0]),\n",
+        "            ConvBlock(cs[0], cs[0]),\n",
+        "            SkipBlock([\n",
+        "                self.down,\n",
+        "                ConvBlock(cs[0], cs[1]),\n",
+        "                ConvBlock(cs[1], cs[1]),\n",
+        "                SkipBlock([\n",
+        "                    self.down,\n",
+        "                    ConvBlock(cs[1], cs[2]),\n",
+        "                    ConvBlock(cs[2], cs[2]),\n",
+        "                    SkipBlock([\n",
+        "                        self.down,\n",
+        "                        ConvBlock(cs[2], cs[3]),\n",
+        "                        ConvBlock(cs[3], cs[3]),\n",
+        "                        SkipBlock([\n",
+        "                            self.down,\n",
+        "                            ConvBlock(cs[3], cs[4]),\n",
+        "                            ConvBlock(cs[4], cs[4]),\n",
+        "                            SkipBlock([\n",
+        "                                self.down,\n",
+        "                                ConvBlock(cs[4], cs[5]),\n",
+        "                                ConvBlock(cs[5], cs[5]),\n",
+        "                                ConvBlock(cs[5], cs[5]),\n",
+        "                                ConvBlock(cs[5], cs[4]),\n",
+        "                                self.up,\n",
+        "                            ]),\n",
+        "                            ConvBlock(cs[4] * 2, cs[4]),\n",
+        "                            ConvBlock(cs[4], cs[3]),\n",
+        "                            self.up,\n",
+        "                        ]),\n",
+        "                        ConvBlock(cs[3] * 2, cs[3]),\n",
+        "                        ConvBlock(cs[3], cs[2]),\n",
+        "                        self.up,\n",
+        "                    ]),\n",
+        "                    ConvBlock(cs[2] * 2, cs[2]),\n",
+        "                    ConvBlock(cs[2], cs[1]),\n",
+        "                    self.up,\n",
+        "                ]),\n",
+        "                ConvBlock(cs[1] * 2, cs[1]),\n",
+        "                ConvBlock(cs[1], cs[0]),\n",
+        "                self.up,\n",
+        "            ]),\n",
+        "            ConvBlock(cs[0] * 2, cs[0]),\n",
+        "            nn.Conv2d(cs[0], 3, 3, padding=1),\n",
+        "        )\n",
+        "\n",
+        "    def forward(self, input, t):\n",
+        "        timestep_embed = expand_to_planes(self.timestep_embed(t[:, None]), input.shape)\n",
+        "        v = self.net(torch.cat([input, timestep_embed], dim=1))\n",
+        "        alphas, sigmas = map(partial(append_dims, n=v.ndim), t_to_alpha_sigma(t))\n",
+        "        pred = input * alphas - v * sigmas\n",
+        "        eps = input * sigmas + v * alphas\n",
+        "        return DiffusionOutput(v, pred, eps)\n"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "CQVtY1Ixnqx4"
+      },
+      "source": [
+        "# 3. Diffusion and CLIP model settings"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "Fpbody2NCR7w",
+        "cellView": "form"
+      },
+      "source": [
+        "diffusion_model = \"512x512_diffusion_uncond_finetune_008100\" #@param [\"256x256_diffusion_uncond\", \"512x512_diffusion_uncond_finetune_008100\"]\n",
+        "\n",
+        "\n",
+        "if diffusion_model == '256x256_diffusion_uncond':\n",
+        "    !wget --continue 'https://openaipublic.blob.core.windows.net/diffusion/jul-2021/256x256_diffusion_uncond.pt' -P {model_path}\n",
+        "elif diffusion_model == '512x512_diffusion_uncond_finetune_008100':\n",
+        "    !wget --continue 'http://batbot.tv/ai/models/guided-diffusion/512x512_diffusion_uncond_finetune_008100.pt' -P {model_path}\n",
+        "\n",
+        "use_secondary_model = True #@param {type: 'boolean'}\n",
+        "\n",
+        "# Download the secondary diffusion model v2\n",
+        "# SHA-256: 983e3de6f95c88c81b2ca7ebb2c217933be1973b1ff058776b970f901584613a \n",
+        "if use_secondary_model == True:\n",
+        "  !wget --continue  'https://v-diffusion.s3.us-west-2.amazonaws.com/secondary_model_imagenet_2.pth' -P {model_path}\n",
+        "\n",
+        "\n",
+        "timestep_respacing = 'ddim250' #@param ['25','50','100','150','250','500','1000','ddim25','ddim50', 'ddim75', 'ddim100','ddim150','ddim250','ddim500','ddim1000']  \n",
+        "diffusion_steps = 1000 #@param {type: 'number'}\n",
+        "ViTB32 = True #@param{type:\"boolean\"}\n",
+        "ViTB16 = True #@param{type:\"boolean\"}\n",
+        "RN101 = False #@param{type:\"boolean\"}\n",
+        "RN50 = False #@param{type:\"boolean\"}\n",
+        "RN50x4 = False #@param{type:\"boolean\"}\n",
+        "RN50x16 = False #@param{type:\"boolean\"}\n",
+        "# ViTL = True #@param{type:\"boolean\"}\n",
+        "#@markdown *RN50x16 for A100 only*\n",
+        "\n",
+        "\n",
+        "model_config = model_and_diffusion_defaults()\n",
+        "if diffusion_model == '512x512_diffusion_uncond_finetune_008100':\n",
+        "    model_config.update({\n",
+        "        'attention_resolutions': '32, 16, 8',\n",
+        "        'class_cond': False,\n",
+        "        'diffusion_steps': diffusion_steps,\n",
+        "        'rescale_timesteps': True,\n",
+        "        'timestep_respacing': timestep_respacing,\n",
+        "        'image_size': 512,\n",
+        "        'learn_sigma': True,\n",
+        "        'noise_schedule': 'linear',\n",
+        "        'num_channels': 256,\n",
+        "        'num_head_channels': 64,\n",
+        "        'num_res_blocks': 2,\n",
+        "        'resblock_updown': True,\n",
+        "        'use_checkpoint': True,\n",
+        "        'use_fp16': True,\n",
+        "        'use_scale_shift_norm': True,\n",
+        "    })\n",
+        "elif diffusion_model == '256x256_diffusion_uncond':\n",
+        "    model_config.update({\n",
+        "        'attention_resolutions': '32, 16, 8',\n",
+        "        'class_cond': False,\n",
+        "        'diffusion_steps': diffusion_steps,\n",
+        "        'rescale_timesteps': True,\n",
+        "        'timestep_respacing': timestep_respacing,\n",
+        "        'image_size': 256,\n",
+        "        'learn_sigma': True,\n",
+        "        'noise_schedule': 'linear',\n",
+        "        'num_channels': 256,\n",
+        "        'num_head_channels': 64,\n",
+        "        'num_res_blocks': 2,\n",
+        "        'resblock_updown': True,\n",
+        "        'use_checkpoint': True,\n",
+        "        'use_fp16': True,\n",
+        "        'use_scale_shift_norm': True,\n",
+        "    })\n",
+        "\n",
+        "secondary_model_ver = 2\n",
+        "model_default = model_config['image_size']\n",
+        "\n",
+        "model, diffusion = create_model_and_diffusion(**model_config)\n",
+        "model.load_state_dict(torch.load(f'{model_path}/{diffusion_model}.pt', map_location='cpu'))\n",
+        "model.requires_grad_(False).eval().to(device)\n",
+        "for name, param in model.named_parameters():\n",
+        "    if 'qkv' in name or 'norm' in name or 'proj' in name:\n",
+        "        param.requires_grad_()\n",
+        "if model_config['use_fp16']:\n",
+        "    model.convert_to_fp16()\n",
+        "\n",
+        "if secondary_model_ver == 2:\n",
+        "    secondary_model = SecondaryDiffusionImageNet2()\n",
+        "    secondary_model.load_state_dict(torch.load(f'{model_path}/secondary_model_imagenet_2.pth', map_location='cpu'))\n",
+        "secondary_model.eval().requires_grad_(False).to(device)\n",
+        "\n",
+        "clip_models = []\n",
+        "if ViTB32 is True: clip_models.append(clip.load('ViT-B/32', jit=False)[0].eval().requires_grad_(False).to(device)) \n",
+        "if ViTB16 is True: clip_models.append(clip.load('ViT-B/16', jit=False)[0].eval().requires_grad_(False).to(device) ) \n",
+        "if RN50 is True: clip_models.append(clip.load('RN50', jit=False)[0].eval().requires_grad_(False).to(device))\n",
+        "if RN50x4 is True: clip_models.append(clip.load('RN50x4', jit=False)[0].eval().requires_grad_(False).to(device)) \n",
+        "if RN50x16 is True: clip_models.append(clip.load('RN50x16', jit=False)[0].eval().requires_grad_(False).to(device)) \n",
+        "if RN101 is True: clip_models.append(clip.load('RN101', jit=False)[0].eval().requires_grad_(False).to(device)) \n",
+        "# if ViTL is True: clip_models.append(load('ViT-L', jit=False)[0].eval().requires_grad_(False).to(device)) \n",
+        "\n",
+        "normalize = T.Normalize(mean=[0.48145466, 0.4578275, 0.40821073], std=[0.26862954, 0.26130258, 0.27577711])\n",
+        "lpips_model = lpips.LPIPS(net='vgg').to(device)"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "kjtsXaszn-bB"
+      },
+      "source": [
+        "# 4. Settings"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "U0PwzFZbLfcy",
+        "cellView": "form"
+      },
+      "source": [
+        "#@markdown ####**Basic Settings:**\n",
+        "\n",
+        "clip_guidance_scale = 5000 #@param{type: 'number'}\n",
+        "tv_scale =  8000#@param{type: 'number'}\n",
+        "range_scale = 150  #@param{type: 'number'}\n",
+        "sat_scale = 0  #@param{type: 'number'}\n",
+        "cutn = 16  #@param{type: 'number'}\n",
+        "cutn_batches = 2  #@param{type: 'number'}\n",
+        "\n",
+        "init_image = '' #@param{type: 'string'}\n",
+        "init_scale =   200#@param{type: 'number'}\n",
+        "skip_timesteps = 0  #@param{type: 'number'}\n",
+        "\n",
+        "#@markdown Size must be multiple of 64. Leave as `model_default` for default sizes. \n",
+        "width = model_default#@param{type: 'raw'}\n",
+        "height = model_default#@param{type: 'raw'}\n",
+        "\n",
+        "#@markdown ---\n",
+        "\n",
+        "#@markdown ####**Saving:**\n",
+        "batch_name = 'Test' #@param{type: 'string'}\n",
+        "intermediate_saves =  5#@param{type: 'raw'}\n",
+        "intermediates_in_subfolder = True #@param{type: 'boolean'}\n",
+        "#@markdown Intermediate steps will save a copy at your specified intervals. You can either format it as a single integer or a list of specific steps \n",
+        "\n",
+        "#@markdown A value of `2` will save a copy at 33% and 66%. 0 will save none.\n",
+        "\n",
+        "#@markdown A value of `[5, 9, 34, 45]` will save at steps 5, 9, 34, and 45. (Make sure to include the brackets)\n",
+        "\n",
+        "\n",
+        "\n",
+        "#@markdown ---\n",
+        "\n",
+        "#@markdown ####**Advanced Settings:**\n",
+        "#@markdown *Perlin init will replace your init, so uncheck if using one.*\n",
+        "\n",
+        "perlin_init = True  #@param{type: 'boolean'}\n",
+        "perlin_mode = 'mixed' \n",
+        "\n",
+        "skip_augs = False #@param{type: 'boolean'}\n",
+        "randomize_class = True #@param{type: 'boolean'}\n",
+        "clip_denoised = False #@param{type: 'boolean'}\n",
+        "clamp_grad = True #@param{type: 'boolean'}\n",
+        "\n",
+        "seed = 'random_seed' #@param{type: 'string'}\n",
+        "\n",
+        "fuzzy_prompt = False #@param{type: 'boolean'}\n",
+        "rand_mag = 0.05  #@param{type: 'number'}\n",
+        "eta =   1#@param{type: 'number'}\n",
+        "\n",
+        "if type(intermediate_saves) is not list:\n",
+        "  steps_per_checkpoint = math.floor((diffusion.num_timesteps - skip_timesteps - 1) // (intermediate_saves+1))\n",
+        "  steps_per_checkpoint = steps_per_checkpoint if steps_per_checkpoint > 0 else 1\n",
+        "  print(f'Will save every {steps_per_checkpoint} steps')\n",
+        "else:\n",
+        "  steps_per_checkpoint = None\n",
+        "\n",
+        "\n",
+        "if init_image == '':\n",
+        "  init_image = None\n",
+        "\n",
+        "side_x = width;\n",
+        "side_y = height;\n",
+        "\n",
+        "#Make folder for batch\n",
+        "batchFolder = f'{outDirPath}/{batch_name}'\n",
+        "createPath(batchFolder)\n",
+        "\n",
+        "partialFolder = f'{batchFolder}/partials'\n",
+        "createPath(partialFolder)"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "XIwh5RvNpk4K"
+      },
+      "source": [
+        "##Prompts"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "BGBzhk3dpcGO"
+      },
+      "source": [
+        "text_prompts = [\n",
+        "    \"A lost treasure found in the depths of atlantis by greg ruktowski, trending on artstation\",\n",
+        "]\n",
+        "\n",
+        "image_prompts = [ \n",
+        "    # 'mona.jpg',\n",
+        "]"
+      ],
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "Nf9hTc8YLoLx"
+      },
+      "source": [
+        "# 5. Diffuse!"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "metadata": {
+        "id": "LHLiO56OfwgD",
+        "cellView": "form"
+      },
+      "source": [
+        "#@title Do the Run!\n",
+        "\n",
+        "display_rate = 2 #@param{type: 'number'}\n",
+        "n_batches =  5#@param{type: 'number'}\n",
+        "batch_size = 1 \n",
+        "\n",
+        "batchNum = len(glob(batchFolder+\"/*.txt\"))\n",
+        "\n",
+        "while path.isfile(f\"{batchFolder}/{batch_name}({batchNum})_settings.txt\") is True or path.isfile(f\"{batchFolder}/{batch_name}-{batchNum}_settings.txt\") is True:\n",
+        "  batchNum += 1\n",
+        "\n",
+        "if seed == 'random_seed':\n",
+        "    seed = random.randint(0, 2**32)\n",
+        "else:\n",
+        "  seed = int(seed)\n",
+        "\n",
+        "gc.collect()\n",
+        "torch.cuda.empty_cache()\n",
+        "try:    \n",
+        "    do_run()\n",
+        "except KeyboardInterrupt:\n",
+        "    pass\n",
+        "finally:\n",
+        "    print('seed', seed)\n",
+        "    gc.collect()\n",
+        "    torch.cuda.empty_cache()"
+      ],
+      "execution_count": null,
+      "outputs": []
+    }
+  ]
+}

+ 2623 - 0
disco.py

@@ -0,0 +1,2623 @@
+# %%
+# !! {"metadata": {
+# !!   "id": "view-in-github",
+# !!   "colab_type": "text"
+# !! }}
+"""
+<a href="https://colab.research.google.com/github/alembics/disco-diffusion/blob/main/Disco_Diffusion.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
+"""
+
+# %%
+# !! {"metadata": {
+# !!    "id": "TitleTop"
+# !! }}
+"""
+# Disco Diffusion v5.2 - Now with VR Mode
+
+In case of confusion, Disco is the name of this notebook edit. The diffusion model in use is Katherine Crowson's fine-tuned 512x512 model
+
+For issues, join the [Disco Diffusion Discord](https://discord.gg/msEZBy4HxA) or message us on twitter at [@somnai_dreams](https://twitter.com/somnai_dreams) or [@gandamu](https://twitter.com/gandamu_ml)
+"""
+
+# %%
+# !! {"metadata": {
+# !!   "id": "CreditsChTop"
+# !! }}
+"""
+### Credits & Changelog ⬇️
+"""
+
+# %%
+# !! {"metadata": {
+# !!   "id": "Credits"
+# !! }}
+"""
+#### Credits
+
+Original notebook by Katherine Crowson (https://github.com/crowsonkb, https://twitter.com/RiversHaveWings). It uses either OpenAI's 256x256 unconditional ImageNet diffusion model or Katherine Crowson's fine-tuned 512x512 model (https://github.com/openai/guided-diffusion), together with CLIP (https://github.com/openai/CLIP) to connect text prompts with images.
+
+Modified by Daniel Russell (https://github.com/russelldc, https://twitter.com/danielrussruss) to include (hopefully) optimal params for quick generations in 15-100 timesteps rather than 1000, as well as more robust augmentations.
+
+Further improvements from Dango233 and nsheppard helped improve the quality of diffusion in general, and especially so for shorter runs like this notebook aims to achieve.
+
+Vark added code to load multiple CLIP models at once; all prompts are evaluated against each loaded model, which may greatly improve accuracy.
+
+The latest zoom, pan, rotation, and keyframes features were taken from Chigozie Nri's VQGAN Zoom Notebook (https://github.com/chigozienri, https://twitter.com/chigozienri)
+
+The advanced DangoCutn cutout method is also from Dango233.
+
+--
+
+Disco:
+
+Somnai (https://twitter.com/Somnai_dreams) added Diffusion Animation techniques, QoL improvements and various implementations of tech and techniques, mostly listed in the changelog below.
+
+3D animation implementation added by Adam Letts (https://twitter.com/gandamu_ml) in collaboration with Somnai. Creation of disco.py and ongoing maintenance.
+
+Turbo feature by Chris Allen (https://twitter.com/zippy731)
+
+Improvements to the ability to run on local systems, Windows support, and dependency installation by HostsServer (https://twitter.com/HostsServer)
+
+VR Mode by Tom Mason (https://twitter.com/nin_artificial)
+
+"""
+
+# %%
+# !! {"metadata": {
+# !!   "id": "LicenseTop"
+# !! }}
+"""
+#### License
+"""
+
+# %%
+# !! {"metadata": {
+# !!  "id": "License"
+# !!  }}
+"""
+Licensed under the MIT License
+
+Copyright (c) 2021 Katherine Crowson 
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
+
+--
+
+MIT License
+
+Copyright (c) 2019 Intel ISL (Intel Intelligent Systems Lab)
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
+--
+
+Licensed under the MIT License
+
+Copyright (c) 2021 Maxwell Ingham
+
+Copyright (c) 2022 Adam Letts 
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
+"""
+
+# %%
+# !! {"metadata": {
+# !!   "id": "ChangelogTop"
+# !! }}
+"""
+#### Changelog
+"""
+
+# %%
+# !! {"metadata": {
+# !!   "cellView": "form",
+# !!    "id": "Changelog"
+# !! }}
+#@title <- View Changelog
+skip_for_run_all = True #@param {type: 'boolean'}
+
+if skip_for_run_all == False:
+  print(
+      '''
+  v1 Update: Oct 29th 2021 - Somnai
+
+      QoL improvements added by Somnai (@somnai_dreams), including user friendly UI, settings+prompt saving and improved google drive folder organization.
+
+  v1.1 Update: Nov 13th 2021 - Somnai
+
+      Now includes sizing options and intermediate saves, and fixes image prompts and perlin inits. The batch option is left unexposed since it doesn't work.
+
+  v2 Update: Nov 22nd 2021 - Somnai
+
+      Initial addition of Katherine Crowson's Secondary Model Method (https://colab.research.google.com/drive/1mpkrhOjoyzPeSWy2r7T8EYRaU7amYOOi#scrollTo=X5gODNAMEUCR)
+
+      Noticed settings were saving with the wrong name so corrected it. Let me know if you preferred the old scheme.
+
+  v3 Update: Dec 24th 2021 - Somnai
+
+      Implemented Dango's advanced cutout method
+
+      Added SLIP models, thanks to NeuralDivergent
+
+      Fixed issue with NaNs resulting in black images, with massive help and testing from @Softology
+
+      Perlin now changes properly within batches (not sure where this perlin_regen code came from originally, but thank you)
+
+  v4 Update: Jan 2022 - Somnai
+
+      Implemented Diffusion Zooming
+
+      Added Chigozie keyframing
+
+      Made a bunch of edits to processes
+  
+  v4.1 Update: Jan 14th 2022 - Somnai
+
+      Added video input mode
+
+      Added license that somehow went missing
+
+      Added improved prompt keyframing, fixed image_prompts and multiple prompts
+
+      Improved UI
+
+      Significant under the hood cleanup and improvement
+
+      Refined defaults for each mode
+
+      Added latent-diffusion SuperRes for sharpening
+
+      Added resume run mode
+
+  v4.9 Update: Feb 5th 2022 - gandamu / Adam Letts
+
+      Added 3D
+
+      Added brightness corrections to prevent animation from steadily going dark over time
+
+  v4.91 Update: Feb 19th 2022 - gandamu / Adam Letts
+
+      Cleaned up 3D implementation and made associated args accessible via Colab UI elements
+
+  v4.92 Update: Feb 20th 2022 - gandamu / Adam Letts
+
+      Separated transform code
+
+  v5.01 Update: Mar 10th 2022 - gandamu / Adam Letts
+
+      IPython magic commands replaced by Python code
+
+  v5.1 Update: Mar 30th 2022 - zippy / Chris Allen and gandamu / Adam Letts
+
+      Integrated Turbo+Smooth features from Disco Diffusion Turbo -- just the implementation, without its defaults.
+
+      Implemented resume of turbo animations in such a way that it's now possible to resume from different batch folders and batch numbers.
+
+      3D rotation parameter units are now degrees (rather than radians)
+
+      Corrected name collision in sampling_mode (now diffusion_sampling_mode for plms/ddim, and sampling_mode for 3D transform sampling)
+
+      Added video_init_seed_continuity option to make init video animations more continuous
+
+  v5.1 Update: Apr 4th 2022 - MSFTserver aka HostsServer
+
+      Removed the need to compile pytorch3d by using a lite version made specifically for Disco Diffusion
+
+      Remove Super Resolution
+
+      Remove SLIP Models
+
+      Update for crossplatform support
+
+  v5.2 Update: Apr 10th 2022 - nin_artificial / Tom Mason
+
+      VR Mode
+
+      '''
+  )
+
+
+# %%
+# !! {"metadata": {
+# !!   "id": "TutorialTop"
+# !! }}
+"""
+# Tutorial
+"""
+
+# %%
+# !! {"metadata": {
+# !!  "id": "DiffusionSet"
+# !! }}
+"""
+**Diffusion settings (Defaults are heavily outdated)**
+---
+Disco Diffusion is complex and continually evolving with new features. The most current documentation on Disco Diffusion settings can be found in the unofficial guidebook:
+
+[Zippy's Disco Diffusion Cheatsheet](https://docs.google.com/document/d/1l8s7uS2dGqjztYSjPpzlmXLjl5PM3IGkRWI3IiCuK7g/edit)
+
+We also encourage users to join the [Disco Diffusion User Discord](https://discord.gg/XGZrFFCRfN) to learn from the active user community.
+
+This section below is outdated as of v2
+
+Setting | Description | Default
+--- | --- | ---
+**Your vision:**
+`text_prompts` | A description of what you'd like the machine to generate. Think of it like writing the caption below your image on a website. | N/A
+`image_prompts` | Think of these images more as a description of their contents. | N/A
+**Image quality:**
+`clip_guidance_scale`  | Controls how much the image should look like the prompt. | 1000
+`tv_scale` | Controls the smoothness of the final output. | 150
+`range_scale` | Controls how far out of range RGB values are allowed to be. | 150
+`sat_scale` | Controls how much saturation is allowed. From nshepperd's JAX notebook. | 0
+`cutn` | Controls how many crops to take from the image. | 16
+`cutn_batches` | Accumulate CLIP gradient from multiple batches of cuts. | 2
+**Init settings:**
+`init_image` | URL or local path | None
+`init_scale` | This enhances the effect of the init image; a good value is 1000 | 0
+`skip_steps` | Controls the starting point along the diffusion timesteps | 0
+`perlin_init` | Option to start with random perlin noise | False
+`perlin_mode` | ('gray', 'color') | 'mixed'
+**Advanced:**
+`skip_augs` | Controls whether to skip torchvision augmentations | False
+`randomize_class` | Controls whether the imagenet class is randomly changed each iteration | True
+`clip_denoised` | Determines whether CLIP discriminates a noisy or denoised image | False
+`clamp_grad` | Experimental: Using adaptive clip grad in the cond_fn | True
+`seed`  | Choose a random seed and print it at end of run for reproduction | random_seed
+`fuzzy_prompt` | Controls whether to add multiple noisy prompts to the prompt losses | False
+`rand_mag` | Controls the magnitude of the random noise | 0.1
+`eta` | DDIM hyperparameter | 0.5
+
+..
+
+**Model settings**
+---
+
+Setting | Description | Default
+--- | --- | ---
+**Diffusion:**
+`timestep_respacing` | Modify this value to decrease the number of timesteps. | ddim100
+`diffusion_steps` || 1000
+**CLIP:**
+`clip_models` | Models of CLIP to load. Typically the more, the better but they all come at a hefty VRAM cost. | ViT-B/32, ViT-B/16, RN50x4
+"""
+
+# %%
+# !! {"metadata": {
+# !!  "id": "SetupTop"
+# !! }}
+"""
+# 1. Set Up
+"""
+
+# %%
+# !! {"metadata": {
+# !!   "cellView": "form",
+# !!    "id": "CheckGPU"
+# !! }}
+#@title 1.1 Check GPU Status
+import subprocess
+simple_nvidia_smi_display = False#@param {type:"boolean"}
+if simple_nvidia_smi_display:
+  #!nvidia-smi
+  nvidiasmi_output = subprocess.run(['nvidia-smi', '-L'], stdout=subprocess.PIPE).stdout.decode('utf-8')
+  print(nvidiasmi_output)
+else:
+  #!nvidia-smi -i 0 -e 0
+  nvidiasmi_output = subprocess.run(['nvidia-smi'], stdout=subprocess.PIPE).stdout.decode('utf-8')
+  print(nvidiasmi_output)
+  nvidiasmi_ecc_note = subprocess.run(['nvidia-smi', '-i', '0', '-e', '0'], stdout=subprocess.PIPE).stdout.decode('utf-8')
+  print(nvidiasmi_ecc_note)
+
+# %%
+# !! {"metadata": {
+# !!    "cellView": "form",
+# !!    "id": "PrepFolders"
+# !! }}
+#@title 1.2 Prepare Folders
+import subprocess, os, sys, ipykernel
+
+def gitclone(url):
+  res = subprocess.run(['git', 'clone', url], stdout=subprocess.PIPE).stdout.decode('utf-8')
+  print(res)
+
+def pipi(modulestr):
+  res = subprocess.run(['pip', 'install', modulestr], stdout=subprocess.PIPE).stdout.decode('utf-8')
+  print(res)
+
+def pipie(modulestr):
+  res = subprocess.run(['pip', 'install', '-e', modulestr], stdout=subprocess.PIPE).stdout.decode('utf-8')
+  print(res)
+
+def wget(url, outputdir):
+  res = subprocess.run(['wget', url, '-P', f'{outputdir}'], stdout=subprocess.PIPE).stdout.decode('utf-8')
+  print(res)
+
+try:
+    from google.colab import drive
+    print("Google Colab detected. Using Google Drive.")
+    is_colab = True
+    #@markdown If you connect your Google Drive, you can save the final image of each run on your drive.
+    google_drive = True #@param {type:"boolean"}
+    #@markdown Click here if you'd like to save the diffusion model checkpoint file to (and/or load from) your Google Drive:
+    save_models_to_google_drive = True #@param {type:"boolean"}
+except:
+    is_colab = False
+    google_drive = False
+    save_models_to_google_drive = False
+    print("Google Colab not detected.")
+
+if is_colab:
+    if google_drive is True:
+        drive.mount('/content/drive')
+        root_path = '/content/drive/MyDrive/AI/Disco_Diffusion'
+    else:
+        root_path = '/content'
+else:
+    root_path = os.getcwd()
+
+import os
+def createPath(filepath):
+    os.makedirs(filepath, exist_ok=True)
+
+initDirPath = f'{root_path}/init_images'
+createPath(initDirPath)
+outDirPath = f'{root_path}/images_out'
+createPath(outDirPath)
+
+if is_colab:
+    if (google_drive and not save_models_to_google_drive) or not google_drive:
+        model_path = '/content/models'
+        createPath(model_path)
+    if google_drive and save_models_to_google_drive:
+        model_path = f'{root_path}/models'
+        createPath(model_path)
+else:
+    model_path = f'{root_path}/models'
+    createPath(model_path)
+
+# libraries = f'{root_path}/libraries'
+# createPath(libraries)
+
+# %%
+# !! {"metadata": {
+# !!    "cellView": "form",
+# !!    "id": "InstallDeps"
+# !! }}
+#@title ### 1.3 Install and import dependencies
+
+import pathlib, shutil, os, sys
+
+if not is_colab:
+  # If running locally, there's a good chance your env will need this in order to not crash upon np.matmul() or similar operations.
+  os.environ['KMP_DUPLICATE_LIB_OK']='TRUE'
+
+PROJECT_DIR = os.path.abspath(os.getcwd())
+USE_ADABINS = True
+
+if is_colab:
+  if google_drive is not True:
+    root_path = f'/content'
+    model_path = '/content/models' 
+else:
+  root_path = os.getcwd()
+  model_path = f'{root_path}/models'
+
+model_256_downloaded = False
+model_512_downloaded = False
+model_secondary_downloaded = False
+
+multipip_res = subprocess.run(['pip', 'install', 'lpips', 'datetime', 'timm', 'ftfy', 'einops', 'pytorch-lightning', 'omegaconf'], stdout=subprocess.PIPE).stdout.decode('utf-8')
+print(multipip_res)
+
+if is_colab:
+  subprocess.run(['apt', 'install', 'imagemagick'], stdout=subprocess.PIPE).stdout.decode('utf-8')
+
+try:
+  from CLIP import clip
+except:
+  if not os.path.exists("CLIP"):
+    gitclone("https://github.com/openai/CLIP")
+  sys.path.append(f'{PROJECT_DIR}/CLIP')
+
+try:
+  from guided_diffusion.script_util import create_model_and_diffusion
+except:
+  if not os.path.exists("guided-diffusion"):
+    gitclone("https://github.com/crowsonkb/guided-diffusion")
+  sys.path.append(f'{PROJECT_DIR}/guided-diffusion')
+
+try:
+  from resize_right import resize
+except:
+  if not os.path.exists("ResizeRight"):
+    gitclone("https://github.com/assafshocher/ResizeRight.git")
+  sys.path.append(f'{PROJECT_DIR}/ResizeRight')
+
+try:
+  import py3d_tools
+except:
+  if not os.path.exists('pytorch3d-lite'):
+    gitclone("https://github.com/MSFTserver/pytorch3d-lite.git")
+  sys.path.append(f'{PROJECT_DIR}/pytorch3d-lite')
+
+try:
+  from midas.dpt_depth import DPTDepthModel
+except:
+  if not os.path.exists('MiDaS'):
+    gitclone("https://github.com/isl-org/MiDaS.git")
+  if not os.path.exists('MiDaS/midas_utils.py'):
+    shutil.move('MiDaS/utils.py', 'MiDaS/midas_utils.py')
+  if not os.path.exists(f'{model_path}/dpt_large-midas-2f21e586.pt'):
+    wget("https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt", model_path)
+  sys.path.append(f'{PROJECT_DIR}/MiDaS')
+
+try:
+  sys.path.append(PROJECT_DIR)
+  import disco_xform_utils as dxf
+except:
+  if not os.path.exists("disco-diffusion"):
+    gitclone("https://github.com/alembics/disco-diffusion.git")
+  if os.path.exists('disco_xform_utils.py') is not True:
+    shutil.move('disco-diffusion/disco_xform_utils.py', 'disco_xform_utils.py')
+  sys.path.append(PROJECT_DIR)
+
+import torch
+from dataclasses import dataclass
+from functools import partial
+import cv2
+import pandas as pd
+import gc
+import io
+import math
+import timm
+from IPython import display
+import lpips
+from PIL import Image, ImageOps
+import requests
+from glob import glob
+import json
+from types import SimpleNamespace
+from torch import nn
+from torch.nn import functional as F
+import torchvision.transforms as T
+import torchvision.transforms.functional as TF
+from tqdm.notebook import tqdm
+from CLIP import clip
+from resize_right import resize
+from guided_diffusion.script_util import create_model_and_diffusion, model_and_diffusion_defaults
+from datetime import datetime
+import numpy as np
+import matplotlib.pyplot as plt
+import random
+from ipywidgets import Output
+import hashlib
+from functools import partial
+if is_colab:
+  os.chdir('/content')
+  from google.colab import files
+else:
+  os.chdir(f'{PROJECT_DIR}')
+from IPython.display import Image as ipyimg
+from numpy import asarray
+from einops import rearrange, repeat
+import torch, torchvision
+import time
+from omegaconf import OmegaConf
+import warnings
+warnings.filterwarnings("ignore", category=UserWarning)
+
+# AdaBins stuff
+if USE_ADABINS:
+  try:
+    from infer import InferenceHelper
+  except:
+    if os.path.exists("AdaBins") is not True:
+      gitclone("https://github.com/shariqfarooq123/AdaBins.git")
+    if not os.path.exists(f'{PROJECT_DIR}/pretrained/AdaBins_nyu.pt'):
+      createPath(f'{PROJECT_DIR}/pretrained')
+      wget("https://cloudflare-ipfs.com/ipfs/Qmd2mMnDLWePKmgfS8m6ntAg4nhV5VkUyAydYBp8cWWeB7/AdaBins_nyu.pt", f'{PROJECT_DIR}/pretrained')
+    sys.path.append(f'{PROJECT_DIR}/AdaBins')
+  from infer import InferenceHelper
+  MAX_ADABINS_AREA = 500000
+
+import torch
+DEVICE = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
+print('Using device:', DEVICE)
+device = DEVICE # At least one of the modules expects this name..
+
+if torch.cuda.get_device_capability(DEVICE) == (8,0): ## A100 fix thanks to Emad
+  print('Disabling CUDNN for A100 gpu', file=sys.stderr)
+  torch.backends.cudnn.enabled = False
+
+# %%
+# !! {"metadata": {
+# !!  "cellView": "form",
+# !!    "id": "DefMidasFns"
+# !! }}
+#@title ### 1.4 Define Midas functions
+
+from midas.dpt_depth import DPTDepthModel
+from midas.midas_net import MidasNet
+from midas.midas_net_custom import MidasNet_small
+from midas.transforms import Resize, NormalizeImage, PrepareForNet
+
+# Initialize MiDaS depth model.
+# It remains resident in VRAM and likely takes around 2GB.
+# You could instead initialize it for each frame (and free it after each frame) to save VRAM, but initializing it is slow.
+default_models = {
+    "midas_v21_small": f"{model_path}/midas_v21_small-70d6b9c8.pt",
+    "midas_v21": f"{model_path}/midas_v21-f6b98070.pt",
+    "dpt_large": f"{model_path}/dpt_large-midas-2f21e586.pt",
+    "dpt_hybrid": f"{model_path}/dpt_hybrid-midas-501f0c75.pt",
+    "dpt_hybrid_nyu": f"{model_path}/dpt_hybrid_nyu-2ce69ec7.pt",}
+
+
+def init_midas_depth_model(midas_model_type="dpt_large", optimize=True):
+    midas_model = None
+    net_w = None
+    net_h = None
+    resize_mode = None
+    normalization = None
+
+    print(f"Initializing MiDaS '{midas_model_type}' depth model...")
+    # load network
+    midas_model_path = default_models[midas_model_type]
+
+    if midas_model_type == "dpt_large": # DPT-Large
+        midas_model = DPTDepthModel(
+            path=midas_model_path,
+            backbone="vitl16_384",
+            non_negative=True,
+        )
+        net_w, net_h = 384, 384
+        resize_mode = "minimal"
+        normalization = NormalizeImage(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
+    elif midas_model_type == "dpt_hybrid": #DPT-Hybrid
+        midas_model = DPTDepthModel(
+            path=midas_model_path,
+            backbone="vitb_rn50_384",
+            non_negative=True,
+        )
+        net_w, net_h = 384, 384
+        resize_mode="minimal"
+        normalization = NormalizeImage(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
+    elif midas_model_type == "dpt_hybrid_nyu": #DPT-Hybrid-NYU
+        midas_model = DPTDepthModel(
+            path=midas_model_path,
+            backbone="vitb_rn50_384",
+            non_negative=True,
+        )
+        net_w, net_h = 384, 384
+        resize_mode="minimal"
+        normalization = NormalizeImage(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
+    elif midas_model_type == "midas_v21":
+        midas_model = MidasNet(midas_model_path, non_negative=True)
+        net_w, net_h = 384, 384
+        resize_mode="upper_bound"
+        normalization = NormalizeImage(
+            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]
+        )
+    elif midas_model_type == "midas_v21_small":
+        midas_model = MidasNet_small(midas_model_path, features=64, backbone="efficientnet_lite3", exportable=True, non_negative=True, blocks={'expand': True})
+        net_w, net_h = 256, 256
+        resize_mode="upper_bound"
+        normalization = NormalizeImage(
+            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]
+        )
+    else:
+        raise NotImplementedError(f"midas_model_type '{midas_model_type}' not implemented")
+
+    midas_transform = T.Compose(
+        [
+            Resize(
+                net_w,
+                net_h,
+                resize_target=None,
+                keep_aspect_ratio=True,
+                ensure_multiple_of=32,
+                resize_method=resize_mode,
+                image_interpolation_method=cv2.INTER_CUBIC,
+            ),
+            normalization,
+            PrepareForNet(),
+        ]
+    )
+
+    midas_model.eval()
+    
+    if optimize==True:
+        if DEVICE == torch.device("cuda"):
+            midas_model = midas_model.to(memory_format=torch.channels_last)  
+            midas_model = midas_model.half()
+
+    midas_model.to(DEVICE)
+
+    print(f"MiDaS '{midas_model_type}' depth model initialized.")
+    return midas_model, midas_transform, net_w, net_h, resize_mode, normalization
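+
+# A hedged, illustrative alternative to the resident-model approach described above:
+# load MiDaS only for the frame that needs it, then free its VRAM. This helper is
+# hypothetical and is never called anywhere in this script; it trades extra per-frame
+# startup time for a smaller resident memory footprint.
+def midas_depth_for_one_frame_example(img_filepath, frame_num, midas_model_type="dpt_large"):
+    midas_model, midas_transform, *_ = init_midas_depth_model(midas_model_type)
+    try:
+        # do_3d_step (defined in section 1.5 below) consumes the model and transform.
+        return do_3d_step(img_filepath, frame_num, midas_model, midas_transform)
+    finally:
+        del midas_model
+        gc.collect()
+        torch.cuda.empty_cache()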
+
+# %%
+# !! {"metadata": {
+# !!    "cellView": "form",
+# !!    "id": "DefFns"
+# !! }}
+#@title 1.5 Define necessary functions
+
+# https://gist.github.com/adefossez/0646dbe9ed4005480a2407c62aac8869
+
+import py3d_tools as p3dT
+import disco_xform_utils as dxf
+
+def interp(t):
+    return 3 * t**2 - 2 * t ** 3
+
+def perlin(width, height, scale=10, device=None):
+    gx, gy = torch.randn(2, width + 1, height + 1, 1, 1, device=device)
+    xs = torch.linspace(0, 1, scale + 1)[:-1, None].to(device)
+    ys = torch.linspace(0, 1, scale + 1)[None, :-1].to(device)
+    wx = 1 - interp(xs)
+    wy = 1 - interp(ys)
+    dots = 0
+    dots += wx * wy * (gx[:-1, :-1] * xs + gy[:-1, :-1] * ys)
+    dots += (1 - wx) * wy * (-gx[1:, :-1] * (1 - xs) + gy[1:, :-1] * ys)
+    dots += wx * (1 - wy) * (gx[:-1, 1:] * xs - gy[:-1, 1:] * (1 - ys))
+    dots += (1 - wx) * (1 - wy) * (-gx[1:, 1:] * (1 - xs) - gy[1:, 1:] * (1 - ys))
+    return dots.permute(0, 2, 1, 3).contiguous().view(width * scale, height * scale)
+
+def perlin_ms(octaves, width, height, grayscale, device=device):
+    out_array = [0.5] if grayscale else [0.5, 0.5, 0.5]
+    # out_array = [0.0] if grayscale else [0.0, 0.0, 0.0]
+    for i in range(1 if grayscale else 3):
+        scale = 2 ** len(octaves)
+        oct_width = width
+        oct_height = height
+        for oct in octaves:
+            p = perlin(oct_width, oct_height, scale, device)
+            out_array[i] += p * oct
+            scale //= 2
+            oct_width *= 2
+            oct_height *= 2
+    return torch.cat(out_array)
+
+def create_perlin_noise(octaves=[1, 1, 1, 1], width=2, height=2, grayscale=True):
+    out = perlin_ms(octaves, width, height, grayscale)
+    if grayscale:
+        out = TF.resize(size=(side_y, side_x), img=out.unsqueeze(0))
+        out = TF.to_pil_image(out.clamp(0, 1)).convert('RGB')
+    else:
+        out = out.reshape(-1, 3, out.shape[0]//3, out.shape[1])
+        out = TF.resize(size=(side_y, side_x), img=out)
+        out = TF.to_pil_image(out.clamp(0, 1).squeeze())
+
+    out = ImageOps.autocontrast(out)
+    return out
+
+def regen_perlin():
+    if perlin_mode == 'color':
+        init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, False)
+        init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, False)
+    elif perlin_mode == 'gray':
+        init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, True)
+        init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, True)
+    else:
+        init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, False)
+        init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, True)
+
+    init = TF.to_tensor(init).add(TF.to_tensor(init2)).div(2).to(device).unsqueeze(0).mul(2).sub(1)
+    del init2
+    return init.expand(batch_size, -1, -1, -1)
+
+def fetch(url_or_path):
+    if str(url_or_path).startswith('http://') or str(url_or_path).startswith('https://'):
+        r = requests.get(url_or_path)
+        r.raise_for_status()
+        fd = io.BytesIO()
+        fd.write(r.content)
+        fd.seek(0)
+        return fd
+    return open(url_or_path, 'rb')
+
+def read_image_workaround(path):
+    """OpenCV reads images as BGR, Pillow saves them as RGB. Work around
+    this incompatibility to avoid colour inversions."""
+    im_tmp = cv2.imread(path)
+    return cv2.cvtColor(im_tmp, cv2.COLOR_BGR2RGB)
+
+def parse_prompt(prompt):
+    if prompt.startswith('http://') or prompt.startswith('https://'):
+        vals = prompt.rsplit(':', 2)
+        vals = [vals[0] + ':' + vals[1], *vals[2:]]
+    else:
+        vals = prompt.rsplit(':', 1)
+    vals = vals + ['', '1'][len(vals):]
+    return vals[0], float(vals[1])
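+
+# Usage note (added for reference, not from the original): prompts take an optional
+# ':weight' suffix, e.g. parse_prompt('a watercolor lighthouse:2') returns
+# ('a watercolor lighthouse', 2.0); with no suffix the weight defaults to 1.0, and the
+# 'http(s):' colon in URL prompts is re-joined so it is not mistaken for a weight separator.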
+
+def sinc(x):
+    return torch.where(x != 0, torch.sin(math.pi * x) / (math.pi * x), x.new_ones([]))
+
+def lanczos(x, a):
+    cond = torch.logical_and(-a < x, x < a)
+    out = torch.where(cond, sinc(x) * sinc(x/a), x.new_zeros([]))
+    return out / out.sum()
+
+def ramp(ratio, width):
+    n = math.ceil(width / ratio + 1)
+    out = torch.empty([n])
+    cur = 0
+    for i in range(out.shape[0]):
+        out[i] = cur
+        cur += ratio
+    return torch.cat([-out[1:].flip([0]), out])[1:-1]
+
+def resample(input, size, align_corners=True):
+    n, c, h, w = input.shape
+    dh, dw = size
+
+    input = input.reshape([n * c, 1, h, w])
+
+    if dh < h:
+        kernel_h = lanczos(ramp(dh / h, 2), 2).to(input.device, input.dtype)
+        pad_h = (kernel_h.shape[0] - 1) // 2
+        input = F.pad(input, (0, 0, pad_h, pad_h), 'reflect')
+        input = F.conv2d(input, kernel_h[None, None, :, None])
+
+    if dw < w:
+        kernel_w = lanczos(ramp(dw / w, 2), 2).to(input.device, input.dtype)
+        pad_w = (kernel_w.shape[0] - 1) // 2
+        input = F.pad(input, (pad_w, pad_w, 0, 0), 'reflect')
+        input = F.conv2d(input, kernel_w[None, None, None, :])
+
+    input = input.reshape([n, c, h, w])
+    return F.interpolate(input, size, mode='bicubic', align_corners=align_corners)
+
+class MakeCutouts(nn.Module):
+    def __init__(self, cut_size, cutn, skip_augs=False):
+        super().__init__()
+        self.cut_size = cut_size
+        self.cutn = cutn
+        self.skip_augs = skip_augs
+        self.augs = T.Compose([
+            T.RandomHorizontalFlip(p=0.5),
+            T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),
+            T.RandomAffine(degrees=15, translate=(0.1, 0.1)),
+            T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),
+            T.RandomPerspective(distortion_scale=0.4, p=0.7),
+            T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),
+            T.RandomGrayscale(p=0.15),
+            T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),
+            # T.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1),
+        ])
+
+    def forward(self, input):
+        input = T.Pad(input.shape[2]//4, fill=0)(input)
+        sideY, sideX = input.shape[2:4]
+        max_size = min(sideX, sideY)
+
+        cutouts = []
+        for ch in range(self.cutn):
+            if ch > self.cutn - self.cutn//4:
+                cutout = input.clone()
+            else:
+                size = int(max_size * torch.zeros(1,).normal_(mean=.8, std=.3).clip(float(self.cut_size/max_size), 1.))
+                offsetx = torch.randint(0, abs(sideX - size + 1), ())
+                offsety = torch.randint(0, abs(sideY - size + 1), ())
+                cutout = input[:, :, offsety:offsety + size, offsetx:offsetx + size]
+
+            if not self.skip_augs:
+                cutout = self.augs(cutout)
+            cutouts.append(resample(cutout, (self.cut_size, self.cut_size)))
+            del cutout
+
+        cutouts = torch.cat(cutouts, dim=0)
+        return cutouts
+
+cutout_debug = False
+padargs = {}
+
+class MakeCutoutsDango(nn.Module):
+    def __init__(self, cut_size,
+                 Overview=4, 
+                 InnerCrop = 0, IC_Size_Pow=0.5, IC_Grey_P = 0.2
+                 ):
+        super().__init__()
+        self.cut_size = cut_size
+        self.Overview = Overview
+        self.InnerCrop = InnerCrop
+        self.IC_Size_Pow = IC_Size_Pow
+        self.IC_Grey_P = IC_Grey_P
+        if args.animation_mode == 'None':
+          self.augs = T.Compose([
+              T.RandomHorizontalFlip(p=0.5),
+              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),
+              T.RandomAffine(degrees=10, translate=(0.05, 0.05),  interpolation = T.InterpolationMode.BILINEAR),
+              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),
+              T.RandomGrayscale(p=0.1),
+              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),
+              T.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1),
+          ])
+        elif args.animation_mode == 'Video Input':
+          self.augs = T.Compose([
+              T.RandomHorizontalFlip(p=0.5),
+              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),
+              T.RandomAffine(degrees=15, translate=(0.1, 0.1)),
+              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),
+              T.RandomPerspective(distortion_scale=0.4, p=0.7),
+              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),
+              T.RandomGrayscale(p=0.15),
+              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),
+              # T.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1),
+          ])
+        elif  args.animation_mode == '2D' or args.animation_mode == '3D':
+          self.augs = T.Compose([
+              T.RandomHorizontalFlip(p=0.4),
+              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),
+              T.RandomAffine(degrees=10, translate=(0.05, 0.05),  interpolation = T.InterpolationMode.BILINEAR),
+              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),
+              T.RandomGrayscale(p=0.1),
+              T.Lambda(lambda x: x + torch.randn_like(x) * 0.01),
+              T.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.3),
+          ])
+          
+
+    def forward(self, input):
+        cutouts = []
+        gray = T.Grayscale(3)
+        sideY, sideX = input.shape[2:4]
+        max_size = min(sideX, sideY)
+        min_size = min(sideX, sideY, self.cut_size)
+        l_size = max(sideX, sideY)
+        output_shape = [1,3,self.cut_size,self.cut_size] 
+        output_shape_2 = [1,3,self.cut_size+2,self.cut_size+2]
+        pad_input = F.pad(input,((sideY-max_size)//2,(sideY-max_size)//2,(sideX-max_size)//2,(sideX-max_size)//2), **padargs)
+        cutout = resize(pad_input, out_shape=output_shape)
+
+        if self.Overview>0:
+            if self.Overview<=4:
+                if self.Overview>=1:
+                    cutouts.append(cutout)
+                if self.Overview>=2:
+                    cutouts.append(gray(cutout))
+                if self.Overview>=3:
+                    cutouts.append(TF.hflip(cutout))
+                if self.Overview==4:
+                    cutouts.append(gray(TF.hflip(cutout)))
+            else:
+                cutout = resize(pad_input, out_shape=output_shape)
+                for _ in range(self.Overview):
+                    cutouts.append(cutout)
+
+            if cutout_debug:
+                if is_colab:
+                    TF.to_pil_image(cutouts[0].clamp(0, 1).squeeze(0)).save("/content/cutout_overview0.jpg",quality=99)
+                else:
+                    TF.to_pil_image(cutouts[0].clamp(0, 1).squeeze(0)).save("cutout_overview0.jpg",quality=99)
+
+                              
+        if self.InnerCrop >0:
+            for i in range(self.InnerCrop):
+                size = int(torch.rand([])**self.IC_Size_Pow * (max_size - min_size) + min_size)
+                offsetx = torch.randint(0, sideX - size + 1, ())
+                offsety = torch.randint(0, sideY - size + 1, ())
+                cutout = input[:, :, offsety:offsety + size, offsetx:offsetx + size]
+                if i <= int(self.IC_Grey_P * self.InnerCrop):
+                    cutout = gray(cutout)
+                cutout = resize(cutout, out_shape=output_shape)
+                cutouts.append(cutout)
+            if cutout_debug:
+                if is_colab:
+                    TF.to_pil_image(cutouts[-1].clamp(0, 1).squeeze(0)).save("/content/cutout_InnerCrop.jpg",quality=99)
+                else:
+                    TF.to_pil_image(cutouts[-1].clamp(0, 1).squeeze(0)).save("cutout_InnerCrop.jpg",quality=99)
+        cutouts = torch.cat(cutouts)
+        if skip_augs is not True: cutouts=self.augs(cutouts)
+        return cutouts
+
+def spherical_dist_loss(x, y):
+    """Loss proportional to the squared angular (great-circle) distance between L2-normalized embeddings."""
+    x = F.normalize(x, dim=-1)
+    y = F.normalize(y, dim=-1)
+    return (x - y).norm(dim=-1).div(2).arcsin().pow(2).mul(2)     
+
+def tv_loss(input):
+    """L2 total variation loss, as in Mahendran et al."""
+    input = F.pad(input, (0, 1, 0, 1), 'replicate')
+    x_diff = input[..., :-1, 1:] - input[..., :-1, :-1]
+    y_diff = input[..., 1:, :-1] - input[..., :-1, :-1]
+    return (x_diff**2 + y_diff**2).mean([1, 2, 3])
+
+
+def range_loss(input):
+    return (input - input.clamp(-1, 1)).pow(2).mean([1, 2, 3])
+
+stop_on_next_loop = False  # Make sure GPU memory doesn't get corrupted from cancelling the run mid-way through, allow a full frame to complete
+TRANSLATION_SCALE = 1.0/200.0
+
+def do_3d_step(img_filepath, frame_num, midas_model, midas_transform):
+  if args.key_frames:
+    translation_x = args.translation_x_series[frame_num]
+    translation_y = args.translation_y_series[frame_num]
+    translation_z = args.translation_z_series[frame_num]
+    rotation_3d_x = args.rotation_3d_x_series[frame_num]
+    rotation_3d_y = args.rotation_3d_y_series[frame_num]
+    rotation_3d_z = args.rotation_3d_z_series[frame_num]
+    print(
+        f'translation_x: {translation_x}',
+        f'translation_y: {translation_y}',
+        f'translation_z: {translation_z}',
+        f'rotation_3d_x: {rotation_3d_x}',
+        f'rotation_3d_y: {rotation_3d_y}',
+        f'rotation_3d_z: {rotation_3d_z}',
+    )
+
+  translate_xyz = [-translation_x*TRANSLATION_SCALE, translation_y*TRANSLATION_SCALE, -translation_z*TRANSLATION_SCALE]
+  rotate_xyz_degrees = [rotation_3d_x, rotation_3d_y, rotation_3d_z]
+  print('translation:',translate_xyz)
+  print('rotation:',rotate_xyz_degrees)
+  rotate_xyz = [math.radians(rotate_xyz_degrees[0]), math.radians(rotate_xyz_degrees[1]), math.radians(rotate_xyz_degrees[2])]
+  rot_mat = p3dT.euler_angles_to_matrix(torch.tensor(rotate_xyz, device=device), "XYZ").unsqueeze(0)
+  print("rot_mat: " + str(rot_mat))
+  next_step_pil = dxf.transform_image_3d(img_filepath, midas_model, midas_transform, DEVICE,
+                                          rot_mat, translate_xyz, args.near_plane, args.far_plane,
+                                          args.fov, padding_mode=args.padding_mode,
+                                          sampling_mode=args.sampling_mode, midas_weight=args.midas_weight)
+  return next_step_pil
+
+def do_run():
+  seed = args.seed
+  print(range(args.start_frame, args.max_frames))
+
+  if (args.animation_mode == "3D") and (args.midas_weight > 0.0):
+      midas_model, midas_transform, midas_net_w, midas_net_h, midas_resize_mode, midas_normalization = init_midas_depth_model(args.midas_depth_model)
+  for frame_num in range(args.start_frame, args.max_frames):
+      if stop_on_next_loop:
+        break
+      
+      display.clear_output(wait=True)
+
+      # Print Frame progress if animation mode is on
+      if args.animation_mode != "None":
+        batchBar = tqdm(range(args.max_frames), desc ="Frames")
+        batchBar.n = frame_num
+        batchBar.refresh()
+
+      
+      # Inits if not video frames
+      if args.animation_mode != "Video Input":
+        if args.init_image == '':
+          init_image = None
+        else:
+          init_image = args.init_image
+        init_scale = args.init_scale
+        skip_steps = args.skip_steps
+
+      if args.animation_mode == "2D":
+        if args.key_frames:
+          angle = args.angle_series[frame_num]
+          zoom = args.zoom_series[frame_num]
+          translation_x = args.translation_x_series[frame_num]
+          translation_y = args.translation_y_series[frame_num]
+          print(
+              f'angle: {angle}',
+              f'zoom: {zoom}',
+              f'translation_x: {translation_x}',
+              f'translation_y: {translation_y}',
+          )
+        
+        if frame_num > 0:
+          seed += 1
+          if resume_run and frame_num == start_frame:
+            img_0 = cv2.imread(batchFolder+f"/{batch_name}({batchNum})_{start_frame-1:04}.png")
+          else:
+            img_0 = cv2.imread('prevFrame.png')
+          center = (1*img_0.shape[1]//2, 1*img_0.shape[0]//2)
+          trans_mat = np.float32(
+              [[1, 0, translation_x],
+              [0, 1, translation_y]]
+          )
+          rot_mat = cv2.getRotationMatrix2D( center, angle, zoom )
+          trans_mat = np.vstack([trans_mat, [0,0,1]])
+          rot_mat = np.vstack([rot_mat, [0,0,1]])
+          transformation_matrix = np.matmul(rot_mat, trans_mat)
+          img_0 = cv2.warpPerspective(
+              img_0,
+              transformation_matrix,
+              (img_0.shape[1], img_0.shape[0]),
+              borderMode=cv2.BORDER_WRAP
+          )
+
+          cv2.imwrite('prevFrameScaled.png', img_0)
+          init_image = 'prevFrameScaled.png'
+          init_scale = args.frames_scale
+          skip_steps = args.calc_frames_skip_steps
+
+      if args.animation_mode == "3D":
+        if frame_num > 0:
+          seed += 1    
+          if resume_run and frame_num == start_frame:
+            img_filepath = batchFolder+f"/{batch_name}({batchNum})_{start_frame-1:04}.png"
+            if turbo_mode and frame_num > turbo_preroll:
+              shutil.copyfile(img_filepath, 'oldFrameScaled.png')
+          else:
+            img_filepath = '/content/prevFrame.png' if is_colab else 'prevFrame.png'
+
+          next_step_pil = do_3d_step(img_filepath, frame_num, midas_model, midas_transform)
+          next_step_pil.save('prevFrameScaled.png')
+
+          ### Turbo mode - skip some diffusions, use 3d morph for clarity and to save time
+          if turbo_mode:
+            if frame_num == turbo_preroll: #start tracking oldframe
+              next_step_pil.save('oldFrameScaled.png')#stash for later blending          
+            elif frame_num > turbo_preroll:
+              #set up 2 warped image sequences, old & new, to blend toward new diff image
+              old_frame = do_3d_step('oldFrameScaled.png', frame_num, midas_model, midas_transform)
+              old_frame.save('oldFrameScaled.png')
+              if frame_num % int(turbo_steps) != 0: 
+                print('turbo skip this frame: skipping clip diffusion steps')
+                filename = f'{args.batch_name}({args.batchNum})_{frame_num:04}.png'
+                blend_factor = ((frame_num % int(turbo_steps))+1)/int(turbo_steps)
+                print('turbo skip this frame: skipping clip diffusion steps and saving blended frame')
+                newWarpedImg = cv2.imread('prevFrameScaled.png')#this is already updated..
+                oldWarpedImg = cv2.imread('oldFrameScaled.png')
+                blendedImage = cv2.addWeighted(newWarpedImg, blend_factor, oldWarpedImg,1-blend_factor, 0.0)
+                cv2.imwrite(f'{batchFolder}/{filename}',blendedImage)
+                next_step_pil.save(f'{img_filepath}') # save it also as prev_frame to feed next iteration
+                if vr_mode:
+                  generate_eye_views(TRANSLATION_SCALE,batchFolder,filename,frame_num,midas_model, midas_transform)
+                continue
+              else:
+                #if not a skip frame, will run diffusion and need to blend.
+                oldWarpedImg = cv2.imread('prevFrameScaled.png')
+                cv2.imwrite(f'oldFrameScaled.png',oldWarpedImg)#swap in for blending later 
+                print('clip/diff this frame - generate clip diff image')
+
+          init_image = 'prevFrameScaled.png'
+          init_scale = args.frames_scale
+          skip_steps = args.calc_frames_skip_steps
+
+      if  args.animation_mode == "Video Input":
+        if not video_init_seed_continuity:
+          seed += 1
+        init_image = f'{videoFramesFolder}/{frame_num+1:04}.jpg'
+        init_scale = args.frames_scale
+        skip_steps = args.calc_frames_skip_steps
+
+      loss_values = []
+  
+      if seed is not None:
+          np.random.seed(seed)
+          random.seed(seed)
+          torch.manual_seed(seed)
+          torch.cuda.manual_seed_all(seed)
+          torch.backends.cudnn.deterministic = True
+  
+      target_embeds, weights = [], []
+      
+      if args.prompts_series is not None and frame_num >= len(args.prompts_series):
+        frame_prompt = args.prompts_series[-1]
+      elif args.prompts_series is not None:
+        frame_prompt = args.prompts_series[frame_num]
+      else:
+        frame_prompt = []
+      
+      print(args.image_prompts_series)
+      if args.image_prompts_series is not None and frame_num >= len(args.image_prompts_series):
+        image_prompt = args.image_prompts_series[-1]
+      elif args.image_prompts_series is not None:
+        image_prompt = args.image_prompts_series[frame_num]
+      else:
+        image_prompt = []
+
+      print(f'Frame {frame_num} Prompt: {frame_prompt}')
+
+      model_stats = []
+      for clip_model in clip_models:
+            cutn = 16
+            model_stat = {"clip_model":None,"target_embeds":[],"make_cutouts":None,"weights":[]}
+            model_stat["clip_model"] = clip_model
+            
+            
+            for prompt in frame_prompt:
+                txt, weight = parse_prompt(prompt)
+                txt = clip_model.encode_text(clip.tokenize(txt).to(device)).float()
+                
+                if args.fuzzy_prompt:
+                    for i in range(25):
+                        model_stat["target_embeds"].append((txt + torch.randn(txt.shape).cuda() * args.rand_mag).clamp(0,1))
+                        model_stat["weights"].append(weight)
+                else:
+                    model_stat["target_embeds"].append(txt)
+                    model_stat["weights"].append(weight)
+        
+            if image_prompt:
+              model_stat["make_cutouts"] = MakeCutouts(clip_model.visual.input_resolution, cutn, skip_augs=skip_augs) 
+              for prompt in image_prompt:
+                  path, weight = parse_prompt(prompt)
+                  img = Image.open(fetch(path)).convert('RGB')
+                  img = TF.resize(img, min(side_x, side_y, *img.size), T.InterpolationMode.LANCZOS)
+                  batch = model_stat["make_cutouts"](TF.to_tensor(img).to(device).unsqueeze(0).mul(2).sub(1))
+                  embed = clip_model.encode_image(normalize(batch)).float()
+                  if fuzzy_prompt:
+                      for i in range(25):
+                          model_stat["target_embeds"].append((embed + torch.randn(embed.shape).cuda() * rand_mag).clamp(0,1))
+                          model_stat["weights"].extend([weight / cutn] * cutn)
+                  else:
+                      model_stat["target_embeds"].append(embed)
+                      model_stat["weights"].extend([weight / cutn] * cutn)
+        
+            model_stat["target_embeds"] = torch.cat(model_stat["target_embeds"])
+            model_stat["weights"] = torch.tensor(model_stat["weights"], device=device)
+            if model_stat["weights"].sum().abs() < 1e-3:
+                raise RuntimeError('The weights must not sum to 0.')
+            model_stat["weights"] /= model_stat["weights"].sum().abs()
+            model_stats.append(model_stat)
+  
+      init = None
+      if init_image is not None:
+          init = Image.open(fetch(init_image)).convert('RGB')
+          init = init.resize((args.side_x, args.side_y), Image.LANCZOS)
+          init = TF.to_tensor(init).to(device).unsqueeze(0).mul(2).sub(1)
+      
+      if args.perlin_init:
+          if args.perlin_mode == 'color':
+              init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, False)
+              init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, False)
+          elif args.perlin_mode == 'gray':
+            init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, True)
+            init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, True)
+          else:
+            init = create_perlin_noise([1.5**-i*0.5 for i in range(12)], 1, 1, False)
+            init2 = create_perlin_noise([1.5**-i*0.5 for i in range(8)], 4, 4, True)
+          # init = TF.to_tensor(init).add(TF.to_tensor(init2)).div(2).to(device)
+          init = TF.to_tensor(init).add(TF.to_tensor(init2)).div(2).to(device).unsqueeze(0).mul(2).sub(1)
+          del init2
+  
+      cur_t = None
+  
+      def cond_fn(x, t, y=None):
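+          # cond_fn is the guidance hook handed to the sampler: it predicts x_0 from the current noisy x,
+          # blends that prediction back with x in proportion to the remaining noise level, scores CLIP cutouts
+          # of the blend against the target embeddings (plus TV/range/sat/init losses), and returns the
+          # (optionally magnitude-clamped) negative gradient of the total loss with respect to x.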
+          with torch.enable_grad():
+              x_is_NaN = False
+              x = x.detach().requires_grad_()
+              n = x.shape[0]
+              if use_secondary_model is True:
+                alpha = torch.tensor(diffusion.sqrt_alphas_cumprod[cur_t], device=device, dtype=torch.float32)
+                sigma = torch.tensor(diffusion.sqrt_one_minus_alphas_cumprod[cur_t], device=device, dtype=torch.float32)
+                cosine_t = alpha_sigma_to_t(alpha, sigma)
+                out = secondary_model(x, cosine_t[None].repeat([n])).pred
+                fac = diffusion.sqrt_one_minus_alphas_cumprod[cur_t]
+                x_in = out * fac + x * (1 - fac)
+                x_in_grad = torch.zeros_like(x_in)
+              else:
+                my_t = torch.ones([n], device=device, dtype=torch.long) * cur_t
+                out = diffusion.p_mean_variance(model, x, my_t, clip_denoised=False, model_kwargs={'y': y})
+                fac = diffusion.sqrt_one_minus_alphas_cumprod[cur_t]
+                x_in = out['pred_xstart'] * fac + x * (1 - fac)
+                x_in_grad = torch.zeros_like(x_in)
+              for model_stat in model_stats:
+                for i in range(args.cutn_batches):
+                    t_int = int(t.item())+1 #errors on last step without +1, need to find source
+                    #when using SLIP Base model the dimensions need to be hard coded to avoid AttributeError: 'VisionTransformer' object has no attribute 'input_resolution'
+                    try:
+                        input_resolution=model_stat["clip_model"].visual.input_resolution
+                    except:
+                        input_resolution=224
+
+                    cuts = MakeCutoutsDango(input_resolution,
+                            Overview= args.cut_overview[1000-t_int], 
+                            InnerCrop = args.cut_innercut[1000-t_int], IC_Size_Pow=args.cut_ic_pow, IC_Grey_P = args.cut_icgray_p[1000-t_int]
+                            )
+                    clip_in = normalize(cuts(x_in.add(1).div(2)))
+                    image_embeds = model_stat["clip_model"].encode_image(clip_in).float()
+                    dists = spherical_dist_loss(image_embeds.unsqueeze(1), model_stat["target_embeds"].unsqueeze(0))
+                    dists = dists.view([args.cut_overview[1000-t_int]+args.cut_innercut[1000-t_int], n, -1])
+                    losses = dists.mul(model_stat["weights"]).sum(2).mean(0)
+                    loss_values.append(losses.sum().item()) # log loss, probably shouldn't do per cutn_batch
+                    x_in_grad += torch.autograd.grad(losses.sum() * clip_guidance_scale, x_in)[0] / cutn_batches
+              tv_losses = tv_loss(x_in)
+              if use_secondary_model is True:
+                range_losses = range_loss(out)
+              else:
+                range_losses = range_loss(out['pred_xstart'])
+              sat_losses = torch.abs(x_in - x_in.clamp(min=-1,max=1)).mean()
+              loss = tv_losses.sum() * tv_scale + range_losses.sum() * range_scale + sat_losses.sum() * sat_scale
+              if init is not None and args.init_scale:
+                  init_losses = lpips_model(x_in, init)
+                  loss = loss + init_losses.sum() * args.init_scale
+              x_in_grad += torch.autograd.grad(loss, x_in)[0]
+              if torch.isnan(x_in_grad).any()==False:
+                  grad = -torch.autograd.grad(x_in, x, x_in_grad)[0]
+              else:
+                # print("NaN'd")
+                x_is_NaN = True
+                grad = torch.zeros_like(x)
+          if args.clamp_grad and x_is_NaN == False:
+              magnitude = grad.square().mean().sqrt()
+              return grad * magnitude.clamp(max=args.clamp_max) / magnitude  #min=-0.02, min=-clamp_max, 
+          return grad
+  
+      if args.diffusion_sampling_mode == 'ddim':
+          sample_fn = diffusion.ddim_sample_loop_progressive
+      else:
+          sample_fn = diffusion.plms_sample_loop_progressive
+
+
+      image_display = Output()
+      for i in range(args.n_batches):
+          if args.animation_mode == 'None':
+            display.clear_output(wait=True)
+            batchBar = tqdm(range(args.n_batches), desc ="Batches")
+            batchBar.n = i
+            batchBar.refresh()
+          print('')
+          display.display(image_display)
+          gc.collect()
+          torch.cuda.empty_cache()
+          cur_t = diffusion.num_timesteps - skip_steps - 1
+          total_steps = cur_t
+
+          if perlin_init:
+              init = regen_perlin()
+
+          if args.diffusion_sampling_mode == 'ddim':
+              samples = sample_fn(
+                  model,
+                  (batch_size, 3, args.side_y, args.side_x),
+                  clip_denoised=clip_denoised,
+                  model_kwargs={},
+                  cond_fn=cond_fn,
+                  progress=True,
+                  skip_timesteps=skip_steps,
+                  init_image=init,
+                  randomize_class=randomize_class,
+                  eta=eta,
+              )
+          else:
+              samples = sample_fn(
+                  model,
+                  (batch_size, 3, args.side_y, args.side_x),
+                  clip_denoised=clip_denoised,
+                  model_kwargs={},
+                  cond_fn=cond_fn,
+                  progress=True,
+                  skip_timesteps=skip_steps,
+                  init_image=init,
+                  randomize_class=randomize_class,
+                  order=2,
+              )
+          
+          
+          # with run_display:
+          # display.clear_output(wait=True)
+          for j, sample in enumerate(samples):    
+            cur_t -= 1
+            intermediateStep = False
+            if args.steps_per_checkpoint is not None:
+                if j % steps_per_checkpoint == 0 and j > 0:
+                  intermediateStep = True
+            elif j in args.intermediate_saves:
+              intermediateStep = True
+            with image_display:
+              if j % args.display_rate == 0 or cur_t == -1 or intermediateStep == True:
+                  for k, image in enumerate(sample['pred_xstart']):
+                      # tqdm.write(f'Batch {i}, step {j}, output {k}:')
+                      current_time = datetime.now().strftime('%y%m%d-%H%M%S_%f')
+                      percent = math.ceil(j/total_steps*100)
+                      if args.n_batches > 0:
+                        #if intermediates are saved to the subfolder, don't append a step or percentage to the name
+                        if cur_t == -1 and args.intermediates_in_subfolder is True:
+                          save_num = f'{frame_num:04}' if animation_mode != "None" else i
+                          filename = f'{args.batch_name}({args.batchNum})_{save_num}.png'
+                        else:
+                          #If we're working with percentages, append it
+                          if args.steps_per_checkpoint is not None:
+                            filename = f'{args.batch_name}({args.batchNum})_{i:04}-{percent:02}%.png'
+                          # Or else, if we're working with specific steps, append those
+                          else:
+                            filename = f'{args.batch_name}({args.batchNum})_{i:04}-{j:03}.png'
+                      image = TF.to_pil_image(image.add(1).div(2).clamp(0, 1))
+                      if j % args.display_rate == 0 or cur_t == -1:
+                        image.save('progress.png')
+                        display.clear_output(wait=True)
+                        display.display(display.Image('progress.png'))
+                      if args.steps_per_checkpoint is not None:
+                        if j % args.steps_per_checkpoint == 0 and j > 0:
+                          if args.intermediates_in_subfolder is True:
+                            image.save(f'{partialFolder}/{filename}')
+                          else:
+                            image.save(f'{batchFolder}/{filename}')
+                      else:
+                        if j in args.intermediate_saves:
+                          if args.intermediates_in_subfolder is True:
+                            image.save(f'{partialFolder}/{filename}')
+                          else:
+                            image.save(f'{batchFolder}/{filename}')
+                      if cur_t == -1:
+                        if frame_num == 0:
+                          save_settings()
+                        if args.animation_mode != "None":
+                          image.save('prevFrame.png')
+                        image.save(f'{batchFolder}/{filename}')
+                        if args.animation_mode == "3D":
+                          # If turbo, save a blended image
+                          if turbo_mode and frame_num > 0:
+                            # Mix new image with prevFrameScaled
+                            blend_factor = (1)/int(turbo_steps)
+                            newFrame = cv2.imread('prevFrame.png') # This is already updated..
+                            prev_frame_warped = cv2.imread('prevFrameScaled.png')
+                            blendedImage = cv2.addWeighted(newFrame, blend_factor, prev_frame_warped, (1-blend_factor), 0.0)
+                            cv2.imwrite(f'{batchFolder}/{filename}',blendedImage)
+                          else:
+                            image.save(f'{batchFolder}/{filename}')
+
+                          if vr_mode:
+                            generate_eye_views(TRANSLATION_SCALE, batchFolder, filename, frame_num, midas_model, midas_transform)
+
+                        # if frame_num != args.max_frames-1:
+                        #   display.clear_output()
+          
+          plt.plot(np.array(loss_values), 'r')
+
+def generate_eye_views(trans_scale,batchFolder,filename,frame_num,midas_model, midas_transform):
+   for i in range(2):
+      theta = vr_eye_angle * (math.pi/180)
+      ray_origin = math.cos(theta) * vr_ipd / 2 * (-1.0 if i==0 else 1.0)
+      ray_rotation = (theta if i==0 else -theta)
+      translate_xyz = [-(ray_origin)*trans_scale, 0,0]
+      rotate_xyz = [0, (ray_rotation), 0]
+      rot_mat = p3dT.euler_angles_to_matrix(torch.tensor(rotate_xyz, device=device), "XYZ").unsqueeze(0)
+      transformed_image = dxf.transform_image_3d(f'{batchFolder}/{filename}', midas_model, midas_transform, DEVICE,
+                                                      rot_mat, translate_xyz, args.near_plane, args.far_plane,
+                                                      args.fov, padding_mode=args.padding_mode,
+                                                      sampling_mode=args.sampling_mode, midas_weight=args.midas_weight,spherical=True)
+      eye_file_path = batchFolder+f"/frame_{frame_num:04}" + ('_l' if i==0 else '_r')+'.png'
+      transformed_image.save(eye_file_path)
+
+def save_settings():
+  setting_list = {
+    'text_prompts': text_prompts,
+    'image_prompts': image_prompts,
+    'clip_guidance_scale': clip_guidance_scale,
+    'tv_scale': tv_scale,
+    'range_scale': range_scale,
+    'sat_scale': sat_scale,
+    # 'cutn': cutn,
+    'cutn_batches': cutn_batches,
+    'max_frames': max_frames,
+    'interp_spline': interp_spline,
+    # 'rotation_per_frame': rotation_per_frame,
+    'init_image': init_image,
+    'init_scale': init_scale,
+    'skip_steps': skip_steps,
+    # 'zoom_per_frame': zoom_per_frame,
+    'frames_scale': frames_scale,
+    'frames_skip_steps': frames_skip_steps,
+    'perlin_init': perlin_init,
+    'perlin_mode': perlin_mode,
+    'skip_augs': skip_augs,
+    'randomize_class': randomize_class,
+    'clip_denoised': clip_denoised,
+    'clamp_grad': clamp_grad,
+    'clamp_max': clamp_max,
+    'seed': seed,
+    'fuzzy_prompt': fuzzy_prompt,
+    'rand_mag': rand_mag,
+    'eta': eta,
+    'width': width_height[0],
+    'height': width_height[1],
+    'diffusion_model': diffusion_model,
+    'use_secondary_model': use_secondary_model,
+    'steps': steps,
+    'diffusion_steps': diffusion_steps,
+    'diffusion_sampling_mode': diffusion_sampling_mode,
+    'ViTB32': ViTB32,
+    'ViTB16': ViTB16,
+    'ViTL14': ViTL14,
+    'RN101': RN101,
+    'RN50': RN50,
+    'RN50x4': RN50x4,
+    'RN50x16': RN50x16,
+    'RN50x64': RN50x64,
+    'cut_overview': str(cut_overview),
+    'cut_innercut': str(cut_innercut),
+    'cut_ic_pow': cut_ic_pow,
+    'cut_icgray_p': str(cut_icgray_p),
+    'key_frames': key_frames,
+    'angle': angle,
+    'zoom': zoom,
+    'translation_x': translation_x,
+    'translation_y': translation_y,
+    'translation_z': translation_z,
+    'rotation_3d_x': rotation_3d_x,
+    'rotation_3d_y': rotation_3d_y,
+    'rotation_3d_z': rotation_3d_z,
+    'midas_depth_model': midas_depth_model,
+    'midas_weight': midas_weight,
+    'near_plane': near_plane,
+    'far_plane': far_plane,
+    'fov': fov,
+    'padding_mode': padding_mode,
+    'sampling_mode': sampling_mode,
+    'video_init_path':video_init_path,
+    'extract_nth_frame':extract_nth_frame,
+    'video_init_seed_continuity': video_init_seed_continuity,
+    'turbo_mode':turbo_mode,
+    'turbo_steps':turbo_steps,
+    'turbo_preroll':turbo_preroll,
+  }
+  # print('Settings:', setting_list)
+  with open(f"{batchFolder}/{batch_name}({batchNum})_settings.txt", "w+") as f:   #save settings
+    json.dump(setting_list, f, ensure_ascii=False, indent=4)
+
+# %%
+# !! {"metadata": {
+# !!    "cellView": "form",
+# !!    "id": "DefSecModel"
+# !! }}
+#@title 1.6 Define the secondary diffusion model
+
+def append_dims(x, n):
+    return x[(Ellipsis, *(None,) * (n - x.ndim))]
+
+
+def expand_to_planes(x, shape):
+    return append_dims(x, len(shape)).repeat([1, 1, *shape[2:]])
+
+
+def alpha_sigma_to_t(alpha, sigma):
+    return torch.atan2(sigma, alpha) * 2 / math.pi
+
+
+def t_to_alpha_sigma(t):
+    return torch.cos(t * math.pi / 2), torch.sin(t * math.pi / 2)
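+
+# Note on the two helpers above: they map between the cosine-schedule timestep t in [0, 1] and the
+# (alpha, sigma) pair on the unit circle, i.e. alpha = cos(pi*t/2), sigma = sin(pi*t/2), so that
+# t = atan2(sigma, alpha) * 2/pi. cond_fn uses alpha_sigma_to_t to translate the primary model's discrete
+# sqrt_alphas_cumprod / sqrt_one_minus_alphas_cumprod schedule into a timestep the secondary model understands.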
+
+
+@dataclass
+class DiffusionOutput:
+    v: torch.Tensor
+    pred: torch.Tensor
+    eps: torch.Tensor
+
+
+class ConvBlock(nn.Sequential):
+    def __init__(self, c_in, c_out):
+        super().__init__(
+            nn.Conv2d(c_in, c_out, 3, padding=1),
+            nn.ReLU(inplace=True),
+        )
+
+
+class SkipBlock(nn.Module):
+    def __init__(self, main, skip=None):
+        super().__init__()
+        self.main = nn.Sequential(*main)
+        self.skip = skip if skip else nn.Identity()
+
+    def forward(self, input):
+        return torch.cat([self.main(input), self.skip(input)], dim=1)
+
+
+class FourierFeatures(nn.Module):
+    def __init__(self, in_features, out_features, std=1.):
+        super().__init__()
+        assert out_features % 2 == 0
+        self.weight = nn.Parameter(torch.randn([out_features // 2, in_features]) * std)
+
+    def forward(self, input):
+        f = 2 * math.pi * input @ self.weight.T
+        return torch.cat([f.cos(), f.sin()], dim=-1)
+
+
+class SecondaryDiffusionImageNet(nn.Module):
+    def __init__(self):
+        super().__init__()
+        c = 64  # The base channel count
+
+        self.timestep_embed = FourierFeatures(1, 16)
+
+        self.net = nn.Sequential(
+            ConvBlock(3 + 16, c),
+            ConvBlock(c, c),
+            SkipBlock([
+                nn.AvgPool2d(2),
+                ConvBlock(c, c * 2),
+                ConvBlock(c * 2, c * 2),
+                SkipBlock([
+                    nn.AvgPool2d(2),
+                    ConvBlock(c * 2, c * 4),
+                    ConvBlock(c * 4, c * 4),
+                    SkipBlock([
+                        nn.AvgPool2d(2),
+                        ConvBlock(c * 4, c * 8),
+                        ConvBlock(c * 8, c * 4),
+                        nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
+                    ]),
+                    ConvBlock(c * 8, c * 4),
+                    ConvBlock(c * 4, c * 2),
+                    nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
+                ]),
+                ConvBlock(c * 4, c * 2),
+                ConvBlock(c * 2, c),
+                nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
+            ]),
+            ConvBlock(c * 2, c),
+            nn.Conv2d(c, 3, 3, padding=1),
+        )
+
+    def forward(self, input, t):
+        timestep_embed = expand_to_planes(self.timestep_embed(t[:, None]), input.shape)
+        v = self.net(torch.cat([input, timestep_embed], dim=1))
+        alphas, sigmas = map(partial(append_dims, n=v.ndim), t_to_alpha_sigma(t))
+        pred = input * alphas - v * sigmas
+        eps = input * sigmas + v * alphas
+        return DiffusionOutput(v, pred, eps)
+
+
+class SecondaryDiffusionImageNet2(nn.Module):
+    def __init__(self):
+        super().__init__()
+        c = 64  # The base channel count
+        cs = [c, c * 2, c * 2, c * 4, c * 4, c * 8]
+
+        self.timestep_embed = FourierFeatures(1, 16)
+        self.down = nn.AvgPool2d(2)
+        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
+
+        self.net = nn.Sequential(
+            ConvBlock(3 + 16, cs[0]),
+            ConvBlock(cs[0], cs[0]),
+            SkipBlock([
+                self.down,
+                ConvBlock(cs[0], cs[1]),
+                ConvBlock(cs[1], cs[1]),
+                SkipBlock([
+                    self.down,
+                    ConvBlock(cs[1], cs[2]),
+                    ConvBlock(cs[2], cs[2]),
+                    SkipBlock([
+                        self.down,
+                        ConvBlock(cs[2], cs[3]),
+                        ConvBlock(cs[3], cs[3]),
+                        SkipBlock([
+                            self.down,
+                            ConvBlock(cs[3], cs[4]),
+                            ConvBlock(cs[4], cs[4]),
+                            SkipBlock([
+                                self.down,
+                                ConvBlock(cs[4], cs[5]),
+                                ConvBlock(cs[5], cs[5]),
+                                ConvBlock(cs[5], cs[5]),
+                                ConvBlock(cs[5], cs[4]),
+                                self.up,
+                            ]),
+                            ConvBlock(cs[4] * 2, cs[4]),
+                            ConvBlock(cs[4], cs[3]),
+                            self.up,
+                        ]),
+                        ConvBlock(cs[3] * 2, cs[3]),
+                        ConvBlock(cs[3], cs[2]),
+                        self.up,
+                    ]),
+                    ConvBlock(cs[2] * 2, cs[2]),
+                    ConvBlock(cs[2], cs[1]),
+                    self.up,
+                ]),
+                ConvBlock(cs[1] * 2, cs[1]),
+                ConvBlock(cs[1], cs[0]),
+                self.up,
+            ]),
+            ConvBlock(cs[0] * 2, cs[0]),
+            nn.Conv2d(cs[0], 3, 3, padding=1),
+        )
+
+    def forward(self, input, t):
+        timestep_embed = expand_to_planes(self.timestep_embed(t[:, None]), input.shape)
+        v = self.net(torch.cat([input, timestep_embed], dim=1))
+        alphas, sigmas = map(partial(append_dims, n=v.ndim), t_to_alpha_sigma(t))
+        pred = input * alphas - v * sigmas
+        eps = input * sigmas + v * alphas
+        return DiffusionOutput(v, pred, eps)
+
+
+# %%
+# !! {"metadata": {
+# !!    "id": "DiffClipSetTop"
+# !! }}
+"""
+# 2. Diffusion and CLIP model settings
+"""
+
+# %%
+# !! {"metadata": {
+# !!   "id": "ModelSettings"
+# !!  }}
+#@markdown ####**Models Settings:**
+diffusion_model = "512x512_diffusion_uncond_finetune_008100" #@param ["256x256_diffusion_uncond", "512x512_diffusion_uncond_finetune_008100"]
+use_secondary_model = True #@param {type: 'boolean'}
+diffusion_sampling_mode = 'ddim' #@param ['plms','ddim']  
+
+
+use_checkpoint = True #@param {type: 'boolean'}
+ViTB32 = True #@param{type:"boolean"}
+ViTB16 = True #@param{type:"boolean"}
+ViTL14 = False #@param{type:"boolean"}
+RN101 = False #@param{type:"boolean"}
+RN50 = True #@param{type:"boolean"}
+RN50x4 = False #@param{type:"boolean"}
+RN50x16 = False #@param{type:"boolean"}
+RN50x64 = False #@param{type:"boolean"}
+
+#@markdown If you're having issues with model downloads, check this to compare SHA's:
+check_model_SHA = False #@param{type:"boolean"}
+
+model_256_SHA = '983e3de6f95c88c81b2ca7ebb2c217933be1973b1ff058776b970f901584613a'
+model_512_SHA = '9c111ab89e214862b76e1fa6a1b3f1d329b1a88281885943d2cdbe357ad57648'
+model_secondary_SHA = '983e3de6f95c88c81b2ca7ebb2c217933be1973b1ff058776b970f901584613a'
+
+model_256_link = 'https://openaipublic.blob.core.windows.net/diffusion/jul-2021/256x256_diffusion_uncond.pt'
+model_512_link = 'https://v-diffusion.s3.us-west-2.amazonaws.com/512x512_diffusion_uncond_finetune_008100.pt'
+model_secondary_link = 'https://v-diffusion.s3.us-west-2.amazonaws.com/secondary_model_imagenet_2.pth'
+
+model_256_path = f'{model_path}/256x256_diffusion_uncond.pt'
+model_512_path = f'{model_path}/512x512_diffusion_uncond_finetune_008100.pt'
+model_secondary_path = f'{model_path}/secondary_model_imagenet_2.pth'
+
+# Download the diffusion model
+if diffusion_model == '256x256_diffusion_uncond':
+  if os.path.exists(model_256_path) and check_model_SHA:
+    print('Checking 256 Diffusion File')
+    with open(model_256_path,"rb") as f:
+        bytes = f.read() 
+        hash = hashlib.sha256(bytes).hexdigest()
+    if hash == model_256_SHA:
+      print('256 Model SHA matches')
+      model_256_downloaded = True
+    else: 
+      print("256 Model SHA doesn't match, redownloading...")
+      wget(model_256_link, model_path)
+      model_256_downloaded = True
+  elif os.path.exists(model_256_path) and not check_model_SHA or model_256_downloaded == True:
+    print('256 Model already downloaded, check check_model_SHA if the file is corrupt')
+  else:  
+    wget(model_256_link, model_path)
+    model_256_downloaded = True
+elif diffusion_model == '512x512_diffusion_uncond_finetune_008100':
+  if os.path.exists(model_512_path) and check_model_SHA:
+    print('Checking 512 Diffusion File')
+    with open(model_512_path,"rb") as f:
+        bytes = f.read() 
+        hash = hashlib.sha256(bytes).hexdigest()
+    if hash == model_512_SHA:
+      print('512 Model SHA matches')
+      model_512_downloaded = True
+    else:  
+      print("512 Model SHA doesn't match, redownloading...")
+      wget(model_512_link, model_path)
+      model_512_downloaded = True
+  elif os.path.exists(model_512_path) and not check_model_SHA or model_512_downloaded == True:
+    print('512 Model already downloaded, check check_model_SHA if the file is corrupt')
+  else:  
+    wget(model_512_link, model_path)
+    model_512_downloaded = True
+
+
+# Download the secondary diffusion model v2
+if use_secondary_model == True:
+  if os.path.exists(model_secondary_path) and check_model_SHA:
+    print('Checking Secondary Diffusion File')
+    with open(model_secondary_path,"rb") as f:
+        bytes = f.read() 
+        hash = hashlib.sha256(bytes).hexdigest()
+    if hash == model_secondary_SHA:
+      print('Secondary Model SHA matches')
+      model_secondary_downloaded = True
+    else:  
+      print("Secondary Model SHA doesn't match, redownloading...")
+      wget(model_secondary_link, model_path)
+      model_secondary_downloaded = True
+  elif os.path.exists(model_secondary_path) and not check_model_SHA or model_secondary_downloaded == True:
+    print('Secondary Model already downloaded, check check_model_SHA if the file is corrupt')
+  else:  
+    wget(model_secondary_link, model_path)
+    model_secondary_downloaded = True
+
+model_config = model_and_diffusion_defaults()
+if diffusion_model == '512x512_diffusion_uncond_finetune_008100':
+    model_config.update({
+        'attention_resolutions': '32, 16, 8',
+        'class_cond': False,
+        'diffusion_steps': 1000, #No need to edit this, it is taken care of later.
+        'rescale_timesteps': True,
+        'timestep_respacing': 250, #No need to edit this, it is taken care of later.
+        'image_size': 512,
+        'learn_sigma': True,
+        'noise_schedule': 'linear',
+        'num_channels': 256,
+        'num_head_channels': 64,
+        'num_res_blocks': 2,
+        'resblock_updown': True,
+        'use_checkpoint': use_checkpoint,
+        'use_fp16': True,
+        'use_scale_shift_norm': True,
+    })
+elif diffusion_model == '256x256_diffusion_uncond':
+    model_config.update({
+        'attention_resolutions': '32, 16, 8',
+        'class_cond': False,
+        'diffusion_steps': 1000, #No need to edit this, it is taken care of later.
+        'rescale_timesteps': True,
+        'timestep_respacing': 250, #No need to edit this, it is taken care of later.
+        'image_size': 256,
+        'learn_sigma': True,
+        'noise_schedule': 'linear',
+        'num_channels': 256,
+        'num_head_channels': 64,
+        'num_res_blocks': 2,
+        'resblock_updown': True,
+        'use_checkpoint': use_checkpoint,
+        'use_fp16': True,
+        'use_scale_shift_norm': True,
+    })
+
+model_default = model_config['image_size']
+
+
+
+if use_secondary_model:
+    secondary_model = SecondaryDiffusionImageNet2()
+    secondary_model.load_state_dict(torch.load(f'{model_path}/secondary_model_imagenet_2.pth', map_location='cpu'))
+    secondary_model.eval().requires_grad_(False).to(device)
+
+clip_models = []
+if ViTB32 is True: clip_models.append(clip.load('ViT-B/32', jit=False)[0].eval().requires_grad_(False).to(device)) 
+if ViTB16 is True: clip_models.append(clip.load('ViT-B/16', jit=False)[0].eval().requires_grad_(False).to(device) ) 
+if ViTL14 is True: clip_models.append(clip.load('ViT-L/14', jit=False)[0].eval().requires_grad_(False).to(device) ) 
+if RN50 is True: clip_models.append(clip.load('RN50', jit=False)[0].eval().requires_grad_(False).to(device))
+if RN50x4 is True: clip_models.append(clip.load('RN50x4', jit=False)[0].eval().requires_grad_(False).to(device)) 
+if RN50x16 is True: clip_models.append(clip.load('RN50x16', jit=False)[0].eval().requires_grad_(False).to(device)) 
+if RN50x64 is True: clip_models.append(clip.load('RN50x64', jit=False)[0].eval().requires_grad_(False).to(device)) 
+if RN101 is True: clip_models.append(clip.load('RN101', jit=False)[0].eval().requires_grad_(False).to(device)) 
+
+normalize = T.Normalize(mean=[0.48145466, 0.4578275, 0.40821073], std=[0.26862954, 0.26130258, 0.27577711])
+lpips_model = lpips.LPIPS(net='vgg').to(device)
+
+
+# %%
+# !! {"metadata": {
+# !!    "id": "SettingsTop"
+# !! }}
+"""
+# 3. Settings
+"""
+
+# %%
+# !! {"metadata": {
+# !!    "id": "BasicSettings"
+# !!  }}
+#@markdown ####**Basic Settings:**
+batch_name = 'TimeToDisco' #@param{type: 'string'}
+steps = 250 #@param [25,50,100,150,250,500,1000]{type: 'raw', allow-input: true}
+width_height = [1280, 768]#@param{type: 'raw'}
+clip_guidance_scale = 5000 #@param{type: 'number'}
+tv_scale =  0#@param{type: 'number'}
+range_scale =   150#@param{type: 'number'}
+sat_scale =   0#@param{type: 'number'}
+cutn_batches = 4  #@param{type: 'number'}
+skip_augs = False#@param{type: 'boolean'}
+
+#@markdown ---
+
+#@markdown ####**Init Settings:**
+init_image = None #@param{type: 'string'}
+init_scale = 1000 #@param{type: 'integer'}
+skip_steps = 10 #@param{type: 'integer'}
+#@markdown *Make sure you set skip_steps to ~50% of your steps if you want to use an init image.*
+
+#Get corrected sizes
+side_x = (width_height[0]//64)*64
+side_y = (width_height[1]//64)*64
+if side_x != width_height[0] or side_y != width_height[1]:
+  print(f'Changing output size to {side_x}x{side_y}. Dimensions must be multiples of 64.')
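+# e.g. the default [1280, 768] is already a multiple of 64, while [1000, 700] would be rounded down to 960x640.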
+
+#Update Model Settings
+timestep_respacing = f'ddim{steps}'
+diffusion_steps = (1000//steps)*steps if steps < 1000 else steps
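+# e.g. steps=250 -> diffusion_steps=1000; steps=150 -> (1000//150)*150 = 900; steps >= 1000 are used as-is.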
+model_config.update({
+    'timestep_respacing': timestep_respacing,
+    'diffusion_steps': diffusion_steps,
+})
+
+#Make folder for batch
+batchFolder = f'{outDirPath}/{batch_name}'
+createPath(batchFolder)
+
+
+# %%
+# !! {"metadata": {
+# !!    "id": "AnimSetTop"
+# !! }}
+"""
+### Animation Settings
+"""
+
+# %%
+# !! {"metadata": {
+# !!    "id": "AnimSettings"
+# !! }}
+#@markdown ####**Animation Mode:**
+animation_mode = 'None' #@param ['None', '2D', '3D', 'Video Input'] {type:'string'}
+#@markdown *For animation, you probably want to set `cutn_batches` to 1 to make it quicker.*
+
+
+#@markdown ---
+
+#@markdown ####**Video Input Settings:**
+if is_colab:
+    video_init_path = "/content/training.mp4" #@param {type: 'string'}
+else:
+    video_init_path = "training.mp4" #@param {type: 'string'}
+extract_nth_frame = 2 #@param {type: 'number'}
+video_init_seed_continuity = True #@param {type: 'boolean'}
+
+if animation_mode == "Video Input":
+  if is_colab:
+      videoFramesFolder = f'/content/videoFrames'
+  else:
+      videoFramesFolder = f'videoFrames'
+  createPath(videoFramesFolder)
+  print(f"Exporting Video Frames (1 every {extract_nth_frame})...")
+  try:
+    for f in pathlib.Path(f'{videoFramesFolder}').glob('*.jpg'):
+      f.unlink()
+  except:
+    print('')
+  vf = f'select=not(mod(n\,{extract_nth_frame}))'
+  subprocess.run(['ffmpeg', '-i', f'{video_init_path}', '-vf', f'{vf}', '-vsync', 'vfr', '-q:v', '2', '-loglevel', 'error', '-stats', f'{videoFramesFolder}/%04d.jpg'], stdout=subprocess.PIPE).stdout.decode('utf-8')
+  #!ffmpeg -i {video_init_path} -vf {vf} -vsync vfr -q:v 2 -loglevel error -stats {videoFramesFolder}/%04d.jpg
+
+
+#@markdown ---
+
+#@markdown ####**2D Animation Settings:**
+#@markdown `zoom` is a multiplier of dimensions, 1 is no zoom.
+#@markdown All rotations are provided in degrees.
+
+key_frames = True #@param {type:"boolean"}
+max_frames = 10000#@param {type:"number"}
+
+if animation_mode == "Video Input":
+  max_frames = len(glob(f'{videoFramesFolder}/*.jpg'))
+
+interp_spline = 'Linear' #Do not change, currently will not look good. param ['Linear','Quadratic','Cubic']{type:"string"}
+angle = "0:(0)"#@param {type:"string"}
+zoom = "0: (1), 10: (1.05)"#@param {type:"string"}
+translation_x = "0: (0)"#@param {type:"string"}
+translation_y = "0: (0)"#@param {type:"string"}
+translation_z = "0: (10.0)"#@param {type:"string"}
+rotation_3d_x = "0: (0)"#@param {type:"string"}
+rotation_3d_y = "0: (0)"#@param {type:"string"}
+rotation_3d_z = "0: (0)"#@param {type:"string"}
+midas_depth_model = "dpt_large"#@param {type:"string"}
+midas_weight = 0.3#@param {type:"number"}
+near_plane = 200#@param {type:"number"}
+far_plane = 10000#@param {type:"number"}
+fov = 40#@param {type:"number"}
+padding_mode = 'border'#@param {type:"string"}
+sampling_mode = 'bicubic'#@param {type:"string"}
+
+#======= TURBO MODE
+#@markdown ---
+#@markdown ####**Turbo Mode (3D anim only):**
+#@markdown (Starts after frame 10.) Skips diffusion steps and just uses the depth map to warp images for the skipped frames.
+#@markdown Speeds up rendering by 2x-4x and may improve image coherence between frames. frame_blend_mode smooths abrupt texture changes across 2 frames.
+#@markdown For settings tuned specifically for Turbo Mode, refer to the original Disco-Turbo Github: https://github.com/zippy731/disco-diffusion-turbo
+
+turbo_mode = False #@param {type:"boolean"}
+turbo_steps = "3" #@param ["2","3","4","5","6"] {type:"string"}
+turbo_preroll = 10 # frames
+
+#insist turbo be used only w 3d anim.
+if turbo_mode and animation_mode != '3D':
+  print('=====')
+  print('Turbo mode only available with 3D animations. Disabling Turbo.')
+  print('=====')
+  turbo_mode = False
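+
+# Illustration (not a setting): with turbo_preroll = 10 and turbo_steps = "3", the first 10 frames are fully
+# diffused; after that only frames whose number is a multiple of 3 get a full diffusion pass, while the frames
+# in between are produced by warping the previous output with the depth map (with the blended save in do_run
+# smoothing the hand-off).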
+
+#@markdown ---
+
+#@markdown ####**Coherency Settings:**
+#@markdown `frames_scale` tries to guide the new frame to look like the old one. A good default is 1500.
+frames_scale = 1500 #@param{type: 'integer'}
+#@markdown `frames_skip_steps` will blur the previous frame - higher values will flicker less but struggle to add enough new detail to zoom into.
+frames_skip_steps = '60%' #@param ['40%', '50%', '60%', '70%', '80%'] {type: 'string'}
+
+#======= VR MODE
+#@markdown ---
+#@markdown ####**VR Mode (3D anim only):**
+#@markdown Enables stereo rendering of left/right eye views (supporting Turbo), which use a different (fish-eye) camera projection matrix.
+#@markdown Note that the images you're prompting will work better if they have some inherent wide-angle aspect.
+#@markdown The generated images will need to be combined into left/right videos, which can then be stitched into the VR180 format.
+#@markdown Google made the VR180 Creator tool but subsequently stopped supporting it. It's still available for download in a few places, including https://www.patrickgrunwald.de/vr180-creator-download
+#@markdown The tool is not only good for stitching (videos and photos) but also for adding the correct metadata to existing videos, which is needed for services like YouTube to identify the format correctly.
+#@markdown Watching YouTube VR videos isn't necessarily easy, depending on your headset. For instance, Oculus has a dedicated media studio and store which makes the files easier to access on a Quest: https://creator.oculus.com/manage/mediastudio/
+#@markdown 
+#@markdown The command to get ffmpeg to concat your frames for each eye is of the form: `ffmpeg -framerate 15 -i frame_%4d_l.png l.mp4` (repeat for r)
+
+vr_mode = False #@param {type:"boolean"}
+#@markdown `vr_eye_angle` is the y-axis rotation of the eyes towards the center.
+vr_eye_angle = 0.5 #@param{type:"number"}
+#@markdown `vr_ipd` is the interpupillary distance (the distance between the eyes).
+vr_ipd = 5.0 #@param{type:"number"}
+
+#insist VR be used only w 3d anim.
+if vr_mode and animation_mode != '3D':
+  print('=====')
+  print('VR mode only available with 3D animations. Disabling VR.')
+  print('=====')
+  vr_mode = False
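+
+# Optional helper sketch (an assumption, not part of the original notebook): after a VR run finishes, the
+# per-eye frames written by generate_eye_views() (frame_0000_l.png / frame_0000_r.png, zero-padded) can be
+# concatenated into one video per eye, mirroring the ffmpeg command in the notes above.
+def concat_vr_eye_frames(frames_folder, fps=15):
+    import subprocess
+    for eye in ('l', 'r'):
+        subprocess.run([
+            'ffmpeg', '-y',
+            '-framerate', str(fps),
+            '-i', f'{frames_folder}/frame_%04d_{eye}.png',
+            '-pix_fmt', 'yuv420p',
+            f'{frames_folder}/{eye}.mp4',
+        ], check=True)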
+
+
+def parse_key_frames(string, prompt_parser=None):
+    """Given a string representing frame numbers paired with parameter values at that frame,
+    return a dictionary with the frame numbers as keys and the parameter values as the values.
+
+    Parameters
+    ----------
+    string: string
+        Frame numbers paired with parameter values at that frame number, in the format
+        'framenumber1: (parametervalues1), framenumber2: (parametervalues2), ...'
+    prompt_parser: function or None, optional
+        If provided, prompt_parser will be applied to each string of parameter values.
+    
+    Returns
+    -------
+    dict
+        Frame numbers as keys, parameter values at that frame number as values
+
+    Raises
+    ------
+    RuntimeError
+        If the input string does not match the expected format.
+    
+    Examples
+    --------
+    >>> parse_key_frames("10:(Apple: 1| Orange: 0), 20: (Apple: 0| Orange: 1| Peach: 1)")
+    {10: 'Apple: 1| Orange: 0', 20: 'Apple: 0| Orange: 1| Peach: 1'}
+
+    >>> parse_key_frames("10:(Apple: 1| Orange: 0), 20: (Apple: 0| Orange: 1| Peach: 1)", prompt_parser=lambda x: x.lower()))
+    {10: 'apple: 1| orange: 0', 20: 'apple: 0| orange: 1| peach: 1'}
+    """
+    import re
+    pattern = r'((?P<frame>[0-9]+):[\s]*[\(](?P<param>[\S\s]*?)[\)])'
+    frames = dict()
+    for match_object in re.finditer(pattern, string):
+        frame = int(match_object.groupdict()['frame'])
+        param = match_object.groupdict()['param']
+        if prompt_parser:
+            frames[frame] = prompt_parser(param)
+        else:
+            frames[frame] = param
+
+    if frames == {} and len(string) != 0:
+        raise RuntimeError('Key Frame string not correctly formatted')
+    return frames
+
+def get_inbetweens(key_frames, integer=False):
+    """Given a dict with frame numbers as keys and a parameter value as values,
+    return a pandas Series containing the value of the parameter at every frame from 0 to max_frames.
+    Any values not provided in the input dict are calculated by linear interpolation between
+    the values of the previous and next provided frames. If there is no previous provided frame, then
+    the value is equal to the value of the next provided frame, or if there is no next provided frame,
+    then the value is equal to the value of the previous provided frame. If no frames are provided,
+    all frame values are NaN.
+
+    Parameters
+    ----------
+    key_frames: dict
+        A dict with integer frame numbers as keys and numerical values of a particular parameter as values.
+    integer: Bool, optional
+        If True, the values of the output series are converted to integers.
+        Otherwise, the values are floats.
+    
+    Returns
+    -------
+    pd.Series
+        A Series with length max_frames representing the parameter values for each frame.
+    
+    Examples
+    --------
+    >>> max_frames = 5
+    >>> get_inbetweens({1: 5, 3: 6})
+    0    5.0
+    1    5.0
+    2    5.5
+    3    6.0
+    4    6.0
+    dtype: float64
+
+    >>> get_inbetweens({1: 5, 3: 6}, integer=True)
+    0    5
+    1    5
+    2    5
+    3    6
+    4    6
+    dtype: int64
+    """
+    key_frame_series = pd.Series([np.nan for a in range(max_frames)])
+
+    for i, value in key_frames.items():
+        key_frame_series[i] = value
+    key_frame_series = key_frame_series.astype(float)
+    
+    interp_method = interp_spline
+
+    if interp_method == 'Cubic' and len(key_frames.items()) <=3:
+      interp_method = 'Quadratic'
+    
+    if interp_method == 'Quadratic' and len(key_frames.items()) <= 2:
+      interp_method = 'Linear'
+      
+    
+    key_frame_series[0] = key_frame_series[key_frame_series.first_valid_index()]
+    key_frame_series[max_frames-1] = key_frame_series[key_frame_series.last_valid_index()]
+    # key_frame_series = key_frame_series.interpolate(method=intrp_method,order=1, limit_direction='both')
+    key_frame_series = key_frame_series.interpolate(method=interp_method.lower(),limit_direction='both')
+    if integer:
+        return key_frame_series.astype(int)
+    return key_frame_series
+
+def split_prompts(prompts):
+  prompt_series = pd.Series([np.nan for a in range(max_frames)])
+  for i, prompt in prompts.items():
+    prompt_series[i] = prompt
+  # prompt_series = prompt_series.astype(str)
+  prompt_series = prompt_series.ffill().bfill()
+  return prompt_series
+
+if key_frames:
+    try:
+        angle_series = get_inbetweens(parse_key_frames(angle))
+    except RuntimeError as e:
+        print(
+            "WARNING: You have selected to use key frames, but you have not "
+            "formatted `angle` correctly for key frames.\n"
+            "Attempting to interpret `angle` as "
+            f'"0: ({angle})"\n'
+            "Please read the instructions to find out how to use key frames "
+            "correctly.\n"
+        )
+        angle = f"0: ({angle})"
+        angle_series = get_inbetweens(parse_key_frames(angle))
+
+    try:
+        zoom_series = get_inbetweens(parse_key_frames(zoom))
+    except RuntimeError as e:
+        print(
+            "WARNING: You have selected to use key frames, but you have not "
+            "formatted `zoom` correctly for key frames.\n"
+            "Attempting to interpret `zoom` as "
+            f'"0: ({zoom})"\n'
+            "Please read the instructions to find out how to use key frames "
+            "correctly.\n"
+        )
+        zoom = f"0: ({zoom})"
+        zoom_series = get_inbetweens(parse_key_frames(zoom))
+
+    try:
+        translation_x_series = get_inbetweens(parse_key_frames(translation_x))
+    except RuntimeError as e:
+        print(
+            "WARNING: You have selected to use key frames, but you have not "
+            "formatted `translation_x` correctly for key frames.\n"
+            "Attempting to interpret `translation_x` as "
+            f'"0: ({translation_x})"\n'
+            "Please read the instructions to find out how to use key frames "
+            "correctly.\n"
+        )
+        translation_x = f"0: ({translation_x})"
+        translation_x_series = get_inbetweens(parse_key_frames(translation_x))
+
+    try:
+        translation_y_series = get_inbetweens(parse_key_frames(translation_y))
+    except RuntimeError as e:
+        print(
+            "WARNING: You have selected to use key frames, but you have not "
+            "formatted `translation_y` correctly for key frames.\n"
+            "Attempting to interpret `translation_y` as "
+            f'"0: ({translation_y})"\n'
+            "Please read the instructions to find out how to use key frames "
+            "correctly.\n"
+        )
+        translation_y = f"0: ({translation_y})"
+        translation_y_series = get_inbetweens(parse_key_frames(translation_y))
+
+    try:
+        translation_z_series = get_inbetweens(parse_key_frames(translation_z))
+    except RuntimeError as e:
+        print(
+            "WARNING: You have selected to use key frames, but you have not "
+            "formatted `translation_z` correctly for key frames.\n"
+            "Attempting to interpret `translation_z` as "
+            f'"0: ({translation_z})"\n'
+            "Please read the instructions to find out how to use key frames "
+            "correctly.\n"
+        )
+        translation_z = f"0: ({translation_z})"
+        translation_z_series = get_inbetweens(parse_key_frames(translation_z))
+
+    try:
+        rotation_3d_x_series = get_inbetweens(parse_key_frames(rotation_3d_x))
+    except RuntimeError as e:
+        print(
+            "WARNING: You have selected to use key frames, but you have not "
+            "formatted `rotation_3d_x` correctly for key frames.\n"
+            "Attempting to interpret `rotation_3d_x` as "
+            f'"0: ({rotation_3d_x})"\n'
+            "Please read the instructions to find out how to use key frames "
+            "correctly.\n"
+        )
+        rotation_3d_x = f"0: ({rotation_3d_x})"
+        rotation_3d_x_series = get_inbetweens(parse_key_frames(rotation_3d_x))
+
+    try:
+        rotation_3d_y_series = get_inbetweens(parse_key_frames(rotation_3d_y))
+    except RuntimeError as e:
+        print(
+            "WARNING: You have selected to use key frames, but you have not "
+            "formatted `rotation_3d_y` correctly for key frames.\n"
+            "Attempting to interpret `rotation_3d_y` as "
+            f'"0: ({rotation_3d_y})"\n'
+            "Please read the instructions to find out how to use key frames "
+            "correctly.\n"
+        )
+        rotation_3d_y = f"0: ({rotation_3d_y})"
+        rotation_3d_y_series = get_inbetweens(parse_key_frames(rotation_3d_y))
+
+    try:
+        rotation_3d_z_series = get_inbetweens(parse_key_frames(rotation_3d_z))
+    except RuntimeError as e:
+        print(
+            "WARNING: You have selected to use key frames, but you have not "
+            "formatted `rotation_3d_z` correctly for key frames.\n"
+            "Attempting to interpret `rotation_3d_z` as "
+            f'"0: ({rotation_3d_z})"\n'
+            "Please read the instructions to find out how to use key frames "
+            "correctly.\n"
+        )
+        rotation_3d_z = f"0: ({rotation_3d_z})"
+        rotation_3d_z_series = get_inbetweens(parse_key_frames(rotation_3d_z))
+
+else:
+    angle = float(angle)
+    zoom = float(zoom)
+    translation_x = float(translation_x)
+    translation_y = float(translation_y)
+    translation_z = float(translation_z)
+    rotation_3d_x = float(rotation_3d_x)
+    rotation_3d_y = float(rotation_3d_y)
+    rotation_3d_z = float(rotation_3d_z)
+
+
+# %%
+# !! {"metadata": {
+# !!    "id": "ExtraSetTop"
+# !! }}
+"""
+### Extra Settings
+ Partial Saves, Advanced Settings, Cutn Scheduling
+"""
+
+# %%
+# !! {"metadata": {
+# !!   "id": "ExtraSettings"
+# !! }}
+#@markdown ####**Saving:**
+
+intermediate_saves = 0#@param{type: 'raw'}
+intermediates_in_subfolder = True #@param{type: 'boolean'}
+#@markdown Intermediate steps will save a copy at your specified intervals. You can either format it as a single integer or a list of specific steps.
+
+#@markdown A value of `2` will save a copy at 33% and 66%; `0` will save none.
+
+#@markdown A value of `[5, 9, 34, 45]` will save at steps 5, 9, 34, and 45. (Make sure to include the brackets.)
+
+
+if type(intermediate_saves) is not list:
+  if intermediate_saves:
+    steps_per_checkpoint = math.floor((steps - skip_steps - 1) // (intermediate_saves+1))
+    steps_per_checkpoint = steps_per_checkpoint if steps_per_checkpoint > 0 else 1
+    print(f'Will save every {steps_per_checkpoint} steps')
+  else:
+    steps_per_checkpoint = steps+10
+else:
+  steps_per_checkpoint = None
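+# Worked example: steps=250, skip_steps=10 and intermediate_saves=2 give
+# steps_per_checkpoint = (250 - 10 - 1) // (2 + 1) = 79, i.e. roughly the 33% / 66% checkpoints described above.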
+
+if intermediate_saves and intermediates_in_subfolder is True:
+  partialFolder = f'{batchFolder}/partials'
+  createPath(partialFolder)
+
+  #@markdown ---
+
+#@markdown ####**Advanced Settings:**
+#@markdown *There are a few extra advanced settings available if you double click this cell.*
+
+#@markdown *Perlin init will replace your init, so uncheck if using one.*
+
+perlin_init = False  #@param{type: 'boolean'}
+perlin_mode = 'mixed' #@param ['mixed', 'color', 'gray']
+set_seed = 'random_seed' #@param{type: 'string'}
+eta = 0.8#@param{type: 'number'}
+clamp_grad = True #@param{type: 'boolean'}
+clamp_max = 0.05 #@param{type: 'number'}
+
+
+### EXTRA ADVANCED SETTINGS:
+randomize_class = True
+clip_denoised = False
+fuzzy_prompt = False
+rand_mag = 0.05
+
+
+ #@markdown ---
+
+#@markdown ####**Cutn Scheduling:**
+#@markdown Format: `[40]*400+[20]*600` = 40 cuts for the first 400/1000 steps, then 20 for the last 600/1000.
+
+#@markdown `cut_overview` and `cut_innercut` are cumulative for total cutn on any given step. Overview cuts see the entire image and are good for early structure; innercuts are your standard cutn.
+
+cut_overview = "[12]*400+[4]*600" #@param {type: 'string'}       
+cut_innercut ="[4]*400+[12]*600"#@param {type: 'string'}  
+cut_ic_pow = 1#@param {type: 'number'}  
+cut_icgray_p = "[0.2]*400+[0]*600"#@param {type: 'string'}
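+
+# Quick sanity-check sketch (illustration only): the schedule strings are eval()'d into 1000-entry lists,
+# and the run indexes them by 1000 minus the current timestep, so early (high-t) steps read the left end.
+_example_overview = eval("[12]*400+[4]*600")
+assert len(_example_overview) == 1000
+assert _example_overview[1000 - 950] == 12   # early step (t = 950): 12 overview cuts
+assert _example_overview[1000 - 200] == 4    # late step  (t = 200): 4 overview cuts
+del _example_overview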
+
+
+# %%
+# !! {"metadata": {
+# !!    "id": "PromptsTop"
+# !! }}
+"""
+### Prompts
+`animation_mode: None` will only use the first set. `animation_mode: 2D / Video` will run through them per the set frames and hold on the last one.
+"""
+
+# %%
+# !! {"metadata": {
+# !!    "id": "Prompts"
+# !! }}
+text_prompts = {
+    0: ["A beautiful painting of a singular lighthouse, shining its light across a tumultuous sea of blood by greg rutkowski and thomas kinkade, Trending on artstation.", "yellow color scheme"],
+    100: ["This set of prompts start at frame 100","This prompt has weight five:5"],
+}
+
+image_prompts = {
+    # 0:['ImagePromptsWorkButArentVeryGood.png:2',],
+}
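+
+# Descriptive note: split_prompts() forward/back-fills these keyed prompts across max_frames, so frames 0-99
+# use the frame-0 set and every frame from 100 onward holds the frame-100 set (animation_mode 'None' only ever
+# uses the first set). A trailing ":<number>" (e.g. "This prompt has weight five:5") is parsed off as that
+# prompt's weight; prompts without a suffix default to a weight of 1.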
+
+
+# %%
+# !! {"metadata": {
+# !!    "id": "DiffuseTop"
+# !! }}
+"""
+# 4. Diffuse!
+"""
+
+# %%
+# !! {"metadata": {
+# !!    "id": "DoTheRun"
+# !!  }}
+#@title Do the Run!
+#@markdown `n_batches` ignored with animation modes.
+display_rate =  50 #@param{type: 'number'}
+n_batches =  50 #@param{type: 'number'}
+
+#Update Model Settings
+timestep_respacing = f'ddim{steps}'
+diffusion_steps = (1000//steps)*steps if steps < 1000 else steps
+model_config.update({
+    'timestep_respacing': timestep_respacing,
+    'diffusion_steps': diffusion_steps,
+})
+
+batch_size = 1 
+
+def move_files(start_num, end_num, old_folder, new_folder):
+    for i in range(start_num, end_num):
+        old_file = old_folder + f'/{batch_name}({batchNum})_{i:04}.png'
+        new_file = new_folder + f'/{batch_name}({batchNum})_{i:04}.png'
+        os.rename(old_file, new_file)
+
+#@markdown ---
+
+
+resume_run = False #@param{type: 'boolean'}
+run_to_resume = 'latest' #@param{type: 'string'}
+resume_from_frame = 'latest' #@param{type: 'string'}
+retain_overwritten_frames = False #@param{type: 'boolean'}
+if retain_overwritten_frames is True:
+  retainFolder = f'{batchFolder}/retained'
+  createPath(retainFolder)
+
+
+skip_step_ratio = int(frames_skip_steps.rstrip("%")) / 100
+calc_frames_skip_steps = math.floor(steps * skip_step_ratio)
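+# e.g. steps=250 with frames_skip_steps='60%' gives calc_frames_skip_steps = floor(250 * 0.6) = 150.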
+
+
+if steps <= calc_frames_skip_steps:
+  sys.exit("ERROR: You can't skip more steps than your total steps")
+
+if resume_run:
+  if run_to_resume == 'latest':
+    try:
+      batchNum
+    except:
+      batchNum = len(glob(f"{batchFolder}/{batch_name}(*)_settings.txt"))-1
+  else:
+    batchNum = int(run_to_resume)
+  if resume_from_frame == 'latest':
+    start_frame = len(glob(batchFolder+f"/{batch_name}({batchNum})_*.png"))
+    if animation_mode == '3D' and turbo_mode == True and start_frame > turbo_preroll and start_frame % int(turbo_steps) != 0:
+      start_frame = start_frame - (start_frame % int(turbo_steps))
+  else:
+    start_frame = int(resume_from_frame)+1
+    if animation_mode == '3D' and turbo_mode == True and start_frame > turbo_preroll and start_frame % int(turbo_steps) != 0:
+      start_frame = start_frame - (start_frame % int(turbo_steps))
+    if retain_overwritten_frames is True:
+      existing_frames = len(glob(batchFolder+f"/{batch_name}({batchNum})_*.png"))
+      frames_to_save = existing_frames - start_frame
+      print(f'Moving {frames_to_save} frames to the Retained folder')
+      move_files(start_frame, existing_frames, batchFolder, retainFolder)
+else:
+  start_frame = 0
+  batchNum = len(glob(batchFolder+"/*.txt"))
+  while os.path.isfile(f"{batchFolder}/{batch_name}({batchNum})_settings.txt") is True or os.path.isfile(f"{batchFolder}/{batch_name}-{batchNum}_settings.txt") is True:
+    batchNum += 1
+
+print(f'Starting Run: {batch_name}({batchNum}) at frame {start_frame}')
+
+if set_seed == 'random_seed':
+    random.seed()
+    seed = random.randint(0, 2**32)
+    # print(f'Using seed: {seed}')
+else:
+    seed = int(set_seed)
+
+args = {
+    'batchNum': batchNum,
+    'prompts_series':split_prompts(text_prompts) if text_prompts else None,
+    'image_prompts_series':split_prompts(image_prompts) if image_prompts else None,
+    'seed': seed,
+    'display_rate':display_rate,
+    'n_batches':n_batches if animation_mode == 'None' else 1,
+    'batch_size':batch_size,
+    'batch_name': batch_name,
+    'steps': steps,
+    'diffusion_sampling_mode': diffusion_sampling_mode,
+    'width_height': width_height,
+    'clip_guidance_scale': clip_guidance_scale,
+    'tv_scale': tv_scale,
+    'range_scale': range_scale,
+    'sat_scale': sat_scale,
+    'cutn_batches': cutn_batches,
+    'init_image': init_image,
+    'init_scale': init_scale,
+    'skip_steps': skip_steps,
+    'side_x': side_x,
+    'side_y': side_y,
+    'timestep_respacing': timestep_respacing,
+    'diffusion_steps': diffusion_steps,
+    'animation_mode': animation_mode,
+    'video_init_path': video_init_path,
+    'extract_nth_frame': extract_nth_frame,
+    'video_init_seed_continuity': video_init_seed_continuity,
+    'key_frames': key_frames,
+    'max_frames': max_frames if animation_mode != "None" else 1,
+    'interp_spline': interp_spline,
+    'start_frame': start_frame,
+    'angle': angle,
+    'zoom': zoom,
+    'translation_x': translation_x,
+    'translation_y': translation_y,
+    'translation_z': translation_z,
+    'rotation_3d_x': rotation_3d_x,
+    'rotation_3d_y': rotation_3d_y,
+    'rotation_3d_z': rotation_3d_z,
+    'midas_depth_model': midas_depth_model,
+    'midas_weight': midas_weight,
+    'near_plane': near_plane,
+    'far_plane': far_plane,
+    'fov': fov,
+    'padding_mode': padding_mode,
+    'sampling_mode': sampling_mode,
+    'angle_series':angle_series,
+    'zoom_series':zoom_series,
+    'translation_x_series':translation_x_series,
+    'translation_y_series':translation_y_series,
+    'translation_z_series':translation_z_series,
+    'rotation_3d_x_series':rotation_3d_x_series,
+    'rotation_3d_y_series':rotation_3d_y_series,
+    'rotation_3d_z_series':rotation_3d_z_series,
+    'frames_scale': frames_scale,
+    'calc_frames_skip_steps': calc_frames_skip_steps,
+    'skip_step_ratio': skip_step_ratio,
+    'text_prompts': text_prompts,
+    'image_prompts': image_prompts,
+    'cut_overview': eval(cut_overview),
+    'cut_innercut': eval(cut_innercut),
+    'cut_ic_pow': cut_ic_pow,
+    'cut_icgray_p': eval(cut_icgray_p),
+    'intermediate_saves': intermediate_saves,
+    'intermediates_in_subfolder': intermediates_in_subfolder,
+    'steps_per_checkpoint': steps_per_checkpoint,
+    'perlin_init': perlin_init,
+    'perlin_mode': perlin_mode,
+    'set_seed': set_seed,
+    'eta': eta,
+    'clamp_grad': clamp_grad,
+    'clamp_max': clamp_max,
+    'skip_augs': skip_augs,
+    'randomize_class': randomize_class,
+    'clip_denoised': clip_denoised,
+    'fuzzy_prompt': fuzzy_prompt,
+    'rand_mag': rand_mag,
+}
+
+args = SimpleNamespace(**args)
+
+print('Prepping model...')
+model, diffusion = create_model_and_diffusion(**model_config)
+model.load_state_dict(torch.load(f'{model_path}/{diffusion_model}.pt', map_location='cpu'))
+model.requires_grad_(False).eval().to(device)
+for name, param in model.named_parameters():
+    if 'qkv' in name or 'norm' in name or 'proj' in name:
+        param.requires_grad_()
+if model_config['use_fp16']:
+    model.convert_to_fp16()
+
+gc.collect()
+torch.cuda.empty_cache()
+try:
+  do_run()
+except KeyboardInterrupt:
+    pass
+finally:
+    print('Seed used:', seed)
+    gc.collect()
+    torch.cuda.empty_cache()
+
+
+# %%
+# !! {"metadata": {
+# !!    "id": "CreateVidTop"
+# !! }}
+"""
+# 5. Create the video
+"""
+
+# %%
+# !! {"metadata": {
+# !!    "id": "CreateVid"
+# !! }}
+# @title ### **Create video**
+#@markdown Video file will save in the same folder as your images.
+
+skip_video_for_run_all = True #@param {type: 'boolean'}
+
+if skip_video_for_run_all == True:
+  print('Skipping video creation, uncheck skip_video_for_run_all if you want to run it')
+
+else:
+  # import subprocess in case this cell is run without the above cells
+  import subprocess
+  from base64 import b64encode
+
+  latest_run = batchNum
+
+  folder = batch_name #@param
+  run = latest_run #@param
+  final_frame = 'final_frame'
+
+
+  init_frame = 1#@param {type:"number"} This is the frame where the video will start
+  last_frame = final_frame#@param {type:"number"} You can change this to the number of the last frame you want to generate. It will raise an error if that number of frames does not exist.
+  fps = 12#@param {type:"number"}
+  # view_video_in_cell = True #@param {type: 'boolean'}
+
+  frames = []
+  # tqdm.write('Generating video...')
+
+  if last_frame == 'final_frame':
+    last_frame = len(glob(batchFolder+f"/{folder}({run})_*.png"))
+    print(f'Total frames: {last_frame}')
+
+  image_path = f"{outDirPath}/{folder}/{folder}({run})_%04d.png"
+  filepath = f"{outDirPath}/{folder}/{folder}({run}).mp4"
+
+
+  cmd = [
+      'ffmpeg',
+      '-y',
+      '-vcodec',
+      'png',
+      '-r',
+      str(fps),
+      '-start_number',
+      str(init_frame),
+      '-i',
+      image_path,
+      '-frames:v',
+      str(last_frame+1),
+      '-c:v',
+      'libx264',
+      '-vf',
+      f'fps={fps}',
+      '-pix_fmt',
+      'yuv420p',
+      '-crf',
+      '17',
+      '-preset',
+      'veryslow',
+      filepath
+  ]
+
+  process = subprocess.Popen(cmd, cwd=f'{batchFolder}', stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+  stdout, stderr = process.communicate()
+  if process.returncode != 0:
+      print(stderr)
+      raise RuntimeError(stderr)
+  else:
+      print("The video is ready and saved to the images folder")
+
+  # if view_video_in_cell:
+  #     mp4 = open(filepath,'rb').read()
+  #     data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
+  #     display.HTML(f'<video width=400 controls><source src="{data_url}" type="video/mp4"></video>')
+  

+ 131 - 0
disco_xform_utils.py

@@ -0,0 +1,131 @@
+import torch, torchvision
+import py3d_tools as p3d
+import midas_utils
+from PIL import Image
+import numpy as np
+import sys, math
+
+try:
+    from infer import InferenceHelper
+except ImportError:
+    print("disco_xform_utils.py failed to import InferenceHelper. Please ensure that the AdaBins directory is on the path (e.g. via sys.path.append('./AdaBins')).")
+    sys.exit()
+
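+# Bounds (in pixels) on the image area fed to AdaBins; inputs outside this range are rescaled before depth inference.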
+MAX_ADABINS_AREA = 500000
+MIN_ADABINS_AREA = 448*448
+
+@torch.no_grad()
+def transform_image_3d(img_filepath, midas_model, midas_transform, device, rot_mat=torch.eye(3).unsqueeze(0), translate=(0.,0.,-0.04), near=2000, far=20000, fov_deg=60, padding_mode='border', sampling_mode='bicubic', midas_weight = 0.3,spherical=False):
+    img_pil = Image.open(open(img_filepath, 'rb')).convert('RGB')
+    w, h = img_pil.size
+    image_tensor = torchvision.transforms.functional.to_tensor(img_pil).to(device)
+
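+    # AdaBins is blended with MiDaS whenever MiDaS does not get the full weight (midas_weight < 1.0).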
+    use_adabins = midas_weight < 1.0
+
+    if use_adabins:
+        # AdaBins
+        """
+        predictions using nyu dataset
+        """
+        print("Running AdaBins depth estimation implementation...")
+        infer_helper = InferenceHelper(dataset='nyu')
+
+        image_pil_area = w*h
+        if image_pil_area > MAX_ADABINS_AREA:
+            scale = math.sqrt(MAX_ADABINS_AREA) / math.sqrt(image_pil_area)
+            depth_input = img_pil.resize((int(w*scale), int(h*scale)), Image.LANCZOS) # LANCZOS is supposed to be good for downsampling.
+        elif image_pil_area < MIN_ADABINS_AREA:
+            scale = math.sqrt(MIN_ADABINS_AREA) / math.sqrt(image_pil_area)
+            depth_input = img_pil.resize((int(w*scale), int(h*scale)), Image.BICUBIC)
+        else:
+            depth_input = img_pil
+        try:
+            _, adabins_depth = infer_helper.predict_pil(depth_input)
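+            # Bring the AdaBins depth back to the source image resolution so it can be blended with the MiDaS prediction below.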
+            if image_pil_area != MAX_ADABINS_AREA:
+                adabins_depth = torchvision.transforms.functional.resize(torch.from_numpy(adabins_depth), image_tensor.shape[-2:], interpolation=torchvision.transforms.functional.InterpolationMode.BICUBIC).squeeze().to(device)
+            else:
+                adabins_depth = torch.from_numpy(adabins_depth).squeeze().to(device)
+            adabins_depth_np = adabins_depth.cpu().numpy()
+        except Exception:
+            # If AdaBins inference fails, fall back to MiDaS-only depth so adabins_depth_np is never used while undefined.
+            use_adabins = False
+
+    torch.cuda.empty_cache()
+
+    # MiDaS
+    img_midas = midas_utils.read_image(img_filepath)
+    img_midas_input = midas_transform({"image": img_midas})["image"]
+    midas_optimize = True
+
+    # MiDaS depth estimation implementation
+    print("Running MiDaS depth estimation implementation...")
+    sample = torch.from_numpy(img_midas_input).float().to(device).unsqueeze(0)
+    if midas_optimize and device == torch.device("cuda"):
+        sample = sample.to(memory_format=torch.channels_last)  
+        sample = sample.half()
+    prediction_torch = midas_model.forward(sample)
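+    # Upsample the raw MiDaS prediction back to the input image's resolution.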
+    prediction_torch = torch.nn.functional.interpolate(
+            prediction_torch.unsqueeze(1),
+            size=img_midas.shape[:2],
+            mode="bicubic",
+            align_corners=False,
+        ).squeeze()
+    prediction_np = prediction_torch.clone().cpu().numpy()
+
+    print("Finished depth estimation.")
+    torch.cuda.empty_cache()
+
+    # MiDaS makes the near values greater, and the far values lesser. Let's reverse that and try to align with AdaBins a bit better.
+    prediction_np = np.subtract(50.0, prediction_np)
+    prediction_np = prediction_np / 19.0
+
+    if use_adabins:
+        adabins_weight = 1.0 - midas_weight
+        depth_map = prediction_np*midas_weight + adabins_depth_np*adabins_weight
+    else:
+        depth_map = prediction_np
+
+    depth_map = np.expand_dims(depth_map, axis=0)
+    depth_tensor = torch.from_numpy(depth_map).squeeze().to(device)
+
+    pixel_aspect = 1.0 # really.. the aspect of an individual pixel! (so usually 1.0)
+    persp_cam_old = p3d.FoVPerspectiveCameras(near, far, pixel_aspect, fov=fov_deg, degrees=True, device=device)
+    persp_cam_new = p3d.FoVPerspectiveCameras(near, far, pixel_aspect, fov=fov_deg, degrees=True, R=rot_mat, T=torch.tensor([translate]), device=device)
+
+    # range of [-1,1] is important to torch grid_sample's padding handling
+    y,x = torch.meshgrid(torch.linspace(-1.,1.,h,dtype=torch.float32,device=device),torch.linspace(-1.,1.,w,dtype=torch.float32,device=device))
+    z = torch.as_tensor(depth_tensor, dtype=torch.float32, device=device)
+    xyz_old_world = torch.stack((x.flatten(), y.flatten(), z.flatten()), dim=1)
+
+    # Transform the points using pytorch3d. With current functionality, this is overkill and prevents it from working on Windows.
+    # If you want it to run on Windows (without pytorch3d), then the transforms (and/or perspective if that's separate) can be done pretty easily without it.
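+    # Illustrative sketch only (not how this file does it) of a pytorch3d-free alternative, assuming a simple
+    # pinhole camera with the same fov_deg:
+    #   f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)              # focal length implied by the field of view
+    #   xyz_new = xyz_old_world @ rot_mat.squeeze(0).T + torch.tensor(translate, device=device)
+    #   xy_old = f * xyz_old_world[:, :2] / xyz_old_world[:, 2:3]    # perspective divide, old camera
+    #   xy_new = f * xyz_new[:, :2] / xyz_new[:, 2:3]                # perspective divide, new camera
+    #   offset_xy would then be xy_new - xy_old, mirroring the two transform_points calls below.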
+    xyz_old_cam_xy = persp_cam_old.get_full_projection_transform().transform_points(xyz_old_world)[:,0:2]
+    xyz_new_cam_xy = persp_cam_new.get_full_projection_transform().transform_points(xyz_old_world)[:,0:2]
+
+    offset_xy = xyz_new_cam_xy - xyz_old_cam_xy
+    # affine_grid theta param expects a batch of 2D mats. Each is 2x3 to do rotation+translation.
+    identity_2d_batch = torch.tensor([[1.,0.,0.],[0.,1.,0.]], device=device).unsqueeze(0)
+    # coords_2d will have shape (N,H,W,2).. which is also what grid_sample needs.
+    coords_2d = torch.nn.functional.affine_grid(identity_2d_batch, [1,1,h,w], align_corners=False)
+    offset_coords_2d = coords_2d - torch.reshape(offset_xy, (h,w,2)).unsqueeze(0)
+
+    if spherical:
+        spherical_grid = get_spherical_projection(h, w, torch.tensor([0,0], device=device), -0.4,device=device)#align_corners=False
+        stage_image = torch.nn.functional.grid_sample(image_tensor.add(1/512 - 0.0001).unsqueeze(0), offset_coords_2d, mode=sampling_mode, padding_mode=padding_mode, align_corners=True)
+        new_image = torch.nn.functional.grid_sample(stage_image, spherical_grid,align_corners=True) #, mode=sampling_mode, padding_mode=padding_mode, align_corners=False)
+    else:
+        new_image = torch.nn.functional.grid_sample(image_tensor.add(1/512 - 0.0001).unsqueeze(0), offset_coords_2d, mode=sampling_mode, padding_mode=padding_mode, align_corners=False)
+
+    img_pil = torchvision.transforms.ToPILImage()(new_image.squeeze().clamp(0,1.))
+
+    torch.cuda.empty_cache()
+
+    return img_pil
+
+def get_spherical_projection(H, W, center, magnitude,device):  
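+    # Builds a (1, H, W, 2) sampling grid for grid_sample: each grid point is displaced toward or away from
+    # `center` (depending on the sign of `magnitude`) by an amount that grows with its distance from `center`,
+    # producing a fisheye-style radial warp.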
+    xx, yy = torch.linspace(-1, 1, W,dtype=torch.float32,device=device), torch.linspace(-1, 1, H,dtype=torch.float32,device=device)  
+    gridy, gridx  = torch.meshgrid(yy, xx)
+    grid = torch.stack([gridx, gridy], dim=-1)  
+    d = center - grid
+    d_sum = torch.sqrt((d**2).sum(axis=-1))
+    grid += d * d_sum.unsqueeze(-1) * magnitude 
+    return grid.unsqueeze(0)

+ 47 - 0
docker/README.md

@@ -0,0 +1,47 @@
+# Docker
+
+## Introduction
+
+This is a Docker build file that will preinstall dependencies, packages, Git repos, and pre-cache the large model files needed by Disco Diffusion.
+
+## TO-DO:
+
+- Make the container actually accept parameters on run.  Right now you'll just be seeing lighthouses.
+
+## Change Log
+
+- `1.0`
+
+  Initial build file created based on the DD 5.1 Git repo.  This initial build is deliberately meant to work without touching any of the existing Python code.  It does handle some of the pre-setup tasks already done in the Python code, such as installing pip packages, cloning Git repos, and even pre-caching the model files for faster launch speed.
+
+## Build the Prep Image
+The prep image is broken out from the `main` folder's `Dockerfile` to help with long build context times (and wget download times after the initial build).  This prep image contains all the large model files required by Disco Diffusion.
+
+From a terminal in the `docker/prep` directory, run:
+```sh
+docker build -t disco-diffusion-prep:5.1 .
+```
+
+## Build the Image
+From a terminal in the `docker/main` directory, run:
+
+```sh
+docker build -t disco-diffusion:5.1 .
+```
+
+## Run as a Container
+
+This example runs Disco Diffusion in a Docker container.  It maps `images_out` and `init_images` into the container's working directory so they can be accessed from the host OS.
+```sh
+docker run --rm -it \
+    -v $(echo ~)/disco-diffusion/images_out:/workspace/code/images_out \
+    -v $(echo ~)/disco-diffusion/init_images:/workspace/code/init_images \
+    --gpus=all \
+    --name="disco-diffusion" --ipc=host \
+    --user $(id -u):$(id -g) \
+disco-diffusion:5.1 python disco-diffusion/disco.py
+```
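+
+If the host-side `images_out` and `init_images` folders don't exist yet, it may help to create them first so the Docker daemon doesn't create them owned by root.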
+
+## Passing Parameters
+
+This will be added after conferring with repo authors.

+ 40 - 0
docker/main/Dockerfile

@@ -0,0 +1,40 @@
+# Model prep phase; also cuts down on build context wait time, since these model files
+# are large and slow to copy...
+FROM disco-diffusion-prep:5.1 AS modelprep
+
+FROM nvcr.io/nvidia/pytorch:21.08-py3
+
+ENV PYTHONDONTWRITEBYTECODE 1
+ENV PYTHONUNBUFFERED 1
+
+# Install a few dependencies
+RUN apt-get update && \
+    DEBIAN_FRONTEND=noninteractive TZ=Etc/UTC apt-get install -y tzdata imagemagick
+
+# Create a disco user
+RUN useradd -ms /bin/bash disco
+USER disco
+
+# Set up code directory
+RUN mkdir code
+WORKDIR /workspace/code
+
+# Copy over models used
+COPY --from=modelprep /scratch/models /workspace/code/models
+COPY --from=modelprep /scratch/pretrained /workspace/code/pretrained
+
+# Clone Git repositories
+RUN git clone https://github.com/alembics/disco-diffusion.git && \
+    git clone https://github.com/openai/CLIP && \
+    git clone https://github.com/assafshocher/ResizeRight.git && \
+    git clone https://github.com/MSFTserver/pytorch3d-lite.git && \
+    git clone https://github.com/isl-org/MiDaS.git && \
+    git clone https://github.com/crowsonkb/guided-diffusion.git && \
+    git clone https://github.com/shariqfarooq123/AdaBins.git
+
+# Install Python packages
+RUN pip install imageio imageio-ffmpeg==0.4.4 pyspng==0.1.0 lpips datetime timm ipywidgets "omegaconf>=2.0.0" "pytorch-lightning>=1.0.8" torch-fidelity einops wandb pandas ftfy
+
+# Precache other big files
+COPY --chown=disco --from=modelprep /scratch/clip /home/disco/.cache/clip
+COPY --chown=disco --from=modelprep /scratch/model-lpips/vgg16-397923af.pth /home/disco/.cache/torch/hub/checkpoints/vgg16-397923af.pth

+ 25 - 0
docker/prep/Dockerfile

@@ -0,0 +1,25 @@
+FROM nvcr.io/nvidia/pytorch:21.08-py3 AS prep
+    RUN mkdir -p /scratch/models && \
+        mkdir -p /scratch/models/superres && \
+        mkdir -p /scratch/models/slip && \
+        mkdir -p /scratch/model-lpips && \
+        mkdir -p /scratch/clip && \
+        mkdir -p /scratch/pretrained
+
+    RUN wget --progress=bar:force:noscroll -P /scratch/model-lpips https://download.pytorch.org/models/vgg16-397923af.pth 
+
+    RUN wget --no-directories --progress=bar:force:noscroll -P /scratch/models https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt
+    RUN wget --no-directories --progress=bar:force:noscroll -P /scratch/models https://v-diffusion.s3.us-west-2.amazonaws.com/512x512_diffusion_uncond_finetune_008100.pt
+    RUN wget --no-directories --progress=bar:force:noscroll -P /scratch/models https://openaipublic.blob.core.windows.net/diffusion/jul-2021/256x256_diffusion_uncond.pt
+    RUN wget --no-directories --progress=bar:force:noscroll -P /scratch/models https://v-diffusion.s3.us-west-2.amazonaws.com/secondary_model_imagenet_2.pth
+
+    RUN wget --no-directories --progress=bar:force:noscroll -P /scratch/pretrained https://cloudflare-ipfs.com/ipfs/Qmd2mMnDLWePKmgfS8m6ntAg4nhV5VkUyAydYBp8cWWeB7/AdaBins_nyu.pt
+
+    RUN wget --no-directories --progress=bar:force:noscroll -P /scratch/clip/ https://openaipublic.azureedge.net/clip/models/afeb0e10f9e5a86da6080e35cf09123aca3b358a0c3e3b6c78a7b63bc04b6762/RN50.pt
+    RUN wget --no-directories --progress=bar:force:noscroll -P /scratch/clip https://openaipublic.azureedge.net/clip/models/8fa8567bab74a42d41c5915025a8e4538c3bdbe8804a470a72f30b0d94fab599/RN101.pt
+    RUN wget --no-directories --progress=bar:force:noscroll -P /scratch/clip https://openaipublic.azureedge.net/clip/models/7e526bd135e493cef0776de27d5f42653e6b4c8bf9e0f653bb11773263205fdd/RN50x4.pt
+    RUN wget --no-directories --progress=bar:force:noscroll -P /scratch/clip https://openaipublic.azureedge.net/clip/models/52378b407f34354e150460fe41077663dd5b39c54cd0bfd2b27167a4a06ec9aa/RN50x16.pt
+    RUN wget --no-directories --progress=bar:force:noscroll -P /scratch/clip https://openaipublic.azureedge.net/clip/models/be1cfb55d75a9666199fb2206c106743da0f6468c9d327f3e0d0a543a9919d9c/RN50x64.pt
+    RUN wget --no-directories --progress=bar:force:noscroll -P /scratch/clip https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt
+    RUN wget --no-directories --progress=bar:force:noscroll -P /scratch/clip https://openaipublic.azureedge.net/clip/models/5806e77cd80f8b59890b7e101eabd078d9fb84e6937f9e85e4ecb61988df416f/ViT-B-16.pt
+    RUN wget --no-directories --progress=bar:force:noscroll -P /scratch/clip https://openaipublic.azureedge.net/clip/models/b8cca3fd41ae0c99ba7e8951adf17d267cdb84cd88be6f7c2e0eca1737a03836/ViT-L-14.pt