Rolling Memory

(no paper)

Rolling memory is a form of experience replay. At each training step, a memory slot is replaced with the top-scoring item from the current batch.

Each pairing listed in types becomes a discriminator whose loss is added to the GAN loss.

examples

{
    "class": "function:hypergan.train_hooks.experimental.rolling_memory_2_train_hook.RollingMemoryTrainHook",
    "types": ["mx-/g(mz-)"]
}

mx- is a memory of x that is updated each training step. g(mz-) is a memory of z that is run through the generator and updated each training step.

A discriminator d(mx-, g(mz-)) is created and its loss is added to the GAN loss.
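The replacement rule described above can be sketched as follows. This is a minimal NumPy sketch under assumed semantics; update_rolling_memory is a hypothetical helper for illustration, not HyperGAN's actual implementation.

```python
import numpy as np

def update_rolling_memory(memory, batch, scores, top_k=1):
    """Replace the oldest memory slots with the top_k highest-scoring
    batch items. Illustrative sketch only, not HyperGAN's code."""
    # indices of the best batch items, highest discriminator score first
    best = np.argsort(scores)[::-1][:top_k]
    # shift existing memories back by top_k slots (the "rolling" part) ...
    memory = np.roll(memory, shift=top_k, axis=0)
    # ... and write the winning batch items into the freed slots
    memory[:top_k] = batch[best]
    return memory
```

On each call the memory keeps its fixed size; the oldest entries fall off the end as the newest high-scoring samples roll in at the front.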

options

| attribute | description | type |
| --- | --- | --- |
| types | What memories and how they are paired. See memory types below. | array of strings |
| top_k | How many memory items to replace per frame. Defaults to 1. | integer |
| only | Overrides all other losses when this is set. Defaults to false. | boolean |
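A configuration exercising these options might look like the following. This is an illustrative variant of the example above; the top_k and only values are arbitrary.

```json
{
    "class": "function:hypergan.train_hooks.experimental.rolling_memory_2_train_hook.RollingMemoryTrainHook",
    "types": ["mx-/g(mz-)"],
    "top_k": 2,
    "only": false
}
```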

memory types

| memory | description |
| --- | --- |
| mx- | x reverse sorted by d_real |
| mx+ | x sorted by d_real |
| mg- | memory of g reverse sorted by d_fake |
| mg+ | memory of g sorted by d_fake |
| g(mz-) | generator of a memory of z, reverse sorted by d_fake |
| g(mz+) | generator of a memory of z, sorted by d_fake |
| x | gan.inputs.x |
| g | gan.generator.sample |
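Reading the naming convention above, the + variants appear to sort ascending by discriminator score and the - variants take the reverse order. This interpretation is an assumption from the table's descriptions, sketched here with NumPy:

```python
import numpy as np

# Hypothetical discriminator scores d_real for a batch of real samples x
d_real = np.array([0.2, 0.9, 0.5])

order = np.argsort(d_real)   # ascending by score: [0, 2, 1]
mx_plus = order              # "mx+": x sorted by d_real
mx_minus = order[::-1]       # "mx-": x reverse sorted by d_real
```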
