Learning Rate Dropout

https://arxiv.org/abs/1912.00144
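
This train hook implements learning rate dropout from the paper above: at each training step it samples a random per-parameter mask with ratio dropout, scales dropped gradients by zeros, and scales the surviving gradients by ones. skip_d and skip_g leave the discriminator or generator gradients untouched.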


examples

{
  "class": "function:hypergan.train_hooks.learning_rate_dropout_train_hook.LearningRateDropoutTrainHook",
  "dropout": 0.01,
  "ones": 1e12,
  "zeros": 0.0,
  "skip_d": true
}
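
In this example, gradients are masked with a 0.01 dropout ratio, survivors are multiplied by 1e12, and skip_d leaves the discriminator untouched, so only the generator's gradients are affected. Below is a minimal sketch of the masking step, assuming a PyTorch backend; apply_lr_dropout is a hypothetical helper, not part of the HyperGAN API, and it reads dropout as the fraction of elements dropped, which is an assumption.

import torch

def apply_lr_dropout(grads, dropout=0.5, ones=0.1, zeros=0.0):
    # Hypothetical helper: sample a fresh Bernoulli mask each step and
    # scale every gradient element by `ones` (kept) or `zeros` (dropped).
    masked = []
    for g in grads:
        keep = (torch.rand_like(g) >= dropout).to(g.dtype)
        masked.append(g * (keep * ones + (1.0 - keep) * zeros))
    return masked

# Usage: mask the generator gradients before the optimizer consumes them,
# mirroring the example config above (skip_d leaves D's gradients alone).
w = torch.randn(4, 4, requires_grad=True)
loss = (w ** 2).sum()
(grad,) = torch.autograd.grad(loss, [w])
(grad,) = apply_lr_dropout([grad], dropout=0.01, ones=1e12, zeros=0.0)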

options

| attribute | description | type |
| --- | --- | --- |
| dropout | Dropout ratio in [0, 1]. Defaults to 0.5 | float |
| ones | Gradient multiplier when not dropped out. Defaults to 0.1 | float |
| zeros | Gradient multiplier when dropped out. Defaults to 0.0 | float |
| skip_d | Skip the discriminator gradients. Defaults to false | boolean |
| skip_g | Skip the generator gradients. Defaults to false | boolean |

Floats are configurable parameters.