Gradient Penalty

https://arxiv.org/pdf/1704.00028.pdf

$$\lambda \cdot \mathrm{relu}\big(\lVert \mathrm{gradients}(\mathrm{target}, \mathrm{components}) \rVert_2 - \mathrm{flex}\big)^2$$
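The `relu` makes the penalty one-sided: only gradient norms above `flex` are penalized. The sketch below is a minimal illustration of this formula in PyTorch, not HyperGAN's own implementation; it assumes the gradient is taken with respect to input samples (as in the linked WGAN-GP paper), and the function and variable names are invented for the example.

```python
import torch
import torch.nn.functional as F

def gradient_penalty(output, inputs, lam=1.0, flex=1.0):
    # lam * relu(||d(output)/d(inputs)||_2 - flex)^2, averaged over the batch.
    # Each tensor in `inputs` must have requires_grad=True.
    grads = torch.autograd.grad(output.sum(), inputs, create_graph=True)
    flat = torch.cat([g.reshape(g.shape[0], -1) for g in grads], dim=1)
    norm = flat.norm(2, dim=1)
    return lam * F.relu(norm - flex).pow(2).mean()

# Example usage (illustrative): add the penalty on real samples to the discriminator loss.
# x = real_batch.clone().requires_grad_(True)
# d_loss = d_loss + gradient_penalty(discriminator(x), [x], lam=1.0, flex=1.0)
```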

examples

{
  "class": "function:hypergan.train_hooks.gradient_penalty_train_hook.GradientPenaltyTrainHook",
  "lambda": 1.00,
  "flex": 1.0,
  "components": ["discriminator"],
  "target": "discriminator"
}

options

| attribute | description | type |
| --- | --- | --- |
| `target` | Used in `gradients(target, components)`. Defaults to `discriminator`. | string (optional) |
| `lambda` | Loss multiplier. Defaults to `1.0`. | float |
| `components` | Used in `gradients(target, components)`. Defaults to all components. | array of strings |
| `flex` | Can also be a list for separate X/G flex, e.g. `[0.0, 10.0]` (see the sketch below the table). | float or array of float |
| `loss` | The loss the penalty term is added to: `g_loss` or `d_loss`. Defaults to `g_loss`. | string |
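When `flex` is given as a pair such as `[0.0, 10.0]`, one plausible reading (an assumption on my part, not stated by the source) is that the first value constrains the gradient norm on real samples (X) and the second on generated samples (G). Reusing the illustrative `gradient_penalty` helper from the sketch above:

```python
# Hypothetical per-term flex: separate bounds for the X and G penalty terms.
def two_sided_flex_penalty(discriminator, real_batch, fake_batch,
                           lam=1.0, flex=(0.0, 10.0)):
    x = real_batch.clone().requires_grad_(True)           # real samples (X)
    g = fake_batch.detach().clone().requires_grad_(True)  # generated samples (G)
    return (gradient_penalty(discriminator(x), [x], lam=lam, flex=flex[0])
            + gradient_penalty(discriminator(g), [g], lam=lam, flex=flex[1]))
```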

Floats are configurable parameters.