1. Paper Information

Paper title: *A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications*


arXiv link: https://arxiv.org/pdf/2001.06937.pdf

2. Abstract

Generative Adversarial Networks (GANs) have been widely studied since their introduction in 2014, and a large number of GAN variants have since been proposed.


Index Terms

Deep Learning; GANs; Algorithms; Theory; Applications

3. Introduction

GANs consist of two models: a generator and a discriminator. These two models are typically implemented as neural networks, but they can be implemented by any form of differentiable system that maps data from one space to another.

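A minimal sketch of this two-model setup, assuming PyTorch; the MLP architectures, layer widths, and the `noise_dim`/`data_dim` defaults are illustrative choices, not from the paper:

```python
import torch
import torch.nn as nn

# Generator: a differentiable map from noise space to data space.
class Generator(nn.Module):
    def __init__(self, noise_dim=100, data_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256),
            nn.ReLU(),
            nn.Linear(256, data_dim),
            nn.Tanh(),  # squash outputs into the data range, e.g. [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

# Discriminator: a binary classifier mapping a data-space example
# to a single scalar in (0, 1).
class Discriminator(nn.Module):
    def __init__(self, data_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(data_dim, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
            nn.Sigmoid(),  # probability that the input is a true example
        )

    def forward(self, x):
        return self.net(x)
```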

The generator tries to capture the distribution of the true examples in order to generate new data examples.


The discriminator is usually a binary classifier, discriminating generated examples from the true examples as accurately as possible.


The optimization of GANs is a minimax problem, and the goal is to reach a Nash equilibrium.

For GANs, the value function of this minimax game is:

$$\min_G \max_D V(D,G)=\mathbb{E} _ {x \sim p_{data}(x)}[\log D(x)]+\mathbb{E} _ {z \sim p_z(z)}[\log (1-D(G(z)))] $$

The generator G and the discriminator D play against each other, learning jointly and improving continually through this adversarial process.
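This equilibrium can be made concrete: for a fixed generator G, maximizing $V(D,G)$ pointwise in $x$ gives the optimal discriminator in closed form (a standard result from the original GAN paper):

$$D_G^*(x)=\frac{p_{data}(x)}{p_{data}(x)+p_g(x)}$$

At the Nash equilibrium the generator matches the data distribution, $p_g=p_{data}$, so $D_G^*(x)=\frac{1}{2}$ everywhere and the discriminator can do no better than random guessing.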

4. Related Work

GANs belong to the family of generative algorithms.


4.1 Generative algorithms

Generative algorithms can be classified into two classes: explicit density models and implicit density models.


4.1.1 Explicit density model

An explicit density model assumes a form for the data distribution and uses true data to train the model containing that distribution or to fit its parameters. Once training is finished, new examples are produced using the learned model or distribution.


Explicit density models include maximum likelihood estimation (MLE), approximate inference, and Markov chain methods.

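As a toy illustration of this workflow, the sketch below assumes the data is Gaussian, fits the parameters by MLE, and then samples new examples from the learned distribution (a minimal NumPy sketch; the distributional assumption and all numbers are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" data: samples from the unknown distribution we want to model.
true_data = rng.normal(loc=3.0, scale=1.5, size=10_000)

# Explicit density assumption: data ~ N(mu, sigma^2).
# The Gaussian MLE is the sample mean and the sample standard deviation.
mu_hat = true_data.mean()
sigma_hat = true_data.std()

# Once training is finished, new examples come from the learned distribution.
new_examples = rng.normal(loc=mu_hat, scale=sigma_hat, size=5)
print(f"mu_hat={mu_hat:.3f}, sigma_hat={sigma_hat:.3f}")
print("new examples:", new_examples)
```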

4.1.2 Implicit density model

An implicit density model produces data instances from the distribution without an explicit hypothesis about its form, and uses the produced examples to modify the model.


GANs belong to the directed implicit density model category.


4.1.3 The comparison between GANs and other generative algorithms

The basic idea behind adversarial learning is that the generator tries to create examples that are as realistic as possible to deceive the discriminator, while the discriminator tries to distinguish fake examples from true examples. Both the generator and the discriminator improve through adversarial learning.


4.2 Adversarial idea

Adversarial machine learning is a minimax problem. The defender, who builds the classifier that we want to work correctly, searches over the parameter space for the parameters that reduce the classifier's cost as much as possible. Simultaneously, the attacker searches over the inputs of the model to maximize the cost.

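A minimal sketch of these two searches, assuming PyTorch; the single-step sign-gradient attack (FGSM-style) is one concrete choice of attacker, not something the survey prescribes:

```python
import torch
import torch.nn.functional as F

def defender_step(model, optimizer, x, y):
    """Defender: search the parameter space to reduce the classifier's cost."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()  # gradient descent on the parameters

def attacker_step(model, x, y, eps=0.1):
    """Attacker: search the input space to increase the classifier's cost."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Gradient *ascent* on the input: a single FGSM-style step.
    return (x_adv + eps * x_adv.grad.sign()).detach()
```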

5. Algorithms

5.1 Generative Adversarial Nets (GANs)

In order to learn the generator's distribution $p_g$ over data $x$, a prior $p_z(z)$ is defined on the input noise variable $z$.


Then, GANs represent a mapping from noise space to data space as $G(z, \theta_g)$, where G is a differentiable function represented by a neural network with parameters $\theta_g$.


In addition to G, a second neural network $D(x, \theta_d)$ is defined with parameters $\theta_d$, and the output of $D(x)$ is a single scalar: the probability that $x$ came from the data rather than from the generator G.


The discriminator D is trained to maximize the probability of assigning the correct label to both training data and fake samples generated by the generator G. Simultaneously, G is trained to minimize $\log(1-D(G(z)))$.

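Putting the two updates together, one training step of the original minimax game might look like the sketch below (assuming PyTorch and the `Generator`/`Discriminator` modules sketched earlier; `noise_dim`, the optimizers, and the `eps` stabilizer are illustrative choices):

```python
import torch

def gan_step(G, D, opt_G, opt_D, real_batch, noise_dim=100, eps=1e-8):
    batch_size = real_batch.size(0)

    # --- Discriminator: ascend log D(x) + log(1 - D(G(z))) ---
    z = torch.randn(batch_size, noise_dim)
    fake_batch = G(z).detach()  # block gradients from flowing into G
    opt_D.zero_grad()
    d_objective = (torch.log(D(real_batch) + eps)
                   + torch.log(1.0 - D(fake_batch) + eps)).mean()
    (-d_objective).backward()   # maximize by minimizing the negative
    opt_D.step()

    # --- Generator: descend log(1 - D(G(z))) ---
    z = torch.randn(batch_size, noise_dim)
    opt_G.zero_grad()
    g_loss = torch.log(1.0 - D(G(z)) + eps).mean()
    g_loss.backward()
    opt_G.step()
```

In practice the original GAN paper also recommends the non-saturating alternative of maximizing $\log D(G(z))$ for the generator, since $\log(1-D(G(z)))$ saturates early in training when D rejects fakes with high confidence.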

5.1.1 Objective function

(1) Original minimax game

The objective function of GANs is:

$$\min_G \max_D V(D,G)=\mathbb{E} _ {x \sim p_{data}(x)}[\log D(x)]+\mathbb{E} _ {z \sim p_z(z)}[\log (1-D(G(z)))]$$

$\log D(x)$ is the cross-entropy between $\begin{bmatrix}1 & 0 \end{bmatrix}^T$ and $\begin{bmatrix}D(x) & 1-D(x) \end{bmatrix}^T$. Similarly, $\log(1-D(G(z)))$ is the cross-entropy between $\begin{bmatrix}0 & 1 \end{bmatrix}^T$ and $\begin{bmatrix}D(G(z)) & 1-D(G(z)) \end{bmatrix}^T$.

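To make the correspondence precise, recall that the cross-entropy between discrete distributions $p$ and $q$ is $H(p,q)=-\sum_i p_i \log q_i$. Taking $p=\begin{bmatrix}1 & 0 \end{bmatrix}^T$ and $q=\begin{bmatrix}D(x) & 1-D(x) \end{bmatrix}^T$ gives

$$H(p,q)=-1\cdot \log D(x)-0\cdot \log (1-D(x))=-\log D(x)$$

so maximizing $\log D(x)$ is exactly minimizing the discriminator's cross-entropy loss on true examples, and the $\log(1-D(G(z)))$ term plays the same role for generated examples with the labels swapped.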