Global minimizers, strict and non-strict saddle points, and implicit regularization for deep linear neural networks - IRT Saint Exupéry - Institut de Recherche Technologique
Preprint / working paper, Year: 2021

Global minimizers, strict and non-strict saddle points, and implicit regularization for deep linear neural networks

Abstract

In non-convex settings, it is established that gradient-based algorithms behave differently in the vicinity of local structures of the objective function such as strict and non-strict saddle points, local and global minima, and maxima. It is therefore crucial to describe the landscape of non-convex problems, that is, to describe as precisely as possible the set of points in each of these categories. In this work, we study the landscape of the empirical risk associated with deep linear neural networks and the square loss. It is known that, under weak assumptions, this objective function has no spurious local minima and no local maxima. We go a step further and characterize, among all critical points, which are global minimizers, strict saddle points, and non-strict saddle points, and we enumerate all the associated critical values. The characterization is simple, involves conditions on the ranks of partial matrix products, and sheds light on the global convergence and implicit regularization phenomena that have been proved or observed when optimizing linear neural networks. In passing, we also provide an explicit parameterization of the set of all global minimizers and exhibit large sets of strict and non-strict saddle points.
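The objects the abstract refers to can be made concrete with a small numerical sketch (not taken from the paper): the empirical risk of a deep linear network under the square loss, and the ranks of the partial matrix products in which the paper's characterization of critical points is stated. All dimensions, depth, and data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical depth-3 linear network mapping R^4 -> R^3,
# with layer widths d_0=4, d_1=5, d_2=5, d_3=3.
dims = [4, 5, 5, 3]
L = len(dims) - 1
Ws = [rng.standard_normal((dims[k + 1], dims[k])) for k in range(L)]

X = rng.standard_normal((dims[0], 20))   # 20 illustrative training inputs
Y = rng.standard_normal((dims[-1], 20))  # 20 illustrative training targets

def product(mats):
    """Multiply W_j ... W_i with the rightmost factor applied first."""
    P = np.eye(mats[0].shape[1])
    for W in mats:
        P = W @ P
    return P

# Empirical risk with the square loss: 1/2 ||W_L ... W_1 X - Y||_F^2.
risk = 0.5 * np.linalg.norm(product(Ws) @ X - Y, "fro") ** 2

# Ranks of all partial products W_j ... W_i; the paper's conditions
# distinguishing global minimizers from (non-)strict saddles are
# expressed in terms of quantities of this kind.
ranks = {(i, j): np.linalg.matrix_rank(product(Ws[i : j + 1]))
         for i in range(L) for j in range(i, L)}
```

With generic (random) weights, every partial product has the maximal rank allowed by its dimensions; the interesting critical points in the paper's analysis are precisely the rank-deficient configurations.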
Main file: landscape_linear_networks.pdf (709.35 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03299887 , version 1 (28-07-2021)
hal-03299887 , version 2 (24-02-2022)

Identifiers

  • HAL Id : hal-03299887 , version 1

Cite

El Mehdi Achour, François Malgouyres, Sébastien Gerchinovitz. Global minimizers, strict and non-strict saddle points, and implicit regularization for deep linear neural networks. 2021. ⟨hal-03299887v1⟩