Structural Identifiability of Generalized Constraint Neural Network Models for Nonlinear Regression

Abstract: Identifiability becomes an essential requirement for learning machines when the models contain physically interpretable parameters. This paper presents two approaches to examining structural identifiability of generalized constraint neural network (GCNN) models by viewing the model from two different perspectives. First, by taking the model as a static deterministic function, a functional framework is established, which can recognize a deficient model and at the same time reparameterize it through a pairwise-mode symbolic examination. Second, by viewing the model as the mean function of an isotropic Gaussian conditional distribution, the algebraic approaches [E.A. Catchpole, B.J.T. Morgan, Detecting parameter redundancy, Biometrika 84 (1) (1997) 187-196] are extended to deal with multivariate nonlinear regression models through symbolically checking linear dependence of the derivative functional vectors. Examples are presented in which the proposed approaches are applied to GCNN nonlinear regression models that contain coupled, physically interpretable parameters.
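The second approach rests on the Catchpole-Morgan criterion: a model is parameter-redundant when the derivative matrix of its mean function with respect to the parameters has symbolic rank lower than the number of parameters. A minimal SymPy sketch of that rank test is given below, using two toy exponential-regression models rather than the GCNN models of the paper; the helper name `redundancy_rank` and the example models are illustrative assumptions, not the authors' code.

```python
import sympy as sp

def redundancy_rank(mean_exprs, params):
    # Derivative matrix D: rows index sample points, columns index parameters.
    # Symbolic rank < len(params) signals parameter redundancy
    # (Catchpole-Morgan style test).
    D = sp.Matrix([[sp.diff(m, p) for p in params] for m in mean_exprs])
    return D.rank(simplify=True)

a, b = sp.symbols('a b', positive=True)
x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
xs = [x1, x2, x3]

# Toy redundant model: only the sum a + b enters, so a and b
# cannot be identified separately.
mu_redundant = [sp.exp(-(a + b) * x) for x in xs]

# Toy identifiable model: a scales, b governs decay.
mu_identifiable = [a * sp.exp(-b * x) for x in xs]

print(redundancy_rank(mu_redundant, [a, b]))    # 1 < 2 -> redundant
print(redundancy_rank(mu_identifiable, [a, b])) # 2 -> full rank
```

In the redundant case both columns of D equal -x_i exp(-(a+b)x_i), so the rank collapses to 1, exactly the linear dependence of derivative functional vectors that the paper's symbolic check detects.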
Document type: Journal article

https://hal-ecp.archives-ouvertes.fr/hal-00872379
Contributor: Paul-Henry Cournède
Submitted on: Saturday, October 12, 2013 - 12:16:53 AM
Last modification on: Thursday, April 25, 2019 - 11:29:13 AM

Citation

Shuang-Hong Yang, Bao-Gang Hu, Paul-Henry Cournède. Structural Identifiability of Generalized Constraint Neural Network Models for Nonlinear Regression. Neurocomputing, Elsevier, 2008, 72 (1-3), pp.392-400. ⟨10.1016/j.neucom.2007.12.013⟩. ⟨hal-00872379⟩
