
We are given multiple data sets and a nonlinear model function for each data value. Each data value is the sum of its error and its model function evaluated at an unknown parameter vector. The data errors have mean zero and finite variance, are independent, are not necessarily normal, and are identically distributed within each data set. We consider the problem of estimating the data variance as well as the parameter vector via an extended least-squares technique motivated by maximum likelihood estimation. We prove convergence of an algorithm that generalizes a standard successive approximation algorithm from nonlinear programming. This generalization reduces the estimation problem to a sequence of linear least-squares problems. We show that the parameter and variance estimators converge to their true values as the number of data values goes to infinity. Moreover, if the constraints are not active, the parameter estimates converge in distribution. This convergence does not depend on the data errors being normally distributed.
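To illustrate the successive-approximation idea of reducing a nonlinear estimation problem to a sequence of linear least-squares problems, here is a minimal sketch of a Gauss-Newton iteration on a hypothetical one-parameter exponential model. This is an assumption-laden illustration, not the paper's algorithm: the model, data, and variance estimator below are invented for demonstration.

```python
import numpy as np

# Hypothetical example (not the paper's algorithm): estimate a scalar
# parameter theta in the model y_i = exp(-theta * t_i) + e_i by repeatedly
# linearizing the model and solving a linear least-squares problem.

def model(theta, t):
    return np.exp(-theta * t)

def jacobian(theta, t):
    # derivative of the model with respect to theta, one row per data value
    return (-t * np.exp(-theta * t)).reshape(-1, 1)

def gauss_newton(t, y, theta0, n_iter=20):
    theta = float(theta0)
    for _ in range(n_iter):
        r = y - model(theta, t)            # residuals at current iterate
        J = jacobian(theta, t)             # linearization of the model
        # linear least-squares step: minimize || r - J * delta ||^2
        delta, *_ = np.linalg.lstsq(J, r, rcond=None)
        theta += delta[0]
    return theta

rng = np.random.default_rng(0)
t = np.linspace(0.1, 2.0, 200)
true_theta = 1.5
y = model(true_theta, t) + 0.01 * rng.standard_normal(t.size)

theta_hat = gauss_newton(t, y, theta0=0.5)
# variance estimate from the mean squared residual at the fitted parameter
sigma2_hat = np.mean((y - model(theta_hat, t)) ** 2)
```

Each pass replaces the nonlinear fit with a linear one in the correction `delta`, which mirrors the reduction to a sequence of linear least-squares problems described above; the residual-based `sigma2_hat` plays the role of the data-variance estimator.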
