1594, obscure question
Posted by Isildur on Wed 31-Dec-69 07:00 PM
Anybody ever fool around with the nonlinear regression routines in the GNU Scientific Library or MINPACK? If so, care to help a n00b out?
|
1595, Math is very much a part of the Axis of Evil (n/t)
Posted by Zulghinlour on Wed 31-Dec-69 07:00 PM
n/t
|
1597, RE: Math is very much a part of the Axis of Evil (n/t)
Posted by Isildur on Wed 31-Dec-69 07:00 PM
Okay, you asked for it. I'm going to assume this means you want to help. Hopefully you can save me the shame of additional newbie-ish posts to the help-gsl mailing list.
I don't fully understand the GSL calls that let the caller test whether the iteration (all the solvers are iterative) should stop.
One is based on "absolute error" and "relative error", while the others use gradient values.
Relevant docs:
http://www.gnu.org/software/gsl/manual/html_node/Search-Stopping-Parameters-for-Minimization-Algorithms.html
My question concerns the first method, gsl_multifit_test_delta(). When I pass in values for epsabs and epsrel, what exactly does the function compare them against?
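For context, here's roughly the loop I'm running now. This is only a sketch: the solver setup is omitted, and the helper name and iteration cap are just things from my own code, not anything the library prescribes.

#include <gsl/gsl_errno.h>
#include <gsl/gsl_multifit_nlin.h>

/* Drive a gsl_multifit_fdfsolver until gsl_multifit_test_delta reports
   convergence or we hit an iteration cap. Returns the last status code. */
static int drive_solver(gsl_multifit_fdfsolver *solver,
                        double epsabs, double epsrel, size_t max_iter)
{
    int status;
    size_t iter = 0;

    do {
        iter++;
        status = gsl_multifit_fdfsolver_iterate(solver);
        if (status)
            break;   /* the solver could not take a step */

        /* This is the call I'm asking about: it receives the last step
           (solver->dx) and the current parameter vector (solver->x)
           along with my epsabs/epsrel tolerances. */
        status = gsl_multifit_test_delta(solver->dx, solver->x,
                                         epsabs, epsrel);
    } while (status == GSL_CONTINUE && iter < max_iter);

    return status;
}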
If after j iterations my system is:
F_1(a_j, b_j, c_j) = error_1_j
F_2(a_j, b_j, c_j) = error_2_j
...
F_i(a_j, b_j, c_j) = error_i_j
...
F_m(a_j, b_j, c_j) = error_m_j
Then after the next iteration it will be:
F_1(a_k, b_k, c_k) = error_1_k
F_2(a_k, b_k, c_k) = error_2_k
...
F_i(a_k, b_k, c_k) = error_i_k
...
F_m(a_k, b_k, c_k) = error_m_k
where k = j + 1, each parameter (a,b,c) has been perturbed slightly, and the residual error values have (hopefully) decreased.
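In GSL terms, I take it the F_i above are just the entries of the residual vector that my f callback fills in on each iteration. Something like the sketch below, where the a*exp(-b*t) + c model and the fit_data struct are made-up stand-ins for my actual problem:

#include <math.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_vector.h>

/* Made-up observation data handed to the callback via params. */
struct fit_data {
    size_t m;      /* number of observations, i.e. residuals */
    double *t;     /* independent variable */
    double *y;     /* observed values */
};

/* Residual callback: f gets F_i(a,b,c) = model(t_i; a,b,c) - y_i. */
static int model_f(const gsl_vector *x, void *params, gsl_vector *f)
{
    struct fit_data *d = (struct fit_data *) params;
    double a = gsl_vector_get(x, 0);
    double b = gsl_vector_get(x, 1);
    double c = gsl_vector_get(x, 2);
    size_t i;

    for (i = 0; i < d->m; i++) {
        double model = a * exp(-b * d->t[i]) + c;
        gsl_vector_set(f, i, model - d->y[i]);
    }
    return GSL_SUCCESS;
}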
What I'm interested in minimizing is the root mean squared error of the fit, i.e. the square root of the mean of the squared residuals. Is that what the docs are referring to when they use the term "absolute error"?
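Concretely, the number I'm watching is something like this (using gsl_blas_dnrm2 on the solver's current residual vector; m is the number of residuals, and the helper name is just mine):

#include <math.h>
#include <gsl/gsl_blas.h>
#include <gsl/gsl_multifit_nlin.h>

/* Root mean squared error over the m residuals the solver currently holds:
   sqrt( (1/m) * sum_i F_i^2 ). */
static double current_rmse(const gsl_multifit_fdfsolver *solver, size_t m)
{
    double norm = gsl_blas_dnrm2(solver->f);   /* sqrt(sum_i F_i^2) */
    return norm / sqrt((double) m);
}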
|