Formulating the polynomial regression
I'm trying to formulate a regression problem for the model $y = ax^b$. Previously, I formulated $y = ax + b$ as $y = Ac + e$, where
$$c = \begin{bmatrix} a \\ b \end{bmatrix} \quad \text{and} \quad A = \begin{bmatrix} x_1 & 1 \\ x_2 & 1 \\ \vdots & \vdots \\ x_n & 1 \end{bmatrix}.$$
Can I simply change the $A$ matrix to
$$A = \begin{bmatrix} x_1^b & 1 \\ x_2^b & 1 \\ \vdots & \vdots \\ x_n^b & 1 \end{bmatrix}$$
and change $c$ to
$$c = \begin{bmatrix} a \\ 0 \end{bmatrix},$$
keeping the same formula $y = Ac + e$? Am I correct? Any help would be appreciated. Many thanks.

regression linear-regression

asked Mar 14 at 15:55 by Jason
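For concreteness, the linear setup described in the question can be solved in a few lines of numpy. This is a minimal sketch with made-up data, not code from the post:

```python
import numpy as np

# Linear case from the question: y = a x + b written as y = A c + e
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 3.0 * x + 1.0                           # data generated with a = 3, b = 1

A = np.column_stack([x, np.ones_like(x)])   # rows [x_i, 1]
c, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares estimate of c = [a, b]
print(c)                                    # ~ [3., 1.]
```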
– John Douma (Mar 14 at 15:59): To use linear regression, your model has to be linear in the parameters. Your model is not linear in $a$ and $b$.

– Jens Schwaiger (Mar 14 at 16:57): If $a$ and $b$ are to be determined, you might want to consider the model $\ln y = b \ln x + \ln a$, a linear model quite frequently used for power functions.
$begingroup$
I'm trying to formulate a regression problem such that $y=ax^b$. Previously, I formulated the $y=ax+b$ like $y=Ac+e$ where $c=
beginbmatrix
a\
b
endbmatrix$ and $A=
beginbmatrix
x_1 & 1\
x_2 & 1\
. & \
. & \
x_n & 1
endbmatrix$
Can I only change the A matrix such that $A=
beginbmatrix
x_1^b & 1\
x_2^b & 1\
. & \
. & \
x_n^b & 1
endbmatrix$ and change c as $c=
beginbmatrix
a\
0
endbmatrix$ and write the same formula $y=Ac+e$
Am I correct? Any help would be appreciated. Many thanks
regression linear-regression
$endgroup$
I'm trying to formulate a regression problem such that $y=ax^b$. Previously, I formulated the $y=ax+b$ like $y=Ac+e$ where $c=
beginbmatrix
a\
b
endbmatrix$ and $A=
beginbmatrix
x_1 & 1\
x_2 & 1\
. & \
. & \
x_n & 1
endbmatrix$
Can I only change the A matrix such that $A=
beginbmatrix
x_1^b & 1\
x_2^b & 1\
. & \
. & \
x_n^b & 1
endbmatrix$ and change c as $c=
beginbmatrix
a\
0
endbmatrix$ and write the same formula $y=Ac+e$
Am I correct? Any help would be appreciated. Many thanks
regression linear-regression
regression linear-regression
asked Mar 14 at 15:55
JasonJason
12
12
2 Answers
No. You need all the powers in the columns of $A$. Write the problem as minimizing
$$D = \sum_{k=1}^{m} \left(y_k - \sum_{j=0}^{n} a_j x_k^j\right)^2.$$
Set $\partial D / \partial a_j = 0$ for each $j$ and see what you get.

answered Mar 14 at 16:02 by marty cohen
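A minimal numpy sketch of the answer's point (data and names are illustrative): the design matrix gets one column per power $x^j$, and np.linalg.lstsq minimizes $D$ directly, which is numerically safer than forming the normal equations by hand.

```python
import numpy as np

# Fit a degree-n polynomial: A has one column per power x^j, j = 0..n
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 1.0 + 2.0 * x + 0.5 * x**2                      # a_0 = 1, a_1 = 2, a_2 = 0.5

n = 2
A = np.column_stack([x**j for j in range(n + 1)])   # columns x^0, x^1, ..., x^n
a, *_ = np.linalg.lstsq(A, y, rcond=None)           # minimizes D over the a_j
print(a)                                            # ~ [1., 2., 0.5]
```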
– Jason (Mar 14 at 16:11): Sir, when I take the derivative, I get the following equation: $y_k = \sum_{j=0}^{n} a_j x_k^j$. How can I proceed to express them in matrix form? Many thanks.

– Jason (Mar 14 at 16:37): $$\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} = \begin{bmatrix} a & 0 & 0 & \cdots & 0 \\ a & a & 0 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ a & a & a & \cdots & a \end{bmatrix} \begin{bmatrix} x_1^b \\ x_2^b \\ \vdots \\ x_n^b \end{bmatrix} + \begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{bmatrix}$$ Am I correct now, sir?
You have three alternatives to solve this problem.
1. Alternative: As Jens Schwaiger proposed, you can rewrite the equation as $\ln y_i = \ln a + b \ln x_i + \tilde{\varepsilon}_i$. If we introduce the coefficient $\tilde{a} = \ln a$ and the transformed outputs $\tilde{y}_i = \ln y_i$, then it is possible to perform a standard linear regression. The coefficients $\boldsymbol{w} = [\tilde{a}, b]^T = [\ln a, b]^T$ can be estimated by the least squares estimate
$$\hat{\boldsymbol{w}} = \left[\boldsymbol{\Phi}^T \boldsymbol{\Phi}\right]^{-1} \boldsymbol{\Phi}^T \tilde{\boldsymbol{y}}, \qquad (*)$$
in which $\tilde{\boldsymbol{y}} = \left[\ln y_1, \ln y_2, \ldots, \ln y_n\right]^T$ is the transformed output vector and
$$\boldsymbol{\Phi} = \begin{bmatrix} 1 & \ln x_1 \\ 1 & \ln x_2 \\ \vdots & \vdots \\ 1 & \ln x_n \end{bmatrix}.$$
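As a minimal sketch with made-up data, this log–log fit is a few lines in numpy; np.linalg.lstsq solves $(*)$ without explicitly inverting $\boldsymbol{\Phi}^T \boldsymbol{\Phi}$, which is numerically preferable:

```python
import numpy as np

# Fit ln y = ln a + b ln x, then recover a by exponentiating
x = np.linspace(1.0, 10.0, 50)
y = 2.0 * x**1.5                                      # a = 2, b = 1.5

Phi = np.column_stack([np.ones_like(x), np.log(x)])   # design matrix Phi
w, *_ = np.linalg.lstsq(Phi, np.log(y), rcond=None)   # w = [ln a, b]

a_hat, b_hat = np.exp(w[0]), w[1]
print(a_hat, b_hat)                                   # ~ 2.0, 1.5
```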
2. Alternative: We have $y_i = a x_i^b + \varepsilon_i = a \exp(b \ln x_i) + \varepsilon_i$. We can now expand the exponential function in a Taylor series:
$$y_i = a\left[1 + b \ln x_i + \frac{b^2}{2!}\left[\ln x_i\right]^2 + \frac{b^3}{3!}\left[\ln x_i\right]^3 + \cdots\right] + \varepsilon_i.$$
If we truncate the series at the $m^{\text{th}}$ power, then we can approximate $y_i$ as
$$y_i \approx \left[a + ab \ln x_i + \frac{ab^2}{2!}\left[\ln x_i\right]^2 + \frac{ab^3}{3!}\left[\ln x_i\right]^3 + \cdots + \frac{ab^m}{m!}\left[\ln x_i\right]^m\right] + \varepsilon_i.$$
By introducing the coefficients $w_l = ab^l / l!$, we can rewrite the previous equation as
$$y_i \approx \left[w_0 + w_1 \ln x_i + w_2 \left[\ln x_i\right]^2 + w_3 \left[\ln x_i\right]^3 + \cdots + w_m \left[\ln x_i\right]^m\right] + \varepsilon_i.$$
The coefficients are given by equation $(*)$, but now $\tilde{\boldsymbol{y}} = \boldsymbol{y} = \left[y_1, y_2, \ldots, y_n\right]^T$ and
$$\boldsymbol{\Phi} = \begin{bmatrix} 1 & \ln x_1 & [\ln x_1]^2 & \cdots & [\ln x_1]^m \\ 1 & \ln x_2 & [\ln x_2]^2 & \cdots & [\ln x_2]^m \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \ln x_n & [\ln x_n]^2 & \cdots & [\ln x_n]^m \end{bmatrix}.$$
After having obtained the coefficients $\boldsymbol{w} = \left[a,\ ab,\ ab^2/2!,\ \ldots,\ ab^m/m!\right]^T$, you can determine $a = w_0$, $b = w_1 / w_0$, and so forth.
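A sketch of this truncated-series fit under the same assumptions (made-up data; the truncation is only accurate when $|b \ln x_i|$ is small, so the $x$ range is deliberately kept narrow):

```python
import numpy as np

# Regress y on powers of ln x up to degree m, then recover a and b
x = np.linspace(1.0, 2.0, 100)                        # keep |b ln x| small
y = 2.0 * x**1.5                                      # a = 2, b = 1.5

m = 4
L = np.log(x)
Phi = np.column_stack([L**l for l in range(m + 1)])   # 1, ln x, ..., (ln x)^m
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

a_hat = w[0]                                          # w_0 = a
b_hat = w[1] / w[0]                                   # w_1 = a b
print(a_hat, b_hat)                                   # approximately 2.0 and 1.5
```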
3. Alternative: Full nonlinear least squares (proposed by Marty Cohen), which does not have a closed-form solution. Here we use the objective function $E(\boldsymbol{w})$ with $\boldsymbol{w} = [a, b]^T$:
$$E(\boldsymbol{w}) = \sum_{i=1}^{n} \left[y_i - a x_i^b\right]^2.$$
The partial derivatives are given by
$$\frac{\partial E}{\partial a} = \sum_{i=1}^{n} 2\left[y_i - a x_i^b\right]\left(-x_i^b\right),$$
$$\frac{\partial E}{\partial b} = \sum_{i=1}^{n} 2\left[y_i - a x_i^b\right]\left(-a x_i^b \ln x_i\right).$$
After setting these partial derivatives equal to zero, you will have to solve a nonlinear system of equations in $a$ and $b$. You can try to solve it numerically using Newton–Raphson.
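As a sketch of this third alternative: instead of a full Newton–Raphson on the gradient, which would also need second derivatives, the closely related Gauss–Newton iteration below uses only the residual Jacobian built from the two partial derivatives above. The data, function name, and starting point are illustrative assumptions:

```python
import numpy as np

def fit_power(x, y, a=1.0, b=1.0, iters=50, tol=1e-12):
    """Minimize E(a, b) = sum((y_i - a*x_i**b)**2) by Gauss-Newton."""
    for _ in range(iters):
        r = y - a * x**b                                   # residuals
        # Jacobian of the residuals with respect to (a, b)
        J = np.column_stack([-x**b, -a * x**b * np.log(x)])
        step = np.linalg.solve(J.T @ J, J.T @ r)
        a, b = a - step[0], b - step[1]
        if np.linalg.norm(step) < tol:
            break
    return a, b

rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 50)
y = 2.0 * x**1.5 + rng.normal(scale=0.1, size=x.size)      # a = 2, b = 1.5
print(fit_power(x, y))                                     # roughly (2.0, 1.5)
```

In practice, the log–log estimate from the first alternative makes a good starting point for this iteration, since the objective is nonconvex and the iteration may otherwise diverge.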
answered 9 hours ago by MachineLearner
Your Answer
StackExchange.ifUsing("editor", function ()
return StackExchange.using("mathjaxEditing", function ()
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
);
);
, "mathjax-editing");
StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "69"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);
else
createEditor();
);
function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);
);
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3148171%2fformulating-the-polynomial-regression%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
2 Answers
2
active
oldest
votes
2 Answers
2
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
No.
You need all the powers in the columns of A.
Write the problem as minimizing
$D=sum_k=1^m (y_k-sum_j=0^n a_jx_k^j)^2$.
Set $partial D/partial a_j =0$
for each $j$ and see what you get.
$endgroup$
$begingroup$
Sir, when I take the derivative, I get the following equation: $$y_k=sum_j=0^n a_j x_k^j $$ How can I proceed to express them in matrix form? Many thanks
$endgroup$
– Jason
Mar 14 at 16:11
$begingroup$
beginbmatrix y1 \ y2 \ . \ . \ y_n endbmatrix = beginbmatrix a & 0 & 0 & ... & 0 \ a & a & 0 & ... & 0 \ . \ . \ a & a & a & ... & a endbmatrixbeginbmatrix x_1^b \ x_2^b \ . \ . \ x_n^b endbmatrix + beginbmatrix e_1 \ e_2 \ . \ . \ e_n endbmatrix Am I correct now, sir?
$endgroup$
– Jason
Mar 14 at 16:37
add a comment |
$begingroup$
No.
You need all the powers in the columns of A.
Write the problem as minimizing
$D=sum_k=1^m (y_k-sum_j=0^n a_jx_k^j)^2$.
Set $partial D/partial a_j =0$
for each $j$ and see what you get.
$endgroup$
$begingroup$
Sir, when I take the derivative, I get the following equation: $$y_k=sum_j=0^n a_j x_k^j $$ How can I proceed to express them in matrix form? Many thanks
$endgroup$
– Jason
Mar 14 at 16:11
$begingroup$
beginbmatrix y1 \ y2 \ . \ . \ y_n endbmatrix = beginbmatrix a & 0 & 0 & ... & 0 \ a & a & 0 & ... & 0 \ . \ . \ a & a & a & ... & a endbmatrixbeginbmatrix x_1^b \ x_2^b \ . \ . \ x_n^b endbmatrix + beginbmatrix e_1 \ e_2 \ . \ . \ e_n endbmatrix Am I correct now, sir?
$endgroup$
– Jason
Mar 14 at 16:37
add a comment |
$begingroup$
No.
You need all the powers in the columns of A.
Write the problem as minimizing
$D=sum_k=1^m (y_k-sum_j=0^n a_jx_k^j)^2$.
Set $partial D/partial a_j =0$
for each $j$ and see what you get.
$endgroup$
No.
You need all the powers in the columns of A.
Write the problem as minimizing
$D=sum_k=1^m (y_k-sum_j=0^n a_jx_k^j)^2$.
Set $partial D/partial a_j =0$
for each $j$ and see what you get.
answered Mar 14 at 16:02
marty cohenmarty cohen
74.5k549129
74.5k549129
$begingroup$
Sir, when I take the derivative, I get the following equation: $$y_k=sum_j=0^n a_j x_k^j $$ How can I proceed to express them in matrix form? Many thanks
$endgroup$
– Jason
Mar 14 at 16:11
$begingroup$
beginbmatrix y1 \ y2 \ . \ . \ y_n endbmatrix = beginbmatrix a & 0 & 0 & ... & 0 \ a & a & 0 & ... & 0 \ . \ . \ a & a & a & ... & a endbmatrixbeginbmatrix x_1^b \ x_2^b \ . \ . \ x_n^b endbmatrix + beginbmatrix e_1 \ e_2 \ . \ . \ e_n endbmatrix Am I correct now, sir?
$endgroup$
– Jason
Mar 14 at 16:37
add a comment |
$begingroup$
Sir, when I take the derivative, I get the following equation: $$y_k=sum_j=0^n a_j x_k^j $$ How can I proceed to express them in matrix form? Many thanks
$endgroup$
– Jason
Mar 14 at 16:11
$begingroup$
beginbmatrix y1 \ y2 \ . \ . \ y_n endbmatrix = beginbmatrix a & 0 & 0 & ... & 0 \ a & a & 0 & ... & 0 \ . \ . \ a & a & a & ... & a endbmatrixbeginbmatrix x_1^b \ x_2^b \ . \ . \ x_n^b endbmatrix + beginbmatrix e_1 \ e_2 \ . \ . \ e_n endbmatrix Am I correct now, sir?
$endgroup$
– Jason
Mar 14 at 16:37
$begingroup$
Sir, when I take the derivative, I get the following equation: $$y_k=sum_j=0^n a_j x_k^j $$ How can I proceed to express them in matrix form? Many thanks
$endgroup$
– Jason
Mar 14 at 16:11
$begingroup$
Sir, when I take the derivative, I get the following equation: $$y_k=sum_j=0^n a_j x_k^j $$ How can I proceed to express them in matrix form? Many thanks
$endgroup$
– Jason
Mar 14 at 16:11
$begingroup$
beginbmatrix y1 \ y2 \ . \ . \ y_n endbmatrix = beginbmatrix a & 0 & 0 & ... & 0 \ a & a & 0 & ... & 0 \ . \ . \ a & a & a & ... & a endbmatrixbeginbmatrix x_1^b \ x_2^b \ . \ . \ x_n^b endbmatrix + beginbmatrix e_1 \ e_2 \ . \ . \ e_n endbmatrix Am I correct now, sir?
$endgroup$
– Jason
Mar 14 at 16:37
$begingroup$
beginbmatrix y1 \ y2 \ . \ . \ y_n endbmatrix = beginbmatrix a & 0 & 0 & ... & 0 \ a & a & 0 & ... & 0 \ . \ . \ a & a & a & ... & a endbmatrixbeginbmatrix x_1^b \ x_2^b \ . \ . \ x_n^b endbmatrix + beginbmatrix e_1 \ e_2 \ . \ . \ e_n endbmatrix Am I correct now, sir?
$endgroup$
– Jason
Mar 14 at 16:37
add a comment |
$begingroup$
You have three alternatives to solve this problem.
1. Alternative: As Jens Schwaiger proposed you can rewrite the equation as $ln y_i = ln a + b ln x_i+ tildevarepsilon_i$. If we introduce the coefficient $tildea=ln a$ and the transformed outputs $tildey_i=ln y_i$ then it is possible to perform a standard linear regression. the coefficients $boldsymbolw=[tildea,b]^T=[ln a, b]^T$ can be estimated by the least squares estimate
$$hatboldsymbolw=left[boldsymbolPhi^TboldsymbolPhi right]^-1boldsymbolPhi^Ttildeboldsymboly,qquad (*)$$
in which $tildeboldsymboly=left[ln y_1, ln y_2, ldots, ln y_nright]^T$ is the transformed output vector and
$$boldsymbolPhi = beginbmatrix1 & ln x_1\1 & ln x_2 \ vdots & vdots \1 & ln x_n endbmatrix.$$
2. Alternative: We have $y_i = ax_i^b+varepsilon_i=aexp(bln x_i)+varepsilon_i$. We can now expand the exponential function by a taylor series.
$$y_i=aleft[1+bln x_i + b^2/2!left[ln x_iright]^2+b^3/3![ln x_i]^3+cdots right]+varepsilon_i $$
If we truncate the series to the $m^textth$ power then we can approximate $y_i$ as
$$y_i approx left[a+abln x_i + ab^2/2!left[ln x_iright]^2+ab^3/3![ln x_i]^3+cdots +ab^m/m!left[ln x_i right]^mright]+varepsilon_i.$$
By introducing the coefficients $w_l = ab^l/l!$ we can rewrite the previous equation as
$$y_i approx left[w_0+w_1ln x_i + w_2left[ln x_iright]^2+w_3[ln x_i]^3+cdots +w_mleft[ln x_i right]^mright]+varepsilon_i.$$
The coefficients are given by equation $(*)$ but now $tildeboldsymboly=boldsymboly=left[y_1, y_2, ldots,y_n right]^T$ and
$$boldsymbolPhi = beginbmatrix1 & ln x_1 & [ln x_1]^2 & cdots & [ln x_1]^m\1 & ln x_2 & [ln x_2]^2 & cdots & [ln x_2]^m\ vdots & vdots & vdots & ddots & vdots \1 & ln x_n & [ln x_n]^2 & cdots & [ln x_n]^m\endbmatrix.$$
After having obtained the coefficients $boldsymbolw=[a, ab, ab^2, ..., ab^m]$ you can determine $a=w_0$ and $b=w_1/w_0$ and so forth.
3. Alternative: Full nonlinear least squares (proposed by Marty Cohen), which does not have a closed form solution. Here we use the objective function $E(boldsymbolw=[a,b]^T)$ which is
$$E(boldsymbolw)=sum_i=1^n[y_i-ax_i^b]^2.$$
The partial derivatives are given by
$$dfracpartial Epartial a = sum_i=1^n2[y_i-ax_i^b](-x_i^b)$$
$$dfracpartial Epartial b = sum_i=1^n2[y_i-ax_i^b](-ax_i^bln x_i).$$
After setting these partial derivatives equal to zero you will have to solve a nonlinear equation in the coefficients $a$ and $b$. You can try to numerically solve this equation by using Newton-Raphson.
$endgroup$
add a comment |
$begingroup$
You have three alternatives to solve this problem.
1. Alternative: As Jens Schwaiger proposed you can rewrite the equation as $ln y_i = ln a + b ln x_i+ tildevarepsilon_i$. If we introduce the coefficient $tildea=ln a$ and the transformed outputs $tildey_i=ln y_i$ then it is possible to perform a standard linear regression. the coefficients $boldsymbolw=[tildea,b]^T=[ln a, b]^T$ can be estimated by the least squares estimate
$$hatboldsymbolw=left[boldsymbolPhi^TboldsymbolPhi right]^-1boldsymbolPhi^Ttildeboldsymboly,qquad (*)$$
in which $tildeboldsymboly=left[ln y_1, ln y_2, ldots, ln y_nright]^T$ is the transformed output vector and
$$boldsymbolPhi = beginbmatrix1 & ln x_1\1 & ln x_2 \ vdots & vdots \1 & ln x_n endbmatrix.$$
2. Alternative: We have $y_i = ax_i^b+varepsilon_i=aexp(bln x_i)+varepsilon_i$. We can now expand the exponential function by a taylor series.
$$y_i=aleft[1+bln x_i + b^2/2!left[ln x_iright]^2+b^3/3![ln x_i]^3+cdots right]+varepsilon_i $$
If we truncate the series to the $m^textth$ power then we can approximate $y_i$ as
$$y_i approx left[a+abln x_i + ab^2/2!left[ln x_iright]^2+ab^3/3![ln x_i]^3+cdots +ab^m/m!left[ln x_i right]^mright]+varepsilon_i.$$
By introducing the coefficients $w_l = ab^l/l!$ we can rewrite the previous equation as
$$y_i approx left[w_0+w_1ln x_i + w_2left[ln x_iright]^2+w_3[ln x_i]^3+cdots +w_mleft[ln x_i right]^mright]+varepsilon_i.$$
The coefficients are given by equation $(*)$ but now $tildeboldsymboly=boldsymboly=left[y_1, y_2, ldots,y_n right]^T$ and
$$boldsymbolPhi = beginbmatrix1 & ln x_1 & [ln x_1]^2 & cdots & [ln x_1]^m\1 & ln x_2 & [ln x_2]^2 & cdots & [ln x_2]^m\ vdots & vdots & vdots & ddots & vdots \1 & ln x_n & [ln x_n]^2 & cdots & [ln x_n]^m\endbmatrix.$$
After having obtained the coefficients $boldsymbolw=[a, ab, ab^2, ..., ab^m]$ you can determine $a=w_0$ and $b=w_1/w_0$ and so forth.
3. Alternative: Full nonlinear least squares (proposed by Marty Cohen), which does not have a closed form solution. Here we use the objective function $E(boldsymbolw=[a,b]^T)$ which is
$$E(boldsymbolw)=sum_i=1^n[y_i-ax_i^b]^2.$$
The partial derivatives are given by
$$dfracpartial Epartial a = sum_i=1^n2[y_i-ax_i^b](-x_i^b)$$
$$dfracpartial Epartial b = sum_i=1^n2[y_i-ax_i^b](-ax_i^bln x_i).$$
After setting these partial derivatives equal to zero you will have to solve a nonlinear equation in the coefficients $a$ and $b$. You can try to numerically solve this equation by using Newton-Raphson.
$endgroup$
add a comment |
$begingroup$
You have three alternatives to solve this problem.
1. Alternative: As Jens Schwaiger proposed you can rewrite the equation as $ln y_i = ln a + b ln x_i+ tildevarepsilon_i$. If we introduce the coefficient $tildea=ln a$ and the transformed outputs $tildey_i=ln y_i$ then it is possible to perform a standard linear regression. the coefficients $boldsymbolw=[tildea,b]^T=[ln a, b]^T$ can be estimated by the least squares estimate
$$hatboldsymbolw=left[boldsymbolPhi^TboldsymbolPhi right]^-1boldsymbolPhi^Ttildeboldsymboly,qquad (*)$$
in which $tildeboldsymboly=left[ln y_1, ln y_2, ldots, ln y_nright]^T$ is the transformed output vector and
$$boldsymbolPhi = beginbmatrix1 & ln x_1\1 & ln x_2 \ vdots & vdots \1 & ln x_n endbmatrix.$$
2. Alternative: We have $y_i = ax_i^b+varepsilon_i=aexp(bln x_i)+varepsilon_i$. We can now expand the exponential function by a taylor series.
$$y_i=aleft[1+bln x_i + b^2/2!left[ln x_iright]^2+b^3/3![ln x_i]^3+cdots right]+varepsilon_i $$
If we truncate the series to the $m^textth$ power then we can approximate $y_i$ as
$$y_i approx left[a+abln x_i + ab^2/2!left[ln x_iright]^2+ab^3/3![ln x_i]^3+cdots +ab^m/m!left[ln x_i right]^mright]+varepsilon_i.$$
By introducing the coefficients $w_l = ab^l/l!$ we can rewrite the previous equation as
$$y_i approx left[w_0+w_1ln x_i + w_2left[ln x_iright]^2+w_3[ln x_i]^3+cdots +w_mleft[ln x_i right]^mright]+varepsilon_i.$$
The coefficients are given by equation $(*)$ but now $tildeboldsymboly=boldsymboly=left[y_1, y_2, ldots,y_n right]^T$ and
$$boldsymbolPhi = beginbmatrix1 & ln x_1 & [ln x_1]^2 & cdots & [ln x_1]^m\1 & ln x_2 & [ln x_2]^2 & cdots & [ln x_2]^m\ vdots & vdots & vdots & ddots & vdots \1 & ln x_n & [ln x_n]^2 & cdots & [ln x_n]^m\endbmatrix.$$
After having obtained the coefficients $boldsymbolw=[a, ab, ab^2, ..., ab^m]$ you can determine $a=w_0$ and $b=w_1/w_0$ and so forth.
3. Alternative: Full nonlinear least squares (proposed by Marty Cohen), which does not have a closed form solution. Here we use the objective function $E(boldsymbolw=[a,b]^T)$ which is
$$E(boldsymbolw)=sum_i=1^n[y_i-ax_i^b]^2.$$
The partial derivatives are given by
$$dfracpartial Epartial a = sum_i=1^n2[y_i-ax_i^b](-x_i^b)$$
$$dfracpartial Epartial b = sum_i=1^n2[y_i-ax_i^b](-ax_i^bln x_i).$$
After setting these partial derivatives equal to zero you will have to solve a nonlinear equation in the coefficients $a$ and $b$. You can try to numerically solve this equation by using Newton-Raphson.
$endgroup$
You have three alternatives to solve this problem.
1. Alternative: As Jens Schwaiger proposed you can rewrite the equation as $ln y_i = ln a + b ln x_i+ tildevarepsilon_i$. If we introduce the coefficient $tildea=ln a$ and the transformed outputs $tildey_i=ln y_i$ then it is possible to perform a standard linear regression. the coefficients $boldsymbolw=[tildea,b]^T=[ln a, b]^T$ can be estimated by the least squares estimate
$$hatboldsymbolw=left[boldsymbolPhi^TboldsymbolPhi right]^-1boldsymbolPhi^Ttildeboldsymboly,qquad (*)$$
in which $tildeboldsymboly=left[ln y_1, ln y_2, ldots, ln y_nright]^T$ is the transformed output vector and
$$boldsymbolPhi = beginbmatrix1 & ln x_1\1 & ln x_2 \ vdots & vdots \1 & ln x_n endbmatrix.$$
2. Alternative: We have $y_i = ax_i^b+varepsilon_i=aexp(bln x_i)+varepsilon_i$. We can now expand the exponential function by a taylor series.
$$y_i=aleft[1+bln x_i + b^2/2!left[ln x_iright]^2+b^3/3![ln x_i]^3+cdots right]+varepsilon_i $$
If we truncate the series to the $m^textth$ power then we can approximate $y_i$ as
$$y_i approx left[a+abln x_i + ab^2/2!left[ln x_iright]^2+ab^3/3![ln x_i]^3+cdots +ab^m/m!left[ln x_i right]^mright]+varepsilon_i.$$
By introducing the coefficients $w_l = ab^l/l!$ we can rewrite the previous equation as
$$y_i approx left[w_0+w_1ln x_i + w_2left[ln x_iright]^2+w_3[ln x_i]^3+cdots +w_mleft[ln x_i right]^mright]+varepsilon_i.$$
The coefficients are given by equation $(*)$ but now $tildeboldsymboly=boldsymboly=left[y_1, y_2, ldots,y_n right]^T$ and
$$boldsymbolPhi = beginbmatrix1 & ln x_1 & [ln x_1]^2 & cdots & [ln x_1]^m\1 & ln x_2 & [ln x_2]^2 & cdots & [ln x_2]^m\ vdots & vdots & vdots & ddots & vdots \1 & ln x_n & [ln x_n]^2 & cdots & [ln x_n]^m\endbmatrix.$$
After having obtained the coefficients $boldsymbolw=[a, ab, ab^2, ..., ab^m]$ you can determine $a=w_0$ and $b=w_1/w_0$ and so forth.
3. Alternative: Full nonlinear least squares (proposed by Marty Cohen), which does not have a closed form solution. Here we use the objective function $E(boldsymbolw=[a,b]^T)$ which is
$$E(boldsymbolw)=sum_i=1^n[y_i-ax_i^b]^2.$$
The partial derivatives are given by
$$dfracpartial Epartial a = sum_i=1^n2[y_i-ax_i^b](-x_i^b)$$
$$dfracpartial Epartial b = sum_i=1^n2[y_i-ax_i^b](-ax_i^bln x_i).$$
After setting these partial derivatives equal to zero you will have to solve a nonlinear equation in the coefficients $a$ and $b$. You can try to numerically solve this equation by using Newton-Raphson.
answered 9 hours ago
MachineLearnerMachineLearner
96910
96910
add a comment |
add a comment |
Thanks for contributing an answer to Mathematics Stack Exchange!
- Please be sure to answer the question. Provide details and share your research!
But avoid …
- Asking for help, clarification, or responding to other answers.
- Making statements based on opinion; back them up with references or personal experience.
Use MathJax to format equations. MathJax reference.
To learn more, see our tips on writing great answers.
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3148171%2fformulating-the-polynomial-regression%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
Required, but never shown
Required, but never shown
Required, but never shown
Required, but never shown
Required, but never shown
Required, but never shown
Required, but never shown
Required, but never shown
$begingroup$
To use linear regression, your model has to be linear in the parameters. Your model is not linear in $a$ and $b$.
$endgroup$
– John Douma
Mar 14 at 15:59
1
$begingroup$
If $a$ and $b$ are to be determined you probably might want to consider the model $ln y= b ln x+ln a$, a linear model quite frequently used for powerfunctions.
$endgroup$
– Jens Schwaiger
Mar 14 at 16:57