Formulating the polynomial regression


I'm trying to formulate a regression problem of the form $y = ax^b$. Previously, I formulated the linear model $y = ax + b$ as $y = Ac + e$, where
$$c = \begin{bmatrix} a \\ b \end{bmatrix}
\qquad \text{and} \qquad
A = \begin{bmatrix} x_1 & 1 \\ x_2 & 1 \\ \vdots & \vdots \\ x_n & 1 \end{bmatrix}.$$

Can I simply change the $A$ matrix to
$$A = \begin{bmatrix} x_1^b & 1 \\ x_2^b & 1 \\ \vdots & \vdots \\ x_n^b & 1 \end{bmatrix}$$
and change $c$ to
$$c = \begin{bmatrix} a \\ 0 \end{bmatrix},$$
and then write the same formula $y = Ac + e$?

Am I correct? Any help would be appreciated. Many thanks.

regression linear-regression

asked Mar 14 at 15:55 by Jason










  • To use linear regression, your model has to be linear in the parameters. Your model is not linear in $a$ and $b$. – John Douma, Mar 14 at 15:59






  • If $a$ and $b$ are to be determined, you might want to consider the model $\ln y = b \ln x + \ln a$, a linear model quite frequently used for power functions. – Jens Schwaiger, Mar 14 at 16:57















2 Answers


















No. You need all the powers in the columns of $A$.

Write the problem as minimizing
$$D = \sum_{k=1}^m \left(y_k - \sum_{j=0}^n a_j x_k^j\right)^2.$$

Set $\partial D / \partial a_j = 0$ for each $j$ and see what you get.

answered Mar 14 at 16:02 by marty cohen
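For concreteness, here is a minimal Python/NumPy sketch of this recipe (my addition, not part of the original answer; the data and the degree are made up for illustration). Setting $\partial D/\partial a_j = 0$ yields the normal equations $A^T A\, a = A^T y$ with a Vandermonde design matrix, which `lstsq` solves stably:

    import numpy as np

    # Toy data, purely illustrative (not from the original post).
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 4.3, 8.9, 16.2, 25.8])

    n = 2  # polynomial degree (an assumption for the demo)

    # Design matrix with ALL powers x^0, x^1, ..., x^n in its columns,
    # as the answer prescribes.
    A = np.vander(x, n + 1, increasing=True)

    # lstsq solves the same least-squares problem as the normal
    # equations A^T A a = A^T y, but more stably.
    a_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
    print(a_hat)  # [a_0, a_1, ..., a_n]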












  • Sir, when I take the derivative, I get the following equation: $$y_k = \sum_{j=0}^n a_j x_k^j.$$ How can I proceed to express them in matrix form? Many thanks. – Jason, Mar 14 at 16:11











  • $$\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} = \begin{bmatrix} a & 0 & 0 & \cdots & 0 \\ a & a & 0 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ a & a & a & \cdots & a \end{bmatrix} \begin{bmatrix} x_1^b \\ x_2^b \\ \vdots \\ x_n^b \end{bmatrix} + \begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{bmatrix}$$ Am I correct now, sir? – Jason, Mar 14 at 16:37


















You have three alternatives for solving this problem.

Alternative 1: As Jens Schwaiger proposed, you can rewrite the equation as $\ln y_i = \ln a + b \ln x_i + \tilde{\varepsilon}_i$. If we introduce the coefficient $\tilde{a} = \ln a$ and the transformed outputs $\tilde{y}_i = \ln y_i$, then it is possible to perform a standard linear regression. The coefficients $\boldsymbol{w} = [\tilde{a}, b]^T = [\ln a, b]^T$ can be estimated by the least squares estimate

$$\hat{\boldsymbol{w}} = \left[\boldsymbol{\Phi}^T \boldsymbol{\Phi}\right]^{-1} \boldsymbol{\Phi}^T \tilde{\boldsymbol{y}}, \qquad (*)$$

in which $\tilde{\boldsymbol{y}} = \left[\ln y_1, \ln y_2, \ldots, \ln y_n\right]^T$ is the transformed output vector and

$$\boldsymbol{\Phi} = \begin{bmatrix} 1 & \ln x_1 \\ 1 & \ln x_2 \\ \vdots & \vdots \\ 1 & \ln x_n \end{bmatrix}.$$
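As an illustration of Alternative 1 (my addition, not from the original answer; the true values $a = 2.5$, $b = 1.7$ and the multiplicative noise model are assumptions chosen so the log transform is exact up to additive noise):

    import numpy as np

    # Illustrative data drawn from y = a * x^b with multiplicative noise.
    rng = np.random.default_rng(0)
    x = np.linspace(1.0, 10.0, 50)
    y = 2.5 * x**1.7 * np.exp(rng.normal(0.0, 0.05, x.size))

    # Phi = [1, ln x_i]; solve the least-squares problem (*) for w = [ln a, b].
    Phi = np.column_stack([np.ones_like(x), np.log(x)])
    w, *_ = np.linalg.lstsq(Phi, np.log(y), rcond=None)

    a_hat, b_hat = np.exp(w[0]), w[1]
    print(a_hat, b_hat)  # close to 2.5 and 1.7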



Alternative 2: We have $y_i = ax_i^b + \varepsilon_i = a\exp(b\ln x_i) + \varepsilon_i$. We can now expand the exponential function in a Taylor series:

$$y_i = a\left[1 + b\ln x_i + \frac{b^2}{2!}\left[\ln x_i\right]^2 + \frac{b^3}{3!}\left[\ln x_i\right]^3 + \cdots\right] + \varepsilon_i.$$

If we truncate the series at the $m^\text{th}$ power, then we can approximate $y_i$ as

$$y_i \approx \left[a + ab\ln x_i + \frac{ab^2}{2!}\left[\ln x_i\right]^2 + \frac{ab^3}{3!}\left[\ln x_i\right]^3 + \cdots + \frac{ab^m}{m!}\left[\ln x_i\right]^m\right] + \varepsilon_i.$$

By introducing the coefficients $w_l = ab^l/l!$, we can rewrite the previous equation as

$$y_i \approx \left[w_0 + w_1\ln x_i + w_2\left[\ln x_i\right]^2 + w_3\left[\ln x_i\right]^3 + \cdots + w_m\left[\ln x_i\right]^m\right] + \varepsilon_i.$$

The coefficients are given by equation $(*)$, but now $\tilde{\boldsymbol{y}} = \boldsymbol{y} = \left[y_1, y_2, \ldots, y_n\right]^T$ and

$$\boldsymbol{\Phi} = \begin{bmatrix} 1 & \ln x_1 & [\ln x_1]^2 & \cdots & [\ln x_1]^m \\ 1 & \ln x_2 & [\ln x_2]^2 & \cdots & [\ln x_2]^m \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \ln x_n & [\ln x_n]^2 & \cdots & [\ln x_n]^m \end{bmatrix}.$$

After having obtained the coefficients $\boldsymbol{w} = [w_0, w_1, \ldots, w_m]^T = [a, ab, ab^2/2!, \ldots, ab^m/m!]^T$, you can determine $a = w_0$, $b = w_1/w_0$, and so forth.
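A sketch of Alternative 2 under assumed data (my addition; the truncation order $m$ is a tuning choice, and the recovered $a$ and $b$ are only as good as the truncated expansion, which requires $b\ln x_i$ to stay moderate; coefficient recovery can also be noisy for large $m$):

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(1.0, 4.0, 50)
    y = 2.5 * x**1.7 + rng.normal(0.0, 0.1, x.size)  # additive noise this time

    m = 8  # truncation order of the Taylor series (a tuning choice)

    # Phi has columns 1, ln x, (ln x)^2, ..., (ln x)^m.
    Phi = np.vander(np.log(x), m + 1, increasing=True)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

    # Recover the original parameters from w_0 = a and w_1 = a*b.
    a_hat = w[0]
    b_hat = w[1] / w[0]
    print(a_hat, b_hat)  # roughly 2.5 and 1.7 if the truncation is adequate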



Alternative 3: Full nonlinear least squares (proposed by Marty Cohen), which does not have a closed-form solution. Here we use the objective function $E(\boldsymbol{w})$, with $\boldsymbol{w} = [a, b]^T$, given by

$$E(\boldsymbol{w}) = \sum_{i=1}^n \left[y_i - ax_i^b\right]^2.$$

The partial derivatives are

$$\frac{\partial E}{\partial a} = \sum_{i=1}^n 2\left[y_i - ax_i^b\right]\left(-x_i^b\right),$$
$$\frac{\partial E}{\partial b} = \sum_{i=1}^n 2\left[y_i - ax_i^b\right]\left(-ax_i^b\ln x_i\right).$$

After setting these partial derivatives equal to zero, you will have to solve a nonlinear system of equations in the coefficients $a$ and $b$. You can try to solve it numerically, for example with Newton-Raphson.
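A numerical sketch of Alternative 3 (my addition). Instead of hand-coding Newton-Raphson, this uses SciPy's generic `least_squares` solver, whose optimality condition is exactly the gradient equations above set to zero; the starting point comes from the log-linear fit of Alternative 1, and all data are illustrative:

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    x = np.linspace(1.0, 10.0, 50)
    y = 2.5 * x**1.7 + rng.normal(0.0, 0.1, x.size)

    def residuals(w):
        a, b = w
        return y - a * x**b  # r_i = y_i - a * x_i^b

    # Initialize from the log-linear estimate (assumes y > 0), then refine.
    Phi = np.column_stack([np.ones_like(x), np.log(x)])
    w0, *_ = np.linalg.lstsq(Phi, np.log(y), rcond=None)
    start = np.array([np.exp(w0[0]), w0[1]])

    # least_squares minimizes sum(r_i^2) over (a, b).
    result = least_squares(residuals, start)
    print(result.x)  # refined [a, b]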






answered 9 hours ago by MachineLearner