
How is Welford's Algorithm derived?




I am having some trouble understanding how part of this formula is derived.



Taken from:
http://jonisalonen.com/2013/deriving-welfords-method-for-computing-variance/



$$(x_N-\bar{x}_N)^2+\sum_{i=1}^{N-1}(x_i-\bar{x}_N+x_i-\bar{x}_{N-1})(\bar{x}_{N-1}-\bar{x}_N)$$
$$=(x_N-\bar{x}_N)^2+(\bar{x}_N-x_N)(\bar{x}_{N-1}-\bar{x}_N)$$



I can't seem to derive the right side from the left.



Any help explaining this would be greatly appreciated! Thanks.










      variance sampling-theory














      edited Aug 9 '18 at 6:08









      joriki











      asked May 27 '18 at 14:49









Isaac Ng





















          2 Answers






























I think the key is understanding:

[equation image from the original answer, not preserved]

and also:

[equation image from the original answer, not preserved]

The above reduces to:

[equation image from the original answer, not preserved]



          This is a good post: https://alessior.wordpress.com/2017/10/09/onlinerecursive-variance-calculation-welfords-method/
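Since those images did not survive, the equality from the question can at least be checked numerically. A small sketch (the helper names `lhs` and `rhs` are illustrative, not from the post):

```python
import random

def lhs(xs):
    """(x_N - xbar_N)^2 + sum_{i=1}^{N-1}(x_i - xbar_N + x_i - xbar_{N-1}) * (xbar_{N-1} - xbar_N)."""
    n = len(xs)
    xbar_n = sum(xs) / n
    xbar_prev = sum(xs[:-1]) / (n - 1)  # mean of the first N-1 points
    s = sum((x - xbar_n) + (x - xbar_prev) for x in xs[:-1])
    return (xs[-1] - xbar_n) ** 2 + s * (xbar_prev - xbar_n)

def rhs(xs):
    """(x_N - xbar_N)^2 + (xbar_N - x_N) * (xbar_{N-1} - xbar_N)."""
    n = len(xs)
    xbar_n = sum(xs) / n
    xbar_prev = sum(xs[:-1]) / (n - 1)
    return (xs[-1] - xbar_n) ** 2 + (xbar_n - xs[-1]) * (xbar_prev - xbar_n)

# The two sides agree up to floating-point rounding for any sample.
random.seed(0)
xs = [random.random() for _ in range(10)]
assert abs(lhs(xs) - rhs(xs)) < 1e-12
```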



























The key is the following algebraic identity:
$$\sum_{i=1}^{N} (x_i - \bar{x}_N) = 0,$$
which says that the sum of deviations from the mean is zero. It follows directly from the definition of the mean:

$$\bar{x}_N = \frac{1}{N} \sum_{i=1}^{N} x_i.$$

This can be rewritten as:
$$N \bar{x}_N = \sum_{i=1}^{N} x_i.$$

Since the mean $\bar{x}_N$ is a constant, multiplying it by $N$ is the same as adding it $N$ times:
$$\sum_{i=1}^{N} \bar{x}_N = \sum_{i=1}^{N} x_i,$$

which reduces to:
$$\sum_{i=1}^{N} (x_i - \bar{x}_N) = 0.$$

Now look at the summation on the LHS:
$$\sum_{i=1}^{N-1}(x_i-\bar{x}_N + x_i-\bar{x}_{N-1})
= \sum_{i=1}^{N-1}\bigl((x_i-\bar{x}_N) + (x_i-\bar{x}_{N-1})\bigr)
= \sum_{i=1}^{N-1}(x_i-\bar{x}_N) + \sum_{i=1}^{N-1}(x_i-\bar{x}_{N-1}).$$

Applying the identity above (with $N-1$ in place of $N$), the second term vanishes:
$$\sum_{i=1}^{N-1}(x_i-\bar{x}_N) + \sum_{i=1}^{N-1}(x_i-\bar{x}_{N-1})
= \sum_{i=1}^{N-1}(x_i-\bar{x}_N) + 0.$$

The remaining term needs a little more manipulation: we want the index $i$ to run from $1$ to $N$, so add and subtract the $i = N$ term.

\begin{align}
\sum_{i=1}^{N-1}(x_i-\bar{x}_N)
&= \left(\sum_{i=1}^{N-1}(x_i-\bar{x}_N)\right) + (x_N-\bar{x}_N) - (x_N-\bar{x}_N) \\
&= \left(\sum_{i=1}^{N-1}(x_i-\bar{x}_N) + (x_N-\bar{x}_N)\right) - (x_N-\bar{x}_N) \\
&= \left(\sum_{i=1}^{N}(x_i-\bar{x}_N)\right) - (x_N-\bar{x}_N)
\end{align}

The full sum over $N$ terms vanishes by the identity, leaving:
$$\sum_{i=1}^{N-1}(x_i-\bar{x}_N) = (\bar{x}_N - x_N).$$

We have now derived
$$\sum_{i=1}^{N-1}(x_i-\bar{x}_N + x_i-\bar{x}_{N-1}) = (\bar{x}_N - x_N),$$

and plugging this into the expression from the question:

\begin{align}
(x_N-\bar{x}_N)^2 + \sum_{i=1}^{N-1}(x_i-\bar{x}_N+x_i-\bar{x}_{N-1})(\bar{x}_{N-1}-\bar{x}_N)
&= (x_N-\bar{x}_N)^2 + (\bar{x}_{N-1}-\bar{x}_N) \sum_{i=1}^{N-1}(x_i-\bar{x}_N+x_i-\bar{x}_{N-1}) \\
&= (x_N-\bar{x}_N)^2 + (\bar{x}_{N-1}-\bar{x}_N)(\bar{x}_N - x_N),
\end{align}

which completes the derivation.

One can simplify this expression further:

\begin{align}
(x_N-\bar{x}_N)^2 + (\bar{x}_{N-1}-\bar{x}_N)(\bar{x}_N - x_N)
&= (x_N-\bar{x}_N)\left[(x_N-\bar{x}_N) - (\bar{x}_{N-1}-\bar{x}_N)\right] \\
&= (x_N-\bar{x}_N)(x_N-\bar{x}_{N-1})
\end{align}
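The final product form $(x_N-\bar{x}_N)(x_N-\bar{x}_{N-1})$ is exactly the increment added to the running sum of squared deviations in Welford's method. A minimal Python sketch of the resulting algorithm (not part of the original post; names are illustrative):

```python
def welford(xs):
    """One-pass mean and sample variance via Welford's algorithm.

    The running sum of squared deviations m2 is updated with the
    increment (x_N - xbar_{N-1}) * (x_N - xbar_N) derived above.
    """
    mean = 0.0
    m2 = 0.0  # running sum of squared deviations from the current mean
    n = 0
    for x in xs:
        n += 1
        delta = x - mean          # x_N - xbar_{N-1}
        mean += delta / n         # xbar_N = xbar_{N-1} + delta / N
        m2 += delta * (x - mean)  # (x_N - xbar_{N-1}) * (x_N - xbar_N)
    variance = m2 / (n - 1) if n > 1 else 0.0
    return mean, variance
```

For $n > 1$ this matches the two-pass sample variance $\frac{1}{n-1}\sum_i (x_i - \bar{x})^2$, but in a single pass and without the catastrophic cancellation of the naive $\sum_i x_i^2 - n\bar{x}^2$ approach.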
















                  answered Jun 19 '18 at 0:23









JerryH






















                          edited yesterday

























                          answered yesterday









Anand



























