How is the replicator dynamic a gradient flow of the Fisher information metric?


I am trying to understand how the replicator dynamic can be derived as a gradient flow of the Fisher information metric (aka Shahshahani metric). I have a question about understanding a particular proof, but if you have a different straightforward derivation/proof, I would love to see it.



I am trying to understand the following proof (I'll ask my particular questions at the end):



Consider the replicator dynamic $\dot p_i = p_i\,(f_i(p) - \bar f)$, where $f_i(p) = \frac{\partial V}{\partial p_i}$ for $i \in \{1, \dots, m\}$, and $\bar f = \sum_i p_i f_i(p)$.



Lemma:
$$ \dot p_i = (\nabla_g V)_i = \sum_j g^{ij}\,\frac{\partial V}{\partial p_j} $$
where $g^{ij}$ is the inverse of the Fisher information metric.



Proof:
For every $\xi = (\xi_1, \dots, \xi_k)$ with $\sum_j \xi_j = 0$, that is, for every tangent vector to the simplex in which the dynamics of $p$ take place, we have:
\begin{align}
\sum_{i,j} g_{ij}\,\dot p_i\,\xi_j &= \sum_j \frac{1}{p_j}\,\dot p_j\,\xi_j \\
&= \sum_j (f_j(p) - \bar f)\,\xi_j \\
&= \sum_j f_j(p)\,\xi_j \\
&= \sum_j \frac{\partial V}{\partial p_j}\,\xi_j,
\end{align}
which implies the lemma.
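For reference (my addition; the source I am quoting does not state this explicitly), I believe the metric implicitly used in the first equality is the Shahshahani form of the Fisher metric on the interior of the simplex:

$$ g_{ij}(p) = \frac{\delta_{ij}}{p_i}, \qquad g^{ij}(p) = p_i\,\delta_{ij}, $$

which is why the pairing $\sum_{i,j} g_{ij}\,\dot p_i\,\xi_j$ collapses to the single sum $\sum_j \frac{1}{p_j}\,\dot p_j\,\xi_j$.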



My questions are:



  • I am not sure why the proof starts the way it does. My understanding is that it begins with the inner product between the direction-of-change vector $\dot p$ and an arbitrary tangent vector $\xi$ at $p$. Why start with that inner product?

  • I don't know how the last step of the proof implies the Lemma.

Any clarification, exposition, intuition, or alternative proof that can help me fully understand how the replicator dynamic is derived as a gradient flow of the Fisher metric would be extremely appreciated!
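To convince myself the identity at least holds numerically, I wrote the following sketch (my own check, not part of the quoted proof). It uses a toy quadratic potential $V(p) = \sum_i a_i p_i^2 / 2$ (an arbitrary choice of mine) and verifies that the replicator field pairs with random tangent vectors exactly as $dV$ does under the Shahshahani metric $g_{ij} = \delta_{ij}/p_i$:

```python
import random
import math

# Sanity check: for the replicator field p_dot_i = p_i (f_i - fbar), verify
# <p_dot, xi>_g = dV(xi) for tangent vectors xi (sum(xi) = 0), where g is
# the Shahshahani metric g_ij = delta_ij / p_i.
# Toy potential V(p) = sum_i a_i p_i^2 / 2, so f_i = dV/dp_i = a_i p_i.

random.seed(0)

p = [0.2, 0.3, 0.5]   # interior point of the probability simplex
a = [1.0, -2.0, 0.7]  # coefficients of the toy potential (arbitrary)
f = [ai * pi for ai, pi in zip(a, p)]                    # f_i(p) = dV/dp_i
fbar = sum(pi * fi for pi, fi in zip(p, f))              # mean fitness
p_dot = [pi * (fi - fbar) for pi, fi in zip(p, f)]       # replicator dynamic

for _ in range(5):
    xi = [random.gauss(0.0, 1.0) for _ in range(3)]
    m = sum(xi) / 3
    xi = [x - m for x in xi]  # project onto the tangent space: sum(xi) = 0
    lhs = sum(pd * x / pi for pd, x, pi in zip(p_dot, xi, p))  # <p_dot, xi>_g
    rhs = sum(fi * x for fi, x in zip(f, xi))                  # dV(xi)
    assert math.isclose(lhs, rhs, rel_tol=1e-9, abs_tol=1e-12)

print("identity holds for all sampled tangent vectors")
```

The check passes, which matches the chain of equalities in the proof: the $\bar f$ term drops out precisely because $\sum_j \xi_j = 0$.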










      calculus differential-geometry gradient-flows fisher-information evolutionary-game-theory






edited Mar 12 at 0:08 by DrNesbit

asked Mar 11 at 9:48 by DrNesbit (617)



















