
Every finite state Markov chain has a stationary probability distribution


I am trying to understand the following proof that every finite-state Markov chain has a stationary distribution. The proof is from here.

Let $P$ be the $k \times k$ (stochastic) transition probability matrix for our Markov chain. Now,

> ... $1$ is an eigenvalue for $P$ and therefore also for $P^t$. Writing a $P^t$-invariant $v$ as $v = v^+ - v^-$ with $v^+, v^- \in (\mathbb{R}_+)^k$, we obtain $P^t v^{\pm} = v^{\pm}$ because $P^t$ preserves the positive cone; if $v^+ \neq 0$ take $\nu = \left(\sum_i v^+_i\right)^{-1} \cdot v^+$, otherwise normalize $v^-$.

The main thing I don't understand is

> we obtain $P^t v^{\pm} = v^{\pm}$ because $P^t$ preserves the positive cone

Why is this true?

I also don't understand why $\nu = \left(\sum_i v^+_i\right)^{-1} \cdot v^+$ works if $v^+ \neq 0$.

Is there an easier way to show that every finite-state Markov chain has a stationary probability distribution?
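As a sanity check on the first step, here is a quick numerical experiment (a sketch using NumPy; the stochastic matrix is randomly generated, not taken from the article) confirming that $1$ is an eigenvalue of $P$, and hence of $P^t$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random 5x5 stochastic matrix: nonnegative rows normalized to sum to 1.
A = rng.random((5, 5))
P = A / A.sum(axis=1, keepdims=True)

# The rows of P sum to 1, so the all-ones vector is a right eigenvector
# with eigenvalue 1; P and P^t share the same characteristic polynomial,
# so 1 is an eigenvalue of P^t as well.
ones = np.ones(5)
assert np.allclose(P @ ones, ones)
assert np.isclose(np.linalg.eigvals(P.T), 1.0).any()
```

This works because each row of $P$ sums to $1$, and a matrix and its transpose have the same eigenvalues.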










  • For your second question, what must a state distribution vector look like, and what can you say about the elements of the scaled version of $v^+$? – amd, Dec 2 '18 at 0:23











  • They add up to 1; that makes sense. I get that part now. – jackson5, Dec 2 '18 at 0:25















probability proof-verification markov-chains stochastic-matrices






asked Dec 1 '18 at 22:23









jackson5
2 Answers






The wording in this article is a little ambiguous. I thought of two interpretations, the first of which is incorrect. The second is correct, but it doesn't explain the bit about "preservation of the positive cone".

It looks like it may be a case of mistakenly using that a mapping fixes a subset when it only preserves a subset. (I.e. $f|_X = \mathrm{id}_X$ vs. $\mathrm{im}(f) \subset X$.)

Interpretation 1. Maybe the statement below is being claimed:

> (*) Let $C$ be the positive cone. If $P^t v = v$, then for all $v^+, v^- \in C$ such that $v = v^+ - v^-$ we have $P^t v^+ = v^+$.

This is false unless $P$ is the identity matrix. Let $x \in C$; according to (*), $\bar v^{\pm} = v^{\pm} + x$ must satisfy $P^t \bar v^+ = \bar v^+$ as well. But linearity then says $P^t x = x$ too. So $P^t$ fixes the positive cone. This is only true if $P$ is the identity matrix, because the span of the positive cone is the whole space.

Interpretation 2. Maybe instead it means to set $v^+$ to be the vector of positive entries of $v$, with $0$s in place of the negatives. E.g. if $v = (1,0,2,-7)$ then $v^+ = (1,0,2,0)$ and $v^- = (0,0,0,7)$. Then the claim would be:

> If $P^t v = v$, then $P^t v^+ = v^+$, where $v^+$ is the vector of positive entries described above.

This is true, but I don't think the cited article offers any explanation as to why, and I don't know how to prove it without using Frobenius-Perron, which is maybe a harder theorem than the one we are trying to prove.

It is a trivial consequence of Frobenius-Perron in the case of an irreducible stochastic matrix, because one has either $v = v^+$ or $v = -v^-$. This is because there is a stationary state (by F-P) and the eigenspace for $\lambda = 1$ is simple (also F-P). So any invariant vector is a scalar multiple of the stationary state and therefore has this property.

For reducible matrices the eigenspace for $\lambda = 1$ is no longer simple, so we can do things like $v = v_1 - v_2$, where $v_i$ is the stationary state for the $i$th block. Then $v^+ = v_1$ and $v^- = v_2$. Following the suggestion in the article, one would then find a stationary distribution by normalizing just the positive part $v^+ = v_1$.
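To make Interpretation 2 and the reducible case concrete, here is a small numerical sketch (assuming NumPy; the block-diagonal matrix is a made-up example, not from the article). In the row-vector convention $vP = v$ (equivalent to $P^t v^T = v^T$), it checks that for $v = v_1 - v_2$ the entrywise positive part $v^+ = v_1$ is itself invariant, and that normalizing it gives a stationary distribution:

```python
import numpy as np

# A reducible stochastic matrix: two independent blocks on states {0,1} and {2,3}.
P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.2, 0.8, 0.0, 0.0],
    [0.0, 0.0, 0.9, 0.1],
    [0.0, 0.0, 0.3, 0.7],
])

# Stationary distribution of each block, padded with zeros to length 4.
v1 = np.array([2/7, 5/7, 0.0, 0.0])   # stationary for the first block
v2 = np.array([0.0, 0.0, 3/4, 1/4])   # stationary for the second block

v = v1 - v2                  # invariant, but neither entrywise >= 0 nor <= 0
assert np.allclose(v @ P, v)

# Entrywise positive/negative parts in the sense of Interpretation 2.
v_plus = np.maximum(v, 0.0)
v_minus = np.maximum(-v, 0.0)
assert np.allclose(v_plus @ P, v_plus)    # the positive part is itself invariant
assert np.allclose(v_minus @ P, v_minus)  # and so is the negative part

pi = v_plus / v_plus.sum()   # normalizing v^+ recovers a stationary distribution
assert np.allclose(pi, v1)
```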







          edited Dec 7 '18 at 14:48

























          answered Dec 2 '18 at 13:37









Ben
To your last question, as to whether there is a simpler way of proving the existence of a stationary distribution for finite-state Markov chains: that depends on what tools you have at your disposal. Here is a nice and short consequence of a fixed point theorem.

Let $\mathbf{P}$ be the transition matrix of a Markov chain over $d$ states. The simplex $\Delta_d$ is a convex and compact subset of $\mathbb{R}^d$, which is a Euclidean vector space with the usual inner product. Since $\mathbf{P}$ is stochastic (nonnegative entries, rows summing to $1$), it maps the simplex into itself, so we can view the Markov kernel as the linear operator
\begin{equation}
\begin{split}
\mathbf{P} : \Delta_d &\to \Delta_d \\
\mu &\mapsto \mu \mathbf{P}.
\end{split}
\end{equation}

As $\|\mathbf{P}\|_2 \leq \sqrt{d} < \infty$, the operator is bounded and therefore continuous. As a consequence, we can apply Brouwer's fixed point theorem to conclude that there exists $\pi \in \Delta_d$ with $\pi \mathbf{P} = \pi$.
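Brouwer's theorem only asserts existence; for a concrete chain one can often locate the fixed point numerically by iterating $\mu \mapsto \mu \mathbf{P}$ (a sketch with a hypothetical $3$-state matrix; plain iteration converges here because all entries are positive, though that is not guaranteed in general):

```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1, all entries positive).
P = np.array([
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
    [0.5, 0.2, 0.3],
])

# Start anywhere on the simplex and repeatedly apply mu -> mu P.
mu = np.full(3, 1/3)
for _ in range(10_000):
    mu = mu @ P

# Each iterate stays on the simplex, and here the sequence converges
# to a fixed point, i.e. a stationary distribution pi with pi P = pi.
assert np.all(mu >= 0) and np.isclose(mu.sum(), 1.0)
assert np.allclose(mu @ P, mu)
```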







                  answered Mar 16 at 8:46









ippiki-ookami
                      Thanks for contributing an answer to Mathematics Stack Exchange!

