
Every finite state Markov chain has a stationary probability distribution


I am trying to understand the following proof that every finite-state Markov chain has a stationary distribution. The proof is from here.

Let $P$ be the $k \times k$ (stochastic) transition probability matrix for our Markov chain. Now,

... $1$ is an eigenvalue for $P$ and therefore also for $P^t$. Writing a $P^t$-invariant $v$ as $v = v^+ - v^-$ with $v^+, v^- \in (\mathbb{R}_+)^k$, we obtain $P^t v^\pm = v^\pm$ because $P^t$ preserves the positive cone; if $v^+ \neq 0$ take $\nu = \left(\sum_i v^+_i\right)^{-1} \cdot v^+$, otherwise normalize $v^-$.

The main thing I don't understand is

we obtain $P^t v^\pm = v^\pm$ because $P^t$ preserves the positive cone

Why is this true?

I also don't understand why $\nu = \left(\sum_i v^+_i\right)^{-1} \cdot v^+$ works if $v^+ \neq 0$.

Is there an easier way to show that every finite-state Markov chain has a stationary probability distribution?
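As a numeric sanity check of the statement being proved, the sketch below (assuming numpy; the transition matrix is a made-up example) finds a left eigenvector of $P$ for eigenvalue $1$ and normalizes it by the sum of its entries, exactly the $\left(\sum_i v^+_i\right)^{-1} \cdot v^+$ step in the quoted proof:

```python
import numpy as np

# A small 3-state row-stochastic transition matrix (illustrative example).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

# A stationary distribution pi satisfies pi P = pi, i.e. pi is a left
# eigenvector of P (equivalently, an eigenvector of P^t) for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
v = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])

# Normalize so the entries sum to 1 -- the "(sum v_i^+)^{-1} v^+" step.
pi = v / v.sum()

print(pi)                       # a probability vector
print(np.allclose(pi @ P, pi))  # pi is stationary
```

Normalizing by the sum works here because the chosen eigenvector has entries of one sign, so the result is a genuine probability vector; the question is precisely why such a nonnegative eigenvector must exist in general.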










  • For your second question, what must a state distribution vector look like, and what can you say about the elements of the scaled version of $v^+$? – amd, Dec 2 '18 at 0:23











  • They add up to 1; that makes sense. I get that part now. – jackson5, Dec 2 '18 at 0:25















Tags: probability, proof-verification, markov-chains, stochastic-matrices






asked Dec 1 '18 at 22:23 – jackson5











2 Answers
The wording in this article is a little ambiguous. I thought of two interpretations, the first of which is incorrect. The second is correct, but it doesn't explain the bit about "preservation of the positive cone".

It looks like it may be a case of mistakenly using that a mapping fixes a subset when it only preserves a subset (i.e. $f|_X = \text{id}_X$ vs. $\text{im}(f) \subset X$).

Interpretation 1. Maybe the statement below is being claimed:

(*) Let $C$ be the positive cone. If $P^t v = v$, then for all $v^+, v^- \in C$ such that $v = v^+ - v^-$, we have $P^t v^+ = v^+$.

This is false unless $P$ is the identity matrix. Let $x \in C$; then according to (*), $\bar v^\pm = v^\pm + x$ must satisfy $P^t \bar v^+ = \bar v^+$ as well. But then linearity forces $P^t x = x$ too, so $P^t$ fixes the positive cone pointwise. That is only possible if $P$ is the identity matrix, because the span of the positive cone is the whole space.

Interpretation 2. Maybe instead it means to set $v^+$ to be the vector of positive entries of $v$, with $0$s in place of the negatives. E.g. if $v = (1,0,2,-7)$ then $v^+ = (1,0,2,0)$ and $v^- = (0,0,0,7)$. Then the claim would be:

If $P^t v = v$, then $P^t v^+ = v^+$, where $v^+$ is the vector of positive entries described above.

This is true, but I don't think the cited article offers any explanation as to why, and I don't know how to prove it without using Frobenius-Perron, which is maybe a harder theorem than the one we are trying to prove.

It is a trivial consequence of Frobenius-Perron in the case of an irreducible stochastic matrix, because one then has either $v = v^+$ or $v = -v^-$. This is because there is a stationary state (by F-P) and the eigenspace for $\lambda = 1$ is simple (also F-P), so any invariant vector is a scalar multiple of it and therefore has this property.

For reducible matrices the eigenspace for $\lambda = 1$ is no longer simple, so we can have things like $v = v_1 - v_2$, where $v_i$ is the stationary state for the $i$th block. Then $v^+ = v_1$ and $v^- = v_2$. Following the suggestion in the article, one would then find a stationary distribution by normalizing just the positive part $v^+ = v_1$.

edited Dec 7 '18 at 14:48; answered Dec 2 '18 at 13:37 – Ben
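The reducible case in this answer can be checked numerically. The sketch below (assuming numpy; the block matrices are hypothetical examples) builds a block-diagonal chain, forms $v = v_1 - v_2$ from the per-block stationary states, and verifies that the positive and negative parts are separately $P^t$-invariant:

```python
import numpy as np

# A reducible 4-state chain: two independent 2-state blocks
# (hypothetical numbers, chosen only for illustration).
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.5, 0.5],
              [0.3, 0.7]])
P = np.zeros((4, 4))
P[:2, :2] = A
P[2:, 2:] = B

def stationary(M):
    # Left eigenvector of M for eigenvalue 1, normalized to sum to 1.
    w, V = np.linalg.eig(M.T)
    v = np.real(V[:, np.argmin(np.abs(w - 1))])
    return v / v.sum()

# v = v1 - v2 with v1, v2 the per-block stationary states (zero-padded).
v1 = np.concatenate([stationary(A), [0.0, 0.0]])
v2 = np.concatenate([[0.0, 0.0], stationary(B)])
v = v1 - v2

# v is P^t-invariant, and so are its positive/negative parts separately.
v_plus, v_minus = np.maximum(v, 0), np.maximum(-v, 0)
print(np.allclose(P.T @ v, v))
print(np.allclose(P.T @ v_plus, v_plus))  # v^+ = v1 is stationary
```

Here the invariance of $v^+$ holds because the two blocks never mix mass, which is the structural reason the "normalize the positive part" step in the article succeeds.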
































To your last question, as to whether there is a simpler way of proving existence of a stationary distribution for finite-state Markov chains: that depends on what tools you have at your disposal. Here is a nice and short consequence of a fixed point theorem.

Let $\mathbf{P}$ be a Markov chain over $d$ states. The simplex $\Delta_d$ is a convex and compact subset of $\mathbb{R}^d$, which is a Euclidean vector space with the usual inner product. We can view the kernel as the following linear operator:
\begin{equation}
\begin{split}
\mathbf{P} : \Delta_d &\to \Delta_d \\
\mu &\mapsto \mu \mathbf{P}
\end{split}
\end{equation}
(Note that $\mu \mathbf{P} \in \Delta_d$ whenever $\mu \in \Delta_d$, because the entries stay nonnegative and the rows of $\mathbf{P}$ sum to $1$, so the map really does send the simplex to itself.)

As $\|\mathbf{P}\|_2 \leq \sqrt{d} < \infty$, the operator is bounded and therefore continuous. As a consequence, we can apply Brouwer's fixed point theorem to conclude that there exists $\pi \in \Delta_d$ with $\pi \mathbf{P} = \pi$.

answered Mar 16 at 8:46 – ippiki-ookami
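The two ingredients of this argument, that $\mu \mapsto \mu \mathbf{P}$ maps the simplex to itself and that a fixed point exists, can be illustrated numerically. A minimal sketch, assuming numpy and a made-up transition matrix; note that Brouwer's theorem is non-constructive, and the iteration below happens to converge only because this particular chain is irreducible and aperiodic:

```python
import numpy as np

# An illustrative 3-state row-stochastic matrix.
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.4, 0.2],
              [0.1, 0.3, 0.6]])

mu = np.array([1.0, 0.0, 0.0])  # any starting point in the simplex
for _ in range(500):
    mu = mu @ P
    # The map keeps mu in Delta_d: nonnegative entries summing to 1.
    assert np.isclose(mu.sum(), 1.0) and (mu >= 0).all()

# For this irreducible, aperiodic example the iterates approach the fixed
# point pi guaranteed by Brouwer; for periodic chains the iteration would
# oscillate even though the fixed point still exists.
print(np.allclose(mu @ P, mu))
```

Power iteration is only a convenient way to exhibit the fixed point here; the existence claim itself rests on continuity plus convexity and compactness of $\Delta_d$.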











