



Intuition behind positive recurrent and null recurrent Markov Chains


I cannot understand how there can be both positive recurrent and null recurrent Markov chains. The states of a Markov chain can be split into transient and recurrent: a recurrent state is one the chain will return to sooner or later, whereas from a transient state the chain may escape without ever coming back.



Since, by definition, a recurrent state is one the chain returns to in finite time, why is there a need to define a further subclass of recurrent chains (null recurrent), whose definition seems to me (even though I know this is not actually the case) to violate the whole point of recurrence in the first place?



Could someone please help with the intuition behind this?










      markov-chains






asked Mar 11 at 10:01 by statsguy21




















2 Answers







A state is recurrent if the waiting time $\tau$ for the chain's return to that state is almost surely finite. If $\tau$ also has finite expectation, one speaks of positive recurrence, otherwise of null recurrence. (Recall that a random variable with finite expectation is necessarily almost surely finite, while the converse is not true in general.)



          Intuitively speaking, recurrence means that the chain will eventually return, and positive recurrence means that the chain will return relatively fast. This line of thinking is also encouraged by asymptotic results like the Ratio Limit Theorem or Orey's Ergodic Theorem.
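To make this concrete, here is a small simulation sketch (an illustration added for intuition, not part of the original answer; the bias parameter, trial count, and step cap are arbitrary choices). It estimates the return time to state $0$ for a random walk on the nonnegative integers reflected at $0$: with a downward drift the walk is positive recurrent and the empirical mean return time settles near a small value, while the symmetric walk is null recurrent, so each return time is still finite but the empirical mean never stabilizes (here it is crudely truncated by the step cap).

```python
import random

def return_time_to_zero(p_down, max_steps=10**5):
    """Simulate a random walk on {0, 1, 2, ...} reflected at 0, started at 0.
    From 0 the walk moves to 1; from k > 0 it moves down with probability
    p_down and up otherwise. Returns the first time it revisits 0
    (truncated at max_steps so the symmetric case terminates)."""
    x, t = 1, 1  # the first step always leaves state 0
    while x != 0 and t < max_steps:
        x += -1 if random.random() < p_down else 1
        t += 1
    return t

def mean_return_time(p_down, trials=1000):
    return sum(return_time_to_zero(p_down) for _ in range(trials)) / trials

# Positive recurrent: downward drift, the mean return time is small and stable
# (for p_down = 0.6 the true value is 1 + 1/(0.6 - 0.4) = 6).
print("biased    (p_down = 0.6):", mean_return_time(0.6))

# Null recurrent: symmetric walk, every return time is finite, but the
# expectation is infinite, so the empirical mean never settles and is
# dominated by the truncation at max_steps.
print("symmetric (p_down = 0.5):", mean_return_time(0.5))
```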






answered Mar 11 at 11:27 (edited Mar 11 at 15:42) by Mars Plastic

Mars Plastic puts it rather nicely. Here are some additional elements.

To better understand the distinction between positive recurrent and null recurrent Markov chains, it helps to place ourselves in a context where it actually matters.

One of the fundamental questions about a Markov chain is whether a stationary distribution exists. If you restrict yourself to finite-state chains, there is always one (Brouwer's fixed point theorem), and null-recurrent states simply do not occur.

In the infinite case, you can start asking new questions. Even fully connected (irreducible) chains can fail to have a stationary distribution. It can be proven that if the chain is positive recurrent then a stationary distribution must exist, and $\pi(i) = 1/E[\tau_i]$, where $\tau_i$ is the return time to state $i$.

If the chain is null recurrent, then $\pi$ does not exist, but you are still guaranteed to return to every state.

In other words, even if the concept of a mixing time does not make sense, you still have almost surely finite hitting times.
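As a small sanity check of the relation $\pi(i) = 1/E[\tau_i]$, here is a sketch (illustrative, not part of the original answer; the two-state transition matrix, trial count, and seed are arbitrary choices). It compares the stationary distribution of a finite chain, computed as the normalized left eigenvector of $P$ for eigenvalue $1$, with the reciprocal of simulated mean return times:

```python
import numpy as np

# A two-state transition matrix, chosen arbitrarily for illustration (rows sum to 1).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Stationary distribution: normalized left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

def mean_return_time(P, i, trials=20000, seed=0):
    """Estimate E[tau_i]: the average number of steps to come back to state i."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    total = 0
    for _ in range(trials):
        state, steps = i, 0
        while True:
            state = rng.choice(n, p=P[state])
            steps += 1
            if state == i:
                break
        total += steps
    return total / trials

for i in range(len(P)):
    print(f"state {i}: pi(i) = {pi[i]:.3f},  1/E[tau_i] ~ {1 / mean_return_time(P, i):.3f}")
```

For this particular matrix the stationary distribution is $(0.8, 0.2)$, so the simulated mean return times should come out near $1.25$ and $5$.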






answered 2 days ago by ippiki-ookami



