
Convergence of a Stochastic Process - Am I missing something obvious?





























In the paper *On the Convergence of Stochastic Iterative Dynamic Programming Algorithms* (Jaakkola et al., 1994) the authors claim that a statement is "easy to show". I am starting to suspect that it isn't easy, and that they may simply have wanted to avoid proving it. But I thought I would post it here in case I missed something obvious.



[image: excerpt from the paper containing the statement in question]



What I have so far:



  1. Since $X_n$ is bounded in a compact interval, it certainly has convergent subsequences.

  2. $\|X_{n+1}-X_n\|\le (\alpha_n+\gamma \beta_n)C_1$, which implies that the difference converges to zero; but even together with the first statement this is not enough to make $X_n$ a Cauchy sequence.

  3. $\liminf X_n\ge 0$:
    $$
    \begin{align}
    X_{n+1}(x)&=(1-\alpha_n(x))X_n(x) + \gamma\beta_n(x)\|X_n\| \\
    &\ge (1-\alpha_n(x))X_n(x)\\
    &\ge \prod_{k=0}^{n}(1-\alpha_k(x))\,X_0 \to 0
    \end{align}
    $$

    since $\sum \alpha_n=\infty$ (cf. infinite products).

  4. Because $\alpha_n(x) \to 0$, we know that $(1-\alpha_n(x))\ge 0$ for almost all $n$, and if $X_n(x)\ge 0$ then
    $$
    X_{n+1}(x) = \underbrace{(1-\alpha_n(x))}_{\ge 0}\,
    \underbrace{X_n(x)}_{\ge 0} + \underbrace{\gamma\beta_n(x)\|X_n\|}_{\ge 0},
    $$

    thus if one element of the sequence is positive, all following elements are positive too. The sequences that stay negative converge to zero (by $\liminf X_n\ge 0$); the other sequences are positive for almost all $n$.

  5. For $\|X_n\|$ not to converge, $\|X_n\|=\max_x X_n(x)$ must hold for infinitely many $n$. If it were instead equal to the maximum over the negative sequences for almost all $n$, it would converge:
    $$\|X_n\|=\max_x \big(-X_n(x)\big) \le \max_x \Big(- \prod_{k=0}^{n} (1-\alpha_k)\, X_0\Big) \to 0.$$

  6. If we set $\beta_n=0$, we have
    $$X_m=\prod_{k=n}^{m-1} (1-\alpha_k)\,X_n \to 0.$$
    So my intuition is: since $\beta_n$ is smaller than $\alpha_n$ (on average), replacing $\beta_n$ with $\alpha_n$ should be fine, since it only pushes the iterate further from zero. So I think the argument goes in the direction
    $$X_{n+1}\sim (1-\alpha_n)X_n +\gamma \alpha_n X_n = (1-(1-\gamma)\alpha_n)X_n,$$
    which works since $\sum(1-\gamma)\alpha_n =\infty$ for $\gamma\in(0,1)$.
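As a numeric sanity check of the heuristic above (this is only an illustration, not part of the paper's argument; the step sizes $\alpha_n=\beta_n=1/(n+1)$, the discount $\gamma=0.5$, and the initial vector are arbitrary choices of mine), one can iterate the recursion directly and watch the max norm decay:

```python
# Numeric sketch of X_{n+1}(x) = (1 - a_n(x)) X_n(x) + g * b_n(x) * ||X_n||.
# Step sizes a_n = b_n = 1/(n+1) satisfy sum a_n = inf and sum a_n^2 < inf;
# they and the initial vector are illustrative choices, not taken from the paper.

g = 0.5                       # discount factor gamma in (0, 1)
X = [1.0, -2.0, 3.0]          # initial vector X_0 over three "states" x

for n in range(100000):
    a = 1.0 / (n + 1)         # step size alpha_n (same for every x here)
    b = a                     # take the boundary case beta_n = alpha_n
    norm = max(abs(v) for v in X)               # max norm ||X_n||
    X = [(1 - a) * v + g * b * norm for v in X]

print(max(abs(v) for v in X))  # small (on the order of 1e-2), consistent with X_n -> 0
```

This matches the bound $\|X_{n+1}\| \le (1-(1-\gamma)\alpha_n)\|X_n\|$: the partial products behave like $n^{-(1-\gamma)}$, so decay is slow but goes to zero.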

But I still need to formalize replacing $\beta_n$ with $\alpha_n$, which only works if I take the expected value, and I don't know whether taking expectations leaves the infinite sums intact. I also have to justify replacing the norm with just one element. I think I can assume that the norm is the max norm without disrupting later proofs, and since $\liminf X_n\ge 0$, $\|X_n\|$ is basically equal to $X_n$.



I am also a bit confused because I haven't used $\sum \alpha_n^2 <\infty$, so something is off. Additionally, the approach I am currently following would show directly that the process converges to $0$, while the paper's proof first shows that it converges to some $X^*$ and then continues with arguments establishing that it converges to $0$ from there. This makes me think that I am not on the "intended proof path". So maybe I am missing something obvious which could save me a lot of trouble, especially since they claim it should be easy.
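The infinite-product fact leaned on in steps 3 and 6 (that $\prod_k (1-\alpha_k) \to 0$ exactly when $\sum_k \alpha_k = \infty$, for $\alpha_k \in [0,1)$) can be checked numerically; the two step-size sequences below are standard illustrative examples of my own, not from the paper:

```python
# prod_{k=1}^{N} (1 - a_k) -> 0 iff sum a_k diverges (for a_k in [0,1)).
def partial_product(alpha, N):
    p = 1.0
    for k in range(1, N + 1):
        p *= 1.0 - alpha(k)
    return p

N = 10**6
p_div = partial_product(lambda k: 1.0 / (k + 1), N)      # sum diverges
p_conv = partial_product(lambda k: 1.0 / (k + 1)**2, N)  # sum converges

# p_div telescopes to 1/(N+1), i.e. ~1e-6; p_conv telescopes to (N+2)/(2(N+1)) ~ 1/2
print(p_div, p_conv)
```

This is also why the divergence of $\sum(1-\gamma)\alpha_n$ in step 6 is what drives the iterate to zero, while the unused condition $\sum \alpha_n^2 < \infty$ plays no role in such deterministic bounds (it controls the noise variance instead).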





























      real-analysis stochastic-processes stochastic-analysis stochastic-approximation














asked 2 days ago by Felix B., edited yesterday



















