
How to show the optimization/ODE fixed point iteration steps converge?




  1. I have $\vec{C} = G(\vec{\beta})$ from solving a system of ODEs numerically.
    Thanks to Robert's help, the ODE system can be found in this link: Solving a system of ODE


  2. Also, $\vec{\beta}$ should satisfy
    $$A\vec{\beta}\le f(\vec{\beta}, \vec{C})$$
    and maximize $$19\beta_1+0.5\beta_2+16\beta_3,$$
    where $A$ is a given matrix and $f$ is a given function.


I am thinking of solving this by iteration.
Starting from an initial approximation $\vec{\beta}^{0}$, for $k=1,2,3,\dots$ I
solve Part 1 to get $\vec{C}^{k} = G(\vec{\beta}^{k-1})$, and then solve the Part 2 optimization $$A\vec{\beta}^{k+1}\le f(\vec{\beta}^{k}, \vec{C}^{k})$$
with objective $$\max\ 19\beta_1^{k+1}+0.5\beta_2^{k+1}+16\beta_3^{k+1}.$$
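Here is a minimal numerical sketch of the scheme I have in mind (SciPy, with hypothetical placeholders for $G$, $f$, and $A$, since the real ones come from the ODE system above). Note that with $\vec{\beta}^{k}$ and $\vec{C}^{k}$ frozen, the Part 2 step is an ordinary linear program:

```python
# Hedged sketch of the alternating (fixed-point) scheme; G, f and A below are
# hypothetical placeholders, not the actual ones from the linked ODE question.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import linprog

w = np.array([19.0, 0.5, 16.0])            # objective coefficients
A = np.eye(3)                              # placeholder constraint matrix

def G(beta):
    """Part 1: solve the ODE system for a given beta and return C (placeholder rhs)."""
    sol = solve_ivp(lambda t, y: -beta * y, (0.0, 1.0), np.ones(3))
    return sol.y[:, -1]

def f(beta, C):
    """Placeholder for the given function f(beta, C)."""
    return 1.0 + 0.1 * beta + C

beta = np.array([0.1, 0.1, 0.1])           # initial approximation beta^0
for k in range(100):
    C = G(beta)                            # C^k = G(beta^{k-1})
    b = f(beta, C)                         # constraint right-hand side, frozen at beta^k
    # Part 2: maximize w.beta  <=>  minimize -w.beta  subject to  A beta <= b
    res = linprog(-w, A_ub=A, b_ub=b, bounds=[(None, None)] * 3, method="highs")
    beta_new = res.x
    if np.linalg.norm(beta_new - beta) < 1e-8:
        beta = beta_new                    # iterates have (numerically) stopped moving
        break
    beta = beta_new

print(k, beta)
```

In this toy setup the iterates settle down after a few passes, but that of course says nothing about the real $G$ and $f$.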



But I am worried that these steps will not converge as $k\to\infty$. My question is: will this method converge? If not, how can the optimization/ODE system be solved so that it converges to the true solution?



Any help is appreciated! Many thanks!










      ordinary-differential-equations convergence optimization fixed-point-theorems






      asked Mar 21 at 19:02









Tony

          1 Answer

The problem as stated has significant unknown parameters. The number and location of the maxima are unknown too, and the optimization method is not specified in detail. Under these conditions, convergence of the iterations cannot be guaranteed.



The situation can be improved by making the optimization step as accurate as possible.



Let us consider some possible ways to do this.



$\color{brown}{\textbf{The choice of the initial point.}}$



1. The greatest value of a linear function over a region can be attained only on the boundary of the region.

  This means that the constraint $A\vec{\beta}\le f(\vec{\beta}, \vec{C})$
  can be used in the rigorous (active) form
  $$A\vec{\beta} = f\left(\vec{\beta}, \vec{C}\right).\tag{1}$$
  Thus, the task is to maximize the scalar product $\vec{w}\cdot\vec{\beta},$ where
  $$\vec{w}=\begin{pmatrix}19\\0.5\\16\end{pmatrix},\tag{2}$$
  under the constraint $(1)$.

2. The obtained task can be solved by the Lagrange multiplier method, which recasts it as the computation of unconditional maxima of the function
  $$\varphi\left(\vec{\lambda},\vec{\beta}\right) = \vec{w}\cdot\vec{\beta} + \vec{\lambda}\cdot\left(A\vec{\beta}-f\left(\vec{\beta},G\left(\vec{\beta}\right)\right)\right).\tag{3}$$
  The maxima of $\varphi$ are attained only at its stationary points.

3. The stationary points of $\varphi$ can be found from the system
  $$\dfrac{\partial \varphi}{\partial \vec{\beta}} = 0,\quad \dfrac{\partial \varphi}{\partial \vec{\lambda}} = 0,$$
  or
  $$\begin{cases}
  \vec{w} + \left(A-\dfrac{df}{d\vec{\beta}}\right)^{T} \vec{\lambda} = 0\\[4pt]
  \dfrac{df}{d\vec{\beta}} = \dfrac{\partial f}{\partial \vec{\beta}} +\dfrac{\partial f}{\partial G}\dfrac{dG}{d\vec{\beta}}\\
  A\vec{\beta} = f\left(\vec{\beta}, G\left(\vec{\beta}\right)\right).
  \end{cases}\tag{4}$$

  The maxima should be selected among all the stationary points, and each of them can be chosen as an initial point for the iterations (a numerical sketch of solving $(4)$ follows this list).

  This approach localizes the initial points near the possible maxima.
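A rough numerical sketch of solving the stationary system $(4)$ (with hypothetical stand-ins for $G$, $f$, and $A$; the Jacobian $df/d\vec{\beta}$ is approximated by finite differences instead of the analytic chain rule in $(4)$):

```python
# Hedged sketch: find stationary points of the Lagrangian (3) by solving
# system (4) numerically; G, f and A are hypothetical placeholders.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import root, approx_fprime

w = np.array([19.0, 0.5, 16.0])
A = np.eye(3)                                        # placeholder constraint matrix

def G(beta):
    sol = solve_ivp(lambda t, y: -beta * y, (0.0, 1.0), np.ones(3))
    return sol.y[:, -1]                              # hypothetical C = G(beta)

def f(beta, C):
    return 1.0 + 0.1 * beta + C                      # hypothetical f(beta, C)

def h(beta):
    return f(beta, G(beta))                          # f(beta, G(beta)) as a map of beta alone

def stationarity(z):
    beta, lam = z[:3], z[3:]
    # finite-difference Jacobian df/dbeta; row i is the gradient of h_i
    J = np.array([approx_fprime(beta, lambda b, i=i: h(b)[i], 1e-6) for i in range(3)])
    eq_beta = w + (A - J).T @ lam                    # d(phi)/d(beta) = 0
    eq_lam = A @ beta - h(beta)                      # d(phi)/d(lambda) = 0, i.e. (1)
    return np.concatenate([eq_beta, eq_lam])

sol = root(stationarity, np.concatenate([np.ones(3), np.zeros(3)]), method="hybr")
beta0 = sol.x[:3]                                    # candidate initial point for the iterations
```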

$\color{brown}{\textbf{Iterations.}}$



1. The iterations can use a more detailed model of the optimization task.

2. The optimization task does not require the full solution $\vec{C}$; in particular, the needed derivatives can be obtained directly from Part 1 (see the sensitivity sketch after this list).

3. Convergence of the iterations in the proposed model depends mainly on the stability of the solutions of system $(1)$ near the maximum points.
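For item 2, a sketch of obtaining $dG/d\vec{\beta}$ directly from Part 1 by integrating the forward sensitivity equations together with the state (again with a hypothetical right-hand side $r(t,y;\vec{\beta}) = -\vec{\beta}\odot y$ standing in for the real ODE):

```python
# Hedged sketch: compute C = G(beta) and dC/dbeta in one ODE solve via the
# forward sensitivity equations S' = (dr/dy) S + dr/dbeta, S(0) = 0.
import numpy as np
from scipy.integrate import solve_ivp

def augmented_rhs(t, z, beta):
    n = beta.size
    y, S = z[:n], z[n:].reshape(n, n)          # state y and sensitivity S = dy/dbeta
    dy = -beta * y                             # hypothetical r(t, y; beta)
    dS = np.diag(-beta) @ S + np.diag(-y)      # (dr/dy) S + dr/dbeta for this rhs
    return np.concatenate([dy, dS.ravel()])

beta = np.array([0.1, 0.1, 0.1])
z0 = np.concatenate([np.ones(3), np.zeros(9)]) # y(0) = 1, S(0) = 0
sol = solve_ivp(augmented_rhs, (0.0, 1.0), z0, args=(beta,))
C = sol.y[:3, -1]                              # C = G(beta)
dC_dbeta = sol.y[3:, -1].reshape(3, 3)         # Jacobian dG/dbeta at the final time
```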





                edited Mar 31 at 10:36

























                answered Mar 29 at 12:56









Yuri Negometyanov
