

Find random non-almost-degenerated multivariate polynomials.


If I randomly draw the coefficients of a polynomial of degree $n$, say $P_n$, there is a good chance that this polynomial can be closely approximated by a polynomial of smaller degree $P_{n-k}$, $k \in \{1,\dots,n\}$.



For instance, this $P_5$ (in blue) is easily approximated by a $P_3$ (in red), as measured by their mean squared error (MSE) on the interval $[-1, 1]$:



[Figure: a random $P_5$ (blue) closely approximated by a $P_3$ (red)]



I am interested in random polynomials $P_n$ that are "complete" in the sense that they are not easily approximated by lower-degree polynomials.



For instance, this $P_4$ (in orange) fails to approximate the $P_5$ (in blue): its measured MSE is high.



[Figure: a $P_4$ (orange) failing to approximate the $P_5$ (blue)]



How do I randomly draw from this class of polynomials only?
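The observation above can be reproduced numerically. The sketch below (my own illustration, assuming NumPy; the degrees and coefficient range mirror the figures) draws many random degree-5 polynomials with uniform monomial coefficients and measures how well a least-squares degree-3 fit approximates each one:

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 1001)

# Draw monomial coefficients of a degree-5 polynomial uniformly in [-1, 1],
# then measure the MSE of the best degree-3 least-squares fit on [-1, 1].
mses = []
for _ in range(200):
    p5 = Polynomial(rng.uniform(-1, 1, size=6))
    y = p5(x)
    p3 = Polynomial.fit(x, y, deg=3, domain=[-1, 1])
    mses.append(np.mean((y - p3(x)) ** 2))

# The residual comes only from the degree-4 and degree-5 Legendre
# components, so the MSE stays small for every draw.
print(np.median(mses))
```

The typical MSE is tiny compared to the polynomial's own variation on $[-1,1]$, which is exactly the "almost-degenerate" behaviour described above.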




An attempt to formalize the problem, and to generalize it to multivariate polynomials in $d$ dimensions:



Let $t \in \mathbb{R}^{+*}$ be a minimal dissimilarity threshold.
A multivariate polynomial of degree $n$ is considered on the hypercube:
$P_n: [-1,1]^d \to \mathbb{R}$.
Its coefficients $a$ can be referred to by $d$ indices, so that $P_n(x)$ can be expressed as a sum of monomials:

$P_n(x) = \displaystyle\sum_{i_1,\dots,i_d \in \{0,\dots,n\}} a_{i_1,\dots,i_d}\, x_1^{i_1} \cdots x_d^{i_d}$



Each coefficient is restricted to the bounded hypercube $a_{i_1,\dots,i_d} \in [-A,A]$, $A \in \mathbb{R}^+$.



The dissimilarity between two polynomials is defined as (or as anything monotonic with)
$d(P, Q) \propto \displaystyle\int_{x\in[-1,1]^d}\left(P(x) - Q(x)\right)^2\,\mathrm{d}x$.



For each polynomial $P_n$, its "completeness" is measured by the smallest dissimilarity between itself and any polynomial $Q_{n-1}$ of degree $n-1$:

$c(P_n) = \displaystyle\min_{Q_{n-1}} d(P_n, Q_{n-1})$



How do I randomly draw the coefficients $a$ so as to ensure that

$c(P_n) \geqslant t$?




Put another way, the problem seems to be that the "average $P_n$" is the trivial null polynomial $P_n(x) = 0\ \forall x \in [-1,1]^d$. Therefore, if I naively draw the coefficients uniformly from $[-A,A]$, I get something closer to a degenerate polynomial than to a "complete" one. How do I bias the sampling in $[-A,A]$ so as to avoid such almost-degenerate polynomials?




Bonus: if I could also relax the restriction on $A$ while ensuring that

$\forall x \in [-1,1]^d,\ P_n(x) \in [-1, 1]$,

the satisfaction would be complete.










Tags: polynomials, approximation, random, multivariate-polynomial






asked Mar 14 at 9:05 by iago-lito




















1 Answer


















Use orthogonal polynomials, say Legendre polynomials.

Instead of writing
$$f_n = \sum_{i=0}^{n} \alpha_i x^i$$
with random $\alpha_i$, write
$$f_n = \sum_{i=0}^{n} \alpha_i P_i(x)$$
with $P_i$ the $i$th Legendre polynomial. Let $g$ be the best $(n-1)$th-order approximation to $f_n$:
$$g = \sum_{i=0}^{n-1}\left(\frac{2i+1}{2}\int_{-1}^{1} f_n(x')\, P_i(x')\, \mathrm{d}x'\right) P_i(x)$$
$$g = \sum_{i=0}^{n-1} \alpha_i P_i(x)$$
so the approximation error is governed entirely by $\alpha_n$. Put a lower bound on $|\alpha_n|$, and you will have a non-degenerate polynomial.
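A minimal sketch of this recipe in Python, using NumPy's `numpy.polynomial.legendre` module (the bound $A$, the threshold on $|\alpha_n|$, and the helper names are my own choices, not part of the answer):

```python
import numpy as np
from numpy.polynomial import Polynomial
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)

def draw_nondegenerate(n, A=1.0, alpha_n_min=0.5):
    """Draw Legendre coefficients alpha_0..alpha_n uniformly in [-A, A],
    but force |alpha_n| >= alpha_n_min so the degree-n part cannot vanish."""
    alpha = rng.uniform(-A, A, size=n + 1)
    alpha[n] = rng.uniform(alpha_n_min, A) * rng.choice([-1.0, 1.0])
    return alpha

def completeness(alpha):
    """c(f_n) = integral of (f_n - g)^2 over [-1, 1]: since g removes every
    term but alpha_n * P_n, this is alpha_n^2 * ||P_n||^2 = alpha_n^2 * 2/(2n+1)."""
    n = len(alpha) - 1
    return alpha[n] ** 2 * 2.0 / (2 * n + 1)

alpha = draw_nondegenerate(5)
x = np.linspace(-1, 1, 2001)
y = L.legval(x, alpha)

# Brute-force cross-check: the best degree-4 least-squares fit leaves
# (essentially) just the alpha_5 * P_5 term as residual.
fit = Polynomial.fit(x, y, deg=4, domain=[-1, 1])
err = np.mean((y - fit(x)) ** 2) * 2.0  # ~ integral over [-1, 1]
print(err, completeness(alpha))
```

Because Legendre polynomials are orthogonal under exactly the $\int_{[-1,1]}(\cdot)^2$ norm used in the question, the completeness bound is just a bound on one coefficient, with no optimization needed.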






• This is really helpful! And thank you for these pointers :) The restriction to a $[-1,1]$ codomain can be ensured if I enforce that $\sum = 1$. Is the plain product of Legendre polynomials the correct generalization to multivariate functions? Is the class of functions you describe somehow restricted compared to the class of all polynomials that would fit? For instance, is there any interest in generalizing to Jacobi polynomials instead? – iago-lito, Mar 14 at 10:33






• Yes, the product is the generalization to multivariate functions. Any set of orthogonal polynomials up to order $n$ spans all polynomials up to order $n$, so there is no restriction. Using different types of orthogonal polynomials, which are orthogonal under different inner products, will also work, but the norm you want to lower-bound (what you call dissimilarity) must then be the norm corresponding to the inner product of whatever set of orthogonal polynomials you use. – Wouter, Mar 14 at 12:37
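For $d = 2$, the tensor-product construction mentioned here can be sketched as follows (my own illustration; the coefficient grid `c` is an arbitrary random choice):

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(1)
n = 3
# c[i, j] multiplies P_i(x) * P_j(y): the tensor product of
# univariate Legendre polynomials.
c = rng.uniform(-1.0, 1.0, size=(n + 1, n + 1))

# Orthogonality makes the squared L2 norm over [-1,1]^2 a weighted sum
# of squared coefficients, with ||P_i||^2 = 2/(2i+1) in each variable.
i = np.arange(n + 1)
w = 2.0 / (2 * i + 1)
norm_sq = np.sum(c ** 2 * np.outer(w, w))

# Numeric cross-check of the closed form on a dense grid.
x = np.linspace(-1, 1, 801)
vals = L.leggrid2d(x, x, c)        # evaluate on the full x-y grid
num = np.mean(vals ** 2) * 4.0     # ~ integral over the square
print(norm_sq, num)
```

The same closed form extends to any $d$: the squared norm, and hence the dissimilarity to the best lower-degree approximation, reads directly off the coefficients.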










• .. thus the awesomeness of Legendre polynomials. Thanks a lot :) – iago-lito, Mar 14 at 12:38










• As an update on this: I can ensure that $f_n(x) \in [-1,1]\ \forall x \in [-1,1]^d$ by picking the $\alpha_i$ from a Dirichlet distribution, then randomly flipping their signs. The problem is that the higher the degree $n$, the closer $f_n$ is to 0 (for the same reasons, I suppose). How can I adjust the $\alpha_i$ so that the full range $[-1,1]$ is exploited no matter the degree? I have tried $\tanh(a \times \operatorname{arctanh}(f_n))$ transformations ($a \in \mathbb{R}^+$) without much success. Maybe I should increase $a$ depending on $d$ and $n$... Does this deserve another post? – iago-lito, Mar 15 at 14:03











• You can always try the quick-and-dirty method of finding the maximum and minimum numerically and then rescaling the polynomial such that they are $-1$ and $1$. – Wouter, Mar 15 at 20:08
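That quick-and-dirty rescaling might look like this in the univariate case (my own sketch, assuming NumPy; a dense grid only approximates the true extrema, so the result can still slightly overshoot $[-1,1]$ between grid points):

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(2)
alpha = rng.uniform(-1.0, 1.0, size=6)  # Legendre coefficients of an f_5

# Locate approximate extrema on a dense grid over [-1, 1].
x = np.linspace(-1, 1, 10001)
y = L.legval(x, alpha)
lo, hi = y.min(), y.max()

# Affine map sending [lo, hi] onto [-1, 1]. In coefficient form this
# rescales every alpha_i and shifts only the constant term (P_0 = 1),
# so the degree of the polynomial is unchanged.
scale = 2.0 / (hi - lo)
shift = -(hi + lo) / (hi - lo)
beta = scale * alpha
beta[0] += shift

y2 = L.legval(x, beta)
print(y2.min(), y2.max())
```

Note that the shift moves $\alpha_0$ only, so a lower bound previously imposed on $|\alpha_n|$ survives the rescaling (it just gets multiplied by `scale`).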










          Your Answer





          StackExchange.ifUsing("editor", function ()
          return StackExchange.using("mathjaxEditing", function ()
          StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
          StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
          );
          );
          , "mathjax-editing");

          StackExchange.ready(function()
          var channelOptions =
          tags: "".split(" "),
          id: "69"
          ;
          initTagRenderer("".split(" "), "".split(" "), channelOptions);

          StackExchange.using("externalEditor", function()
          // Have to fire editor after snippets, if snippets enabled
          if (StackExchange.settings.snippets.snippetsEnabled)
          StackExchange.using("snippets", function()
          createEditor();
          );

          else
          createEditor();

          );

          function createEditor()
          StackExchange.prepareEditor(
          heartbeatType: 'answer',
          autoActivateHeartbeat: false,
          convertImagesToLinks: true,
          noModals: true,
          showLowRepImageUploadWarning: true,
          reputationToPostImages: 10,
          bindNavPrevention: true,
          postfix: "",
          imageUploader:
          brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
          contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
          allowUrls: true
          ,
          noCode: true, onDemand: true,
          discardSelector: ".discard-answer"
          ,immediatelyShowMarkdownHelp:true
          );



          );













          draft saved

          draft discarded


















          StackExchange.ready(
          function ()
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3147751%2ffind-random-non-almost-degenerated-multivariate-polynomials%23new-answer', 'question_page');

          );

          Post as a guest















          Required, but never shown

























          1 Answer
          1






          active

          oldest

          votes








          1 Answer
          1






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          1












          $begingroup$

          Use orthogonal polynomials, say Legendre polynomials.



          Instead of writing
          $$f_n=sum_i^n alpha_i x^i$$
          with random $alpha_i$, write
          $$f_n=sum_i^n alpha_i P_i(x)$$
          with $P_i$ the $i$th Legendre polynomial. Let $g$ be the best $(n-1)$th order approximation to $f_n$:
          $$g=sum_i=0^n-1left((2i+1)int_-1^1 f_n(x') P_i(x') dx'right) P_i(x)$$
          $$g=sum_i=0^n-1 alpha_i P_i(x)$$
          so the approximation error is given entirely by $alpha_n$. Put a lower bound on $|alpha_n|$, and you will have a non-degenerate polynomial.






          share|cite|improve this answer









          $endgroup$












          • $begingroup$
            This is really helpful! And thank you for these pointers :) The restriction to $[-1,1]$ codomain can be ensured if I enforce that $sum=1$. Is the plain product of Legendre polynomials the correct generalization to multivariate functions? Is the class of functions you describe somehow restricted compared to the class of all polynoms that would fit? For instance, do I have any interest in generalizing to Jacobi polynoms instead?
            $endgroup$
            – iago-lito
            Mar 14 at 10:33






          • 1




            $begingroup$
            Yes, the product is the generalization to multivariate functions. Any set of orthogonal polynomials up to order $n$ spans all polynomials up to order $n$, so there is no restriction. Using different types of orthogonal polynomials, which are orthogonal under different inner products, will also work, but the norm that you want to lower-bound (what you call dissimilarity) is the norm corresponding to the inner product for whatever set of orthogonal polynomials you use.
            $endgroup$
            – Wouter
            Mar 14 at 12:37










          • $begingroup$
            .. thus the awesomeness of Legendre polynomials. Thanks a lot :)
            $endgroup$
            – iago-lito
            Mar 14 at 12:38










          • $begingroup$
            As an update on this: I can ensure that $f_n(x) in [-1,1] forall x in [-1,1]^d$ by picking each $alpha_i$ from a Dirichlet distribution, then I randomly flip their sign. The problem is that, the higher the degree $n$, the closer $f_n$ is from 0 (for the same reasons I suppose). How can I adjust the $alpha_i$ so that the full range [-1,1] is exploited no matter the degree? I have tried with $tanh(atimes arctanh(f_n))$ transformations ($ainmathbbR^+$) with not much success. Maybe I should increase $a$ depending on $d$ and $n$.. Does this deserve another post?
            $endgroup$
            – iago-lito
            Mar 15 at 14:03











          • $begingroup$
            You can always try the quick-and-dirty method of finding the maximum and minimum numerically and then rescaling the polynomial such that they are -1 and 1.
            $endgroup$
            – Wouter
            Mar 15 at 20:08















          1












          $begingroup$

          Use orthogonal polynomials, say Legendre polynomials.



          Instead of writing
          $$f_n=sum_i^n alpha_i x^i$$
          with random $alpha_i$, write
          $$f_n=sum_i^n alpha_i P_i(x)$$
          with $P_i$ the $i$th Legendre polynomial. Let $g$ be the best $(n-1)$th order approximation to $f_n$:
          $$g=sum_i=0^n-1left((2i+1)int_-1^1 f_n(x') P_i(x') dx'right) P_i(x)$$
          $$g=sum_i=0^n-1 alpha_i P_i(x)$$
          so the approximation error is given entirely by $alpha_n$. Put a lower bound on $|alpha_n|$, and you will have a non-degenerate polynomial.






          share|cite|improve this answer









          $endgroup$












          • $begingroup$
            This is really helpful! And thank you for these pointers :) The restriction to $[-1,1]$ codomain can be ensured if I enforce that $sum=1$. Is the plain product of Legendre polynomials the correct generalization to multivariate functions? Is the class of functions you describe somehow restricted compared to the class of all polynoms that would fit? For instance, do I have any interest in generalizing to Jacobi polynoms instead?
            $endgroup$
            – iago-lito
            Mar 14 at 10:33






          • 1




            $begingroup$
            Yes, the product is the generalization to multivariate functions. Any set of orthogonal polynomials up to order $n$ spans all polynomials up to order $n$, so there is no restriction. Using different types of orthogonal polynomials, which are orthogonal under different inner products, will also work, but the norm that you want to lower-bound (what you call dissimilarity) is the norm corresponding to the inner product for whatever set of orthogonal polynomials you use.
            $endgroup$
            – Wouter
            Mar 14 at 12:37










          • $begingroup$
            .. thus the awesomeness of Legendre polynomials. Thanks a lot :)
            $endgroup$
            – iago-lito
            Mar 14 at 12:38










          • $begingroup$
            As an update on this: I can ensure that $f_n(x) in [-1,1] forall x in [-1,1]^d$ by picking each $alpha_i$ from a Dirichlet distribution, then I randomly flip their sign. The problem is that, the higher the degree $n$, the closer $f_n$ is from 0 (for the same reasons I suppose). How can I adjust the $alpha_i$ so that the full range [-1,1] is exploited no matter the degree? I have tried with $tanh(atimes arctanh(f_n))$ transformations ($ainmathbbR^+$) with not much success. Maybe I should increase $a$ depending on $d$ and $n$.. Does this deserve another post?
            $endgroup$
            – iago-lito
            Mar 15 at 14:03











          • $begingroup$
            You can always try the quick-and-dirty method of finding the maximum and minimum numerically and then rescaling the polynomial such that they are -1 and 1.
            $endgroup$
            – Wouter
            Mar 15 at 20:08













          1












          1








          1





          $begingroup$

          Use orthogonal polynomials, say Legendre polynomials.



          Instead of writing
          $$f_n=sum_i^n alpha_i x^i$$
          with random $alpha_i$, write
          $$f_n=sum_i^n alpha_i P_i(x)$$
          with $P_i$ the $i$th Legendre polynomial. Let $g$ be the best $(n-1)$th order approximation to $f_n$:
          $$g=sum_i=0^n-1left((2i+1)int_-1^1 f_n(x') P_i(x') dx'right) P_i(x)$$
          $$g=sum_i=0^n-1 alpha_i P_i(x)$$
          so the approximation error is given entirely by $alpha_n$. Put a lower bound on $|alpha_n|$, and you will have a non-degenerate polynomial.






          share|cite|improve this answer









          $endgroup$



          Use orthogonal polynomials, say Legendre polynomials.



          Instead of writing
          $$f_n=sum_i^n alpha_i x^i$$
          with random $alpha_i$, write
          $$f_n=sum_i^n alpha_i P_i(x)$$
          with $P_i$ the $i$th Legendre polynomial. Let $g$ be the best $(n-1)$th order approximation to $f_n$:
          $$g=sum_i=0^n-1left((2i+1)int_-1^1 f_n(x') P_i(x') dx'right) P_i(x)$$
          $$g=sum_i=0^n-1 alpha_i P_i(x)$$
          so the approximation error is given entirely by $alpha_n$. Put a lower bound on $|alpha_n|$, and you will have a non-degenerate polynomial.







          share|cite|improve this answer












          share|cite|improve this answer



          share|cite|improve this answer










          answered Mar 14 at 9:24









          WouterWouter

          5,93421436




          5,93421436











          • $begingroup$
            This is really helpful! And thank you for these pointers :) The restriction to $[-1,1]$ codomain can be ensured if I enforce that $sum=1$. Is the plain product of Legendre polynomials the correct generalization to multivariate functions? Is the class of functions you describe somehow restricted compared to the class of all polynoms that would fit? For instance, do I have any interest in generalizing to Jacobi polynoms instead?
            $endgroup$
            – iago-lito
            Mar 14 at 10:33






          • 1




            $begingroup$
            Yes, the product is the generalization to multivariate functions. Any set of orthogonal polynomials up to order $n$ spans all polynomials up to order $n$, so there is no restriction. Using different types of orthogonal polynomials, which are orthogonal under different inner products, will also work, but the norm that you want to lower-bound (what you call dissimilarity) is the norm corresponding to the inner product for whatever set of orthogonal polynomials you use.
            $endgroup$
            – Wouter
            Mar 14 at 12:37










          • $begingroup$
            .. thus the awesomeness of Legendre polynomials. Thanks a lot :)
            $endgroup$
            – iago-lito
            Mar 14 at 12:38










          • $begingroup$
            As an update on this: I can ensure that $f_n(x) in [-1,1] forall x in [-1,1]^d$ by picking each $alpha_i$ from a Dirichlet distribution, then I randomly flip their sign. The problem is that, the higher the degree $n$, the closer $f_n$ is from 0 (for the same reasons I suppose). How can I adjust the $alpha_i$ so that the full range [-1,1] is exploited no matter the degree? I have tried with $tanh(atimes arctanh(f_n))$ transformations ($ainmathbbR^+$) with not much success. Maybe I should increase $a$ depending on $d$ and $n$.. Does this deserve another post?
            $endgroup$
            – iago-lito
            Mar 15 at 14:03











          • $begingroup$
            You can always try the quick-and-dirty method of finding the maximum and minimum numerically and then rescaling the polynomial such that they are -1 and 1.
            $endgroup$
            – Wouter
            Mar 15 at 20:08
















          • $begingroup$
            This is really helpful! And thank you for these pointers :) The restriction to $[-1,1]$ codomain can be ensured if I enforce that $sum=1$. Is the plain product of Legendre polynomials the correct generalization to multivariate functions? Is the class of functions you describe somehow restricted compared to the class of all polynoms that would fit? For instance, do I have any interest in generalizing to Jacobi polynoms instead?
            $endgroup$
            – iago-lito
            Mar 14 at 10:33






          • 1




            $begingroup$
            Yes, the product is the generalization to multivariate functions. Any set of orthogonal polynomials up to order $n$ spans all polynomials up to order $n$, so there is no restriction. Using different types of orthogonal polynomials, which are orthogonal under different inner products, will also work, but the norm that you want to lower-bound (what you call dissimilarity) is the norm corresponding to the inner product for whatever set of orthogonal polynomials you use.
            $endgroup$
            – Wouter
            Mar 14 at 12:37










          • $begingroup$
            .. thus the awesomeness of Legendre polynomials. Thanks a lot :)
            $endgroup$
            – iago-lito
            Mar 14 at 12:38










          • $begingroup$
            As an update on this: I can ensure that $f_n(x) in [-1,1] forall x in [-1,1]^d$ by picking each $alpha_i$ from a Dirichlet distribution, then I randomly flip their sign. The problem is that, the higher the degree $n$, the closer $f_n$ is from 0 (for the same reasons I suppose). How can I adjust the $alpha_i$ so that the full range [-1,1] is exploited no matter the degree? I have tried with $tanh(atimes arctanh(f_n))$ transformations ($ainmathbbR^+$) with not much success. Maybe I should increase $a$ depending on $d$ and $n$.. Does this deserve another post?
            $endgroup$
            – iago-lito
            Mar 15 at 14:03











          • $begingroup$
            You can always try the quick-and-dirty method of finding the maximum and minimum numerically and then rescaling the polynomial such that they are -1 and 1.
            $endgroup$
            – Wouter
            Mar 15 at 20:08















          $begingroup$
          This is really helpful! And thank you for these pointers :) The restriction to $[-1,1]$ codomain can be ensured if I enforce that $sum=1$. Is the plain product of Legendre polynomials the correct generalization to multivariate functions? Is the class of functions you describe somehow restricted compared to the class of all polynoms that would fit? For instance, do I have any interest in generalizing to Jacobi polynoms instead?
          $endgroup$
          – iago-lito
          Mar 14 at 10:33




          Yes, the product is the generalization to multivariate functions. Any set of orthogonal polynomials up to order $n$ spans all polynomials up to order $n$, so there is no restriction. Using different types of orthogonal polynomials, which are orthogonal under different inner products, will also work, but the norm that you want to lower-bound (what you call dissimilarity) is then the norm corresponding to the inner product of whatever set of orthogonal polynomials you use.
          – Wouter
          Mar 14 at 12:37
















          …thus the awesomeness of Legendre polynomials. Thanks a lot :)
          – iago-lito
          Mar 14 at 12:38
















          As an update on this: I can ensure that $f_n(x) \in [-1,1]\ \forall x \in [-1,1]^d$ by picking each $\alpha_i$ from a Dirichlet distribution and then randomly flipping their signs. The problem is that the higher the degree $n$, the closer $f_n$ stays to 0 (for the same reasons, I suppose). How can I adjust the $\alpha_i$ so that the full range $[-1,1]$ is exploited no matter the degree? I have tried $\tanh(a \times \operatorname{arctanh}(f_n))$ transformations ($a \in \mathbb{R}^+$) without much success. Maybe I should increase $a$ depending on $d$ and $n$. Does this deserve another post?
          – iago-lito
          Mar 15 at 14:03





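          The Dirichlet-plus-random-signs idea described above can be sketched as follows (`d`, `n`, and the seed are illustrative choices). Since $|P_k(x)| \le 1$ on $[-1,1]$ and the absolute coefficients sum to 1, the triangle inequality gives $|f_n(x)| \le 1$:

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(1)

d, n = 2, 4
m = (n + 1) ** d  # number of basis products

# Dirichlet weights sum to 1; random signs keep sum(|alpha_i|) = 1,
# which bounds |f_n| by 1 on [-1,1]^d via the triangle inequality.
signs = rng.choice([-1.0, 1.0], size=m)
alpha = (rng.dirichlet(np.ones(m)) * signs).reshape((n + 1,) * d)

def f(x):
    V = [L.legvander(np.atleast_1d(xj), n)[0] for xj in x]
    out = alpha
    for v in V:
        out = np.tensordot(out, v, axes=([0], [0]))
    return float(out)

# Spot-check the bound on a random sample of points.
pts = rng.uniform(-1, 1, size=(100, d))
assert all(abs(f(p)) <= 1 + 1e-12 for p in pts)
```

          This also illustrates the shrinking problem raised in the comment: the fixed unit budget $\sum |\alpha_i| = 1$ is spread over $(n+1)^d$ terms of mixed sign, so typical values of $f_n$ concentrate near 0 as $n$ and $d$ grow.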













          You can always try the quick-and-dirty method of finding the maximum and minimum numerically and then rescaling the polynomial so that they become $-1$ and $1$.
          – Wouter
          Mar 15 at 20:08




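          The quick-and-dirty rescaling could look like this (the example polynomial `f` and the grid resolution are hypothetical placeholders; note that the affine map also shifts the constant term, so the result is still a polynomial of the same degree):

```python
import numpy as np

# Hypothetical polynomial to rescale; any vectorized callable on [-1,1]^d works.
def f(x):
    return 0.1 * x[0] ** 3 - 0.05 * x[1] + 0.02

d = 2
# Crude numerical min/max over a dense grid (quick-and-dirty, as suggested).
axes = [np.linspace(-1, 1, 201)] * d
grid = np.meshgrid(*axes)
vals = f(grid)
lo, hi = vals.min(), vals.max()

# Affine map sending [lo, hi] onto [-1, 1].
def f_rescaled(x):
    return 2 * (f(x) - lo) / (hi - lo) - 1

print(f_rescaled(np.array([0.0, 0.0])))
```

          The grid estimate only lower-bounds the true range, so for high-degree polynomials a finer grid (or a local optimizer started from the best grid point) may be needed before the rescaled values genuinely fill $[-1,1]$.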
















