Find random non-almost-degenerated multivariate polynomials.


If I randomly draw parameters for a polynomial of degree $n$, say $P_n$, there seems to be a high chance that this polynomial can be closely approximated by a polynomial of smaller degree $P_{n-k}$, $k \in \{1,\dots,n\}$.



For instance, this $P_5$ (in blue) is easily approximated by a $P_3$ (in red), as measured by their Mean Squared Error on the interval $[-1, 1]$:



[Figure: a $P_5$ (blue) closely approximated by a $P_3$ (red)]



I am interested in random polynomials $P_n$ that are "complete" in the sense that they are not easily approximated by lower-degree polynomials.



For instance, this $P_4$ (in orange) fails to approximate the $P_5$ (in blue) as its measured MSE is high.



[Figure: a $P_4$ (orange) fails to approximate a $P_5$ (blue)]



How do I randomly draw from this class of polynomials only?
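To illustrate the phenomenon, here is a minimal sketch (a univariate example, assuming coefficients drawn uniformly from $[-1,1]$ with NumPy; the seed and grid size are arbitrary choices): it draws a random $P_5$ and fits a $P_3$ by least squares, and the resulting MSE is frequently small.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: a univariate P_5 with coefficients uniform in [-1, 1].
p5 = np.polynomial.Polynomial(rng.uniform(-1, 1, size=6))

# Least-squares fit of a degree-3 polynomial over [-1, 1].
x = np.linspace(-1, 1, 501)
p3 = np.polynomial.Polynomial.fit(x, p5(x), deg=3, domain=[-1, 1])

# Mean squared error of the low-degree approximation.
mse = np.mean((p5(x) - p3(x)) ** 2)
print(mse)
```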




Attempt to formalize the problem, and generalize to multivariate polynomials in $d$ dimensions:



Let $t \in \mathbb{R}^{+*}$ be a minimal dissimilarity threshold.
A multivariate polynomial of degree $n$ is considered on the hypercube,
$P_n: [-1,1]^d \to \mathbb{R}$.
Its coefficients $a$ can be referred to by $d$ indices, so that $P_n(x)$ can be expressed as the sum of its monomials:



$P_n(x) = \displaystyle\sum_{i_1,\dots,i_d \in \{0,\dots,n\}} a_{i_1,\dots,i_d} \, x_1^{i_1} \times \dots \times x_d^{i_d}$



Each coefficient is restricted to the bounded hypercube: $a_{i_1,\dots,i_d} \in [-A,A]$, $A \in \mathbb{R}^+$.



The dissimilarity between two polynomials is defined as (or is monotonic with)
$d(P, Q) \propto \displaystyle\int_{x\in[-1,1]^d}\left(P(x) - Q(x)\right)^2\,\mathrm{d}x$.



For each polynomial $P_n$, its "completeness" is measured by the smallest dissimilarity between itself and any polynomial $Q_{n-1}$ of degree $n-1$:



$c(P_n) = \displaystyle\min_{Q_{n-1}} d(P_n, Q_{n-1})$



How do I randomly draw parameters $a$ so as to ensure that $c(P_n) \geqslant t$?
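For the univariate case ($d = 1$), $c(P_n)$ can be estimated numerically: under the $L^2$ dissimilarity, the best $Q_{n-1}$ is the least-squares projection. A minimal sketch, assuming NumPy and a simple grid estimate of the integral (the function name and grid size are illustrative):

```python
import numpy as np

def completeness(coeffs, num_pts=1001):
    """Estimate c(P_n) for d = 1: the integral over [-1, 1] of (P_n - Q)^2,
    where Q is the best least-squares polynomial of degree n - 1."""
    p = np.polynomial.Polynomial(coeffs)  # coeffs[i] multiplies x**i
    n = len(coeffs) - 1
    x = np.linspace(-1, 1, num_pts)
    q = np.polynomial.Polynomial.fit(x, p(x), deg=n - 1, domain=[-1, 1])
    # The interval has length 2, so mean * 2 approximates the integral.
    return 2 * np.mean((p(x) - q(x)) ** 2)
```

For example, `completeness([0, 0, 0, 0, 0, 1])` (the monomial $x^5$) is strictly positive, while a coefficient vector whose leading entry is zero gives a value near machine precision.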




Put another way, the problem seems to be that the "average $P_n$" is the trivial, null, degenerate polynomial $P_n(x) = 0$ for all $x \in [-1,1]^d$. Therefore, if I naively draw the parameters at random from $[-A,A]$, I'll get something closer to degenerate polynomials than to "complete" ones. How do I bias the sampling in $[-A,A]$ so as to avoid such almost-degenerate polynomials?




Bonus: if I could also relax the $A$ restriction but ensure that



$\forall x \in [-1,1]^d,\ P_n(x) \in [-1, 1]$



the satisfaction would be complete.










Tags: polynomials, approximation, random, multivariate-polynomial






asked Mar 14 at 9:05 by iago-lito




















          1 Answer
          Use orthogonal polynomials, say Legendre polynomials.



Instead of writing
$$f_n=\sum_{i=0}^n \alpha_i x^i$$
with random $\alpha_i$, write
$$f_n=\sum_{i=0}^n \alpha_i P_i(x)$$
with $P_i$ the $i$th Legendre polynomial. Let $g$ be the best $(n-1)$th-order approximation to $f_n$; since $\int_{-1}^1 P_i(x)^2\,\mathrm{d}x = \frac{2}{2i+1}$,
$$g=\sum_{i=0}^{n-1}\left(\frac{2i+1}{2}\int_{-1}^1 f_n(x')\, P_i(x')\,\mathrm{d}x'\right) P_i(x)
=\sum_{i=0}^{n-1} \alpha_i P_i(x),$$
so the approximation error is determined entirely by $\alpha_n$. Put a lower bound on $|\alpha_n|$, and you will have a non-degenerate polynomial.
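This recipe might be sketched as follows in NumPy (univariate case; `alpha_min`, `scale`, and the uniform draw are illustrative choices, not part of the answer). Since $\int_{-1}^1 P_n^2 = \frac{2}{2n+1}$, enforcing $|\alpha_n| \geqslant \alpha_{\min}$ guarantees a squared $L^2$ approximation error of at least $\alpha_{\min}^2 \cdot \frac{2}{2n+1}$:

```python
import numpy as np

rng = np.random.default_rng()

def random_nondegenerate(n, alpha_min=0.5, scale=1.0):
    """Draw alpha_0..alpha_n uniformly from [-scale, scale], then push the
    leading Legendre coefficient away from zero: |alpha_n| >= alpha_min.
    The squared L2 error of the best degree-(n-1) approximation is then
    at least alpha_min**2 * 2 / (2*n + 1)."""
    assert 0 < alpha_min <= scale
    alpha = rng.uniform(-scale, scale, size=n + 1)
    sign = 1.0 if alpha[n] >= 0 else -1.0
    alpha[n] = sign * rng.uniform(alpha_min, scale)
    return np.polynomial.Legendre(alpha, domain=[-1, 1])
```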






• This is really helpful! And thank you for these pointers :) The restriction to the $[-1,1]$ codomain can be ensured if I enforce that $\sum = 1$. Is the plain product of Legendre polynomials the correct generalization to multivariate functions? Is the class of functions you describe somehow restricted compared to the class of all polynomials that would fit? For instance, would I gain anything by generalizing to Jacobi polynomials instead?
  – iago-lito, Mar 14 at 10:33

• Yes, the product is the generalization to multivariate functions. Any set of orthogonal polynomials up to order $n$ spans all polynomials up to order $n$, so there is no restriction. Using different types of orthogonal polynomials, which are orthogonal under different inner products, will also work, but the norm that you want to lower-bound (what you call dissimilarity) is the norm corresponding to the inner product of whatever set of orthogonal polynomials you use.
  – Wouter, Mar 14 at 12:37

• …thus the awesomeness of Legendre polynomials. Thanks a lot :)
  – iago-lito, Mar 14 at 12:38

• As an update on this: I can ensure that $f_n(x) \in [-1,1]$ for all $x \in [-1,1]^d$ by picking each $\alpha_i$ from a Dirichlet distribution, then randomly flipping their signs. The problem is that the higher the degree $n$, the closer $f_n$ is to $0$ (for the same reasons, I suppose). How can I adjust the $\alpha_i$ so that the full range $[-1,1]$ is exploited no matter the degree? I have tried $\tanh(a \times \operatorname{arctanh}(f_n))$ transformations ($a \in \mathbb{R}^+$) without much success. Maybe I should increase $a$ depending on $d$ and $n$… Does this deserve another post?
  – iago-lito, Mar 15 at 14:03

• You can always try the quick-and-dirty method of finding the maximum and minimum numerically and then rescaling the polynomial such that they are $-1$ and $1$.
  – Wouter, Mar 15 at 20:08
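The quick-and-dirty rescaling from the last comment might be sketched as follows (grid-based extrema estimate, so the range guarantee only holds at the grid points; note that the affine shift perturbs $\alpha_0$ and uniformly shrinks all other Legendre coefficients, so any lower bound on $|\alpha_n|$ should be re-checked afterwards). The function name and grid size are illustrative:

```python
import numpy as np

def rescale_to_unit_range(f, num_pts=2001):
    """Map f affinely so that its (grid-estimated) range on [-1, 1]
    becomes exactly [-1, 1]."""
    x = np.linspace(-1, 1, num_pts)
    y = f(x)
    lo, hi = y.min(), y.max()
    if np.isclose(lo, hi):
        raise ValueError("polynomial is numerically constant on [-1, 1]")
    return lambda t: 2.0 * (f(t) - lo) / (hi - lo) - 1.0
```

For example, rescaling $t \mapsto 3t^2 - 1$ (range $[-1, 2]$ on the grid) yields a function whose grid minimum and maximum are $-1$ and $1$.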










answered Mar 14 at 9:24 by Wouter







          • $begingroup$
            This is really helpful! And thank you for these pointers :) The restriction to $[-1,1]$ codomain can be ensured if I enforce that $sum=1$. Is the plain product of Legendre polynomials the correct generalization to multivariate functions? Is the class of functions you describe somehow restricted compared to the class of all polynoms that would fit? For instance, do I have any interest in generalizing to Jacobi polynoms instead?
            $endgroup$
            – iago-lito
            Mar 14 at 10:33






          • 1




            $begingroup$
            Yes, the product is the generalization to multivariate functions. Any set of orthogonal polynomials up to order $n$ spans all polynomials up to order $n$, so there is no restriction. Using different types of orthogonal polynomials, which are orthogonal under different inner products, will also work, but the norm that you want to lower-bound (what you call dissimilarity) is the norm corresponding to the inner product for whatever set of orthogonal polynomials you use.
            $endgroup$
            – Wouter
            Mar 14 at 12:37










          • $begingroup$
            .. thus the awesomeness of Legendre polynomials. Thanks a lot :)
            $endgroup$
            – iago-lito
            Mar 14 at 12:38










          • $begingroup$
            As an update on this: I can ensure that $f_n(x) in [-1,1] forall x in [-1,1]^d$ by picking each $alpha_i$ from a Dirichlet distribution, then I randomly flip their sign. The problem is that, the higher the degree $n$, the closer $f_n$ is from 0 (for the same reasons I suppose). How can I adjust the $alpha_i$ so that the full range [-1,1] is exploited no matter the degree? I have tried with $tanh(atimes arctanh(f_n))$ transformations ($ainmathbbR^+$) with not much success. Maybe I should increase $a$ depending on $d$ and $n$.. Does this deserve another post?
            $endgroup$
            – iago-lito
            Mar 15 at 14:03











          • $begingroup$
            You can always try the quick-and-dirty method of finding the maximum and minimum numerically and then rescaling the polynomial such that they are -1 and 1.
            $endgroup$
            – Wouter
            Mar 15 at 20:08
















          • $begingroup$
            This is really helpful! And thank you for these pointers :) The restriction to $[-1,1]$ codomain can be ensured if I enforce that $sum=1$. Is the plain product of Legendre polynomials the correct generalization to multivariate functions? Is the class of functions you describe somehow restricted compared to the class of all polynoms that would fit? For instance, do I have any interest in generalizing to Jacobi polynoms instead?
            $endgroup$
            – iago-lito
            Mar 14 at 10:33






          • 1




            $begingroup$
            Yes, the product is the generalization to multivariate functions. Any set of orthogonal polynomials up to order $n$ spans all polynomials up to order $n$, so there is no restriction. Using different types of orthogonal polynomials, which are orthogonal under different inner products, will also work, but the norm that you want to lower-bound (what you call dissimilarity) is the norm corresponding to the inner product for whatever set of orthogonal polynomials you use.
            $endgroup$
            – Wouter
            Mar 14 at 12:37










          • $begingroup$
            .. thus the awesomeness of Legendre polynomials. Thanks a lot :)
            $endgroup$
            – iago-lito
            Mar 14 at 12:38










          • $begingroup$
            As an update on this: I can ensure that $f_n(x) in [-1,1] forall x in [-1,1]^d$ by picking each $alpha_i$ from a Dirichlet distribution, then I randomly flip their sign. The problem is that, the higher the degree $n$, the closer $f_n$ is from 0 (for the same reasons I suppose). How can I adjust the $alpha_i$ so that the full range [-1,1] is exploited no matter the degree? I have tried with $tanh(atimes arctanh(f_n))$ transformations ($ainmathbbR^+$) with not much success. Maybe I should increase $a$ depending on $d$ and $n$.. Does this deserve another post?
            $endgroup$
            – iago-lito
            Mar 15 at 14:03











          • $begingroup$
            You can always try the quick-and-dirty method of finding the maximum and minimum numerically and then rescaling the polynomial such that they are -1 and 1.
            $endgroup$
            – Wouter
            Mar 15 at 20:08















          $begingroup$
          This is really helpful! And thank you for these pointers :) The restriction to $[-1,1]$ codomain can be ensured if I enforce that $sum=1$. Is the plain product of Legendre polynomials the correct generalization to multivariate functions? Is the class of functions you describe somehow restricted compared to the class of all polynoms that would fit? For instance, do I have any interest in generalizing to Jacobi polynoms instead?
          $endgroup$
          – iago-lito
          Mar 14 at 10:33
          $\begingroup$
          Yes, the product is the generalization to multivariate functions. Any set of orthogonal polynomials up to order $n$ spans all polynomials up to order $n$, so there is no restriction. Using different types of orthogonal polynomials, which are orthogonal under different inner products, will also work, but the norm that you want to lower-bound (what you call dissimilarity) is the norm corresponding to the inner product for whatever set of orthogonal polynomials you use.
          $\endgroup$
          – Wouter
          Mar 14 at 12:37
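The tensor-product generalization described here can be sketched in NumPy for the bivariate case (degree and seed are illustrative assumptions): `legval2d` evaluates exactly $f(x,y)=\sum_{i,j} c_{ij}\,P_i(x)\,P_j(y)$, which we cross-check against the explicit double sum of products.

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(2)

n = 3
c = rng.normal(size=(n + 1, n + 1))  # random coefficient tensor c_{ij}
x, y = 0.3, -0.7

# Tensor-product evaluation: f(x, y) = sum_{i,j} c_ij P_i(x) P_j(y)
val = L.legval2d(x, y, c)

# Explicit double sum; [0]*i + [1] selects the single basis polynomial P_i.
manual = sum(
    c[i, j] * L.legval(x, [0] * i + [1]) * L.legval(y, [0] * j + [1])
    for i in range(n + 1)
    for j in range(n + 1)
)
assert np.isclose(val, manual)
```

The same pattern extends to $d$ variables with a rank-$d$ coefficient tensor, which is why the plain product is the right multivariate basis.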
          $\begingroup$
          ... thus the awesomeness of Legendre polynomials. Thanks a lot :)
          $\endgroup$
          – iago-lito
          Mar 14 at 12:38















