After applying a positive definite matrix to a vector several times, is the angle between the original vector and the mapped vector still less than 90 degrees?


My question



Would $\theta$ still be less than 90 degrees in $v^T M^k v = \|v\| \, \|M^k v\| \cos\theta$, if the matrix $M$ is positive definite?




Background Information



  • Let's suppose that $v$ (the original vector) is a non-zero vector and $M$ is a positive definite matrix. I multiply $v$ by $M$ repeatedly, $k$ times (mapped vector: $M^k v$), and then take the inner product of those two vectors. The mathematical equation is as follows:

         $v^T M^k v = \|v\| \, \|M^k v\| \cos\theta$




  • $v^T M v > 0$: This is certain because $M$ is a positive definite matrix. As such, the angle $\theta$ between $v$ (the original vector) and $Mv$ (the mapped vector) is less than 90 degrees, since the inner product must be positive.

  • I am curious to know whether the $\theta$ in the equation below is also between 0 and 90 degrees (a small numerical sketch of this check is included after the quote below).

         $v^T M^k v = \|v\| \, \|M^k v\| \cos\theta$



  • I was motivated to ask this question by the Towards Data Science article 'What is a Positive Definite Matrix?'. The quote is as follows:


Wouldn’t it be nice in an abstract sense… if you could multiply some matrices multiple times and they won’t change the sign of the vectors? If you multiply positive numbers to other positive numbers, it doesn’t change its sign. I think it’s a neat property for a matrix to have.
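
Here is a minimal numerical sketch of the check I am asking about (it assumes NumPy, and the matrix $M$ and the vector $v$ below are just arbitrary examples built for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    # Build an arbitrary symmetric positive definite matrix M = A^T A + I.
    A = rng.standard_normal((4, 4))
    M = A.T @ A + np.eye(4)

    v = rng.standard_normal(4)        # an arbitrary non-zero vector

    for k in range(1, 6):
        w = np.linalg.matrix_power(M, k) @ v                      # the mapped vector M^k v
        cos_theta = (v @ w) / (np.linalg.norm(v) * np.linalg.norm(w))
        print(k, cos_theta > 0)       # True would mean theta < 90 degrees for this k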











linear-algebra matrices linear-transformations positive-definite






asked Mar 21 at 7:42, edited Mar 21 at 8:09 – Eiffelbear







  • If $M$ is positive definite, so is $M^k$ for any integer $k$ (even if it is negative). This is because the eigenvalues of $M^k$ are the $k$-th powers of the respective eigenvalues of $M$, which are all positive, and whose powers are also therefore positive. (And if $M$ is symmetric, then of course $M^k$ is symmetric as well.)
    – M. Vinay
    Mar 21 at 8:01










  • @M.Vinay Thank you for the comment. But what do the eigenvalues of the matrix $M$ have to do with the angle between the 'original vector' and the 'mapped vector'? To me, an eigenvalue is just a scaling factor of the eigenvectors, and the 'original vector $v$' here is an arbitrary vector, not necessarily an eigenvector.
    – Eiffelbear
    Mar 21 at 8:12










  • $A$ is positive definite iff all its eigenvalues are positive (and often it's also required to be symmetric, though that's sometimes dropped). If $M$ is positive definite, so is $M^k$ (as I showed in my previous comment). You have yourself noted that if $A$ is positive definite, then the angle between $x$ and $Ax$ is acute.
    – M. Vinay
    Mar 21 at 8:15










  • Okay, hold on, I'll update my answer with this. I was planning to do that anyway (as soon as I figured it out, hehe!). I'm pretty sure that can also be obtained from the eigenvalues. You see, eigenvectors are those vectors that merely get scaled by the transformation (or matrix). Now, given any vector, if you decompose it along the eigenvectors [there are some additional conditions to be discussed here, which I shall do in my answer], then applying $M^k$ is going to scale these components by the respective eigenvalues. Hm, so I think I have the answer now.
    – M. Vinay
    Mar 21 at 8:31










  • @M. Vinay I would appreciate it a lot if you added the content of your last comment to your answer, because I am excited to see how this mystery (a vector still staying within 90 degrees of the original vector after being transformed several times) gets solved!
    – Eiffelbear
    Mar 21 at 9:16












1 Answer

If $M$ is a positive definite matrix, then so is $M^k$, for all integers $k$ (note that negative powers of $M$ also exist since $|M| \ne 0$).



This is easy to see. A matrix $A$ is positive definite if and only if all its eigenvalues are positive [additionally, $A$ may also be required to be symmetric, in the more common definition]. If $\lambda_1, \ldots, \lambda_n$ are all the eigenvalues of a matrix $M$, then the eigenvalues of $M^k$ are exactly $\lambda_1^k, \ldots, \lambda_n^k$ (this also holds for negative $k$ if $M$ is invertible, which is true if every $\lambda_i$ is non-zero). Thus, if $M$ is positive definite, each $\lambda_i > 0$, which implies $\lambda_i^k > 0$ as well, and therefore $M^k$ is also positive definite [additionally, if $M$ is symmetric, so is $M^k$].



Thus, $v^T M^k v > 0$ for all $v \ne 0$, which proves (as shown in the question itself) that the angle between $v$ and $M^k v$ is acute.
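
A quick numerical illustration of this step (only a sketch, assuming NumPy; the matrix $M$ below is just one example of a symmetric positive definite matrix):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3))
    M = A.T @ A + np.eye(3)            # example symmetric positive definite matrix
    k = 5
    Mk = np.linalg.matrix_power(M, k)

    lam = np.linalg.eigvalsh(M)        # eigenvalues of M (all positive)
    lam_k = np.linalg.eigvalsh(Mk)     # eigenvalues of M^k

    print(np.allclose(np.sort(lam ** k), np.sort(lam_k)))   # eigenvalues of M^k are lambda_i^k
    print(np.all(lam_k > 0))                                 # so M^k is positive definite
    v = rng.standard_normal(3)
    print(v @ Mk @ v > 0)                                    # hence v^T M^k v > 0, i.e. the angle is acute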




We can also see this geometrically. Now I will assume that $M$ is indeed a (real) symmetric $n \times n$ positive definite matrix. Being real and symmetric guarantees that it is diagonalisable, or equivalently (what is important for us), that it has a set of $n$ eigenvectors that form a basis for $\mathbb R^n$. Indeed, there is an orthonormal basis of $\mathbb R^n$ whose elements are eigenvectors of $M$. Let this orthonormal basis be $B = \{x_1, \ldots, x_n\}$, and let $\lambda_i$ be the eigenvalue corresponding to the eigenvector $x_i$, $i = 1, \ldots, n$. Thus, $M x_i = \lambda_i x_i$.



Thus, given any vector $v \in \mathbb R^n$, we can decompose it along the basis vectors as, say, $$v = \alpha_1 x_1 + \cdots + \alpha_n x_n.$$



Now, if $M$ is applied to $v$, each component in the above representation gets scaled by the corresponding eigenvalue (because each component is an eigenvector). That is,



\begin{align*}
Mv &= M(\alpha_1 x_1 + \cdots + \alpha_n x_n)\\
&= \alpha_1 (M x_1) + \cdots + \alpha_n (M x_n)\\
&= \alpha_1 \lambda_1 x_1 + \cdots + \alpha_n \lambda_n x_n\\
&= \lambda_1 (\alpha_1 x_1) + \cdots + \lambda_n (\alpha_n x_n).
\end{align*}



Since each $\lambda_i$ is positive, all the components get scaled in their current direction (up, down, or not at all, according to the eigenvalue being greater than, less than, or equal to $1$). This makes it obvious that the direction of none of the components is reversed (and therefore the direction of the whole vector is also not reversed). Furthermore, since no eigenvalue is zero, no "projection" occurs. Thus, $Mv$ cannot be orthogonal to $v$. Indeed, the angle between $Mv$ and $v$ will be acute.
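
Here is a short numerical sketch of this decomposition (again only an illustration, assuming NumPy; the matrix is an arbitrary example): the coefficients of $Mv$ in the orthonormal eigenbasis are exactly $\lambda_i \alpha_i$.

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((3, 3))
    M = A.T @ A + np.eye(3)                    # example symmetric positive definite matrix

    lam, X = np.linalg.eigh(M)                 # columns of X: orthonormal eigenvectors x_i
    v = rng.standard_normal(3)

    alpha = X.T @ v                            # coefficients alpha_i of v in the eigenbasis
    coeff_Mv = X.T @ (M @ v)                   # coefficients of Mv in the same basis

    print(np.allclose(coeff_Mv, lam * alpha))  # Mv = sum_i lambda_i * alpha_i * x_i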



Consider the orthonormal basis $B$ consisting of eigenvectors of $M$. This orthonormal system defines its own $2^n$ orthants in $\mathbb R^n$ (not the standard orthants). The vector $v$ lies in one of these, or possibly between some of these (if its components along some of the eigenvectors are zero). If it lies between some orthants, then we may simply consider the lower-dimensional subspace spanned by its non-zero components, and all the arguments below will hold in this subspace.



Consider the example shown below in $\mathbb R^2$. The light blue vector is $v$, and the shaded region is the orthant containing it in the coordinate system whose red axes are given by the two eigenvectors of $M$. The components of $v$ along the axes are also shown in light blue.



[Figure: Successive PD transformations]



Each time $M$ is applied (to the result of the previous application), each component gets scaled by the corresponding eigenvalue. The diagram shows $Mv$ and $M^2 v$ and their respective components (the darker blue vectors). Thus, higher and higher powers of $M$ applied to $v$ produce vectors in which the components of $v$ along the eigenvectors corresponding to the highest eigenvalues have been scaled abnormally high (assuming these eigenvalues are greater than $1$). On the other hand, the components corresponding to eigenvalues less than $1$, if any, will get scaled down, closer and closer to zero (there are none such in the example shown).



However, since the scaling never reverses any component, all the vectors from the successive applications remain in the same orthant as the original vector. Any two vectors strictly inside one orthant have an acute angle between them.
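
To see this orthant argument numerically (a sketch under the same assumptions as above: NumPy and an arbitrary example matrix), one can check that the signs of the eigenbasis coefficients of $M^k v$ never change, so $M^k v$ stays in the same orthant as $v$ and the angle stays acute:

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((4, 4))
    M = A.T @ A + np.eye(4)                      # example symmetric positive definite matrix

    lam, X = np.linalg.eigh(M)                   # orthonormal eigenvectors as columns of X
    v = rng.standard_normal(4)
    alpha = X.T @ v                              # coefficients of v in the eigenbasis

    for k in range(1, 8):
        w = np.linalg.matrix_power(M, k) @ v
        coeff = X.T @ w                          # coefficients of M^k v, i.e. lambda_i^k * alpha_i
        same_orthant = np.all(np.sign(coeff) == np.sign(alpha))
        cos_theta = (v @ w) / (np.linalg.norm(v) * np.linalg.norm(w))
        print(k, same_orthant, cos_theta > 0)    # expected: True, True for every k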






answered Mar 21 at 8:25, edited Mar 22 at 14:48 – M. Vinay











  • Oh, that's right. Thank you. But can you give me a geometric interpretation of this? I can't understand why, after mapping the vector $v$ with the matrix $M$ several ($k$) times, the angle between the original vector $v$ and the mapped vector $M^k v$ is still less than 90 degrees.
    – Eiffelbear
    Mar 21 at 8:28











  • Actually I'm planning to update this answer even further.
    – M. Vinay
    Mar 21 at 13:46










  • Your suggestion is more than welcome. I have 2 questions. Q.1. I don't get the part "since no eigenvalue is zero, no projection occurs". Isn't it the case that a dimension would collapse if an eigenvalue were zero? It's because $Mv$ is a linear combination of the $\lambda_i \alpha_i x_i$. If one of the lambdas is 0, for example $\lambda_2 = 0$, the direction of the vector expressed by $x_2$ disappears. That dimension is gone. To me, if an eigenvalue is zero, it means that after the transformation of a vector $v$ by the matrix $M$, a dimension reduction occurs. Curious to know why you said it's a "projection".
    – Eiffelbear
    Mar 21 at 14:03











  • Q.2. I can see that, since all eigenvalues are bigger than 0 (positive definite matrix), if you keep multiplying by the transformation matrix $M$, the eigenvectors get scaled in their current direction. But how does that mean that "the mapped vector and the original vector have an angle between them of less than 90 degrees"? I cannot see the link between those two statements. Any intuitive explanation for that, please?
    – Eiffelbear
    Mar 21 at 14:09










  • Regarding my first question, the dimension collapse and the projection are the same thing! Haha. I have thought about it overnight :) But I still have no clue on question no. 2.
    – Eiffelbear
    Mar 22 at 2:06
















