Inverse of SPD matrix + identity




I didn't know exactly where to post this question. I feel like it falls between computer science and mathematics, so I'm sorry if it doesn't fit here.

I need to calculate $(A+\alpha I)^{-1}$, given $\alpha>0$ and $A^{-1}$, which is SPD with a known sparsity pattern. (If it helps in any way, I need it for calculating the Mahalanobis distance.)

I'm dealing with a high-dimensional, sparse $A^{-1}$, so I would also like to avoid computing $A$ (or any other inverse) with an explicit inverse operation.

I tried looking into the Woodbury matrix identity, but I can't find a way to use it in my case.

Is there any closed-form solution or iterative method that I can use?

Can the fact that I only need to calculate $x^T(A+\alpha I)^{-1}x$ help in any way?

Update:

I found an interesting way to avoid computing $A$ from $A^{-1}$:

$$(A+\alpha I)^{-1} = (A+\alpha AA^{-1})^{-1} = (A(I+\alpha A^{-1}))^{-1} = (I+\alpha A^{-1})^{-1}A^{-1}$$

So now, when calculating the Mahalanobis distance, I need
$$x^T(I+\alpha A^{-1})^{-1}A^{-1}x.$$

Now I only need to do one inverse operation. $A^{-1}$ is roughly a $k$-diagonal (banded) matrix, so maybe now I'll find a way to calculate what I need more efficiently.










  • Look up Krylov methods. – Ryan Howe, Mar 26 at 3:42















linear-algebra matrices numerical-linear-algebra computational-mathematics






asked Mar 25 at 9:54 by shahaf finder (edited Mar 29 at 14:01)











3 Answers


















As you suggest, the Woodbury matrix identity is of no use, since the perturbation $\alpha I$ is not low rank. In addition, computing a full eigendecomposition $A=QDQ^T$ will in general be much slower than just using sparse Gaussian elimination on $A+\alpha I$, so that won't be helpful in practice either.

As a commenter suggested, this may be a good time for preconditioned conjugate gradient, a Krylov subspace method. For small $\alpha$ we have $A + \alpha I \approx A$, so $A^{-1}(A+\alpha I) \approx I$. Thus $A^{-1}$, which you already know, should provide a good preconditioner. If a better preconditioner is needed and $\alpha \|A^{-1}\| < 1$ for an appropriate matrix norm, then we have the Neumann series

$$(A+\alpha I)^{-1} = A^{-1}(I+\alpha A^{-1})^{-1} = A^{-1} - \alpha A^{-2} + \alpha^2 A^{-3} - \cdots.$$

You can truncate this infinite series at the $A^{-k}$ power and evaluate the preconditioner $(A^{-1} - \alpha A^{-2} + \alpha^2 A^{-3} - \cdots + (-1)^{k+1}A^{-k})x$ with $k$ matrix-vector products with $A^{-1}$ using Horner's method.

If instead $\alpha$ is very large, specifically $\|A\|/\alpha < 1$, then we can instead use the Neumann series

$$(A+\alpha I)^{-1} = \alpha^{-1}(I+\alpha^{-1}A)^{-1} = \alpha^{-1}I - \alpha^{-2}A + \alpha^{-3}A^2 - \cdots$$

as a preconditioner. If $\alpha$ is somewhere in between, then neither of these Neumann series ideas will work, and you might want to investigate another preconditioner if you intend to use conjugate gradient. (Algebraic multigrid may work well, for instance.)

answered Mar 29 at 17:31 by eepperly16
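The Horner evaluation of the truncated small-$\alpha$ Neumann series described above can be sketched as follows. This is a minimal illustration: the helper name `neumann_precond` and the small dense test matrix are our choices, and the explicit-power comparison is only there to check the recurrence.

```python
import numpy as np
from numpy.linalg import matrix_power

def neumann_precond(P_matvec, x, alpha, k):
    """Apply the truncated Neumann-series preconditioner
        M_k x = (P - alpha*P^2 + ... + (-1)^{k+1} alpha^{k-1} P^k) x,
    with P = A^{-1}, using exactly k applications of P via Horner's method:
        M_k x = P(x - alpha*P(x - alpha*P(... x ...)))."""
    y = P_matvec(x)                      # innermost application of P
    for _ in range(k - 1):
        y = P_matvec(x - alpha * y)
    return y

# Check against the explicitly summed truncated series on a small dense P.
rng = np.random.default_rng(0)
P = rng.standard_normal((4, 4))
P = P @ P.T + np.eye(4)                  # symmetric positive definite
x = rng.standard_normal(4)
alpha = 0.1
k = 4

horner = neumann_precond(lambda v: P @ v, x, alpha, k)
direct = sum((-alpha) ** j * matrix_power(P, j + 1) @ x for j in range(k))
```

In a PCG loop, `neumann_precond` would be wrapped as the preconditioner callback, so each application costs $k$ sparse matvecs with $A^{-1}$.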






  • Thanks for the answer. All my $A^{-1}$ come from a special-case GMM, so I don't know if I can bound $\|A\|/\alpha$. But I'll check it! – shahaf finder, Mar 29 at 17:46










  • What do you mean by GMM? – eepperly16, Mar 30 at 19:14










  • The matrices are covariance/precision matrices from a Gaussian mixture model. – shahaf finder, Mar 30 at 19:23


















WRONG ANSWER (sorry, I did something wrong, please ignore!)

There is a way to write this up so that it contains $\det(A)$, but not $A$:

$$(A+\alpha I)^{-1}=\frac{\det(A)}{\det(A+\alpha I)}A^{-1}+\alpha I$$

Proof:

I'll use the formula

$$\operatorname{adj}(A+\alpha I) = \operatorname{adj}(A) + \begin{bmatrix}
\alpha \det(A+\alpha I) & 0 & \dots & 0 \\
0 & \alpha \det(A+\alpha I) & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \dots & \alpha \det(A+\alpha I)
\end{bmatrix}$$

which comes from the definition of the adjugate matrix: https://en.wikipedia.org/wiki/Adjugate_matrix

This is equivalent to
$$\operatorname{adj}(A+\alpha I) = \operatorname{adj}(A) + \det(A+\alpha I)\,\alpha I.$$
I'll use the identity $A^{-1}\det(A) = \operatorname{adj}(A)$ and reorder the equation:
$$\det(A+\alpha I)(A+\alpha I)^{-1} = \operatorname{adj}(A) + \det(A + \alpha I)\,\alpha I.$$
Dividing both sides by $\det(A+\alpha I)$ (if it's nonzero):
$$(A+\alpha I)^{-1} = \frac{\operatorname{adj}(A)}{\det(A+\alpha I)} + \alpha I.$$
And using $A^{-1}\det(A) = \operatorname{adj}(A)$ again:
$$(A+\alpha I)^{-1}=\frac{\det(A)}{\det(A+\alpha I)}A^{-1}+\alpha I. \quad \blacksquare$$

There might be an even simpler way of calculating $(A+\alpha I)^{-1}$, but this is the only thing that comes to my mind at the moment.

Edit: Also, $\det(A+\alpha I)$ can be calculated from $\det(A)$ and $A$'s main diagonal:

$$\det(A+\alpha I) = \det(A) + (a_{11} + \alpha)(a_{22} + \alpha)\cdots(a_{nn} + \alpha) - a_{11}a_{22}\cdots a_{nn}$$

if $A=[a_{ij}]_{i,j \in \{1,2,\dots,n\}}$.

answered Mar 25 at 17:43, edited Mar 25 at 20:02, by Daniel P






  • This sounds like a good option to consider; the only problem is the last part, because I need the diagonal values of $A$. Can I somehow get them without inverting $A^{-1}$? – shahaf finder, Mar 25 at 18:53










  • Well, there is the identity $\det(A^{-1})=\frac{1}{\det(A)}$, so $\frac{\det(A)}{\det(A+\alpha I)}=\frac{\det((A+\alpha I)^{-1})}{\det(A^{-1})}$. I can't think of anything else right now, but if I do, I'll comment about it. I also looked up the problem on the site and found this: math.stackexchange.com/questions/978051/… – Daniel P, Mar 25 at 19:15











  • $\det(A)$ is not the problem, but $\det(A+\alpha I)$ is. Ignoring this issue, I've done a quick test using Python and it doesn't work; I'll keep on testing, but there might be an error somewhere. – shahaf finder, Mar 25 at 19:20











  • Where does the first formula come from? – shahaf finder, Mar 25 at 19:51






  • That's ok, thanks for trying! – shahaf finder, Mar 25 at 20:02


















For every symmetric real matrix $A$ there exists a real orthogonal matrix $Q$ such that $Q^TAQ$ is a diagonal matrix. Every symmetric matrix is thus, up to choice of an orthonormal basis, a diagonal matrix. If you can find it, then $A=QDQ^T$ and your expression becomes $Q(D+\alpha I)^{-1}Q^T$. Since $A$ is positive semidefinite, $(D+\alpha I)^{-1}$ with $\alpha>0$ exists even when $A^{-1}$ does not.
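As a minimal sketch of this eigendecomposition route (practical only for moderate $n$, since a full eigendecomposition costs $O(n^3)$, as the comment below notes; the SPD test matrix here is our own construction):

```python
import numpy as np

# A = Q D Q^T  =>  (A + alpha*I)^{-1} = Q (D + alpha*I)^{-1} Q^T
rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n))
A = B @ B.T + np.eye(n)            # symmetric positive definite test matrix
alpha = 0.3

d, Q = np.linalg.eigh(A)           # eigenvalues d (diagonal of D), orthonormal Q
inv_shift = Q @ np.diag(1.0 / (d + alpha)) @ Q.T

ref = np.linalg.inv(A + alpha * np.eye(n))
```

Once $Q$ and $d$ are in hand, the shifted inverse for any new $\alpha$ costs only $O(n^2)$ per quadratic form, which can pay off if many values of $\alpha$ are needed.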






  • Thanks for the answer. $A$ is positive definite, so $A^{-1}$ exists (and in fact it's what's given). The complexity of eigendecomposition is also $O(n^3)$, which is like inverting (if I remember correctly). I'll update the post with something I found to somewhat reduce the calculation of $A$ from $A^{-1}$. – shahaf finder, Mar 29 at 13:49












Your Answer








StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "69"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);

else
createEditor();

);

function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);



);













draft saved

draft discarded


















StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3161587%2finverse-of-spd-matrix-identity%23new-answer', 'question_page');

);

Post as a guest















Required, but never shown

























3 Answers
3






active

oldest

votes








3 Answers
3






active

oldest

votes









active

oldest

votes






active

oldest

votes









1












$begingroup$

As you suggest, the Woodbury Matrix Identity is of no use since the perturbation of $alpha I$ is not low rank. In addition, in general, computing a full eigendecomposition $A=QDQ^T$ will be much slower than just using sparse Gaussian elimination on $A+alpha I$, so this won't be helpful either in practice.



As a commenter suggested, this may be a good time for preconditioned conjugate gradient, a Krylov subspace method. For $alpha$ small, we have $A + alpha I approx A$ so $A^-1(A+alpha I) approx I$. Thus, $A^-1$, which you already know, should provide a good preconditioner. If a better preconditioner is needed and $alpha |A^-1| < 1$ for an appropriate matrix norm, then we have the Neumann series



$$
(A+alpha I)^-1 = A^-1(I+alpha A^-1)^-1 = A^-1 - alpha A^-2 + alpha^2 A^-3-cdots.
$$



You can truncate this infinite series up to the $A^-k$ power and evaluate the preconditioner $(A^-1 - alpha A^-2 + alpha^2 A^-3 -cdots + (-1)^k+1A^-k)x$ using $k$ matrix-vector products with $A^-1$ using Horner's method.



If instead $alpha$ is very large, specifically $|A|/alpha < 1$, then we can instead use the Neuman series



$$
(A+alpha I)^-1 = alpha^-1(I+alpha^-1A)^-1 = alpha^-1I - alpha^-2A + alpha^-3A - cdots
$$



as a preconditioner. If $alpha$ is somewhere in between, then neither of these Neumann series ideas will work, and you might want to investigate another preconditioner if you intend to use Conjugate Gradient. (Algebraic multigrid may work well, for instance.)






share|cite|improve this answer









$endgroup$












  • $begingroup$
    Thanks for the answer. All my $A^-1$ come from a special case GMM, so I don't know if I can bound $||A||/alpha$. But I'll check it!
    $endgroup$
    – shahaf finder
    Mar 29 at 17:46










  • $begingroup$
    What do you mean by GMM?
    $endgroup$
    – eepperly16
    Mar 30 at 19:14










  • $begingroup$
    The matrices are cov/precision matrices from a gaussian mixture model
    $endgroup$
    – shahaf finder
    Mar 30 at 19:23















1












$begingroup$

As you suggest, the Woodbury Matrix Identity is of no use since the perturbation of $alpha I$ is not low rank. In addition, in general, computing a full eigendecomposition $A=QDQ^T$ will be much slower than just using sparse Gaussian elimination on $A+alpha I$, so this won't be helpful either in practice.



As a commenter suggested, this may be a good time for preconditioned conjugate gradient, a Krylov subspace method. For $alpha$ small, we have $A + alpha I approx A$ so $A^-1(A+alpha I) approx I$. Thus, $A^-1$, which you already know, should provide a good preconditioner. If a better preconditioner is needed and $alpha |A^-1| < 1$ for an appropriate matrix norm, then we have the Neumann series



$$
(A+alpha I)^-1 = A^-1(I+alpha A^-1)^-1 = A^-1 - alpha A^-2 + alpha^2 A^-3-cdots.
$$



You can truncate this infinite series up to the $A^-k$ power and evaluate the preconditioner $(A^-1 - alpha A^-2 + alpha^2 A^-3 -cdots + (-1)^k+1A^-k)x$ using $k$ matrix-vector products with $A^-1$ using Horner's method.



If instead $alpha$ is very large, specifically $|A|/alpha < 1$, then we can instead use the Neuman series



$$
(A+alpha I)^-1 = alpha^-1(I+alpha^-1A)^-1 = alpha^-1I - alpha^-2A + alpha^-3A - cdots
$$



as a preconditioner. If $alpha$ is somewhere in between, then neither of these Neumann series ideas will work, and you might want to investigate another preconditioner if you intend to use Conjugate Gradient. (Algebraic multigrid may work well, for instance.)






share|cite|improve this answer









$endgroup$












  • $begingroup$
    Thanks for the answer. All my $A^-1$ come from a special case GMM, so I don't know if I can bound $||A||/alpha$. But I'll check it!
    $endgroup$
    – shahaf finder
    Mar 29 at 17:46










  • $begingroup$
    What do you mean by GMM?
    $endgroup$
    – eepperly16
    Mar 30 at 19:14










  • $begingroup$
    The matrices are cov/precision matrices from a gaussian mixture model
    $endgroup$
    – shahaf finder
    Mar 30 at 19:23













1












1








1





$begingroup$

As you suggest, the Woodbury Matrix Identity is of no use since the perturbation of $alpha I$ is not low rank. In addition, in general, computing a full eigendecomposition $A=QDQ^T$ will be much slower than just using sparse Gaussian elimination on $A+alpha I$, so this won't be helpful either in practice.



As a commenter suggested, this may be a good time for preconditioned conjugate gradient, a Krylov subspace method. For $alpha$ small, we have $A + alpha I approx A$ so $A^-1(A+alpha I) approx I$. Thus, $A^-1$, which you already know, should provide a good preconditioner. If a better preconditioner is needed and $alpha |A^-1| < 1$ for an appropriate matrix norm, then we have the Neumann series



$$
(A+alpha I)^-1 = A^-1(I+alpha A^-1)^-1 = A^-1 - alpha A^-2 + alpha^2 A^-3-cdots.
$$



You can truncate this infinite series up to the $A^-k$ power and evaluate the preconditioner $(A^-1 - alpha A^-2 + alpha^2 A^-3 -cdots + (-1)^k+1A^-k)x$ using $k$ matrix-vector products with $A^-1$ using Horner's method.



If instead $alpha$ is very large, specifically $|A|/alpha < 1$, then we can instead use the Neuman series



$$
(A+alpha I)^-1 = alpha^-1(I+alpha^-1A)^-1 = alpha^-1I - alpha^-2A + alpha^-3A - cdots
$$



as a preconditioner. If $alpha$ is somewhere in between, then neither of these Neumann series ideas will work, and you might want to investigate another preconditioner if you intend to use Conjugate Gradient. (Algebraic multigrid may work well, for instance.)






share|cite|improve this answer









$endgroup$



As you suggest, the Woodbury Matrix Identity is of no use since the perturbation of $alpha I$ is not low rank. In addition, in general, computing a full eigendecomposition $A=QDQ^T$ will be much slower than just using sparse Gaussian elimination on $A+alpha I$, so this won't be helpful either in practice.



As a commenter suggested, this may be a good time for preconditioned conjugate gradient, a Krylov subspace method. For $alpha$ small, we have $A + alpha I approx A$ so $A^-1(A+alpha I) approx I$. Thus, $A^-1$, which you already know, should provide a good preconditioner. If a better preconditioner is needed and $alpha |A^-1| < 1$ for an appropriate matrix norm, then we have the Neumann series



$$
(A+alpha I)^-1 = A^-1(I+alpha A^-1)^-1 = A^-1 - alpha A^-2 + alpha^2 A^-3-cdots.
$$



You can truncate this infinite series up to the $A^-k$ power and evaluate the preconditioner $(A^-1 - alpha A^-2 + alpha^2 A^-3 -cdots + (-1)^k+1A^-k)x$ using $k$ matrix-vector products with $A^-1$ using Horner's method.



If instead $alpha$ is very large, specifically $|A|/alpha < 1$, then we can instead use the Neuman series



$$
(A+alpha I)^-1 = alpha^-1(I+alpha^-1A)^-1 = alpha^-1I - alpha^-2A + alpha^-3A - cdots
$$



as a preconditioner. If $alpha$ is somewhere in between, then neither of these Neumann series ideas will work, and you might want to investigate another preconditioner if you intend to use Conjugate Gradient. (Algebraic multigrid may work well, for instance.)







share|cite|improve this answer












share|cite|improve this answer



share|cite|improve this answer










answered Mar 29 at 17:31









eepperly16eepperly16

3,07111125




3,07111125











  • $begingroup$
    Thanks for the answer. All my $A^-1$ come from a special case GMM, so I don't know if I can bound $||A||/alpha$. But I'll check it!
    $endgroup$
    – shahaf finder
    Mar 29 at 17:46










  • $begingroup$
    What do you mean by GMM?
    $endgroup$
    – eepperly16
    Mar 30 at 19:14










  • $begingroup$
    The matrices are cov/precision matrices from a gaussian mixture model
    $endgroup$
    – shahaf finder
    Mar 30 at 19:23
















  • $begingroup$
    Thanks for the answer. All my $A^-1$ come from a special case GMM, so I don't know if I can bound $||A||/alpha$. But I'll check it!
    $endgroup$
    – shahaf finder
    Mar 29 at 17:46










  • $begingroup$
    What do you mean by GMM?
    $endgroup$
    – eepperly16
    Mar 30 at 19:14










  • $begingroup$
    The matrices are cov/precision matrices from a gaussian mixture model
    $endgroup$
    – shahaf finder
    Mar 30 at 19:23















$begingroup$
Thanks for the answer. All my $A^-1$ come from a special case GMM, so I don't know if I can bound $||A||/alpha$. But I'll check it!
$endgroup$
– shahaf finder
Mar 29 at 17:46




$begingroup$
Thanks for the answer. All my $A^-1$ come from a special case GMM, so I don't know if I can bound $||A||/alpha$. But I'll check it!
$endgroup$
– shahaf finder
Mar 29 at 17:46












$begingroup$
What do you mean by GMM?
$endgroup$
– eepperly16
Mar 30 at 19:14




$begingroup$
What do you mean by GMM?
$endgroup$
– eepperly16
Mar 30 at 19:14












$begingroup$
The matrices are cov/precision matrices from a gaussian mixture model
$endgroup$
– shahaf finder
Mar 30 at 19:23




$begingroup$
The matrices are cov/precision matrices from a gaussian mixture model
$endgroup$
– shahaf finder
Mar 30 at 19:23











0












$begingroup$

WRONG ANSWER (sorry, I did something wrong, please ignore!)



There is a way to write this up so that it contains $det(A)$, but not $A$:



$$(A+alpha I)^-1=fracdet(A)det(A+alpha I)A^-1+alpha I$$



Proof:



I'll use the formula of



$$ adj(A+alpha I) = adj(A) + \ + beginbmatrix
alpha det(A+alpha I) & 0 & dots & 0 \
0 & alpha det(A+alpha I) & dots & 0 \
vdots & vdots & ddots & vdots \
0 & 0 & dots & alpha det(A+alpha I) \
endbmatrix$$

Which comes from the definition of the Adjugate matrix: https://en.wikipedia.org/wiki/Adjugate_matrix



This is equivalent to:
$$adj(A+alpha I) = adj(A) + det(A+alpha I) alpha I$$
I'll use the equality of $A^-1 det(A) = adj(A)$ and reorder the equation:
$$det(A+alpha I)(A+alpha I)^-1 = adj(A) + det(A + alpha I) alpha I$$
Diving both sides by $det(A+alpha I)$ (if it's non-$0$):
$$(A+alpha I)^-1 = fracadj(A)det(A+alpha I) + alpha I$$
And using the $A^-1 det(A) = adj(A)$ again:
$$(A+alpha I)^-1=fracdet(A)det(A+alpha I)A^-1+alpha I quad blacksquare$$



There might be an even simpler way of calculating $(A+alpha I)^-1$, but this is the only thing that comes to my mind at the moment.



Edit: Also, $det(A+alpha I)$ can be calculated from $det(A)$ and $A$'s main diagonal:



$$det(A+alpha I) = det(A) + (a_11 + alpha)(a_22 + alpha)...(a_nn + alpha) - a_11a_22...a_nn$$



if $A=[a_ij]_i,j in 1,2,...,n$.






share|cite|improve this answer











$endgroup$












  • $begingroup$
    This sounds like a good option to consider, the only problem is the last part, because I need the diagonal values of A. Can I somehow get it without doing inverting $A^-1$?
    $endgroup$
    – shahaf finder
    Mar 25 at 18:53










  • $begingroup$
    Well there is the equality of $det(A^-1)=frac1det(A)$, so $fracdet(A)det(A+alpha I)=fracdet((A+alpha I)^-1)det(A^-1)$. I can't think of anything else right now, but if I do, I'll comment about it. I also looked up the problem of the site and found this: math.stackexchange.com/questions/978051/…
    $endgroup$
    – Daniel P
    Mar 25 at 19:15











  • $begingroup$
    $det(A)$ is not the problem, but $det(A+alpha I)$ is. Ignoring this issue, I've done a quick test using python and it doesn't work, I'll keep on testing but there might be an error somewhere
    $endgroup$
    – shahaf finder
    Mar 25 at 19:20











  • $begingroup$
    Where does the first formula come from?
    $endgroup$
    – shahaf finder
    Mar 25 at 19:51






  • 1




    $begingroup$
    Thats ok, thanks for trying!
    $endgroup$
    – shahaf finder
    Mar 25 at 20:02















0












$begingroup$

WRONG ANSWER (sorry, I did something wrong, please ignore!)



There is a way to write this up so that it contains $det(A)$, but not $A$:



$$(A+alpha I)^-1=fracdet(A)det(A+alpha I)A^-1+alpha I$$



Proof:



I'll use the formula of



$$ adj(A+alpha I) = adj(A) + \ + beginbmatrix
alpha det(A+alpha I) & 0 & dots & 0 \
0 & alpha det(A+alpha I) & dots & 0 \
vdots & vdots & ddots & vdots \
0 & 0 & dots & alpha det(A+alpha I) \
endbmatrix$$

Which comes from the definition of the Adjugate matrix: https://en.wikipedia.org/wiki/Adjugate_matrix



This is equivalent to:
$$adj(A+alpha I) = adj(A) + det(A+alpha I) alpha I$$
I'll use the equality of $A^-1 det(A) = adj(A)$ and reorder the equation:
$$det(A+alpha I)(A+alpha I)^-1 = adj(A) + det(A + alpha I) alpha I$$
Diving both sides by $det(A+alpha I)$ (if it's non-$0$):
$$(A+alpha I)^-1 = fracadj(A)det(A+alpha I) + alpha I$$
And using the $A^-1 det(A) = adj(A)$ again:
$$(A+alpha I)^-1=fracdet(A)det(A+alpha I)A^-1+alpha I quad blacksquare$$



There might be an even simpler way of calculating $(A+alpha I)^-1$, but this is the only thing that comes to my mind at the moment.



Edit: Also, $det(A+alpha I)$ can be calculated from $det(A)$ and $A$'s main diagonal:



$$det(A+alpha I) = det(A) + (a_11 + alpha)(a_22 + alpha)...(a_nn + alpha) - a_11a_22...a_nn$$



if $A=[a_ij]_i,j in 1,2,...,n$.






share|cite|improve this answer











$endgroup$












  • $begingroup$
    This sounds like a good option to consider, the only problem is the last part, because I need the diagonal values of A. Can I somehow get it without doing inverting $A^-1$?
    $endgroup$
    – shahaf finder
    Mar 25 at 18:53










  • $begingroup$
    Well there is the equality of $det(A^-1)=frac1det(A)$, so $fracdet(A)det(A+alpha I)=fracdet((A+alpha I)^-1)det(A^-1)$. I can't think of anything else right now, but if I do, I'll comment about it. I also looked up the problem of the site and found this: math.stackexchange.com/questions/978051/…
    $endgroup$
    – Daniel P
    Mar 25 at 19:15











  • $begingroup$
    $det(A)$ is not the problem, but $det(A+alpha I)$ is. Ignoring this issue, I've done a quick test using python and it doesn't work, I'll keep on testing but there might be an error somewhere
    $endgroup$
    – shahaf finder
    Mar 25 at 19:20











  • $begingroup$
    Where does the first formula come from?
    $endgroup$
    – shahaf finder
    Mar 25 at 19:51






  • 1




    $begingroup$
    Thats ok, thanks for trying!
    $endgroup$
    – shahaf finder
    Mar 25 at 20:02













0












0








0





$begingroup$

WRONG ANSWER (sorry, I did something wrong, please ignore!)



There is a way to write this up so that it contains $det(A)$, but not $A$:



$$(A+alpha I)^-1=fracdet(A)det(A+alpha I)A^-1+alpha I$$



Proof:



I'll use the formula of



$$ adj(A+alpha I) = adj(A) + \ + beginbmatrix
alpha det(A+alpha I) & 0 & dots & 0 \
0 & alpha det(A+alpha I) & dots & 0 \
vdots & vdots & ddots & vdots \
0 & 0 & dots & alpha det(A+alpha I) \
endbmatrix$$

Which comes from the definition of the Adjugate matrix: https://en.wikipedia.org/wiki/Adjugate_matrix



This is equivalent to:
$$adj(A+alpha I) = adj(A) + det(A+alpha I) alpha I$$
I'll use the equality of $A^-1 det(A) = adj(A)$ and reorder the equation:
$$det(A+alpha I)(A+alpha I)^-1 = adj(A) + det(A + alpha I) alpha I$$
Diving both sides by $det(A+alpha I)$ (if it's non-$0$):
$$(A+alpha I)^-1 = fracadj(A)det(A+alpha I) + alpha I$$
And using the $A^-1 det(A) = adj(A)$ again:
$$(A+alpha I)^-1=fracdet(A)det(A+alpha I)A^-1+alpha I quad blacksquare$$



There might be an even simpler way of calculating $(A+\alpha I)^{-1}$, but this is the only thing that comes to my mind at the moment.



Edit: Also, $\det(A+\alpha I)$ can be calculated from $\det(A)$ and $A$'s main diagonal:



$$\det(A+\alpha I) = \det(A) + (a_{11} + \alpha)(a_{22} + \alpha)\cdots(a_{nn} + \alpha) - a_{11}a_{22}\cdots a_{nn}$$



if $A=[a_{ij}]$, $i,j \in \{1,2,\dots,n\}$.
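Since the author flagged this answer as wrong, the quick Python test mentioned in the comments can be reproduced with a NumPy sketch like the following (my illustration, not code from the original post); it checks the proposed identity on a small SPD example and finds it does not hold:

```python
import numpy as np

# Small symmetric positive definite test matrix and shift
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
alpha = 1.0
n = A.shape[0]

lhs = np.linalg.inv(A + alpha * np.eye(n))

# Proposed identity: (A + aI)^{-1} = det(A)/det(A + aI) * A^{-1} + aI
rhs = (np.linalg.det(A) / np.linalg.det(A + alpha * np.eye(n))) \
      * np.linalg.inv(A) + alpha * np.eye(n)

print(np.allclose(lhs, rhs))  # False: the identity fails on this example
```

A single counterexample like this is enough to rule the formula out in general.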

















$endgroup$



edited Mar 25 at 20:02

























answered Mar 25 at 17:43









Daniel P


































































$begingroup$

For every symmetric real matrix $A$ there exists a real orthogonal matrix $Q$ such that $Q^TAQ$ is a diagonal matrix. Every symmetric matrix is thus, up to a choice of orthonormal basis, a diagonal matrix. If you can find it, then $A=QDQ^T$ and your expression becomes $Q(D+\alpha I)^{-1}Q^T$. Since $A$ is positive semidefinite, $(D+\alpha I)^{-1}$ with $\alpha>0$ exists even when $A^{-1}$ does not.















$endgroup$
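A minimal NumPy sketch of this approach (my illustration, not part of the original answer): once the eigendecomposition $A = QDQ^T$ is available, inverting the shifted matrix reduces to an elementwise reciprocal of the shifted eigenvalues.

```python
import numpy as np

# Build an SPD example: B B^T + I is symmetric positive definite
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T + np.eye(4)
alpha = 0.5

# Symmetric eigendecomposition A = Q diag(eigvals) Q^T
eigvals, Q = np.linalg.eigh(A)

# (A + alpha I)^{-1} = Q diag(1/(eigvals + alpha)) Q^T;
# dividing Q by the row vector scales its columns accordingly
inv_shifted = (Q / (eigvals + alpha)) @ Q.T

print(np.allclose(inv_shifted, np.linalg.inv(A + alpha * np.eye(4))))  # True
```

The eigendecomposition costs $O(n^3)$ once, but after that every new $\alpha$ only costs $O(n^2)$, which is the practical appeal of this route.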












  • $begingroup$
    Thanks for the answer. $A$ is positive definite, so $A^{-1}$ exists (and in fact it's what's given). The complexity of eigendecomposition is also $O(n^3)$, which is like inverting (if I remember correctly). I'll update the post with something I found to somewhat reduce the calculation of $A$ out of $A^{-1}$.
    $endgroup$
    – shahaf finder
    Mar 29 at 13:49





















answered Mar 29 at 13:40









rych













