'Gauss's Algorithm' for computing modular fractions and inverses
There is an answer on the site for solving simple linear congruences via the so-called 'Gauss's Algorithm', presented in a fractional form. The answer was given by Bill Dubuque, who said that the fractional form is essentially Gauss, Disquisitiones Arithmeticae, Art. 13, 1801.
Now I have studied the article from the book, but I am not seeing the connection to the fractional form. What Gauss does is reduce $b$ via $p\bmod b = p - qb$, and I do not see that happening in the fractional form, nor do I see how it computes an inverse. I have already talked with Bill about this via comments, but decided to open a new question so he or anyone else can help me understand more intuitively what is going on here. This article is supposed to give an algorithm to compute inverses in a prime modulus, yet I have no idea how.
Edit:
Actual question for Bill:
I may have been asking some stupid questions up till now so I will give something concrete and hopefully you can provide an answer to that.
Let's take your sci.math example for this:
So we are looking for a multiplicative inverse $x$ of $60$ modulo $103$:
$$60x \equiv 1 \pmod{103}$$
The tool we can use for this is, as Bill has said, a special case of the Euclidean algorithm which iterates $(p\bmod b,\ p)$ instead of the usual Euclidean algorithm that iterates $(p\bmod b,\ b)$.
This is the result of that algorithm:
$$103 = 60\cdot\color{#c00}{1} + 43 = 43\cdot\color{#c00}{2} + 17 = 17\cdot\color{#c00}{6} + 1$$
And then this translates into the following in mod $103$:
$$60\cdot\color{#c00}{(-1)} \equiv 43 \rightarrow 43\cdot\color{#c00}{(-2)} \equiv 17 \rightarrow 17\cdot\color{#c00}{(-6)} \equiv 1$$
Producing the numbers in red, which when multiplied give an inverse:
$$60\cdot\color{#c00}{(-1)(-2)(-6)} \equiv 1 \pmod{103}$$
$$x \equiv -12 \pmod{103}$$
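The descent above is mechanical enough to sketch in code. Below is a minimal Python sketch (my own, not from the thread) that collects the red coefficients $-q_i$ and multiplies them mod $103$:

```python
# A minimal sketch (my own, not from the thread) of the descent above:
# repeatedly replace b by p mod b, collecting the negated quotients;
# their product mod p is then an inverse of the starting b (p prime).
def gauss_inverse(b, p):
    x = 1
    while b != 1:
        q, r = divmod(p, b)   # p = q*b + r, hence -q*b ≡ r (mod p)
        x = (x * -q) % p      # accumulate the red coefficients -q
        b = r
    return x

print(gauss_inverse(60, 103))  # 91, i.e. -12 mod 103
```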
And this is fine and I see it works, though of course only when the number and the modulus are coprime.
Now my question is why this works. I am not interested in optimisations or different ways of reaching the inverse, but specifically in why the same values of the numbers in red (the coefficients of the algorithm's descent) produce an inverse. This method of reusing the coefficients does not work via the normal Euclidean algorithm, only with this special case. What is special about it? I would like to see a generalized proof or reason why the numbers generated via this special algorithm have this property.
elementary-number-theory modular-arithmetic euclidean-algorithm
I will answer later when I have more spare time. Others who may be interested in answering can learn the detailed context from the "via comments" link above.
– Bill Dubuque, Jan 2 at 17:02
@BillDubuque I have edited the question to give you specifically the problem I am having with this. Hopefully you will be able to give an answer now.
– Michael Munta, Feb 28 at 12:21
I'm not sure precisely where you are stuck, but maybe placing the various forms side by side will prove illuminating - see my answer.
– Bill Dubuque, Feb 28 at 22:05
Are you asking about how congruences like the $\rm\color{#0a0}{green}$ congruence in my answer follow from the corresponding $\rm\color{#0a0}{green}$ equation in the iterated mods preceding it? Or are you asking how the congruence in the final line follows from all before it?
– Bill Dubuque, Mar 1 at 2:38
@BillDubuque Well, I am asking both really. Why do the coefficients $1, 2, 6$ generated from the initial descent algorithm produce an inverse when multiplied mod $103$? I understand all the other optimizations work simply because of the multiplication property of congruences: you keep picking numbers to multiply with as long as they produce a smaller number mod $n$. But here you don't pick, you generate them with the algorithm first: $p - \color{red}{q_1}b_1 \rightarrow p - \color{red}{q_2}b_2\ldots$
– Michael Munta, Mar 1 at 5:55
edited Mar 1 at 2:10 by Bill Dubuque
asked Jan 2 at 9:05 by Michael Munta
2 Answers
Below we compare the related forms. First is the iterated descent $\,a\to 103\bmod a\,$ used by Gauss. Second is that rearranged into the form of descending multiples of $60.\,$ Third is the fractional view, and fourth is the graph of the descending multiples of $60$ (denominator descent graph).
$$\begin{align}
103\bmod 60 &= 103 - 1(60) = 43\\
103\bmod 43 &= 103\color{#0a0}{-2(43)} = 17\\
103\bmod 17 &= 103 - 6(17) = 1
\end{align}$$
$$\begin{array}{rll}
\bmod 103\!:\qquad (-1)60 &\equiv\, 43 &\Rightarrow\ 1/60\equiv -1/43\\[.3em]
\overset{\large\color{#0a0}{*\,(-2)}}\Longrightarrow\ (-2)(-1)60 &\equiv \color{#0a0}{(-2)43}\equiv 17 &\Rightarrow\ 1/60\equiv 2/17\\[.3em]
\overset{\large *\,(-6)}\Longrightarrow\ \color{#c00}{(-6)(-2)(-1)60} &\equiv (-6)17\equiv 1 &\Rightarrow\ 1/60 \equiv \color{#c00}{-12}/1
\end{array}$$
$$\begin{align}
&\dfrac{1}{60}\,\equiv\, \dfrac{-1}{43}\,\equiv\, \dfrac{2}{17}\,\equiv\, \dfrac{\color{#c00}{-12}}{1}\quad\rm [Gauss's\ algorithm]\\[.3em]
&\ 60\overset{\large *\,(-1)}\longrightarrow \color{#0a0}{43}\overset{\large\color{#0a0}{*\,(-2)}}\longrightarrow \color{#0a0}{17}\overset{\large *\,(-6)}\longrightarrow 1\\[.4em]
\Rightarrow\ &\,60*(-1)\color{#0a0}{*(-2)}*(-6)\equiv 1
\end{align}$$
The translation from the first form (iterated mods) to the second (iterated smaller multiples) is realized by viewing the modular reductions as modular multiplications, e.g.
$$ 103\color{#0a0}{-2(43)} = 17\,\Rightarrow\, \color{#0a0}{-2(43)} \equiv 17 \pmod{103} $$
This leads to the following simple recursive algorithm for computing inverses $!bmod p,$ prime.
$\begin{align}\rm I(a,p)\ :=\ \ &\rm if\ a = 1\ then\ 1\qquad\qquad\qquad\ ;\ \ a^{-1}\bmod p,\ \ for\ a,p\in\Bbb N,\ \ 0 < a < p\ prime \\[.5em]
&\rm else\ let\ [\,q,\,r\,]\, =\, p\ div\ a\qquad\ ;\ \ p = q\,a + r\ \Rightarrow\ \color{#0a0}{-q\,a\,\equiv\, r}\!\!\pmod{p},\ \ 0 < r < a \\[.2em]
&\rm\ \ \ \ \ (-q\,*\,I(r,p))\bmod p\qquad\ \ ;\ \ because\ \ \dfrac{1}{a}\,\equiv\,\dfrac{-q}{\color{#0a0}{-q\,a}}\,\equiv\,\dfrac{-q}{\color{#0a0}{r}}\,\equiv\, -q\,*\,I(r,p)\quad \color{#90f}{[\![1]\!]} \end{align}$
Theorem $\ \ I(a,p) = a^{-1}\bmod p$
Proof $\ $ Clear if $\,a = 1.\,$ Let $\,a > 1,\,$ and suppose for induction the theorem holds true for all $\,n < a$. Since $\,p = qa+r,\,$ we must have $\,r > 0\,$ (else $\,r = 0\,\Rightarrow\,a\mid p\,$ and $\,1 < a < p,\,$ contra $\,p\,$ prime). Thus $\,0 < r < a,\,$ so induction $\,\Rightarrow\,I(r,p)\equiv \color{#0a0}{r^{-1}},\,$ so reducing equation $\color{#90f}{[\![1]\!]}\bmod p\,$ yields the claim.
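The recursive definition of $\rm I(a,p)$ transcribes almost verbatim into code. Here is a small Python sketch (my own transcription, not part of the answer), together with a brute-force check of the theorem for $p = 103$:

```python
# A direct transcription (my own sketch) of the recursive definition:
# I(1, p) = 1, otherwise with p = q*a + r we have 1/a ≡ -q/r (mod p),
# so I(a, p) = (-q * I(r, p)) mod p.
def I(a, p):
    if a == 1:
        return 1
    q, r = divmod(p, a)   # p = q*a + r with 0 < r < a when p is prime
    return (-q * I(r, p)) % p

# Brute-force check of the theorem I(a, p) = a^(-1) mod p for p = 103:
assert all((a * I(a, 103)) % 103 == 1 for a in range(1, 103))
```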
Accepted it. I am wasting too much time on this at the moment. If you feel like it, edit this answer to also include the "quicker" method the same way you have shown here. I'll come back to this once I get more proficient. Thank you for all the answers.
– Michael Munta, Mar 2 at 6:28
Thank you for the edit. Now I have a full picture of this. So the only important fact about the algorithm is that eventually it will reach $1$. Once we have the equational forms we can rearrange each of them into mod $103$ like so: $$103 - 1(60) = 43 \rightarrow -60 + 1(103) = 43 \rightarrow -60 \equiv 43$$ From here on we just use the next steps of the algorithm to write the congruences in terms of $60$, and it is true because of the $a \equiv b \pmod{n} \rightarrow ka \equiv kb \pmod{n}$ property. Is that about right?
– Michael Munta, Mar 4 at 10:20
Also this normal method (which does not allow $\pm$ multipliers/remainders) works only when the modulus is prime, like the example here. But since an inverse exists whenever the numbers are coprime, a prime modulus is not always required if we allow $\pm$ multipliers/remainders as in your post. For example, the inverse of $4$ modulo $15$: $$15 - 3(4) = 3$$ $$15 - 3(5) = 0$$ It does not terminate at $1$ because $15$ is not prime.
– Michael Munta, Mar 11 at 13:33
But when we allow the above-mentioned quicker method, then it sometimes works: $$15 - 4(4) = -1$$ Multiply both sides by $-1$ and then $$-15 + 4(4) = 1$$ Is this correct, and are we allowed to multiply the intermediary steps like so? And is this also called 'Gauss's algorithm', or only when we restrict to positive remainders?
– Michael Munta, Mar 11 at 13:36
@Michael Yes, it may fail for composite moduli, but then we can use the general extended Euclidean algorithm, which can also be viewed in fractional form.
– Bill Dubuque, Mar 11 at 14:34
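For composite moduli, the general extended Euclidean algorithm mentioned in the comment above can be sketched as follows (a standard iterative version, not taken from this thread); it returns an inverse whenever $\gcd(a,n)=1$, prime modulus or not:

```python
# A standard iterative extended Euclidean sketch (not specific to this
# thread): it returns an inverse whenever gcd(a, n) = 1, whether or not
# the modulus n is prime.
def inverse(a, n):
    old_r, r = a % n, n
    old_s, s = 1, 0
    while r:                        # invariant: old_s*a ≡ old_r (mod n)
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("a and n are not coprime")
    return old_s % n

print(inverse(4, 15))  # 4, since 4*4 = 16 ≡ 1 (mod 15)
```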
I'm not sure I've properly understood what you're looking for, but since the reason why the algorithm works seems to me to be patently clear from the formal proof that it does, in fact, work, here's such a proof for the general case.
Starting with a prime $p$ and an integer $b_0\in \left[1, p\right]$, the algorithm successively produces integers $b_1<b_0,\ b_2<b_1,\ \dots,\ b_{i+1} < b_i,\ \dots$, with $b_{i+1} \equiv -q_i b_i \pmod{p}$, until it obtains $b_n = 1$. As long as $b_i \notin\left\{0, 1\right\}$, it's always possible to carry out the next step of the procedure by using the division algorithm: $p = q_i\,b_i + b_{i+1}$, and since the sequence $b_0, b_1, \dots, b_i, \dots$ is strictly decreasing, the algorithm must eventually terminate with $b_n\in\left\{0, 1\right\}$. If $b_n$ were $0$, however, the final step of the algorithm would have been $p = q_{n-1}\,b_{n-1} + b_n = q_{n-1}\,b_{n-1}$, whence $b_{n-1}$, strictly smaller than the prime $p$, would be a divisor of it, and hence equal to $1$. Thus, the algorithm would have terminated on the preceding step.
Thus the algorithm always terminates with $b_n=1$, and we then have
$$\begin{eqnarray}
1 &\equiv& -q_{n-1}\,b_{n-1}\equiv q_{n-1}\,q_{n-2}\,b_{n-2}\equiv\dots\\
&\equiv& \left(-1\right)^n\,q_{n-1}\,q_{n-2}\dots q_0\,b_0 \pmod{p}.
\end{eqnarray}$$
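The closed form above is easy to check numerically. The following Python sketch (mine, under the assumption that the descent is exactly $b_{i+1} = p \bmod b_i$) collects the quotients $q_i$ and verifies the product formula for the $60 \bmod 103$ example:

```python
# A quick numerical check (my own sketch) of the closed form above:
# run the descent b_{i+1} = p mod b_i, collect the quotients q_i, and
# confirm (-1)^n * q_{n-1} * ... * q_0 * b_0 ≡ 1 (mod p).
def descent_quotients(b0, p):
    qs, b = [], b0
    while b != 1:
        q, b = divmod(p, b)
        qs.append(q)
    return qs

p, b0 = 103, 60
qs = descent_quotients(b0, p)   # [1, 2, 6] for this example
prod = (-1) ** len(qs) * b0
for q in qs:
    prod *= q
assert prod % p == 1
```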
Do you think that my last comment on Bill's answer generally sums up the algorithm?
– Michael Munta, Mar 7 at 19:11
It looks ok to me, although I wouldn't say that the algorithm's eventually reaching $1$ is the only important fact about it. There's also the fact that the number you calculate at each stage is just a non-zero multiple $\bmod p$ of the previous one. If it were a more complicated polynomial function of the previous number, with a non-zero constant term, for instance, the algorithm wouldn't work.
– lonza leggiera, Mar 7 at 22:49
Well, yes. But the thing with the rearranging and the property is fine though?
– Michael Munta, Mar 8 at 5:12
Sure.
– lonza leggiera, Mar 8 at 9:08
add a comment |
Your Answer
StackExchange.ifUsing("editor", function ()
return StackExchange.using("mathjaxEditing", function ()
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
);
);
, "mathjax-editing");
StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "69"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);
else
createEditor();
);
function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);
);
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3059260%2fgausss-algorithm-for-computing-modular-fractions-and-inverses%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
2 Answers
2
active
oldest
votes
2 Answers
2
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
Below we compare the related forms. First is the iterated descent $,ato 103bmod a,$ used by Gauss. Second is that rearranged into the form of descending multiples of $60.,$ Third is the fractional view, and fourth is the graph of the descending multiples of $60$ (denominator descent graph).
$$beginalign
103bmod60 &= 103 - 1(60) = 43\
103bmod 43 &= 103color#0a0-2(43)=17\
103bmod 17 &= 103-6(17) = 1
endalignqquadqquadquad$$
$$beginarrayrl
bmod103!:qquad (-1)60!!!! &equiv, 43 &Rightarrow 1/60equiv -1/43\[.3em]
smash[t]oversetlargecolor#0a0*(-2)Longrightarrow (-2)(-1)60!!!! &equiv color#0a0(-2)43equiv 17!! &Rightarrow 1/60equiv 2/17\[.3em]
smash[t]oversetlarge *(-6)Longrightarrow color#c00(-6)(-2)(-1)60!!!! &equiv (-6)17equiv 1 &Rightarrow 1/60 equiv color#c00-12/1\
endarray$$
$$ beginalign
&dfrac160 ,equiv dfrac-143, equiv, dfrac217, equiv, dfraccolor#c00-121 rm[Gauss's algorithm]\[.3em]
&, 60oversetlarge *(-1)longrightarrowcolor#0a043oversetlargecolor#0a0*(-2)longrightarrow,color#0a017oversetlarge *(-6)longrightarrow 1\[.4em]
Rightarrow &,60*(-1)color#0a0*(-2)*(-6)equiv 1
endalign$$
The translation from the first form (iterated mods) to the second (iterated smaller multiples) is realized by viewing the modular reductions as modular multiplications, e.g.
$$ 103color#0a0-2(43) = 17,Rightarrow, color#0a0-2(43) equiv 17!!pmod!103 $$
This leads to the following simple recursive algorithm for computing inverses $!bmod p,$ prime.
$beginalignrm I(a,p) := &rm if a = 1 then 1qquadqquad ; a^-1bmod p,, rm for a,pinBbb N, &, 0 < a < p prime \[.5em]
&rm else let [,q,,r,], =, p div aqquad ;, p = q a + r Rightarrow color#0a0-qa,equiv, r!!pmod!p, 0 < r < a,\[.2em]
&rm (-q*I(r,p))bmod p ; because dfrac1a equiv dfrac-qcolor#0a0-qaequiv dfrac-qcolor#0a0requiv -q * I(r,p) color#90f[![1]!] endalign
$
Theorem $ I(a,p) = a^-1bmod p$
Proof $ $ Clear if $,a = 1.,$ Let $,a > 1,$ and suppose for induction the theorem holds true for all $,n < a$. Since $,p = qa+r,$ we must have $,r > 0,$ (else $,r = 0,Rightarrow,amid p,$ and $,1< a < p,,$ contra $,p,$ prime). Thus $,0 < r < a,$ so induction $,Rightarrow,I(r,p)equiv color#0a0r^-1$ so reducing equation $color#90f[![1]!]bmod p,$ yields the claim.
$endgroup$
$begingroup$
Accepted it. I am wasting too much time on this at the moment. If you feel like, edit this answer to also include the"quicker" method the same way you have shown here. I'll come back to this once I get more proficient. Thank you for all the answers.
$endgroup$
– Michael Munta
Mar 2 at 6:28
$begingroup$
Thank you for the edit. Now I have a full picture of this. So the only important fact about the algorithm is that eventually it will reach $1$. Once we have the equational forms we can rearrange each of them into mod $103$ like so: $$103 - 1(60) = 43 rightarrow -60 + 1(103) = 43 rightarrow -60 equiv 43$$ From here on we just use the next steps of the algorithm to write the congruences in terms of $60$ and it is true because of the $a equiv b$ (mod $n$) $rightarrow ka equiv kb$ (mod $n$) property. Is that about right?
$endgroup$
– Michael Munta
Mar 4 at 10:20
$begingroup$
Also this normal method (where +/- multipliers/remainders are not allowed) works only when the modulus is prime, as in the example here. But since an inverse exists whenever the numbers are coprime, a prime modulus is not always required if we allow +/- multipliers/remainders as in your post. For example, the inverse of $4$ modulo $15$: $$15 - 3(4) = 3$$ $$15 - 5(3) = 0$$ It does not terminate at $1$ because $15$ is not prime.
$endgroup$
– Michael Munta
Mar 11 at 13:33
$begingroup$
But when we allow the above-mentioned quicker method, then it sometimes works. $$15 - 4(4) = -1$$ Multiply both sides by $-1$ and then $$-15 + 4(4) = 1$$ Is this correct, and are we allowed to multiply the intermediary steps like so? And is this also called 'Gauss's algorithm', or only when we restrict to positive remainders?
$endgroup$
– Michael Munta
Mar 11 at 13:36
$begingroup$
@Michael Yes, it may fail for composite moduli, but then we can use the general extended Euclidean algorithm, which can also be viewed in fractional form.
$endgroup$
– Bill Dubuque
Mar 11 at 14:34
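For composite moduli like $15$, the general extended Euclidean approach mentioned above can be sketched in Python as follows (the helper names are my own, not from the post):

```python
def ext_gcd(a, b):
    """Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def inverse_mod_any(a, n):
    """Inverse of a mod n whenever gcd(a, n) = 1; n need not be prime."""
    g, x, _ = ext_gcd(a, n)
    if g != 1:
        raise ValueError("no inverse: gcd(a, n) != 1")
    return x % n
```

Here `inverse_mod_any(4, 15)` returns `4`, agreeing with the $-15 + 4(4) = 1$ computation in the comment above.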
edited Mar 17 at 20:21
answered Feb 28 at 17:07
Bill Dubuque
$begingroup$
I'm not sure I've properly understood what you're looking for, but since the reason why the algorithm works seems to me to be patently clear from the formal proof that it does, in fact, work, here's such a proof for the general case.
Starting with a prime $p$ and an integer $b_0 \in \left[1, p\right]$, the algorithm successively produces integers $b_1<b_0,\ b_2<b_1,\ \dots,\ b_{i+1} < b_i,\ \dots$, with $b_{i+1} \equiv -q_i b_i \left(\,\mathrm{mod}\ p\,\right)$, until it obtains $b_n = 1$. As long as $b_i \notin \left\{0, 1\right\}$, it's always possible to carry out the next step of the procedure by using the division algorithm: $p = q_i\,b_i + b_{i+1}$, and since the sequence $b_0, b_1, \dots, b_i, \dots$ is strictly decreasing, the algorithm must eventually terminate with $b_n \in \left\{0, 1\right\}$. If $b_n$ were $0$, however, the final step of the algorithm would have been $p = q_{n-1}\,b_{n-1} + b_n = q_{n-1}\,b_{n-1}$, whence $b_{n-1}$, strictly smaller than the prime $p$, would be a divisor of it, and hence equal to $1$. Thus the algorithm would have terminated on the preceding step.
Thus the algorithm always terminates with $b_n=1$, and we then have
$$\begin{eqnarray}
1&\equiv& -q_{n-1}\,b_{n-1}\equiv q_{n-1}\,q_{n-2}\,b_{n-2}\equiv\dots\\
&\equiv& \left(-1\right)^n\,q_{n-1}\,q_{n-2}\dots q_0\,b_0\ \left(\,\mathrm{mod}\ p\,\right).
\end{eqnarray}$$
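The quotient sequence $q_0, \dots, q_{n-1}$ and the final product formula can be checked with a short Python sketch (names are mine, assuming $p$ prime and $1 \le b_0 < p$):

```python
def descent_quotients(b0, p):
    """Follow the descent b_{i+1} = p mod b_i = p - q_i*b_i until b_n = 1,
    returning the quotients [q_0, ..., q_{n-1}]."""
    qs, b = [], b0
    while b != 1:
        q, b = divmod(p, b)  # p = q*b + (p mod b)
        qs.append(q)
    return qs
```

For $b_0 = 60,\ p = 103$ this returns `[1, 2, 6]`, and indeed $(-1)^3 \cdot 1 \cdot 2 \cdot 6 \cdot 60 \equiv 1 \pmod{103}$, as in the formula above.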
$endgroup$
$begingroup$
Do you think that my last comment on Bill's answer generally sums up the algorithm?
$endgroup$
– Michael Munta
Mar 7 at 19:11
$begingroup$
It looks ok to me, although I wouldn't say that the algorithm's eventually reaching $ 1 $ is the only important fact about it. There's also the fact that the number you calculate at each stage is just a non-zero multiple $ mathrmmod p $ of the previous one. If it were a more complicated polynomial function of the previous number, with a non-zero constant term, for instance, the algorithm wouldn't work.
$endgroup$
– lonza leggiera
Mar 7 at 22:49
$begingroup$
Well, yes. But the thing with rearranging and the property are fine though?
$endgroup$
– Michael Munta
Mar 8 at 5:12
$begingroup$
Sure.
$endgroup$
– lonza leggiera
Mar 8 at 9:08
edited Feb 28 at 18:55
answered Feb 28 at 18:42
lonza leggiera
$begingroup$
I will answer later when I have more spare time. Others who may be interested in answering can learn the detailed context from the "via comments" link above.
$endgroup$
– Bill Dubuque
Jan 2 at 17:02
$begingroup$
@BillDubuque I have edited the question to give you specifically the problem I am having with this. Hopefully you will be able to give an answer now.
$endgroup$
– Michael Munta
Feb 28 at 12:21
$begingroup$
I'm not sure precisely where you are stuck, but maybe placing the various forms side-by-side will prove illuminating - see my answer.
$endgroup$
– Bill Dubuque
Feb 28 at 22:05
$begingroup$
Are you asking about how congruences like the $\rm\color{#0a0}{green}$ congruence in my answer follow from the corresponding $\rm\color{#0a0}{green}$ equation in the iterated mods preceding it? Or are you asking how the congruence in the final line follows from all before it?
$endgroup$
– Bill Dubuque
Mar 1 at 2:38
$begingroup$
@BillDubuque Well, I am asking both really. Why do the coefficients $1, 2, 6$ generated from the initial descent algorithm produce an inverse when multiplied mod $103$? I understand all the other optimizations work simply because of the multiplication property of congruences. You keep picking numbers to multiply with as long as it will produce a lesser number mod $n$. But here you don't pick, you generate them with the algorithm first: $p - \color{red}{q_1}b_1 \rightarrow p - \color{red}{q_2}b_2\ldots$
$endgroup$
– Michael Munta
Mar 1 at 5:55