
Syntactic proof that Peirce's law doesn't hold in simply-typed lambda calculus


This might have been asked before, but I cannot find a source; even the literature I've consulted contains no such proof so far.



Context



In the context of the simply typed lambda calculus (say, with just one base type $\iota$), there are judgments of the form
$$
\Gamma \vdash t : \sigma
$$

denoting that $t$ has type $\sigma$ in the environment (or context) $\Gamma$. Via propositions-as-types, this can also be read as: $\sigma$ (as an implicational propositional formula) follows from the assumptions $\Gamma$.



In this sense, the simply typed lambda calculus is a proof calculus for (the implicational fragment of) propositional logic; not for classical logic, however, but for intuitionistic logic.



This means in particular that Peirce's law is not a theorem. So, for any types $\sigma, \tau$, there is no term $t$ such that
$$
\vdash t : ((\sigma \to \tau) \to \sigma) \to \sigma
$$



Question



In short: how does one prove this? The point is, I've found a few proofs which involve models of the $\lambda$-calculus or of intuitionistic logic, which is fine, but I'm rather interested in a proof by syntactic means. Peirce's law should be refutable syntactically as well, shouldn't it?



The proof might involve confluence, subject reduction, perhaps even normalisation... any ideas?



Thanks in advance!










  • Yes; usually a proof of "underivability" uses semantics. The calculus (e.g. the one for intuitionistic logic) is sound and complete for the corresponding semantics, e.g. the BHK interpretation or Kripke semantics. Peirce's law is not valid there: a semantic counterexample can be produced, and therefore the law is not derivable. – Mauro ALLEGRANZA, Mar 21 at 8:09







  • Okay, but do you know of a proof which does not involve semantics? – aphorisme, Mar 21 at 9:21















Tags: lambda-calculus, intuitionistic-logic






edited Mar 21 at 9:44 by MJD

asked Mar 21 at 7:46 by aphorisme















2 Answers






You are right in your intuition that proving that a derivation does not exist is somewhat cumbersome. Soundness and completeness of formal proof systems provide us with an interesting duality:



$\vDash \phi \Leftrightarrow$ all models satisfy $\phi$ $\Leftrightarrow$ there exists no model that does not satisfy $\phi$

(soundness) $\Leftarrow$ $\quad$ $\Rightarrow$ (completeness)

$\vdash \phi \Leftrightarrow$ there exists a derivation that proves $\phi$

$\not\vDash \phi \Leftrightarrow$ not all models satisfy $\phi$ $\Leftrightarrow$ there exists a model that does not satisfy $\phi$

(completeness) $\Leftarrow$ $\quad$ $\Rightarrow$ (soundness)

$\not\vdash \phi \Leftrightarrow$ there exists no derivation that proves $\phi$ $\Leftrightarrow$ all potential derivations fail to prove $\phi$



Proving a universally quantified statement ("all") is often awkward, because you must argue that every entity from a possibly infinite domain (every structure, or every possible derivation) has the desired property (such as satisfying, or failing to prove, a formula). Proving an existentially quantified statement ("there is"), by contrast, is easy: it suffices to exhibit one example (one counter-model or one proof), and you're done.



So if you are supposed to show $\vDash$, rather than making a possibly cumbersome semantic argument about validity in all models, soundness and completeness allow us to show $\vdash$ instead, i.e. to provide one syntactic derivation of the desired formula.

On the other hand, showing $\not\vdash$, i.e. arguing that there can be no derivation, can be annoying; but soundness and completeness enable us to show $\not\vDash$ instead, by providing one counter-model, and we're done.



So the easier way to show $\not\vdash t : ((\sigma \to \tau) \to \sigma) \to \sigma$ would be a semantic argument: exhibit a model of the typed $\lambda$-calculus/of intuitionistic logic in which the formula $((\sigma \to \tau) \to \sigma) \to \sigma$ does not hold. A suitable semantics for intuitionistic logic is e.g. Kripke semantics.

If you are, however, interested in a syntactic proof, you need to argue that no syntactic derivation is possible, i.e. that every attempt to prove the formula eventually leads to a dead end where the proof cannot be continued, and to give the reason why.




So let's attempt to build up a derivation tree:



[image: attempted derivation tree for $((\sigma \to \tau) \to \sigma) \to \sigma$]



The only reasonable way to start (bottom-up) is to first (un)do an implication introduction to get rid of $(\sigma \to \tau) \to \sigma$: implication elimination would require an even larger major premise together with a suitable minor premise$^1$, and introducing a new variable is not an option, because the set of assumptions (the left-hand side of the sequent) has to be empty in the conclusion.

The next reasonable step after $(\to I)$ is to use $x : (\sigma \to \tau) \to \sigma$, which can in turn be derived by one application of the identity rule, to show $x(\lambda y.M) : \sigma$ by an application of $(\to E)$, which requires a proof of the antecedent, $\lambda y.M : \sigma \to \tau$.

By a similar argument as in the first step, the only useful continuation at the latter node is another $(\to I)$, which adds $y : \sigma$ as a new assumption and leaves us to prove $M : \tau$. This is the point where we get stuck.



The first observation is that we cannot use $(\to I)$ anymore, because $\tau$ is not a function type.

Simply adding $\tau$ to the context, i.e. introducing a new variable $z$ of type $\tau$ and closing the branch with $(Id)$, is not an option either: we would need to carry this additional assumption down the tree and would end up with the sequent $z : \tau \vdash \lambda x.(x(\lambda y.z)) : ((\sigma \to \tau) \to \sigma) \to \sigma$, whose context contains $z : \tau$ instead of being empty. So we would not have a proof (a derivation of the form $\vdash t : \alpha$, with no assumptions on the left-hand side of the sequent); instead, the derivability of the type would depend on the stipulation that some variable $z$ is declared to have type $\tau$. Our term would then contain a free variable, which does not solve the type inhabitation problem: that for any type $\alpha$ we can find a trivial "inhabitation" of the form $x : \alpha \vdash x : \alpha$ is uninteresting; type inhabitation asks for the existence of a closed term.

Alternatively, we could attempt to introduce new declarations of the form $\ldots \to \tau$ into the context and continue the tree with applications of elimination rules; however, this would merely shift the problem up the tree:

If we chose $\ldots \vdash \sigma \to \tau$ as the major premise, with $\ldots \vdash \sigma$ (derived by an application of the $(Id)$ rule) as the minor premise of the $(\to E)$ rule, an endless loop would occur, since a similar rule application is already used one step lower in the tree. The fact that the context now additionally contains $\sigma$ makes no difference in this case.

If we chose $\vdash \tau \to \tau$ as the major premise of the $(\to E)$ rule, another endless loop would occur, since the other branch (the minor premise) would again need to contain $\vdash \tau$ with the same context.

And introducing new type variables different from $\sigma$ and $\tau$ would not help either, but only leave us with even more to prove.



So there is no way to continue the upper right branch of the derivation tree, and at no point could we have made a smarter choice by selecting a different rule.

Since the tree cannot be completed, the type is not inhabited.



As you can see, proving that a formula is not derivable can be rather cumbersome, because you need to make sure that every way of building up a tree eventually fails; I hope my proof was comprehensive enough in this respect.




Another thing you might find interesting with respect to the correspondence with the $\to$-fragment of intuitionistic logic: the formula $((((\sigma \to \tau) \to \sigma) \to \sigma) \to \tau) \to \tau$, i.e. the formula above, call it $P$, extended to $(P \to \tau) \to \tau$, is derivable/inhabited. It represents the double negation of Peirce's law (reading $\tau$ as $\bot$, and $\neg \neg \phi$ as an abbreviation of $(\phi \to \bot) \to \bot$). This formula can be derived in intuitionistic logic, and in particular in the $\to$-fragment, i.e. in positive implication logic.$^2$ Thus, the type $((((\sigma \to \tau) \to \sigma) \to \sigma) \to \tau) \to \tau$ is inhabited (take $\alpha$ for $\sigma$ and $\beta$ for $\tau$):



[image: derivation tree for the double negation of Peirce's law]



An inhabitant (a $\lambda$-term) of this type is $\lambda x.\,x\,(\lambda y.\,y\,(\lambda z.\,x\,(\lambda w.\,z)))$. The subterm $\lambda y.\,y\,(\lambda z.\,M)$ has the same shape as the earlier attempt $\lambda x.\,x\,(\lambda y.\,M)$, except that now $M$ can be resolved, namely by applying the outer $x$ a second time.
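For readers who want to verify this mechanically: below is a minimal Church-style type checker (an illustrative sketch; the tuple encoding and names are my own, not from the answer) confirming that the term $\lambda x.\,x\,(\lambda y.\,y\,(\lambda z.\,x\,(\lambda w.\,z)))$, with each binder annotated, inhabits the double-negation type.

```python
# Hypothetical sanity check: a tiny type checker for Church-style simply
# typed lambda terms. Types are nested tuples: an atom is a string, and
# ('->', a, b) stands for a -> b. Terms are ('var', name),
# ('lam', name, annotated_type, body), or ('app', fun, arg).

def typecheck(term, env=None):
    env = env or {}
    tag = term[0]
    if tag == 'var':                       # look the variable up in the context
        return env[term[1]]
    if tag == 'lam':                       # abstraction builds an arrow type
        _, x, ty, body = term
        return ('->', ty, typecheck(body, {**env, x: ty}))
    if tag == 'app':                       # function domain must match argument
        _, f, a = term
        tf = typecheck(f, env)
        assert tf[0] == '->' and tf[1] == typecheck(a, env), "ill-typed"
        return tf[2]

s, t = 's', 't'
st = ('->', s, t)                          # s -> t
P  = ('->', ('->', st, s), s)              # Peirce: ((s -> t) -> s) -> s
NP = ('->', P, t)                          # P -> t ("not P", reading t as bottom)

# \x:NP. x (\y:(s->t)->s. y (\z:s. x (\w:(s->t)->s. z)))
term = ('lam', 'x', NP,
        ('app', ('var', 'x'),
         ('lam', 'y', ('->', st, s),
          ('app', ('var', 'y'),
           ('lam', 'z', s,
            ('app', ('var', 'x'),
             ('lam', 'w', ('->', st, s), ('var', 'z'))))))))

print(typecheck(term) == ('->', NP, t))    # → True: the double-negation type
```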




Some more words on Peirce's law from a proof-theoretic perspective:

A derivation of Peirce's law in classical logic requires an application of the reductio ad absurdum rule (or something equivalent, like double negation elimination): an assumption $\neg \alpha$ is made, $\bot$ is derived from it, and then $\alpha$ is concluded while discharging the assumption $\neg \alpha$. Again, one would need to argue why RAA is necessary, in other words, why there can be no derivation without this step.

RAA/DNE is part of (or admissible in) natural deduction for classical logic, but not for intuitionistic logic: in IL, only ex falso quodlibet is available, which allows one to conclude anything from $\bot$ but does not allow discharging assumptions of the form $\neg \alpha$. In fact, from a proof-theoretic perspective, the presence or absence of RAA/DNE is exactly what tells the two systems apart; the remaining set of rules is the same, so IL is basically CL with RAA/DNE replaced by the weaker EFQ rule.

Since RAA/DNE is necessary for the proof but not available in IL, Peirce's law is not derivable in natural deduction (or any other system) for IL, and in turn not in positive implication logic, the $\to$-fragment of IL.




$^1$ In a formal proof, the major premise is the premise whose formula contains the occurrence of the operator to be eliminated by the rule; the minor premises are the other premises. In the $(\to E)$ rule, with a conclusion of the form $\Gamma \vdash \beta$ and two premises, the major premise is the one of the form $\Gamma \vdash \alpha \to \beta$, and the minor premise is the one of the form $\Gamma \vdash \alpha$.



$^2$ In fact, it can be shown that for any formula which is valid classically but not intuitionistically, its double negation can be derived in IL. Unlike in classical logic, in IL $\neg \neg \phi$ is not equivalent to $\phi$: it is a weaker proposition, so $\phi$ implies $\neg \neg \phi$, but not necessarily the other way round.






  • The strong normalizability of the STLC is what helps you be confident you cover "all" bases. You switch from trying to find a term that inhabits the type to finding a normal-form term that inhabits it. This rules out adding "extra" assumptions, as those would eventually need to be discharged and you'd get non-normal subterms like $(\lambda f : \tau \to \tau.\,M)\,N$. We can only apply $\to E$ where the left argument is a variable applied to a series of normal-form terms (e.g. $x$ as in your derivation, or $x(\lambda y.y)$ for another example). – Derek Elkins, Mar 31 at 19:41


















Gentzen designed his logic to be normalizable. So, for example, if a proof has a $\land$-intro immediately followed by a $\land$-elim of the same introduced subexpression, then the proof can be simplified by removing those two steps.



Type-theoretic proofs are represented as typed lambda expressions, and normalization corresponds to simplifying the lambda expression by the usual evaluation rules. A proof containing a $\land$-intro followed by a $\land$-elim would be represented by a lambda expression that builds a pair $(x, y)$ (cons, in Lisp terms) and then requests the first or second component of that pair (car or cdr, in Lisp terms); the simplification amounts to rewriting that part of the proof from (car (cons x y)) to just x.
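The reduction just described can be sketched in a few lines. The following toy normalizer (an illustrative sketch of my own; the tuple encoding is an assumption, not from the answer) performs exactly the (car (cons x y)) to x simplification on proof terms with pairing:

```python
# Illustrative sketch of the normalization step described above: a
# pair introduction immediately followed by a projection cancels out,
# i.e. fst (pair a b) rewrites to a, just like (car (cons x y)) -> x
# in Lisp. Proof terms are tuples; atoms (axioms/variables) are strings.

def normalize(term):
    if not isinstance(term, tuple):
        return term                               # an atom is already normal
    tag, *subterms = term
    subterms = [normalize(s) for s in subterms]   # normalize inside out
    if tag in ('fst', 'snd') and subterms[0][0:1] == ('pair',):
        _, a, b = subterms[0]
        return a if tag == 'fst' else b           # the intro/elim detour disappears
    return (tag, *subterms)

print(normalize(('fst', ('pair', 'x', 'y'))))                          # → x
print(normalize(('snd', ('pair', ('fst', ('pair', 'a', 'b')), 'c'))))  # → c
```

Because subterms are normalized first, nested detours (as in the second example) are removed in a single bottom-up pass.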



Some of the consequences of this design are:

(a) all proofs (in the Gentzen-style positive implicational fragment of intuitionistic propositional logic) can be normalized;

(b) a normalized proof contains only rules pertaining to the operators that occur in the theorem (a theorem without a $\lor$ in it won't use $\lor$-elim in its proof, for example);

(c) a normalized proof contains only subexpressions of the theorem to be proven (and of the assumptions, if those are not eliminated).



So, to prove that there is no proof of $((p \to q) \to p) \to p$, you only need to check whether there is a normalized proof (by (a)). To do that, you only need to work backwards from the final statement, considering only $\to$-intro and $\to$-elim, since only $\to$ occurs in the theorem (by (b)). And you only need to check instances of these inferences built from subexpressions, five of them in this case (by (c)). So...



$$((p \to q) \to p) \to p$$



This can only be the result of $\to$-intro, so



$$\begin{array}{ll}
\quad (p \to q) \to p & \text{assumption} \\
\quad \vdots \\
\quad p \\
((p \to q) \to p) \to p & \to\text{-intro}
\end{array}$$



The only subexpression that allows $p$ to be constructed is $(p \to q) \to p$, via modus ponens, so



$$\begin{array}{ll}
\quad (p \to q) \to p & \text{assumption} \\
\quad \vdots \\
\quad p \to q \\
\quad p & \to\text{-elim} \\
((p \to q) \to p) \to p & \to\text{-intro}
\end{array}$$



And the only way to construct $p \to q$ is



$$\begin{array}{ll}
\quad (p \to q) \to p & \text{assumption} \\
\quad \quad p & \text{assumption} \\
\quad \quad \vdots \\
\quad \quad q \\
\quad p \to q & \to\text{-intro} \\
\quad p & \to\text{-elim} \\
((p \to q) \to p) \to p & \to\text{-intro}
\end{array}$$



And as there are no subexpressions left that would allow $q$ to be inferred, the statement is unprovable. This process could be automated with a (multi-)graph algorithm, using subexpressions as vertices and inferences as edges.
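Such an automation can be sketched as follows (illustrative code of my own; the tuple encoding and the loop-detection strategy are assumptions, not from the answer): a recursive backward search that reduces arrow goals by $\to$-intro, attacks atomic goals by $\to$-elim on a hypothesis with a matching head, and cuts a branch when an (assumptions, goal) state repeats. Since every goal and assumption is a subexpression of the original formula, the state space is finite and the search terminates.

```python
# Hypothetical sketch of the backward search described above, for the
# implicational fragment only. Formulas: atoms are strings, and
# ('->', a, b) stands for a -> b. A goal a -> b is reduced with ->-intro;
# an atomic goal must be the final target of some assumption, whose
# argument types become subgoals (->-elim / modus ponens). Revisiting
# the same (assumptions, goal) state on a branch is an endless loop,
# so that branch is cut.

def arrow(a, b):
    return ('->', a, b)

def provable(goal, ctx=frozenset(), seen=frozenset()):
    state = (ctx, goal)
    if state in seen:                      # loop: this branch is dead
        return False
    seen = seen | {state}
    if isinstance(goal, tuple):            # goal = a -> b: apply ->-intro
        _, a, b = goal
        return provable(b, ctx | {a}, seen)
    for hyp in ctx:                        # atomic goal: try ->-elim chains
        args, head = [], hyp
        while isinstance(head, tuple):     # peel hyp = a1 -> ... -> an -> head
            args.append(head[1])
            head = head[2]
        if head == goal and all(provable(a, ctx, seen) for a in args):
            return True
    return False

p, q = 'p', 'q'
peirce = arrow(arrow(arrow(p, q), p), p)      # ((p -> q) -> p) -> p
print(provable(peirce))                       # → False: not derivable
print(provable(arrow(arrow(peirce, q), q)))   # → True: its double negation is
```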






share|cite|improve this answer









$endgroup$













    Your Answer





    StackExchange.ifUsing("editor", function ()
    return StackExchange.using("mathjaxEditing", function ()
    StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
    StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
    );
    );
    , "mathjax-editing");

    StackExchange.ready(function()
    var channelOptions =
    tags: "".split(" "),
    id: "69"
    ;
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function()
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled)
    StackExchange.using("snippets", function()
    createEditor();
    );

    else
    createEditor();

    );

    function createEditor()
    StackExchange.prepareEditor(
    heartbeatType: 'answer',
    autoActivateHeartbeat: false,
    convertImagesToLinks: true,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: 10,
    bindNavPrevention: true,
    postfix: "",
    imageUploader:
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    ,
    noCode: true, onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    );



    );













    draft saved

    draft discarded


















    StackExchange.ready(
    function ()
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3156487%2fsyntactic-proof-that-peirces-law-doesnt-hold-in-simply-typed-lambda-calculus%23new-answer', 'question_page');

    );

    Post as a guest















    Required, but never shown

























    2 Answers
    2






    active

    oldest

    votes








    2 Answers
    2






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    3












    $begingroup$

    You are right in your intuition that proving that a derivation does not exist is somewhat cumbersome. Soundness and completeness of formal proof systems provide us with an interesting duality:



    $vDash phi Leftrightarrow$ all models satisfy $phi$ $Leftrightarrow$ there exists no model that does not satisfy $phi$

    (soundness) $Leftarrow$ $Rightarrow$ (completeness)
    $vdash phi Leftrightarrow$ there exists a derivation that proves $phi$



    $not vDash phi Leftrightarrow$ not all models satisfy $phi$ $Leftrightarrow$ there exists a model that does not satisfy $phi$

    (completeness) $Leftarrow$ $Rightarrow$ (soundness)
    $not vdash phi Leftrightarrow$ there exists no derivation that proves $phi$ $Leftrightarrow$ all potential derivations can not prove $phi$



    Proving a universally quantified statement ("all") is often an awkward thing to do because you need to make an argument that any entity from a possibly infinite domain (such as any structure or any possible derivation) has the desired property (such as (not) satisfying or (not) proving a formula), while proving an existentially quantified statement ("there is") is very easy, because it suffices to just provide one example (one counter-model/one proof) and you're done.



    So if you are supposed to show $vDash$, rather than making a possibly cumbersome semantic argument about validity in all models, soundness and completeness allow as to show $vdash$, i.e. provide one syntactic derivation that proves the desired formula.

    One the other hand, showing $not vdash$, i.e. arguing that there can be no derivation can be annoying, but soundness and completeness enable us to just show that $not vDash$ holds by providing one counter-model and you're done.



    So the easier way to show that $not vdash t: ((sigma to tau) to sigma) to sigma)$ would be to provide a semantic argument that shows that there are models of typed $lambda$-calculus/intuitionistic logic in which the formula $((sigma to tau) to sigma) to sigma)$ does not hold. A semantics for intuitionistic logic is e.g. Kripke semantics.

    If you are, however, interested in syntactic proof, you need to make an argument that no syntactic derivation is possible, i.e. any attempt to prove the formula will eventually lead to a dead-end where it is not possible to continue the proof, and you need to reason why.




    So let's attempt to build up a derivation tree:



    enter image description here



    The only reasonable way to start (bottom-up) is to first (un-)do an implication introduction to get rid of $((sigma to tau) to sigma)$: Implication elimination would require an even larger major premise and a suitable minor premise$^1$, and introducing a new variable is not an option because the set of assumptions (the left-hand-side of the sequent) has to be empty in the conclusion formula.

    The next reasonable step after $(to I)$ is to make use of $x: ((sigma to tau) to sigma)$ - which can in turn be derived by one application of the identity rule - to show $x(lambda y.M): sigma$ by an application of $(to E)$, which requries a proof of the antecedent, $lambda y.M: (sigma to tau)$.

    With a similar argument as in the first step, the only useful continutation for the latter node is another $(to I)$, which adds $y: sigma$ as a new assumption, and we now have to prove $M: tau$. This is the point were we get stuck.



    The first observation is that we can't use $(to I)$ anymore because $tau$ is not a function type.

    Simply adding $tau$ to the context, i.e. introducing a new variable $z$ which has the type $tau$, and closing the branch with $(Id)$ is not an option either, because we would need to carry this additional assumption down the tree, and end up with the sequent $z: tau vdash lambda x.(x(lambda y.z)): ((sigma to tau) to sigma) to sigma$, with a context containing $z: tau$ instead of being empty. So we would not have a proof (= a derivation of the form $vdash t: alpha$, with no assumptions on the left-hand side of the sequent), but instead the derivability of the type would depend on the stipulation of some variable $z$ being declared to have the type $tau$. But then our term would contain a free variable, which does not solve the type inhabitation problem: That for any type $alpha$, we could find a trivial "inhabitation" of the form $x:alpha vdash x:alpha$ is uninteresting - instead, type inhabitation asks about the existence of a closed term.

    So alternatively, we could attempt to introduce new declarations of the form $... to tau$ to the context, and continue the tree by an application of elimination rules; however this would merely shift the problem upwards the tree:

    If we chose $ldots vdash sigma to tau$ as the major premise with $ldots vdash sigma$ (derived by an application of the $(Id)$ rule) as the minor premise to the $(to E)$ rule, an endless loop would occur, since a similar rule application is already used one step lower in the tree. The fact that the context now in addition contains $sigma$ does not make a difference in this case.

    If we chose $\vdash \tau \to \tau$ as the major premise of the $(\to E)$ rule, another endless loop would occur, since the other branch (the minor premise) would need to contain $\vdash \tau$ with the same context again.
    And introducing new type variables different from $\sigma$ and $\tau$ wouldn't help either, but only leave us with even more to prove.



    So there is no way to continue the upper right branch of the derivation tree, and at no point could we have made a smarter choice by selecting a different rule.

    Since the tree cannot be resolved, the type is not inhabited.



    As you see, proving that a formula is not derivable can be rather cumbersome, because you need to make sure that every way of trying to build up a tree will eventually fail - I hope my proof was comprehensive enough in this respect.
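    This exhaustive case analysis can also be mechanized: by strong normalization it suffices to search for normal-form inhabitants, where an arrow goal can only be attacked by $(\to I)$ and an atomic goal only by applying a context variable via $(\to E)$. Below is a hedged sketch of such a bounded inhabitation search (the names, encoding, and fuel bound are mine, not from the answer or any library):

    ```typescript
    // Types of the ->-fragment: atoms and arrows.
    type Ty = { atom: string } | { from: Ty; to: Ty };
    const arr = (from: Ty, to: Ty): Ty => ({ from, to });

    function eq(a: Ty, b: Ty): boolean {
      if ("atom" in a && "atom" in b) return a.atom === b.atom;
      if ("from" in a && "from" in b) return eq(a.from, b.from) && eq(a.to, b.to);
      return false;
    }

    // Split a1 -> ... -> an -> p into ([a1, ..., an], atomic head p).
    function spine(t: Ty): [Ty[], Ty] {
      if ("from" in t) {
        const [args, head] = spine(t.to);
        return [[t.from, ...args], head];
      }
      return [[], t];
    }

    // Bounded search for a normal-form inhabitant:
    // - arrow goal: the only useful move is (->I), which extends the context;
    // - atomic goal: some hypothesis must have a matching head, and its
    //   argument types must be recursively inhabited, mirroring (->E).
    function inhabited(fuel: number, ctx: Ty[], goal: Ty): boolean {
      if (fuel === 0) return false;
      if ("from" in goal) return inhabited(fuel - 1, [goal.from, ...ctx], goal.to);
      return ctx.some((h) => {
        const [args, head] = spine(h);
        return eq(head, goal) && args.every((a) => inhabited(fuel - 1, ctx, a));
      });
    }

    const s: Ty = { atom: "sigma" };
    const t: Ty = { atom: "tau" };
    const peirce = arr(arr(arr(s, t), s), s); // ((sigma->tau)->sigma)->sigma
    const dnPeirceTy = arr(arr(peirce, t), t); // the double-negated variant below

    console.log(inhabited(20, [], peirce));     // false
    console.log(inhabited(20, [], dnPeirceTy)); // true
    ```

    The search for Peirce's type fails exactly as in the tree above: after two $(\to I)$ steps the atomic goal $\tau$ has no hypothesis with head $\tau$, at any depth.
    
    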




    Another thing you might find interesting w.r.t. the correspondence to the $\to$-fragment of intuitionistic logic: The formula $(((((\sigma \to \tau) \to \sigma) \to \sigma) \to \tau) \to \tau)$, i.e. the above formula with $\to \tau$ appended twice, is derivable/inhabited. This formula represents the double negation of Peirce's law (under the assumption that $\tau$ stands for $\bot$, and $\neg \neg \phi$ is seen as an abbreviation of $(\phi \to \bot) \to \bot$). This formula can be derived in intuitionistic logic, and in particular in the $\to$-fragment, i.e. in positive implication logic.$^2$ Thus, the type $(((((\sigma \to \tau) \to \sigma) \to \sigma) \to \tau) \to \tau)$ is inhabited (take $\alpha$ for $\sigma$ and $\beta$ for $\tau$):



    [derivation tree showing the inhabitation of the double-negated Peirce type]



    An inhabitant (a $\lambda$-term) corresponding to this type (formula) would be $\lambda x.(x(\lambda y.(y(\lambda z.x(\lambda w.z)))))$. You see that the part $\lambda x.(x(\lambda y.M))$ is the same as before, except that now $M$ can be resolved.
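    Under Curry-Howard, the claim that this term is well-typed can be checked mechanically: written as a generic function, having the type checker accept the definition is exactly an inhabitation witness. A sketch in TypeScript (the function name and the concrete instantiation are mine; `S`, `T` play the roles of $\sigma$, $\tau$):

    ```typescript
    // Double negation of Peirce's law: ((((S->T)->S)->S)->T)->T.
    // The body is the lambda-term from the text; the checker accepting
    // it demonstrates that the type is inhabited by a closed term.
    function dnPeirce<S, T>(
      x: (p: (y: (f: (s: S) => T) => S) => S) => T
    ): T {
      return x((y) => y((z) => x((_w) => z)));
    }

    // A concrete instantiation just to run it (S = number, T = string):
    const demo = dnPeirce<number, string>((p) => String(p((f) => f(5).length)));
    console.log(demo); // "1"
    ```
    
    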




    Some more words on Peirce from a proof-theoretic perspective:

    A derivation of Peirce's law in classical logic requires an application of the reductio ad absurdum rule (or something equivalent, like double negation elimination): An assumption $\neg \alpha$ is made from which $\bot$ is derived, then $\alpha$ is concluded while discharging the assumption $\neg \alpha$. Again, you would need to argue why RAA is necessary, in other words, why there can be no derivation without this step.

    RAA/DNE is part of/admissible in natural deduction for classical logic, but not intuitionistic logic: In IL, only ex falso quodlibet (EFQ) is available, which allows one to conclude anything from $\bot$ but does not allow discharging assumptions of the form $\neg \alpha$. In fact, from a proof-theoretic perspective, the presence or absence of RAA/DNE is exactly what tells the two systems apart; the remaining set of rules is the same; so IL is basically CL with RAA/DNE replaced by the weaker EFQ rule.

    Since RAA/DNE is necessary for the proof but this rule is not available in IL, Peirce's law is not derivable in ND (or any other proof system) for IL, and in turn not in positive implication logic, the $\to$-fragment of IL.
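    Computationally, the missing classical step corresponds to a control operator: call/cc has exactly Peirce's law $((A \to B) \to A) \to A$ as its type. As a hedged sketch (the helper name and the exception-based encoding are mine, not from the answer), it can be simulated in TypeScript with a non-local exit:

    ```typescript
    // Peirce's law ((A -> B) -> A) -> A realized by a control effect:
    // the hypothesis k acts as an escape continuation. Invoking k aborts
    // the computation via a thrown token and returns k's argument as the A.
    function peirceCC<A, B>(f: (k: (a: A) => B) => A): A {
      const token = Symbol("abort");
      let escaped: A | undefined;
      try {
        return f((a: A): B => {
          escaped = a;
          throw token; // never actually produces a B
        });
      } catch (e) {
        if (e === token) return escaped as A;
        throw e; // unrelated exception: re-raise
      }
    }

    console.log(peirceCC<number, string>((k) => 7)); // 7: k never invoked
    console.log(peirceCC<number, string>((k) => { k(42); return 0; })); // 42: k escapes
    ```

    The pure simply typed $\lambda$-calculus has no such control effect, which matches the underivability of Peirce's law in the $\to$-fragment of IL.
    
    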




    $^1$ In a formal proof, major premise = the premise containing the occurrence of the operator to be eliminated by the rule; minor premises = the other premises. In the $(\to E)$ rule, whose conclusion is of the form $\Gamma \vdash \beta$ and which has two premises, the major premise is the one of the form $\Gamma \vdash \alpha \to \beta$, and the minor premise is the one of the form $\Gamma \vdash \alpha$.



    $^2$ In fact, it can be shown that for any formula which is valid classically but not in IL, its double negation can be derived in IL. Unlike in classical logic, in IL $\neg \neg \phi$ is not equivalent to $\phi$: it is a weaker proposition, so $\phi$ implies $\neg \neg \phi$, but not necessarily the other way round.
    • The strong normalizability of the STLC is what helps you be confident you cover "all" bases. You switch from trying to find a term that inhabits the type to finding a normal form term that inhabits the type. This rules out adding "extra" assumptions, as those would eventually need to be discharged and you'd get non-normal subterms like $(\lambda f:\tau\to\tau.M)N$. We can only apply $\to E$ where the left argument is a variable applied to a series of normal form terms (e.g. $x$ as in your derivation, or $x(\lambda y.y)$ for another example). – Derek Elkins, Mar 31 at 19:41
















    3












    $begingroup$

    You are right in your intuition that proving that a derivation does not exist is somewhat cumbersome. Soundness and completeness of formal proof systems provide us with an interesting duality:



    $vDash phi Leftrightarrow$ all models satisfy $phi$ $Leftrightarrow$ there exists no model that does not satisfy $phi$

    (soundness) $Leftarrow$ $Rightarrow$ (completeness)
    $vdash phi Leftrightarrow$ there exists a derivation that proves $phi$



    $not vDash phi Leftrightarrow$ not all models satisfy $phi$ $Leftrightarrow$ there exists a model that does not satisfy $phi$

    (completeness) $Leftarrow$ $Rightarrow$ (soundness)
    $not vdash phi Leftrightarrow$ there exists no derivation that proves $phi$ $Leftrightarrow$ all potential derivations can not prove $phi$



    Proving a universally quantified statement ("all") is often an awkward thing to do because you need to make an argument that any entity from a possibly infinite domain (such as any structure or any possible derivation) has the desired property (such as (not) satisfying or (not) proving a formula), while proving an existentially quantified statement ("there is") is very easy, because it suffices to just provide one example (one counter-model/one proof) and you're done.



    So if you are supposed to show $vDash$, rather than making a possibly cumbersome semantic argument about validity in all models, soundness and completeness allow as to show $vdash$, i.e. provide one syntactic derivation that proves the desired formula.

    One the other hand, showing $not vdash$, i.e. arguing that there can be no derivation can be annoying, but soundness and completeness enable us to just show that $not vDash$ holds by providing one counter-model and you're done.



    So the easier way to show that $not vdash t: ((sigma to tau) to sigma) to sigma)$ would be to provide a semantic argument that shows that there are models of typed $lambda$-calculus/intuitionistic logic in which the formula $((sigma to tau) to sigma) to sigma)$ does not hold. A semantics for intuitionistic logic is e.g. Kripke semantics.

    If you are, however, interested in syntactic proof, you need to make an argument that no syntactic derivation is possible, i.e. any attempt to prove the formula will eventually lead to a dead-end where it is not possible to continue the proof, and you need to reason why.




    So let's attempt to build up a derivation tree:



    enter image description here



    The only reasonable way to start (bottom-up) is to first (un-)do an implication introduction to get rid of $((sigma to tau) to sigma)$: Implication elimination would require an even larger major premise and a suitable minor premise$^1$, and introducing a new variable is not an option because the set of assumptions (the left-hand-side of the sequent) has to be empty in the conclusion formula.

    The next reasonable step after $(to I)$ is to make use of $x: ((sigma to tau) to sigma)$ - which can in turn be derived by one application of the identity rule - to show $x(lambda y.M): sigma$ by an application of $(to E)$, which requries a proof of the antecedent, $lambda y.M: (sigma to tau)$.

    With a similar argument as in the first step, the only useful continutation for the latter node is another $(to I)$, which adds $y: sigma$ as a new assumption, and we now have to prove $M: tau$. This is the point were we get stuck.



    The first observation is that we can't use $(to I)$ anymore because $tau$ is not a function type.

    Simply adding $tau$ to the context, i.e. introducing a new variable $z$ which has the type $tau$, and closing the branch with $(Id)$ is not an option either, because we would need to carry this additional assumption down the tree, and end up with the sequent $z: tau vdash lambda x.(x(lambda y.z)): ((sigma to tau) to sigma) to sigma$, with a context containing $z: tau$ instead of being empty. So we would not have a proof (= a derivation of the form $vdash t: alpha$, with no assumptions on the left-hand side of the sequent), but instead the derivability of the type would depend on the stipulation of some variable $z$ being declared to have the type $tau$. But then our term would contain a free variable, which does not solve the type inhabitation problem: That for any type $alpha$, we could find a trivial "inhabitation" of the form $x:alpha vdash x:alpha$ is uninteresting - instead, type inhabitation asks about the existence of a closed term.

    So alternatively, we could attempt to introduce new declarations of the form $... to tau$ to the context, and continue the tree by an application of elimination rules; however this would merely shift the problem upwards the tree:

    If we chose $ldots vdash sigma to tau$ as the major premise with $ldots vdash sigma$ (derived by an application of the $(Id)$ rule) as the minor premise to the $(to E)$ rule, an endless loop would occur, since a similar rule application is already used one step lower in the tree. The fact that the context now in addition contains $sigma$ does not make a difference in this case.

    If we chose $vdash tau to tau$ as the major premise to the $(to E)$ rule, another endless loop would occur, since the other branch (the minor premise) would need to contain $vdash tau$ with the same context again.
    And introducing new type variables different from $sigma$ and $tau$ wouldn't help either, but only leave us with even more to prove.



    So there is no way to continue the upper right branch of the derivation tree, and at no point could we have made a smarter choice by selecting a different rule.

    Since the tree can not be resolved, the type is not inhabited.



    As you see, proving that a formula is not derivable can be rather cumbersome, because you need to make sure that every way of trying to build up a tree will eventually fail - I hope my proof was comprehensive enough in this respect.




    Another thing you might find interesting w.r.t. to the correspondence to the $to$-fragment of intuitionistic logic: The formula $(((((sigma to tau) to sigma) to sigma) to tau) to tau)$, i.e. the above formula with two succedents $tau$ appended, is deriavable/inhabited. This formula represents the double negation of Peirce's law (under the assumption that $tau$ stands for $bot$, and $neg neg phi$ is seen as an abbreviation of $(phi to bot) to bot$). This formula can be derived in intuitionistic logic, and in particular in the $to$-fragment, i.e. in positive implication logic.$^2$ Thus, the type $(((((sigma to tau) to sigma) to sigma) to tau) to tau)$ is inhabited (take $alpha$ for $sigma$ and $beta$ for $tau$):



    enter image description here



    An inhabitant (a $lambda$-term) corresponding to this type (formula) would be $lambda x.(x(lambda y.(lambda z.x(lambda y.z)))$. You see that the part $lambda x.(x(lambda y.M))$ is the same as before, except that now $M$ can be resolved.




    Some more words on Peirce from a proof-theoretic perspective:

    A derivation of Peirce's law in classical logic requires an application of the reductio ad absurdum rule (or something equivalent, like double negation eliminiation): An assumption $neg alpha$ is made from which $bot$ is derived, then $alpha$ is concluded while discharging the assumption $neg alpha$. Again, you would need to argue why RAA is necessary, i.o.w., why there can be no derivation without this step.

    RAA/DNE is part of/admissible in natural deduction for classical logic, but not intuitionistic logic: In IL, only ex falso quodlibet sequitur available, which allows to conclude anything from $bot$ but does not allow to discharge assumptions for the form $neg alpha$. In fact, from a proof-theoretic perspective, the presence or absence of RAA/DNE is exactly what tells the two systems apart; the remaining set of rules is the same; so IL is basically CL with RAA/EFQL+DNE replaced by the weaker EFQL rule.

    Since RAA/DNE is necesssary for the proof but this rule is not available in IL, Peirce is not derivable in ND (or any other system) for IL, and in turn not for positive implcation logic, the $to$-fragment of IL.




    $^1$ In a formal proof, major premise = the premise with the formula that contains the occurrence of the operator to be eliminated by the rule, minor premises = the other premises. In the $(to E)$ rule, with a conclusion is of the form $Gamma vdash beta$ and two premises, the major premise is the premise of the form $Gamma vdash alpha to beta$, and the minor premise is the one of the form $Gamma vdash alpha$.



    $^2$ In fact, it can be shown that for any formula which is only valid classically but not in IL, its double negation can be derived in IL. Other than in classical logic, in IL, $neg neg phi$ is not equivalent to $phi$: it is a weaker propositions, so $phi$ implies $neg neg phi$, but not necessarily the other way round.






    share|cite|improve this answer











    $endgroup$








    • 1




      $begingroup$
      The strong normalizability of the STLC is what helps you be confident you cover "all" bases. You switch from trying to find a term that inhabits the type to finding a normal form term that inhabits the types. This rules out adding "extra" assumptions as those would eventually need to be discharged and you'd get non-normal subterms like $(lambda f:tautotau.M)N$. We can only apply $to E$ where the left argument is a variable applied to a series of normal form terms (e.g. $x$ as in your derivation, or $x(lambda y.y)$ for another example).
      $endgroup$
      – Derek Elkins
      Mar 31 at 19:41














    3












    3








    3





    $begingroup$

    You are right in your intuition that proving that a derivation does not exist is somewhat cumbersome. Soundness and completeness of formal proof systems provide us with an interesting duality:



    $vDash phi Leftrightarrow$ all models satisfy $phi$ $Leftrightarrow$ there exists no model that does not satisfy $phi$

    (soundness) $Leftarrow$ $Rightarrow$ (completeness)
    $vdash phi Leftrightarrow$ there exists a derivation that proves $phi$



    $not vDash phi Leftrightarrow$ not all models satisfy $phi$ $Leftrightarrow$ there exists a model that does not satisfy $phi$

    (completeness) $Leftarrow$ $Rightarrow$ (soundness)
    $not vdash phi Leftrightarrow$ there exists no derivation that proves $phi$ $Leftrightarrow$ all potential derivations can not prove $phi$



    Proving a universally quantified statement ("all") is often an awkward thing to do because you need to make an argument that any entity from a possibly infinite domain (such as any structure or any possible derivation) has the desired property (such as (not) satisfying or (not) proving a formula), while proving an existentially quantified statement ("there is") is very easy, because it suffices to just provide one example (one counter-model/one proof) and you're done.



    So if you are supposed to show $vDash$, rather than making a possibly cumbersome semantic argument about validity in all models, soundness and completeness allow as to show $vdash$, i.e. provide one syntactic derivation that proves the desired formula.

    One the other hand, showing $not vdash$, i.e. arguing that there can be no derivation can be annoying, but soundness and completeness enable us to just show that $not vDash$ holds by providing one counter-model and you're done.



    So the easier way to show that $not vdash t: ((sigma to tau) to sigma) to sigma)$ would be to provide a semantic argument that shows that there are models of typed $lambda$-calculus/intuitionistic logic in which the formula $((sigma to tau) to sigma) to sigma)$ does not hold. A semantics for intuitionistic logic is e.g. Kripke semantics.

    If you are, however, interested in syntactic proof, you need to make an argument that no syntactic derivation is possible, i.e. any attempt to prove the formula will eventually lead to a dead-end where it is not possible to continue the proof, and you need to reason why.




    So let's attempt to build up a derivation tree:



    enter image description here



    The only reasonable way to start (bottom-up) is to first (un-)do an implication introduction to get rid of $((sigma to tau) to sigma)$: Implication elimination would require an even larger major premise and a suitable minor premise$^1$, and introducing a new variable is not an option because the set of assumptions (the left-hand-side of the sequent) has to be empty in the conclusion formula.

    The next reasonable step after $(to I)$ is to make use of $x: ((sigma to tau) to sigma)$ - which can in turn be derived by one application of the identity rule - to show $x(lambda y.M): sigma$ by an application of $(to E)$, which requries a proof of the antecedent, $lambda y.M: (sigma to tau)$.

    With a similar argument as in the first step, the only useful continutation for the latter node is another $(to I)$, which adds $y: sigma$ as a new assumption, and we now have to prove $M: tau$. This is the point were we get stuck.



    The first observation is that we can't use $(to I)$ anymore because $tau$ is not a function type.

    Simply adding $tau$ to the context, i.e. introducing a new variable $z$ which has the type $tau$, and closing the branch with $(Id)$ is not an option either, because we would need to carry this additional assumption down the tree, and end up with the sequent $z: tau vdash lambda x.(x(lambda y.z)): ((sigma to tau) to sigma) to sigma$, with a context containing $z: tau$ instead of being empty. So we would not have a proof (= a derivation of the form $vdash t: alpha$, with no assumptions on the left-hand side of the sequent), but instead the derivability of the type would depend on the stipulation of some variable $z$ being declared to have the type $tau$. But then our term would contain a free variable, which does not solve the type inhabitation problem: That for any type $alpha$, we could find a trivial "inhabitation" of the form $x:alpha vdash x:alpha$ is uninteresting - instead, type inhabitation asks about the existence of a closed term.

    So alternatively, we could attempt to introduce new declarations of the form $... to tau$ to the context, and continue the tree by an application of elimination rules; however this would merely shift the problem upwards the tree:

    If we chose $ldots vdash sigma to tau$ as the major premise with $ldots vdash sigma$ (derived by an application of the $(Id)$ rule) as the minor premise to the $(to E)$ rule, an endless loop would occur, since a similar rule application is already used one step lower in the tree. The fact that the context now in addition contains $sigma$ does not make a difference in this case.

    If we chose $vdash tau to tau$ as the major premise to the $(to E)$ rule, another endless loop would occur, since the other branch (the minor premise) would need to contain $vdash tau$ with the same context again.
    And introducing new type variables different from $sigma$ and $tau$ wouldn't help either, but only leave us with even more to prove.



    So there is no way to continue the upper right branch of the derivation tree, and at no point could we have made a smarter choice by selecting a different rule.

    Since the tree can not be resolved, the type is not inhabited.



    As you see, proving that a formula is not derivable can be rather cumbersome, because you need to make sure that every way of trying to build up a tree will eventually fail - I hope my proof was comprehensive enough in this respect.




    Another thing you might find interesting w.r.t. to the correspondence to the $to$-fragment of intuitionistic logic: The formula $(((((sigma to tau) to sigma) to sigma) to tau) to tau)$, i.e. the above formula with two succedents $tau$ appended, is deriavable/inhabited. This formula represents the double negation of Peirce's law (under the assumption that $tau$ stands for $bot$, and $neg neg phi$ is seen as an abbreviation of $(phi to bot) to bot$). This formula can be derived in intuitionistic logic, and in particular in the $to$-fragment, i.e. in positive implication logic.$^2$ Thus, the type $(((((sigma to tau) to sigma) to sigma) to tau) to tau)$ is inhabited (take $alpha$ for $sigma$ and $beta$ for $tau$):



    enter image description here



    An inhabitant (a $lambda$-term) corresponding to this type (formula) would be $lambda x.(x(lambda y.(lambda z.x(lambda y.z)))$. You see that the part $lambda x.(x(lambda y.M))$ is the same as before, except that now $M$ can be resolved.




    Some more words on Peirce from a proof-theoretic perspective:

    A derivation of Peirce's law in classical logic requires an application of the reductio ad absurdum rule (or something equivalent, like double negation eliminiation): An assumption $neg alpha$ is made from which $bot$ is derived, then $alpha$ is concluded while discharging the assumption $neg alpha$. Again, you would need to argue why RAA is necessary, i.o.w., why there can be no derivation without this step.

    RAA/DNE is part of/admissible in natural deduction for classical logic, but not intuitionistic logic: In IL, only ex falso quodlibet sequitur available, which allows to conclude anything from $bot$ but does not allow to discharge assumptions for the form $neg alpha$. In fact, from a proof-theoretic perspective, the presence or absence of RAA/DNE is exactly what tells the two systems apart; the remaining set of rules is the same; so IL is basically CL with RAA/EFQL+DNE replaced by the weaker EFQL rule.

    Since RAA/DNE is necesssary for the proof but this rule is not available in IL, Peirce is not derivable in ND (or any other system) for IL, and in turn not for positive implcation logic, the $to$-fragment of IL.




    $^1$ In a formal proof, major premise = the premise with the formula that contains the occurrence of the operator to be eliminated by the rule, minor premises = the other premises. In the $(to E)$ rule, with a conclusion is of the form $Gamma vdash beta$ and two premises, the major premise is the premise of the form $Gamma vdash alpha to beta$, and the minor premise is the one of the form $Gamma vdash alpha$.



    $^2$ In fact, it can be shown that for any formula which is only valid classically but not in IL, its double negation can be derived in IL. Other than in classical logic, in IL, $neg neg phi$ is not equivalent to $phi$: it is a weaker propositions, so $phi$ implies $neg neg phi$, but not necessarily the other way round.






    share|cite|improve this answer











    $endgroup$



    You are right in your intuition that proving that a derivation does not exist is somewhat cumbersome. Soundness and completeness of formal proof systems provide us with an interesting duality:



    $vDash phi Leftrightarrow$ all models satisfy $phi$ $Leftrightarrow$ there exists no model that does not satisfy $phi$

    (soundness) $Leftarrow$ $Rightarrow$ (completeness)
    $vdash phi Leftrightarrow$ there exists a derivation that proves $phi$



    $not vDash phi Leftrightarrow$ not all models satisfy $phi$ $Leftrightarrow$ there exists a model that does not satisfy $phi$

    (completeness) $Leftarrow$ $Rightarrow$ (soundness)
    $not vdash phi Leftrightarrow$ there exists no derivation that proves $phi$ $Leftrightarrow$ all potential derivations can not prove $phi$



    Proving a universally quantified statement ("all") is often an awkward thing to do because you need to make an argument that any entity from a possibly infinite domain (such as any structure or any possible derivation) has the desired property (such as (not) satisfying or (not) proving a formula), while proving an existentially quantified statement ("there is") is very easy, because it suffices to just provide one example (one counter-model/one proof) and you're done.



    So if you are supposed to show $vDash$, rather than making a possibly cumbersome semantic argument about validity in all models, soundness and completeness allow as to show $vdash$, i.e. provide one syntactic derivation that proves the desired formula.

    One the other hand, showing $not vdash$, i.e. arguing that there can be no derivation can be annoying, but soundness and completeness enable us to just show that $not vDash$ holds by providing one counter-model and you're done.



    So the easier way to show that $not vdash t: ((sigma to tau) to sigma) to sigma)$ would be to provide a semantic argument that shows that there are models of typed $lambda$-calculus/intuitionistic logic in which the formula $((sigma to tau) to sigma) to sigma)$ does not hold. A semantics for intuitionistic logic is e.g. Kripke semantics.

    If you are, however, interested in syntactic proof, you need to make an argument that no syntactic derivation is possible, i.e. any attempt to prove the formula will eventually lead to a dead-end where it is not possible to continue the proof, and you need to reason why.




    So let's attempt to build up a derivation tree:



    enter image description here



    The only reasonable way to start (bottom-up) is to first (un-)do an implication introduction to get rid of $((sigma to tau) to sigma)$: Implication elimination would require an even larger major premise and a suitable minor premise$^1$, and introducing a new variable is not an option because the set of assumptions (the left-hand-side of the sequent) has to be empty in the conclusion formula.

    The next reasonable step after $(to I)$ is to make use of $x: ((sigma to tau) to sigma)$ - which can in turn be derived by one application of the identity rule - to show $x(lambda y.M): sigma$ by an application of $(to E)$, which requries a proof of the antecedent, $lambda y.M: (sigma to tau)$.

    With a similar argument as in the first step, the only useful continutation for the latter node is another $(to I)$, which adds $y: sigma$ as a new assumption, and we now have to prove $M: tau$. This is the point were we get stuck.



    The first observation is that we can't use $(to I)$ anymore because $tau$ is not a function type.

    Simply adding $tau$ to the context, i.e. introducing a new variable $z$ which has the type $tau$, and closing the branch with $(Id)$ is not an option either, because we would need to carry this additional assumption down the tree, and end up with the sequent $z: tau vdash lambda x.(x(lambda y.z)): ((sigma to tau) to sigma) to sigma$, with a context containing $z: tau$ instead of being empty. So we would not have a proof (= a derivation of the form $vdash t: alpha$, with no assumptions on the left-hand side of the sequent), but instead the derivability of the type would depend on the stipulation of some variable $z$ being declared to have the type $tau$. But then our term would contain a free variable, which does not solve the type inhabitation problem: That for any type $alpha$, we could find a trivial "inhabitation" of the form $x:alpha vdash x:alpha$ is uninteresting - instead, type inhabitation asks about the existence of a closed term.

    So alternatively, we could attempt to introduce new declarations of the form $... to tau$ to the context, and continue the tree by an application of elimination rules; however this would merely shift the problem upwards the tree:

    If we chose $ldots vdash sigma to tau$ as the major premise with $ldots vdash sigma$ (derived by an application of the $(Id)$ rule) as the minor premise to the $(to E)$ rule, an endless loop would occur, since a similar rule application is already used one step lower in the tree. The fact that the context now in addition contains $sigma$ does not make a difference in this case.

    If we chose $vdash tau to tau$ as the major premise to the $(to E)$ rule, another endless loop would occur, since the other branch (the minor premise) would need to contain $vdash tau$ with the same context again.
    And introducing new type variables different from $sigma$ and $tau$ wouldn't help either, but only leave us with even more to prove.



    So there is no way to continue the upper right branch of the derivation tree, and at no point could we have made a smarter choice by selecting a different rule.

    Since the tree can not be resolved, the type is not inhabited.



    As you see, proving that a formula is not derivable can be rather cumbersome, because you need to make sure that every way of trying to build up a tree will eventually fail - I hope my proof was comprehensive enough in this respect.




    Another thing you might find interesting w.r.t. to the correspondence to the $to$-fragment of intuitionistic logic: The formula $(((((sigma to tau) to sigma) to sigma) to tau) to tau)$, i.e. the above formula with two succedents $tau$ appended, is deriavable/inhabited. This formula represents the double negation of Peirce's law (under the assumption that $tau$ stands for $bot$, and $neg neg phi$ is seen as an abbreviation of $(phi to bot) to bot$). This formula can be derived in intuitionistic logic, and in particular in the $to$-fragment, i.e. in positive implication logic.$^2$ Thus, the type $(((((sigma to tau) to sigma) to sigma) to tau) to tau)$ is inhabited (take $alpha$ for $sigma$ and $beta$ for $tau$):



    enter image description here



    An inhabitant (a $lambda$-term) corresponding to this type (formula) would be $lambda x.(x(lambda y.(lambda z.x(lambda y.z)))$. You see that the part $lambda x.(x(lambda y.M))$ is the same as before, except that now $M$ can be resolved.




    Some more words on Peirce from a proof-theoretic perspective:

    A derivation of Peirce's law in classical logic requires an application of the reductio ad absurdum rule (or something equivalent, like double negation eliminiation): An assumption $neg alpha$ is made from which $bot$ is derived, then $alpha$ is concluded while discharging the assumption $neg alpha$. Again, you would need to argue why RAA is necessary, i.o.w., why there can be no derivation without this step.

    RAA/DNE is part of/admissible in natural deduction for classical logic, but not intuitionistic logic: In IL, only ex falso quodlibet sequitur available, which allows to conclude anything from $bot$ but does not allow to discharge assumptions for the form $neg alpha$. In fact, from a proof-theoretic perspective, the presence or absence of RAA/DNE is exactly what tells the two systems apart; the remaining set of rules is the same; so IL is basically CL with RAA/EFQL+DNE replaced by the weaker EFQL rule.

    Since RAA/DNE is necesssary for the proof but this rule is not available in IL, Peirce is not derivable in ND (or any other system) for IL, and in turn not for positive implcation logic, the $to$-fragment of IL.




    $^1$ In a formal proof, major premise = the premise with the formula that contains the occurrence of the operator to be eliminated by the rule, minor premises = the other premises. In the $(to E)$ rule, with a conclusion is of the form $Gamma vdash beta$ and two premises, the major premise is the premise of the form $Gamma vdash alpha to beta$, and the minor premise is the one of the form $Gamma vdash alpha$.



    $^2$ In fact, it can be shown that for any formula which is only valid classically but not in IL, its double negation can be derived in IL. Other than in classical logic, in IL, $neg neg phi$ is not equivalent to $phi$: it is a weaker propositions, so $phi$ implies $neg neg phi$, but not necessarily the other way round.







    share|cite|improve this answer














    share|cite|improve this answer











    edited Mar 24 at 19:12

























    answered Mar 21 at 14:02









    lemontree








      The strong normalizability of the STLC is what helps you be confident you cover "all" bases. You switch from trying to find a term that inhabits the type to finding a normal-form term that inhabits the type. This rules out adding "extra" assumptions, as those would eventually need to be discharged and you'd get non-normal subterms like $(\lambda f:\tau\to\tau.M)N$. We can only apply $\to E$ where the left argument is a variable applied to a series of normal-form terms (e.g. $x$ as in your derivation, or $x(\lambda y.y)$ for another example).
      – Derek Elkins, Mar 31 at 19:41





































    Gentzen designed his logic to be normalizable. So for example, if a proof has $\land\text{-intro}$ followed by $\land\text{-elim}$ of the same introduced subexpression, then the proof can be simplified by removing those two steps.

    Type-theoretic proofs are represented as typed lambda expressions. Normalization corresponds to simplifying the lambda expression by following the usual evaluation rules. So a proof containing $\land\text{-intro}$ followed by $\land\text{-elim}$ would be represented by a lambda expression containing a pair $(x, y)$ (cons in Lisp terms) and a request for the first or second value of that expression (car or cdr in Lisp terms), so the simplification amounts to converting part of the proof from (car (cons x y)) to just x.

    Some of the consequences of this design are:

    (a) all (Gentzen-style intuitionistic positive fragment of propositional logic) proofs can be normalized

    (b) all normalized proofs only contain rules pertaining to the operators used in them (a theorem without a $\lor$ in it won't contain $\lor\text{-elim}$ in the proof, for example)

    (c) all normalized proofs only contain subexpressions of the theorem to be proven (and assumptions, if those aren't eliminated)

    So to prove there is no proof of $((p \to q) \to p) \to p$, you only need to check whether there is a normalized proof (a). And to do that you only need to work backwards from the final statement, for this statement only needing to check $\to\text{-intro}$ and $\to\text{-elim}$, since only $\to$ is present in the theorem (b). And you only need to check instances of the inference that come from subexpressions, 5 of them in this case (c). So...

    $$((p \to q) \to p) \to p$$

    can only be the result of $\to\text{-intro}$, so

    $$\begin{array}{ll}
    \quad (p \to q) \to p & \text{assumption} \\
    \quad \vdots \\
    \quad p \\
    ((p \to q) \to p) \to p & \to\text{-intro}
    \end{array}$$

    The only subexpression that allows $p$ to be constructed is $(p \to q) \to p$ with modus ponens, so

    $$\begin{array}{ll}
    \quad (p \to q) \to p & \text{assumption} \\
    \quad \vdots \\
    \quad p \to q \\
    \quad p & \to\text{-elim} \\
    ((p \to q) \to p) \to p & \to\text{-intro}
    \end{array}$$

    And the only way to construct $p \to q$ is

    $$\begin{array}{ll}
    \quad (p \to q) \to p & \text{assumption} \\
    \quad \quad p & \text{assumption} \\
    \quad \quad \vdots \\
    \quad \quad q \\
    \quad p \to q & \to\text{-intro} \\
    \quad p & \to\text{-elim} \\
    ((p \to q) \to p) \to p & \to\text{-intro}
    \end{array}$$

    And as there are no subexpressions left to allow the inference of $q$, this is an unprovable statement. This process could be automated with a (multi-)graph algorithm, using subexpressions as vertices and inferences as edges.
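
    The last remark can be made concrete. Below is a minimal sketch (in Python; all names are mine, not from the answer) of that backward search over the implicational fragment, restricted to normal-form proofs: an implication goal triggers $\to\text{-intro}$, and an atomic goal is closed by picking a hypothesis whose spine of premises can each be proved in turn ($\to\text{-elim}$); a depth bound guards against looping:

    ```python
    # Backward proof search for the ->-fragment of intuitionistic logic,
    # restricted to normal-form proofs. An atom is a string; an implication
    # a -> b is the tuple ('->', a, b). Illustrative sketch, not optimized.

    def prove(ctx, goal, depth=10):
        if depth == 0:
            return False
        if isinstance(goal, tuple):          # goal a -> b: assume a, prove b (->-intro)
            _, a, b = goal
            return prove(ctx | {a}, b, depth - 1)
        # goal is an atom: try each hypothesis h = a1 -> ... -> an -> goal (->-elim)
        for h in ctx:
            premises = []
            while isinstance(h, tuple):      # peel the spine of premises off h
                premises.append(h[1])
                h = h[2]
            if h == goal and all(prove(ctx, p, depth - 1) for p in premises):
                return True
        return False

    imp = lambda a, b: ('->', a, b)
    peirce = imp(imp(imp('p', 'q'), 'p'), 'p')
    print(prove(set(), peirce))                            # False: Peirce unprovable
    print(prove(set(), imp('p', 'p')))                     # True: the identity
    print(prove(set(), imp('p', imp(imp('p', 'q'), 'q'))))  # True: \x f. f x
    ```

    Run on Peirce's law, the search fails for exactly the reason traced above: the only hypothesis with head $p$ demands $p \to q$, and under the extra assumption $p$ there is no rule left that produces $q$.
    
    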
































        answered Mar 21 at 14:52









        DanielV



























