theory Proof
imports Base
begin
chapter ‹Structured proofs›
section ‹Variables \label{sec:variables}›
text ‹
Any variable that is not explicitly bound by ‹λ›-abstraction is considered
as ``free''. Logically, free variables act like outermost universal
quantification at the sequent level: ‹A⇩1(x), …, A⇩n(x) ⊢ B(x)› means that
the result holds ∗‹for all› values of ‹x›. Free variables for terms (not
types) can be fully internalized into the logic: ‹⊢ B(x)› and ‹⊢ ⋀x. B(x)›
are interchangeable, provided that ‹x› does not occur elsewhere in the
context. Inspecting ‹⊢ ⋀x. B(x)› more closely, we see that inside the
quantifier, ‹x› is essentially ``arbitrary, but fixed'', while from outside
it appears as a place-holder for instantiation (thanks to ‹⋀› elimination).

The Pure logic represents the idea of variables being either inside or
outside the current scope by providing separate syntactic categories for
∗‹fixed variables› (e.g.\ ‹x›) vs.\ ∗‹schematic variables› (e.g.\ ‹?x›).
Incidentally, a universal result ‹⊢ ⋀x. B(x)› has the HHF normal form ‹⊢
B(?x)›, which represents its generality without requiring an explicit
quantifier. The same principle works for type variables: ‹⊢ B(?α)›
represents the idea of ``‹⊢ ∀α. B(α)›'' without demanding a truly
polymorphic framework.
┉
Additional care is required to treat type variables in a way that
facilitates type-inference. In principle, term variables depend on type
variables, which means that type variables would have to be declared first.
For example, a raw type-theoretic framework would demand the context to be
constructed in stages as follows: ‹Γ = α: type, x: α, a: A(x⇩α)›.

We allow a slightly less formalistic mode of operation: term variables ‹x›
are fixed without specifying a type yet (essentially ∗‹all› potential
occurrences of some instance ‹x⇩τ› are fixed); the first occurrence of ‹x›
within a specific term assigns its most general type, which is then
maintained consistently in the context. The above example becomes ‹Γ = x:
term, α: type, A(x⇩α)›, where type ‹α› is fixed ∗‹after› term ‹x›, and the
constraint ‹x :: α› is an implicit consequence of the occurrence of ‹x⇩α› in
the subsequent proposition.

This twist of dependencies is also accommodated by the reverse operation of
exporting results from a context: a type variable ‹α› is considered fixed as
long as it occurs in some fixed term variable of the context. For example,
exporting ‹x: term, α: type ⊢ x⇩α ≡ x⇩α› produces in the first step ‹x: term
⊢ x⇩α ≡ x⇩α› for fixed ‹α›, and only in the second step ‹⊢ ?x⇩?⇩α ≡ ?x⇩?⇩α›
for schematic ‹?x› and ‹?α›. The following Isar source text illustrates this
scenario.
›
notepad
begin
  {
    fix x
    {
      have "x::'a ≡ x"
        by (rule reflexive)
    }
    thm this  ― ‹result still for fixed ‹x› of fixed type ‹'a››
  }
  thm this  ― ‹exported result: schematic ‹?x› of schematic type ‹?'a››
end
text ‹
The Isabelle/Isar proof context manages the details of term vs.\ type
variables, with high-level principles for moving the frontier between fixed
and schematic variables.

The ‹add_fixes› operation explicitly declares fixed variables; the
‹declare_term› operation absorbs a term into a context by fixing new type
variables and adding syntactic constraints.

The ‹export› operation is able to perform the main work of generalizing term
and type variables as sketched above, assuming that the fixing of variables
and the declaration of terms have been performed properly.

The ‹import› operation makes a generalized fact a genuine part of the
context, by inventing fixed variables for the schematic ones. The effect can
be reversed by using ‹export› later, potentially with an extended context;
the result is equivalent to the original modulo renaming of schematic
variables.

The ‹focus› operation provides a variant of ‹import› for nested propositions
(with explicit quantification): ‹⋀x⇩1 … x⇩n. B(x⇩1, …, x⇩n)› is decomposed
by inventing fixed variables ‹x⇩1, …, x⇩n› for the body.
›
text %mlref ‹
\begin{mldecls}
@{define_ML Variable.add_fixes: "
string list -> Proof.context -> string list * Proof.context"} \\
@{define_ML Variable.variant_fixes: "
string list -> Proof.context -> string list * Proof.context"} \\
@{define_ML Variable.declare_term: "term -> Proof.context -> Proof.context"} \\
@{define_ML Variable.declare_constraints: "term -> Proof.context -> Proof.context"} \\
@{define_ML Variable.export: "Proof.context -> Proof.context -> thm list -> thm list"} \\
@{define_ML Variable.polymorphic: "Proof.context -> term list -> term list"} \\
@{define_ML Variable.import: "bool -> thm list -> Proof.context ->
((ctyp TVars.table * cterm Vars.table) * thm list)
* Proof.context"} \\
@{define_ML Variable.focus: "binding list option -> term -> Proof.context ->
((string * (string * typ)) list * term) * Proof.context"} \\
\end{mldecls}
➧ \<^ML>‹Variable.add_fixes›~‹xs ctxt› fixes term variables ‹xs›, returning
the resulting internal names. By default, the internal representation
coincides with the external one, which also means that the given variables
must not be fixed already. There is a different policy within a local proof
body: the given names are just hints for newly invented Skolem variables.
➧ \<^ML>‹Variable.variant_fixes› is similar to \<^ML>‹Variable.add_fixes›, but
always produces fresh variants of the given names.
➧ \<^ML>‹Variable.declare_term›~‹t ctxt› declares term ‹t› to belong to the
context. This automatically fixes new type variables, but not term
variables. Syntactic constraints for type and term variables are declared
uniformly, though.
➧ \<^ML>‹Variable.declare_constraints›~‹t ctxt› declares syntactic constraints
from term ‹t›, without making it part of the context yet.
➧ \<^ML>‹Variable.export›~‹inner outer thms› generalizes fixed type and term
variables in ‹thms› according to the difference of the ‹inner› and ‹outer›
context, following the principles sketched above.
➧ \<^ML>‹Variable.polymorphic›~‹ctxt ts› generalizes type variables in ‹ts› as
far as possible, even those occurring in fixed term variables. The default
policy of type-inference is to fix newly introduced type variables, which is
essentially reversed with \<^ML>‹Variable.polymorphic›: here the given terms
are detached from the context as far as possible.
➧ \<^ML>‹Variable.import›~‹open thms ctxt› invents fixed type and term
variables for the schematic ones occurring in ‹thms›. The ‹open› flag
indicates whether the fixed names should be accessible to the user,
otherwise newly introduced names are marked as ``internal''
(\secref{sec:names}).
➧ \<^ML>‹Variable.focus›~‹bindings B› decomposes the outermost ‹⋀› prefix of
proposition ‹B›, using the given name bindings.
›
text %mlex ‹
The following example shows how to work with fixed term and type parameters
and with type-inference.
›
ML_val ‹
  (*static compile-time context of the enclosing theory*)
  val ctxt0 = \<^context>;
  (*locally fixed parameters, no type assignment yet*)
  val ([x, y], ctxt1) = ctxt0 |> Variable.add_fixes ["x", "y"];
  (*t1: most general fixed type; t1': most general arbitrary type*)
  val t1 = Syntax.read_term ctxt1 "x";
  val t1' = singleton (Variable.polymorphic ctxt1) t1;
  (*u enforces a specific type assignment*)
  val u = Syntax.read_term ctxt1 "(x::nat) ≡ y";
  (*official declaration of u propagates the constraint x::nat*)
  val ctxt2 = ctxt1 |> Variable.declare_term u;
  val t2 = Syntax.read_term ctxt2 "x";
›
text ‹
In the above example, the starting context is derived from the toplevel
theory, which means that fixed variables are internalized literally: ‹x› is
mapped again to ‹x›, and attempting to fix it again in the subsequent
context is an error. Alternatively, fixed parameters can be renamed
explicitly as follows:
›
ML_val ‹
  val ctxt0 = \<^context>;
  val ([x1, x2, x3], ctxt1) =
    ctxt0 |> Variable.variant_fixes ["x", "x", "x"];
›
text ‹
The following ML code can now work with the invented names of ‹x1›, ‹x2›,
‹x3›, without depending on the details of the system policy for introducing
these variants. Recall that within a proof body the system always invents
fresh ``Skolem constants'', e.g.\ as follows:
›
notepad
begin
  ML_prf %"ML"
   ‹val ctxt0 = \<^context>;
    val ([x1], ctxt1) = ctxt0 |> Variable.add_fixes ["x"];
    val ([x2], ctxt2) = ctxt1 |> Variable.add_fixes ["x"];
    val ([x3], ctxt3) = ctxt2 |> Variable.add_fixes ["x"];
    val ([y1, y2], ctxt4) =
      ctxt3 |> Variable.variant_fixes ["y", "y"];›
end
text ‹
In this situation \<^ML>‹Variable.add_fixes› and \<^ML>‹Variable.variant_fixes›
are very similar, but identical name proposals given in a row are only
accepted by the second version.
›
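text ‹
The ‹import›, ‹export›, and ‹focus› operations can be exercised in the same
setting. The following is only a small sketch (assuming the ambient theory
context and using the Pure fact ‹reflexive› as an arbitrary test case): it
round-trips a schematic fact through \<^ML>‹Variable.import› and
\<^ML>‹Variable.export›, and decomposes a quantified proposition via
\<^ML>‹Variable.focus›.
›
ML_val ‹
  val ctxt0 = \<^context>;

  (*invent fixed variables for the schematic ones of ‹reflexive›*)
  val ((_, [th]), ctxt1) = Variable.import true [@{thm reflexive}] ctxt0;

  (*generalize again: equivalent to the original modulo renaming*)
  val [th'] = Variable.export ctxt1 ctxt0 [th];

  (*fix the outermost parameter of a proposition and expose its body*)
  val ((params, B), ctxt2) =
    Variable.focus NONE \<^prop>‹⋀x::'a. x ≡ x› ctxt0;
›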
section ‹Assumptions \label{sec:assumptions}›
text ‹
An ∗‹assumption› is a proposition that is postulated in the current
context. Local conclusions may use assumptions as additional facts, but this
imposes implicit hypotheses that weaken the overall statement.

Assumptions are restricted to fixed non-schematic statements, i.e.\ all
generality needs to be expressed by explicit quantifiers. Nevertheless, the
result will be in HHF normal form with outermost quantifiers stripped. For
example, by assuming ‹⋀x :: α. P x› we get ‹⋀x :: α. P x ⊢ P ?x› for
schematic ‹?x› of fixed type ‹α›. Local derivations accumulate more and more
explicit references to hypotheses: ‹A⇩1, …, A⇩n ⊢ B› where ‹A⇩1, …, A⇩n›
need to be covered by the assumptions of the current context.
┉
The ‹add_assms› operation augments the context by local assumptions, which
are parameterized by an arbitrary ‹export› rule (see below).

The ‹export› operation moves facts from a (larger) inner context into a
(smaller) outer context, by discharging the difference of the assumptions as
specified by the associated export rules. Note that the discharged portion
is determined by the difference of contexts, not the facts being exported!
There is a separate flag to indicate a goal context, where the result is
meant to refine an enclosing sub-goal of a structured proof state.
┉
The most basic export rule discharges assumptions directly by means of the
‹⟹› introduction rule:
\[
\infer[(‹⟹‐intro›)]{‹Γ - A ⊢ A ⟹ B›}{‹Γ ⊢ B›}
\]
The variant for goal refinements marks the newly introduced premises, which
causes the canonical Isar goal refinement scheme to enforce unification with
local premises within the goal:
\[
\infer[(‹#⟹‐intro›)]{‹Γ - A ⊢ #A ⟹ B›}{‹Γ ⊢ B›}
\]
┉
Alternative versions of assumptions may perform arbitrary transformations on
export, as long as the corresponding portion of hypotheses is removed from
the given facts. For example, a local definition works by fixing ‹x› and
assuming ‹x ≡ t›, with the following export rule to reverse the effect:
\[
\infer[(‹≡‐expand›)]{‹Γ - (x ≡ t) ⊢ B t›}{‹Γ ⊢ B x›}
\]
This works, because the assumption ‹x ≡ t› was introduced in a context with
‹x› being fresh, so ‹x› does not occur in ‹Γ› here.
›
text %mlref ‹
\begin{mldecls}
@{define_ML_type Assumption.export} \\
@{define_ML Assumption.assume: "Proof.context -> cterm -> thm"} \\
@{define_ML Assumption.add_assms:
"Assumption.export ->
cterm list -> Proof.context -> thm list * Proof.context"} \\
@{define_ML Assumption.add_assumes: "
cterm list -> Proof.context -> thm list * Proof.context"} \\
@{define_ML Assumption.export: "Proof.context -> Proof.context -> thm -> thm"} \\
@{define_ML Assumption.export_goal: "Proof.context -> Proof.context -> thm -> thm"} \\
@{define_ML Assumption.export_term: "Proof.context -> Proof.context -> term -> term"} \\
\end{mldecls}
➧ Type \<^ML_type>‹Assumption.export› represents export rules, as a pair of
functions \<^ML_type>‹bool -> cterm list -> (thm -> thm) * (term -> term)›.
The \<^ML_type>‹bool› argument indicates goal mode, and the \<^ML_type>‹cterm list›
the collection of assumptions to be discharged simultaneously.
➧ \<^ML>‹Assumption.assume›~‹ctxt A› turns proposition ‹A› into a primitive
assumption ‹A ⊢ A'›, where the conclusion ‹A'› is in HHF normal form.
➧ \<^ML>‹Assumption.add_assms›~‹r As› augments the context by assumptions ‹As›
with export rule ‹r›. The resulting facts are hypothetical theorems as
produced by the raw \<^ML>‹Assumption.assume›.
➧ \<^ML>‹Assumption.add_assumes›~‹As› is a special case of
\<^ML>‹Assumption.add_assms› where the export rule performs ‹⟹‐intro› or
‹#⟹‐intro›, depending on goal mode.
➧ \<^ML>‹Assumption.export›~‹inner outer thm› exports result ‹thm› from the
‹inner› context back into the ‹outer› one; \<^ML>‹Assumption.export_goal›
does the same in a goal context (⬚‹fix/assume/show› in Isabelle/Isar). The
result is always in HHF normal form. Note that \<^ML>‹Proof_Context.export›
combines \<^ML>‹Variable.export› and \<^ML>‹Assumption.export› in the
canonical way.
➧ \<^ML>‹Assumption.export_term›~‹inner outer t› exports term ‹t› from the
‹inner› context back into the ‹outer› one. This is analogous to
\<^ML>‹Assumption.export›, but only takes syntactical aspects of the
context into account (such as locally specified variables as seen in
@{command define} or @{command obtain}).
›
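text %mlex ‹
The HHF normalization of assumptions mentioned above can be observed
directly via \<^ML>‹Assumption.assume›. The following is merely a sketch,
assuming the ambient theory context; the predicate ‹P› is an arbitrary free
variable used for illustration.
›
ML_val ‹
  val ctxt = \<^context>;

  (*result: ‹⋀x. P x ⊢ P ?x› with schematic ‹?x› of fixed type ‹'a›*)
  val asm = Assumption.assume ctxt \<^cprop>‹⋀x::'a. P x›;
›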
text %mlex ‹
The following example demonstrates how rules can be derived by building up a
context of assumptions first, and exporting some local fact afterwards. We
refer to \<^theory>‹Pure› equality here for testing purposes.
›
ML_val ‹
  val ctxt0 = \<^context>;
  val ([eq], ctxt1) =
    ctxt0 |> Assumption.add_assumes [\<^cprop>‹x ≡ y›];
  val eq' = Thm.symmetric eq;

  (*back to the original context: discharges the assumption*)
  val r = Assumption.export ctxt1 ctxt0 eq';
›
text ‹
Note that the variables of the resulting rule are not generalized. This
would have required fixing them properly in the context beforehand, and
exporting them wrt.\ variables afterwards (cf.\ \<^ML>‹Variable.export› or the
combined \<^ML>‹Proof_Context.export›).
›
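text ‹
This can be sketched as a variation of the example above (again assuming
the ambient theory context): the variables are fixed first, so that the
combined \<^ML>‹Proof_Context.export› discharges the assumption and
generalizes ‹x› and ‹y› at the same time.
›
ML_val ‹
  val ctxt0 = \<^context>;
  val ([x, y], ctxt1) = ctxt0 |> Variable.add_fixes ["x", "y"];
  val ([eq], ctxt2) =
    ctxt1 |> Assumption.add_assumes [\<^cprop>‹x ≡ y›];

  (*discharge the assumption and generalize the fixed variables*)
  val r = singleton (Proof_Context.export ctxt2 ctxt0) (Thm.symmetric eq);
›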
section ‹Structured goals and results \label{sec:struct-goals}›
text ‹
Local results are established by monotonic reasoning from facts within a
context. This allows common combinations of theorems, e.g.\ via ‹⋀/⟹›
elimination, resolution rules, or equational reasoning, see
\secref{sec:thms}. Unaccounted context manipulations should be avoided,
notably raw ‹⋀/⟹› introduction or ad-hoc references to free variables or
assumptions not present in the proof context.
┉
The ‹SUBPROOF› combinator allows a tactical proof to be structured
recursively by decomposing a selected sub-goal: ‹(⋀x. A(x) ⟹ B(x)) ⟹ …› is
turned into ‹B(x) ⟹ …› after fixing ‹x› and assuming ‹A(x)›. This means the
tactic needs to solve the conclusion, but may use the premise as a local
fact, for locally fixed variables.

The family of ‹FOCUS› combinators is similar to ‹SUBPROOF›, but allows
schematic variables and pending subgoals to be retained in the resulting
goal state.

The ‹prove› operation provides an interface for structured backwards
reasoning under program control, with some explicit sanity checks of the
result. The goal context can be augmented by additional fixed variables
(cf.\ \secref{sec:variables}) and assumptions (cf.\
\secref{sec:assumptions}), which will be available as local facts during the
proof and discharged into implications in the result. Type and term
variables are generalized as usual, according to the context.

The ‹obtain› operation produces results by eliminating existing facts by
means of a given tactic. This acts like a dual conclusion: the proof
demonstrates that the context may be augmented by parameters and
assumptions, without affecting any conclusions that do not mention these
parameters. See also \<^cite>‹"isabelle-isar-ref"› for the corresponding Isar
proof command @{command obtain}. Final results, which may not refer to the
parameters in the conclusion, need to exported explicitly into the original
context.
›
text %mlref ‹
\begin{mldecls}
@{define_ML SUBPROOF: "(Subgoal.focus -> tactic) ->
Proof.context -> int -> tactic"} \\
@{define_ML Subgoal.FOCUS: "(Subgoal.focus -> tactic) ->
Proof.context -> int -> tactic"} \\
@{define_ML Subgoal.FOCUS_PREMS: "(Subgoal.focus -> tactic) ->
Proof.context -> int -> tactic"} \\
@{define_ML Subgoal.FOCUS_PARAMS: "(Subgoal.focus -> tactic) ->
Proof.context -> int -> tactic"} \\
@{define_ML Subgoal.focus: "Proof.context -> int -> binding list option ->
thm -> Subgoal.focus * thm"} \\
@{define_ML Subgoal.focus_prems: "Proof.context -> int -> binding list option ->
thm -> Subgoal.focus * thm"} \\
@{define_ML Subgoal.focus_params: "Proof.context -> int -> binding list option ->
thm -> Subgoal.focus * thm"} \\
\end{mldecls}
\begin{mldecls}
@{define_ML Goal.prove: "Proof.context -> string list -> term list -> term ->
({prems: thm list, context: Proof.context} -> tactic) -> thm"} \\
@{define_ML Goal.prove_common: "Proof.context -> int option ->
string list -> term list -> term list ->
({prems: thm list, context: Proof.context} -> tactic) -> thm list"} \\
\end{mldecls}
\begin{mldecls}
@{define_ML Obtain.result: "(Proof.context -> tactic) -> thm list ->
Proof.context -> ((string * cterm) list * thm list) * Proof.context"} \\
\end{mldecls}
➧ \<^ML>‹SUBPROOF›~‹tac ctxt i› decomposes the structure of the specified
sub-goal, producing an extended context and a reduced goal, which needs to
be solved by the given tactic. All schematic parameters of the goal are
imported into the context as fixed ones, which may not be instantiated in
the sub-proof.
➧ \<^ML>‹Subgoal.FOCUS›, \<^ML>‹Subgoal.FOCUS_PREMS›, and \<^ML>‹Subgoal.FOCUS_PARAMS› are similar to \<^ML>‹SUBPROOF›, but are slightly more
flexible: only the specified parts of the subgoal are imported into the
context, and the body tactic may introduce new subgoals and schematic
variables.
➧ \<^ML>‹Subgoal.focus›, \<^ML>‹Subgoal.focus_prems›, \<^ML>‹Subgoal.focus_params›
extract the focus information from a goal state in the same way as the
corresponding tacticals above. This is occasionally useful to experiment
without writing actual tactics yet.
➧ \<^ML>‹Goal.prove›~‹ctxt xs As C tac› states goal ‹C› in the context
augmented by fixed variables ‹xs› and assumptions ‹As›, and applies tactic
‹tac› to solve it. The latter may depend on the local assumptions being
presented as facts. The result is in HHF normal form.
➧ \<^ML>‹Goal.prove_common›~‹ctxt fork_pri› is the common form to state and
prove a simultaneous goal statement, where \<^ML>‹Goal.prove› is a convenient
shorthand that is most frequently used in applications.
The given list of simultaneous conclusions is encoded in the goal state by
means of Pure conjunction: \<^ML>‹Goal.conjunction_tac› will turn this into a
collection of individual subgoals, but note that the original multi-goal
state is usually required for advanced induction.
It is possible to provide an optional priority for a forked proof, typically
\<^ML>‹SOME ~1›, while \<^ML>‹NONE› means the proof is immediate (sequential)
as for \<^ML>‹Goal.prove›. Note that a forked proof does not exhibit any
failures in the usual way via exceptions in ML, but accumulates error
situations under the execution id of the running transaction. Thus the
system is able to expose error messages ultimately to the end-user, even
though the subsequent ML code misses them.
➧ \<^ML>‹Obtain.result›~‹tac thms ctxt› eliminates the given facts using a
tactic, which results in additional fixed variables and assumptions in the
context. Final results need to be exported explicitly.
›
text %mlex ‹
The following minimal example illustrates how to access the focus
information of a structured goal state.
›
notepad
begin
  fix A B C :: "'a ⇒ bool"

  have "⋀x. A x ⟹ B x ⟹ C x"
    ML_val
     ‹val {goal, context = goal_ctxt, ...} = @{Isar.goal};
      val (focus as {params, asms, concl, ...}, goal') =
        Subgoal.focus goal_ctxt 1 (SOME [\<^binding>‹x›]) goal;
      val [A, B] = #prems focus;
      val [(_, x)] = #params focus;›
    sorry
end
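text ‹
┉
The \<^ML>‹SUBPROOF› tactical performs the same kind of decomposition on the
fly. The following sketch uses an ad-hoc goal whose conclusion coincides
with one of its premises, so the focused sub-goal is solved directly from
the local facts; the statement and the names ‹A›, ‹B› are merely
illustrative.
›
notepad
begin
  fix A B :: "'a ⇒ bool"

  have "⋀x. A x ⟹ B x ⟹ A x"
    ML_val
     ‹val {goal, context = goal_ctxt, ...} = @{Isar.goal};
      val tac =
        SUBPROOF (fn {context = ctxt', prems, ...} =>
          resolve_tac ctxt' prems 1) goal_ctxt 1;

      (*the focused sub-goal is solved, leaving the final result*)
      val SOME (result, _) = Seq.pull (tac goal);›
    sorry
end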
text ‹
┉
The next example demonstrates forward-elimination in a local context, using
\<^ML>‹Obtain.result›.
›
notepad
begin
  assume ex: "∃x. B x"

  ML_prf %"ML"
   ‹val ctxt0 = \<^context>;
    val (([(_, x)], [B]), ctxt1) = ctxt0
      |> Obtain.result (fn _ => eresolve_tac ctxt0 @{thms exE} 1) [@{thm ex}];›
  ML_prf %"ML"
   ‹singleton (Proof_Context.export ctxt1 ctxt0) @{thm refl};›
  ML_prf %"ML"
   ‹Proof_Context.export ctxt1 ctxt0 [Thm.reflexive x]
      handle ERROR msg => (warning msg; []);›
end
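text ‹
┉
Finally, a minimal sketch of \<^ML>‹Goal.prove› (assuming the ambient theory
context): the statement ‹y ≡ x› is proved for fixed ‹x› and ‹y› under the
local assumption ‹x ≡ y›; the exported result discharges the assumption and
generalizes both variables.
›
ML_val ‹
  val ctxt = \<^context>;

  (*result: ‹?x ≡ ?y ⟹ ?y ≡ ?x› in HHF normal form*)
  val th =
    Goal.prove ctxt ["x", "y"] [\<^prop>‹x ≡ y›] \<^prop>‹y ≡ x›
      (fn {prems = [prem], context = goal_ctxt} =>
        resolve_tac goal_ctxt [Thm.symmetric prem] 1);
›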
end