theory Tactic
imports Base
begin
chapter ‹Tactical reasoning›
text ‹
Tactical reasoning works by refining an initial claim in a backwards
fashion, until a solved form is reached. A ‹goal› consists of several
subgoals that need to be solved in order to achieve the main statement; zero
subgoals means that the proof may be finished. A ‹tactic› is a refinement
operation that maps a goal to a lazy sequence of potential successors. A
‹tactical› is a combinator for composing tactics.
›
section ‹Goals \label{sec:tactical-goals}›
text ‹
Isabelle/Pure represents a goal as a theorem stating that the subgoals imply
the main goal: ‹A⇩1 ⟹ … ⟹ A⇩n ⟹ C›. The outermost goal structure is that of
a Horn Clause: i.e.\ an iterated implication without any quantifiers⁋‹Recall
that outermost ‹⋀x. φ[x]› is always represented via schematic variables in
the body: ‹φ[?x]›. These variables may get instantiated during the course of
reasoning.›. For ‹n = 0› a goal is called ``solved''.
The structure of each subgoal ‹A⇩i› is that of a general Hereditary Harrop
Formula ‹⋀x⇩1 … ⋀x⇩k. H⇩1 ⟹ … ⟹ H⇩m ⟹ B›. Here ‹x⇩1, …, x⇩k› are goal
parameters, i.e.\ arbitrary-but-fixed entities of certain types, and ‹H⇩1,
…, H⇩m› are goal hypotheses, i.e.\ facts that may be assumed locally.
Together, this forms the goal context of the conclusion ‹B› to be
established. The goal hypotheses may be again arbitrary Hereditary Harrop
Formulas, although the level of nesting rarely exceeds 1--2 in practice.
The main conclusion ‹C› is internally marked as a protected proposition,
which is represented explicitly by the notation ‹#C› here. This ensures that
the decomposition into subgoals and main conclusion is well-defined for
arbitrarily structured claims.
┉
Basic goal management is performed via the following Isabelle/Pure rules:
\[
\infer[‹(init)›]{‹C ⟹ #C›}{} \qquad
\infer[‹(finish)›]{‹C›}{‹#C›}
\]
┉
The following low-level variants admit general reasoning with protected
propositions:
\[
\infer[‹(protect n)›]{‹A⇩1 ⟹ … ⟹ A⇩n ⟹ #C›}{‹A⇩1 ⟹ … ⟹ A⇩n ⟹ C›}
\]
\[
\infer[‹(conclude)›]{‹A ⟹ … ⟹ C›}{‹A ⟹ … ⟹ #C›}
\]
›
text %mlref ‹
\begin{mldecls}
@{define_ML Goal.init: "cterm -> thm"} \\
@{define_ML Goal.finish: "Proof.context -> thm -> thm"} \\
@{define_ML Goal.protect: "int -> thm -> thm"} \\
@{define_ML Goal.conclude: "thm -> thm"} \\
\end{mldecls}
➧ \<^ML>‹Goal.init›~‹C› initializes a tactical goal from the well-formed
proposition ‹C›.
➧ \<^ML>‹Goal.finish›~‹ctxt thm› checks whether theorem ‹thm› is a solved
goal (no subgoals), and concludes the result by removing the goal
protection. The context is only required for printing error messages.
➧ \<^ML>‹Goal.protect›~‹n thm› protects the statement of theorem ‹thm›. The
parameter ‹n› indicates the number of premises to be retained.
➧ \<^ML>‹Goal.conclude›~‹thm› removes the goal protection, even if there are
pending subgoals.
›
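text %mlex ‹
The following is a minimal sketch of the raw goal protocol in ML: a certified
proposition is turned into a tactical goal via \<^ML>‹Goal.init›, refined by a
given tactic, and the first result is turned into a theorem via
\<^ML>‹Goal.finish›. The function name is hypothetical, and the tactic is
assumed to solve the goal completely; otherwise \<^ML>‹Goal.finish› fails.
Realistic code would normally use the structured goal infrastructure of
\secref{sec:struct-goals} instead.
›

ML ‹
fun prove_by_tactic ctxt ct tac =
  (case Seq.pull (tac (Goal.init ct)) of
    SOME (st, _) => Goal.finish ctxt st
  | NONE => error "Tactic returned an empty result sequence");
›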
section ‹Tactics\label{sec:tactics}›
text ‹
A ‹tactic› is a function ‹goal → goal⇧*⇧*› that maps a given goal state
(represented as a theorem, cf.\ \secref{sec:tactical-goals}) to a lazy
sequence of potential successor states. The underlying sequence
implementation is lazy both in head and tail, and is purely functional in
∗‹not› supporting memoing.⁋‹The lack of memoing and the strict nature of ML
requires some care when working with low-level sequence operations, to avoid
duplicate or premature evaluation of results. It also means that modified
runtime behavior, such as timeout, is very hard to achieve for general
tactics.›
An ∗‹empty result sequence› means that the tactic has failed: in a compound
tactic expression other tactics might be tried instead, or the whole
refinement step might fail outright, producing a toplevel error message in
the end. When implementing tactics from scratch, one should take care to
observe the basic protocol of mapping regular error conditions to an empty
result; only serious faults should emerge as exceptions.
By enumerating ∗‹multiple results›, a tactic can easily express the
potential outcome of an internal search process. There are also combinators
for building proof tools that involve search systematically, see also
\secref{sec:tacticals}.
┉
As explained before, a goal state essentially consists of a list of subgoals
that imply the main goal (conclusion). Tactics may operate on all subgoals
or on a particularly specified subgoal, but must not change the main
conclusion (apart from instantiating schematic goal variables).
Tactics with explicit ∗‹subgoal addressing› are of the form ‹int → tactic›
and may be applied to a particular subgoal (counting from 1). If the subgoal
number is out of range, the tactic should fail with an empty result
sequence, but must not raise an exception!
Operating on a particular subgoal means to replace it by an interval of zero
or more subgoals in the same place; other subgoals must not be affected,
apart from instantiating schematic variables ranging over the whole goal
state.
A common pattern of composing tactics with subgoal addressing is to try the
first one, and then the second one only if the subgoal has not been solved
yet. Special care is required here to avoid bumping into unrelated subgoals
that happen to come after the original subgoal. Assuming that there is only
a single initial subgoal is a very common error when implementing tactics!
Tactics with internal subgoal addressing should expose the subgoal index as an
‹int› argument in full generality; a hardwired subgoal 1 is not acceptable.
┉
The main well-formedness conditions for proper tactics are summarized as
follows.
▪ General tactic failure is indicated by an empty result, only serious
faults may produce an exception.
▪ The main conclusion must not be changed, apart from instantiating
schematic variables.
▪ A tactic operates either uniformly on all subgoals, or specifically on a
selected subgoal (without bumping into unrelated subgoals).
▪ Range errors in subgoal addressing produce an empty result.
Some of these conditions are checked by higher-level goal infrastructure
(\secref{sec:struct-goals}); others are not checked explicitly, and
violating them merely results in ill-behaved tactics experienced by the user
(e.g.\ tactics that insist on being applicable only to singleton goals, or
prevent composition via standard tacticals such as \<^ML>‹REPEAT›).
›
text %mlref ‹
\begin{mldecls}
@{define_ML_type tactic = "thm -> thm Seq.seq"} \\
@{define_ML no_tac: tactic} \\
@{define_ML all_tac: tactic} \\
@{define_ML print_tac: "Proof.context -> string -> tactic"} \\[1ex]
@{define_ML PRIMITIVE: "(thm -> thm) -> tactic"} \\[1ex]
@{define_ML SUBGOAL: "(term * int -> tactic) -> int -> tactic"} \\
@{define_ML CSUBGOAL: "(cterm * int -> tactic) -> int -> tactic"} \\
@{define_ML SELECT_GOAL: "tactic -> int -> tactic"} \\
@{define_ML PREFER_GOAL: "tactic -> int -> tactic"} \\
\end{mldecls}
➧ Type \<^ML_type>‹tactic› represents tactics. The well-formedness conditions
described above need to be observed. See also 🗏‹~~/src/Pure/General/seq.ML›
for the underlying implementation of lazy sequences.
➧ Type \<^ML_type>‹int -> tactic› represents tactics with explicit subgoal
addressing, with well-formedness conditions as described above.
➧ \<^ML>‹no_tac› is a tactic that always fails, returning the empty sequence.
➧ \<^ML>‹all_tac› is a tactic that always succeeds, returning a singleton
sequence with unchanged goal state.
➧ \<^ML>‹print_tac›~‹ctxt message› is like \<^ML>‹all_tac›, but prints a message
together with the goal state on the tracing channel.
➧ \<^ML>‹PRIMITIVE›~‹rule› turns a primitive inference rule into a tactic with
unique result. Exception \<^ML>‹THM› is considered a regular tactic failure
and produces an empty result; other exceptions are passed through.
➧ \<^ML>‹SUBGOAL›~‹(fn (subgoal, i) => tactic)› is the most basic form to
produce a tactic with subgoal addressing. The given abstraction over the
subgoal term and subgoal number allows peeking at the relevant information
of the full goal state. The subgoal range is checked as required above.
➧ \<^ML>‹CSUBGOAL› is similar to \<^ML>‹SUBGOAL›, but passes the subgoal as
\<^ML_type>‹cterm› instead of raw \<^ML_type>‹term›. This avoids expensive
re-certification in situations where the subgoal is used directly for
primitive inferences.
➧ \<^ML>‹SELECT_GOAL›~‹tac i› confines a tactic to the specified subgoal ‹i›.
This rearranges subgoals and the main goal protection
(\secref{sec:tactical-goals}), while retaining the syntactic context of the
overall goal state (concerning schematic variables etc.).
➧ \<^ML>‹PREFER_GOAL›~‹tac i› rearranges subgoals to put ‹i› in front. This is
similar to \<^ML>‹SELECT_GOAL›, but without changing the main goal protection.
›
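text %mlex ‹
As a minimal sketch of a tactic with explicit subgoal addressing, the
following (hypothetical) tactic uses \<^ML>‹SUBGOAL› to inspect the addressed
subgoal: it resolves with the given rules only if that subgoal contains no
schematic variables, and otherwise fails with an empty result. Out-of-range
subgoal numbers are already handled by \<^ML>‹SUBGOAL› itself.
›

ML ‹
fun rigid_resolve_tac ctxt rules =
  SUBGOAL (fn (goal, i) =>
    if Term.exists_subterm Term.is_Var goal then no_tac
    else resolve_tac ctxt rules i);
›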
subsection ‹Resolution and assumption tactics \label{sec:resolve-assume-tac}›
text ‹
∗‹Resolution› is the most basic mechanism for refining a subgoal using a
theorem as object-level rule. ∗‹Elim-resolution› is particularly suited for
elimination rules: it resolves with a rule, proves its first premise by
assumption, and finally deletes that assumption from any new subgoals.
∗‹Destruct-resolution› is like elim-resolution, but the given destruction
rules are first turned into canonical elimination format.
∗‹Forward-resolution› is like destruct-resolution, but without deleting the
selected assumption. The ‹r/e/d/f› naming convention is maintained for
several different kinds of resolution rules and tactics.
Assumption tactics close a subgoal by unifying some of its premises against
its conclusion.
┉
All the tactics in this section operate on a subgoal designated by a
positive integer. Other subgoals might be affected indirectly, due to
instantiation of schematic variables.
There are various sources of non-determinism, the tactic result sequence
enumerates all possibilities of the following choices (if applicable):
▸ selecting one of the rules given as argument to the tactic;
▸ selecting a subgoal premise to eliminate, unifying it against the first
premise of the rule;
▸ unifying the conclusion of the subgoal to the conclusion of the rule.
Recall that higher-order unification may produce multiple results that are
enumerated here.
›
text %mlref ‹
\begin{mldecls}
@{define_ML resolve_tac: "Proof.context -> thm list -> int -> tactic"} \\
@{define_ML eresolve_tac: "Proof.context -> thm list -> int -> tactic"} \\
@{define_ML dresolve_tac: "Proof.context -> thm list -> int -> tactic"} \\
@{define_ML forward_tac: "Proof.context -> thm list -> int -> tactic"} \\
@{define_ML biresolve_tac: "Proof.context -> (bool * thm) list -> int -> tactic"} \\[1ex]
@{define_ML assume_tac: "Proof.context -> int -> tactic"} \\
@{define_ML eq_assume_tac: "int -> tactic"} \\[1ex]
@{define_ML match_tac: "Proof.context -> thm list -> int -> tactic"} \\
@{define_ML ematch_tac: "Proof.context -> thm list -> int -> tactic"} \\
@{define_ML dmatch_tac: "Proof.context -> thm list -> int -> tactic"} \\
@{define_ML bimatch_tac: "Proof.context -> (bool * thm) list -> int -> tactic"} \\
\end{mldecls}
➧ \<^ML>‹resolve_tac›~‹ctxt thms i› refines the goal state using the given
theorems, which should normally be introduction rules. The tactic resolves a
rule's conclusion with subgoal ‹i›, replacing it by the corresponding
versions of the rule's premises.
➧ \<^ML>‹eresolve_tac›~‹ctxt thms i› performs elim-resolution with the given
theorems, which should normally be elimination rules.
Note that \<^ML_text>‹eresolve_tac ctxt [asm_rl]› is equivalent to \<^ML_text>‹assume_tac ctxt›, which facilitates mixing of assumption steps with
genuine eliminations.
➧ \<^ML>‹dresolve_tac›~‹ctxt thms i› performs destruct-resolution with the
given theorems, which should normally be destruction rules. This replaces an
assumption by the result of applying one of the rules.
➧ \<^ML>‹forward_tac› is like \<^ML>‹dresolve_tac› except that the selected
assumption is not deleted. It applies a rule to an assumption, adding the
result as a new assumption.
➧ \<^ML>‹biresolve_tac›~‹ctxt brls i› refines the proof state by resolution or
elim-resolution on each rule, as indicated by its flag. It affects subgoal
‹i› of the proof state.
For each pair ‹(flag, rule)›, it applies resolution if the flag is ‹false›
and elim-resolution if the flag is ‹true›. A single tactic call handles a
mixture of introduction and elimination rules, which is useful to organize
the search process systematically in proof tools.
➧ \<^ML>‹assume_tac›~‹ctxt i› attempts to solve subgoal ‹i› by assumption
(modulo higher-order unification).
➧ \<^ML>‹eq_assume_tac› is similar to \<^ML>‹assume_tac›, but checks only for
immediate ‹α›-convertibility instead of using unification. It succeeds (with
a unique next state) if one of the assumptions is equal to the subgoal's
conclusion. Since it does not instantiate variables, it cannot make other
subgoals unprovable.
➧ \<^ML>‹match_tac›, \<^ML>‹ematch_tac›, \<^ML>‹dmatch_tac›, and \<^ML>‹bimatch_tac›
are similar to \<^ML>‹resolve_tac›, \<^ML>‹eresolve_tac›, \<^ML>‹dresolve_tac›,
and \<^ML>‹biresolve_tac›, respectively, but do not instantiate schematic
variables in the goal state.⁋‹Strictly speaking, matching means to treat the
unknowns in the goal state as constants, but these tactics merely discard
unifiers that would update the goal state. In rare situations (where the
conclusion and goal state have flexible terms at the same position), the
tactic will fail even though an acceptable unifier exists.› These tactics
were written for a specific application within the classical reasoner.
Flexible subgoals are not updated at will, but are left alone.
›
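text %mlex ‹
The following sketch combines the above tactics into a single (hypothetical)
proof step, using the \<^ML_infix>‹ORELSE› tactical described in
\secref{sec:tacticals}: it first tries assumption, then the given
introduction rules, then the given elimination rules on the addressed
subgoal.
›

ML ‹
fun step_tac ctxt intros elims i =
  assume_tac ctxt i ORELSE
  resolve_tac ctxt intros i ORELSE
  eresolve_tac ctxt elims i;
›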
subsection ‹Explicit instantiation within a subgoal context›
text ‹
The main resolution tactics (\secref{sec:resolve-assume-tac}) use
higher-order unification, which works well in many practical situations
despite its daunting theoretical properties. Nonetheless, there are
important problem classes where unguided higher-order unification is not so
useful. This typically involves rules like universal elimination,
existential introduction, or equational substitution. Here the unification
problem involves fully flexible ‹?P ?x› schemes, which are hard to manage
without further hints.
By providing a (small) rigid term for ‹?x› explicitly, the remaining
unification problem is to assign a (large) term to ‹?P›, according to the
shape of the given subgoal. This is sufficiently well-behaved in most
practical situations.
┉
Isabelle provides separate versions of the standard ‹r/e/d/f› resolution
tactics that allow providing explicit instantiations of unknowns of the
given rule, wrt.\ terms that refer to the implicit context of the selected
subgoal.
An instantiation consists of a list of pairs of the form ‹(?x, t)›, where
‹?x› is a schematic variable occurring in the given rule, and ‹t› is a term
from the current proof context, augmented by the local goal parameters of
the selected subgoal; cf.\ the ‹focus› operation described in
\secref{sec:variables}.
Entering the syntactic context of a subgoal is a brittle operation, because
its exact form is somewhat accidental, and the choice of bound variable
names depends on the presence of other local and global names. Explicit
renaming of subgoal parameters prior to explicit instantiation might help to
achieve a bit more robustness.
Type instantiations may be given as well, via pairs like ‹(?'a, τ)›. Type
instantiations are distinguished from term instantiations by the syntactic
form of the schematic variable. Types are instantiated before terms are.
Since term instantiation already performs simple type-inference, explicit
type instantiations are seldom necessary.
›
text %mlref ‹
\begin{mldecls}
@{define_ML Rule_Insts.res_inst_tac: "Proof.context ->
((indexname * Position.T) * string) list -> (binding * string option * mixfix) list ->
thm -> int -> tactic"} \\
@{define_ML Rule_Insts.eres_inst_tac: "Proof.context ->
((indexname * Position.T) * string) list -> (binding * string option * mixfix) list ->
thm -> int -> tactic"} \\
@{define_ML Rule_Insts.dres_inst_tac: "Proof.context ->
((indexname * Position.T) * string) list -> (binding * string option * mixfix) list ->
thm -> int -> tactic"} \\
@{define_ML Rule_Insts.forw_inst_tac: "Proof.context ->
((indexname * Position.T) * string) list -> (binding * string option * mixfix) list ->
thm -> int -> tactic"} \\
@{define_ML Rule_Insts.subgoal_tac: "Proof.context -> string ->
(binding * string option * mixfix) list -> int -> tactic"} \\
@{define_ML Rule_Insts.thin_tac: "Proof.context -> string ->
(binding * string option * mixfix) list -> int -> tactic"} \\
@{define_ML rename_tac: "string list -> int -> tactic"} \\
\end{mldecls}
➧ \<^ML>‹Rule_Insts.res_inst_tac›~‹ctxt insts thm i› instantiates the rule
‹thm› with the instantiations ‹insts›, as described above, and then performs
resolution on subgoal ‹i›.
➧ \<^ML>‹Rule_Insts.eres_inst_tac› is like \<^ML>‹Rule_Insts.res_inst_tac›, but
performs elim-resolution.
➧ \<^ML>‹Rule_Insts.dres_inst_tac› is like \<^ML>‹Rule_Insts.res_inst_tac›, but
performs destruct-resolution.
➧ \<^ML>‹Rule_Insts.forw_inst_tac› is like \<^ML>‹Rule_Insts.dres_inst_tac›
except that the selected assumption is not deleted.
➧ \<^ML>‹Rule_Insts.subgoal_tac›~‹ctxt φ i› adds the proposition ‹φ› as local
premise to subgoal ‹i›, and poses the same as a new subgoal ‹i + 1› (in the
original context).
➧ \<^ML>‹Rule_Insts.thin_tac›~‹ctxt φ i› deletes the specified premise from
subgoal ‹i›. Note that ‹φ› may contain schematic variables, to abbreviate
the intended proposition; the first matching subgoal premise will be
deleted. Removing useless premises from a subgoal increases its readability
and can make search tactics run faster.
➧ \<^ML>‹rename_tac›~‹names i› renames the innermost parameters of subgoal ‹i›
according to the provided ‹names› (which need to be distinct identifiers).
For historical reasons, the above instantiation tactics take unparsed string
arguments, which makes them hard to use in general ML code. The slightly
more advanced \<^ML>‹Subgoal.FOCUS› combinator of \secref{sec:struct-goals}
allows referring to internal goal structure with explicit context management.
›
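text %mlex ‹
The following sketch illustrates the raw ML interface for explicit
instantiation: the schematic variable ‹?x› of a given rule is instantiated
with an unparsed term (a string read in the context of subgoal ‹i›) before
resolution. The function name is hypothetical, and the rule is assumed to
actually contain ‹?x›.
›

ML ‹
fun res_inst_x_tac ctxt s rule i =
  Rule_Insts.res_inst_tac ctxt [((("x", 0), Position.none), s)] [] rule i;
›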
subsection ‹Rearranging goal states›
text ‹
In rare situations there is a need to rearrange goal states: either the
overall collection of subgoals, or the local structure of a subgoal. Various
administrative tactics allow operating on the concrete presentation of these
conceptual sets of formulae.
›
text %mlref ‹
\begin{mldecls}
@{define_ML rotate_tac: "int -> int -> tactic"} \\
@{define_ML distinct_subgoals_tac: tactic} \\
@{define_ML flexflex_tac: "Proof.context -> tactic"} \\
\end{mldecls}
➧ \<^ML>‹rotate_tac›~‹n i› rotates the premises of subgoal ‹i› by ‹n›
positions: from right to left if ‹n› is positive, and from left to right if
‹n› is negative.
➧ \<^ML>‹distinct_subgoals_tac› removes duplicate subgoals from a proof state.
This is potentially inefficient.
➧ \<^ML>‹flexflex_tac› removes all flex-flex pairs from the proof state by
applying the trivial unifier. This drastic step loses information. It is
already part of the Isar infrastructure for facts resulting from goals, and
rarely needs to be invoked manually.
Flex-flex constraints arise from difficult cases of higher-order
unification. To prevent this, use \<^ML>‹Rule_Insts.res_inst_tac› to
instantiate some variables in a rule. Normally flex-flex constraints can be
ignored; they often disappear as unknowns get instantiated.
›
subsection ‹Raw composition: resolution without lifting›
text ‹
Raw composition of two rules means resolving them without prior lifting or
renaming of unknowns. This low-level operation, which underlies the
resolution tactics, may occasionally be useful for special effects.
Schematic variables are not renamed by default, so beware of clashes!
›
text %mlref ‹
\begin{mldecls}
@{define_ML compose_tac: "Proof.context -> (bool * thm * int) -> int -> tactic"} \\
@{define_ML Drule.compose: "thm * int * thm -> thm"} \\
@{define_ML_infix COMP: "thm * thm -> thm"} \\
\end{mldecls}
➧ \<^ML>‹compose_tac›~‹ctxt (flag, rule, m) i› refines subgoal ‹i› using
‹rule›, without lifting. The ‹rule› is taken to have the form ‹ψ⇩1 ⟹ … ψ⇩m ⟹
ψ›, where ‹ψ› need not be atomic; thus ‹m› determines the number of new
subgoals. If ‹flag› is ‹true› then it performs elim-resolution --- it solves
the first premise of ‹rule› by assumption and deletes that assumption.
➧ \<^ML>‹Drule.compose›~‹(thm⇩1, i, thm⇩2)› uses ‹thm⇩1›, regarded as an
atomic formula, to solve premise ‹i› of ‹thm⇩2›. Let ‹thm⇩1› and ‹thm⇩2› be
‹ψ› and ‹φ⇩1 ⟹ … φ⇩n ⟹ φ›. The unique ‹s› that unifies ‹ψ› and ‹φ⇩i› yields
the theorem ‹(φ⇩1 ⟹ … φ⇩i⇩-⇩1 ⟹ φ⇩i⇩+⇩1 ⟹ … φ⇩n ⟹ φ)s›. Multiple results are
considered as error (exception \<^ML>‹THM›).
➧ ‹thm⇩1 COMP thm⇩2› is the same as ‹Drule.compose (thm⇩1, 1, thm⇩2)›.
\begin{warn}
These low-level operations are stepping outside the structure imposed by
regular rule resolution. Used without understanding of the consequences,
they may produce results that cause problems with standard rules and tactics
later on.
\end{warn}
›
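text %mlex ‹
The following is a minimal sketch of refinement by raw composition: the rule
is composed with subgoal ‹i› without lifting, taking all premises of the rule
as new subgoals. The function name is hypothetical, and the rule's conclusion
is assumed to unify directly with the (unlifted) subgoal.
›

ML ‹
fun compose_rule_tac ctxt rule i =
  compose_tac ctxt (false, rule, Thm.nprems_of rule) i;
›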
section ‹Tacticals \label{sec:tacticals}›
text ‹
A ∗‹tactical› is a functional combinator for building up complex tactics
from simpler ones. Common tacticals perform sequential composition,
disjunctive choice, iteration, or goal addressing. Various search strategies
may be expressed via tacticals.
›
subsection ‹Combining tactics›
text ‹
Sequential composition and alternative choices are the most basic ways to
combine tactics, similarly to ``▩‹,›'' and ``▩‹|›'' in Isar method notation.
This corresponds to \<^ML_infix>‹THEN› and \<^ML_infix>‹ORELSE› in ML, but there
are further possibilities for fine-tuning alternation of tactics such as
\<^ML_infix>‹APPEND›. Further details become visible in ML due to explicit
subgoal addressing.
›
text %mlref ‹
\begin{mldecls}
@{define_ML_infix "THEN": "tactic * tactic -> tactic"} \\
@{define_ML_infix "ORELSE": "tactic * tactic -> tactic"} \\
@{define_ML_infix "APPEND": "tactic * tactic -> tactic"} \\
@{define_ML "EVERY": "tactic list -> tactic"} \\
@{define_ML "FIRST": "tactic list -> tactic"} \\[0.5ex]
@{define_ML_infix "THEN'": "('a -> tactic) * ('a -> tactic) -> 'a -> tactic"} \\
@{define_ML_infix "ORELSE'": "('a -> tactic) * ('a -> tactic) -> 'a -> tactic"} \\
@{define_ML_infix "APPEND'": "('a -> tactic) * ('a -> tactic) -> 'a -> tactic"} \\
@{define_ML "EVERY'": "('a -> tactic) list -> 'a -> tactic"} \\
@{define_ML "FIRST'": "('a -> tactic) list -> 'a -> tactic"} \\
\end{mldecls}
➧ ‹tac⇩1›~\<^ML_infix>‹THEN›~‹tac⇩2› is the sequential composition of ‹tac⇩1› and
‹tac⇩2›. Applied to a goal state, it returns all states reachable in two
steps by applying ‹tac⇩1› followed by ‹tac⇩2›. First, it applies ‹tac⇩1› to
the goal state, getting a sequence of possible next states; then, it applies
‹tac⇩2› to each of these and concatenates the results to produce again one
flat sequence of states.
➧ ‹tac⇩1›~\<^ML_infix>‹ORELSE›~‹tac⇩2› makes a choice between ‹tac⇩1› and
‹tac⇩2›. Applied to a state, it tries ‹tac⇩1› and returns the result if
non-empty; if ‹tac⇩1› fails then it uses ‹tac⇩2›. This is a deterministic
choice: if ‹tac⇩1› succeeds then ‹tac⇩2› is excluded from the result.
➧ ‹tac⇩1›~\<^ML_infix>‹APPEND›~‹tac⇩2› concatenates the possible results of
‹tac⇩1› and ‹tac⇩2›. Unlike \<^ML_infix>‹ORELSE› there is ∗‹no commitment› to
either tactic, so \<^ML_infix>‹APPEND› helps to avoid incompleteness during
search, at the cost of potential inefficiencies.
➧ \<^ML>‹EVERY›~‹[tac⇩1, …, tac⇩n]› abbreviates ‹tac⇩1›~\<^ML_infix>‹THEN›~‹…›~\<^ML_infix>‹THEN›~‹tac⇩n›. Note that \<^ML>‹EVERY []› is the same as
\<^ML>‹all_tac›: it always succeeds.
➧ \<^ML>‹FIRST›~‹[tac⇩1, …, tac⇩n]› abbreviates ‹tac⇩1›~\<^ML_infix>‹ORELSE›~‹…›~\<^ML_infix>‹ORELSE›~‹tac⇩n›. Note that \<^ML>‹FIRST []› is the
same as \<^ML>‹no_tac›: it always fails.
➧ \<^ML_infix>‹THEN'› is the lifted version of \<^ML_infix>‹THEN›, for tactics
with explicit subgoal addressing. So ‹(tac⇩1›~\<^ML_infix>‹THEN'›~‹tac⇩2) i› is
the same as ‹(tac⇩1 i›~\<^ML_infix>‹THEN›~‹tac⇩2 i)›.
The other primed tacticals work analogously.
›
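text %mlex ‹
As a small sketch of the primed combinators, the following (hypothetical)
tactic resolves with one of the given rules and then applies assumption at
the same subgoal position; \<^ML_infix>‹THEN'› threads the subgoal index
through both parts. This idiom works best for rules that produce exactly one
new subgoal, which is then closed by assumption.
›

ML ‹
fun intro_step_tac ctxt rules =
  resolve_tac ctxt rules THEN' assume_tac ctxt;
›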
subsection ‹Repetition tacticals›
text ‹
These tacticals provide further control over repetition of tactics, beyond
the stylized forms of ``▩‹?›'' and ``▩‹+›'' in Isar method expressions.
›
text %mlref ‹
\begin{mldecls}
@{define_ML "TRY": "tactic -> tactic"} \\
@{define_ML "REPEAT": "tactic -> tactic"} \\
@{define_ML "REPEAT1": "tactic -> tactic"} \\
@{define_ML "REPEAT_DETERM": "tactic -> tactic"} \\
@{define_ML "REPEAT_DETERM_N": "int -> tactic -> tactic"} \\
\end{mldecls}
➧ \<^ML>‹TRY›~‹tac› applies ‹tac› to the goal state and returns the resulting
sequence, if non-empty; otherwise it returns the original state. Thus, it
applies ‹tac› at most once.
Note that for tactics with subgoal addressing, the combinator can be applied
via functional composition: \<^ML>‹TRY›~\<^ML_infix>‹o›~‹tac›. There is no need
for ▩‹TRY'›.
➧ \<^ML>‹REPEAT›~‹tac› applies ‹tac› to the goal state and, recursively, to
each element of the resulting sequence. The resulting sequence consists of
those states that make ‹tac› fail. Thus, it applies ‹tac› as many times as
possible (including zero times), and allows backtracking over each
invocation of ‹tac›. \<^ML>‹REPEAT› is more general than \<^ML>‹REPEAT_DETERM›,
but requires more space.
➧ \<^ML>‹REPEAT1›~‹tac› is like \<^ML>‹REPEAT›~‹tac› but it always applies ‹tac›
at least once, failing if this is impossible.
➧ \<^ML>‹REPEAT_DETERM›~‹tac› applies ‹tac› to the goal state and,
recursively, to the head of the resulting sequence. It returns the first
state to make ‹tac› fail. It is deterministic, discarding alternative
outcomes.
➧ \<^ML>‹REPEAT_DETERM_N›~‹n tac› is like \<^ML>‹REPEAT_DETERM›~‹tac› but the
number of repetitions is bound by ‹n› (where \<^ML>‹~1› means ‹∞›).
›
text %mlex ‹
The basic tactics and tacticals considered above follow some algebraic laws:
▪ \<^ML>‹all_tac› is the identity element of the tactical \<^ML_infix>‹THEN›.
▪ \<^ML>‹no_tac› is the identity element of \<^ML_infix>‹ORELSE› and \<^ML_infix>‹APPEND›. Also, it is a zero element for \<^ML_infix>‹THEN›, which means that
‹tac›~\<^ML_infix>‹THEN›~\<^ML>‹no_tac› is equivalent to \<^ML>‹no_tac›.
▪ \<^ML>‹TRY› and \<^ML>‹REPEAT› can be expressed as (recursive) functions over
more basic combinators (ignoring some internal implementation tricks):
›
ML ‹
fun TRY tac = tac ORELSE all_tac;
fun REPEAT tac st = ((tac THEN REPEAT tac) ORELSE all_tac) st;
›
text ‹
If ‹tac› can return multiple outcomes then so can \<^ML>‹REPEAT›~‹tac›. Since \<^ML>‹REPEAT› uses \<^ML_infix>‹ORELSE› and not \<^ML_infix>‹APPEND›, it applies ‹tac›
as many times as possible in each outcome.
\begin{warn}
Note the explicit abstraction over the goal state in the ML definition of
\<^ML>‹REPEAT›. Recursive tacticals must be coded in this awkward fashion to
avoid infinite recursion of eager functional evaluation in Standard ML. The
following attempt would make \<^ML>‹REPEAT›~‹tac› loop:
\end{warn}
›
ML_val ‹
fun REPEAT tac = (tac THEN REPEAT tac) ORELSE all_tac;
›
subsection ‹Applying tactics to subgoal ranges›
text ‹
Tactics with explicit subgoal addressing \<^ML_type>‹int -> tactic› can be
used together with tacticals that act like ``subgoal quantifiers'': guided
by success of the body tactic a certain range of subgoals is covered. Thus
the body tactic is applied to ∗‹all› subgoals, ∗‹some› subgoal etc.
Suppose that the goal state has ‹n ≥ 0› subgoals. Many of these tacticals
address subgoal ranges counting downwards from ‹n› towards ‹1›. This has the
fortunate effect that newly emerging subgoals are concatenated in the
result, without interfering with each other. Nonetheless, there might be
situations where a different order is desired.
›
text %mlref ‹
\begin{mldecls}
@{define_ML ALLGOALS: "(int -> tactic) -> tactic"} \\
@{define_ML SOMEGOAL: "(int -> tactic) -> tactic"} \\
@{define_ML FIRSTGOAL: "(int -> tactic) -> tactic"} \\
@{define_ML HEADGOAL: "(int -> tactic) -> tactic"} \\
@{define_ML REPEAT_SOME: "(int -> tactic) -> tactic"} \\
@{define_ML REPEAT_FIRST: "(int -> tactic) -> tactic"} \\
@{define_ML RANGE: "(int -> tactic) list -> int -> tactic"} \\
\end{mldecls}
➧ \<^ML>‹ALLGOALS›~‹tac› is equivalent to ‹tac n›~\<^ML_infix>‹THEN›~‹…›~\<^ML_infix>‹THEN›~‹tac 1›. It applies ‹tac› to all subgoals, counting downwards.
➧ \<^ML>‹SOMEGOAL›~‹tac› is equivalent to ‹tac n›~\<^ML_infix>‹ORELSE›~‹…›~\<^ML_infix>‹ORELSE›~‹tac 1›. It applies ‹tac› to one subgoal, counting downwards.
➧ \<^ML>‹FIRSTGOAL›~‹tac› is equivalent to ‹tac 1›~\<^ML_infix>‹ORELSE›~‹…›~\<^ML_infix>‹ORELSE›~‹tac n›. It applies ‹tac› to one subgoal, counting upwards.
➧ \<^ML>‹HEADGOAL›~‹tac› is equivalent to ‹tac 1›. It applies ‹tac›
unconditionally to the first subgoal.
➧ \<^ML>‹REPEAT_SOME›~‹tac› applies ‹tac› once or more to a subgoal, counting
downwards.
➧ \<^ML>‹REPEAT_FIRST›~‹tac› applies ‹tac› once or more to a subgoal, counting
upwards.
➧ \<^ML>‹RANGE›~‹[tac⇩1, …, tac⇩k] i› is equivalent to ‹tac⇩k (i + k -
1)›~\<^ML_infix>‹THEN›~‹…›~\<^ML_infix>‹THEN›~‹tac⇩1 i›. It applies the given list of
tactics to the corresponding range of subgoals, counting downwards.
›
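text %mlex ‹
A minimal sketch of subgoal quantification: the following (hypothetical)
tactic attempts to close every remaining subgoal by assumption; the overall
result is empty if any subgoal cannot be solved this way.
›

ML ‹
fun by_assumptions_tac ctxt = ALLGOALS (assume_tac ctxt);
›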
subsection ‹Control and search tacticals›
text ‹
A predicate on theorems \<^ML_type>‹thm -> bool› can test whether a goal
state enjoys some desirable property --- such as having no subgoals. Tactics
that search for satisfactory goal states are easy to express. The main
search procedures, depth-first, breadth-first and best-first, are provided
as tacticals. They generate the search tree by repeatedly applying a given
tactic.
›
text %mlref ""
subsubsection ‹Filtering a tactic's results›
text ‹
\begin{mldecls}
@{define_ML FILTER: "(thm -> bool) -> tactic -> tactic"} \\
@{define_ML CHANGED: "tactic -> tactic"} \\
\end{mldecls}
➧ \<^ML>‹FILTER›~‹sat tac› applies ‹tac› to the goal state and returns a
sequence consisting of those result goal states that are satisfactory in the
sense of ‹sat›.
➧ \<^ML>‹CHANGED›~‹tac› applies ‹tac› to the goal state and returns precisely
those states that differ from the original state (according to \<^ML>‹Thm.eq_thm›). Thus \<^ML>‹CHANGED›~‹tac› always has some effect on the state.
›
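text %mlex ‹
A typical (hypothetical) use of \<^ML>‹CHANGED› is to repeat a tactic only as
long as it makes visible progress, which prevents looping on tactics that may
succeed without changing the goal state.
›

ML ‹
fun repeat_changed_tac tac = REPEAT (CHANGED tac);
›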
subsubsection ‹Depth-first search›
text ‹
\begin{mldecls}
@{define_ML DEPTH_FIRST: "(thm -> bool) -> tactic -> tactic"} \\
@{define_ML DEPTH_SOLVE: "tactic -> tactic"} \\
@{define_ML DEPTH_SOLVE_1: "tactic -> tactic"} \\
\end{mldecls}
➧ \<^ML>‹DEPTH_FIRST›~‹sat tac› returns the goal state if ‹sat› returns true.
Otherwise it applies ‹tac›, then recursively searches from each element of
the resulting sequence. The code uses a stack for efficiency, in effect
applying ‹tac›~\<^ML_infix>‹THEN›~\<^ML>‹DEPTH_FIRST›~‹sat tac› to the state.
➧ \<^ML>‹DEPTH_SOLVE›~‹tac› uses \<^ML>‹DEPTH_FIRST› to search for states having
no subgoals.
➧ \<^ML>‹DEPTH_SOLVE_1›~‹tac› uses \<^ML>‹DEPTH_FIRST› to search for states
having fewer subgoals than the given state. Thus, it insists upon solving at
least one subgoal.
›
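text %mlex ‹
The following sketch performs depth-first search for a completely solved goal
state, using a given single-step tactic; the predicate \<^ML>‹Thm.no_prems›
tests for ``no subgoals''. The function name is hypothetical.
›

ML ‹
fun depth_solve_tac step = DEPTH_FIRST Thm.no_prems step;
›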
subsubsection ‹Other search strategies›
text ‹
\begin{mldecls}
@{define_ML BREADTH_FIRST: "(thm -> bool) -> tactic -> tactic"} \\
@{define_ML BEST_FIRST: "(thm -> bool) * (thm -> int) -> tactic -> tactic"} \\
@{define_ML THEN_BEST_FIRST: "tactic -> (thm -> bool) * (thm -> int) -> tactic -> tactic"} \\
\end{mldecls}
These search strategies will find a solution if one exists. However, they do
not enumerate all solutions; they terminate after the first satisfactory
result from ‹tac›.
➧ \<^ML>‹BREADTH_FIRST›~‹sat tac› uses breadth-first search to find states for
which ‹sat› is true. For most applications, it is too slow.
➧ \<^ML>‹BEST_FIRST›~‹(sat, dist) tac› does a heuristic search, using ‹dist›
to estimate the distance from a satisfactory state (in the sense of ‹sat›).
It maintains a list of states ordered by distance. It applies ‹tac› to the
head of this list; if the result contains any satisfactory states, then it
returns them. Otherwise, \<^ML>‹BEST_FIRST› adds the new states to the list,
and continues.
The distance function is typically \<^ML>‹size_of_thm›, which computes the
size of the state. The smaller the state, the fewer and simpler subgoals it
has.
➧ \<^ML>‹THEN_BEST_FIRST›~‹tac⇩0 (sat, dist) tac› is like \<^ML>‹BEST_FIRST›,
except that the priority queue initially contains the result of applying
‹tac⇩0› to the goal state. This tactical permits separate tactics for
starting the search and continuing the search.
›
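text %mlex ‹
The following sketch performs best-first search for a solved goal state,
taking the size of the goal state as distance heuristic, as suggested above.
The function name is hypothetical.
›

ML ‹
fun best_solve_tac step = BEST_FIRST (Thm.no_prems, size_of_thm) step;
›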
subsubsection ‹Auxiliary tacticals for searching›
text ‹
\begin{mldecls}
@{define_ML COND: "(thm -> bool) -> tactic -> tactic -> tactic"} \\
@{define_ML IF_UNSOLVED: "tactic -> tactic"} \\
@{define_ML SOLVE: "tactic -> tactic"} \\
@{define_ML DETERM: "tactic -> tactic"} \\
\end{mldecls}
➧ \<^ML>‹COND›~‹sat tac⇩1 tac⇩2› applies ‹tac⇩1› to the goal state if it
satisfies predicate ‹sat›, and applies ‹tac⇩2› otherwise. It is a conditional tactical
in that only one of ‹tac⇩1› and ‹tac⇩2› is applied to a goal state. However,
both ‹tac⇩1› and ‹tac⇩2› are evaluated because ML uses eager evaluation.
➧ \<^ML>‹IF_UNSOLVED›~‹tac› applies ‹tac› to the goal state if it has any
subgoals, and simply returns the goal state otherwise. Many common tactics,
such as \<^ML>‹resolve_tac›, fail if applied to a goal state that has no
subgoals.
➧ \<^ML>‹SOLVE›~‹tac› applies ‹tac› to the goal state and then fails iff there
are subgoals left.
➧ \<^ML>‹DETERM›~‹tac› applies ‹tac› to the goal state and returns the head of
the resulting sequence. \<^ML>‹DETERM› limits the search space by making its
argument deterministic.
›
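text %mlex ‹
As a small (hypothetical) sketch of \<^ML>‹COND›: if at most one subgoal is
left, attempt to finish the proof with the given tactic, otherwise leave the
goal state unchanged; \<^ML>‹has_fewer_prems› is described below.
›

ML ‹
fun finish_last_tac tac = COND (has_fewer_prems 2) tac all_tac;
›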
subsubsection ‹Predicates and functions useful for searching›
text ‹
\begin{mldecls}
@{define_ML has_fewer_prems: "int -> thm -> bool"} \\
@{define_ML Thm.eq_thm: "thm * thm -> bool"} \\
@{define_ML Thm.eq_thm_prop: "thm * thm -> bool"} \\
@{define_ML size_of_thm: "thm -> int"} \\
\end{mldecls}
➧ \<^ML>‹has_fewer_prems›~‹n thm› reports whether ‹thm› has fewer than ‹n›
premises.
➧ \<^ML>‹Thm.eq_thm›~‹(thm⇩1, thm⇩2)› reports whether ‹thm⇩1› and ‹thm⇩2› are
equal. Both theorems must have the same conclusions, the same set of
hypotheses, and the same set of sort hypotheses. Names of bound variables
are ignored as usual.
➧ \<^ML>‹Thm.eq_thm_prop›~‹(thm⇩1, thm⇩2)› reports whether the propositions of
‹thm⇩1› and ‹thm⇩2› are equal. Names of bound variables are ignored.
➧ \<^ML>‹size_of_thm›~‹thm› computes the size of ‹thm›, namely the number of
variables, constants and abstractions in its conclusion. It may serve as a
distance function for \<^ML>‹BEST_FIRST›.
›
end