theory Prelim
imports Base
begin
chapter ‹Preliminaries›
section ‹Contexts \label{sec:context}›
text ‹
A logical context represents the background that is required for formulating
statements and composing proofs. It acts as a medium to produce formal
content, depending on earlier material (declarations, results etc.).
For example, derivations within the Isabelle/Pure logic can be described as
a judgment ‹Γ ⊢⇩Θ φ›, which means that a proposition ‹φ› is derivable from
hypotheses ‹Γ› within the theory ‹Θ›. There are logical reasons for keeping
‹Θ› and ‹Γ› separate: theories can be liberal about supporting type
constructors and schematic polymorphism of constants and axioms, while the
inner calculus of ‹Γ ⊢ φ› is strictly limited to Simple Type Theory (with
fixed type variables in the assumptions).
┉
Contexts and derivations are linked by the following key principles:
▪ Transfer: monotonicity of derivations admits results to be transferred
into a ∗‹larger› context, i.e.\ ‹Γ ⊢⇩Θ φ› implies ‹Γ' ⊢⇩Θ⇩' φ› for contexts
‹Θ' ⊇ Θ› and ‹Γ' ⊇ Γ›.
▪ Export: discharge of hypotheses admits results to be exported into a
∗‹smaller› context, i.e.\ ‹Γ' ⊢⇩Θ φ› implies ‹Γ ⊢⇩Θ Δ ⟹ φ› where ‹Γ' ⊇ Γ›
and ‹Δ = Γ' - Γ›. Note that ‹Θ› remains unchanged here, only the ‹Γ› part is
affected.
┉
By modeling the main characteristics of the primitive ‹Θ› and ‹Γ› above, and
abstracting over any particular logical content, we arrive at the
fundamental notions of ∗‹theory context› and ∗‹proof context› in
Isabelle/Isar. These implement a certain policy to manage arbitrary
∗‹context data›. There is a strongly-typed mechanism to declare new kinds of
data at compile time.
The internal bootstrap process of Isabelle/Pure eventually reaches a stage
where certain data slots provide the logical content of ‹Θ› and ‹Γ› sketched
above, but it does not stop there! Various additional data slots support
all kinds of mechanisms that are not necessarily part of the core logic.
For example, there would be data for canonical introduction and elimination
rules for arbitrary operators (depending on the object-logic and
application), which enables users to perform standard proof steps implicitly
(cf.\ the ‹rule› method \<^cite>‹"isabelle-isar-ref"›).
┉
Thus Isabelle/Isar is able to bring forth more and more concepts
successively. In particular, an object-logic like Isabelle/HOL continues the
Isabelle/Pure setup by adding specific components for automated reasoning
(classical reasoner, tableau prover, structured induction etc.) and derived
specification mechanisms (inductive predicates, recursive functions etc.).
All of this is ultimately based on the generic data management by theory and
proof contexts introduced here.
›
subsection ‹Theory context \label{sec:context-theory}›
text ‹
A ∗‹theory› is a data container with explicit name and unique identifier.
Theories are related by a (nominal) sub-theory relation, which corresponds
to the dependency graph of the original construction; each theory is derived
from a certain sub-graph of ancestor theories. To this end, the system
maintains a set of symbolic ``identification stamps'' within each theory.
The ‹begin› operation starts a new theory by importing several parent
theories (with merged contents) and entering a special mode of nameless
incremental updates, until the final ‹end› operation is performed.
┉
The example in \figref{fig:ex-theory} below shows a theory graph derived
from ‹Pure›, with theory ‹Length› importing ‹Nat› and ‹List›. The body of
‹Length› consists of a sequence of updates, resulting locally in a linear
sub-theory relation for each intermediate step.
\begin{figure}[htb]
\begin{center}
\begin{tabular}{rcccl}
& & ‹Pure› \\
& & ‹↓› \\
& & ‹FOL› \\
& $\swarrow$ & & $\searrow$ & \\
‹Nat› & & & & ‹List› \\
& $\searrow$ & & $\swarrow$ \\
& & ‹Length› \\
& & \multicolumn{3}{l}{~~@{keyword "begin"}} \\
& & $\vdots$~~ \\
& & \multicolumn{3}{l}{~~@{command "end"}} \\
\end{tabular}
\caption{A theory definition depending on ancestors}\label{fig:ex-theory}
\end{center}
\end{figure}
┉
Derived formal entities may retain a reference to the background theory in
order to indicate the formal context from which they were produced. This
provides an immutable certificate of the background theory.
›
text %mlref ‹
\begin{mldecls}
@{define_ML_type theory} \\
@{define_ML Context.eq_thy: "theory * theory -> bool"} \\
@{define_ML Context.subthy: "theory * theory -> bool"} \\
@{define_ML Theory.begin_theory: "string * Position.T -> theory list -> theory"} \\
@{define_ML Theory.parents_of: "theory -> theory list"} \\
@{define_ML Theory.ancestors_of: "theory -> theory list"} \\
\end{mldecls}
➧ Type \<^ML_type>‹theory› represents theory contexts.
➧ \<^ML>‹Context.eq_thy›~‹(thy⇩1, thy⇩2)› checks strict identity of two
theories.
➧ \<^ML>‹Context.subthy›~‹(thy⇩1, thy⇩2)› compares theories according to the
intrinsic graph structure of the construction. This sub-theory relation is a
nominal approximation of inclusion (‹⊆›) of the corresponding content
(according to the semantics of the ML modules that implement the data).
➧ \<^ML>‹Theory.begin_theory›~‹name parents› constructs a new theory based
on the given parents. This ML function is normally not invoked directly.
➧ \<^ML>‹Theory.parents_of›~‹thy› returns the direct ancestors of ‹thy›.
➧ \<^ML>‹Theory.ancestors_of›~‹thy› returns all ancestors of ‹thy› (not
including ‹thy› itself).
›
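text %mlex ‹
  The following minimal sketch illustrates the sub-theory relation, using the
  ‹@{theory}› antiquotation described below to refer to the current
  background theory:
›

ML_val ‹
  val thy = \<^theory>;

  (*the sub-theory relation is reflexive*)
  \<^assert> (Context.eq_thy (thy, thy));
  \<^assert> (Context.subthy (thy, thy));

  (*each ancestor is a sub-theory of the current theory*)
  \<^assert> (forall (fn thy0 => Context.subthy (thy0, thy)) (Theory.ancestors_of thy));
›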
text %mlantiq ‹
\begin{matharray}{rcl}
@{ML_antiquotation_def "theory"} & : & ‹ML_antiquotation› \\
@{ML_antiquotation_def "theory_context"} & : & ‹ML_antiquotation› \\
\end{matharray}
\<^rail>‹
@@{ML_antiquotation theory} embedded?
;
@@{ML_antiquotation theory_context} embedded
›
➧ ‹@{theory}› refers to the background theory of the current context --- as
abstract value.
➧ ‹@{theory A}› refers to an explicitly named ancestor theory ‹A› of the
background theory of the current context --- as abstract value.
➧ ‹@{theory_context A}› is similar to ‹@{theory A}›, but presents the result
as initial \<^ML_type>‹Proof.context› (see also \<^ML>‹Proof_Context.init_global›).
›
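text %mlex ‹
  For example, theory ‹Pure› is always an ancestor of the current background
  theory, so the following minimal check succeeds:
›

ML_val ‹
  \<^assert> (Context.subthy (\<^theory>‹Pure›, \<^theory>));
›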
subsection ‹Proof context \label{sec:context-proof}›
text ‹
A proof context is a container for pure data that refers to the theory from
which it is derived. The ‹init› operation creates a proof context from a
given theory. There is an explicit ‹transfer› operation to force
resynchronization with updates to the background theory --- this is rarely
required in practice.
Entities derived in a proof context need to record logical requirements
explicitly, since there is no separate context identification or symbolic
inclusion as for theories. For example, hypotheses used in primitive
derivations (cf.\ \secref{sec:thms}) are recorded separately within the
sequent ‹Γ ⊢ φ›, just to make double sure. Results could still leak into an
alien proof context due to programming errors, but Isabelle/Isar includes
some extra validity checks in critical positions, notably at the end of a
sub-proof.
Proof contexts may be manipulated arbitrarily, although the common
discipline is to follow block structure as a mental model: a given context
is extended consecutively, and results are exported back into the original
context. Note that an Isar proof state models block-structured reasoning
explicitly, using a stack of proof contexts internally. For various
technical reasons, the background theory of an Isar proof state must not be
changed while the proof is still under construction!
›
text %mlref ‹
\begin{mldecls}
@{define_ML_type Proof.context} \\
@{define_ML Proof_Context.init_global: "theory -> Proof.context"} \\
@{define_ML Proof_Context.theory_of: "Proof.context -> theory"} \\
@{define_ML Proof_Context.transfer: "theory -> Proof.context -> Proof.context"} \\
\end{mldecls}
➧ Type \<^ML_type>‹Proof.context› represents proof contexts.
➧ \<^ML>‹Proof_Context.init_global›~‹thy› produces a proof context derived
from ‹thy›, initializing all data.
➧ \<^ML>‹Proof_Context.theory_of›~‹ctxt› selects the background theory from
‹ctxt›.
➧ \<^ML>‹Proof_Context.transfer›~‹thy ctxt› promotes the background theory of
‹ctxt› to the super theory ‹thy›.
›
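text %mlex ‹
  A minimal sketch of these operations, starting from the current background
  theory:
›

ML_val ‹
  val ctxt = Proof_Context.init_global \<^theory>;
  \<^assert> (Context.eq_thy (Proof_Context.theory_of ctxt, \<^theory>));
›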
text %mlantiq ‹
\begin{matharray}{rcl}
@{ML_antiquotation_def "context"} & : & ‹ML_antiquotation› \\
\end{matharray}
➧ ‹@{context}› refers to ∗‹the› context at compile-time --- as abstract
value. Independently of (local) theory or proof mode, this always produces a
meaningful result.
This is probably the most common antiquotation in interactive
experimentation with ML inside Isar.
›
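text %mlex ‹
  For example, the background theory of the compile-time context is always
  available like this:
›

ML_val ‹Proof_Context.theory_of \<^context>›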
subsection ‹Generic contexts \label{sec:generic-context}›
text ‹
A generic context is the disjoint sum of either a theory or proof context.
Occasionally, this enables uniform treatment of generic context data,
typically extra-logical information. Operations on generic contexts include
the usual injections, partial selections, and combinators for lifting
operations on either component of the disjoint sum.
Moreover, there are total operations ‹theory_of› and ‹proof_of› to convert a
generic context into either kind: a theory can always be selected from the
sum, while a proof context might have to be constructed by an ad-hoc ‹init›
operation, which incurs a small runtime overhead.
›
text %mlref ‹
\begin{mldecls}
@{define_ML_type Context.generic} \\
@{define_ML Context.theory_of: "Context.generic -> theory"} \\
@{define_ML Context.proof_of: "Context.generic -> Proof.context"} \\
\end{mldecls}
➧ Type \<^ML_type>‹Context.generic› is the direct sum of \<^ML_type>‹theory›
and \<^ML_type>‹Proof.context›, with the datatype constructors \<^ML>‹Context.Theory› and \<^ML>‹Context.Proof›.
➧ \<^ML>‹Context.theory_of›~‹context› always produces a theory from the
generic ‹context›, using \<^ML>‹Proof_Context.theory_of› as required.
➧ \<^ML>‹Context.proof_of›~‹context› always produces a proof context from the
generic ‹context›, using \<^ML>‹Proof_Context.init_global› as required (note
that this re-initializes the context data with each invocation).
›
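text %mlex ‹
  A minimal sketch of the injections and the total projections:
›

ML_val ‹
  (*injection of the compile-time proof context*)
  val context = Context.Proof \<^context>;
  \<^assert> (Context.eq_thy (Context.theory_of context, \<^theory>));

  (*projection from a theory re-initializes the context data*)
  val ctxt = Context.proof_of (Context.Theory \<^theory>);
  \<^assert> (Context.eq_thy (Proof_Context.theory_of ctxt, \<^theory>));
›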
subsection ‹Context data \label{sec:context-data}›
text ‹
The main purpose of theory and proof contexts is to manage arbitrary (pure)
data. New data types can be declared incrementally at compile time. There
are separate declaration mechanisms for any of the three kinds of contexts:
theory, proof, generic.
›
paragraph ‹Theory data›
text ‹declarations need to implement the following ML signature:
┉
\begin{tabular}{ll}
‹\<type> T› & representing type \\
‹\<val> empty: T› & empty default value \\
‹\<val> extend: T → T› & obsolete (identity function) \\
‹\<val> merge: T × T → T› & merge data \\
\end{tabular}
┉
The ‹empty› value acts as initial default for ∗‹any› theory that does not
declare actual data content; ‹extend› is obsolete: it needs to be the
identity function.
The ‹merge› operation needs to join the data from two theories in a
conservative manner. The standard scheme for ‹merge (data⇩1, data⇩2)›
inserts those parts of ‹data⇩2› into ‹data⇩1› that are not yet present,
while keeping the general order of things. The \<^ML>‹Library.merge›
function on plain lists may serve as canonical template. Particularly note
that shared parts of the data must not be duplicated by naive concatenation,
or a theory graph that resembles a chain of diamonds would cause an
exponential blowup!
Sometimes, the data consists of a single item that cannot be ``merged'' in a
sensible manner. Then the standard scheme degenerates to the projection to
‹data⇩1›, ignoring ‹data⇩2› outright.
›
paragraph ‹Proof context data›
text ‹declarations need to implement the following ML signature:
┉
\begin{tabular}{ll}
‹\<type> T› & representing type \\
‹\<val> init: theory → T› & produce initial value \\
\end{tabular}
┉
The ‹init› operation is supposed to produce a pure value from the given
background theory and should be somehow ``immediate''. Whenever a proof
context is initialized, which happens frequently, the system invokes the
‹init› operation of ∗‹all› proof data slots ever declared. This also means
that one needs to be economical about the total number of proof data
declarations in the system, i.e.\ each ML module should declare at most one,
sometimes two data slots for its internal use. Repeated data declarations to
simulate a record type should be avoided!
›
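text ‹
  ┉
  A minimal sketch of such a declaration follows; the ‹Int_Data› slot and its
  content are purely hypothetical:
›

ML ‹
  structure Int_Data = Proof_Data
  (
    type T = int;
    fun init _ = 0;  (*pure and immediate, as required*)
  );
›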
paragraph ‹Generic data›
text ‹
provides a hybrid interface for both theory and proof data. The ‹init›
operation for proof contexts is predefined to select the current data value
from the background theory.
━
Any of the above data declarations over type ‹T› results in an ML structure
with the following signature:
┉
\begin{tabular}{ll}
‹get: context → T› \\
‹put: T → context → context› \\
‹map: (T → T) → context → context› \\
\end{tabular}
┉
These operations provide exclusive access for the particular kind of
context (theory, proof, or generic context). This interface observes the ML
discipline for types and scopes: there is no other way to access the
corresponding data slot of a context. By keeping these operations private,
an Isabelle/ML module may maintain abstract values authentically.
›
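text ‹
  ┉
  For example, the hypothetical ‹Int_Data› slot declared above provides
  ‹get›, ‹put›, ‹map› for proof contexts:
›

ML_val ‹
  \<^assert> (Int_Data.get \<^context> = 0);
  val ctxt = Int_Data.map (fn i => i + 1) \<^context>;
  \<^assert> (Int_Data.get ctxt = 1);
›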
text %mlref ‹
\begin{mldecls}
@{define_ML_functor Theory_Data} \\
@{define_ML_functor Proof_Data} \\
@{define_ML_functor Generic_Data} \\
\end{mldecls}
➧ \<^ML_functor>‹Theory_Data›‹(spec)› declares data for type \<^ML_type>‹theory›
according to the specification provided as argument structure. The resulting
structure provides data init and access operations as described above.
➧ \<^ML_functor>‹Proof_Data›‹(spec)› is analogous to \<^ML_functor>‹Theory_Data›
for type \<^ML_type>‹Proof.context›.
➧ \<^ML_functor>‹Generic_Data›‹(spec)› is analogous to \<^ML_functor>‹Theory_Data›
for type \<^ML_type>‹Context.generic›.
›
text %mlex ‹
The following artificial example demonstrates theory data: we maintain a set
of terms that are supposed to be wellformed wrt.\ the enclosing theory. The
public interface is as follows:
›
ML ‹
signature WELLFORMED_TERMS =
sig
val get: theory -> term list
val add: term -> theory -> theory
end;
›
text ‹
The implementation uses private theory data internally, and only exposes an
operation that involves explicit argument checking wrt.\ the given theory.
›
ML ‹
structure Wellformed_Terms: WELLFORMED_TERMS =
struct
structure Terms = Theory_Data
(
type T = term Ord_List.T;
val empty = [];
fun merge (ts1, ts2) =
Ord_List.union Term_Ord.fast_term_ord ts1 ts2;
);
val get = Terms.get;
fun add raw_t thy =
let
val t = Sign.cert_term thy raw_t;
in
Terms.map (Ord_List.insert Term_Ord.fast_term_ord t) thy
end;
end;
›
text ‹
Type \<^ML_type>‹term Ord_List.T› is used for reasonably efficient
representation of a set of terms: all operations are linear in the number of
stored elements. Here we assume that users of this module do not care about
the declaration order, since that data structure forces its own arrangement
of elements.
Observe how the \<^ML_text>‹merge› operation joins the data slots of the two
constituents: \<^ML>‹Ord_List.union› prevents duplication of common data from
different branches, thus avoiding the danger of exponential blowup. Plain
list append etc.\ must never be used for theory data merges!
┉
Our intended invariant is achieved as follows:
▸ \<^ML>‹Wellformed_Terms.add› only admits terms that have passed the \<^ML>‹Sign.cert_term› check of the given theory at that point.
▸ Wellformedness in the sense of \<^ML>‹Sign.cert_term› is monotonic wrt.\
the sub-theory relation. So our data can move upwards in the hierarchy
(via extension or merges), and maintain wellformedness without further
checks.
Note that all basic operations of the inference kernel (which includes \<^ML>‹Sign.cert_term›) observe this monotonicity principle, but other user-space
tools don't. For example, fully-featured type-inference via \<^ML>‹Syntax.check_term› (cf.\ \secref{sec:term-check}) is not necessarily
monotonic wrt.\ the background theory, since constraints of term constants
can be modified by later declarations.
In most cases, user-space context data does not have to take such invariants
too seriously. The situation is different in the implementation of the
inference kernel itself, which uses the very same data mechanisms for types,
constants, axioms etc.
›
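text ‹
  ┉
  For comparison, here is a minimal sketch of generic data, with a
  hypothetical ‹Hints› slot that works for both theory and proof contexts.
  The ‹merge› operation follows the conservative scheme described above.
›

ML ‹
  structure Hints = Generic_Data
  (
    type T = string list;
    val empty = [];
    val merge = Library.merge (op =);  (*no duplication of shared data*)
  );
›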
subsection ‹Configuration options \label{sec:config-options}›
text ‹
A ∗‹configuration option› is a named optional value of some basic type
(Boolean, integer, real, string) that is stored in the context. It is a simple
application of general context data (\secref{sec:context-data}) that is
sufficiently common to justify customized setup, which includes some
concrete declarations for end-users using existing notation for attributes
(cf.\ \secref{sec:attributes}).
For example, the predefined configuration option @{attribute show_types}
controls output of explicit type constraints for variables in printed terms
(cf.\ \secref{sec:read-print}). Its value can be modified within Isar text
like this:
›
experiment
begin
declare [[show_types = false]]
notepad
begin
note [[show_types = true]]
term x
have "x = x"
using [[show_types = false]]
..
end
end
text ‹
Configuration options that are not set explicitly hold a default value that
can depend on the application context. This makes it possible to retrieve the
value from another slot within the context, or to fall back on a global
preference mechanism, for example.
The operations to declare configuration options and get/map their values are
modeled as direct replacements for historic global references, only that the
context is made explicit. This allows easy configuration of tools, without
relying on the execution order as required for old-style mutable
references.
›
text %mlref ‹
\begin{mldecls}
@{define_ML Config.get: "Proof.context -> 'a Config.T -> 'a"} \\
@{define_ML Config.map: "'a Config.T -> ('a -> 'a) -> Proof.context -> Proof.context"} \\
@{define_ML Attrib.setup_config_bool: "binding -> (Context.generic -> bool) ->
bool Config.T"} \\
@{define_ML Attrib.setup_config_int: "binding -> (Context.generic -> int) ->
int Config.T"} \\
@{define_ML Attrib.setup_config_real: "binding -> (Context.generic -> real) ->
real Config.T"} \\
@{define_ML Attrib.setup_config_string: "binding -> (Context.generic -> string) ->
string Config.T"} \\
\end{mldecls}
➧ \<^ML>‹Config.get›~‹ctxt config› gets the value of ‹config› in the given
context.
➧ \<^ML>‹Config.map›~‹config f ctxt› updates the context by updating the value
of ‹config›.
➧ ‹config =›~\<^ML>‹Attrib.setup_config_bool›~‹name default› creates a named
configuration option of type \<^ML_type>‹bool›, with the given ‹default›
depending on the application context. The resulting ‹config› can be used to
get/map its value in a given context. There is an implicit update of the
background theory that registers the option as attribute with some concrete
syntax.
➧ \<^ML>‹Attrib.setup_config_int›, \<^ML>‹Attrib.setup_config_real›, and
\<^ML>‹Attrib.setup_config_string› work like \<^ML>‹Attrib.setup_config_bool›,
but for types \<^ML_type>‹int›, \<^ML_type>‹real›, and \<^ML_type>‹string›,
respectively.
›
text %mlex ‹
The following example shows how to declare and use a Boolean configuration
option called ‹my_flag› with constant default value \<^ML>‹false›.
›
ML ‹
val my_flag =
Attrib.setup_config_bool \<^binding>‹my_flag› (K false)
›
text ‹
Now the user can refer to @{attribute my_flag} in declarations, while ML
tools can retrieve the current value from the context via \<^ML>‹Config.get›.
›
ML_val ‹\<^assert> (Config.get \<^context> my_flag = false)›
declare [[my_flag = true]]
ML_val ‹\<^assert> (Config.get \<^context> my_flag = true)›
notepad
begin
{
note [[my_flag = false]]
ML_val ‹\<^assert> (Config.get \<^context> my_flag = false)›
}
ML_val ‹\<^assert> (Config.get \<^context> my_flag = true)›
end
text ‹
Here is another example involving ML type \<^ML_type>‹real› (floating-point
numbers).
›
ML ‹
val airspeed_velocity =
Attrib.setup_config_real \<^binding>‹airspeed_velocity› (K 0.0)
›
declare [[airspeed_velocity = 10]]
declare [[airspeed_velocity = 9.9]]
section ‹Names \label{sec:names}›
text ‹
In principle, a name is just a string, but there are various conventions for
representing additional structure. For example, ``‹Foo.bar.baz›'' is
considered a long name consisting of qualifier ‹Foo.bar› and base name
‹baz›. The individual constituents of a name may have further substructure,
e.g.\ the string ``▩‹α›'' encodes as a single symbol (\secref{sec:symbols}).
┉
Subsequently, we shall introduce specific categories of names. Roughly
speaking these correspond to logical entities as follows:
▪ Basic names (\secref{sec:basic-name}): free and bound variables.
▪ Indexed names (\secref{sec:indexname}): schematic variables.
▪ Long names (\secref{sec:long-name}): constants of any kind (type
constructors, term constants, other concepts defined in user space). Such
entities are typically managed via name spaces (\secref{sec:name-space}).
›
subsection ‹Basic names \label{sec:basic-name}›
text ‹
A ∗‹basic name› essentially consists of a single Isabelle identifier. There
are conventions to mark separate classes of basic names, by attaching a
suffix of underscores: one underscore means ∗‹internal name›, two
underscores means ∗‹Skolem name›, three underscores means ∗‹internal Skolem
name›.
For example, the basic name ‹foo› has the internal version ‹foo_›, with
Skolem versions ‹foo__› and ‹foo___›, respectively.
These special versions provide copies of the basic name space, apart from
anything that normally appears in the user text. For example, system
generated variables in Isar proof contexts are usually marked as internal,
which prevents mysterious names like ‹xaa› from appearing in human-readable
text.
┉
Manipulating binding scopes often requires on-the-fly renamings. A ∗‹name
context› contains a collection of already used names. The ‹declare›
operation adds names to the context.
The ‹invent› operation derives a number of fresh names from a given
starting point. For example, the first three names derived from ‹a› are ‹a›,
‹b›, ‹c›.
The ‹variant› operation produces fresh names by incrementing tentative
names as base-26 numbers (with digits ‹a..z›) until all clashes are
resolved. For example, name ‹foo› results in variants ‹fooa›, ‹foob›,
‹fooc›, \dots, ‹fooaa›, ‹fooab› etc.; each renaming step picks the next
unused variant from this sequence.
›
text %mlref ‹
\begin{mldecls}
@{define_ML Name.internal: "string -> string"} \\
@{define_ML Name.skolem: "string -> string"} \\
\end{mldecls}
\begin{mldecls}
@{define_ML_type Name.context} \\
@{define_ML Name.context: Name.context} \\
@{define_ML Name.declare: "string -> Name.context -> Name.context"} \\
@{define_ML Name.invent: "Name.context -> string -> int -> string list"} \\
@{define_ML Name.variant: "string -> Name.context -> string * Name.context"} \\
\end{mldecls}
\begin{mldecls}
@{define_ML Variable.names_of: "Proof.context -> Name.context"} \\
\end{mldecls}
➧ \<^ML>‹Name.internal›~‹name› produces an internal name by adding one
underscore.
➧ \<^ML>‹Name.skolem›~‹name› produces a Skolem name by adding two underscores.
➧ Type \<^ML_type>‹Name.context› represents the context of already used names;
the initial value is \<^ML>‹Name.context›.
➧ \<^ML>‹Name.declare›~‹name› enters a used name into the context.
➧ \<^ML>‹Name.invent›~‹context name n› produces ‹n› fresh names derived from
‹name›.
➧ \<^ML>‹Name.variant›~‹name context› produces a fresh variant of ‹name›; the
result is declared to the context.
➧ \<^ML>‹Variable.names_of›~‹ctxt› retrieves the context of declared type and
term variable names. Projecting a proof context down to a primitive name
context is occasionally useful when invoking lower-level operations. Regular
management of ``fresh variables'' is done by suitable operations of
structure \<^ML_structure>‹Variable›, which is also able to provide an
official status of ``locally fixed variable'' within the logical environment
(cf.\ \secref{sec:variables}).
›
text %mlex ‹
The following simple examples demonstrate how to produce fresh names from
the initial \<^ML>‹Name.context›.
›
ML_val ‹
val list1 = Name.invent Name.context "a" 5;
\<^assert> (list1 = ["a", "b", "c", "d", "e"]);
val list2 =
#1 (fold_map Name.variant ["x", "x", "a", "a", "'a", "'a"] Name.context);
\<^assert> (list2 = ["x", "xa", "a", "aa", "'a", "'aa"]);
›
text ‹
┉
The same works relative to the formal context as follows.›
experiment fixes a b c :: 'a
begin
ML_val ‹
val names = Variable.names_of \<^context>;
val list1 = Name.invent names "a" 5;
\<^assert> (list1 = ["d", "e", "f", "g", "h"]);
val list2 =
#1 (fold_map Name.variant ["x", "x", "a", "a", "'a", "'a"] names);
\<^assert> (list2 = ["x", "xa", "aa", "ab", "'aa", "'ab"]);
›
end
subsection ‹Indexed names \label{sec:indexname}›
text ‹
An ∗‹indexed name› (or ‹indexname›) is a pair of a basic name and a natural
number. This representation allows efficient renaming by incrementing the
second component only. The canonical way to rename two collections of
indexnames apart from each other is this: determine the maximum index
‹maxidx› of the first collection, then increment all indexes of the second
collection by ‹maxidx + 1›; the maximum index of an empty collection is
‹-1›.
Occasionally, basic names are injected into the same pair type of indexed
names: then ‹(x, -1)› is used to encode the basic name ‹x›.
┉
Isabelle syntax observes the following rules for representing an indexname
‹(x, i)› as a packed string:
▪ ‹?x› if ‹x› does not end with a digit and ‹i = 0›,
▪ ‹?xi› if ‹x› does not end with a digit,
▪ ‹?x.i› otherwise.
Indexnames may acquire large index numbers after several maxidx shifts have
been applied. Results are usually normalized towards ‹0› at certain
checkpoints, notably at the end of a proof. This works by producing variants
of the corresponding basic name components. For example, the collection
‹?x1, ?x7, ?x42› becomes ‹?x, ?xa, ?xb›.
›
text %mlref ‹
\begin{mldecls}
@{define_ML_type indexname = "string * int"} \\
\end{mldecls}
➧ Type \<^ML_type>‹indexname› represents indexed names. This is an
abbreviation for \<^ML_type>‹string * int›. The second component is usually
non-negative, except for situations where ‹(x, -1)› is used to inject basic
names into this type. Other negative indexes should not be used.
›
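text %mlex ‹
  Indexed names are plain ML pairs. The packed syntax rules above can be
  illustrated as follows (printed forms given as comments):
›

ML_val ‹
  val a = ("x", 0);   (*printed as ?x*)
  val b = ("x", 7);   (*printed as ?x7*)
  val c = ("x1", 2);  (*printed as ?x1.2 --- the dot is required here*)
›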
subsection ‹Long names \label{sec:long-name}›
text ‹
A ∗‹long name› consists of a sequence of non-empty name components. The
packed representation uses a dot as separator, as in ``‹A.b.c›''. The last
component is called ∗‹base name›, the remaining prefix is called
∗‹qualifier› (which may be empty). The qualifier can be understood as the
access path to the named entity while passing through some nested
block-structure, although our free-form long names do not really enforce any
strict discipline.
For example, an item named ``‹A.b.c›'' may be understood as a local entity
‹c›, within a local structure ‹b›, within a global structure ‹A›. In
practice, long names usually represent 1--3 levels of qualification. User ML
code should not make any assumptions about the particular structure of long
names!
The empty name is commonly used as an indication of unnamed entities, or
entities that are not entered into the corresponding name space, whenever
this makes any sense. The basic operations on long names map empty names
again to empty names.
›
text %mlref ‹
\begin{mldecls}
@{define_ML Long_Name.base_name: "string -> string"} \\
@{define_ML Long_Name.qualifier: "string -> string"} \\
@{define_ML Long_Name.append: "string -> string -> string"} \\
@{define_ML Long_Name.implode: "string list -> string"} \\
@{define_ML Long_Name.explode: "string -> string list"} \\
\end{mldecls}
➧ \<^ML>‹Long_Name.base_name›~‹name› returns the base name of a long name.
➧ \<^ML>‹Long_Name.qualifier›~‹name› returns the qualifier of a long name.
➧ \<^ML>‹Long_Name.append›~‹name⇩1 name⇩2› appends two long names.
➧ \<^ML>‹Long_Name.implode›~‹names› and \<^ML>‹Long_Name.explode›~‹name› convert
between the packed string representation and the explicit list form of long
names.
›
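text %mlex ‹
  The following minimal checks illustrate these operations on a packed long
  name:
›

ML_val ‹
  \<^assert> (Long_Name.base_name "A.b.c" = "c");
  \<^assert> (Long_Name.qualifier "A.b.c" = "A.b");
  \<^assert> (Long_Name.append "A.b" "c" = "A.b.c");
  \<^assert> (Long_Name.explode "A.b.c" = ["A", "b", "c"]);
  \<^assert> (Long_Name.implode ["A", "b", "c"] = "A.b.c");
›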
subsection ‹Name spaces \label{sec:name-space}›
text ‹
A ‹name space› manages a collection of long names, together with a mapping
between partially qualified external names and fully qualified internal
names (in both directions). Note that the corresponding ‹intern› and
‹extern› operations are mostly used for parsing and printing only! The
‹declare› operation augments a name space according to the accesses
determined by a given binding, and a naming policy from the context.
┉
A ‹binding› specifies details about the prospective long name of a newly
introduced formal entity. It consists of a base name, prefixes for
qualification (separate ones for system infrastructure and user-space
mechanisms), a slot for the original source position, and some additional
flags.
┉
A ‹naming› provides some additional details for producing a long name from a
binding. Normally, the naming is implicit in the theory or proof context.
The ‹full› operation (and its variants for different context types) produces
a fully qualified internal name to be entered into a name space. The main
equation of this ``chemical reaction'' when binding new entities in a
context is as follows:
┉
\begin{tabular}{l}
‹binding + naming ⟶ long name + name space accesses›
\end{tabular}
━
As a general principle, there is a separate name space for each kind of
formal entity, e.g.\ fact, logical constant, type constructor, type class.
It is usually clear from the occurrence in concrete syntax (or from the
scope) which kind of entity a name refers to. For example, the very same
name ‹c› may be used uniformly for a constant, type constructor, and type
class.
There are common schemes to name derived entities systematically according
to the name of the main logical entity involved, e.g.\ fact ‹c.intro› for a
canonical introduction rule related to constant ‹c›. This technique of
mapping names from one space into another requires some care in order to
avoid conflicts. In particular, theorem names derived from a type
constructor or type class should get an additional suffix in addition to the
usual qualification. This leads to the following conventions for derived
names:
┉
\begin{tabular}{ll}
logical entity & fact name \\\hline
constant ‹c› & ‹c.intro› \\
type ‹c› & ‹c_type.intro› \\
class ‹c› & ‹c_class.intro› \\
\end{tabular}
›
text %mlref ‹
\begin{mldecls}
@{define_ML_type binding} \\
@{define_ML Binding.empty: binding} \\
@{define_ML Binding.name: "string -> binding"} \\
@{define_ML Binding.qualify: "bool -> string -> binding -> binding"} \\
@{define_ML Binding.prefix: "bool -> string -> binding -> binding"} \\
@{define_ML Binding.concealed: "binding -> binding"} \\
@{define_ML Binding.print: "binding -> string"} \\
\end{mldecls}
\begin{mldecls}
@{define_ML_type Name_Space.naming} \\
@{define_ML Name_Space.global_naming: Name_Space.naming} \\
@{define_ML Name_Space.add_path: "string -> Name_Space.naming -> Name_Space.naming"} \\
@{define_ML Name_Space.full_name: "Name_Space.naming -> binding -> string"} \\
\end{mldecls}
\begin{mldecls}
@{define_ML_type Name_Space.T} \\
@{define_ML Name_Space.empty: "string -> Name_Space.T"} \\
@{define_ML Name_Space.merge: "Name_Space.T * Name_Space.T -> Name_Space.T"} \\
@{define_ML Name_Space.declare: "Context.generic -> bool ->
binding -> Name_Space.T -> string * Name_Space.T"} \\
@{define_ML Name_Space.intern: "Name_Space.T -> string -> string"} \\
@{define_ML Name_Space.extern: "Proof.context -> Name_Space.T -> string -> string"} \\
@{define_ML Name_Space.is_concealed: "Name_Space.T -> string -> bool"}
\end{mldecls}
➧ Type \<^ML_type>‹binding› represents the abstract concept of name bindings.
➧ \<^ML>‹Binding.empty› is the empty binding.
➧ \<^ML>‹Binding.name›~‹name› produces a binding with base name ‹name›. Note
that this lacks proper source position information; see also the ML
antiquotation @{ML_antiquotation binding}.
➧ \<^ML>‹Binding.qualify›~‹mandatory name binding› prefixes qualifier ‹name›
to ‹binding›. The ‹mandatory› flag tells if this name component always needs
to be given in name space accesses --- this is mostly ‹false› in practice.
Note that this part of qualification is typically used in derived
specification mechanisms.
➧ \<^ML>‹Binding.prefix› is similar to \<^ML>‹Binding.qualify›, but affects the
system prefix. This part of extra qualification is typically used in the
infrastructure for modular specifications, notably ``local theory targets''
(see also \chref{ch:local-theory}).
➧ \<^ML>‹Binding.concealed›~‹binding› indicates that the binding shall refer
to an entity that serves foundational purposes only. This flag helps to mark
implementation details of specification mechanisms etc. Other tools should
not depend on the particulars of concealed entities (cf.\ \<^ML>‹Name_Space.is_concealed›).
➧ \<^ML>‹Binding.print›~‹binding› produces a string representation for
human-readable output, together with some formal markup that might get used
in GUI front-ends, for example.
➧ Type \<^ML_type>‹Name_Space.naming› represents the abstract concept of a
naming policy.
➧ \<^ML>‹Name_Space.global_naming› is the default naming policy: it is global
and lacks any path prefix. In a regular theory context this is augmented by
a path prefix consisting of the theory name.
➧ \<^ML>‹Name_Space.add_path›~‹path naming› augments the naming policy by
extending its path component.
➧ \<^ML>‹Name_Space.full_name›~‹naming binding› turns a name binding (usually
a basic name) into the fully qualified internal name, according to the given
naming policy.
➧ Type \<^ML_type>‹Name_Space.T› represents name spaces.
➧ \<^ML>‹Name_Space.empty›~‹kind› and \<^ML>‹Name_Space.merge›~‹(space⇩1,
space⇩2)› are the canonical operations for maintaining name spaces according
to theory data management (\secref{sec:context-data}); ‹kind› is a formal
comment to characterize the purpose of a name space.
➧ \<^ML>‹Name_Space.declare›~‹context strict binding space› enters a name
binding as fully qualified internal name into the name space, using the
naming of the context.
➧ \<^ML>‹Name_Space.intern›~‹space name› internalizes a (partially qualified)
external name.
This operation is mostly for parsing! Note that fully qualified names
stemming from declarations are produced via \<^ML>‹Name_Space.full_name› and
\<^ML>‹Name_Space.declare› (or their derivatives for \<^ML_type>‹theory› and
\<^ML_type>‹Proof.context›).
➧ \<^ML>‹Name_Space.extern›~‹ctxt space name› externalizes a (fully qualified)
internal name.
This operation is mostly for printing! User code should not rely on the
precise result too much.
➧ \<^ML>‹Name_Space.is_concealed›~‹space name› indicates whether ‹name› refers
to a strictly private entity that other tools are supposed to ignore!
›
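text %mlex ‹
  A minimal sketch of the ``chemical reaction'' above: a binding is turned
  into a fully qualified name via an explicit naming policy (the path
  ‹My_Theory› is hypothetical):
›

ML_val ‹
  val naming = Name_Space.add_path "My_Theory" Name_Space.global_naming;
  \<^assert> (Name_Space.full_name naming (Binding.name "foo") = "My_Theory.foo");
›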
text %mlantiq ‹
\begin{matharray}{rcl}
@{ML_antiquotation_def "binding"} & : & ‹ML_antiquotation› \\
\end{matharray}
\<^rail>‹
@@{ML_antiquotation binding} embedded
›
➧ ‹@{binding name}› produces a binding with base name ‹name› and the source
position taken from the concrete syntax of this antiquotation. In many
situations this is more appropriate than the more basic \<^ML>‹Binding.name›
function.
›
text %mlex ‹
The following example yields the source position of some concrete binding
inlined into the text:
›
ML_val ‹Binding.pos_of \<^binding>‹here››
text ‹
┉
That position can also be printed in a message as follows:
›
ML_command
‹writeln
("Look here" ^ Position.here (Binding.pos_of \<^binding>‹here›))›
text ‹
This illustrates a key virtue of formalized bindings as opposed to raw
specifications of base names: the system can use this additional information
for feedback given to the user (error messages etc.).
┉
The following example refers to its source position directly, which is
occasionally useful for experimentation and diagnostic purposes:
›
ML_command ‹warning ("Look here" ^ Position.here ⌂)›
end