theory Syntax
imports Base
begin
chapter ‹Concrete syntax and type-checking›
text ‹
Pure ‹λ›-calculus as introduced in \chref{ch:logic} is an adequate
foundation for logical languages --- in the tradition of ∗‹higher-order
abstract syntax› --- but end-users require additional means for reading and
printing of terms and types. This important add-on outside the logical core
is called ∗‹inner syntax› in Isabelle jargon, as opposed to the ∗‹outer
syntax› of the theory and proof language \<^cite>‹"isabelle-isar-ref"›.
For example, according to \<^cite>‹church40› quantifiers are represented as
higher-order constants ‹All :: ('a ⇒ bool) ⇒ bool› such that ‹All (λx::'a. B
x)› faithfully represents the idea that is displayed in Isabelle as ‹∀x::'a.
B x› via @{keyword "binder"} notation. Moreover, type-inference in the style
of Hindley-Milner \<^cite>‹hindleymilner› (and extensions) enables users to
write ‹∀x. B x› concisely, when the type ‹'a› is already clear from the
context.⁋‹Type-inference taken to the extreme can easily confuse users.
Beginners often stumble over unexpectedly general types inferred by the
system.›
┉
The main inner syntax operations are ∗‹read› for parsing together with
type-checking, and ∗‹pretty› for formatted output. See also
\secref{sec:read-print}.
Furthermore, the input and output syntax layers are sub-divided into
separate phases for ∗‹concrete syntax› versus ∗‹abstract syntax›, see also
\secref{sec:parse-unparse} and \secref{sec:term-check}, respectively. This
results in the following decomposition of the main operations:
▪ ‹read = parse; check›
▪ ‹pretty = uncheck; unparse›
For example, some specification package might thus intercept syntax
processing at a well-defined stage after ‹parse›, to augment the resulting
pre-term before full type-reconstruction is performed by ‹check›. Note that
the formal status of bound variables, versus free variables, versus
constants must not be changed between these phases.
┉
In general, ‹check› and ‹uncheck› operate simultaneously on a list of terms.
This is particularly important for type-checking, in order to reconstruct types for
several terms of the same context and scope. In contrast, ‹parse› and
‹unparse› operate separately on single terms.
There are analogous operations to read and print types, with the same
sub-division into phases.
›
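text %mlex ‹
The decomposition of ‹read› into ‹parse› and ‹check› can be observed directly
in Isabelle/ML. The following hypothetical sketch reads a type in one step,
and again in two explicit phases:
›

ML_val ‹
  val ctxt = \<^context>;

  (*combined read operation*)
  val T = Syntax.read_typ ctxt "'a ⇒ 'b";

  (*the same, split into parse and check phases*)
  val T' = singleton (Syntax.check_typs ctxt) (Syntax.parse_typ ctxt "'a ⇒ 'b");

  \<^assert> (T = T');
›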
section ‹Reading and pretty printing \label{sec:read-print}›
text ‹
Read and print operations are roughly dual to each other, such that for the
user ‹s' = pretty (read s)› looks similar to the original source text ‹s›,
but the details depend on many side-conditions. There are also explicit
options to control the removal of type information in the output. The
default configuration routinely loses information, so ‹t' = read (pretty
t)› might fail, or produce a differently typed term, or a completely
different term in the face of syntactic overloading.
›
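text %mlex ‹
For example, the following hypothetical sketch performs such a round-trip in
the current context; the resulting string resembles the input, but the
details depend on the print configuration:
›

ML_val ‹
  val ctxt = \<^context>;
  val s' = Syntax.string_of_typ ctxt (Syntax.read_typ ctxt "'a ⇒ 'b");
  val _ = writeln s';
›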
text %mlref ‹
\begin{mldecls}
@{define_ML Syntax.read_typs: "Proof.context -> string list -> typ list"} \\
@{define_ML Syntax.read_terms: "Proof.context -> string list -> term list"} \\
@{define_ML Syntax.read_props: "Proof.context -> string list -> term list"} \\[0.5ex]
@{define_ML Syntax.read_typ: "Proof.context -> string -> typ"} \\
@{define_ML Syntax.read_term: "Proof.context -> string -> term"} \\
@{define_ML Syntax.read_prop: "Proof.context -> string -> term"} \\[0.5ex]
@{define_ML Syntax.pretty_typ: "Proof.context -> typ -> Pretty.T"} \\
@{define_ML Syntax.pretty_term: "Proof.context -> term -> Pretty.T"} \\
@{define_ML Syntax.string_of_typ: "Proof.context -> typ -> string"} \\
@{define_ML Syntax.string_of_term: "Proof.context -> term -> string"} \\
\end{mldecls}
➧ \<^ML>‹Syntax.read_typs›~‹ctxt strs› parses and checks a simultaneous list
of source strings as types of the logic.
➧ \<^ML>‹Syntax.read_terms›~‹ctxt strs› parses and checks a simultaneous list
of source strings as terms of the logic. Type-reconstruction puts all parsed
terms into the same scope: types of free variables ultimately need to
coincide.
If particular type-constraints are required for some of the arguments, the
read operation needs to be split into its parse and check phases. Then it
is possible to use \<^ML>‹Type.constraint› on the intermediate pre-terms
(\secref{sec:term-check}).
➧ \<^ML>‹Syntax.read_props›~‹ctxt strs› parses and checks a simultaneous list
of source strings as terms of the logic, with an implicit type-constraint
for each argument to enforce type \<^typ>‹prop›; this also affects the inner
syntax for parsing. The remaining type-reconstruction works as for \<^ML>‹Syntax.read_terms›.
➧ \<^ML>‹Syntax.read_typ›, \<^ML>‹Syntax.read_term›, \<^ML>‹Syntax.read_prop› are
like the simultaneous versions, but operate on a single argument only. This
convenient shorthand is adequate in situations where a single item in its
own scope is processed. Do not use \<^ML>‹map o Syntax.read_term› where \<^ML>‹Syntax.read_terms› is actually intended!
➧ \<^ML>‹Syntax.pretty_typ›~‹ctxt T› and \<^ML>‹Syntax.pretty_term›~‹ctxt t›
uncheck and pretty-print the given type or term, respectively. Although the
uncheck phase acts on a simultaneous list as well, this is rarely used in
practice, so only the singleton case is provided as a combined pretty
operation. There is no distinction of term vs.\ proposition.
➧ \<^ML>‹Syntax.string_of_typ› and \<^ML>‹Syntax.string_of_term› are convenient
compositions of \<^ML>‹Syntax.pretty_typ› and \<^ML>‹Syntax.pretty_term› with
\<^ML>‹Pretty.string_of› for output. The result may be concatenated with other
strings, as long as there is no further formatting and line-breaking
involved.
\<^ML>‹Syntax.read_term›, \<^ML>‹Syntax.read_prop›, and \<^ML>‹Syntax.string_of_term› are the most important operations in practice.
┉
Note that the string values that are passed in and out are annotated by the
system, to carry further markup that is relevant for the Prover IDE \<^cite>‹"isabelle-jedit"›. User code should neither compose its own input strings,
nor try to analyze the output strings. Conceptually this is an abstract
datatype, encoded as concrete string for historical reasons.
The standard way to provide the required position markup for input works via
the outer syntax parser wrapper \<^ML>‹Parse.inner_syntax›, which is already
part of \<^ML>‹Parse.typ›, \<^ML>‹Parse.term›, \<^ML>‹Parse.prop›. So a string
obtained from one of the latter may be directly passed to the corresponding
read operation: this yields PIDE markup of the input and precise positions
for warning and error messages.
›
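text %mlex ‹
The difference between simultaneous and separate reading can be sketched as
follows (a hypothetical example): the free variable ‹y› occurs in both source
strings, so type-reconstruction assigns it one and the same type across both
resulting terms.
›

ML_val ‹
  val ctxt = \<^context>;

  (*one scope: the type of y must coincide in both terms*)
  val ts = Syntax.read_terms ctxt ["x ≡ y", "y"];
›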
section ‹Parsing and unparsing \label{sec:parse-unparse}›
text ‹
Parsing and unparsing convert between actual source text and a certain
∗‹pre-term› format, where all bindings and scopes are already resolved
faithfully. Thus the names of free variables or constants are determined in
the sense of the logical context, but type information might be still
missing. Pre-terms support an explicit language of ∗‹type constraints› that
may be augmented by user code to guide the later ∗‹check› phase.
Actual parsing is based on traditional lexical analysis and Earley parsing
for arbitrary context-free grammars. The user can specify the grammar
declaratively via mixfix annotations. Moreover, there are ∗‹syntax
translations› that can be augmented by the user, either declaratively via
@{command translations} or programmatically via @{command
parse_translation}, @{command print_translation} \<^cite>‹"isabelle-isar-ref"›. The final scope-resolution is performed by the system,
according to name spaces for types, term variables and constants determined
by the context.
›
text %mlref ‹
\begin{mldecls}
@{define_ML Syntax.parse_typ: "Proof.context -> string -> typ"} \\
@{define_ML Syntax.parse_term: "Proof.context -> string -> term"} \\
@{define_ML Syntax.parse_prop: "Proof.context -> string -> term"} \\[0.5ex]
@{define_ML Syntax.unparse_typ: "Proof.context -> typ -> Pretty.T"} \\
@{define_ML Syntax.unparse_term: "Proof.context -> term -> Pretty.T"} \\
\end{mldecls}
➧ \<^ML>‹Syntax.parse_typ›~‹ctxt str› parses a source string as pre-type that
is ready to be used with subsequent check operations.
➧ \<^ML>‹Syntax.parse_term›~‹ctxt str› parses a source string as pre-term that
is ready to be used with subsequent check operations.
➧ \<^ML>‹Syntax.parse_prop›~‹ctxt str› parses a source string as pre-term that
is ready to be used with subsequent check operations. The inner syntax
category is \<^typ>‹prop› and a suitable type-constraint is included to ensure
that this information is observed in subsequent type reconstruction.
➧ \<^ML>‹Syntax.unparse_typ›~‹ctxt T› unparses a type after uncheck
operations, to turn it into a pretty tree.
➧ \<^ML>‹Syntax.unparse_term›~‹ctxt t› unparses a term after uncheck
operations, to turn it into a pretty tree. There is no distinction for
propositions here.
These operations always operate on a single item; use the combinator \<^ML>‹map› to apply them to a list.
›
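text %mlex ‹
As a hypothetical sketch of intercepting the pre-term stage: a term is
parsed, an explicit type-constraint is imposed via \<^ML>‹Type.constraint›,
and only then the check phase is invoked.
›

ML_val ‹
  val ctxt = \<^context>;

  (*pre-term: scopes already resolved, types still missing*)
  val pre = Syntax.parse_term ctxt "x";

  (*impose a constraint before type-reconstruction*)
  val t = singleton (Syntax.check_terms ctxt) (Type.constraint \<^typ>‹prop› pre);
›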
section ‹Checking and unchecking \label{sec:term-check}›
text ‹
These operations define the transition between pre-terms and fully-annotated
terms in the sense of the logical core (\chref{ch:logic}).
The ∗‹check› phase is meant to subsume a variety of mechanisms in the manner
of ``type-inference'' or ``type-reconstruction'' or ``type-improvement'',
not just type-checking in the narrow sense. The ∗‹uncheck› phase is roughly
dual: it prunes type-information before pretty printing.
A typical add-on for the check/uncheck syntax layer is the @{command
abbreviation} mechanism \<^cite>‹"isabelle-isar-ref"›. Here the user specifies
syntactic definitions that are managed by the system as polymorphic ‹let›
bindings. These are expanded during the ‹check› phase, and contracted during
the ‹uncheck› phase, without affecting the type-assignment of the given
terms.
┉
The precise meaning of type checking depends on the context --- additional
check/uncheck modules might be defined in user space.
For example, the @{command class} command defines a context where ‹check›
treats certain type instances of overloaded constants according to the
``dictionary construction'' of its logical foundation. This involves ``type
improvement'' (specialization of slightly too general types) and replacement
by certain locale parameters. See also \<^cite>‹"Haftmann-Wenzel:2009"›.
›
text %mlref ‹
\begin{mldecls}
@{define_ML Syntax.check_typs: "Proof.context -> typ list -> typ list"} \\
@{define_ML Syntax.check_terms: "Proof.context -> term list -> term list"} \\
@{define_ML Syntax.check_props: "Proof.context -> term list -> term list"} \\[0.5ex]
@{define_ML Syntax.uncheck_typs: "Proof.context -> typ list -> typ list"} \\
@{define_ML Syntax.uncheck_terms: "Proof.context -> term list -> term list"} \\
\end{mldecls}
➧ \<^ML>‹Syntax.check_typs›~‹ctxt Ts› checks a simultaneous list of pre-types
as types of the logic. Typically, this involves normalization of type
synonyms.
➧ \<^ML>‹Syntax.check_terms›~‹ctxt ts› checks a simultaneous list of pre-terms
as terms of the logic. Typically, this involves type-inference and
normalization of term abbreviations. The types within the given terms are
treated in the same way as for \<^ML>‹Syntax.check_typs›.
Applications sometimes need to check several types and terms together. The
standard approach uses \<^ML>‹Logic.mk_type› to embed the language of types
into that of terms; all arguments are appended into one list of terms that
is checked; afterwards the type arguments are recovered with \<^ML>‹Logic.dest_type›.
➧ \<^ML>‹Syntax.check_props›~‹ctxt ts› checks a simultaneous list of pre-terms
as terms of the logic, such that all terms are constrained by type \<^typ>‹prop›. The remaining check operation works as \<^ML>‹Syntax.check_terms›
above.
➧ \<^ML>‹Syntax.uncheck_typs›~‹ctxt Ts› unchecks a simultaneous list of types
of the logic, in preparation of pretty printing.
➧ \<^ML>‹Syntax.uncheck_terms›~‹ctxt ts› unchecks a simultaneous list of terms
of the logic, in preparation of pretty printing. There is no distinction for
propositions here.
These operations always operate simultaneously on a list; use the combinator
\<^ML>‹singleton› to apply them to a single item.
›
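text %mlex ‹
The embedding of types into terms for a combined check can be sketched as
follows (a hypothetical example):
›

ML_val ‹
  val ctxt = \<^context>;

  (*embed a pre-type into the term language and check it together with a term*)
  val ts =
    Syntax.check_terms ctxt
      [Logic.mk_type (Syntax.parse_typ ctxt "'a ⇒ 'b"),
       Syntax.parse_term ctxt "x"];

  (*recover the checked type afterwards*)
  val T = Logic.dest_type (hd ts);
›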
end