interaction type theory

A type system divides programs into well-typed and ill-typed programs by assigning a type to each term. Functions, for example, are defined as taking an argument which must be an instance of a type. Programs that pass terms outside that type to the function are ill-typed.
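
For example (a hypothetical sketch, assuming Church-style booleans and the usual constructors, in the pseudo-typed notation used later in this post):

# A function whose argument must be an instance of Bool:
not (b: Bool) : Bool = (b False True)

(not True)   # well-typed: True is an instance of Bool
(not Zero)   # ill-typed: Zero is an instance of Nat, not of Bool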

Many type theories enjoy benefits that make them practical, for example:

  • They are composable. Knowing that a subprogram is well-typed allows us to make judgements about the superprogram that contains it.
  • They are partially decidable. This means that there exists an algorithm to check whether a term is contained in a type, regardless of the type's definition. This stands in contrast with set theory, where set membership must be proved in all cases, and can't be "checked" algorithmically.
  • They are sound. This means that you can't get "runtime type errors", which implies that you can't construct arbitrary instances of types, that the empty type is in fact empty, etc. An unsound system can produce unexpected values at runtime that aren't members of a type, despite the program being well-typed. This makes type theories suitable as a foundation of mathematics and as a way to prove program invariants. A lot of type systems aren't sound because they allow constructing terms that check against any type (see the sketch below). This is usually the case for "complex" type systems, which introduce a large set of features to express many different kinds of constructions. Here's a collection of unsoundness examples in type systems.
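
For instance, a system that allows unrestricted recursion lets us write a term that checks against any type, including the empty one. A minimal sketch in the pseudo-typed notation used later in this post:

# The empty type: a proof of it would prove any proposition.
False : Type = (P: Type) -> P

# Unrestricted recursion "inhabits" it; the program is well-typed,
# but at runtime it loops forever instead of producing a value.
loop : False = loop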

It would be really nice if we had a type theory for interaction nets.

Motivation

There are a lot of challenges with type systems that could be solved by a system which employs interaction nets.

Function extensionality proof

There's a system called the symmetric interaction calculus. It is like λ-calculus, but with explicit duplication and scopeless lambdas. Scopeless lambdas are lambdas that can bind values which are used outside their body. This is possible because of SIC's linearity.
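
For example, in the term below (assuming a pair constructor purely for illustration), the variable $x is bound by a lambda but used outside of its body:

# The lambda λ$x(3) returns 3, but its bound variable $x escapes the body
# and shows up in the second component of the pair:
(((λ$x(3)) 7), $x)
# Applying the lambda to 7 sets $x to 7 wherever it occurs, so this reduces to:
(3, 7)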

The Symmetric Interaction Calculus can be thought of as a textual representation of symmetric interaction combinators. In October 2023, T6 shared this proof of function extensionality, written in a pseudo-type-checked language that includes scopeless lambdas:

Eq (T: Type) (a: T) (b: T) : Type = (P: T -> Type) (h: P a) -> P b

funext
  (T U : Type)
  (f g : T -> U) 
  (h : (t: T) -> Eq U (f t) (g t))
:
  Eq (T -> U) f g
=
  (P : (T -> U) -> Type)
  (a : P f) =>
    h $t ((u: U) => P (($t : T) => u)) a 

Function extensionality is a proposition which states that "two functions are equal if their results are equal for all arguments". It's usually included as an axiom, but it turns out that it can be proved with scopeless lambdas. Here, $t is the scopeless variable.

I'll assume the reader is familiar with the usual definitions of intensional equality, reflexivity, function extensionality, etc, and broadly explain what is going on here.

To prove equality, we need to generate (P g) for any (P f) with the hypotheses we have at our disposal. The magic hypothesis is (h : (t: T) -> Eq U (f t) (g t)), which tells us that for any P and t, (P (f t)) can be replaced with (P (g t)).

We'll start with (P f). This is eta-equivalent to (P ((t : T) => (f t))), simply by eta-expanding f. We can also turn t into a scopeless variable here; it doesn't make a difference yet because this is still a traditional lambda: (P (($t : T) => (f $t))).

Now, we'll "abstract away" the (f $t): we'll replace it by a new variable u that is bound by a new lambda, and apply that lambda to (f $t): (((u: U) => P (($t : T) => u)) (f $t)).

((u: U) => P (($t : T) => u)) looks like a type class (a function from U to Type, which is exactly what (h $t) expects as its P argument). As a consequence, we can use h to replace (f $t), which is its argument, with (g $t).

Now we have (((u: U) => P (($t : T) => u)) (g $t)). If we reduce the application, we get (P (($t : T) => (g $t))), which is eta-equivalent to (P g).
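
Putting the steps together, the whole chain looks roughly like this:

(P f)
(P (($t : T) => (f $t)))                    # eta-expand f, make t scopeless
(((u: U) => P (($t : T) => u)) (f $t))      # abstract (f $t) behind a new lambda
(((u: U) => P (($t : T) => u)) (g $t))      # rewrite (f $t) to (g $t) using (h $t)
(P (($t : T) => (g $t)))                    # reduce the application
(P g)                                       # eta-contract g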

We have shown how to get from the "premise" (P f) to the conclusion (P g). This is what T6's proof does. The "magic trick" is that scopeless lambdas allow us to separate $t from the binding lambda, which allows us to use h in a more powerful way (specifically, $t occurs in the argument given to the type class, but it's bound inside the type class itself).

Ideally, an interaction type theory would allow us to do this.

Sound type-in-type

SIC is closely related to EAL (elementary affine logic), and it turns out that in EAL, type-in-type unsoundness can't be constructed. todo....

My attempts

I've worked with a lot of approaches to this. Here are a few of them.

"Bridge" node

This is less of an approach and more of an "ingredient" of interaction type theories.

In the symmetric interaction calculus, each term constructor has an "opposite" or "dual" relative to polarity. For example, lambdas and applications are dual, duplications and superpositions are dual, and erased variables and unused values are dual.

In any interaction type theory based on interaction combinators that features an "annotation" node as a separate agent, it's natural to ask what the dual of an annotation is. I've given the result a variety of names: the typeof operator, the ann-lambda, the ann-binder, the theta node, the thorn node, and the bridge node. It seems like "bridge node" is the name that stuck.

If annotations are like applications, then bridge is like lambda. It's represented like this: θx body. A bridge binds a variable and contains a body. It's defined by the following reduction rule:

<x : θy body>
------------- ANN-BRI
x ~ y; body

This is like the rule for lambda application, which looks like this:

(λy(body) x)
------------ APP-LAM
x ~ y; body

Bridge is the constructor for types. It allows us to define dependent types, not as primitives, but as compound types.

# Function type
Fun = λA λB θf λx <(f <x: A>): B>
# Dependent function
All = λA λB θf λx <(f <x: A>): (B x)>
# Self-dependent (inductive) function
Ind = λA λB θf λx <(f <x: A>): (B f x)>
# by Taelin

These definitions, when used with annotation nodes, will reduce to simple atomic checks such as <<x: A>: A> or <<x: B>: A>.
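
For example, checking the identity function against (Fun A A) unfolds roughly like this:

<λz(z) : (Fun A A)>
<λz(z) : θf λx <(f <x: A>): A>>    # unfold Fun, with both parameters set to A
λx <((λz(z)) <x: A>): A>           # ANN-BRI: f ~ λz(z)
λx <<x: A>: A>                     # APP-LAM: only an atomic check remains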

Bridge is an extremely dependent type constructor. A lot of constructs that aren't traditionally possible become possible with Bridge. I don't yet know whether bridge makes a type theory unsound; it might.

Equality node

todo... also write about self-annihilation.

Taelin's "leap" idea

Repo: https://github.com/VictorTaelin/Interaction-Type-Theory (don't follow the "moved" link in the README)

This is the first interaction type theory I know of. Most documentation for this idea is scattered across a few repos and Twitter posts. The idea is to check all "paths" from the root back to the root for a property called "coherence", which is related to the balance of ANN nodes.

Extended ambiguous box construct

No repo. This is T6's idea. It is a way to look at Taelin's "leap" and "leaps" ideas and understand what they mean.

We can imagine an "ambiguous box" node, which can reduce in two ways. We'll say a net is coherent iff both reductions result in equivalent nets.

We can "slice" this 4-port node into two 3-port nodes, either "in parallel" or "in series". This allows us to "move" the 3-port nodes and extend the invariant enforced by the ambiguous box across the whole net.

The "in parallel" slice corresponds to Taelin's annotation. The "in series" slice would work roughly like equality.

Unfortunately this seems impractical, or almost impossible to check naively. That's why I'm not going into much detail.

Fixed point type theory

Repo: https://github.com/FranchuFranchu/fixpoint-itt

What if we define the instance-of relationship like this:

x is an instance of T iff <x: T> == x

This is the result of that idea. This would be very powerful because it'd allow us to define types for a lot of sets of terms, a bit like set-theory predicates. For checking to be practical, we'd like to be able to construct types T such that <<x: T>: T> reduces to <x: T>, but unfortunately that seems to be impossible. The best I can do is <<x: T>: T> reducing to x, but that's much harder to work with. This is mostly documented in a Discord thread.
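
To spell out why that reduction matters (under the definition above):

# If x is an instance of T, i.e. <x: T> == x, then a second annotation is a no-op:
<<x: T>: T> == <x: T> == x
# A type T where <<x: T>: T> itself *reduces* to <x: T> would make that collapse
# happen by computation, which is what would make checking practical.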

Taelin's "interaction calculus of constructions"

Repo: https://github.com/VictorTaelin/Interaction-Calculus-of-Constructions

If I understand correctly, this is essentially SIC + bridges + an equality checker. I'm not sure what its current status is. It seems to be the most complete interaction type theory listed here as of right now.

T6's inet-lifetimes

Repo: https://github.com/tjjfvi/inet-lifetimes

Inet-lifetimes is an answer to the question: "how can we prevent vicious circles in interaction nets (without Lafont's simplicity criterion)?" It works by defining a partial order between wires that's preserved by reduction, using Rust-like lifetimes and types. Unfortunately, it lacks dependent types, which makes it unsuitable as a foundation of mathematics.

Typed agents

Repo: https://github.com/FranchuFranchu/typed-agents

This is my most recent approach. It's based on Polarity, a language with a heavy duality between positive and negative terms, but adapted to interaction nets.

This approach works by defining an interaction system with an instance-of relationship between agent types, which has the following properties (a hypothetical sketch follows the list):

  • Completeness: If T ~ U is defined, and t: T, u: U, then t ~ u must be defined too.
  • Well-typedness: Undefined interactions are not allowed.
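
Here's a hypothetical sketch of what such a system could look like (illustrative notation, not the repo's actual syntax): an agent type Nat with instances Zero and Succ, and an agent type NatOp with instance Add.

type Nat
type NatOp

agent Zero             : Nat
agent Succ(pred)       : Nat
agent Add(addend, out) : NatOp

# The type-level interaction Nat ~ NatOp is defined, so by completeness
# a rule must exist for every pair of instances:
Zero    ~ Add(y, out)  =>  y ~ out
Succ(p) ~ Add(y, out)  =>  out ~ Succ(s), p ~ Add(y, s)

# Nat ~ Nat is not defined, so a net where Zero meets Succ head-on
# is ruled out by well-typedness.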

Unfortunately, the way type-checking works does not play well with nonlinearity, and a lot of types, such as Eq, are very nonlinear. I suspect I'll have to define universes and other constructs, which is not what I want.

At some point typed-agents became too confusing for me and I stopped working on it. I'm also not sure what the destructor of a type means.

My hunch

I believe that we should discard the textual λ-calculus-like representation of interaction nets. It's not powerful enough, and it ties us to traditional λ-calculus-based models, which suffer from a lot of problems that interaction nets fix.

I also think an interaction type theory would be linear. Nonlinearity is just a consequence of λ-calculus. Duplicating types should be avoided whenever possible.

I think that an interaction type theory will not have a natural λ-calculus-based representation. I think that typed-agents is the closest I've gotten to a true interaction type theory, but it still suffers from a lot of complexity that I'd like to get rid of.

Conclusion

A lot of research is needed here!