inet processor

Side content

The limitations of the Von Neumann architecture

Modern computing "pretends" that memory access is O(1). For example, all array accesses look equal, and it doesn't matter whether we're addressing sequential data or sparse data: a[0], a[1], a[10000].

But this does not match up with reality. Data takes time to travel across the processor. If the CPU had to wait for the data stored at the opposite side of the chip to arrive, computers would be very slow.

The solution is a hierarchy of caches, close to the processor core, where frequently accessed data is kept. Thus, in a modern computer, not all addresses are created equal: some are faster to reach than others, and sequential access, for example, is much faster than random access.

However, our programming languages don't model this. All addresses look equal.

The Von Neumann architecture also limits parallelism. Memory is shared between all cores, which creates contention. If two cores that are far apart on the chip modify the same data, resolving the conflict takes time: the cores have to communicate (e.g. through atomic operations) to reach an agreement on the modified value.

This is because the computations we do are not local. Any computation can change any piece of data anywhere. This is a fundamental stumbling block that prevents massive parallelism with traditional computing.

Interaction Nets

Interaction nets are a model of computation that's, in many ways, nicer than the Turing machine. Unlike in the Turing machine, computation is parallelizable. Unlike in the Lambda-Calculus, computation is divided into small, constant-time steps. And unlike in modern computers, computation is local and confluent.

The computer's state is represented by a graph. Nodes are called agents. Each agent has one principal port and zero or more auxiliary ports. Agents are also assumed to have a type, which determines their behavior.

When two agents are connected through their principal ports, they interact. The result is a local graph rewrite. The graph rewrite can rewrite the agent's auxiliary ports, potentially creating new agents in the process.

Interaction Nets are Turing-complete; it is possible to build a set of agents which have reduction rules that simulate a Turing machine.

Interaction Combinators

In the λ-calculus, there are two terms, S and K, from which any other closed term can be built. They are called "combinators".

S = λxλyλz(x z (y z))
K = λxλy(x)

We can combine them to create any λ-calculus term. For example, λfλx(f (f x)) = (S(S(KS)K)(SKK))
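We can check that equation mechanically. The sketch below is a minimal S/K reducer of our own (leftmost-outermost reduction order); it reduces (S(S(KS)K)(SKK) f x) and recovers f (f x).

```rust
#[derive(Clone, Debug, PartialEq)]
enum Term {
    Var(&'static str),
    S,
    K,
    App(Box<Term>, Box<Term>),
}
use Term::*;

fn app(f: Term, x: Term) -> Term {
    App(Box::new(f), Box::new(x))
}

// One leftmost-outermost reduction step; None if t is in normal form.
fn step(t: &Term) -> Option<Term> {
    if let App(f, last) = t {
        if let App(g, mid) = f.as_ref() {
            // K x y → x, matched as App(App(K, x), y)
            if **g == K {
                return Some((**mid).clone());
            }
            // S x y z → x z (y z), matched as App(App(App(S, x), y), z)
            if let App(h, first) = g.as_ref() {
                if **h == S {
                    let (x, y, z) =
                        ((**first).clone(), (**mid).clone(), (**last).clone());
                    return Some(app(app(x, z.clone()), app(y, z)));
                }
            }
        }
        // No redex at the head: reduce the function part, then the argument.
        if let Some(fr) = step(f) {
            return Some(app(fr, (**last).clone()));
        }
        if let Some(ar) = step(last) {
            return Some(app((**f).clone(), ar));
        }
    }
    None
}

fn normalize(mut t: Term) -> Term {
    while let Some(next) = step(&t) {
        t = next;
    }
    t
}

fn main() {
    // two = S (S (K S) K) (S K K)
    let two = app(app(S, app(app(S, app(K, S)), K)), app(app(S, K), K));
    let reduced = normalize(app(app(two, Var("f")), Var("x")));
    // Applying it to f and x yields f (f x).
    assert_eq!(reduced, app(Var("f"), app(Var("f"), Var("x"))));
    println!("{:?}", reduced);
}
```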

It turns out that there is a set of three interaction net agents, Lafont's interaction combinators, which can emulate any other interaction net: the constructor (γ), the duplicator (δ), and the eraser (ε).

// Placeholder value; usize so it can size the `nodes` array below.
pub const TIME_MAX: usize = 64;

struct Address {
	x: i32,
	y: i32,
	z: i32,
	t: u32,
}

struct Wire {
	address: Address,
}

enum Port {
	Binary(usize, Wire, Wire),
	Redirect(Wire),
}

struct Rewriter {
	coords: Address,
	nodes: [u32; TIME_MAX]
}