Monday, September 18, 2006

Causality

Denotational semantics of processes is often thought of as functions from input streams to output streams.
I already argued that streams are not general enough to represent communication between processes, but let's keep the model simple.

A function from stream to stream fails to represent one crucial property of the process: which input messages must have been fed to the process in order to observe a specific output message. I do not mean data dependency here, but something like control dependency, or causality. Streams (i.e., total orders) are a special case of the partial orders that can be used to model causality, so it might seem everything is fine, except that there is no ordering between elements of the input stream and elements of the output stream.
Imagine a function from a stream of units to a stream of units:

data Stream a = Message a (Stream a)
type X = Stream () -> Stream ()

There is just one function of type X: the identity (non-termination is out of the picture, as we are talking about mathematical functions, not Haskell functions).
On the other hand, there are many different behaviors that accept unit messages and output unit messages.
One would be the identity behavior: for each input message, send one output message. Another would be a counter: for the nth input message, send n output messages. By composing the counter with a function f from Nat to Nat, one obtains yet another behavior: for the nth input message, send f(n) output messages. Thus there are infinitely many observably different behaviors of this type, and trying to denote them all by a single function is unsatisfactory, to say the least.

So the quick fix seems to be to define a process as a function from an input list (a prefix of the stream) to the list of outputs produced after receiving the last input (so the whole output stream is obtained by concatenating the output lists for all prefixes of the input). This may work for a single input stream, but for multiple input streams we would also need to encode which stream received the last message.
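As a minimal sketch of this prefix-to-chunk view (the names `Behavior`, `identityB`, `counterB`, and `run` are my own, not established terminology), the identity and counter behaviors become observably distinct values of the same type, even though both collapse to the identity when viewed as whole-stream functions:

```haskell
-- A behavior maps the input prefix received so far to the chunk of
-- outputs emitted after the last input message arrives.
type Behavior = [()] -> [()]

-- Identity behavior: one output per input message.
identityB :: Behavior
identityB _ = [()]

-- Counter behavior: the nth input message triggers n output messages.
counterB :: Behavior
counterB prefix = replicate (length prefix) ()

-- The full output for n inputs is the concatenation of the chunks
-- produced for every prefix of the input.
run :: Behavior -> Int -> [()]
run b n = concatMap (\k -> b (replicate k ())) [1 .. n]
```

Here `run identityB 3` produces 3 outputs while `run counterB 3` produces 1 + 2 + 3 = 6, so the two behaviors are distinguished, which the plain stream-to-stream type could not do.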

A more general way to look at this is to say that a process extends a partially ordered set of input messages to a partially ordered set of both input and output messages, with the constraint that it does not introduce PO links pointing to an input message (but it is obviously allowed to introduce such links from an input to an output, or between two outputs).
Need to think of a formal (and elegant) way for saying this.
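One possible formalization sketch, where the encoding and all names (`Dir`, `Poset`, `legalExtension`) are my own assumptions rather than an established construction: represent the poset as a map from event ids to their direction and immediate causal predecessors, and check that an extension leaves every input event's predecessor set untouched, i.e., never adds a link into an input.

```haskell
import qualified Data.Map as M

-- An event is either an input or an output message.
data Dir = In | Out deriving (Eq, Show)

-- A poset of events: event id -> (direction, immediate causal predecessors).
type Poset = M.Map Int (Dir, [Int])

-- A process may add output events and links into outputs, but must not
-- introduce any new PO link pointing to an input message; equivalently,
-- the input events (with their predecessor sets) survive unchanged.
legalExtension :: Poset -> Poset -> Bool
legalExtension input extended =
  M.filter ((== In) . fst) extended == M.filter ((== In) . fst) input
```

For example, starting from two causally ordered inputs, adding outputs hanging off them is legal, while rewriting an input's predecessor list is not.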

Sunday, September 17, 2006

Agents as frontiers of futures (or arachnoagents)

Imagine truly concurrent constraint programming, but without entailment or any other appeal to logic. In fact, let's try to separate causality from the mechanism of specifying interaction.

It may be easier to approach this task from the FSM angle, or more precisely, from ISMs (interacting state machines).
Unlike ISMs, our machines (cobs?) do not communicate via FIFO streams of messages: to model causality faithfully, a richer structure is needed (a cobweb).
Each machine can be thought of as crawling along the links of an immutable network of message nodes, producing another network as its output (to be traversed by other machines). Traversal is not destructive: network nodes not referenced by any machine are operationally GCed, while denotationally they have infinite lifetime.
FIFO streams are a special case of such networks, with strictly linear links.
To support growth, immutable networks have future nodes (promises) at their frontier, and exactly these nodes are referenced (and fulfilled) by the owner machine (the hind legs of the cob). Readers of the network cannot get past these growth points (with their front legs) until the creator resolves them: a typical future/promise, but probably with more low-level operations than usual.
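A rough sketch of such a frontier in Haskell, using `MVar`s as promise nodes (the encoding and the names `Node`, `extend`, `crawl` are my assumptions, not an established design): the owning machine grows the network by fulfilling a frontier promise, while readers simply block at unresolved growth points.

```haskell
import Control.Concurrent.MVar

-- A resolved node carries a payload and links to successor nodes,
-- each of which may still be an unfulfilled promise (an empty MVar).
data Node a = Node a [MVar (Node a)]

-- Owner side: fulfill a frontier promise, extending the network by one
-- node and returning the new frontier promise.
extend :: MVar (Node a) -> a -> IO (MVar (Node a))
extend promise payload = do
  next <- newEmptyMVar
  putMVar promise (Node payload [next])
  return next

-- Reader side: read a node's payload; blocks until the promise is
-- resolved, and does not consume it (traversal is non-destructive).
crawl :: MVar (Node a) -> IO a
crawl p = do
  Node x _ <- readMVar p
  return x
```

With exactly one successor per node this degenerates to a FIFO stream, matching the remark that streams are the strictly linear special case.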
A machine is a unit of authority, while execution as such can have finer-grained units.

Saturday, September 09, 2006

Marshalling as a game

Investigate the problem of what to put on wire when marshalling (serializing) objects.
Some runtime environments force this decision, at least when it comes to data/code dichotomy.
When it comes to the code-is-data view, it becomes very unclear what should be transferred. Think of the reflexive tower: should it be unraveled to its (infinite) roots, or at least down to the fixpoint? And what if the receiver of the stream already shares some part of the tower with the sender?
In general, how can they engage in a negotiation that minimizes not only the size of the stream, but also the size of the negotiation itself?
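One conceivable shape for such a negotiation, sketched here with invented names and a single round (real protocols such as rsync's delta exchange or git's pack negotiation are far more elaborate): the sender offers content hashes of the parts it intends to transfer, the receiver reports which of them it already holds, and only the remainder is shipped.

```haskell
import qualified Data.Set as S

-- Placeholder hash type; a real system would use a cryptographic digest.
type Hash = Int

-- Sender: announce the hashes of the candidate parts.
offer :: [(Hash, String)] -> [Hash]
offer parts = map fst parts

-- Receiver: report which of the offered parts it already has.
answer :: S.Set Hash -> [Hash] -> S.Set Hash
answer known offered = S.intersection known (S.fromList offered)

-- Sender: ship only the parts the receiver lacks.
ship :: [(Hash, String)] -> S.Set Hash -> [(Hash, String)]
ship parts alreadyKnown = [p | p@(h, _) <- parts, h `S.notMember` alreadyKnown]
```

The open question in the text remains: this single round costs one hash per part, and deciding when a cheaper (e.g., hierarchical or probabilistic) announcement pays off is exactly the game being asked about.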

It seems this is not really about streams (which is why I do not really like the term serialization), but about identifying the part of the object graph that needs to be copied, and then grafting that part onto the receiver's view of the world.