Friday, December 28, 2007


naasking mentioned that Manticore has nested schedulers - sounds very interesting, and similar to what I want to achieve in Fluid. Must read that.

Thursday, December 27, 2007

Fluid preManifesto

At this point in my life I don't have much time to actively pursue the research I would like to. I decided to write an overview of what I envision, so that at least I do not forget it myself when I have a spare minute or two.

Fluid is a codename for a programming language/framework/calculus/tool that should enable experiments with, and teaching of, various approaches to concurrency and possibly parallelism.
During my initial experiments with join calculus I decided that a lot of interesting stuff is being swept under the carpet. For one thing, scheduling of reactions is implicit on one hand, and rigid on the other. What if one had a language in which diving into the gory details of scheduling were a matter of using some operator, the way call/cc is used to dive into the underlying continuations? No scheduling would be left implicit; all interleaving would be deterministic and controlled - mostly using popular constructs from the library in a very lean syntactic way (monadic reflection?), but occasionally giving the possibility to do exactly what is needed. At runtime, something similar to engines looks promising.

The language would follow the maxim Algorithm = Logic + Control, separating the declarative part of the solution from the control of when to pursue which way and for how long. It must be possible to ration CPU cycles and other resources based on the perceived relative utility of the computations, as well as to prune computations that are believed to have too little value to be worth any future resources. This utility-aware side of the research might come close to mechanism design (most probably its algorithmic branch).

Having outsourced the scheduling to user code, the next question is how to express the declarative part of the solution. After playing with various alternatives (promises, MVars, channels) I decided that concurrent constraint programming looks the most general of them all. But which constraint system to choose? Should I fix a specific system at all? As the control part is very customizable, why not follow the same path in the logical part? I don't yet know the details, or whether it is possible at all, but the idea is to allow user code to define the constraint system as well.
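For concreteness, here is a minimal Haskell sketch of what "user code defines the constraint system" could mean (every name below is my own strawman, not a settled design): a constraint system is anything that provides an empty store, a tell that may fail on inconsistency, and an ask for entailment.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

import qualified Data.Set as Set

-- Hypothetical interface: c is the type of constraint stores,
-- t the type of constraints (tokens). All names are illustrative.
class ConstraintSystem c t | c -> t where
  emptyC :: c                  -- the empty store
  tellC  :: t -> c -> Maybe c  -- add a constraint; Nothing on inconsistency
  askC   :: t -> c -> Bool     -- is the constraint entailed by the store?

-- A deliberately trivial instance: the store is a set of atoms,
-- entailment is membership, and an atom together with its negation
-- ('~'-prefixed) is inconsistent.
newtype Atoms = Atoms (Set.Set String)

negA :: String -> String
negA ('~':a) = a
negA a       = '~' : a

instance ConstraintSystem Atoms String where
  emptyC = Atoms Set.empty
  tellC a (Atoms s)
    | Set.member (negA a) s = Nothing
    | otherwise             = Just (Atoms (Set.insert a s))
  askC a (Atoms s) = Set.member a s
```

A concurrent constraint language would then be parameterized by such an instance; the Atoms instance is only there to make the interface runnable.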

To put it all in one line: Program = Logic + Control, Your Way, Best Value for Resources!

I would like to write much more, but my margin (of time) is too small, so I will finish here.

Tuesday, December 11, 2007

A Software Architecture for Distributed Control Systems and Its Transition System Semantics

A software architecture for distributed control systems is presented that is based on a shared data space coordination model. The architecture, named SPLICE, is introduced in two steps. First, we give the general structure of the coordination model, including the shared data space and the basic operations, and we define its semantics by means of a transition system. Second, we present a transition system for a refinement of the coordination model: data is distributed and replicated across a communication network using a publication-subscription protocol. We also discuss methods in SPLICE for fault-tolerance and the possibility for on-line system modification and extensions.

Similar to Linda, JavaSpaces, and other paradigms of communication via shared store.

Thursday, November 22, 2007

SOA kills ORM?

Recently I participated in several projects that heavily utilized services (message-based, shared-nothing, over-the-wire thingies). What I noticed is that object/relational mapping (ORM) tools are virtually useless for such projects. Indeed, a significant part, if not most, of the value of ORM comes from the ability to modify values in memory, using OO language constructs, and then just synchronize the memory with the database. This functionality goes completely unutilized in a typical SOA application, where reads and writes are separated in time, use different objects and access paths, and in fact can be performed from different machines. E.g., one service can read a list of tasks from the DB and return them as a message to a network peer. After traveling between peers, this information (but not this specific object or message), in combination with information from other sources (e.g., user input), can be used to update a specific property of a specific task (written back to the DB). Has the need for service/relational (or message/relational) mapping indeed arrived?

Wednesday, November 21, 2007


Sometimes I have to use JavaScript for my daily programming, and while it takes some time to switch to it, it has a certain beauty and allows some expressivity missing in "bigger" languages like Java or C#.

I tried to identify what features make it shine, and what features might add even more to the mix.

So great and already present:
  1. Free extension of any object with properties ("every object is a dictionary").
  2. (Limited) introspection ("for in").
  3. Closures as first-class values ("closure is a value", and even "closure is an object").
  4. Prototypes for ad hoc extensibility, e.g., interception of closure invocation at prototype level.

Potentially great, but missing:

  1. Ergonomic way to qualify property names (a la XML or RDF).
  2. More and deeper interception - e.g., access to properties (get/put/foreach).
  3. More advanced control operators - e.g., co-routines and generators.
  4. Concurrency at language level.
  5. More like a constraint on other features - capability-based security.

I see a certain synergy among the proposed features, so adding them might not automatically turn the language into a kitchen sink.

I would try to approach features 3 to 5 via join calculus. Now, if only I had time.

Sunday, November 18, 2007

A Methodology and Tool for Top-down Relational Database Design

This paper presents a methodology for logical design of relational schemas and integrity constraints using semantic binary schemas. This is a top-down methodology. In this methodology, a conceptual description of an enterprise is designed using a semantic binary model. Then, this description is converted into the relational database design.


Critical to the design of an historical database model is the representation of the existence of objects across the temporal dimension — for example, the "birth," "death," or "rebirth" of an individual, or the establishment or dis-establishment of a relationship. The notion of the "lifespan" of a database object is proposed as a simple framework for expressing these concepts. An object's lifespan is simply those periods of time during which the database models the properties of that object. In this paper we propose the historical relational data model (HRDM) and algebra that is based upon lifespans and that views the values of all attributes as functions from time points to simple domains. The model that we obtain is a consistent extension of the relational data model, and provides a simple mechanism for providing both time-varying data and time-varying schemes.

Monday, September 03, 2007

Reducibility Between Classes of Port Graph Grammar

... by Charles Stewart (a.k.a. cas).

It is interesting to compare reducibility results for PGGs with those for JCs (especially general SPGG in basic SPGG to general JC in core JC).

As a side note, I start to become uneasy with too complicated encodings... There should be some more fine-grained equivalence than an ability to encode one transformation system in another (something along the lines of expressivity).

For example, the standard encoding of general JC in core JC is not homomorphic in the constructs of core JC - implying it is not the identity on core JC terms - so this encoding does not prove expressibility of general JC in core JC Felleisen-wise.

PS: the link seems to be down for some reason.

Saturday, September 01, 2007

Core Join Calculus - too hardcore?

After looking closely at core JC, and the proposed encoding of full JC in it, I found it unsatisfactory for my taste.
One problem is that the encoding results in "busy-wait" (stuttering non-termination) even if the original term was "normalizing", which makes the chosen notion of equivalence suspicious. Also, the encoding is sufficiently complicated that one has to ask whether analysis of the resulting terms will not be harder than that of the original ones (I recognize that analysis of core JC as a language is easier than that of full JC, though - that's the usual trade-off between the complexity of a language and of its terms, sometimes exemplified by English vs. Chinese, or binary vs. hex).

So maybe I want to take a step back, and pick some variant of JC that is not as spartan as core JC. At the moment I see 3 differences between full and core JCs:
1. Polyadic vs. monadic messages (tuples of channels vs. single channels as payload).
2. N joins per definition vs. 1 join per definition.
3. M patterns per join vs. 2 patterns per join.

I am considering looking closer at the "monadic N M" variant, though in theory one could analyse all the combinations, and how they can be encoded in each other (could make a PhD thesis :) ). If one adds to these 3 dimensions another one for allowing or disallowing free names in definitions, this yields 2^4 = 16 variants. I suspect, though, that one does not need to define an encoding for each pair of combinations separately - they all could be factored via a few "encoding patterns" (e.g., encoding of N-M JC in 1-2 JC, of polyadic JC in monadic JC, and of "higher order/closured" JC in "first order/closureless" JC).
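Just to make the counting concrete, the four binary dimensions can be written down in Haskell (the names are ad hoc, mine only) and enumerated; the combinations indeed number 2^4 = 16:

```haskell
-- Ad hoc names for the four binary dimensions of JC variants.
data Arity     = Monadic | Polyadic     deriving (Show, Eq, Enum, Bounded)
data Joins     = OneJoin | NJoins       deriving (Show, Eq, Enum, Bounded)
data Patterns  = TwoPat  | MPat         deriving (Show, Eq, Enum, Bounded)
data FreeNames = Closed  | FreeAllowed  deriving (Show, Eq, Enum, Bounded)

-- All 2^4 = 16 variants, one per combination of the dimensions.
variants :: [(Arity, Joins, Patterns, FreeNames)]
variants = [ (a, j, p, f)
           | a <- [minBound ..], j <- [minBound ..]
           , p <- [minBound ..], f <- [minBound ..] ]
```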

Thursday, August 30, 2007

Defunc'ed Join Calculus, now in Haskell

>module J where

This module defines alternative semantics for join calculus (JC) by means of providing an interpreter.

Major differences from standard semantics are:
1. "Kinda closure-converted/defunctionalized" form of object program
- we do not have nested definitions defining scope of names, all pieces of code are "kinda combinators".
2. Making the code closed required introducing the notion of object instances (data Obj).
- these objects are just identities (plus "vtable"), and could have been defined as just (Int, Code).
- however, this module models identity of the object as history of reactions (joins) that lead to its creation.
3. As all state is carried by messages, the history must be carried by them, as well.
- as a side-effect, we get unlimited stack-, err, graph-trace of the executed program!
- not that this is practical :)

>import Data.List as List
>import Data.Maybe (listToMaybe)

Let us model monadic JC, i.e., messages carrying just one channel as their body.
(see Chapter 5).

Messages are the main part of the system, carrying all its mutable state.

"cause" is Nothing for primordial messages.

>data Msg = Msg {to::Channel, body::Channel, cause::Maybe Join} -- ("from" is redundant, as both causes have it as their "to")
> deriving (Eq, Ord)
>instance Show(Msg) where
> show (Msg t b Nothing) = (show t)++"("++(show b)++")"
> show (Msg t b (Just j)) = (show j)++" -> "++(show t)++"("++(show b)++")"

Join is just a couple of messages that reacted.

Forward reference: the two messages of a Join are addressed to the same object; the left one has port id L, the right one R.

>data Join = Join Msg Msg
> deriving (Eq, Ord)
>instance Show(Join) where
> show (Join m1 m2) = "<"++(show m1)++", "++(show m2)++">"

Channels are destinations of messages (and in standard monadic JC the only payload of messages).
Channels are identified by a pair of object identity and a port.

>data Channel = Ch {obj::Obj, port::PortId}
> deriving (Eq, Ord)
>instance Show(Channel) where
> show (Ch o p) = (show o)++(show p)

As this is core JC, each definition/code/object has just two channels, identified by their port ids - Left or Right.

>data PortId = L | R
> deriving (Show, Eq, Ord)

Obj is in fact a pair of an object identity and a procedure for handling messages ("kinda vtable").
Identity is formed by a NewId qualified by the birthjoin (think birthday).

"birthjoin" is Nothing for primordial objects.
TODO: "code" should not be a part of Eq's "key"

>data Obj = Obj {id::NewId, code::Code, birthjoin::Maybe Join}
> deriving (Eq, Ord)
>instance Show(Obj) where
> show (Obj i _ (Just j)) = (show j)++"->"++i
> show (Obj i _ Nothing) = i

NewId differentiates multiple objects created during reaction to the same join.
In a more bytecode-like implementation this would be an integer.

>type NewId = String

Code is the procedure to react to a join by creating new objects and sending new messages.
Code is closed, and the only input (capabilities!) it has are:
1. Its left and right channels - via ThisChannel
2. Messages in them (parts of the join causing the reaction) - via ThisMsg
3. Channels in newly created objects - via NewChannel

The reaction is a list of codes for new objects, keyed by their NewIds,
and a list of to/body references for new messages.
TODO: introduce datatypes for these pairs.

>data Code = Code [(NewId, Code)] [(Ref, Ref)]
> deriving (Show, Eq, Ord)

Reference - means for constructing capabilities.

>data Ref = ThisChannel PortId
> | ThisMsg PortId
> | NewChannel NewId PortId
> deriving (Show, Eq, Ord)

Config of the system is currently just a list of messages (order is important to define deterministic scheduling).

>type Config = [Msg]

Context is "a config with a hole". In this implementation it is assumed that the hole is at the end of the list.

>type Context = [Msg]

We follow the usual practice:
1. Find in the config a redex and its surrounding context.
2. Reduce redex.
3. Plug the result back.

One might use Danvy's refocusing here (link?),
but I am reluctant to optimize for speed just right now.

Just find the first(!) pair of messages having the same object and having left and right channels as destinations:

>findRedex :: Config -> Maybe (Join, Context)
>findRedex ms = listToMaybe $ [(Join m1 m2, ms \\ [m1, m2])| m1 <- ms, m2 <- ms, (obj $ to m1) == (obj $ to m2), (port $ to m1) == L, (port $ to m2) == R]

Basically translate capabilities to channels:

>reduce :: Join -> [Msg]
>reduce j@(Join l r) = map s2m sends where
> Code news sends = code this
> this = obj $ to l
> s2m (to, body) = Msg (r2c to) (r2c body) $ Just j
> r2c (ThisChannel p) = Ch this p
> r2c (ThisMsg L) = body l
> r2c (ThisMsg R) = body r
> r2c (NewChannel i p) = Ch (i2o i) p
> i2o i = case lookup i news of
> Just c -> Obj i c $ Just j
> Nothing -> error ("broken ref "++(show i))

Put result back into the hole:

>plugBack :: Context -> [Msg] -> Config
>plugBack c ms = c ++ ms

TODO: externalise scheduling out of findRedex/plugBack (probably use lazy evaluation - link?).

Now combine all three functions to make a single step:

>step :: Config -> Maybe Config
>step c = case findRedex c of
> Nothing -> Nothing
> Just (j@(Join l r), c2) -> Just $ plugBack c2 $ reduce j

And now the most dangerous part, as it is potentially non-terminating (unlike step):

>run :: Config -> Config
>run c = case step c of
> Nothing -> c
> Just c2 -> run c2

A few examples:

>c1 = Code [] [(ThisMsg L, ThisMsg R)]
>c2 = Code [] [(ThisMsg L, ThisChannel R), (ThisChannel L, ThisMsg R)]
>--c3 = Code [("x", c2)] [(ThisMsg L, NewChannel "x" L), (ThisMsg R, NewChannel "x" R)]

>ms1 = [Msg (Ch o1 L) (Ch o2 R) Nothing, Msg (Ch o1 R) (Ch o2 R) Nothing, Msg (Ch o2 L) (Ch o1 L) Nothing]
> where o1 = Obj "a" c1 Nothing ; o2 = Obj "b" c2 Nothing

Try "step ms1" and "run ms1" to get an idea.

The next installment should probably include a little compiler from LC terms to Config.

Monday, August 27, 2007

Programming Languages: Application and Interpretation

I have to find time to read this textbook - if only to verify my background.

Friday, August 24, 2007

"Defunctionalization" of Join Calculus

(See Defunctionalization at Work for background on the defunctionalization technique, and The reflexive CHAM and the join-calculus for background on join calculus.) What I do is not really defunctionalization, but I lack the right term for it.

Lambda calculus can be encoded in join calculus (JC). Algebraic datatypes can be encoded in either. A decent functional PL (think ML) can be encoded in JC. Yesterday I had the idea of looking at the reversal of this encoding - take a JC term, and tell whether it is an image of an ML term under this encoding (similar in spirit to recovering user-lambdas from brutally CPSed terms). Better yet, take a JC term and split it into functional and (emmm...) superfunctional pieces.
Then I wandered into implementation ideas, and suddenly decided I need to explore some kind of first-order JC. I mean, definitions in JC nest and form structures very similar in spirit to closures. What if I closure-convert the JC semantics, tinker with it a bit, and then put closures back?

I analyzed pages 5-6 of the JC paper, and started scribbling. Spoiled by UML, I didn't use sets or functions, only informal diagrams. "I'll validate it later", thought I.

Everything went fine, until I stumbled upon the necessity of constructing new instances. The need for elegant generation of fresh names, and manifestly safe references to them, was what stopped me... I will have to check the literature for inspiration.

An Accidental Simula User: Luca Cardelli

A personal view of the history behind the calculus of objects!

Coincidentally, just last night I considered playing with defunctionalized Join Calculus (a good topic for the next post).

Saturday, August 18, 2007

Analyzing the Environment Structure of Higher-Order Languages using Frame Strings

Besides tackling a really fundamental issue, the paper has an interesting preface.

Friday, July 13, 2007

Abstract And-Parallel Machines

I will have to persuade myself that I never read this paper before, as reading it induces déjà vu in me.

Excellent stuff!

Monday, July 09, 2007

Multiscale Scheduling: Integrating Competitive and Cooperative Scheduling in Theory and in Practice

They touch on a lot of ideas I have been vaguely exploring on this blog (hierarchical scheduling, scheduling as a game, utility-driven scheduling).

E.g., DAG models are superficially similar to cobs (though cobs are not sequential tasks, they are atomic units of rewriting).

Hope this program will succeed!

Tuesday, July 03, 2007

Sunday, June 24, 2007

Preventing Internet Denial-of-Service with Capabilities

The topic of DoS attacks has surfaced recently on LtU, so this paper may be relevant enough to PL design.

Imagine combining Trickles with this scheme...

Trickles - Stateless High Performance Networking

The Trickles network stack is a stateless replacement for TCP and the Berkeley sockets interface. It removes all per-connection state from one endpoint, typically the server, while providing important features such as congestion control, security of server-side state, and support for dynamic content.

In addition to markedly reduced memory overhead, our server-side stateless stack allows increased flexibility in redirecting network traffic at packet-level granularity, since any server can service any request regardless of past communications history. This enables new functionality in the network layer, such as transparent failover, load balancing, anycast services, and striped download.

I wonder how well this can be bent to work with multicast.

Thursday, April 05, 2007

Work, work

This message is just to make sure Blogger does not write me off as MIA :)

My family and job responsibilities have drawn my attention away from research and blogging; I hope to find some time soon to post some findings/ideas I have had since January.

Thursday, January 25, 2007

Punctuated Data Streams

We explore a kind of stream semantics called punctuated streams. Punctuations in a stream mark the end of substreams, allowing us to view a non-terminating stream as a mixture of terminating streams. We introduce three kinds of invariants to specify the proper behavior of query operators in the presence of punctuation. Pass invariants unblock blocking operators by defining when such an operator can pass results on. Keep invariants define what must be kept in local state to continue successful operation. Propagation invariants define when an operator can pass punctuation on. We then present a strategy for proving that implementations of these invariants are faithful to their finite table counterparts.

Some definitions could be streamlined, and typos corrected, but otherwise it looks interesting.
I also suspect that reformulating the definitions in terms of domain theory might have benefited both reader and writer (fewer implementation-specific details).
On the other hand, category theory might be overkill for this...

Also, can this be seen as a good application for concurrent constraint programming?
At first glance, punctuations look just like constraints.

Wednesday, January 24, 2007

Comonadic functional attribute evaluation

They relate zippers to comonads. Wow!

CoKleisli categories make comonads relevant for analyzing notions of context-dependent function. If the object D A is viewed as the type of contextually situated values of A, a context-dependent function from A to B is a map D A -> B in the base category, i.e., a map from A to B in the coKleisli category. The counit eA : D A -> A discards the context of its input whereas the coextension k† : D A -> D B of a function k : D A -> B essentially duplicates it (to feed it to k and still have a copy left).

No wonder processes looked like zippers to me - they are about contextually situated values (err, actually computations :( ).

Monads, Kleisli Arrows, Comonads and other Rambling Thoughts

Simple, and to the point.

Now I wonder how to generalize this result to stream processors that do not preserve the rate (i.e., that produce a list on the output stream for each history of the input stream). Generalization to multiple inputs is next.

Tuesday, January 23, 2007

The Theory of Timed I/O Automata

Especially note receptiveness.

What kind of logic is “Independence Friendly” logic?

In a series of singular, thought-provoking publications in recent years, Jaakko Hintikka has vigorously promoted consideration of an extension of first-order logic called IF logic, along with claims that its adoption promises to have revolutionary consequences. My main purpose here is to examine in what sense it deserves to be called a logic.


Friday, January 19, 2007


I have wanted to compare IF logic to computability logic for a long time. I must find time to do that soon.

Payment as part of game semantics

In CL, a semantical object maps each run to a set of possible moves for both players and a currently winning player. One idea would be to replace the latter with payments. I mean, if one uses games as contracts, it makes more sense to say that as a result of our interaction I owe you 10 euro by the end of the year than to say that I lost my contract. As a side effect, it makes win/win (or lose/lose) outcomes possible - I owe you 10 euro by the end of the year, but you owe me 10 dollars by the end of the month. Then again, how to enforce these payments? Sooner or later we must reach some escape hatch for extra-logical enforcement (court?). But I still believe that it may be more useful to come to court asking for 10 euro than asking to make false true.

Also, when going to court, you want to bring some evidence, or a witness. So the outcome is not a boolean, but a scheduled flow of expected payments, each annotated with a witness (a non-repudiable pointer to the move that decided this payment). Note that missing a payment must be interpreted as a move as well - and thus modify the outcome. I wonder if this can be implemented at least in theory, or whether there is some metacircular recursion ("it's moves all the way down").
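A rough sketch of that outcome type in Haskell (every name here is made up for illustration; nothing is worked out): instead of a boolean winner, a run yields a schedule of payments, each annotated with the witness move that decided it.

```haskell
-- Illustrative only: "outcome as a scheduled flow of payments".
data Player   = Me | You  deriving (Show, Eq)
data Currency = EUR | USD deriving (Show, Eq)

data Payment move = Payment
  { payer    :: Player
  , payee    :: Player
  , amount   :: Int
  , currency :: Currency
  , deadline :: Int       -- abstract time point ("end of the year")
  , witness  :: move      -- non-repudiable pointer to the deciding move
  } deriving Show

-- The outcome of a run: who owes what to whom, when, and why.
type Outcome move = [Payment move]

-- Win/win is expressible: I owe you 10 euro, you owe me 10 dollars.
example :: Outcome String
example =
  [ Payment Me  You 10 EUR 365 "your winning move"
  , Payment You Me  10 USD 30  "my winning move"
  ]
```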


Thursday, January 18, 2007

Untangling conditions from responsibility

I've re-read the introduction to computability logic, and found that I might want to define my own version of it (or at least to try that).
For example, in CL it is not possible to define arbitrary predicates for branching - only those that can be assembled from conjunction and negation. It may sound funny at first glance (and/not is a sufficient set of operations, as is the Sheffer stroke, for example, right?), but this only holds in classical logic - as games are stateful in a way, one should not duplicate them lightheartedly, so some kind of linearity is needed. For example, XOR or EQ cannot be defined using AND and NOT without duplicating the parameters (A XOR B is !(!A & !B) & !(A & B)).

To overcome this problem, I propose to replace conjunction and negation in the logic with a branching operator with an attached truth table. So NOT A becomes BRANCH(a:A; not a), AND A B - BRANCH(a:A, b:B; a and b), XOR A B - BRANCH(a:A, b:B; a xor b) - where I write truth tables as classical logic formulas for brevity. Note that each subgame is mentioned by the branch exactly once.
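The proposal can be sketched in Haskell (a toy with made-up names, and only the classical evaluation of the truth tables, not the game semantics): a game is an external variable or a branch over subgames with an attached truth table, and each subgame is mentioned exactly once.

```haskell
-- Toy model: games with explicit truth tables at branches.
data Game v = Var v
            | Branch [Game v] ([Bool] -> Bool)

notG :: Game v -> Game v
notG a = Branch [a] (\[x] -> not x)

andG, xorG :: Game v -> Game v -> Game v
andG a b = Branch [a, b] (\[x, y] -> x && y)
xorG a b = Branch [a, b] (\[x, y] -> x /= y)

-- Classical evaluation against an assignment of the variables; the real
-- semantics would instead interpret Branch game-theoretically.
evalG :: (v -> Bool) -> Game v -> Bool
evalG env (Var v)       = env v
evalG env (Branch gs f) = f (map (evalG env) gs)
```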

I have to explore the issues raised by the fact that it now becomes possible to define branches whose conditions are not monotonically non-decreasing. The previous encoding elegantly combined conditions with switching of responsibility. I would like to untangle the one from the other, if only to prove that the coupling was unnecessary. So in addition to replacing and/not with branch, I would like to explicitly annotate choices and quantifications with the party responsible for them, and make this annotation unaffected by branching conditions. This will yield more verbose formulas, but has the potential to uncover some independence between features that were coupled.

Also, for various reasons, I will use game variables instead of predicates as a base case.

All in all, game language could be defined as:
G = branch (Seq n G) satisfying (Seq n Bool -> Bool)
| hide x in G by Bool
| choose (Seq n G) by Bool -- in future also dependently-typed version of this
| let x = G in G -- recursion definition
| x -- recursion backlink
| v -- external game variable
| replicate G by Bool -- replication, may add some constraints on cardinality later

I may want to drop recursion and move to representing games as directed graphs at some point. ATM, textual representation is more handy, though.

The next installment will probably discuss terms (err, runs) for these types (games) or theories and models of this language.

Tuesday, January 16, 2007

Hard-play model

See section 15.

The type of the process I described previously is similar to the hard-play model of Giorgi Japaridze.
In terms of game semantics, the environment can make as many moves as it wants (and as is possible in the current state) by filling the holes, and only then give the process an opportunity to make a move (by forcing it).

Hard-play and easy-play models intersect on a practical class of problems, so it might be enough to support only one of them. However, human factors might require supporting both (or a more general model).

E.g. (piohiho process):

class Process p i o hi ho where
  schedule :: (p, Map hi i, Set ho) -> (p, Map ho o, Set hi)

The environment passes to a process some of the previously requested inputs, and requests some outputs. The process responds with some outputs, and requests some inputs.
Note that while values of both ho and hi must be used linearly, it is allowed to postpone their fulfilment by several steps.
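To make the interface concrete, here is a deliberately trivial instance sketch (the Echo process and all its names are my invention; the class is repeated so the snippet stands alone):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

import           Data.Map (Map)
import qualified Data.Map as Map
import           Data.Set (Set)
import qualified Data.Set as Set

-- The interface from the post, repeated verbatim.
class Process p i o hi ho where
  schedule :: (p, Map hi i, Set ho) -> (p, Map ho o, Set hi)

-- A stateless process that answers every requested output with a
-- constant and never requests any inputs back.
data Echo = Echo

instance Process Echo String String Int Int where
  schedule (p, _inputs, requested) =
    ( p
    , Map.fromList [ (h, "pong") | h <- Set.toList requested ]
    , Set.empty )
```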

PS: alternatively, to make the justification structure more obvious:

class Process p i o hi ho where
  schedule :: (p, Map hi (i, Set ho)) -> (p, Map ho (o, Set hi))

Monday, January 15, 2007

Indexed Containers

Two pieces of news. The good one - Conor McBride shows how to type interactive processes (section 7.3: Imperative Interfaces and Container Drivers).
The bad one - the type system involved is quite involved.

PS: also, I am not yet sure his idea indeed covers non-alternating games, so to say.

Weakly-typed processes

To give a better idea of the process type I mentioned in the previous post, here is its weakly-typed variant:

class Process p o i h | p -> o i h where
  force    :: p -> (o, Maybe h, p)
  fill     :: (p, h, i) -> p
  runnable :: p -> Bool

Method force obtains an output, a potential hole, and a (new state of the) process.
Method fill fills a hole of the process with an input, obtaining a process.
Method runnable checks whether the process can be forced (at least once).

I call this variant weakly-typed because it permits various run-time errors (which are type errors for the tiny process language being interpreted). Examples are: forcing unrunnable process, filling a hole that does not belong to a process, filling a hole with a value of wrong type.

An encoding of the same functionality that statically catches these three errors is left as an exercise for the reader (the first one is easy, while the remaining two require either dependent types or unwieldy tricks, I believe - one approach would be to carry in the type of a process a boolean representing runnability and a map from holes, represented as Peano numbers, to their types - and I do not see how to make types that differ only in the order of holes equal).

PS: an idea - instead of Map Nat Type I can have [Maybe Type], which has simpler equality. However, the latter type either grows unboundedly as the process evolves, or requires reuse of garbage (i.e., Nothing) slots. GC at the type level, anyone? :)

Number-parameterized types

Yesterday I played with number-parameterized types in Haskell (actually, GHC).
The ultimate goal was to create a type for a process, that when forced, pops out an output and another process, which can be forced again, etc. The environment can later graft inputs at the places in the process that popped outputs (not necessarily in the order of outputs, but keeping correspondence between inputs and outputs). E.g., process can request an input from a file, then from a keyboard, then environment will provide it with input from a keyboard, and then from file. To complicate the issue even more, different choices of outputs have different options for inputs (so types of inputs are indexed by values of outputs).

I started from more modest types to make sure I understand what I am doing.
First I needed a couple of phantom types to build arithmetic on types:

data Z -- 0 inhabitants
data S a = L a -- as many inhabitants as in a

class Sum a b c | a b -> c
instance Sum Z a a
instance Sum a (S b) c => Sum (S a) b c

Among other things, this enabled me to build sequences parameterized by their size:

data Seq a s = Seq [a]
  deriving Show

nil :: Seq a Z
nil = Seq []

cons :: a -> Seq a m -> Seq a (S m)
cons h (Seq t) = Seq (h:t)

app :: Sum m n s => Seq a m -> Seq a n -> Seq a s
app (Seq m) (Seq n) = Seq (m ++ n)

Note that I use ordinary lists inside Seq, so I can reuse all their operations (e.g., ++).
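Here is a small standalone usage example (repeating the definitions above so it can be loaded on its own; the example values s12 etc. are mine): the length of an appended sequence is computed at the type level.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, UndecidableInstances #-}

-- Definitions from the post, repeated so this compiles standalone.
data Z
data S a = L a

class Sum a b c | a b -> c
instance Sum Z a a
instance Sum a (S b) c => Sum (S a) b c

data Seq a s = Seq [a]
  deriving Show

nil :: Seq a Z
nil = Seq []

cons :: a -> Seq a m -> Seq a (S m)
cons h (Seq t) = Seq (h : t)

app :: Sum m n s => Seq a m -> Seq a n -> Seq a s
app (Seq m) (Seq n) = Seq (m ++ n)

-- The compiler checks 2 + 1 = 3 at the type level:
s12 :: Seq Int (S (S Z))
s12 = cons 1 (cons 2 nil)

s3only :: Seq Int (S Z)
s3only = cons 3 nil

s123 :: Seq Int (S (S (S Z)))
s123 = app s12 s3only
```

Changing the type annotation of s123 to any other length makes the program fail to compile, which is the whole point of the exercise.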

The next step would be to convert ordinary lists to sequences.
For that, I chose to bundle sequences with their length representation in an existential package. First, I define a type for the representation of naturals:

data NatRep s = NatRep Int
  deriving Show

z :: NatRep Z
z = NatRep 0

s :: NatRep a -> NatRep (S a)
s (NatRep n) = NatRep $ n+1

I could have done the following:

data Nat = Zero | Succ Nat
  deriving Show

data NatRep s = NatRep Nat
  deriving Show

z :: NatRep Z
z = NatRep Zero

s :: NatRep a -> NatRep (S a)
s (NatRep n) = NatRep $ Succ n

, but decided to keep Peano arithmetic restricted to types.

Now I am equipped with tools to create dynamically typed sequences parameterized by their size (quite a mouthful):

data DynSeq a = forall m . DynSeq (NatRep m) (Seq a m)

instance Show a => Show (DynSeq a) where
  show (DynSeq n sq) = "(DynSeq " ++ (show n) ++ " " ++ (show sq) ++ ")"

toDynSeq :: [a] -> DynSeq a
toDynSeq [] = DynSeq z nil
toDynSeq (a:as) = case toDynSeq as of DynSeq n sq -> DynSeq (s n) $ cons a sq

A couple of surprises notwithstanding (the inability to derive Show for DynSeq, or to use a pattern with DynSeq in either let or where), it was quite easy (if not optimal - should I have used GADTs?). Also, I suspect that the decision to use lists inside Seq and Int inside DynSeq will bite me more than once. Also, I am not comfortable with the approach of using functions like nil and cons instead of constructors; this may hinder the compiler from helping me.

Will have to think about ways to reach the ultimate goal now...

See also

PS: I am using the following options:
{-# OPTIONS_GHC -fglasgow-exts -fallow-undecidable-instances -fallow-overlapping-instances #-}

Saturday, January 13, 2007

Helping in a multiprocessor environment

Helping occurs when a thread A wants to lock an object that is already locked by another thread B. Instead of blocking, A donates time to B, helping it to finish its critical section. Helping is an implementation of priority inheritance.

So I learned that the idea I use in my interpreter is in fact widely known and used. No wonder.

Will have to add this line of research to my reading list.

Monday, January 08, 2007

Extending the Multilisp Sponsor Model

That's the paper I should have read a long time ago!
It would give me the use cases to test my ideas against.
Now, I hope I will have more motivation to hack up some running code.