Thursday, January 25, 2007

Punctuated Data Streams


We explore a kind of stream semantics called punctuated
streams. Punctuations in a stream mark the end of substreams, allowing us to view a
non-terminating stream as a mixture of terminating streams. We introduce three kinds of
invariants to specify the proper behavior of query operators in the presence of punctuation.
Pass invariants unblock blocking operators by defining when such an operator can pass
results on. Keep invariants define what must be kept in local state to continue successful
operation. Propagation invariants define when an operator can pass punctuation on. We
then present a strategy for proving that implementations of these invariants are faithful
to their finite table counterparts.

Some definitions could be streamlined and typos corrected, but otherwise it looks interesting.
I also suspect that reformulating the definitions in terms of domain theory might have benefited both reader and writer (fewer implementation-specific details).
On the other hand, category theory might be overkill for this...

Also, can this be seen as a good application for concurrent constraint programming?
At first glance, punctuations look just like constraints.
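As a hypothetical sketch (my notation, not the paper's) of how punctuation unblocks a blocking operator, consider counting per substream: without punctuation, a count over a non-terminating stream could never emit anything.

```haskell
-- Hypothetical sketch: a stream item is either a datum or a punctuation
-- marking the end of a substream.
data Item a = Datum a | Punct

-- Counting is a blocking operator; a pass invariant lets it emit one
-- count per punctuated substream, and the keep invariant is just the
-- running count of the currently open substream.
countPerSubstream :: [Item a] -> [Int]
countPerSubstream = go 0
  where
    go _ []             = []            -- an unterminated tail yields nothing
    go n (Datum _ : xs) = go (n + 1) xs
    go n (Punct   : xs) = n : go 0 xs
```

This views the non-terminating stream as a mixture of terminating streams, exactly as in the abstract.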

Wednesday, January 24, 2007

Comonadic functional attribute evaluation

They relate zippers to comonads. Wow!

CoKleisli categories make comonads relevant for analyzing notions of context-dependent
function. If the object D A is viewed as the type of contextually situated
values of A, a context-dependent function from A to B is a map D A -> B in the base
category, i.e., a map from A to B in the coKleisli category. The counit eA : D A -> A
discards the context of its input whereas the coextension k† : D A -> D B of a function k : D A -> B essentially duplicates it (to feed it to k and still have a copy left).

No wonder processes looked like zippers to me - they are about contextually situated values (err, actually computations :( ).
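The quoted description can be made concrete in a few lines of Haskell (class and instance names are mine, not the paper's): a list zipper plays the role of D, the counit discards the context, and coextension refocuses the zipper at every position before applying a context-dependent function.

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- A comonad: counit discards context, coextension duplicates it.
class Functor d => Comonad d where
  counit   :: d a -> a
  coextend :: (d a -> b) -> d a -> d b

-- A list zipper: a focus with its left context (nearest element first)
-- and right context -- a "contextually situated value".
data Zip a = Zip [a] a [a] deriving (Eq, Show, Functor)

left, right :: Zip a -> Maybe (Zip a)
left  (Zip (l:ls) x rs) = Just (Zip ls l (x:rs))
left  _                 = Nothing
right (Zip ls x (r:rs)) = Just (Zip (x:ls) r rs)
right _                 = Nothing

iterateMaybe :: (a -> Maybe a) -> a -> [a]
iterateMaybe f x = maybe [] (\y -> y : iterateMaybe f y) (f x)

instance Comonad Zip where
  counit (Zip _ x _) = x
  -- duplicate the context (refocus at every position), then apply the
  -- context-dependent function k everywhere
  coextend k z = fmap k (Zip (iterateMaybe left z) z (iterateMaybe right z))
```

A context-dependent function such as a three-point neighbourhood sum is then an ordinary map `Zip Int -> Int`, and `coextend` runs it at every focus.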

Monads, Kleisli Arrows, Comonads and other Rambling Thoughts

Simple, and to the point.

Now I wonder how to generalize this result to stream processors that do not preserve rate (i.e., that may produce a list of outputs for each history of the input stream). Generalization to multiple inputs is next.

Tuesday, January 23, 2007

The Theory of Timed I/O Automata

Especially note receptiveness.

What kind of logic is “Independence Friendly” logic?

In a series of singular, thought-provoking publications in recent years, Jaakko
Hintikka has vigorously promoted consideration of an extension of first-order logic called
IF logic, along with claims that its adoption promises to have revolutionary
consequences. My main purpose here is to examine in what sense it deserves to be called
a logic.

Uh-oh.

Friday, January 19, 2007

A REVOLUTION IN LOGIC?

I wanted to compare IF logic to computability logic for a long time. Must find time to do that soon.

Payment as part of game semantics

In CL, a semantical object maps each run to a set of possible moves for both players and a currently winning player. One idea would be to replace the latter with payments. I mean, if one uses games as contracts, it makes more sense to say that as a result of our interaction I owe you 10 euro by the end of the year than to say that I lost my contract. As a side effect, this makes win/win (or lose/lose) outcomes possible - I owe you 10 euro by the end of the year, but you owe me 10 dollars by the end of the month. Then again, how does one enforce these payments? Sooner or later we must reach some escape hatch for extra-logical enforcement (a court?). But I still believe that it may be more useful to come to court asking for 10 euro than asking to make false true.

Also, when going to court, you want to bring some evidence, or a witness. So the outcome is not a boolean, but a scheduled flow of expected payments, each annotated with a witness (a non-repudiable pointer to the move that decided this payment). Note that missing a payment must be interpreted as a move as well - and thus modifies the outcome. I wonder if this can be implemented at least in theory, or whether there is some metacircular recursion ("it's moves all the way down").
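A purely illustrative data-type sketch of such an outcome (every name below is made up; this is not CL's notation):

```haskell
-- Hypothetical sketch: the outcome of a run is not a boolean winner but
-- a schedule of expected payments, each backed by a witness -- a
-- non-repudiable pointer to the move that decided it. Missing a payment
-- would itself be a move that modifies the outcome.
type MoveId = Int

data Party = Me | You deriving (Eq, Show)

data Payment = Payment
  { payer    :: Party
  , payee    :: Party
  , amount   :: Rational
  , currency :: String
  , deadline :: Int      -- abstract time: "end of the year", etc.
  , witness  :: MoveId   -- the move that decided this payment
  } deriving (Eq, Show)

-- win/win and lose/lose are representable: payments can flow both ways
type Outcome = [Payment]
```

The example from the text - I owe you 10 euro by year's end, you owe me 10 dollars by month's end - is then just a two-element `Outcome`.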

PS: http://dsns.csie.nctu.edu.tw/research/crypto/HTML/PDF/C82/199.PDF

Thursday, January 18, 2007

Untangling conditions from responsibility

I've re-read the introduction to computability logic, and found that I might want to define my own version of it (or at least to try that).
For example, in CL it is not possible to define arbitrary predicates for branching - only those that can be assembled from conjunction and negation. It may sound funny at first glance (and/not is a sufficient set of operations, as is the Sheffer stroke, for example, right?), but this only holds in classical logic - as games are stateful in a way, one should not duplicate them lightheartedly, so some kind of linearity is needed. For example, XOR or EQ cannot be defined using AND and NOT without duplicating the parameters (A XOR B is !(!A & !B) & !(A & B)).

To overcome this problem, I propose to replace conjunction and negation in the logic with a branching operator with an attached truth table. So NOT A becomes BRANCH(a:A; not a), AND A B - BRANCH(a:A, b:B; a and b), XOR A B - BRANCH(a:A, b:B; a xor b) - where I write truth tables as classical logic formulas for brevity. Note that each subgame is mentioned by BRANCH exactly once.

I have to explore the issues raised by the fact that it now becomes possible to define branches with conditions that are not monotonically non-decreasing. The previous encoding elegantly combined conditions with switching responsibility. I would like to untangle one from the other, if only to prove that the coupling was unnecessary. So in addition to replacing and/not by branch, I would like to explicitly annotate choices and quantifications with the party responsible for them, and make this annotation unaffected by branching conditions. This will yield more verbose formulas, but has the potential to uncover independence between features that were previously coupled.

Also, for various reasons, I will use game variables instead of predicates as a base case.

All in all, game language could be defined as:
G = branch (Seq n G) satisfying (Seq n Bool -> Bool)
  | hide x in G by Bool
  | choose (Seq n G) by Bool -- in future also a dependently-typed version of this
  | let x = G in G -- recursion definition
  | x -- recursion backlink
  | v -- external game variable
  | replicate G by Bool -- replication, may add some constraints on cardinality later
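A direct (size-untyped) transcription of this grammar into a Haskell data type, purely as a sketch - `Seq n G` becomes a plain list, and the `by Bool` annotations stay as booleans naming the responsible party:

```haskell
type Var = String

-- Each constructor mirrors one production of the grammar above.
data Game
  = Branch [Game] ([Bool] -> Bool)  -- branch ... satisfying a truth table
  | Hide Var Game Bool              -- hide x in G by party
  | Choose [Game] Bool              -- choose ... by party
  | Let Var Game Game               -- recursion definition
  | Ref Var                         -- recursion backlink
  | ExtVar Var                      -- external game variable
  | Replicate Game Bool             -- replication

-- XOR as a branch, with each subgame mentioned exactly once:
xorG :: Game -> Game -> Game
xorG a b = Branch [a, b] (\[x, y] -> x /= y)
```

Note how the truth table is an arbitrary function of the subgame outcomes, not something assembled from conjunction and negation.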

I may want to drop recursion and move to representing games as directed graphs at some point. At the moment, the textual representation is handier, though.

The next installment will probably discuss terms (err, runs) for these types (games) or theories and models of this language.

Tuesday, January 16, 2007

Hard-play model

See section 15.

The type of the process I described previously is similar to the hard-play model of Giorgi Japaridze.
In terms of game semantics, the environment can make as many moves as it wants (and as are possible in the current state) by filling the holes, and only then gives the process an opportunity to make a move (by forcing it).

Hard-play and easy-play models intersect on a practical class of problems, so it might be enough to support only one of them. However, human factors might require supporting both (or a more general model).

E.g. (piohiho process):

class Process p i o hi ho where
  schedule :: (p, Map hi i, Set ho) -> (p, Map ho o, Set hi)


The environment passes to a process some of the previously requested inputs, and requests some outputs. The process responds with some outputs, and requests some inputs.
Note that while values of both ho and hi must be used linearly, it is allowed to postpone their fulfilment by several steps.

PS: alternatively, to make the justification structure more obvious:

class Process p i o hi ho where
  schedule :: (p, Map hi (i, Set ho)) -> (p, Map ho (o, Set hi))

Monday, January 15, 2007

Indexed Containers

Two pieces of news. The good one: Conor McBride shows how to type interactive processes (section 7.3: Imperative Interfaces and Container Drivers).
The bad one: the type system involved is quite involved.

PS: also, I am not yet sure his idea indeed covers non-alternating games, so to say.

Weakly-typed processes

To give a better idea of the process type I mentioned in the previous post, here is its weakly-typed variant:

class Process p o i h | p -> o i h where
  force :: p -> (o, Maybe h, p)
  fill :: (p, h, i) -> p
  runnable :: p -> Bool

Method force obtains an output, a potential hole, and a (new state of the) process.
Method fill fills a hole of the process with an input, obtaining a process.
Method runnable checks whether the process can be forced (at least once).

I call this variant weakly-typed because it permits various run-time errors (which are type errors for the tiny process language being interpreted). Examples are: forcing an unrunnable process, filling a hole that does not belong to the process, and filling a hole with a value of the wrong type.
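To make the class concrete, here is a toy instance of my own invention (the class is repeated so the snippet is self-contained): an echo process that asks for input, opening a fresh hole, waits for it to be filled, then echoes the input back. The first two run-time errors show up as the `error` calls.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             TypeSynonymInstances, FlexibleInstances #-}

class Process p o i h | p -> o i h where
  force    :: p -> (o, Maybe h, p)
  fill     :: (p, h, i) -> p
  runnable :: p -> Bool

data Echo = Ask Int         -- will ask for input, opening hole n
          | Wait Int        -- blocked until hole n is filled
          | Say Int String  -- will echo a string, then open hole n+... 

instance Process Echo String String Int where
  force (Ask n)   = ("input?", Just n, Wait n)
  force (Say n s) = (s, Just n, Wait n)
  force (Wait _)  = error "forcing an unrunnable process"
  fill (Wait n, h, i)
    | h == n      = Say (n + 1) i
  fill _          = error "filling a hole that does not belong to the process"
  runnable (Wait _) = False
  runnable _        = True
```

The third error (wrong input type) cannot even be expressed here, since both i and o are fixed to String by the instance.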

An encoding of the same functionality that statically catches these three errors is left as an exercise for the reader (the first one is easy, while the remaining two require either dependent types or unwieldy tricks, I believe - one approach would be to carry in the type of a process a boolean representing runnability and a map from holes, represented as Peano numbers, to their types - and I do not see how to make types that differ only in the order of holes equal).

PS: idea: instead of Map Nat Type I can have [Maybe Type], which has simpler equality. However, the latter type either grows unboundedly as the process evolves, or requires reuse of garbage (i.e., Nothing) slots. GC at the type level, anyone? :)

Number-parameterized types

Yesterday I played with number-parameterized types in Haskell (actually, GHC).
The ultimate goal was to create a type for a process that, when forced, pops out an output and another process, which can be forced again, etc. The environment can later graft inputs at the places in the process that popped outputs (not necessarily in the order of the outputs, but keeping the correspondence between inputs and outputs). E.g., a process can request an input from a file, then from a keyboard; then the environment will provide it with input from the keyboard, and then from the file. To complicate the issue even more, different choices of outputs have different options for inputs (so types of inputs are indexed by values of outputs).

I started from more modest types to make sure I understand what I am doing.
First I needed a couple of phantom types to build arithmetic on types:

data Z -- 0 inhabitants
data S a = L a -- as many inhabitants as in a

class Sum a b c | a b -> c
instance Sum Z a a
instance Sum a (S b) c => Sum (S a) b c


Among other things, this enabled me to build sequences parameterized by their size:

data Seq a s = Seq [a]
  deriving Show

nil :: Seq a Z
nil = Seq []

cons :: a -> Seq a m -> Seq a (S m)
cons h (Seq t) = Seq (h:t)

app :: Sum m n s => Seq a m -> Seq a n -> Seq a s
app (Seq m) (Seq n) = Seq (m ++ n)


Note that I use ordinary lists inside Seq, so I can reuse all their operations (e.g., ++).

Next step would be to convert ordinary lists to sequences.
For that, I chose to bundle sequences with their length representation in an existential package. First, I define a type for the representation of naturals:

data NatRep s = NatRep Int
  deriving Show

z :: NatRep Z
z = NatRep 0

s :: NatRep a -> NatRep (S a)
s (NatRep n) = NatRep $ n+1


I could have done the following:

data Nat = Zero | Succ Nat
  deriving Show

data NatRep s = NatRep Nat
  deriving Show

z :: NatRep Z
z = NatRep Zero

s :: NatRep a -> NatRep (S a)
s (NatRep n) = NatRep $ Succ n


, but decided to keep Peano arithmetic restricted to types.

Now I am equipped with tools to create dynamically typed sequences parameterized by their size (quite a mouthful):

data DynSeq a = forall m . DynSeq (NatRep m) (Seq a m)

instance Show a => Show (DynSeq a) where
  show (DynSeq n sq) = "(DynSeq " ++ show n ++ " " ++ show sq ++ ")"

toDynSeq :: [a] -> DynSeq a
toDynSeq [] = DynSeq z nil
toDynSeq (a:as) = case toDynSeq as of DynSeq n sq -> DynSeq (s n) $ cons a sq


A couple of surprises notwithstanding (the inability to derive Show for DynSeq, or to use a pattern with DynSeq in either let or where), it was quite easy (if not optimal - should I have used GADTs?). Also, I suspect that the decision to use lists inside Seq and Int inside DynSeq will bite me more than once. And I am not comfortable with the approach of using functions like nil and cons instead of constructors; this may prevent the compiler from helping me.
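For reference, here are the pieces above collected into one self-contained module (with the Sum fundep written out, the deriving clauses attached, and LANGUAGE pragmas standing in for the old -fglasgow-exts flag), plus a small size-checked usage example:

```haskell
{-# LANGUAGE EmptyDataDecls, MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, UndecidableInstances,
             ExistentialQuantification #-}

data Z           -- 0 inhabitants
data S a = L a   -- as many inhabitants as in a

class Sum a b c | a b -> c
instance Sum Z a a
instance Sum a (S b) c => Sum (S a) b c

data Seq a s = Seq [a]
  deriving Show

nil :: Seq a Z
nil = Seq []

cons :: a -> Seq a m -> Seq a (S m)
cons h (Seq t) = Seq (h : t)

app :: Sum m n s => Seq a m -> Seq a n -> Seq a s
app (Seq m) (Seq n) = Seq (m ++ n)

-- the size is tracked statically: this only compiles because 1 + 2 = 3
oneTwoThree :: Seq Int (S (S (S Z)))
oneTwoThree = app (cons 1 nil) (cons 2 (cons 3 nil))

data NatRep s = NatRep Int
  deriving Show

z :: NatRep Z
z = NatRep 0

s :: NatRep a -> NatRep (S a)
s (NatRep n) = NatRep (n + 1)

data DynSeq a = forall m . DynSeq (NatRep m) (Seq a m)

instance Show a => Show (DynSeq a) where
  show (DynSeq n sq) = "(DynSeq " ++ show n ++ " " ++ show sq ++ ")"

toDynSeq :: [a] -> DynSeq a
toDynSeq []       = DynSeq z nil
toDynSeq (a : as) = case toDynSeq as of
  DynSeq n sq -> DynSeq (s n) (cons a sq)
```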

Will have to think about ways to reach the ultimate goal now...

See also http://okmij.org.

PS: I am using the following options:
{-# OPTIONS_GHC -fglasgow-exts -fallow-undecidable-instances -fallow-overlapping-instances #-}

Saturday, January 13, 2007

Helping in a multiprocessor environment

Helping occurs when a thread A wants to lock an object that is already locked by another thread B. Instead of blocking, A donates time to B, helping it to finish its critical section. Helping is an implementation of priority inheritance.
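A toy, single-threaded illustration of the idea (no real atomicity or scheduling; a production implementation would use CAS and thread descriptors, and all names here are my own): a held lock publishes the remainder of its owner's critical section, so a contender can run it on the owner's behalf instead of blocking.

```haskell
import Data.IORef

-- Nothing = free; Just work = held, with the owner's critical section
-- published so that contenders can execute it ("help").
type Lock = IORef (Maybe (IO ()))

-- Enter a critical section: if the lock is free, take it and run our own
-- work; if it is held, donate our time to the holder first, then retry.
enter :: Lock -> IO () -> IO ()
enter lock myWork = do
  held <- readIORef lock
  case held of
    Nothing -> do
      writeIORef lock (Just myWork)  -- acquire and publish
      myWork
      writeIORef lock Nothing        -- release
    Just theirWork -> do
      theirWork                      -- help: run the holder's section
      writeIORef lock Nothing        -- their section is done; lock freed
      enter lock myWork              -- now run our own section
```

In the simulation below, thread B's section has been published but B is preempted; A helps B finish before doing its own work, so B's effect lands first.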

So I learned that the idea I use in my interpreter is in fact widely known and used. No wonder.

Will have to add this line of research to my reading list.

Monday, January 08, 2007

Extending the Multilisp Sponsor Model

That's the paper I should have read a long time ago!
It would give me the use cases to test my ideas against.
Now, I hope I will have more motivation to hack up some running code.