While playing with STM, it occurred to me that it actually combines two features: optimistic transaction isolation (OTI), which allows transactions to execute seemingly independently of each other, and backtracking (BT, implemented via retry/orElse/bind in place of fail/or/and), which ensures atomicity.
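Both facets show up in GHC's standard STM API. A minimal sketch (the `transfer` function and the account representation are mine, not from any library):

```haskell
import Control.Concurrent.STM

type Account = TVar Int

-- BT facet: 'retry' abandons the transaction and re-runs it once
-- 'from' has changed; 'orElse' would try an alternative instead.
-- OTI facet: 'atomically' runs the block optimistically and commits
-- only if no conflicting write happened in the meantime.
transfer :: Account -> Account -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  if balance < amount
    then retry
    else do
      writeTVar from (balance - amount)
      modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  print =<< readTVarIO a  -- 70
  print =<< readTVarIO b  -- 30
```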
If one takes care when designing a concurrent language, then OTI becomes unnecessary. For that, it is sufficient to have:
NDW: non-destructive writes - the sender of a communication cannot undo previous actions; it can only "append" to them.
LRD: localised read and destroy - access to read/destroy operations on communication media is tightly controlled and in some sense local (like a single thread).
MB: monotonic behaviors - a behavior cannot fail on an input larger than one on which it succeeded.
Join calculus enjoys all three properties. Indeed, NDW is ensured by writes appending to a multiset of messages, LRD is ensured syntactically by coupling the creation of channels with the specification of reactions to them, and MB holds trivially: join calculus behaviors can only fail (that is, block) because of a missing message, never because of an extra one.
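For illustration, a single join pattern a(x) & b(y) ▷ print (x + y) can be mimicked in plain Concurrent Haskell (a sketch only: a `Chan` is a FIFO rather than a multiset, but the append-only discipline is the same, and real join calculus implementations such as JoCaml provide this natively):

```haskell
import Control.Concurrent

-- Join pattern  a(x) & b(y) |> print (x + y), mimicked with channels.
-- NDW: producers can only append messages to a and b.
-- LRD: only this one reaction thread ever reads/destroys messages.
-- MB:  the reaction blocks while a message is missing; extra messages
--      can only enable more firings, never break this one.
main :: IO ()
main = do
  a <- newChan
  b <- newChan
  done <- newEmptyMVar
  _ <- forkIO $ do            -- the reaction
    x <- readChan a
    y <- readChan b
    print (x + y :: Int)
    putMVar done ()
  writeChan a 1               -- producers append, in any order
  writeChan b 2
  takeMVar done
```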
Why do I think that NDW+LRD+MB obviate OTI?
Because MB ensures it is OK to schedule the producer a bit sooner, BT ensures it is OK to schedule the consumer a bit sooner, and NDW+LRD+MB together ensure it is OK to run the producer and consumer in parallel.
One still has to be careful at a very low level of operations - e.g., a multi-processor implementation must ensure atomicity of single writes and reads - but this is easier than full OTI.
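For instance, an NDW append to a message bag needs only one atomic read-modify-write on the bag's head, not transaction machinery. A sketch (the name `send` and the list-as-bag representation are mine):

```haskell
import Data.IORef

-- The message bag is an append-only list; the only low-level
-- atomicity needed is a single compare-and-swap on the head
-- pointer, which atomicModifyIORef' provides.
send :: IORef [msg] -> msg -> IO ()
send bag m = atomicModifyIORef' bag (\ms -> (m : ms, ()))

main :: IO ()
main = do
  bag <- newIORef []
  mapM_ (send bag) [1, 2, 3 :: Int]
  print . reverse =<< readIORef bag  -- [1,2,3]
```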
If I am right about this conjecture, it would be cool to try to split STM into its two facets, OTI and BT, and see what happens.
BTW, isn't BT just a fancy name for Control.Monad.List? :)
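It is at least close: bind in the list monad (which Control.Monad.List lifts to a transformer) tries every alternative in turn, with [] playing the role of retry and (++) the role of orElse. A classic example:

```haskell
-- Backtracking search in the list monad: each bind enumerates all
-- alternatives; [] acts like 'retry', (++) like 'orElse'.
triples :: [(Int, Int, Int)]
triples = do
  c <- [1 .. 20]
  b <- [1 .. c]
  a <- [1 .. b]
  if a * a + b * b == c * c then return (a, b, c) else []

main :: IO ()
main = print triples
-- [(3,4,5),(6,8,10),(5,12,13),(9,12,15),(8,15,17),(12,16,20)]
```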