At the moment I use a simple mutex-guarded list of messages, referred to as a task's "mailbox". When a message is sent to a task, the mutex is locked, the message (and any applicable data payload) is inserted into the list, and the mutex is released.
This all works pretty well, but it obviously will never scale past a couple of simple tasks. My next big project is to switch over to lock-free containers, now that I know the basic message-passing infrastructure is working correctly.
Before I stumble off to find a flat surface on which to pass out, I will leave you with some tasty code that shows off how the parallelism features of Epoch can be used to trivially split up lengthy calculations across multiple CPU cores. Keep in mind that this is just one of many concurrency features that are planned for the language.
//
// TASKS.EPOCH
//
// Demonstration of the multiprocessing capabilities of Epoch
//

entrypoint : () -> ()
{
    task(asyncjob1)
    {
        pi_task()
    }

    task(asyncjob2)
    {
        pi_task()
    }

    message(asyncjob1, calculate(10000.0))
    message(asyncjob2, calculate(50000.0))

    debugwritestring("Please wait, async tasks running...")

    integer(baz, 42)
    debugwritestring(concat("Main task: ", cast(string, baz)))

    // Do this twice since we have two results to wait for
    // In a real program we'd do something more robust than
    // just copying/pasting the message handler ;-)
    acceptmsg(result(real(foo)) =>
    {
        debugwritestring(concat("First result: ", cast(string, foo)))
    })

    acceptmsg(result(real(foo)) =>
    {
        debugwritestring(concat("Second result: ", cast(string, foo)))
    })
}

pi_task : () -> ()
{
    acceptmsg(calculate(real(limit)) =>
    {
        message(caller(), result(pi(limit)))
    })
}

pi : (real(denominator_limit)) -> (real(retval, 0.0))
{
    real(denominator, 1.0)
    boolean(isplus, true)

    do
    {
        real(div, 0.0)
        assign(div, divide(4.0, denominator))

        if(equal(isplus, true))
        {
            assign(retval, add(retval, div))
            assign(isplus, false)
        }
        else
        {
            assign(retval, subtract(retval, div))
            assign(isplus, true)
        }

        assign(denominator, add(denominator, 2.0))
    }
    while(less(denominator, denominator_limit))
}