Monday, April 2, 2012

Adding øMQ and LLVM...

Pick an Eiffel compiler, cut it into small pieces, add a cup of socket library - it will work as a concurrency framework - and a cup of a low-level virtual machine. Blend it all in the mixer for several revisions until all the tests pass, then pour into a repository.
Recipe from "Recipes for a successful evening with friends"
I would have liked to title this "LEC: Liberty/LLVM Eiffel Compiler".
Currently SmartEiffel is a monolithic piece of code in many senses.
It was considered fast, perhaps the fastest Eiffel compiler available. But that reputation was earned under specific conditions, namely building projects from scratch on a single-core 32-bit processor.
Now that «the times they are a-changin'», almost none of those assumptions holds anymore:
  • most of the time the programmer rebuilds a project after some small changes;
  • multi-core processors are the norm even in phones, and widespread machines easily have 4, 6, 8 or even more cores. Even the Ubuntu-certified Asus Eee PC TM Seashell 1015PXB I bought last week at 199€ is seen by the kernel as a four-core machine;
  • most of those processors are 64-bit.
Like ISE Eiffel, SmartEiffel compiles to C and then invokes a C compiler to actually produce the binaries, and that phase has always been parallelized to put all the cores of the machine to work. I initially planned to parallelize the parsing phase, but after a few days of study I discovered that SmartEiffel's design offers an easier start from the back-end. My idea is simple: replace the original C back-end with one that outputs LLVM bitcode, one compilation unit per class. After the original code has done all the parsing, decoding and syntactic analysis, the new back-end simply pushes one «compile-cluster» message per cluster to a pool of worker processes.
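The fan-out can be modeled in a few lines of Python, with multiprocessing queues standing in for the real øMQ PUSH/PULL sockets and a trivial echo standing in for the actual LLVM code generation (all names here are hypothetical, not SmartEiffel's):

```python
# Toy model of the planned fan-out: one "compile-cluster <i>" message per
# cluster, pulled by a pool of forked workers. multiprocessing stands in
# for the real øMQ PUSH/PULL sockets; every name here is hypothetical.
from multiprocessing import get_context

def worker(worker_id, inbox, results):
    # Pull commands until the sentinel arrives; a real worker would emit
    # LLVM bitcode for the cluster instead of just echoing its index.
    while True:
        command = inbox.get()
        if command is None:
            break
        verb, index = command.split()
        results.put((worker_id, int(index)))

def dispatch(cluster_count, worker_count):
    ctx = get_context("fork")  # forked workers, as in the compiler
    inbox, results = ctx.Queue(), ctx.Queue()
    workers = [ctx.Process(target=worker, args=(n, inbox, results))
               for n in range(worker_count)]
    for w in workers:
        w.start()
    for i in range(cluster_count):       # one message per cluster...
        inbox.put("compile-cluster %d" % i)
    for _ in workers:                    # ...then one sentinel per worker
        inbox.put(None)
    compiled = sorted(results.get()[1] for _ in range(cluster_count))
    for w in workers:
        w.join()
    return compiled
```

Calling `dispatch(8, 4)` hands eight clusters to four workers and collects all eight indices back, regardless of which worker compiled which cluster.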

Obviously all the work is done by an LLVM_WORKER, which runs as a fork of the main compiler process; that way it has all the data structures ready in its address space. Each worker starts listening on a socket endpoint for commands:

    class LLVM_WORKER

    inherit
        CLASS_TEXT_VISITOR -- To access CLASS_TEXT.feature_dictionary

    creation communicating_over

    feature {} -- Creation
        communicating_over (an_endpoint: ABSTRACT_STRING) is
                -- Start a new worker process that will wait for commands
                -- over a øMQ socket connected to `an_endpoint'
            require
                an_endpoint /= Void
            do
                endpoint := an_endpoint
                start -- this will fork a new process and invoke `run'
            end

    feature -- Commands
        run is
            local
                command: ZMQ_STRING_MESSAGE
            do
                pid := process_id
                ("Worker #(1) listening on '#(2)'%N" # &pid # endpoint).print_on(std_output)
                create context
                socket := context.new_pull_socket
                from
                    socket.connect(endpoint)
                until
                    socket.is_unsuccessful
                loop
                    create command
                    socket.receive(command) -- blocking read of the next command (exact feature name may differ in the binding)
                    if socket.is_successful then
                        process(command)
                    else
                        throw(socket.zmq_exception)
                    end
                end
                ("Worker #(1) ending%N" # &pid).print_on(std_error)
            end

    feature {} -- Implementation
        process (a_command: ZMQ_STRING_MESSAGE) is
            require
                a_command /= Void
            local
                words: COLLECTION[STRING]; index: INTEGER; cluster: CLUSTER
            do
                words := a_command.split
                if words /= Void and then words.count = 2 and then
                    words.first.is_equal("compile-cluster") and then
                    words.last.is_integer
                then
                    index := words.last.to_integer
                    cluster := ace.cluster_at(index)
                    ("Worker process #(1) starts compiling cluster '#(2)' (##(3))%N" # &pid # cluster.name # &index).print_on(std_output)
                    cluster.for_all(agent visit_class_text)
                    ("Cluster '#(2)' (##(3)) compiled by worker #(1)%N" # &pid # cluster.name # &index).print_on(std_output)
                end
            end

    end -- class LLVM_WORKER

LLVM_WORKER does not yet use the PROCESS_POSIX class provided by SmartEiffel: I wanted the quickest'n'dirtiest way to use fork(), as this is primarily a test of the øMQ bindings. After all, the quick'n'dirty approach sometimes proves to be exceptionally successful...
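The appeal of the fork() trick is easy to demonstrate outside Eiffel too: in this Python sketch (the function name and data are hypothetical) the child process finds the parent's data already sitting in its address space, with no serialization at all, and only the result travels back over a pipe:

```python
import os

def run_in_forked_worker(data, work):
    # Fork a worker: the child inherits a copy of `data` for free,
    # runs `work` on it, and sends only an integer result back via a pipe.
    read_end, write_end = os.pipe()
    pid = os.fork()
    if pid == 0:                      # child: `data` is simply there
        os.close(read_end)
        os.write(write_end, str(work(data)).encode())
        os._exit(0)
    os.close(write_end)               # parent: reap the child, read the result
    os.waitpid(pid, 0)
    with os.fdopen(read_end, "rb") as pipe:
        return int(pipe.read())

# stand-in for the parsed project built once in the parent process
parsed_classes = ["ANY", "STRING", "COLLECTION"]
```

This is exactly why the worker forks after parsing: the entire parsed program is inherited, while øMQ only needs to carry the small command messages.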
People coming from loosely typed languages may argue that I could have written the `process' command like this:

    process (a_command: ZMQ_STRING_MESSAGE) is
        local
            index: INTEGER; cluster: CLUSTER
        do
            if a_command.split.first.is_equal("compile-cluster") then
                index := a_command.split.last.to_integer
                cluster := ace.cluster_at(index)
                ("Worker process #(1) starts compiling cluster '#(2)' (##(3))%N" # &pid # cluster.name # &index).print_on(std_output)
                cluster.for_all(agent visit_class_text)
                ("Cluster '#(2)' (##(3)) compiled by worker #(1)%N" # &pid # cluster.name # &index).print_on(std_output)
            end
        end


While this may be true now, I ideally want this to scale at least to local-network size - for the messaging part it is just a matter of adding socket.bind("tcp://localhost:9999") after the first bind - so assuming anything about a received message is plainly wrong: we may receive garbage, and when you process garbage all you get is garbage.
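The defensive stance is cheap to write down. This Python sketch (the function name is hypothetical; the command grammar is the «compile-cluster <index>» one used above) accepts only well-formed messages and maps everything else to None:

```python
def parse_command(message):
    # Accept exactly "compile-cluster <non-negative integer>"; anything
    # else -- wrong type, wrong word count, wrong verb, non-numeric
    # index -- is treated as garbage from the wire.
    if not isinstance(message, str):
        return None
    words = message.split()
    if len(words) != 2 or words[0] != "compile-cluster" or not words[1].isdigit():
        return None
    return int(words[1])
```

A worker would then simply skip (or log) any message for which the parser returns None, instead of crashing on it or, worse, acting on it.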
Nasty readers or innocent C++ programmers may have noticed that I haven't used threads, so I could not use the real zero-copy in-process messaging. Any POSIX programmer worth his/her salt knows that threads are evil... Jokes aside, I would really like to have real zero-copy IPC; yet our current compiler is not thread-safe. I think I should rather implement auto-relative references and share a memory region between processes. I actually had a somewhat working prototype of such a class, modelled after auto-relative pointers, but they are so brittle to use that I was ashamed to commit it anywhere...
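The auto-relative idea itself is simple to model: a link stores the distance from its own slot to its target, so the region stays meaningful at whatever address (or offset) another process happens to map it. A toy Python sketch, with a struct layout of my own invention rather than the prototype's:

```python
# Toy auto-relative references: each cell holds (value, delta), where delta
# is the distance from this cell to the next one and 0 terminates the list.
# Because every link is relative to its own position, the region can be
# copied or mapped anywhere and still traversed.
import struct

CELL = struct.Struct("iq")   # (value, delta-to-next), native alignment

def build_list(region, values, start=0):
    pos = start
    for i, v in enumerate(values):
        delta = CELL.size if i < len(values) - 1 else 0
        CELL.pack_into(region, pos, v, delta)
        pos += CELL.size
    return start

def read_list(region, start=0):
    out, pos = [], start
    while True:
        value, delta = CELL.unpack_from(region, pos)
        out.append(value)
        if delta == 0:
            return out
        pos += delta     # auto-relative: target = own position + delta
```

Copying the region to a different offset and reading it from there still yields the same list, which is exactly the property absolute pointers lack; that, and not the concept, is where the brittleness of a real implementation creeps in.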


  1. compiling with ZMQ - what a thought! I can envision compile farms now...
