Planet Lisp

Gábor Melis: On the Design of Matrix Libraries

· 3 days ago

I believe there is one design decision in MGL-MAT that has far-reaching consequences: to make a single matrix object capable of storing multiple representations of the same data and to let operations decide which representation to use based on what's most convenient or efficient, without having to know about all the possible representations.

This allows existing code to keep functioning if support for diagonal matrices (represented as a 1d array) lands, and one can pick and choose the operations performance-critical enough to implement with diagonals.

Adding support for matrices that, for instance, live on a remote machine is thus possible with a new facet type (facet being MAT lingo for representation), and existing code would continue to work (albeit possibly slowly). Then one could optimize the bottleneck operations by sending commands over the network instead of copying data.

Contrast this with what I understand to be the status quo on the Python side. The specialized Python array libraries (cudamat, gpuarray, cudandarray) try to be drop-in replacements for - or at least similar to - numpy.ndarray, with varying degrees of success. There is a lot of explicit conversion going on between ndarray and these CUDA blobs, and adding new representations would make this exponentially worse.

In Torch (Lua), CUDA and non-CUDA tensors are also separate types, and copying between main and GPU memory is explicit, which leads to much the same problems.

All of this is kind of understandable. When one thinks in terms of single dispatch (i.e. object.method()), this kind of design will often emerge. With multiple dispatch, data representation and operations are more loosely coupled. The facet/operation duality of MGL-MAT is reminiscent of how CLOS classes and generic functions relate to each other. The analogy is best if objects are allowed to shapeshift to fit the method signatures.
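To make the contrast concrete, here is a hypothetical CLOS sketch (not MGL-MAT's actual API; all class and function names here are made up) of how generic functions let a new representation slot in without touching existing callers:

```lisp
;; Hypothetical representations; MGL-MAT's real facets work differently.
(defclass dense-matrix () ((data :initarg :data :reader data)))
(defclass diagonal-matrix () ((diag :initarg :diag :reader diag)))

(defgeneric scale (matrix factor)
  (:documentation "Return MATRIX scaled elementwise by FACTOR."))

(defmethod scale ((m dense-matrix) factor)
  (make-instance 'dense-matrix
                 :data (map 'vector (lambda (x) (* x factor)) (data m))))

;; Support for a new representation is just another method; existing
;; callers of SCALE never need to know it exists.
(defmethod scale ((m diagonal-matrix) factor)
  (make-instance 'diagonal-matrix
                 :diag (map 'vector (lambda (x) (* x factor)) (diag m))))
```

With single dispatch, scale would be a method on one class hierarchy and each new representation would ripple through every call site; here the dispatch table absorbs the change.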

Speaking of multiple dispatch, making the operations generic functions that follow some kind of protocol to decide which facets and implementation to use would decouple facets further. Ultimately, this could make the entire CUDA-related part of MGL-MAT an add-on.

Zach Beane: Logical pathnames and eager parsing

· 4 days ago

Here’s something that cost me some time debugging today:

  (setf (logical-pathname-translations "bug")
        (list (list "bug.lisp.*" "/tmp/bug.lisp")))
  (print (list :bug (probe-file #p"bug:bug.lisp")))

The above form will print (:BUG NIL) in SBCL, even if /tmp/bug.lisp exists. That’s because #p"bug:bug.lisp" is approximately equivalent to #.(parse-namestring "bug:bug.lisp"), which happens at read time. Since the BUG logical host isn’t defined by then, SBCL parses the string as a pathname with a name of “bug:bug” and a type of “lisp”.

Using the string "bug:bug.lisp" instead of the form #p"bug:bug.lisp" defers namestring parsing until runtime. By then, the logical host is defined and the translation kicks in, and the probe-file returns #p"/tmp/bug.lisp" as expected.
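Spelled out, the working version of the example keeps the namestring as a string:

```lisp
(setf (logical-pathname-translations "bug")
      (list (list "bug.lisp.*" "/tmp/bug.lisp")))
;; The string is parsed at run time, after the BUG host exists:
(print (list :bug (probe-file "bug:bug.lisp")))
;; prints (:BUG #P"/tmp/bug.lisp") when /tmp/bug.lisp exists
```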

(Namestring syntax and parsing is, of course, implementation-dependent. In Clozure CL, a colon is always interpreted as a logical host delimiter, and the form above signals an error. To use a colon in a “normal” pathname, it must be prefixed with a backslash in its namestring.)
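For example, in Clozure CL (a sketch of the escaping rule just described; the filename is made up):

```lisp
;; (parse-namestring "odd:name.lisp")  ; error: "odd" taken as a logical host
;; Escaping the colon (the backslash is doubled inside the Lisp string)
;; yields a physical pathname instead:
(parse-namestring "odd\\:name.lisp")
```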

Lispjobs: Clojure SDE, Seattle, Washington (repost, with new contact info)

· 6 days ago

Amazon is the leading online retailer in the United States, with over $75 billion in global revenue. At Amazon, we are passionate about using technology to solve business problems that have big customer impact.

CORTEX is our next-generation platform which handles real-time financial data flows and notifications. Our stateless, event-driven compute engine for dynamic data transforms is built entirely in Clojure and is crucial to our ability to provide a highly agile response to financial events within the organization. We leverage AWS to operate on a massive scale and meet high-availability, low-latency SLAs.

If you have over 7 years of experience as a hands-on software developer, have worked with distributed systems, and have experience in start-ups or possess a "start-up mentality", keep reading.

Do you:

  • Obsess over software performance and challenge yourself and others to deliver highly scalable, low latency, reliable and fast computation platforms?
  • Possess great ideas and know how to solve problems, but also follow through with a clean and maintainable implementation?
  • Have a high bar for coding excellence and a passion for design and architecture?
  • Want to work with really cool technology such as Clojure, JVM, and AWS tools?
  • Either live in or desire to live in the greater Seattle area and want to work somewhere that you can have a large scale impact but still work on a small team?

If your answers are yes, this may be the role for you!  Combining a startup atmosphere with the ambition to build and utilize cutting-edge reactive technology, the Cortex team at Amazon is looking for a passionate, results-oriented, and innovative Sr. Software Engineer who wants to move fast and have fun, while being deeply involved in solving business integration problems across various organizations within Amazon.

Basic Qualifications

  • 3+ years of experience designing, building, deploying, operating, scaling, and evolving distributed systems and high-volume transaction applications in a 24/7 environment
  • 7+ years of industry experience in software development in Java or C++
  • Exceptional customer relationship skills including the ability to discover the true requirements underlying feature requests, recommend alternative technical and business approaches, and lead engineering efforts to meet aggressive timelines with optimal solutions
  • Bachelor’s Degree in Computer Science or related field or equivalent work experience

Preferred Qualifications

  • Background or strong interest in Clojure
  • Experience with cloud technologies from AWS
  • Proficiency in a Unix/Linux environment
  • Graduate degree (MS/PhD) a plus
  • Experience mentoring and developing junior SDEs

If interested, please send your resume to

ECL News: ECL 15.2.21 released, new maintainer found

· 8 days ago

New release, which is mainly the current state of git HEAD (plus a few fixes). It contains numerous bug-fixes compared to 13.5.1 and is the last release to follow the date-based version convention. It's time to finally release ECL 1.0 ;-).

Development moves to Gitorious, as does the wiki (previous content is inaccessible now, but once the subscription is renewed, I'll start to migrate content from there). On the Git side, current permissions will be preserved. Just drop me a line with your Gitorious login, and I'll add you to the project at the corresponding permission level.

The mailing list and website are staying at SF for now, but I'd really like to switch the latter to something more manageable. Also, SF has lately had problems with stability, which is quite annoying.

More on the maintainer topic:
My name is Daniel Kochmański (you may meet me on IRC and around the internet under the nick "jackdaniel"), and I'm willing to spare at least a few hours a week for this amazing project to keep it alive. More about my ideas for progress and about myself may be found in the mailing list archive - I ask for comments, suggestions, and discussion (and forgiveness for some potentially dumb ideas I might propose), to develop them better. Also, I do ask for help.

While I will try to set up a Linux/Unix environment to check builds and fix problems on various operating systems (I'm thinking about putting Vagrant to use), I have access to neither Windows nor OS X environments, so I will have no clue if a commit breaks builds on those. Testers for these platforms are crucial, IMO.

Best regards,

Gábor Melis: Bigger and Badder PAX World

· 9 days ago

Bigger because documentation for named-readtables and micmac has been added.

Badder because clicking on a name will produce a permalink such as this: *DOCUMENT-MARK-UP-SIGNATURES*. Clicking on locative types such as [variable] on the page that has just been linked to will take you to the file and line on github where *DOCUMENT-MARK-UP-SIGNATURES* is defined.

Lispjobs: Lisp Hacker with interest in Machine Intelligence

· 16 days ago

I’m hiring a hacker or two for a startup.
Please send CV, portfolio & phone number.

Devon Sean McCullough

(Write him for location, description, etc.)

Nicolas Hafner: Using CL+Qt - Confession 49

· 16 days ago

Some deem it unfortunate, others are not bothered by it at all, but the fact remains that Common Lisp does not have a standard GUI toolkit. It does have a native toolkit called McCLIM, but due to its general datedness it is not a very attractive choice. Generally I'm not one to linger long on decisions when it comes to learning something, so after quickly evaluating the options I chose to try CommonQt, a library that allows using the Qt framework from CL.

The first thing I wrote with it was a primitive GUI for a chat client, but while I did finish it, I never went far with it. That is, until Parasol came along. Parasol makes heavy use of Qt, and unfortunately working with CommonQt forces you to write in a rather un-lispy style. This isn't surprising, since Qt itself is a C++ framework and thus matching idioms probably isn't as easy.

Fortunately for us, CommonQt already goes a long way of bridging the gap, but not quite far enough. In an effort to bring GUI writing with Qt closer to home, I created Qtools. In this entry we're going to make use of CommonQt and Qtools to show off what writing a basic GUI in CL can look like.

What we're going to do for this mini project is write a primitive Twitter client. It'll have a dialog to let users log in via Twitter and a main window to display new statuses, as well as let you post some. To make this all possible we'll make use of Chirp and the aforementioned Qtools. In order to understand this tutorial you'll need a moderate understanding of Common Lisp, some prior knowledge of UI programming, and a lack of fear of looking things up in the HyperSpec, Qt docs, and other documentation. Let's get to it.

(ql:quickload '(:chirp :qtools))

This month (February 2015), you'll want to get Qtools from git (version 0.4.2+) as the Quicklisp version is too outdated. In case the CommonQt loading fails, refer to the CommonQt homepage.

Now, as usual we'll create a new package for ourselves to live in.

(defpackage #:titter
  (:use #:cl+qt)
  (:export #:main))
(in-package #:titter)
(named-readtables:in-readtable :qtools)

Here you'll notice two deviations from the norm. First, we're not :use-ing the standard CL package, but rather CL+QT, which is a package from Qtools that provides convenient access to CL as well as Qt functionality. Second, we need the in-readtable statement to make use of CommonQt's reader extension for Qt methods.

Now we'll finally start with writing our own UI. Defining top-level widgets happens with define-widget, which exactly mirrors defclass, with the exception of some extensions that are irrelevant for this tutorial.

(define-widget login (QDialog)
  ())

This will be our dialog to log in with. You can already test it now, but you won't get much beyond a blank window.

(with-main-window (w (make-instance 'login)))

Time to get on to the meat of a widget: its contents. Logging in to Twitter can't happen via password anymore unless you get special permission from Twitter to do so. We'll instead use Twitter's OAuth PIN method. To give that to the user, we'll need to show them a link, let them type in a PIN, and have a button to confirm or something.

(define-subwidget (login url) (#_new QLabel login)
  (#_setTextFormat url (#_Qt::RichText))
  (#_setTextInteractionFlags url (#_Qt::TextBrowserInteraction))
  (#_setOpenExternalLinks url T))

That's quite a few new things here so let's go through them. define-subwidget as you probably expect defines a widget on our login widget, called url. This initializes to a QLabel instance with our main widget set as parent. #_new is the CommonQt equivalent to the new operator in C++. While widgets defined on the CL side need to be initialised as usual using make-instance, Qt-native classes need to be instantiated using #_new. Next in the body we set a couple of properties of our label using C++ methods with the #_ reader macro. Make sure to type the method names in their exact case or CommonQt won't be able to find them. These property changes are necessary to allow clickable URLs.

Don't launch your widget quite yet or you'll be disappointed to find it as bleak and empty as before. We'll get to that in a minute, but first let's define the rest of our components real quick.

(define-subwidget (login pin) (#_new QLineEdit login)
  (#_setPlaceholderText pin "PIN"))

(define-subwidget (login go) (#_new QPushButton "Login" login))

Alright, that was easy. Now, the subwidgets won't appear on your main widget magically as the system could not have any idea how you want them to be placed. For this we need layouts.

(define-subwidget (login layout) (#_new QVBoxLayout login)
  (#_setWindowTitle login "Login to Twitter")
  (#_addWidget layout url)
  (let ((inner (#_new QHBoxLayout)))
    (#_addWidget inner pin)
    (#_addWidget inner go)
    (#_addLayout layout inner)))

Rather simple layout stuff by GUI standards. A vertically oriented layout to hold our label and a horizontal layout that holds the PIN text field and button. Now you may launch your widget again and marvel at the impressively unexciting UI.

In order to make things react in Qt you need to employ their system of slots and signals. Slots are signal receptors and signals are identifiers as well as data-carriers for events. So, when a button gets clicked a signal is fired. Whatever slot is connected to the button on that signal then gets called with the signal properties for arguments. Since we have a button in our form, let's make a slot for it.

(define-slot (login done) ()
  (declare (connected go (released)))
  (#_QMessageBox::information login "OOoOo" "¯\\(°_o)/¯"))

What we've done here is defined a slot on our widget called done, which takes no arguments and is connected to the go button's released signal (which provides no properties). You'll notice here that Qtools uses declarations like a sly fox in order to make things a bit easier and lispier. Firing up the widget now will already give you the expected effect.

This is all good and well, but it has rather little to do with Twitter, so we'll change that. First, we need to fetch the URL to have the user authenticate with and display it on the label.

(defun set-url (widget)
  (let ((url (chirp:initiate-authentication
              :api-key "D1pMCK17gI10bQ6orBPS0w"
              :api-secret "BfkvKNRRMoBPkEtDYAAOPW4s2G9U8Z7u3KAf0dBUA")))
    (#_setText widget (format NIL "Please enter the pin from <a href=\"~a\">twitter</a>." url))))

Then we need to change our login slot definition to actually make use of this function.

(define-subwidget (login url) (#_new QLabel login)
  (#_setTextFormat url (#_Qt::RichText))
  (#_setTextInteractionFlags url (#_Qt::TextBrowserInteraction))
  (#_setOpenExternalLinks url T)
  (set-url url))

But, we're only half-way there. We still need to actually evaluate the PIN that the user passes back to get the proper authentication credentials. We'll do that in our done slot.

(defvar *logged-in* NIL)

(define-slot (login done) ()
  (declare (connected go (released)))
  (setf *logged-in* NIL)
  (#_setCursor login (#_new QCursor (#_Qt::WaitCursor)))
  (handler-case
      (chirp:complete-authentication (#_text pin))
    (error (err)
      (declare (ignore err))
      (#_QMessageBox::critical login "Error!" "Failed to login.")
      (#_setText pin "")
      (set-url url)
      (#_setCursor login (#_new QCursor (#_Qt::ArrowCursor))))
    (:no-error (&rest args)
      (declare (ignore args))
      (setf *logged-in* T)
      (#_close login))))

So, what happens here? First we have a variable to keep track of the login status, and then we do some cursor changing to let the user know that stuff is happening in the background. Next we have error handling in case our authentication fails for some reason, which just resets things to let the user try again. If we succeed, however, the widget closes itself and thus returns. To verify that everything logged in smoothly after you've tried it, you can use

(chirp:account/verify-credentials)

So, in a little under 50 lines we wrote a complete login dialog for our application. While we're fired up like that, let's move on to writing the actual client. We'll want a field to type new status updates into, a button to submit the tweet, and a list to hold new tweets from our home timeline.

(define-widget client (QWidget)
  ())

(define-subwidget (client status) (#_new QLineEdit client)
  (#_setPlaceholderText status "What's old?.."))

(define-subwidget (client tweet) (#_new QPushButton "Tweet!" client))

(define-subwidget (client timeline) (#_new QListWidget client)
  (#_setWordWrap timeline T)
  (#_setTextElideMode timeline (#_Qt::ElideNone)))

(define-subwidget (client layout) (#_new QVBoxLayout client)
  (#_setWindowTitle client "Titter")
  (let ((inner (#_new QHBoxLayout)))
    (#_addWidget inner status)
    (#_addWidget inner tweet)
    (#_addLayout layout inner))
  (#_addWidget layout timeline))

Mostly similar to what we had before, modulo widgets and properties. Now we need another big function to take care of submitting a tweet. This happens as before in a slot since we need to handle a button press.

(define-slot (client tweet) ()
  (declare (connected tweet (released)))
  (cond ((<= 1 (chirp:compute-status-length (#_text status)) 140)
         (#_setCursor client (#_new QCursor (#_Qt::WaitCursor)))
         (handler-case
             (chirp:statuses/update (#_text status))
           (error (err)
             (#_QMessageBox::critical client "Error!" (format NIL "Failed to tweet: ~a" err)))
           (:no-error (&rest args)
             (declare (ignore args))
             (#_setText status "")))
         (#_setCursor client (#_new QCursor (#_Qt::ArrowCursor))))
        (T
         (#_QMessageBox::information client "Huh?" "Tweet must be between 1 and 140 characters long!"))))

Here we check that the status has the allowed length (chirp takes care of URLs for us), send out a new status update, and handle the potential errors. Simple, verbose stuff. Looking at our main window now

(with-main-window (w (make-instance 'client)))

We'll be able to send tweets, but nothing appears in the list. For that we need to cast some more advanced spells. To handle adding new items to our list we'll define our own signal and slot.

(define-signal (client new-tweet) (string string))

(define-slot (client new-tweet) ((user string) (status-text string))
  (declare (connected client (new-tweet string string)))
  (format T "~&Got new tweet from ~a: ~s" user status-text)
  (#_addItem timeline (format NIL "@~a: ~a" user status-text)))

As you can see, the signal definition holds a type argument list. We'll want to transmit the username and the status text and connect the slot to the widget itself. We'll use that to emit the signal once we get new tweets.

Since the main thread will be occupied with the UI we need to launch an additional thread to take care of incoming tweets. However, we also need to make sure that the thread shuts down with the UI as well and only launches after the UI is already available. To do this we'll define a general launch function.

(defun main ()
  (let ((thread))
    (with-main-window (w (make-instance 'client))
      (setf thread
            (bt:make-thread
             #'(lambda ()
                 (chirp:start-stream
                  :user #'(lambda (message)
                            (when thread
                              (process-message message w) T)))
                 (format T "~&Shutting down tweet stream"))
             :initial-bindings `((*standard-output* . ,*standard-output*)))))
    (setf thread NIL)))

Aside from the with-main-window form, the core here is chirp's start-stream function, which will handle stream communication for us for as long as messages come through and our handler function returns a non-NIL value. Thus we can check for thread termination and let everything clean up nicely once the UI exits. However, this makes use of one function we haven't defined yet, process-message. Let's change that.

(defun process-message (message client)
  (format T "~&Message: ~a" message)
  (when (typep message 'chirp:status)
    (signal! client (new-tweet string string)
             (chirp:screen-name (chirp:user message))
             (chirp:xml-decode (chirp:text-with-expanded-urls message)))))

Here we emit a signal to our client using the new-tweet signal and the mentioned arguments. Chirp takes care of URLs and entities. If you launch the client now using the main function, you should see your own status update, as well as everything that happens on your home timeline. That means we're pretty much done already! As a final addition, let's make the main also handle logging in.

(defun main ()
  (unless *logged-in*
    (with-main-window (w (make-instance 'login))))
  (when *logged-in*
    (let ((thread))
      (with-main-window (w (make-instance 'client))
        (setf thread
              (bt:make-thread
               #'(lambda ()
                   (chirp:start-stream
                    :user #'(lambda (message)
                              (when thread
                                (process-message message w) T)))
                   (format T "~&Shutting down tweet stream"))
               :initial-bindings `((*standard-output* . ,*standard-output*)))))
      (setf thread NIL))))

Aaand done, ship it.

There isn't much else to the general concepts of UI programming with Qt other than widgets, signals, and slots. Everything else lies in knowing about the respective classes and methods, which is more vocabulary than concept. However, I hope that this quick introduction proved interesting and neat enough for you to take making UIs with Common Lisp into your list of feasible things.

I'd always welcome suggestions and ideas for extensions or modifications to Qtools to make working with Qt even more lispy than it is currently.

Thank you for your time.

You may read the source code in one piece here.

Additional note for the curious: You might be wondering how this all works in combination with Qt. As you know from your C/C++ experience, it uses different method naming conventions and types and all that wahoo. And indeed, the culprit for hiding this from you is Qtools. It translates types and method names into their C++ equivalents behind your back. This goes a long way towards bridging the gap. As an exercise, we'll take a look at the entire transformation sequence of a simple slot definition.

(define-slot (widget foo) ((text string))
  (print text))

The first thing that happens is that Qtools translates this into (surprise!) a method definition:

(defmethod %widget-slot-foo ((widget widget) (text string))
  (declare (slot foo (string)))
  (with-slots-bound (widget widget)
    (print text)))

Here we see another instance of using declarations to bridge the gap. You can of course also use defmethod directly if you prefer, and for some scenarios you really might. This also reveals why we need to :use cl+qt rather than cl, since Qtools needs to shadow the default defmethod. However, no worries, you can still use it as normal, the only difference is the extra declaration handling. Now, this method definition needs to be purified, as CL itself won't accept the slot declaration:

  (eval-when (:compile-toplevel :load-toplevel :execute)
    (progn (set-widget-class-option 'widget :slots '("foo(const QString&)" %widget-slot-foo))))
  (cl:defmethod %widget-slot-foo ((widget widget) (text string))
    (with-slots-bound (widget widget)
      (print text)))

And even more interesting things have happened now! The first thing you see is Qtools' external widget redefinition capability. Using set-widget-class-option we can change the class definition form of the widget outside of its define-widget form. In this case we set a new :slots value (which is a CommonQt qt-class option). Here we also see that Qtools correctly translated the name and arguments of our slot definition into the equivalent name for the C++ side and linked it to the method we define. The method that remains is a standard CL method definition. The with-slots-bound is a special form that performs a with-slots on all available slots of the class. Subwidgets get translated to class slots, and through with-slots-bound they become automatically available through their respective symbols. This was added mostly because using accessors to refer to subwidgets becomes so ludicrously tedious, repetitive, and verbose that binding them all by default is the much less painful alternative.

Qtools offers quite a bit more than is outlined here such as additional type translation, menu definition, and finalization to name some. Take a look at the docs to see what it has in store.

Quicklisp news: January 2015 download stats

· 27 days ago
Here are the top 100 downloads for last month:

5231 alexandria
3821 cl-ppcre
3799 trivial-features
3664 babel
3050 cffi
2923 cl-fad
2848 closer-mop
2821 flexi-streams
2770 slime
2765 bordeaux-threads
2632 iterate
2631 trivial-gray-streams
2629 trivial-garbage
2276 split-sequence
2221 named-readtables
2172 chunga
2147 anaphora
2024 local-time
1972 usocket
1965 cl+ssl
1931 md5
1849 cl-base64
1812 trivial-backtrace
1573 metabang-bind
1567 nibbles
1531 ironclad
1517 drakma
1511 hunchentoot
1429 puri
1389 trivial-types
1360 let-plus
1282 cl-unicode
1261 rfc2388
1255 cl-syntax
1245 chipz
1235 cl-colors
1184 cl-annot
1171 cl-ansi-text
1106 trivial-utf-8
1074 optima
1068 cl-interpol
1057 cl-utilities
1032 prove
978 postmodern
956 log4cl
912 stefil
897 cl-json
871 quicklisp-slime-helper
797 st-json
787 parse-number
753 cl-marshal
692 fast-http
674 http-body
674 cl-sqlite
668 cl-who
656 osicat
646 trivial-mimes
610 circular-streams
584 xsubseq
576 quri
571 trivial-arguments
571 fiveam
544 clack
543 clx
538 iolib
536 salza2
528 lparallel
484 cl-dbi
482 ieee-floats
482 sxql
464 parenscript
454 closure-common
448 symbol-munger
447 asdf-system-connections
437 fare-utils
436 cl-opengl
428 cxml
409 cl-containers
397 uuid
395 metatilities-base
391 static-vectors
385 zpb-ttf
373 yason
369 html-template
362 buildapp
360 fare-quasiquote
355 vecto
351 ningle
351 fast-io
337 cl-yacc
337 cl-async
330 cl-vectors
324 esrap
319 command-line-arguments
317 zpng
313 do-urlencode
310 myway
309 map-set
301 arnesi
301 external-program

Nicolas Hafner: Running Tests in CL - Confession 48

· 28 days ago

I haven't come across this anywhere yet, but I think it's worth writing a quick entry about, just so that it's referable. So, writing tests is a common enough occurrence in programming, and Common Lisp is no exception. The vast number of testing frameworks is a sign both of the repeated desire to have a comfortable way to write tests and of the general ‘I can do it better’ syndrome prevalent in Common Lisp. However, this blog is not about those things, but about another, much easier aspect: running tests.

Having an easy way to run your tests, possibly even automated, is great. Most frameworks don't go into that, so the first instinct of any test writer is to just dump all tests into a file and have a function to run them. Hopefully the tests will be segregated into their own package or system. Still, it's far less than stellar to have to know what the test system is called, load it manually and then run some project-specific test function.

Luckily, if you're using ASDF for your systems there's a way to make this all streamlined and convenient. The first thing you will want to do is define a separate system for your tests that depends on whatever testing framework you use and the system to test, of course. That way the tests won't have to be loaded if the user doesn't need them. Then, in the system definition of your main project you add a new property to connect the two:

(asdf:defsystem my-system
  :in-order-to ((asdf:test-op (asdf:test-op :test-system))))

What this does is tell ASDF that if you perform the test-op on your system, it will delegate that to calling test-op on :test-system, which should be adapted to whatever you named your test system, naturally. This means that you can now call (asdf:test-system :my-system) and have it automatically load and test your proper test system. But we aren't quite there yet; there's one last thing we need to do, which is to tell ASDF how to execute our test suite.

In order to do this we'll need a method on asdf:perform, the function responsible for performing any kind of ASDF operation on a component or system. This method definition should be in the source of your test system and can either call or directly replace your main test function:

(defmethod asdf:perform ((op asdf:test-op) (sys (eql (asdf:find-system :test-system))))
  (uiop:symbol-call :test-system-package :run-tests))

Once that's in, you can freely call asdf:test-system and it should just work. Doing it this way is beneficial both because it gives users a streamlined interface to perform tests and because it is neatly integrated with the rest of the build system and thus automatable.

Happy testing!

Edit: As Orivej Desh helpfully pointed out to me in an e-mail, there's an alternative way to link the test-op to your test running function. Instead of adding the defmethod you can add a :perform property to your test system:

(asdf:defsystem test-system
  :perform (asdf:test-op (o c) (uiop:symbol-call :test-system-package :run-tests)))

Seeing this, there's yet another alternative of doing things, which is to put everything into your main system:

(asdf:defsystem my-system
  :in-order-to ((asdf:test-op (asdf:load-op :test-system)))
  :perform (asdf:test-op (o c) (uiop:symbol-call :test-system-package :run-tests)))

However, I'm not a fan of this last approach as it requires you to put information of the test system (the name of the main test function) into your main system. Using the :perform property in your test system is definitely a cleaner way to do it than to add your custom defmethod though.

Quicklisp news: Some problems when adding libraries to Quicklisp

· 29 days ago
Here are a few of the problems I encounter when trying to add a library to Quicklisp, as well as how to prevent or fix them.

Library does not build without warnings. As mentioned a little while ago, ql:quickload normally muffles warnings, even, unfortunately, for non-Quicklisp projects. The Quicklisp dist build environment does not muffle any warnings, and any that occur will break the build for the library. Make sure you use the :verbose t option to ql:quickload to see any warnings that crop up during compilation.

Library does not build at all.
 I think this happens when someone sees a library that seems cool, finds it is absent from Quicklisp, and requests its addition without trying it first. Please try it first! It's easy to try libraries: fetch the code, put it into ~/quicklisp/local-projects/, look for *.asd files, and use ql:quickload to load one. If it doesn't load, it may prove difficult for me to add it to Quicklisp. And if it doesn't have *.asd files, I can't add it to Quicklisp at all.
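A minimal sketch of that try-it-first workflow, assuming a default Quicklisp install and a hypothetical system named some-lib:

```lisp
;; With the code copied into ~/quicklisp/local-projects/some-lib/,
;; load it by the name of its .asd file ("some-lib" is a placeholder),
;; with :verbose t so compilation warnings are visible:
(ql:quickload "some-lib" :verbose t)
```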

Library is missing critical metadata. Make sure the library has :author, :description, and :license in each ASDF system definition.

Library depends on another library that is not available in Quicklisp. It's fine to request the addition of multiple related libraries. It helps if you specify the order in which they need to be added to work.

Library system name conflicts with existing system. This happens sometimes when a library bundles its own private copy of a library already present in Quicklisp. In that case, it is usually best to unbundle the private copy, but I can also work around it on my end if necessary. Conflict also happens when someone just doesn't know that a system name is already in use. To check for conflicts, use (ql-dist:provided-systems t) to get a list of existing systems in Quicklisp.
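For instance, a hedged one-liner for checking a candidate name ("my-system" is a placeholder):

```lisp
;; A non-NIL result means the name is already taken in the current dists:
(find "my-system" (ql-dist:provided-systems t)
      :key #'ql-dist:name :test #'string=)
```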

Zach BeaneYour annual plug for

· 29 days ago

Inspired by, I made in 2008 to make it easy to link to the Common Lisp HyperSpec. The HyperSpec is fantastic work, but the URLs are not all that memorable (with good reason). Its canonical location has also occasionally changed. It was once hosted on Xanalys’s website, but now it’s on, and although it seems unlikely, it may move again in the future. has memorable links. will take you to the page in the CLHS that defines “car”. will take you to section, Constraints on the COMMON-LISP Package for Conforming Programs. will take you to the glossary entry for “function designator”. 

I intend to host and maintain indefinitely. If the CLHS moves from LispWorks to some other domain, will be updated to match. (If, in the future, I can no longer host or maintain it, the source code is on github, and I would be happy to transfer the domain to someone new.)

If you want to link to CL stuff, consider using

LispjobsKnowledge Engineer II, Verisk Health, Durham, NC

· 30 days ago

Verisk Health builds a smarter healthcare ecosystem through analytics. Our 1,500+ global professionals work at the intersection of high tech, healthcare, and "big data" in order to realize audacious aspirations for our healthcare system. Be it eliminating fraud, waste, and abuse; guiding population health management with data-driven insights; improving revenue cycles for our clients; or re-envisioning support systems for new models of healthcare delivery, we hold ourselves to a single standard: having immediate and outsized impact for our clients, and by extension, the broader health community.  To find out more about us click on the link below.





  • 4 yr. college degree majoring in Computer Science, Electrical Engineering, or related field.
  • 5 yr. experience as a full time professional software developer designing and building both system-level and application software using ANSI Common Lisp required.
  • 5 yr. experience with expert system development, employing both forward and backward chaining rule systems required.
  • 3 yr. experience building CLOS based object-oriented and knowledge-based systems required.
  • 3 yr. experience building practical applications of Artificial Intelligence required.
  • 5 yr. experience following a structured Software Development Methodology that has a defined software development life-cycle required; with recent Agile experience preferred.
  • 3 yr. experience with Source Control Management software required, CVS or Subversion is preferred.
  • 1 yr. experience working with natural language authoring environments preferred.
  • 2 yr. experience building Ontologies preferred.
  • 2 yr. experience writing and refining software requirements and experience writing and developing from software requirements required.
  • 1 yr. experience using Oracle and writing SQL is preferred.
  • Excellent verbal and written communication skills required.
  • Experience with object oriented programming and design required.


Principal Responsibilities and Essential Duties:


  • Updates job knowledge by researching new technologies and software products; participating in educational opportunities; reading professional publications; maintaining personal networks; participating in professional organizations.
  • Implements new features and change requests based on requirements and technical design specifications.
  • Unit tests software.  Architects and designs new software functionality.
  • Triages, debugs, and troubleshoots software issues.  Participates in code reviews by reviewing and providing feedback on others' work.
  • Creates software system and integration test plans.  Executes software test plans for system and integration testing.
  • Release Management: builds and packages releases for deployment.
  • Creates technical documentation: software requirements and technical design specifications.

Marco AntoniottiOpen parenthesis

· 30 days ago
It has been a year since I posted something about Common Lisp.  What have I been doing meanwhile on this front?  Well, not much visible, but I am still ill from NIH syndrome, therefore I have been cooking up a few things, while doing my real work ;)
In any case, the most time consuming things in my corner of the Common Lisp world have been:
  • Moving repos from CVS to git and getting the new to agree with me (or me with it: you decide).  Some new things have been deployed on Sourceforge as I had a very old account there.
  • Fixing HEΛP to ensure that it worked nicely in most implementations and Quicklisp.
  • Building a new library called CLAST (Common Lisp Abstract Syntax Trees; a reminder of "clastic rocks") that will do TRT according to my personal tastes; this library will play a role in my rebuilding of CLAZY and other little things.
Stay tuned.


Quicklisp newsGetting a library into Quicklisp

· 32 days ago
If there's a library you like, and you'd like to see it available to download and install via the standard Quicklisp dist, here's what to do.

First, make sure the license terms of the code allow for its redistribution. I can't add things with restrictive licenses or with licenses that are missing or unclear.

Second, make sure it works on more than just one implementation of Common Lisp. Quicklisp is for portable libraries. (In the future, I hope to make it easy to create separate new dists specifically for implementation-specific code, but even then, the main Quicklisp dist will be for portable libraries.)

As a side-effect of how I build and test the dist, it must also work on SBCL on Linux/AMD64. That means, unfortunately, that a great portable library that works on three different Windows CL implementations, but not on Linux, cannot be added. I hope to fix this limitation in the future.

Third, make sure the library has certain ASDF system definition metadata: :license, :author, and :description. It also should have a README file in some form or another. A note about the README: it should give at least a short overview of what the library is for. "The Foo library is an implementation of Ruby's Snorfle in Common Lisp" is not a good overview; give me an idea of what it actually does instead, e.g. "The Foo library fetches and parses movie showtime information." It's good to also provide information about how to report bugs and how to contact the author.

Fourth, make sure it builds with ASDF, rather than an external-to-Lisp build mechanism. I can't add libraries that require special configuration or action outside of ASDF. For example, if you have to edit a source file to configure library or resource directories before building, I can't add it to Quicklisp. If the library can be loaded with just (asdf:load-system ...), it's good.
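In other words, with the library's directory known to ASDF, this one form (with a placeholder system name) should be the entire build procedure:

```lisp
;; No external configuration steps should be needed; ASDF alone builds it.
(asdf:load-system "example-library")   ; placeholder system name
```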

Finally, let me know about it. I prefer to track requests via github's issue system, but you can also send me an email as well. It suffices to write something like "Please add the foo library, which is available from The homepage is"

It's important to note that I don't consider a library's quality or purpose when adding it to Quicklisp. It doesn't matter if you're submitting your own library. If you want it added, and it fits the above criteria, I will almost certainly add it.

There are a few exceptions: projects that require complicated or obscure foreign libraries, projects that can only be downloaded via some ad-laden link system like SourceForge, projects that use CVS, and anything else that makes it difficult for me to fetch or build the project.

When you open a github issue for a library, I'll occasionally update the issue's status. I will add issue comments if I have any problems building, or if any required bit of information (license, ASDF metadata, README) is missing.

Barring any problems, when the github issue for a library is closed, the latest Quicklisp dist has been released and it includes the new library. (Sometimes I mess this up, so if it seems like the library is still missing after a dist update, feel free to get in touch.)

How about updates? Many libraries do not need any extra work to get updated regularly in Quicklisp. For example, if a library can be downloaded from a URL like "", Quicklisp will detect when a new file is posted. For libraries downloaded from version control systems like git, updates are also automatically fetched. Only when a library uses a fixed URL per version is it necessary to open a github issue for updates.

Quicklisp dist updates happen about once per month. If the library is updated upstream, those updates will only be reflected after the next Quicklisp dist update. Each dist update freezes the state of the Quicklisp library "world" until the next monthly update.

If you'd like to see the process in action, watch the quicklisp-projects issue page for a month to see how things typically work.

If you have any questions about the process, feel free to get in touch.

François-René RideauProgramming on valium

· 33 days ago

Google AutoValue: what would take a few hundred lines max in Lisp is over 10,000 lines in Java, not counting many, many libraries. Just WOW!

Thus, Java has macros too, it's just that they are 10 to 100 times more programmer-intensive than Lisp macros. I feel like I'm back in the dark ages.

Even for "normal" programming without new macros, a program I wrote both in Java and in Clojure was about 4 times bigger in Java (and that's despite using AutoValue). I also took ten times longer to write and debug the Java program (despite having written the Clojure program before, so no hard thinking whatsoever needed), with a frustrating edit-compile-run cycle many orders of magnitude slower. Part of the difference is my being much more experienced in Lisp than in Java, but even accounting for that, Java is slower to develop with.

The Java code is also much harder to read, because you have to wade through a lot of bureaucracy — each line does less, and so may be slightly faster to read, yet takes no less time to write, debug, modify, test, because of all the details that need be just right. Yet you must read and write more Java, and it's therefore harder to get the big picture, because there is less information available by screenful (or mindful) and much more noise. The limitation on available information is not just per screenful but also per file, and you find you have to jump constantly through so many files in addition to classes within a file; this is a lot of pain, even after accounting for the programming environments that alleviate the pain somewhat. Thus the very slight micro-level advantage of Java in readability per line is actually a big macro-level handicap in overall program readability.

Lack of both type aliasing and retroactive implementation of interfaces also means that type abstraction, while possible with generics and interfaces (themselves very verbose, though no more than the rest of the language), will require explicit wrappers with an immense amount of boilerplate, if not reimplementation. This strongly encourages programmers to eschew type abstraction, leading to more code explosion and much decreased maintainability.

Also, because function definition is so syntactically cumbersome in Java, programs tend to rely instead on big functions with a lot of side-effects, which yields spaghetti code that is very hard to read, understand, debug, test or modify — as compared to writing small conceptually simple functions that you compose into larger ones, as you would in a functional programming language.

The lack of tuple types is also a big factor against functional programming in Java: you'll need to declare a lot of extra classes or interfaces as bureaucracy just because you want a couple of functions to pass and return a few values together (some people instead use side-effects for that — yuck). You could use a generic pair, but that leads to horrible types with many<layers<of<angle,brackets>>>, which are very hard to read or write, and it doesn't scale to larger tuples; of course, the need to declare types everywhere instead of having them inferred by the compiler means that even with tuples of arbitrary size, you'd need to spell out long unwieldy types more often than you'd like. The ignorant complain about the number of parentheses in Lisp, but just because of the size increase, there are a lot more parentheses in my Java program than in my Lisp program, and if we count all the curly, angle and square brackets too, that's another many-fold increase.

Java 8 makes the syntax for functional programs slightly easier, and AutoValue makes it slightly less painful to bundle values together, but even with these improvements, Java remains extremely verbose.

The standard library is horrible, with side-effects everywhere and a relatively poor set of primitives. This leads to the ugly habit of resorting to "friend" classes with lots of static methods, which leads to a very different style of invocation and forces more bureaucratic wrapping to give things a unified interface. The lack of either CLOS-style generic functions or Clojure-style protocols means you can't add decent interfaces to existing data structures after the fact, making inter-operation with other people's code harder, whether you decide to adopt your own data-structure library (e.g. a pure functional one) or just try to extend existing ones. Lack of multiple inheritance also means you have to repeat a lot of boilerplate that could have been shared with a common mixin (aka trait class).

All in all, Java is just as heavily bureaucratic as I expected. It was developed by bureaucrats for bureaucrats, mediocre people who think they are productive when they have written a lot of code for a small result, when better tools allow better people to write a small amount of code for a big result. By analogy with programming languages said to be a variant of something "on steroids", I'd say that Java is a semi-decent programming language on valium. As to what template is sedated, I'd say a mutt of Pascal and Smalltalk. But at least it's semi-decent, and you can see that a lot of intelligent people who understand programming language design and implementation have worked on it and tried to improve upon the joke of a language that Java initially was. Despite the bureaucracy, the sheer amount of talent thrown at the language has resulted in something that manages to not be bad.

This hard work by clever people makes Java so much better than Python, an attractive nuisance with lots of cool features that lead you into a death by a thousand cuts of small bad decisions that amplify each other. Superficially, Python looks like a crippled Lisp without macros and with a nice toy object system — but despite a lot of very cool features and a syntax that you can tell was spent a lot of time on (yet still ended up with many bad choices), Python was obviously written by someone who doesn't have a remote clue about semantics, resulting in a lot of pitfalls for programmers to avoid (there again with side-effects galore), and an intrinsically slow implementation that requires a lot of compile-time cleverness and runtime bureaucracy to improve upon.

In conclusion, I'd say that Java is a uniformly mediocre language that will drag you down with bureaucracy, which makes it rank well above a lot of overall bad languages like Python — but that's a very low bar.

Does this rampant mediocrity affect all industries? I'm convinced it does — it's not like these industries are fielded by better people than the software industry. Therefore it's an ever renewed wonder to me to see that the world keeps turning, that civilization endures. "A common man marvels at uncommon things; a wise man marvels at the commonplace." — Confucius

Quicklisp newsDecember 2014 download stats

· 34 days ago
Here are the top 100 downloads from Quicklisp for last month:

3052 alexandria
2743 cl-ppcre
2619 babel
2009 cffi
1886 cl-fad
1829 flexi-streams
1757 trivial-features
1670 bordeaux-threads
1597 slime
1552 closer-mop
1541 trivial-gray-streams
1523 chunga
1390 trivial-garbage
1267 cl+ssl
1239 anaphora
1152 usocket
1106 hunchentoot
1097 drakma
1092 trivial-backtrace
1085 cl-base64
1083 iterate
1077 local-time
1011 split-sequence
987 md5
963 nibbles
920 ironclad
848 let-plus
820 cl-unicode
792 puri
761 rfc2388
736 cl-colors
696 chipz
690 quicklisp-slime-helper
683 named-readtables
653 metabang-bind
635 cl-json
626 cl-who
624 cl-ansi-text
561 parse-number
556 prove
548 cl-interpol
533 postmodern
506 clx
501 yason
443 log4cl
440 lparallel
436 trivial-utf-8
422 salza2
413 cl-csv
406 optima
392 osicat
390 rt
381 asdf-system-connections
369 clack
368 trivial-types
368 parenscript
358 iolib
356 cl-containers
356 cl-syntax
347 metatilities-base
340 cl-utilities
340 cl-opengl
323 cl-annot
316 ieee-floats
307 esrap
294 uuid
289 cl-sqlite
281 vecto
274 html-template
273 cl-async
271 zpb-ttf
269 closure-common
262 restas
262 buildapp
258 external-program
257 command-line-arguments
254 uiop
250 cl-yacc
249 caveman
244 asdf-finalizers
241 weblocks
240 cl-async-future
238 dynamic-classes
233 cl-markdown
232 zpng
229 cxml
226 static-vectors
225 py-configparser
224 uffi
219 mcclim
219 cl-vectors
216 fiveam
210 quri
209 cl-closure-template
208 cl-log
208 cl-libevent2
206 cl-abnf
204 cl-dbi
204 cl-db3
203 ningle

I just started systematically managing Quicklisp HTTP logs, so I will soon present information like this on a regular basis.

Gábor MelisPAX World

· 34 days ago

The promise of MGL-PAX has always been that it will be easy to generate documentation for different libraries without requiring extensive markup and relying on stable urls. For example, without PAX if a docstring in the MGL library wanted to reference the matrix class MGL-MAT:MAT from the MGL-MAT library, it would need to include ugly HTML links in the markdown:

 "Returns a [some-terrible-github-link-to-html][MAT] object."

With PAX however, the uppercase symbol MAT will be automatically linked to the documentation of MAT if its whereabouts are known at documentation generation time, so the above becomes:

 "Returns a MAT object."

The easiest way to tell PAX where to link is to let it generate the documentation for all libraries at the same time like this:

 (document (list mgl-mat:@mat-manual mgl:@mgl-manual))

This is the gist of what MGL-PAX-WORLD does. It has a list of stuff to document, and it creates a set of HTML files. Check out how its output looks on github pages. Here is a good example of cross-links. It's easy to play with locally: just get the gh-pages branch and call UPDATE-PAX-WORLD.

drmeisterRelease 0.2 of clasp is available

· 35 days ago

I uploaded a new release of Clasp today (January 25th, 2015) that brings a lot of stability improvements, improved performance, improved compatibility with the Common Lisp standard and ASDF, SLIME and Quicklisp support.

It requires a reinstallation of externals-clasp.  We are working on eliminating the need for externals-clasp.

Features include:

  1. ASDF support has been added – it can be accessed using (require :asdf)
  2. SLIME support has been added and the changes to SLIME have been uploaded to the standard SLIME repository.  You should be able to access it the standard way you install and upgrade slime.
  3. Quicklisp support has been added.  I’ve submitted the changes to quicklisp to Zach Bean (the developer of quicklisp) and they should be available soon.
  4. Improvements in performance of the compiled code (Linux code is at least 2x faster).
  5. Improvements in stability.
  6. Almost all Common Lisp symbols have been implemented (18 are left; you can see them using (core:calculate-missing-common-lisp-symbols)).
  7. Example code for Clasp/C++ interoperation is available.
  8. Weak-key-hash-tables, weak-pointer, weak-key-mapping types have been implemented in both garbage collectors.

This release focused on getting SLIME working because I want to incorporate a new compiler front end for Clasp and I want to do it in this extremely powerful programming environment.

Patrick SteinCons-Air (a.k.a. Lispers On A Plane)

· 38 days ago

Before bed last night, I was trying to sort out in my head the lineage and historical inter-relations between BKNR, TBNL, HTML-TEMPLATES, CL-WHO, and Hunchentoot. When my alarm went off this morning, I was having a dream about the Lisp community.

In my dream, there were about 50 Lispers on a prison plane. It was pretty much a commercial 737, but all of the passengers were wearing prison jumpsuits and the plane would never land. We were all just on the plane. We refueled in flight. (The dream had nothing to say about how we restocked with food or emptied the septic tanks. *shrug*)

There was a cage around/over the last three rows of the left side of the plane. The door to the cage was not locked, but everyone knew that you were only allowed to go into the cage if Edi let you. Edi Weitz was the top dog in this prison. The only reason the rest of us were still alive is because Edi hadn’t seen a reason to have us taken care of yet.

Didier Verna was the only person who was allowed to approach the cage if Edi were in it.

There wasn’t much turbulence, but we were flying through a thunderstorm. This had Nick Levine sitting at a window seat, looking out over the wing, nervously bobbing back and forth, and sipping whiskey with ice from a plastic cup.

The storm had Zach Beane pacing back and forth in the aisle.

Cyrus Harmon was sitting doing a crossword puzzle from the in-flight magazines, giggling to himself about Faré.

Faré was standing near the front of the plane throwing little packets of peanuts at everyone. He wasn’t trying to hurt people with them, but he was delighting in watching them bounce off the heads of people who weren’t expecting them (especially the back of Zach’s head when he was fifteen or twenty rows away).

Robert Goldman was trying to get some sleep, but was unable to do so because of the lack of leg-room, the bright cabin lights, and all of the packets of peanuts careening past.

There were a number of other Lisp-folk in this dream. For some of them, I don’t recall exactly what they were up to. For a few, it would probably be impolitic of me to say. :)

Dimitri FontaineMy First Slashdot Effect

· 38 days ago

Thanks to the Postgres Weekly issue #89 and a post to the Hacker News front page (see Pgloader: A High-speed PostgreSQL Swiss Army Knife, Written in Lisp), it seems that I just had my first Slashdot effect...

Well actually you know what? I don't...

So please consider using the new mirror and maybe voting on Hacker News for either tooling around your favorite database system, PostgreSQL or your favorite programming language, Common Lisp...

It all happens at

Coming to FOSDEM?

If you want to know more about pgloader and are visiting FOSDEM PGDAY or plain FOSDEM I'll be there talking about Migrating to PostgreSQL, the new story (that's pgloader) and about some more reasons why You'd better have tested backups...

If you're not there on the Friday but still want to talk about pgloader, join us at the PostgreSQL devroom and booth!

LispjobsTwo Clojure positions, AppsFlyer, Herzliya, Israel

· 40 days ago

Server-side Developer: Herzliya, Israel

We are coding commandos with strong product sense. Most of the code we write is operative in hours to days. We insist that you write something operational in your first two days. We decide and move fast, and iterate faster. If you are ready to write production code in Clojure, if you are ready to write the most exciting code of your life, if you are ready to jump on stage and let your code play - come and join us.


Write good, robust, simple, fast and meaningful code that can handle billions of hits

Strong web experience (at least 5 years)
Strong coding ability and architecture familiarity combined with product sense
Experience with Linux (Ubuntu), AWS / EC2, Continuous Deployment (SaltStack, Chef, Puppet)
Experience with Python, Redis, ZeroMQ, CouchDB, MongoDB
Experience with distributed systems
Experience with monitoring tools (Nagios, Graphite)
Product management understanding
Experience working with clients - an advantage
Mobile experience (iOS, Android) - an advantage
Digital advertising experience - an advantage

Contact: Adi Shacham-Shavit <>

Dev Team Leader: Herzliya, Israel

We are looking for a technologist at heart, with a passion for distributed production systems and excellent development abilities, to join our team.

Lead a small development team to write good, robust, simple, fast and meaningful code that can handle billions of hits

Strong web experience (at least 5 years)
Strong coding ability and architecture familiarity combined with product sense
Experience with Linux (Ubuntu), AWS / EC2, Continuous Deployment (Chef etc.)
Experience with NoSQL databases, like Redis, Couchbase, MongoDB
Manage team of 2-3 developers (at least 1 year)
Exceptionally self-motivated, have a “get things done” mentality
Experience working with clients - an advantage
Mobile experience (iOS, Android) - an advantage
Digital advertising experience - an advantage

Contact: Adi Shacham-Shavit <>

LispjobsSoftware Engineer - Functional (Clojure), ROKT, North Sydney, NSW, Australia

· 40 days ago

Exceptional opportunity to join one of the fastest growing Australian technology start-ups
— Well-funded
— High growth phase

The Positions
ROKT is seeking a number of functional programmers (ideally with experience in Clojure) at all levels of seniority. The positions involve the continued development and maintenance of established production Clojure code (principally related to data transformation, analysis and predictive optimisation) as well as the development of new products and services.

Successful candidates will join a growing team of ~10 engineers, predominantly developing on a Microsoft/AWS stack at present – we're looking at considerable expansion of our engineering capability this year, a significant part of which includes broadening and accelerating our adoption of the Clojure ecosystem.

The Company
ROKT is a rapidly growing innovative technology company. At ROKT you will have the opportunity to work in a fast paced environment with very smart and talented people.

Recognised as one of the hottest companies in our field, ROKT is in a unique position to continue to lead the market with data driven technology solutions that deliver innovation to publishers, ecommerce platforms and advertisers. With a proven technology platform, an established and growing partner portfolio, and outstanding growth, this is a unique opportunity to join a fast paced Australian technology start-up.

— Competitive packages based on qualifications and experience
— Employee share scheme
— Health and wellness initiatives
— Staff awards program; generous staff referral program
— Office snacks & fruit
— Global company gathering and celebration 3 times a year
— Get paid to write Clojure!

Skills, Experience and Education
— A bachelor's degree in Computer Science or similar discipline
— Previous experience (commercial or demonstrable personal experience) with functional programming in Clojure or Common Lisp.
— A level of experience that demonstrates competency in a spectrum of development skills, from our chosen development methodology (Agile XP/Scrum) through to discipline in testing, documentation etc.
— Strong SQL skills, "big data" experience, and familiarity with AWS is considered highly desirable
— Excellent verbal and written communication skills

North Sydney, NSW, Australia

Join us
Interested candidates should submit CVs to Claudio Natoli –

Didier VernaELS 2015 programme committee members announced

· 41 days ago

The programme committee members for this year's European Lisp Symposium has just been announced. Ladies and gentlemen, please welcome, on the keyboards...

  • Sacha Chua — Toronto, Canada
  • Edmund Weitz — University of Applied Sciences, Hamburg, Germany
  • Rainer Joswig — Hamburg, Germany
  • Henry Lieberman — MIT, USA
  • Matthew Flatt — University of Utah, USA
  • Christian Queinnec — University Pierre et Marie Curie, Paris 6, France
  • Giuseppe Attardi — University of Pisa, Italy
  • Marc Feeley — University of Montreal, Canada
  • Stephen Eglen — University of Cambridge, UK
  • Robert Strandh — University of Bordeaux, France
  • Nick Levine — RavenPack, Spain

Gábor MelisRecurrent Nets

· 41 days ago

I've been cleaning up and documenting MGL for quite some time now, and while it's nowhere near done, a good portion of the code has been overhauled in the process. There are new additions such as the Adam optimizer and Recurrent Neural Nets. My efforts went mainly into the backprop stuff, and I think the definition of a feed-forward net:

 (build-fnn (:class 'digit-fnn)
   (input (->input :size *n-inputs*))
   (hidden-activation (->activation input :size n-hiddens))
   (hidden (->relu hidden-activation))
   (output-activation (->activation hidden :size *n-outputs*))
   (output (->softmax-xe-loss :x output-activation)))

and recurrent nets:

 (build-rnn ()
   (build-fnn (:class 'sum-sign-fnn)
     (input (->input :size 1))
     (h (->lstm input :size n-hiddens))
     (prediction (->softmax-xe-loss
                  (->activation h :name 'prediction :size *n-outputs*)))))

is fairly straightforward already. There is still much code that needs to accompany such a network definition, mostly having to do with how to give inputs and prediction targets to the network, and also with monitoring training. See the full examples for feed-forward and recurrent nets in the documentation.

Dimitri FontaineNew release: pgloader 3.2

· 44 days ago

PostgreSQL comes with an awesome bulk copy protocol and tooling, best known as the COPY and \copy commands. Being a transactional system, PostgreSQL's COPY implementation will ROLLBACK any work done if a single error is found in the data set you're importing. That's the reason why pgloader got started: it provides error handling for the COPY protocol.

That's basically what pgloader used to be all about

As soon as we have the capability to load data from unreliable sources, another use case appears on the horizon, and soon enough pgloader grew the capacity to load data from other databases, some of which have a more liberal notion of what counts as sane data type input.

To be able to adapt to advanced use cases in database data migration support, pgloader has grown an advanced command language wherein you can define your own load-time data projection and transformations, and your own type casting rules too.

New in version 3.2 is that in simple cases, you don't need that command file any more. Check out the pgloader quick start page to see some examples where you can use pgloader all from your command line!

Here's one such example, migrating a whole MySQL database data set over to PostgreSQL, including automated schema discovery, automated type casting and on-the-fly data cleanup (think about zero dates or booleans in tinyint(1) disguise), support for indexes, primary keys, foreign keys and comments. It's as simple as:

$ createdb sakila
$ pgloader mysql://root@localhost/sakila pgsql:///sakila
2015-01-16T09:49:36.068000+01:00 LOG Main logs in '/private/tmp/pgloader/pgloader.log'
2015-01-16T09:49:36.074000+01:00 LOG Data errors in '/private/tmp/pgloader/'
                    table name       read   imported     errors            time
------------------------------  ---------  ---------  ---------  --------------
               fetch meta data         43         43          0          0.222s
                  create, drop          0         36          0          0.130s
------------------------------  ---------  ---------  ---------  --------------
                         actor        200        200          0          0.133s
                       address        603        603          0          0.035s
                      category         16         16          0          0.027s
                          city        600        600          0          0.018s
                       country        109        109          0          0.017s
                      customer        599        599          0          0.035s
                          film       1000       1000          0          0.075s
                    film_actor       5462       5462          0          0.147s
                 film_category       1000       1000          0          0.035s
                     film_text       1000       1000          0          0.053s
                     inventory       4581       4581          0          0.086s
                      language          6          6          0          0.041s
                       payment      16049      16049          0          0.436s
                        rental      16044      16044          0          0.474s
                         staff          2          2          0          0.170s
                         store          2          2          0          0.010s
        Index Build Completion          0          0          0          0.000s
------------------------------  ---------  ---------  ---------  --------------
                Create Indexes         40         40          0          0.343s
               Reset Sequences          0         13          0          0.026s
                  Primary Keys         16         14          2          0.013s
                  Foreign Keys         22         22          0          0.078s
                      Comments          0          0          0          0.000s
------------------------------  ---------  ---------  ---------  --------------
             Total import time      47273      47273          0          2.261s

Other options are available to support a variety of input file formats, including compressed CSV files fetched from a remote location, as in:

curl \
    | gunzip -c                                                        \
    | pgloader --type csv                                              \
               --field "usps,geoid,aland,awater,aland_sqmi,awater_sqmi,intptlat,intptlong" \
               --with "skip header = 1"                                \
               --with "fields terminated by '\t'"                      \
               -                                                       \

2015-01-16T10:09:06.027000+01:00 LOG Main logs in '/private/tmp/pgloader/pgloader.log'
2015-01-16T10:09:06.032000+01:00 LOG Data errors in '/private/tmp/pgloader/'
                    table name       read   imported     errors            time
------------------------------  ---------  ---------  ---------  --------------
                         fetch          0          0          0          0.010s
------------------------------  ---------  ---------  ---------  --------------
             districts_longlat        440        440          0          0.087s
------------------------------  ---------  ---------  ---------  --------------
             Total import time        440        440          0          0.097s

As is usual with unix commands, the - input filename stands for standard input, which allows streaming data from a remote compressed file down to PostgreSQL.

So if you have any data loading job, including data migrations from SQLite, MySQL or MS SQL server: have a look at pgloader!

LispjobsAmazon still hiring Clojure software developers (Seattle, Washington)

· 45 days ago

I received an email from the Amazon recruiter, and they say they are looking for more Clojure candidates. Please refer to the previous post if you’re interested:

Didier VernaInsider Threat Detection at Haystax

· 46 days ago

Reported to me by Craig Norvell: Haystax appears to have some interesting Lisp projects in the works, combining Lisp, Prolog, RDF (AllegroGraph), and a BBN for insider threat detection solutions.

From their website:

Haystax's predictive models are continuously updated based on the prevailing threat environment making it highly suitable for both detection and continuous evaluation of threats. These unique models go beyond traditional web and business intelligence to enable organizations to achieve contextual real-time situational awareness by fusing all operationally relevant information - private, public, video and live feeds - into consolidated views to show patterns and identify threats that are usually buried in too much noise or not placed in proper context.

Here is a Technical paper, and the STIDS Conference website. If you are interested, it looks like you can also attend a webcast on January 21st.

Quicklisp newsJanuary 2015 Quicklisp dist update now available

· 47 days ago
New projects:
  • asdf-contrib — Extensions to ASDF — MIT
  • asdf-flv — ASDF support for file-local variables. — GNU All Permissive
  • chrome-native-messaging — A package to communicate with a Chrome extension as the native application — MIT License
  • cl-ansi-term — library to output formatted text on ANSI-compliant terminals — GNU GPL v.3
  • cl-binaural — Utilities to generate binaural sound from mono — GPL
  • cl-hash-util — A simple and natural wrapper around Common Lisp's hash functionality. — MIT
  • cl-hue — Client for Philips Hue light controller — Apache 2
  • cl-junit-xml — Small library for writing junit XML files — MIT
  • cl-mop — Simple, portable tools for dealing with CLOS objects. — Expat (MIT-style)
  • cl-readline — Common Lisp bindings to GNU Readline library — GNU GPL v.3
  • cl-slug — Small library to make slugs, mainly for URIs, from english and beyond. — LLGPL
  • clim-widgets — small collection of clim widgets — BSD Simplified
  • common-doc — A framework for representing and manipulating documents as CLOS objects. — MIT
  • common-html — An HTML parser/emitter for CommonDoc. — MIT
  • defclass-std — A shortcut macro to write DEFCLASS forms quickly. — LLGPL
  • defenum — C++ and Java styled 'enum' in Common Lisp — MIT
  • generic-comparability — CDR-8 implementation — LLGPL
  • lev — libev bindings for Common Lisp — BSD 2-Clause
  • lucerne — A Clack-based microframework. — MIT
  • perlre — perl regular expression api - m// and s/// - for CL-PPCRE with CL-INTERPOL support — BSD Simplified --- the same as let-over-lambda
  • trivial-update — tools for easy modification of places with any given function — MIT
Updated projects: arc-compat, avatar-api, blackbird, buildnode, caveman, cl-acronyms, cl-ana, cl-ansi-text, cl-async, cl-autowrap, cl-base58, cl-creditcard, cl-dbi, cl-gss, cl-inflector, cl-libuv, cl-marshal, cl-mock, cl-mustache, cl-opengl, cl-pass, cl-rabbit, cl-randist, cl-random, cl-sdl2, cl-virtualbox, cl-webkit, clack, clack-errors, clos-fixtures, closer-mop, clss, clx, colleen, com.informatimago, commonqt, corona, crane, css-selectors, datafly, eco, esrap, exponential-backoff, fast-http, fset, gbbopen, gendl, hdf5-cffi, hermetic, http-body, hu.dwim.stefil, integral, introspect-environment, iolib, jsown, lambda-fiddle, lisp-unit2, local-time, lol-re, ltk, mk-string-metrics, modularize, modularize-hooks, modularize-interfaces, new-op, osicat, plump, plump-tex, pp-toml, prove, pzmq, quri, rock, rutils, scalpl, sdl2kit, serapeum, sheeple, slime, spinneret, stumpwm, sxql, template, trivial-arguments, trivial-download, trivial-extract, trivial-features, trivial-garbage, weblocks, weblocks-utils, websocket-driver, woo, wookie, wuwei, xsubseq.

To get this update, use (ql:update-dist "quicklisp").

To install exactly this update, use (ql-dist:install-dist "" :replace t).

This Quicklisp update is supported by my employer, Clozure Associates. If you need commercial support for Quicklisp, or any other Common Lisp programming needs, it's available via Clozure Associates.

LispjobsCommon Lisp Developer, RavenPack, Marbella, Spain

· 47 days ago


Position immediately available for an experienced software professional. You will work with an international team of developers skilled in Common Lisp, PL/SQL, Java and Python.

The ideal candidate will have excellent skills as a software engineer, with a strong computer science background and professional experience delivering quality software.  You must be fluent in modern software development practices, including multi-threading, distributed systems, and cloud computing.  If you are not already an expert in Common Lisp, you aspire to become one. Innovative problem solving and engaging human interaction drive you. With a high degree of independence, you will design and implement maintainable software in Common Lisp based on loose and changing specifications.

Familiarity with SQL including query optimization and PL/SQL is very much a plus.  Comfort in a growing, fast-paced environment with a premium on problem solving is required.  Must be adaptable and willing to learn new technologies. You work successfully in a small team environment, with a willingness to teach and to learn. Lead reviews of your code and participate in the reviews of others.

The ability to communicate effectively in English, both in writing and verbally is a must. Knowledge of Spanish is not a business requirement. European Union legal working status is strongly preferred.

Paul KhuongLock-free Sequence Locks

· 47 days ago

Specialised locking schemes and lock-free data structures are a big part of my work these days. I think the main reason the situation is tenable is that, very early on, smart people decided to focus on an SPMC architecture: single writer (producer), multiple readers (consumers).

As programmers, we have a tendency to try and maximise generality: if we can support multiple writers, why would one bother with measly SPMC systems? The thing is SPMC is harder than SPSC, and MPMC is even more complex. Usually, more concurrency means programs are harder to get right, harder to scale and harder to maintain. Worse: it also makes it more difficult to provide theoretical progress guarantees.

Apart from architecting around simple cases, there’s a few ways to deal with this reality. We can define new, weaker, classes of program, like obstruction-freedom: a system is obstruction-free when one thread is guaranteed to make progress if every other thread is suspended. We can also weaken the guarantees of our data structure. For example, rather than exposing a single FIFO, we could distribute load and contention across multiple queues; we lose strict FIFO order, but we also eliminate a system bottleneck. Another option is to try and identify how real computers are more powerful than our abstract models: some argue that, realistically, many lock-free schemes are wait-free, and others exploit the fact that x86-TSO machines have finite store buffers.

Last week, I got lost doodling with x86-specific cross-modifying code, but still stumbled on a cute example of a simple lock-free protocol: lock-free sequence locks. This sounds like an oxymoron, but I promise it makes sense.

Lock-free sequence locks

It helps to define the terms better. Lock-freedom means that the overall system will always make progress, even if some (but not all) threads are suspended. Classical sequence locks are an optimistic form of write-biased reader/writer locks: concurrent writes are forbidden (e.g., with a spinlock), read transactions abort whenever they observe that writes are in progress, and a generation counter avoids ABA problems (when a read transaction would observe that no write is in progress before and after a quick write).

In Transactional Mutex Locks (PDF), sequence locks proved to have enviable performance on small systems and scaled decently well for read-heavy workloads. They even allowed lazy upgrades from reader to writer by atomically checking that the generation has the expected value when acquiring the sequence lock for writes. However, we lose nearly all progress guarantees: one suspended writer can freeze the whole system.

The central trick of lock-freedom is cooperation: it doesn’t matter if a thread is suspended in the middle of a critical section, as long as any other thread that would block can instead complete the work that remains. In general, this is pretty hard, but we can come up with restricted use cases that are idempotent. For lock-free sequence locks, the critical section is a precomputed set of writes: a series of assignments that must appear to execute atomically. It’s fine if writes happen multiple times, as long as they stop before we move on to another set of writes.

There’s a primitive based on compare-and-swap that can easily achieve such conditional writes: restricted double compare and single swap (RDCSS, introduced in A Practical Multi-Word Compare-and-Swap (PDF)). RDCSS atomically checks if both a control word (e.g., a generation counter) and a data word (a mutable cell) have the expected values and, if so, writes a new value in the data word. The pseudocode for regular writes looks like

if (CAS(self.data, self.old, self) == fail) {
    return fail;
}

if (*self.control != self.expected) {
    CAS(self.data, self, self.old);
    return fail;
}

CAS(self.data, self,;
return success;

The trick is that, if the first CAS succeeds, we always know how to undo it (data’s old value must be self.old), and that information is stored in self so any thread that observes the first CAS has enough information to complete or rollback the RDCSS. The only annoying part is that we need a two-phase commit: reserve data, confirm that control is as expected, and only then write to data.

For the cost of two compare-and-swap per write – plus one to acquire the sequence lock – writers don’t lock out other writers (writers help each other make progress instead). Threads (especially readers) can still suffer from starvation, but at least the set of writes can be published ahead of time, so readers can even lookup in that set rather than waiting for/helping writes to complete. The generation counter remains a bottleneck, but, as long as writes are short and happen rarely, that seems like an acceptable trade to avoid the 3n CAS in multi-word compare and swap.

Real code

Here’s what the scheme looks like in SBCL.

First, a mutable box, because we don't have raw pointers in CL (I could also have tried to revive my sb-locative hack).

(defstruct (box
            (:constructor make-box (%value)))
  %value)

Next, the type for write records: we have the value for the next generation (once the write is complete) and a hash table mapping boxes to pairs of old and new values. There's a key difference from the way RDCSS is used to implement multiple compare and swap: we don't check for mismatches in the old value and simply assume that it is correct.

(defstruct (record
             (:constructor %make-record (generation ops)))
  (generation (error "Missing arg") :type fixnum :read-only t)
  ;; map of box -> (cons old new).  I use a hash table for
  ;; convenience but I doubt it's the right choice.
  (ops (error "Missing arg") :type hash-table :read-only t))
(declaim (freeze-type record))

The central bottleneck is the sequence lock, which each (read) transaction must snapshot before attempting to read consistent values.

(declaim (type (or (and unsigned-byte fixnum) record) **current-record**))
(defglobal **current-record** 0)

(defvar *initial-record*)

(defun snapshot-generation ()
  (let ((initial *initial-record*))
    (if (record-p initial)
        (record-generation initial)
        initial)))

The generation associated with a snapshot is the snapshot if it is a positive fixnum, otherwise it is the write record’s generation.

Before using any read, we make sure that the generation counter hasn’t changed.

(defun check ()
  #-(or x86 x86-64) (sb-thread:barrier (:read)) ; x86 don't reorder reads
  (let ((initial *initial-record*)
        (current **current-record**))
    (unless (or (eql initial current)
                (and (record-p initial)
                     (eql (record-generation initial) current)))
      (throw 'fail t))))

I see two ways to deal with starting a read transaction while a write is in progress: we can help the write complete, or we can overlay the write on top of the current heap in software. I chose the latter: reads can already be started by writers. If a write is in progress when we start a transaction, we stash the write set in *current-map* and lookup there first:

(defvar *current-map* nil)

(defun box-value (box)
  (prog1 (let* ((map *current-map*)
                (value (if map
                           (cdr (gethash box map (box-%value box)))
                           (box-%value box))))
           (if (record-p value)
               ;; if we observe a record, either a new write is in
               ;; progress and (check) is about to fail, or this is
               ;; for an old (already completed) write that succeeded
               ;; partially by accident.  In the second case, we want
               ;; the *old* value.
               (car (gethash box (record-ops value)))
               value))
    (check)))

We’re now ready to start read transactions. We take a snapshot of the generation counter, update *current-map*, and try to execute a function that uses box-value. Again, we don’t need a read-read barrier on x86oids (nor on SPARC, but SBCL doesn’t have threads on that platform).

(defun call-with-transaction (function &rest arguments)
  (catch 'fail
    (let* ((*initial-record* **current-record**)
           (*current-map* (and (record-p *initial-record*)
                               (record-ops *initial-record*))))
      #-(or x86 x86-64) (sb-thread:barrier (:read))
      (return-from call-with-transaction
        (values (apply function arguments) t))))
  (values nil nil))

(defmacro with-transaction ((&rest bindings) &body body)
  `(call-with-transaction (lambda ,(mapcar #'first bindings)
                            ,@body)
                          ,@(mapcar #'second bindings)))

The next function is the keystone: helping a write record go through exactly once.

(defun help (record)
  (flet ((write-one (box old new)
           ;; if record isn't the current generation anymore,
           ;; it has already been completed
           (unless (eq **current-record** record)
             (return-from help nil))
           (let ((actual (sb-ext:cas (box-%value box) old record)))
             (when (eql actual new) ;; already done? next!
               (return-from write-one))
             ;; definite failure -> no write went through; leave.
             (unless (or (eql actual old)
                         (eql actual record))
               (return-from help nil))

             ;; check for activity before the final write
             (unless (eq **current-record** record)
               (sb-ext:cas (box-%value box) record old)
               (return-from help nil))

             ;; Really perform the write (this can only fail if
             ;; another thread already succeeded).
             (sb-ext:cas (box-%value box) record new))))
    (maphash (lambda (box op)
               (write-one box (car op) (cdr op)))
             (record-ops record)))
  ;; Success! move the generation counter forward.
  (eql record (sb-ext:cas (symbol-value '**current-record**)
                          record
                          (record-generation record))))

Now we can commit with a small wrapper around help. Transactional Mutex Locks have the notion of transactions that are created directly as write transactions. We assume that we always know how to undo writes, so transactions can only be upgraded from reader to writer. Committing a write will thus check that the generation counter is still consistent with the (read) transaction before publishing the new write set and helping it forward.

(defun commit (record)
  (check-type record record)
  (let ((initial (loop
                   (let ((value **current-record**))
                     (if (record-p value)
                         (help value)
                         (return value))))))
    (unless (and (eql (sb-ext:cas (symbol-value '**current-record**)
                                  initial record)
                      initial)
                 (help record))
      (throw 'fail t))))

And now, some syntactic sugar to schedule writes:

(defvar *write-record*)

(defun call-with-write-record (function)
  (let ((*write-record* (%make-record (mod (1+ (snapshot-generation))
                                           (1+ most-positive-fixnum))
                                      (make-hash-table))))
    (multiple-value-prog1 (funcall function)
      (commit *write-record*))))

(defun (setf box-value) (value box)
  (setf (gethash box (record-ops *write-record*))
        (cons (box-value box) value))
  value)

(defmacro with-write (() &body body)
  `(call-with-write-record (lambda ()
                             ,@body)))
That’s enough for a smoke test on my dual core laptop.

(defvar *a* (make-box 0))
(defvar *b* (make-box 0))
(defvar *semaphore* (sb-thread:make-semaphore))

(defun test-reads (n)
  (let ((a *a*)
        (b *b*))
    (sb-thread:wait-on-semaphore *semaphore*)
    (loop repeat n
          count (with-transaction ()
                  (assert (eql (box-value a) (box-value b)))
                  t))))

(defun test-writes (n)
  (let ((a *a*)
        (b *b*))
    (sb-thread:wait-on-semaphore *semaphore*)
    (loop repeat n
          count (with-transaction ()
                  (with-write ()
                    (incf (box-value a))
                    (incf (box-value b)))))))

The function test-reads counts the number of successful read transactions and checks that (box-value a) and (box-value b) are always equal. That consistency is preserved by test-writes, which counts the number of times it succeeds in incrementing both (box-value a) and (box-value b).

The baseline case should probably be serial execution, while the ideal case for transactional mutex locks is when there is at most one writer. Hopefully, lock-free sequence locks also do well when there are multiple writers.

(defun test-serial (n)
  (setf *a* (make-box 0)
        *b* (make-box 0)
        *semaphore* (sb-thread:make-semaphore :count 4))
  (list (test-reads (* 10 n))
        (test-reads (* 10 n))
        (test-writes n)
        (test-writes n)))

(defun test-single-writer (n)
  (setf *a* (make-box 0)
        *b* (make-box 0)
        *semaphore* (sb-thread:make-semaphore))
  (let ((threads
          (list (sb-thread:make-thread #'test-reads :arguments (* 10 n))
                (sb-thread:make-thread #'test-reads :arguments (* 10 n))
                (sb-thread:make-thread #'test-writes
                                       :arguments (ceiling (* 1.45 n))))))
    (sb-thread:signal-semaphore *semaphore* 3)
    (mapcar (lambda (x)
              (ignore-errors (sb-thread:join-thread x)))
            threads)))

(defun test-multiple-writers (n)
  (setf *a* (make-box 0)
        *b* (make-box 0)
        *semaphore* (sb-thread:make-semaphore))
  (let ((threads
          (list (sb-thread:make-thread #'test-reads :arguments (* 10 n))
                (sb-thread:make-thread #'test-reads :arguments (* 10 n))
                (sb-thread:make-thread #'test-writes :arguments n)
                (sb-thread:make-thread #'test-writes :arguments n))))
    (sb-thread:signal-semaphore *semaphore* 4)
    (mapcar (lambda (x)
              (ignore-errors (sb-thread:join-thread x)))
            threads)))

Let’s try this!

First, the serial case. As expected, all the transactions succeed, in 6.929 seconds total (6.628 without GC time). With one writer and two readers, all the writes succeed (as expected), and 98.5% of reads do as well; all that in 4.186 non-GC seconds, a 65% speed up. Finally, with two writers and two readers, 76% of writes and 98.5% of reads complete in 4.481 non-GC seconds. That 7% slowdown compared to the single-writer case is pretty good: my laptop only has two cores, so I would expect more aborts on reads and a lot more contention with, e.g., a spinlock.

CL-USER> (gc :full t) (time (test-serial 1000000))
Evaluation took:
  6.929 seconds of real time
  6.944531 seconds of total run time (6.750770 user, 0.193761 system)
  [ Run times consist of 0.301 seconds GC time, and 6.644 seconds non-GC time. ]
  100.23% CPU
  11,063,956,432 processor cycles
  3,104,014,784 bytes consed

(10000000 10000000 1000000 1000000)
CL-USER> (gc :full t) (time (test-single-writer 1000000))
Evaluation took:
  4.429 seconds of real time
  6.465016 seconds of total run time (5.873936 user, 0.591080 system)
  [ Run times consist of 0.243 seconds GC time, and 6.223 seconds non-GC time. ]
  145.97% CPU
  6,938,703,856 processor cycles
  2,426,404,384 bytes consed

(9863611 9867095 1450000)
CL-USER> (gc :full t) (time (test-multiple-writers 1000000))
Evaluation took:
  4.782 seconds of real time
  8.573603 seconds of total run time (7.644405 user, 0.929198 system)
  [ Run times consist of 0.301 seconds GC time, and 8.273 seconds non-GC time. ]
  179.30% CPU
  7,349,757,592 processor cycles
  3,094,950,400 bytes consed

(9850173 9853102 737722 730614)

How does a straight mutex do with four threads?

(defun test-mutex (n)
  (let ((mutex (sb-thread:make-mutex))
        (semaphore (sb-thread:make-semaphore))
        (a 0)
        (b 0))
    (flet ((reader (n)
             (sb-thread:wait-on-semaphore semaphore)
             (loop repeat n do
               (sb-thread:with-mutex (mutex)
                 (assert (eql a b)))))
           (writer (n)
             (sb-thread:wait-on-semaphore semaphore)
             (loop repeat n do
               (sb-thread:with-mutex (mutex)
                 (incf a)
                 (incf b)))))
      (let ((threads
              (list (sb-thread:make-thread #'reader
                                           :arguments (* 10 n))
                    (sb-thread:make-thread #'reader
                                           :arguments (* 10 n))
                    (sb-thread:make-thread #'writer
                                           :arguments (ceiling (* .75 n)))
                    (sb-thread:make-thread #'writer
                                           :arguments (ceiling (* .75 n))))))
        (sb-thread:signal-semaphore semaphore 4)
        (mapc #'sb-thread:join-thread threads)))))
CL-USER> (gc :full t) (time (test-mutex 1000000))
Evaluation took:
  5.814 seconds of real time
  11.226734 seconds of total run time (11.169670 user, 0.057064 system)
  193.10% CPU
  9,248,370,000 processor cycles
  1,216 bytes consed

 #<SB-THREAD:THREAD FINISHED values: NIL {1003A6E383}>
 #<SB-THREAD:THREAD FINISHED values: NIL {1003A6E513}>

There’s almost no allocation (there’s no write record), but the lack of read parallelism makes locks about 20% slower than the lock-free sequence lock. A reader-writer lock would probably close that gap. The difference is that the lock-free sequence lock has stronger guarantees in the worst case: no unlucky preemption (or crash, with shared memory IPC) can cause the whole system to stutter or even halt.

The results above correspond to my general experience. Lock-free algorithms aren’t always (or even regularly) more efficient than well thought out locking schemes; however, they are more robust and easier to reason about. When throughput is more than adequate, it makes sense to eliminate locks, not to improve the best or even the average case, but rather to eliminate a class of worst cases – including deadlocks.

P.S., here’s a sketch of the horrible cross-modifying code hack. It turns out that the instruction cache is fully coherent on (post-586) x86oids; the prefetch queue will even reset itself based on the linear (virtual) address of writes. With a single atomic byte write, we can turn a xchg (%rax), %rcx into xchg (%rbx), %rcx, where %rbx points to a location that’s safe to mutate arbitrarily. That’s an atomic store predicated on the value of a control word elsewhere (hidden in the instruction stream itself, in this case). We can then dedicate one such sequence of machine code to each transaction and reuse them via some Safe Memory Reclamation mechanism (PDF).

There’s one issue: even without preemption (if a writer is pre-empted, it should see the modified instruction upon rescheduling), stores can take pretty long to execute: in the worst case, the CPU has to translate to a physical address and wait for the bus lock. I’m pretty sure there’s a bound on how long a xchg m, r64 can take, but I couldn’t find any documentation with hard figures. If we knew that xchg m, r64 never lasted more than, e.g., 10k cycles, a program could wait that many cycles before enqueueing a new write. That wait is bounded and, as long as writes are disabled very rarely, should improve the worst-case behaviour without affecting the average throughput.

For older items, see the Planet Lisp Archives.

Last updated: 2015-02-25 23:00