Planet Lisp

Quicklisp news: The Quicklisp local-projects mechanism

· 46 minutes ago
Quicklisp provides a lot of software, but there's also a simple way to load things that Quicklisp doesn't provide. That same mechanism can be used to override libraries Quicklisp does provide.

The local projects mechanism sets up a special directory that is automatically scanned for software to load. Here are a few quick examples.

Trying a library not in Quicklisp

First, imagine that you just heard about a great new library and want to try it. However, it's not available through Quicklisp yet, only through a git repository. One easy way to try it:
$ cd ~/quicklisp/local-projects
$ git clone
After the git command completes, and there is a fun-project subdirectory with a fun-project/fun-project.asd file present, the system is visible to ASDF and can be found with asdf:find-system or loaded with ql:quickload. When loaded through ql:quickload, Quicklisp will also fetch and load any prerequisites automatically.

Overriding a library in Quicklisp

Second, imagine that you want to hack on a library that Quicklisp already provides. You don't want to load and hack on the version from Quicklisp - that software is not under version control, and just represents a snapshot of the project at a particular point in time.

Once again, the procedure is to put the software in the ~/quicklisp/local-projects/ directory:
$ cd ~/quicklisp/local-projects/
$ git clone
After the git command completes, (ql:quickload "vecto") will load the library from local-projects rather than from the standard Quicklisp release.

How it works

The local-projects mechanism is relatively automatic. Here's how it works underneath, and how to fix problems that might crop up.

ASDF has an extensible mechanism (the asdf:*system-definition-search-functions* variable) for searching for system files. Quicklisp extends this mechanism with a function that does the following, all in the context of the local-projects directory.
  1. If there is no file named system-index.txt, it is created by scanning the directory tree for system files (matching "*.asd"). Each pathname is added to the file.
  2. If the system-index.txt file exists, but its timestamp is older than its containing directory, the directory is rescanned and the index recreated.
  3. The system-index.txt file is searched for any entry with a pathname-name that matches the desired system name. If there's a match, the matching pathname is probed. If it still exists, it is returned. If it has disappeared, the system-index.txt is recreated as in step 1 and the search is retried.
  4. Otherwise the system search is deferred to the remaining ASDF system search functions.
When there are multiple system files with the same name in the directory tree, the one with the shortest full pathname is returned. In the case of a tie in pathname length, the one that sorts first under #'string< is returned.
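The tie-breaking rule can be sketched as follows (a standalone illustration, not Quicklisp's actual code):

```lisp
(defun best-matching-system (pathnames)
  "Pick the winner among several .asd files with the same name:
the shortest full namestring wins, with ties broken by STRING<."
  (first (sort (copy-list pathnames)
               (lambda (a b)
                 (let ((na (namestring a))
                       (nb (namestring b)))
                   (or (< (length na) (length nb))
                       (and (= (length na) (length nb))
                            (string< na nb))))))))
```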

Timestamp problems can sometimes crop up with step 2 above. For example, if you have a directory local-projects/my-project/ and you create local-projects/my-project/supporting-system.asd, the timestamp of local-projects/ is not updated and supporting-system.asd won't be automatically added to the system index file.

There are a couple of ways to force an update of the system index file. Within Lisp, you can use (ql:register-local-projects) to immediately regenerate system-index.txt. Outside of Lisp, you can use the touch command (or an equivalent) to update the timestamp of the local-projects directory, which will trigger a rebuild of the index on the next attempt to find systems.
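For example, the touch approach looks like this (my-project and supporting-system.asd are hypothetical names):

```shell
# Hypothetical project layout under the default local-projects directory.
mkdir -p ~/quicklisp/local-projects/my-project

# Creating this file updates my-project/, not local-projects/ itself,
# so the system index is now stale.
touch ~/quicklisp/local-projects/my-project/supporting-system.asd

# Bump the timestamp of local-projects/ so the index is rebuilt
# the next time Quicklisp searches for systems.
touch ~/quicklisp/local-projects
```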

Because of how the system index file is created (and recreated as needed), Quicklisp must have write access to the local-projects directory to make use of it.


The local-projects mechanism is configured through a special variable ql:*local-project-directories*. By default, it includes only the local-projects subdirectory in the Quicklisp install directory, but you can add or remove directories at any time to have more places scanned for systems.
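For example, to have an extra source tree scanned (the directory path here is just an illustration):

```lisp
;; Add another directory to be scanned for .asd files:
(push #p"~/src/lisp/" ql:*local-project-directories*)

;; Regenerate the system index after changing the list:
(ql:register-local-projects)
```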
To disable the local-projects mechanism entirely, set ql:*local-project-directories* to NIL.

Quicklisp news: Build failures with ASDF 3.3.1

· 1 hour ago
SBCL 1.4.3 ships with ASDF 3.3.1, and a number of Quicklisp projects have build problems as a result. Linedit, mgl, micmac, cl-string-match, and others are affected.

Here is a build failure report for yesterday. (You can ignore the gendl failures - it's a special case.) If anyone has ways to fix these projects, please do so as soon as you can - otherwise they will be removed from the January Quicklisp dist update in a few weeks.

Victor Anyakin: Reading a file line-by-line revisited

· 31 hours ago

One of the frequently asked questions is: how do you read a file line by line using Common Lisp?

A canonical answer, as formulated in Practical Common Lisp (section 14, Files and File I/O), is essentially the same as the one provided by the Common Lisp Cookbook (Reading a File one Line at a Time):

(let ((in (open "/some/file/name.txt" :if-does-not-exist nil)))
  (when in
    (loop for line = (read-line in nil)
        while line do (format t "~a~%" line))
    (close in)))

And basically it does the job.
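The same loop is usually written with with-open-file, which closes the stream even on a non-local exit:

```lisp
;; Equivalent to the open/close version above, but the stream is
;; guaranteed to be closed even if an error unwinds the stack.
(with-open-file (in "/some/file/name.txt" :if-does-not-exist nil)
  (when in
    (loop for line = (read-line in nil)
          while line do (format t "~a~%" line))))
```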

But what happens if you deal with a log that has captured random bytes from a crashing application? Let's simulate this scenario by reading from /dev/urandom. SBCL will give us the following result:

debugger invoked on a SB-INT:STREAM-DECODING-ERROR in thread
#<THREAD "main thread" RUNNING {...}>:
  :UTF-8 stream decoding error on #<SB-SYS:FD-STREAM ...>:
  the octet sequence #(199 231) cannot be decoded.

Type HELP for debugger help, or (SB-EXT:EXIT) to exit from SBCL.

restarts (invokable by number or by possibly-abbreviated name):
  0: [ATTEMPT-RESYNC   ] Attempt to resync the stream at a character boundary
                         and continue.
  1: [FORCE-END-OF-FILE] Force an end of file.
  2: [INPUT-REPLACEMENT] Use string as replacement input, attempt to resync at
                         a character boundary and continue.
  3: [ABORT            ] Exit debugger, returning to top level.


The same error will be reported by other Lisp implementations. However, dealing with this problem is not really portable: it requires implementation-specific switches and boilerplate code.

For example, on SBCL it is possible to specify a replacement character in the external-format specification:

(with-open-file (in "/dev/urandom"
                    :if-does-not-exist nil
                    :external-format '(:utf-8 :replacement "?"))
  ;; read lines
  )

Other Lisps require a different and incompatible external format specification.

But there are actually other ways to read a file line-by line. cl-faster-input looks into some of them. Namely:

  • A standard read-line.
  • read-line-into-sequence suggested by Pascal Bourguignon in a cll discussion. Unlike the standard read-line this function reads lines into a pre-allocated buffer, reducing workload on the garbage collector.
  • read-ascii-line, which is part of the COM.INFORMATIMAGO.COMMON-LISP.CESARUM library.
  • ub-read-line-string from the ASCII-STRINGS package, which is part of the CL-STRING-MATCH library.

Please check src/benchmark-read-line.lisp in the source repository.
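The buffer-reuse idea behind read-line-into-sequence can be sketched like this (a hypothetical simplification, not the code from the cll thread):

```lisp
(defun read-line-into (buffer stream)
  "Fill BUFFER with the next line from STREAM.
Returns the number of characters read, or NIL at end of file.
The same buffer is reused across calls, so no per-line string is consed.
\(Lines longer than BUFFER are truncated in this sketch.)"
  (loop for i from 0 below (length buffer)
        for ch = (read-char stream nil nil)
        do (cond ((null ch) (return (if (zerop i) nil i)))
                 ((char= ch #\Newline) (return i))
                 (t (setf (aref buffer i) ch)))
        finally (return (length buffer))))
```

After a call returning n, (subseq buffer 0 n) gives the line's contents; an empty line returns 0, which is distinct from the NIL returned at end of file.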

Benchmarks show that ub-read-line-string outperforms the standard read-line approach, does not require platform-specific switches, and allows trivial character substitution on the fly (such as up- or down-casing the text, replacing control characters, etc.).

Sample usage (from the sources):

(with-open-file (is +fname+ :direction :input :element-type 'ascii:ub-char)
  (loop with reader = (ascii:make-ub-line-reader :stream is)
        for line = (ascii:ub-read-line-string reader)
        while line
        count line))

On the developer's desktop it takes 1.71 seconds to complete the benchmark with the standard read-line, and 1.076 seconds with ub-read-line-string. Memory consumption is on the same level as with the standard read-line, though significantly higher than with read-line-into-sequence.

On Clozure CL 1.9 the read-ascii-line benchmark fails, and ub-read-line-string falls into an infinite loop.

On Embeddable CL 16.0 all functions work, but the ub-read-line-string takes almost 10 times more time to complete than any of the alternatives.

Conclusion: it might be reasonable to look at different approaches to reading files line-by-line if you plan to deal with large volumes of text data that may contain malformed characters. Check the sources of cl-faster-input for different ideas, and tweak and run the benchmarks as suits your tasks.

P.S. This post was written in September 2015 but never published. As it appeared to be pretty complete, I decided to post it now, in January 2018. Stay tuned…

Quicklisp news: Download stats for December, 2017

· 11 days ago
Here are the raw Quicklisp download stats for December, 2017:

27247  alexandria
23604 closer-mop
21195 anaphora
20818 cl-ppcre
20690 split-sequence
20360 let-plus
20153 iterate
20032 babel
19888 trivial-features
18779 trivial-gray-streams
18077 bordeaux-threads
17215 cffi
16969 more-conditions
16966 trivial-garbage
16258 puri
16049 flexi-streams
15447 nibbles
14567 utilities.print-items
14366 usocket
13449 esrap
13366 chunga
13149 cl+ssl
12853 cl-base64
12701 chipz
12408 trivial-backtrace
12365 drakma
9502 cl-fad
9335 asdf-flv
9270 cl-yacc
8593 fiveam
8050 parse-number
7899 closure-common
7893 cxml
7878 log4cl
7798 local-time
7646 ironclad
7621 architecture.hooks
7347 named-readtables
7343 parser.common-rules
6783 plexippus-xpath
6767 cl-json
6708 lift
6050 optima
5425 lparallel
5234 cl-clon
5107 cxml-stp
5031 xml.location
4858 utilities.print-tree
4855 cl-dot
4430 cl-store
4055 fare-quasiquote
3963 fare-utils
3816 inferior-shell
3815 fare-mop
3707 cl-unicode
3432 cl-interpol
3321 slime
2919 trivial-utf-8
2848 cl-utilities
2830 metabang-bind
2744 quri
2628 uuid
2415 trivial-types
2375 cl-annot
2372 cl-syntax
2299 cl-slice
2255 md5
2247 trivial-indent
2234 array-utils
2229 plump
2227 documentation-utils
2226 static-vectors
2219 gettext
2107 symbol-munger
2101 arnesi
2092 collectors
2087 access
2086 fast-io
2065 djula
2056 cl-locale
2051 cl-parser-combinators
2014 hunchentoot
1910 simple-date-time
1844 ieee-floats
1625 yason
1352 rfc2388
1293 monkeylib-binary-data
1171 osicat
1163 salza2
1153 utilities.binary-dump
1135 postmodern
1044 trivial-shell
1015 prove
980 diff
949 cl-who
942 asdf-system-connections
936 command-line-arguments
933 cl-containers
931 cl-custom-hash-table
925 metatilities-base

Christophe Rhodes: algorithms and data structures term1

· 11 days ago

Another term, another cycle of Algorithms & Data Structures.

This year I am teaching the entire two-term course by myself, compared with last academic year when I co-taught with another member of staff. The change brings with it different challenges; notably that I am directly responsible for more content. Or in this case, more multiple-choice quizzes!

This term, the students have been given nine multiple-choice quizzes, of which four are new content and five are revisions of last year's quizzes. The revisions are mostly to add more variety to particular question types - finding axes of generalization that I hadn't exploited last time - and more (and more punishing) distractor answers, so that students cannot game the system too easily.

Some of this is my own adaptation to behaviours I see in the student body; for example, one behaviour I started seeing this year was the habit of selecting all the answers from a multiple-answer multiple-choice question. This was probably itself a reaction to the configuration of the quiz not telling the students the right answer after the attempt, but merely whether they had got it right or wrong; the behaviour of the quiz engine (the Moodle quiz activity) was for each selected answer to indicate the correctness status, and so students were exploiting this to see the right answers. This was not in itself a problem - there were enough questions that in the student's next attempts at the quiz they were unlikely to see the same questions again - but it was being used as a substitute for actually thinking and working at the problem, and so this was a behaviour that I wanted to discourage. The next quiz, therefore, I adapted so that it had many single-answer multiple-choice questions with many more distractor answers than usual: seven or eight, rather than the usual three or so. (I do not know whether the message got through.)

The new quizzes address some weaknesses I saw in the student body of knowledge last year, and indeed have seen in previous years too: a general lack of a robust mental model (or possibly "notional machine") of computation. To try to address this, I taught a specific dialect of pseudocode in the introductory lecture (nothing particularly esoteric; in fact, essentially what is provided by the algorithmicx LaTeX package). I then also wrote a small interpreter in emacs lisp for that pseudocode language (with an s-expression surface syntax, of course) and a pretty-printer from s-expressions to HTML, so that I could randomly generate blocks of pseudocode and ask students questions about them: starting with the basics, with sequences of expressions, and introducing conditionals, loops, nested loops, and loops with break.

The results of this quiz were revealing; at the start of the cycle, many students were unable to answer questions about loops at all - perhaps unsurprising as the complete description of the loop syntax was only given to the students in the second lecture. Even those who had intuited the meaning of the loop form that I was using had difficulty with nested loops, and this difficulty remained evident all the way to the end of the quiz period. (By way of comparison, students were able to deal with quite complicated chains of conditionals with not much more difficulty than straight-line pseudocode.) It will be very interesting to see whether the reforms and extensions we have put in place to our first-year curriculum will make a difference to this particular task next year.

Of course, once I had the beginnings of a pseudocode interpreter and pretty-printer, I then got to use it elsewhere, to the point that it has now grown (almost) enough functionality to warrant a static analyser and compiler. I'm resisting, because fundamentally it's about generating questions for multiple-choice quizzes, not about being a language for doing anything else with - but with an even fuller model for the computation, I could make something akin to Paul F. Dietz' Common Lisp random tester (still active after 15 years) which would probably have helped me spot a mistake I made when generating multiple-choice answers to questions about recursive algorithms, of the form "which of the following expressions replaces X in order to make this function return the sum of its two arguments?".

As well as the quizzes, the students have done six automatically-marked lab exercises and one technical peer-assessment. My direct involvement in assessment after setting all of these exercises has been limited to checking that the results of the peer-assessment are reasonable, by examining cases with outliers at a rate of maybe 6 minutes per student. Indirect involvement includes delivering lectures, answering questions face-to-face and on the forum, system administration, and writing code that writes code that writes summary feedback reports; this is probably a higher-value use of my time for the students than individualized marking; in that time, the students have received, on average: right-or-wrong judgments on 330 quiz questions (many of which have more than one possible right answer); 45 individual judgments on a moderately open algorithmic task; 50 marks and feedback on programming tasks; and a gentle helping of encouragement and sarcasm, in approximately equal measure, from me.

2017-18 A&DS term one coursework marks

The coursework marks are encouraging; there is a cluster of students at the top, and while the lower tail is disappointingly substantial it is artificially enhanced by a number of students who have (for whatever reason) dropped out. Limiting the analysis to those students who missed at most one assessment gives a more pleasing distribution; almost no-one who attempted everything received a failing mark, though I should temper my satisfaction with that by saying that I need to be careful that I'm not simply giving credit for being able to show up. (There are also some marks missing from this picture in the middle range, 50-70, which are withheld while allegations of plagiarism and/or collusion are resolved.)

And now, as I write, it is the last working day before term starts again, when the students return to find this term's lovingly prepared hastily assembled activities. Onward!

Zach Beane: Vectometry is now part of Vecto

· 12 days ago

I wrote vecto to be able to draw stuff to PNGs. It's based on the PostScript/PDF drawing model, all the way down to the level of function arguments. To move to a point, you use (move-to x y). Curves are done with (curve-to x1 y1 x2 y2 x3 y3). Color calls are done with (set-rgb-fill r g b), etc. Each function argument that has multiple components is passed with the components separated.

This is all right, I guess, but it's also pretty inconvenient, if you have an object that aggregates X and Y components, to break them out all the time. Passing around six things instead of three, or three things instead of one, is annoying.

So, a long time ago, I made a more objecty frontend for Vecto and called it Vectometry. It has a load of convenience functions for working with points and colors as objects rather than separated components. It predefines some useful points (like *origin*) and colors (like *white* and *black*). It also adds a set of functions for working with rectangles, which are called "boxes" in the interface.

So, for example, the old vecto code might look like this:

(with-canvas (:height 100 :width 100)
  (move-to 0 0)
  (line-to 50 50)
  (line-to 100 0)
  (save-png "foo.png"))

The new code looks something like this:

(let ((canvas (box 0 0 100 100)))
  (with-box-canvas canvas
    (move-to *origin*)
    (line-to (centerpoint canvas))
    (line-to (bottom-right canvas))
    (save-png "bar.png")))

Boxes have maxpoint, minpoint, centerpoint, bottom-left, top-left, top-right, bottom-right, height, and width functions. They all return about what you'd expect.

But there's also a combine function that takes two boxes, or a point and a box, or two points, and returns a box big enough to cover the two objects.

And expand takes a box and an amount, and returns a new box that has its corners moved out by the specified amount, in all directions. And the nice thing about with-box-canvas is that if your box doesn't align with the origin, the drawing system still does - that is, the bottom left of your canvas box can be at positive or negative coordinates, but drawing at the origin will still draw at 0, 0.

displace takes a box and a point, and adds the point components to the minpoint of the box to produce a new box at a new location.

For points, there are new add, sub, mul, and div functions that do about what you'd expect. There's also a function angle to get the angle between two points, and a function apoint that produces a point at a specified angle and distance from the origin.
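apoint is essentially polar-to-cartesian conversion; as a standalone sketch (using a cons cell rather than vectometry's actual point type):

```lisp
(defun apoint* (angle distance)
  "A point at ANGLE radians and DISTANCE units from the origin:
x = d * cos(angle), y = d * sin(angle)."
  (cons (* distance (cos angle))
        (* distance (sin angle))))
```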

Color objects are easier to make and pass around. rgb-color does what you'd expect, but there's also an hsv-color that provides a much nicer way to get related colors, and even an html-color function so you can easily use "#rgb" or "#rrggbb" strings to set fill or stroke colors.

Everything else that doesn't deal with component arguments is just passed through verbatim to Vecto, so things like stroke or fill-path don't change.

This object stuff creates new objects all the time instead of mutating old ones. Maybe it's slower and puts more pressure on the GC. But for the stuff I do I haven't noticed and it hasn't mattered.

I've been sitting on this vectometry code, using it for all my drawing needs for years but never publicizing it, because it wasn't documented yet. But I'd rather put it into vecto and make it easily accessible and document it Someday rather than leave it buried.

If you like drawing stuff and 2D APIs and PNGs and stuff, try the latest vecto from Quicklisp and give it a whirl. If you have problems let me know. If you want me to post some example code and output images let me know. Enjoy!

edit: Here's some code I posted to twitter, along with the 100-pointed and 5-pointed stars it generates:

(defun star-demo (box points dip-factor)
  (let* ((radius (/  (max (height box) (width box)) 2))
         (step (/ pi points))
         (angle (/ pi 2))
         (center (centerpoint box)))
    (with-box-canvas (expand box 5)
      (set-fill-color *white*)
      (set-stroke-color (rgba-color 0 0 0 0.5))
      (centered-circle-path center radius)
      (set-line-width 10)
      (translate center)
      (move-to (apoint angle radius))
      (dotimes (i points)
        (incf angle step)
        (line-to (apoint angle (* radius dip-factor)))
        (incf angle step)
        (line-to (apoint angle radius)))
      (set-fill-color *black*)
      (save-png (format nil "star~D.png" points)))))

Christophe Rhodes: sbcl 1 4 3 released

· 14 days ago

I released sbcl-1.4.3 just before the new year.

Mostly this blog post is to remind me that I need to improve the release script:

The script assumes that I build on x86-64. That is usually true, but this month there's a problem with building on my release build host, which is a squeeze-era VM: the new immobile-space support on x86-64 uses a synchronization builtin (__atomic_load()) that wasn't supported in gcc-4.4, and I didn't have time to investigate and test which of the __sync_ builtins is an acceptable replacement before vanishing off the internet for a couple of days. Probably:

hint = __sync_fetch_and_add(&page_hints[hint_index], 0);

Immobile space isn't (yet) supported on 32-bit x86. This meant that after shuffling some gpg keys around, I could do most of the release process on a 32-bit installation; the hard-coding of the architecture meant that I had to do some bits manually, so be more than usually suspicious when checking gnupg signatures.

Hopefully I'll arrange to fix some of this before the end of this month. (Testing the release script is a little tricky.)

Zach Beane: FASL package pitfall

· 17 days ago

In the past month I’ve seen the same failure affect multiple independent projects. Here’s how it happens:

Project A has code like this:

#+project-b (b:frob "Hello")

Project A’s system definition does not have an explicit dependency on project B.

Instead, the code that relies on project B is only read and compiled if project B happens to be loaded first; otherwise it's ignored.

There’s no problem if project A is compiled on its own. The project B code is ignored and the FASL can be loaded normally.

As soon as project B is present before project A is compiled, however, the project A FASLs contain a reference to B:FROB. In the next session, if project A is loaded from FASLs without loading project B, the system signals an error about a missing package B.

People write code like this to get optional extra functionality without unconditionally pulling in a new dependency.

I understand the desire to have optional dependencies, but I don’t think there’s a clear and direct way to express them in the current CL ecosystem. If you have a project’s symbols in your source code, you need to depend-on that project, or risk FASL-related package failure.
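One way to keep an optional call without baking B:FROB into the FASL is to resolve the symbol at run time rather than read time (a sketch; the package and function names follow the hypothetical example above):

```lisp
(defun maybe-frob (string)
  "Call B:FROB if project B is loaded in this image; otherwise do nothing.
No literal B:FROB symbol appears in the compiled code, so the FASL
loads cleanly whether or not project B is present."
  (let ((package (find-package "B")))
    (when package
      (funcall (intern "FROB" package) string))))
```

uiop:symbol-call does the same job in one line. The read-time #+project-b conditional, by contrast, freezes the decision at compile time.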

Quicklisp news: December 2017 Quicklisp dist update now available

· 17 days ago
New projects:
  • cl-ascii-table — Common Lisp library to present tabular data in ascii-art table. — MIT
  • cl-editdistance — A Common Lisp implementation of edit distance. — CC-BY-4.0
  • cl-ntriples — CL-NTRIPLES is a simple basic parser for Ntriples data. — BSD
  • cl-proj — CL-PROJ provides Proj.4 library bindings — BSD
  • de.setf.wilbur — a fork of net.sourceforge.wilbur updated for mcl and sbcl — LLGPL
  • vectors — Some utilities for using vectors — LLGPL
Updated projects: a-cl-logger, able, ahungry-fleece, architecture.builder-protocol, architecture.service-provider, aws-sign4, babel, beast, cacle, carrier, cepl, chancery, chanl, chronicity, cl+ssl, cl-conllu, cl-cuda, cl-digraph, cl-diskspace, cl-disque, cl-emoji, cl-enumeration, cl-fad, cl-fond, cl-gamepad, cl-gobject-introspection, cl-gpio, cl-graph, cl-iconv, cl-interpol, cl-liballegro, cl-messagepack-rpc, cl-neovim, cl-ntp-client, cl-pcg, cl-portaudio, cl-portmanteau, cl-ppcre, cl-rabbit, cl-rail, cl-random-forest, cl-spidev, cl-str, cl-string-match, cl-unification, cl-webkit, clazy, closer-mop, clx, commonqt, configuration.options, croatoan, curry-compose-reader-macros, declt, delta-debug, esrap, f2cl, fare-csv, fare-scripts, femlisp, fiasco, flexi-streams, fxml, gamebox-frame-manager, gamebox-grids, gamebox-math, glkit, graph, halftone, harmony, horner, html-template, hunchentoot, legit, lichat-protocol, lichat-serverlib, lichat-tcp-server, lisp-critic, local-time, maiden, mcclim, more-conditions, new-op, ook, osc, overlord, papyrus, parser.common-rules, parser.ini, postmodern, projectured, qbase64, qt-libs, qtools, quickproject, sb-cga, scalpl, serapeum, simple-tasks, staple, stumpwm, tap-unit-test, the-cost-of-nothing, tinaa, trivial-bit-streams, trivial-package-manager, ubiquitous, uiop, url-rewrite, utilities.binary-dump, utilities.print-items, utilities.print-tree, vecto, workout-timer, xhtmlambda, zs3.

To get this update, use (ql:update-dist "quicklisp").

This update was created with an older version of SBCL. The latest SBCL includes ASDF 3.3.1, which breaks a handful of projects in ways that have not been resolved yet. If you use the latest SBCL, see the build failure report to get an idea of what might not work properly.


Paul Khuong: How to Print Integers Really Fast (With Open Source AppNexus Code!)

· 24 days ago

Back in 2014, AppNexus needed a faster way to convert machine integers to strings. It’s a stupid problem to have–just use a binary format–but it’s surprisingly common. Maybe a project can only grow large enough for serialisation speed to be an issue if it first disregards some trivial concerns. I can finally share our solution: after failing to nicely package our internal support code for years, we recently got the OK to open source the AppNexus Common Framework (ACF) under the Apache License 2.0, despite its incomplete state.

If you don’t want to read more about what’s in ACF and why I feel it’s important to open source imperfect repositories, jump to the section on fast itoa.

ACF contains the base data structure and runtime library code we use to build production services, in C that targets Linux/x86-64. Some of it is correctly packaged; most of it just has the raw files from our internal repository. Ironically, after settling on the project's name, we decided not to publish the most "framework-y" bits of code: it's unclear why anyone else would want to use them. The data structures are in C, and tend to be read-optimised, with perhaps some support for non-blocking single-writer/multi-reader concurrency. There are also non-blocking algorithms to support the data structures, and basic HTTP server code that we find useful for running CPU-intensive or mixed CPU/network-intensive services.

Publishing this internal code took a long time because we were trying to open a project that didn’t exist yet, despite being composed of code that we use every day. AppNexus doesn’t sell code or binaries. Like many other companies, AppNexus sells services backed by in-house code. Our code base is full of informal libraries (I would be unable to make sense of the code base if it wasn’t organised that way), but enforcing a clean separation between pseudo-libraries can be a lot of extra work for questionable value.

These fuzzy demarcations are made worse by the way we imported some ideas directly from Operating Systems literature, in order to support efficient concurrent operations. That had a snowball effect: everything, even basic data structures, ends up indirectly depending on runtime system/framework code specialised for our use case. The usual initial offenders are the safe memory reclamation module, and the tracking memory allocator (with a bump pointer mode); both go deep in internals that probably don’t make sense outside AppNexus.

Back in 2015, we looked at our support code (i.e., code that doesn’t directly run the business) and decided we should share it. We were–and still are–sure that other people face similar challenges, and exchanging ideas, if not directly trading code, can only be good for us and for programming in general. We tried to untangle the “Common” (great name) support library from the rest of the code base, and to decouple it from the more opinionated parts of our code, while keeping integration around (we need it), but purely opt-in.

That was hard. Aiming for a separate shared object and a real Debian package made it even harder than it had to be. The strong separation between packaged ACF code and the rest of the repo added a lot of friction, and the majority of the support code remained in-tree.

Maybe we made a mistake when we tried to librarify our internals. We want a library of reusable code; that doesn’t have to mean a literal shared object. I’m reminded of the two definitions of portable code: code sprinkled with platform conditionals, or code that can be made to run on a new machine with minimal effort. Most of the time, I’d rather have the latter. Especially when code mostly runs on a single platform, or is integrated in few programs, I try to reduce overhead for the common case, while making reuse possible and easy enough that others can benefit.

And that’s how we got the ACF effort out of the door: we accepted that the result would not be as polished as our favourite open source libraries, and that most of the code wouldn’t even be packaged or disentangled from internals. That’s far from an ideal state, but it’s closer to our goals than keeping the project private and on the backburner. We got it out by “feature” boxing the amount of work–paring it down to figuring out what would never be useful to others, and tracking down licenses and provenance–before pushing the partial result out to a public repository. Unsurprisingly, once that was done, we completed more tasks on ACF in a few days than we have in the past year.

Now that ACF is out, we still have to figure out the best way to help others co-opt our code, to synchronise the public repository with our internal repository, and, in my dreams, to accept patches for the public repo and have them also work for the internal one. In the end, what’s important is that the code is out there with a clear license, and that someone with similar problems can easily borrow our ideas, if not our code.

The source isn’t always pretty, and is definitely not as well packaged and easily re-usable as we’d like it to be, but it has proved itself in production (years of use on thousands of cores), and builds to our real needs. The code also tries to expose correct and efficient enough code in ways that make correct usage easy, and, ideally, misuse hard. Since we were addressing specific concrete challenges, we were able to tweak contracts and interfaces a bit, even for standard functionality like memory allocation.

The last two things are what I’m really looking for when exploring other people’s support code: how did usage and development experience drive interface design, and what kind of non-standard tradeoffs allowed them to find new low-hanging fruits?

If anyone else is in the same situation, please give yourself the permission to open source something that’s not yet fully packaged. As frustrating as that can be, it has to be better than keeping it closed. I’d rather see real, flawed but production-tested, code from which I can take inspiration than nothing at all.

How to print integers faster

The integer to string conversion file (an_itoa) is one instance of code that relaxes the usual [u]itoa contract because it was written for a specific problem (which also gave us real data to optimise for). The relaxation stems from the fact that callers should reserve up to 10 chars to convert 32 bit (unsigned) integers, and 20 chars for 64 bit ones: we let the routines write garbage (0/NUL bytes) after the converted string, as long as it’s in bounds. This allowance, coupled with a smidge of thinking, let us combine a few cute ideas to solve the depressingly common problem of needing to print integers quickly. Switching to an_itoa might be a quick win for someone else, so I cleaned it up and packaged it immediately after making the repository public.

We wrote an_itoa in July 2014. Back then, we had an application with a moderate deployment (a couple of racks on three continents) that was approaching capacity. While more machines were in the pipeline, a quick perf run showed it was spending a lot of time converting strings to integers and back. We already had a fast-ish string-to-integer function. Converting machine integers back to strings, however, is a bit more work, and took up around 20% of total CPU time.

Of course, the real solution here is to not have this problem. We shouldn’t have been using a human-readable format like JSON in the first place. We had realised long ago that the format would be a problem, and were actually in the middle of a transition to protobuf, after a first temporary fix (replacing a piece of theoretically reconfigurable JavaScript that was almost never reconfigured with hardcoded C that performed the same JSON manipulation). But, there we were, in the middle of this slow transition involving terabytes of valuable persistent data, and we needed another speed boost until protobuf was ready to go.

When you’re stuck with C code that was manually converted, line by line, from JavaScript, you don’t want to try and make high level changes to the code. The only reasonable quick win was to make the conversion from integer to string faster.

Human-readable formats wasting CPU cycles to print integers is a common problem, and we quickly found a few promising approaches and libraries. Our baseline was the radix-10 code in stringencoders. This post about Lwan suggested sticking with radix-10, but generating the string backward instead of generating forward and reversing, as the stringencoders library does. Facebook apparently hit a similar problem in 2013, which led to this solution by Andrei Alexandrescu. The Facebook code combines two key ideas: radix-100 encoding, and finding the length of the string with a galloping search so the result can be written backward, directly where it should go.
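The backward-generation idea is simple enough to sketch. Here is a minimal version in the spirit of Lwan's converter (the name and signature are mine, not Lwan's API): digits come out least significant first, so we write them from the end of the buffer and skip the reversal pass that the stringencoders code needs.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Minimal backward radix-10 conversion (illustrative, not Lwan's API):
 * digits are produced least significant first, directly at the end of
 * the buffer, so no strreverse pass is needed afterward.
 * `end` points one past the last byte of the buffer; the function
 * returns a pointer to the first character of the result. */
static char *backward_utoa(char *end, uint32_t v)
{
    char *p = end;
    do {
        *--p = (char)('0' + v % 10);  /* peel off the last digit */
        v /= 10;
    } while (v != 0);
    return p;
}
```

The catch is that the result lands at the end of the buffer: a caller that needs it at the start must still copy it into place, which is exactly the problem the Facebook code's galloping search tries to solve up front.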

Radix-100 made sense, although I wasn’t a fan of the 200-byte lookup table. I was also dubious of the galloping search: it’s a lot of branches, and not necessarily easy to predict. The kind of memmove we need to fix up after conversion is small and easy to specialise on x86, so we might not need to predict the number of digits at all.

I then looked at the microbenchmarks for Andrei’s code, and they made it look like the code was either tested on integers with a fixed number of digits (e.g., only 4-digit integers), or randomly picked with uniform probability over a large range.

If the number of digits is fixed, the branchiness of galloping search isn’t an issue. When sampling uniformly... it’s also not an issue because most integers are large! If I pick an integer at random in [0, 1e6), 90% of the integers have 6 digits, 99% 5 or 6, etc.

Sometimes, uniform selection is representative of the real workload (e.g., random uids or sequential object ids). Often, not so much. In general, small numbers are more common; for example, small counts can be expected to roughly follow a Poisson distribution.

I was also worried about the data cache footprint of the larger lookup table for radix-100 encoding, but then realised we were converting integers in tight loops, so the lookup table should usually be hot. That also meant we could afford a lot of instruction bytes; a multi-KB itoa function wouldn’t be acceptable, but a couple hundred bytes were fine.

Given these known solutions, John and I started doodling for a bit. Clearly, the radix-100 encoding was a good idea. We now had to know if we could do better.

Our first attempt was to find the number of decimal digits more quickly than with the galloping search. It turns out that approximating \(\log\sb{10}\) is hard, and we gave up ;)

We then realised we didn’t need to know the number of decimal digits. If we generated the string in registers, we could find the length after the fact, slide bytes with bitwise shifts, and directly write to memory.

I was still worried about the lookup table: the random accesses in the 200 byte table for radix-100 encoding could hurt when converting short arrays of small integers. I was more comfortable with some form of arithmetic that would trade best-case speed for consistent, if slightly sub-optimal, performance. As it turns out, it’s easy to convert values between 0 and 100 to unpacked BCD with a reciprocal multiplication by \( 1/10 \) and some in-register bit twiddling. Once we have a string of BCD bytes buffered in a general purpose register, we can vertically add '0' to every byte in the register to convert to ASCII characters. We can even do the whole conversion on a pair of such values at once, with SIMD within a register.
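A scalar sketch of that trick (my own minimal version, not the an_itoa code itself): a multiply by 103 and a shift right by 10 computes ⌊v/10⌋ exactly for v in [0, 100), the quotient and remainder are laid out one per byte as unpacked BCD, and a single add of 0x3030 converts both bytes to ASCII at once.

```c
#include <assert.h>
#include <stdint.h>

/* Convert v in [0, 100) to two ASCII digits without a lookup table:
 * a reciprocal multiplication (103/1024 ~= 1/10, exact for v < 100)
 * splits v into unpacked BCD, then a single vertical add of 0x3030
 * turns both BCD bytes into ASCII at once (SIMD within a register). */
static void two_digits(char dst[2], uint32_t v)
{
    uint32_t tens  = (v * 103) >> 10;               /* v / 10 for v < 100 */
    uint32_t bcd   = tens | ((v - 10 * tens) << 8); /* unpacked BCD */
    uint32_t ascii = bcd + 0x3030;                  /* '0' added to each byte */
    dst[0] = (char)(ascii & 0xff);                  /* tens digit */
    dst[1] = (char)(ascii >> 8);                    /* ones digit */
}
```

The real code does this on a pair of radix-100 limbs at once in a 64-bit register; the structure is the same, just wider.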

The radix-100 approach is nice because it chops up the input two digits at a time; the makespan for a given integer is roughly half as long, since modern CPUs have plenty of execution units for the body.

The dependency graph for radix-10 encoding of 12345678 is a pure chain of 7 serial steps: each division by 10 peels off the last digit and feeds its quotient to the next step.

Going for radix-100 halves the number of steps, to 4. The steps are still serial, except for the conversion of integers in [0, 100) to strings.

Could we expose even more ILP than the radix-100 loop?

The trick is to divide and conquer: divide by 10000 (1e4) before splitting each group of four digits with a radix-100 conversion.

Recursive encoding gives us fewer steps, and 2 of the 3 steps can execute in parallel. However, that might not always be worth the trouble for small integers, and we know that small numbers are common. Even if we have a good divide-and-conquer approach for larger integers, we must also implement a fast path for small integers.
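The shape of the recursion can be sketched as follows (a plain, zero-padded version with none of the in-register tricks): one division by 1e4 splits an 8-digit input into two independent 4-digit halves, and each half is then chopped up with radix-100 arithmetic. The two halves have no data dependency on each other, which is where the extra ILP comes from.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Divide-and-conquer conversion of n in [0, 1e8) to 8 zero-padded
 * ASCII digits. Step 1 splits by 1e4; steps 2a/2b (the loop body for
 * each half) are independent and can execute in parallel on a
 * superscalar core. */
static void digits8(char dst[8], uint32_t n)
{
    uint32_t halves[2] = { n / 10000, n % 10000 }; /* step 1: split */
    for (int h = 0; h < 2; h++) {                  /* steps 2a/2b */
        uint32_t v = halves[h];
        uint32_t hi = v / 100, lo = v % 100;       /* radix-100 limbs */
        dst[4 * h + 0] = (char)('0' + hi / 10);
        dst[4 * h + 1] = (char)('0' + hi % 10);
        dst[4 * h + 2] = (char)('0' + lo / 10);
        dst[4 * h + 3] = (char)('0' + lo % 10);
    }
}
```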

The fast path for small integers (or the most significant limb of larger integers) converts a 2- or 4-digit integer to unpacked BCD, bit-scans for the number of leading zeros, converts the BCD to ASCII by adding '0' (0x30) to each byte, and shifts out any leading zeros; we assume that trailing noise is acceptable, and it’s all NUL bytes anyway.
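Those steps can be put together roughly like this (my sketch, not the actual an_itoa code; it assumes a little-endian target, a GCC/Clang `__builtin_ctz`, and the relaxed contract that lets us store trailing junk bytes):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Fast path for v in [0, 10000): build unpacked BCD in a register
 * (most significant digit in the low byte, so one little-endian store
 * emits string order), count leading zero digits with a bit scan,
 * shift them out, and add '0' to every byte at once. Returns the
 * number of characters written; all 4 bytes of dst may be written
 * (trailing bytes are noise), mirroring an_itoa's relaxed contract. */
static unsigned small_utoa(char dst[4], uint32_t v)
{
    uint32_t hi = v / 100, lo = v % 100;          /* two radix-100 limbs */
    uint32_t bcd = (hi / 10)
                 | (hi % 10) << 8
                 | (lo / 10) << 16
                 | (lo % 10) << 24;
    unsigned zeros = bcd ? (unsigned)__builtin_ctz(bcd) / 8 : 3;
    bcd >>= 8 * zeros;                            /* drop leading '0' digits */
    bcd += 0x30303030;                            /* BCD to ASCII, vertically */
    memcpy(dst, &bcd, 4);                         /* trailing bytes are noise */
    return 4 - zeros;
}
```

Note how the bit scan works: each digit occupies one byte and is at most 9, so the index of the lowest set bit, divided by 8, is exactly the number of leading zero digits (with a special case to keep one '0' for the value 0).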

For 32-bit integers an_itoa (really an_uitoa) looks like:

if number < 100:
    execute specialised 2-digit function
if number < 10000:
    execute specialised 4-digit function

partition number into the first 4 digits, the next 4 digits, and the remainder

convert the first 2 groups of 4 digits to strings
if number < 1e8:  # remainder is 0!
    shift out leading zeros, print strings
else:
    print remainder  # at most 100, since 2^32 < 1e10
    print strings for the first 2 groups of 4 digits

The 64-bit version, an_ltoa (really an_ultoa), is more of the same, with differences when the input number exceeds 1e8.

Is it actually faster?

I’ve already concluded that cache footprint was mostly not an issue, but we should still make sure we didn’t get anything too big.

  • an_itoa: 400 bytes
  • an_ltoa: 880 bytes
  • fb_itoa: 426 bytes + 200-byte LUT
  • fb_constant_itoa (without the galloping search): 172 bytes + 200-byte LUT
  • lwan_itoa (radix-10, backward generation): 60 bytes
  • modp_uitoa10: 91 bytes

The galloping search in Facebook’s converter takes a lot of space (there’s a ton of conditional branches, and large constants must be encoded somewhere). Even if we disregard the lookup table, an_itoa is smaller than fb_itoa, and an_ltoa (which adds code for > 32 bit integers) is only 254 bytes larger than fb_itoa (+ LUT). Now, Facebook’s galloping search attempts to make small integers go faster by checking for them first; when we convert small numbers, we don’t expect to execute all ~250 bytes of the galloping search. However, an_itoa and an_ltoa are similar: the code is set up so that larger numbers jump forward over specialised subroutines for small integers. Small integers thus fall through and only execute code at the beginning of the functions. 400 or 880 bytes are sizable footprints compared to the 60 or 90 bytes of the radix-10 functions, but acceptable for code called in tight loops.

Now that we feel like the code and lookup table sizes are reasonable (something that microbenchmarks rarely highlight), we can look at speed.

I first ran the conversion with random integers in each digit-count class, from 1 digit (i.e., numbers in [0, 10)) to 19 digits (numbers in [1e18, 1e19)). The instruction cache was hot, but the routines were not warmed up on that size class of numbers (more realistic that way).

The results are cycle counts (with the minimum overhead for a no-op conversion subtracted from the raw count), on an unloaded 2.4 GHz Xeon E5-2630L, a machine that’s similar to our older production hardware.

We have data for:

  • an_itoa, our 32 bit conversion routine;
  • an_ltoa, our 64 bit conversion routine;
  • fb_constant_itoa, Facebook’s code, with the galloping search stubbed out;
  • fb_itoa, Facebook’s radix-100 code;
  • itoa, GNU libc conversion (via sprintf);
  • lw_itoa, Lwan’s backward radix-10 converter;
  • modp, stringencoder’s radix-10 / strreverse converter.

I included fb_constant_itoa to serve as a lower bound on the radix-100 approach: the conversion loop stops as soon as it hits 0 (same as fb_itoa), but the data is written at a fixed offset, like lw_itoa does. In both fb_constant_itoa’s and lw_itoa’s cases, we’d need another copy to slide the part of the output buffer that was populated with characters over the unused padding (that’s why fb_itoa has a galloping search).

When I chose these functions back in 2014, they were all I could find that was reasonable. Since then, I’ve seen one other divide and conquer implementation, although it uses a lookup table instead of arithmetic to convert radix-100 limbs to characters, and an SSE2 implementation that only pays off for larger integers (32 bits or more).

Some functions only go up to UINT32_MAX, in which case we have no data after 9 digits. The raw data is here; I used this R script to generate the plot.

The solid line is the average time per conversion (in cycles), over 10K data points, while the shaded region covers the 10th percentile to the 90th percentile.

(GNU) libc’s conversion is just way out there. The straightforward modp (stringencoders) code overlaps with Facebook’s itoa; it’s slightly slower, but so much smaller.

We then have two incomplete string encoders: neither fb_constant_itoa nor lw_itoa generates its output where it should go. They fill a buffer from the end, and something else (not benchmarked) is responsible for copying the valid bytes where they belong. If an incomplete implementation suffices, Lwan’s radix-10 approach is already competitive with, arguably faster than, the Facebook code. The same backward loop, but in radix-100, is definitely faster than Facebook’s full galloping-search/radix-100 converter.

Finally, we have an_itoa and an_ltoa, which are neck and neck with one another, faster than both modp and fb_itoa on small and large integers, and even comparable with or faster than the incomplete converters. Their runtime is also more reliable (less variance) than modp’s and fb_itoa’s: modp pays for the second variable-length loop in strreverse, and fb_itoa for the galloping search. There are more code paths in an_itoa and an_ltoa, but no loop, so the number of (unpredictable) conditional branches is lower.

What have we learned from this experiment?

  1. It’s easy to improve on (g)libc’s sprintf. That makes sense, since that code is so generic. In practice, however, we only convert to decimal, some hex, even less octal, and the rest is noise; maybe we can afford to special-case these bases.
  2. The double traversal in modp_uitoa10 hurts. It makes sense to avoid it by generating backward, ideally in the right spot from the start.
  3. Radix-100 encoding is a win over radix-10 (fb_constant_itoa is faster than lwan_itoa).
  4. Using registers as buffers while generating digits is a win (an_itoa and an_ltoa are faster for small values).
  5. Divide and conquer is also a win (an_ltoa is flatter for large integers).

With results that made sense for an easily understood microbenchmark, I decided to try a bunch of distributions. Again, the code was hot, the predictors lukewarm, and we gathered 10K cycle counts per distribution/function. The raw data is here, and I used this R script to generate the plot.

The independent variables are all categorical here, so I use one facet per distribution, and, in each facet, a boxplot per conversion function, as well as a jittered scatter plot to show the distribution of cycle counts.

Clearly, we can disregard glibc’s sprintf (itoa).

The first facet generated integers by choosing uniformly between \(100, 1000, 10\sp{4}, \ldots, 10\sp{8}\). That’s a semi-realistic variation on the earlier dataset, which generated a bunch of numbers in each size class, and serves as an easily understood worst-case for branch prediction. Both an_itoa and an_ltoa are faster than the other implementations, and branchier implementations (fb_itoa and modp) show their variance. Facebook’s fb_itoa isn’t even faster than modp’s radix-10/strreverse encoder. The galloping search really hurts: fb_constant_itoa, without that component, is slightly faster than the radix-10 lw_itoa.

The second facet is an even harder case for branch predictors: random values skewed with an exponential (pow(2, 64.0 * random() / RAND_MAX)), to simulate real-world counts. Both an_itoa and an_ltoa are faster than the other implementations, although an_ltoa less so: an_itoa only handles 32-bit integers, so it deals with less entropy. Between the 32-bit implementations, an_itoa is markedly faster and more consistent than lw_itoa (which is incomplete) and modp. Full 64-bit converters generally exhibit more variance in runtime (their input is more randomised), but an_ltoa is still visibly faster than fb_itoa, and even than the incomplete fb_constant_itoa. We also notice that fb_itoa’s runtimes are more spread out than fb_constant_itoa’s: the galloping search adds overhead in time, but also a lot of variance. That makes me think that the Facebook code is more sensitive than others to differences in data distribution between microbenchmarks and production.

The third facet should be representative of printing internal sequential object ids: uniform integers in [0, 256K). As expected, every approach is tighter than with the skewed “counts” distribution (most integers are large). The an_itoa/an_ltoa options are faster than the rest, and it’s far from clear that fb_itoa is preferable to even modp. The range was also chosen because it’s somewhat of a worst case for an_itoa: the code does extra work for values between \(10\sp{4}\) and \(10\sp{8}\) to have more to do before the conditional branch for x < 1e8. That never pays off in the range tested here. However, even with this weakness, an_itoa still seems preferable to fb_itoa, and even to the simpler modp_uitoa10.

The fourth facet (first of the second row) shows what happens when we choose random integers in [0, 20). That test case is interesting because it’s small, thus semi-representative of some of our counts, and because it needs 1 or 2 digits with equal probability. Everything does pretty well, and runtime distributions are overall tight; branch predictors can do a decent job when there are only two options. I’m not sure why there’s such a difference between an_itoa’s and an_ltoa’s distributions. Although the code for any value less than 100 is identical at the C level, there are small differences in code generation... but I can’t pinpoint where the difference might come from.

The fifth facet, for random integers in [100, 200) is similar, with a bit more variance.

The sixth facet generates unix timestamps around a date in 2014, with uniform selection plus or minus one million seconds. It’s meant to be representative of printing timestamps. Again, an_itoa and an_ltoa are faster than the rest, with an_itoa being slightly faster and more consistent. Radix-100 (fb_constant_itoa) is faster and more consistent than radix-10 (lw_itoa), but it’s not clear if fb_itoa is preferable to modp. The variance for modp is larger than for the other implementations, even fb_itoa: that’s the cost of a radix-10 loop and of the additional strreverse.

This set of results shows that conditional branches are an issue when converting integers to strings, and that the impact of branches strongly depends on the distribution. The Facebook approach, with a galloping search for the number of digits, seems particularly sensitive to the distribution. Running something like fb_itoa because it does well in a microbenchmark is thus only a good idea if we know that the microbenchmark is representative of production.

Bigger numbers take more time to convert, but the divide and conquer approach of an_itoa and an_ltoa is consistently faster at the high end, while their unrolled SIMD-within-a-register fast path does well for small numbers.

So, if you ever find yourself bottlenecked on s[n]printf...

The correct solution to the “integer printing is too slow” problem is simple: don’t do that. After all, remember the first rule of high performance string processing: “DON’T.” When there’s no special requirement, I find Protobuf does very well as a better JSON.

However, once you find yourself in this bad spot, it’s trivial to do better than generic libc conversion code. This makes it a dangerously fun problem in a way... especially given that the data distribution can matter so much. No benchmark is perfect, but various implementations are affected differently by flaws in microbenchmarks. It’s thus essential not to overfit on the benchmark data, probably even more important than improving performance by another factor of 10% or 20% (doing 4-5x better than libc code is already a given). That’s why I prefer integer conversion code with more consistent cycle counts: there’s less room for differences due to the distribution of data.

Finally, if, like 2014-AppNexus, you find yourself converting a lot of integers to strings in tight loops (on x86-64 machines), try an_itoa or an_ltoa! The whole repository is Apache 2.0, and it should be easy to copy and paste all the dependencies to pare it down to two files. If you do snatch our code, note that the functions use their destination array (up to 10 bytes for an_itoa, and 20 for an_ltoa) as scratch space, even for small integers.

Thank you for reviewing drafts, John, Ruchir, Shreyas, and Andrew.

Luís OliveiraA Lisp REPL in your pocket

· 27 days ago

Thanks to Polos Ruetz, you can now play with Common Lisp directly on your Android phone. All you need to do is install CL REPL from the Google Play Store. CL REPL is one of the examples included with EQL5-Android, which is built on top of EQL5, a project that marries ECL with Qt.


A Common Lisp REPL with command line and history, plus a simple editor with syntax highlighting, simple visual paren-matching, a file dialog for opening/saving files, and a simple debug dialog. It uses the ECL implementation for the Lisp side, and Qt5/QML for the UI. This is an open source project (see EQL5-Android).

(via @dk_jackdaniel)

ECL NewsECL license

· 32 days ago

From time to time a little misconception emerges regarding the ECL license: namely, some people seem to have the impression that it is GPL-2.0, while in fact it is LGPL-2.1+. In this post I want to talk a little about the history of ECL and describe the practical implications of this licensing.

The heritage of Embeddable Common Lisp is rich, as you can read here. The software has had a few maintainers throughout its history, who hold copyrights to various parts of the code. ECL was licensed under GPL-2.0, but that license was changed after ECLS (or ECL-Spain) and ECoLisp became one project again and Prof. Juanjo García-Ripoll changed the license to LGPL-2.1+ with the agreement of Prof. Giuseppe Attardi. That's the point from which I took over in 2015, with the blessing of the previous maintainer. I do not own all copyrights to the software, and I can't change its license to anything that is incompatible with LGPL-2.1+. Note that parts of the codebase are licensed more liberally (like the programs in the examples directory, which may be used for any purpose and are licensed under the terms of BSD-2-Clause).

That said, I feel very comfortable with the current licensing. It preserves a reasonable balance between producer and consumer rights and fits the project's goals perfectly. In short, our current license means the following: you can use ECL for any purpose in any setting (including commercial applications), but if you make changes to ECL itself you are obliged to share those changes (and only them).

The core library is a shared object that is dynamically linked with the software; the binary called ecl is just a program which is a client of this library. Moreover, ECL compilation artifacts are usually shared objects themselves (usually disguised under the fas extension):

➜  ~ ldd `which ecl`
        =>  (0x00007ffff80c3000)
        => /home/jack/Pulpit/lisps/ecl-16.1.3/lib/ (0x00007fc7c4665000)
        => /lib/x86_64-linux-gnu/ (0x00007fc7c427a000)
        => /usr/lib/x86_64-linux-gnu/ (0x00007fc7c3ffa000)
        => /usr/lib/x86_64-linux-gnu/ (0x00007fc7c3df2000)
        => /lib/x86_64-linux-gnu/ (0x00007fc7c3bd4000)
        => /lib/x86_64-linux-gnu/ (0x00007fc7c39ce000)
        => /lib/x86_64-linux-gnu/ (0x00007fc7c36c5000)
    /lib64/ (0x00005592f1735000)
➜  ~ cd .cache/common-lisp/ecl-16.1.3-21f0b92f-linux-x64/home/jack/Pulpit/repo/mcclim/Apps/Listener
➜  Listener ls listener*
listener.fas listener.o
➜  Listener ldd listener.fas
        =>  (0x00007fffb43f5000)
        => /home/jack/Pulpit/lisps/ecl-develop/lib/ (0x00007fa2bbfc1000)
        => /lib/x86_64-linux-gnu/ (0x00007fa2bbbd6000)
        => /usr/lib/x86_64-linux-gnu/ (0x00007fa2bb956000)
        => /usr/lib/x86_64-linux-gnu/ (0x00007fa2bb74e000)
        => /lib/x86_64-linux-gnu/ (0x00007fa2bb530000)
        => /lib/x86_64-linux-gnu/ (0x00007fa2bb32a000)
        => /lib/x86_64-linux-gnu/ (0x00007fa2bb021000)
    /lib64/ (0x0000563db6716000)
➜  Listener file listener.fas 
listener.fas: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=ed1cece88028eb3a388ab0a589a9ee12415532e9, not stripped

There are a few implications of this. First, I will explain (informally) what the LLGPL clarification to the LGPL is. Many Common Lisp implementations don't work with the notion of linking, so "static linking" and "dynamic linking" don't make much sense for them. Libraries are simply added to the image. That may raise the question of whether a binary with an LGPL library in it should be considered a derivative work. My strong belief is that it is not: if the author of a Lisp library gave it the LGPL license, they meant it. Under the opposite interpretation, the LGPL would be no different from the GPL, which would be nonsense. I think that what is tested in court is intention, and there is no rational interpretation of giving a Lisp library the LGPL license except that the library does not influence the licensing of other image components. The LLGPL simply clarifies some wording to make it less ambiguous and clear up unnecessary doubts.

All that said, ECL is safe from concerns like that, and such clarification is not necessary, because it works with the very same notions the LGPL talks about (static and dynamic linking). There is no program image, only object files that a binary is composed of and linked with (exactly like in C programs).

Another interesting license (with which ECL is compatible thanks to the "or later version" clause) is LGPL-3.0. It is used, for example, by MKCL (a fork of ECL) and by GMP since version 6. In short, this license adds an important restriction to LGPL-2.1+ called the anti-tivoization provision. This provision grants software users a new freedom: they must be able to replace the library on a device running software under this license with their own version of it. This effectively means that the device can't be a complete black box.

This leads us to another topic. ECL is licensed under the LGPL-2.1+ license and is linked with GMP. As noted in the paragraph above, the newest version of GMP is licensed under LGPL-3.0. In practice this means that if you use ECL and GMP in your application, and either of them is LGPL-3.0, you can't put such a bundle on a camera device which doesn't allow changing its software. To prevent such a situation, ECL bundles its own version of GMP, a slightly modified GMP version 4, which was licensed under the LGPL-2.1+ terms. By default, the build tries to link against the GMP installed on your system, but given appropriate configure flags it may use the bundled GMP version and link it statically into the ECL library (ECL doesn't export symbols from libeclgmp.a, to avoid symbol name conflicts with the original GMP shared object).

I think that summarises it. Now I will provide a made-up FAQ to illustrate the licensing implications in shorter form:

Q: Is ECL GPL-2.0?
A: No, ECL is LGPL-2.1+.

Q: Can you provide a commercial license for us?
A: No, I don't own all the copyrights.

Q: Can we use ECL in our commercial product with proprietary components?
A: Yes, but you have to keep ECL linked dynamically with them (as a shared object).

Q: Can you incorporate proprietary components in ECL?
A: God no (and I wouldn't do that even if I could).

Q: Can we use ECL in our LGPL/GPL/AGPL product?
A: Yes, you can even link ECL statically for that purpose. Your license will be intact.

Q: Can you incorporate LGPL/GPL/AGPL components in ECL?
A: If you incorporate LGPL-2.1+, then ECL remains in LGPL-2.1+ and it can be integrated in the upstream; but if you incorporate LGPL-3.0, GPL or AGPL, then your fork of ECL will become LGPL-3.0, GPL or AGPL and it won't be integrated upstream.

Q: Can we use ECL in our BSD/MIT/Apache-2.0 product?
A: Yes. If it is dynamically linked there are no further implications. If you link it statically, the overall license is LGPL-2.1+.

Q: Can you incorporate BSD/MIT/Apache-2.0 components in ECL?
A: Yes, sometimes I do that.

Q: If I compile my software with ECL is it LGPL-2.1+?
A: No, products of compilation are not influenced by the compiler's license. You may compile any kind of free or proprietary software with ECL without any implications for the compilation artifacts or the code it compiles. I would appreciate it if you didn't use it for something unethical, though.

Q: Can I put ECL in an embedded device to which the consumer doesn't have access?
A: Yes. You may need to build ECL with bundled GMP library to avoid LGPL-3.0 implications.

Q: If I use two libraries in my application - one being LGPL and the other MIT - what is my program license?
A: That depends. If you link them statically (monolithic programs), then the resulting work will be covered by at least the LGPL (you may add more restrictions if you want). If you link them dynamically (the default), then you may use any license you want.

Q: If I use GPL libraries in my application - what is my program license?
A: Its license is at least GPL.

Q: Are you a lawyer?
A: Nope. You may want to consult one. But hey, you should also consult a lawyer regarding the terms of service you probably agree to while surfing the web, and the EULAs bundled with your favourite software and hardware (e.g., your phone).

Q: Did you cover all LGPL-2.1+ implications?
A: No, I recommend reading the license. I have talked about things which I find relevant to this post.

Q: Can I buy something from you?
A: Yes, you may buy developer time to work on ECL or to help integrate ECL with your application. I'll probably do it anyway if you drop by the IRC channel.

If you have more questions you may ask on the mailing list and IRC (channel #ecl on freenode).

Thank you for reading,
Daniel Kochmański


  • I want to thank Elias Mårtenson, Pascal Bourguignon and Tomek Kurcz for proofreading this text and for providing valuable remarks before publishing it on the blog.

  • In this post I've used SPDX-License-Identifier format where appropriate.

Didier VernaAnnouncing Quickref: a global documentation project for Common Lisp

· 34 days ago

Today, we deployed the first version of Quickref, a new global documentation project for Common Lisp.

The purpose of Quickref is to provide a centralized collection of reference manuals for the whole Quicklisp world. This means around 1500 libraries, for a total of around 3000 ASDF systems. The reference manuals are generated by Declt, which is probably the most complete documentation system for Common Lisp currently available, and delivered in HTML (PDF versions could easily be made available as well).

A lot of things can still be improved, but I'm pretty satisfied with the result so far. 3000 ASDF systems is a hell of a test suite for Declt, and I'm happy to report that it passes on practically all of them. Only a couple of issues remain, not even due to Declt itself, and only a dozen or so libraries still pose problems (mostly technical difficulties due to foreign dependencies).

Quickref was made by Antoine Martin, as part of an internship with me. Many thanks to him! We still have some cleanup and packaging to do, but we expect to open-source the infrastructure soon. I also want to thank Mark Evenson, Erik Huelsmann and the Common Lisp Foundation for hosting the project on (it was only natural)!

Finally, let me restate this again (and again): reference manuals are not user manuals. They are... reference manuals. Although automatically generated, there are some things you can do, as a library author, to improve the output (this is an area of Declt which I intend to work on in the future). Please refer to the Declt user manual (notably section 3.2 Coding Style) for more information.

TurtleWareMcCLIM demo - St Nicolas Day present

· 41 days ago

One of the projects I work on is McCLIM, a GUI toolkit tailored for Common Lisp. For a few weeks I had been thinking about recording a demo session showing some everyday usage of this software. If you are interested in taking the time to watch it, it runs around 30 minutes.

This is my first tutorial video recorded at home, so I would appreciate any feedback on what I did well and what I did wrong. Thank you, and enjoy the video!

Quicklisp newsNovember 2017 Quicklisp download stats

· 42 days ago

Here are the raw download counts for the top 100 projects in Quicklisp for November:

16654  alexandria
14200 closer-mop
13130 split-sequence
12669 cl-ppcre
12667 anaphora
12397 babel
12381 trivial-features
11989 iterate
11818 trivial-gray-streams
11520 bordeaux-threads
10907 let-plus
10821 cffi
10380 trivial-garbage
9687 flexi-streams
9603 puri
9396 nibbles
8847 usocket
8712 more-conditions
8008 cl+ssl
7807 trivial-backtrace
7741 cl-base64
7707 chunga
7505 utilities.print-items
7409 chipz
7269 esrap
7098 drakma
6996 cl-fad
5862 cl-yacc
5862 ironclad
5524 named-readtables
5337 local-time
5239 parse-number
4966 cxml
4966 closure-common
4878 fiveam
4712 asdf-flv
4565 cl-json
4515 log4cl
4375 bt-semaphore
4258 architecture.hooks
3904 plexippus-xpath
3886 lparallel
3745 parser.common-rules
3576 lift
3486 optima
3238 cl-dot
3166 slime
3109 cl-unicode
3101 cl-interpol
3061 cxml-stp
3036 cl-store
3013 cl-clon
2910 xml.location
2890 trivial-utf-8
2730 utilities.print-tree
2660 uuid
2606 fare-utils
2595 md5
2521 fare-quasiquote
2489 metabang-bind
2488 static-vectors
2401 fare-mop
2400 cl-utilities
2398 inferior-shell
2246 ieee-floats
2227 quri
2174 fast-io
1961 hunchentoot
1956 trivial-types
1930 cl-annot
1921 cl-syntax
1711 symbol-munger
1687 trivial-indent
1679 collectors
1671 arnesi
1671 access
1661 rfc2388
1642 cl-slice
1631 documentation-utils
1626 array-utils
1623 yason
1623 plump
1619 cl-parser-combinators
1614 gettext
1609 cl-locale
1606 djula
1602 cl-who
1496 simple-date-time
1415 osicat
1366 parenscript
1358 monkeylib-binary-data
1305 postmodern
1245 lisp-unit
1239 trivial-shell
1233 command-line-arguments
1227 asdf-system-connections
1223 cl-containers
1221 metatilities-base
1198 salza2
1167 parse-float

Quicklisp newsNovember 2017 Quicklisp dist update now available

· 42 days ago
New projects:
  • cacle — Extensible cache services for Common Lisp — MIT
  • ccl-compat — Clozure CL compatibility module — LLGPL
  • ccldoc — create lisp documentation using s-expressions — Apache License 2.0
  • chancery — A library for procedurally generating text, inspired by Tracery. — MIT/X11
  • cl-flac — Bindings to libflac, a simple FLAC decoding library — Artistic
  • cl-portmanteau — cl-portmanteau — OSI approved 3-clause 'New BSD License'
  • clack-pretend — A testing and debugging tool for Clack — Apache License, version 2.0
  • clath — Clath is single sign-on middleware for Clack. It allows basic login with OAuth1.0a, OAuth2 and OpenID Connect. — Apache License, version 2.0
  • lisp-chat — An experimental chat irc-like — MIT
  • mockingbird — A small stubbing and mocking library for Common Lisp — MIT
  • qbase64 — Fast and flexible base64 encoder and decoder — BSD-3-Clause
  • specialization-store — The specialization store system provides a new kind of function, called a store function, whose behavior depends on the types of objects passed to the function. — Simplified BSD License variant
  • template-function — A system for generating functions from a template. — Simplified BSD License variant
  • trivial-package-manager — Functions for installing packages from distro-specific package manager. — LLGPL
Updated projects: ahungry-fleece, also-alsa, antik, array-operations, asdf-viz, aws-sign4, cells, cepl, cepl.sdl2, ceramic, chirp, chunga, cl-ana, cl-ansi-term, cl-async, cl-bplustree, cl-charms, cl-conllu, cl-cuda, cl-custom-hash-table, cl-data-frame, cl-fad, cl-forms, cl-gd, cl-gdata, cl-graph, cl-kyoto-cabinet, cl-liballegro, cl-messagepack, cl-messagepack-rpc, cl-mixed, cl-mpg123, cl-neovim, cl-online-learning, cl-ppcre, cl-python, cl-random-forest, cl-readline, cl-rules, cl-sat, cl-sat.glucose, cl-sat.minisat, cl-sdl2, cl-slice, cl-tesseract, cl-tiled, cl-unicode, cl-virtualbox, cl-who, clache, closer-mop, clss, clx, coleslaw, configuration.options, croatoan, deeds, dexador, eazy-project, eventfd, fare-quasiquote, femlisp, flexi-streams, fs-watcher, fxml, gamebox-dgen, gamebox-frame-manager, gettext, glsl-spec, harmony, hu.dwim.partial-eval, hu.dwim.quasi-quote, inquisitor, ironclad, jsonrpc, jsown, lass, legion, legit, let-plus, lichat-protocol, lichat-serverlib, lichat-tcp-client, lisp-namespace, local-time, log4cl, lquery, maiden, maxpc, mcclim, metabang-bind, mk-string-metrics, modularize, moira, mtif, nibbles, oclcl, oook, overlord, parser.common-rules, plump, postmodern, prove, qlot, qmynd, qt-libs, remote-js, rtg-math, scalpl, sdl2kit, serapeum, simple-currency, sketch, stumpwm, terminfo, trivia, trivial-battery, trivial-benchmark, trivial-clipboard, trivial-update, trivial-ws, unix-opts, utilities.print-items, varjo, verbose, zenekindarl.

Removed projects: de.setf.wilbur, odd-streams.

To get this update, use (ql:update-dist "quicklisp"). Enjoy!

Lispjobs: Lisp Engineer, MIND.AI, Seoul, Korea & L.A., USA

· 54 days ago

MIND.AI ( is working on a brand new paradigm in artificial intelligence. We are looking to innovate REAL AI. Not just natural language processing but Natural Language Reasoning. We need versatile talents who are interested in changing the world. This is not deep learning but something totally new. And we need people with open and creative minds.


Artificial intelligence in a brand new paradigm: NOT deep learning/neural networks, though knowledge of them is a plus. Symbolic/logical AI experience will be put into practice. You will develop core logic based on a new theory of information; duties will include optimizing existing code and developing new features of the AI. We require significant experience in development on large projects, and in advanced software architecture and development in a large Common Lisp codebase. You will build new components, extend existing tooling to meet project needs, and implement high-quality library components and products.


Specifically, the Lisp engineer will:

  • Integrate with natural language libraries and packages such as the google parser to form structured data based on our proprietary model (similar to semantic networks)
  • Build data structures and algorithms to acquire new information and relate it to existing ontologies (it learns how to learn)
  • Work with computational linguists to build up knowledge bases and further define knowledge representation
  • Develop a persistence model (using a database and object serialization)
  • Implement logical reasoning based on abductive, inductive, and deductive reasoning models
  • Design and develop an interactive learning interface
  • Optimize the existing code base



  • BS in Computer Science or equivalent.
  • 3+ years of experience in Lisp programming (5+ years programming overall), working in structured systems and/or software development teams (Common Lisp; LispWorks preferred)
  • Thorough understanding of Lisp paradigms (e.g., functional and object-oriented)
  • Familiarity with common software architectures, design patterns, and software development life cycles



  • Experience with persistence (database and object serialization)
  • Experience with GOFAI
  • Familiarity with formal logic
  • Working with ontologies and taxonomies
  • Static and/or dynamic analysis
  • Dynamic analysis, program instrumentation, and profiling

Please send your CV to


Nicolas Hafner: Harmony - Confession 77

· 62 days ago

This is a blog entry about Shirakumo's sound system Harmony. While Harmony was released some months back, I spent the last few weeks rewriting large parts of it from the ground up, and I think it's worth it to write a short article describing how it's built and what you can do with it. So, if you're interested in doing sound processing and playback in Lisp, this is for you.

The need for Harmony arose out of me not finding any suitably powerful sound solution in Lisp. I tried doing a pure Lisp solution at first, but was not able to figure out how to make things go fast without sacrificing design. So, in the interest of performance, I first set out to write a C library that does the essential sound computations for me. This library is called libmixed.

I wanted to keep libmixed as simple and straight-forward as possible. As such, it does not do any sound file reading or writing, synthesising, or sound output to devices. Instead, it only concerns itself with the processing of sound samples. Another important factor for its design was that it should be easy to extend it with further capabilities, and to allow combining processing steps. This led to the segment pipeline model.

In libmixed, you assemble a set of segments - audio producers, consumers, or transforms - that perform the computations you need. You connect them together through buffers so that they can transparently exchange audio data. This produces a directed, acyclic graph where each vertex is a segment, and each edge is a buffer. This graph can then be serialised into a simple sequence that dictates the order in which the segments should be run.
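Serialising the DAG into a run order is a plain topological sort. Here is a minimal sketch in portable Common Lisp (this is an illustration, not libmixed's actual code; the function and parameter names are mine):

```lisp
;; Illustration only: derive a processing order for a segment DAG.
;; SEGMENTS is a list of segments; INPUTS-OF is a function mapping a
;; segment to the list of segments feeding it. Returns the segments
;; ordered so that every producer precedes its consumers.
(defun processing-order (segments inputs-of)
  (let ((order '())
        (seen (make-hash-table :test #'eq)))
    (labels ((visit (segment)
               (unless (gethash segment seen)
                 (setf (gethash segment seen) t)
                 ;; Process all producers first, then this segment.
                 (mapc #'visit (funcall inputs-of segment))
                 (push segment order))))
      (mapc #'visit segments)
      (nreverse order))))
```

For a mixer fed by two sources and feeding an output, this yields the two sources, then the mixer, then the output.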

Unfortunately, audio data comes in a variety of formats. Samples are frequently encoded as signed or unsigned integers of 8, 16, 24, or 32 bits, or in floats or doubles. The data might have multiple audio channels, and the samples can be either interleaved (LRLRLR..) or sequential (LLL..RRR..). The sample rate might be different as well. All of this can make it quite difficult to deal with the data between different audio components. Libmixed's solution to this problem is to force all the buffers to use the same sample rate, to encode samples in floats, and to only represent a single channel. Almost all segments present in a pipeline thus don't have to worry about any of these discrepancies anymore, reducing complexity tremendously. In order to allow interacting with foreign components easily, it does also include an unpacker and a packer segment.

The packer takes a set of buffers and information about the sample representation, and packs the buffer's data into a single C array, ensuring proper sample format, layout, and sample rate. The unpacker does the opposite. Thus, if for example you have a library that decodes an audio file, you most likely need to add an unpacker segment to the pipeline that decodes the audio data from the library into the proper internal format.
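To make the unpacking idea concrete, here is what the conversion amounts to in plain Common Lisp, for the specific case of interleaved signed 16-bit stereo samples. This is a from-scratch illustration of the concept, not libmixed's implementation:

```lisp
;; Illustration: "unpack" interleaved 16-bit stereo samples
;; (L R L R ...) into two separate single-float channel buffers,
;; the internal representation described above.
(defun unpack-stereo-16 (frames)
  (let* ((n (/ (length frames) 2))
         (left  (make-array n :element-type 'single-float))
         (right (make-array n :element-type 'single-float)))
    (dotimes (i n (values left right))
      ;; Scale from the signed-16 range into [-1.0, 1.0).
      (setf (aref left i)  (/ (aref frames (* 2 i)) 32768.0)
            (aref right i) (/ (aref frames (1+ (* 2 i))) 32768.0)))))
```

A packer segment performs the inverse transformation, from per-channel float buffers back into a packed C array in whatever format the consumer expects.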

So, for a very simple example of taking two audio files, mixing them together, applying a reverb effect, and then playing them back, the pipeline would need to look something like this:

[Figure: a simple pipeline]

We can get away with assigning the same two buffers for both of the unpackers here by using a special property of the basic-mixer segment. Instead of manually processing the two unpackers in our pipeline, we can set the segment property on the basic-mixer's inputs, which tells the basic-mixer to run the processing of that segment itself. This way, the mixer processes the segment that produces an input just as it mixes it in, removing the need to allocate individual buffers for each input to the mixer. This is one of the design decisions that still bother me a bit, but I found it necessary after discovering that I would otherwise need to allocate a huge number of buffers to allow playback of many sources simultaneously.

As it currently stands, libmixed includes segments to mix audio by either just adding samples, or through 3D spatial positioning of the source. It also includes segments to change volume and pan, to fade in and out, to generate simple sawtooth, square, triangle, or sine waves, and to include LADSPA plugins in the pipeline. I'd like to add a bunch more effects segments to it to make it more useful for real-time sound processing, but I haven't felt the motivation to get into that yet. If you're interested in sound processing and would be willing to do this, let me know!

Basically the idea of libmixed boils down to this: there are segments that have properties, inputs, and outputs. You can write and read properties, and connect buffers to the inputs and outputs. You can then tell the segment to process a number of samples, and it will read its input buffers and write to its output buffers. This all works over a struct that contains a bunch of function pointers to perform these actions. It is thus very easy to add further segments to libmixed, even as an outside library: simply produce a struct that holds the appropriate function pointers to the functions that do what you want. This is also how cl-mixed allows you to write segments from Lisp.

Ascending from the C world to the C+L world then leads us to cl-mixed, which is the bindings and wrapper library for libmixed. It takes care of all the hairy low-level stuff of interacting with the C library, tracking and allocating foreign memory, and so forth. As mentioned, it also gives you a simple interface to write your own segments from Lisp. This can be really nice in order to prototype an effect.

While libmixed is a neat framework to base your sound processing around, it doesn't exactly make most of the common tasks very convenient. Usually you have some audio files that you would like to play back, and maybe apply some effects to them. This is where Harmony comes in.

Harmony takes libmixed's generic view of segments and extends it to include sources, drains, and mixers. Sources are segments with no inputs, drains are segments without outputs, and mixers take a run-time variable number of inputs. It also greatly simplifies the pipeline construction by handling the buffer allocation for you. It does this with the help of a graph library called Flow. More on that later. Harmony also gives you a sound server object that handles the mixing in the background, allowing you to focus on just adding, removing, and changing sources in your program. Finally, Harmony includes a number of pre-made sources and drains that either connect to other libraries, or present native implementations. Currently, it supports playing back MP3, WAV, FLAC, and raw buffers, and supports outputting to out123, OpenAL, WASAPI, CoreAudio, ALSA, PulseAudio, and to raw buffers.

The easiest way to get started is to use the harmony-simple system, which assembles a default pipeline for you, allowing you to just directly play some stuff.

 (ql:quickload :harmony-simple)
 (harmony-simple:play #p"my-cool-music.mp3" :music)
 (harmony-simple:play #p"kablammo.wav" :sfx)

Assembling your own pipeline isn't very difficult, though. It comes down to just telling Harmony how to connect the inputs and outputs between segments. An optimal buffer layout is automatically computed based on the segments' properties and the graph that you describe through the connections. To do this, Harmony uses the Flow library to describe segments in a graph. Unlike most graph libraries, Flow assigns distinct input and output ports to each vertex. These ports have properties like their arity, direction, and so on. For instance, the basic-mixer from the previously illustrated pipeline would be a vertex with an input port of arbitrary arity and two output ports of single arity. Flow then employs a simple algorithm to assign colours to the edges in such a way that no two edges sharing a port have the same colour, and no input and output on a vertex have the same colour unless the vertex is marked as in-place. This kind of allocation computation has cropped up in a couple of places, so I've been able to use Flow for it in other projects as well. I don't think it's important to know how the algorithm works, but in case you're interested, the source is commented and pretty short.

How to write new sources and segments, or how to assemble your own pipeline is already illustrated pretty succinctly in the documentation, so I suggest you check it out if you're interested in working on that kind of thing. Harmony is primarily geared towards use in games, where simple pipelines and immediate playback of a variety of audio sources is necessary. However, I can also see it being used in some kind of digital audio workstation, where a graphical user interface could allow you to put together segments and configure them, mapping to a libmixed pipeline underneath.

I feel like I've been rambling about tangents for a bit here, but I suppose the reason is that Harmony doesn't really do all that much. At best it just smooths over the last remaining corners that come from the libmixed C heritage and adds some useful segments to the library. All the concepts used and all the sound technology behind it lies more in libmixed's hands though, and I think I've already explained that all earlier.

So, to close off: if you're thinking of doing some kind of digital audio processing in Lisp, keep Harmony and cl-mixed in mind. I'm also more than open to feedback and suggestions, so if you have any ideas or things you'd like to add to the projects, head on over to GitHub's issues, or talk to us on Freenode/#shirakumo.

Quicklisp news: October 2017 download stats

· 71 days ago
Here are the top 100 projects from Quicklisp for October, by "raw" download count.

16626 alexandria
15213 closer-mop
13436 anaphora
13420 split-sequence
12954 babel
12879 trivial-features
12719 iterate
12464 cl-ppcre
11798 bordeaux-threads
11724 let-plus
11693 trivial-gray-streams
11389 cffi
11281 trivial-garbage
10622 puri
9884 nibbles
9732 more-conditions
9611 flexi-streams
9026 usocket
8602 cl+ssl
8544 utilities.print-items
8153 cl-base64
8130 chunga
7889 chipz
7791 drakma
7691 esrap
7635 trivial-backtrace
6308 ironclad
5910 cl-yacc
5691 cl-fad
5302 parse-number
4996 named-readtables
4984 fiveam
4959 asdf-flv
4886 log4cl
4756 bt-semaphore
4736 local-time
4701 lparallel
4647 closure-common
4638 cxml
4594 architecture.hooks
4552 lift
3784 plexippus-xpath
3586 cl-json
3569 trivial-utf-8
3322 optima
3157 parser.common-rules
3144 cl-clon
2837 uuid
2819 cxml-stp
2705 xml.location
2700 metabang-bind
2624 cl-dot
2475 utilities.print-tree
2473 slime
2458 cl-unicode
2456 cl-interpol
2273 md5
2267 cl-store
2232 fare-utils
2204 fare-quasiquote
2108 inferior-shell
2105 fare-mop
1769 cl-utilities
1706 quri
1671 ieee-floats
1625 static-vectors
1605 fast-io
1547 trivial-types
1545 cl-annot
1536 cl-syntax
1437 utilities.binary-dump
1431 trivial-indent
1364 trivial-mimes
1335 asdf-system-connections
1334 array-utils
1329 symbol-munger
1320 cl-containers
1318 metatilities-base
1318 plump
1302 cl-slice
1296 hunchentoot
1280 access
1267 arnesi
1266 collectors
1258 gettext
1236 djula
1226 cl-parser-combinators
1221 cl-locale
1187 postmodern
1164 rfc2388
1159 yason
1121 simple-date-time
1050 command-line-arguments
956 cl-sqlite
951 cl-log
947 osicat
943 salza2
913 py-configparser
903 cl-markdown
903 asdf-finalizers

Didier Verna: Standard IO syntax and the Robustness Principle

· 81 days ago

Here is a flagrant illustration of the robustness principle, or rather, of a failure to honor it.

I was investigating a bug in Declt where some floating point numbers were printed with exponent markers (e.g. 0.5f0 instead of just 0.5) in the Texinfo file, which broke the parsing of the file by Perl.

Eventually, I found out a double infringement of the robustness principle. First of all, Declt failed to comply with part 1 of the robustness principle: be lenient with the others. The Texinfo file generation routine should have been wrapped into a call to WITH-STANDARD-IO-SYNTAX and it wasn't. Always do that to be on the safe side. Lesson learnt.
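A minimal sketch of that kind of wrapping (the function name and output here are invented for illustration; Declt's actual generation routine is more involved):

```lisp
;; Sketch: wrap file generation in WITH-STANDARD-IO-SYNTAX so that
;; globally modified reader/printer variables cannot leak into the
;; output. Hypothetical function, illustrative only.
(defun render-texinfo (library stream)
  (with-standard-io-syntax
    ;; WITH-STANDARD-IO-SYNTAX sets *PRINT-READABLY* to T; rebind it
    ;; to NIL since we want aesthetic output in the generated file.
    (let ((*print-readably* nil))
      (format stream "@c Generated for ~A~%" library))))
```

Inside the body, *READ-DEFAULT-FLOAT-FORMAT* is back to its standard value of SINGLE-FLOAT, regardless of what any loaded library did to it.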

This failure on my part, however, had the interesting consequence of exhibiting what I consider a serious infringement of part 2 of the robustness principle: be strict with yourself. It would have remained unnoticed otherwise. The culprit here is not Declt. This time, it's the common-lisp-stat library. Problem: the simple fact of loading this library globally changes the value of *READ-DEFAULT-FLOAT-FORMAT* from SINGLE-FLOAT (the default) to DOUBLE-FLOAT. This is bad, and it can break your code in all sorts of nasty ways.


*READ-DEFAULT-FLOAT-FORMAT* tells the reader how to read floats when no exponent marker is provided. By default, 0.5 will be read as a SINGLE-FLOAT. But this variable also influences the printer (out of a concern for printing readably I guess): when printing a float of a different format than the current default, then the appropriate exponent marker is also printed. So here is precisely what happened. Declt had been compiled with some floats (e.g. 0.5) read as SINGLE-FLOATs. Later on, those floats were supposed to be printed aesthetically as such. But right after loading common-lisp-stat, the default format changed to DOUBLE-FLOAT and all of a sudden 0.5 started to be printed as 0.5f0.
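The effect is easy to reproduce at the REPL with nothing but standard Common Lisp:

```lisp
;; 0.5 is read as a SINGLE-FLOAT while this file is read.
(defvar *x* 0.5)

;; Under the default setting it prints without an exponent marker:
(prin1-to-string *x*)                ; => "0.5"

;; If the default float format changes globally, the very same value
;; now prints with a marker, because its actual format no longer
;; matches *READ-DEFAULT-FLOAT-FORMAT*:
(let ((*read-default-float-format* 'double-float))
  (prin1-to-string *x*))             ; => "0.5f0"
```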


This is bad enough already, but consider that messing with the standard IO syntax globally like this can break your code in all other sorts of even nastier ways. Imagine for instance that common-lisp-stat had been loaded before Declt, and Declt needed to be recompiled. All of a sudden, Declt would be using double floats and the bug would be gone. That is, until the next start of the REPL, after which all floats would be printed like 0.5d0!

So granted, my code wasn't robust enough. But please, don't mess with standard IO syntax globally!

Zach Beane: UIOP, sly, and qlt

· 83 days ago


The problem I had with UIOP is due to using stock SBCL ASDF (3.1.5) with UIOP 3.3.0. UIOP changed a public function’s behavior in a way that affected older ASDFs (and seemingly only older ASDFs). This is considered a bug and will be fixed in a future UIOP release.


September of Sly has turned into Season of Sly. I haven’t been hacking as much with Sly as I wanted in September, so I’m going to keep going with it for October and beyond, and write up a summary Very Soon. My current hangup is C-c M-q, which is slime-reindent-defun in slime, but does nothing in Sly, and there’s no sly-reindent-defun to try to bind instead. I try to use C-c M-q a thousand times a day.


You might know that Quicklisp dists are constructed after building every project that Quicklisp tracks, and projects that don’t build (due to compile-time problems) aren’t included. This is better than nothing, but it does nothing to catch runtime problems.

Last week Quicklisp got hit with a runtime problem that broke a lot of stuff, so it prompted me to create qlt.

qlt is for collecting small files of Common Lisp code to run before a Quicklisp dist is created. If any of the files signal an error, Quicklisp generates a report with the console output and the dist is no-go until the problem is tracked down.

The project is sparse right now, but it does include a test file that catches the runtime problem from last week. I hope to include many more things to test as time goes on. If there is something you want to check, patches welcome!
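For flavor, a qlt check file might look something like this (this is my guess at the shape of such a file, not an actual file from the qlt repository):

```lisp
;; Hypothetical sketch of a qlt check file. A real check file would
;; first load the system under test, e.g. (ql:quickload "some-system"),
;; and then assert a runtime property; any signalled error blocks the
;; dist and produces a report.
(assert (string= (string-upcase "quicklisp") "QUICKLISP"))
```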

Quicklisp news: October 2017 Quicklisp dist update now available

· 84 days ago
New projects:
  • also-alsa — Basic ALSA bindings for Common Lisp — LGPL
  • bitio — A wrapper for octet streams that enable bit level streams. — MIT License
  • cl-lzma — CFFI wrapper around LZMA (de)compressor foreign library — Public domain
  • cl-rules — Simple DSL for rules that can be configured without code — GPL-3.0
  • flac-metadata — A utility for reading metadata embedded in FLAC audio files. — MIT
  • html-entities — A module for encoding and decoding HTML/XML/SGML entities. — MIT License
  • mtif — An interface to the MacOS MultiTouch framework — MIT
  • parsley — A toolset for parsing binary data formats. — MIT
  • quicksearch — Quicksearch searches CL library, and outputs results at REPL. — MIT License
  • skippy-renderer — GIF renderer for SKIPPY — MIT
  • trivial-macroexpand-all — Call each implementation's macroexpand-all equivalent — Unlicense
  • zacl — A layer for loading and running some Allegro CL projects. — BSD
Updated projects: 3bgl-shader, 3bmd, architecture.service-provider, asd-generator, asdf-viz, aws-sign4, cambl, caveman, cepl, chirp, cl+ssl, cl-ana, cl-ascii-art, cl-autowrap, cl-cache-tables, cl-cognito, cl-conllu, cl-enumeration, cl-fluent-logger, cl-forest, cl-forms, cl-glfw3, cl-hamcrest, cl-html5-parser, cl-i18n, cl-ledger, cl-moss, cl-mysql, cl-ntp-client, cl-oclapi, cl-one-time-passwords, cl-opengl, cl-pcg, cl-project, cl-pslib, cl-quickcheck, cl-rabbit, cl-sdl2, cl-sdl2-ttf, cl-smtp, cl-str, cl-yesql, classimp, clods-export, closer-mop, clsql-helper, clss, clunit, clx, codata-recommended-values, configuration.options, croatoan, declt, deeds, dendrite, deploy, dexador, dirt, doubly-linked-list, easy-routes, eazy-project, esrap, fare-scripts, fast-io, fiasco, flac-parser, fn, fs-utils, fset, fxml, gamebox-dgen, gamebox-ecs, gamebox-frame-manager, gamebox-grids, gamebox-math, gamebox-sprite-packer, genie, glop, hu.dwim.def, hu.dwim.partial-eval, hu.dwim.presentation, hu.dwim.quasi-quote, hu.dwim.reiterate, hu.dwim.util, hu.dwim.web-server, inquisitor, ironclad, json-streams, jsonrpc, kenzo, legit, let-plus, lichat-serverlib, lisp-namespace, lquery, maiden, mcclim, media-types, mito, mito-auth, modularize-hooks, new-op, nineveh, oclcl, opticl, opticl-core, org-davep-dict, osicat, overlord, parse-float, parser.ini, pathname-utils, plump, pngload, postmodern, psychiq, pzmq, qlot, qmynd, qt-libs, qtools, readable, restas, retrospectiff, roan, rtg-math, scalpl, sdl2kit, secret-values, serapeum, shorty, simple-inferiors, simple-logger, simple-rgb, snooze, spinneret, staple, static-vectors, stl, stumpwm, temporal-functions, the-cost-of-nothing, trivial-file-size, trivial-irc, type-r, unix-opts, varjo, websocket-driver.

Removed projects: cl-proj, magicffi, poiu.

Neither cl-proj nor magicffi build for me any more due to foreign library changes. POIU was removed by request of the author.

To get this update, use: (ql:update-dist "quicklisp")


McCLIM: Progress report #10

· 86 days ago

Dear Community,

We have many important improvements since the last iteration and even more work is pending. I want to apologise for this late progress report - it has been almost three months since the last update. I'll try to improve in this regard.

Some highlights for this iteration:

  • various utilities have been replaced with alexandria equivalents
  • distinct frames don't shadow *accelerator-gestures* of their children
  • accepting-values refinements - better handling of errors and return values
  • refactor and small fixes of the recording implementation code
  • refinements in class hierarchy for streams, medium and graphics state
  • the slider pane from specification section 30.4.5 has been implemented
  • scrolling implementation refinements
  • tab-layout extension refactor
  • improvements related to drei text editing substrate
  • user manual refinements and improvements of its building scripts
  • improvements related to the PDF backend
  • MOP code has been ported to use closer-mop portability layer
  • numerous editorial fixes in bundled specification sources
  • improvements to format-graph-from-roots
  • better Unicode support in CLX for frame title
  • general code base cleanup to decrease number of warnings during compilation
  • transparency handling in CLX backend and alpha channel support in images
  • small Listener improvements (bug fixes and cleanups)

We want to thank everybody who has contributed to the project (by improving the code base, discussions, issue reporting, providing advice and suggestions, monetary contributions etc). We are especially grateful to the following people: Nisar Ahmad, Alastair Bridgewater, John Carroll, Cyrus Harmon, Philipp Marek, Elias Mårtenson, Piotr Mieszkowski, Jan Moringen, Nick Patrick, Alessandro Serra and last but not least Robert Strandh.


All McCLIM bounties (both active and already solved) may be found here. Default bounty expiration date is 6 months after publishing it (a bounty may be reissued after that time period).

Bounties solved this iteration:

  • [$300] Replace MOP things with closer-mop portability layer
  • [$100] Keystroke accelerators may shadow keys even if inactive

Active bounties ($1800):

  • [$100] drag-test demo: dragging square to the empty position invokes the debugger (new)
  • [$100] Text field pane height is too small, clipping the bottom off characters (new)
  • [$300] Listener: repl blocks when long process runs (new)
  • [$500] Windows Backend
  • [$400] Fix Beagle backend
  • [$150] When flowing text in a FORMATTING-TABLE, the pane size is used instead of the column size
  • [$150] clx: input: english layout
  • [$100] Add PDF file generation (PDF backend)

Our current financial status is $1089 for bounties and $264 recurring monthly contributions from the supporters (thank you!).

I have been asked who decides which issues have bounties on them and how the reward amounts are determined. If anyone has been wondering the same, here is the answer: issues and amounts are based on my subjective opinion, informed by the problems users encounter and by what I consider worth putting a bounty on. Note, though, that I'm open to suggestions (see the next paragraph). I hope that despite some potential doubts the community is generally satisfied with the progress and decisions we make. If there is some lack of transparency, please let me know what you want to know and I'll do my best to help.

Suggestions as to which other issues should have a bounty on them are appreciated and welcome. Please note that Bountysource has a functionality "Suggest an Issue" which may be found on the bounties page. If you would like to work on an issue that is not covered by the existing bounties, feel free to suggest a new bounty.

If you have any questions, doubts or suggestions - please contact me either by email ( or on IRC (my nick is jackdaniel). McCLIM developers and users hang out on #clim IRC channel on Freenode.

Sincerely yours,
Daniel Kochmański

Zach Beane: UIOP 3.3.0 problems

· 90 days ago

Ok, here’s something that is causing problems when I build this month’s Quicklisp dist.

UIOP 3.3.0 was recently released, and it’s causing some stuff to apparently compile over and over again. Here’s a real simple thing to try:

$ cd ~/quicklisp/local-projects/
$ curl -O
$ tar xzvf uiop.tar.gz


CL-USER> (ql:quickload "circular-streams")

On my setup, I see cffi and babel stuff compiled twice:

To load "circular-streams":
  Load 1 ASDF system:
; Loading "circular-streams"
[package uiop/package]............................
[package babel-encodings].........................
[package babel]...................................
[package cffi-sys]................................
[package cffi]....................................
[package cffi-features]...........................
[package impl-specific-gray]......................
[package trivial-gray-streams]....................
[package uiop/package]............................
[package babel-encodings].........................
[package babel]...................................
[package cffi-sys]................................
[package cffi]....................................
[package cffi-features]...........................
[package static-vectors]..........................
[package fast-io].................................
[package circular-streams]..

If I remove uiop-3.3.0/ from local-projects, the output looks like this:

To load "circular-streams":
  Load 1 ASDF system:
; Loading "circular-streams"
[package impl-specific-gray]......................
[package trivial-gray-streams]....................
[package uiop/package]............................
[package babel-encodings].........................
[package babel]...................................
[package cffi-sys]................................
[package cffi]....................................
[package cffi-features]...........................
[package static-vectors]..........................
[package fast-io].................................
[package circular-streams]..

Any ideas?

update Commit 4ed76c32050753c8a4450c342a1592881e11d63d seems to reference this behavior, with the “fast-io” system given as an example. And indeed, when I try this with fast-io, I see similar recompilation.

Didier Verna: Declt 2.3 "Robert April" is out

· 92 days ago

I'm happy to announce the release of Declt 2.3. Declt is my reference manual generator for Common Lisp libraries.

The improvements and bug fixes in the last two releases are the result of running Declt against the whole Quicklisp world (around 3000 ASDF systems for 1500 libraries). See this post for more information.

New in this release:

  • Advertise file extensions in references.
  • Advertise the type of foreign definitions.
  • More robust display and indexing of, and with, lambda-lists.
  • Use UTF8 special characters to denote invisible ones.
  • More robust support for Texinfo brace escaping.
  • Handle modules sharing the same location.
  • Ensure output is done with standard IO syntax.
  • Fix potential duplication of some (non-lisp) files and document all static files.
  • Fix potential duplication of packages documentation.

From the 2.2 "Christopher Pike" release (not previously advertised):

  • Require a UTF-8 environment.
  • Understand ASDF's notion of inferred system, and also be more protective against ASDF extensions.
  • Support for improper lambda lists (e.g. destructuring ones).
  • Improve contact defaulting code.
  • Update support for SBCL's setf expanders introspection.
  • Accept ASDF system designators.
  • Various bug fixes in the areas of method combinations, accessor definition merging and setf expanders.

Find it at the usual place...

Nicolas Hafner: Project Listing - Confession 76

· 101 days ago

This is a listing of projects that I've started, some of which I've completed. The intent is to spread awareness about the work I've done, as I speculate that a lot of people don't know about most of it, even though it might prove useful to their own projects. So, hopefully this article will help a bit in that regard.

I won't go into much detail in the descriptions, as that would take too much of both your and my own time. You can however click on the title of each project to get to its "homepage" if you want to find out more. Naturally, you're also free to contact me if you're interested.

Major Projects

Major projects are ones that have no completion in sight. There are always improvements and additions that could be made to improve the project. Generally they serve as a launch pad for other, minor projects that are completable.


Lichat is an attempt at a simple, light-weight chat protocol. There are currently full implementations of the protocol available that allow you to host a TCP or WebSockets server, and to write clients in JavaScript and CL. A Java library is planned, with the intent of writing an Android client.


Maiden is an event coordination framework and bot construction toolkit. It includes a plethora of pre-made modules to provide common chat bot functionality, as well as a couple of different backends for various chat protocols.


Parasol is a native painting application for graphics tablet users. It has been dormant for some years now as I've still not figured out a good way to architect everything.


Portacle is the portable development environment for Common Lisp. It gives you an easy-to-deploy, contained development studio that you can use to program. It is especially suited and geared towards beginners who have not used Lisp before, but it's also convenient for quick setups on unconfigured machines.


Radiance is a web application environment, allowing you to easily deploy and run different web applications in the same lisp instance, increasing sharing of common resources. This article is actually hosted on an application running in Radiance. Imagine that!


Trial is an OpenGL game engine with a heavy focus on modularity. It is supposed to provide a large toolkit of useful bits and pieces from which you can create a game. I use this engine together with some of my co-conspirators to write games for the Ludum Dare game jam. Hopefully the engine will at some point also give birth to bigger games.

Minor Projects

These projects all either spawned out of the requirements of the major projects, or happened simply for fun. Most of them are already complete and thus ready for use.


A library implementing common matrix calculations, with an emphasis on 2x2, 3x3, and 4x4 matrices as commonly used in graphics. It provides some numerical functions as well, but those are not the focus. The library is heavily optimised, so it is not made of pretty code.


This is the counter-piece to 3d-matrices, providing vector operations optimised for 2, 3, and 4-component vectors. Also just like 3d-matrices, the library is heavily optimised and thus not pretty on the eyes.


CSS-Like Simple Selectors implements a DOM search engine using the CSS selectors as the query format. It is reasonably optimised, but only usable with the DOM provided by the Plump system.


Lisp Augmented Style Sheets is a compiler for a custom CSS syntax. It allows you to write CSS files in a much more convenient and homely-feeling syntax. I've been using this to write pretty much all of my CSS for the past couple of years.


A small library to provide vector manipulation functions that are sorely missing from the standard. It allows you to push to any place in the array while maintaining the proper shifting logic.


A Continuous Integration system with a focus on running directly on your machine, rather than in a container or otherwise segregated environment. This is currently being rewritten from scratch.


A Radiance application for a web interface to a chatlog database. The database is recorded through the Colleen or Maiden chatlog modules.


A chat application based on the Twitter direct messages system. It allows you to chat directly with your Twitter friends as if it were a regular chat room. Easy to set up, and runs on all major desktop platforms.


A client library implementing the full Twitter REST API. If you want to interact with Twitter, this is your best bet.


A bindings library to libfond, allowing you to use its functionality easily from Lisp. Libfond allows the rendering of TrueType fonts onto OpenGL textures.


A bindings library to libstem_gamepad, providing easy gamepad and joystick event processing from Lisp. This is useful if you don't want to use some kind of framework that brings all sorts of other baggage with it, not just gamepad processing.


A wrapper library for the Linux General Purpose IO device present on embedded systems such as the Raspberry Pi. It allows you to conveniently access and control the IO pins on the board.


A bindings library for the k8055 analog input board. Allows you to read its various values and set outputs.


A bindings library to libmixed, allowing you to use its functionality from Lisp. Libmixed allows you to perform digital audio mixing and processing. Thus, with this, you can do efficient DSP from Lisp.


A bindings library to libmonitors, providing convenient access to information about the currently attached monitors, and giving you the ability to control the resolution thereof.


A bindings library to libmpg123, giving you fast and easy-to-use MP3 decoding. This covers the complete API exposed by libmpg123, and is thus easily your best bet for MP3 processing.


A bindings library to libout123, giving you cross-platform audio output. The API is very simple to use, and can thus give you a quick start if you need to play some audio.


A bindings library to SoLoud, an open source C++ sound engine for use in video games. I've completed this, but dropped it, as it was too hostile to extension from CL. I've since developed Harmony (see below).


A wrapper library for the Linux Serial Port Interface device. With this you can do serial port input/output, which is present on some embedded devices like the Raspberry Pi.


Clip is an alternative approach to templating, expressing the template within valid HTML. This allows a different development approach, wherein you can hammer out a mock-up for a website in an HTML document, and then simply add templating logic through further tags and attributes, maintaining browser-viewability.


This is the predecessor to Maiden, with a more narrow focus and feature base. Since it has been superseded, and the code is quite crusty, I heavily urge you to look at the Maiden project instead.


A tiny library to provide commonly used cryptography functions in a more accessible format, as some of the tools provided by Ironclad & co. can be a bit cumbersome to use.


Deeds is an Extensible Event Delivery System. It offers flexible and performant creation of event systems. Deeds is used heavily in Maiden.


This was an attempt at making optional dependency wrangling more convenient. It gives you a few tools that attempt to make it possible to write code that is only considered once another system becomes available.


With Qtools I developed a very convenient mechanism to generate deployments of my systems. This is the evolution of that, allowing you to use it independently of Qt. It takes care of your foreign libraries and the general shutdown and boot sequence, making the whole binary deployment process much smoother.


Sadly a lot of projects use the "trivial-backtrace" system that just gives them a string with a backtrace. Dissect allows you to capture, step, and completely inspect the stack trace on a variety of Lisp implementations. The introspection abilities allow you to write a good chunk of a portable debugger. It's also very useful for logging and other situations where execution is automatically continued, but the information of the current stack is still useful to store somewhere.


I like to keep my code nice and clean, and as such docstrings are quite cumbersome clutter. This library allows you to easily and conveniently put all the docstrings in a file outside of the rest of your code.


This is a Radiance application that provides you with a very simple file storage. Coupled with the filebox-client, you get a Dropbox-like system.


Flare is a particle simulation framework. Unlike most particle systems, it does not focus on the emission of small dots, but rather on the precise coordination of a hierarchy of entities. It allows you to describe sequences of events, and then lets you play those sequences back, performing the actual transformations. You can even rewind time and interactively work on your sequences.


This is a flowchart-like graph library. It gives you access to nodes which, unlike in mathematical graphs, have dedicated ports from which connections are made. These ports can have semantic meaning and syntactic properties. Thus, this gives you a toolkit to make flowchart-like graphs and compute with them.


Since I couldn't come to terms with Iterate, I decided to write my own extensible iteration construct. Unlike Loop or Iterate, For has a particular syntax to it that makes extensions feel much more integrated.


This small library allows you to wrangle lambda forms and destructure them into their individual components (docstring, declarations, arguments, etc).


A toolkit to allow you to process and manipulate OpenGL Shading Language code. It includes a full GLSL4 parser, printer, and code-walker. Using this you can even do stuff like merge separate shaders together automatically, preserving input/output semantics.


A sample application for the Qtools system, providing you with a minimal, but pretty image viewer. Works on all major desktop platforms.


Harmony is a fully-fledged audio system, allowing you to control playback of multiple sources, and even to position them in 3D space. It also allows you to build your own sound processing pipelines, to add in effects and other sound processing capabilities.


This is a client library for the Tumblr REST API. It has full coverage of the documented features and properly wrangles all the oddities and inconsistencies of the API for you.


This is still in the works, but is intended to become a website (using Radiance) that provides useful information about Kanji, as well as an optimised sequence by which to learn them. Hopefully this will help me and other people to learn Japanese.


Another Radiance application that provides a very minimalist site for product reviews. The twist of this site is that your review should be very short, if possible reduced to keywords only. The idea is that this should make for interesting descriptions and interpretations.


The counterpart to form-fiddle, this allows you to wrangle and destructure lambda-lists (argument lists).


An interface to the Git binary. Using this library you can run all the available Git commands using a more convenient and streamlined function interface. An object-oriented high-level interface is also available, but doesn't cover the full API.


A small C library to allow you to render TrueType fonts to OpenGL textures. Text rendering is something that's often left out of minimal game engines, and so libfond can provide you with that aspect.


A small C library to allow you to mix and process digital audio. It is reasonably optimised and comes with a set of processing and mixer components out of the box. One of the components also allows you to integrate LADSPA plugins, so you can use those directly as well.


A small C library to handle the management and information retrieval of connected monitors. Currently Linux, Windows, and OS X are supported.


A native GUI for the Lichat system. While this works well enough as it currently stands, I'd like to rewrite it at some point to use Maiden, and thus allow connecting to other chat systems as well.


A library modelled after jQuery to allow you to conveniently and succinctly wrangle HTML and XML documents. This is particularly useful for web scraping tasks.


This is a system that gives you an extension to the package system, by allowing you to add other metadata to it. This should facilitate the construction of "modules," individual components of a larger system. The metadata can be used to give meaning to the modules and model their relations to the whole system.


This augments the modularize system by giving you hooks and triggers. Thus, modules can provide opaque entry points for other modules to provide additional functionality.


This augments the modularize system by giving you "interfaces"- contract-like descriptions of the functionality provided through a package. While the interface description is abstract and only includes the signatures of functions, another module can then opt to implement the actual functionality behind the interface.


The successor to the South (Simple OaUTH) library, implementing the full OAuth 1.0a protocol, both client and server sides. Using North you can easily become an OAuth provider or consumer.


Parachute is a testing framework with an emphasis on being extensible. As proof of this, it includes "compatibility layers" for a couple of other popular testing frameworks. Using such a layer you can immediately convert to using Parachute by just changing a package :use and system :depends-on.


A small library to help with common pathname wrangling tasks. If you need to work with pathnames a lot, you'll probably find one or two things in here that will prove useful to you. Note that the library is restricted to pathnames, so it won't provide anything that actually touches the file system.


A library implementing the public interface for the PiPlates DAQ plates that you can use in combination with the Raspberry Pi. The library is currently untested, but "should work" as it is a fairly straightforward translation of the official Python code. I haven't yet found the will to actually test it myself.


Piping allows you to write "pipelines." Pipelines are systems of pipe segments that pass along and transform or compute based on the object currently being passed through the pipeline.


Plaster is another Radiance application. It gives you a usable paste service. The Radiance tutorial even shows you how to write the application from scratch.


An implementation of a binary storage format for the Plump DOM. It allows you to save a DOM into a more efficiently parseable representation on disk.


A parser and printer for an s-expression based syntax of an HTML DOM, using the Plump DOM as a back-end.


A parser and printer for a TeX based syntax, using the Plump DOM as a back-end. With this you can parse TeX sources into a DOM and render them as HTML.


A Practically Lenient and Unimpressive Markup Parser for XML and HTML documents. It provides a fast and lenient parser, allowing you to chuck all sorts of malformed data at it. Since it integrates with a bunch of my other systems, it's a pretty good choice for HTML.


This is a small library to allow you to post content to multiple services at once. I use this to post my art online, as there's a couple of different places I'd otherwise have to upload to manually every time.


Purplish is yet another Radiance application. It provides you with a slick and simple image board software. If you ever want to run a chan, this could be a good choice.


This system provides you with the Qt4 library binaries. Usually all you have to do is load this system, and you'll be set up and ready to go writing Qt applications. It also includes tools to automatically build the libraries from scratch.


Qtools allows you to write Qt GUIs in a syntax and manner much more similar to how you'd write any other Lisp code. It provides all sorts of conveniences and abstractions to make life a whole lot easier.


This is a collection of UI components and systems that Qt does not provide on its own. Being completely written in Lisp, it is also ripe for extension and adaptation in your own projects. If you have a reusable component that you wrote, it would be a great idea to integrate it here, so that others can benefit as well.


Random-state gives you access to a bunch of different random number generation algorithms, and allows you to portably seed them. This is primarily useful where number generation needs to be controllable.


This system allows you to verify and parse a variety of string-based formats. It is primarily geared towards validating input from web forms, or other unauthorised sources.


A Radiance application providing you with a simple blogging platform with tags and Atom feeds. This article was published on Reader!


A tiny library implementing a gray stream that redirects the output written to it to another stream. This is useful when you want to switch out the stream of a particular system on the fly.


A small library to do simple task issuing and processing. You create tasks that execute some code, and then send them off to be processed on a dedicated background thread.


This gives you a single function, which returns the lambda-list of a function, if the list is known. Useful for introspective and descriptive tasks.


A small library to do simple benchmarking work. This library uses CLOS to be easy to extend, which incurs a bit of overhead for the benchmarks themselves. Thus, it is sadly not suitable for micro-benchmarking.


If you make a macro with a bit of a more advanced syntax, it's likely Slime will not pick up the proper indentation for it. With this, you can help it out by declaring the proper indentation form manually.


Sometimes it's necessary to ensure that code is run in the main thread, especially when you want to do graphics on OS X. This library helps you with that.


The detection and handling of mime-types for files is sometimes necessary to validate the correctness of a specified content type. This library implements both a binary lookup, and a file-name lookup.


This tiny library uses the ImageMagick binaries to create thumbnails of images for you.


Ubiquitous provides a persistent configuration storage. It gives you convenient traversal through the configuration and offers easy file-based serialisation for a good range of Lisp object types. If you need your application to be configurable through external files, or just need a simple storage, take a look!

Other Stuff

That's about it. I have a bunch of other projects that I haven't mentioned here, either because they're old, abandoned, not even close to finishing, or simply uninteresting.

Since I'm constantly doing things, this list is bound to become outdated before long. So, please be mindful of the date. When in doubt, just look at the project's own page, or contact me directly. I love to chat, so even if you don't care about any of this, I definitely wouldn't mind if you stopped by at #shirakumo on Freenode some time.

Quicklisp news: Something to try out: Quicklisp with OpenPGP and SHA verification

· 110 days ago
I've got a test version of Quicklisp available. It uses pure Common Lisp code to verify file SHA digests and OpenPGP signatures, from bootstrap to library loading.

To try it out, fetch the following file:

Load it into a Lisp implementation with (load "quicklisp.lisp") and follow the prompts. It's best to start with a Lisp that doesn't have Quicklisp already loaded automatically from the init file.

The PGP public key for Quicklisp releases is embedded directly in quicklisp.lisp, but you can also fetch it from another source and use :public-key-file "/path/to/separate/key" as an argument to quicklisp-quickstart:install to use a specific key file.

If you do try it, move your existing, working Quicklisp install out of the way first, or use the :path option to install to a test location. Otherwise, you could clobber a working Quicklisp setup.

This verification code slows things down a bit because it does a lot of arithmetic. The slowdown is most dramatic in implementations like ABCL and CLISP.

If everything works as it should, you won't notice anything very different from the normal Quicklisp install, except some slowdown during verification and some output indicating what checks were attempted and passed.

If you run into problems where something doesn't work as you expect, please let me know at


Paul Khuong: Rendezvous Hashing: My Baseline "Consistent" Distribution Method

· 113 days ago

2017-10-15: Tweaked the hash merge function to actually deliver the claims (one-universality isn’t enough).

Whenever I mention a data or work distribution problem where I ideally want everything related to a given key to hit the same machine, everyone jumps to consistent hashing. I don’t know how this technique achieved the mindshare it has, although I suspect Amazon’s 2007 Dynamo paper is to blame (by introducing the problem to many of us, and mentioning exactly one decentralised solution)... or maybe some Google interview prep package.

Karger et al’s paper doesn’t help, since they introduce the generic concept of a consistent hash function and call their specific solution... “consistent hashing.” I’m not sure where I first encountered rendezvous hashing, but I vaguely remember a technical report by Karger, so it’s probably not some MIT vs UMich thing.

Regardless of the reason for consistent hashing’s popularity, I feel the go-to technique should instead be rendezvous hashing. Its basic form is simple enough to remember without really trying (one of those desert island algorithms), it is more memory efficient than consistent hashing in practice, and its downside–a simple implementation assigns a location in time linear in the number of hosts–is not a problem for small deployments, or even medium (a couple racks) scale ones if you actually think about failure domains.

Side question: why did rendez-vous have to lose its hyphen to cross the Channel?

Basic rendezvous hashing takes a distribution key (e.g., a filename), and a set of destinations (e.g., hostnames). It then uses a hash function to pseudorandomly map each (distribution_key, destination) pair to a value in [0, 1) or [0, 2^64), and picks the destination that gives the minimal hash value. If it needs k destinations for redundancy, it can pick the destinations that yield the least k hash values. If there are ties (unlikely with a good hash function), it breaks them arbitrarily but consistently, e.g., by imposing a total order on hostnames.

A Python implementation could look like the following.

basic rendezvous hashing
from collections import namedtuple

Destination = namedtuple('Destination', ['host', 'hash'])

def merge_hashes(x, y):
    """murmurhash3 mix.  Alternatively, ((x | 1) * (y | 1)) % 2**64 should
    be fine.
    """
    acc = x ^ y
    acc ^= acc >> 33
    acc = (acc * 0xff51afd7ed558ccd) % 2**64
    acc ^= acc >> 33
    acc = (acc * 0xc4ceb9fe1a85ec53) % 2**64
    acc ^= acc >> 33
    return acc

def pick_destinations(key, destinations, k=1):
    # hash_key is any function mapping a key to a 64-bit integer.
    key_hash = hash_key(key)  # hash the key once, instead of hash(key + host)
    annotated = [(merge_hashes(key_hash, dest.hash), dest.host)
                 for dest in destinations]
    ordered = sorted(annotated)  # lexicographic sort on (merged hash, host)
    return [host for _, host in ordered[:k]]  # grab host from the first k

We only need to store the list of destinations, and we can convince ourselves that data distribution is pretty good (close to uniform) and that small changes in the set of destinations only affect a small fraction of keys (those going to destinations added/removed), either with pen and paper or with a few simulations. That compares favourably with consistent hashing, where a practical implementation has to create a lot (sometimes hundreds) of pseudo-nodes for each real destination in order to mitigate clumping in the hash ring.
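One such simulation is easy to run. The sketch below is standalone: the h64 helper is a hypothetical stand-in for hash_key plus merge_hashes (any good 64-bit hash of the (key, host) pair works). It checks both claims: load is roughly uniform, and dropping a destination reassigns only the keys that destination owned.

```python
import hashlib
from collections import Counter

def h64(key, host):
    # Hypothetical stand-in for hash_key + merge_hashes: a 64-bit
    # hash of the (key, host) pair.
    data = ('%s\x00%s' % (key, host)).encode()
    return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), 'big')

def pick(key, hosts):
    # Basic rendezvous: the host with the minimal merged hash wins.
    return min(hosts, key=lambda host: h64(key, host))

hosts = ['host-%d' % i for i in range(8)]
keys = ['key-%d' % i for i in range(20000)]

before = {key: pick(key, hosts) for key in keys}
load = Counter(before.values())
assert max(load.values()) < 1.2 * len(keys) / len(hosts)  # roughly uniform

# Remove one host: only the keys it owned get reassigned.
after = {key: pick(key, hosts[1:]) for key in keys}
moved = [key for key in keys if before[key] != after[key]]
assert all(before[key] == hosts[0] for key in moved)
```

The last assertion is the "consistent" property: the minimum over the surviving hosts is unchanged unless the removed host held it, so destinations that stay in the set keep every key they had.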

The downside is that we must iterate over all the nodes, while consistent hashing is easily \(\mathcal{O}(\log n)\) time, or even \(\mathcal{O}(\log \log n)\), with respect to the number of (pseudo-)nodes. However, that’s only a problem if you have a lot of nodes, and rendezvous hashing, unlike consistent hashing, does not inflate the number of nodes.

Another thing I like about rendezvous hashing is that it naturally handles weights. With consistent hashing, if I want a node to receive ten times as much load as another, I create ten times more pseudo-nodes. As the greatest common divisor of weights shrinks, the number of pseudo-nodes per node grows, which makes distribution a bit slower, and, more importantly, increases memory usage (linear in the number of pseudo-nodes). Worse, if you hit the fundamental theorem of arithmetic (as a coworker once snarked out in a commit message), you may have to rescale everything, potentially causing massive data movement.

Rendezvous hashing generates pseudorandom scores by hashing, and ranks them to find the right node(s). Intuitively, we want to use weights so that the distribution of pseudorandom scores generated for a node A with twice the weight of another node B has the same shape as that of node B, but is linearly stretched so that the average score for A is twice that for B. We also want the distribution to cover [0, ∞), otherwise a proportion of hashes will always go to the heavier node, regardless of what the lighter node hashes to, and that seems wrong.

The trick, as explained by Jason Resch at Cleversafe, is to map our hashes from uniform in [0, 1) to [0, ∞) not as an exponential, but with -weight / log(h). If you simulate just using an exponential, you can quickly observe that it doesn’t reweigh things correctly: while the mean is correctly scaled, the mass of the probability density function isn’t shifted quite right. Resch’s proof of correctness for this tweaked exponential fits on a single page.

The Python code becomes something like:

weighted rendezvous hashing
import math

HASH_MAX = 2**64  # exclusive upper bound of merge_hashes' output

Destination = namedtuple('Destination', ['host', 'hash', 'weight'])

def score(hash_value, weight):
    return -weight / math.log(hash_value / HASH_MAX)

def pick_destinations(key, destinations, k=1):
    key_hash = hash_key(key)
    annotated = [(score(merge_hashes(key_hash, dest.hash), dest.weight),
                  dest.host)
                 for dest in destinations]
    ordered = sorted(annotated, reverse=True)  # the k highest scores win
    return [host for _, host in ordered[:k]]

There are obvious micro-optimisations here (for example, computing the inverse of the score lets us precompute the reciprocal of each destination’s weight), but that’s all details. The salient part to me is that space and time are still linear in the number of nodes, regardless of the weights; consistent hashing instead needs space pseudolinear(!) in the weights, and is thus a bit slower than its \(\mathcal{O}(\log n)\) runtime would have us believe.
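To sanity-check the weighting, here is a standalone simulation (again with a hypothetical h64 helper in place of the real hash machinery): since the score -w / log(h) grows with the weight and the highest score wins, a destination holding half of the total weight should receive about half of the keys.

```python
import hashlib
import math
from collections import Counter

HASH_MAX = 2**64

def h64(key, host):
    # Hypothetical 64-bit hash of the (key, host) pair.
    data = ('%s\x00%s' % (key, host)).encode()
    return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), 'big')

def weighted_pick(key, hosts):
    # hosts is a list of (name, weight); the highest -w / log(h) score wins.
    def score(name, weight):
        h = (h64(key, name) + 0.5) / HASH_MAX  # map into (0, 1), avoiding log(0)
        return -weight / math.log(h)
    return max(hosts, key=lambda hw: score(*hw))[0]

hosts = [('heavy', 2.0), ('light-a', 1.0), ('light-b', 1.0)]
load = Counter(weighted_pick('key-%d' % i, hosts) for i in range(40000))
share = load['heavy'] / 40000
assert 0.45 < share < 0.55  # heavy holds half the total weight
```

A pen-and-paper check agrees: with -log(h) exponentially distributed, the weight-2 node beats the minimum of two independent weight-1 competitors exactly half the time.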

The linear-time computation for weighted rendezvous hashing is also CPU friendly. The memory accesses are all linear and easily prefetchable (load all metadata from an array of nodes), and the computational kernel is standard vectorisable floating point arithmetic.

In practice, I’m also not sure I ever really want to distribute between hundreds of machines: what kind of failure/resource allocation domain encompasses that many equivalent nodes? For example, when distributing data, I would likely want a hierarchical consistent distribution scheme, like Ceph’s CRUSH: something that first assigns data to sections of a datacenter, then to racks, and only then to individual machines. I should never blindly distribute data across hundreds of machines; I need to distribute between a handful of sections of the network, then one of a dozen racks, and finally to one of twenty machines. The difference between linear and logarithmic time at each level of this “failure trie” is marginal and is easily compensated by a bit of programming.

The simplicity of basic rendezvous hashing, combined with its minimal space usage and the existence of a weighted extension, makes me believe it’s a better initial/default implementation of consistent hash functions than consistent hashing. Moreover, consistent hashing’s main advantage, sublinear-time distribution, isn’t necessarily compelling when you think about the whole datacenter (or even many datacenters) as a resilient system of failure-prone domains. Maybe rendezvous hashing deserves a rebranding campaign (:

François-René Rideau: A tale of many nests

· 114 days ago

This short essay will tell you about my favorite macro, nest, discuss the modularity of syntax extension, and use the implementation of that macro as an illustration for how to use defmacro, syntax-rules and syntax-case, providing along the way a comparison between these respective macro definition systems.

Using the nest macro

When I started using Scheme as my main Lisp, the first macro I wrote was the nest macro. What macro? The nest macro. The one that in Common Lisp helps my code avoid drifting hopelessly to the right as I nest binding form inside binding form... by doing the nesting for me. To illustrate the kind of issues that I'm concerned with, consider the Common Lisp code snippet below:

(multiple-value-bind (a1 b1 p1) (foo1)
  (with-open-file (f1 p1 ...)
    (let ((x1 (read f1)))
      (when x1
        (multiple-value-bind (a2 b2 p2) (foo2)
          (with-open-file (f2 p2 ...)
            (let ((x2 (read f2)))
              (when x2
                (bar x1 x2 ...))))))))

You're doing the same thing twice, but because of all those binding forms, the code moves right, the symmetry is broken, and the line limit makes you cut your lines more and more as you nest more forms, until you run out of space. It's really ugly. So much so that it makes me miss block-oriented languages in the ALGOL tradition (like C, OCaml or Python) where all the bindings (at least simple ones) go at the same level of indentation, and symmetry is preserved between consecutive bindings, and the line limit isn't an increasing threat as my functions get more complex.

Of course, it is always better when you can break down your functions into simpler chunks, at which point it doesn't matter that you move a little bit to the right, because no function is long enough for this right shift to matter much. However, when you are really computing some correspondence between two (or more) sets of entities, there's no way around doing a lot of nested bindings before you have all the input elements aligned together in a way that you can even start your computation. That's what happened for instance with my famous macros implementing the isomorphisms between pure and stateful data structures and between interface-passing-style (typeclasses) and object-oriented style (classes): the more complex macros, to "classify" interfaces, had up to 18 levels of nesting. That's a lot of indentation, which reflects as much context to fit in your limited brain registers at once (and indeed, it took me over a month to complete the first in that series of macros). Happily there is a lot of symmetry, which will be more readily apparent if only your code doesn't have to get indented so much.

The traditional solution to this problem, in Common Lisp, was to invent a "mother of all" binding macro that could replace all the other ones at once. Attempts at providing such a universal binding form include metabang-bind, let+, at least one internal attempt at ITA that I saw, and probably many more attempts that I don't know about, not to mention pattern matchers like my old fare-matcher and its better replacements optima and trivia. Now, the problem with this approach is that whoever writes this universal binding form must offer a way to supersede each and every binding form in the language. But since the language is extensible and people keep defining new binding forms (especially using CALL-WITH-* style), the task is an endless, Sisyphean endeavor. What's more, it is also a modularity nightmare, as there is no clear responsible party for each of the N*M extensions of N universal binding forms to each match each of M new kinds of bindings. I believe the best universal binding system in town these days is trivia, the extensible pattern matcher; but even it has only limited mind share. (Of course, I'm partial to pattern matchers: when, decades ago, I switched from OCaml to Common Lisp, I missed pattern-matching so much that the first thing I did was to write the first ML-style pattern matcher for Common Lisp; which could of course be done within the language using macros.)

As I was discussing this topic a few years ago, and potential extensions to macro-expanders to support capture of syntactic "block" continuations so they would work well with Python-like syntax, my then colleague Marco Baringer, of arnesi fame, told me about this beautiful, simple solution that he knew of: the nest macro. This one remarkable macro already supports all binding forms past, present and future, without anyone having to write any code whatsoever to extend it — because it trivially embraces the syntax of them all with one weird trick: recognizing that all binding macros end with a body of code inside which the variables are bound, and nesting each form passed to the macro inside the body of the previous form, at the end. Thus, for instance the above snippet is flattened this way:

(nest
 (multiple-value-bind (a1 b1 p1) (foo1))
 (with-open-file (f1 p1 ...))
 (let ((x1 (read f1))))
 (when x1)
 (multiple-value-bind (a2 b2 p2) (foo2))
 (with-open-file (f2 p2 ...))
 (let ((x2 (read f2))))
 (when x2)
 (bar x1 x2 ...))

Notice how all these binding forms are now neatly indented at the same level! Remark that each of the forms is closed before its body ends (sometimes before its body begins), said body being present or completed in the next form. And see how it also works for forms that are not binding forms, but still provide context for the body inside, such as the when forms. And note that if you didn't want your when form to wrap around all the rest, but simply to contribute a side-effect before the rest is evaluated, then you might want to wrap your when form inside a progn form (in Common Lisp) or begin form (in Scheme), that would provide this sequential behavior. Thus the nest macro works not only with binding forms, but with all kinds of expressions. Better yet, the nest macro even works with forms that are not expressions stricto sensu (though the overall form has to be an expression for the macro to be expanded)! For instance, it will work with case or match clauses:

(nest
 (for-each! list1) (lambda (elem1))
 (match elem1) ((list x1 y1 z1 ...))
 (for-each! list2) (lambda (elem2))
 (match elem2) ((list x2 y2 z2 ...))
 (begin (assert (equal x1 x2)) ...)
 (case (foo y1 y2) ((A B C ...) easy-caseA ...) ((D E F ...) easy-caseD ...))
 ((G H I ...) (hard-caseG main-body ...)) ...)

Notice how (list x1 y1 z1 ...) and (G H I ...) are not expressions: one is a matching pattern and the other a list of case clauses. Each plays a syntactic role different from that of a normal expression, and each follows its own distinct grammar. That is expressive power the nest macro has in Lisp that other, more sophisticated macros do not have, in Lisp or in any other language. Notice also how this style lets us detach the lambda expressions from the rest of their body (which spans the remaining arguments to the nest macro), and place the head of each lambda right next to the expression that calls it and binds its variables: it thus becomes very clear that elem1 will be bound to each of the elements of list1, or that x2 y2 z2 will be bound to the elements of a list matching the contents of elem2. Meanwhile, the body below each of these forms doesn't have to care what kind of form it is the body of. This is all a beautiful division of labor, with power, expressiveness, brevity, relevance, symmetry, etc. All that for the price of understanding one simple macro.

I like this macro so much that I made it available as UIOP:NEST, as part of UIOP, the Common Lisp "Utilities for Implementation- and OS- Portability", a library that is transcluded in ASDF, the build system used by nearly all software written in Common Lisp today. Thus, every contemporary Common Lisp program can assume that this macro is readily available. And ASDF itself makes good use of the macro: it shines not just at keeping indentation in check, but also in conjunction with Common Lisp reader conditionals, so that some wrapping forms are used only on the relevant implementations and not on others.

Implementing the nest macro

But just how simple is the nest macro? So simple it's literally a one liner in Common Lisp:

(defmacro nest (&rest r) (reduce (lambda (o i) `(,@o ,i)) r :from-end t))

That is, it is just a right fold (hence the :from-end t) on the list of forms r, to nest each (rightmost) form i into the end of the previous one o.
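
To make the fold concrete, here is a hypothetical Python sketch (not from the original post; nest_forms and the sample forms are invented for illustration) that models Lisp forms as nested Python lists and performs the same right fold:

```python
from functools import reduce

def nest_forms(*forms):
    # Right fold, mirroring (reduce (lambda (o i) `(,@o ,i)) r :from-end t):
    # the last form is the innermost body; each earlier (outer) form receives
    # the accumulated body spliced in at its end.
    return reduce(lambda acc, outer: outer + [acc],
                  reversed(forms[:-1]), forms[-1])

# Flattened forms, as you would pass them to the nest macro:
nested = nest_forms(["let", [["x", 1]]],
                    ["let", [["y", 2]]],
                    ["+", "x", "y"])
# nested == ["let", [["x", 1]], ["let", [["y", 2]], ["+", "x", "y"]]]
```

Each argument list stands for one parenthesized form; the fold splices each accumulated body into the tail of the form to its left, exactly as the one-liner does on s-expressions.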

Now, moving to Scheme, the benefits of nest are about the same, just better, since there are even more higher-order functions that call functions; if only we reorganize the order of their arguments so the function comes last (as with the for-each! variant of the standard for-each function above), then we can easily chain together the bindings and bodies of all these forms with the nest macro. And so, soon enough, I wrote a Scheme version of the nest macro. Here was my first, naive attempt; can you tell what was wrong with it without reading what follows?

(define-syntax nest
  (syntax-rules ()
    ((nest (x ...) y z ...) (x ... (nest y z ...)))
    ((nest x) x)))

This macro uses the simple but limited pattern-matching macro-defining macro syntax-rules. It recurses into each of the forms to nest by inserting itself as the head of successive shorter sub-lists of forms, that will recursively expand it, until a single form is left that is returned as the innermost form with nothing to nest in it. And the problem with this simple macro is... that the recursive inner form (nest y z ...) will only expand this inner nest if it is an expression, i.e. the kind of form that gets evaluated into a value by the evaluator (whether based on an interpreter, compiler, JIT, or whatever else), and corresponds to a single non-terminal of the language grammar. Therefore, the macro won't work when (y z ...) is a case or match clause, a type level expression, or anything but a normal expression (I was tempted to say regular expression or normal form, but these are terms of art with their own entrenched meaning). And so began my quest for a correct implementation of nest.

The difficulty here is that you really want to fold-right, starting with the inner form and bubbling up inside each consecutive outer form; but what was trivial to express recursively, yet not quite correct, was that fold-left above, assuming each nested form was an expression. My first correct solution used two macros as follows:

(define-syntax Rnest
  (syntax-rules ()
    ;;((_ () ()) ()) ;; This case is an error, actually
    ((_ (rev ...) (one more ...)) (Rnest (one rev ...) (more ...))) ;; reverse the outer form list
    ((_ (x (y ...) z ...) ()) (Rnest ((y ... x) z ...) ())) ;; recursively nest
    ((_ (x) ()) x))) ;; return
(define-syntax nest
  (syntax-rules ()
    ((_ x ...) (Rnest () (x ...)))))

The Rnest macro does the job of nesting the forms in two phases: first, it reverses the list by accumulating its elements one by one into an accumulator list; second, it folds left over the accumulated list. Each step is tail recursive, so the macro always remains in control of the expansion until the end, without having to rely on any sub-form itself being an expression that partakes in the expansion protocol (syntax-rules, like perhaps all macro systems but one, only allows macro-expansion for one kind of grammatical non-terminal, the expression; the only exception I know to this rule is Racket's syntax/parse, where multiple kinds of grammatical non-terminals can each have their own macro extensions). The reverse step will be familiar to anyone who ever tried to prove correct an implementation of reverse in e.g. Coq; or to prove correct an implementation of append, which often involves the append-reverse function.
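
The same two-phase strategy can be sketched outside the macro system, say in Python, again modeling forms as nested lists (a hypothetical illustration, not from the original post; nest_two_phase is an invented name):

```python
def nest_two_phase(*forms):
    # Phase 1: reverse the list of forms, one element at a time, just as
    # Rnest's first rule shuffles forms into its accumulator.
    rev = []
    for form in forms:
        rev = [form] + rev
    # Phase 2: fold left over the reversed list; the head is the innermost
    # body, which gets spliced into the end of each successive outer form.
    acc = rev[0]
    for outer in rev[1:]:
        acc = outer + [acc]
    return acc
```

Both loops are tail recursions in disguise, which is why the corresponding macro never has to hand control back to the expander between steps.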

Now, exposing the binding of the Rnest macro above isn't very hygienic. Ideally, you'd like Rnest itself to be lexically defined, such that it is seen by nest but not by anything else. Here is how I eventually did it, after someone tipped me off that (... ...) was the proper way of quoting the ellipsis ... so it could be present in nested macros:

(define-syntax nest
  (syntax-rules ()
    ((_ x ...)
     (letrec-syntax
         ((r (syntax-rules ()
               ((_ (xx (... ...)) (y z (... ...))) (r (y xx (... ...)) (z (... ...)))) ;; reverse the list
               ((_ (xx (y (... ...)) z (... ...)) ()) (r ((y (... ...) xx) z (... ...)) ())) ;; nest
               ((_ (xx) ()) xx)))) ;; bottom case
       (r () (x ...))))))

Note how it's essentially the same macro as previously, except for some ugly renamings: the internal version of Rnest is just called r, but its ellipsis has to be quoted as (... ...), and its variable x has to be renamed xx not to clash with the outer x that further demands to be used with an ellipsis (unless I suppose you quote it everywhere as (... x)). So, this hygienic version of nest using syntax-rules works but is particularly ugly. That said, it is not quite as ugly as what I had to go through before I was told how to quote the ellipsis...

Without quoting the ellipsis, you can still use syntax-rules to define the nest macro, but now you have to get creative, and use tail-recursion only with continuation-passing style so that all your transformations are done without requiring expansion from any subform, none of which might be an expression. That's where we actually use this tail-recursive append-reverse macro rev-app (in the body of the macro, at the bottom of the definition); its continuation will be calling the left-folding macro nest-rev that nests the reversed list of forms. It's all straightforward if you know continuation-passing style applied to macros (or even just to functions in general; macros being just source-transforming functions):

(define-syntax nest
  (syntax-rules ()
    ((_ . forms)
     (letrec-syntax
         ((nest-error
           (syntax-rules ()
             ((_ . args) (error "nest error" 'args))))
          (rev-app ;; k ctx lst acc ==> (k ctx ,(append (reverse lst) acc))
           (syntax-rules ()
             ((_ k ctx (hd . tl) rev) (rev-app k ctx tl (hd . rev)))
             ((_ k ctx () rev) (k ctx rev))
             ((_ k ctx x ()) (k ctx x))))
          (app ;; k ctx l1 l2 ==> (k ctx ,(append l1 l2))
           (syntax-rules ()
             ((_ k ctx l1 l2) (rev-app app-ret (k ctx l2) l1 ()))))
          (app-ret ;; (k ctx l2) rev-l1 ==> (k ctx ,(append (reverse rev-l1) l2))
           (syntax-rules ()
             ((_ (k ctx l2) revl1) (rev-app k ctx revl1 l2))))
          (nest-rev ;; given the reverse list of forms, set up the recursion
           (syntax-rules ()
             ((_ () ()) (nest-error))
             ((_ () (final . more)) (nest-rev2 more final))))
          (nest-rev2 ;; recurse
           (syntax-rules ()
             ((_ () done) done)
             ((_ (form . more) done) (app nest-rev2 more form (done))))))
       (rev-app nest-rev () forms ())))))

However, straightforward or not, this macro CPS work is extremely tedious; and if you want to do non-trivial processing in this style, you'll have to develop a library of macros in continuation-passing style to mirror each of the list-transforming functions you might have wanted to use if only you had the full power of the language while meta-programming. And then debugging meta-programs written this way will be atrocious, lacking adequate debugging support from your regular tools (the only exception here being Racket, which sports dedicated support for debugging macros). This horrible situation of having a brain-damaged meta-programming language completely disconnected from your base language, in which you must reinvent all data structures and libraries from scratch, without proper tooling, is of course reminiscent of template metaprogramming in C++, which, dreadful as it is, is still one of the more powerful blub languages with respect to metaprogramming (then there is compile-time reflection in Java, but by the time you've handled all the boilerplate to do the simplest of things, you'll either have forgotten why you were doing it in the first place, or will have committed suicide in disgust — unless you embrace the Greenspunning and re-create Clojure or such).

Of course, if you're willing to assume that your nesting level will still remain small, and that macro-expansion of nested forms won't be a significant drag on your compilation time, then you could use this much simpler version that uses an O(n²) algorithm instead of an O(n) algorithm, by expressing your fold-right in a more direct though less efficient way (implementation courtesy of gwatt on IRC #Scheme):

(define-syntax nest
  (syntax-rules ()
    ((nest x) x)
    ((nest x ... (y ...) z) (nest x ... (y ... z)))))
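
In list terms, each expansion step of this version splices the last form into the end of the form just before it, while the expander re-matches the whole prefix every time; a hypothetical Python rendering of that process (nest_quadratic is an invented name):

```python
def nest_quadratic(*forms):
    forms = list(forms)
    # Each step absorbs the last form z into the preceding form (y ...),
    # matching the rule ((nest x ... (y ...) z) (nest x ... (y ... z))).
    # The macro expander re-matches the whole x ... prefix at every step,
    # hence O(n^2) work overall for n forms.
    while len(forms) > 1:
        last = forms.pop()
        forms[-1] = forms[-1] + [last]
    return forms[0]
```

The result is the same nesting as the right fold; only the amount of pattern-matching work done along the way differs.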

Now, all this may have finished convincing you that while syntax-rules makes it simple to write simple macros, it might not be the best tool to write more elaborate macros that do not fit its simplistic assumptions. It is then time to unleash a more powerful macro-defining macro, syntax-case. Syntax-case, like syntax-rules, is hygienic, in that it tracks source location and naming contexts, so that you don't have to do it manually and carefully insert gensym everywhere; but unlike syntax-rules, it is not limited to a simple pattern language, it allows for metaprogramming using the very same language. Here is a straightforward version of the nest macro using syntax-case:

(define-syntax (nest stx)
  (syntax-case stx ()
    ((nest . xs)
     (let loop ((forms (syntax->list #'xs)))
       (cond
        ((null? forms) #'xs)
        ((null? (cdr forms)) (car forms))
        (else #`(#,@(syntax->list (car forms)) #,(loop (cdr forms)))))))))

The above loop follows the same naive approach as we were initially trying to use with syntax-rules: it manually does a left fold on the list of forms; unlike the syntax-rules version, though, it works even if the forms are not expressions, because we recurse directly inside the macro-expanding function, rather than by hoping that the next form will be an expression that recursively macro-expands. Recursion is much easier and nicer to use with syntax-case, because you have the full power of your language as a meta-language, instead of an ad hoc term-rewrite engine.

Now, if the #`(#,@ characters looked like line noise to you, they are quasisyntax and unsyntax-splicing, the syntax-case analogues of the quasiquote and unquote-splicing that you use with Common Lisp style macros. But if quasisyntax is alien to you or unimplemented in your Scheme, you can also manipulate the syntax directly, using the datum->syntax and syntax->list primitives:

(define-syntax (nest stx)
  (syntax-case stx ()
    ((nest . xs)
     (let loop ((forms (syntax->list #'xs)))
       (cond
        ((null? forms) #'xs)
        ((null? (cdr forms)) (car forms))
        (else (datum->syntax #'nest
                (append (syntax->list (car forms)) (list (loop (cdr forms)))))))))))

Of course, instead of doing the recursion manually, you could explicitly use a left fold, just like the reduce in Common Lisp (there again thanks to gwatt for his help):

(define-syntax (nest stx)
  (syntax-case stx ()
    ((nest outer ... inner)
     (foldl (lambda (o i) #`(#,@o #,i)) #'inner (reverse (syntax->list #'(outer ...)))))))

And there again, we can write the same thing without quasisyntax:

(define-syntax (nest stx)
  (syntax-case stx ()
    ((nest outer ... inner)
     (foldl (lambda (o i) (datum->syntax #'nest `(,@(syntax->list o) ,i)))
            #'inner (reverse (syntax->list #'(outer ...)))))))

And of course we can directly use a right fold, instead of a left fold on the reverse. That's very similar to the Common Lisp macro, just with some extra wrapping and unwrapping to maintain hygiene.

(define-syntax (nest stx)
  (syntax-case stx ()
    ((nest outer ... inner)
     (foldr (lambda (o i) #`(#,@o #,i)) #'inner (syntax->list #'(outer ...))))))

And as always, we can do it without quasisyntax, instead using quasiquote to implicitly express a call to the append function:

(define-syntax (nest stx)
  (syntax-case stx ()
    ((nest outer ... inner)
     (foldr (lambda (o i) (datum->syntax #'nest `(,@(syntax->list o) ,i)))
            #'inner (syntax->list #'(outer ...))))))

Now the macro is so simple, with a single trivial pattern to match, that you could even write the expander directly without syntax-case:

(define-syntax (nest stx)
  (let ((forms (syntax->list stx)))
    (let loop ((more (cdr forms)))
      (cond
       ((null? more) #'stx)
       ((null? (cdr more)) (car more))
       (else (datum->syntax (car forms) ;; in Racket, stx would do
               (append (syntax->list (car more)) (list (loop (cdr more))))))))))

Also, you could use syntax->datum instead of syntax->list but that would needlessly lose syntax location data on the recursive objects:

(define-syntax (nest stx)
  (syntax-case stx ()
    ((nest . xs)
     (datum->syntax #'nest ;; Gerbil wants an identifier; Racket works well with stx.
       (let ((forms (reverse (syntax->datum #'xs))))
         (let loop ((acc (car forms)) (more (cdr forms)))
           (if (null? more) acc
             (loop `(,@(car more) ,acc) (cdr more)))))))))

Last but not least, here is the version I actually use in my code, as proposed by vyzo. It uses ellipses to do the appending directly on syntax; to achieve this, it first uses with-syntax to establish a binding between the macro's run-time variables o and i and the macro's compile-time syntax variables outer and inner; outer can then use the ellipsis to express the appending (note that Gerbil, unlike Racket, does not need the (syntax->list ...) wrapper):

(define-syntax (nest stx)
  (syntax-case stx ()
    ((nest outer ... inner)
     (foldr (lambda (o i)
              (with-syntax (((outer ...) o)
                            (inner i))
                #'(outer ... inner)))
            #'inner (syntax->list #'(outer ...))))))

So there, we've seen a simple macro, nest, how it interestingly trivializes a problem that others tried very hard to solve with extremely elaborate macros, how it can be implemented in three different widely used macro systems, and what are some issues with writing macros in Scheme rather than Lisp — the price you pay for hygiene (mind that just because I do not discuss the benefits does not mean there aren't such very valuable benefits). Note that this macro is pretty much the worst case scenario when translating a Common Lisp macro into a Scheme macro: it doesn't use any gensym so doesn't benefit from any of the hygiene machinery, and its pattern is just a bit off from what is easy with syntax-rules. Yet in the end, it isn't too hard to translate it. Translating it was a good exercise in learning Scheme macro systems.

Nicolas Hafner: Portacle Release - Confession 75

· 131 days ago

I've written about Portacle on a previous occasion, where I talked mostly about the issues I've faced. This time, however, I'm excited to announce that Portacle has finally reached version 1.0. This means that there are no obvious remaining issues that I am aware of. Everything should Just Work™.

In case you're confused about what Portacle even is: it stands for the Portable Common Lisp development Environment. It's a combination of Emacs, SBCL, Quicklisp, Git, and a variety of other, smaller components that together bring you a fully-fledged IDE that runs on the three major operating systems in use today. It is installable with a simple extraction and fully contained in its own directory. It can thus be loaded onto a USB stick for use on the go as well.

Portacle primarily targets both complete newcomers, for whom the installation procedure of a full Emacs setup otherwise involves a lot of confusing and complicated steps, and advanced users who simply need a quick way to set up a running environment on a machine. Portacle is especially convenient for testing your libraries on different systems.

I have personally tested Portacle to run properly on the following platforms:

  • Windows 7
  • Windows 10
  • OS X 10.11
  • OS X 10.12
  • Ubuntu 16.04
  • Linux Mint 17.3
  • Debian 8
  • Fedora 25
  • Arch Linux

Note that currently the following platform versions are supported:

  • Windows 7+ x64
  • OS X 10.11+ x64
  • Linux 3.13+ x64

You can download the current release here. If your system falls within these constraints and Portacle won't run properly for you, please do file an issue so that I can see what else needs fixing.

If you otherwise have suggestions regarding documentation extension, adding features, or smoothing out rough edges, please file an issue as well, or hop onto the #shirakumo IRC channel on Freenode to chat directly with me. I'd be happy to hear your thoughts.

For older items, see the Planet Lisp Archives.

Last updated: 2018-01-16 16:29