Planet Lisp

Michael Malis: Debugging Lisp Part 3: Redefining Classes

· 6 days ago

This is part 3 of Debugging Common Lisp. If you haven’t read either of the previous parts, you can find part 1 here, and part 2 here.

The Common Lisp Object System (CLOS) is pretty powerful. It gives you multiple inheritance, multiple dispatch, and many different ways to extend the behavior of methods. Underneath, most implementations use the Metaobject Protocol (MOP), a way of defining CLOS in terms of itself. As part of the MOP, classes are implemented as objects with several instance variables. Among those are variables that hold the class’s name, its superclasses, and a list of the class’s own instance variables. If you don’t believe me, take the point class from the previous post:

(defclass point ()
  ((x :accessor point-x :initarg :x :initform 0)
   (y :accessor point-y :initarg :y :initform 0)))

And use the Slime Inspector to inspect the point class object, which can be obtained by calling find-class:


The advantage of using the MOP is that it makes it possible to fine-tune the behavior of CLOS using ordinary object-oriented programming. A great example of this is the filtered-functions library, which adds arbitrary predicate-based dispatch to CLOS. But enough about the MOP. In this post I’m going to talk about one tiny piece of CLOS: update-instance-for-redefined-class.

Update-instance-for-redefined-class is a generic function that is called on existing instances whenever their class is redefined at runtime. By overriding it, you can customize exactly what happens at that point. For example, let’s say you are using the above point class to represent complex numbers for some sort of simulation. As part of the simulation, you have a point object saved inside of the *location* variable:
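For concreteness, the variable might have been set up with something like this (the coordinates here are made up for illustration; the post doesn’t show the actual values):

```lisp
;; Hypothetical setup: a point instance saved in a special variable.
(defvar *location* (make-instance 'point :x 3 :y 4))
```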


After profiling the simulation, you find that one of the bottlenecks is complex multiplication. Since multiplication of complex numbers is much more efficient when they are represented in polar form, you decide that you want to change the implementation of the point class from Cartesian to polar coordinates. To do that (at runtime), all you need to do is run the following code:

(defmethod update-instance-for-redefined-class :before
     ((pos point) added deleted plist &key)
  (let ((x (getf plist 'x))
        (y (getf plist 'y)))
    (setf (point-rho pos) (sqrt (+ (* x x) (* y y)))
          (point-theta pos) (atan y x))))

(defclass point ()
  ((rho :initform 0 :accessor point-rho)
   (theta :initform 0 :accessor point-theta)))

(defmethod point-x ((pos point))
  (with-slots (rho theta) pos (* rho (cos theta))))

(defmethod point-y ((pos point))
  (with-slots (rho theta) pos (* rho (sin theta))))

Basically, the code extends update-instance-for-redefined-class to calculate the values of rho and theta for the polar implementation in terms of the variables x and y from the Cartesian one. After extending update-instance-for-redefined-class, the code then redefines the class, causing all of the existing instances to be changed over to the new implementation. Finally, two methods are defined, point-x and point-y, which preserve the interface of the point class. After running the code and then inspecting the contents of *location*, you should see:


Even though the object inside of *location* is still the same object, it is now implemented using polar coordinates! To make sure that it was converted from Cartesian to polar correctly, you decide to call point-x on the object to check that the x-coordinate is still the same:
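In place of the screenshot, that check might look like the following at the REPL (assuming *location* originally held x = 3 and y = 4; the exact values depend on your simulation):

```lisp
CL-USER> (point-rho *location*)   ; the magnitude: sqrt(3^2 + 4^2) = 5
CL-USER> (point-x *location*)     ; ~3.0, up to floating-point rounding
```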

Amazingly, all of the code continues to work even though the implementation of an entire class was completely changed. So anytime you want to change the implementation of a class that is part of a service that needs to be up 24/7 and just happens to be written in Common Lisp, remember to use update-instance-for-redefined-class.

The post Debugging Lisp Part 3: Redefining Classes appeared first on Macrology.

Lispjobs: Two positions at Franz, Inc.: SPARQL Engine and Database Storage programmer (Oakland, CA)

· 7 days ago

Franz, Inc., a medium size software company in Oakland, CA (2 blocks from BART) is looking for a Software Developer to join their AllegroGraph group.

Links to the two positions:

Michael Malis: Debugging Lisp Part 2: Inspecting

· 13 days ago

This is part 2 of Debugging Lisp. If you haven’t read part 1, you can find it here.

In this post I am going to discuss another tool used for debugging Common Lisp – the Slime Inspector. The Slime Inspector makes it possible to manipulate objects directly from the REPL. You can do many different things with it, including clicking on objects to look at their contents and copying and pasting objects in order to reuse them in future function calls. Let’s say you have the following point class:

(defclass point ()
  ((x :accessor point-x :initarg :x :initform 0)
   (y :accessor point-y :initarg :y :initform 0)))

If you were to make an instance of the above class:

(make-instance 'point :x 10 :y 20)

You can then right-click on it and choose the “inspect” option, or just use the Emacs shortcut “C-c C-v TAB”, to peek inside the object:


This will show you the current values of all of the instance variables of the object. Not only can you look at the object’s instance variables, you can modify them as well. Note that the power comes from being able to do all of this from within the debugger at runtime.
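The inspector does this interactively, but the programmatic equivalent is just reading and writing slots through the class’s accessors (a small sketch using the point class above; the variable name is made up):

```lisp
;; Equivalent of clicking a slot in the inspector and editing it:
(defvar *p* (make-instance 'point :x 10 :y 20))
(point-x *p*)            ; => 10
(setf (point-x *p*) 15)  ; modify the slot, as the inspector would
(point-x *p*)            ; => 15
```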


To make sure that the value of that object was actually changed, you can copy and paste the point object and then call the point-x function on it.


One more really cool tool that hooks into the Inspector is the Slime Trace Dialog. The Slime Trace Dialog is like ordinary trace, but it also allows for inspection of the objects that were passed to or returned from the traced functions. For example, let’s say you are writing a tail-call-optimized function, sum, that sums all of the numbers in a list.

(defun sum (xs &optional (acc 0))
  (if (null xs)
      acc
      (sum (cdr xs) (+ (car xs) acc))))

(sum '(1 2 3))
=> 6

You can toggle the Slime Trace Dialog’s tracing of sum by typing the shortcut “C-c M-t” and then the name of the function, “sum”. After tracing it and running the code, you can press “C-c T” to enter the interactive Trace Dialog buffer. From there you can press “G” to refresh it and obtain the most recent trace.


The trace will look like the output from ordinary trace, except it will have some additional goodies. As I said above, you can inspect all of the arguments and return values. You can also hide/show branches of the trace tree in order to make it easier to find what you are looking for.


The Slime Trace Dialog is invaluable when you have code which is passing lots of objects around and you aren’t exactly sure what the value of each variable in each object is. You can just use the Slime Trace Dialog and have it keep track of all of the information for you.

All in all, the Slime Inspector is another amazing part of the Common Lisp debugging tool set. It comes in handy when the program crashes and you are unaware of the current state of the program. When combined with the rest of the features for debugging Common Lisp, the Slime Inspector is just incredible.

The post Debugging Lisp Part 2: Inspecting appeared first on Macrology.

Zach Beane: Ceramic: a new CL application style

· 13 days ago

Ceramic by Fernando Borretti is an interesting project that takes Common Lisp web applications and makes them into desktop applications. There’s some discussion about it on Hacker News and reddit.

Didier Verna: Declt 2.0 is out -- IMPORTANT

· 15 days ago

Declt 2.0 "Kathryn Janeway" is out. This release doesn't contain any change in functionality, yet deserves a major version upgrade since it contains 3 important changes: an infrastructure revamp (along the lines of what Clon endured not so long ago), a license switch from the GNU GPL to a BSD one, and finally a system / package name change. The prefix is now net.didierverna instead of com.dvlsoft. Do I need to apologize for this again? :-)

Find it at the usual place...

Quicklisp news: July 2015 Quicklisp dist update now available

· 17 days ago
This Quicklisp update is supported by my employer, Clozure Associates. If you need commercial support for Quicklisp, or any other Common Lisp programming needs, it's available via Clozure Associates.
New projects:

  • check-it — A randomized property-based testing tool for Common Lisp. — LLGPL
  • cl-gists — Gists API Wrapper for Common Lisp. — MIT
  • cl-git — A CFFI wrapper of libgit2. — Lisp-LGPL
  • cl-opsresearch — Common Lisp library for Operations Research. — GPL3
  • cl-scripting — Utilities to help in writing scripts in CL — MIT
  • hu.dwim.graphviz — Graphviz layouting using CFFI bindings. — public domain
  • hu.dwim.presentation — A component based GUI framework with a backend to present it using HTML and JavaScript. — public domain
  • jp-numeral — A printer for Japanese numerals. — MIT
  • quickapp — A utility library to automate much of the app creation process — Modified BSD License
  • simple-tasks — A very simple task scheduling framework. — Artistic
  • terminfo — Terminfo database front-end. — copyrights
  • trivial-main-thread — Compatibility library to run things in the main thread. — Artistic
  • trivialib.type-unify — unifies a polimorphic type specifier with type variables against actual type specifiers — LLGPL
Updated projects: arrow-macros, binascii, birch, bit-smasher, buffalo, burgled-batteries.syntax, caveman, cerberus, cl-ana, cl-ansi-term, cl-async, cl-charms, cl-clon, cl-coveralls, cl-dbi, cl-freetype2, cl-growl, cl-isaac, cl-ledger, cl-libssh2, cl-libuv, cl-marklogic, cl-mongo-id, cl-netstring-plus, cl-olefs, cl-rabbit, cl-readline, cl-reexport, cl-rethinkdb, cl-rss, cl-sdl2, cl-slug, cl-spark, cl-string-match, cl-uglify-js, cl-voxelize, cl-yaclyaml, clack, clfswm, closer-mop, coleslaw, colleen, com.informatimago, command-line-arguments, common-doc-plump, commonqt, croatoan, dbus, declt, defclass-std, dexador, dissect, djula, docparser, drakma, dyna, eazy-gnuplot, fare-csv, fast-http, flexi-streams, frpc, generic-sequences, glass, glyphs, hemlock, hu.dwim.common, hu.dwim.def, hu.dwim.logger, hu.dwim.uri, hu.dwim.util, hu.dwim.web-server, hunchentoot, immutable-struct, integral, intel-hex, iolib, jonathan, jsown, lack, legion, let-over-lambda, lisp-interface-library, lisp-invocation, lisp-unit2, lucerne, madeira-port, marching-cubes, mathkit, mcclim, media-types, mexpr, mk-string-metrics, nibbles, ningle, optima, osicat, perlre, pileup, postmodern, pounds, pp-toml, priority-queue, prove, qlot, qmynd, qt-libs, qtools, quadtree, quicklisp-slime-helper, quri, rutils, sb-cga, screamer, scriba, serapeum, slime, smackjack, staple, stumpwm, sxql, transparent-wrap, trivial-download, trivial-features, trivial-lazy, trivial-signal, trivial-update, type-r, uiop, unix-opts, varjo, verbose, vgplot, weft, woo, wookie, workout-timer, x.fdatatypes, x.let-star, yaclml.

Michael Malis: Debugging Lisp Part 1: Recompilation

· 20 days ago

This post is the start of a series on how to debug Common Lisp code, specifically with Emacs, Slime, and SBCL. If you do not understand Common Lisp, you should still be able to follow along and recognize just how powerful the facilities provided by the Common Lisp debugger are. Nathan Marz asked me to write these posts since he thought many of the tools for debugging Common Lisp were pretty cool.

The first thing you need to do in order to get started debugging Common Lisp is to set your Lisp’s optimization qualities. Optimization qualities are basically a group of settings which allow you to specify what the compiler should optimize for. These qualities include speed, space, compilation speed, safety, and debugging. If you do not run the code below, which tells the compiler to optimize for debugging, almost none of the examples in this post will work.

CL-USER> (declaim (optimize (debug 3)))

CL-USER> (your-program)

With the compiler optimized for debugging, it becomes possible to do pretty much everything at runtime. This post will show you how Tom, an experienced Lisp developer, would debug and patch a buggy function at runtime. Let’s say that Tom has the following code, which implements the well-known Fibonacci function:

(defun fib (n)
  (if (<= 0 n 1)
      (/ 1 0)
      (+ (fib (- n 1))
         (fib (- n 2)))))

There’s just one problem: the code isn’t correct! Instead of returning n in the base case, the code winds up dividing by zero. When Tom tries to calculate the tenth Fibonacci number with this code, a debugger window pops up because an error was signaled.


Realizing that he has entered the debugger, Tom wonders what has gone wrong. In order to find the bug, Tom decides to insert a breakpoint into the function. In Common Lisp, breakpoints are implemented as a function called break. To insert his breakpoint, Tom adds a call to break at the beginning of fib. After adding the breakpoint, Tom puts his cursor next to one of the frames and hits the ‘r’ key in order to restart it. In this case, Tom decided to restart the frame where n was three.
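With the breakpoint added, fib looks like this (the buggy base case is left in place, since Tom hasn’t found it yet):

```lisp
(defun fib (n)
  (break) ; drops into the debugger each time fib is called
  (if (<= 0 n 1)
      (/ 1 0)
      (+ (fib (- n 1))
         (fib (- n 2)))))
```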


By restarting the frame, Tom basically traveled back in time to the beginning of the frame he restarted. After restarting the frame, the debugger immediately hits the breakpoint Tom had just added. From there Tom steps through the code by hitting the ‘s’ key. He eventually realizes that the base case is implemented incorrectly, and that that is why he received the error.


After finding the source of the problem, Tom patches the code, similar to how he had previously inserted the breakpoint. He replaces the base case with n and removes the breakpoint he had previously inserted.
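The patched version of fib, with the base case returning n and the breakpoint removed:

```lisp
(defun fib (n)
  (if (<= 0 n 1)
      n
      (+ (fib (- n 1))
         (fib (- n 2)))))
```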


After recompiling the code, Tom once again restarts one of the frames. Since he was previously stepping through code, the debugger starts stepping through the frame Tom decided to restart. Tom just taps the ‘0’ (zero) key in order to invoke the step-continue restart and continue normal execution. Because Tom restarted a frame which occurred before the bug, and the bug is now gone, the code runs as if there had never been a bug in the first place!


Let’s recap what happened. After the code signaled an error, Tom found himself in the debugger. Tom was able to insert a breakpoint and poke around until he found the source of the problem. After finding the problem, Tom patched the code and restarted the process from a point before it had signaled an error. Because Tom had corrected the code, after he restarted the frame, it acted as if nothing had ever gone wrong!

The ability to recompile code at runtime is just one of the many incredible features provided by Common Lisp. Next time, I’m going to talk about the Slime Inspector, which makes it possible to look into and modify objects from within the debugger.

The post Debugging Lisp Part 1: Recompilation appeared first on Macrology.

Quicklisp news: June 2015 download stats

· 21 days ago

Here are the top 100 downloads for June, 2015:

8413 alexandria
5803 babel
5222 cffi
5152 trivial-features
4926 cl-ppcre
4621 bordeaux-threads
4308 trivial-gray-streams
4208 closer-mop
4112 usocket
4068 flexi-streams
3891 trivial-garbage
3815 cl+ssl
3727 cl-fad
3597 split-sequence
3591 anaphora
3510 iterate
3274 cl-base64
3200 chunga
3141 nibbles
3136 chipz
3083 puri
2992 drakma
2678 ironclad
2521 named-readtables
2407 local-time
2391 let-plus
2256 cl-colors
2231 md5
2152 slime
2148 trivial-backtrace
2089 cl-ansi-text
1984 prove
1769 metabang-bind
1574 cl-unicode
1512 optima
1487 hunchentoot
1455 cl-interpol
1392 cl-utilities
1328 rfc2388
1301 cl-annot
1215 quri
1178 trivial-types
1090 cl-syntax
1080 fast-io
1059 static-vectors
1038 salza2
1011 trivial-indent
1000 cl-json
966 plump
943 parse-number
941 ieee-floats
933 trivial-utf-8
918 array-utils
841 fiveam
802 postmodern
778 proc-parse
772 lparallel
759 stefil
751 quicklisp-slime-helper
741 fast-http
741 xsubseq
740 clss
734 lquery
724 clack
695 jsown
693 lack
677 cl-dbi
676 jonathan
669 closure-common
666 osicat
658 cl-html-parse
658 cl-sqlite
646 asdf-system-connections
642 cxml
634 uuid
628 esrap
625 yason
619 symbol-munger
611 fare-utils
608 lisp-unit
602 cl-who
595 external-program
585 cl-csv
573 http-body
572 metatilities-base
569 cl-containers
555 trivial-mimes
545 fare-quasiquote
538 hu.dwim.asdf
524 cl-marshal
520 log4cl
511 zpng
511 command-line-arguments
490 cl-log
478 html-template
476 function-cache
471 cl-yacc
469 trivial-shell
427 circular-streams
422 cl-emb

Zach Beane: Quicklisp: beyond beta

· 26 days ago

I gave a talk at ELS 2015 in London in April about the past, present, and future of Quicklisp. The "future" part detailed several things I'd like to accomplish before removing the beta label from Quicklisp.

The slides and script of the talk are available on github. But since there are over a hundred slides and about twenty pages of script, I thought I'd summarize things in this post.

First, what's Quicklisp for, anyway? The slogan I put in the talk is: Make it easy to confidently build on the work of others. (Also, work on as many Common Lisps on as many platforms as possible.) 

Quicklisp achieves part of that already. It runs well on almost all Common Lisp implementations on all platforms. It's easy to install, use, and update. Things are tested together so everything usually works. But if something breaks on update, you can revert to a previous working version. And with those features you can build on hundreds of libraries already.

If it can do all that, why not drop the "beta" already? There are still a number of things that I want Quicklisp to do before I'm ready to say "Here, this is what I had in mind from the start."

First, I'd like to improve the confidence in the code you download and run. By adding HTTPS, signature checking, and checksum validation, you can be sure that there is nobody intercepting and modifying the software provided by Quicklisp. The signature and archive integrity checks must be made complete and automatic to have the best results.

Second, I'd like to add comprehensive user and developer documentation. For users, that means being able to learn each command, feature, and behavior of Quicklisp, to be able to use it to its fullest. For developers, that means being able to build your own solutions on a Quicklisp foundation without starting from scratch.

Third, I'd like to make it easy to find the project that does what you need, evaluate its quality and popularity, and find out if its license is compatible with your goals. If you want to make changes to a project, I want it to be easy to get the original source of a project and send fixes or improvements to the upstream maintainer.

Fourth, I'd like to make it easy to hook into the archive-fetching component of Quicklisp in a way that makes it easy to support additional integrity checks, support local development policies, and add local mirrors or caches for Quicklisp software.

These changes and improvements will take time. When they're done, I'll be happy to drop "beta" from Quicklisp.

Michael Malis: Multiple-value-bind

· 27 days ago

Common Lisp is a pretty unique language. One of the many features that makes Common Lisp such an awesome language is multiple values. Yes, you read that right: in Common Lisp it is possible for a function to return more than a single value. One example of a function that takes advantage of multiple values is floor. Floor takes a number as its argument and returns two values: the argument rounded down, and the remainder.

(floor 3.5)

When you use floor in the manner above, you get two values back: 3 as the first return value, and 0.5 as the second. What’s really cool is that the values besides the first are completely ignored unless you explicitly ask for them. This means you can pretend that floor returns only a single value as long as you don’t need the other ones. Notice how in the following example, the + function is not aware of the second value returned by floor:

(+ (floor 3.5) 10)
=> 13

Now you may be wondering, “How can I obtain other values besides the first one?”. Well, there are several macros for doing that, the main one being multiple-value-bind. To use multiple-value-bind, you specify a list of the variables you want to bind each value to, followed by the expression that will return multiple values. Let’s say you want to multiply the two values returned by floor together. Here is how you would do that with multiple-value-bind:

(multiple-value-bind (val remainder) (floor 3.5)
  (* val remainder))
=> 1.5
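If you just want all of the values collected into a list, standard Common Lisp also provides multiple-value-list:

```lisp
(multiple-value-list (floor 3.5))
=> (3 0.5)
```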

It is also easy to create your own function that returns multiple values. All you need to do is pass each value you want to return to the values function. Below is a function which returns both twice its argument and three times its argument:

(defun multiples (x)
  (values (* 2 x) (* 3 x)))

(multiples 10)
=> 20, 30

There is just one more thing you need to know about multiple values. If the last thing a function does is call another function that returns multiple values, the first function will return all of the values the second one returns. If you were to write a function that doubles its argument and then uses floor to round it down, that function will return both values that are returned by floor.

(defun double-and-round-down (x)
  (floor (* 2 x)))

(double-and-round-down 5.25)
=> 10, 0.5

This behavior may or may not be desired. The standard way to make sure your function returns only a single value is to wrap the call that returns multiple values in a call to values. With a single argument, values pays attention only to the first value and returns just that and nothing else.

(defun double-and-round-down (x)
  (values (floor (* 2 x))))

(double-and-round-down 5.25)
=> 10

And that’s all you need to know to work with multiple values!

The post Multiple-value-bind appeared first on Macrology.

Didier Verna: Declt 1.1 is released

· 29 days ago


As promised last week, I've just released a new version of Declt, my reference manual generator for ASDF systems. This new version (1.1) is now able to document Clon again (the documentation of which has been updated on the website).

New in this release:

  • Declt now properly handles and documents complex system and component dependencies, such as :feature, :require, and :version statements,
  • Declt also documents a system's :if-feature if any.

But the most important addition is the ability to document several ASDF systems in the same reference manual. More precisely, Declt now documents not only the main system but also all its subsystems. A subsystem is defined as a system on which the main one depends in any way, and which is also part of the same distribution (under the same directory tree). Declt also understands multiple system definitions from the same .asd file.


Paul Khuong: Linear-log Bucketing: Fast, Versatile, Simple

· 30 days ago

There are a couple of code snippets in this post (lb.lisp, bucket.lisp, bucket-down.lisp, bin.c). They’re all CC0.

What do memory allocation, histograms, and event scheduling have in common? They all benefit from rounding values to predetermined buckets, and the same bucketing strategy combines acceptable precision with reasonable space usage for a wide range of values. I don’t know if it has a real name; I had to come up with the (confusing) term “linear-log bucketing” for this post! I also used it twice last week, in otherwise unrelated contexts, so I figure it deserves more publicity.

I’m sure the idea is old, but I first came across this strategy in jemalloc’s binning scheme for allocation sizes. The general idea is to simplify allocation and reduce external fragmentation by rounding allocations up to one of a few bin sizes. The simplest scheme would round up to the next power of two, but experience shows that’s extremely wasteful: in the worst case, an allocation for \(k\) bytes can be rounded up to \(2k - 2\) bytes, for almost 100% space overhead! Jemalloc further divides each power-of-two range into 4 bins, reducing the worst-case space overhead to 25%.

This sub-power-of-two binning covers medium and large allocations. We still have to deal with small ones: the ABI forces alignment on every allocation, regardless of their size, and we don’t want to have too many small bins (e.g., 1 byte, 2 bytes, 3 bytes, ..., 8 bytes). Jemalloc adds another constraint: bins are always multiples of the allocation quantum (usually 16 bytes).

The sequence for bin sizes thus looks like: 16, 32, 48, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320, 384, ... (0 is special because malloc must either return NULL [bad for error checking] or treat it as a full blown allocation).

I like to think of this sequence as a special initial range with 4 linearly spaced subbins (0 to 63), followed by power-of-two ranges that are again split in 4 subbins (i.e., almost logarithmic binning). There are thus two parameters: the size of the initial linear range, and the number of subbins per range. We’re working with integers, so we also know that the linear range is at least as large as the number of subbins (it’s hard to subdivide 8 integers in 16 bins).

Assuming both parameters are powers of two, we can find the bucket for any value with only a couple x86 instructions, and no conditional jump or lookup in memory. That’s a lot simpler than jemalloc’s implementation; if you’re into Java, HdrHistogram’s binning code is nearly identical to mine.

Common Lisp: my favourite programmer’s calculator

As always when working with bits, I first doodled in SLIME/SBCL: CL’s bit manipulation functions are more expressive than C’s, and a REPL helps exploration.

Let linear be the \(\log\sb{2}\) of the linear range, and subbin the \(\log\sb{2}\) of the number of subbin per range, with linear >= subbin.

The key idea is that we can easily find the power of two range (with a BSR), and that we can determine the subbin in that range by shifting the value right to only keep its subbin most significant (nonzero) bits.

I clearly need something like \(\lfloor\log\sb{2} x\rfloor\):

(defun lb (x)
  (1- (integer-length x)))

I’ll also want to treat values smaller than 2**linear as though they were about 2**linear in size. We’ll do that with

n-bits := (lb (logior x (ash 1 linear))) === (max linear (lb x))

We now want to shift away all but the top subbin bits of x

shift := (- n-bits subbin)
sub-index := (ash x (- shift))

For a memory allocator, the problem is that the last rightward shift rounds down! Let’s add a small mask to round things up:

mask := (ldb (byte shift 0) -1) ; that's `shift` 1 bits
rounded := (+ x mask)
sub-index := (ash rounded (- shift))

We have the top subbin bits (after rounding) in sub-index. We only need to find the range index

range := (- n-bits linear) ; n-bits >= linear

Finally, we combine these two together by shifting range left by subbin bits

index := (+ (ash range subbin) sub-index)

Extra! Extra! We can also find the maximum value for the bin with

size := (logandc2 rounded mask)

Assembling all this yields

(defun bucket (x linear subbin)
  (let* ((n-bits (lb (logior x (ash 1 linear))))
         (shift (- n-bits subbin))
         (mask (ldb (byte shift 0) -1))
         (rounded (+ x mask))
         (sub-index (ash rounded (- shift)))
         (range (- n-bits linear))
         (index (+ (ash range subbin) sub-index))
         (size (logandc2 rounded mask)))
    (values index size)))

Let’s look at what happens when we want \(2\sp{2} = 4\) subbin per range, and a linear progression over \([0, 2\sp{4} = 16)\).

CL-USER> (bucket 0 4 2)
0 ; 0 gets bucket 0
0 ; and rounds up to 0
CL-USER> (bucket 1 4 2)
1 ; 1 gets bucket 1
4 ; and rounds up to 4
CL-USER> (bucket 4 4 2)
1 ; so does 4
4
CL-USER> (bucket 5 4 2)
2 ; 5 gets the next bucket
8
CL-USER> (bucket 9 4 2)
3
12
CL-USER> (bucket 15 4 2)
4
16
CL-USER> (bucket 17 4 2)
5
20
CL-USER> (bucket 34 4 2)
9
40

The sequence is exactly what we want: 0, 4, 8, 12, 16, 20, 24, 28, 32, 40, 48, ...!

The function is marginally simpler if we can round down instead of up.

(defun bucket-down (x linear subbin)
  (let* ((n-bits (lb (logior x (ash 1 linear))))
         (shift (- n-bits subbin))
         (sub-index (ash x (- shift)))
         (range (- n-bits linear))
         (index (+ (ash range subbin) sub-index))
         (size (ash sub-index shift)))
     (values index size)))
CL-USER> (bucket-down 0 4 2)
0 ; 0 still gets the 0th bucket
0 ; and rounds down to 0
CL-USER> (bucket-down 1 4 2)
0 ; but now so does 1
0
CL-USER> (bucket-down 3 4 2)
0 ; and 3
0
CL-USER> (bucket-down 4 4 2)
1 ; 4 gets its bucket
4
CL-USER> (bucket-down 7 4 2)
1 ; and 7 shares it
4
CL-USER> (bucket-down 15 4 2)
3 ; 15 gets the 3rd bucket for [12, 15]
12
CL-USER> (bucket-down 16 4 2)
4
16
CL-USER> (bucket-down 17 4 2)
4
16
CL-USER> (bucket-down 34 4 2)
8
32

That’s the same sequence of bucket sizes, but rounded down in size instead of up.

The same, in GCC

static inline unsigned int
lb(size_t x)
{
        /* I need an extension just for integer-length (: */
        return (sizeof(long long) * CHAR_BIT - 1) - __builtin_clzll(x);
}

/*
 * The following isn't exactly copy/pasted, so there might be
 * transcription bugs.
 */
static inline size_t
bin_of(size_t size, size_t *rounded_size,
    unsigned int linear, unsigned int subbin)
{
        size_t mask, range, rounded, sub_index;
        unsigned int n_bits, shift;

        n_bits = lb(size | (1ULL << linear));
        shift = n_bits - subbin;
        mask = (1ULL << shift) - 1;
        rounded = size + mask; /* XXX: overflow. */
        sub_index = rounded >> shift;
        range = n_bits - linear;

        *rounded_size = rounded & ~mask;
        return (range << subbin) + sub_index;
}

static inline size_t
bin_down_of(size_t size, size_t *rounded_size,
    unsigned int linear, unsigned int subbin)
{
        size_t range, sub_index;
        unsigned int n_bits, shift;

        n_bits = lb(size | (1ULL << linear));
        shift = n_bits - subbin;
        sub_index = size >> shift;
        range = n_bits - linear;

        *rounded_size = sub_index << shift;
        return (range << subbin) + sub_index;
}

What’s it good for?

I first implemented this code to mimic jemalloc’s binning scheme: in a memory allocator, a linear-logarithmic sequence gives us alignment and bounded space overhead (bounded internal fragmentation), while keeping the number of size classes down (controlling external fragmentation).

High dynamic range histograms use the same class of sequences to bound the relative error introduced by binning, even when recording latencies that vary between microseconds and hours.

I’m currently considering this binning strategy to handle a large number of timeout events, when an exact priority queue is overkill. A timer wheel would work, but tuning memory usage is annoying. Instead of going for a hashed or hierarchical timer wheel, I’m thinking of binning events by timeout, with one FIFO per bin: events may be late, but never by more than, e.g., 10% their timeout. I also don’t really care about sub millisecond precision, but wish to treat zero specially; that’s all taken care of by the “round up” linear-log binning code.

In general, if you ever think to yourself that dispatching on the bitwidth of a number would mostly work, except that you need more granularity for large values, and perhaps less for small ones, linear-logarithmic binning sequences may be useful. They let you tune the granularity at both ends, and we know how to round values and map them to bins with simple functions that compile to fast and compact code!

P.S. If a chip out there has fast int->FP conversion and slow bit scans(!?), there’s another approach: convert the integer to FP, scale by, e.g., \(1.0 / 16\), add 1, and shift/mask to extract the bottom of the exponent and the top of the significand. That’s not slow, but unlikely to be faster than a bit scan and a couple shifts/masks.

Vsevolod Dyomkin: Running Lisp in Production at Grammarly

· 31 days ago

We have written a blog post describing almost 3 years of our Lisp in production experience at Grammarly. Here's a small abstract for it.

At Grammarly, the foundation of our business, our core grammar engine, is written in Common Lisp. It currently processes more than a thousand sentences per second, is horizontally scalable, and has reliably served in production for almost 3 years.

We noticed that there are very few, if any, accounts of how to deploy Lisp software to modern cloud infrastructure, so we thought that it would be a good idea to share our experience. The Lisp runtime and programming environment provides several unique, albeit obscure, capabilities to support production systems (for the impatient, they are described in the final chapter).

Continue to the full text »

Zach Beane: Lisp in production at Grammarly

· 31 days ago

Vsevolod Dyomkin (who conducted the Lisp Hackers interviews) has an interesting blog post today about running Lisp in production at Grammarly:

At Grammarly, the foundation of our business, our core grammar engine, is written in Common Lisp. It currently processes more than a thousand sentences per second, is horizontally scalable, and has reliably served in production for almost 3 years.

We noticed that there are very few, if any, accounts of how to deploy Lisp software to modern cloud infrastructure, so we thought that it would be a good idea to share our experience.

Full article.

Didier Verna: Clon 1.0b24 is released -- IMPORTANT

· 33 days ago


I'm happy to announce the release of the next beta version of Clon, the Common Lisp / Command Line Options Nuker library. This release doesn't contain much change in terms of functionality, but it contains a lot of change in terms of infrastructure, plus very important and backward-incompatible modifications. So if you're a Clon user, please read on.

First of all, a huge revamp of the library's infrastructure (package hierarchy, ASDF and Make implementations) occurred. A large portion of this work is actually not mine, but Fare's (big thanks to him, 'cause the level of ASDF expertise required just means that I couldn't have done that by myself). The purpose here was twofold: first, remove all logic from the ASDF files (so that other system managers could be used; not sure that's actually useful right now) and second, split the library in two: the core, basic functionality and the non-standard platform-dependent bells and whistles (read: termio support). The result is that Clon now comes with 4 different ASDF systems! A setup system allows you to configure some stuff prior to loading the library, a core system allows you to load only the basic functionality, and the regular one loads everything, autodetecting platform-dependent features as before. The fourth system is auxiliary and not to be used by hand. All of this is properly documented. For a code maniac like me, this new infrastructure is much more satisfactory, and I've learned a lot about ASDF's less-known features.

Next, I've moved the repository to Github. Please update your links! It seems that I've lost all my former tags in the process, but oh well... Only the Git repo has moved. The main Clon web page still contains the full history of tarballs, the preformatted documentation, and will continue to do so in the future.

Finally (I've kept this to myself until the last possible minute because I'm scared like hell to tell): I've changed the system and package names... The com.dvlsoft prefix has been replaced with net.didierverna. All other libraries of mine will eventually endure the same surgery. It's for the best, I apologize for it, and I swear I will never ever do that again, EVER (fingers crossed behind my back).

So what's next? Before considering an official 1.0 release, there are two things that I want to do. First, clean up some remaining Fixmes and some shaky error handling. Second, provide an even simpler way of using Clon than what the Quick Start chapter in the doc demonstrates. The idea is to just implement a main function with keyword arguments, and have those arguments magically become command-line options.

A side-effect of this work is that Declt now chokes on Clon, because some ASDF features that it doesn't understand are in use. So Declt has a couple of new challenges ahead, and you should expect a new release in the weeks to come.

Michael Malis: Hofeach

· 40 days ago

Last time I talked about mapeach, a macro which is a simple wrapper around mapcar. After using mapeach a couple of times, I found that I wanted an ‘each’ version of many other functions: remove, find, and count, to name a few. One option I had was to write a macro for every single one of these functions. If I were to have done this, I would have wound up with ‘remove-each’, ‘find-each’, and so on. Instead I took door number two, creating a general macro which I call ‘hofeach’. Hofeach is just like mapeach, except it takes an extra argument for the HOF (higher-order function) that you want to use. Below is one possible implementation of hofeach.

(defmacro hofeach (hof var list &body body)
  `(funcall ,hof (lambda (,var) ,@body) ,list))

Here is what code that uses hofeach as a fill in for mapeach looks like:

(hofeach #'mapcar x '(1 2 3)
  (* x x))

=> (1 4 9)

Now we get to specify which HOF we want to use! If we want to keep all of the numbers in a list that are even, here is how we could do that:

(hofeach #'remove-if-not x '(1.2 5 7 2 3.5 6 9)
  (and (integerp x) (evenp x)))

=> (2 6)

So now that I have hofeach, I generally use it instead of passing a complex lambda expression to a HOF. Most of the time I use hofeach with remove-if-not, but I have also used it with count-if. It gives code a nice down-and-to-the-right look, which I find pretty easy to read. You get to read the forms in the order that they appear. If you were to use a lambda expression instead, the code would be much more difficult to read since you would have to jump around to follow it.
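For instance, counting the even numbers in a list with count-if reads the same way (using the hofeach macro defined above):

(hofeach #'count-if x '(1 2 3 4 5 6)
  (evenp x))

=> 3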

The post Hofeach appeared first on Macrology.

Patrick Stein: Syntactic Corn Syrup

· 41 days ago

I’ve been bouncing around between Java and C++ and C and loads of JNI cruft in between. At some point today, I accidentally used a semicolon to separate parameters in my C function declaration:

void JNI_myJNIMethod( int paramA; int paramB; int paramC )

It looked wrong to me. But, I had one of those brain-lock moments where I couldn’t tell if it was wrong. I was pretty sure that it was wrong by the time my brain locked on pre-ANSI K&R:

JNI_myJNIMethod(paramA, paramB, paramC)
  int paramA;
  int paramB;
  int paramC;

Regardless, it got me thinking about the programming maxims: “Deleted code has no bugs” and “Deleted code is debugged code.”

I never have this kind of brain-lock in Lisp. Some of that is because my Emacs configuration has been molded to my Lisp habits better than to my C/C++/Java habits. Most of it, though, is that Lisp understands the difference between syntactic sugar and syntactic cruft.

Lisp decided long ago that writing code should be easy even if it makes writing the compiler tougher. C and C++ and Java all decided that LALR(1) was more important than me. As if that weren’t bad enough, C++ and Java have thrown the lexers and parsers under the bus now, too. No one gets a free ride.

drmeister: I gave a talk on Clasp and my Chemistry at Google in Cambridge, Mass. last week

· 42 days ago

Here is the link for the talk.

This talk describes our unique approach to constructing large, atomically precise molecules (called “Molecular Lego” or “spiroligomers”) that could act as new therapeutics, new catalysts (molecules that make new chemical reactions happen faster) and ultimately to construct atomically precise molecular devices. Then I describe Clasp and CANDO, a new implementation of the powerful language Common Lisp. Clasp is a Common Lisp compiler that uses LLVM to generate fast machine code and it interoperates with C++. CANDO is a molecular design tool that uses Clasp as its programming language. Together I believe that these are the hardware (molecules) and the software (the CANDO/Clasp compiler) that will enable the development of sophisticated molecular nanotechnology.

For more info see:

What a great place Google was! My host, Martin Cracauer, was fantastic; he made me feel really, really welcome and made sure that the talk would be recorded and put up on the web. He arranged it so that I could spend the afternoon talking with him and Doug and James, two Lisp/compiler gurus at Google. He also gave me a tour of Google; it was great.

Michael Malis: Mapeach

· 43 days ago

Many times when using mapcar, I find myself using a complex lambda expression for the function argument. This makes the code difficult to read since it breaks apart the flow. My code winds up looking like the following:

(mapcar (lambda (x)

First you have to read the possibly massive lambda expression, then you finally find out what you are mapping over. As the lambda expression increases in length, it becomes harder and harder to read. A way to fix this is with the macro mapeach. Mapeach is a macro which is meant to be used when the lambda expression that would be passed to mapcar is much longer than the expression for the list. Mapeach works just like mapcar, but provides an alternative syntax which makes it easier to read when the lambda expression is complicated. Here is one possible implementation of mapeach:

(defmacro mapeach (var list &body body)
  `(mapcar (lambda (,var) ,@body) ,list))

Mapeach does two things to fix the problem. First, it hides the lambda, making it easier to find the important parts of the code. Second, it inverts the order of the arguments, putting the simple list expression first and the complex body second. As a simple example of mapeach, here is how one could square each element in a list using it:

(mapeach x '(1 2 3)
  (* x x))

If one wanted to write the above code by using mapcar, it would look something like the following:

(mapcar (lambda (x)
          (* x x))
        '(1 2 3))

Although it doesn’t shine for this simple example, you can tell that mapeach makes the code a bit clearer. As the body for the lambda expression gets longer and longer, mapeach begins to make the code much easier to understand. I find that mapcar is nice to use only when the expression for the function is short. This happens when you are either using a named function or using some sort of reader macro. Mapeach is another one of those macros that makes what seems like an insignificant difference. Even so, I find that it aids a lot in readability since it puts all of the simple parts in one place.

The post Mapeach appeared first on Macrology.

Nicolas Hafner: The Great UI Warts - Confession 56

· 46 days ago

It's now been about a year since I first started work on Parasol. In the process, I had to learn about UI programming in Common Lisp. It pains me a lot to say this, but it is definitely not one of the great strengths of CL. It certainly wasn't back then, and it still isn't now. Since Parasol started I learned a lot about Qt and in particular the Common Lisp bindings, CommonQt. While using Qt is your best bet at writing a native GUI, it just isn't as pleasant as writing other lisp code. Too many things can break, too many brick walls are lying in wait for you to hit your head against, too many things are simply not there infrastructure-wise. However, as Parasol grew, and I grew tired of CommonQt's shortcomings, I started to write more and more systems to work around these problems and make the UI experience for the developer a better one. This is the goal of Qtools.

This library started out as an innocent encapsulation of a few things I'd developed in tandem with Parasol. The first serious issue I had with Parasol was memory leaks. Since we're accessing Qt (a C++ library), we need to go back to the old times and deal with our memory by hand. This is a very arduous task and one prone to mistakes. So, a system was developed to alleviate this pain. The result of this is Qtools' finalizers. At the core of it is a generic function that takes care of cleaning up the object it is passed. So in other words, a destructor function. Using this I could ensure that foreign objects were always properly cleaned up. However, I quickly came to realise that I did one very similar thing all the time: add a finalizer method for my widget, and call finalize on its slot values. Thanks to the Meta Object Protocol's capabilities, I was able to hide this away completely. Now there's almost never a need to write a finalizer method again. It suffices to just add :finalized T to a widget's slot, and in the case of sub-widgets, the system already does it automatically.

The next issue I had was that writing in CommonQt's style is really uncomfortable. You need to duplicate a lot of information and keep track of the slots, signals, and overrides you define in the class definition. You also need to take care of different type and naming styles that come from C++ and leak into your CL application. This spawned Qtools' widget system. Not only does it take care of mapping naming styles and types, but it also allows a much more normal-looking way of defining your widgets. Instead of having to stuff information into your class definition, you can use multiple, separated forms. Just the way it works in your usual Lisp programs. At the heart of this system lies reinitialize-instance. Thanks to this fantastic function (and the MOP), I was able to separate everything out. What happens in the back when you compile a separate form is that it appends the option that should be in the class definition onto a class property, and calls reinitialize-instance. This call subsequently computes the effective class options and injects them into the class re/initialisation, effectively making it appear as if you had indeed added an option onto the class definition itself.

With this step done, much of the awkwardness was gone. Programs looked much more naturally structured, and things could be specified in a way that felt intuitive. However, one stain remained in the picture: Qt method calls. In order to call Qt methods, CommonQt provides a reader macro: #_. Sadly, in order for this reader macro to work, you need to specify the method name as it is in Qt, including the proper capitalisation. Since it is a foreign call, you also can't inspect it, or get any documentation information out of it. Argument list validity also isn't checked. Getting rid of this and allowing some form of normal-looking function call instead was a rather tricky problem to solve. My first thought was to dynamically analyse the available methods and generate Lisp wrapper functions for them. Those wrapper definitions are dumped to file, and then loaded. Sadly, doing so results in a couple hundred thousand method wrappers and a roughly 50Mb FASL file (on SBCL). The initial compile time also suffered because of this of course. This seemed like a less than stellar solution to me, mostly because the overwhelming majority of the wrappers included would never be called by the GUI anyway. So I sought a different solution. I did find one, albeit it is rather mad.

This solution is called Q+. The first part of it is the aforementioned wrapper compiler that I previously used to generate static wrappers. Modifying it a bit, I could use the same system to generate individual wrappers for any specific method I wanted. The second part is detecting when a supposed call to a wrapper function is made. Since it is not precompiled, the CL host cannot know of it. Thus, we need to somehow intercept when such a form is compiled, dynamically compile the wrapper, and then replace it with a call to the actual wrapper function. That sounds like a macro! And indeed, the q+ macro does that. It takes a method name and an argument list, dynamically compiles the wrapper, and finally emits a call to the new wrapper. The truth is a bit trickier here, since the wrapper needs to be available when a file is merely loaded as well, which wouldn't be the case if it was only generated during macro expansion. So instead, a load-time-value form that generates the wrapper is emitted alongside the wrapper call. That way, methods are always around as needed, with no run-time overhead. The last trick to Q+ is the hiding of the q+ macro call. Using the q+ macro solved most of the problems, but it was essentially the same thing as the #_ reader macro, with a bit nicer method name handling. What I wanted instead was to be able to write the actual wrapper function names. That would also allow slime to show docstrings, arguments, and similar information. In order to make this last trick work, I had to hack into the reader.

One of the greater blemishes of the Common Lisp standard is the inability to hook into the reader's symbol creation process. This exclusion from the standard makes it impossible to write such things as package local nicknames as a library, or make a case like mine easy. What I had to do instead was to override the ( reader macro. Q+ then reads ahead, to see whether you're trying to reference a symbol from the q+ package. If so, it reads the rest of the form, and emits a call to the q+ macro from above instead. If not, it delegates to the standard reader macro for (. Overriding this reader macro is a dirty trick, and I'd rather not have done it. However, there simply is no other way to accomplish this feat, short of writing a complete reader implementation and demanding that people use that instead of the host implementation's, which is a bit too much to ask for, in my opinion. Still, it works fine, and I haven't run into any obvious issues so far. Now Qtools applications look and read like regular lisp code.

However, how the code looks is only one of the aspects that influence writing GUIs. There's a lot more to it, like for example the initial installation and the binary deployment. Those two things are what I've worked on in the past few weeks now. Out of the first item grew qt-libs, which should ensure that the required libraries like smoke and CommonQt are available easily. This currently works fine for Linux, however I did not get enough time before the Quicklisp release to find testers for Mac OS X. Windows is another problem entirely, one that I can only solve through downloading of precompiled libraries. I've wasted the entire day today with trying to get 64bit versions of the smoke libraries compiled on Windows. Hopefully I can push through with that and allow easy setup of a Qt environment on Windows as well. Qt-libs builds fine on Mac OS X now as well, though there's currently an issue remaining in loading the libraries. I'll get that sorted out before the next Quicklisp release though.

The second part grew into Qtools' new deployment system. This allows really convenient and easy generation of ready-to-ship binaries of your application. The only thing you have to do is update your system definition a little:

(asdf:defsystem :my-system
  :defsystem-depends-on (:qtools)
  :build-operation "qt-program-op"
  :build-pathname "binary-name"
  :entry-point "my-package:start-function-or-main-class")

Once these four lines are added, you can simply launch your implementation from a shell, invoke (asdf:operate :program-op :my-system), and it'll do all the magic (like closing foreign libraries before dump, restoring the proper library search paths after resume, reloading the foreign libraries again using the new paths, etc.) for you. All you'll get is a bin folder in your project folder that you can zip and ship. I've tried this for Halftone and it Just Works™ on Linux so far.

But, the road ahead is still long and twisted. Once deployment and installation work flawlessly, there's still a lot of code left to be written to make working with Qt itself less painful. Hopefully some day I'll be able to say that writing native GUIs in Lisp is actually a nice experience!

The Qtools documentation is long and extensive. It contains a lot of talk on both how to start using Qtools, as well as what the internals are and how they work. If you're interested, have a read.


Zach Beane: SLIME 2.13 and SBCL 1.2.12 error: The value NIL is not of type POLICY

· 47 days ago

If you run into this error message, there's a quick runtime fix: evaluate (sb-ext:restrict-compiler-policy 'safety)

You can also add that form to your .sbclrc.

You can also update to SLIME 2.14 or downgrade to SBCL 1.2.11. Unfortunately, SLIME 2.14 isn’t in the recent June Quicklisp update, but I might do a quick second update to fix this problem.

Michael Malis: CL-WHO

· 47 days ago

The CL-WHO library is one of many that make it easy to generate HTML. When first checking out CL-WHO, I thought that it must be at least a couple thousand lines of code long. As it turns out, it is only several hundred. At the core of CL-WHO is with-html-output (hence the name “who”), which allows one to use a DSL for generating HTML. With-html-output works like all macros. At a high level, it takes your code in the DSL, and compiles it into Lisp code that will generate the desired HTML (here are some examples).

With-html-output does little by itself. Almost all of the work is done by three functions: tree-to-template, process-tag, and convert-tag-to-string-list. Most of the time these functions call one another recursively in order to process the entire DSL. It is possible to customize the control flow, but I will get to that later. Here is a link to a gist of the output after tracing all of the functions and using macroexpand-1 to expand a simple example. The example only shows what happens when using basic tags in CL-WHO. It doesn’t show what happens when you embed Lisp expressions in the DSL.

Tree-to-template is the entry point into the compilation process. It loops through the DSL tree, and builds up a “template”. A template is just a list of strings and expressions. The strings in the template contain HTML and are meant to be printed directly to the HTML stream. On the other hand, the expressions contain code that will print objects to that stream. Eventually all of this output put together will be the desired HTML. As tree-to-template loops through the code, if it sees a non-tag, it will just collect that into the list. When it does see a tag, tree-to-template calls process-tag to process it, and then concatenates the result of that into the template.

Process-tag will extract the tag as well as the attribute list. Everything after the attribute list makes up the “body” of the tag. How is the body processed? Well, process-tag takes an additional argument, body-fn, which specifies how to process the body. Process-tag will then call convert-tag-to-string-list with the tag, the attribute list, the body, and body-fn. The reason process-tag doesn’t process the body itself is that convert-tag-to-string-list is a generic function, making it possible to customize its behavior.

Convert-tag-to-string-list handles the semantics of the tag. It takes all of the arguments above and returns a list of strings and expressions. That list will become part of the template eventually returned by tree-to-template. Since convert-tag-to-string-list is a generic function, it is possible to extend it. The documentation for CL-WHO gives an example of how one could create a custom “red” tag which changes the font of the text to red, even though there is no such HTML tag. In the default case, convert-tag-to-string-list takes the result from calling body-fn on body and surrounds that with strings for the opening and closing tags. Since convert-tag-to-string-list is customizable, it is possible to change the control flow and ultimately how the body is processed. If one wanted, they could make a call to process-tag, but with a different body-fn argument, changing how the code is processed further up (down?) the tree.
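As a sketch of that kind of extension (along the lines of the example in the CL-WHO documentation; the exact markup emitted here is illustrative), a method on convert-tag-to-string-list can teach the DSL a new :red pseudo-tag:

;; Hypothetical sketch: the :red tag and the emitted <span> markup
;; are made up for illustration; only the generic function and its
;; argument list come from CL-WHO.
(defmethod convert-tag-to-string-list ((tag (eql :red)) attr-list body body-fn)
  (declare (ignore attr-list))
  (nconc (cons "<span style=\"color: red;\">" (funcall body-fn body))
         (list "</span>")))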

With the help of these functions, with-html-output converts the DSL into a template. The template is then turned into a list of valid Lisp code. With-html-output then wraps the body with a macrolet which binds several local macros. These macros are: htm, fmt, esc, and str. These macros make it easier to print objects to the stream used for output. Check out the documentation for CL-WHO for a more detailed description of what these macros do.
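As a small, hypothetical example of these local macros in action, str prints a runtime value while htm re-enters the DSL from inside ordinary Lisp code:

(with-html-output-to-string (s)
  (:ul (dolist (word '("foo" "bar"))
         (htm (:li (str word))))))

This produces markup along the lines of <ul><li>foo</li><li>bar</li></ul>.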

I really like CL-WHO. It is a great example of an embedded DSL. A Lisp hacker still has full access to Lisp from within what is a great DSL. The only problem I have with CL-WHO is the inability to have macros expand into code for the DSL. This decreases the flexibility of CL-WHO somewhat. The only way I can see to fix this problem would be to use a library such as :hu.dwim.walker to expand all of the macros in advance.

The post CL-WHO appeared first on Macrology.

Quicklisp news: June 2015 Quicklisp dist update now available

· 47 days ago
This Quicklisp update is supported by my employer, Clozure Associates. If you need commercial support for Quicklisp, or any other Common Lisp programming needs, it's available via Clozure Associates.
New projects:
  • cambl — A library for working with financial amounts involving multiple commodities. — BSD-3
  • cerberus — A Kerberos implementation — MIT
  • cl-ledger — Double-entry accounting system. — BSD-3
  • cl-libssh2 — Libssh2 bindings — MIT
  • cl-sane — Lispy library bindings for sane. — GPLv3
  • docparser — Parse documentation from Common Lisp systems. — MIT
  • fn — Some macros for lambda brevity — Public Domain
  • frpc — An ONC-RPC implementation. — MIT
  • glass — General Lisp API Security System. — MIT
  • glkit — Various utilities for OpenGL — MIT
  • integral-rest — REST APIs for Integral DAO Table. — MIT
  • legion — Simple worker threads with a queue. — BSD 2-Clause
  • lime — A high-level Swank client, like Slime, but for Common Lisp applications. — MIT
  • mathkit — Various utilities for math — MIT
  • or-glpk — Foreign interface to the GNU Linear Programming Kit. — LGPL3
  • pounds — Lisp block storage, provides portable file mappings amongst other things. — MIT
  • qt-libs — System to ensure that the necessary Qt libs are available. — Artistic
  • restful — Spin up new REST entities like madman — MIT License
  • swank-protocol — A low-level Swank client. — MIT
  • temporal-functions — A means of creating functions that have an internal concept of time — 2 Clause BSD
  • utilities.binary-dump — Formatting of binary data similar to the od(1) UNIX program. — LLGPLv3
  • varjo — Common Lisp -> GLSL Compiler — LLGPL
Updated projects: apply-argv, arrow-macros, asdf-dependency-grovel, asdf-encodings, asdf-finalizers, asdf-linguist, asdf-package-system, avatar-api, babel, bit-smasher, black-tie, blackbird, blackthorn-engine, bordeaux-fft, buffalo, burgled-batteries, burgled-batteries.syntax, caveman, cells, cffi, chanl, city-hash, cl+ssl, cl-6502, cl-abnf, cl-ana, cl-annot, cl-autowrap, cl-bencode, cl-bibtex, cl-charms, cl-cli-parser, cl-coveralls, cl-cron, cl-csv, cl-dbi, cl-dot, cl-dropbox, cl-durian, cl-emb, cl-factoring, cl-ftp, cl-fuse-meta-fs, cl-gendoc, cl-geometry, cl-glfw3, cl-gpu, cl-growl, cl-influxdb, cl-isaac, cl-launch, cl-lexer, cl-libpuzzle, cl-libusb, cl-libuv, cl-llvm, cl-marklogic, cl-memcached, cl-messagepack, cl-mlep, cl-mustache, cl-netstring-plus, cl-nxt, cl-odesk, cl-pass, cl-pdf, cl-plplot, cl-ppcre, cl-primality, cl-project, cl-protobufs, cl-qrencode, cl-quickcheck, cl-rabbit, cl-recaptcha, cl-rethinkdb, cl-rlimit, cl-rrt, cl-sam, cl-sdl2, cl-shellwords, cl-slug, cl-smtp, cl-sophia, cl-strftime, cl-string-match, cl-tk, cl-unification, clack, classimp, cletris, clim-widgets, clinch, clipper, clos-diff, closer-mop, coleslaw, colleen,, command-line-arguments, common-doc, common-doc-plump, common-html, contextl, crane, croatoan, css-selectors, daemon, dartsclhashtree, defclass-std, defpackage-plus, dissect, djula, dyna, eazy-gnuplot, eazy-process, eazy-project, eco, eos, escalator, esrap, esrap-peg, event-glue, exscribe, fare-csv, fare-memoization, fare-mop, fare-quasiquote, fare-utils, fast-io, fft, find-port, gendl, glaw, glop, glu-tessellate, hdf5-cffi, hermetic, html-template, http-parse, hu.dwim.asdf, hu.dwim.common, hu.dwim.common-lisp, hu.dwim.computed-class, hu.dwim.debug, hu.dwim.def, hu.dwim.defclass-star, hu.dwim.delico, hu.dwim.logger, hu.dwim.partial-eval, hu.dwim.perec, hu.dwim.quasi-quote, hu.dwim.rdbms, hu.dwim.reiterate, hu.dwim.serializer, hu.dwim.stefil, hu.dwim.syntax-sugar, hu.dwim.uri, hu.dwim.util, hu.dwim.walker, hu.dwim.web-server, 
ieee-floats, imago, inferior-shell, inner-conditional, inotify, intel-hex, ip-interfaces, jonathan, jwacs, kebab, lack, lambda-gtk, lambda-reader, lass, let-over-lambda, lfarm, linedit, lisp-executable, lisp-gflags, lisp-interface-library, lisp-invocation, lisp-namespace, lispbuilder, local-time, lparallel, lucerne, lw-compat, magicffi, md5, meta, mexpr, mgl, mgl-pax, micmac, misc-extensions, mixalot, modf, modf-fset, modularize, modularize-interfaces, myweb, named-readtables, nibbles, ningle, npg, opticl, osicat, pal, parse-js, periods, perlre, pg, plump, png-read, pooler, postmodern, projectured, protobuf, pzmq, qlot, qtools, query-fs, quri, random, rcl, readable, reader-interception, repl-utilities, retrospectiff, rfc3339-timestamp, rock, rpc4cl, rpm, rucksack, s-xml, scalpl, scriba, scribble, sdl2kit, serapeum, shuffletron, single-threaded-ccl, sip-hash, smackjack, smug, snappy, software-evolution, st-json, staple, stem, stumpwm, swank-client, swank-crew, sxql, temporary-file, thorn, trivia, trivia.balland2006, trivial-download, trivial-extract, type-i, type-r, unix-options, unix-opts, usocket, utilities.print-items, utils-kt, verbose, vertex, vgplot, websocket-driver, weft, with-c-syntax, woo, wookie, workout-timer, wuwei, xhtmlgen, zip, zlib, zs3.

Removed projects: arnesi+, asdf-contrib, asdf-project-helper, asdf-utils, until-it-dies.

arnesi+ has been removed because its repo has disappeared and its authors have not replied to inquiries in months.

asdf-contrib and asdf-utils have been removed by request of the author. asdf-project-helper has stopped working as a result.

until-it-dies has never actually worked, but was previously included because some of its auxiliary systems worked.

To get this update, use (ql:update-dist "quicklisp")

Michael Malis: Once-only

· 51 days ago

One of the most common mistakes made when writing macros is evaluating one of the arguments multiple times. Not only can this be inefficient, but when side effects are involved, it leads to quirky behavior. Take a macro square, which simply squares its argument (in reality one would use a function to do this):

(defmacro square (x)
  `(* ,x ,x))

The above implementation is buggy. Why? Because the x argument is evaluated twice. To see why this is a bad thing, check out the following code:

(square (incf a))

The above winds up expanding into:

(* (incf a) (incf a))

Which is buggy since it increments a twice. A way to fix this problem is to bind the value of x to a gensym, and then use that gensym throughout the rest of the macro. Here is a bug free definition of square that uses with-gensyms:

(defmacro square (x)
  (with-gensyms (gx)
    `(let ((,gx ,x))
       (* ,gx ,gx))))
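The with-gensyms macro used above is a common utility that is not defined in this post; a minimal definition (assuming the conventional behavior of binding each name to a fresh gensym) looks like this:

;; Bind each NAME to a fresh, uninterned symbol for the extent of BODY.
(defmacro with-gensyms ((&rest names) &body body)
  `(let ,(loop for name in names
               collect `(,name (gensym)))
     ,@body))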

Is there a way to automate this? Yes, there is, by using a macro called once-only. Once-only is a relatively complicated macro, but it eliminates lots of boilerplate code. Once-only takes a list of expressions, generally arguments to a macro, and makes sure they are evaluated only once in the final macro expansion. Here is an implementation of once-only based on the one from Practical Common Lisp:

(defmacro once-only ((&rest names) &body body)
  (let ((gensyms (loop for n in names collect (gensym))))
    `(with-gensyms (,@gensyms)
       `(let (,,@(loop for g in gensyms
                       for n in names
                       collect ``(,,g ,,n)))
          ,(let (,@(loop for n in names
                         for g in gensyms
                         collect `(,n ,g)))
             ,@body)))))

In order to explain how once-only works, I’m first going to show how to rewrite square using it. From there I will show what square looks like after once-only has been expanded. After that I will show what the macro expansion of square looks like. Finally, I will give an explanation as to what is going on. If you are reading on a computer, I strongly recommend you open this page in another window so you can follow along with the code and the explanation at the same time. Here is an implementation of square that uses once-only:

(defmacro square (x)
  (once-only (x)
    `(* ,x ,x)))

Here is what square looks like after once-only has been expanded inline:

(defmacro square (x)
  (with-gensyms (#:g830)
    `(let (,`(,#:g830 ,x))
       ,(let ((x #:g830))
          `(* ,x ,x)))))

So a usage of square such as the following:

(square (incf x))

will wind up looking like the code below after macro expansion.

(let ((#:g831 (incf x)))
  (* #:g831 #:g831))

So what the heck is going on? In line 2 of once-only, it creates a list of gensyms, one for each of the expressions that should only be evaluated once. We then take these gensyms and on line 3, generate code that will bind them to fresh gensyms. That generated code becomes line 2 of square after once-only has been expanded. We need to do this because we are writing a macro that writes a macro, or code that writes code that writes code. So, after once-only has been expanded, square's body will contain a use of with-gensyms which will bind a bunch of gensyms to new gensyms every time square is run. These fresh gensyms will eventually be the ones used to store the value of the expressions we want to be evaluated once only.

Now for lines 4-6. By using the double backquote, this code generates code that will generate code that will be part of the expansion of square. Lines 4-6 of once-only become line 3 of the definition of square, which becomes line 1 of the expansion of square. Basically the little segment

``(,,g ,,n)

says to generate code that will generate code (double backquote), that will be a list containing the value of the value of g, and the value of the value of n. The value of g will be one of the gensyms we created in once-only. From line 3 of square after once-only has been expanded, we see that this gensym was #:g830. The value of #:g830 will be another gensym, whatever it was bound to by with-gensyms. From the code we can see that this gensym was #:g831. The value of n will be one of the arguments to once-only. From the original code for square we see that the only argument to once-only is x. Then the value of x, or the value of the value of n, will be whatever is passed as the argument to the square macro, in this case (incf x). Ultimately the code looks like this as it goes through the multiple expansions:

``(,,g ,,n) => `(,#:g830 ,x) => (#:g831 (incf x))

Lines 4-6 take a list of expressions similar to those in the middle of the above process, splice them into a let by using the comma-at, then evaluate each one of them once more by using the comma. This works because the single comma in ,,@ actually applies to every element of the spliced list. Here is an example that demonstrates this:

``(,,@ '(x y z)) => `(,x ,y ,z)

Then on line 3 of square after once-only has been expanded, we wind up with a comma followed by a backquote, which cancel each other out. So this is how lines 4-6 of once-only get us line 3 of square, which then gives us line 1 of the expansion of square.

Now for lines 7-10 of once-only. These lines generate lines 4 and 5 of the code for square after once-only has been expanded. All these lines do is generate code that will bind the given names to the gensyms that will contain their values at runtime. In this case we want to bind x to the gensym #:g831. Since the value of #:g830 is #:g831, we can just bind x to the value of #:g830. Then we just evaluate the body in this environment. By doing this, we bind x to an expression that will give us the same value as the expression previously contained in x! And that is how once-only ultimately works. In the expansion of square, we bind #:g831 to the value of (incf x). Then we bind x to #:g831 so anywhere we insert the expression x, we get #:g831, a gensym which is bound to the value of the expression that was initially bound to x, but only evaluated once.

Ultimately, once-only is a fairly useful macro. Like with-gensyms, it is a utility for writing other macros. Once-only greatly reduces boilerplate and complexity in the cases where it is used. It is for these reasons that once-only is one of the most popular macros out there.
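To see once-only handle more than one argument, here is a quick sketch of my own (a hypothetical clamp macro, not from the post above), where all three arguments are each evaluated exactly once, in order:

```lisp
;; A hypothetical example: a clamp macro whose three arguments
;; are each evaluated exactly once, left to right.
(defmacro clamp (lo x hi)
  (once-only (lo x hi)
    `(min ,hi (max ,lo ,x))))

;; (clamp 0 (incf n) 10) expands into something like:
;; (let ((#:g1 0) (#:g2 (incf n)) (#:g3 10))
;;   (min #:g3 (max #:g1 #:g2)))
```

Without once-only, (incf n) would appear twice in the expansion and n would be incremented twice, just like in the buggy square above.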

The post Once-only appeared first on Macrology.

Fernando Borretti: Common Lisp with Travis and Coveralls

· 51 days ago

Travis is a service for running unit tests in the cloud. Every commit you make and every pull request your project receives is seen by Travis, which spins up a Docker container and runs your tests. With a little work, it supports Common Lisp.

Travis is controlled by a file in your repo's root, .travis.yml. It defines some options, like the language the project is written in, the list of shell commands that have to be executed prior to running the tests, and whom to send notifications to.

YAML is an easy to read format for structured data. Tools like Ansible, Chef and Salt have made it popular for configuration management and deployment. The result is basically a shell script, but structured into different sections.

To test a Common Lisp project, we use cl-travis. This provides a script that you can tell Travis to download, which sets up the Common Lisp implementation of your choice (you can test on many - more on that below), and installs Quicklisp.

Without further ado, this is what the simplest .travis.yml looks like:

language: common-lisp
sudo: required

env:
  matrix:
    - LISP=sbcl
    - LISP=ccl

install:
  # Install cl-travis
  - curl | bash

script:
  - cl -l fiveam
       -e '(setf fiveam:*debug-on-error* t
                 fiveam:*debug-on-failure* t)'
       -e '(setf *debugger-hook*
                 (lambda (c h)
                   (declare (ignore c h))
                   (uiop:quit -1)))'
       -e '(ql:quickload :my-project-test)'


The first two items just define the language of the project and tell it that sudo is required to run the tests. cl-travis requires sudo, so we'll have to set it to required at least for now.

Every item in the env.matrix list will create a new build with a certain configuration of environment variables. In this case, we want to test on both SBCL and CCL, so we use this:

env:
  matrix:
    - LISP=sbcl
    - LISP=ccl

The install list is just a list of shell commands to execute to set up the test environment. Here, we just download and install cl-travis:

install:
  # Install cl-travis
  - curl | bash

Projects which require system libraries to run the tests, like Crane, can install and configure those in the install list:

install:
  # Install cl-travis
  - curl | bash
  # Install the latest versions of the major dependencies
  - git clone quicklisp/local-projects/sxql
  - git clone quicklisp/local-projects/cl-dbi
  # Update package repos
  - sudo apt-get update
  # Install SQLite
  - sudo apt-get install -y sqlite3
  # Set up Postgres
  - sudo -u postgres createdb crane_test_db
  - sudo -u postgres psql -c "CREATE USER crane_test_user WITH PASSWORD 'crane_test_user'"
  - sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE crane_test_db TO crane_test_user"

For projects that you'd like to test using the latest version of dependencies, you can clone them to the ~/lisp folder. For instance, this is the install list of the .travis.yml file for Scriba:

install:
  # Install cl-travis
  - curl | bash
  # Clone the latest common-doc
  - git clone ~/lisp/common-doc
  # Clone the latest common-doc-plump
  - git clone ~/lisp/common-doc-plump

Finally, the script is the actual testing itself. cl-travis installs CIM, a command-line utility for managing and running different Lisp implementations under a common interface.

The cl command launches a Lisp image, and the -l flag can be used to Quickload a library. The -e flag lets us execute code, and here's where we set up what happens on failure and how to run the tests.

If you're using FiveAM for testing, you need to tell it to enter the debugger on test failures and errors. Then, hook up the debugger to UIOP's1 implementation-independent quit function. This ensures that on a test failure the script exits with -1, which tells Travis the tests have failed. Then, we just Quickload the test system to run the tests:

script:
  - cl -l fiveam
       -e '(setf fiveam:*debug-on-error* t
                 fiveam:*debug-on-failure* t)'
       -e '(setf *debugger-hook*
                 (lambda (c h)
                   (declare (ignore c h))
                   (uiop:quit -1)))'
       -e '(ql:quickload :my-project-test)'

If you're using fukamachi's prove for testing, you use this:

script:
  - cl -l prove -e '(or (prove:run :my-project-test) (uiop:quit -1))'

Enabling Travis

To use Travis, you need to sign up with your GitHub account. Then hover over your name in the upper right-hand corner of the page and go to your profile page. This will give you the following page:

Travis profile page

If you've just pushed the repo, chances are you need to click on 'Sync' to update the list of repos.

Then you click on the switch next to the repo's name to enable it, and all you have to do is push a commit to trigger a build. Travis, like all services, has its ups and downs in terms of availability, so sometimes builds will start almost instantaneously and other times they'll take a while to get going.

Coverage Testing

Code coverage is basically how many lines of source code are run by tests. SBCL supports coverage measuring, and can generate some HTML reports of coverage, but it requires some manual operation.
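For reference, here is a rough sketch of what that manual process looks like using SBCL's sb-cover contrib (the system and test function names here are placeholders):

```lisp
;; A sketch of manual coverage measurement with SBCL's sb-cover contrib.
;; :my-project and my-project-test:run-tests are placeholder names.
(require :sb-cover)

;; Turn on coverage instrumentation and recompile the system.
(declaim (optimize sb-cover:store-coverage-data))
(asdf:load-system :my-project :force t)

;; Exercise the code, then write an HTML report to a directory.
(my-project-test:run-tests)
(sb-cover:report "/tmp/coverage-report/")
```

Coveralls automates exactly this kind of workflow and tracks the results over time.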

Enter Coveralls: This is a service that takes raw code coverage data and tracks it. It shows you covered files, which lines are executed and which are not, the evolution of coverage over time, and also tells you what a pull request will do to coverage.

Coveralls works with Travis, so no extra files are needed: you run the code coverage in the Travis build, along with the tests, and send the data to Coveralls for tracking. The library that does all this is cl-coveralls, which provides a macro that wraps some code in coverage measuring.

To add Coveralls support to the .travis.yml file, we first set the COVERALLS environment variable to true for a particular implementation (preferably SBCL):

env:
  matrix:
    - LISP=sbcl COVERALLS=true
    - LISP=ccl

Then, we clone cl-coveralls:

install:
  # Coveralls support
  - git clone ~/lisp/cl-coveralls

In the script part, we load coveralls along with our testing framework, and then wrap the code that runs tests in the with-coveralls macro:

script:
  - cl -l fiveam -l cl-coveralls
       -e '(setf fiveam:*debug-on-error* t
                 fiveam:*debug-on-failure* t)'
       -e '(setf *debugger-hook*
                 (lambda (c h)
                   (declare (ignore c h))
                   (uiop:quit -1)))'
       -e '(coveralls:with-coveralls (:exclude (list "t"))
             (ql:quickload :my-project-test))'

Note how we used the :exclude option to prevent testing code from falling into coverage tracking.

Enabling Coveralls

The process is similar to enabling a repo for Travis:

Coveralls profile page

You flick the switch to enable or disable a repo, and if the repo is new, click on 'Sync GitHub Repos' up there near the top of the page.


Now, the whole point of this is letting users know what state the software is in. Both Travis and Coveralls give each project a status badge, an image you can put in the README to let users know upfront that the project is in working order and what its coverage status is.

Here's the Markdown for Travis and Coveralls badges:

# Project Name

[![Build Status](](
[![Coverage Status](](


Below is a (necessarily incomplete) list of projects using Travis and/or Coveralls:


  1. This is ASDF's portable tools layer. It provides a few very useful things like finding your hostname, quitting the Lisp image, or finding the system's architecture in a reliably portable way.

Zach Beane: CEPL video series on YouTube

· 53 days ago

Baggers's CEPL talk at ELS was one of the highlights. He demonstrated how a small bit of code could generate some very interesting visual effects, and it could be updated and modified on the fly from within slime.

He's started to upload new CEPL videos to YouTube. Check out his video list and subscribe if you want to keep up.

Quicklisp news: May 2015 download stats

· 54 days ago
Here are the top 100 downloads for May, 2015:
5093 alexandria
3865 babel
3442 cl-ppcre
3296 trivial-features
3109 cffi
3023 usocket
2979 cl+ssl
2821 bordeaux-threads
2723 flexi-streams
2720 trivial-gray-streams
2702 trivial-garbage
2621 cl-fad
2588 nibbles
2442 chunga
2390 closer-mop
2380 chipz
2326 cl-base64
2266 drakma
2247 split-sequence
2160 ironclad
2104 anaphora
2100 puri
1792 iterate
1758 trivial-backtrace
1658 slime
1618 local-time
1371 md5
1268 named-readtables
1152 metabang-bind
1104 hunchentoot
1071 let-plus
1067 cl-unicode
1009 cl-colors
953 cl-interpol
936 trivial-utf-8
874 cl-ansi-text
862 prove
851 plump
851 cl-utilities
849 optima
842 jsown
825 uuid
821 parse-number
816 trivial-indent
815 trivial-types
806 array-utils
804 lquery
794 postmodern
791 quicklisp-slime-helper
787 rfc2388
770 clss
766 lparallel
731 fiveam
723 ieee-floats
710 quri
696 asdf-system-connections
661 cl-annot
642 metatilities-base
641 cl-containers
603 cl-sqlite
573 cl-syntax
571 command-line-arguments
564 salza2
538 py-configparser
531 cl-json
525 cl-abnf
524 garbage-pools
523 cl-log
522 dynamic-classes
521 cl-markdown
517 cl-mssql
516 buildapp
507 cl-who
500 static-vectors
498 asdf-finalizers
491 clack
482 fast-io
468 zpng
466 cl-vectors
452 fast-http
449 proc-parse
408 esrap
400 osicat
397 trivial-shell
394 fare-utils
389 zpb-ttf
387 cl-csv
385 clx
371 vecto
364 jonathan
360 fare-quasiquote
354 parenscript
336 closure-common
333 cl-coveralls
327 xsubseq
322 stefil
319 ningle
312 cxml
309 cl-yacc
292 lack

Michael Malis: Automatically Binding Gensyms

· 55 days ago

One of the most common macros that almost everyone keeps in their utilities file is with-gensyms. With-gensyms is a macro that binds a list of variables to gensyms. That’s it! All with-gensyms does is take a list of symbols and generate code which binds each of those symbols to a gensym. Although with-gensyms is simple, it removes a lot of boilerplate code. Here is a simple implementation of with-gensyms:

(defmacro with-gensyms (vars &body body)
  `(let ,(loop for v in vars collect `(,v (gensym)))
     ,@body))

Looking at my implementation of accum, here is how one could simplify it by using with-gensyms. Pay attention to how much boiler plate is removed.

(defmacro accum (accfn &body body)
  (with-gensyms (ghead gtail garg)
    `(let* ((,ghead (list nil))
            (,gtail ,ghead))
       (macrolet ((,accfn (,garg)
                    `(setf ,',gtail
                           (setf (cdr ,',gtail)
                                 (list ,,garg)))))
         ,@body
         (cdr ,ghead)))))
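As a quick sanity check (my own example, not from the post), accum can be used like this:

```lisp
;; Example use: collect the squares of 1..5 in order. Here `collect`
;; is the local accumulation macro that accum creates via macrolet.
(accum collect
  (dotimes (i 5)
    (collect (* (1+ i) (1+ i)))))
;; => (1 4 9 16 25)
```

The head/tail pair of gensyms is what lets accum append to the end of the list in constant time instead of pushing and reversing.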

By removing so much boilerplate, with-gensyms helps greatly reduce the cognitive load in certain cases. This will be important when I introduce once-only, the next macro I plan to talk about. There are also other variations of with-gensyms, such as the one in Alexandria, which makes it easier to have base names associated with the gensyms created.
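To illustrate that Alexandria variant: as I understand its API, each name can also be a (symbol string-designator) pair, so the fresh symbols carry readable prefixes, which makes macro expansions much easier to read:

```lisp
;; Alexandria's with-gensyms accepts (symbol string-designator) pairs;
;; each fresh symbol is named with the given prefix.
(alexandria:with-gensyms ((head "HEAD") (tail "TAIL"))
  ;; head and tail are now bound to symbols like #:HEAD123 and #:TAIL124
  (list head tail))
```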

The post Automatically Binding Gensyms appeared first on Macrology.

Zach Beane: Corman Lisp 3.02 now available

· 56 days ago

Thanks to some great work by Chun "binghe" Tian and testing and release wrangling by Luís Oliveira, you can now download a Corman Lisp zip file from GitHub and run it directly on modern Windows systems without building anything or entering a license code. Everything I just tried on my Windows 8.1 VM on VirtualBox worked fine right out of the zip.

Thanks, binghe and luis!

Christophe Rhodes: lots of jobs in computing at goldsmiths

· 57 days ago

There's an awkward truth that perhaps isn't as well known as it ought to be about academic careers: an oversupply of qualified, motivated candidates chasing an extremely limited supply of academic jobs. Simplifying somewhat, the problem is: if one tenured professor (or "lecturer" as we call them in the UK) is primarily responsible for fifteen PhDs over their career, then fourteen of those newly-minted doctors will not get permanent jobs in academia.

That wouldn't be a problem if the career aspirations of people studying for doctorates were in line with the statistics - if about one in ten to one in twenty PhD candidates wanted a job in academia, then there would be minimal disappointment. However, that isn't the case; many more doctoral students have the ambition and indeed the belief to go on and have an academic career: and when belief meets reality, sometimes things break. Even when they don't, the oversupply of idealistic, qualified and motivated candidates leads to distortions, such as a large number of underpaid sessional teaching staff, assisting in the delivery of courses to ever larger cohorts of students (see also). The sector hasn't sunk as low as the "unpaid internship" seen in other oversupplied careers (games, journalism, fashion) - though it has come close, and there are some zero-hour contract horror stories out there, as well as the nigh-on-exploitative short-term postdocs that are also part of the pyramid.

All this is a somewhat depressing way to set the scene for our way of redressing the balance: Goldsmiths Computing is hiring to fill a number of positions. Some of the positions are traditional lecturer jobs - fixed-term and permanent - and while they're good openings, and I look forward to meeting candidates and working with whoever is successful, they're not what's most interesting here. We have also allocated funds for a number of post-doctoral teaching and research fellowships: three year posts where, in exchange for helping out with our teaching, the fellows will be able to pursue their own research agenda, working in collaboration with (but not under the direction of) established members of staff. I think this is a hugely positive move, and a real opportunity for anyone interested in the particular kinds of areas of Computing that we have strengths in at Goldsmiths: Games and Graphics, Music and Art Computing, Data and Social Computing, Human-Computer Interaction and AI, Robotics and Cognition. (And if applicants were to want to work with me on projects in Music Informatics or even involving some programming language work, so much the better!)

The complete list of positions we're hoping to fill (apply by searching for the "Computing" Department in this search form) is:

  • Lecturer in Computational Art - 0.5FTE, 3 year fixed-term
  • Lecturer in Computer Science - full-time, 3 year fixed-term
  • Lecturer in Computer Science - 0.5FTE, 3 year fixed-term
  • Lecturer in Games and Graphics - full-time, open-ended
  • Lecturer in Games Art - 0.5FTE, open-ended
  • Lecturer in Physical Computing - full-time, open-ended
  • Post-doctoral Teaching and Research Fellow - full-time, 3 year fixed-term

The deadline for applications for most of these posts is Monday 8th June, so get applying!

For older items, see the Planet Lisp Archives.

Last updated: 2015-07-22 08:00