Planet Lisp

ECL News: ECL Quarterly Volume IV

· 11 days ago

1 Preface


I've managed to assemble the fourth volume of the ECL Quarterly. As always it's a bit off schedule, but I hope you'll find it interesting.

This issue will revolve around ECL news, some current undertakings and plans. Additionally, we'll talk about Common Lisp implementations in general, and about portability layers. I believe it is important to keep things portable. Why? Keep reading!

Lately we have been working with David O'Toole on improving ECL's support for Android. He wants to distribute his games on this platform, and was kind enough to write an article for ECL Quarterly. Thanks to his work we've discovered various rough edges and bugs in ECL, and gained some invaluable insight into the cross-compilation problems of Common Lisp applications.

As a final remark: I've found some time to establish a proper RSS feed for ECL and ECL Quarterly. I hope that this issue will finally land on Planet Lisp - a well-known aggregator of Lisp-related blog posts maintained by Zach Beane.

I want to thank the many people who provided valuable feedback and proofreading, especially Antoni Grzymała, Javier Olaechea, Michał Posta, Ilya Khaprov and David O'Toole.

Enjoy the read,

Daniel Kochmański ;; aka jackdaniel | TurtleWare
Poznań, Poland
June 2016

2 ECL's "what's going on"

I've added a milestone with a deadline for the ECL 16.1.3 release, listing the bugs I want to fix. You may find it here. I'm very happy to receive a lot of positive feedback, merge requests and awesome bug reports. Thank you for that! :-)

Backporting CLOS changes from Clasp was successful, but we won't incorporate them into the main branch. The recently resurrected cl-bench has shown that these changes impact performance and consing negatively (check the benchmarks). If you are curious about the changes, you may check out the branch feature-improve-clos in the repository.

I'm slowly working on the new documentation. This is a very mundane task which I'm not sure I'll be able to finish. Rewriting the DocBook documentation in Texinfo and filling in the missing parts is hard. I'm considering giving up and improving the DocBook sources instead.

In the near future I plan to run a crowdfunding campaign to improve support for cross-compilation, Android and Java interoperability, in order to boost development. More details will probably be covered in the next Quarterly issue.

3 Porting Lisp Games to Android with Embeddable Common Lisp, Part 1

3.1 Introduction

Recently I ported my Common Lisp game engine "Xelf" to the Android operating system using Embeddable Common Lisp.

Some work remains to be done before I can do a proper beta test release, but ECL Quarterly provides a good opportunity to pause and share the results thus far. This is the first part of a two-part article. The focus of Part 2 will be on performance optimization, testing, and user interface concerns.

Special thanks to Daniel Kochmański, 3-B, oGMo, and the rest of the Lisp Games crew for their inspiration and assistance.

3.1.1 About the software

Xelf is a simple 2-D game engine written in Common Lisp. It is the basis of all the games I have released since 2008, and can currently be used with SBCL to deliver optimized standalone game executables for GNU/Linux, MS Windows, and Mac OS X.

I've also published a Git repository with all the work-in-progress scripts, patches, and libraries needed to compile Xelf for Android with Embeddable Common Lisp, OpenGL, and SDL.

Please note that this is a pre-alpha release and is mainly intended for Common Lisp developers looking to get a head start in building an Android game. Use with caution.

Xelf is not required; you can substitute your own Lisp libraries and applications and just use the repo as a springboard.

I would like to add support for CL-SDL2 as well, both as a prelude to porting Xelf to SDL 2.0, and as a way to help the majority who use SDL 2.0 for current projects.

3.2 Problems

3.2.1 Choosing an implementation

As I use only Free Software for my projects, I did not consider any proprietary Lisps.

Steel Bank Common Lisp now runs on Android, but SBCL as a whole cannot yet be loaded as a dynamic shared library. This is a show-stopper because Android requires the entry point of a native application to be in a shared library specially embedded in the app.

Xelf works very well with Clozure Common Lisp, but CCL's Android support is not fully functional at present. So I've been quite happy to discover Embeddable Common Lisp. Its technique of translating Common Lisp into plain C has made integration with the Android NDK toolchain relatively simple.

3.2.2 Cross-compilation

For performance reasons the Lisp stack (meaning LISPBUILDER-SDL, CL-OPENGL, CFFI, Xelf, the game, and all their dependencies) must be compiled to native ARM machine code and loaded as shared libraries.

There is a complication in this task as regards ECL. The latter produces native code by translating Common Lisp into plain C, and then invoking the C compiler. But the C compiler toolchain is not typically present on Android, and building one that is properly configured for this task has proved difficult so far.

Therefore we must cross-compile the entire Lisp stack. ECL's Android build procedure already cross-compiles the Lisp contained in ECL, but there were additional difficulties in compiling Lisp libraries which I'll cover below in the "Solutions" section.

3.2.3 Legacy code

Xelf has improved a lot over time and gained new features, but is now outdated in some respects. When I first wrote Xelf in the 2006-2007 period SDL 1.2 was current and OpenGL Immediate mode had not yet been officially deprecated. This hasn't been a terrible problem in practical terms, given that both are still widely supported on PC platforms. But porting to Android would mean I could not procrastinate any longer on updating Xelf's SDL and OpenGL support.

3.3 Solutions

3.3.1 CommanderGenius to the rescue

Help arrived for my SDL woes in the form of Sergii Pylypenko's "CommanderGenius", a fancy port of SDL 1.2/2.0 to Android. I can utilize the existing LISPBUILDER-SDL bindings for SDL, SDL-MIXER, SDL-TTF, SDL-IMAGE, and SDL-GFX. Not only that, there are extra features such as gamepad support, floating virtual joysticks, access to touchscreen gesture data and Android system events, support for the Android TV standard, and much more.

CommanderGenius is actually designed from the start to rebuild existing SDL 1.2 / 2.0 / OpenGL projects as Android applications, and includes dozens of examples to work with. So in mid-May this year I set about splicing Daniel Kochmański's ECL-ANDROID Java wrapper and startup code (which together load ECL as a shared object from within the app) into the CommanderGenius SDL application code and build procedures.

The result is a fullscreen SDL/OpenGL application with Embeddable Common Lisp, optionally running Swank. There's even a configurable splash screen!

3.3.2 Do a little dance with ASDF

ECL can compile an entire system into one FASL file, but I ran into a snag with the ASDF-based build procedure. The typical way is to compile each Lisp file and then load the resulting compiled file. But on the cross-compiler,

(load (compile-file "myfile.lisp"))

fails because the output of COMPILE-FILE is a binary for the wrong architecture. Likewise, alien shared libraries cannot be loaded during Lisp compilation, which broke CL-OPENGL and LISPBUILDER-SDL.

My temporary solution was to redefine the function ASDF:PERFORM-LISP-LOAD-FASL in my build script. My modified version does something like this instead:

(compile-file "myfile.lisp")
(load "myfile.lisp")

I then invoke ECL's system builder, which spits out a big binary FASB file containing the whole system. But thanks to the LOAD statements, each Lisp file has had access to the macros and other definitions that preceded it in compilation.
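For reference, the override can be sketched roughly as follows. This is a hypothetical reconstruction, not the actual build script; it assumes ASDF 3's ASDF:PERFORM-LISP-LOAD-FASL and the standard component accessors:

```lisp
;; Hypothetical sketch: compile for the target, but LOAD the *source* file
;; into the host image so that later files see the macros and definitions.
(defun asdf:perform-lisp-load-fasl (operation component)
  (declare (ignore operation))
  ;; Instead of loading the ARM fasl we just produced, load the Lisp source.
  (load (asdf:component-pathname component)))
```

Redefining an ASDF internal like this is of course fragile across ASDF versions.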

I'm sure this is really wrong, but it works, and the resulting FASBs load very quickly. (App startup time went from over 30 seconds when loading byte-compiled FASCs, to about 3.5 seconds.)

In the end, it was simple to deal with CL-OPENGL and LISPBUILDER-SDL wanting to open shared libraries during compilation. I used Grep to find and then comment out calls to CFFI:USE-FOREIGN-LIBRARY, leaving the DEFINE-FOREIGN-LIBRARY definitions intact. This allows cross-compilation to proceed normally.

Then on Android, after the FASBs are loaded I invoke USE-FOREIGN-LIBRARY on each of the required definitions.
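In code, the runtime side looks roughly like this (the library name and clauses are illustrative, not the actual Xelf sources):

```lisp
;; The DEFINE-FOREIGN-LIBRARY forms survived cross-compilation; only the
;; load-time USE-FOREIGN-LIBRARY calls were commented out.  On the device,
;; after the FASBs are loaded, each definition is activated by hand.
(cffi:define-foreign-library libsdl
  (:android "libSDL.so")          ; illustrative path
  (t (:default "libSDL")))

(defun load-native-libraries ()
  (cffi:use-foreign-library libsdl))
```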

So tricking ASDF works. But aside from being a hack, it's not enough for some of the things I'd like to do. The INLINED-GENERIC-FUNCTION technique looks like a highly promising way to increase performance, but in this case my cross-compilation trick led to invalid FASBs with embedded FASC bytecodes. Indeed, making this work with ECL would require actually loading the ARM-architecture compiled INLINED-GENERIC-FUNCTION binary before compiling systems that use inlining - which, as mentioned above, cannot be done during cross-compilation.

I'm exploring other potential solutions, such as installing a GNU/Linux container on my Android development unit in order to give ECL access to a native C compiler toolchain (see below). I may even attempt to write a custom cross-compilation procedure using Clang and LLVM. But this is less urgent for now, because tweaking ASDF is sufficient to produce a working application.

3.3.3 Use OpenGL ESv1 with CL-OPENGL

Luckily the path of least resistance could prevail here. OpenGL ES version 1 is widely supported on Android devices, and is easier to port to from Immediate mode than GLESv2 is. CL-OPENGL supports it right out of the box. (I'd like to thank 3-B and oGMo for their help in bridging the gap with my own code.)

Some tasks remain to be done here but most of Xelf's drawing functions are now working, including TrueType fonts and vertex coloring.

I've also written some code to partially emulate vertex coloring as a way of increasing render performance, and this will be covered in the forthcoming Part 2 of this article.

3.3.4 ProTip: Use the byte-compiler

One issue has gone unmentioned. How do I interactively redefine functions and set variables in order to develop the running game via SLIME/Swank, if everything must be cross-compiled on an x86 system?

The answer is that ECL's built-in bytecode compiler is used in these cases, and the bytecoded definitions replace the originals. I can freely use COMPILE-FILE, LOAD, and even ASDF:LOAD-SYSTEM during "live" development; under normal circumstances the only real difference is execution speed of the resulting code. The final game app will ship without Swank, of course, and with a fully native Lisp stack.
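ECL exposes this switch directly. A minimal sketch (EXT:INSTALL-BYTECODES-COMPILER and EXT:INSTALL-C-COMPILER are provided by ECL, but check your version's manual):

```lisp
;; Select the portable bytecodes compiler for live development...
(ext:install-bytecodes-compiler)
(defun player-speed () 3.5)   ; redefinitions are now byte-compiled

;; ...and switch back to the C compiler when a toolchain is available:
;; (ext:install-c-compiler)
```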

Now you have a new problem: how to edit the Lisp files on your Android device so that Swank can compile and load them.

3.3.5 ProTip: Use Emacs TRAMP with ADB

To make this useful you need a rooted Android device. With the following configuration, Emacs' TRAMP can open files on the device through adb:

(add-to-list 'tramp-default-user-alist '("adb" nil "root"))
(find-file "/adb::/")

This can integrate with Emacs' "bookmarks" and "desktop" features for even more convenience.

3.3.6 ProTip: Use Emacs to inspect your APK package

They're just zip files. Missing libraries or assets? Check the APK by opening it as a file in GNU Emacs.

3.3.7 ProTip: Use a GNU/Linux container for SSH and native Emacs with X11!

You can actually install a GNU/Linux "container" with Debian, Ubuntu, or some other distribution on your Android development system in order to run the Secure Shell daemon and many other applications. I use it to run a graphical Emacs on the Android box, with Emacs' X11 connection forwarded through SSH so that its windows open on my desktop GNU/Linux PC's X server - right alongside my native Emacs. I use different color themes to avoid mixing them up.

This gives me full access to everything on both systems from a single mouse/keyboard/monitor, and I can cut and paste text freely between applications.

Setting up such a container is beyond the scope of this article, but I highly recommend it. It was pretty easy on a rooted device, and works very well.

3.4 Conclusion

In less than a month we went from "let's do it" to "wow, it works!" What more can you ask for?

This concludes Part 1 of my article on building Lisp games for Android with Embeddable Common Lisp. To read my running commentary and see news and test results as they are posted, you can visit the project README.

More details and all scripts and configurations can be found in that repository.

Thanks for reading,

David O'Toole
11 June, 2016

4 Common Lisp implementations

Some time ago, with the help of many kind people (most notably Rainer Joswig and Faré Rideau), I created a graph presenting Common Lisp implementations and the relations between them. This version is improved over the drafts presented on Twitter and LinkedIn. If you find any errors, please contact me.


It is worth noting that LispWorks and VAX LISP share code with Spice Lisp, which later evolved into the Common Lisp implementation CMUCL. Dashed lines lead to CMUCL, because I didn't want to add pre-CL implementations.

There is also a suspicion that Lucid shares code with Spice Lisp and/or VAX LISP, but I couldn't confirm that, so I'm leaving it as is.

"JavaScript Lisp Implementations" classifies some Lisps as CL, but I've added only Acheron and Parenscript to the list, because the rest are just CL-ish, not even being subsets.

Resources I've found on the internet: CMU FAQ, ALU list, CLiki overview, Wikipedia article, JavaScript Lisp Implementations.

5 Building various implementations

I've built various Lisps to perform some benchmarks and to have some material for comparison. Ultimately I decided to polish it a little and publish it. I had some problems with Clasp and Mezzano, so I decided not to include them and to leave building these as an exercise for the reader ;-). Also, if you feel adventurous, you may try to build Poplog, which has Common Lisp as one of its supported languages.

If you want to read about the various implementations, please consult Daniel Weinreb's Common Lisp Implementations: A Survey (material from 2010, definitely worth reading).

First we create a directory for the Lisp implementations (we'll build as an ordinary user) and download the sources. Each implementation has a list of build prerequisites, but it may not be comprehensive.

export LISPS_DIR=${HOME}/lisps
mkdir -p ${LISPS_DIR}/{src,bin}
pushd ${LISPS_DIR}/src

# Obtain sources
svn co abcl
# git clone
svn co ccl
hg clone clisp
git clone cmucl
git clone ecl
git clone git:// gcl
git clone jscl
# git clone
git clone mkcl
git clone git:// sbcl
git clone wcl
git clone

5.0.1 ABCL (Armed Bear Common Lisp)

jdk, ant
pushd abcl
ant
cp abcl ${LISPS_DIR}/bin/abcl-dev

5.0.2 CCL (Clozure Common Lisp)

gcc, m4, gnumake
pushd ccl
echo '(ccl:rebuild-ccl :full t)' | ./lx86cl64 -n -Q -b

# installation script is inspired by the AUR's PKGBUILD
mkdir -p ${LISPS_DIR}/ccl-dev
cp -a compiler contrib level-* lib* lisp-kernel objc-bridge \
   tools x86-headers64 xdump lx86cl64* examples doc \
   ${LISPS_DIR}/ccl-dev/

find ${LISPS_DIR}/ccl-dev -type d -name .svn -exec rm -rf '{}' +
find ${LISPS_DIR}/ccl-dev -name '*.o' -exec rm -f '{}' +
find ${LISPS_DIR}/ccl-dev -name '*.*fsl' -exec rm -f '{}' +

cat <<EOF > ${LISPS_DIR}/bin/ccl-dev
#!/bin/sh
exec ${LISPS_DIR}/ccl-dev/lx86cl64 "\$@"
EOF
chmod +x ${LISPS_DIR}/bin/ccl-dev

5.0.3 CLISP

gcc, make
don't build with ASDF (it's old and broken)
pushd clisp
./configure --prefix=${LISPS_DIR}/clisp-dev/ \
            --with-threads=POSIX_THREADS
cd build
make && make install
ln -s ${LISPS_DIR}/clisp-dev/bin/clisp ${LISPS_DIR}/bin/clisp-dev

5.0.4 CMUCL (CMU Common Lisp)

cmucl binary, gcc, make, openmotif
it needs another CMUCL to bootstrap (release 21a)
pushd cmucl
mkdir -p prebuilt
pushd prebuilt
wget \
mkdir ${LISPS_DIR}/cmucl-21a
tar -xf cmucl-21a-x86-linux.tar.bz2 -C ${LISPS_DIR}/cmucl-21a/
tar -xf cmucl-21a-x86-linux.extra.tar.bz2 -C ${LISPS_DIR}/cmucl-21a/
cat <<EOF > ${LISPS_DIR}/bin/cmucl-21a
#!/bin/sh
exec ${LISPS_DIR}/cmucl-21a/bin/lisp "\$@"
EOF
chmod +x ${LISPS_DIR}/bin/cmucl-21a
# Note that this is already a fully functional Lisp
bin/ -C "" -o "cmucl-21a"
bin/ -I ${LISPS_DIR}/cmucl-dev/ linux-4/
cat <<EOF > ${LISPS_DIR}/bin/cmucl-dev
#!/bin/sh
exec ${LISPS_DIR}/cmucl-dev/bin/lisp "\$@"
EOF
chmod +x ${LISPS_DIR}/bin/cmucl-dev

5.0.5 ECL (Embeddable Common Lisp)

gcc, make
pushd ecl
./configure --prefix=${LISPS_DIR}/ecl-dev/
make && make install
ln -s $LISPS_DIR/ecl-dev/bin/ecl ${LISPS_DIR}/bin/ecl-dev

5.0.6 JSCL (JavaScript Common Lisp)

Conforming CL implementation, web browser, nodejs
Doesn't provide LOAD yet (no filesystem), but the author confirmed that this will be implemented (a virtual filesystem in the browser, and the physical one on nodejs).
mkdir ${LISPS_DIR}/jscl-dev
pushd jscl

# Run in the console (node-repl)
cp jscl.js repl-node.js ${LISPS_DIR}/jscl-dev/
cat <<EOF > ${LISPS_DIR}/bin/jscl-dev
#!/bin/sh
exec node ${LISPS_DIR}/jscl-dev/repl-node.js
EOF
chmod +x ${LISPS_DIR}/bin/jscl-dev

# Run in the web browser (optional)
cp jscl.js repl-web.js jquery.js jqconsole.min.js jscl.html style.css \
   ${LISPS_DIR}/jscl-dev/
# replace surf with your favourite browser supporting JS
cat <<EOF > ${LISPS_DIR}/bin/jscl-dev-browser
#!/bin/sh
exec surf ${LISPS_DIR}/jscl-dev/jscl.html
EOF
chmod +x ${LISPS_DIR}/bin/jscl-dev-browser


5.0.7 GCL (GNU Common Lisp)

gcc, make
# Doesn't work with either HEAD or the release; luckily the next
# pre-release branch works
pushd gcl
git checkout Version_2_6_13pre
./configure --prefix=${LISPS_DIR}/gcl-2.6.13-pre
make && make install
ln -s ${LISPS_DIR}/gcl-2.6.13-pre/bin/gcl ${LISPS_DIR}/bin/gcl-2.6.13-pre

5.0.8 MKCL (ManKai Common Lisp)

gcc, make
pushd mkcl
./configure --prefix=${LISPS_DIR}/mkcl-dev
make && make install
ln -s ${LISPS_DIR}/mkcl-dev/bin/mkcl ${LISPS_DIR}/bin/mkcl-dev

5.0.9 SBCL (Steel Bank Common Lisp)

ANSI-compliant CL implementation
  • the host Lisp has to exit on EOF at the top level (CMUCL doesn't do that),
  • ECL has a bug in its Lisp-to-C compiler, apparently triggered by the SBCL compilation - don't use it here,
  • we could use a precompiled SBCL, as with CMUCL, but let's exploit the fact that we can bootstrap from a C-bootstrapped implementation (we'll use the already-built clisp-dev),
  • it is advisable to run the script in a fast terminal (like xterm), or in a terminal multiplexer and detached - the SBCL compilation process is very verbose,
  • if you build SBCL on Windows, consider using MinGW to preserve POSIX compatibility.
pushd sbcl
export GNUMAKE=make
./ "clisp"
cat <<EOF > ${LISPS_DIR}/bin/sbcl-dev
#!/bin/sh
SBCL_HOME=${LISPS_DIR}/sbcl-dev/lib/sbcl exec ${LISPS_DIR}/sbcl-dev/bin/sbcl "\$@"
EOF
chmod +x ${LISPS_DIR}/bin/sbcl-dev

5.0.10 WCL

tcsh, gcc, git
a very incomplete implementation
pushd wcl
REV=`git rev-parse HEAD`
sed -i -e "s/WCL_VERSION = \"3.0.*$/WCL_VERSION = \"3.0-dev (git-${REV})\"/" CONFIGURATION
LD_LIBRARY_PATH=`pwd`/lib make rebuild
mkdir ${LISPS_DIR}/wcl-dev
cp -a bin/ lib/ doc/ ${LISPS_DIR}/wcl-dev/
cat <<EOF > ${LISPS_DIR}/bin/wcl-dev
#!/bin/sh
LD_LIBRARY_PATH=${LISPS_DIR}/wcl-dev/lib exec ${LISPS_DIR}/wcl-dev/bin/wcl "\$@"
EOF
chmod +x ${LISPS_DIR}/bin/wcl-dev

5.0.11 XCL

last commit in 2011
pushd xcl
mkdir ${LISPS_DIR}/xcl-dev
XCL_HOME=${LISPS_DIR}/xcl-dev make
cp -a clos compiler lisp COPYING README xcl ${LISPS_DIR}/xcl-dev
# This will build in XCL_HOME, even if run in source directory
./xcl <<EOF
EOF

ln -s ${LISPS_DIR}/xcl-dev/xcl ${LISPS_DIR}/bin/xcl-dev

6 Portability libraries

It is important to know the difference between the language standard, implementation-specific extensions and the portability libraries. The language standard is something you can depend on in any conforming implementation.

Sometimes it's just not enough. You may want to do *threading*, or to *serialize data*, which is very hard (or even impossible) to express in the language provided by the standard. That's where the implementation-specific extensions kick in. Why are they called "implementation-specific"? Because the API may differ between implementations - reaching consensus is a hard thing1.

The most straightforward approach I can imagine is to reach for the documentation of the Common Lisp implementation you are currently using and to use the API provided by that implementation. Please don't do that! It's definitely the easiest thing to do at first, but mind the consequences. You lock yourself, and your users, into the implementation you prefer. What if you later want to run it on the JVM, or to make it a shared library? Nope, you're locked in.

"What can I do then?" - you may ask. Before I answer this question, I'll tell you how many people do it (or did it in the past) - they use read-time conditionals directly in the code. Something like the following:

(defun my-baz ()
  #+sbcl                        (sb-foo:do-baz-thing 'quux)
  #+ccl                         (ccl:baz-thing       'quux)
  #+(and ecl :baz-thing)        (ext:baz             'quux)
  #+abcl                        (ext:baz             'quux)
  #+(and clisp :built-with-baz) (ext:baz-thingie     'quux)
  #-(or sbcl ccl ecl abcl clisp)
  (error "Your implementation isn't supported. Fix me!"))

If the creator felt more fancy and had some extra time, they put it in a package like my-app-compat. It's all great: now your application works on all supported implementations. If somebody wants their implementation to work, they send the creator a patch, who incorporates it into the code, and voilà, everything works as desired.
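Such a compat package might look like the following sketch; MY-APP-COMPAT and the BAZ operators are hypothetical, just as in the snippet above:

```lisp
;; Hypothetical wrapper package: the rest of the application calls
;; MY-APP-COMPAT:MY-BAZ and never touches implementation internals.
(defpackage #:my-app-compat
  (:use #:cl)
  (:export #:my-baz))

(in-package #:my-app-compat)

(defun my-baz ()
  #+sbcl (sb-foo:do-baz-thing 'quux)
  #+ccl  (ccl:baz-thing 'quux)
  #-(or sbcl ccl)
  (error "MY-BAZ is not implemented for ~a" (lisp-implementation-type)))
```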

We have one problem however. Libraries tend to depend on one another. There is also a lot of software which uses features beyond the ANSI specification (it's all good, programmers need these!). Do you see code duplication everywhere? How many times does a snippet above have to be copy-pasted, or rewritten from scratch? It's not black magic after all. APIs between ad-hoc implementations don't exactly match, covered CL implementations differ…

So you quickload your favorite library, which depends on 10 other libraries, each implementing the BAZ functionality in its own unique way, with a slightly different API, on the unsupported implementation - that's why we have the my-baz abstraction after all, right? Now, to make it work, a user has to:

  1. Find which of the ten libraries don't work (not trivial!),
  2. find and clone the repositories (we want to use git for patches),
  3. fix each one of them (grep helps!) and commit the changes,
  4. push the changes to their own forked repositories and create pull requests (or send diffs to the mailing lists) - *ten times*,
  5. voilà, you're done, profit, get rich, grab a beer.

It's a lot of work which the user probably won't bother to do. They will just drop the task, choose another implementation, or hack their own code, creating the Yet Another Baz Library for the implementations they care about - reinventing the wheel once more. It's a hacker's mortal sin.

I'm going to tell you now what the Right Thing™ is here. Of course you are free to disagree. When you feel that there is functionality you need which isn't covered by the standard, you should:

  1. Look if there is a library which provides it.

    You may ask on IRC, the project's mailing list, check out the CLiki, do some research on the web. Names sometimes start with trivial-*, but it's not a rule. In other words: do your homework.

  2. If you can't find such a library, create one.

    And by creating such a library I mean comparing the API proposed by at least two CL implementations (three would be optimal IMHO), carefully designing your own API which covers the functionality (if it's trivial, this should be easy) and implementing it in your library.

    Preferably (if possible) add a fallback implementation for implementations not covered (with the appropriate warning, that it may be inefficient or not complete in one way or another).

    It may be worth reading the Maintaining Portable Lisp Programs paper written by Christophe Rhodes.

  3. Write beautiful documentation.

    A CL implementation's docs may be very rough. It takes time to write them, and programmers tend to prioritize code over documentation. It's really bad, but it's very common for documentation to be incomplete or outdated.

    Document your library: describe what it does and how to use it. Don't be afraid of greatness! People will praise you, success will come, the world will be a better place. And most importantly, your library will be useful to others.

  4. Publish the library.
  5. Make that library your project's dependency.

I know it's not easy, but in the long term it's beneficial. I guarantee you that. That's how the ecosystem grows. Less duplication, more cooperation - pure benefit.

Some people don't follow this path. They didn't think it through; or maybe they did, and decided that keeping the dependency list minimal is essential to their project; or they were simply lazy and hacked up their own solution. There are also some old projects which export a number of features, being a very big portability library and an application at the same time (ACL-compat, McCLIM and others). What to do then?

If it's a conscious decision of the developer (who doesn't want to depend on anything), you can do nothing but provide a patch adding your own implementation to the supported list. It's their project, their choice; we have to respect that.

But before doing that you may simply ask if they have something against plugging these hacks with the proper portability library. If they don't - do it, everybody will benefit.

There are a few additional benefits of the presented portability-library approach for the implementations themselves. Having these internal details in one place makes it more probable that your implementation is already supported. If the library has a bug, it's easier to fix it in one place. Also, if a CL implementation changes its API, it's easy to propagate the changes to the corresponding portability libraries. And creators of new CL implementations have a simplified task of making their work usable with existing libraries.

It is worth noting that creating such a library paves the way to new quasi-standard functionality. For instance, Bordeaux Threads recently added the CONDITION-WAIT function, which isn't implemented on all implementations - a very good stimulus to add it. This is how library creators may have real impact on implementers' decisions about what to implement next.
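The canonical use of that API looks roughly like this (a minimal sketch, assuming a Bordeaux Threads recent enough to export CONDITION-WAIT):

```lisp
;; One thread signals a condition variable; another waits for it.
(defvar *lock* (bt:make-lock))
(defvar *cv* (bt:make-condition-variable))
(defvar *ready* nil)

(bt:make-thread
 (lambda ()
   (bt:with-lock-held (*lock*)
     (setf *ready* t)
     (bt:condition-notify *cv*))))

(bt:with-lock-held (*lock*)
  (loop until *ready*                 ; guard against spurious wakeups
        do (bt:condition-wait *cv* *lock*)))
```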

6.1 Portability layer highlights

Here are some great projects helping CL implementations be part of a more usable ecosystem. Many of these are considered part of the de-facto standard:

bordeaux-threads - provides thread primitives, locks and condition variables
cl-store - serializing and deserializing CL objects from streams
CFFI - foreign function interface (accessing foreign libraries)
closer-mop - meta-object protocol; provides its own closer-common-lisp-user package (redefines, for instance, defmethod)
usocket - TCP/IP and UDP/IP socket interface
Osicat - a lightweight operating system interface for Common Lisp on POSIX-like systems, including Windows
cl-fad - portable pathname library
trivial-garbage - provides a portable API to finalizers, weak hash tables and weak pointers
trivial-features - ensures consistent *FEATURES* across multiple Common Lisp implementations
trivial-gray-streams - provides an extremely thin compatibility layer for Gray streams
external-program - enables running programs outside the Lisp process
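To illustrate what such a layer buys you, here is a minimal usocket sketch: the same code runs on every supported implementation, instead of calling sb-bsd-sockets, CCL's sockets, etc. directly (the host and port are only for illustration):

```lisp
(ql:quickload :usocket)

(usocket:with-client-socket (socket stream "example.com" 80)
  (declare (ignorable socket))
  (format stream "HEAD / HTTP/1.0~C~C~C~C"
          #\Return #\Linefeed #\Return #\Linefeed)
  (force-output stream)
  (read-line stream))
```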

There are many other very good libraries which span multiple implementations. Some of them have some drawbacks though.

For instance, IOlib is a great library, but piggy-backs heavily on UN*X - if you develop for many platforms you may want to consider other alternatives.

UIOP is also a very nice set of utilities, but isn't documented well, does too many things at once and tries to deprecate other actively maintained projects - that is counterproductive and socially wrong. I'd discourage using it.

There are a few arguments supporting UIOP's state: it is a direct dependency of ASDF, so it can't (or doesn't want to) depend on other libraries, yet many utilities are needed by this commonly used system-definition library. My reasoning here is as follows: UIOP goes beyond ASDF's requirements and tries to make actively maintained projects obsolete. Additionally, it works only on its supported implementations, even for features which could be implemented portably.

6.2 UIOP discussion

I'm aware that my opinion regarding UIOP may be a bit controversial. I've asked the library author and a few other people for feedback which I'm very grateful for. I'm publishing it here to keep opinions balanced.

6.2.1 Faré Rideau

Dear Daniel,

while there is a variety of valid opinions based on different interests and preferences, I believe your judgment of UIOP is based on incorrect premises.

First, I object to calling UIOP "not well documented". While UIOP isn't the best documented project around, all its exported functions and variables have pretty decent DOCSTRINGs, and there is at least one automatic documentation extractor, HEΛP, that can deal with the fact that UIOP is made of many packages and extract the docstrings into a set of web pages, with a public heλp site listed in the UIOP README. The fact that some popular docstring extractors such as quickdocs can't deal with the many packages that UIOP creates with its own uiop:define-package doesn't mean that UIOP is less documented than other projects on which these extractors work well; it's a bug in these extractors.

Second, regarding the deprecation of other projects: yes, UIOP does try to deprecate other projects, but (a) it's a good thing, and (b) I don't know that any of the projects being deprecated is "actively maintained". It's a good thing to try to deprecate other lesser libraries, as I've argued in my article Consolidating Common Lisp libraries: whoever writes any library should work hard so it will deprecate all its rivals, or so that a better library will deprecate his and all rivals (such as optima deprecating my fare-matcher). That's what being serious about a library is all about. As for the quality of the libraries I'm deprecating, one widely-used project the functionality of which is completely covered by UIOP is cl-fad. cl-fad was a great improvement in its day, but some of its API is plain broken (e.g. the :directories argument to its walk-directory function has values with bogus names, while its many pathname manipulation functions get things subtly wrong in corner cases), and its implementation not quite as portable as UIOP (that works on all known actively used implementations). There is no reason whatsoever to ever choose cl-fad over UIOP for a new project. Another project is trivial-backtrace. I reproduced most of its functionality, except in a more stable, more portable way (to every single CL implementation). The only interface I didn't reproduce from it is map-backtrace, which is actually not portable in trivial-backtrace (only for SBCL and CCL), whereas serious portable backtrace users will want to use SLIME's or SLY's API, anyway. As for external-program, a good thing it has for it is some support for asynchronous execution of subprocesses; but it fails to abstract much over the discrepancies between implementations and operating systems, and is much less portable than uiop:run-program (as for trivial-shell, it just doesn't compete).

UIOP is also ubiquitous in a way that other libraries aren't: all implementations will let you (require "asdf") out of the box at which point you have UIOP available (exception: mostly dead implementations like Corman Lisp, GCL, Genera, SCL, XCL, may require you to install ASDF 3 on top of their code; still they are all supported by UIOP, whereas most portability libraries don't even bother with any of them). This ubiquity is important when writing scripts. Indeed, all the functionality in UIOP is so basic that ASDF needed it at some point — there is nothing in UIOP that wasn't itself required by some of ASDF's functionality, contrary to your claim that "UIOP goes beyond ASDF's requirements" (exception: I added one function or two to match the functionality in cl-fad, such as delete-directory-tree which BTW has an important safeguard argument :validate; but even those functions are used if not by ASDF itself, at least by the scripts used to release ASDF itself). I never decided "hey, let's make a better portability library, for the heck of it". Instead, I started making ASDF portable and robust, and at some point the portability code became a large chunk of ASDF and I made it into its own library, and because ASDF is targeting 16 different implementations and has to actually work on them, this library soon became much more portable, much more complete and much more robust than any other portability library, and I worked hard to achieve feature parity with all the libraries I was thereby deprecating.

Finally, a lot of the functionality that UIOP offers is just not offered by any other library, much less with any pretense of universal portability.

6.2.2 David Gu

For the documentation thing, I really think Quickdocs could do a better job. Bug #24 stated that problem; however, it remains unsolved. I will look into it when I have some free time.

I used UIOP a lot at my previous company. The reason is simple and maybe a little naive: my manager didn't want to involve too many add-ons in the software. UIOP is shipped together with ASDF, so it's really "convenient", and its robustness is the final reason why I will stick with it. If people understand how UIOP emerged historically, from ASDF 2 to ASDF 3, I think they will understand why it acts like it is deprecating several other projects – that was never its original idea.

But anyway, I really learned a lot from this post and also the comments. In my opinion, avoiding reinventing the wheel is the right direction for this community. So from that perspective, I support @fare's idea that "It's a good thing to try to deprecate other lesser libraries". Together with this article, Maintaining Portable Lisp Programs, and @fare's Consolidating Common Lisp Libraries, we should get more people involved in this topic.



If you are a Common Lisp implementer and plan to add a feature beyond the ANSI specification, please consider writing a proposal and submitting it to the Common Lisp Document Repository. It will make everybody's life easier.

Michael MalisBuilding Fizzbuzz in Fractran from the Bottom Up

· 14 days ago

In this post, I am going to show you how to write Fizzbuzz in the programming language Fractran. If you don’t know, Fractran is an esoteric programming language, which means it is extraordinarily difficult to write any program in it. To mitigate this difficulty, instead of writing Fizzbuzz in raw Fractran, what we are going to do is build a language that compiles to Fractran, and then write Fizzbuzz in that language.

This post is broken up into three parts. The first part covers what Fractran is and a way of understanding what a Fractran program does. Part 2 will go over the foundation of the language we will build and how it will map to Fractran. Finally, in Part 3, we will keep adding new features to the language until it becomes easy to write Fizzbuzz in it.

Part 1: Understanding Fractran

Before we can start writing programs in Fractran, we have to first understand what Fractran is. A Fractran program is represented as just a list of fractions. To execute a Fractran program, you start with a variable N=2. You then go through the list of fractions until you find a fraction F, such that N*F is an integer. You then set N=N*F and go back to the beginning of the list of fractions. You keep repeating this process until there is no fraction F such that N*F is an integer.

Since there is no way to print anything with the regular Fractran rules, we are going to add one additional rule on top of the ordinary ones. In addition to the list of fractions, each program will have a mapping from numbers to characters representing the “alphabet” of the program. After multiplying N by F, whenever the new N is a multiple of one of the numbers in the alphabet, that will “print” the character that the number maps to. I have written a function, run-fractran, which implements this version of Fractran and included it here. It takes a list of fractions and an alphabet as an alist and executes the program.

Let’s walk through a simple example. Let’s say we have the following Fractran program:

9/2, 1/5, 5/3

with the alphabet 5->’a’. To run this program, we start with N=2. We then go through the list of fractions until we find a fraction F such that N*F is an integer. On the first step, F becomes 9/2, since N*F = 2 * 9/2 = 9, which is an integer. We then set N to N*F so that N now becomes 9. Repeating this process again, we get F=5/3 and N=N*F=15. Since the number 5 is in the alphabet, and N is now a multiple of 5, we output the character that 5 maps to, ‘a’. If we keep repeating these steps, we eventually reach a point where N=1 and we have outputted the string “aa”. Since 1 times any of the fractions does not result in an integer, the program terminates with the output “aa”.
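The linked run-fractran is written in Common Lisp and isn't reproduced in this aggregation. As an illustrative sketch only (in Python, purely to pin down the semantics described above), an interpreter for this extended Fractran might look like:

```python
from fractions import Fraction

def run_fractran(fractions, alphabet, n=2):
    """Run a Fractran program with the extra printing rule.

    fractions -- list of Fraction objects, tried in order
    alphabet  -- maps numbers to characters; after each step, if the
                 new N is a multiple of such a number, emit its char
    """
    output = []
    while True:
        for f in fractions:
            if (n * f).denominator == 1:   # N*F is an integer
                n = (n * f).numerator
                for num, char in alphabet.items():
                    if n % num == 0:
                        output.append(char)
                break
        else:                              # no fraction applied: halt
            return "".join(output)

print(run_fractran([Fraction(9, 2), Fraction(1, 5), Fraction(5, 3)],
                   {5: "a"}))             # prints "aa"
```

Running it on the example program reproduces the trace described in the text: N goes 2, 9, 15, 3, 5, 1, printing ‘a’ at 15 and again at 5.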

At this point, you may be thinking that writing any program in Fractran is nearly impossible. The truth is that there is a simple trick you can use that makes it much easier to program Fractran. All you need to do is look at the prime factorization of all of the numbers. Let’s see what the above Fractran program looks like if we convert every number into a tuple (a,b,c) where a is how many times 2 divides the number, b is how many times 3 does, and c is how many times 5 does. The program then becomes:

(0, 2, 0) / (1, 0, 0)
(0, 0, 0) / (0, 0, 1)
(0, 0, 1) / (0, 1, 0)

We also have the tuple (0,0,1) mapping to ‘a’ for our alphabet. We start with N = (1,0,0). If you don’t know, multiplying two numbers is the same as adding the counts of each prime factor, and division is the same as subtracting the counts. For example, 2 * 6 = (1,0,0) + (1,1,0) = (2,1,0) = 12. With this way of looking at the program, finding a fraction F such that N*F is an integer becomes finding a “fraction” F such that each element in the tuple N is greater than or equal to the corresponding element in the tuple in the denominator of F. Once we find such an F, instead of multiplying N by it, we subtract from each element of N the corresponding value in the denominator of F (equivalent to dividing by the denominator), and add the corresponding value in the numerator (equivalent to multiplying by the numerator). Executing the program with this interpretation proceeds as follows.

We start with N = (1,0,0). Since every value in N is greater than or equal to their corresponding values in the denominator of the first fraction, we subtract every value in the first denominator and then add every value in the numerator to get N = (1,0,0) – (1,0,0) + (0,2,0) = (0,2,0). Repeating this again, F becomes the third fraction. Subtracting the denominator and adding the numerator gets us N = (0,1,1). Then since every value in N is greater than or equal to their corresponding element in (0,0,1), we print ‘a’. The program continues, just like it did for the original Fractran program.

Basically we can think of every prime number as having a “register” which can take on non-negative integer values. Each fraction is an instruction that operates on some of the registers. You can interpret a fraction as saying: if the current value of each register is greater than or equal to the value specified by the denominator (the number of times the prime for that register divides the denominator), you subtract from the registers all of the values in the denominator, add all the values specified in the numerator (the number of times the prime for each register divides the numerator), and then jump back to the first instruction. Otherwise, if any register is less than the value specified in the denominator, continue to the next fraction. For example, the fraction 9/2 can be translated into the following pseudocode:

;; If the register corresponding to the prime number 2 
;; is greater or equal to 1
if reg[2] >= 1
  ;; Decrement it by 1 and increment the register 
  ;; corresponding to 3 by 2. 
  reg[2] = reg[2] - 1
  reg[3] = reg[3] + 2
  goto the beginning of the program
;; Otherwise continue with the rest of the program.

Although programming Fractran is still difficult, this technique suddenly makes writing Fizzbuzz in Fractran tractable.
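To make the register interpretation concrete, here is a small sketch (in Python, illustrative only, not from the post) that factors every number over a fixed list of primes and executes by comparing, subtracting, and adding exponent tuples:

```python
def factorize(n, primes):
    """Exponent of each prime in n (assumes n factors over `primes`)."""
    counts = []
    for p in primes:
        c = 0
        while n % p == 0:
            n //= p
            c += 1
        counts.append(c)
    return counts

def run_registers(fractions, primes, start):
    """Execute a Fractran program in the register interpretation."""
    regs = factorize(start, primes)
    prog = [(factorize(num, primes), factorize(den, primes))
            for num, den in fractions]
    trace = [tuple(regs)]
    while True:
        for nums, dens in prog:
            # The denominator must fit in the registers...
            if all(r >= d for r, d in zip(regs, dens)):
                # ...then subtract it and add the numerator.
                regs = [r - d + n for r, d, n in zip(regs, dens, nums)]
                trace.append(tuple(regs))
                break
        else:
            return trace  # no fraction applies: halt

# The example program 9/2, 1/5, 5/3 over primes 2, 3, 5, starting at N=2:
trace = run_registers([(9, 2), (1, 5), (5, 3)], [2, 3, 5], 2)
print(trace[:3])   # [(1, 0, 0), (0, 2, 0), (0, 1, 1)]
```

The first three states match the walkthrough above, and the run halts at (0,0,0), i.e. N=1.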

Part 2: Compiling to Fractran

For our compiler, we are going to need to generate a lot of primes. To do so, we will use a function, new-prime, which will generate a different prime each time it is called.

(defun prime (n)
  "Is N a prime number?"
  (loop for i from 2 to (isqrt n)
        never (zerop (mod n i))))

(defparameter *next-new-prime* nil)

(defun new-prime ()
  "Returns a new prime we haven't used yet."
  (prog1 *next-new-prime*
    (setf *next-new-prime*
          (loop for i from (+ *next-new-prime* 1)
                if (prime i)
                  return i))))

So now that we’ve got new-prime, we can start figuring out how we are going to compile to Fractran. The first detail we will need to figure out is how to express control flow in Fractran. In other words, we need a way to specify which fractions will execute after which other fractions. This is a problem because after a fraction executes, you always jump back to the first fraction.

Expressing control flow actually winds up being surprisingly easy. For each fraction we can designate a register. Then, we only execute a fraction if its register is set. It is easy to have a fraction conditionally execute depending on whether its register is set by using the trick we are using to interpret a Fractran program. All we need to do is multiply the denominator of each fraction by the prime for the register of that fraction. This way, we will pass over a fraction unless its register is set. Also, all we need to do to specify which fraction should execute after a given fraction is to multiply the numerator of the given fraction by the prime of the register for the next fraction. By doing this, after a fraction executes, it will set the register of the next fraction.
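A tiny concrete sketch of this gating trick (in Python, with hypothetical register assignments: primes 2 and 3 serve as the instruction registers for instructions 1 and 2, and prime 5 as a data register). The instruction "add 2 to the data register, then activate instruction 2" compiles to the single fraction (3 * 5^2)/2:

```python
from fractions import Fraction

# Instruction 1 is gated on prime 2 (it only fires while that register
# is set); it bumps the data register (prime 5) by 2 and sets
# instruction 2's register (prime 3).
program = [Fraction(3 * 5**2, 2)]

n = 2                       # start with instruction 1's register set
while True:
    for f in program:
        if (n * f).denominator == 1:
            n = (n * f).numerator
            break
    else:
        break               # no fraction applies: halt

print(n)   # 75 = 3 * 5**2: control passed to instruction 2, data register = 2
```

Because no fraction is gated on prime 3, the program halts there; a real compiled program would continue with a fraction whose denominator is divisible by 3.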

In order to keep track of the primes for the current fraction and for the next fraction, we will have two global variables. The first will be the prime number for the current instruction, and the second will be the prime number for the next instruction:

(defparameter *cur-inst-prime* nil)
(defparameter *next-inst-prime* nil)

We will also need a function advance which will advance the values of the variables once we move on to the next instruction.

(defun advance ()
  (setf *cur-inst-prime* *next-inst-prime*
        *next-inst-prime* (new-prime)))

Now that we’ve got a way of expressing control flow, we can start planning out what the language we will build will look like. From this point on, I am going to call the language we are building Lisptran. An easy way to represent a Lisptran program is as just a list of expressions. We can have several different kinds of expressions, each of which does something different.

The simplest kind of expression we will want is an inline fraction. If a Lisptran expression is just a fraction, we can just add that fraction to the Fractran program being generated.

Another kind of expression that would be useful is labels. Whenever a Lisptran expression is a Lisp symbol, we can interpret it as a label. Each label will be converted into a fraction that is the prime of the next instruction after the label divided by the prime of the label. This way we can jump to the instruction after the label by setting the register for the label. In order to make keeping track of the primes of labels easy, we are going to keep a hash-table, *lisptran-labels*, mapping from labels to the primes for those labels. We will also have a function prime-for-label, which will look up the prime for a label, or assign a new prime if one hasn’t been assigned yet:

(defparameter *lisptran-labels* nil)

(defun prime-for-label (label)
  (or (gethash label *lisptran-labels*)
      (setf (gethash label *lisptran-labels*)
            (new-prime))))
One last kind of expression that will be useful is macro calls. A macro call will be a list whose first element is the name of a macro, followed by a list of arbitrary Lisp expressions (the expressions don’t have to be Lisptran expressions; they can be interpreted however the macro wants them to be). In order to compile a macro call, we will look up the function associated with the macro and call it on the expressions in the rest of the macro call. That function should then return a list of Lisptran expressions, which will then be compiled in place of the macro call. After that we just continue compiling the new code generated by the macro expansion.

To keep track of the definitions of macros, we will keep a hash-table *lisptran-macros*, which will map from the name of the macro to the function for that macro. In order to make defining Lisptran macros easy, we can create a Lisp macro deftran, that works in a similar way to defmacro. When defining a macro with deftran, you are really just defining a function which will take the expressions in the macro call, and return a list of Lisptran instructions to be compiled in its place. Here is the definition for deftran:

(defparameter *lisptran-macros* (make-hash-table))

(defmacro deftran (name args &body body)
  "Define a Lisptran macro."
  `(setf (gethash ',name *lisptran-macros*)
         (lambda ,args ,@body)))

And that’s all of the different kinds of expressions we will need in Lisptran.

Although we now have all of the expressions we need, there are a few more pieces of the compiler we need to figure out. For example, we still haven’t figured out how we are going to represent variables yet. Ultimately this is trivial. We can just assign a register to every variable and keep a mapping from variable names to primes in the same way we have the mapping for labels:

(defparameter *lisptran-vars* nil)

(defun prime-for-var (var)
  (or (gethash var *lisptran-vars*)
      (setf (gethash var *lisptran-vars*)
            (new-prime))))

One last piece of the compiler we need to figure out is how we are going to represent the alphabet of the program. One way to do this is to just represent the characters in our alphabet as variables. The alphabet of a program can then be all of the variables that have characters for names, together with the primes of the registers for those variables. By doing it this way, we can print a character by just incrementing and then immediately decrementing a variable! Here is code that can be used to obtain the alphabet from *lisptran-vars*:

(defun alphabet (vars)
  "Given a hash-table of the Lisptran variables to primes, 
   returns an alist representing the alphabet."
  (loop for var being the hash-keys in vars 
        using (hash-value prime)
        if (characterp var)
          collect (cons var prime)))

Now that we can express control flow, variables, and macros, we have everything we need to write the actual Lisptran to Fractran compiler:

(defun assemble (insts)
  "Compile the given Lisptran program into Fractran. 
   Returns two values. The first is the Fractran program 
   and the second is the alphabet of the program."
  (let* ((*next-new-prime* 2)
         (*cur-inst-prime* (new-prime))
         (*next-inst-prime* (new-prime))
         (*lisptran-labels* (make-hash-table))
         (*lisptran-vars* (make-hash-table)))
    (values (assemble-helper insts)
            (alphabet *lisptran-vars*))))

(defun assemble-helper (exprs)
  (if (null exprs)
      '()
      (let ((expr (car exprs))
            (rest (cdr exprs)))
        (cond
          ;; If it's a number, we just add it to the
          ;; Fractran program and compile the rest
          ;; of the Lisptran program.
          ((numberp expr)
           (cons expr (assemble-helper rest)))

          ;; If it's a symbol, we divide the prime for
          ;; the instruction after the label (the current
          ;; instruction prime, since labels don't consume
          ;; a slot) by the prime for the label.
          ((symbolp expr)
           (cons (/ *cur-inst-prime*
                    (prime-for-label expr))
                 (assemble-helper rest)))

          ;; Otherwise it's a macro call. We look up the
          ;; macro named by the first symbol in the
          ;; expression and call it on the rest of the
          ;; expressions in the macro call. We then
          ;; append all of the instructions returned by
          ;; it to the rest of the program and compile
          ;; that.
          (t
           (let ((macrofn (gethash (car expr)
                                   *lisptran-macros*)))
             (assemble-helper (append (apply macrofn
                                             (cdr expr))
                                      rest))))))))
The function assemble takes a Lisptran program and returns two values: the generated Fractran program and the alphabet of that program. assemble first initializes all of the global variables for the program and then calls assemble-helper, which recursively processes the Lisptran program according to the specification above. Using the function run-fractran that I mentioned above, we can write a function that will execute a given Lisptran program as follows:

(defun run-lisptran (insts)
  "Run the given Lisptran program."
  (multiple-value-call #'run-fractran (assemble insts)))

Part 3: Building Lisptran

Now that we’ve completed the core compiler, we can start adding actual features to it. From here on out, we will not touch the core compiler. All we are going to do is define a couple Lisptran macros. Eventually we will have enough macros such that programming Lisptran seems like programming a high level assembly language.

The first operations we should define are basic arithmetic operations, for example addition. In order to add addition to Lisptran, we can define a macro addi, which stands for add immediate. Immediate just means that we know what number we are adding at compile time. The macro addi will take a variable and a number, and will expand into a fraction which will add the given number to the register for the variable. In this case, the denominator of the fraction will just be the prime for the current instruction (execute this instruction when that register is set) and the numerator will be the prime for the next instruction (execute the next instruction after this one) times the prime for the variable raised to the power of the number we are adding (add the immediate to the register). Here is what the definition of addi looks like:

(deftran addi (x y)
  (prog1 (list (/ (* *next-inst-prime* (expt (prime-for-var x) y))
                  *cur-inst-prime*))
    (advance)))

We are also going to want an operation that performs subtraction. It’s a bit tricky, but we can implement a macro subi (subtract immediate) in terms of addi, since subtracting a number is the same as adding the negative of that number:

(deftran subi (x y) `((addi ,x ,(- y))))

Now that we’ve got some macros for performing basic arithmetic, we can start focusing on macros that allow us to express control flow. The first control flow macro we will implement is >=i (jump if greater than or equal to immediate). In order to implement >=i, we will have it expand into three fractions. The first fraction will test if the variable is greater than or equal to the immediate. If the test succeeds, we will then advance to the second fraction, which will restore the variable (since when a test succeeds, all of the values from the denominator are decremented from the corresponding registers), and then jump to the label passed into >=i. If the test fails, we will fall through to the third fraction, which will just continue on to the next instruction.

The denominator of the first fraction will be the prime for the current instruction (execute the instruction if that register is set) times the prime for the register raised to the power of the constant (how we test that the register is greater than or equal to the immediate), and the numerator will be the prime for the second instruction (so we go to the second instruction if the test succeeds). The second fraction is the prime for the label passed into >=i (so we jump to wherever the label designates) times the prime for the register raised to the power of the constant (so the value consumed by the test is restored), divided by the prime for that instruction. Lastly, the denominator of the third fraction is the prime for the current instruction (so we fall through to it if the test in the first fraction fails), and the numerator is just the prime for the next instruction so that we continue to that if the test fails:

(deftran >=i (var val label)
  (prog1 (let ((restore (new-prime)))
           (list (/ restore
                    (* *cur-inst-prime*
                       (expt (prime-for-var var) val)))
                 (/ (* (prime-for-label label)
                       (expt (prime-for-var var) val))
                    restore)
                 (/ *next-inst-prime* *cur-inst-prime*)))
    (advance)))

Believe it or not, after this point we won’t even need to think about fractions anymore. Lisptran now has enough of a foundation that all of the further macros we will need can be expressed in terms of addi, subi, and >=i. The only two macros that actually need to be implemented in terms of Fractran are addi and >=i. That means no more thinking about Fractran. From here on out, all we have is Lisptran!

We can easily define unconditional goto in terms of >=i. Since all of the registers start at 0, we can implement goto as greater than or equal to zero. We use the Lisp function gensym to generate a variable without a name so that the variable doesn’t conflict with any other Lisptran variables:

(deftran goto (label) `((>=i ,(gensym) 0 ,label)))

Then through a combination of >=i and goto, we can define <=i:

(deftran <=i (var val label)
  (let ((gskip (gensym)))
    `((>=i ,var ,(+ val 1) ,gskip)
      (goto ,label)
      ,gskip)))
Now that we have several macros for doing control flow, we can start building some utilities for printing. As mentioned previously, printing a character is the same as incrementing the variable with the character as its name and then immediately decrementing it:

(deftran print-char (char)
  `((addi ,char 1)
    (subi ,char 1)))
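In the Fractran model, that increment/decrement pair momentarily makes N a multiple of the character's prime, which triggers a print under the extended rule. An illustrative Python sketch (with hypothetical register assignments: primes 2, 3, 7 as instruction registers, prime 5 for the character 'a'):

```python
from fractions import Fraction

# Instruction 1 (gated on prime 2): increment the 'a' register -> (3*5)/2
# Instruction 2 (gated on prime 3): decrement the 'a' register -> 7/(3*5)
program = [Fraction(15, 2), Fraction(7, 15)]
alphabet = {5: "a"}

n, out = 2, []
while True:
    for f in program:
        if (n * f).denominator == 1:
            n = (n * f).numerator
            for num, ch in alphabet.items():
                if n % num == 0:
                    out.append(ch)
            break
    else:
        break   # no fraction applies: halt

print("".join(out))   # prints "a": N went 2 -> 15 (a multiple of 5) -> 7
```

The 'a' register is zero again after the pair, so the character is printed exactly once.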

Then if we want to write a macro that prints a string, it can just expand into a series of calls to print-char, each of which prints a single character in the string:

(deftran print-string (str)
  (loop for char across str
        collect `(print-char ,char)))

We are also going to need a function to print a number. Writing this with the current state of Lisptran is fairly difficult since we haven’t implemented several utilities such as mod yet, but we can start by implementing a macro print-digit that prints the value of a variable that is between 0 and 9. We can implement it by having it expand into a series of conditions. The first one will check if the variable is less than or equal to zero. If so, it will print the character zero and jump past the rest of the conditions. Otherwise it falls through to the next condition, which tests if the variable is less than or equal to one, and so on. We don’t have to manually write the code for print-digit because we can use Lisp to generate the code for us:

(deftran print-digit (var)
  (loop with gend = (gensym)
        for i from 0 to 9
        for gprint = (gensym)
        for gskip = (gensym)
        append `((<=i ,var ,i ,gprint)
                 (goto ,gskip)
                 ,gprint
                 (print-char ,(digit-char i))
                 (goto ,gend)
                 ,gskip)
        into result
        finally (return `(,@result ,gend))))

At this point, now that we have macros for performing basic arithmetic, basic control flow, and printing, we can start writing some recognizable programs. For example, here is a program that prints the numbers from zero to nine:

(start
 (>=i x 10 end)
 (print-digit x)
 (print-char #\newline)
 (addi x 1)
 (goto start)
 end)

If you are curious I have included the Fractran program generated by this Lisptran program here. It’s hard to believe that the above Lisptran program and the Fractran program are equivalent. They look completely different!

Now that we have a bunch of low level operations, we can start building some higher level ones. You may not have thought of it, but Lisptran expressions don’t need to have a flat structure. For example, now that we have goto, we can use it to define while loops (just like in Loops in Lisp):

(deftran while (test &rest body)
  (let ((gstart (gensym))
        (gend (gensym)))
    `((goto ,gend)
      ,gstart
      ,@body
      ,gend
      (,@test ,gstart))))

In order to implement while, we are assuming that all predicates take a label as their last argument, which is where they will jump to if the predicate succeeds. Now that we have while loops, we can start writing some much more powerful macros for manipulating variables. Here are two useful ones: one that sets a variable to zero, and one that copies the value in one variable to another:

(deftran zero (var)
  `((while (>=i ,var 1)
      (subi ,var 1))))

(deftran move (to from)
  (let ((gtemp (gensym)))
    `((zero ,to)
      (while (>=i ,from 1)
        (addi ,gtemp 1)
        (subi ,from 1))
      (while (>=i ,gtemp 1)
        (addi ,to 1)
        (addi ,from 1)
        (subi ,gtemp 1)))))

For move, we first have to decrement the variable we are moving from and increment a temporary variable. Then we restore both the original variable and the variable we are moving the value to at the same time.

With all of these macros, we can finally start focusing on macros that are actually relevant to Fizzbuzz. One operation that is absolutely going to be necessary for Fizzbuzz is mod. We can implement a macro modi (mod immediate) by repeatedly subtracting the immediate until the variable is less than the immediate:

(deftran modi (var val)
  `((while (>=i ,var ,val)
      (subi ,var ,val))))

We only need one more real feature before we can start writing Fizzbuzz: a way of printing numbers. In order to print an arbitrary number, we are going to need a way of doing integer division. We can implement a macro divi by repeatedly subtracting the immediate until the variable is less than the immediate, keeping track of the number of times we’ve subtracted it:

(deftran divi (x y)
  (let ((gresult (gensym)))
    `((zero ,gresult)
      (while (>=i ,x ,y)
        (addi ,gresult 1)
        (subi ,x ,y))
      (move ,x ,gresult))))
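For reference, the repeated-subtraction loops that modi and divi expand into compute exactly mod and integer division; an equivalent in Python (illustrative only; in Lisptran the result ends up back in the variable itself):

```python
def modi(x, y):
    """What (modi x y) leaves in x: subtract y until x < y."""
    while x >= y:
        x -= y
    return x

def divi(x, y):
    """What (divi x y) leaves in x: count the subtractions."""
    result = 0
    while x >= y:
        result += 1
        x -= y
    return result

print(modi(47, 15), divi(47, 10))   # prints "2 4"
```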

Now for the final macro we will need: a macro for printing numbers. Actually, we are going to cheat a little. Printing numbers winds up being pretty difficult since you have to print the digits from left to right, but you can only look at the lowest digit at a time. To make things easier, we are only going to write a macro that is able to print two-digit numbers. We won’t need to print 100 since “buzz” will be printed instead.

(deftran print-number (var)
  (let ((gtemp (gensym))
        (gskip (gensym)))
    `((move ,gtemp ,var)
      (divi ,gtemp 10)
      (<=i ,gtemp 0 ,gskip)
      (print-digit ,gtemp)
      ,gskip
      (move ,gtemp ,var)
      (modi ,gtemp 10)
      (print-digit ,gtemp)
      (print-char #\newline))))

Now our language is sufficiently high level that Fizzbuzz is going to be practically as easy as it will get. Here is an implementation of Fizzbuzz in Lisptran:

((addi x 1)
 (while (<=i x 100)
   (move rem x)
   (modi rem 15)
   (<=i rem 0 fizzbuzz)

   (move rem x)
   (modi rem 3)
   (<=i rem 0 fizz)

   (move rem x)
   (modi rem 5)
   (<=i rem 0 buzz)

   (print-number x)
   (goto end)

   fizzbuzz
   (print-string "fizzbuzz")
   (goto end)

   fizz
   (print-string "fizz")
   (goto end)

   buzz
   (print-string "buzz")
   (goto end)

   end
   (addi x 1)))

I’ve also included the generated Fractran program here and the full source code for this blog post here.

I find it absolutely amazing that we were able to build a pretty decent language by repeatedly adding more and more features on top of what we already had. To recap, we implemented a basic arithmetic operation (addi) in terms of raw Fractran and then defined a second (subi) in terms of that. From there we defined three macros for doing control flow (>=i, goto, <=i), with the latter two being defined in terms of the first. We were then able to define macros for printing (print-char, print-string, print-digit). At this point we had all of the low level operations we needed, so we could start implementing while loops (while), a high level control flow construct. With while loops, we were able to define several macros for manipulating variables (zero, move). With these new utilities for manipulating variables we could define more advanced arithmetic operations (modi, divi). Then with these new operations we were able to define a way to print an arbitrary two digit number (print-number). Finally, using everything we had up to this point, we were able to write Fizzbuzz. It’s just incredible that we could make a language by always making slight abstractions on top of the operations we already had.

The post Building Fizzbuzz in Fractran from the Bottom Up appeared first on Macrology.

CL Test Gridquicklisp 2016-05-31

· 16 days ago
The difference between this and previous months:

Grouped by lisp implementation first and then by library:

Grouped by library first and then by lisp impl:

(Both reports show the same data, just arranged differently)

This time the diff is smaller than last month's. There are some regressions and some improvements.

If you're interested in some particular failure or need help investigating something, just ask in the comments or on the mailing list.

Wimpie NortjeDaemonizing Common Lisp services.

· 18 days ago

Update (2016-06-09):

During a discussion on Reddit I realised that the issues discussed in this post are not relevant to the systemd init system.

At the time of writing my servers run Ubuntu 14.04 which uses Upstart as the init system. Upstart detaches the tty of the started services. That is the cause of the issues discussed below.

Most Linux distributions seem to migrate to systemd as the init system. systemd does not detach the tty, it binds stdin to /dev/null and stdout and stderr to a logging stream.

If your OS uses systemd you can turn a foreground application into a service without using GNU Screen or the daemon package.

Deploying a Common Lisp server application in production requires that it runs as a daemon.

A Unix init system like Upstart or systemd can be used to ensure that the service is always running. In this case the application can daemonize itself or it can run in the foreground and let the init system take care of daemonization.

It is usually easier to create an init script for a self-daemonizing application but then the source code gets more complicated.

When the application runs in the foreground the init script is slightly more tricky but the application is easier to debug.


The procedure to daemonize an application is well documented, but most of this documentation is focused on C. Implementing the complete procedure in Common Lisp can become a mission in itself because it involves forking and opening and closing standard IO streams at the right times.

The daemon package implements all the necessary steps and provides a trivial API for daemonizing a program.

Though daemon is easy to use, all the forking makes it difficult to get backtraces when something goes wrong. The application then appears to be hanging while it is actually waiting in the debugger for input but the debugger can't be used because standard IO is closed.

When a service daemonizes itself it is advisable to devise some method to interact with the application while it is waiting in the debugger. One of the swank libraries could be useful.

Run in foreground

Foreground applications can be used as-is for a service, but they cannot be used as the main application instantiated by the init system.

During the daemonizing process the init system closes all the standard IO streams. This causes the REPL to exit and the application to end.

GNU Screen can be used to keep the standard IO open for a daemonized service. However, when an error occurs one is in much the same situation as with a self-daemonizing application. The application appears to hang because it is waiting in the debugger but the debugger can't easily be reached.

Screen has options to log standard IO to a file. Debugging then consists of killing the service and working through the logged backtrace. This is not ideal but it has the advantage of not introducing any code complexity in order to make a daemon.

Like in the self-daemonizing case, a swank library can be used to get live debugging back.

Comparing the options

                            Self-daemonizing         Foreground
Daemonizing responsibility  Common Lisp code         Unix init system
Tool                        daemon library           GNU Screen
Init script complexity      Less complex             More complex
Code complexity             Increased complexity     Same as a normal program
Debugging options           Embedded swank server    Logged data or embedded swank server

Zach BeaneLisp stuff on YouTube

· 18 days ago

If you want to see some neat videos, subscribe to dto, Baggers, and WarWeasle on YouTube. They all regularly post neat graphical stuff done in Common Lisp.

If you know any more people I should follow on YouTube, let me know.

Lispjobs: Secure Outcomes, Contract Common Lisp programmer

· 22 days ago

Secure Outcomes builds and provides digital livescan fingerprinting systems for use by law enforcement, military, airports, schools, Fortune 500s, etc.

All of our systems are constructed in Common Lisp.

We are looking for a contract CL developer located anywhere in the world that can build software for us.

Strong CL/LW background, of course, but also knowledge of foreign function interfacing etc. is needed.

Work full or part time.

See what we do at

Resumes to No calls please.

Timofei Shatrov: All you need is PROGV

· 23 days ago

I have never seen PROGV in use. – Erik Naggum

Common Lisp is very, very old. Tagbody and progv, anyone? – Hacker News user pwnstigator

I haven't written anything on this blog lately, mostly because of lack of time to work on side projects and consequently the lack of Lisp things to talk about. However recently I've been working on various improvements to my Ichiran project, and here's the story of how I came to use the much maligned (or rather, extremely obscure) special operator PROGV for the first time.

Ichiran is basically a glorified Japanese dictionary (used as the backend for the web app) and it heavily depends on a Postgres database that contains all the words, definitions and so on. The database is based on a dump of the open JMdict dictionary, which is constantly updated based on the users' submissions.

Well, the last time I generated the database from this dump was almost a year ago, and I had wanted to update the definitions for a while. However, this tends to break the accuracy of my word segmenting algorithm. For this reason I want to keep the old and the new database at the same time and be able to run whatever code with either of the databases.

I'm using Postmodern to access the database, which has a useful macro named with-connection. If I have a special variable *connection* and consistently use (with-connection *connection* ...) in my database-accessing functions then I can later call

(let ((*connection* '("foo" "bar" "baz" "quux")))
  ...)

and it will use connection ("foo" "bar" "baz" "quux") instead of the default one. I can even encapsulate it as a macro

(defmacro with-db (dbid &body body)
  `(let ((*connection* (get-spec ,dbid)))
     (with-connection *connection*
       ,@body)))

(dbid and get-spec are just more convenience features, so that I can refer to the connection by a single keyword instead of a list of 4 elements).

So far so good, but there’s a flaw with this approach. For performance reasons, some of the data from the database is stored in certain global variables. For example I have a variable *suffix-cache* that contains a mapping between various word suffixes and objects in the database that represent these suffixes. Obviously if I run something with a different connection, I want to use *suffix-cache* that’s actually suitable for this connection.

I created a simple wrapper macro around defvar that looks like this:

(defvar *conn-vars* nil)

(defmacro def-conn-var (name initial-value &rest args)
  `(progn
     (defvar ,name ,initial-value ,@args)
     (pushnew (cons ',name ,initial-value) *conn-vars* :key 'car)))

Now with-db can potentially add new dynamic variable bindings together with *connection* based on the contents of *conn-vars*. It’s pretty trivial to add the new bindings at the macro expansion time. However that poses another problem: now all the conn-vars need to be defined before with-db is expanded. Moreover, if I introduce a new conn-var, all instances of with-db macro must be recompiled. This might be not a problem for something like a desktop app, but my web app usually runs for months without being restarted, with new code being hot-swapped into the running image. I certainly don’t need the extra hassle of having to recompile everything in a specific order.

Meanwhile I had the definition of let opened in the Hyperspec, and there was a link to progv at the bottom. I had no idea what it does, and thinking that my Lisp has gotten rusty, clicked through to refresh my memory. Imagine my surprise when I found that 1) I have never used this feature before and 2) it was exactly what I needed. Indeed, if I can bind dynamic variables at runtime, then I don’t need to re-expand the macro every time the set of these variables changes.
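As a minimal illustration of what progv does: the symbols to bind can be computed at runtime, and they don't even need to have been declared special beforehand.

```lisp
(defvar *a* 0)

;; PROGV takes a list of symbols and a list of values and establishes
;; dynamic bindings for them at runtime.
(progv '(*a* *b*) '(1 2)
  (list *a* (symbol-value '*b*)))   ; => (1 2); *b* was never DEFVARed
```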

The final code ended up being pretty messy, but it worked:

(defvar *conn-var-cache* (make-hash-table :test #'equal))

(defmacro with-db (dbid &body body)
  (alexandria:with-gensyms (pv-pairs var vars val vals iv key exists)
    `(let* ((*connection* (get-spec ,dbid))
            (,pv-pairs (when ,dbid
                         (loop for (,var . ,iv) in *conn-vars*
                            for ,key = (cons ,var *connection*)
                            for (,val ,exists) = (multiple-value-list (gethash ,key *conn-var-cache*))
                            collect ,var into ,vars
                            if ,exists collect ,val into ,vals
                            else collect ,iv into ,vals
                            finally (return (cons ,vars ,vals))))))
       (progv (car ,pv-pairs) (cdr ,pv-pairs)
         (unwind-protect
              (with-connection *connection*
                ,@body)
           (loop for ,var in (car ,pv-pairs)
                 for ,key = (cons ,var *connection*)
                 do (setf (gethash ,key *conn-var-cache*) (symbol-value ,var))))))))

Basically, the loop creates a pair of a list of variables and a list of their values (no idea why progv couldn't have accepted an alist or something). The values are taken from *conn-var-cache*, which is keyed on the pairing of variable name and connection spec. Then I also add an unwind-protect to save the values of the variables that might have changed within the body back into the cache. Note that this makes nested with-db's unreliable! The fix is possible, and left as an exercise for the reader. Another problem is that dynamic variable bindings don't get passed into new threads, so no threads should be spawned within the with-db macro.
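Since dynamic bindings are thread-local, one workaround is to re-enter with-db inside the new thread rather than around it. A sketch with bordeaux-threads (:my-db and do-work are placeholders):

```lisp
;; Dynamic bindings don't propagate to child threads, so rebind there.
(bt:make-thread
 (lambda ()
   (with-db :my-db        ; placeholder connection id
     (do-work))))         ; placeholder workload
```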

And this is how I ended up using progv in production. This probably dethrones displaced array strings as the most obscure feature in my codebase. Hopefully I’ll have more things to write about in the future. Until next time!

Quicklisp news: May 2016 Quicklisp dist update now available

· 24 days ago
New projects:
  • cl-ecs — An implementation of the Entity-Component-System pattern mostly used in game development. — MIT
  • cl-htmlprag — A port of Neil Van Dyke's famous HTMLPrag library to Common Lisp. — LGPL 2.1
  • cl-messagepack-rpc — A Common Lisp implementation of the MessagePack-RPC specification, which uses MessagePack serialization format to achieve efficient remote procedure calls (RPCs). — MIT
  • cl-monitors — Bindings to libmonitors, allowing the handling of monitors querying and resolution changing. — Artistic
  • cl-oclapi — Yet another OpenCL API bindings for Common Lisp. — MIT
  • cl-pack — Perl compatible binary pack() and unpack() library — BSD-3-Clause
  • cl-scan — port scanner. — ISC
  • cl-scsu — An implementation of 'Standard Compression Scheme for Unicode'. — MIT
  • clack-static-asset-middleware — A cache busting static file middleware for the clack web framework. — MIT
  • easing — Easing functions. — MIT
  • fare-scripts — Various small programs that I write in CL in lieu of shell scripts — MIT
  • fixed — A fixed-point number type. — MIT
  • focus — Customizable FORMAT strings and directives — BSD
  • glsl-spec — The GLSL Spec as a datastructure — The Unlicense
  • injection — Dependency injection for Common Lisp — GPLv3
  • json-mop — A metaclass for bridging CLOS and JSON — LGPLv3+
  • leveldb — LevelDB bindings for Common Lisp. — BSD
  • moira — Monitor and restart background threads. — MIT
  • recursive-restart — Restarts that can invoke themselves. — MIT
  • trivial-nntp — Simple tools for interfacing to NNTP servers — MIT
  • trivial-openstack — A simple Common Lisp OpenStack REST client. — MIT
  • trivial-yenc — Decode yenc file to a binary file — MIT
  • weblocks-prototype-js — Weblocks JavaScript backend for PrototypeJs — LLGPL
Updated projects: 3d-vectors, backports, binfix, bit-smasher, blackbird, caveman2-widgets, cepl, cepl.camera, cepl.devil, cl-ana, cl-autowrap, cl-bloom, cl-cairo2, cl-charms, cl-dot, cl-erlang-term, cl-gamepad, cl-geometry, cl-hash-table-destructuring, cl-html5-parser, cl-i18n, cl-jpeg, cl-liballegro, cl-libssh2, cl-mediawiki, cl-mongo, cl-mpi, cl-ohm, cl-pango, cl-project, cl-qrencode, cl-redis, cl-sdl2, cl-selenium, cl-store, clack, closer-mop, coleslaw, croatoan, deeds, dissect, djula, esrap, exscribe, femlisp, fiasco, firephp, flare, form-fiddle, glkit, glop, graph, halftone, helambdap, hl7-parser, hu.dwim.partial-eval, hu.dwim.quasi-quote, hu.dwim.util, kenzo, legit, lisp-interface-library, lisp-namespace, lisp-unit2, mcclim, oclcl, pathname-utils, pzmq, qt-libs, qtools, qtools-ui, query-fs, queues, rcl, restas, rtg-math, rutils, scalpl, sdl2kit, serapeum, simple-tasks, sketch, skitter, slime, snakes, stp-query, stringprep, stumpwm, temporal-functions, trivia, trivial-backtrace, trivial-benchmark, trivial-string-template, trivialib.bdd, ubiquitous, utilities.print-tree, varjo, verbose, vgplot, weblocks, weblocks-stores, xhtmlambda, zs3.

To get this update, use: (ql:update-dist "quicklisp")

You didn't miss it -- there wasn't a fundraiser in April. Or May, either. I've got my fingers crossed to see one soon, but I'm not sure exactly when it might happen. Stay tuned!

McCLIM: Old news list

· 35 days ago

For posterity we are publishing the archival news:

  • 2008-04-23: McCLIM 0.9.6 "St. George's Day" released.

  • 2007-09-02: McCLIM 0.9.5 "Eastern Orthodox Liturgical New Year" released.

  • 2007-01-14: McCLIM 0.9.4 "Orthodox New Year" released.

  • 2006-11-02: McCLIM 0.9.3 "All Souls' Day" released.

  • 2006-03-30: Highly-experimental binaries of McCLIM 0.9.2, set up to start up the McCLIM listener, and incorporating the McCLIM demos as well as a graphical debugger and inspector, are available for download. Supported platforms: PPC/OS X, x86/Linux.

  • 2006-03-26: McCLIM 0.9.2 "Laetare Sunday" released.

  • 2005-03-06: McCLIM 0.9.1 "Mothering Sunday" released.

  • 2004-12-09: McCLIM CVS hosting moved to; if you're a developer you should already have heard about this (if not, mail

  • 2002-10-29: Tim Moore presented a paper written by Robert Strandh and himself at the International Lisp Conference during the last week of October 2002.

McCLIM: New website

· 35 days ago

We are happy to announce that the McCLIM website has been refreshed. All broken links have been replaced and the infrastructure has been reworked into something easier to maintain.

Website comparison

On the left is the old version, while on the right is the current design. The website is responsive, has an RSS stream and all the goods coleslaw provides.

You may expect frequent improvements to the codebase in the near future - stay tuned!

Vsevolod Dyomkin: Improving Lisp UX One Form at a Time

· 40 days ago

At the recent ELS, I presented a lightning talk about RUTILS and how I see it as a way of "modernizing" CL, i.e. updating the basic language elements to be simpler, clearer and more generic, thus improving the everyday user experience and answering the complaints of outsiders about "historical cruft" in the Lisp standard. Indeed, Lisp has a lot of unrecognizable names (like mapcar and svref) or just unnecessarily long ones (multiple-value-bind or defparameter), and out-of-the-box it lacks a lot of things that many current programmers are used to: unified generic accessors, generators, literal syntax for defining hash-tables or dynamic vectors, etc. This may not be a problem for the people working with the language on a regular basis (or if it is, they probably have a personal solution for it already), but it impedes communication with the outside world. I've paid extra attention to that recently as I was preparing code examples for the experimental course on algorithms, which I teach now using Lisp instead of pseudocode (actually, modulo the naming/generics issue, Lisp is a great fit for that).

Unfortunately, the lightning talk format is too short for a good presentation of this topic, so here's a more elaborate post, in which I want to show a few examples from the RUTILS library of using Lisp's built-in capabilities to introduce clear, uniform, and generic syntactic abstractions that may be used alongside the standard Lisp operators, as well as replace them in the cases when we want to get more concise and understandable code.

What's cool about this problem is that, in Lisp, besides a common way to extend the language with functions and methods (and even macros/templates, which find their way into more and more languages), there are several other approaches that make it possible to tackle issues that can't be covered by functions and even macros. Those include, for instance, reader macros and aliasing. Aliasing is, actually, a rather simple idea (and can, probably, be implemented in other dynamic languages): duplicating the functionality of existing functions or macros under a new name. The idea for such an operator came from Paul Graham's "On Lisp" and it may be implemented in the following way (see a full implementation here):

(defmacro abbr (short long &optional lambda-list)
  `(progn
     (cond
       ((macro-function ',long)
        (setf (macro-function ',short) (macro-function ',long)))
       ((fboundp ',long)
        (setf (fdefinition ',short) (fdefinition ',long))
        ,(when lambda-list
           `(define-setf-expander ,short ,lambda-list
              (values ,@(multiple-value-bind
                              (dummies vals store store-form access-form)
                            (get-setf-expansion
                             (cons long (remove-if (lambda (sym)
                                                     (member sym '(&optional &key)))
                                                   lambda-list)))
                          (let ((expansion-vals (mapcar (lambda (x) `(quote ,x))
                                                        (list dummies
                                                              vals
                                                              store
                                                              store-form
                                                              access-form))))
                            (setf (second expansion-vals)
                                  (cons 'list vals))
                            expansion-vals))))))
       (t (error "Can't abbreviate ~a" ',long)))
     (setf (documentation ',short 'function) (documentation ',long 'function))
     ',short))

As you may have noticed, it is also capable of duplicating a setf-expander for a given function if the lambda-list is provided. Using abbr we can define a lot of shorthands or alternative names, and it is heavily used in RUTILS to provide more than 50 alternative names; we'll see some of them in this post. What this example shows is the malleability of Lisp, which allows approaching its own improvement from different angles depending on the problem at hand and the tradeoffs you're willing to make.
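For illustration, typical abbr calls look like this (the particular shorthands shown are assumptions modeled on the ones RUTILS exports):

```lisp
(abbr mv-bind multiple-value-bind)
(abbr ds-bind destructuring-bind)

;; the short names now work exactly like the long ones:
(mv-bind (q r) (floor 7 2)
  (list q r))   ; => (3 1)
```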

Introducing generic element access

One of the examples of historic baggage in CL is a substantial variety of different methods to access elements of collections, hash-tables, structures, and objects with no generic function unifying them. Not that other languages have a totally uniform accessor mechanism. Usually, there will be two or three general-purpose ways to organize it: dot notation for object field access, something square-bracketish for arrays and other collections, and some generic operator like get for all the other cases. And occasionally (e.g. in Python or C++) there are hooks to plug into the built-in operators. Still, it's a much smaller number than in Lisp, and what's more important, it's sufficiently distinct and non-surprising.

In Lisp, actually, nothing prevents us from doing even better — both better than the current state and than other languages — i.e. from having a fully uniform and extensible solution. At first approximation, it's just a matter of defining a generic function that will work on different container types and utilize all the existing optimized accessor functions in its methods. This interface will be extensible for any container object. In RUTILSX (a part of RUTILS where any experiments are allowed) this function is called generic-elt:

(defgeneric generic-elt (obj key &rest keys)
  (:method :around (obj key &rest keys)
    (reduce #'generic-elt keys :initial-value (call-next-method obj key))))

One important aspect you can see in this definition is the presence of an :around method that allows chaining multiple accesses in one call, dispatching each one to an appropriate basic method via call-next-method. Thus, we may write something like (generic-elt obj 'children 0 :key) to access, for instance, an element indexed by :key in a hash-table that is the first element of a sequence that is the contents of the slot children of some object obj.
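Extending it to a new container type is then just a matter of adding a method. A sketch for hash-tables and sequences might look like this (the actual RUTILSX methods may differ in detail):

```lisp
;; Each basic method handles a single key; chaining of the remaining
;; keys is done by the :around method above.
(defmethod generic-elt ((obj hash-table) key &rest keys)
  (declare (ignore keys))
  (gethash key obj))

(defmethod generic-elt ((obj sequence) key &rest keys)
  (declare (ignore keys))
  (elt obj key))
```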

The only problem with this function is its long name. Unfortunately, most of the good short element-access names, like elt and nth, are already taken by the Common Lisp standard, and for RUTILS I've adopted a religious principle to retain full backward compatibility and not alter anything from the standard. This is a critical point: not redefining CL, but building on top of it and extending it!

Moreover, element access has two features: it's a very common operation, and it's also not a usual function that does some computation, so ideally it should have a short but prominent look in the code. The perfect solution occurred to me at one point: introduce an alias ? for it. Lisp allows naming operations with any characters, and a question mark, in my opinion, matches very well the inner intent of this operation: query a container-like object using a certain key. With it, our previous example becomes very succinct and cool: (? obj 'children 0 :key).

In addition to element reading, there's also element write access. This operation in Lisp, like in most other languages, has a unified entry point called setf. There's a special interface to provide specific "methods" for it based on the accessor function. Yet, what to do when an access function is polymorphic? Well, provide a polymorphic setter companion: (defsetf generic-elt generic-setf). Like generic-elt, generic-setf defers work to already defined specific setters:

(defmethod generic-setf ((obj list) key &rest keys-and-val)
  (setf (nth key obj) (atomize keys-and-val)))

And it also supports key chaining, so you can write: (setf (? obj 'children 0 :key) new-value).

Having this unified access functionality is nice and cool, but some people may still long for the familiar dot object slot access syntax. We can't blame them: habits are a basis of good UX. Unfortunately, this is contrary to the Lisp way... But Lisp is a pro-choice and future-proof language: if you want something badly, even something not in the usual ways, almost always you can, actually, find a clean and supported means of implementing it. And this case is not an exception. If you can tolerate a small addition — a @-prefix to the object reference (that's also an extra prominent indicator of something unusual going on) — when accessing its slots, you can define a reader macro that will expand forms like @obj.slot into our (? obj 'slot) or a standard (slot-value obj 'slot). With it, we can write something like (? tokens @dep.govr.id), which is much more succinct and, arguably, more readable than (elt tokens (slot-value (slot-value dep 'govr) 'id)).

Still, one issue remains unsolved in this approach: the preferred Lisp slot-access method is not via slot-value, but with an accessor method that is exported. And one of the reasons for it is that slot-names, which are usually short and can clash, are kept private to the package where they are defined. It means that in most cases @obj.slot will not work across packages. (Unlike the OO-languages in which every class is its own namespace, in Lisp, this function is not "complected" within the OO-system, and packages are a namespacing method, while objects serve for encapsulation and inheritance.)

There are two ways to tackle this problem. As I said, Lisp is future-proof: being thoroughly dynamic and extensible, CLOS defines a method that is called when there's a problem accessing an object's slot — slot-missing. Once again, we can define an :around method that will be a little smarter (?) and try to look up slot-name not only in the current package, but also in the class' original package.

(defmethod slot-missing :around
    (class instance slot-name (operation (eql 'slot-value)) &optional new-value)
  (declare (ignore new-value))
  (let ((class-package (symbol-package (class-name (class-of instance)))))
    (if (eql class-package (symbol-package slot-name))  ;; to avoid infinite looping
        (call-next-method)
        (if-it (find-symbol (string-upcase slot-name) class-package)
               (slot-value instance it)
               (call-next-method)))))
This is a rather radical way and comes at a cost: two additional virtual function calls (the slot-missing method itself and an additional slot-value one). But in most cases it may be worth paying it for convenience's sake, especially since you can always optimize a particular call-site by changing the code to the most direct (slot-value obj 'package::slot) variant. By the way, using a slot accessor method is also costlier than plain slot-value, so we are compensating here somewhat. Anyway, it's cool to have all the options on the table: both the beautiful slow and the ugly fast method that our backward-compatibility approach allows us. As usual, you can't have your cake and eat it too...

Though, sometimes, you can. :) If you think more about this, it becomes apparent that slot-value could have been implemented this way from the start: look up the slot name in the class's original package. As classes or structs are defined together with their slots, it is very rare, if not almost impossible, to see slot names not available in the package where their class is defined (you have to explicitly use a private name from another package when defining a class to pull off such a trick). So slot-value should always look for slot names in the class's package first. We can define a "smart" slot-value variant that will do just that, and with our nice generic-elt frontend it can be easily integrated without breaking backward compatibility.

(defun smart-slot-value (object slot-name)
  (slot-value object
              (or (find-symbol (string-upcase slot-name)
                               (symbol-package (class-name (class-of object))))
                  slot-name)))

Unifying variable binding with with

Almost everything in functional variable definition and binding was pioneered by Lisp at some point, including the concept of destructuring. Yet the CL standard, once again, lacks unification in this area. There are at least 4 major constructs: let and let*, destructuring-bind and multiple-value-bind, and also a few specialized ones like with-slots or ppcre:register-groups-bind. One more thing to mention is that the parallel assignment behavior of plain let can be implemented with destructuring-bind and multiple-value-bind. Overall, it just screams for unification in a single construct, and there have already been a few attempts to do that (like metabang-bind). In RUTILS, I present a novel implementation of generic bind that has two distinct features: a more plausible name — with — and a simple method-based extension mechanism. The implementation is very simple: the binding construct selection is performed at compile-time based on the structure of the clause and, optionally, the presence of special symbols in it:

(defmacro with ((&rest bindings) &body body)
  (let ((rez body))
    (dolist (binding (reverse bindings))
      (:= rez `((,@(call #'expand-binding binding rez)))))
    (first rez)))

A small number of methods covering the basic cases is defined:

  • the first one expands to let or multiple-value-bind depending on the number of symbols in the clause (i.e. for multiple values you should have more than 2)
  • the second group triggers when the first element of the clause is a list and defaults to destructuring-bind, but has special behaviors for the 2 symbols ? and @, generating clauses for our generic element access and smart slot access discussed in the previous sections

(defun expand-binding (binding form)
  (append (apply #'bind-dispatch binding)
          form))

(defgeneric bind-dispatch (arg1 arg2 &rest args)
  (:method ((arg1 symbol) arg2 &rest args)
    (if args
        `(multiple-value-bind (,arg1 ,arg2 ,@(butlast args)) ,(last1 args))
        `(let ((,arg1 ,arg2)))))
  (:method ((arg1 list) (arg2 (eql '?)) &rest args)
    `(let (,@(mapcar (lambda (var-key)
                       `(,(first (mklist var-key))
                         (? ,(first args) ,(last1 (mklist var-key)))))
                     arg1))))
  (:method ((arg1 list) (arg2 (eql '@)) &rest args)
    (with-gensyms (obj)
      `(let* ((,obj ,(first args))
              ,@(mapcar (lambda (var-slot)
                          `(,(first (mklist var-slot))
                            (smart-slot-value ,obj ',(last1 (mklist var-slot)))))
                        arg1)))))
  (:method ((arg1 list) arg2 &rest args)
    (declare (ignore args))
    `(destructuring-bind ,arg1 ,arg2)))

In a sense, it's a classic example of combining generic-functions and macros to create a clean and extensible UI. Another great benefit of using with is reduced code nesting that can become quite deep with the standard operators. Here's one of the examples from my codebase:

(with (((stack buffer ctx) @ parser)
       (fs (extract-fs parser interm))
       (((toks :tokens) (cache :cache)) ? ctx))
  ...)

And here's how it would have looked in plain CL:

(with-slots (stack buffer ctx) parser
  (let ((fs (extract-fs parser interm)))
    (let ((toks (gethash :tokens ctx))
          (cache (gethash :cache ctx)))
      ...)))

Implementing simple generators on top of signals

One of my friends and a Lisp enthusiast, Valery Zamarayev, who's also a long-time Python user, once complained that the only thing he misses in CL from Python is generators. This feature is popular in many dynamic languages, such as Ruby or Perl, and even Java 8 has introduced something similar. Sure, there are multiple ways to implement lazy evaluation in Lisp, with many libraries for that, like SERIES, pygen or CLAZY. And we don't have to wait for another version of the spec (especially since it's not coming 8-)

In RUTILS I have discovered, I believe, a novel and very clean way to implement generators — on top of the signal system. The signal or condition facility is, by the way, one of the most underappreciated assets of Common Lisp that often comes to the rescue in seemingly dead ends of control flow implementation. And Kent Pitman's description of it is one of my favorite reads in Computer Science. Anyway, here's all you need to implement Python-style generators in Lisp:

(define-condition generated ()
((item :initarg :item :reader generated-item)))

(defun yield (item)
(restart-case (signal 'generated :item item)
(resume () item)))

(defmacro doing ((item generator-form &optional result) &body body)
  (with-gensyms (e)
    `(block nil
       (handler-bind ((generated (lambda (,e)
                                   (let ((,item (generated-item ,e)))
                                     ,@body
                                     (invoke-restart (find-restart 'resume))))))
         ,generator-form)
       ,result)))
The doing macro works just like dolist, but iterating the generator form instead of an existing sequence. As you can see from this example, restarts are like generators in disguise. Or, to be more correct, they are a more general way to handle such functionality, and it takes just a thin layer of syntactic sugar to adapt them to a particular usage style.
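A hypothetical generator and its consumption would then look like this:

```lisp
;; any function that calls YIELD becomes a generator
(defun gen-squares (n)
  (dotimes (i n)
    (yield (* i i))))

;; DOING drives the generator much like DOLIST drives a list:
;; each YIELDed value is bound to X and the body runs before resuming.
(doing (x (gen-squares 4))
  (print x))    ; prints 0, 1, 4 and 9
```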

And a few mischiefs

We have seen three different approaches to extending CL in order to accommodate new popular syntactic constructs and approaches. Lastly, I wanted to tread a little in the "danger zone" that may be considered unconventional or plain bad style by many lispers — modifying syntax at the reader level. One thing that Clojure (following other dynamic languages before it) has proven, I believe, is the importance of shorthand literal notation for popular operations. The CL standard predates this understanding: although it has specific print representations for various important objects, and even a special syntax for static arrays, it lacks shorthand literals for most other common things. Yet the language is really future-proof in this respect, because it provides a way to hook into the reader mechanism by modifying the readtables. This was further smoothed and packaged by the popular NAMED-READTABLES library, which allows treating readtables similarly to packages. In RUTILS I have defined several extended readtables that implement a few shortcuts that are used literally in every second function or macro I define in my code. These include:

  • a shorthand notation for zero-, one- or two-argument lambda functions: ^(+ % %%) expands into (lambda (% %%) (+ % %%))
  • a literal syntax for hash-tables: #h(equal "key" "val") will create an EQUAL-hash-table with one key-value pair
  • a syntax for heredoc-strings: #/this quote (") shouldn't be escaped/# (which, unfortunately, doesn't always work smoothly in the repl)
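Assuming the RUTILS readtable is active (the readtable name below is an assumption; consult the NAMED-READTABLES and RUTILS documentation for the exact name), these shortcuts compose naturally:

```lisp
(named-readtables:in-readtable rutils:rutils-readtable) ; name is an assumption

(mapcar ^(* % %) '(1 2 3))        ; => (1 4 9)
(? #h(equal "key" "val") "key")   ; => "val"
```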

Overall, I have experimented a lot with naming — it was sort of my obsession in this work to find short and obvious names for new things, many of which substitute the existing functionality, under the constraints of not altering what's already in the standard. For this sake, I've ventured into non-character symbols and even the keyword package — a major offence, I reckon... And here are a few of the findings I wanted to share (besides ? and with mentioned previously):

  • call is a new alias for funcall — I suppose, in the 70's it was a really fun experience to call a function, hence the name, but now it's too clumsy
  • get#, set#, and getset# are aliases and new operations for #-tables (when you can't or won't use ? for that)
  • finally, the grandest mischief is := (alongside :+, :-, :*, :/), which is an alias for setf (and, you've guessed it, incf etc.). The justification for this is that everyone is confused about the -f, that setting a variable is a very important operation that we should immediately notice in our clean and functional code ;), and that := is a very familiar syntax for it, even used by some languages, such as Pascal or golang. It may be controversial, but it's super-convenient.

The only thing I failed to find a proper renaming for so far is mapcar. It is another one of those emblematic operations that should be familiar to everyone, yet -car creates confusion. For now, I resist the temptation to rename map into map-into and make map smarter by using the first sequence's type for the result expression. However, there's no plausible alternative variant I was able to find even among the zoo of other language's naming of this concept. Any thoughts?

PS. Those were a few prominent examples, but RUTILS, in fact, has much more to offer. A lot of stuff was borrowed from other utility projects, as well as implemented from scratch: anaphoric operators, the famous iter — a replacement for loop, Clojure-style threading macros, a new semantic pair data type to replace cons-cells, lots of utilities to work with the standard data structures (sequences, vectors, hash-tables, strings) making them truly first-class, iteration with explicit indices etc etc. With all that in the toolbox, there's now no ground to claim that Lisp is in any aspect inferior in terms of day-to-day UX compared to some other language, be it Haskell, Ruby or Clojure. Surely, I'm not talking about the semantic differences here.

Wimpie Nortje: Do you really want to use conditional compilation?

· 40 days ago

Edit (2016-05-18): Use featurep to improve readability in run time check. Thanks to @ogamita for the pointer.

Edit (2016-05-21): Fix spelling mistakes. featurep not featuresp. Thanks to @ngnghm for pointing that out.

How do I conditionally include code?

When code must change behaviour based on build time settings people often reach for the conditional reader macros (#+ and #-).

These macros have two properties one must be aware of.

  1. The compiler sees different code based on the condition.
  2. The macros are only evaluated at compile time.

Another thing to be aware of is that ASDF only recompiles files when their modification timestamps have changed. This is an optimization to decrease compilation time.

There are two situations where conditional code inclusion is most often used. The first is writing code which is portable between different computing environments, and the second is setting behaviour options at build time.

In the first scenario the same source file must work on different platforms (e.g. 32 bit and 64 bit) and different compilers. The variance in target environments makes it a necessity to present different code to the compiler based on the environment. Once the file is compiled there is no reason to recompile it until it is moved to a new environment. For this scenario the two properties above (and ASDF's partial compilation) are exactly what is needed, and the correct solution is the #+ and #- macros.
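A sketch of this legitimate portability use, dispatching on implementation features (the function name and the fallback branch are illustrative assumptions):

```lisp
;; The reader keeps only the branch matching the current implementation's
;; *FEATURES*, so each compiler sees exactly one definition body.
(defun current-thread-name ()
  #+sbcl (sb-thread:thread-name sb-thread:*current-thread*)
  #+ccl (ccl:process-name ccl:*current-process*)
  #-(or sbcl ccl) "unknown")
```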

The second scenario happens when the application's behaviour can be modified by setting appropriate variables at build time. This technique is often used to switch a code base between development and production modes.

If conditional macros are used to perform this task it is easy to end up with files which were compiled under different conditions. This will almost certainly result in transient bugs, i.e. bugs which disappear when the complete project is recompiled.

Since a complete recompile avoids transient bugs the next logical step is to do a complete recompile every time strange behaviour is encountered. Doing such a recompile negates much of the benefit of ASDF's partial compilation.

Another issue is that compiling different code based on the environment means that you can never test the complete code base in a single environment. One problem with this is that there is always uncertainty about the source of a bug which is present in only one of the environments. Another problem is that one can get into a situation where buggy code is only ever present in an environment with no debugging facilities1.

In summary, using conditional macros to implement build time settings has the following problems2:

  • Transient bugs,
  • Long compile times, and
  • Multiple execution environments.

These problems can be avoided while keeping the build time settings by using run time checks instead of compile time checks. The code examples below illustrate both the compile time and run time methods.

Conditional behaviour using reader macros

This method causes trouble.

(pushnew :app-release *features*)
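A fuller sketch of the troublesome pattern (the function below is my own illustration, not code from the original post). If this file is compiled while :app-release is present, but another file is compiled without it, the image ends up with a mix of release and development behaviour until everything is rebuilt from scratch:

```lisp
(pushnew :app-release *features*)

;; The compiler sees only ONE of these two branches; the other is
;; discarded by the reader before compilation even starts.
(defun logging-mode ()
  #+app-release :quiet
  #-app-release :verbose)
```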


Conditional behaviour using run time checks

One possible solution to rid your code of conditional macros.

(pushnew :app-release *features*)

(if (uiop:featurep :app-release)
    (release-behaviour)   ; hypothetical helpers, shown for illustration
    (debug-behaviour))

Conditional reader macros or run time detection?

Though I have not seen much discussion about this topic, I have seen a few projects which implement build time settings as I suggest in this post.

Method              Use case
#+ and #-           The same functionality is implemented by different pieces of code for different environments.
Run time detection  Different behaviours are selected based on build time options.

  1. An example is to move between SLIME and Buildapp with debugging disabled.

  2. Also see Buildapp fails when using uncompiled libraries for another problem caused by conditional macros.

Nicolas Hafner: 9th European Lisp Symposium - Confession 62

· 45 days ago

As I'm writing this I'm still in Krakow. Sitting next to me is Till, who joined me for ELS this year. It was a blast, but I'm also really exhausted and my throat is still hurting a bit from talking all the time the past three days. Our flight back to Zürich is in about an hour from now and I have a test to study for on the coming Thursday; I actually would've really liked to stay a bit longer, especially considering there were a few people I would've loved to talk to a bit more. Alas, you can't always get what you want.

But before I go through the entire thing by backtracking, let's instead reverse time all the way back to Sunday. Our flight to Krakow was scheduled for 17:00, so we had ample time to lounge around at home and try to relax a bit before the inevitable stress that is airport security and flying in general. Till had also packed way too much stuff, so we unloaded a bunch to make the carrying lighter. In hindsight I'm really glad we did that, as it turned out that we had to walk around quite a bit in Krakow.

At around two we then set off for the airport, where we had a quick lunch and noticed that Till had left his boarding pass in a book he had packed but we then left at home. Fortunately enough -after a bit of trouble with the Swiss website- we managed to download a copy of it to his tablet, so that all turned out fine. I suppose you really can't go for any kind of journey without at least some kind of oversight that gives you a hefty scare.

The plane we flew in was a small jet, but it was still pretty packed. I assume it was mostly Polish people returning home after a quick holiday break. On the flight my stomach got upset a bit, but otherwise it went by just fine. Once we finally arrived in Krakow we got a bit confused about the airport layout, as it was under pretty heavy construction. After some wandering about we managed to find the proper bus stop and get our tickets. I also exchanged way too much money for zloty, most of which is still in my wallet now. I didn't get much of an opportunity to waste it.

The bus ride to the hotel took around three quarters of an hour, so we got to have a good look around the outskirts of the city and its landscapes. The hotel itself was located in an area that looked rather worn down, the streets were not up to par and long stretches of the sidewalk were opened up for construction. After check-in and a short look-see at our room we decided to head on down to the bar and wait for someone to show up. Most of the conference people had already arrived in Krakow before us and were having a jolly time at the pre-conference registration party from what I overheard.

About an hour later we were joined by Christian Schafmeister and Joram Schrijver and the discussions immediately fired up. We talked a lot about Clasp and its near future- Christian was pretty worried about what he could show for his talk. He had the impression that people wouldn't be impressed by a new Common Lisp alone. I can't say I agree with that viewpoint, Clasp brings a lot of new stuff to the table that makes it a great addition to the list of implementations. The C++ interop alone is already noteworthy enough, but there's lots of smaller features that could prove very useful for larger projects. The biggest problem with Clasp remains however; there's just not enough people working on it to move it along quicker. Even with Christian's incredible speed and dedication, there's only so much he can do on his own. I've been trying to push Clasp into a situation where it is more accessible to other people for quite a while now, but especially with recent changes there's a lot left to be done for that- something that was reflected again throughout the discussions we had during the conference.

Later we were joined by a group of other lispers that were just returning from their previous party to start a new one at the bar. Things got rather lively and all sorts of topics got brought up. At around midnight I had to excuse myself however, as I wanted to be at least somewhat fresh on the coming morning. The hotel room was alright, at least it didn't smell terribly and was otherwise nicely roomy. Unfortunately the heating was also turned up enough that I couldn't fall asleep for about an hour. Opening the window cooled things down sufficiently and we finally managed to get some good rest in.

Finding the conference building in the morning was a bit tricky, but we managed to discover a kiosk along the way to get some snacks and drinks in. The conference provided for plenty of that on its own, but I was still glad to have a nice bottle of ice tea in my bag at all times. We got to the conference hall on time for the registration, but the actual conference organisation was oddly delayed. Nevertheless, discussion between the few people that had already showed up sparked almost immediately, so it didn't feel like we had to wait at all.

It was great to see Robert Strandh again as well, although I barely got to talk to him this year, much to my dismay. I'm hoping to remedy that next year. I also met Masatoshi Sano again, but I only got to talk to him on the second day. I met a few other people that I already knew from the previous ELS and had the pleasure of talking to them, but I unfortunately am terrible at remembering names, so I can't list them all here. My apologies.

Moving on to the talks. The first was about Lexical Closures and Complexity, by Francis Sergeraert. At points it was unfortunately (despite the rather heavy focus on maths at ETH) a bit over my head or moving too quickly, so I had a bit of trouble following what exactly was happening. From what I could gather he uses closures to model potentially infinite or very large problem spaces and then performs various computations and mappings on those, thus still being able to compute real-valued results without wasting enormous amounts of resources trying to model it all.

Next up was the language design section of the talks, the first of which focused on automated refactoring tools to aid students in finding style problems in their Racket code. It was fairly interesting to see a brief introduction to the tools used to both analyse and restructure source forms automatically to determine more succinct and idiomatic ways of achieving the same semantic result. It was also rather depressing to see some real-world snippets of code they had gathered from actual students. I can't say I'm surprised that this kind of absolutely horrendous code gets written by people being introduced to a language or programming in general, but one can't help but wonder if these 20-level nested ifs might stem from something other than the writer being new to it.

Following this was the demonstration of a library that extended the CL type system for a way to type-check sequences, allowing you to express things like plists as a type that you otherwise could not. It appears to me that this system might prove useful for succinct pattern checking, but unfortunately because these type definitions don't actually really communicate with the compiler in any way it is rather useless for inference or other potential optimisations that could be done if the compiler had actual knowledge of what kind of structure is being described. The code presented also used (declare (type ..)), which is the wrong way to go about something like this. Declarations are intended as promises from the programmer to the compiler. That all sane compilers also insert checks for the type on standard optimisation levels is not something that should be relied upon. check-type on the other hand would be perfectly suitable for this. However, at that point you might as well drop the type charade altogether and just have something like a check-pattern function that performs the test. Still, the talk presented an interesting view into how the actual sequence type descriptors are compiled into efficient finite state machines. The mechanism behind the type set merging was very intriguing.
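The distinction drawn above can be shown with a small sketch (my own illustration, not code from the talk): declare is a promise the compiler may silently trust, while check-type is a guaranteed, portable runtime check:

```lisp
;; declare: a promise to the compiler. Whether the type is actually
;; verified at runtime depends on the implementation and on the
;; optimisation settings in effect.
(defun fast-double (x)
  (declare (type fixnum x))
  (* 2 x))

;; check-type: always signals a correctable TYPE-ERROR on mismatch,
;; on every conforming implementation.
(defun safe-double (x)
  (check-type x integer)
  (* 2 x))
```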

Afterwards we heard a talk from Robert about his implementation of editor buffers, presenting an efficient and extensible way to handle text editing. I had read his paper on it before so I was already familiar with the ideas behind it, but it was a nice refresher to hear about it again. I'll make sure to see about hooking in his system when I inevitably get to the point of writing a source editor for QUI. It was also pretty surprising to hear about a topic like this since usually when using editors, one doesn't think about the potential efficiency problems presented by them- it all just works so well most of the time already. The biggest grievance for me in Emacs at the moment isn't necessarily the editing of text itself, even if that slows down to a crawl sometimes when I'm cruising about with a couple hundred cursors at the same time, no, the problem is with dynamic line wrapping. Emacs goes down to a complete crawl if you have any kind of buffer with long lines without line breaks. This however I assume has more to do with the displaying of the buffer than the internal textual manipulation algorithm. Maybe Robert has ideas for a good way to solve that problem as well.

During Lunch I had the great pleasure of meeting and talking to Chris Bagley of CEPL fame. I'll really have to look into that myself to see which parts of it can be incorporated into Trial. I'll definitely incorporate the Varjo part so that we can use sexprs for GLSL code, but there might be lots of other little gems in there that can be repurposed.

Following Lunch was the session on DSLs, starting off with a system to describe statistical tests in lisp. For this talk too I felt like I wasn't familiar enough with the areas it touched upon to really be able to appreciate what was being done. Apparently the system was used to do some hefty number-crunching. Besides, it's always great to see new discoveries in language evolution that allow a convenient description of a problem without sacrificing computational power for it.

The following talk stepped right into that line as well, presenting a high-performance image processing language called CMera (I believe). It does some really nifty stuff like automatically unrolling loops to avoid having to do edge testing in your tight loops all the time, expanding a single loop over an image into nine different parts that are all optimised for their individual parts. When comparing the code written against an implementation of the same algorithm in hand-written C++, the difference is absolutely astounding. Not only does this reduction in code size make things much more readable and maintainable, it also allows them to move much faster and prototype things quickly, all without having to sacrifice any computation speed. If anything they can gain speed by letting the compiler use higher-level information about the code to transform it into specialised and large but performant code.

Finally the last session of the day focused on demonstrations, firing right off with CL-MPI, a library to use the MPI system from within lisp while taking care of all the usual C idiosyncrasies, thus presenting a very handy API to run highly parallel and distributed code. Really interesting to me was their system to synchronise memory between the individual machines automatically. While the system seems pretty neat I couldn't help but wonder whether this might a bit too easily lead to either assuming synchronisation when none is present, or introducing a bottleneck in the system when it has to synchronise too often. Either way, I'm glad I don't have to worry about highly distributed systems myself; measly threading alone is enough of a headache for me.

After this we had a really nice showing of an interactive computer vision system in Racket using the Kinect. That sounded like a very fun way to introduce students to Racket and computer vision in general. The few demos he showed also seemed very promising. Given last year's talk on computer vision, I might really have to take the time to look into this stuff more closely some time.

The last talk for the day focused on the problem of lexical variables in CL when debugging. Since lexical variables are often compiled away completely, it's tough to see their values during debugging. SBCL often does a pretty good job at retaining this information when compiling with (debug 3) in my experience, but there's certainly times when it doesn't, and I can see the value in implementing a system that can ensure that this gets preserved in every case, especially one that's portable across implementations. Apparently there's some really nasty code walking necessary to get the job done, and there's apparently still no code walker around that actually works as expected on all major implementations, which is a bit of a downer.

As usual closing off the day was a session of lightning talks. This year mine was a form of continuation on my last year's talk about Qtools. I talked very briefly about Qtools-UI, the effort to replace Qt parts that aren't extensible enough and thus provide a more convenient base for people to work with. I'm not sure if I managed to convince anyone to contribute to it, but hopefully it'll at least linger around in some people's heads so that they might remember it if they ever come across the need to write a GUI.

The rest of the lightning talks I'm afraid to say I can't quite remember. My memory is rather shoddy at the moment and the only reason I remember all the other talks is because I looked up their titles on the website. So, my apologies for skipping out on this, but I think the article is already plenty long as it is so going into detail on all these would only make it all the longer.

The first day was concluded by Chris, Christian, Joram, Till, and me going back to the hotel for a brief chat at the bar, followed by a quest in search of pizza. We looked up a bunch of places near the hotel and went on our way. The first we encountered was too full and the second was near a campus that was filled with students drinking booze and doing BBQ; we deemed it a bit too lively for us. The third one was inside a student dorm building, but had enough space for us to spend some hours talking and eating. The pizza tasted very different from what I'm used to. It wasn't bad, but also not really my kind of thing.

Some more talking and a good night's rest later it was already Tuesday. Time flies when you're having a blast. We got up a tad later this time around and walked through a convenience store. I was relieved to see that just like all the stores I'm used to the layout is as confusing as possible so that you have to waste lots of time walking by everything except what you're looking for.

Since we were close on time and there was a group photo to shoot we didn't get any time to talk before the first talk. It started off with a presentation of the Julia language by Stefan Karpinski. Julia seems like a really nice replacement for Matlab and I'd very much welcome it if it got more ground that way. However, some of the points that were presented here didn't really seem to make much sense to me. One thing that was emphasised as distinguishing Julia is that number types and arithmetic aren't in the specification, but rather defined in Julia code itself. This sounds like a neat little thing to do for curiosity's sake, but I just can't see the benefit of it. Not to mention that now instead of reading some pages of a spec you have to read some pages of code with possibly arcane interconnections and optimisations going on. Whether this makes anything more clear is really dubious to me. I'm guessing that this part was mostly mentioned at all to at least bring something new to the table since it would otherwise be pretty hard to impress lispers. Another thing I was confused about is that he seemed to hint at the possibility of writing functions that get the information about the inferred type of the compiler and can use that to generate different code, which is something that I've missed in CL in places where macro functions could be further optimised with that kind of information, but the example he showed didn't seem to use that in any way or even get it at all, so I'm not sure if I didn't catch that part or what exactly is going on with that.

A quick break later we got to the implementations part of the talks. Robert presented his modern implementation of the loop macro, which uses a system of combinatory parsing and full CLOS to allow it to be extensible. I'd love to have a portable way to extend loop as iterate really doesn't appeal to me much at all and there's currently nothing else that is extensible for custom sequences and clauses and the like. I'm not sure if his implementation will be adopted, but I would definitely welcome it.

The next two talks, which were about source translation in Racket and STM in Clojure, I'm sad to say I can't really talk about because I was distracted by a bug I had discovered moments earlier and couldn't help but try to fix. I got absorbed all too easily, so I didn't catch much of them.

During the lunch break I got to talk to Masatoshi Sano for a good while, we mostly discussed the prospect of using Roswell for my Portacle project and talked about some of the difficulties or ways to deal with what I'll just call "The Windows Situation". I later talked to Joram a bit about potentially getting him involved in the Colleen3 or Trial projects, for which I'd heartily welcome some contributors or even just discussion partners. I'm very excited about the prospect of working with him on that.

And then came the big one. Christian's talk was, just like last year, pretty comparable to a bomb dropping. Some suspect that he's not of this world. Ignoring the question of his conception, hearing him in his element talking about Chemistry is always a treat. He showed off a nice demo of what CANDO is capable of and it really looks like a nicely lispy way of performing chemistry modelling. I say this with my practically nonexistent knowledge of chemistry, so I can't really claim to have a grasp on what you can actually do with it. Given that he's been at this for such a long time though, I'm convinced he knows what he needs to do in order to create things like what he presented to us: a perfect water filtering membrane. I'm still glad to have gotten involved with Clasp; it has given me lots of really great talking and thinking opportunities. It's exciting to listen to or discuss the in-depth details of what's going on inside Clasp. Now though Christian needs to get his chemistry stuff off the ground so that he can get enough funding to continue Clasp. Unfortunately grants have been hard to come by for him and that's a looming pressure that has haunted the project many times before. Hopefully he'll be able to prove the worth of Clasp and Cando in the near future. I wish him all the luck.

Next up we had a presentation about the question of how different implementations of a depth of field effect perform on different hardware. This was mostly a concerning example as to how much code still needs to be tuned to the hardware it's being run on today. Maybe the effect is actually even more so now, since lots of hardware that lies in the same category is still very differing on what it is adept at. Thankfully I am mostly staying clear of such optimisation lunacy.

Some more coffee passed by and we were ready for the last session of talks for the conference. The debut was made by James Anderson, presenting his research into how source files are connected with each other and what kind of dependencies exist between them. He wrote a system that analysed the entire Quicklisp ecosystem's files for symbol references between things and then crunched all that data down into interesting graphs using his own database technology. The graphs for larger systems like qtools-ui look like a complete jumble as expected. He also mentioned the difficulties of trying to extract this kind of relationship information since he could only inspect code by reading it in. This is particularly a problem for methods, since they are likely to be defined from lots of different source files and potentially packages, but without at least type inference or even runtime information you can't really know where the dependency goes. Initially his idea for doing this seemed to be that he doesn't want to have to write the dependency information into his system definition files and the system should be able to infer it automatically. I'm not so sure that this is a good idea, or that it is in fact such a problem. It seems like a rather minor inconvenience to me, but then again I've never written systems on a very large scale.

Closing it all off we had a presentation of Bazel and how it can be used to build Lisp. The most promising feature of it all being that you are able to use it to statically link libraries into an SBCL binary. I'm not convinced that Bazel is the tool to use for building unless you have a gigantic project or ecosystem surrounding it already however. It seems ridiculously heavy-weight and paying the price for it and its different way of configuration and operation does not seem worth the benefits unless you really need static linking and cannot do with shared libraries. Still, it gave me some things to think about for the eventual effort of writing my own build system, whenever that will happen.

Then as before we had some more lightning talks to round it all off. I didn't do a second one this time around, mostly because I didn't really know what to talk about for Trial. It did not seem finished enough to present yet. Maybe next year.

Finally the conference was rounded off by some goodbye messages from the conference organisation and the announcement that the next ELS might be happening in Brussels. We then had two hours left before the banquet. Michal Herda guided us into the inner parts of the city where we got to see some nice architecture. Along the way Chris Bagley and I chatted about the problems in writing game engines and games in general and some other assorted topics.

Once we arrived at the banquet I was pretty beat. The place we ended up at looked oddly high-brow. The tables were set with all the usual things you get in fancy restaurants: multiple wine glasses, forks and knives. It seemed in an odd conflict with the rest of the getup of the conference attendees. We all looked far too casual for this kind of thing. Due to my pickiness I couldn't eat much of the food that was being served either. It certainly looked fancy, but I don't think the taste was in accordance with that. From what I've heard from others or noticed in their expressions it was nothing exceptional. No matter for me either way though, since I came all this way not to eat, but to finally be with people that understood me and vice versa. And I got ample opportunity to do exactly that. During the dinner I mostly talked with Joram about Colleen3 and Markless.

Soon enough it hit 22:00 and we had to leave. Due to the long way back to the hotel and other delays in saying goodbye to everyone, we only arrived around two hours later, at which point I just slammed myself into bed after making sure that I got the boarding pass for next morning.

And so today I woke up at six, just to make sure that we had plenty of time for potential mistakes. Three quarters of an hour later we were on the bus to the airport. Half an hour later we had already passed through security and were waiting at the gate, at which point I started writing this. The flight after was rather annoying. It was pretty packed and there were lots of screaming children on board, the bane of any flight passenger. I have no idea why there were so many families on board, let alone on a Wednesday morning, let alone from Krakow to Zürich. Despite hellish screams of tortured souls haunting us along the way we made it back safely.

Now it's already 16:00 and aside from getting home and continuing to write this I only got the time to cook some nice lunch- the kitchen remains to be cleaned.

I suppose I should try to form some sort of a conclusion here to end this overly long article on a good note. If it wasn't already apparent from my descriptions, I had a grand time at the conference and I'm really glad that I could attend again this year. A huge thanks to everyone that was willing to talk to me and especially to all the people that got involved to make it all happen. Hopefully it'll happen again next year; I'm definitely looking forward to it.

For now though I have to get back to work hacking lisp. There's so much left to be done.


Daniel Kochmański: 9th European Lisp Symposium in Kraków

· 46 days ago

The 9th European Lisp Symposium in Kraków has ended. It was my second ELS (the first was just a year before, in London). It is really cool to spend some time and talk with such knowledgeable people. The European Lisp Symposium is a unique event because it gathers people from all around the world who are passionate about what they're doing. The mixture was astonishing: university professors, professional programmers, individual hackers, visionaries, students.

I'm glad to have met in person many people with whom I previously had contact only over the internet. I heard about various exciting projects and ideas, both during the sessions and during the breaks. I even have an autograph from Kathleen Callaway on my Lisp in Small Pieces book. I'm also very excited that this year there was a talk about Clojure, a modern incarnation of the Lisp idea.

During the event I had a chance to stand in front of this "angry crowd" (actually a crowd of very nice people - I was still very stressed though) during my lightning talk. I talked about my opinions on what contributing to the Common Lisp ecosystem should look like and how people can get involved in a productive and efficient way.

Yesterday we were at a banquet which officially closed the symposium. The food was delicious and the company was great; the only thing is that it could have lasted a little longer. In fact many of the attendees moved to some other place to continue the meeting. I've heard they finished late in the night (or early in the morning). I was too sleepy, so I left for my kind host's flat.

I want to thank all the organizers and speakers for the effort they've put in to make the symposium happen. Michał Psota did a tremendous job as the local chair - he managed all the local arrangements and it was all perfect. Irène Durand and Didier Verna managed things very well - everything went very smoothly and only a few people got shot by Didier during the lightning talks for exceeding the time frame. I hope that I'll be able to attend the next ELS, which will probably take place in Brussels.

Vsevolod Dyomkin: European Lisp Symposium 2016

· 47 days ago

For the last two days, I've been at ELS2016. So far, it's been a great experience - I had actually forgotten the joy of being in one room with several dozen Lisp enthusiasts. The peculiarity of this particular event is that it sits somewhere in the middle between a scientific conference like ACL, which I had a chance to attend in recent years thanks to my work at Grammarly, and a tech gathering: it employs the same peer-reviewed approach and scientific presentation style you will find at research conferences, but most of the topics are very applied and engineering-related.

Anyway, the program was really entertaining with several deep and insightful presentations (here are the proceedings). The highlights for me were the talk on the heterogeneous sequence type-checker implementation based on the Lisp declare facility (which I'm growing more and more fond of) by Jim Newton, and Kai Selgrad's presentation of an image-processing DSL that's an excellent example of the state-of-the-art Lisp approach to DSL design. Other things, like a description of the editor-buffer protocol and a technique for preserving local variables, were also quite insightful. And more good stuff is coming...

It's also great to hear new people bringing fresh ideas alongside old-timers sharing their wisdom and perspective - one of the things I appreciate in the Common Lisp community.

Near the end, I'm going to present a lightning talk about RUTILS and how I view it as a vehicle for evolving the Common Lisp user experience.

Sugaring Lisp for the 21st Century from Vsevolod Dyomkin

Christophe Rhodes: not going to els2016

· 49 days ago

I'm not going to the European Lisp Symposium this year.

It's a shame, because this is the first one I've missed; even in the height of the confusion of having two jobs, I managed to make it to Hamburg and Zadar. But organizing ELS2015 took a lot out of me, and it feels like it's been relentless ever since; while it would be lovely to spend two days in Krakow to recharge my batteries and just listen to the good stuff that is going on, I can't quite spare the time or manage the complexity.

Some of the recent complexity: following one of those "two jobs" link might give a slightly surprising result. Yes, Teclo Networks AG was acquired by Sandvine, Inc. This involved some fairly intricate and delicate negotiations, sucking up time and energy; some minor residual issues aside, I think things are done and dusted, and it's as positive an outcome for all as could be expected.

There have also been a number of sadder outcomes recently; others have written about David MacKay's recent death; I had the privilege to be in his lecture course while he was writing Information Theory, Inference, and Learning Algorithms, and I can trace the influence of both the course material and the lecturing style on my thought and practice. I (along with many others) admire his book about energy and humanity; it is beautifully clear, and starts from the facts and argues from those. "Please don't get me wrong: I'm not trying to be pro-nuclear. I'm just pro-arithmetic." - a rallying cry for advocates of rationality. I will also remember David cheerfully agreeing to play the viola for the Jesus College Music Society when some preposterous number of independent viola parts were needed (my fallible memory says "Brandenburg 3"). David's last interview is available to view; iPlayer-enabled listeners can hear Alan Blackwell's (computer scientist and double-bassist) tribute on BBC Radio 4's Last Word.

So with regret, I'm not travelling to Krakow this year; I will do my best to make the 10th European Lisp Symposium (how could I miss a nice round-numbered edition?), and in the meantime I'll raise a glass of Croatian Maraschino, courtesy of my time in Zadar, to the success of ELS 2016.

drmeister: Linking LLVM bitcode files for a dynamic language

· 52 days ago

Clasp Common Lisp is a dynamic language in which every top-level form needs to be evaluated in the top-level environment. Clasp compiles Common Lisp code to bitcode files and then links them together into a shared library or an executable. When the library or the executable is loaded, each top-level form needs to be evaluated. This will remain the case until Clasp gains the ability to save a running environment to a file, which it doesn’t have yet. Even then, the ability to play back the top-level forms will be needed to create the environment to write to a file.

So the compiled bitcode files need to keep track of the top-level forms so that they can be played back at startup. Clasp does this by defining a “main” function with internal linkage (called “run-all”) for each llvm::Module. “run-all” evaluates every top-level form that was compiled into the llvm::Module. The tricky part is how this main function gets exposed to the outside world, so that it can be called whether a single bitcode file is loaded into Clasp or several files are linked together into a library or executable and invoked along with the “run-all” functions from other modules.

Clasp creates a global variable in each module called “global-run-all-array” that stores an array of (initially) one function pointer, pointing to the module’s “run-all” function. The “global-run-all-array” global variable is defined with “appending” linkage. With this linkage, when bitcode files get linked together by the system linker, all of the “run-all” function pointers are appended together and stored back into the “global-run-all-array” global variable.

Then there is the problem of determining the number of entries in the “global-run-all-array”. Clasp solves that by ensuring that the last module linked in a list of modules has a two-element “global-run-all-array” whose second element is NULL, and by adding a second global variable called “global-epilogue”.

When a bitcode file or a shared library is loaded into Clasp, it checks for the “global-epilogue” symbol. If it finds it, then it knows that the “global-run-all-array” contains a NULL-terminated array of function pointers to call. If “global-epilogue” is not present, then it knows that “global-run-all-array” contains a single function pointer.

Clasp then invokes each of the “global-run-all-array” functions one after the other and each one of them invokes the compiled functions for the top-level forms for each of the bitcode files.
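The playback scheme can be modelled in Lisp terms (purely illustrative; none of these names exist in Clasp): linking appends each module's array of "run-all" thunks, and startup walks the merged array until it hits the NULL sentinel.

```lisp
;; Illustrative model of the appending-linkage startup scheme --
;; NOT Clasp's actual implementation; all names here are made up.
(defun link-modules (&rest run-all-arrays)
  ;; The system linker concatenates each module's "global-run-all-array";
  ;; the last module contributes a NIL sentinel (standing in for NULL).
  (apply #'append run-all-arrays))

(defun run-all-startup (global-run-all-array)
  ;; Invoke each module's "run-all" until the sentinel, so every module's
  ;; top-level forms are evaluated in link order.
  (loop for thunk in global-run-all-array
        until (null thunk)
        do (funcall thunk)))

(run-all-startup
 (link-modules (list (lambda () (print "module-1 top-level forms")))
               (list (lambda () (print "module-2 top-level forms")) nil)))
```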

This only takes a few seconds when starting up Clasp.

Note: I haven’t been actively blogging – because I’m very, very actively programming.  If you want to say hi, I’m on IRC, #clasp almost every day.

CL Test Grid: quicklisp 2016-04-21

· 54 days ago
The difference between this and previous months:

Grouped by lisp implementation first and then by library:

Grouped by library first and then by lisp impl:

(Both reports show the same data, just arranged differently)

As usual, some new libraries start to fail on old lisp implementations because they need a newer ASDF.

3d-vectors redefines a constant to a value not eql to the previous value. clinch starts to fail on several lisps. hyperluminal-mem refers to the undefined variable MOST-POSITIVE-B+SIZE. rutils crashes CCL. And there are some other failures. There are improvements too, of course.

If you're interested in some particular failure or need help investigating something, just ask in the comments or on the mailing list.

Daniel Kochmański: Kraklisp and hackerspace in Kraków

· 54 days ago

Last weekend I was in Kraków to visit some friends. Since I know that there is a lisp group, kraklisp, at the Jagiellonian University, and I know a few people there (mainly from the IRC channel #lisp-pl @ freenode), I arrived in Kraków a bit earlier than originally planned to give a talk about reader macros in Common Lisp. You may find it here (it's in Polish):

Michał "phoe" Herda met us at the bus station near the university buildings and led us to our destination. The Jagiellonian University has very beautiful buildings, reminiscent of those at Morasko in Poznań. When we arrived we entered the KSI students' association room, waited a few moments and started the workshops.

I was surprised that so many people arrived. Eleven people is a lot given that Common Lisp is considered a niche language. kraklisp brings together not only students but also lisp enthusiasts who work with Scala, Java, NetBSD and many other technologies outside the university. It's great to know that we have an active group of lisp hackers here in Poland!

My talk took about an hour and from the feedback I infer it went just fine, though the pace was a little too fast. People were actively listening and asking questions. That was fun. After me, Jacek "TeMPOraL" Złydach led a workshop about web scraping in Common Lisp. He showed very nicely how to build programs interactively in a bottom-up manner. The video is available here (also in Polish):

After the meeting the group split up and we headed to the Hackerspace Kraków headquarters. An amazing place - a group of people who tinker with hardware and software in their free time. Lots of electronic devices, soldering irons, computers and boxes with unknown contents. We chatted a little and learned about some nice projects they have developed - like a device mounted on your chest to locate objects in front of you, created to help blind people get around a room. Their space was in a bit of a bustle, because they are currently moving to a new location. Hackerspace Kraków is definitely worth seeing, and if feasible - cooperating with.

Due to the late hour we finally headed to our lodgings (the hackerspace is open 24h!). Kraków seems to be a great place to engage in CS hobbies like programming and electronics, or just to hang around with smart people in general. I'm glad I'll be there again soon to attend the European Lisp Symposium.

Wimpie Nortje: Buildapp fails when using uncompiled libraries

· 54 days ago

Note: I use CCL 64-bit on Linux. I have not checked this on anything else.

I discovered that Buildapp fails to build a project when the FASLs1 for some 'standard' libraries are not available. 'Standard' meaning well known, mature libraries like Alexandria.

When I started using Common Lisp I tended to use the reader macros #+ and #- to conditionally compile for development or production. Since ASDF uses file modification times instead of code dependencies to determine what to recompile it would often happen that the project's various files were compiled using different compilation conditions. This caused many mysterious bugs.
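A hypothetical illustration of that pattern (the :production feature is an assumption, something a build script would push onto *features*):

```lisp
;; Read-time conditionals pick a form depending on *features*.
;; :production is a hypothetical feature, added e.g. with
;; (push :production *features*) before compiling for production.
(defun log-level ()
  #+production :warning
  #-production :debug)
```

When the project's files are compiled at different times with different *features*, the compiled FASLs disagree with each other, which is exactly the mysterious-bug scenario described above.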

The solution to this environment mismatch is to force a recompile of the whole project. I use a two-phase process of generating a manifest and then building the application binary. The recompile is forced by deleting2 the project FASLs before each phase.

At one point I tracked an elusive bug to a library which uses conditional reader macros for conditions which differ between my development and production environments. I extended my FASL deletion to include all the Quicklisp libraries for both building phases. This caused Buildapp to fail.

By using (ql:quickload) in its verbose mode during the build process I saw that CCL emitted 'compilation failure' warnings for some libraries while generating the manifest. Many mature and often used libraries caused such warnings. Compilation completed successfully in spite of the warnings and these libraries have been used like that for years so it does not seem to be a serious problem.

During the binary creation phase, Buildapp exited with an error at the first occurrence of a 'compilation failure' warning. It seems that Buildapp escalates all warnings to errors, which caused it to fail.

When your own code triggers this behaviour it is useful because it helps you ship better software. However, when external libraries trigger the failure it is extremely annoying because it blocks your development effort.

The solution for making a complete build in a consistent environment is to do a full clean before generating the manifest and a project-only clean before building the binary. This enables Buildapp to load the libraries while still compiling the complete set in a known environment, but it requires that the environment conditions for the libraries remain constant during the manifest generation and building phases3.

  1. 'FASL' is short for 'FASt Loading'. It is a binary file containing compiled code. ASDF prefers to load code from a FASL rather than a source file when it determines that the source has not changed since being compiled.

  2. ASDF documentation provides three options for forcing a recompile: (1) The (clear-system) API call, (2) touching the system's .asd file, and (3) deleting the project FASLs.

  3. This can be a tricky requirement because some libraries use #+quicklisp which definitely changes from manifest to building.
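Option (1) from footnote 2 can be sketched as follows (the system name :my-app is hypothetical):

```lisp
;; Forget the compiled state of the system, then force a full recompile.
(asdf:clear-system :my-app)          ; :my-app is a hypothetical system name
(asdf:load-system :my-app :force t)  ; :force t recompiles every component
```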

Zach Beane: New version of ZS3 supports AWS4 auth

· 58 days ago

I just published ZS3 1.2.8. It’s available on my website and will be in the next Quicklisp dist update in May. The main difference is support for the latest AWS authentication system. This makes ZS3 work properly with the latest AWS regions like Frankfurt and Seoul.

If you have any trouble using it, please let me know!

(This work was for a paying customer; if you are interested in specific updates and features in ZS3 or any of my other software, get in touch.)

Daniel Kochmański: Creating a project homepage with the SCLP

· 65 days ago


In this short tutorial I'll describe how to easily bootstrap a project website. In fact that's what I did today with the Embeddable Common-Lisp website, in order to provide an RSS feed and to make publishing news easier.

Additionally, I'll show how to create a standalone executable for coleslaw with Clon, after producing a Quicklisp-independent bundle of systems.

Quick start

First clone the repository:

$ cd /home/p/ecl
$ git clone website
$ cd website

Now you should adjust the appropriate files. Edit .coleslawrc (file is self-explanatory), static pages and posts.

Each file with the extension *.page is a static page. pages/ contains an example template for a static page - don't forget to link it in the .coleslawrc's sitenav section. The exact URL of the page is declared in the file's header.

Files named *.post represent blog/news posts which appear in the RSS feed. They are indexed and accessible from the root URL. Supported file formats are markdown, html and cl-who (if enabled).

When you're done, you can simply load coleslaw in your favorite CL implementation using Quicklisp and call the function main on the website directory:

(ql:quickload 'coleslaw)
(coleslaw:main "/home/p/ecl/website/")

We will take a more ambitious road - we'll create a standalone executable with proper command line argument handling, built from a clean bundle produced by Zach Beane's Quicklisp. CLI arguments will be handled by Clon - the Command-Line Options Nuker, an excellent deployment solution created by Didier Verna.

Creating the bundle

A bundle is a self-contained tree of systems packed with their dependencies. It doesn't require internet access or Quicklisp and is the preferred solution for application deployment.

Some dependencies aren't correctly detected - Quicklisp can't possibly know that our plugin will depend on the cl-who system, and it can't detect cl-unicode's build-time requirement on flexi-streams (this is probably a bug). We have to mention these systems explicitly.

Clon is added to enable the clonification (keep reading).

(ql:bundle-systems '(coleslaw flexi-streams cl-who
                     cl-fad net.didierverna.clon)
                   :to #P"/tmp/clw")

Clonifying the application

Save the following script as clonify.lisp in the bundle directory:

(in-package :cl-user)
(require "asdf")

(load "bundle")
(asdf:load-system :net.didierverna.clon)
(asdf:load-system :coleslaw)
(asdf:load-system :cl-fad)

(use-package :net.didierverna.clon)
(defsynopsis (:postfix "DIR*")
  (text :contents "Application builds websites from provided directories.")
  (flag :short-name "h" :long-name "help"
        :description "Print this help and exit."))

(defun main ()
  "Entry point for our standalone application."
  (make-context)
  (when (getopt :short-name "h")
    (help)
    (exit))
  (handler-case
      (mapcar #'(lambda (p)
                  (coleslaw:main
                   (cl-fad:pathname-as-directory p)))
              (remainder))
    (error (c) (format t "Generating website failed:~%~A" c)))
  (exit))

(dump "coleslaw" main)

You may generate the executable with sbcl or ccl (ecl has some problems with the coleslaw dependency esrap - I'm working on it). I have used ccl, because it doesn't "derp" on the symbol exit and produces a slightly smaller executable than sbcl.

Issue the following in the bundle directory (/tmp/clw):

ccl -n -l clonify.lisp

This command should create a native executable named coleslaw in the same directory. On my host ccl produces a binary of approximately 50M.

Executable usage

This is a very simple executable definition. You may extend it with new arguments, more elaborate help messages, even colors.

To generate websites with sources in the directories /tmp/a and /tmp/b you call it as follows:

./coleslaw /tmp/a /tmp/b

That's all. Deployment destination is set in the .coleslawrc file in each website directory.

Adding GIT hooks

You may configure a post-receive hook for your git repository, so that your website is automatically regenerated on each push. Let's assume that you have put the coleslaw standalone executable in a place accessible through the PATH environment variable. Enter your bare git repository and create the file hooks/post-receive:

cd website.git

cat > hooks/post-receive <<'EOF'
#!/bin/sh
########## CONFIGURATION VALUES ##########
# Scratch directory for the checkout; adjust to taste.
TMP_GIT_CLONE=$HOME/tmp/coleslaw-site
##########################################

if cd `dirname "$0"`/..; then
    GIT_REPO=`pwd`
    cd $OLDPWD || exit 1
else
    exit 1
fi

git clone $GIT_REPO $TMP_GIT_CLONE || exit 1

while read oldrev newrev refname; do
    if [ $refname = "refs/heads/master" ]; then
        echo -e "\n  Master updated. Running coleslaw...\n"
        coleslaw $TMP_GIT_CLONE
    fi
done

rm -rf $TMP_GIT_CLONE
EOF

That's all. Now, when you push to the master branch your website will be regenerated. By default the .gitignore file lists the directory static/files as ignored, to avoid keeping binary files in the repository. If you copy something to the static directory you will have to run coleslaw by hand.


Coleslaw is a very nice project that simplifies managing a project website: it bootstraps the site without any need to maintain a running lisp process on the server (the content is static and may be served with nginx or apache) and allows easy blogging (write a post in markdown and push it to the repository).

Sample Common-Lisp Project is a pre-configured website definition with a theme inspired by existing project themes, and with some nice features like an RSS feed and a blog engine (thanks to coleslaw).

We have described the process of creating a simple website, creating a standalone executable (which may be shared by various users) and chaining it with git hooks.


ECL News: New website look

· 65 days ago

I've imported the old archives and generated the ECL website with the help of coleslaw and the SCLP. Now we have a proper RSS feed and posting news is less annoying than before.

For posterity, here is the ugly hack I've used to import archives from JSON:

(defparameter *archives-template*
  ";;;;;
title: ~A
date: ~A
author: ~A
format: md
;;;;;
~A")

(setf *json-posts*
  (with-open-file (f #P"/home/jack/linki/repo/ecl-website/static/files/misc/news-ecl-backup-2015-08-25.json"
                     :direction :input
                     :external-format '(:line-termination :cr :character-encoding :utf-8))
    (cl-json:decode-json f)))

(mapcar (let ((cnt 0))
          #'(lambda (post)
              (with-open-file (f (format nil "/tmp/archives/~D.post" (incf cnt))
                                 :direction :output
                                 :if-exists :supersede
                                 :external-format (make-external-format :line-termination :unix))
                (format f *archives-template*
                        (cdr (assoc :title post))
                        ;; (cdr (assoc :labels post))
                        (substitute #\- #\/
                                    (subseq (cdr (assoc :url post)) 40 47))
                        (let ((author (cdr (assoc :author post))))
                          (if (string-equal author "dkochmanski")
                              "jackdaniel" author))
                        (remove #\Return (cdr (assoc :text post)))))))
        (cdar *json-posts*))

You may find a guide on how to use the Sample Common Lisp Project template for your own project here. The clnet theme is inspired by the CSS used in most of the projects.

Best regards, Daniel

Quicklisp news: April 2016 Quicklisp dist update now available

· 65 days ago
Quicklisp's 67th monthly update is now available!

 Thank you to all the people who signed up for the recurring Supporter Club in the last month. The "big" fundraiser is still in the works, and I'll let you know more when there's more info to share.

New projects:
  • caveman2-widgets — Weblocks like widgets for caveman2. — LLGPL
  • cl-gamepad — Bindings to libstem_gamepad, allowing the handling of gamepad input. — Artistic
  • cl-geos — A CFFI wrapper of GEOS for performing geometric operations in Lisp. — Lisp-LGPL
  • cl-hash-table-destructuring — Hash table destructuring utils — WTFPL
  • cl-statsd — Statsd client in Common Lisp — MIT
  • cl-vhdl — My attempt to understand VHDL, and basicly make VHDL with Lisp-macro — MIT
  • electron-tools — Download, extract, and run Electron binaries. — MIT
  • flare — Easy particle systems with fine grained control. — Artistic
  • inlined-generic-function — MOP implementation of the fast inlinable generic functions dispatched in compile-time — LLGPL
  • liblmdb — Low-level LMDB bindings. — MIT
  • lmdb — Bindings to LMDB. — MIT
  • oclcl — oclcl is a library S-expression to OpenCL C. — LLGPL
  • prbs — A library of higher-order functions for generating Pseudo-Random Binary Sequences of (practically) any degree — MIT
  • — Common Lisp client — MIT
  • random-state — Portable random number generation. — Artistic
  • remote-js — Send JavaScript from Common Lisp to a browser. — MIT
  • sketch — Sketch is a Common Lisp framework for the creation of electronic art, computer graphics, visual design, game making and more. It is inspired by Processing and OpenFrameworks. — MIT
  • tm — Formalized Iteration Library for Common LISP — MIT
  • trivial-compress — Compress a directory. — MIT
  • trivial-string-template — A trivial string template library, inspired by Python's string.Template — MIT
  • trivial-ws — Trivial WebSockets. — MIT
Updated projects: 3d-vectors, alexandria, architecture.service-provider, arrow-macros, asdf-flv, asteroids, binfix, burgled-batteries, ceramic, cl+ssl, cl-ana, cl-async, cl-autowrap, cl-bson, cl-geometry, cl-hash-util, cl-itertools, cl-jpeg, cl-l10n, cl-lexer, cl-llvm, cl-marklogic, cl-mock, cl-mtgnet, cl-mysql, cl-ohm, cl-opengl, cl-opsresearch, cl-pango, cl-rabbit, cl-rethinkdb, cl-sdl2, cl-slug, cl-string-match, cl-strings, cl-tasukete, cl-tetris3d, cl-wordcut, cl-yaclyaml, clack-errors, classimp, clinch, closer-mop, clx, common-doc, common-doc-plump, common-html, commonqt, copy-directory, croatoan, dartsclemailaddress, deeds, defpackage-plus, dissect, docparser, esrap, esrap-liquid, fare-utils, fast-io, form-fiddle, gendl, hyperluminal-mem, jp-numeral, kenzo, lake, lisp-interface-library, local-time, macrodynamics, mcclim, mito, north, osicat, paiprolog, parse-js, path-parse, projectured, qtools, qtools-ui, quickutil, rpm, rtg-math, rutils, safe-queue, scalpl, sdl2kit, serapeum, simple-date-time, simple-tasks, skitter, smug, snark, south, spinneret, staple, stumpwm, sxql, teepeedee2, temporal-functions, trivia, trivial-channels, trivial-extract, usocket, utilities.print-tree, which, workout-timer.

To get this update, use (ql:update-dist "quicklisp").

Michael Malis: How to Generate Self-Referential Programs

· 67 days ago

In this post, I am going to show you how to write programs that are self-referential. By self-referential, I mean programs which are able to obtain their own source code without any external input. In other words, they won’t just read from their own files. This post is based on section 6.1 of the book Introduction to the Theory of Computation.

Before we can start generating self-referential programs we are first going to need some techniques for generating programs in general. The first technique we need is a method of taking a given program and writing a second program that outputs the given program. As an example, given (+ 2 2), we would need to write a program that outputs (+ 2 2). In most languages this is easy. One way to do it in Lisp is to put a quote in front of the program:

'(+ 2 2)
=> (+ 2 2)

We are also going to need a function that automates this process. Such a function would take a program as its argument and return a new program that, when run, outputs the program originally passed to the function. In most languages doing this is fairly tricky. In Lisp, we can write this function easily through backquote:

(defun code-that-generates (program)
  `',program)

(code-that-generates '(+ 2 2))
=> '(+ 2 2)

If you don’t understand how backquote works, you can read this. Even though it’s for Emacs Lisp, everything there is still applicable to other Lisps. Just make sure that you understand that code-that-generates can be used to generate a program that outputs a given program.
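To see why backquote does the job here: `',x expands into a quote wrapped around the value of x, which is exactly a program that outputs that value. A small REPL sketch:

```lisp
;; `',program builds the two-element list (quote <program>),
;; which the printer displays as '<program>.
(let ((program '(+ 2 2)))
  `',program)
=> '(+ 2 2)
```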

Now that we have these two techniques, we can begin writing programs that are able to refer to themselves. The first self-referential program we will write will be an example of a quine. If you don’t know, a quine is a program that outputs its own source code. The quine we are going to write is made up of two parts, part A and part B, where part A is a function that is applied to part B:

(A B)

To describe how the quine works, it is easiest to start with part B. All that part B needs to do is return the source code of part A:

(A 'A)

Part A’s job is to take its own source code, and use it to obtain the source code of the entire quine. Since B is a program that outputs A, A can use code-that-generates on its own source code in order to obtain the source code of B. Once A has the source code of both A and B, it becomes trivial to combine the two to obtain the source code of the entire quine. Here is the complete quine, with the call to code-that-generates inlined:

((lambda (a)
   (let ((b `',a))
     `(,a ,b)))
 '(lambda (a)
    (let ((b `',a))
      `(,a ,b))))
((lambda (a)
   (let ((b `',a))
     `(,a ,b)))
 '(lambda (a)
    (let ((b `',a))
      `(,a ,b))))

Now this is where things start getting interesting. A quine can be thought of as a program that generates its own source code and immediately returns it. What if, instead of immediately returning its own source code, the quine applied a function to it first and then returned the result? The steps for building such a program are almost exactly the same as the steps we took for building the quine. This time, there is a third part F, for the function we want to call. The structure of the program will look like the following:

(F AB)

Where AB has a similar structure to our quine. After breaking AB into the two parts, A and B, the program looks like the following:

(F (A B))

Part B in the above program has the same responsibilities as B in the quine, it returns the source code for A:

(F (A 'A))

Then once A has the source code for itself, it can use code-that-generates to obtain the source code for B. Now that it has the source of A and B, it is easy for it to construct AB. Once part A has the code for AB, it can easily generate the source of the entire program. Here is what the program becomes after filling in everything except F:

 ((lambda (a)
    (let ((b `',a))
      (let ((ab `(,a ,b)))
        `(F ,ab))))
  '(lambda (a)
     (let ((b `',a))
       (let ((ab `(,a ,b)))
         `(F ,ab)))))

What makes this so awesome is that F can be any function we want, and the above program will run F with the source code of the entire program! For example, replacing F with identity causes the program to become a quine:

 ((lambda (a)
    (let ((b `',a))
      (let ((ab `(,a ,b)))
        `(identity ,ab))))
  '(lambda (a)
    (let ((b `',a))
      (let ((ab `(,a ,b)))
        `(identity ,ab)))))
 ((lambda (a)
    (let ((b `',a))
      (let ((ab `(,a ,b)))
        `(identity ,ab))))
  '(lambda (a)
    (let ((b `',a))
      (let ((ab `(,a ,b)))
        `(identity ,ab)))))

But we can also do some much more impressive things. We can replace F with a function that lists its argument twice, and get a program that returns a list containing its own source code twice:

((lambda (x) (list x x))
 ((lambda (a)
    (let ((b `',a))
      (let ((ab `(,a ,b)))
        `((lambda (x) (list x x)) ,ab))))
  '(lambda (a)
     (let ((b `',a))
       (let ((ab `(,a ,b)))
         `((lambda (x) (list x x)) ,ab))))))


(((lambda (x) (list x x))
  ((lambda (a)
     (let ((b `',a))
       (let ((ab `(,a ,b)))
         `((lambda (x) (list x x)) ,ab))))
   '(lambda (a)
      (let ((b `',a))
        (let ((ab `(,a ,b)))
          `((lambda (x) (list x x)) ,ab))))))
 ((lambda (x) (list x x))
  ((lambda (a)
     (let ((b `',a))
       (let ((ab `(,a ,b)))
         `((lambda (x) (list x x)) ,ab))))
   '(lambda (a)
      (let ((b `',a))
        (let ((ab `(,a ,b)))
          `((lambda (x) (list x x)) ,ab)))))))

To make writing these self-referential programs easier, we can define a function that fills in F for us. It just requires a little nested backquote trickery.1

(defun self-referential-version-of (f)
  `((lambda (a)
      (let ((b `',a))
        (let ((ab `(,a ,b)))
          `(,',f ,ab))))
    '(lambda (a)
       (let ((b `',a))
         (let ((ab `(,a ,b)))
           `(,',f ,ab))))))

(self-referential-version-of '(lambda (x) (list x x)))
=>
((lambda (x) (list x x))
 ((lambda (a)
    (let ((b `',a))
      (let ((ab `(,a ,b)))
        `(,'(lambda (x) (list x x)) ,ab))))
  '(lambda (a)
     (let ((b `',a))
       (let ((ab `(,a ,b)))
         `(,'(lambda (x) (list x x)) ,ab))))))

Now that we’ve got a function that can generate self-referential programs for us, I am going to show you how to build something called a quine-relay. A quine-relay is like a normal quine, except it passes through multiple languages. The quine-relay we are going to write is a Lisp program that outputs a C program that outputs the original Lisp program. All we have to do is write a function that takes its argument and writes a C program that prints the argument it was given. Then we can pass that function to self-referential-version-of to get the quine-relay! That’s it! Here is a program that will generate the quine-relay:

(self-referential-version-of
 '(lambda (self)
    (format t
            "#include <stdio.h>~%int main(){printf(\"%s\",~(~s~));}"
            (remove #\newline (prin1-to-string self)))))

I’ve omitted the actual quine-relay for brevity, but you can find it here if you are curious. There are a few idiosyncrasies in the above program and in the quine-relay because of the differences in behavior between Lisp and C. For example, in C you can’t have multi-line strings, so it is easier to remove all of the newlines from the Lisp program than to keep them.

And that’s all it takes to write self-referential programs. After seeing how easy it is to generate a quine-relay, it shouldn’t be hard to imagine how to write one with many more steps. You may even be able to get up to 100 if you work at it long enough.

The post How to Generate Self-Referential Programs appeared first on Macrology.

Wimpie Nortje: Keep Quicklisp and Qlot out of your application binary

· 73 days ago

'How do I create a Lisp application binary without including Quicklisp or Qlot?'

Quicklisp helps you work faster because it knows where to find libraries and Qlot keeps you mostly out of dependency hell. Without these two tools application development would be dead slow. However, applications distributed as standalone binaries should not contain them.

Quicklisp adds unnecessary code and it will likely try to create a ~/quicklisp directory on your user's machine. This can fail for any number of reasons (such as file permissions or lack of internet access) which will only cause headaches.

Qlot also adds unnecessary code and it will cause the same problems as Quicklisp.

The process of building a binary using Buildapp and Quicklisp without actually having them inside the final application is as follows:

  1. Load your project using the normal Quicklisp method, i.e. (ql:quickload).
  2. Export a manifest file which lists the absolute path to every ASDF system currently loaded into the Lisp image. This is done using (ql:write-asdf-manifest-file).
  3. Exit from Lisp.
  4. Use Buildapp to create your binary. The manifest file must be passed as a parameter, --load-system must be used to load your system and Quicklisp must NOT be loaded.
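The four steps above can be sketched like this (the system name :my-app and the manifest path are hypothetical):

```lisp
;; Steps 1-3, run in a fresh Quicklisp-enabled Lisp:
(ql:quickload :my-app)                          ; 1. load the project normally
(ql:write-asdf-manifest-file                    ; 2. record every loaded
 #P"/tmp/my-app-manifest.txt")                  ;    ASDF system's location
;; 3. exit Lisp; then 4. build WITHOUT Quicklisp loaded, e.g.:
;;    buildapp --manifest-file /tmp/my-app-manifest.txt \
;;             --load-system my-app --entry my-app:main --output my-app
```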

@xach explained the procedure in this Stack Overflow answer.

When Qlot is employed in a project, the above process becomes:

  1. Load Qlot.
  2. Load your project using Qlot's method, i.e. (qlot:quickload).
  3. Export the manifest file using (qlot:with-local-quicklisp (ql:write-asdf-manifest-file)).
  4. Exit from Lisp.
  5. Use Buildapp to create your binary.

Since Quicklisp is not aware of Qlot, (ql:write-asdf-manifest-file) must be wrapped with (qlot:with-local-quicklisp) in order to generate paths to the project-local Qlot-managed libraries instead of the system-wide Quicklisp-managed libraries.


This is an excerpt of a makefile from a project which uses Qlot. The makefile is used to create a standalone binary that does not contain Quicklisp or Qlot. The prep-quicklisp.lisp script is used to setup Quicklisp for loading the project system from an arbitrary location.

# CCL Flags for manifest build
MANIFEST_FLAGS =  --no-init 
MANIFEST_FLAGS += --batch 
MANIFEST_FLAGS += --load prep-quicklisp.lisp
MANIFEST_FLAGS += --eval '(ql:quickload :qlot)'
MANIFEST_FLAGS += --eval '(qlot:install :$(QL_SYSTEM))'
MANIFEST_FLAGS += --eval '(qlot:quickload :$(QL_SYSTEM))'
MANIFEST_FLAGS += --eval '(qlot:with-local-quicklisp :$(QL_SYSTEM) (ql:write-asdf-manifest-file \#P"$(MANIFEST)" :if-exists :supersede :exclude-local-projects nil))'
MANIFEST_FLAGS += --eval '(quit)'

# Buildapp settings
B_FLAGS =  --output $(OUTDIR)/$(TARGET)
B_FLAGS += --manifest-file $(MANIFEST)
B_FLAGS += --load-system $(QL_SYSTEM)
B_FLAGS += --entry app:main



Michael Malis: Loops in Lisp Part 4: Series

· 74 days ago

This is part four of Loops in Lisp. Follow one of these links for part one, two, or three.

One of the many advantages of programming in a functional style (by this, I mean manipulating your data through the operations map, fold, and filter) is that your program winds up being made up of a bunch of tiny and composable pieces. Since each piece is so small, usually only a few lines each, it becomes trivial to unit test the entire program. Additionally, it is easy to express new features as just the composition of several existing functions. One disadvantage of programming through map and friends is that there is a fairly large time penalty for allocating the intermediate results. For example, every time filter is called on a list, a new list needs to be allocated. These costs add up pretty quickly and can make a functional program much slower than its imperative equivalent.

One solution to this problem is laziness. Instead of allocating a new list every time an operation is performed on a list, you instead keep track of all of the transformations made on the list. Then when you fold over the list, you perform all of the transformations as you are folding over it. By doing this, you don’t need to allocate intermediate lists. Although laziness doesn’t allocate any intermediate lists, there is still a small cost for keeping track of the laziness. An alternative solution that makes functional programming just as fast as imperative programming is provided by the Series library.1 Series lets you write your program in a functional style without any runtime penalty at all!

Personally, the Series library is my favorite example of the magic that can be pulled off with macros. In short, Series works by taking your functional code and compiling it down into a single loop. In this loop, there is one step per transformation performed on the original list. The loop iterates over the values of the original sequence one at a time. On each iteration, the loop takes a single element, performs all of the transformations performed on the list on that single element, and then accumulates that value into the result according to the folding operation. This loop requires no additional memory allocation at runtime, and there is no time penalty either! As an example, here is a program that sums the first N squares, written using Series:

(defun integers ()
  "Returns a 'series' of all of the natural numbers."
  (declare (optimizable-series-function))
  (scan-range :from 1))

(defun squares ()
  "Returns a 'series' of all of the square numbers."
  (declare (optimizable-series-function))
  (map-fn t
          (lambda (x) (* x x))
          (integers)))

(defun sum-squares (n)
  "Returns the sum of the first N square numbers."
  (collect-sum (subseries (squares) 0 n)))

(sum-squares 10)
=> 385

The above code certainly looks functional: there are no side effects in sight. Now let’s look at the code generated by Series. Here is what the macroexpansion of collect-sum looks like:

(common-lisp:let* ((#:out-969 n))
  (common-lisp:let ((#:numbers-966
                     (coerce-maybe-fold (- 1 1) 'number))
                    (#:items-967 0)
                    (#:index-965 -1)
                    (#:sum-959 0))
    (declare (type number #:numbers-966)
             (type number #:items-967)
             (type (integer -1) #:index-965)
             (type number #:sum-959))
    (tagbody
     #:ll-970
       (setq #:numbers-966
             (+ #:numbers-966
                (coerce-maybe-fold 1 'number)))
       (setq #:items-967
             ((lambda (x) (* x x)) #:numbers-966))
       (incf #:index-965)
       (locally
           (declare (type nonnegative-integer #:index-965))
         (if (>= #:index-965 #:out-969)
             (go end))
         (if (< #:index-965 0)
             (go #:ll-970)))
       (setq #:sum-959 (+ #:sum-959 #:items-967))
       (go #:ll-970)
     end)
    #:sum-959))

What Series does is look at the entire lifetime of the sequence, from its creation until it is folded. It uses this information to build the above loop, which simultaneously generates the original sequence, maps over it, filters elements out of it, and folds it into the final result. Here is the breakdown of the expansion. The first several lines are just initialization. They define all of the variables the loop will be using and set them to their starting values. The important variables to keep track of are #:NUMBERS-966, #:ITEMS-967, and #:SUM-959. As the code “iterates” over the original sequence, #:NUMBERS-966 is the current value of the original sequence, #:ITEMS-967 is the square of that value, and #:SUM-959 is the sum of the squares so far. The rest of the code is the actual loop.

The loop first takes #:NUMBERS-966, the previous value of the sequence, and increments it in order to set it to the current value of the sequence (since the sequence is the range from 1 to infinity). Next, the loop takes the square of #:NUMBERS-966 to get the ith square number and stores that in #:ITEMS-967. Then the loop checks whether it has taken more than N elements out of the sequence, and if so, terminates. Finally, the loop takes the value in #:ITEMS-967 and accumulates it into #:SUM-959.
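
With the gensyms given readable names, the logic of the expansion can be sketched as ordinary Common Lisp (the function name is made up for illustration):

```lisp
(defun sum-squares/expanded (n)
  (let ((numbers 0)  ; current element of the underlying range
        (items 0)    ; square of the current element
        (index -1)   ; zero-based count of elements taken so far
        (sum 0))     ; running total
    (loop
      (incf numbers)                    ; generate the next element
      (setf items (* numbers numbers))  ; map step: square it
      (incf index)
      (when (>= index n)                ; subseries termination test
        (return sum))
      (incf sum items))))               ; fold step: accumulate
```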

The imperative version is equivalent to the original functional code, but it is much faster than the functional code would be if it allocated intermediate results or used laziness. This idea of turning transformations on a list into a loop doesn’t just work for this simple example; it also works for much more complicated programs. I just find it incredible that Series is able to take such pretty code and compile it into code that is extremely fast.
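
To see the same fusion with a filter stage in the pipeline, consider summing the squares of the even numbers. A Series version might read (assuming Series' choose-if, its filtering operation) (collect-sum (map-fn t (lambda (x) (* x x)) (choose-if #'evenp (scan-range :from 1 :upto n)))), and the loop it would fuse into looks like this plain-CL sketch:

```lisp
;; Fused pipeline with a filter stage: generate, filter, map,
;; and fold all happen in one pass with no allocation.
(defun sum-even-squares (n)
  (loop for i from 1 to n
        when (evenp i)   ; filter step, inlined as a conditional
          sum (* i i)))  ; map + fold steps
```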

The post Loops in Lisp Part 4: Series appeared first on Macrology.

LispjobsFRONT-END DEVELOPER (CLOJURESCRIPT), Vital Labs, San Francisco

· 75 days ago


We are looking for a generalist developer with a front-end focus and a strong interest in working with the Clojure ecosystem who can lead or contribute to all aspects of a modern technology stack. Our production systems are built almost entirely in Clojure/Clojurescript with help from Java/Javascript libraries as needed. We leverage Cassandra and Datomic as our data storage infrastructure and orchestrate all these tools with Ansible. We build both our web interfaces and our current mobile interfaces with React. We will be expanding our platforms into native iOS/Android and are exploring new tools such as React Native.

We emphasize simplicity and elegance to combat the inherently complex nature of healthcare. If you are a jack-of-all-trades with a deep interest in building front-end systems that involve complex client-server IO, data visualization, and helping people make sense of machine learning data, then we'd love to talk to you.


  • 4+ years working on front-end systems (desktop or mobile).
  • 2+ years working on production distributed systems projects with strong exposure to the back-end.
  • Experience working with UX and UI designers.
  • Strong knowledge of web standards, cross-browser issues, and CSS.
  • Comfortable with one or more dynamic languages such as Python, Ruby, or Clojure.


  • Comfort working with React or React Native
  • Experience with data visualization or animation in a web context
  • A track record contributing to open source software


  • Experience with the Java/Clojure ecosystem
  • Exposure to Healthcare IT technologies and challenges
  • Exposure to AI – expert systems, statistical modeling, etc.
  • A UX/UI design background and/or interest

Fernando BorrettiUsing LMDB from Common Lisp

· 80 days ago

LMDB is a fast key-value store. This talk is useful for those who want to learn more. The design of LMDB means that many of the things that are standard in other databases - write-ahead logs, and all the filesystem housekeeping necessary to implement concurrent transactions - are unnecessary in LMDB.

The bindings are implemented as two separate libraries: liblmdb, a low-level, autogenerated CFFI binding; and lmdb, the high-level CLOS binding.


Using LMDB requires some work. A simple query requires setting up (and tearing down) a whole stack of objects, namely:

  1. An environment, which is, essentially, a collection of databases. You create the environment object by passing a directory where LMDB will store its data.
  2. A transaction within that environment. All queries have to take place inside a transaction.
  3. A database access object, which is created from a transaction, after the transaction's been started. The database object keeps the name of the database we're accessing within the environment.


LMDB lifecycle diagram

When you have the database object, you can set, retrieve and delete key-value pairs. For more complex operations, you have to use cursors, which add another level of lifecycle management within databases.


Starting by loading LMDB and Alexandria,

CL-USER> (ql:quickload '(:lmdb :alexandria))
To load "lmdb":
  Load 1 ASDF system:
; Loading "lmdb"
To load "alexandria":
  Load 1 ASDF system:
; Loading "alexandria"


We'll store the database in your home directory under lmdb-test/, and use a hardcoded name for the LMDB database:

CL-USER> (defparameter +directory+
           (merge-pathnames #p"lmdb-test/" (user-homedir-pathname)))

CL-USER> (defparameter +db-name+ "mydb")

First, let's abstract away all of the housekeeping:

CL-USER> (defmacro with-db ((db) &body body)
           (alexandria:with-gensyms (env txn)
             `(let ((,env (lmdb:make-environment +directory+)))
                (lmdb:with-environment (,env)
                  (let ((,txn (lmdb:make-transaction ,env)))
                    (lmdb:begin-transaction ,txn)
                    (let ((,db (lmdb:make-database ,txn +db-name+)))
                      (lmdb:with-database (,db)
                        ,@body
                        (lmdb:commit-transaction ,txn))))))))

We can retrieve keys using the get function:

CL-USER> (with-db (db)
           (lmdb:get db #(1)))

Obviously this returns NIL, since we haven't actually set anything. To add or overwrite a key value pair, you use put:

CL-USER> (with-db (db)
           (lmdb:put db #(1) #(1 2 3)))
#(1 2 3)

CL-USER> (with-db (db)
           (lmdb:get db #(1)))
#(1 2 3)

That's better. But raw byte vectors are unwieldy: how can we store actual data?

First, let's get rid of this key/value pair so we can get back to a blank slate. We use the del function for that:

CL-USER> (with-db (db)
           (lmdb:del db #(1)))
T

CL-USER> (with-db (db)
           (lmdb:get db #(1)))

Alright, so, real data. These bindings only handle byte vectors: fancier datatypes are explicitly anti-features. Serialization of more complex data structures to byte vectors should be done by a higher-level library - maybe I'll write a Moneta clone for Common Lisp.

Storing strings is pretty simple; all you need is the trivial-utf-8 library:

CL-USER> (ql:quickload :trivial-utf-8)
To load "trivial-utf-8":
  Load 1 ASDF system:
; Loading "trivial-utf-8"


CL-USER> (defun str->vec (str)
           (trivial-utf-8:string-to-utf-8-bytes str))

CL-USER> (defun vec->str (vec)
           (trivial-utf-8:utf-8-bytes-to-string vec))

Now we can use these helpers like this:

CL-USER> (with-db (db)
           (lmdb:put db (str->vec "Common Lisp")
                        (str->vec "An ANSI-standarized Lisp dialect")))
#(65 110 32 65 78 83 73 45 115 116 97 110 100 97 114 105 122 101 100 32 76 105
  115 112 32 100 105 97 108 101 99 116)

CL-USER> (with-db (db)
           (vec->str (lmdb:get db (str->vec "Common Lisp"))))
"An ANSI-standarized Lisp dialect"

How about integers? We use bit-smasher for that:

CL-USER> (ql:quickload :bit-smasher)
To load "bit-smasher":
  Load 1 ASDF system:
; Loading "bit-smasher"

CL-USER> (defun int->vec (int)
           (bit-smasher:int->octets int))

CL-USER> (defun vec->int (vec)
           (bit-smasher:octets->int vec))

And usage:

CL-USER> (with-db (db)
           (lmdb:put db (str->vec "Common Lisp/age")
                        (int->vec 21)))
#(21)

CL-USER> (with-db (db)
           (vec->int (lmdb:get db (str->vec "Common Lisp/age"))))
21

This works with Common Lisp's arbitrary-precision integers, as well. Let's try ten to the three hundredth power1:

CL-USER> (expt 10 300)
1e300

CL-USER> (integer-length *)
997

Nine hundred and ninety-seven bits is larger than the average machine word, and will be until we start dismantling planets into computers2. Let's see how it works:

CL-USER> (with-db (db)
           (lmdb:put db (str->vec "big integer")
                        (int->vec (expt 10 300))))
#(23 228 60 136 0 117 155 165 156 8 225 76 124 215 170 216 106 74 69 129 9 249
  28 33 197 113 219 232 77 82 217 54 244 74 190 138 61 91 72 193 0 149 157 157
  11 108 200 86 179 173 201 59 103 174 168 248 224 103 210 200 208 75 193 119
  247 180 40 122 110 63 205 163 111 163 179 52 46 174 180 66 225 93 69 9 82 244
  221 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
  0 0)

CL-USER> (with-db (db)
           (vec->int (lmdb:get db (str->vec "big integer"))))

So all of this is fine, but what if we don't know the contents of the database? That's what cursors are for, but we don't need to deal with them directly, because the lmdb:do-pairs macro abstracts them:

CL-USER> (with-db (db)
           (lmdb:do-pairs (db key value)
             (format t "~A: ~A~%~%" key value)))
#(67 111 109 109 111 110 32 76 105 115 112): #(65 110 32 65 78 83 73 45 115 116
                                               97 110 100 97 114 105 122 101
                                               100 32 76 105 115 112 32 100 105
                                               97 108 101 99 116)

#(67 111 109 109 111 110 32 76 105 115 112 47 97 103 101): #(21)

#(98 105 103 32 105 110 116 101 103 101 114): #(23 228 60 136 0 117 155 165 156
                                                8 225 76 124 215 170 216 106 74
                                                69 129 9 249 28 33 197 113 219
                                                232 77 82 217 54 244 74 190 138
                                                61 91 72 193 0 149 157 157 11
                                                108 200 86 179 173 201 59 103
                                                174 168 248 224 103 210 200 208
                                                75 193 119 247 180 40 122 110
                                                63 205 163 111 163 179 52 46
                                                174 180 66 225 93 69 9 82 244
                                                221 16 0 0 0 0 0 0 0 0 0 0 0 0
                                                0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
                                                0 0 0 0 0 0 0 0 0)


This is not very informative, since these are just byte vectors.
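
Since every key in this example database happens to be a UTF-8 string, a slightly friendlier listing can reuse the vec->str helper from above to decode the keys (the values are left raw, since some are strings and some are integers):

```lisp
CL-USER> (with-db (db)
           (lmdb:do-pairs (db key value)
             ;; Decode only the key; the value's type isn't stored.
             (format t "~A: ~A~%~%" (vec->str key) value)))
```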

Finally, since we don't want to keep the database directory around, delete it:

CL-USER> (uiop:delete-directory-tree +directory+ :validate t)


  1. I replaced the actual integer representation from the REPL with 1e300 for brevity.

  2. A 1024-bit wide machine word is probably overkill even then.

Wimpie NortjeExplaining version priority handling in Qlot.

· 80 days ago

How do I specify a library version that is newer than the specified Quicklisp distribution?

While struggling to match a database, Linux libraries, database drivers, and all the correct Quicklisp library versions, the question I wrestled with most was 'How do I specify a library version that is newer than the specified Quicklisp distribution?'

The Qlot documentation says about library priorities: 'If multiple distributions provide the same library, lower one would take priority over higher ones.'

This can be interpreted in more than one way, which can lead to confusion. The correct interpretation is that the 'lower one' in the documentation refers to the order of the declarations in the qlfile.

To explain Qlot's priority handling more clearly: the order of the declarations in the qlfile determines the priority. When multiple distributions provide the same library, later statements override the earlier ones.

The following two examples demonstrate Qlot's priority handling.

Using Clack from the latest Quicklisp distribution.

ql :all 2014-01-13
ql clack :latest

Using Clack from the 2014-01-13 distribution.

ql clack :latest
ql :all 2014-01-13
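
For a library version newer than any Quicklisp release, the same ordering rule applies with Qlot's other source types: pin the distribution first, then override the single library afterwards. A sketch using Qlot's git source (Clack's repository is used purely as an example):

```
ql :all 2014-01-13
git clack https://github.com/fukamachi/clack.git
```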

For older items, see the Planet Lisp Archives.

Last updated: 2016-06-15 00:00