Planet Lisp

Zach Beane: Quicklisp fundraiser update

· 40 hours ago

The Common Lisp Foundation Quicklisp fundraiser hit its initial goal of $6,000 after just a few hours. There were many donors, and a few very generous donations put it over the top. I am so surprised and grateful that it happened so quickly.

By exceeding $6,000, contributions are no longer doubled with matching funds. But the fundraiser is still going to continue to December 31st as scheduled, and everything raised until then is important and appreciated.

As I write this, contributions are near $9,700. With the matching funds, it’s nearly $15,700. How high can it go?

Thank you for supporting Quicklisp!

Zach Beane: The Quicklisp fundraiser is now up and running

· 2 days ago

The Common Lisp Foundation’s Quicklisp appreciation fundraiser is now up and running. I made a brief video to appeal for your support. Here’s the transcript:

Hi, I’m Zach Beane and I’m asking for your support for Quicklisp.

I created Quicklisp in 2010. It’s easy to install and provides nearly fifteen hundred Common Lisp community libraries with just a few commands.

I keep things updated every month, and work with the community to get new projects added and make sure existing projects keep working. I check builds, I look for problems, and I’ve filed hundreds of bug reports to keep those problems from affecting Quicklisp users.

The vast majority of this work has been done on a volunteer basis, in my free time.

But today, the Common Lisp Foundation has started an appreciation fundraiser for Quicklisp, running until December thirty-first.

I hope to use the proceeds from the fundraiser for a few months of dedicated work on Quicklisp. In particular, I want to address the main issues that have kept it in beta all this time, starting with security and documentation.

If Quicklisp has made your life easier - whether it’s to get started as a new Common Lisp user, or to easily get all the dependencies you need in a large commercial project, or if it just helps you hack on a hobby or homework project - please consider visiting for a contribution.

Thank you!

Quicklisp news: December 2016 Quicklisp dist update now available

· 2 days ago
Note: The Quicklisp fundraiser is up and running. If you appreciate Quicklisp, please contribute if you can.

New projects:
  • cl-digraph — Simple directed graphs for Common Lisp. — MIT/X11
  • cl-directed-graph — Directed graph data structure — MIT
  • format-string-builder — A DSL wrapping cl:format's syntax with something more lispy. — MIT
  • hunchentools — Hunchentoot utility library — MIT
  • l-system — L-system or Lindenmayer system on lists — GPLv3+
  • parser.common-rules — Provides common parsing rules that are useful in many grammars. — MIT
  • parser.ini — Provides parsing of Ini expressions. — LLGPLv3
  • postmodernity — Utility library for the Common Lisp Postmodern library — MIT
  • stl — Load triangle data from binary stereolithography (STL) files. — ISC
  • whofields — HTML field rendering and input validation utilities written in Common Lisp — MIT
Updated projects: 3bmd, alexa, alexandria, architecture.builder-protocol, architecture.hooks, architecture.service-provider, asdf-dependency-grovel, beast, carrier, caveman2-widgets, cells, cepl, cepl.drm-gbm, cepl.sdl2, cepl.skitter, circular-streams, cl+ssl, cl-ana, cl-autowrap, cl-bootstrap, cl-ca, cl-change-case, cl-dbi, cl-dot, cl-drm, cl-gists, cl-growl, cl-jpeg, cl-kanren, cl-l10n, cl-libyaml, cl-mediawiki, cl-mpg123, cl-opengl, cl-out123, cl-pango, cl-portaudio, cl-pslib, cl-quickcheck, cl-sdl2, clack, clavier, clfswm, clinch, cltcl, coleslaw, collectors, croatoan, dbus, dendrite, djula, documentation-utils, dyna, eazy-gnuplot, esrap, fare-scripts, formlets, gbbopen, gendl, geneva, gsll, gtk-cffi, http-body, hu.dwim.computed-class, hu.dwim.def, hu.dwim.presentation, hu.dwim.serializer, hu.dwim.syntax-sugar, hyperluminal-mem, inquisitor, js, kenzo, lack, lass, legit, mcclim, metabang-bind, metering, mgl, mito, modularize, modularize-interfaces, neo4cl, nibbles, ningle, oclcl, opticl, pgloader, png-read, postmodern, protobuf, psychiq, qlot, qt-libs, qtools, quickutil, quri, ratify, retrospectiff, rutils, serapeum, skitter, spinneret, staple, stumpwm, sxql, trivia, trivial-documentation, trivial-features, trivial-nntp, trivial-rfc-1123, trivial-yenc, ubiquitous, ufo, utilities.print-tree, utility-arguments, utils-kt, varjo, websocket-driver, what3words, wookie, workout-timer, xml.location.

Removed projects: cl-xspf, date-calc, elephant, html-entities, lambda-gtk, perfpiece, quicksearch, usocket-udp.

To get this update, use (ql:update-dist "quicklisp"). Enjoy!

Eugene Zaikonnikov: PJSUA support in CL-PJSIP

· 3 days ago

After a bit of tweaking, CL-PJSIP now supports the basic PJSUA API. PJSUA aggregates much of PJSIP's functionality in a handful of structures and protocol methods. This simplifies the application side a lot: one can get by with just a few lines of setup code and a couple of callbacks. From a Lisp perspective it also reduces the FFI surface used to a stable, generic interface, which ought to improve long-term compatibility with PJSIP's own revisions.

See cl-pjsua-demo.lisp in the library for a short sample. Load the demo system and try (cl-pjsip::run-pjsua "") for a quick test against a voice menu directory. It was tested on Linux x86-64 with CCL 1.11 and Allegro 10.1 beta. On SBCL, however, it eventually crashes with a floating point exception.

Zach Beane: Erlangen intro

· 6 days ago

Max Rottenkolber made a distributed, async message-passing system. Here’s his introduction to Erlangen.

McCLIM: Progress report #4

· 9 days ago

Dear Community,

During this iteration I have continued to work on the tutorial, improving documentation, working on issues and assuring CLIM II specification compatibility.

The most notable change is that the argument type in define-command is no longer evaluated (though a quoted type still works, for backward compatibility). I've also started some refactoring of the frames module implementation.

The tutorial work takes some time because I try to fix bugs as I encounter them, to make the walkthrough as flawless as possible. While I'm not entirely satisfied with the tutorial's progress and its current shape, this work has the benefit of improving both the code and the documentation.

The documentation chapter named "Demos and applications" has been updated to reflect the current state of the code base, with some additional clarifications about pane order and pane names. I've updated the website to list the external tutorials and to include the Guided Tour in the Resources section. The manual has been updated as well.

The rest of the time was spent on peer review of the contributions, merging pull requests, development discussions, questions on IRC and other maintenance tasks.

Alessandro Serra has created a Raster Image Backend, similar to the PostScript backend but with PNG as its output. See "Drawing Tests" in clim-examples and the chapter "Raster Image backend" in the Manual.

A detailed report is available at:

If you have any questions, doubts or suggestions, please contact me via email or on IRC (my nick is jackdaniel).

Sincerely yours,
Daniel Kochmański

Eugene Zaikonnikov: Announcing CL-PJSIP

· 12 days ago

I am pleased to announce CL-PJSIP, a Common Lisp wrapper for PJSIP, a popular multimedia communications library.

CL-PJSIP so far supports a limited subset of PJSIP functionality, but one sufficient to implement a simple SIP telephony user agent. Coverage will gradually expand from there. At the moment, the focus is on moving beyond alpha quality (there are scary amounts of FFI in there) and implementing Lisp-idiomatic handling of PJSIP components.

There is a certain learning curve involved in using PJSIP, and it's worth starting with the included cl-pjsip-ua.lisp. This is a near verbatim copy of PJSIP's own simpleua.c.

The application runs here (disclaimers apply) on CCL 1.11 on Linux x86-64, although it does seem to run on recent SBCL as well (albeit not really tested). There are a couple of +DARWIN conditionals in the code, inferred from the PJSIP header files, but it was not tested on macOS at all.

Hans Hübner

· 12 days ago

Berlin Lispers Meetup: Tuesday November 29th, 2016, 8.00pm

You are kindly invited to the next "Berlin Lispers Meetup", an informal gathering for anyone interested in Lisp, beer or coffee:

Berlin Lispers Meetup
Tuesday, November 29th, 2016
8pm onwards

in November *new location*

xHain hack+makespace
Grünberger Strasse 14

U5 Frankfurter Tor
S Warschauer Str.
Tram M10, M13
Bus 240, N40
(location via OSM)

In case you don't find us,
please contact Christian: 0157 87 05 16 14.

Please join for another evening of parentheses!

Lispjobs: Common Lisp developer, m-creations, Mainz, Germany

· 20 days ago

Full time position for German speaking Common Lisp developer near
Frankfurt, Germany.

We are a small German software shop based in Mainz, Germany, founded in
2000. We create custom software solutions for mid-size to big companies
in finance/payment, health care, and media research.

Our portfolio includes high volume ingestion of different social and
classic (print/tv/radio) media streams into NoSQL databases,
dictionary-based natural language processing, fast user-facing search
solutions, and visualisation of results.

To expand the portfolio in the areas of natural language processing and
machine learning, we are looking for talented engineers who ideally have

– 3+ years of software engineering experience

– a solid background in Java/C#/C++

– experience in Common Lisp (not necessarily professional)

– experience in one or more of the following tools/frameworks: CouchDB,
ElasticSearch, Cassandra, Kafka, Mesos/Marathon, Docker

– experience in development of microservices

– experience or strong interest in different machine learning
methodologies such as neural networks, Bayesian networks, support
vector machines etc.

Experience with languages and frameworks is not as important as
curiosity, intelligence and open-mindedness. You will get the necessary
time to learn the missing skills. We are interested in a long-term
relationship rather than just staffing a project with ‘resources’.

As a small company, we care about each of our colleagues and react
flexibly to the (sometimes changing) necessities of their lives.
Together we try to develop a plan for your personal career, depending
on your own goals.
Curious? Please contact Kambiz Darabi. He'll be happy to give you more
information and answer all your questions.
m-creations gmbh
Acker 2
55116 Mainz

ECL News: ECL Quarterly Volume V

· 33 days ago

1 Preface

Dear Readers,

I'm very pleased to present the fifth volume of the ECL Quarterly.

This issue is focused on software development. Tianrui Niu (Lexicall) has written a tutorial on how to embed ECL in Qt 5.7. He explains how to use ECL with C++ software and how to set up a comfortable development environment for it. Next we have the first part of a tutorial on creating a Common Lisp application from scratch. I hope you enjoy it. Finally, there is some practical, opinionated advice about including all dependencies in software definitions.

ECL slowly moves towards the next release, version 16.1.3. Numerous bugs were fixed and some minor functions were introduced. We're currently working on simplifying the autoconf/Makefile files, and the configure options were also cleaned up. This may cause some regressions in the build system.

Some code base cleanup was performed: removal of obsolete code (which wasn't compiled at all), a refactoring of the testing framework, and bringing the man page up to date. One-dash long flags are deprecated in favour of two-dash alternatives. We've fixed ECL support in upstream portable CLX, so there is no longer any reason to bundle our own separate version; the bundled module has been purged. McCLIM now works with ECL out of the box.

We've started to implement CDRs (right now CDR1, CDR5, CDR7 and CDR14 are present in ECL, with CDR9 and CDR10 on their way). Work on the new documentation continues.

See the CHANGELOG for more details. I hope to release next version before the new year.

If you want to discuss some topic with the developers, please join the channel #ecl on Freenode. If you are interested in supporting ECL development please report issues, contribute new code, write documentation or just hang around with us on IRC. Writing about ECL also increases its mind-share - please do so! Preferably in ECL Quarterly. If you want to support us financially (either ECL and ECL Quarterly), please consider contributing on Bountysource.

Enjoy! And don't forget to leave the feedback at

Daniel Kochmański ;; aka jackdaniel | TurtleWare
Przemyśl, Poland
November 2016

2 Embedding ECL in Qt5.7

2.1 Introduction

ECL is a fantastic ANSI Common Lisp implementation designed for embedding. It allows us to build the Lisp runtime into C/C++ code as an external library. That means you can both call C code from the Lisp runtime and call Lisp code from the C/C++ side. In this article we focus on how to call Lisp procedures from C/C++; you can also achieve the reverse by inlining C/C++ code in ECL, but that's beyond our discussion here. Generally, this one-sided integration is enough to give us the full power of both Lisp and C/C++.

This article shows how you can embed ECL into a Qt project, where it serves as the kernel of the program. I hope it can serve as a common example, or tutorial, for anyone who wants to know about the true power of ECL. At the least, it shows that the ability to hybridize is what makes ECL different from other implementations. And if one day you need that power, you'll know where to head.

2.2 But why?

I know I should get to the code and demo quickly, but let the theory come first, for those who don't understand why we are doing this.

What we are doing is an instance of mixed-language programming: you write code in different languages and glue it together into a single program. Usually the resulting program looks like a peanut, with the kernel written in language A and the shell written in language B. The most frequent combination is C++ and Lua, in game programming. This lets us take advantage of both languages. In our case we are hybridizing C++ and Common Lisp, both aimed at building large software systems, and we stand to gain even more benefits.

Common Lisp is a masterpiece, but it lacks a good GUI toolkit. If you work on Linux you can certainly try packages like CL-GTK or CommonQt, and someone will suggest EQL, the Qt4 binding for ECL. But all of these GUI tools look coarse on my Mac with its Retina screen. I don't want to spend my scholarship on LispWorks, so I had to find another way to use Lisp in GUI programming, and I ended up here. The hybrid is no compromise, since I can develop a fancy GUI without losing the power of Lisp.

There are more benefits:

  1. Live hotpatching. Common Lisp is a dynamic language, so it allows you to add new components at runtime. That means you can upgrade your program without recompiling or even restarting. In more advanced setups you may even recompile your C/C++ functions, so embedded ECL can even change the host-language side.
  2. With its powerful macro system, Lisp is natively suited to designing complex systems. Unlike lightweight languages such as Python and Lua, Lisp is suitable for huge programs, so you can stay focused on Lisp instead of reaching for help from language B. You can also use the Lisp environment as a direct DSL interpreter, so you needn't write one yourself.
  3. Efficiency. There are other ways to combine a Lisp runtime with other languages, such as pipes, channels, sockets or temporary files. These are never elegant solutions: you can only read the Lisp runtime's output manually, there is no direct access to the Lisp runtime's memory, and you have to string-parse each piece of Lisp output. That approach is neither quick nor efficient. You may also use an FFI (foreign function interface), which is more common in the reverse direction, i.e. calling C from Lisp. The ECL approach is the saving grace: the Lisp runtime in ECL shares the same memory with the C/C++ side, and there is a direct way to fetch the return value of any Lisp function call. How is this magic achieved? ECL compiles Lisp to C code or bytecode, so the two sides speak the same tongue.
  4. Stable and mature. ECL is currently the best choice for embedding. You may have heard of other implementations like Clasp, which works on LLVM and is compatible with C++, but it is not yet stable or ANSI-compatible. Meanwhile ECL has a long history and is ready to use. When built with a C++ compiler like g++ (flag --with-cxx), ECL also lets us call C++ functions. So stick with ECL.

I hope this convinces you that this is a promising hybrid.

2.3 General Approach

The embedding process for ECL is pretty simple once you understand how it works. Unfortunately the official ECL documentation of this part is not very clear at the moment, but there is some example code in the example/embed directory. Thanks to Daniel Kochmański, who helped me through to my first successful hybrid. I'm still a newbie here.

The example code is enough to understand the process of hybridizing Lisp and C code with ECL. There is a general approach, and you can use it as a pattern in your own development.

ECL achieves this by compiling Lisp code into a C library and linking it into the C runtime. There are two ways to go: a static library or a shared library. In this article we take the first approach. For embedding, there are a few steps:

  1. Write your Lisp files. (absolutely)
  2. Compile your Lisp files.
  3. Write a C/C++ file that includes the ECL library and boots the CL environment.
  4. Link the runtime and compile the whole project into executables.

Easy enough, isn't it? Let me explain in detail.

The first step is no different from general Lisp development. You can either create your own package or not (just leave the naked Lisp file).

The second step is where ECL gets to rock. There are two approaches, depending on whether or not you use ASDF. If you do not want to use it, you may follow this code:

(compile-file "<YOUR-LISP-FILE>.lisp" :system-p t)
(c:build-static-library "<LIBRARY-NAME>"
                        :lisp-files '("<THE-OBJECT-FILE>.o")
                        :init-name "<INIT-NAME>")

The first line compiles your .lisp file into a .o object file, say <THE-OBJECT-FILE>.o. This file serves as the input for the next step. The c:build-static-library function is defined by ECL; it builds the object file into a static library, say <LIBRARY-NAME>.a. Pay attention to :init-name: you can define it here as a string, and it matters for step 3. We will come back to it then.

If you choose to use ASDF, you can head for the asdf:make-build function. This can be seen in the Makefile in the example:

hello.exe: hello.c hello-lisp.a
        $(CC) `ecl-config --cflags` -o $@ hello.c hello-lisp.a \
              `ecl-config --ldflags` -lecl

hello-lisp.a: hello-lisp.lisp
        ecl -norc \
            -eval '(require :asdf)' \
            -eval '(push "./" asdf:*central-registry*)' \
            -eval '(asdf:make-build :hello-lisp :type :static-library :move-here "./")' \
            -eval '(quit)'

clean:
        -rm -f hello-lisp.a hello.exe

And you may use asdf:defsystem in your Lisp code. We will look at this more closely in my demo.

In the third step, we must dance with some C/C++ code. In the .c file where you want the ECL environment to run, #include <ecl/ecl.h> to make sure all the ECL symbols are linked. Then write some simple code to boot the environment:

/* Initialize ECL */
cl_boot(argc, argv);

/* Initialize the library we linked in. Each library
 * has to be initialized. It is best if all libraries
 * are joined using ASDF:MAKE-BUILD. */
extern void init_lib_HELLO_LISP(cl_object);
ecl_init_module(NULL, init_lib_HELLO_LISP);

The cl_boot procedure boots the CL environment; it takes the usual arguments from your main entry point. Now look at the extern declaration. Remember the :init-name from step 2: if you took the first approach of building the library and defined your own :init-name, the function name must match it. If you didn't define an init-name, the naming convention is init_lib_<FILE_NAME_OF_THE_LIBRARY>. Say the static library is named "hello-world--all-systems.a"; then you write init_lib_HELLO_WORLD__ALL_SYSTEMS as the function name.

Notice: In C++, you should wrap the extern declaration in an extern "C" block:

extern "C" {
extern void init_lib_HELLO_LISP(cl_object);
}

This completes the linking. It has to do with the naming conventions that differ between C and C++: in general ECL exports symbols following the C naming convention, to allow seamless FFI from C and other languages, while C++ does some weird name mangling. So if you want to call C functions from C++, you have to declare them that way. The declared function is used by ecl_init_module to load all of your user-defined Lisp symbols. Then you are free to call your Lisp code from C/C++.

The fourth step builds the whole project. It requires all of your C/C++ files, your libraries, and the ECL library. All of this is easily done if you are familiar with Makefiles; see the example above.

2.4 Calling Lisp in C/C++

"How can I call the Lisp functions I wrote?" is probably your most urgent question. The ECL manual describes most of the functions in the Standards chapter. Most Lisp functions and macros have been mapped to C functions following a naming convention: for example, cl_eval corresponds to the Lisp function eval. Most ANSI-defined procedures use the cl_ prefix, so you can easily find the primitive you need.

But perhaps the questions that concern you most are:

  1. How can I call MY Lisp functions in C/C++?
  2. How can I translate the return value into C/C++ objects?

For the first question I suggest cl_eval, because it is simple and extensible. For safety reasons you could use cl_funcall or cl_safe_eval, but neither is as universal as cl_eval: cl_funcall, as its name suggests, can only call functions, not macros, and cl_safe_eval requires more parameters in order to handle potential errors on the Lisp side. Since I don't mean to make this code production-ready, I won't worry about safety or convenience here. I wrote a friendlier version of cl_eval, so you can call Lisp code like this:

cl_eval("mapcar", list_foo, "(lambda (x) (princ x))");

And that's nearly Lisp code in appearance.

So let's head for the cl_eval. Its signature is:

cl_object cl_eval(cl_object);

It receives a cl_object and returns a cl_object. Before wondering what cl_object is, you should learn how ECL represents Common Lisp objects.

Quite simple: ECL encapsulates every Lisp object in the same structure, cl_object. It's a C union whose definition can be seen in object.h, line 1011. So you don't need different types to capture return values.

Translating C strings to cl_object is trivial: use the c_string_to_object function:

cl_object c_string_to_object (const char * s)

You just write the Lisp form as a C string and the function creates the Lisp object for you. So you may write

cl_eval(c_string_to_object("(princ \"hello world\")"));

to get your first hybrid call.

The second question can be a little tough due to lack of documentation. And there's another naming convention.

Generally, you may use the ecl_to_* family to convert a cl_object to primitive C data; here are some common examples:

char ecl_to_char(cl_object x);
int ecl_to_int(cl_object x);
double ecl_to_double(cl_object x);

As said, these functions only convert cl_object to primitive C data: no arrays, and no strings. The ECL API doesn't provide those conversions officially, so we have to implement them manually, sorry to say. (If I missed something, correct me.)

Here are two trivial yet useful functions that may help you. The first lets you traverse a Lisp list:

auto cl_list_traverse = [](auto &cl_lst, auto fn){
    /* walk the list, calling fn on each element (body reconstructed) */
    for (cl_object l = cl_lst; !Null(l); l = cl_cdr(l))
        fn(cl_car(l));
};

This is implemented in C++ using the convenience of the C++14 standard. It can be rewritten in C like this:

void cl_list_traverse(cl_object cl_lst, void (*fn)(cl_object)){
    for (cl_object l = cl_lst; !Null(l); l = cl_cdr(l))
        fn(cl_car(l));
}

Usage example:

void print_element(cl_object obj){
    printf("%d\n", ecl_to_int(obj));
}
cl_list_traverse(foo_list, print_element);

And the second one converts a cl_object into a C++ std::string.

std::string to_std_string(cl_object obj){
    std::string val;
    auto &str = obj->string;
    for (unsigned long i = 0; i < str.fillp; i += 1)
        val += (char)str.self[i];   /* assumes a plain character string */
    return val;
}
When using these functions to convert a cl_object to C/C++ objects, you have to know exactly what the value is. That means, if you call ecl_to_int on a cl_object, you should be sure that the cl_object IS an integer. In more complicated situations, a cl_object can contain more than one type at once: if you call a function that returns a list of strings, say '("hello" "lisp"), then the corresponding cl_object contains both a string (in its car position) and a list (in its cdr position). Call cl_car and you get a cl_object containing a string, and you can call to_std_string on that object to get a C++ string. You should figure this out before you code. The secret is to think as if you were still in Lisp.

2.5 Hybridizing Lisp & Qt

Now it's time to head for our ultimate goal: letting Lisp rock with Qt! We gained enough knowledge of embedding ECL into C++ code in the previous sections, and Qt is nothing but C++, so the work should be trivial. That sounds true, but there are still many things to solve. I struggled a lot with them, but now I can just write down the final result and pretend it was simple.

The first one is how to build a whole Lisp package system, instead of compiling a naked Lisp file or a single package.

2.5.1 Build Lisp Package System

If you are building serious software, you will be using external packages; after all, there are plenty of excellent Lisp packages, like cl-ppcre and lparallel. Quicklisp solves the package-management problem in an elegant way. But when you decide to distribute your software, you shouldn't ask Quicklisp for help: instead, you should compile all of your dependencies into your Lisp runtime, so that you can load them all with a single statement. SBCL can dump the current Lisp image into a single executable file with sb-ext:save-lisp-and-die; we need a function that does something similar here in ECL.

ASDF is here to help. You can make an ASDF system that defines all the files and dependencies in your project. If you haven't touched ASDF yet, see the tutorial.

After that, there is just one step left: build the system into a library with asdf:make-build. Here is an example:

(require 'asdf)
(push "./" asdf:*central-registry*)
(asdf:make-build :hello-lisp-system
                 :type :static-library
                 :monolithic t
                 :move-here "./")

The push expression adds the current working directory to the ASDF search list, so that ASDF can find your user-defined system in your directory.

If you have external Lisp packages as dependencies, you must set the :monolithic parameter to T. That tells ASDF to build your whole system into a single file; otherwise you would have to load your dependencies manually each time you start your Lisp runtime.

Unfortunately, I have to say the function is not yet ready for building static libraries that contain Lisp package dependencies: a serious bug prevents the library from linking, so the example code shown above won't work! Perhaps it will work in the future. :)

Don't be afraid. There are still two other approaches: building a fasl file or a shared library.

I'll take the first approach, since it brings a good advantage: it allows us to distribute the Lisp system independently. You can then debug either natively in ECL, by loading the fasl file, or remotely on the C/C++ side. Sometimes you need this, because you don't know which side, C or Lisp, is crashing your program.

So I have to build two different Lisp systems. The first one serves as the Lisp runtime and is built into a static library. It contains just one line of Lisp code.

(princ "Lisp Environment Settled.")

This library will be linked into my C++ program. The second one is the actual system I wrote; I'm building it into an independent fasb file.

(require 'asdf)
(push "./" asdf:*central-registry*)

(asdf:make-build :hello-lisp-system
                 :type :fasl
                 :monolithic t
                 :move-here "./")

After running this code I will see a hello-lisp-system--all-systems.fasb file in my directory. In order to use the system, I have to load that fasl file at runtime, so the init code should be:

/* init-name */
#define __cl_init_name init_lib_LISP_ENVI

extern "C" {
  extern void __cl_init_name(cl_object);
}

void init_cl_env(int argc, char *argv[]){
  /* Initialize CL environment */
  cl_boot(argc, argv);
  ecl_init_module(NULL, __cl_init_name);
  /* load fasb */
  cl_eval("load", CL_MAIN_FASB);
  /* set context to current package */
  cl_eval("in-package", CL_MAIN_PACKAGE_NAME);
  /* hook for shutting down cl env */
}

#undef __cl_init_name

There is also a function called cl_load, you may use it to load the bundle:

Signature: cl_object cl_load(cl_narg narg, cl_object pathname, ...);
Usage: cl_load(1, c_string_to_object("\"./lisp_image.fasb\""));

Notice: When you are using the Lisp runtime, you are in the :top context.

Notice: The cl_eval function I used is the optimized, or overloaded, version which I will introduce in the next section. If you stick to the original version, you have to convert the C string to a cl_object manually, like:

cl_eval(c_string_to_object("(load \"./lisp_image.fasb\")"));
2.5.2 Enhance ECL Bridge In C++14

ECL is written in pure C, and as a result it lacks a real object type for Lisp data. The cl_object structure unions the Lisp data types together, but it has no methods; the utility functions are just free functions. You have to write ecl_to_int(obj) to convert an object to an int, when it would be friendlier to write obj.to_int(). So we are going to wrap the original cl_object in a C++ class to get there.

class cl_obj {
    cl_object __obj;

public:
    cl_obj(cl_object &&obj){this->__obj=obj;}
    cl_obj(const cl_object &obj){this->__obj=obj;}

    /* list indexing */
    inline cl_obj car(){return cl_obj(cl_car(this->__obj));}
    inline cl_obj cdr(){return cl_obj(cl_cdr(this->__obj));}
    inline cl_obj cadr(){return this->cdr().car();}
    inline cl_obj caar(){return this->car().car();}
    inline cl_obj cddr(){return this->cdr().cdr();}

    /* predicates */
    inline bool nullp(){return Null(this->__obj);}
    inline bool atomp(){return ECL_ATOM(this->__obj);}
    inline bool listp(){return ECL_LISTP(this->__obj);}
    inline bool symbolp(){return ECL_SYMBOLP(this->__obj);}

    /* conversions */
    inline int to_int(){return ecl_to_int(this->__obj);}
    inline char to_char(){return ecl_to_char(this->__obj);}

    inline std::string to_std_string(){
        std::string val;
        auto &str = this->__obj->string;
        for (unsigned long i = 0; i < str.fillp; i += 1)
            val += (char)str.self[i];
        return val;
    }

    template<typename function>
    inline void list_traverse(function fn){cl_list_traverse(this->__obj, fn);}

    inline cl_obj &operator=(cl_object &&obj){this->__obj = obj; return *this;}
};

It's a trivial wrapper and covers only a small subset of ANSI Common Lisp, but it's enough for our demo. With it you can write something like obj.cdr().car().to_int(), which is a more fluent interface.

Besides that, the original cl_eval function is not friendly enough. We are going to implement a better one, so that you can call it almost as if you were in Lisp. See the overloads:

using std::string;

cl_object lispfy(string str){
    return c_string_to_object(str.c_str());
}
string __spc_expr(string first){
    return first;
}
template <typename ...str>
string __spc_expr (string first, str ... next){
    return first+" "+__spc_expr(next...);
}
template<typename ...str>
string par_expr(str... all){
    return "("+__spc_expr(all...)+")";
}
template<typename ...str>
string par_list(str... all){
    return "'"+par_expr(all...);
}
template<typename ...str>
cl_object cl_eval(str... all){
    return cl_eval(lispfy(par_expr(all...)));
}

Now you can call that cl_eval function like:

cl_eval("mapcar", "(lambda (x) (evenp x))", "'(1 2 3 4 5)");

This code compiles when you pass -std=c++14 to your compiler.

2.5.3 Time to Hybridize!

With the knowledge from the previous section, it's now easy to use ECL in Qt programming. You just have to apply a few small modifications and tips.

The source code of the demo shown here can be found here.

First you should learn a little about qmake. It's a build tool that helps us build our program. This time we needn't write a Makefile manually, since qmake is quite easy to use. Check your .pro file and add these lines to it:

QMAKE_CFLAGS += `ecl-config --cflags`
QMAKE_CXXFLAGS += `ecl-config --cflags`
QMAKE_LFLAGS += `ecl-config --ldflags` 
LIBS += -lecl

ecl-config generates the flags for your compiler. And since I used C++14, I also have to add:

CONFIG += c++14

We should also apply a small trick. Because Qt defines the macro slots as a keyword, it conflicts with the slots identifier used in ecl.h. So we have to undefine that macro to remove the interference:

#ifdef slots
#undef slots
#endif
#include <ecl/ecl.h>

Now you can check out my demo. It looks like this:



It's simple but enough to serve as a demo. The Lisp code of the Fibonacci demo is based on lparallel, a package for parallel programming.

(defpackage :hello-lisp
  (:use :cl :lparallel))

(in-package :hello-lisp) ;;package name hello-lisp

(setf lparallel:*kernel* (lparallel:make-kernel 4))

(lparallel:defpun pfib (n)
  (if (< n 2)
      n
      (plet ((a (pfib (- n 1)))
             (b (pfib (- n 2))))
        (+ a b))))

You see, that's parallel computation! This function should make use of all four of my CPU cores. This demo shows how to use external Lisp packages from ECL.

The second demo is Quicksort. It sorts the list you pass and prints the result on the output line. This one demonstrates how to load and traverse a Lisp list.

Click the hello-lisp button and you get an echo:


The text "Bonjour, lisp!" is returned by a Lisp function. This demonstrates how to extract strings from a cl_object.

Now you are ready for deeper adventure with embedding ECL. Good luck!

Note: For OSX users, after you build the source code with qmake and make, you should also run this shell command:

mv hello-lisp-system--all-systems.fasb

to make sure the Lisp system ends up in the right place. Linux users are not bothered by this, since Qt doesn't build application bundles by default.

3 Creating a Common Lisp project - Part I

3.1 Introduction

A common question heard from the Common Lisp newcomers is:

How to create my own application with Common Lisp?

Numerous concepts like packages, Quicklisp, modules and ASDF bring confusion, which is only deepened by the wide range of implementations and by a development paradigm foreign to the new programmer: working on a live image in memory.

This post is a humble attempt at a brief tutorial on creating a small application from scratch. Our goal is to build a tool to manage a document collection. Due to the introductory nature of the tutorial we will name our application "Clamber".

We will start with a quick description of what should be installed on the programmer's system (the assumed operating system is Linux). Then we will create a project boilerplate with quickproject, define a protocol for the software, write an application prototype (an inefficient naive implementation), and provide a command-line interface with Clon, the Command-Line Options Nuker. This is where the first part ends.

The second part will be published on the McCLIM blog and will show how to create a graphical user interface for our application with McCLIM.

Afterwards (in a third part, in the next ECL Quarterly) we will look into how to distribute the software in various scenarios:

  • from the Common Lisp developer's perspective, with Quicklisp,
  • to ordinary users with system-wide package managers, with ql-to-deb,
  • as source code distributed to clients, with Quicklisp Bundles,
  • as a binary distribution (closed source), with the ASDF prebuilt-system,
  • as a shared library for non-CL applications, with ECL.

Obviously a similar result may be achieved with different building blocks; all choices here reflect my personal preference regarding the libraries I use.

3.2 How to distribute the software

Before we jump into project creation and actual development, I want to talk a little about software distribution. We may divide our target audience into two groups - programmers and end users. Sometimes it is hard to tell the difference.

Programmers want to use our software as a dependency, as part of their own software. This is a common approach in FOSS applications, where we want to focus on the problem we are solving, not on the building blocks which are freely available (what kind of freedom depends on the license). To make it easy to acquire such dependencies, the Quicklisp project was born. It is a package manager.

End users aren't much concerned with the underlying technology. They want to use the application in the way most convenient to them. For instance, the average non-programming Linux user would expect to find the software in their distribution's package manager. A commercial client will be interested in the source code along with all the dependencies the application was tested with.

The proposed solution is to use Quicklisp during development and bundle the dependencies (also with Quicklisp) when the application is ready. After that operation our source code doesn't depend on the package manager and we have all the source code available, which simplifies further distribution.

3.3 What are Common Lisp systems?

The notion of a "system" is unknown to the Common Lisp specification. It is a build-system specific concept. The most widely used build system in 2016 is ASDF. A system definition is meant to contain the information essential for building the software - application name, author, license, components and dependencies. Unfortunately ASDF doesn't separate system definitions from the source code, and the asd format can't be considered declarative. In effect, we can't load all system definitions with certainty that no unwanted side-effects will follow.

3.4 Development environment configuration

We will only outline steps which are necessary to configure the development environment. There are various tutorials on how to do that which are more descriptive.

  1. Install Emacs and SBCL [1]:

    These two packages should be available in your system package manager (if it has one).

  2. Install Quicklisp:

    Visit and follow the instructions. They contain steps to add Quicklisp to your Lisp initialization file and to install and configure SLIME. Follow all of them.

  3. Start Emacs and run Slime:

    To run SLIME, issue M-x slime in the Emacs window.

These steps are arbitrary. We propose Linux + SBCL + Emacs + Quicklisp + SLIME setup, but alternative configurations are possible.

3.5 How to create a project

Quickproject is an excellent solution for this task, because it is a very simple tool with a well-defined goal - to simplify creating a basic project structure.

The simplest way of creating a new project is loading the quickproject system with Quicklisp and calling the appropriate function. Issue the following in the REPL:

(ql:quickload 'quickproject)
(quickproject:make-project #P"~/quicklisp/local-projects/clamber/"
                           :depends-on '(#:alexandria)
                           :author "Daniel Kochmański <>"
                           :license "Public Domain")

That's it. We have created a skeleton for our project. For now we depend only on alexandria, a public-domain utility library. The list of dependencies will grow during development to reflect our needs. Go to the clamber directory and examine its contents.

Now we will customize the skeleton. I prefer to have one package per file, so I will squash package.lisp and clamber.lisp into one. Moreover, README.txt will be renamed, because we will use the markdown format for it.

To avoid cluttering the tutorial with unnecessary code, we put only the interesting parts here. The complete steps are covered in the application's git repository, available here:

We suggest cloning the repository and tracking the progress by following the subsequent commits along with this tutorial.

3.6 Writing the application

Here is an informal specification of our application Clamber:

  • The application will be used to maintain a book collection,
  • Each book has associated meta-information (disregarding the underlying book file format),
  • Books may be organized with tags and shelves,
  • A book may be on only one shelf, but may have multiple tags,
  • Both CLI and GUI interfaces are required,
  • Displaying the books is not part of the requirements (we may use external programs for that).
  1. Protocol

    First we will focus on defining a protocol. The protocol is the functional interface to our application: we declare how external modules should interact with it. Thanks to this approach we are not tied to implementation details (exposing internals like hash tables or class slot names would hinder future refactoring, or could force changes which are not backward compatible).

    ;;; Clamber book management protocol
    ;;; Requirements explicitly list that books has to be organized by
    ;;; shelves and tags. Book designator is used to identify books (it
    ;;; has to be unique). Protocol doesn't mandate designator type. It
    ;;; may be a unique name, pathname, URL or any arbitrary
    ;;; object. Other args (in form of keys) are meant to contain
    ;;; meta-information.
    (defgeneric insert-book (book-designator &rest args
                             &key shelf tags &allow-other-keys)
      (:documentation "Creates a book entity associated to a given ~
       `shelf' and `tags'."))
    ;;; We need to be able to remove a book. We need only the designator
    ;;; for that.
    (defgeneric delete-book (book-designator)
      (:documentation "Removes a book entity from the system."))
    ;;; We may search for books according to various
    ;;; criteria. `book-designator' is definite. It is possible to extend
    ;;; the find functionality to support other criteria. Book must match
    ;;; *all* supplied criteria.
    (defgeneric find-books (&rest args
                            &key book-designator shelf tags &allow-other-keys)
      (:documentation "Returns a `sequence' of books matching the ~
       supplied criteria."))
    ;;; We access books by their designators, but `find-books' returns a
    ;;; list of opaque objects. This function is needed for coercion from
    ;;; these objects to the designators. Sample usage:
    ;;; (map 'list #'book-designator (find-books :shelf "criminal"))
    (defgeneric book-designator (book)
      (:documentation "Extract book designator from opaque `book' object."))

    This code is put in the clamber.lisp file. It is important to remember that the :documentation clause in defgeneric is meant only for programmers who use our library (to provide a short reminder of what the function does) and shouldn't be considered the final documentation. In particular, docstrings are not documentation.

    Comments are meant for programmers who work on our library (extend it, or just read the code for amusement). Their meaning is strictly limited to the implementation details which are irrelevant to people who use the software. Keep in mind that comments are not a reference manual.

  2. Implementation prototype

    Our initial implementation will be naive, so we can move forward faster. Later we could rewrite it to use a database. During prototyping the programmer may focus on the needed functionality and modify the protocol if needed.

    This is a tight loop of gaining intuition and adjusting the rough edges of the protocol. At this phase you mustn't get too attached to the code, so that you can throw it away without hesitation. The more time you spend on coding, the more attached to the code you become.

    ;;; Implementation
    ;;; At start we are going to work on in-memory database.
    (defparameter *all-books* (make-hash-table) "All defined books.")
    ;;; Note, that when we define `:reader' for the slot `designator' we
    ;;; actually implement part of the protocol.
    (defclass book ()
      ((designator :type symbol   :initarg :des :reader book-designator)
       (shelf      :type string   :initarg :shl :reader book-shelf)
       (tags       :type sequence :initarg :tgs :reader book-tags)
       (meta :initarg :meta :accessor book-information)))
    ;;; `title' and `author' are enlisted for completion.
    (defmethod insert-book ((designator symbol) &rest args
                            &key shelf tags title author &allow-other-keys
                            &aux (tags (alexandria:ensure-list tags)))
      (declare (ignore title author)
               (type string shelf))
      (multiple-value-bind (book found?) (gethash designator *all-books*)
        (declare (ignore book))
        (if found?
            (error "Book with designator ~s already present." designator)
            (setf (gethash designator *all-books*)
                  (make-instance 'book
                                 :des designator
                                 :shl shelf
                                 :tgs (coerce tags 'list)
                                 :meta args)))))
    ;;; Trivial
    (defmethod delete-book ((designator symbol))
      (remhash designator *all-books*))
    ;;; We use `while-collecting' macro (`collect' equivalent from
    ;;; cmu-utils) to simplify the code.
    (defmethod find-books (&rest args &key
                             (book-designator nil designator-supplied-p)
                             (shelf nil shelf-supplied-p)
                             (tags nil tags-supplied-p)
                           &aux (tags (alexandria:ensure-list tags)))
      (declare (ignore args))
      (uiop:while-collecting (match)
        (labels ((match-book (book)
                   (and (or (null shelf-supplied-p)
                            (equalp shelf (book-shelf book)))
                        (or (null tags-supplied-p)
                            (subsetp tags (book-tags book) :test #'equalp))
                        (match book))))
          (if designator-supplied-p
              (alexandria:when-let ((book (gethash book-designator *all-books*)))
                (match-book book))
              (alexandria:maphash-values (lambda (val)
                                           (match-book val))
                                         *all-books*)))))

    Our prototype supports only shelf and tags filters and allows searching by designator. Note that the book-designator function is implemented in our class definition as a reader, so we don't have to define the method manually. We add uiop to the dependencies for the while-collecting macro (a descendant of the collect macro in cmu-utils).

    We may check if our bare (without user interface) implementation works:

    (ql:quickload :clamber)
    ;; -> (:CLAMBER)
    (clamber:insert-book 'the-captive-mind
     :shelf "nonfiction"
     :tags '("nonfiction" "politics" "psychology")
     :title "The Captive Mind"
     :author "Czesław Miłosz")
    ;; -> #<CLAMBER::BOOK {100469CB73}>
    (clamber:find-books :tags '("politics"))
    ;; -> (#<CLAMBER::BOOK {100469CB73}>)
  3. Unit tests

    Now we will add some basic unit tests. For that we will use fiveam testing framework. For seamless integration with ASDF and to not include the tests in clamber itself we will define it as a separate system and point to it with the :in-order-to clause:

    (asdf:defsystem #:clamber
      :description "Book collection management."
      :author "Daniel Kochmański <>"
      :license "Public Domain"
      :depends-on (#:alexandria #:uiop)
      :serial t
      :components ((:file "clamber"))
      :in-order-to ((asdf:test-op
                     (asdf:test-op #:clamber/tests))))
    (asdf:defsystem #:clamber/tests
      :depends-on (#:clamber #:fiveam)
      :components ((:file "tests"))
      :perform (asdf:test-op (o s)
                 (uiop:symbol-call :clamber/tests :run-tests)))

    The tests.lisp file is in the repository with clamber. To run the tests, issue the following in the REPL:

    (asdf:test-system 'clamber/tests)
  4. Prototype data persistence

    To make our prototype complete we need to store our database. We will use a directory returned by uiop:xdg-data-home for it. To serialize the hash table we will use cl-store.

    (defparameter *database-file* (uiop:xdg-data-home "clamber" "books.db"))
    (defun restore-db ()
      "Restore a database from the file."
      (when (probe-file *database-file*)
        (setf *all-books* (cl-store:restore *database-file*))))
    (defun store-db ()
      "Store a database in the file."
      (ensure-directories-exist *database-file*)
      (cl-store:store *all-books* *database-file*))
    (defmethod insert-book :around ((designator symbol)
                                    &rest args &key &allow-other-keys)
      (declare (ignore designator args))
      (prog2 (restore-db)
          (call-next-method)
        (store-db)))
    (defmethod delete-book :around ((designator symbol))
      (declare (ignore designator))
      (prog2 (restore-db)
          (call-next-method)
        (store-db)))
    (defmethod find-books :around (&rest args &key &allow-other-keys)
      (declare (ignore args))
      (prog2 (restore-db)
          (call-next-method)))

    We read and write database during each operation (not very efficient, but it is just a prototype). find-books doesn't need to store the database, because it doesn't modify it.

    Since our database is no longer only an in-memory object, some additional changes to the tests seem appropriate. We don't want to modify the user's database:

    (defparameter *test-db-file*
      (uiop:xdg-data-home "clamber" "test-books.db"))
    (defun run-tests ()
      (let ((clamber::*database-file* *test-db-file*))
        (5am:run! 'clamber)))

    Right now we have a "working" prototype; what we need now is the user interface.

3.7 Creating standalone executable

There are various solutions which enable the creation of standalone binaries. The most appealing to me is Clon: the Command-Line Options Nuker, which has very complete documentation (an end-user manual, a user manual and a reference manual), a well-thought-out API, and works on a wide range of implementations. Additionally, it is easy to use and covers various corner cases in a very elegant manner.

Our initial CLI (Command Line Interface) will be quite modest:

% clamber --help
% clamber add-book foo \
  --tags a,b,c \
  --shelf "Favourites" \
  --meta author "Bar" title "Quux"  
% clamber del-book bah
% clamber list-books
% clamber list-books --help
% clamber list-books --shelf=bah --tags=drama,psycho
% clamber show-book bah

3.7.1 Basic CLI interface

To make our interface happen we have to define the application synopsis. Clon provides the defsynopsis macro for that purpose:

(defsynopsis (:postfix "cmd [OPTIONS]")
  (text :contents
        "Available commands: add-book, del-book, list-books, show-book.
Each command has it's own `--help' option.")
  (flag :short-name "h" :long-name "help"
        :description "Print this help and exit.")
  (flag :short-name "g" :long-name "gui"
        :description "Use graphical user interface."))

These are the top-level flags handling the main options (help and the graphical-mode switch). As we can see, the definition is declarative, allowing both short and long option names. Besides flag, other option types are available (the user may even add their own kind of option).

Clon allows having multiple command-line option processing contexts, which simplifies our task - we can provide a different synopsis for each command, with its own help. First though we will define a skeleton of our main function:

(defun main ()
  "Entry point for our standalone application."
  ;; create default context
  (make-context)
  (cond
    ;; if user asks for help or invokes application without parameters
    ;; print help and quit
    ((or (getopt :short-name "h")
         (not (cmdline-p)))
     (help)
     (exit))
    ;; running in graphical mode doesn't require processing any
    ;; further options
    ((getopt :short-name "g")
     (print "Running in graphical mode!")
     (exit)))
  (alexandria:switch ((first (remainder)) :test 'equalp)
    ("add-book"   (print "add-book called!"))
    ("del-book"   (print "del-book called!"))
    ("list-books" (print "list-books called!"))
    ("show-book"  (print "show-book called!")))
  (exit))


(defun dump-clamber (&optional (path "clamber"))
  (dump path main))

In main we look for the top-level options first. After that we check which command was called. For now each action is just a stub which prints the command name; we will expand it in the next step. The function dump-clamber is provided to simplify executable creation. To dump the executable it is enough to use this snippet:

sbcl --eval '(ql:quickload :clamber)' --eval '(clamber/cli:dump-clamber "clamber")'
./clamber --help


3.7.2 Implementing commands

Each command has to have its own synopsis. Books have unique identifiers (designators) - we force this option to be a symbol. All application parameters following the options are treated as metadata. add-book has the following synopsis:

(defparameter +add-book-synopsis+
  (defsynopsis (:make-default nil :postfix "cmd [OPTIONS] [META]")
    (text :contents "Add a new book to the database.")
    (flag :short-name "h" :long-name "help"
          :description "Print this help and exit.")
    (lispobj :short-name "d" :long-name "ident"
             :description "Book designator (unique)."
             :typespec 'symbol)
    (stropt :short-name "s" :long-name "shelf"
            :description "Book shelf.")
    ;; comma-separated (no spaces)
    (stropt :short-name "t" :long-name "tags"
            :description "Book tags."))
  "The synopsis for the add-book command.")

We don't want duplicated options, so we detect them in the add-book-main function, which is called from main in place of the stub. The command entry point is implemented as follows:

(defun add-book-main (cmdline)
  "Entry point for `add-book' command."
  (make-context :cmdline cmdline
                :synopsis +add-book-synopsis+)
  (when (or (getopt :short-name "h")
            (not (cmdline-p)))
    (help)
    (exit))

  (let ((ident (getopt :short-name "d"))
        (shelf (getopt :short-name "s"))
        (tags  (getopt :short-name "t")))

    (when (or (getopt :short-name "d")
              (getopt :short-name "s")
              (getopt :short-name "t"))
      (print "add-book: Junk on the command-line.")
      (exit 1))
    (clamber:insert-book ident
                         :shelf shelf
                         :tags (split-sequence #\, tags)
                         :meta (remainder))))

To make book listing more readable we define a print-object method for books in clamber.lisp. Moreover, we tune the find-books method to rely not on whether an argument was supplied, but on its value (NIL vs. non-NIL).

(defmethod print-object ((object book) stream)
  (if (not *print-escape*)
      (format stream "~10s [~10s] ~s -- ~s"
              (book-designator object)
              (book-shelf object)
              (book-tags object)
              (book-information object))
      (call-next-method)))

The list-books command is very similar, but instead of calling insert-book it prints all the books found by clamber:find-books called with the provided arguments. Also, we don't print help when called without any options.

(defparameter +list-books-synopsis+
  (defsynopsis (:make-default nil :postfix "[META]")
    (text :contents "List books in the database.")
    (flag :short-name "h" :long-name "help"
          :description "Print this help and exit.")
    (lispobj :short-name "d" :long-name "ident"
             :description "Book designator (unique)."
             :typespec 'symbol)
    (stropt :short-name "s" :long-name "shelf"
            :description "Book shelf.")
    ;; comma-separated (no spaces)
    (stropt :short-name "t" :long-name "tags"
            :description "Book tags."))
  "The synopsis for the list-books command.")

(defun list-books-main (cmdline)
  "Entry point for `list-books' command."
  (make-context :cmdline cmdline
                :synopsis +list-books-synopsis+)
  (when (getopt :short-name "h")
    (help)
    (exit))

  (let ((ident (getopt :short-name "d"))
        (shelf (getopt :short-name "s"))
        (tags  (getopt :short-name "t")))

    (when (or (getopt :short-name "d")
              (getopt :short-name "s")
              (getopt :short-name "t"))
      (print "list-books: Junk on the command-line.")
      (exit 1))

    (map () (lambda (book)
              (format t "~a~%" book))
         (clamber:find-books :book-designator ident
                             :shelf shelf
                             :tags tags))))

Last command we are going to implement is the simplest one - del-book:

(defparameter +del-book-synopsis+
  (defsynopsis (:make-default nil)
    (text :contents "Delete a book in the database.")
    (flag :short-name "h" :long-name "help"
          :description "Print this help and exit.")
    (lispobj :short-name "d" :long-name "ident"
             :description "Book designator (unique)."
             :typespec 'symbol))
  "The synopsis for the del-book command.")

(defun delete-book-main (cmdline)
  "Entry point for `del-book' command."
  (make-context :cmdline cmdline
                :synopsis +del-book-synopsis+)
  (when (or (getopt :short-name "h")
            (not (cmdline-p)))
    (help)
    (exit))

  (clamber:delete-book (getopt :short-name "d")))

Of course this CLI prototype needs improvement. For instance, it doesn't handle any errors - such as when we try to add a book with an already existing designator. Moreover, it would be nice to be able to provide the database file as a top-level argument for testing purposes.

4 Case against implicit dependencies

Sometimes implementations provide functionality which may be expected to be present at run-time under certain conditions. For instance, when we use ASDF to load a system, we probably have UIOP available in the image (because to load the system we need ASDF, which itself depends on UIOP at run-time).

It is important to remember that we must not mix up two very different moments - build time and run-time. This difference may not be very obvious to the Common Lisp programmer, because it is common practice to save the Lisp image together with the system that was loaded with the help of the build system (hence the build system is present in the image), or to load fasl files with the build system in question. The fact that we have only one widely adopted build facility, and that it is often preloaded, makes it even less likely to encounter any problems.

There are two main arguments against implicit dependencies. The first one is the separation of the build tool from the application. It is hardly justifiable to include autotools and make in your binary after building the system. They may have exploitable bugs, increase the application size, or simply be unnecessary (unless you really depend on make at run-time).

If you rely on implicit dependencies, and given that you produce a standalone application (or cross-compile it), either your build system will inject such a dependency for you (which you may not necessarily want), or your application will miss an important component it relies upon (for instance UIOP [2]) and will effectively crash.

The second argument has more to do with declarative system definitions. If your application depends on something, you should list it, because it is a dependency. If we switch to another build system which can read our declarative system definitions, or we have an older version of the build system which doesn't imply the dependency, then we can't load the system. That's not the build system's problem, but our broken system definition.

With that in mind, I strongly advocate listing all dependencies in the system definition, despite voices saying it's redundant or harmful to put them there. We will take UIOP as an example. We have two possible options:

(defsystem #:system-one
  :depends-on ((:require #:uiop)))

(defsystem #:system-two
  :depends-on (#:uiop))

system-one's dependency is resolved as follows:

  1. If the system uiop is already present in the image, do nothing [3],
  2. If the system uiop may be acquired as a module, require it,
  3. If the system uiop may be loaded by a build system, load it,
  4. Otherwise signal a missing-component condition.

This behavior is an elegant substitute for the implicit dependency, which relies on the UIOP version bundled with the Common Lisp implementation.

The system-two system dependency is handled in a slightly different manner:

  1. If the system uiop may be loaded from the disk and the version in the image isn't up-to-date, load the system from the disk,
  2. If the image has a preloaded version of the system, do nothing,
  3. Otherwise signal a missing-component condition.

Both definitions are strictly declarative, and any build system which "knows" the ASD file format will know your preferences, regardless of whether it has UIOP bundled or not. If it can't handle this correctly, then it is a bug in the build system, not in your application.

UIOP here is only one example. I urge you to declare all dependencies of your system. You may know that bordeaux-threads, on which you depend, implies that alexandria will be present in the image, but this assumption may turn against you if it swaps this dependency for something else.

I've asked one of the proponents of implicit dependencies, François-René Rideau, for a comment presenting the other point of view:

The dependency on ASDF is not implicit, it's explicit: you called your system file .asd.

Now, if you want to fight dependency on ASDF, be sure to also track those who put functions and variables in .asd files that they use later in the system itself. Especially version numbers.

Trying to enforce technically unenforceable constraints through shaming isn't going to fly. If you want to promote separation of software from build system, promote the use of Bazel or some other build system incompatible with ASDF.



[1] Since we won't use any unique ECL features, we suggest using SBCL here (it is faster and better supported by third-party libraries). Using ECL shouldn't introduce any problems though.


[2] UIOP doesn't depend on ASDF and may be loaded with older versions of this widely adopted build system, or directly from a file. Quicklisp ships UIOP this way to assure compatibility with implementations which don't include a new ASDF.


[3] This is broken in ASDF as of version 3.1.7 - ASDF will load the system from the disk if possible. It will hopefully be fixed in version 3.1.8.

Didier Verna10th European Lisp Symposium, April 3-4 2017, Brussels, Belgium

· 34 days ago
		ELS'17 - 10th European Lisp Symposium

		   VUB - Vrije Universiteit Brussel

			   April 3-4, 2017

		In co-location with <Programming> 2017

The purpose of the European Lisp Symposium is to provide a forum for
the discussion and dissemination of all aspects of design,
implementation and application of any of the Lisp and Lisp-inspired
dialects, including Common Lisp, Scheme, Emacs Lisp, AutoLisp, ISLISP,
Dylan, Clojure, ACL2, ECMAScript, Racket, SKILL, Hop and so on. We
encourage everyone interested in Lisp to participate.

The 10th European Lisp Symposium invites high quality papers about
novel research results, insights and lessons learned from practical
applications and educational perspectives. We also encourage
submissions about known ideas as long as they are presented in a new
setting and/or in a highly elegant way.

Topics include but are not limited to:

- Context-, aspect-, domain-oriented and generative programming
- Macro-, reflective-, meta- and/or rule-based development approaches
- Language design and implementation
- Language integration, inter-operation and deployment
- Development methodologies, support and environments
- Educational approaches and perspectives
- Experience reports and case studies

We invite submissions in the following forms:

  Papers: Technical papers of up to 8 pages that describe original
    results or explain known ideas in new and elegant ways.

  Demonstrations: Abstracts of up to 2 pages for demonstrations of
    tools, libraries, and applications.

  Tutorials: Abstracts of up to 4 pages for in-depth presentations
    about topics of special interest for at least 90 minutes and up to
    180 minutes.

  The symposium will also provide slots for lightning talks, to be
  registered on-site every day.

All submissions should be formatted following the ACM SIGS guidelines
and include ACM classification categories and terms. For more
information on the submission guidelines and the ACM keywords, see: and

The conference proceedings will be published in the ACM Digital Library.

Important dates:

 -    30 Jan 2017 Submission deadline
 -    27 Feb 2017 Notification of acceptance
 -    20 Mar 2017 Final papers due
 - 03-04 Apr 2017 Symposium

Programme chair:
  Alberto Riva, University of Florida, USA

Programme committee:

Search Keywords:

#els2017, ELS 2017, ELS '17, European Lisp Symposium 2017,
European Lisp Symposium '17, 10th ELS, 10th European Lisp Symposium,
European Lisp Conference 2017, European Lisp Conference '17

McCLIMProgress report #3

· 39 days ago

Dear Community,

During this iteration I was working on a tutorial about how to create an application from scratch with McCLIM used as a GUI toolkit with a detailed description of each step. This is targeted at beginners who want to write their own project with a CLIM interface. The tutorial isn't finished yet, but I expect to publish it soon.

The font-autoconfigure branch was successfully merged to the master branch and various issues were closed thanks to that. One of them is bounty issue #65. Since I don't know how to cancel the bounty, I'm claiming it ($100), and I'm withdrawing from the McCLIM account the usual amount with this $100 subtracted.

I have replaced the clim-listener non-portable utilities with the osicat portability layer and the alexandria library. Changes are present in the develop branch (not merged yet).

The rest of the time was spent on peer review of the contributions, merging pull requests, development discussions, questions on IRC and other maintenance tasks.

A detailed report is available at:

If you have any questions, doubts or suggestions - please contact me either with email ( or on IRC (my nick is jackdaniel).

Sincerely yours,
Daniel Kochmański

Quicklisp newsOctober 2016 Quicklisp dist update now available

· 40 days ago
New projects:
  • architecture.builder-protocol — Protocol and framework for building parse results and other object graphs. — LLGPLv3
  • cepl.drm-gbm — DRM/GBM host for cepl — BSD 3-Clause
  • cl-association-rules — An implementation of the apriori algorithm to mine association rules in Common Lisp. — MIT
  • cl-change-case — Convert strings between camelCase, param-case, PascalCase and more — LLGPL
  • cl-drm — Common Lisp bindings for libdrm — BSD 3-Clause
  • cl-egl — Common Lisp wrapper for libEGL — BSD 3-Clause
  • cl-gbm — Common Lisp wrapper for libgbm — BSD 3-Clause
  • cl-wayland — libwayland bindings for Common Lisp — BSD 3-Clause
  • cl-xkb — Common Lisp wrapper for libxkb — BSD 3-Clause
  • cltcl — Embed Tcl/Tk scripts in Common Lisp — MIT
  • diff-match-patch — A Common Lisp port of Neil Fraser's library of the same name — Apache 2.0
  • exit-hooks — Call registered function when Common Lisp Exits. — BSD
  • grovel-locally — Grovel using cffi and cache the result locally to the system — BSD 2 Clause
  • portable-threads — Portable Threads — Apache License 2.0
Updated projects: 3d-matrices, 3d-vectors, alexa, alexandria, assoc-utils, caveman, caveman2-widgets, cffi, circular-streams, cl-ana, cl-autowrap, cl-bootstrap, cl-cuda, cl-hash-util, cl-html-parse, cl-influxdb, cl-kanren, cl-l10n, cl-libfarmhash, cl-libhoedown, cl-opengl, cl-protobufs, cl-pslib, cl-quickcheck, cl-rabbit, cl-reddit, cl-scan, cl-sdl2, cl-strings, clim-pkg-doc, clim-widgets, closer-mop, clx, coleslaw, colleen, croatoan, dbus, dexador, esrap, esrap-liquid, fiasco, fn, gendl, glsl-spec, graph, http-body, hu.dwim.graphviz, humbler, hunchensocket, jonathan, lack, lake, lass, lisp-critic, maxpc, mcclim, mito, modularize-hooks, modularize-interfaces, north, parse-float, postmodern, recursive-restart, rtg-math, rutils, snmp, snooze, stumpwm, temporal-functions, trivial-string-template, ubiquitous, uiop, usocket, utilities.binary-dump, varjo, weblocks, xml-emitter, zs3.

Removed projects: asn.1, cl-bacteria, cl-binary-file, cl-btree, cl-ntriples, cl-op, cl-swap-file, cl-wal, cl-web-crawler, doplus, esrap-peg.

There are more removed projects than usual this month. asn.1 was removed by request of the author. esrap-peg no longer builds - it may be back soon. All the others are victims of Google Code and SourceForge. Their code can no longer be easily checked out or updated, they don't affect other projects, and nobody has come forward to move them somewhere else and maintain them. If you miss any of those projects, feel free to take it over and let me know.

To get this month's update, use (ql:update-dist "quicklisp"). Enjoy!

Nicolas HafnerRadiance - An Overview - Confession 70

· 41 days ago

It's been a good while since I last worked on Radiance. Unfortunately I can't claim that this was because Radiance was completely finished; quite far from it, to be truthful. However, it has been stable and running well enough that I didn't have to tend to it either. In fact, you're reading this entry on a site served by Radiance right now. Either way, thanks to some nudging by my good friend Janne, I've now returned to it.

I did some rather serious refactoring and cleanup in the past week, that I'm certain brought Radiance a good chunk closer to being publishable. For me, a project needs to fulfil a lot of extra criteria in order to be publishable, even if it already works well enough in terms of the code itself just doing what it's supposed to. So, while it'll be a while before Radiance is ready for public use and biting critique, the overall architecture of it is set in stone. I thought it would be a good idea to elaborate on that part in this entry here, to give people an idea of what sets Radiance apart.

Radiance is, in some sense of the term, a web framework. However, it is sufficiently different from what usually qualifies as that for me to not consider the term an apt description. I instead opted for calling it a "web application environment". If you're familiar with web development in Lisp, you might also know of Clack, which also adopted that term after I introduced it in a lightning talk at ELS'15. Clack comes closer to what Radiance is than most web frameworks, but it's still rather different.

So what is it then that sets Radiance apart? Well, that is explained by going into what Radiance really does, which are only two things: interface management and routing. Naturally it provides some other things on the side as well, but those are by far the biggest components that influence everything.

Back when I was a young bab and had ambition, I decided to write my own websites from scratch. Things developed and evolved into a framework of sorts over the course of a multitude of rewrites. One of the things I quickly became concerned with was the question of how to handle potential user demands. Let's say that I'm an administrator and would like to set up blog software that uses Radiance. I have some requirements, such as what kind of database to use, and perhaps I already have a webserver running too. Maybe I would also like to pick a certain method of authenticating users, and so forth. This means that the actual framework and blog software code need to be sufficiently decoupled to allow swapping a variety of components in and out.

This is what interfaces are for. Radiance specifies a set of interfaces that each outline a number of signatures for functions, macros, variables, and any kind of definable thing. As an application writer, you then simply say that you depend on a particular interface and write your code against the interface's symbols. As an administrator you simply configure Radiance and tell it which implementation to use for which interface. When the whole system is then loaded, the application's interface dependencies are resolved according to the configuration, and the specified implementation then provides the actual functions, macros, etc that make the interface work.

This also allows you to easily write an implementation for a specific interface, should you have particular demands that aren't already filled by the implementations provided out of the box. Since most applications will be written against these interfaces, everything will 'just work' without you having to change a single line of code, and without having to write your application to be especially aware of any potential implementation. The modularity also means that not every interface needs to have an implementation loaded if you don't need it at all, avoiding the monolith problem a lot of frameworks pose.

Unlike other systems that use dynamic deferring to provide interfaces, Radiance's way of doing things means that there is zero additional overhead to calling an interface function, and that macros also simply work as you would expect them to. While interfaces are not that novel of an idea, I would say that this, coupled with the fact that Radiance provides interfaces on a very fine level (there's interfaces for authentication, sessions, users, rate limiting, banning, cache, etc), makes it distinct enough to be considered a new approach.

Let's look at an example in the hopes that it will make things feel a bit more practical. Here's the definition for the cache interface:

(define-interface cache
  (defun get (name))
  (defun renew (name))
  (defmacro with-cache (name-form test-form &body request-generator)))

This definition creates a new package called cache with the functions get and renew and the macro with-cache in it. get is further specified to retrieve the cached content, renew will throw the cache out, and with-cache will emit the cached data if available, and otherwise execute the body to generate the data, stash it away, and return it.

There's a couple of different ways in which the cache could be provided. You could store it in memory, save it to file, throw it into a database, etc. Whatever you want to do, you can get the behaviour by writing a module that implements this interface. For example, here's an excerpt from the simple-cache default implementation:

(in-package #:modularize-user)
(define-module simple-cache
  (:use #:cl #:radiance)
  (:implements #:cache))
(in-package #:simple-cache)

(defun cache:get (name)
  (read-data-file (cache::file name)))

(defun cache:renew (name)
  (delete-file (cache::file name)))

(defmacro cache:with-cache (name test &body request-generator)
  (let ((cache (gensym "CACHE")))
    `(let ((,cache ,name))
       (if (or (not (cache::exists ,cache))
               ,test)
           (cache::output ,cache ((lambda () ,@request-generator)))
           (cache:get ,cache)))))

We define a module, which is basically a fancy package definition with extra metadata. We then simply overwrite the functions and macros of the interface with definitions that do something useful. You'll also notice that there are references to internal symbols in the cache interface. This is how implementations can provide implementation-dependent extra functionality through the same interface. The actual definitions of these functions are omitted here for brevity.

Now that we have an interface and an implementing module, all we need to do is create an ASDF system for it and tell Radiance to load it if someone needs the interface.

(setf (radiance:mconfig :radiance :interfaces :cache) "simple-cache")

Finally, if we want to use this interface in an application, we simply use the interface functions like any other lisp code, and add (:interface :cache) as a dependency in the system definition. When you quickload the application, simple-cache will automatically be loaded to provide the interface functionality.
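As a minimal sketch (the system name and the page-rendering helper are made up for illustration), the application side might look like this:

```lisp
;; my-app.asd -- depend on the interface, not on any implementation.
(asdf:defsystem #:my-app
  :depends-on ((:interface :cache))
  :components ((:file "main")))

;; main.lisp -- call the interface like any other code; whichever
;; implementation the administrator configured provides the behaviour.
(cache:with-cache 'front-page NIL
  (render-front-page))  ; hypothetical page-rendering function
```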

Radiance provides one last thing in order to make the orchestra complete: environments. It is not too far out there to imagine that someone might want multiple, separate Radiance setups running on the same machine. Maybe the implementations used in production and development are different and you'd like to test both on the same machine. Perhaps you're developing different systems. Either way, you need to multiplex the configuration somehow and that's what the environment is for.

The environment is a variable that distinguishes where all of the configuration files for your radiance system are stored, including the configuration and data files of potential applications/modules like a forum or a blog.

So, to start up a particular setup, you just set the proper environment, and launch Radiance as usual.
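A startup script might then look something like the following; note that the exact accessor and startup function names are my assumption, not spelled out here:

```lisp
;; Assumed API: an ENVIRONMENT accessor to select the configuration
;; set, and a STARTUP function to launch Radiance with it.
(setf (radiance:environment) "production")
(radiance:startup)
```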

The second part, routing, is something that every web application must do in some way. However, Radiance again provides a different spin on it. In this case, it is something that I have never seen before anywhere else. This may well be due to ignorance on my part, but for now we'll proceed in the vain hope that it isn't.

Routing is coupled to one of the initial reasons why Radiance even came to be. Namely, I wanted to have a website that had a couple of separate components like a forum, a gallery, and a blog, but all of them should be able to use the same user account and login mechanism. They should also be able to somehow share the "URL address space" without getting into conflicts with each other as to who has which part of the URL. Coupling this with the previous problem of the webmaster having a certain number of constraints for their setup such as only having a single subdomain available, or having to be in a subfolder, things quickly become difficult.

The first part of the problem is partially solved by interfaces, and the rest could be solved if we, say, gave each module of the whole system its own subdomain to work on. Then the URL path could be handled as each part requires. This approach stops working once we consider the last problem: the system cannot presume to have the luxury of infinite subdomains, or of much control over the URL layout at all.

This issue gets worse when you consider the emitted data. Not only do we need to know how to resolve a request to the appropriate module, we also need to resolve each URL in the HTML we emit back again, so that links won't point nowhere. Compounding on that is the question of cross-references. Most likely your forum will want to provide a login button that takes users to the pages of the authentication module, which then handles the rest.

To make this more concrete, let's imagine an example scenario. You're the webmaster of Since this is a free hoster that just gives you a domain that refers to your home server, you can't make any subdomains of your own. Worse, you don't have root access on your machine, so you can't use port 80. You've already set up a simple file server to provide some intro page and other files that you would like to keep at the root. You'd now like to set up a forum on /forum/ and a blog on /personal/blog/. This might seem far-fetched, but I don't deem it too much of a stretch. Most of the constraints are plausible ones that do occur in the wild.

So now as an application writer for the forum or blog software you have a problem. You don't know the domain and you don't know the path either. Hell, you don't even know the port. So how can you write anything to work at all?

The answer is to create two universes and make Radiance take care of them. There's an "external" environment, which is everything the actual HTTP server gets and sends, and thus the user reading a site sees and does. And then there's an "internal" environment, which is everything an application deals with. As an application writer you can now merrily define your endpoints in a sane and orderly fashion, and the admin can still make things work the way he wants them to work. This is what Radiance's routing system allows.

To this effect, Radiance defines a URI object that is, on a request, initialised to the address the server gets. It is then transformed by mapping routes, and then dispatched on. Each application then creates URIs for the links it needs and transforms them by the reversal routes before baking them into the actual HTML.

Again, I think a specific code example will help make things feel more tangible. Let's first define some stubs for the forum and the blog software:

(define-page forum "forum/" ()
  (format NIL "<html><head><title>Forum</title></head>~
               <body>Welcome to the forum! Check out the <a href=~s>blog</a> too!</body></html>"
          (uri-to-url "blog/" :representation :external)))
(define-page blog "blog/" ()
  (format NIL "<html><head><title>Blog</title></head>~
               <body><a href=~s>Log in</a> to write an entry.</body></html>"
          (uri-to-url (resource :auth :page :login) :representation :external)))

Radiance by convention requires each separate module to inhabit its own subdomain. You also see that in the forum page we include a link to the blog by externalising the URI for its page and turning it into a URL. In the blog page we include a link to the login page by asking the resource system for the page in the authentication interface. We can do this because the authentication interface specifies that the page resource must be resolvable and must return the appropriate URI object.

Next we need to tell Radiance about our external top-level domain so that it doesn't get confused and try to see subdomains where there are none.

(add-domain "")

By the way, you can have as many different domains, ports, and whatevers as you want simultaneously and the system will still work just fine.

Finally we need the actual routes that do the transformations. Funnily enough, despite the apparent complexity, this current setup allows us to use the simplest route definition macro. You can do much, much more complicated things.

(define-string-route forum :mapping "/forum/(.*)" "forum/\\1")
(define-string-route forum :reversal "forum/(.*)" "/forum/\\1")
(define-string-route blog :mapping "/personal/blog/(.*)" "blog/\\1")
(define-string-route blog :reversal "blog/(.*)" "/personal/blog/\\1")

And that's all. Radiance will now automatically rewrite URIs in both directions to resolve to the proper place.

For an example of a more complicated route, Radiance provides a default virtual-module route that allows you to use the /!/foo/.. path to simulate a lookup to the foo subdomain. This may seem simple at first, but it needs to do a bit of extra trickery in order to ensure that all external URIs are also using the same prefix, but only if you already got to the page by using that prefix.

But it just works. The route system neatly confines that complexity to a single place with just two definitions and ensures complete agnosticism for all applications that reside in Radiance. No matter what kind of weird routes you might have to end up using, everything will resolve properly automagically provided you set up the routes right and provided you do use the URI system.

And that's, really, mostly all that Radiance's core does. It shoves off all of the functionality that is not strictly always needed to interfaces and their various possible implementations. In the end, it just has to manage those, and take care of translating URIs and dispatching to pages.

I don't want to toot my own horn much here, mostly because I despise bragging of any kind and I'm much too insecure to know if what I've done is really noteworthy. On the other hand, I think that this should illustrate some of the benefits that Radiance can give you and present a brief overview on how things fit together.

As I mentioned at the beginning, Radiance is not yet ready for public consumption, at least according to my own criteria. Lots of documentation is missing, many parts are not quite as well thought out yet, and in general there's some idiosyncrasies that I desperately want to get rid of. So, unless you feel very brave and have a lot of time to spend learning by example, I would not advise using Radiance. I'm sure I won't forget to blog once I do deem it ready though. If you are interested, you'll simply have to remain patient.

LispjobsClojure Engineer, ROKT, Sydney, Australia

· 45 days ago

ROKT ( is hiring thoughtful, talented functional programmers, at all levels, to expand our Clojure team in Sydney, Australia.  (We’re looking for people who already have the right to work in Australia, please.)

ROKT is a successful startup with a transaction marketing platform used by some of the world’s largest ecommerce sites. Our Sydney-based engineering team supports a business that is growing rapidly around the world.

Our Clojure engineers are responsible for ROKT’s “Data Platform”, a web interface for our sales teams, our operations team, and our customers to extract and upload the data that drives our customers’ businesses and our own. We write Clojure on the server-side, and a ClojureScript single-page application on the frontend.

We don’t have a Hadoop-based neural net diligently organising our customer data into the world’s most efficiently balanced red-black tree (good news: we won’t ask you to write one in an interview) — instead, we try to spend our time carefully building the simplest thing that’ll do what the business needs done. We’re looking for programmers who can help us build simple, robust systems — and we think that means writing in a very functional style — whether that involves hooking some CV-enhancing buzzword technology on the side or not.

If you have professional Clojure experience, that’s excellent, we’d like to hear about it. But we don’t have a big matrix of exacting checkboxes to measure you against, so if your Clojure isn’t fluent yet, we’ll be happy to hear what you can do with Common Lisp or Scheme, or in fact any other language.  We have the luxury of building out a solid team of thoughtful developers — no “get me a resource with exactly X years of experience in technology Y, stat!”

If this sounds interesting, please contact Sam Roberton at Sam’s an engineer on the Clojure team, and he’d be happy to tell you whatever you want to know about what we do and what we’re looking for.

LispjobsClojure Engineer, Funding Circle, San Francisco

· 46 days ago


Would you describe yourself as a creative and ambitious engineer who's always ready to take on the next cutting edge technology? If your answer is yes, you're going to fit right in with our global team. We're looking for an experienced and enthusiastic Clojure Engineer who will bring elegance and simplicity to the forefront of our distributed systems. We are a group of passionate engineers whose bread and butter is learning new technologies and fostering a collaborative and inclusive environment – we're looking for partners in crime who feel the same.

Bird's eye view of the role:

  • Be an automator: we are continually reevaluating our stack to improve efficiency throughout the pipeline. We practice continuous integration and have a container-based deployment workflow.
  • Be a builder: you'll build and expand our highly-available architecture to handle over $2 billion in loans originated through our pipeline.
  • Be a collector: you'll help build scalable infrastructure to collect data for real time analytics and risk modeling.
  • Be a collaborator: you'll be expected to forge deep bonds with your business counterparts to truly understand the needs of our Borrowers and Investors. We work in an Agile environment including pair programming and daily stand-ups.
  • Be a teacher: be generous with your time and expertise to teach stakeholders and our fellow engineers how to answer their own questions with tools you build.

Our ideal teammate has:

  • at least 1 year of professional experience working with Clojure (or really strong personal projects using Clojure).
  • 3+ years of overall software engineering experience in any language (Ruby, Python, Java, etc).
  • an interest in Functional Programming languages.
  • comfort in a Unix/Linux environment.

Brownie points for:

  • Github or other open source code we can check out.
  • distributed systems experience.
  • experience with microservices and/or event-driven architecture.
  • operating at scale with low-latency systems.
  • familiarity with Docker, Mesos, and/or experience with distributed database systems, such as Cassandra.

Zach BeaneRandom wish: tests for convergence and consensus

· 47 days ago

There are a lot of areas where most Common Lisp implementations have converged on a common way to do something, even though the specification allows freedom to approach them in divergent ways.

For example, it’s not required that (code-char 97) evaluates to #\a, but in practice, that’s how all implementations currently work.

Similarly, although probe-file’s description repeatedly refers to a file, in practice, nearly every implementation allows a directory pathname. (CLISP signals an error, instead, and has a parallel set of system functions that can probe directories.)

And although it’s not required, reading and writing specialized octet vectors to streams does what you’d generally expect. The octets on the Lisp side line up one-to-one with the octets that are read from or written to the stream. There isn’t any padding or swapping or headers or other stuff.

I think it would be pretty cool to accumulate these de facto standard behaviors into a test suite so you could see exactly where you need to adjust your expectations, and perhaps provide persuasive evidence to change unnecessarily divergent behavior.
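A sketch of how such checks might read, written as plain assertions (the pathname is arbitrary; a real suite would use a proper test framework and per-implementation expectations):

```lisp
;; De facto behaviors the spec permits but does not require.
(assert (char= (code-char 97) #\a))   ; ASCII-compatible CODE-CHAR
(assert (probe-file #p"/tmp/"))       ; directory pathname accepted (CLISP signals an error)

;; Octet vectors round-trip through binary streams one-to-one.
(let ((octets (make-array 3 :element-type '(unsigned-byte 8)
                            :initial-contents '(1 2 3)))
      (path #p"/tmp/defacto-octets.bin"))
  (with-open-file (out path :direction :output
                            :element-type '(unsigned-byte 8)
                            :if-exists :supersede)
    (write-sequence octets out))
  (with-open-file (in path :element-type '(unsigned-byte 8))
    (let ((back (make-array 3 :element-type '(unsigned-byte 8))))
      (read-sequence back in)
      (assert (equalp octets back)))))
```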

I don’t have time to make this, sorry! But if you make it, let me know, because I’d love to try it out.

Zach BeaneSLIME's "fancy" inspector is great. I've...

· 48 days ago

SLIME's "fancy" inspector is great. I've used it for a long time, but just recently found a new feature.

I was working with a file that had some non-standard syntax in it, but the error message didn't make it clear exactly where.

In the slime debugger, you can inspect the condition that is signaled while reading a file and find the stream associated with the read operation. When you inspect that stream, there's an action available, [visit file and show current position], that will open an Emacs buffer for the file, with the cursor at the stream's position in the file.

From there, it was easy to see the syntax that tripped up the reader. Give it a try the next time you have a similar issue.

Hans Hübner

· 48 days ago

Berlin Lispers Meetup: Tuesday October 25th, 2016, 8.00pm

You are kindly invited to the next "Berlin Lispers Meetup", an informal gathering for anyone interested in Lisp, beer or coffee:

Berlin Lispers Meetup
Tuesday, October 25th, 2016
8 pm onwards

St Oberholz, Rosenthaler Straße 72, 10119 Berlin
U-Bahn Rosenthaler Platz

We will try to occupy a large table on the first floor, but in case you don't see us,
please contact Christian: 0157 87 05 16 14.

Please join for another evening of parentheses!

Quicklisp newsProjects on the bubble

· 59 days ago
This month there are a number of projects that may be dropped from Quicklisp. They are hosted by Google Code and SourceForge, and the projects no longer check out properly. They are:
Each of these projects was downloaded between 4 and 6 times in the month of September. (No project in the entire dist was downloaded fewer than 4 times.)

I'm going to do some build-testing and see how widely this will impact other projects. If the damage is minimal, they will simply be dropped.

If you maintain (or want to maintain) one of these projects and want to see it remain in Quicklisp, please update its hosting and then get in touch.

ABCL DevABCL 1.4.0

· 63 days ago
With a decided chill noticeable in the Northern Hemisphere, the Bear has finally sloughed off a long-needed release of ABCL.

With abcl-1.4.0, CFFI now works reliably allowing cross-platform linkage to native libraries to be initiated dynamically at runtime.  Examples of using CL-CUDA to follow as their authors have time to publish.

Considerable work and testing led by Elias Pipping with contributions from Olof-Joachim Frahm has led to a reasonable basis for UIOP/RUN-PROGRAM compatibility.

We have taken the time to learn enough of Maven to publish binary artifacts for both abcl.jar and abcl-contrib.jar that allow developers everywhere to more easily incorporate the Bear into their local Java build tool chains.

And we have tentatively surrendered to the current fashion by establishing GIT bridges to the ABCL source at and to more easily facilitate contributions from the community.

Version 1.4.0



* Consolidated RUN-PROGRAM fixes (ferada, pipping)

* Upstream consolidated patchset (ferada)

** [r14857] Support `FILE-POSITION` on string streams.
** [r14859] Add multiple disassembler selector.
** [r14860] Add EXTERNAL-ONLY option to APROPOS.
** [r14861] Fix nested classes from JARs not visible with JSS.

* [r14840-2] (Scott L. Burson) Introduced "time of time" semantics for
  {encode,decode}-universal time.

* EXTENSIONS:MAKE-TEMP-FILE now takes keyword arguments to specify
  values of the prefix and suffix strings to the underlying JVM
  implementation of

* [r14849] EXT:OS-{UNIX,WINDOWS}-P now provide a pre-ASDF runtime check on hosting platform


* [r14863] RandomCharacterFile et. al.

* [r14839] (JSS) Ensure the interpolation of Java symbol names as strings (alan ruttenberg)

* [r14889] Fix ANSI-TEST SXHASH.8 (dmiles)


* asdf-

* jna-4.2.2


* [r14885] ASDF-INSTALL was removed

CL Test Gridquicklisp 2016-09-29

· 69 days ago
Test results for quicklisp 2016-09-29 comparing to the previous release are coming here.

If you need help reproducing or understanding particular failure, email or comment.

McCLIMProgress report #2

· 69 days ago

Dear Community,

During this iteration my main focus was on refactoring the font implementation for McCLIM. Major changes were applied to both the CLX backend and the TrueType extension, fixing issues and improving functionality. A brief description of the TrueType extension is added in the corresponding system mcclim-fonts/truetype (in the form of file).

The branch with the mentioned changes awaits peer review and will hopefully be merged soon. Since it is a vast change, any critical feedback is very welcome:

In the meantime I've worked a bit on issue #55 (without great success), answered questions on IRC, responded to reported issues and merged some pull requests. I'm very pleased that we have a modest, yet constant stream of contributions - many thanks to all contributors.

Posted bounties have not been claimed yet and unfortunately we haven't found a second developer for the agreed price so far.

A detailed report is available at:

If you have any questions, doubts or suggestions, please contact me either by email or on IRC (my nick is jackdaniel).

Sincerely yours,
Daniel Kochmański

Quicklisp newsSeptember 2016 Quicklisp dist update now available

· 71 days ago
New projects:
  • 3d-matrices — A utility library implementing 2x2, 3x3, 4x4, and NxN matrix functionality. — Artistic
  • a-cl-logger — A logger that sends to multiple destinations in multiple formats. Based on arnesi logger — BSD
  • alexa — A lexical analyzer generator — BSD 3-clause (See LICENSE.txt)
  • beast — Basic Entity/Aspect/System Toolkit — MIT/X11
  • cl-ascii-art — Ascii Art generating routines. — GPLv3
  • cl-bootstrap — Twitter Bootstrap widget library for Common Lisp — MIT
  • cl-cuda — Cl-cuda is a library to use NVIDIA CUDA in Common Lisp programs. — LLGPL
  • cl-kanren — A minikanren implementation — BSD
  • easy-audio — A pack of audio decoders for FLAC, WavPack and other formats — 2-clause BSD
  • fxml — Fork of CXML. — LLGPL
  • infix-math — An extensible infix syntax for math in Common Lisp. — MIT
  • mgl-mat — MAT is a library for working with multi-dimensional arrays which supports efficient interfacing to foreign and CUDA code. BLAS and CUBLAS bindings are available. — MIT
  • parachute — An extensible and cross-compatible testing framework. — Artistic
  • scalpl — market maker + APIs to several Bitcoin exchanges — public domain
  • trivial-mmap — A library providing an easy-to-use API for working with memory-mapped files. — Public Domain
  • utility-arguments — Utility to handle command-line arguments. — ICS
Updated projects: 3d-vectors, acclimation, architecture.service-provider, array-utils, arrow-macros, bknr-datastore, carrier, caveman2-widgets, cells, chanl, chirp, cl-ana, cl-bloom, cl-coroutine, cl-freeimage, cl-gamepad, cl-glfw3, cl-inotify, cl-interpol, cl-ixf, cl-lex, cl-monitors, cl-mpg123, cl-mtgnet, cl-neovim, cl-oclapi, cl-opengl, cl-out123, cl-pack, cl-rabbit, clack, clfswm, clip, clobber, closer-mop, clss, clx, codex, coleslaw, colleen, croatoan, crypto-shortcuts, deeds, deferred, dexador, dissect, documentation-utils, esrap, fare-scripts, file-types, flare, for, form-fiddle, gbbopen, gendl, geneva, hu.dwim.bluez, hu.dwim.sdl, humbler, inferior-shell, inlined-generic-function, kenzo, lack, lake, lambda-fiddle, lass, legit, lisp-invocation, lquery, maxpc, mcclim, mito, mito-auth, modularize, modularize-hooks, modularize-interfaces, more-conditions, mpc, pathname-utils, pgloader, piping, plump, plump-bundle, plump-sexp, postmodern, ptester, purl, qlot, qt-libs, qtools, qtools-ui, queen.lisp, quri, racer, random-state, ratify, redirect-stream, rtg-math, rutils, serapeum, sha3, simple-inferiors, simple-tasks, snooze, softdrink, south, spinneret, staple, static-vectors, stumpwm, swap-bytes, trivia, trivial-arguments, trivial-benchmark, trivial-indent, trivial-main-thread, trivial-mimes, trivial-rfc-1123, trivial-thumbnail, ubiquitous, utilities.binary-dump, utilities.print-items, verbose, weblocks, weblocks-prototype-js, with-cached-reader-conditionals, woo, zenekindarl.

Removed projects: agm, ax.tga, cl-ecs, cl-marklogic, com.informatimago.

To get this update, use (ql:update-dist "quicklisp").
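For completeness, a minimal REPL session sketch (assuming Quicklisp is already installed and loaded; the project picked is one of this month's additions):

```lisp
;; Pull down the new dist metadata and software updates.
(ql:update-dist "quicklisp")

;; Afterwards, any newly added project can be loaded by name,
;; e.g. the Parachute testing framework from the list above:
(ql:quickload "parachute")
```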


Zach BeaneCorman Lisp updates

· 71 days ago

Artem Boldarev has made some nice updates to Corman Lisp, fixing the Windows 64-bit FFI issue Roger Corman described as “difficult” when Corman Lisp was released under the MIT license. There are a number of other fixes, too, so go check it out if you have any interest in Corman Lisp.

Hans Hübner

· 77 days ago

Berlin Lispers Meetup: Tuesday September 27th, 2016, 8.00pm

You are kindly invited to the next "Berlin Lispers Meetup", an informal gathering for anyone interested in Lisp, beer or coffee:

Berlin Lispers Meetup
Tuesday, September 27th, 2016
8 pm onwards

St Oberholz, Rosenthaler Straße 72, 10119 Berlin
U-Bahn Rosenthaler Platz

We will try to occupy a large table on the first floor, but in case you don't see us,
please contact Christian: 0157 87 05 16 14.

Please join for another evening of parentheses!

Nicolas HafnerLudum Dare 36 & Lisp Application Programming - Confession 68

· 102 days ago

With just about an hour and a half to spare we managed to submit our entry for Ludum Dare 36. Ludum Dare is a regularly occurring, fairly well-known game jam, the idea of which is to create a game from scratch in 48 hours by yourself, or in 72 hours as a team. Given that, unlike the last time we tried to participate, we actually managed to finish making something that resembles a game, I think it's worth noting what my experience was, as well as my general thoughts on game programming in Lisp.

On the first day I actually still had a university exam to take, so I couldn't start right away in the morning and didn't have much time at all to prepare anything. This in turn led to several hours being squandered on the first day trying to get collision detection working and fixing other graphical bugs. Most of that time was spent looking through hundreds of completely useless tutorials and code samples on the web. In general I find the quality of material in the game and engine areas to be absolutely atrocious. Most of the things you can find are either too sweeping, badly written, or assume some ridiculous game framework that makes the code very hard to decipher even if it were applicable as a general, decoupled algorithm. I'm not sure why this field in particular is so bad, or if I'm just searching wrong. Either way, it seems that every time I stumble upon a problem in that domain I have to invest a lot of time in finding the right material; not a very productive experience, as you might guess.

All difficulties aside, after the first day we had a running game that let you interact with objects, pick things up into an inventory, and so forth. Pretty much all of the progress you see here is my doing. My partner-in-crime was busy working on a random map generator using perlin noise maps, which only got integrated later. Naturally, the reason why we could move on to working on actual game-related features much sooner than on our previous attempt at a Ludum Dare is that our engine, Trial, has advanced a lot since then and is much more ready for use. But it is still not there yet by a long shot. Especially a general collision detection and resolution system is a vital aspect that we just did not have the time and energy to incorporate yet. Collision is one of those notorious game development aspects because there are so many ways to get it slightly wrong, not to mention that depending on your geometry it can get quite complex quite quickly, not only in terms of performance but also in terms of the necessary algorithms involved. We've set up a Trello board now that should allow us to more easily track ideas we have for Trial and keep things organised.

I'm also thinking of focusing future Ludum Dares on one particular core component of the engine. For example, next time we could work on a general dialogue tree system; an adventure or visual novel would lend itself very well to that. Or we could start working on a more powerful animation system with something like a puzzle game, and so forth. Another thing we need to get into is sound and music design. Just before the jam I was heavily focused on getting sound support incorporated, but the mixer part in particular was giving me a lot of unprecedented design issues that I could not satisfactorily resolve in the small amount of time I had available with exams still going strong. I hope to return to that and finish it off soon, though. That should pave the way to adding sound to our games. Unfortunately it won't actually help with learning how to make effects and music. I expect that despite my 8 years of violin and 3 years of saxophone practice I won't have much of a clue at all about how to compose music. Either way, there has to be a start somewhere.

By the end of the second day we had finally gotten in some good work on the game aspects, having shed pretty much all of the engine troubles. Random map generation worked, you could place items and interact, and there were even some, albeit flowery-looking, mice running around. Things went much more smoothly now thanks to Common Lisp and Trial's incremental, dynamic development capabilities. We did uncover some more issues with the underlying engine system that proved rather "interesting". I'll have to investigate solutions over the coming days. Most prominently, one problem is that of accessing important structures such as the scene when they aren't explicitly passed as a parameter. Currently, depending on where you are, you can reach the scene through a special variable or through the global window registry. Neither approach feels great, and I'd like to see if I can come up with a cleaner solution. We also found some problems in other libraries in the ecosystem, such as qtools and flare. It's really great to get some good use out of these libraries and get an opportunity to improve them.

And then the third day marched on. We got some pretty good last-minute features in, namely actually being able to pick up the mice, cook them, and eat them to fill your stomach. Things were getting pretty tight on time towards the end as we were rushing to fix problems in the map generation and gameplay mechanics. Fortunately I had planned in a lot of buffer time (~6 hours) for deploying the game too, as that actually proved to be a lot more problematic than I had anticipated. Not a single platform deployed right away, and it took me until 2 in the morning to figure everything out. One of the mechanisms that Qtools offers to tie custom libraries into the automated deploy process was not coded right and had never been fully tested, so that bug only showed up now. On Windows we had some access violation problems that were probably caused by Trial's asset tracking system constructing Qt objects before dumping. Fortunately I had anticipated such a thing, and with a simple (push :trial-optimize-pool-watching *features*) before compilation the pool watching disabled itself and things worked smoothly from there.
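The *features* trick above is plain standard Common Lisp: pushing a keyword onto *features* before compiling makes #+/#- reader conditionals include or drop forms at read time. A hedged sketch of the pattern (the watcher function here is hypothetical, not Trial's actual API):

```lisp
;; Before compiling/dumping the deployed build:
(push :trial-optimize-pool-watching *features*)

;; In the (hypothetical) asset code, the eager path is read out of
;; existence whenever the feature is present:
(defun maybe-watch-pools ()
  #-trial-optimize-pool-watching
  (start-pool-watcher)  ; hypothetical: would construct Qt objects eagerly
  #+trial-optimize-pool-watching
  nil)                  ; disabled for deployment builds
```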

On Linux the issues were much more curious. Running it from SLIME worked fine. Deploying worked fine. Launching the binary from my host worked fine. But as soon as I tried to launch it from my VM, it would complain about not finding despite the file sitting right there next to the binary, the system using an absolute path to it that was correctly constructed, and other libraries before it being loaded the same way actually working. I'm still not sure why exactly that happened, but I finally remembered that qt-libs augments your LD_LIBRARY_PATH to ensure that, should a library want to automatically load another for some dumb reason (despite the load order already being done exactly and properly in code), it would still resolve to our own custom files first. However, since environment variables aren't saved when the binary is dumped, this did not carry over to when it was resumed, so I had to make sure that Qtools automatically corrects those on warm boot. And as if by magic, now everything did indeed work!

And so I sleep-deprivedly submitted our entry and went off to sleep. Today I then finally got to catch up with some other things that had started to pile up because I didn't have any time to spare at all over the weekend and now I'm here writing this entry. A riveting recount of a tale, to be sure.

Now I want to take some time to talk about my general impression of writing games (or applications in general) in Common Lisp. This is going to be rough and you might hate me for what I'll say. That's fine. Hatred is a good emotion if you can divert its energy into something productive. So put your passion to it and go ahead, brave reader.

I'll start with my biggest gripe: libraries. While there are plenty of libraries around in terms of numbers, some areas are just severely lacking and a lot of things are missing. Sure, for games there are bindings to SDL, but if you don't want to use that you're already pretty much out of luck. There wasn't any library to handle gamepad input, monitor resolution management, 3D model file loading, or complex audio mixing until I added them all recently, and that's not accounting for all the features that I got "for free" by using Qt. There's still so much more missing. Font loading and rendering is one current example that's bothering me in particular. We're using Qt for that right now but it sucks. It sucks big time. I want something that works. Sometimes there are one or a few libraries around, but they're not candidates for me because they're just not usable.

Now, I think it's worth noting what it takes for a library to become usable to me, so let me explain. First and foremost, it must have a non-viral license that allows me to use it freely, and potentially commercially, without repercussion. I don't intend on selling my crap any time soon, but someone who wants to use my software might. I cannot accept something that would restrict them from doing so. Second, it must work natively cross-platform on at the very least Linux, Windows, and OS X. If you cannot deploy your application to all of those platforms you can forget about it; it might be a nice toy for your personal use, but don't have any illusions that everyone is using Linux or that people would bother to switch operating systems just for your program. Third, it must be easy to deploy. This includes, but is not limited to, minimal C dependencies. Deploying C dependencies is a bloody nightmare and the version mismatches will ruin your life. Sometimes C dependencies are unavoidable, but anything that creates a gigantic dependency tree is practically impossible to deploy on Linux across distributions without requiring people to install and potentially compile packages on their own system, which is such a ludicrous suggestion that you should feel ashamed for even considering it. End-users will often not know, nor care, how to do that, and certainly won't go through the trouble of finding out just to use your thing. Finally, it should have a nice interface for me, the programmer. I don't want to use a library that is just a bare-bones CFFI wrapper, or merely some magic "run" function and nothing else. If it's something like that, I could probably write it better myself, and quicker than it would take me to figure out how to use that library.

Libraries aside, Common Lisp is not a magic bullet. Most of the problems are still exactly the same as in any other environment. The algorithms are the same, the implementations are roughly the same, the problems are about equivalent. Sure, Lisp is different and it is really cool, but again, don't entertain any illusions of grandeur about it. Being such a small community, there's all the more pressure on everyone to put as much into it as possible to bring it up to par with the rest of the world. Just because Lisp has many convenient features that ease coding a lot doesn't remedy the fact that it is utterly dwarfed in man-power. I know man-power isn't everything, but pretending that the effects of thousands upon thousands of people working on things all over the world just aren't there is as insane as expecting your productivity to increase a hundredfold if you add a hundred people to a project. So please, stay open and accepting about the issues that Lisp still has as an ecosystem. The easiest way to start would be by making sure that libraries have adequate documentation.

If you're fine with Lisp staying small, then that's absolutely alright by me. After all, I'm fine with that too, and actually don't really care about it growing either. What I do care about is the idiocy of pretending that somehow Lisp's advantages can trump the size of other ecosystems. That is plain lunacy. No matter how high the convenience, writing any amount of code is going to take time.

I really like Lisp, I have probably around a hundred projects written in it by now and I have not touched any other language for personal projects in years. But I don't merely want to like Lisp, I want to be able to tell other people about it with a good conscience. I want to be able to tell them "yeah sure, you can do that no problemo!" without having to fall into the Turing tar pit. I want to be able to show them my projects and tell them "man, that was fun!", not "man, I had to write like twenty libraries to get here, and I'm now finally done after many weeks of hard work, but it was kinda fun I guess."

As mentioned in the other article linked above I really don't want to come off as pushy. I don't want to tell anyone what to do. You do what you like. This is merely me venting my frustration about some of the attitudes I've seen around the place, and some of the problems I seem to be constantly dealing with when working on my projects. I don't like being frustrated, and that's why I'm writing about it. But that's all there is to it; my frustration is far from enough justification to want to change anyone else. I definitely wouldn't expect this to change anyone's mind or force them to do anything different. It's just another voice in the wind.

So how about mentioning some good aspects? Well, they were already buried in the above, I'd say. Incremental development is awesome, especially for games where a lot of small tweaks and extensive, interactive testing are necessary. Lisp is a very pretty language and I like it a lot more than anything else I've ever worked with so far. It has the potential to be fantastic... but it isn't there yet.

Now I think I'll go back to thinking on how to get it just a sliver more in that direction. Ludum Dare gave me a lot to think about and there's exciting changes ahead. Who knows what'll be possible by the time the next Ludum Dare comes around.

By the way, in case you'd like to talk to us and discuss problems, potential features, and generally just chat around about all things code, consider yourself encouraged to hop on by our IRC channel #shirakumo on

McCLIMProgress report #1

· 103 days ago

Dear Supporters,

I'm publishing a progress report for the month of August with a detailed timelog and a brief description of the undertakings performed each day. This file also contains the report for the previous iteration sponsored by Professor Robert Strandh.

The most important achievement was securing funds for a few months of work with the Bountysource crowdfunding campaign. We've created some bounties to attract new developers. The #clim channel on Freenode is active and we seem to be regaining a user base, which results in interesting discussions, knowledge sharing, and increased awareness of the project among non-CLIM users.

We have also gained valuable feedback about user expectations regarding further development and about the issues which are the most inconvenient for them. We've added a new section on the website which addresses some common questions and doubts, and we have created a wiki on GitHub (not very useful yet).

During this time we have been constantly working on identifying and fixing issues, cleaning up the code base and thinking about potential improvements. The curious reader may consult the git repository log, the IRC log, and the logbook.

As a side note, I've exceeded the time allotted for this iteration by four hours, but I'm treating it as my free time. Additionally, people may have noticed that I did some work on CLX not specifically related to McCLIM; this development was done on my own time as well.

Also, to address a few questions regarding our agenda: our roadmap is listed here. That means, among other things, that we are concentrating on finishing and polishing the CLX backend and are currently not working on any other backends.

If you have any questions, doubts or suggestions, please contact me either by email or on IRC (my nick is jackdaniel).

Sincerely yours,
Daniel Kochmański

Quicklisp newsAugust 2016 Quicklisp dist update now available

· 105 days ago
New projects:
  • assoc-utils — Utilities for manipulating association lists — Public Domain
  • caveman2-widgets-bootstrap — An extension to caveman2-widgets which enables the simple usage of Twitter Bootstrap. — LLGPL
  • cells — A Common Lisp implementation of the dataflow programming paradigm — LLGPL
  • cl-mpg123 — Bindings to libmpg123, providing cross-platform, fast MPG1/2/3 decoding. — Artistic
  • cl-neovim — Common Lisp client for Neovim — MIT
  • cl-out123 — Bindings to libout123, providing cross-platform audio output. — Artistic
  • cl-soil — A thin binding over which allows easy loading of images — BSD 2 Clause
  • cl-sxml — SXML parsing for Common Lisp — GNU General Public License
  • clump — Library for operations on different kinds of trees — FreeBSD, see file LICENSE.text
  • dirt — A front-end for cl-soil which loads images straight to cepl:c-arrays and cepl:textures — BSD 2 Clause
  • ext-blog — A BLOG engine which supports custom theme — BSD
  • for — An extensible iteration macro library. — Artistic
  • git-file-history — Retrieve a file's commit history in Git. — MIT
  • illogical-pathnames — Mostly filesystem-position-independent pathnames. — BSD 3-clause (See illogical-pathnames.lisp)
  • maxpc — Max's Parser Combinators: a simple and pragmatic library for writing parsers and lexers based on combinatory parsing. — GNU Affero General Public License
  • parse-front-matter — Parse front matter. — MIT
  • path-string — A path utility library — MIT
  • pseudonyms — Relative package nicknames through macros — FreeBSD (BSD 2-clause)
  • — Common Lisp implementation of Graham Cormode and S. Muthukrishnan's Effective Computation of Biased Quantiles over Data Streams in ICDE'05 — MIT
  • queen.lisp — Chess utilities for Common Lisp — MIT
  • read-number — Definitions for reading numbers from an input stream. — Modified BSD License
  • simple-gui — A declarative GUI definition tool for Common Lisp — BSD
  • slack-client — Slack Real Time Messaging API Client — Apache-2.0
  • trivial-rfc-1123 — minimal parsing of rfc-1123 date-time strings — MIT
  • with-cached-reader-conditionals — Read whilst collecting reader conditionals — BSD 2 Clause
Updated projects: 3bmd, 3d-vectors, agm, alexandria, binfix, burgled-batteries, caveman, caveman2-widgets, cepl, cepl.camera, cepl.devil, cepl.sdl2, cepl.skitter, ceramic, chirp, city-hash, cl-ana, cl-async, cl-azure, cl-conspack, cl-ecs, cl-fad, cl-gamepad, cl-grace, cl-influxdb, cl-jpeg, cl-libuv, cl-messagepack, cl-messagepack-rpc, cl-mpi, cl-mtgnet, cl-oclapi, cl-opengl, cl-openstack-client, cl-pack, cl-quickcheck, cl-redis, cl-rethinkdb, cl-scan, cl-sdl2, cl-smtp, cl-strings, cl-tokyo-cabinet, cl-unification, cl-yaclyaml, cl-yaml, clack, classimp, clml, clos-fixtures, closer-mop, clx, clx-truetype, coleslaw, collectors, corona, croatoan, dbus, dendrite, dexador, dissect, djula, eazy-gnuplot, esrap, exscribe, external-program, fare-memoization, fare-scripts, fiveam, flare, fn, gendl, geneva, glkit, glsl-spec, gsll, iterate, json-mop, kenzo, lack, lambda-fiddle, lisp-namespace, lispbuilder, lparallel, mcclim, mel-base, neo4cl, oclcl, opticl, osicat, prometheus.cl, prove, qlot, qt-libs, qtools, qtools-ui, quickapp, random-state, rcl, remote-js, restas, rtg-math, serapeum, sip-hash, skitter, spinneret, squirl, stumpwm, treedb, trivia, trivial-documentation, trivial-nntp, trivial-open-browser, trivial-ws, trivialib.type-unify, ubiquitous, ugly-tiny-infix-macro, utilities.print-items, utilities.print-tree, varjo, vgplot, vom, weblocks, weblocks-utils, woo, wu-sugar.

Removed projects: scalpl.

Zach BeaneDinosaur and Lisp

· 107 days ago

Dinosaur and Lisp has a nice hack for automating the Chrome dinosaur game with Common Lisp and CLX.

For older items, see the Planet Lisp Archives.

Last updated: 2016-12-09 17:01