Planet Lisp

Tycho Garen: Learning Common Lisp Again

· 40 hours ago

In a recent post I wrote about abandoning a previous project that had gone off the rails. Since then I've been doing more work in Common Lisp, and I wanted to report a bit more on some recent developments. There's a lot of writing about learning to program for the first time, and a fair amount of writing about Lisp itself, but neither is particularly relevant to me, and I suspect there may be others who find themselves in a similar position in the future.

My Starting Point

I already know how to program, and have a decent understanding of how to build and connect software components. I've been writing a lot of Go (Lang) for the last 4 years, and wrote rather a lot of Python before that. I'm an emacs user, and I use a Common Lisp window manager, so I've always found myself writing little bits of lisp here and there, but it never quite felt like I could do anything of consequence in Lisp, despite thinking that Lisp is really cool and that I wanted to write more.

My goals and rationale are reasonably simple:

  • I'm always building little tools to support the way that I use computers. Nothing is particularly complex, but I'd enjoy being able to do this in CL rather than in other languages, mostly because I think it'd be nice not to do it in the same languages that I work in professionally. [1]
  • Common Lisp is really cool, and I think it'd be good if it were more widely used; writing more of it, and writing posts like this, is probably the best way I can help make that happen.
  • Learning new things is always good, and I think having a personal project to learn something new will be a good way of stretching myself as a developer.
  • Common Lisp has a bunch of features that I really like in a programming language: real threads, easy to run/produce static binaries, (almost) reasonable encapsulation/isolation features.

On Learning

Knowing how to program makes learning how to program easier: broadly speaking, programming languages are similar to each other, and if you have a good model for the kinds of constructs and abstractions that are common in software, then learning a new language is mostly about learning the new syntax, picking up new idioms, and figuring out how different language features can make it easier to solve problems that were difficult in other languages.

In a lot of ways, if you already feel confident and fluent in a programming language, learning a second language is really about teaching yourself how to learn a new language, a skill you can then apply to all future languages as needed.

Except that, realistically, "third languages" aren't super common: it's hard to reach the same level of fluency that you have with earlier languages, and "third-and-later" languages are often learned in the context of some existing code base or project, so it's hard to generalize that familiarity outside of that context.

It's also the case that it's often pretty easy to learn a language well enough to perform common or familiar tasks, but fluency is hard, particularly across different idioms. I'm using CL as an excuse to do kinds of programming that I have more limited experience with: web programming, GUI programming, and using different kinds of databases.

My usual method for learning a new programming language is to write a program of moderate complexity and size in a problem space that I know pretty well. This makes it possible to gain familiarity and map concepts that I understand to new ones while working on a well-understood project. In short, I get to focus exclusively on "how do I do this?" problems and not "is this possible?" or "what should I do?" problems.


The more I think about it, the more I realize that "knowing a programming language" is inevitably linked to a specific kind of programming: the kind of Lisp that I've been writing has skewed toward the object-oriented end of the Lisp spectrum, with fewer functional bits than perhaps average. I'm also still a bit green when it comes to macros.

There are kinds of programs that I don't really have much experience writing:

  • GUI things,
  • the front-half of the web stack, [2]
  • processing/working with ASTs, (lint tools, etc.)
  • lower-level kinds of runtime implementation.

There's lots of new things to learn, and new areas to explore!


[1] There are a few reasons for this. Mostly, I think in a lot of cases it's right to choose programming languages that are well known (Python, Java and JVM friends, JavaScript), easy to learn (Go), and that fit in with existing ecosystems (which vary a bit by domain), so while that might be the right choice, it's a bit limiting. It's also the case that putting some boundaries/context switching between personal projects and work projects could be helpful in improving quality of life.
[2] Because it's 2020, I've done a lot of work on "web apps," but most of my work has focused on areas including the data layer, application architecture, core business logic, and reliability/observability, and less on anything material to rendering web pages. Most projects have a lot of work to be done, and I have no real regrets, but it does mean there's plenty to learn. I wrote an earlier post about the problems with the concept of "full-stack engineering" which feels relevant.

Alexander Artemenko: declt

· 2 days ago

This is the documentation builder behind the Quickref site. It is good for generating API references for third-party libraries.

The most interesting features of Declt are:

  • Declt uses the Texinfo file format as an intermediate document store. This makes it possible to generate not only HTML but also PDF and other output formats.
  • It can automatically include license text in the documentation, but this works only for a number of popular licenses such as MIT, BSD, GPL, LGPL and Boost.

As always, I've created a template project, ready to be used:

Here is how it is rendered in HTML:

And in PDF:

Sadly, Declt does not support markup in docstrings, and cross-referencing does not work there.

Some other pros and cons are listed on example site.

Remember, all example projects include a build script and a GitHub Action to update documentation on every commit!

Jonathan Godbout: Proto Cache: A Caching Story

· 6 days ago

What is Proto-Cache?

I've been working internally at Google to open-source several libraries, including cl-protobufs and a series of utility libraries we call "ace". I wrote several blog posts making an HTTP server that takes in either protocol buffers or JSON strings and responds in kind. I think I've worked enough on the Mortgage Server and wish to work on a different project.

Proto-cache will grow up to be a pub-sub system that takes in google.protobuf.Any protos and sends them to users over HTTP requests. I'm developing it to showcase the ace.core library and the Any proto well-known type. In this post we create a cache system which stores google.protobuf.Any messages in a hash-table keyed off of a symbol.

The current incarnation of Proto Cache:

The code can be found here:


This is remarkable inasmuch as cl-protobufs isn't required in the defsystem! It's not required at all, but we do require a protocol buffer message object; right now we are only adding it to and getting it from the cache. This allows us to store a protocol buffer message object that any user system can parse by calling unpack-any. We never have to understand the message inside.


Now for the actual implementation. We provide three different functions:

  • get-from-cache
  • set-in-cache
  • remove-from-cache

We also have a:

  • fast-read mutex
  • hash-table

Note: The ace.core library can be found at:

Fast-read mutex (fr-mutex):

The first interesting thing to note is the fast-read mutex. It can be found in the ace.core.thread package included in the ace.core utility library, and it allows for mutex-free reads of a protected region of code. One has to call:

  • (with-frmutex-read (fr-mutex) body)
  • (with-frmutex-write (fr-mutex) body)

If the body of with-frmutex-read finishes with nobody having called with-frmutex-write, then the value is returned. If someone calls with-frmutex-write while another thread is in with-frmutex-read, then the body of with-frmutex-read has to be re-run. One should be careful not to modify state in the with-frmutex-read body.
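To make the retry semantics concrete, here is a conceptual sketch of how such a construct can be built. This is not the ace.core implementation — it is a simplified seqlock-style model using SBCL's sb-thread package, and it ignores the memory-ordering details a real implementation must handle:

```lisp
;;; Conceptual sketch only -- NOT the ace.core implementation.
;;; Writers take a real lock and bump a version counter twice
;;; (odd while writing, even when done); readers run optimistically
;;; and retry whenever the counter changed underneath them.
(defstruct frmutex
  (lock (sb-thread:make-mutex))
  (version 0))

(defmacro with-frmutex-read ((fm) &body body)
  "Re-run BODY until no write overlapped its execution."
  (let ((v (gensym "VERSION")) (result (gensym "RESULT")))
    `(loop
       (let* ((,v (frmutex-version ,fm))
              (,result (progn ,@body)))
         ;; Success only if no writer was active (even counter)
         ;; and no write completed while BODY ran.
         (when (and (evenp ,v) (= ,v (frmutex-version ,fm)))
           (return ,result))))))

(defmacro with-frmutex-write ((fm) &body body)
  "Serialize writers with the lock; signal readers via the counter."
  `(sb-thread:with-mutex ((frmutex-lock ,fm))
     (incf (frmutex-version ,fm))
     (unwind-protect (progn ,@body)
       (incf (frmutex-version ,fm)))))
```

This also makes it obvious why the read body must be side-effect free: it may execute several times before it "counts".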

Discussion About the Individual Functions


(acd:defun* get-from-cache (key)
  "Get the any message from cache with KEY."
  (declare (acd:self (symbol) google:any))
  (act:with-frmutex-read (cache-mutex)
    (gethash key cache)))

This function uses the defun* form from ace.core.defun. It looks the same as a standard defun except that it has a new declare statement. The declare statement takes the form:

(declare (acd:self (lambda-list-type-declarations) output-declaration))

In this function we state that the input KEY must be a symbol and that the return value will be a google:any protobuf message. The output declaration is optional. For all of the options, please see the macro definition for ace.core.defun:defun*.
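As I read it, the acd:self declaration plays roughly the role of a portable ftype proclamation. A sketch of the equivalent standard Common Lisp, assuming google:any is a valid type designator for the message class:

```lisp
;; Rough portable equivalent of the acd:self declaration above:
;; one symbol argument in, one google:any message out.
(declaim (ftype (function (symbol) google:any) get-from-cache))
```

The defun* form just keeps this information inside the function body instead of in a separate top-level form.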

The with-frmutex-read macro is also being used.

Note that in the macro's body we only do a simple accessor call into a hash-table. Safety is not guaranteed, only consistency.


(acd:defun* set-in-cache (key any)
  "Set the ANY message in cache with KEY."
  (declare (acd:self (symbol google:any) google:any))
  (act:with-frmutex-write (cache-mutex)
    (setf (gethash key cache) any)))

We see the new defun* form used again. In this case we have two inputs: KEY will be a symbol, and ANY will be a google:any proto message. We also see that we will return a google:any proto message.

The with-frmutex-write macro is being used, and the only thing done in its body is setting a cache value. If someone tries to get a message from the cache while we set a message into the cache, it is possible the reader will have to read multiple times. In systems where readers are much more common than writers, fr-mutexes and spinlocking are much faster than having readers lock a mutex for every read.


We omit this function in this write-up for brevity.


Fast-read mutexes like the one found in ace.core.thread are incredibly useful tools. Having to acquire a mutex can be slow even in cases where that mutex is never locked. I believe this is one of the more useful additions in the ace.core library.

My feelings about the new defun* macro found in ace.core.defun are more mixed. I find a lack of clarity in mapping the lambda list s-expression in the defun statement to the s-expression in the declaration. Others may find that it provides nicer syntax and that the clarity is more obvious.

Future posts will show the use of the any protocol buffer message.

As usual Carl Gay gave copious edits and suggestions.

Eric Timmons: Static Executables with SBCL

· 14 days ago

Common Lisp is an amazing language with many great implementations. The image based development paradigm vastly increases developer productivity and enjoyment. However, there frequently comes a time in a program's life cycle where development pauses and a version must be delivered for use by non-developers. There are many tools available to build an executable in Common Lisp, most of which follow the theme of "construct a Lisp image in memory, then dump it to disk for later reloading". That being said, none of the existing methods fit 100% of my use cases, so this post is dedicated to documenting how I filled the gap by convincing SBCL to generate completely static executables.


There are a variety of reasons to want static executables, but the most common ones I run into personally are:

  1. I want to archive my executables. I want to have a version of my executables saved that I can dig up at any point in the future, long after I've upgraded my OS (multiple times), and run for benchmarking purposes, to test if old versions exhibited specific behavior, etc. without needing to recompile.
  2. I want to enable someone to reproduce my results exactly. This is important for reproducibility in academic contexts. Also, some computing contests that conferences organize prefer static executables so they can run tests on their hardware without needing to set up a complicated run time environment.
  3. I want to make it trivial for someone to install my software. With a static executable, all anyone running on Linux needs to do is download a single file, chmod +x it, and copy it onto their path (preferably after verifying its integrity, but, let's be honest, fewer people do that than should).

There certainly are issues with static executables/linking in general. If you are unaware of what they are, I highly encourage you to read up on the subject before deciding that static executables are the be-all-end-all of application delivery. Static executables are just another tool in a developer's toolbox, to be pulled out only when the time is right.

I'll pause for a moment for a clarification: when I say static executable I mean a truly static executable. As in, I want to be able to run ldd on it and have it output "not a dynamic executable", and I do not want it to call any libdl functions (such as dlopen or dlsym) at runtime. While some existing methods claim or imply that they make static executables with SBCL (such as CFFI's static-program-op or manually linking external libraries into the SBCL runtime while building it), they by and large mean they statically link foreign code into the runtime, but the runtime itself is not a static executable.

I have yet to find a publicly documented method of creating a fully static executable with SBCL, and it's not too hard to understand why. Creating a static executable requires statically linking in libc, and the most common libc implementation for Linux (glibc) does a half-assed job of statically linking itself. While it is possible, many functions will cause your "static" executable to dynamically load pieces of glibc behind your back. Except now you have the requirement that the runtime version of glibc must match the version you compiled against exactly. That defeats the whole point of having a static executable!

For that reason, musl libc is commonly used when creating a truly static executable is important. Unfortunately, musl is not 100% compatible with glibc and for a while SBCL would not work with it. There have been various efforts at patching SBCL to run with musl libc throughout the years, but the assorted (minor!) changes finally got merged upstream in SBCL 2.0.5. This laid the groundwork necessary for truly static executables with SBCL.


Enough with the blabber, show me the code!

I am maintaining a fork of SBCL that contains the necessary patches. There is a static-executable branch which will always contain the latest version. I plan to rebase this branch on new SBCL releases or on top of upstream's master branch if it looks like I'm going to need to do some extra legwork for an upcoming release. There will also be a series of branches named static-executable-$VERSION which have my patches applied on top of the named version, starting with SBCL 2.1.0.

The patch for any SBCL release is also located at$VERSION/static-executable-support.patch. There is a detached signature available at$VERSION/static-executable-support.patch.asc signed with GPG key 0x9ACF6934.

I would love to get these patches upstreamed, but they didn't get much traction the last time I submitted them to sbcl-devel. Admittedly, they were an early, less elegant version that hadn't seen much use in the real-world. My hope is that other people who desire this capability from SBCL will collaborate to test and refine these patches over time for eventual upstreaming.


Given that most people aren't using musl libc on their development computer, the quickest, easiest way to get a static executable is to build one with Docker. After getting the patchset, simply run the following set of commands in the root of the SBCL repo. This will use the clfoundation/sbcl:alpine3.12 Docker image (another project of mine for a future post) to build a static executable and then copy it out of the image to your host's file system.

docker build -t sbcl-static-executable -f tools-for-build/Dockerfile.static-executable-example .
docker create --name sbcl-static-executable-extractor sbcl-static-executable
docker cp sbcl-static-executable-extractor:/tmp/sb-gmp-tester /tmp/sb-gmp-tester
docker rm sbcl-static-executable-extractor

You should now be able to examine /tmp/sb-gmp-tester to see that it is a static executable:

$ ldd /tmp/sb-gmp-tester
     not a dynamic executable

If all goes well, you should also be able to run it, see the sb-gmp contrib tests all pass (fingers crossed), and realize that this worked because libc, the SBCL runtime, and libgmp were all statically linked!

The file README.static-executable (after applying the patchset) has an example of building locally and a set of docker commands that doesn't require tagging images and naming containers.

How does it work??

This approach requires that the target image be built twice: once to record the necessary foreign symbols, and then again with the newly built static runtime. I can, however, envision ways around this for a sufficiently motivated person.

One way could be to modify the (already in-tree) shrinkwrapping recipe to handle libdl not being available at runtime. I abandoned this approach largely because the shrinkwrapping code is written for x86-64 and does a lot of things with assembly (which I do not know), and it is important for me to have static executables on ARM as well. A second way could be to patch out or otherwise improve the check that the runtime version used to build the core matches the runtime version used to run it. I didn't take this approach as it would certainly lead to difficult-to-debug issues if used incorrectly; plus, the Lisp code in the core would need to check the presence/usefulness of libdl functions at runtime.

So, how does this patchset work and why does it require two passes? Apologies to the SBCL devs if I completely butcher the explanation of SBCL internals, but here it goes anyways!

Lisp code routinely calls into C code, whether it is to a runtime provided function, a libc function, or another library the user has linked and defined using the sb-alien package or the portable counterparts in CFFI. In order to mediate these calls from the Lisp side, SBCL maintains a linkage table. This table has two components. First is a Lisp-side hash table that maps foreign names (and an indicator of if it is data or a function) to an integer. The second is a C-side vector that contains either the address of the symbol (in the case of data) or the opcodes necessary to call the function (e.g., by JMPing to its address).

The C-side vector is populated by looking up the symbol's address using dlsym. This lookup generally happens under two possible scenarios. First, when the Lisp code defines a foreign symbol it wants to be able to call or read. Second, every time the runtime starts, it populates the C-side entries for every symbol contained in the core's hash-table. This second case is how SBCL handles the dynamic linker changing the address of symbols in between core dumps.

This reliance on dlopen and dlsym is so baked into SBCL at this point that, even though the code is nominally conditioned on the internal feature :os-provides-dlopen, I was unable to build a working SBCL without it (before these patches, of course).

With these patches, you first build your Lisp image that you want to deliver like normal. Then, you load the file tools-for-build/dump-linkage-info.lisp into it. Next, you call sb-dump-linkage-info:dump-to-file to extract the Lisp side linkage table entries into a separate file (filtered to remove functions from libdl). Once you have this file, you rebuild SBCL, this time with the intention of creating a static runtime. To do this, you should provide the following:

  • The environment variable LINKFLAGS should contain -no-pie -static in order to build the static runtime.
  • Any additional libraries you need should be specified using the environment variable LDLIBS.
  • You probably want to set the environment variable IGNORE_CONTRIB_FAILURES to yes.
  • You need to pass the file containing the linkage table entries to using the --extra-linkage-table-entries argument.
  • Build without the :os-provides-dlopen and :os-provides-dladdr features. One way of doing this is to pass --without-os-provides-dlopen and --without-os-provides-dladdr to
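Putting the first pass described above into concrete terms, a REPL session in the image you intend to deliver might look roughly like this (the output filename is illustrative, and I am assuming dump-to-file takes the destination path as its argument — check the patchset's README for the exact invocation):

```lisp
;; First pass, run inside the Lisp image you want to deliver.
;; "linkage-info.sexp" is an illustrative filename, not one
;; mandated by the patchset.
(load "tools-for-build/dump-linkage-info.lisp")
(sb-dump-linkage-info:dump-to-file "linkage-info.sexp")
```

The resulting file is what you later hand to the second build via --extra-linkage-table-entries.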

During the build process, the contents of the --extra-linkage-table-entries file are inserted into the cold SBCL core during second genesis, and a C file is autogenerated containing a single function that populates the C side of the linkage table using the address of every symbol. This C file is then built into the runtime and called while the runtime boots, before it starts executing the core. This means that, if the runtime is a dynamic executable, the system linker will patch up all the references we need at runtime without SBCL needing to call dlsym explicitly. If the runtime is a static executable, then the symbols are statically linked for us and nothing needs to be done at runtime.


Given how new this approach is, you will certainly run into issues. Many systems that load foreign code will blindly assume that libraries can be linked in at runtime and will fail to work (silently or loudly) if that assumption is not met. Some libraries already have their own homebrew ways of dealing with this. For instance, if the feature :cl+ssl-foreign-libs-already-loaded is present, the cl+ssl system will not attempt to load the libraries. To deal with this issue in a more principled way, I strongly recommend patching systems to use CFFI's (relatively) new canary argument to define-foreign-library.

CFFI itself also has some issues with this arrangement because it dives into some sb-alien internals that simply aren't present on #-os-provides-dlopen. I currently fix this in a kludgy way by commenting out most of %close-foreign-library in src/cffi-sbcl.lisp, but if more people start building static executables, we'll need to come up with a better way of handling it.

Next Steps

I would love to get feedback on this approach and any ideas on how to improve it! I strongly believe that better support for building static executables with SBCL should be upstreamed and I doubt I am alone in that belief. Please drop me a line (etimmons on Freenode or daewok on Github/Gitlab) if you have suggestions.

Personally, I have used earlier iterations of these patches to build static executables for some of my grad school work. My next real deployment of these patches will likely be to build CLPM with them and providing static executables starting with v0.4.

Michał Herda: TIL that Common Lisp dynamic variables can be made locally unbound

· 15 days ago
            ;;; let's first define a global variable...
CL-USER> (defvar *foo* 42)

;;; ...and then make a binding without a value using PROGV
CL-USER> (progv '(*foo*) '() (print *foo*))

debugger invoked on a UNBOUND-VARIABLE in thread
#<THREAD "main thread" RUNNING {1004A684B3}>:
  The variable *FOO* is unbound.

Type HELP for debugger help, or (SB-EXT:EXIT) to exit from SBCL.

restarts (invokable by number or by possibly-abbreviated name):
  0: [CONTINUE   ] Retry using *FOO*.
  1: [USE-VALUE  ] Use specified value.
  2: [STORE-VALUE] Set specified value and use it.
  3: [ABORT      ] Exit debugger, returning to top level.

((LAMBDA ()))
   source: (PRINT *FOO*)
0] ; look ma, locally unbound!
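The same locally-unbound state can be observed without entering the debugger, since BOUNDP reports on the current dynamic binding:

```lisp
;;; BOUNDP sees the valueless binding inside PROGV...
CL-USER> (progv '(*foo*) '() (boundp '*foo*))
NIL

;;; ...while the global value is untouched outside of it
CL-USER> *foo*
42
```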


Timofei Shatrov: Ichiran@home 2021: the ultimate guide

· 15 days ago

Recently I've been contacted by several people who wanted to use my Japanese text segmenter Ichiran in their own projects. This is not surprising, since it's vastly superior to MeCab and similar software, and is occasionally updated with new vocabulary, unlike many other segmenters. Ichiran powers which is a very cool webapp that helped literally dozens of people learn Japanese.

A big obstacle towards the adoption of Ichiran is the fact that it's written in Common Lisp and people who want to use it are often unfamiliar with this language. To fix this issue, I'm now providing a way to build Ichiran as a command line utility, which could then be called as a subprocess by scripts in other languages.

This is a master post on how to get Ichiran installed and how to use it, for people who don't know any Common Lisp at all. I'm providing instructions for Linux (Ubuntu) and Windows; I haven't tested whether it works on other operating systems, but it probably should.


Ichiran uses a PostgreSQL database as a source for its vocabulary and other things. On Linux install postgresql using your preferred package manager. On Windows use the official installer. You should remember the password for the postgres user, or create a new user if you know how to do it.

Download the latest release of the Ichiran database. On the release page there are commands needed to restore the dump. On Windows they don't really work; instead, try to create the database and restore the dump using pgAdmin (which is usually installed together with Postgres). Right-click on PostgreSQL/Databases/postgres and select "Query tool…". Paste the following into the Query editor and hit the Execute button.

CREATE DATABASE [database_name]
    WITH TEMPLATE = template0
    OWNER = postgres
    LC_COLLATE = 'Japanese_Japan.932'
    LC_CTYPE = 'Japanese_Japan.932'
    TABLESPACE = pg_default;

Then refresh the Databases folder and you should see your new database. Right-click on it then select “Restore”, then choose the file that you downloaded (it wants “.backup” extension by default so choose “Format: All files” if you can’t find the file).

You might get a bunch of errors when restoring the dump saying that “user ichiran doesn’t exist”. Just ignore them.


Ichiran uses SBCL to run its Common Lisp code. You can download Windows binaries for SBCL 2.0.0 from the official site; on Linux you can use the package manager, or also use binaries from the official site, although they might be incompatible with your operating system.

However, you really want the latest version, 2.1.0, especially on Windows, for uh… reasons. There's a workaround for Windows 10 though, so if you don't mind turning on that option, you can stick with SBCL 2.0.0.

After installing some version of SBCL (SBCL requires SBCL to compile itself), download the source code of the latest version and let’s get to business.

On Linux it should be easy, just run

sh --fancy
sudo sh

in the source directory.

On Windows it’s somewhat harder. Install MSYS2, then run “MSYS2 MinGW 64-bit”.

pacman -S mingw-w64-x86_64-toolchain make

# for paths in MSYS2 replace drive prefix C:/ by /c/ and so on

cd [path_to_sbcl_source]
export PATH="$PATH:[directory_where_sbcl.exe_is_currently]"

# check that you can run sbcl from command line now
# type (sb-ext:quit) to quit sbcl

sh --fancy

INSTALL_ROOT=/c/sbcl sh

Then edit Windows environment variables so that PATH contains c:\sbcl\bin and SBCL_HOME is c:\sbcl\lib\sbcl (replace c:\sbcl here and in INSTALL_ROOT with another directory if applicable). Check that you can run a normal Windows shell (cmd) and run sbcl from it.


Quicklisp is a library manager for Common Lisp. You’ll need it to install the dependencies of Ichiran. Download quicklisp.lisp from the official site and run the following command:

sbcl --load /path/to/quicklisp.lisp

In SBCL shell execute the following commands:


This will ensure quicklisp is loaded every time SBCL starts.
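For reference, the standard Quicklisp bootstrap sequence (as documented on the Quicklisp site) is:

```lisp
;; Run inside the sbcl --load /path/to/quicklisp.lisp session:
(quicklisp-quickstart:install)  ; installs Quicklisp under ~/quicklisp
(ql:add-to-init-file)           ; adds Quicklisp loading to the SBCL init file
```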


Find the directory ~/quicklisp/local-projects (%USERPROFILE%\quicklisp\local-projects on Windows) and git clone the Ichiran source code into it. It is possible to place it in an arbitrary directory, but that requires configuring ASDF, while ~/quicklisp/local-projects/ should work out of the box, as should ~/common-lisp/, though I'm not sure about the Windows equivalent for this one.

Ichiran wouldn’t load without settings.lisp file which you might notice is absent from the repository. Instead, there’s a settings.lisp.template file. Copy settings.lisp.template to settings.lisp and edit the following values in settings.lisp:

  • *connection* this is the main database connection. It is a list of at least 4 elements: database name, database user (usually “postgres”), database password and database host (“localhost”). It can be followed by options like :port 5434 if the database is running on a non-standard port.
  • *connections* is an optional parameter, if you want to switch between several databases. You can probably ignore it.
  • *jmdict-data* this should be a path to these files from JMdict project. They contain descriptions of parts of speech etc.
  • ignore all the other parameters, they’re only needed for creating the database from scratch
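After editing, the connection part of settings.lisp might end up looking roughly like this. All values here are illustrative, and settings.lisp.template is authoritative for the exact variable names and file layout:

```lisp
;; Illustrative values only -- copy settings.lisp.template and edit it.
;; Database name, user, password, host, in that order:
(defparameter *connection*
  '("jmdict" "postgres" "my-secret-password" "localhost"))

;; With Postgres on a non-standard port it could instead be:
;; '("jmdict" "postgres" "my-secret-password" "localhost" :port 5434)
```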

Run sbcl. You should now be able to load Ichiran with

(ql:quickload :ichiran)

On the first run, run the following command. It should also be run after downloading a new database dump and updating Ichiran code, as it fixes various issues with the original JMdict data.


Run the test suite with


If not all tests pass, you did something wrong! If none of the tests pass, check that you configured the database connection correctly. If all tests pass, you have a working installation of Ichiran. Congratulations!

Some commands that can be used in Ichiran:

  • (ichiran:romanize "一覧は最高だぞ" :with-info t) this is basically a text-only equivalent of everyone’s favorite webapp based on Ichiran.
  • (ichiran/dict:simple-segment "一覧は最高だぞ") returns a list of WORD-INFO objects which contain a lot of interesting data which is available through “accessor functions”. For example (mapcar 'ichiran/dict:word-info-text (ichiran/dict:simple-segment "一覧は最高だぞ")) will return a list of separate words in a sentence.
  • (ichiran/dict:dict-segment "一覧は最高だぞ" :limit 5) like simple-segment but returns top 5 segmentations.
  • (ichiran/dict:word-info-from-text "一覧") gets a WORD-INFO object for a specific word.
  • ichiran/dict:word-info-str converts a WORD-INFO object to a human-readable string.
  • ichiran/dict:word-info-gloss-json converts a WORD-INFO object into a “json” “object” containing dictionary information about a word, which is not really JSON but an equivalent Lisp representation of it. But, it can be converted into a real JSON string with jsown:to-json function. Putting it all together, the following code will convert the word 一覧 into a JSON string:
  (jsown:to-json
   (ichiran/dict:word-info-gloss-json
    (ichiran/dict:word-info-from-text "一覧")))

Now, if you’re not familiar with Common Lisp all this stuff might seem confusing. Which is where ichiran-cli comes in, a brand new Command Line Interface to Ichiran.


ichiran-cli is just a simple command-line application that can be called by scripts just like mecab and its ilk. The main difference is that it must be built by the user, who has already done the previous steps of the Ichiran installation process. It needs access to the postgres database, and the connection settings from settings.lisp are currently “baked in” during the build. It also contains a cache of some database references, so modifying the database (i.e. updating to a newer database dump) without also rebuilding ichiran-cli is highly inadvisable.

The build process is very easy. Just run sbcl and execute the following commands:

(ql:quickload :ichiran/cli)

sbcl should exit at this point, and you’ll have a new ichiran-cli (ichiran-cli.exe on Windows) executable in the ichiran source directory. If sbcl didn’t exit, try deleting the old ichiran-cli and doing it again; it seems that on Linux sbcl sometimes can’t overwrite this file for some reason.

Use the -h option to see how to use this tool. There will be more options in the future, but at the time of this post it prints out the following:

>ichiran-cli -h
Command line interface for Ichiran

Usage: ichiran-cli [-h|--help] [-e|--eval] [-i|--with-info] [-f|--full] [input]

Available options:
  -h, --help      print this help text
  -e, --eval      evaluate arbitrary expression and print the result
  -i, --with-info print dictionary info
  -f, --full      full split info (as JSON)

By default calls ichiran:romanize, other options change this behavior

Here’s some example usage of these switches:

  • ichiran-cli "一覧は最高だぞ" just prints out the romanization
  • ichiran-cli -i "一覧は最高だぞ" - equivalent of ichiran:romanize :with-info t above
  • ichiran-cli -f "一覧は最高だぞ" - outputs the full result of segmentation as JSON. This is the one you’ll probably want to use in scripts etc.
  • ichiran-cli -e "(+ 1 2 3)" - execute arbitrary Common Lisp code… yup that’s right. Since this is a new feature, I don’t know yet which commands people really want, so this option can be used to execute any command such as those listed in the previous section.
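If you want to drive ichiran-cli from Lisp itself rather than a shell script, one possible convenience wrapper (purely hypothetical, not part of Ichiran; it assumes the binary is on your PATH and that the jsown library is available via Quicklisp) shells out to the -f mode and parses the JSON it prints:

```lisp
;; Hypothetical wrapper around ichiran-cli; assumes the binary is on
;; PATH and jsown has been loaded via Quicklisp.
(ql:quickload :jsown)

(defun segment-via-cli (text)
  "Run `ichiran-cli -f TEXT` and parse its JSON output into Lisp data."
  (jsown:parse
   (uiop:run-program (list "ichiran-cli" "-f" text)
                     :output '(:string :stripped t))))

;; (segment-via-cli "一覧は最高だぞ")
```

Of course, inside a Lisp image you could just load ichiran directly; a wrapper like this only makes sense when you want the baked-in settings and cache of the prebuilt binary.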

By the way, as I mentioned before, on Windows SBCL prior to 2.1.0 doesn’t parse non-ASCII command line arguments correctly, which is why I had to include a section about building a newer version of SBCL. However, if you use Windows 10, there’s a workaround that avoids having to build SBCL 2.1.0. Open “Language Settings”, find the link to “Administrative language settings”, click on “Change system locale…”, and turn on “Beta: Use Unicode UTF-8 for worldwide language support”. Then reboot your computer. Voila, everything will work now. At least in regards to SBCL; I can’t guarantee that other command line apps which use locales will work after that.

That’s it for now, hope you enjoy playing around with Ichiran in this new year. よろしくおねがいします!

Nicolas Hafner: 2020 for Kandria in Review - Gamedev

· 18 days ago

Well, 2020 has certainly been a year. Given the amount of stuff that's happened, and especially the big changes in my life around Kandria, I thought it would be interesting to write up a review on the entire year. I'm not going to go month by month, but rather just give an overview on the many things that happened and how I feel about it all, so don't be surprised if I jump between things a little bit.

With that said, I want to start this out by thanking everyone for their support throughout the year. It's been really nice to see people interested in the project! I really hope that we can deliver on a good game, though it is going to take a long time still to get there. I hope you can wait for a couple more years!

A year ago Kandria still had its prototype name "Leaf", and I had just gotten done with a redesign of the main character, The Stranger. Much of the visual style of the game had already been defined by then, though, including the shadows. Most of the UI toolkit, Alloy, was also in place at that point. I think it was also then that I decided to do public monthly updates on the project.

I'm glad that I started on that pretty early, as it got a few eyes on the project pretty soon after I first posted about it. There's a lot more that needs to be done in terms of outreach and marketing, though. Since the Steam launch we've been thinking a lot about how to get a bigger community together and foster active discussion surrounding the project. For now I'll keep doing the monthly summaries and weekly updates on the mailing list. I'll also try to be more active on Twitter and the Discord, but other than that we don't have a solid strategy yet.

The Steam launch and everything with Pro Helvetia leading up to that was a pretty stressful time all in all, when I was already running on fumes from everything else that had been going on. I'm really glad that I decided to afford myself these two weeks of holidays just to get away from it all. I didn't succeed entirely - I've been thinking about Kandria every day in at least some fashion - but I have been working on other projects at least, and been spending a lot of time just playing games, too, so I think I'm at least getting my mind cleared up enough to start fresh into the year next week.

On the topic of Pro Helvetia, the story there began in February, when the Swiss Game Hub had a little presentation on the organisation and its grant programme. With a little push from fellow local devs I decided to take the step and try to apply. This in turn forced a lot of changes as I decided to finally "properly go public". This meant finding a real name, creating a website and trailer, as well as a publicly playable demo, and a mailing list to manage the marketing. And of course, polishing everything to actually run on other systems. I also got the Steam app at that point, with the idea of using it for testing distribution, but I only really got that sorted out after the grant submission deadline.

When I applied at Pro Helvetia I didn't expect to get the grant - and as expected, I didn't get it either. However, when we applied for the Swiss Games showcase in November, I did think we had a pretty good shot at it. Getting the message that we were, once again, rejected just two weeks before Christmas was pretty crushing, especially after all the work and rush that went into squeezing out a new trailer, new demo, Steam page, and press kit in time for it. Worst of all though, we weren't given any reason as to why others were selected over Kandria. I've tried contacting them the day after to ask for feedback, but have not heard back from them.

I've never been a confident person, so getting these rejections has been wearing down my already feeble remaining amounts of confidence, which hasn't been great for morale. While I'm not a confident person, I am however a very stubborn person, so despite everything I'm still determined to see this through to the end. Worst comes to worst I'll have to finish it on my own, but even if that came to pass I'd still do it. This is the best shot I've ever had at getting a real game made, and I'm not going to give up on it.

Moving on from these more rough sides of development, there has been a lot of progress this year, though a lot of it was in the innards of the game, and not necessarily on the visible side. That pains me a bit, since the screenshots from a year ago look very similar to the ones from today. I have to keep in mind that even without this, the progress made is necessary and valuable. Anyway, on to what I did do.

I reworked the SteamWorks library to work properly again. I rewrote the sound system stack almost entirely from scratch to allow for more complex effects and to work properly on all platforms. Large parts of the engine had to be rewritten to fix some big issues in how resources and rendering used to be organised. Not directly part of the game, but still important, I made custom mailing list and feedback systems. Hopefully there will be fewer things like that that I need to do next year, so there's more time for the actual game.

On the side of visible progress, most of it has been surrounding the combat system, and starting on upping the pizzazz by introducing fancy effects and post processing. There's still a lot more to do in that department though. Especially combat needs to have a lot more flair to it - explosions should kick and spray particles around, slashes need to connect visibly, getting hit has to really impact. I've looked at some other games and how they handle combat, and it really does seem like a much larger part of how the combat feels than one might think depends entirely on how many effects are piled on. Sparks, flashes, particles, and especially crunchy sound effects make an enormous difference.

Don't get me wrong though, the animations of the characters themselves are also very important. They have to be fluid and have visible weight that is being thrown around. I struggled tremendously with that when I started out with the combat in Spring and had to do the first animations myself. I'm very glad that I've recruited Fred to take care of that part, as he's done an amazing job at it. The new animations feel a lot more fun, fluid, and real.

Speaking of Fred, one of the biggest changes this year was that I finally decided to put not only my time, but also my money on the line and actually hire some people to expand the team. This is something that was a long time coming. I always knew when I started out that I'd have to eventually expand the team, simply because the scale of the project would require it in order to get things done in a reasonable amount of time, and because I simply don't trust my own skills well enough to get a great product out of them. That's where the confidence thing comes in again.

The hiring process took an entire month of my time, mostly because there were way more applications than I ever thought there would be, and I wanted to do my due diligence and investigate everyone to a good degree. Ultimately finalising the selection was also difficult for me, and took me over a week of deliberation. I'm happy with the choices I made, but I still wish I had the funds to just hire more people.

Since the game is almost entirely built on a custom stack of software, engine and all, there's a lot of rough edges and corner case bugs that hinder development and cost us a lot of time. I really wish I had the funds to hire another skilled programmer to take care of those so I can focus more on directing the story, art, and general features and level design. Still, we're already on a tight budget that isn't going to last for the entire duration of development unless we can procure additional funding somehow. We've been talking about that a fair bit, too, but there's no clear decision yet.

So far the plan is still to complete a vertical slice in the coming months and then do another planning session to see how things hash out once we have a better idea of the development costs involved and how the overall plot and world will pan out. Then comes another application for the Pro Helvetia grant in September. If we get that, we'll have extended funds for another year, which should hopefully bridge the gap well enough to pull through to the end. If not... well, there's other possibilities that I don't want to really discuss yet as it's all still too uncertain.

As you may know, during most of the development of Kandria so far I was a Master's student at ETH. I've been a student for a long time, since my Bachelor's took me a long time to complete, largely due to not being able to take the stress of taking on too many subjects at once. Most of the classes I either didn't care for, or outright loathed having to work on, so it was not a very merry time. Still, I managed to persevere. Now, in the Master's programme for Computer Science at ETH there's a requirement to complete two of three "interdisciplinary laboratories". You have to complete these regardless of the focus you take, and so regardless of your interests or target skillset. I tried all three, and failed all three, the last two of which I failed this Summer. All three were very hard courses that required a ton of time investment. I did not expect to fail them all. Whatever the case, this, combined with the strict term limits at ETH, meant that it was not guaranteed I'd be able to complete my Master's even if I did decide to try them again in a year. It would mean spending at least one and a half more years to complete my Master's, if I managed to pass these classes the second time.

I decided that these odds were no longer worth it. University made me miserable, and I was not sure how big of a benefit the degree would be anyway. So I made the big decision to work full time on Kandria, which I have now been doing since September.

Doing this also shifted the project quite a bit though, as now it is no longer a game project I just want to complete on the side, it's now something that has to prove not only possible, but also financially viable, in order to be able to keep doing this. Naturally this places a huge burden on me, and even if I don't want to think about it much, my subconscious still does anyway. This has led to a somewhat unhealthy work/life balance, where I couldn't justify working on other side projects like I used to all this time before, as the thought of "but shouldn't you be working on the game, instead?" always came creeping around the corner.

This has especially been a problem in November and the beginning of December, and is why I've run so badly out of steam. These two weeks of holidays have really been great to get away from that. Still, I'm going to have to figure out some better balance to make this sustainable in the long run. I can't be going on holidays every two months or so after all. At this point I don't yet know how exactly to do this, except that I know I need to weave different projects into my schedule somehow. That's something to figure out in the new year.

Tim and I have already been making some good progress discussing the characters, setting, world, and overall story in December, and I'm really eager to dive back into that and get started on planning out the first section of Kandria for the vertical slice. I also have a bunch of cool ideas for new features and effects to implement. I'm looking forward to diving back into all of that next week, but I'm also cautious about all the challenges we already know about. I really don't want to rush it and end up with something we have to throw away in the end.

This entry has gone on for long enough already, even if there's a lot of details and smaller developments I skipped, so I'll try to bring this to a close. As always, if you want to be kept up to date on the development, sign up for the mailing list!

Tim also wanted to write a little bit about his experience working on Kandria the past two months, so here goes:

It's been a whirlwind two months working on Kandria! I've already gotten heavily involved in writing marketing text, developing the lore, and making a demo quest to learn the dev tools. I'm looking forward to coming back after Christmas and keeping the momentum going for the vertical slice. I expect I'll be getting more hands on with the tools in particular, to write multiple quests for a hub-like area; now I've learned the basics and will have more time, I'll be looking to structure it better as well, using the quest system to its fullest, rather than brute-forcing it with task interactions alone. :)

With that, I think I'll call the yearly round-up done. I hope next year will be better than this one, and am currently being cautiously optimistic about that. I wish everyone out there, and especially you reading this, all the best in 2021!

Leo Zovic: Profiling `house`. Again.

· 29 days ago

So I've plowed some of my vacation time into polishing up/hacking on some old projects. Including house, the web server I complained was garbage, but still had one distinct advantage over other Common Lisp webservers. Namely: because it's the only natively implemented one, it will work out-of-the-box, without issue, anywhere you can install quicklisp and a LISP it runs on.

This hacking attempt was aimed at addressing the complaint. Most of the major-overhaul branch was aimed at making the code more readable and sensical, making handlers and http-types much simpler, both implementationally and conceptually. But I want to throw at least a little effort at performance. With that in mind, I wanted a preliminary benchmark. I'm following fukamachi's procedure for woo. Note that, since house is a single-threaded server (for now), I'm only doing single-threaded benchmarks.

; SLIME 2.26
CL-USER> (ql:quickload :house)
To load "house":
  Load 1 ASDF system:
; Loading "house"
CL-USER> (in-package :house)
HOUSE> (define-handler (root) () "Hello world!")
#<HANDLER-TABLE {1004593CF3}>
HOUSE> (house:start 5000)
inaimathi@this:~$ wrk -c 10 -t 4 -d 10
Running 10s test @
  4 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.01ms    5.85ms 204.63ms   98.73%
    Req/Sec     2.64k     0.89k    7.22k    62.16%
  104779 requests in 10.10s, 30.58MB read
  Socket errors: connect 0, read 104775, write 0, timeout 0
Requests/sec:  10374.93
Transfer/sec:      3.03MB
inaimathi@this:~$ wrk -c 10 -t 4 -d 10
Running 10s test @
  4 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.74ms   19.05ms 408.54ms   98.18%
    Req/Sec     2.58k     0.85k    4.64k    57.39%
  102543 requests in 10.10s, 29.92MB read
  Socket errors: connect 0, read 102539, write 0, timeout 0
Requests/sec:  10152.79
Transfer/sec:      2.96MB
inaimathi@this:~$ wrk -c 100 -t 4 -d 10
Running 10s test @
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.56ms   59.54ms   1.66s    99.27%
    Req/Sec     3.10k     1.83k    9.56k    76.72%
  103979 requests in 10.01s, 30.34MB read
  Socket errors: connect 0, read 103979, write 0, timeout 4
Requests/sec:  10392.46
Transfer/sec:      3.03MB
inaimathi@this:~$ wrk -c 100 -t 4 -d 10
Running 10s test @
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     8.49ms   85.22ms   1.66s    98.81%
    Req/Sec     3.23k     2.16k   11.90k    81.01%
  102236 requests in 10.01s, 29.83MB read
  Socket errors: connect 0, read 102232, write 0, timeout 4
Requests/sec:  10215.87
Transfer/sec:      2.98MB

So that puts house comfortably in the same league as Tornado on PyPy or the node.js server. This is not a bad league to be in, but I want to see if I can do better.

Step 1 - Kill Methods

defmethod is a thing I was seemingly obsessed with when I wrote house. This isn't necessarily a bad thing from the legibility perspective; because they have type annotations, it's clearer what an expected input is from a reading of the code. However, there are two disadvantages to using methods where you don't have to.

  1. You'll often get a no-applicable-method error on weird input, rather than something more descriptive and specific the way you probably would when using a normal function
  2. Your performance will sometimes irredeemably suck.

The first point is a nit, but the second one is worth dealing with in the context of a library that should probably perform reasonably well at least some of the time. The cause of that problem is that methods can't be inlined. Because the point of them is to dispatch on a type-table of their arguments at runtime, they can't do their work at compile-time to inline the result without some serious trickery¹. Today, I'm avoiding trickery and just re-writing every method in house that I can into a function, usually by using etypecase.
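As a minimal, self-contained illustration of the pattern (hypothetical shape types, not house code):

```lisp
;; Generic-function version: dispatch happens at runtime, so the
;; method bodies cannot be inlined at call sites.
(defstruct circle (radius 0.0))
(defstruct rect (w 0.0) (h 0.0))

(defgeneric area-gf (shape))
(defmethod area-gf ((s circle)) (* pi (circle-radius s) (circle-radius s)))
(defmethod area-gf ((s rect)) (* (rect-w s) (rect-h s)))

;; Plain-function version: the same dispatch via etypecase, but the
;; compiler is free to inline the whole thing, and bad input signals
;; a TYPE-ERROR naming the offending value instead of a
;; no-applicable-method error.
(declaim (inline area))
(defun area (shape)
  (etypecase shape
    (circle (* pi (circle-radius shape) (circle-radius shape)))
    (rect (* (rect-w shape) (rect-h shape)))))
```

The etypecase version trades open extensibility (you can no longer add a case from outside) for inlinability, which is exactly the trade being made throughout house below.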

Some of these are trivial conversions

;;; house.lisp
-(defmethod start ((port integer) &optional (host usocket:*wildcard-host*))
+(defun start (port &optional (host usocket:*wildcard-host*))
+  (assert (integerp port))
-(defmethod process-ready ((ready stream-server-usocket) (conns hash-table))
-  (setf (gethash (socket-accept ready :element-type 'octet) conns) nil))
-(defmethod process-ready ((ready stream-usocket) (conns hash-table))
+(defun process-ready (ready conns)
+  (assert (hash-table-p conns))
+  (etypecase ready
+    (stream-server-usocket (setf (gethash (socket-accept ready :element-type 'octet) conns) nil))
+    (stream-usocket
-(defmethod parse-cookies ((cookie string))
+(defun parse-cookies (cookie)
+  (assert (stringp cookie))
-(defmethod handle-request! ((sock usocket) (req request))
+(defun handle-request! (sock req)
-(defmethod error! ((err response) (sock usocket) &optional instance)
-  (declare (ignorable instance))
+(defun error! (err sock)
;;; session.lisp
-(defmethod new-session-hook! ((callback function))
+(defun new-session-hook! (callback)
-(defmethod poke! ((sess session))
+(defun poke! (sess)
;;; util.lisp
-(defmethod path->uri ((path pathname) &key stem-from)
+(defun path->uri (path &key stem-from)
-(defmethod path->mimetype ((path pathname))
+(defun path->mimetype (path)

Some are slightly more complicated. In particular, parse looks like it would conflate two entirely separate functions, but on inspection, we know the type of its argument at every call site.

./house.lisp:46:		      (setf (parameters (request buf)) (nconc (parse buf) (parameters (request buf)))))
./house.lisp:68:	   do (multiple-value-bind (parsed expecting) (parse buffer)
./house.lisp:92:(defmethod parse ((str string))
./house.lisp:110:(defmethod parse ((buf buffer))
./house.lisp:116:	(parse str))))

So, we can convert parse to two separate, named functions. write! is basically the same situation.

;;; house.lisp
-(defmethod parse ((str string))
+(defun parse-request-string (str)
-(defmethod parse ((buf buffer))
+(defun parse-buffer (buf)
-(defmethod write! ((res response) (stream stream))
+(defun write-response! (res stream)
-(defmethod write! ((res sse) (stream stream))
+(defun write-sse! (res stream)

Not pictured; changes at each call-site to call the correct one.

The parse-params method is a bit harder to tease out, because it looks like it genuinely is one polymorphic function. Again, though, closer inspection of the call-sites (all fully internal to house) makes it clear that we almost always know what we're passing as arguments at compile-time.

./house.lisp:78:(defmethod parse-params (content-type (params null)) nil)
./house.lisp:79:(defmethod parse-params (content-type (params string))
./house.lisp:83:(defmethod parse-params ((content-type (eql :application/json)) (params string))
./house.lisp:107:	(setf (parameters req) (parse-params nil parameters))
./house.lisp:113:	(parse-params
					 (->keyword (cdr (assoc :content-type (headers (request buf)))))

That "almost" is going to be a slight pain though; we need to do a runtime dispatch inside of parse-buffer to figure out whether we're parsing JSON or a param-encoded string.

-(defmethod parse-params (content-type (params null)) nil)
-(defmethod parse-params (content-type (params string))
+(defun parse-param-string (params)
   (loop for pair in (split "&" params)
-     for (name val) = (split "=" pair)
-     collect (cons (->keyword name) (or val ""))))
-(defmethod parse-params ((content-type (eql :application/json)) (params string))
-  (cl-json:decode-json-from-string params))
+	for (name val) = (split "=" pair)
+	collect (cons (->keyword name) (or val ""))))
-	(parse-params
-	 (->keyword (cdr (assoc :content-type (headers (request buf)))))
-	 str)
-	(parse str))))
+	(if (eq :application/json (->keyword (cdr (assoc :content-type (headers (request buf))))))
+	    (cl-json:decode-json-from-string str)
+	    (parse-param-string str))
+	(parse-request-string str))))

The last one is going to be a headache. The lookup method is meant to be a general accessor, and has a setf method defined. I'm not going that way right now; let's see if we gained anything with our current efforts.
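For context, a setf-able generic accessor has roughly this shape (a sketch with placeholder specializers, not house's actual definition), which is why converting it is more than a one-line change: both the reader method and the (setf ...) method would need rewriting, along with every write site.

```lisp
;; Sketch of a generic accessor paired with a (setf ...) generic.
;; Both the read and the write go through method dispatch.
(defgeneric lookup (key store))
(defmethod lookup (key (store hash-table))
  (gethash key store))

(defgeneric (setf lookup) (new-value key store))
(defmethod (setf lookup) (new-value key (store hash-table))
  (setf (gethash key store) new-value))

;; Usage: both forms dispatch at runtime.
;; (lookup :foo table)
;; (setf (lookup :foo table) 42)
```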

Second verse same as the first.

; SLIME 2.26
CL-USER> (ql:quickload :house)
To load "house":
  Load 1 ASDF system:
; Loading "house"
CL-USER> (in-package :house)
HOUSE> (define-handler (root) () "Hello world!")
#<HANDLER-TABLE {1004593CF3}>
HOUSE> (house:start 5000)
inaimathi@this:~/quicklisp/local-projects/house$ wrk -c 10 -t 4 -d 10
Running 10s test @
  4 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     0.96ms    4.02ms  76.87ms   98.43%
    Req/Sec     2.70k     0.98k    7.57k    73.83%
  103951 requests in 10.10s, 30.34MB read
  Socket errors: connect 0, read 103947, write 0, timeout 0
Requests/sec:  10292.48
Transfer/sec:      3.00MB
inaimathi@this:~/quicklisp/local-projects/house$ wrk -c 10 -t 4 -d 10
Running 10s test @
  4 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   846.32us    2.63ms  58.29ms   98.26%
    Req/Sec     2.64k     0.94k   11.13k    72.89%
  102661 requests in 10.10s, 29.96MB read
  Socket errors: connect 0, read 102658, write 0, timeout 0
Requests/sec:  10165.46
Transfer/sec:      2.97MB
inaimathi@this:~/quicklisp/local-projects/house$ wrk -c 100 -t 4 -d 10
Running 10s test @
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     8.57ms   90.07ms   1.66s    98.96%
    Req/Sec     3.71k     2.87k   11.73k    74.30%
  105162 requests in 10.10s, 30.69MB read
  Socket errors: connect 0, read 105159, write 0, timeout 2
Requests/sec:  10412.91
Transfer/sec:      3.04MB
inaimathi@this:~/quicklisp/local-projects/house$ wrk -c 100 -t 4 -d 10
Running 10s test @
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     5.69ms   70.32ms   1.66s    99.25%
    Req/Sec     3.06k     1.82k    9.46k    74.40%
  101302 requests in 10.10s, 29.56MB read
  Socket errors: connect 0, read 101299, write 0, timeout 3
Requests/sec:  10030.14
Transfer/sec:      2.93MB

Aaand it looks like the effect was negligible. Oh well. I honestly think that the untangling we've done so far makes the parts of the codebase it's touched more readable, so I'm keeping the changes, but there's no great improvement yet. Perhaps if we inline some things?

;;; package.lisp
-(declaim (inline crlf write-ln idling? flex-stream))
+(declaim (inline crlf write-ln idling? flex-stream write-response! write-sse! process-ready parse-param-string parse-request-string))
wrk -c 10 -t 4 -d 10
Running 10s test @
  4 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.71ms   15.37ms 412.51ms   98.91%
    Req/Sec     2.69k     0.91k    6.28k    65.37%
  103607 requests in 10.10s, 30.24MB read
  Socket errors: connect 0, read 103603, write 0, timeout 0
Requests/sec:  10258.44
Transfer/sec:      2.99MB
inaimathi@this:~/quicklisp/local-projects/house$ wrk -c 10 -t 4 -d 10
Running 10s test @
  4 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   837.49us    2.66ms  58.36ms   98.36%
    Req/Sec     2.63k   836.52     3.81k    49.37%
  103449 requests in 10.10s, 30.19MB read
  Socket errors: connect 0, read 103446, write 0, timeout 0
Requests/sec:  10242.91
Transfer/sec:      2.99MB
inaimathi@this:~/quicklisp/local-projects/house$ wrk -c 100 -t 4 -d 10
Running 10s test @
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     6.23ms   74.76ms   1.89s    99.08%
    Req/Sec     4.01k     2.20k   10.23k    58.89%
  101524 requests in 10.10s, 29.63MB read
  Socket errors: connect 0, read 101522, write 0, timeout 4
Requests/sec:  10052.56
Transfer/sec:      2.93MB
inaimathi@this:~/quicklisp/local-projects/house$ wrk -c 100 -t 4 -d 10
Running 10s test @
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     5.75ms   70.98ms   1.67s    99.27%
    Req/Sec     3.19k     2.11k   10.26k    81.39%
  100944 requests in 10.01s, 29.46MB read
  Socket errors: connect 0, read 100941, write 0, timeout 1
Requests/sec:  10088.23
Transfer/sec:      2.94MB

Again, no huge difference. On closer inspection, lookup is only used in one place internally, and it's easy to replace with gethash so I'm just going to do that and re-check real quick.

;;; channel.lisp
-  (push sock (lookup channel *channels*))
+  (push sock (gethash channel *channels*))
inaimathi@this:~/quicklisp/local-projects/house$ wrk -c 10 -t 4 -d 10
Running 10s test @
  4 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     0.95ms    3.72ms  72.70ms   98.43%
    Req/Sec     2.66k     1.00k   11.52k    73.45%
  102839 requests in 10.10s, 30.01MB read
  Socket errors: connect 0, read 102835, write 0, timeout 0
Requests/sec:  10183.46
Transfer/sec:      2.97MB
inaimathi@this:~/quicklisp/local-projects/house$ wrk -c 10 -t 4 -d 10
Running 10s test @
  4 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     0.87ms    2.85ms  59.32ms   98.19%
    Req/Sec     2.62k     0.86k    3.87k    54.82%
  102818 requests in 10.10s, 30.00MB read
  Socket errors: connect 0, read 102814, write 0, timeout 0
Requests/sec:  10180.62
Transfer/sec:      2.97MB
inaimathi@this:~/quicklisp/local-projects/house$ wrk -c 100 -t 4 -d 10
Running 10s test @
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     6.96ms   80.03ms   1.68s    99.10%
    Req/Sec     3.11k     2.12k   11.72k    78.40%
  105460 requests in 10.10s, 30.78MB read
  Socket errors: connect 0, read 105456, write 0, timeout 5
Requests/sec:  10441.77
Transfer/sec:      3.05MB
inaimathi@this:~/quicklisp/local-projects/house$ wrk -c 100 -t 4 -d 10
Running 10s test @
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     8.22ms   83.95ms   1.66s    98.84%
    Req/Sec     3.19k     2.07k   11.66k    73.23%
  103933 requests in 10.10s, 30.33MB read
  Socket errors: connect 0, read 103930, write 0, timeout 5
Requests/sec:  10290.43
Transfer/sec:      3.00MB

To no one's great surprise, still not much of a difference. I'm going to let the lookup issue dangle for the moment, because it has to do with a trick I want to pull a bit later on, but before we get to that...

Step 2 - Kill Classes

The second step is to kill class definitions entirely. Their accessor functions are also generic, and therefore rely on method dispatch. structs are a bit clumsier, but probably faster in the end. Now, we can't really mess with session, request and response, because those are part of house's external interface, but there are three places where we can replace defclass with defstruct.
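The mechanical difference is small, but the accessor story changes: defclass slot accessors are generic functions, while defstruct generates ordinary prefixed functions that compilers typically open-code into direct slot reads. A minimal sketch with hypothetical names:

```lisp
;; CLOS version: :accessor creates a generic function, so every
;; slot read goes through method dispatch.
(defclass point-c ()
  ((x :accessor point-x :initarg :x)
   (y :accessor point-y :initarg :y)))

;; Struct version: point-s-x and point-s-y are plain functions
;; that are usually compiled down to direct slot access.
(defstruct point-s
  (x 0) (y 0))

;; (point-x (make-instance 'point-c :x 1 :y 2)) ; generic dispatch
;; (point-s-x (make-point-s :x 1 :y 2))         ; direct access
```

The cost is the clumsier prefixed accessor names, which is why the buffer! and parse-buffer rewrites below touch every slot reference.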

Re-writing buffer, sse and handler-entry ...

;;; model.lisp
-(defclass sse ()
-  ((id :reader id :initarg :id :initform nil)
-   (event :reader event :initarg :event :initform nil)
-   (retry :reader retry :initarg :retry :initform nil)
-   (data :reader data :initarg :data)))
-(defclass buffer ()
-  ((tries :accessor tries :initform 0)
-   (contents :accessor contents :initform nil)
-   (bi-stream :reader bi-stream :initarg :bi-stream)
-   (total-buffered :accessor total-buffered :initform 0)
-   (started :reader started :initform (get-universal-time))
-   (request :accessor request :initform nil)
-   (expecting :accessor expecting :initform 0)))
-(defclass handler-entry ()
-  ((fn :reader fn :initarg :fn :initform nil)
-   (closing? :reader closing? :initarg :closing? :initform t)))
;;; house.lisp
-(defun write-sse! (res stream)
-  (format stream "~@[id: ~a~%~]~@[event: ~a~%~]~@[retry: ~a~%~]data: ~a~%~%"
-	  (id res) (event res) (retry res) (data res)))
-(defun buffer! (buffer)
-  (handler-case
-      (let ((stream (bi-stream buffer)))
-	(incf (tries buffer))
-	(loop for char = (read-char-no-hang stream)
-	   until (or (null char) (eql :eof char))
-	   do (push char (contents buffer))
-	   do (incf (total-buffered buffer))
-	   when (request buffer) do (decf (expecting buffer))
-	   when (and #-windows(char= char #\linefeed)
-		     #+windows(char= char #\newline)
-		 (line-terminated? (contents buffer)))
-	   do (multiple-value-bind (parsed expecting) (parse-buffer buffer)
-		(setf (request buffer) parsed
-		      (expecting buffer) expecting
-		      (contents buffer) nil)
-		(return char))
-	   when (> (total-buffered buffer) +max-request-size+) return char
-	   finally (return char)))
-    (error () :eof)))
-(defun parse-buffer (buf)
-  (let ((str (coerce (reverse (contents buf)) 'string)))
-    (if (request buf)
-	(if (eq :application/json (->keyword (cdr (assoc :content-type (headers (request buf))))))
-	    (cl-json:decode-json-from-string str)
-	    (parse-param-string str))
-	(parse-request-string str))))
;;; define-handler.lisp
+(defstruct handler-entry
+  (fn nil)
+  (closing? t))
-    (make-instance
-     'handler-entry
+    (make-handler-entry
;;; channel.lisp
+(defstruct (sse (:constructor make-sse (data &key id event retry)))
+  (id nil) (event nil) (retry nil)
+  (data (error "an SSE must have :data") :type string))
-(defun make-sse (data &key id event retry)
-  (make-instance 'sse :data data :id id :event event :retry retry))
+(defun write-sse! (res stream)
+  (format stream "~@[id: ~a~%~]~@[event: ~a~%~]~@[retry: ~a~%~]data: ~a~%~%"
+	  (sse-id res) (sse-event res) (sse-retry res) (sse-data res)))
;;; buffer.lisp
+(in-package :house)
+(defstruct (buffer (:constructor make-buffer (bi-stream)))
+  (tries 0 :type integer)
+  (contents nil)
+  (bi-stream nil)
+  (total-buffered 0 :type integer)
+  (started (get-universal-time))
+  (request nil)
+  (expecting 0 :type integer))
+(defun buffer! (buffer)
+  (handler-case
+      (let ((stream (buffer-bi-stream buffer)))
+	(incf (buffer-tries buffer))
+	(loop for char = (read-char-no-hang stream)
+	   until (or (null char) (eql :eof char))
+	   do (push char (buffer-contents buffer))
+	   do (incf (buffer-total-buffered buffer))
+	   when (buffer-request buffer) do (decf (buffer-expecting buffer))
+	   when (and #-windows(char= char #\linefeed)
+		     #+windows(char= char #\newline)
+		 (line-terminated? (buffer-contents buffer)))
+	   do (multiple-value-bind (parsed expecting) (parse-buffer buffer)
+		(setf (buffer-request buffer) parsed
+		      (buffer-expecting buffer) expecting
+		      (buffer-contents buffer) nil)
+		(return char))
+	   when (> (buffer-total-buffered buffer) +max-request-size+) return char
+	   finally (return char)))
+    (error () :eof)))
+(defun parse-buffer (buf)
+  (let ((str (coerce (reverse (buffer-contents buf)) 'string)))
+    (if (buffer-request buf)
+	(if (eq :application/json (->keyword (cdr (assoc :content-type (headers (buffer-request buf))))))
+	    (cl-json:decode-json-from-string str)
+	    (parse-param-string str))
+	(parse-request-string str))))
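One mechanical consequence of this conversion, visible throughout the diff, is that defstruct prefixes accessor names with the structure name by default, so every `(tries buffer)` call site becomes `(buffer-tries buffer)`. As a hedged aside (this is not part of house), the `:conc-name` option could have preserved the short CLOS-style names:

```lisp
;; Sketch only: :conc-name nil suppresses the BUFFER- accessor prefix,
;; at the cost of claiming the bare names (TRIES, BI-STREAM) globally.
(defstruct (buffer (:conc-name nil)
                   (:constructor make-buffer (bi-stream)))
  (tries 0 :type integer)
  (bi-stream nil))

(let ((b (make-buffer nil)))
  (incf (tries b))   ; struct accessors are setf-able, like :accessor slots
  (tries b))         ; => 1
```

Keeping the default prefix, as the diff does, is arguably the safer choice in a library, since unprefixed accessors are ordinary functions and can collide with user code.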

... should get us _something_. Right?

inaimathi@this:~/quicklisp/local-projects/house$ wrk -c 10 -t 4 -d 10
Running 10s test @
  4 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.09ms    6.18ms 202.73ms   98.55%
    Req/Sec     2.69k     0.89k    4.02k    56.74%
  105108 requests in 10.10s, 30.67MB read
  Socket errors: connect 0, read 105105, write 0, timeout 0
Requests/sec:  10406.92
Transfer/sec:      3.04MB
inaimathi@this:~/quicklisp/local-projects/house$ wrk -c 10 -t 4 -d 10
Running 10s test @
  4 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     0.98ms    5.78ms 204.47ms   98.86%
    Req/Sec     2.67k   848.77     3.98k    54.71%
  104242 requests in 10.10s, 30.42MB read
  Socket errors: connect 0, read 104242, write 0, timeout 0
Requests/sec:  10321.40
Transfer/sec:      3.01MB
inaimathi@this:~/quicklisp/local-projects/house$ wrk -c 100 -t 4 -d 10
Running 10s test @
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     6.93ms   79.75ms   1.66s    99.10%
    Req/Sec     3.33k     2.46k   11.95k    79.87%
  105920 requests in 10.10s, 30.91MB read
  Socket errors: connect 0, read 105918, write 0, timeout 2
Requests/sec:  10487.59
Transfer/sec:      3.06MB
inaimathi@this:~/quicklisp/local-projects/house$ wrk -c 100 -t 4 -d 10
Running 10s test @
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.78ms   61.11ms   1.68s    99.30%
    Req/Sec     2.83k     1.26k    7.01k    70.22%
  103381 requests in 10.10s, 30.17MB read
  Socket errors: connect 0, read 103378, write 0, timeout 0
Requests/sec:  10235.14
Transfer/sec:      2.99MB

Very little noticeable gain, I'm afraid. Ok, there's one more thing I'm tempted to try. There were hints earlier that this was coming, including this, but if you don't follow my github you might still be surprised.

Step 3 - Musing on CLJ

Now that we have what I think is a reasonably fast implementation of house, I want to see whether [2] clj does performance damage to the implementation. I want to see this because the clj datastructures and syntax really improve readability and REPL development; there's a bunch of situations in which I missed having that level of visibility into my structures before I even began this benchmark article. There are probably even a few places where it saves some performance by referencing other partial structures. The problem is that I'm guessing it's a net negative in terms of performance, so I want to see what a conversion would do to my benchmark before I go through with it.

This is going to be especially useful for house's external interface. And given that I've already had to break compatibility to write this overhaul, this is probably the best possible time to test the theory. The trouble is that I'm not entirely sure what the real interface looks like quite yet, so I'm not going to be implementing it today. These are just some musings.

The current house model for handler/response interaction is that a handler returns either a response (in the event of a redirect!) or a string (in any other event). This makes a few things kind of difficult. Firstly, it means that session and header manipulation has to happen by side effect. That is, they're not included as part of the return value; they have to be exposed in some other way. In the case of headers, it's via an alist bound to the invisible symbol headers inside of the handler body. This ... is less than ideal.

If we take the http-kit approach, we'd expect our handlers to always return a map. And if that map had slots for headers/session, those things would be set as appropriate in the outgoing response and/or server state. Our input would also be a map. And it would naturally contain method/headers/path/parameters/session/etc slots that a handler writer would want to make use of. I'm not entirely clear on whether we'd want to make this the primary internal and external representation, or if we're just looking for an easily manipulated layer for the users. I'm leaning towards the first of those options.
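To make the idea concrete, here is a hedged sketch of what such a map-in/map-out handler might look like, with plists standing in for clj-style maps; all names here are hypothetical, not house's actual interface:

```lisp
;; Hypothetical handler shape: the request arrives as a plist with
;; :method/:headers/:path/:parameters/:session slots, and the handler
;; returns a plist from which the server builds the response.
(defun hello-handler (request)
  (let ((name (getf (getf request :parameters) :name "world")))
    (list :status 200
          :headers '(:content-type "text/plain")
          :session (getf request :session) ; returned, not mutated in place
          :body (format nil "Hello, ~a!" name))))
```

The appeal is that header and session changes become part of the return value, so nothing has to happen through invisible bindings or side effects.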

This ... actually doesn't sound too hard if cut at the right level. Let's give it a shot, I guess.

It wasn't.

There's enough weird shit happening here that I need a fresh brain for it. That was enough for now. The main roadblock I hit is that it turns out that a lot more of the internal interface here depends on mutation than I thought. This is bad for readability and conceptual simplicity, but good in the sense that I can move away from these models first, then see about integrating clj later.

I'll probably take another run up this hill later, but for now, I think I'm moving on to other issues.

  1. Wait, why use methods then? They're good specifically in the situation where you want to establish an interface for a set of datastructures that you expect to have to extend outside of your library. If all the extension is going to happen inside, you can still make the argument that etypecase is the right way to go. But if you want the callers of your code to be able to define new behaviors for datastructures they specify themselves, then absolutely reach for defmethod.
  2. More realistically, "how much" rather than "whether"
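The trade-off in footnote 1 is easy to see in miniature. A generic function lets callers extend behaviour for their own types from outside the library, where an internal etypecase could not; the names below are hypothetical:

```lisp
;; Inside a library: a generic function with methods for known types.
(defgeneric render (thing)
  (:method ((thing string)) thing)
  (:method ((thing integer)) (format nil "~d" thing)))

;; In user code, outside the library: a new type and a new method,
;; with no changes to the library itself.
(defclass widget () ((label :initarg :label :reader label)))
(defmethod render ((thing widget))
  (format nil "[~a]" (label thing)))

(render (make-instance 'widget :label "ok")) ; => "[ok]"
```

Had `render` been an etypecase over string and integer, the widget case could only be added by editing the library.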

Quicklisp news: December 2020 Quicklisp dist update now available

· 29 days ago

 New projects

  • aether — A DSL for emulating an actor-based distributed system, housed on a family of emulated devices. — MIT
  • binding-arrows — An implementation of threading macros based on binding anonymous variables — MIT
  • bitfield — Efficiently represent several finite sets or small integers as a single non-negative integer. — MIT
  • cl-bloggy — A simple extendable blogging system to use with Hunchentoot — MIT
  • cl-data-structures — Data structures, ranges, ranges algorithms. — BSD simplified
  • cl-html-readme — A HTML Documentation Generator for Common Lisp projects. — MIT
  • cl-ini — INI file parser — MIT
  • cl-notebook — A notebook-style in-browser editor for Common Lisp — AGPL3
  • cl-unix-sockets — UNIX Domain socket — Apache License, Version 2.0
  • cmd — A utility for running external programs — MIT
  • cytoscape-clj — A cytoscape widget for Common Lisp Jupyter. — MIT
  • damn-fast-priority-queue — A heap-based priority queue whose first and foremost priority is speed. — MIT
  • dataloader — A universal loader library for various data formats for images/audio — LLGPL
  • ecclesia — Utilities for parsing Lisp code. — MIT
  • fuzzy-match — From a string input and a list of candidates, return the most relevant candidates first. — MIT
  • geco — GECO: Genetic Evolution through Combination of Objects A CLOS-based Framework for Prototyping Genetic Algorithms — GPL 2.0
  • gtwiwtg — Lazy-ish iterators — GPLv3
  • gute — Gene's personal kitchen sink library. — MIT
  • lense — Racket style lenses for the Common Lisp. — BSD-2
  • linear-programming-glpk — A backend for linear-programming using GLPK — GPL 3.0
  • mgrs — Convert coordinates between Latitude/Longitude and MGRS. — GPL-3
  • monomyth — A distributed data processing library for CL — MPL 2.0
  • neural-classifier — Classification of samples based on neural network. — 2-clause BSD
  • roan — A library to support change ringing applications — MIT
  • simple-neural-network — Simple neural network — GPL-3
  • stefil- — Unspecified — Unspecified
  • tree-search — Search recursively through trees of nested lists — ISC
  • ttt — A language for transparent modifications of s-expression based trees. — GPLv3
  • utm-ups — Convert coordinates between Latitude/Longitude and UTM or UPS. — GPL-3
  • with-contexts — The WITH-CONTEXT System. A system providing a WITH macro and 'context'ualized objects handled by a ENTER/HANDLE/EXIT protocol in the spirit of Python's WITH macro. Only better, or, at a minimum different, of course. — BSD

Updated projects: 3bmd, 3bz, 3d-matrices, 3d-vectors, adopt, algae, april, arc-compat, architecture.builder-protocol, array-utils, arrow-macros, aws-sign4, bdef, binpack, check-bnf, cl-ana, cl-ansi-text, cl-bunny, cl-catmull-rom-spline, cl-cffi-gtk, cl-collider, cl-conllu, cl-covid19, cl-custom-hash-table, cl-digraph, cl-environments, cl-gamepad, cl-gd, cl-glfw3, cl-gserver, cl-interpol, cl-kraken, cl-liballegro, cl-liballegro-nuklear, cl-libyaml, cl-lzlib, cl-markless, cl-maxminddb, cl-mime, cl-mixed, cl-mongo-id, cl-naive-store, cl-octet-streams, cl-pass, cl-patterns, cl-pdf, cl-portaudio, cl-prevalence, cl-randist, cl-rdkafka, cl-sdl2, cl-sdl2-mixer, cl-semver, cl-sendgrid, cl-setlocale, cl-skkserv, cl-steamworks, cl-str, cl-tcod, cl-telegram-bot, cl-unicode, cl-utils, cl-wavelets, cl-webkit, cl-yaml, clesh, clj, clml, closer-mop, clsql, clweb, colored, common-lisp-jupyter, concrete-syntax-tree, conduit-packages, consix, corona, croatoan, curry-compose-reader-macros, dartscltools, dartscluuid, data-lens, defclass-std, deploy, dexador, djula, docparser, doplus, easy-audio, easy-routes, eazy-documentation, eclector, esrap, file-select, flexichain, float-features, floating-point-contractions, functional-trees, gadgets, gendl, generic-cl, glacier, golden-utils, gtirb-capstone, harmony, helambdap, house, hunchentoot-multi-acceptor, hyperluminal-mem, imago, ironclad, jingoh, jpeg-turbo, jsonrpc, kekule-clj, linear-programming, linux-packaging, lisp-chat, lisp-critic, lisp-gflags, literate-lisp, lmdb, local-package-aliases, local-time, lquery, markup, math, mcclim, millet, mito, mmap, mutility, named-readtables, neo4cl, nibbles, num-utils, origin, orizuru-orm, parachute, pathname-utils, perceptual-hashes, petalisp, phoe-toolbox, physical-quantities, picl, pjlink, portable-condition-system, postmodern,, protest, protobuf, py4cl, py4cl2, qt-libs, quilc, quri, rcl, read-number, reader, rpcq, rutils, s-graphviz, sc-extensions, secret-values, sel, select, serapeum, shadow, 
simple-parallel-tasks, slime, sly, snooze, static-dispatch, stmx, stumpwm, swank-client, swank-protocol, sxql, tesseract-capi, textery, tooter, trace-db, trivial-compress, trivial-do, trivial-pooled-database, trivial-string-template, uax-15, uncursed, verbose, vp-trees, weblocks-examples, weblocks-prototype-js.

Removed projects: cl-arrows, cl-generic-arithmetic, clcs-code, dyna, osmpbf, sanity-clause, unicly.

To get this update, use (ql:update-dist "quicklisp")


Michał Herda: Quicklisp Stats

· 29 days ago

Quicklisp statistics are now available as CSV files, and the Quicklisp Stats system that I've just submitted to Quicklisp is a little helper library for handling this dataset and accessing it from inside Lisp.


            ;;; How many times was Alexandria downloaded in Nov 2020?
QUICKLISP-STATS> (system-downloads :alexandria 2020 11)

;;; Get all systems that were downloaded
;;; more than 10000 times in Apr 2020
;;; and print them somewhat nicely
QUICKLISP-STATS> (loop with stats = (month 2020 4)
                       with filtered-stats
                         = (remove-if-not (lambda (x) (< 10000 (cdr x))) stats)
                       for (system . count) in filtered-stats 
                       do (format t ";; ~20A : ~5D~%" system count))
;; alexandria           : 19938
;; cl-ppcre             : 15636
;; bordeaux-threads     : 14974
;; trivial-features     : 14569
;; split-sequence       : 14510
;; closer-mop           : 14482
;; trivial-gray-streams : 14259
;; babel                : 14254
;; cffi                 : 12365
;; flexi-streams        : 11940
;; iterate              : 11924
;; named-readtables     : 11205
;; cl-fad               : 10996
;; usocket              : 10859
;; anaphora             : 10783
;; trivial-backtrace    : 10693

;;; How many downloads did Bordeaux Threads 
;;; have over all of 2020?
QUICKLISP-STATS> (loop for ((year month) . data) in (all)
                       for result = (a:assoc-value data "bordeaux-threads"
                                                   :test #'equal)
                       do (format t ";; ~4,'0D-~2,'0D: ~D~%" year month result))
;; 2020-01: 16059
;; 2020-02: 12701
;; 2020-03: 17123
;; 2020-04: 14974
;; 2020-05: 14489
;; 2020-06: 13851
;; 2020-07: 14130
;; 2020-08: 10843
;; 2020-09: 13757
;; 2020-10: 13444
;; 2020-11: 15825



· 31 days ago
A few remarks about manardb.

Marco Antoniotti: With what are we contextualizing?

· 38 days ago

Common Lisp programmers may write many with-something macros over their careers; the language specification itself is rife with such constructs: witness with-open-file. Many other libraries also introduce a slew of with- macros dealing with this or that case.

So, if this is the case, what prevents Common Lisp programmers from coming up with a generalized with macro?

It appears that the question has been answered, rather satisfactorily, in Python and Julia (at least). Python offers the with statement, alongside a library of "contexts" (Python introduced the with statement in 2005 with PEP 343) and Julia offers its do blocks.

In the following I will present WITH-CONTEXTS, a Common Lisp answer to the question. The library is patterned after the ideas embodied in the Python solution, but with several (common) "lispy" twists.

Here is the standard - underwhelming - example:

      (with f = (open "") do
         (do-something-with f))

That's it as far as syntax is concerned (the 'var =' being optional, obviously not in this example; the syntax was chosen to be loop-like, instead of using Python's as keyword). Things become more interesting when you look under the hood.

Traditional Common Lisp with- macros expand into variations of unwind-protect or handler-case (and friends). The example above, if written with with-open-file, would probably expand into something like the following:

      (let ((f nil))
        (unwind-protect
            (progn
              (setq f (open ""))
              (do-something-with f))
          (when f (close f))))

Python generalizes this scheme by introducing an enter/exit protocol that is invoked by the with statement. Please refer to the Python documentation on contexts and their __enter__ and __exit__ methods.

The "WITH" Macro in Common Lisp: Contexts and Protocol

In order to introduce a with macro in Common Lisp that mimics what Python programmers expect and what Common Lisp programmers are used to, some twists are necessary. To achieve this goal, a protocol of three generic functions is provided alongside a library of contexts.

The ENTER/HANDLE/EXIT Context Protocol

The WITH-CONTEXTS library provides three generic functions that are called at different times within the code resulting from the expansion of the invocation of the with macro.

  • enter: this generic function is invoked when the with macro "enters" the context; its main argument is the result of the expression that is the argument of the with macro.
  • handle: this generic function is called to take care of exceptional situations that may arise during the call to enter or during the execution of the body of the with macro.
  • exit: this generic function is called to "clean up" before exiting the context entered by means of the with macro.
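A user-defined resource might plug into this protocol by specializing the three generic functions. The sketch below is hedged: only enter, handle, and exit come from the library as described above; the db-connection class and its slot are hypothetical names invented for the example.

```lisp
;; Hypothetical resource class; ENTER/HANDLE/EXIT are the library's
;; generic functions described in the protocol above.
(defclass db-connection ()
  ((handle :initarg :handle :accessor db-handle)))

(defmethod enter ((c db-connection))
  ;; Prepare/acquire the resource; the return value is what the
  ;; WITH macro binds to its variable.
  c)

(defmethod handle ((c db-connection) condition)
  ;; The basic default just re-signals; do the same here.
  (error condition))

(defmethod exit ((c db-connection))
  ;; Clean up before leaving the context.
  (setf (db-handle c) nil))
```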

Given the protocol (from now on referred to as the "EHE-C protocol"), the (underwhelming) "open file" example expands into the following:

      (let ((f nil))
        (unwind-protect
            (progn
              (setq f (enter (open "contexts.lisp")))
              (handler-case (open-stream-p f)
                (error (#:ctcx-err-e-41883)
                  (handle f #:ctcx-err-e-41883))))
          (exit f)))

Apart from the gensymmed variable the expansion is pretty straightforward. The function enter is called on the newly opened stream (and is essentially an identity function) and sets the variable. If some error happens while the body of the macro is executing then control is passed to the handle function (which, in its most basic form just re-signals the condition). Finally, the unwind-protect has a chance to clean up by calling exit (which, when passed an open stream, just closes it).

One unexpected behavior for Common Lisp programmers is that the variable (f in the case above) escapes the with constructs. This is in line with what Python does, and it may have its uses. The file opening example thus has the following behavior:

    CL-prompt > (with f = (open "contexts.lisp") do
                    (open-stream-p f))

    CL-prompt > (open-stream-p f)

To ensure that this behavior is reflected in the implementation, the actual macroexpansion of the with call becomes the following.

      (let ((#:ctxt-esc-val-41882 nil))
        (let ((f nil))
          (unwind-protect
              (progn
                (setq f (enter (open "contexts.lisp")))
                (handler-case
                    (open-stream-p f)
                  (error (#:ctcx-err-e-41883)
                    (handle f #:ctcx-err-e-41883))))
            (progn
              (exit f)
              (setf #:ctxt-esc-val-41882 f))))
        (setf f #:ctxt-esc-val-41882))

This "feature" will help in - possibly - porting some Python code to Common Lisp.


Python attaches to the with statement the notion of contexts. In Common Lisp, as far as the with macro is concerned, anything that is passed as the expression to it must respect the enter/handle/exit protocol. The three generic functions enter, handle, and exit have simple defaults that essentially let everything "pass through", but specialized context classes have been defined that parallel the Python context library classes.

First of all, the current library defines the EHE-C protocol for streams. This is the straightforward way to obtain the desired behavior for opening and closing files as with with-open-file.

Next, the library defines the following "contexts" (as Python does).

  • null-context:
    this is a full "pass through" context, just encapsulating the expression passed to it.
  • managed-resource-context:
    this is a first cut implementation of a "managed resource", which also implements an acquire/release protocol; of course, this would become more useful in the presence of multiprocessing (see notes in Limitations).
  • redirect-context:
    this is a context that redirects output to a different stream.
  • suppress-context:
    this is a context that selectively handles some conditions, while ignoring other ones.
  • exit-stack-context:
    this is a context that essentially allows a programmer to manipulate the "state of the computation" within its body and combine other "contexts"; to achieve its design goal, it leverages a protocol comprising the functions enter-context, push-context, callback, pop-all and unwind (the last is equivalent to the Python close() context method).

This should be a good enough base to start working with contexts in Common Lisp. It is unclear whether the Python decorator interface would provide some extra functionality in this Common Lisp implementation of contexts and the with macro.


Limitations

The current implementation has a semantics that is obviously not the same as the corresponding Python one, but it is hoped that it still provides useful functionality. There are some obvious limitations that should be taken into account.

The current implementation of the library does not take into consideration threading issues. It could, by providing a locking-context based on a portable multiprocessing API (e.g., bordeaux-threads).

The Python implementation of contexts relies heavily on the yield statement. Again, the current implementation does not provide similar functionality, although it could possibly be implemented using a delimited continuation library (e.g., cl-cont).


The code associated with these documents is not completely tested and is bound to contain errors and omissions. This documentation may contain errors and omissions as well. Moreover, some design choices are recognized as sub-optimal and may change in the future.


The file COPYING that accompanies the library contains a Berkeley-style license. You are advised to use the code at your own risk. No warranty whatsoever is provided, the author will not be held responsible for any effect generated by your use of the library, and you can put here the scariest extra disclaimer you can think of.

Repository and Downloads

The with-contexts library is available on Quicklisp (not yet).

The with-contexts library is hosted at

The git repository can be gotten from the Gitlab instance in the with-macro project page.


Neil Munro: Common Lisp Tutorial 10b: Basic Classes

· 38 days ago


In this tutorial I explain how to start using classes in Common Lisp; it is mostly focused on learning about slots (properties): how to use them, what options are available on slots, and how to initialise a class.

Common Lisp Tutorial 10b: Basic Classes

A simple example

A simple class can be created with the defclass macro:

(defclass person ()
  (name age))

It can be initialised with the following code; please be aware, however, that one does not use new or some factory-pattern named function to build an instance. Common Lisp has a different way: make-instance.

(make-instance 'person)

It is possible to get started with code this simple, using the slot-value function with setf to get/set the values stored in the slots:

(defclass person ()
  (name age))

(let ((p (make-instance 'person)))
  (setf (slot-value p 'name) 'bob)
  (setf (slot-value p 'age) 24)
  (format nil "~A: ~A" (slot-value p 'name) (slot-value p 'age)))

Alternatively one can use with-slots to achieve the same result; the slot names are setf-able and can be read and written easily!

(defclass person ()
  (name age))

(let ((p (make-instance 'person)))
  (with-slots (name age) p
    (setf name 'bob)
    (setf age 28)
    (format nil "~A: ~A" name age)))

There's a lot more one can do with classes though; in fact there are 8 options that can be passed to a slot, each of which extends the behavior in useful ways. They are listed below:


A previous version of this article incorrectly claimed there was no way to get/set the slots.


The initarg option is used to set the value of slots at class initialisation; you do not have to use the same keyword as the slot name!

(defclass person ()
    ((name :initarg :name)))
; When you create an object, you can set the slot value like so
(let ((p (make-instance 'person :name "Fred")))
        (with-slots (name) p
            (format t "~A~%" name)))


The initform option is used to set the default value of slots at class initialisation, if no value is given.

(defclass person ()
    ((name :initform "Fred")))
; When you create an object, you can set the slot value like so
(let ((p (make-instance 'person)))
    (with-slots (name) p
        (format t "~A~%" name)))


The reader option allows you to have a function created for you to access the value stored in a slot. It is worth noting you can have as many :reader options as you like!

(defclass person ()
    ((name :initarg :name :reader name)))
; You can then use the function like so
(let ((p (make-instance 'person :name "Fred")))
    (format t "~A~%" (name p)))


The writer option allows you to have a function created for you to change the value stored in a slot. It is worth noting you can have as many :writer options as you like!

(defclass person ()
    ((name :initarg :name :reader name :writer set-name)))
; You can then use the function like so
(let ((p (make-instance 'person)))
    (set-name "Fred" p)
    (format t "~A~%" (name p)))


The accessor option creates a setf-able function that can be used to both read and write the slot of a class instance.

(defclass person ()
    ((name :initarg :name :accessor name)))
(let ((p (make-instance 'person)))
    (setf (name p) "Fred")
    (format t "~A~%" (name p)))


The allocation option determines if a slot exists on the class directly, and is therefore shared amongst all instances, or if the slot is unique to each instance; the two options to allocation are :class or :instance. By default slots are allocated to :instance and not :class.

(defclass person ()
    ((name :initarg :name :allocation :instance :accessor name)
     (species :initform "human" :allocation :class :accessor species)))
(let ((p  (make-instance 'person :name "Fred"))
      (p1 (make-instance 'person :name "Bob")))
    (setf (species p1) "not human")
    (format t "~A: ~A~%" (name p) (species p)))


The documentation option exists to help the programmer understand the purpose of a slot. Forgive such a trivial example below, as what the name slot on a person object stores is pretty self-evident, but in other cases it may not be so clear.

(defclass person ()
    ((name :documentation "The persons name")))


The type option is another hint to programmers; it is important to note that despite appearances it is not an enforced type. It confused me at first, but it's just a hint, alongside :documentation.


zellerin has very kindly corrected this particular section, thank you!

To quote the HyperSpec

The :type slot option specifies that the contents of the slot will always be of the specified data type. It effectively declares the result type of the reader generic function when applied to an object of this class. The consequences of attempting to store in a slot a value that does not satisfy the type of the slot are undefined. The :type slot option is further discussed in Section 7.5.3 (Inheritance of Slots and Slot Options).

So be warned: this is not a hint to programmers, it is a promise to the compiler, and if you break that promise, anything can happen.

It is possible to see how enforcing the use of types throws a type error, using locally safety-optimized code like so:

(locally (declare (optimize (safety 3)))
  (defclass foo () ((a :initarg :a :type integer)))
  (make-instance 'foo :a 'a))
(defclass person ()
    ((name :type string)))


The code from the video is listed here for your convenience.

(defclass person ()
  ((name :initarg    :name    :initform "Bob"   :accessor name    :allocation :instance :type string  :documentation "Stores a persons name")
   (age  :initarg    :age     :initform 18      :accessor age     :allocation :instance :type integer :documentation "Stores a persons age")
   (species :initarg :species :initform "human" :accessor species :allocation :class)))

(let ((p1 (make-instance 'person :name 145)))
  (setf (species p1) "not-human")

  (let ((p2 (make-instance 'person :name "Fred" :age 34)))
    (format nil "~A: ~A (~A)" (name p2) (age p2) (species p2))))


Michael Fiano: Back To Work

· 43 days ago

Well, I am already slowly starting to get back into coding me some Lisp games. There just isn't much else to do in my free time in this current global health crisis.

For the last week, I have been mostly scribbling notes on my reMarkable about ways to fix the engine troubles discussed in the last couple of articles. I have a few solutions that look really good on paper, so I'm just starting to explore them in code.

While the problem itself isn't that difficult to solve, the difficulty is in retrofitting the existing engine -- that would be far too much work, both due to its size and complexity, and because of the code quality in general, with zero unit or integration tests.

For that reason, I am going to begin working on a new engine that will share a lot of ideas with the previous one, but will in fact be rewritten from the ground up, with a better architecture and proper tests every step of the way. I'm not going to say much about the new design or what's different until I am confident enough in it, but what I worked out was a way to use structure-objects and arrays in the performance-sensitive areas that were previously using standard-objects and hash tables.

As the project progresses into more than just an idea, I will publish the code on my GitHub as usual. I just wanted to mention that I'm happy to be back, although I am taking precautions as to not get so burnt out again.

Marco Antoniotti: Iron handling (with Emacs Lisp)

· 43 days ago

At the beginning of the pandemic I stumbled upon an article regarding the problems that the State of New Jersey was having in issuing relief checks and funding due to the lack of ... COBOL programmers.  At the time I followed a couple of links, landing on this "Hello World on z/OS" blog post.  I was curious and obviously looking for something other than my usual day job, plus, I swear, I had never written some COBOL code.

What follows is a report of the things I learned and how I solved them.  If you are easily bored, just jump to the end of this (long) post to check out the IRON MAIN Emacs Lisp package.

A Foray in the Big Iron Internet

Well, to make a long story short, I eventually installed the Hercules emulator (and other ones - more on this maybe later) in its SDL/Hyperion incarnation and installed MVS on it; the versions I installed are TK4- and a "Jay Moseley" build (special thanks to Jay, who is one of the most gracious and patient people I interacted with over the Internet).  I also installed other "big iron" OSes, e.g., MTS, on the various emulators and experimented a bit (again, maybe I will report on this later).

It has been a lot of fun, and I discovered a very lively (if grizzled) community of enthusiasts, who mostly gather around a few groups, e.g., H390-MVS.  The community is very helpful and, at this point, very similar, IMHO, to the "Lisp" communities out there, if you get my drift.

Anyway, Back to hacking

One way to interact with "the mainframe" (i.e., MVS running on Hercules) is to write your JCL in your host system (Linux, Windows, Mac OS) and then to submit it to a simulated card reader listening over a socket (port 3505, which is meaningful to the IBM mainframe crowd).  JCL code is interesting, as is the overall forma mentis that is required to interact with the mainframe, especially for somebody who was initially taught UNIX, saw some VMS and a few hours of Univac Exec 8. In any case, you can write your JCL, where you can embed whole Assembler, COBOL, Fortran, PL/I etc code, using some editor on Windows, Linux or Mac OS etc.

Of course, Lisp guys know that there is one Editor, with its church. So, what one does is to list-all-packages and install jcl-mo...  Wait...

To the best of my knowledge, as of December 2020, there is no jcl-mode to edit JCL code in Emacs.

It immediately became a categorical imperative to build one, which I did, while learning a bit of Emacs Lisp, that is, all the intricacies of writing modes and eventually post them on MELPA.

Writing the IRON MAIN Emacs Lisp Package

Writing a major mode for Emacs in 2020 is simple in principle, but tricky in practice, especially if, like me, you start with only a basic knowledge of the system as a user.

One starts with define-derived-mode and, in theory, things should be relatively easy from there on.  The first thing you want to do is to get your font-lock-mode specifications right.  Next you want to add some other nice visual tools to your mode.  Finally you want to package your code to play nice with the Emacs ecosystem.
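In outline, and with illustrative names rather than the actual iron-main definitions, that skeleton looks like this:

```emacs-lisp
;; A sketch, not the actual iron-main code; names are illustrative.
(defvar jcl-mode-font-lock-keywords
  '(("^//\\*.*$" . font-lock-comment-face))  ; placeholder: comment cards
  "Font Lock keywords for `jcl-mode'.")

(define-derived-mode jcl-mode prog-mode "JCL"
  "Major mode for editing IBM JCL."
  (setq font-lock-defaults '(jcl-mode-font-lock-keywords)))

;; Associate the mode with a file extension.
(add-to-list 'auto-mode-alist '("\\.jcl\\'" . jcl-mode))
```

The derived mode inherits keymaps, hooks, and conventions from its parent (here prog-mode), so most of the remaining work is in the font-lock specifications.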

Font Lock

Font Lock mode (a minor mode) does have some quirks that make it a bit difficult to understand without an in-depth reading of the manual and of the (sparse) examples one finds over the Internet.  Of course, one never does enough RTFM, but I believe a few key points are worth reporting here.

Font Lock mode performs two "fontification" operations/passes.  At least this seems to be the way to interpret them.

  1. A search based one: where "keywords" are "searched" and "highlighted" (read: they are rendered according to the face declared for them).
  2. A syntax table one: where fontification is performed based on properties set for a given character in a syntax table.

To interact with Font Lock, a mode must eventually set the variable font-lock-defaults.  The specification of the object contained in this variable is complicated.  The variable holds a list with at least one element (the "keywords"); the optional second element controls whether the syntax table pass (2) is performed or not. I found that the interaction between the first two elements must be carefully planned.  Essentially you must decide whether you want only the search-based ("keyword") fontification or the syntax-table-based (2) fontification too.

If you do not want the syntax-table-based (2) fontification, then you want the second element of font-lock-defaults set to non-nil.

The first element of font-lock-defaults is where most of the action is.  Eventually it becomes the value of the variable font-lock-keywords that Font Lock uses to perform search-based fontification (1).  The full range of values that font-lock-keywords may assume is quite rich; eventually its structure is just a list of "fontificators". There are two things to note, however, which I found very useful.

First, Font Lock applies each element of font-lock-keywords (i.e., (first font-lock-defaults)) in order.  This means that a certain chunk of text may be fontified more than once.  Which brings us to the second bit of useful information.

Each element that eventually ends up in font-lock-keywords may have the form

(matcher . subexp-highlighter)
where subexp-highlighter = (subexp facespec [override [laxmatch]])

(see the full documentation for more details).

Fontification is not applied to chunks of text that have already been fontified, unless override is set to non-nil; in that case the new fontification replaces the existing one.  This is very important for things like strings and comments, which may interact in unexpected ways unless you are careful with the order of font-lock-keywords.
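For instance, a keyword list along these lines (the regexps and faces are illustrative, not iron-main's actual specifications) exercises both the (matcher . subexp-highlighter) form and the override flag:

```emacs-lisp
;; Illustrative regexps and faces, not iron-main's actual lists.
(defvar jcl-font-lock-keywords
  `(;; Statement names: //NAME at the start of a card; subexp 1
    ;; receives the face.
    ("^//\\([A-Z0-9#@$]+\\)" 1 font-lock-function-name-face)
    ;; Operations such as JOB, EXEC, and DD.
    (,(regexp-opt '("JOB" "EXEC" "DD" "PROC" "PEND") 'words)
     . font-lock-keyword-face)
    ;; Comment cards: the trailing t is OVERRIDE, so this face
    ;; replaces any fontification applied by the earlier entries.
    ("^//\\*.*$" 0 font-lock-comment-face t)))

;; In the mode body: keywords only, with the second element t to
;; skip the syntax-table pass (2).
;; (setq font-lock-defaults '(jcl-font-lock-keywords t))
```

Because entries are applied in order, putting the comment-card rule last with override set is what keeps comments looking like comments even when earlier rules matched the same text.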

I suggest you download and use the wonderful library font-lock-studio by Anders Lindgren to debug your Font Lock specifications.

Ruler mode

When you write lines, pardon, cards for MVS or z/OS, it is nice to have a ruler to count on that tells you what column you are at (and remember that once you hit column 72 you'd better... continue).  Emacs has a nice little built-in utility that does just that: a minor mode named ruler-mode, which shows a ruler in the top row of your buffer.

There is a snag.

Emacs counts columns from 0.  MVS, z/OS and friends count columns from 1.  Popping up the ruler of ruler-mode in a buffer containing JCL (or COBOL, or Fortran) shows that you are "one off": not nice.

Almost luckily, in Emacs 27.x (which is what I am using) you can control this behavior using the variable column-number-indicator-zero-based, which takes effect when you turn on the minor mode column-number-mode. Its default is t, but if you set it to nil, the column numbers in the buffer will start at 1, which is "mainframe friendly".  Alas, this change does not percolate (yet; it needs to be fixed in Emacs) to ruler-mode, which insists on counting from 0.
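The mode-line side of this is a two-line fix (Emacs 27):

```emacs-lisp
;; Count mode-line columns from 1, matching mainframe conventions.
;; As described above, ruler-mode ignores this setting.
(column-number-mode 1)
(setq column-number-indicator-zero-based nil)
```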

End of story: some - very minor - hacking was needed to fix the rather long "ruler function" to convince it to count columns from 1.


Is there a good way to do this?

It appears that most Emacs "packages" are one-file affairs.  The package I wrote needs to be split into a few files, but it is unclear (remember that I never do enough RTFM) how to keep things together for distribution, e.g., on MELPA or, more simply, in your Emacs load-path.

What I would like to achieve is to just do a load (or a load-library) of a single file that causes the loading of the other bits and pieces.  It appears that Emacs Lisp does not have an ASDF or an MK:DEFSYSTEM as you have in Common Lisp (I will be glad to be proven wrong), so, as my package is rather small after all, I resorted to writing a main file that is named after the library and which can thus be referenced in the -pkg.el file that Emacs packaging requires.  I could have used use-package, but its intent appears to be dealing with packages that are already "installed" in your Emacs environment.

MELPA comes with its recipe format for registering your package; a recipe is a description of your folder structure and it is useful, but it is something you need to submit separately to the main site, let me add, in a rather cumbersome way. Quicklisp is far friendlier.

One other rant I have about the Emacs package distribution sites (e.g., MELPA and El-Get) is that they ultimately assume you are on UN*X (Linux) and require you to have installed bits and pieces of the traditional UN*X toolchain (read: make) or worse.  I am running on W10 these days, and there must be a better way.

Bottom line: I created a top file (iron-main.el) which just sets up a few things and requires and/or loads the other files that are part of or needed by the package.  One of the files contains the definition of a minor mode called iron-main-mode (in an eponymous .el file).
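Concretely, such a loader amounts to little more than this (the file names follow the post; the exact contents are an assumption on my part):

```emacs-lisp
;;; iron-main.el --- sketch of the top-level loader

;; Each required file ends in a matching `provide' and lives in
;; the same directory, which is assumed to be on `load-path'.
(require 'iron-main-mode)   ; the minor mode
(require 'jcl-mode)         ; major mode for JCL
(require 'asmibm-mode)      ; major mode for IBM Assembler

(provide 'iron-main)
;;; iron-main.el ends here
```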

I am wondering whether this is the best way of doing things in Emacs Lisp.  Please tell me in the comments section.

The IRON MAIN Emacs Lisp Package

At the end of the story, here is the link to the GitHub repository for the IRON MAIN Emacs package to interact with the mainframe.

As you can see, the package is rather simple.

It is essentially three files plus the "main" one and a few ancillary ones.

  • iron-main.el: the main "loader" file.
  • iron-main-mode.el: the minor mode invoked by the other major modes defined below.
  • jcl-mode.el: a major mode to handle JCL files (pardon, datasets).
  • asmibm-mode.el: a major mode to handle IBM Assemblers.

One of the nice things I was able to include in jcl-mode is the ability to submit the buffer content (or another .jcl file, pardon, dataset) to the mainframe card reader listening on port 3505 (by default, assuming such a card reader has been configured).

This turns out to be useful, because it allows you to avoid using netcat, nc.exe or nc64.exe, which, at least on W10, always trigger Windows Defender.  Plus everything remains integrated with Emacs.  Remember: there's an Emacs command for that!
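A buffer submission along those lines can be sketched with Emacs's built-in network primitives; the function name and defaults here are illustrative, and the real jcl-mode command may differ:

```emacs-lisp
(defun jcl-submit-buffer (&optional host port)
  "Send the current buffer to the card reader at HOST:PORT.
Defaults to a Hercules card reader on localhost:3505."
  (interactive)
  (let ((proc (make-network-process
               :name "jcl-card-reader"
               :host (or host "127.0.0.1")
               :service (or port 3505))))
    (unwind-protect
        ;; Ship the whole buffer as the card deck.
        (process-send-region proc (point-min) (point-max))
      (process-send-eof proc)
      (delete-process proc))))
```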

To conclude, here are two screenshots (one "light", one "dark") of a test JCL included in the release. Submitting it from Emacs to TK4- and to a "Jay Moseley build" seems to work pretty well.  Just select the Submit menu under JCL OS or invoke the submit function via M-x.

What's next?  A few things apart from cleaning up, like exploring polymode; after all, embedding code in JCL is not unheard of.

That's it.  It has been fun and I literally learned a lot of new things.  Possibly useful.

If you are a mainframe person, do jump on the Emacs bandwagon.  Hey, you may want to write an ISPF editor emulator for it  😏😄




Nicolas Hafner Kandria is now on Steam! - December Update

· 43 days ago

Kandria now has a Steam page! Please visit and wishlist! It would help us a lot to get the game promoted on Steam. As a result of the Steam page and other unexpected changes, this month was pretty hectic, too, so there's a lot to talk about.

Nick - Marketing, Bugfixing, Video Editing, Artworking, Tweaking, Many-things-ing

Very early in the month, we found out that the Pro Helvetia interactive media grant was not going to happen in March of next year like we expected, but rather in September. This was some troubling news, as we had planned our production around that date, and more seriously, I had planned the funding around that as well. It's not like the grant moving would mean Kandria can't be finished - I'm determined to see that through to the end - it's more that with my initial budget allocation the grant timing would have been ideal to keep Fred and Tim on the project continuously if we got the grant.

Now that things have moved around, I'll have to scrounge up some more money to keep them employed on my own dime. It'll be fine, but it was just a bit of a shocking reveal that threw me for a loop. Alongside that revelation though was the announcement of the Swiss Games Showcase, a newer project that they have which offers mentorship from industry experts for a select number of Swiss game projects. The deadline for application to the showcase was 30th of November, which only gave us a few weeks to scrounge everything together.

The application required a press kit and a pitch video, which we got to work on almost immediately. However, in addition to this I felt it would be best if we still tried to land the new 0.0.4 demo release that had been announced, and got a Steam page published with the new material. The Steam page especially would let us start on gathering wishlists already, and garner some more visibility on another platform.

Publishing a Steam page requires quite a few things though: a trailer video and screenshots, capsule images for the store and the Steam library, and a captivating description text. While Tim started work on the press kit and Fred got going on a first enemy design, I got to work on the Steam artwork:

capsule banner library image

The style in these drawings is quite different from what I usually do, so it was really challenging work for me. All things considered though, I'm pretty happy with how they turned out! Having proper artwork like this also really helps with promotional material, which is an added bonus for sure.

Next I had to scrounge together a pitch video. The video had to include several points about the game's marketing strategy, financing, and so forth. To help with this I also spent an afternoon doing some market research into similar games on Steam and their general performance. What I found out in doing that is that the combination of platformer and hack & slash is a rather rare one, especially one that includes actual precision platforming, rather than platforming as a necessary byproduct of being side-scrolling. This indicates that we're aiming for a market niche, which is a good thing for smaller indie titles like Kandria.

The video also required narration and gameplay footage, all of which had to be recorded and cut together in a pleasing manner. I'm pretty happy with the end result, though I don't think we can use it for any public marketing material. If you're interested anyway, you can see the video here.

In between all of this there were bugs in the game and especially its tools that needed to be fixed. The tools especially have been giving me some grief. Everything being custom made is nice when it works, not so nice when it doesn't since you know it's all your fault. Being in a rush is also always a good way to make the most annoying bugs surface, because that's just how these things go. I'll probably spend some time intermittently in the next few weeks fixing the most egregious problems in the tooling.

The Swiss Games Showcase application then finally went out last week. We haven't heard back from them yet, but hopefully we should know whether we've been accepted before we go in for the holiday break. Fingers crossed!

After all that I got started on a new tileset for an area we already know is going to be important: the desert. Given the rather eccentric purple look of the tundra area, I thought it would be a good idea to keep that sort of thing up for all the remaining areas as well. Gives the game a more interesting and unique look, for sure.

desert desert-gif

The tileset barely covers the essentials at the moment, but it's already looking pretty decent, and it's really nice to get a break from the tundra environment I've had to look at for over a year now.

Since we were running out of time I re-used a lot of the footage from the mentorship video and interspersed it with new stuff from the desert environment to craft the trailer for the Steam page. This all only got done in the past week, so I've really been scrambling to get things done!

Finally, leading up to all of this I've also been trying to be more active in promoting the game and posting stuff to my Twitter. I'm not reaching any high numbers or anything yet, but I do think it'll help to do this stuff more often. After all, posting more frequently even with lower shares is still a greater amount of opportunities for other people to see it!

In any case, it's been a bunch of really stressful weeks for me and it has been taking its toll, too. I'm really glad the Steam page is finally out. It feels like a big step forward, but at the same time there's also so, so much work left to be done, it's kind of surreal for me to think about it. I'm really looking forward to being able to wind down a bit in the coming weeks, and especially to being able to take my mind off of things during the holiday break. I heard there was a game coming out soon, what was it again? Cyber... something? Might want to check that out then.

As usual, the new demo release you can get from the mailing list! If you're already subscribed, you should have gotten a reminder email with the download link as well.

Oh, and I just noticed that it's now been a full year of monthly updates! Hoorah.

Fred - Animation Tweaks, Sfx, Enemy Design

Most of the work I did this month has been on designing and implementing the new, first enemy type, adding player animations, and tweaking the combat framing.

zombie-concept zombie-concept2 player zombie

There's still a lot left to do there to get the feeling of it right. I think a large part of that is the effects and such though, so I also got started on those. I had a lot of fun doing explosions, as it's an effect I've done a bunch of times and it's always fun to kind of re-explore it. Next I'm trying to figure out the design style for the other effects, for things like the hard fall, sword slashes, and so on.

explosion-16 explosion-32

Tim - Copywriting and Questing

It's been a beneficial couple of weeks for getting a handle on the game's marketing tone and quest tools. I've worked on the Steam page copy, which went through several drafts with Nick, and which I'm really happy with. I thought I knew about writing Steam pages, but I've learned lots from reading helpful online guides and tips from other devs, as well as studying the pages of games in a similar vein.

Some of this content has been retconned into the presskit too, so it reads its best for the Pro Helvetia application. To that end, I also fed back to Nick on the video script for the app, and I think we've ended up with a really cool summary of what the game is and where it's going.

On the game side I've been having great fun in the level editor. I am now much more confident navigating it, and I even made a new room or "chunk" for the demo quest, mapping the basic layout and painting down the tileset. Ah, maybe one day I'll be an artist... (No I won't).

The quest itself is somewhat of a "my first Kandria quest" scenario, though I'm quite pleased with how it's turned out. I basically took the framework Nick had already scripted in the Markless language, and then changed the structure and content to suit the design I'd planned on paper. It generally fits within the constraints of what was already there, but there's nothing like constraints to get you being creative! I'm finding Lisp quite an unusual syntax to get used to, so I banged my head against the wall a little bit, but Nick was there to save the day.

I'm really pleased with the end result though. The characters are showing glimmers of life; I had fun writing The Stranger's scene-investigation lines, as well as snappy back and forths with Fi. The quest even attempts an emotional punch - anyone playing the demo can let me know if that worked for you or not. The toolset is also great for rapid testing and iteration, which is vital for such a non-linear approach to questing as Kandria has. Only once you play do you go "Ahh, that line doesn't make sense anymore if you read that other line first...". So in short: good tools are your friend :)

The Plan

With the Pro Helvetia application out, the Steam page done, and the 0.0.4 demo released we've checked off all the points on our previous roadmap. For the remaining two work weeks of December, we're going to look at planning and conceptualising things. This means we'll work out major story beats, world building, gameplay areas, and side characters. After that there'll be a well-deserved break for two weeks, during which I'll try my best to clean out my head so that I can start fresh into the new year, ready to work on Kandria with a lot more energy.

January, February, and possibly March will be spent working on the vertical slice, so there won't be any further demo updates until that's done. Doing so will give us a lot of insight into the production process - we should have a much better idea of the scope of the game itself, and how much time it takes us to actually produce the necessary content, as well. This will be vital in shaping the future production scheduling. It should also serve as a good testing ground for all the mechanics and base features, our team work, and the testing feedback.

Until then, I hope you'll have a good holiday season, stay safe, and see you again in the new year! Or, if you're on the mailing list, in the next weekly!

If you haven't done so yet, check out our Steam page and wishlist Kandria! It would really help us out a lot.

Jonathan Godbout Mortgage Server on a Raspberry Pi

· 44 days ago

In the last post we discussed creating a server to calculate an amortization schedule that takes and returns both protocol buffer messages and JSON. In this post we will discuss hosting this server on a Raspberry Pi. There are some pitfalls, and the story isn't complete, but it's still fairly compelling.

What We Will Use:


We will use a Raspberry Pi 3 Model B as our server, with the stock operating system, Raspbian. This SoC has a quad-core 64-bit processor with on-chip floating point. The operating system itself is 32-bit, which makes the processor run in 32-bit mode.


We will be using SBCL as our Common Lisp, CL-PROTOBUFS as our protocol buffer and JSON library, and Hunchentoot as our web server.


1. SBCL on Raspbian

When trying to run the mortgage-info server on Raspbian, the first error I got was an inability to load the Lisp file generated by protoc. On contacting Doug Katzman, he noted I was running an old version of SBCL: the Raspbian apt-get repository ships an outdated one. If you want to run SBCL on a Raspberry Pi, follow the binary installation instructions here:

2. CL-Protobufs on a 32-Bit OS

The cl-protobufs library has been optimized to run on the 64-bit x86 platform, while the Raspberry Pi environment is 32-bit ARM. As noted before, 32-bit ARM is supported by SBCL, but I don't think anyone had attempted to run cl-protobufs on it with SBCL. After modifying cl-protobufs.asd to load float-bits.lisp on SBCL when not running in 64-bit mode, we could quickload mortgage-info into a REPL.

3. Bugs in the mortgage-info repo  

There were several bugs I fixed in my very limited testing of the mortgage-info repo, as well as some bugs that still exist.

  1. When trying to set numbers in the proto message structs I had to coerce them to double-float. I'm not sure why… This works on SBCL running on the x86-64 without the coercions.
  2. A division-by-zero bug if the entered interest rate is 0.
  3. The possibility of having 0 as the number of repayment periods. I added an assertion so we will return a 500 stating the assertion was hit. We should have a more graceful error message than a stack trace, but this is currently only a proof of concept.
  4. The mortgage.proto file had interest as an integer, but interest is usually a float divisible by .125. 
  5. We have rounding problems if the interest rate is too high (say 99%): we only ever pay interest and the amount never goes down, at least with a 300-payment period. This is most likely due to rounding; we do not accept fractional pennies. This is okay: if the national interest rate ever went anywhere near 99%, we'd have BIG problems.
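Bugs 2 and 3 amount to guarding the degenerate cases before applying the standard annuity formula. A sketch of that guard (the function and argument names are mine, not the repo's):

```lisp
(defun monthly-payment (principal annual-rate num-payments)
  "Monthly payment for PRINCIPAL at ANNUAL-RATE (e.g. 0.05 for 5%)
over NUM-PAYMENTS months."
  ;; Bug 3: zero (or negative) payment periods must be rejected.
  (assert (plusp num-payments) (num-payments))
  (let ((r (/ (coerce annual-rate 'double-float) 12)))
    (if (zerop r)
        ;; Bug 2: at 0% interest the annuity formula divides by zero;
        ;; the payment is just the principal split evenly.
        (/ principal num-payments)
        ;; Standard annuity formula: P * r / (1 - (1 + r)^-n).
        (/ (* principal r)
           (- 1 (expt (1+ r) (- num-payments)))))))
```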

CL-protobufs on the Pi

I have cl-protobufs running on SBCL on the Raspberry Pi, but some of the tests don't pass. I'm not sure if it would work on a 64-bit OS on the Raspberry Pi, and I don't have the inclination to get a 64-bit OS for my Pi. If you do, please tell me what happens!

I wasn't able to get CCL on ARM32 to load cl-protobufs. It gives an error saying it doesn't have ASDF 3.1. Quickloading ASDF, I get an undefined function version<=. If any CCL folks have an idea about what's going on, please send me a message.

Trying to run ABCL led me to yet another bug:

Running Server

My Raspberry Pi is running at:

Feel free to send either JSON or protobuf messages to the server.

Example JSON:


I don't know how long I will keep it running. If it goes down and you are interested in sending it messages please send me an email.

Ron, Carl, and Ben edited this post (as usual). Doug provided a great deal of help with SBCL on ARM 32.

Michael Fiano Follow-up to Gamedev, Sleep, Repeat

· 48 days ago

After the last hastily constructed stream-of-consciousness post, I feel like I didn't explain some things very well.

I mentioned that I have been failing for about 10 years. This isn't completely accurate, as I have both learned a lot, and have been able to re-use that knowledge and a lot of mathematics and code in future attempts. Writing a game engine as part of a small team is difficult, and this is expected.

I have been writing games and game engines for 25 years. Why? Because it's fun, and an endless journey of knowledge. I am less interested in making games, and more interested in the design of game engines. A game engine is interesting to me because it requires discipline in many fields of study, and each implementation is different. The thing is, a game engine is a piece of software that manages the data flow for a particular game, or a particular category of games. It is nothing more than a set of choices someone made for you in order to write games in a particular way. Any given game engine could be productive or counter-productive in creating your game. Even using a general-purpose game engine like Unity or Unreal is a trade-off, and for a significant game, you'll find you still have to work around or reimplement core engine features at the 11th hour to get your game shipped.

I mentioned I work in a small team writing game engine code. Yes, there are three other developers working with me to write a game engine in Common Lisp, which is a different project than the engine mentioned in the previous post, and serves different game developer needs. Half of the people on this team are currently part of a games studio that professionally uses Unity and has released real games, both as part of the studio and individually, and the reason they want to make an engine in Common Lisp is the numerous shortcomings of that engine. Even with millions of dollars and an endless army of developers, a particular game engine still may not work for you, and could be more of a hindrance than starting from scratch, especially if you already have experience in the black arts.

As mentioned, I do enjoy working out the math and architectural decisions involved in a complicated piece of software such as a game engine far more than making games. It is why I have made several (perhaps a dozen) game engines in Common Lisp -- that's what I find fun. Occasionally, I get a good idea for a game, and I stop to try using one of these engines to execute that idea. This latest attempt was trying to use an engine designed for one particular type of game for another, so it is no wonder it wasn't suited to the performance (and some feature) characteristics required.

Even if one has an engine particularly suited for the type of game they want to make, making games is hard, and requires lots of discipline in many different fields, not just maths and computer science. Content is king, and asset creation accounts for a lot of the work, in addition to all the game logic and making it all well-balanced. This can only come after a seemingly never-ending tweak, play-test, tweak feedback loop in most situations, for a moderately sized game. This large cost in writing a game is one reason why lots of people reach for a ready-made engine instead of doing things themselves, and there is no harm in that.

While there are some promising engines and tools that can be built upon for Common Lisp, none of them have been battle-tested, or they are otherwise not very usable out of the box for a sizable game idea. This has led me and a few others to try changing that, slowly but surely. Common Lisp is an excellent platform for a game engine, despite what some may think. Common Lisp can be extremely performant, but ultimate runtime performance is not usually required for games these days. Good game design is about finding a balance between the CPU and GPU, and with concurrency and the very little work most games have to do on the CPU relative to the GPU, unless there is a complex, non-discrete physics system involved, or tens of thousands of nodes in your scene, it really isn't a problem. If it ever is, you can shift work between the two processors in a lot of cases.

Where Common Lisp really shines for making games is in the interactivity the whole language is designed around. Generic functions, while much slower than regular functions, are a cost we're usually willing to pay. Macros, and designing all of the DSLs a game requires for describing data, are dead-simple in Common Lisp. Hot code reloading is one of my favorite features: being able to recompile individual functions, DSLs, etc., as a game is running, without requiring custom support from a game engine editor, is the biggest time-saver for me.

For example, I could write a DSL to describe an entity along with all of its properties, and any children and their properties, and so on. Then I could create an instance of this sub-tree and stick it somewhere in the game world. Maybe I'll add 100 instances, each at a different location. Then I could go back to the DSL and decide that I want them all to have an additional child node, so I add it, hit a button, and just like that they're all updated in the game world. Similarly, if one of their textures doesn't look quite right, I could recompile another DSL that describes the texture, and have all uses of it updated in the game world. This workflow is very welcome after coming from a language that forces you to stop the game, recompile everything, restart the game, and get back to where you were. After all that is said and done, it is very difficult to know whether your changes made an impact for the better - often you are making small color adjustments, or shader program adjustments that are hard to notice but better nonetheless.
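To give a flavor of that workflow, here is a hypothetical prefab-style DSL form; the macro name, component names, and syntax are all illustrative, and the actual engine's DSL surely differs:

```lisp
;; Hypothetical syntax: a parent entity with two components and a
;; child entity. Recompiling this toplevel form would update every
;; live instance in the game world.
(define-prefab "turret"
  (transform :translate (vec3 0 5 0))
  (sprite :texture 'turret-base)
  ("barrel"
   (transform :rotate (vec3 0 0 90))
   (sprite :texture 'turret-barrel)))
```

Adding another child node under "barrel" and recompiling the form is the "hit a button" step described above.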

These are some of the reasons why I love Common Lisp, and why I love making game engines and games in Common Lisp. Just because I got burnt out for a bit, doesn't mean I'm done or have given up. This is a lifelong journey of mine, because I find it fun and a pleasure to work with such a dynamic, interactive, and fast when it needs to be language.

People like Nicolas Hafner (Shinmera), Chris Bagley (Baggers), and Pavel Korolev (borodust) are inspirations, who have also devoted themselves to game development in Common Lisp, with great success. I wish them all the best of luck, and my thanks for giving me the will to continue after so many years. I would also like to sincerely thank all of the people who have been supportive over the years, and special thanks to those few of you who have sponsored me. Thank you, everyone!

Michael Fiano Gamedev, Sleep, Repeat

· 51 days ago

It's been several years since I last posted. There are several reasons, but most notably the fact that I haven't been doing anything except write a game engine from scratch. For nearly 2 years, I would just crank out code, sleep, and repeat.

The good news is I was able to write a game engine usable (to an extent; more on that later) for the game ideas I had in mind. The bad news is, as previously mentioned, it has taken its toll on my mental health, knowing that I lost a lot of time I could have spent working on other projects on my back burner, or just having fun with random activities such as playing games or going on a hike.

About three months ago, I finished polishing up the engine and started planning and implementing the beginnings of my first game - after more than 10 years of failing - which was mostly due to the general lack of good game development tooling and engines for Common Lisp. Things were looking good about two months into development and several thousand lines of game logic later. However, shortly thereafter, as my game was a real-time game with lots of game objects and physics being calculated each frame, I quickly realized that the engine was not performant enough to pull off my game idea. After a week of profiling, improving suspect spots in the engine, and reiterating, I hadn't improved the performance much at all, and was finally stuck with SBCL's statistical profiler telling me that even for a small scene without much going on, my CPU was spending about 50% of its time in CLOS -- Common Lisp's object system, which is very dynamic and relies on a lot of runtime dispatch when accessing slot values, calling generic function accessors, and so forth.

This was a pretty large disappointment, because I didn't anticipate it being this slow, even though I knew it was doing a lot of dynamic dispatching. The use of CLOS was a fundamental design decision I made on day one, which utilizes the MOP (Meta-Object Protocol) to dynamically generate classes at runtime as behavioral components are added to or removed from game objects. Everything being a class meant that there was a lot of dynamic dispatch when accessing slots of objects, which in turn hold references to other objects, and so on.

After a lot of thought about what to do, I ultimately decided that it would be best to rewrite the foundation of the engine using actual composition over inheritance rather than mixin classes. This meant completely decoupling components from game objects, and using static structure objects rather than CLOS standard objects.

This decoupling of components meant that another core piece of machinery also had to be rewritten -- the component flow protocol, which is responsible for realizing game objects and their components, and for ensuring everything happens in lock-step throughout a frame. This is actually a much harder problem than it sounds, considering that one of the core design ideas was a declarative DSL for many types of game resources, with a notable prefab DSL for describing a subtree of game objects and components. Arbitrary nodes within a prefab description can reference or be referenced by other toplevel prefab definitions, and each toplevel form can be live-recompiled as the game is running to see changes happen in real time. Decoupling components from entities ruined this interactivity in many ways, and there just is not a clear solution to the problem. At the very least, it would require going back to the drawing board for several weeks and redesigning the engine with simplicity in mind.

Which brings me to my main point. Game engines are large systems consisting of many moving parts. Good software engineering requires simplicity -- it is what allows a system to remain secure, stable, and coherent throughout its evolution. Simplicity itself requires a lot of work at the start of a project to reduce the idea to its essence, and lots of discipline over the lifetime of the project to distinguish worthwhile changes from pernicious ones. That is everything my game engine is not, because for a piece of software as complex as a game engine, it is not easy to know HOW all the pieces fit together; you have only a vague idea. Complexity arises through the iterative process of implementing features and debugging problems with them. Making a small change to get one engine feature to play nice with the others can, and often does, adversely affect simplicity and elegance much later down the road.

The refactored engine, with structs instead of classes and components decoupled from game objects, is for the most part a failure, and I am abandoning that two-week effort. That leaves me with the previous, albeit slower, attempt. It probably means that I have to either scrap my current game idea, or change it in major ways to be able to pull it off and make it playable. It's either that, or start over yet again, engine and all, in which case I would start to question my choice of language. Common Lisp seems like an excellent choice for interactive applications that require hot code reloading, such as games, but games also demand very good performance over convenience and simplicity in a lot of areas.

I am honestly not sure what I will do yet, but I do know that, for the first time in about 2 years, I am going to take a much-needed break to let all of this sink into my subconscious, and maybe the way forward will emerge. I will play games, go on hikes, read books, work on other projects that have been sitting on my back burner for far too long, and maybe even take up another programming language in the meantime. I just know that I need a serious break from all of this, as the mental toll it has taken on me is real. Sometimes I feel like I'm worthless, not a good programmer, and so on, all because game development, and especially game engine development, is a lifelong journey requiring discipline and knowledge in many different fields of study.

That is all, and sorry for the rant. This post was not proof-read. I just needed to quickly get this out of my head to begin my hiatus.

Zach Beane Jackson Lee Underwriting has a remote Common Lisp job open

· 53 days ago

sirherrbatka The Common Lisp Condition System review

· 55 days ago
Phoe wrote a book!

Michał Herda Goodbye, Hexstream

· 56 days ago

I am saddened that I need to write this post, but I need to make a public confession.

After Jean-Philippe Paradis, a Common Lisp programmer better known online as Hexstream, asked me to review his "extensive contributions" to the Common Lisp ecosystem, he seems to have disliked my reply so much that he has declared me the single biggest threat to the Common Lisp community right now.

(A gist copy of the review is here for people who would rather avoid browsing the full issue.)

The review appeared after yet another discussion thread on GitHub - originally about implementations of Clojure-like arrow macros in Common Lisp - was derailed by Hexstream in the traditional way in which he has derailed many[1] other[2] GitHub[3] discussions[4]: asserting as a logical fact that his preferences take precedence over other people's preferences, aggressively calling out other people for questioning this state of affairs, and finally playing the victim card of being silenced, censored, and tortured by a so-called Common Lisp Mafia.

Unlike the past few times, this time I decided not to give up posting. On the contrary, I have spent a considerable amount of my personal time (including one all-nighter) to actually respond to every single post of Hexstream's, analyze it, take it apart into the individual claims he makes, and refute every single false point that I could find, to the best of my ability, using the full extent of my available tools.

After several posts of increasing angriness exchanged with Hexstream, in which I once again tried to coerce him into changing his course and to stop being an aggressive offender towards members of the Common Lisp community, and after being explicitly invited to analyze Hexstream's contribution to the Common Lisp community in a tweet of his, I replied to his request with an analysis of the public data collected from GitHub, Quicklisp, and Hexstream's public CV. Hexstream has announced multiple times that he is proud of this information and that there is nothing to hide there; no, quite the contrary. Hence, I felt welcome to use it and see for myself what kinds of prominent contributions of his I must have missed.

It seems that my analysis of that data was not well-received; Hexstream disappeared with a mere "see you in 2021" comment, stating that he has projects with higher priorities to work on at the moment, and simply replied on GitHub that "my posts contain countless factual, logical and other errors". Afterwards, his Twitter contained this.

I did have a fair amount of respect left for phoe before today, but after he said I am not a Common Lisp expert and that I am a fraud, based on malicious deliberately superficial


            (with-irony "


It seems to me that I must have thought the unthinkable. (How could I have said that he is not a Common Lisp expert and a fraud? How was it even possible!?) Moreover, I then dared to say it aloud. Worst of all, I even backed it all with solid, concrete, data-based evidence that cannot be immediately refuted as a mere opinion and requires some serious figuring out of how to turn it around so that the Common Lisp Mafia is guilty for all the facts that I've noted.

All of a sudden, after posting this single post, I have become the main threat to the whole Common Lisp community, declared impossible to fund, directly or indirectly, in an ethical manner, and then proclaimed to require immediate medical attention of a psychiatric nature.

Oh goodness. I assume that the analysis must have been way too short for his liking. I regret that I have not found the time to go into his GitHub issues in detail...



So, Hexstream. If you're reading this, I hope that my review serves as a proper wake-up call for you to actually see that your behavior is off and needs adjustment in order for other people to actually consider you acceptable in the Common Lisp community. If it does not, I have done everything to actually try and help you as a fellow Common Lisp hacker. I can, and will, do no more in this matter, and will instead do everything to protect the people I respect, like, and cooperate with from your destructive influence.

You are planning to launch some kind of Common Lisp Revival 2020 Fundraiser soon. I would like to tell you that I consider you to be the wrong person to launch one: not even for any of the aforementioned reasons, but for the reason that to you, Common Lisp seems to be a completely different language than it is to me. Based on the above review that you have requested me to do, it seems that you perceive Common Lisp as a strictly single-player language where you have to struggle against countless feats and enemies on Twitter, GitHub, and wherever else, in order to produce anything of even the smallest value after grand feats and massive effort to struggle against censorship.

On the contrary, I know many people who consider Common Lisp to be a multiplayer language where people support one another, are eager to help each other, share knowledge, indulge in fascinating projects that would be tough to indulge in with other languages and, best of all, are not hostile towards one another at the smallest hint of suspicion. Some of those people form the Common Lisp Foundation that, in my opinion, should take over any kind of Common Lisp revival fundraisers.

Obviously, all other reasons from my analysis why you are not entitled to represent the Common Lisp community as head of such a fundraiser still apply. And they are much more damning than the worldview issue above.

  • Your claimed commercial expertise in Common Lisp is void.
  • Your fifteen years of overall experience in Lisp have no basis in actual code.
  • Your projects larger than micro-utilities have been so poor that, as you claim, you have disposed of them yourself.
  • Your micro-utilities do not have a single dependent in the main Quicklisp distribution and they do not show signs of actual use by programmers.
  • Your documentation projects are generally not acceptable in the Common Lisp community because they are encumbered by the implicit unbearable personality of their author.
  • You have not contributed a single line of code to any GitHub repository hosted by anyone else throughout your eight and a half years of presence on GitHub and fifteen years of overall programming experience that you claim to have.
  • You derail GitHub conversations with offensive and aggressive comments, indulge in Twitter rants containing more offensive and aggressive comments, and tie them together with your personal website containing even more offensive and aggressive comments.
  • You repeatedly defame various honored and respected members of the Common Lisp community, including Rainer Joswig, Michael Fiano, Daniel Kochmański, Stas Boukarev, and Zach Beane. And, I guess, me.
  • Oh, about Zach! Have I mentioned?

And to top it all, after the above analysis was posted, instead of fulfilling my hopes and responding to this critique of your Lisp merit by indulging in meritocratic discussion about your technical contributions to the Common Lisp ecosystem, you instead immediately announced that I require psychiatric help.

For completeness, I do have to admit: you have been popularizing crowdfunding among Lispers and achieved visible success there, with multiple authors and repositories adopting various means of crowdfunding (GitHub Sponsors, Patreon, LiberaPay) thanks to your efforts and suggestions. This is the one single thing that I can unambiguously consider a net positive coming from you. That's all.

Other than that, I do have to repeat what I have said at the end of my analysis. You try to pose as a Common Lisp expert. No, with all of the above I have no reasons to claim that you are one. Your expertise is hollow. Your experience seems false. You pretend to be someone you are not. You are a scam, Hexstream, and I am saddened and torn that I need to speak these words because I sincerely wish you were not.

The earliest Lisp commit that I was able to find in my GitHub repositories is from November 2015. That is exactly five years ago. In 2015, I was getting frustrated over Emacs keybindings. In 2015, you were "exposing" Zach Beane. Through these five years, I was learning Lisp to the best of my ability. Through these five years, you were doing I have no idea what. I can only guess based on what I see.

And I see Twitter rants. I see GitHub issue derailments. I see self-announced policies that contradict one another. I see tiny Lisp libraries with zero users. I see no other Lisp code of yours. I see no code of yours in any other GitHub repositories. I see big claims backed by nothing. I see an image of a Common Lisp expert that is so fragile that it falls into pieces after a brief glance.

Seriously, what were you doing with your life during these years? Researching ethics? Verifying the boundaries set by Twitter and GitHub moderation teams? Fighting for your life while the Common Lisp Mafia caged you and demanded a ransom of 20,000,000 US parentheses for your freedom?

I simply cannot comprehend it. And I do feel sorry for you, since most likely neither can you.

If you are still reading, please answer one question that I will ask at the end of this block of text. I will attempt to be somewhat honest regarding my own impact on the Common Lisp community, as I see it. Not boasting too much, not being too humble. Let's try it.

I have attempted to complete the Common Lisp UltraSpec which I talked about at a European Lisp Symposium one time and then failed miserably at this task after grossly misestimating it. I have implemented package-local nicknames in Clozure Common Lisp and then used the momentum from that work to make a portability library for package-local nicknames. I have managed to rewrite and optimize the somewhat famed split-sequence system commonly used in the Common Lisp ecosystem. I have managed to overhaul the even more famed Lisp Koans by rewriting them almost from scratch and fixing multiple compliance errors. I have successfully convinced the Massachusetts Institute of Technology to release the Common Lisp WordNet interface under a permissive license (which took only half a year of pinging people via mail) and fixed it up as appropriate. I have written a utility suite for managing protocols and test cases with some documentation that I am proud of even after two whole years. I wrote an implementation of Petri nets in Common Lisp that seems either to work fine or not to be used at all, because I do not get much attention from it; still, I've tested it (hopefully) well enough to be useful in the general case. I recently wrote the fastest priority queue available in Common Lisp after someone mentioned that the ones on Quicklisp are too slow. I then ended up miserably failing at rewriting the Common Lisp arrows system, which resulted in a different system with a tutorial for arrows that I have received several thanks for. And then there are some smaller libraries that might not be all that worth mentioning.

I have been hosting the Online Lisp Meeting series, which has met general acclaim and popularity and is considered a worthy continuous extension of the ideas of the European Lisp Symposium - even if, in my opinion, it contains a bit too much CL content, compared to the ELS ideals and statistics. The eleventh installment is bound to happen this week, where I will speak for the second time - again about the topic of control flow and condition systems. I already have two more talks queued up and we plan on going until the next European Lisp Symposium, which will most likely eat up all of the available talks and then some. (Maybe some of the rejected papers will sublimate as OLM videos though?... I sincerely hope so! ELS recently had to reject papers not because they were bad, but because the schedule was already full.)

With the help of countless people at various stages of the book lifecycle, and with support from Apress Publishing, I have managed to release the book The Common Lisp Condition System along with a pair of accompanying Common Lisp systems, the larger Portable Condition System and the smaller trivial-custom-debugger, plus a release of source code from the book and a free online appendix containing content that did not make it inside in time. I have also proven that the condition system can easily be implemented in a non-Lisp language, namely Java, and I will talk about this at length to the WebAssembly committee to ensure that WASM has all the functionality necessary for Common Lisp to be implemented efficiently on it.

Finally, I made some art once. I think it did not sting anybody's eyes too hard, nor is it too strictly Lisp-related... but hey, it's CL implementations, and the Lisp Lizard.

I think I am generally tolerated and maybe even enjoyed in my community as a Common Lisp programmer, despite my occasional outbursts of frustration and outright stupidity. I try to be available on Reddit, IRC, Discord, and in private messages for all sorts of support that I am capable of providing. I try to teach other people the way I was taught when I was starting out. Whenever I notice that I should apologize and make amends because I fucked up somewhere (e.g. in the recent Quickdocs issue), I do try my best to be sorry and amend my behavior, and I try to welcome other people's remarks and integrate them into my behavior as appropriate; I think it helps other people tolerate me when I'm not easily tolerable.

And, well, you know, there is this single person in my environment who just keeps on smearing shit on people in my vicinity, but I don't think I care anymore; this person has willingly made so many enemies by now, that they are ignored by many, confronted by few (who actually have some time to spare), and hell, I even got some most unexpected people to cheer me on in my attempts to actually try and confront this guy and his bullshit excuses for repeatedly setting fires in the Common Lisp world.

But, yeah, anyway. You still consider that it's me who needs psychiatric help. Is that right?

So, Hexstream, this is a goodbye. Thank you for the unique chance to train my patience, persistence, and insistence. I assure you that it has not gone to waste, and I assure you that I will remember it for the rest of my life.

Since you do not seem to want to change your behavior in the slightest, then I wish you to stay on your current course and not change in the slightest so you may see for yourself where it leads you. The faster you slide into irrelevance because of your current choice, the healthier the Common Lisp community will be.

(And I mean the real Common Lisp community, containing more than just a single person who's purely accidentally named Jean-Philippe.)

Bye. I don't think I will miss you much, even though I adore the technical thought behind some of your libraries. And if I encounter you again on the Internet, be prepared to once again meet the side of me that has long run out of spare chances to give you anymore.

Vsevolod Dyomkin The Common Lisp Condition System Book

· 57 days ago

Several months ago I had the pleasure of being one of the reviewers of the book The Common Lisp Condition System (Beyond Exception Handling with Control Flow Mechanisms) by Michał Herda. I doubt that I contributed much to the book, but, at least, I can express my appreciation in the form of a reader review here.

My overall impression is that the book is very well-written and definitely worth reading. I always considered special variables, the condition system, and multiple return values to be the most underappreciated features of Common Lisp, although I never imagined that a whole book could be written on these topics (let alone on just two of them). So, I was pleasantly flabbergasted.
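Those features compose nicely even in a few lines. Here is a small illustration (my own, not from the book) of special variables and multiple return values working together:

```lisp
;; A special variable: dynamically rebindable per call stack.
(defvar *radix* 10)

(defun parse-pair (a b)
  ;; Return both parsed numbers as multiple values.
  (values (parse-integer a :radix *radix*)
          (parse-integer b :radix *radix*)))

;; Rebinding *RADIX* affects PARSE-PAIR without passing a parameter.
(let ((*radix* 16))
  (multiple-value-bind (x y) (parse-pair "ff" "10")
    (+ x y)))  ; => 271, since #xFF = 255 and #x10 = 16
```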

The book has a lot of things I value in good technical writing: a structured and logical exposition, detailed discussions of various nuances, a subtle sense of humor, and lots of Lisp. I should say that reading the stories of Tom, Kate, and Mark was so entertaining that I wished to learn more about their lives. I even daydreamt (to use the term often seen throughout the book) about a new semi-fiction genre: stories about people who behave like computer programs. I guess a book of short stories containing the two from this book and the story of Mac from "Practical Common Lisp" can already be initialized. "Anthropomorphic Lisp Tales"...

So, I can definitely recommend reading CLCS to anyone interested in expanding their Lisp knowledge and general understanding of programming concepts. And although I can call myself quite well versed in the CL condition system, I was still able to learn several new tricks and enrich my understanding. That is quite valuable, as you never know when one of its features could come in handy and save your programming day. In my own Lisp career, I have had several such a-ha moments and continue to appreciate them.

This book should also be relevant to those who have a general understanding of Lisp but are compelled to spend their careers programming in inferior languages: you can learn more about one of the foundations of interactive programming and appreciate its value. Perhaps one day you'll have access to programming environments that focus on this dimension, or you'll be able to add elements of interactivity to your own workflow.

As for those who are not familiar with Lisp, I'd first start with the classic Practical Common Lisp.

So, thanks to Michał for another great addition to my virtual Lisp book collection. The spice must flow, as they say...

Michał Herda Damn Fast Priority Queue: a speed-oriented priority queue implementation

· 63 days ago

I think I have accidentally outperformed all of the Quicklisp priority queue implementations. Enter Damn Fast Priority Queue.

Detailed description and benchmarks are available on the GitHub repository. It seems that my implementation is consistently an order of magnitude faster than most of the other priority heaps (with Pileup being the runner-up, only being about 3-4x slower than DFPQ).
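Usage is deliberately minimal. The sketch below is from memory of the repository's README, so treat the exact function names (`make-queue`, `enqueue`, `dequeue`, `peek`) as assumptions and check the repository before relying on them; priorities are small unsigned integers, with smaller values dequeued first:

```lisp
;; Hedged sketch of the Damn Fast Priority Queue API; names assumed
;; from memory of the README -- verify against the repository.
;; (ql:quickload :damn-fast-priority-queue)

(let ((q (damn-fast-priority-queue:make-queue)))
  (damn-fast-priority-queue:enqueue q :background-task 10)
  (damn-fast-priority-queue:enqueue q :urgent-task 1)
  ;; If the API matches, this returns :URGENT-TASK first.
  (damn-fast-priority-queue:dequeue q))
```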

Michał Herda Cafe Latte - a condition system in Java

· 64 days ago

I've more or less finished Cafe Latte - an implementation of Common Lisp dynamic variables, control flow operators, and the condition system in plain Java.

It started out as a proof that a condition system can be implemented even on top of a language that has only automatic memory management and a primitive unwinding operator (throw), but does not have dynamic variables or non-local returns by default.

It should be possible to use it, or parts of it, in other projects, and its source code should be readable enough to understand the underlying mechanics of each Lisp control flow operator.
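For reference, the Lisp-side behavior that such a port has to reproduce looks roughly like this: a condition is signaled, a handler higher up the stack picks a restart, and control transfers non-locally back into the signaling function.

```lisp
;; The Common Lisp machinery being modeled: a condition, a restart,
;; and a handler that chooses the restart.
(define-condition flaky-error (error) ())

(defun fetch ()
  (restart-case (error 'flaky-error)
    ;; RETRY is a named recovery strategy offered at the point of error.
    (retry () :retried)))

(handler-bind ((flaky-error
                 (lambda (c)
                   (declare (ignore c))
                   ;; Transfer control to the RETRY restart inside FETCH.
                   (invoke-restart 'retry))))
  (fetch))  ; => :RETRIED
```

Java has only `throw` for unwinding, which is why reproducing restarts and dynamic variables there takes real machinery.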

Nicolas Hafner Closing in on Production - November Kandria Update

· 73 days ago

October somehow flew by really quickly for me. It's already November, and we're nearing the end of the year, too. Just thinking about that makes me want to reminisce, but I'll have to hold off on doing my yearly wrap-up for another two months! Who knows, a lot more can still happen in that time. Last month marked another release for Kandria, and this month marked the start of Kandria being an actual team effort!

I'm really glad that it's no longer just me working on things. Fred already introduced himself in the last monthly, and by now he has already started work and delivered some really great stuff:

new light attack player idle

As a result, the game already feels a lot more fun to play. The step up from the combat animations I had made early in the year is huge!

We're still not done with it though, there's a few more moves missing, and a lot more left to adjust and fine-tune of course. We'll also have to get started on some real enemy designs soon and implement those to have some interesting encounters to test things with.

I can now also finally announce the third team member, Tim White, who'll be working on characters, story, and dialogue for the game:

Hey there! I'm Tim, a games writer from the UK. I've been in the industry for ten years now (where did the time go?!), and have been lucky enough to work at Jagex on Transformers Universe, and most recently with Brightrock Games on War for the Overworld and an unannounced game.

Kandria jumped out the screen at me straight away, with its detailed world and story, custom-made dev tools, and strong creative and artistic direction. I also have a real soft spot for post-apocalyptic worlds, and the ethics surrounding artificial life. Applying was a no brainer, and I can't wait to start!

You can find Tim on Twitter at @TimAlanWhite, or on the official Kandria Discord.

Both Tim and Fred will be giving quick updates on what's happening in the weekly newsletter from now on. The newsletter has now also been moved away from Mailchimp to my own mailing list service called Courier. I'm glad to finally have made the switch, freeing me from Mailchimp's slow and clunky interface!

On the engine side, I reworked the lighting and background systems to allow changing the lighting and parallax background to fit the current environment. As part of this I also changed the shadow casting to work properly so that it no longer contains the weird corner case glitches it used to.

I also had to make some fixes to the animation system to make it more capable and to make it less of a hassle to use when animations are changed or added. Previously the tooling there would easily mess up your data.

Then, in order to prepare for Tim, I reworked the quest system to be much easier to manage and control, and added a couple of additional features that should be very useful to control branching. To test it I made some quick draft animations for Fi and jotted her down in the test level.


She'll now comment on things you can find throughout the level.

I also wrote a bunch of documentation to help Tim and Fred get set up and running with the game, introduced some very useful tooling like hot-reloading to make it faster to iterate on animations and textures, and improved the editor, especially for the in-game animation properties.

With all of this now in, we are very, very close to ending post-production. There's a few not-so-small things that I still need to do, like an animation system for the UI that I started working on yesterday, and one very nasty bug that popped up on Windows systems with surround sound configured. Still, with all of this in mind, I think we're well on track for the vertical slice release in March.

I hope there'll be a 0.0.4 demo release by the end of this month, which will be the last public demo until the vertical slice 0.1.0 demo. After that- I don't know yet how things will go. A lot about the game is going to become much clearer in the coming months as we decide on stuff like the core plot and work out the first area of the game for the vertical slice.

Aside from putting out whatever fires Fred and Tim stumble across this month, I'll be focusing on two things: first, fix surround sound on Windows. This is important to me as having the game crash and burn because of something so... tangential, is really terrible. Second, implement a UI animation system. The UI toolkit I'm using, Alloy, does not currently have a way to animate things. This is fine for tools and other UI like that, but in games you really want to spruce things up by tweening and animating to make your UI more interesting to look at. That's the last major addition to Alloy that's needed to have everything we need.

If time permits, I'll also work on some more platforming challenge levels to give the 0.0.4 demo some more content.

Anyway, I'm really happy to have a team together now, and I'm very excited to see how quickly things develop from here! To be fair, I'm also quite a bit worried what with being, I suppose, my own boss now, and the responsibilities that brings. I suppose time will tell whether I can figure out a good schedule and manage things well. For now I'm cautiously optimistic.

Alright, back to thinking about the animation system now, and see you next month, or next week if you're on the mailing list!

Alexander Artemenko sphinxcontrib-cldomain

· 79 days ago

This is an add-on to the Sphinx documentation system which makes it possible to use information about Common Lisp packages in documentation.

Initially, Sphinx was created for Python's documentation, and now it is widely used not only for Python libraries but also for many other languages.

Sphinx uses the reStructuredText markup language, which is extensible. You can write your own extensions in Python to introduce new building blocks, called "roles".

sphinxcontrib-cldomain consists of two parts. The first part is a Python extension to Sphinx which adds the ability to render documentation for CL functions, methods, and classes. The second is a command-line docstring extractor, written in CL.
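In reStructuredText this looks roughly as follows. The directive names are from memory of cldomain's documentation, so treat them as approximate and check the project docs before use:

```rst
.. Hedged sketch: cl-domain directives, names assumed from memory.

.. cl:package:: example-package

.. cl:function:: greet

   Handwritten prose here is merged with the docstring that the
   CL-side extractor pulls out of the running image.
```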

Initially, cldomain was created by Russell Sim, but at some point I forked the repository to port it to newer Sphinx, Python 3, and Roswell.

The coolest feature of cldomain is its ability to mix handwritten documentation with docstrings. The second is cross-referencing: you can link between different docstrings and chapters of the documentation.

Today I will not show you any code snippets. Instead, I've created an example repository with a simple Common Lisp system and documentation:

This example includes a GitHub workflow to update the documentation on every push to the main branch, and it can be used as a skeleton for your own libraries.

The main thing I dislike in Sphinx and cldomain is the Python :( Other cons are the complexity of the markup and toolchain setup.

In the next few posts, I'll review a few other documentation tools for Common Lisp and try to figure out if they can replace Sphinx for me.

I think we as a CL community must concentrate our efforts on improving the documentation of our software, and choosing the best setup that can be recommended to everybody is key.

ABCL Dev ABCL 1.8.0

· 81 days ago

Under the gathering storms of the Fall 2020, we are pleased to release ABCL 1.8.0 as the Ninth major revision of the implementation.

This Ninth Edition of the implementation now supports building and running on the recently released openjdk15 platform.  This release is intended to be the last major release to support the openjdk6, openjdk7, and openjdk8 platforms, as with abcl-2.0.0 we intend to move the minimum platform to openjdk11 or better in order to efficiently implement atomic memory compare-and-swap operations.

With this release, the implementation of the EXT:JAR-PATHNAME and EXT:URL-PATHNAME subtypes of cl:PATHNAME has been overhauled to the point that arbitrary references to ZIP archives within archives now work for read-only stream operations (CL:PROBE-FILE, CL:TRUENAME, CL:OPEN, CL:LOAD, CL:FILE-WRITE-DATE, CL:DIRECTORY, and CL:MERGE-PATHNAMES).  Previous versions of the implementation relied on the ability to open streams of an archive within an archive, behavior that was silently dropped after Java 5 and consequently hasn't worked on common platforms supported by the Bear in a long time.  The overhaul restores the feasibility of accessing fasls from within jar files.  Interested parties may examine the ASDF-JAR contrib for a recipe for packaging and accessing such artifacts.  Please consult Section 4.2, "Beyond ANSI: Pathnames", of the User Manual for further details on how namestrings and components of PATHNAME objects have been revised.
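A jar pathname follows Java's jar-URL scheme, with `!/` separating the archive from the entry inside it. The path below is purely illustrative, and the exact 1.8.0 namestring revisions are documented in the manual section cited above:

```lisp
;; Sketch: read-only access to a file packaged inside a jar on ABCL.
;; The jar path here is a made-up example.
(let ((p (pathname "jar:file:///path/to/library.jar!/library/file.lisp")))
  (when (probe-file p)   ; read-only operations now work on such pathnames
    (load p)))
```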

A more comprehensive list of CHANGES is available with the source.


Alexander Artemenko cl-pdf

· 82 days ago

This is a library for PDF generation and parsing.

Today I'm too lazy to provide step-by-step examples, but I have a real task to accomplish with this library.

Some time ago I read an article about productivity which recommended printing a "life calendar". This calendar should remind you that life is limited and that time is precious.

The calendar is a grid where every box is one week of your life. The article suggested buying a poster with the calendar, but I don't want to wait for a parcel with the poster! I want to print it now!

And here is where cl-pdf comes on the scene!

I wrote this simple function to generate the poster of A1 format:

(defun render (&optional (filename "life.pdf"))
  (flet ((to-ppt (size-in-mm)
           ;; convert millimeters to PDF points (1 pt = 1/72 inch)
           (/ size-in-mm 1/72 25.4)))
    (let* ((width (to-ppt 594))       ;; This is A1 page size in mm
           (height (to-ppt 841))
           (margin-top (to-ppt 70))
           (margin-bottom (to-ppt 30))
           (span (to-ppt 2))
           (year-weeks 52)
           (years 90)
           (box-size (/ (- (- height (+ margin-top margin-bottom))
                           (* span (1- years)))
                        years))
           (boxes-width (+ (* box-size year-weeks)
                           (* span (1- year-weeks))))
           (boxes-height (+ (* box-size years)
                            (* span (1- years))))
           ;; horizontal margin depends on box size,
           ;; because we need to center them
           (margin-h (/ (- width boxes-width)
                        2))
           (box-radius (/ box-size 3))
           (helvetica (pdf:get-font "Helvetica")))
      (pdf:with-document ()
        (pdf:with-page (:bounds (rutils:vec 0 0 width height))
          ;; For debug
          ;; (pdf:rectangle margin-h margin-bottom
          ;;                boxes-width
          ;;                boxes-height
          ;;                :radius box-radius)
          (loop for year from 0 below years
                do (loop for week from 0 below year-weeks
                         for x = (+ margin-h (* week (+ box-size span)))
                         for y = (+ margin-bottom (* year (+ box-size span)))
                         do (pdf:rectangle x y box-size box-size :radius box-radius)))
          ;; The title
          (pdf:draw-centered-text
           (/ width 2)
           (+ margin-bottom
              boxes-height
              ;; space between text and boxes in mm
              (to-ppt 15))
           "LIFE CALENDAR"
           helvetica
           ;; font-size in mm
           (to-ppt 30))

          ;; Labels for weeks
          (let ((font-size
                  ;; We want labels to be slightly smaller than boxes
                  (* box-size 2/3)))
            (pdf:draw-centered-text
             (+ margin-h
                (/ box-size 4))
             (+ margin-bottom
                boxes-height
                ;; space between text and boxes in mm
                (to-ppt 10))
             "Weeks of the year"
             helvetica
             font-size)
            (loop for week below year-weeks
                  do (pdf:draw-centered-text
                      (+ margin-h
                         (/ box-size 2)
                         (* week (+ box-size span)))
                      (+ margin-bottom
                         boxes-height
                         ;; space between text and boxes in mm
                         (to-ppt 3))
                      (rutils:fmt "~A" (1+ week))
                      helvetica
                      font-size))

            ;; Labels for years
               (- margin-h
                   (to-ppt 10))
               (- (+ margin-bottom
                   (/ box-size 4)))
              (pdf:rotate 90)
               0 0
               "Years of your life"
            (loop for year below years
                  do (pdf:draw-left-text
                      (- margin-h
                          ;; space between text and boxes in mm
                          (to-ppt 3))
                      (+ margin-bottom
                          (/ box-size 4)
                          (* year (+ box-size span)))
                      (rutils:fmt "~A" (- years year))

            ;; The Question.
             (- width margin-h)
             (- margin-bottom
                 (to-ppt 10))
             "Is this the End?"
             (* font-size 2))
        (pdf:write-document filename)))))

Here is how the result looks:

The PDF can be downloaded here.

This program demonstrates a few features of cl-pdf:

  • ability to set page size;
  • text drawing and rotation;
  • font manipulation.

There are many more features, but most of them are undocumented; there are only a few examples :(
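Since the documentation is sparse, here is a minimal self-contained sketch of the basic cl-pdf workflow (my own illustration, not taken from the post), using only `with-document`, `with-page`, `get-font`, `draw-centered-text`, and `write-document`:

```lisp
;; Minimal cl-pdf sketch: one default-size page with a line of centered text.
;; Assumes (ql:quickload "cl-pdf") has been run.
(defun hello-pdf (&optional (filename "hello.pdf"))
  (pdf:with-document ()                   ; a fresh in-memory document
    (pdf:with-page ()                     ; a page with default bounds
      (let ((helvetica (pdf:get-font "Helvetica")))
        ;; x, y, string, font, font-size — coordinates are in PDF points,
        ;; with the origin at the bottom-left corner of the page.
        (pdf:draw-centered-text 306 400 "Hello, cl-pdf!" helvetica 36)))
    (pdf:write-document filename)))       ; serialize the document to disk
```

Calling `(hello-pdf)` should leave a one-page `hello.pdf` in the current directory.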

GitHub shows 4 forks with some patches. Some of them have been turned into pull requests, but the maintainer has been inactive on GitHub since 2019 :(

Alexander Artemenko: cl-async-await

· 84 days ago

This library implements the async/await abstraction to make it easier to write parallel programs.
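The core idea can be sketched with plain bordeaux-threads (this is my own illustration of the concept, not cl-async-await's actual implementation): a promise is essentially a background computation plus a way to wait for its result.

```lisp
;; Conceptual sketch only: a "promise" as a thread computing a value, and
;; an "await" that joins the thread. cl-async-await's real implementation
;; is more sophisticated (condition handling, interactive restarts, etc.).
(defstruct toy-promise
  thread)  ; the bordeaux-threads thread computing the value

(defun toy-async (thunk)
  "Start THUNK in a background thread; return a toy promise for its result."
  (make-toy-promise :thread (bt:make-thread thunk)))

(defun toy-await (promise)
  "Block until the promise's thread finishes; return the thunk's value
   (bt:join-thread returns the thread function's result on SBCL and
   most other implementations)."
  (bt:join-thread (toy-promise-thread promise)))

;; Usage sketch:
;; (toy-await (toy-async (lambda () (+ 1 2))))
```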

Now we'll turn calls to the "dexador" HTTP library into async calls and see whether we can parallelize 50 requests to a site that takes 5 seconds to respond.

To create a function which can return a delayed result, a "promise", we have to use cl-async-await:defun-async:

POFTHEDAY> (cl-async-await:defun-async http-get (url &rest args)
             (apply #'dexador:get url args))

Now let's call this function. When called, it returns a "promise" object, not the real response from the site:

POFTHEDAY> (http-get "")

Now we can retrieve the real result, using cl-async-await:await function:

POFTHEDAY> (cl-async-await:await *)
"{
  \"args\": {}, 
  \"data\": \"\", 
  \"files\": {}, 
  \"form\": {}, 
  \"headers\": {
    \"Accept\": \"*/*\", 
    \"Content-Length\": \"0\", 
    \"Host\": \"\", 
    \"User-Agent\": \"Dexador/0.9.14 (SBCL 2.0.8); Darwin; 19.5.0\", 
    \"X-Amzn-Trace-Id\": \"Root=1-5f9732d6-148ee9a305fab66c26a2dbfd\"
  }, 
  \"origin\": \"\", 
  \"url\": \"\"
}
"
200 (8 bits, #xC8, #o310, #b11001000)
#<CL+SSL::SSL-STREAM for #<FD-STREAM for "socket, peer:" {10085B0BF3}>>

If we look at the promise object again, we'll see it has a state now:

  "args": {}, 
  "data": "", 
  "files": {}, 
  "form": {}, 
  "headers": {
    "Accept": "*/*", 
    "Content-Length": "0", 
    "Host": "", 
    "User-Agent": "Dexador/0.9.14 (SBCL 2.0.8); Darwin; 19.5.0", 
    "X-Amzn-Trace-Id": "Root=1-5f9732d6-148ee9a305fab66c26a2dbfd"
  "origin": "", 
  "url": ""

  200 #<HASH-TABLE :TEST EQUAL :COUNT 7 {1002987DE3}>
  #<SSL-STREAM for #<FD-STREAM for "socket, peer:" {10085B0BF3}>>) >

OK, it is time to see if we can retrieve results from this site in parallel. To make it easier to measure the speed, I'll wrap all the code into a separate function.

The function returns the total number of bytes in all 50 responses:

POFTHEDAY> (defun do-the-test ()
             (let ((promises
                     (loop repeat 50
                           collect (http-get ""
                                             :use-connection-pool nil
                                             :keep-alive nil))))
               ;; Now we have to fetch results from our promises.
               (loop for promise in promises
                     for response = (cl-async-await:await promise)
                     summing (length response))))

POFTHEDAY> (time (do-the-test))
Evaluation took:
  6.509 seconds of real time
  2.496912 seconds of total run time (1.672766 user, 0.824146 system)
  38.36% CPU
  14,372,854,434 processor cycles
  1,519,664 bytes consed

As you can see, the function completes in 6.5 seconds instead of the 250 seconds (50 requests × 5 seconds each) that sequential requests would take. This means cl-async-await works!

The only problem I found was this concurrency issue:

But it is probably related to Dexador rather than to cl-async-await itself.

For older items, see the Planet Lisp Archives.

Last updated: 2021-01-18 00:00