Jonathan Godbout: gRPC Basics

· 16 hours ago

Howdy Hackers, it's finally time to talk to you about gRPC for Common Lisp. In this post we will discuss the basics of gRPC. We will go through an example request/response flow from the perspective of the client and server. In future posts we will make a gRPC server and call it from a client. 

Lyra and Faye Looking Forward to gRPC Discussion?


gRPC is a general RPC framework developed by Google. It is often used to pass Protocol Buffer messages from one system to another, though an advanced user could use it to pass byte vectors. It sits over HTTP/2 and allows for bidirectional message streaming between a client and a server.

For these posts we assume some knowledge of Protocol Buffers.

Why would I use it?

gRPC allows for simple communication between clients and servers. It allows for language-agnostic message passing of complex structured objects.

First let’s look at a simple call flow for a client and server.

  1. Service implementor publishes a gRPC Service description and Request Message as well as a public URL.
  2. Client uses the URL and gRPC library to create a channel.
  3. Client instantiates a request object.
  4. Client uses Protocol Buffer generated code to call the server passing in the channel and request object.
  5. Server receives the request object, does required processing, and returns a response object.
  6. The client receives a response message based on the published service descriptor.

The client and server need language-specific Protocol Buffer and gRPC libraries. The client and server libraries need not be in the same language. In our examples we will use qitab/grpc and qitab/cl-protobufs, both written for Common Lisp.

The Protocol Buffer library takes care of many of the low-level details for you. Once you specify the request and response message fields, protobufs provides convenient constructors in multiple languages and takes care of serialization and deserialization to the correct type for each message field.
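As a sketch of what this buys you — with the caveat that the package, message, and function names below are illustrative, not verbatim cl-protobufs API (check the qitab/cl-protobufs documentation for the real names):

```lisp
;; Hypothetical sketch: assumes a .proto file defining a HelloRequest
;; message with a string `name` field, compiled into a `greeting` package.
;; All names here are illustrative, NOT verbatim cl-protobufs API.
(let* ((request (greeting:make-hello-request :name "Lisper"))
       ;; The library serializes the typed object to the wire format...
       (bytes (serialize-to-bytes request)))
  ;; ...and deserializes bytes back into a typed object.
  (deserialize-from-bytes 'greeting:hello-request bytes))
```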

The gRPC library is in charge of transmitting the underlying bytes from client to server. It delegates the serialization of the request and response messages to the Protocol Buffer library.



One option to consider is bare HTTP calls; in fact, HTTP is the underlying transport of gRPC! This still leaves a system designer with the need to choose what to send over the wire, often JSON or XML. Then one must determine how to share the API schema, devise authentication schemes, and do all the work that creating a good API requires. gRPC gives you much of this for free.

Apache Thrift

gRPC has a larger market share. The ecosystem you want to work with will often determine your choice of Thrift vs. gRPC.


There are many different RPC frameworks; these are just the most common. Your software environment will often determine your framework: if you work at Google you will probably use gRPC, whereas if you work at Facebook you'll probably use Thrift. Also, not every language is supported by every RPC framework.


We now understand gRPC and its use case. We discussed the different types of libraries we need and saw a simple call flow with these libraries. In our next post we will create a gRPC server using qitab/grpc and call it.

Thanks go to Carl Gay for edits!

vindarel: Debugging Lisp: trace options, break on conditions

· 2 days ago

Those are useful Common Lisp debugging tricks. Did you know about trace options?

We will see how trace accepts options: how we can break and invoke the interactive debugger before or after a function call, how we can break on a condition (“this argument equals 0”), and how we can enrich the trace output. But we only scratch the surface; more options are documented in your implementation's documentation.

INFO: You'd better read this on the Common Lisp Cookbook, that's where it will receive updates.

Table of Contents

Trace - basics

But let’s first see a recap of the trace macro. Compared to the previous Cookbook content, we just added that (trace) alone returns the list of traced functions.

trace allows us to see when a function was called, what arguments it received, and the value it returned.

(defun factorial (n)
  (if (plusp n)
    (* n (factorial (1- n)))
    1))

To start tracing a function, just call trace with the function name (or several function names):

(trace factorial)

(factorial 3)
  0: (FACTORIAL 3)
    1: (FACTORIAL 2)
      2: (FACTORIAL 1)
        3: (FACTORIAL 0)
        3: FACTORIAL returned 1
      2: FACTORIAL returned 1
    1: FACTORIAL returned 2
  0: FACTORIAL returned 6

(untrace factorial)

To untrace all functions, just evaluate (untrace).

To get a list of currently traced functions, evaluate (trace) with no arguments.

In Slime we have the shortcut C-c M-t to trace or untrace a function.

If you don’t see recursive calls, that may be because of the compiler’s optimizations. Try this before defining the function to be traced:

(declaim (optimize (debug 3)))  ;; or C-u C-c C-c to compile with maximal debug settings.

The output is printed to *trace-output* (see the CLHS).

In Slime, we also have an interactive trace dialog with M-x slime-trace-dialog bound to C-c T.

But we can do many more things than calling trace with a simple argument.

Trace options - break and invoke the debugger

trace accepts options. For example, you can use :break t to invoke the debugger at the start of the function, before it is called (more on break below):

(trace factorial :break t)
(factorial 2)

We can define many things in one call to trace. For instance, options that appear before the first function name are global: they affect all traced functions that we add afterwards. Here, :break t is set for every function that follows: factorial, foo and bar:

(trace :break t factorial foo bar)

On the contrary, if an option comes after a function name, it acts as a local option, only for the function preceding it. That’s what we did first. Below, foo and bar come after, so they are not affected by :break:

(trace factorial :break t foo bar)

But do you actually want to break before the function call or just after it? With :break as with many options, you can choose. These are the options for :break:

:break form  ;; before
:break-after form
:break-all form ;; before and after
TIP: form can be any form that evaluates to true. You can add any custom logic here.

Note that we described the trace macro of SBCL. Other implementations may offer the same features with a different syntax and other option names. For example, LispWorks has “:break-on-exit” instead of “:break-after”, and we write (trace (factorial :break t)).

Below are some other options but first, a trick with :break.

break on a condition

The argument to an option can be any form. Here’s a trick, on SBCL, to get the break window when we are about to call factorial with 0. (sb-debug:arg 0) refers to n, the first argument.

CL-USER> (trace factorial :break (equal 0 (sb-debug:arg 0)))
;; WARNING: FACTORIAL is already TRACE'd, untracing it first.

Running it again:

CL-USER> (factorial 3)
  0: (FACTORIAL 3)
    1: (FACTORIAL 2)
      2: (FACTORIAL 1)
        3: (FACTORIAL 0)

breaking before traced call to FACTORIAL:
   [Condition of type SIMPLE-CONDITION]

 0: [CONTINUE] Return from BREAK.
 1: [RETRY] Retry SLIME REPL evaluation request.
 2: [*ABORT] Return to SLIME's top level.
 3: [ABORT] abort thread (#<THREAD "repl-thread" RUNNING {1003551BC3}>)

  0: (FACTORIAL 1)
        N = 1   <---------- n is still 1, we break before the call with 0.

Other options

Trace on conditions

:condition enables tracing only if the condition in form evaluates to true.

:condition form
:condition-after form
:condition-all form

If :condition is specified, then trace does nothing unless Form evaluates to true at the time of the call. :condition-after is similar, but suppresses the initial printout, and is tested when the function returns. :condition-all tries both before and after.
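For example, on SBCL — reusing the sb-debug:arg trick from the break section above, and assuming the factorial function defined earlier — we could print trace output only for even arguments. This is a sketch, not verbatim from the post:

```lisp
;; SBCL-specific sketch: trace factorial, but only report calls
;; whose first argument is even.
(trace factorial :condition (evenp (sb-debug:arg 0)))

;; (factorial 3) should now report only the calls with n = 2 and n = 0.
```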

Trace if called from another function

:wherein can be super useful:

:wherein Names

If specified, Names is a function name or list of names. trace does nothing unless a call to one of those functions encloses the call to this function (i.e. it would appear in a backtrace.) Anonymous functions have string names like “DEFUN FOO”.
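A sketch, where process-batch is a hypothetical caller and factorial is the function defined at the beginning of the post:

```lisp
;; A hypothetical caller of factorial.
(defun process-batch (items)
  (mapcar #'factorial items))

;; Only trace factorial when it is called (directly or indirectly)
;; from process-batch:
(trace factorial :wherein process-batch)

(factorial 3)          ;; not traced: process-batch is not on the stack
(process-batch '(3))   ;; traced
```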

Enrich the trace output

:report Report-Type

If Report-Type is trace (the default) then information is reported by printing immediately. If Report-Type is nil, then the only effect of the trace is to execute other options (e.g. print or break). Otherwise, Report-Type is treated as a function designator and, for each trace event, funcalled with 5 arguments: trace depth (a non-negative integer), a function name or a function object, a keyword (:enter, :exit or :non-local-exit), a stack frame, and a list of values (arguments or return values).
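As a sketch of a custom reporter matching the five-argument signature described above (treat it as an illustration, and check your SBCL version's documentation):

```lisp
;; Print one compact line per trace event (:enter, :exit, ...),
;; indented by the trace depth.
(defun one-line-report (depth what event frame values)
  (declare (ignore frame))
  (format *trace-output* "~&~v@T[~a] ~a ~s~%" depth event what values))

;; Use it as the report function for factorial:
(trace factorial :report one-line-report)
```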

See also :print to enrich the trace output:

In addition to the usual printout, the result of evaluating Form is printed at the start of the function, at the end of the function, or both, according to the respective option. Multiple print options cause multiple values to be printed.


(defparameter *counter* 0)
(defun factorial (n)
  (incf *counter*)
  (if (plusp n)
    (* n (factorial (1- n)))
    1))
CL-USER> (trace factorial :print *counter*)
CL-USER> (factorial 3)
  0: (FACTORIAL 3)
  0: *COUNTER* = 0
    1: (FACTORIAL 2)
    1: *COUNTER* = 1
      2: (FACTORIAL 1)
      2: *COUNTER* = 2
        3: (FACTORIAL 0)
        3: *COUNTER* = 3
        3: FACTORIAL returned 1
      2: FACTORIAL returned 1
    1: FACTORIAL returned 2
  0: FACTORIAL returned 6

Closing remark

As they say:

it is expected that implementations extend TRACE with non-standard options.

and we didn’t list all available options or parameters, so you should check out your implementation’s documentation.

For more debugging tricks, see the Cookbook and the links in it; the Malisper series has nice GIFs.

I am also preparing a short screencast to show what we can do inside the debugger, stay tuned!

vindarel: Lisp for the web: building one standalone binary with foreign libraries, templates and static assets

· 5 days ago

In our previous entry, we saw how to deploy our web application with Systemd, either from sources or with a binary. Now we’ll speak more about this building process to produce one binary that contains everything for our web app. We’ll tackle 3 issues:

  • ship foreign libraries alongside your binary,
  • include your Djula templates into your binary,
  • serve static files from your binary, without reading the filesystem,
  • and we’ll see my Gitlab CI recipe.

This allows us to create a binary that is really easy to deploy, to ship to users or to embed in an external process such as an Electron window (more on that later). Coming from Python and JS, what a dream!

Now, I want to thank the people who helped me figure these issues out and who wrote, fixed and extended these libraries: special shout-out to @mmontone for writing a Djula patch so quickly, @shinmera for Deploy and for answering my many questions on Discord, @zulu.inoe for finding Hunchentoot answers, and everybody else on Discord for their help (@gavinok, @fstamour et al., sorry if I forgot anyone) and all who dare ask questions so everybody can learn!

Table of Contents - Ship foreign libraries: the need of Deploy - Configuring Deploy: ignore libssl, verbosity - Remember: your program runs on another user’s machine. - Telling ASDF to calm down - Embed HTML Djula templates in your binary - Serve static assets - Gitlab CI - Closing remarks

Ship foreign libraries: the need of Deploy

Deploy is the way to go. If you used asdf:make in your .asd system definition to create a binary already, you just need to change two things:

;; my-project.asd
:defsystem-depends-on (:deploy)  ;; so you need to quickload deploy sometime before.
:build-operation "deploy-op"  ;; instead of program-op for asdf:make

and those two lines stay the same:

:build-pathname "my-application-name"
:entry-point "my-package:my-start-function"
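Put together, a minimal .asd might look like this (the system, package and file names are placeholders):

```lisp
;; my-project.asd -- hypothetical minimal system built with Deploy.
(asdf:defsystem "my-project"
  :defsystem-depends-on (:deploy)            ;; quickload Deploy beforehand
  :build-operation "deploy-op"               ;; instead of program-op
  :build-pathname "my-application-name"
  :entry-point "my-package:my-start-function"
  :components ((:file "main")))
```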

Here’s my Makefile target, where I quickload Deploy before loading my app and calling asdf:make:

LISP ?= sbcl

build:
	$(LISP)	--non-interactive \
		--eval '(ql:quickload "deploy")' \
		--load openbookstore.asd \
		--eval '(ql:quickload :openbookstore)' \
		--eval '(asdf:make :openbookstore)'

This creates a bin/ directory with our binary and the foreign libraries:

  -rwxr-xr-x  1 vindarel vindarel 130545752 Nov 25 18:48 openbookstore
  -rw-rw-r--  1 vindarel vindarel    294632 Aug  3 13:06
  -rw-rw-r--  1 vindarel vindarel    319528 Aug 23 18:01
  -rw-rw-r--  1 vindarel vindarel   1212216 Aug 24 16:42
  -rw-rw-r--  1 vindarel vindarel    116960 Aug  3 13:06

We need to deploy this directory.

When we start the binary, Deploy tells us what it is doing:

$ ./bin/openbookstore --datasource argentina lisp
 ==> Performing warm boot.
   -> Runtime directory is /home/vindarel/projets/openbookstore/openbookstore/bin/
   -> Resource directory is /home/vindarel/projets/openbookstore/openbookstore/bin/
 ==> Running boot hooks.
 ==> Reloading foreign libraries.
   -> Loading foreign library #<LIBRARY READLINE>.
   -> Loading foreign library #<LIBRARY SQLITE3-LIB>.
   -> Loading foreign library #<LIBRARY LIBSSL>.
   -> Loading foreign library #<LIBRARY LIBCRYPTO>.
 ==> Launching application.
OpenBookStore version 0.2-d2ac5f2
==> Epilogue.
==> Running quit hooks.

We can configure Deploy.

Configuring Deploy: ignore libssl, verbosity

You can silence the Deploy statuses by pushing :deploy-console into the *features* list, before calling asdf:make. Add this to the Makefile:

--eval '(push :deploy-console *features*)'

Now all seems well, you rsync your app to your server, run it and... you get a libssl error:

=> Deploying files to /home/vindarel/projets/myapp/commandes-collectivites/bin/
Unhandled SIMPLE-ERROR in thread #<SB-THREAD:THREAD "main thread" RUNNING
  #<LIBRARY LIBCRYPTO> does not have a known shared library file path.

Nicolas (@shinmera) explained that we typically want to import libssl or libcrypto from the target system, that “deploying these libraries without them blowing up on Linux is hard”. To do this, we ask Deploy to not handle them. In the .asd:

#+linux (deploy:define-library cl+ssl::libssl :dont-deploy T)
#+linux (deploy:define-library cl+ssl::libcrypto :dont-deploy T)

As a consequence, you now need to quickload or require :cl+ssl before loading the .asd file, because of the cl+ssl::libssl/libcrypto symbols at the top level.

Nicolas built all this for his needs when working on his Trial game engine and on his Kandria game (soon on Steam!), check them out!

Remember: your program runs on another user’s machine.

By this I mean that if you took the habit to use functions that locate a project’s source directory (asdf:system-source-directory, asdf:system-relative-pathname, for example when asking Hunchentoot to serve static assets, more on that below), then you need to re-write them, because your binary runs on another machine and it doesn’t run from sources, so ASDF, Quicklisp and friends are not installed, and your project(s) don’t have source directories, they are embedded in the binary.

Use a deploy:deployed-p runtime check if needed.
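For instance, a path helper could branch on it (my-project and the directory layout here are placeholders):

```lisp
;; Sketch: only touch the source tree when we are NOT a deployed binary.
(defun static-directory ()
  (if (deploy:deployed-p)
      ;; In a binary there is no source directory on disk; assets must
      ;; come from elsewhere (e.g. compiled into the image).
      nil
      (asdf:system-relative-pathname "my-project" "src/static/")))
```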

Telling ASDF to calm down

Now, we are very happy and confident, what could possibly go wrong? We run our app once again on our naked VPS:

$ ./bin/myapp
 ==> Performing warm boot.
   -> Runtime directory is /home/debian/websites/app/myapp/bin/
   -> Resource directory is /home/debian/websites/app/myapp/bin/
 ==> Running boot hooks.
 ==> Reloading foreign libraries.
   -> Loading foreign library #<LIBRARY LIBSSL>.
   -> Loading foreign library #<LIBRARY LIBRT>.
   -> Loading foreign library #<LIBRARY LIBOSICAT>.
   -> Loading foreign library #<LIBRARY LIBMAGIC>.
 ==> Launching application.
   You are using ASDF version from
   #P"/home/vindarel/common-lisp/asdf/asdf.asd" and have an older version of ASDF
   (and older than 2.27 at that) registered at

  [ long message elided ]

; compilation unit aborted
;   caught 1 fatal ERROR condition
An error occured:
 Error while trying to load definition for system asdf from pathname
    Couldn't load #P"/home/vindarel/common-lisp/asdf/asdf.asd": file does not
    exist. ==> Epilogue.
 ==> Running quit hooks.

Now ASDF wants to do what, update itself? Whatever it tries to do, it crashes. Yes, this happens on the target host, when we run the binary. Damn!

The solution is easy, but it had to be documented or google-able... Add this to your .asd to tell ASDF not to try to upgrade itself:

(deploy:define-hook (:deploy asdf) (directory)
  ;; Thanks again to Shinmera.
  (declare (ignorable directory))
  #+asdf (asdf:clear-source-registry)
  #+asdf (defun asdf:upgrade-asdf () nil))

By the way, if you want a one-liner to upgrade ASDF to 3.3.5 so that you can use package-local nicknames, check this Lisp tip.

Embed HTML Djula templates in your binary

Our binary now runs fine on our server: super great. But our app has another issue.

I like Djula templates very much; they are maintained by @mmontone. It is a traditional, no-surprises HTML templating system, very similar to Django templates. It is easy to set up, it is very easy to create custom filters, and it has good error messages, both in the browser window and at the Lisp REPL. It’s one of the most downloaded Quicklisp libraries. Like Django templates, its philosophy is that it doesn’t allow a lot of computation in the template. It encourages you to prepare your data on the back-end, so that it is straightforward to process in the templates. Sometimes this is limiting, so for more flexibility I’d look at Ten. It isn’t as widely used and tested, though (and I didn’t try it myself). If you want lispy templates, look at Spinneret. You can say goodbye to copy-pasting nice-looking HTML examples, though.

However, with Spinneret you would not have faced the following issue:

Djula reads templates from your file system.

and when your application runs on someone else’s machine, this is undefined behaviour.

Until now, you had to deploy your web app from sources or, at least, you had to send the HTML files to the server. This was the case until I talked about this issue to Mariano. He sent a patch the day after.

Now, we can choose: by default, Djula reads the HTML files from disk: very well. But now, when we build our binary, we can ask Djula to build the templates in memory, so they are saved into the Lisp binary.

Normally, you only need to tell Djula where to find templates (“add a template directory”), then to compile them into a variable:

;; normal, file-system case.
(djula:add-template-directory (asdf:system-relative-pathname "webapp" "templates/"))
(defparameter +base.html+ (djula:compile-template* "base.html"))

;; and then, we render the template with (djula:render-template* nil +base.html+ ...)

This uses a filesystem-template-store. In addition, it recompiles templates on change. This can be turned off as we’ll see.

For our binary, we need to set Djula’s *current-store* to a memory-template-store AND we need to turn off the djula:*recompile-templates-on-change* setting. Then, we need to compile all the templates of our application, and save our binary.

I actually do all this at the top-level of my web.lisp file. By default I load the app for development, and if we find a custom “feature”, that is added by the “build” target of the Makefile, we compile templates in memory.

So, in order:

  1. in the “build” target of my Makefile, I push a new setting in the *features* list:

    --eval '(push :djula-binary *features*)'
  2. in my web.lisp, I check for this setting (with #+djula-binary) and I create either a filesystem template store or a memory store. This is written at the top level so it will be executed when we load the file. We can probably come up with better ergonomics.

This will be executed when I quickload my app in the build target of the Makefile, following the one above.

    --eval '(ql:quickload :openbookstore)'
(setf djula:*current-store*
      (let ((search-path (list (asdf:system-relative-pathname "openbookstore"
                                                              "src/web/templates/"))))
        #-djula-binary
        (progn
          (uiop:format! t "~&Openbookstore: compiling templates in file system store.~&")
          ;; By default, use a file-system store and reload templates during development.
          (setf djula:*recompile-templates-on-change* t)
          (make-instance 'djula:filesystem-template-store
                         :search-path search-path))

        ;; But, if the :djula-binary feature was pushed at build time, from the Makefile,
        ;; we are building a standalone binary: use an in-memory template store.
        ;; We also need to NOT re-compile templates on change.
        #+djula-binary
        (progn
          (uiop:format! t "~&Openbookstore: compiling templates in MEMORY store.~&")
          (setf djula:*recompile-templates-on-change* nil)
          (make-instance 'djula:memory-template-store :search-path search-path))))
  3. compile all the templates. If you used a web framework (or started developing your own), you might have used a shortcut: calling a render function which takes the name of a template as a string argument. I’m thinking of Caveman:
@route GET "/"
(defun index ()
  (render #P"index.tmpl"))

This string denotes the name of the template. For a standalone binary, we need to compile the template beforehand. That’s why Djula shows how to define and compile our templates:

(defparameter +base.html+ (djula:compile-template* "base.html"))

You need this line for every template of your application:

;;; Compile and load templates (as usual).
(defparameter +base.html+ (djula:compile-template* "base.html"))
(defparameter +dashboard.html+ (djula:compile-template* "dashboard.html"))
(defparameter +search.html+ (djula:compile-template* "search.html"))
(defparameter +stock.html+ (djula:compile-template* "stock.html"))
(defparameter +card-page.html+ (djula:compile-template* "card-page.html"))
(defparameter +card-stock.html+ (djula:compile-template* "card-stock.html"))
(defparameter +card-create.html+ (djula:compile-template* "card-create.html"))
(defparameter +card-update.html+ (djula:compile-template* "card-edit.html"))
;; and so on.
  4. let asdf:make (and Deploy) save your binary. To try it out, rename your templates/ directory to something else and run your app.


An .asd system definition can reference static files, so they are part of the build process and included into the delivered application. That’s how you can ship a README file:

  :components ((:static-file ""))

I did the same for my templates. To be honest, I don’t recall if there is a solid reason, since they are compiled and saved into the image anyhow with the compilation step above. I show this anyways, it looks like a good practice to me:

            (:module "src/web/templates"
                     ;; Order is important.
                     :components
                     ((:static-file "login.html")
                      (:static-file "404.html")
                      (:static-file "base.html")
                      (:static-file "dashboard.html")
                      (:static-file "history.html")
                      (:static-file "search.html")
                      (:static-file "sell.html")
                      ;; and so on.
                      ))

Now we need to programmatically get the list of files from this src/web/templates module and compile everything:

(let ((paths (djula:list-asdf-system-templates "bookshops" "src/web/templates")))
  (loop for path in paths
     do (uiop:format! t "~&Compiling template file: ~a...~&" path)
       (djula:compile-template* path))
  (values t :all-done))

This snippet and the general instructions are documented in Djula's documentation.

Feel free to show how you do it.

Bonus: here’s the list-asdf-system-templates function. We use asdf functions to get a system name, its components, their names...

(defun list-asdf-system-templates (asdf-system component)
  "List Djula templates in ASDF-SYSTEM at COMPONENT.
  A list of template PATHNAMEs is returned."
  (let* ((sys (asdf:find-system asdf-system))
         (children (asdf:component-children sys))
         (module (or (find component children :key #'asdf:component-name :test #'equal)
                     (error "Component ~S not found in system named ~S.~&Available components are: ~S" component asdf-system (mapcar #'asdf:component-name children))))
         (alltemplates (remove-if-not (lambda (x) (typep x 'asdf:static-file))
                                      (asdf:module-components module))))
    (mapcar (lambda (it) (asdf:component-pathname it))
            alltemplates)))

Serve static assets

At that point, I figured out that static assets would need to be worked on too. Fortunately, the people on Discord helped me and it was quickly solved.

This is how I served static assets with Hunchentoot. We use a “folder dispatcher and handler”:

(defun serve-static-assets ()
  "Serve static assets under the /src/static/ directory when called with the /static/ URL root."
  (push (hunchentoot:create-folder-dispatcher-and-handler
         "/static/" (merge-pathnames *default-static-directory*
                                     (asdf:system-source-directory :openbookstore))) ;; => NOT src/
        hunchentoot:*dispatch-table*))

But when your app is on another machine... hence the need to ship the static assets into the standalone binary, and to ask Hunchentoot to serve them.

What we do is pretty straightforward: save our static files into a data structure, so that it is saved in the image. But we use a couple of Lisp tricks, so I comment the code below.

You’ll see that this time I hardcoded the file names and didn’t declare them in the .asd file... clearly there is room for improvement, be my guest.

You can find the file I use for my application here.

;;; pre-web.lisp
;;; Parameters and functions required before loading web.lisp
;;; We read the content of our static files and put them into variables, so that they can be saved in the Lisp image.
;;; We define %serve-static-file to simply return their content (as a string),
;;; and because we use the #. reader macro, we need to put these functions in another file than web.lisp.

;;; Where my static files are:
(defparameter *default-static-directory* "src/static/"
  "The directory where to serve static assets from (STRING). If it starts with a slash, it is an absolute directory. Otherwise, it will be a subdirectory of where the system :abstock is installed.
  Static assets are reachable under the /static/ prefix.")

;;; We simply use a hash-table that maps a file name to its content, as a string.
;;; I love Serapeum's dict which is a readable hash-table, that's what I use:
(defparameter *static-files-content* (dict)
  "Content of our JS and CSS files.
  Hash-table with file name => content (string).")

;;; I read all my static files and I save them into the hash-table:
(defun %read-static-files-in-memory ()
  "Save the JS and CSS files in a variable in memory, so they can be saved at compile time."
  (loop for file in (list "openbookstore.js"
                          "card-page.js")
     with static-directory = (merge-pathnames *default-static-directory*
                                              (asdf:system-source-directory :bookshops))
     for content = (uiop:read-file-string (merge-pathnames file static-directory))
     do (setf (gethash file *static-files-content*) content)
     finally (return *static-files-content*)))

;; AT COMPILE TIME, read the content of our static files.
(%read-static-files-in-memory)

(defun %serve-static-file (path)
  "Return the content as a string of this static file.
  For standalone binaries delivery."
  ;; "alert('yes, compiled in pre-web.lisp');"  ;; JS snippet to check if this dispatcher works.
  (gethash path *static-files-content*))  ;; this would not work without the #. reader macro.

It is inside “web.lisp” that I set other rules for Hunchentoot: if a request matches one of my static files, we simply return its content, as a string.

I don’t know if this works well with very big or with numerous files. But I-want-a-standalone-binary! For serious needs, I’d serve the static files with a proper server... I guess.

We use the #. reader macro to get our files’ content at compile time; this is why we needed to define our helper functions in another file, which is loaded before this one.
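As a reminder of what #. does (this is standard Common Lisp, independent of the app):

```lisp
;; #. evaluates the following form at read time.
;; The sum below is computed when this file is read; the compiled
;; function simply returns the constant.
(defun answer () #.(+ 40 2))

(answer) ;; => 42
```

The same mechanism bakes each file's content into its dispatcher lambda as a string constant when web.lisp is read.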

;;; web.lisp
(defun serve-static-assets-for-release ()
  "In a binary release, Hunchentoot can not serve files under the file system: we are on another machine and the files are not there.
  Hence we need to get the content of our static files into memory and give them to Hunchentoot."
   (hunchentoot:create-regex-dispatcher "/static/openbookstore\.js"
                                        (lambda ()
                                          ;; Returning the result of the function calls silently fails. We need to return a string.
                                          ;; Here's the string, read at compile time.
                                          #.(%serve-static-file "openbookstore.js")))

   (hunchentoot:create-regex-dispatcher "/static/card-page\.js"
                                        (lambda ()
                                          #.(%serve-static-file "card-page.js")))

Finally, it is inside my start-app function that I decide how to serve my static assets:

  (hunchentoot:start *server*)
  (if (deploy:deployed-p)
      ;; Binary release: don't serve files by reading them from disk.
      (serve-static-assets-for-release)
      ;; Normal setup, running from sources: serve static files as usual.
      (serve-static-assets))
  (uiop:format! t "~&Application started on port ~a.~&" port)

Find my web.lisp file here.

Gitlab CI

I build my binary on Gitlab.

image: clfoundation/sbcl

# uncomment to run the jobs in parallel. They are now run one after the other.
# stages:
  # - test
  # - build

# We need to install some system dependencies,
# to clone libraries not in Quicklisp,
# and to update ASDF to >= 3.3.5 in order to use package-local nicknames.
before_script:
  - apt-get update -qy
  - apt-get install -y git-core sqlite3 tar
  # The image doesn't have Quicklisp installed by default.
  - QUICKLISP_ADD_TO_INIT_FILE=true /usr/local/bin/install-quicklisp
  # clone libraries not in Quicklisp or if we need the latest version.
  - make install
  # Upgrade ASDF (UIOP) to 3.3.5 because we want package-local-nicknames.
  - mkdir -p ~/common-lisp/asdf/
  - ( cd ~/common-lisp/ && wget  && tar -xvf asdf-3.3.5.tar.gz && mv asdf-3.3.5 asdf )
  - echo "Content of ~/common-lisp/asdf/:" && ls ~/common-lisp/asdf/

  allow_failure: true
  # stage: test
    # QA tools:
    # install Comby:
    - apt-get install -y sudo
    # - bash <(curl -sL
    - bash <(curl -sL
    # install Colisper for simple lisp checks:
    - git clone ~/colisper
    - chmod +x ~/colisper/
    - cd src/ && ~/colisper/

build:
  # stage: build
  script:
    - make build
  artifacts:
    name: "openbookstore"
    # Publish the bin/ directory (todo: rename, include version...)
    paths:
      - bin/

Closing remarks

I am so excited by the possibilities this brings.

I knew it was possible to do this in CL, but I admit I thought it would be simpler... it turns out it is not a very well-trodden path. Now the steps are documented and google-able, here and everywhere else I could leave a comment, but it would be nice to come up with shorter and friendlier ready-to-use utilities. In a new web framework? And again, please share how you do all of this in the comments.

Having this standalone binary dramatically simplifies my deployment process. With a small web app, running from sources was easy (once you set up Quicklisp, and ASDF, and...). But with a growing application, that uses my local forks or code not yet pushed to GitHub, deployment was becoming tedious, and it is now greatly simplified. rsync, systemctl restart and done.

Its only limitation is that you need the same libc version on the target OS as on your build machine. So, back in August I could build on my machine and send the result to my VPS, but then I upgraded my Debian-ish system and left the server with its (very) old Ubuntu version, so I can’t run binaries built on my machine there any more... I must resort to a CI pipeline that uses a matrix of Ubuntu versions, or build with Docker or a virtual machine. Or run from sources... Maybe soon I will build (truly) static executables: they are coming to SBCL.

I repeat again the good side: my friends (on Debian so far) can download the app, run bin/openbookstore and it works \o/

One last thing I’d like to do is to be able to double-click an executable to start the app, and to have one single file (and not an archive that extracts as a directory, although it is not too bad!). This looks possible with Makeself.

If you want to try the standalone binary on your GNU/Linux system (does it actually work on other distros?), download the artifacts of the latest passing build on the pipelines page, or grab it with this direct link. Un-zip, run bin/bookshops and go to localhost address shown in the output (and also create an admin user as shown in the readme). You can leave me a comment here, on Gitter, on Discord or with a good ol’ email.

Stay tuned, OpenBookStore is still a work in progress but it will be a Common Lisp application flagship ;)

Nicolas HafnerRelease Date Announcement! - November Kandria Update

· 18 days ago

This update's an important one! The final release date, a new trailer, and some more announcements. Dang! Well, without further ado:

Release Date: 11th of January!

Alright, I'm happy to announce that we got a final release date for Kandria, which is as the title says, Wednesday, 11th of January 2023! To celebrate the release date announcement, and the large amounts of progress we've made polishing the game, please enjoy this brand new trailer as well:

The game will release on Steam, and on the website as a direct sale. All copies of the game will be DRM-free. I'm also excited to say that the game will release both in English and in German, translated by yours truly. We have enough time left over to do the localisation, so I really want to do it.

Shevalin Single

The full soundtrack of the game (which is excellent, by the way!) will be released together with the game in January. However, you can enjoy a single from the soundtrack right now:

This is Shevalin, the ending credits song, composed by our very talented Mikel Dale, and sung by the incredible Julie Elven. I hope you enjoy it!

User Feedback

Currently we're still polishing everything we can find and responding to user feedback. One of the most prominent things we noticed watching people play was that they were confused by the locked doors in the first region of the game. So hey, we finally added crashable doors:

If you're part of the beta programme, please give the game a try! We still have some time to include more changes if you have any suggestions.


Okey, last month we got a new roadmap, so let's look at that now:

  • Add even more detail tiles, foliage, and animal spawners throughout the world

  • Fine-tune the levelling, trade prices, and enemy difficulty

  • Create new key art

  • Create a new trailer

  • Spruce up some of the sound effects and music tracks

  • Create achievement icons and integrate them into the game

  • Translate everything (over 50'000 words) into German

  • Release the full game

  • Backport re-usable components into Trial

  • Separate out the assets from the main repository

  • Publish the source code under a permissive license

  • Fix a plethora of bugs and inconveniences in Alloy

  • Polish the editor and make it more stable

  • Release the editor

  • Develop a modding system for Trial

  • Finish the Forge build system at least to an extent where it's usable

  • Integrate the API with the modding system

  • Create a mod manager and browser UI

  • Document more parts of Trial and Kandria

  • Release an official modding kit

Alright! Until the 11th of January finally hits, please continue to share the steam page with your friends and other communities!

Quicklisp newsNovember 2022 Quicklisp dist update now available

· 27 days ago

 New projects:

  • 40ants-asdf-system — Provides a class for being used instead of asdf:package-inferred-system. — BSD
  • action-list — An implementation of action lists — zlib
  • adp — Add Documentation, Please. A documentation generator. — The Unlicense
  • anatevka — A distributed blossom algorithm for minimum-weight perfect matching. — MIT
  • cl-annot-revisit — Re-implementation of 'cl-annot', an annotation syntax library for Common Lisp. — WTFPL
  • cl-bloom-filter — Just another Common Lisp bloom filter implementation, enjoy it! — 
  • cl-cblas — A cl-autowrap generated wrapper around CBLAS which provides a C interface to the Basic Linear Algebra Subprograms. — MIT
  • cl-djula-svg — Handle SVGs in Djula Templates — MIT
  • cl-djula-tailwind — Tailwind classes for Djula templates — MIT
  • cl-facts — in-memory graph database — ISC
  • cl-glib — GLib binding for Common Lisp. — lgpl3
  • cl-gobject-introspection-wrapper — Wrap and call GObject Introspection FFI function in LISP style, based on cl-gobject-introspection. — lgpl3
  • cl-lessp — Generic order predicate — ISC
  • cl-oju — Common Lisp equivalents of core Clojure functions, especially sequence-related ones — MIT
  • cl-rollback — rollback functions — ISC
  • cl-sentry-client — Sentry client — MIT
  • cl-union-find — An implementation of UNION-FIND datastructure — LGPL
  • climc — A common lisp Instant Messaging client. — MIT License
  • clog-plotly — New CLOG System — BSD
  • clog-terminal — CLOG Terminal — BSD
  • de-mock-racy — Simplistic mocking library. — BSD simplified
  • distributions — Random numbers and distributions — MS-PL
  • dsm — Destructuring match — MIT
  • easy-macros — An easier way to write 90% of your macros — Apache License, Version 2.0
  • filesystem-utils — A collection of utilities for filesystem interaction. — zlib
  • filter-maker — CLIM program for letting users make filters out of predicates and keys. — BSD 2-Clause
  • fiveam-matchers — An extensible matchers library for FiveAM — Apache License, Version 2.0
  • infix-reader — A reader macro to allow for infix syntax with { ... } — Unlicence
  • input-event-codes — Port of all constants from input-event-codes.h from both Linux and FreeBSD — MIT
  • instance-tracking — Defines a class that tracks its instances — MIT
  • json-lib — A simple and relatively fast JSON parser and encoder — MIT
  • lineva — Linear evaluation macro system — GPLv3
  • luckless — Lockless data structures — zlib
  • more-cffi — Extension of the CFFI project. A facility to wrap C bindings and write documentation. — The Unlicense
  • music-spelling — Automatic pitch and rhythm spelling. — Apache 2.0
  • nail — library providing convenient functions for working with linalg, statistics and probability. — MIT
  • ndebug — A toolkit to construct interface-aware yet standard-compliant debugger hooks. — BSD 3-Clause
  • numericals — A high performance numerical computing library for Common Lisp (focus: basic math operations) — MIT
  • ospm — OS package manager interface — BSD 3-Clause
  • pero — Logging and text file perations library — MIT
  • pk-serialize — Serialization of Common Lisp data structures — MIT
  • statistics — A consolidated system of statistical functions — MS-PL
  • stepster — Web scraping library — MIT
  • testiere — Up Front Testing for DEFUN and DEFMETHOD — GPLv3
  • trivial-sanitize — clean html strings: "foo" → "foo" — LLGPL
  • tsqueue — Thread Safe Queue — MIT
  • typo — A portable type inference library for Common Lisp — MIT
  • wayflan — From-scratch Wayland client implementation — BSD 3-Clause
  • yah — Yet Another Heap — BSD-3

Updated projects: 3d-quaternions, 3d-vectors, abstract-arrays, acclimation, agnostic-lizard, alexandria-plus, architecture.builder-protocol, array-utils, assoc-utils, auto-restart, bdef, bit-smasher, blackbird, bp, bst, caveman, cerberus, cffi, chunga, ci, ci-utils, cl+ssl, cl-all, cl-async, cl-autowrap, cl-bmas, cl-charms, cl-collider, cl-confidence, cl-cron, cl-data-structures, cl-form-types, cl-forms, cl-gamepad, cl-generator, cl-git, cl-gserver, cl-i18n, cl-info, cl-interpol, cl-isaac, cl-json-pointer, cl-kaputt, cl-las, cl-lib-helper, cl-liballegro, cl-liballegro-nuklear, cl-libuv, cl-lzlib, cl-marshal, cl-migratum, cl-mixed, cl-mock, cl-naive-store, cl-openal, cl-patterns, cl-pdf, cl-protobufs, cl-randist, cl-random-forest, cl-replica, cl-scsu, cl-semver, cl-sendgrid, cl-ses4, cl-steamworks, cl-str, cl-telegram-bot, cl-tls, cl-torrents, cl-unix-sockets, cl-utils, cl-wav, cl-webkit, cl-xkb, cl-yaml, cl-yxorp, cl-zstd, clack, clgplot, clingon, clj-re, clobber, clog, clog-ace, closer-mop, clsql, clss, cluffer, clunit2, clx, cmd, coleslaw, common-lisp-jupyter, commondoc-markdown, compiler-macro-notes, conduit-packages, consfigurator, croatoan, css-lite, cytoscape-clj, damn-fast-priority-queue, data-frame, data-lens, data-table, datamuse, defmain, dense-arrays, depot, dexador, dfio, dissect, doc, docparser, docs-builder, eclector, erudite, extensible-compound-types, fast-io, fiveam-asdf, flare, float-features, font-discovery, for, functional-trees, github-api-cl, gtirb-capstone, gtirb-functions, gtwiwtg, gute, harmony, http2, hunchensocket, hunchentoot-errors, imago, in-nomine, ironclad, jp-numeral, json-schema, jsonrpc, kekule-clj, lack, latter-day-paypal, lift, linear-programming, linear-programming-glpk, lisp-binary, lisp-critic, lisp-namespace, lisp-stat, lisp-unit2, literate-lisp, log4cl-extras, ltk, lunamech-matrix-api, markup, math, mcclim, mito, mnas-graph, mnas-package, multiposter, mutility, myway, neural-classifier, nfiles, nhooks, nkeymaps, nodgui, 
numcl, numerical-utilities, nyxt, omglib, one-more-re-nightmare, osc, osicat, overlord, papyrus, parachute, pathname-utils, periods, petalisp, pgloader, piping, plot, plump, polymorphic-functions, posix-shm, postmodern, pp-toml, query-fs, quick-patch, quri, random-state, replic, rutils, sel, select, serapeum, shasht, shop3, simple-neural-network, sketch, skippy-renderer, slite, sly, snakes, special-functions, speechless, spinneret, staple, stripe-against-the-modern-world, stumpwm, stumpwm-dynamic-float, tfeb-lisp-hax, tfeb-lisp-tools, trace-db, trivial-clipboard, trivial-extensible-sequences, trivial-file-size, trivial-mimes, uax-15, uiop, usocket, utilities.print-items, utilities.print-tree, vellum, vellum-binary, vellum-postmodern, vk, with-c-syntax, wuwei, xml-emitter, yason, zippy.

Removed projects: cl-json-template, cl-schedule, cl-splicing-macro, mito-attachment, trivial-timers.

To get this update, use (ql:update-dist "quicklisp")

I apologize for the long gap between this update and the last. I intend to get back on a monthly schedule.

Pascal CostanzaNew blog address

· 31 days ago

I am moving my blog away from blogspot / blogger. I am going to host my new blog at You can subscribe to an RSS feed on Lisp-related posts if you care only for that. also acts as a social network and, although it is its own platform, is compatible with Mastodon. My Mastodon handle is


Joe MarshallLisp: Second impression

· 43 days ago

My first impressions of Lisp were not good. I didn't see how navigating list structure was of any use. It seemed to be just a more cumbersome way of getting at the data.

In fact, my first impressions of computer science were not very positive. I enjoyed hobbyist coding on my TRS-80, but “real” programming was tedious and the proscriptions of “doing it the correct way” took the joy out of it. I explored other options for my major. Fate intervened. Over the next year I realized my calling was EECS, so in my sophomore year I took all the intro courses.

I had heard that the introductory computer science course used Lisp. That was disappointing, but I started hearing things about Lisp that made me think I should take a second look. I learned that Lisp was considered the premier language of MIT's Artificial Intelligence Laboratory. It was invented by hackers and designed to be a programmable programming language that was infinitely customizable. The lab had developed special computers that ran Lisp on the hardware. The OS was even written in Lisp. I wasn't looking forward to car and cdr'ing my way through endless cons cells, but I figured that there had to be more going on.

6.001 was unlike the prior computer courses I had taken. The course was not about how to instruct a computer to perform a task — the course was about expressing ideas as computation. To me, this seemed a much better way to approach computers. Professor Hal Abelson was co-lecturing the course. He said that he chose Lisp as the teaching language because it was easier to express ideas clearly.

Two things stood out to me in the first lecture. Professor Abelson showed the recursive and iterative versions of factorial. Of course I had seen recursive factorial from the earlier course and I knew how it worked. Clearly the iterative version must work the same way. (One of my early hangups about Lisp was all the recursion.) I was surprised to find out that the Lisp system would automatically detect tail recursive cases and turn them into iteration. Evidently, the makers of Lisp had put some thought into this.

Professor Abelson also demonstrated first class functions. He wrote a procedure that numerically approximates the derivative of a function. He then used that in a generic Newton's method solver. This is all straightforward stuff, but to a newbie like me, I thought it was amazing. In just a few lines of code we were doing simple calculus.

It was a mystery to me how first class functions were implemented, but I could see how they were used in the Newton's method solver. The code Professor Abelson wrote was clear and obvious. It captured the concept of derivatives and iterative improvement concisely, and it effectively computed answers to boot. I had to try it. Right after the lecture I went to lab and started typing examples at the REPL. Sure enough, they worked as advertised. A tail-recursive loop really didn't push any stack. It didn't leak even the tiniest bit of memory, no matter how long the loop. I tried the Newton's method solver to take cube roots. I passed the cube function to the derivative function and the result was a function that was numerically close to the derivative.

Now I was a bit more impressed with Lisp than I was earlier. I wasn't completely sold, but I could see some potential here. I wanted to learn a bit more before I dismissed it entirely. It took me several months to become a Lisp fan. The parentheses were a small hurdle — it took me a couple of weeks to get the hang of let forms. There was a week or two of navigating cons cells to wade through. But I eventually came to love the language.

My first impression of Lisp was poor. The uselessness of traversing random list structure was unmotivating. My second impression was better. Professor Abelson teaching directly from preprints of S&ICP might have had something to do with it.

Joe MarshallLisp: First Impressions

· 45 days ago

My first exposure to Lisp was in the summer of 1981. I was taking a summer school intro to computers. The course was taught on a PDP-11, and for the first few weeks we programmed in Macro-11 assembly language. For the last couple of weeks they introduced Lisp.

Frankly, I wasn't impressed.

The course started by talking about linked lists and how you could navigate them with car and cdr. We then went on to build more complicated structures like alists and plists. This was an old-fashioned lisp, so we used things like getprop and putprop to set symbol properties.

The subject matter wasn't difficult to understand (though chasing pointers around list structure is error prone). Since we had just been learning Macro-11, it was natural to play with linked list structure in assembly code. We wrote assembly code to look things up in a plist.

My impression was that Lisp was centered around manipulating these rather cumbersome data structures called cons cells. Linked lists of cons cells have obvious disadvantages when compared to arrays. This makes the language tedious to work with.

The summer school course was my first “real” college course in computers. I was put off. “Real” computing wasn't as much fun as I had hoped it would be. I definitely wouldn't be considering it as a major, let alone a career. I wasn't interested in Lisp at all.

to be continued

Tim BradshawPackage-local nicknames

· 51 days ago

What follows is an opinion. Do not under any circumstances read it. Other opinions are available (but wrong).

Package-local nicknames are an abomination. They should be burned with nuclear fire, and their ashes launched into space on a trajectory which will leave the Solar System.

The only reason why package-local nicknames matter is if you are writing a lot of code with lots of package-qualified names in it. If you are doing that then you are writing code which is hard to read: the names in your code are longer than they need to be and the first several characters of them are package name noise (people read, broadly from left to right). Imagine me:a la:version ge:of oe:English oe:where la:people wrote like that: it’s just horrible. If you are writing code which is hard to read you are writing bad code.

Instead you should do the work to construct a namespace in which the words you intend to use are directly present. This means constructing suitable packages: the files containing the package definitions are then almost the only place where package names occur, and are a minute fraction of the total code. Doing this is a good practice in itself because the package definition file is then a place which describes just what names your code needs, from where, and what names it provides. Things like conduit packages (shameless self-promotion) can help with this, which is why I wrote them: being able to say ‘this package exports the combination of the exports of these packages, except …’ or ‘this package exports just the following symbols from these packages’ in an explicit way is very useful.
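As a minimal sketch of this practice (both package names below are invented for illustration), the package definition forms confine all the package-name noise to one place, and the code that follows reads without a single qualifier:

```lisp
;;; Package-name noise lives only in definition forms like these.
(defpackage :org.example.linear-algebra
  (:use :common-lisp)
  (:export #:norm))

(in-package :org.example.linear-algebra)

(defun norm (v)
  ;; Euclidean norm of a vector of numbers.
  (sqrt (reduce #'+ (map 'list #'* v v))))

(defpackage :org.example.geometry
  (:use :common-lisp)
  ;; Say exactly which names we need, and from where.
  (:import-from :org.example.linear-algebra #:norm)
  (:export #:distance))

(in-package :org.example.geometry)

(defun distance (a b)
  ;; NORM is directly present in this package: no qualifier needed.
  (norm (map 'vector #'- b a)))
```

The two defpackage forms are then the single place that records what each piece of code needs and what it provides.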

If you are now rehearsing a litany of things that can go wrong with this approach in rare cases1, please don’t: this is not my first rodeo and, trust me, I know about these cases. Occasionally, the CL package system can make it hard or impossible to construct the namespace you need, with the key term here being occasionally: people who give up because something is occasionally hard or impossible have what Erik Naggum famously called ‘one-bit brains’2: the answer is to get more bits for your brain.

Once you write code like this then the only place package-local nicknames can matter is, perhaps, the package definition file. And the only reason they can matter there is because people think that picking a name like ‘XML’ or ‘RPC’ or ‘SQL’ for their packages is a good idea. When people in the programming section of my hollowed-out-volcano lair do this they are … well, I will not say, but my sharks are well-fed and those things on spikes surrounding the crater are indeed their heads.

People should use long, unique names for packages. Java, astonishingly, got this right: use domains in big-endian order (org.tfeb.conduit-packages, org.tfeb.hax.metatronic). Do not use short nicknames. Never use names without at least one dot, which should be reserved for implementations and perhaps KMP-style substandards. Names will now not clash. Names will be longer and require more typing, but this will not matter because the only place package names are referred to are in package definition files and in in-package forms, which are a minute fraction of your code.

I have no idea where or when the awful plague of using package-qualified names in code arose: it’s not something people used to do, but it seems to happen really a lot now. I think it may be because people also tend to do this in Python and other dotty languages, although, significantly, in Python you never actually need to do this if you bother, once again, to actually go to the work of constructing the namespace you want: rather than the awful

import sys

... sys.argv ...



you can simply say

from sys import argv, exit

... argv ...


and now the very top of your module lets anyone reading it know exactly what functionality you are importing and from where it comes.

It may also be because the whole constructing namespaces thing is a bit hard. Yes, it is indeed a bit hard, but designing programs, of which it is a small but critical part, is a bit hard.

OK, enough.

If, after reading the above, you think you should mail me about how wrong it all is and explain some detail of the CL package system to me: don’t, I do not want to hear from you. Really, I don’t.

  1. in particular, if your argument is that someone has used, for instance, the name set in some package to mean, for instance, a set in the sense it is used in maths, and that this clashes with cl:set and perhaps some other packages, don’t. If you are writing a program and you think, ‘I know, I’ll use a symbol with the same name as a symbol exported from CL to mean something else’ in a context where users of your code also might want to use the symbol exported by CL (which in the case of cl:set is ‘almost never’, of course), then my shark pool is just over here: please throw yourself in. 

  2. Curiously, I think that quote was about Scheme, which I am sure Erik hated. But, for instance, Racket’s module system lets you do just the things which are hard in the package system: renaming things on import, for instance. 

Nicolas HafnerMapping the Road to Release - October Kandria Update

· 55 days ago

So, we've been in beta for over a month and gotten lots of useful feedback. Thanks a bunch! We'll continue to listen eagerly for feedback as we move towards the end of the development.

In case you missed the Kickstarter but would still like to support us ahead of the release in January, you can do so by preordering Kandria or its soundtrack through Backerkit. Unlike Kickstarter, this also accepts PayPal, if you don't have access to a credit card.


Another convention!

We'll be at the HeroFest in Bern, October 14-16! You'll be able to play the latest Kandria release there and chat about whatever. If you're in the area, please stop on by and check out the rest of the Swiss indie games presenting there as well.


The release of the soundtrack has been delayed by a bit, as our composer got swamped with work, and we're still trying to hash out the complicated stuff behind royalties and all. However, a single of the soundtrack should be out soon. Please keep an ear out for that!

Steam Deck Support

Gaben finally delivered a Steam Deck to me, and I've tested Kandria on it. There were a couple of minor fixes to make it more usable, but now it seems to run flawlessly on the deck! Nice!

The Deck is a wonderful piece of tech, and I've been enjoying playing other games on it as well. While I still would like for Kandria to run on the Switch as well, this is the next best thing for now.


We're still rounding out the last bits of polish and bugs, and focusing on playing through the game more to ensure the balance and progression also work well. Development of the core game will officially end at the end of November, after which focus will shift towards adding the stretch goals we promised during the Kickstarter, preparing promotional materials, and so on.

To that end, here's a new rough roadmap of all the stuff left to do including post-release updates:

  • Add even more detail tiles, foliage, and animal spawners throughout the world

  • Fine-tune the levelling, trade prices, and enemy difficulty

  • Spruce up some of the sound effects and music tracks

  • Create achievement icons and integrate them into the game

  • Release the full game

  • Backport re-usable components into Trial

  • Separate out the assets from the main repository

  • Publish the source code under a permissive license

  • Fix a plethora of bugs and inconveniences in Alloy

  • Polish the editor and make it more stable

  • Release the editor

  • Develop a modding system for Trial

  • Finish the Forge build system at least to an extent where it's usable

  • Integrate the API with the modding system

  • Create a mod manager and browser UI

  • Document more parts of Trial and Kandria

  • Release an official modding kit

We're also planning some cool events to celebrate the three big release milestones, though more info about that as we actually get closer to them.

For now, please continue to share the steam page with friends and other groups. It would help a lot to ensure that we can continue to make games in the future!

Tim BradshawBradshaw's laws

· 61 days ago

There are two laws.

The laws

  1. All sufficiently large software systems end up being programming languages.
  2. Whenever you think the point is at which the first law will apply, it will apply before that.

Implications of the laws

When building software systems you should design them as programming languages. You should do this however small you think they will be. In order to make this practical for small systems you should therefore use a language which allows seamless extension into other languages with insignificant zero-point cost.
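A sketch of what this "seamless extension" looks like (the rule language here is invented purely for illustration): a few lines of macro turn the host language into a small rule language, at essentially no up-front cost.

```lisp
;;; A tiny invented rule language embedded in the host language.
;;; DEFRULE defines a named predicate that returns the rule's name
;;; when its condition holds, and NIL otherwise.
(defmacro defrule (name (&rest inputs) &body condition)
  `(defun ,name ,inputs
     (when (progn ,@condition)
       ,(string name))))

;; The "language" in use: reads as a rule, compiles as a function.
(defrule overheating (temperature fan-ok)
  (and (> temperature 90) (not fan-ok)))
```

Because the extension is just the host language wearing different syntax, it inherits the host's semantics, tooling, and compiler for free.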

But because the laws are not widely known, most large software systems are built without understanding that what is being built is in fact a programming language. Because people don’t know they are building a programming language, don’t know how to build programming languages, and do not use languages which make the seamless construction of programming languages easy, the languages they build are usually terrible: they are hard to use, have opaque and inconsistent semantics and are almost always insecure.

TurtleWareBuffering Output

· 64 days ago

Single buffering

In graphical applications buffering of output is necessary to avoid flickering - a displeasing effect where mid-drawing artifacts are displayed on the screen. For example consider the following function:

(defun draw-scene (sheet)
  (draw-rectangle* sheet 125 125 175 175 :ink +red+)
  (draw-rectangle* sheet 125 125 175 175 :ink +blue+))

Here we draw two rectangles one on top of the other. If the red square is visible for a brief period of time before the blue one, then it is called flickering. To avoid this effect a concept of output buffering was invented - only when the output is ready for display, show it on the screen.

Double buffering

With double buffering we draw on the "back" buffer, and when done the back buffer contents are shown on the front buffer.

(defun game-loop ()
  (loop (draw-scene sheet)
        (swap-buffers sheet (buffer-1 sheet) (buffer-2 sheet))))

Triple buffering

Triple buffering is used when new scenes are produced much faster than the front buffer can be updated. We have "render", "ready" and "front" buffers. The implicit assumption is that the game loop and the display loop operate in separate threads.

(defun display-loop ()
  (loop (swap-buffers sheet (buffer-2 sheet) (buffer-3 sheet))
        (display-buffer sheet (buffer-3 sheet))))

Incremental and non-incremental rendering

If each frame is drawn from scratch (like in many games), then it doesn't matter whether the "swap" operation copies or swaps buffers. Some applications however treat the canvas incrementally. In this case losing the old content is not acceptable and we must copy data.

;;; The frame is rendered from scratch (not incremental)
(defmacro swap-buffers (sheet buffer-1 buffer-2)
  `(with-swap-lock (sheet)
     (rotatef ,buffer-1 ,buffer-2)))

;;; The frame is rendered based on the previous content (incremental)
(defmacro copy-buffers (sheet buffer-1 buffer-2)
  `(with-swap-lock (sheet)
     (copy-array ,buffer-1 ,buffer-2)))

Copying data is more expensive than rotating buffers. That said, sometimes re-rendering a frame from scratch may outweigh that cost. Incremental rendering resembles drawing on paper - unless we clear it manually, the old content will be visible.

Mixed buffering

Sometimes we may want to draw directly on the front buffer. This is the most performant when we write each pixel exactly once (for example when we render an image). In this case we are not only expected to synchronize the front buffer with the back buffer, but also the other way around.

;;; Buffer-1 is "back", Buffer-2 is "front".

(defun activate-single-buffering ()
  ;; Update the front buffer immediately.
  (copy-buffers sheet (buffer-1 sheet) (buffer-2 sheet)))

(defun activate-double-buffering ()
  ;; Synchronize the back buffer with the front-buffer.
  (copy-buffers sheet (buffer-2 sheet) (buffer-1 sheet)))

Otherwise, if we turn the double buffering back on, the back buffer won't contain the data that was drawn when the output was single-buffered.

Closing thoughts

There are many techniques that make this more performant. My main goal with this post was to emphasize the difference between incremental and non-incremental rendering, which is usually omitted in the materials I've found on the Internet.

Interesting reads:

Joe MarshallObservationally Functional

· 66 days ago

A couple of years back I wrote a Java microservice that talks to Jenkins to find the state of a series of builds. The code was structured to be “observationally functional” — there were plenty of side effects, but the main data abstractions behaved as if they were immutable as far as the abstract API was concerned. This allows us to treat code that uses these objects as if it were pure functional code.

If a data structure is observationally functional, then regardless of what the implementation does, there is no way to observe side effects at the abstract level. Primarily, this means that if you call a function twice with the same arguments, you always get the same answer. (This implies, though it isn't obvious, that calling a function should not mutate anything that could change the result of another function call.) This restriction leaves a lot of wiggle room. You can certainly side effect anything local to the abstraction that doesn't get returned to the caller. You can side effect data up until the point it is returned to the caller.
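The idea can be sketched in a few lines (an illustrative sketch, not the microservice's actual code): the accessor below mutates a private cache, but a caller can never observe that, because the mutation happens before the value is first returned. Calling it twice with the same argument always yields the same answer.

```lisp
;;; Observationally functional accessor: the cache is a hidden side
;;; effect; from the outside, SLOW-SQUARE behaves like a pure function.
(defvar *cache* (make-hash-table :test #'eql))

(defun slow-square (n)
  (multiple-value-bind (value present-p) (gethash n *cache*)
    (if present-p
        value
        ;; The mutation happens before the value is ever returned,
        ;; so no caller can observe a result changing.
        (setf (gethash n *cache*) (* n n)))))
```

In a multi-threaded setting, only this caching layer would need synchronization; code that merely calls slow-square can be treated as pure.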

The main data abstraction my microservice works with is a representation of the build metadata tree on the Jenkins server. The higher level code walks this tree looking for builds and metadata. The code maintains the illusion that the tree is a local data structure, but the implementation of the tree contains URL references to data that is stored on the Jenkins server. As the higher level code walks the tree, the lower level code fetches the data from the Jenkins server on demand and caches it.

Writing the code this way allows me to separate the data transfer and marshaling parts from the data traversal and analysis part. The tree, though it is mutated as it is traversed, is immutable in the parts that have already been visited. The caching code, which actually mutates the tree, needs to be synchronized across multiple threads, but the traversal code does not. Nodes in the tree that have already been visited are never mutated, so no synchronization is needed.

Once the caching tree abstraction was written, the higher level code simply walks the tree, selecting and filtering nodes, then reading the field values in the nodes. But the higher level code can be treated as if it were pure functional because there are no observable side effects. An advantage of pure functional code is that it is trivially thread safe, so my microservice can run hundreds of threads in parallel, each walking separate parts of the Jenkins tree and none interfering with the other. The only part of the code that uses synchronization is the tree caching code.

This implementation approach was quite fruitful. Once the code was tested with a single thread, it was obvious that multiple threads ought to work (because they couldn't observe each other's side effects) and when I turned the thread count up, no debugging was necessary. The code has been running continuously with dozens of threads for the past couple of years with no timing, synchronization, or race condition bugs.

Tim BradshawSimple logging in Common Lisp

· 69 days ago

slog is a simple logging framework for Common Lisp based on the observation that conditions can represent log events.

slog is based on two observations about the Common Lisp condition system:

  • conditions do not have to represent errors, or warnings, but can just be a way of a program saying ‘look, something interesting happened’;
  • handlers can decline to handle a condition, and in particular handlers are invoked before the stack is unwound.

Well, saying ‘look, something interesting happened’ is really quite similar to what logging systems do, and slog is built on this idea.
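A tiny sketch of the idea (mine, not slog itself): a condition represents a log event, a handler observes it, and by simply returning the handler declines, so execution carries on past the signal.

```lisp
;; A condition need not be an error: it can just announce an event.
(define-condition log-event (condition)
  ((message :initarg :message :reader log-event-message)))

(defun do-work ()
  (signal 'log-event :message "something interesting happened")
  ;; SIGNAL returns (handlers declined), so we carry on.
  'result)

;; The handler runs before any unwinding, prints the event, and by
;; returning normally it declines to handle the condition.
(defun run-logged ()
  (handler-bind ((log-event
                   (lambda (event)
                     (format *error-output* "~&log: ~A~%"
                             (log-event-message event)))))
    (do-work)))
```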

slog is the simple logging system: it provides a framework on which logging can be built but does not itself provide a vast category of log severities &c. Such a thing could be built on top of slog, which aims to provide mechanism, not policy.

slog provides a couple of conditions representing log entries, which are designed to be subclassed in real life. Log entries are created using a slog function (this is why slog is called slog: log is already taken) which simply signals an appropriate condition. Handlers are set up by a logging form (this should really be called slogging but it is not), which associates conditions with handlers. There is fairly flexible file handling for logging to files: you can refer to file names which all get associated with the appropriate stream; streams get closed automagically (and you can close them manually, in which case they will be reopened if need be); and the underlying mechanism for writing entries is exposed by a slog-to generic function which could be extended. Log entry formats can be controlled in various ways.

In addition slog tries to associate log entries with ‘precision time’, which is CL’s universal time expanded to the precision of a millisecond, or of internal time if it is less precise than a millisecond. Setting this up means that slog takes a second or so to load.
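One way such a ‘precision time’ can be built (a sketch; slog's actual mechanism may differ) is to calibrate internal time against universal time once at load time, which is also why such a setup can take up to a second: it waits for a universal-time tick so the two clocks line up.

```lisp
;; Calibrate once: wait for GET-UNIVERSAL-TIME to tick, then record the
;; matching internal time.  This busy-wait is what costs up to a second.
(defvar *time-calibration*
  (let ((start (get-universal-time)))
    (loop for now = (get-universal-time)
          until (/= now start)
          finally (return (cons now (get-internal-real-time))))))

(defun precision-time ()
  "Universal time as a rational with sub-second precision (limited by
INTERNAL-TIME-UNITS-PER-SECOND)."
  (destructuring-bind (ut . it) *time-calibration*
    (+ ut (/ (- (get-internal-real-time) it)
             internal-time-units-per-second))))
```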

Once again: slog is a framework: it has no dealings with log severities, categories, or anything like that. All that is meant to be provided on top of what slog provides.

Documentation is here, source code is here. It will be available from Quicklisp in due course.

Tim BradshawMetatronic macros

· 69 days ago

Metatronic macros are a simple hack which makes it a little easier to write less unhygienic macros in Common Lisp.

Common Lisp macros require you to avoid variable name capture yourself. So, for a macro which iterates over the lines in a file, this is wrong:

(defmacro with-file-lines ((line file) &body forms)
  ;; wrong
  `(with-open-file (in ,file)
     (do ((,line (read-line in nil in)
                 (read-line in nil in)))
         ((eq ,line in))
       ,@forms)))

It’s wrong because it binds in to the stream open to the file, and user code could perfectly legitimately refer to a variable of the same name.

The standard approach to dealing with this is to use gensyms:

(defmacro with-file-lines ((line file) &body forms)
  ;; righter
  (let ((inn (gensym)))
    `(with-open-file (,inn ,file)
       (do ((,line (read-line ,inn nil ,inn)
                   (read-line ,inn nil ,inn)))
           ((eq ,line ,inn))
         ,@forms))))

This creates a new anonymous symbol, binds it to inn, and then uses that symbol (via ,inn) as the name of the variable bound to the stream. User code can't then refer to any variable with this unique name.

This works, but it’s ugly. Metatronic macros let you write the above like this:

(defmacro/m with-file-lines ((line file) &body forms)
  ;; righter, easier
  `(with-open-file (<in> ,file)
     (do ((,line (read-line <in> nil <in>)
                 (read-line <in> nil <in>)))
         ((eq ,line <in>))
       ,@forms)))

In this macro all symbols which look like <this> (in any package) are rewritten to unique names, but all references to symbols with the same original name are to the same symbol1. This makes this common case more pleasant to do: macros written using defmacro/m have less noise around their expansion.
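The rewriting could be implemented along these lines (an illustrative sketch, not the real implementation): walk the form, replacing each symbol spelled <...> with a gensym, one per distinct name, and <> itself with a fresh gensym every time.

```lisp
(defun metatronic-symbol-p (symbol)
  ;; A symbol whose name starts with < and ends with >.
  (let ((name (symbol-name symbol)))
    (and (> (length name) 1)
         (char= (char name 0) #\<)
         (char= (char name (1- (length name))) #\>))))

(defun metatronize (form &optional (table (make-hash-table)))
  ;; Sketch of the rewriting: one gensym per distinct <...> symbol,
  ;; except <> which gets a fresh gensym at every occurrence.
  (typecase form
    (cons (cons (metatronize (car form) table)
                (metatronize (cdr form) table)))
    (symbol (cond ((string= (symbol-name form) "<>") (gensym))
                  ((metatronic-symbol-p form)
                   (or (gethash form table)
                       (setf (gethash form table)
                             (gensym (symbol-name form)))))
                  (t form)))
    (t form)))
```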

Metatronic macros go to some lengths to avoid leaking the rewritten symbols. Given this silly macro

(defmacro/m silly ()
  `'<silly>)
then (eq (silly) (silly)) is false. Similarly given this:

(defmacro/m also-silly (f)
  `(eq ,f '<silly>))

Then (also-silly '<silly>) will be false of course.

There is defmacro/m, macrolet/m and define-compiler-macro/m, and the implementation of metatronization is exposed if you need it.

Documentation is here, source code is here. It will be available in Quicklisp in due course.

  1. in fact, a symbol whose name is <> is rewritten as a unique gensym as a special case. I am not sure if this is a good thing but it’s what happens. 

Nicolas HafnerKandria enters beta - September Kandria Update

· 87 days ago

Beta Release - September Kandria Update

Kandria has finally entered beta, and the keys have been rolled out to all the Kickstarter backers of the Hunter tier and above. We've already got some good feedback in from that, and are continuing to polish the remaining areas.

If you're a beta backer and haven't gotten your key, please check your emails and your Backerkit account here:

In case you missed the Kickstarter but would still like to support us, you can also do so by preordering Kandria or its soundtrack through Backerkit. Unlike Kickstarter, this also accepts PayPal, if you don't have access to a credit card.

Gamescom & Devcom

We were part of the Swiss Games delegation to Gamescom this year, and had a stand at their incredible booth:

A bunch of people stopped by to talk to us about the game and try it out for themselves, which is always fun to observe.

On Thursday we were even allowed to use the booth's big LED wall for a small event, where participants could race for the lowest time in one of our side quests!

It was an extremely exhausting week, though. Standing at the booth all day, plus the almost two hours of travel time between the venue and where I was staying every morning and evening, ended up being quite taxing.

Luckily I somehow managed to dodge Covid again, despite most people not wearing a mask again...


In the meantime, development continued and we finally got some full-game playthrough tests done! We asked a number of people to play through the game casually, and found that on average a playthrough will take about 6-8 hours, which is an excellent time for a game of this type. If you do all the side quests, I imagine it'll take about 10-12 hours. And if you go for all the secrets and everything, probably even longer than that, so there's definitely a lot of content stuffed into the game!

So compared to last month, we've finished two more of the remaining major steps. The main questline is working well, and all of the side quests, including the Kickstarter stretch goal "Synthesis" questline, are done and playable. We're also almost done detailing all the rooms with tiles, and we've already tuned the RPG levelling quite a bit, though I imagine I'll want to tune it some more based on player feedback.

What's left to do now is mostly ironing out all of the bugs, testing everything front and back and whatever other ways we can think of, and marketing the game some more to make sure people actually know it exists, too. And here's where I unfortunately have to make another announcement: the release will be delayed to January. Releasing in November is not a good move, especially for a young studio like ours. January is much calmer and generally results in better marketing and better sales, which we'll really need.

The time spent between November and January won't be wasted, though. We'll keep responding to feedback from our testers, and we'll work on the stretch goals from the Kickstarter as well. With any luck, the game will be able to launch with the level editor already integrated and available!

But yes, we need a good launch if we're to make more games, and I really, really would like to. Again, I ask that if you know anyone at all, or any groups that may be interested in Kandria, please share the steam page with them. It would help us out a lot, as we're not doing so hot with our general outreach.

Well then, that's it for this month. See you next month, or next week if you're on the mailing list!

Joe MarshallPlaying with raycasting

· 87 days ago

I wanted to learn about raycasting. Raycasting is like a simplified version of ray tracing. As in ray tracing, you examine the environment by projecting a ray from your current location out in some direction to determine what is visible in that direction. But we simplify the problem by only considering two dimensions.

I was also interested in making the graphics more functional and less dependent upon side effects. Now obviously rendering an image to the screen is going to involve side effects, but we can refactor the rendering problem into two subproblems, a pure function that maps the world to an image and the procedure that displays the image.

I'll put the code below. The run procedure implements the event loop state machine. It keeps track of the world and calls next-world on the current world to update the world as time passes. next-world just maps next-state over the objects in the world. next-state does not mutate an object, rather it returns a new object in the new state. Every 13 milliseconds, run calls render-world!, which calls render! on each object in the world.

We're going to use raycasting to fake up a first-person view of a two-dimensional maze. From a position within the maze, we'll cast a ray in a direction and see how far away the wall is. If we peer at the wall through a narrow slit in just that direction, it will appear as a vertical line with height inversely proportional to its distance. If we sweep the ray direction and stack the vertical lines next to each other, it will create a three-dimensional effect.

The render! method for a fp-view will side effect the screen, but we'll compute the contents functionally. We'll go through each column on the screen and call (vraster fp-view column) to compute a color and a height and we'll draw a vertical line of that height in that color in that column.

(defmethod render! (renderer (fp-view fp-view))
  (dotimes (column +window-width+)
    (multiple-value-bind (r g b height)
        (vraster fp-view column)
      (sdl2:set-render-draw-color renderer r g b #xFF)
      (sdl2:render-draw-line renderer
                             column (round (+ (/ +window-height+ 2) (/ height 2)))
                             column (round (- (/ +window-height+ 2) (/ height 2)))))))

vraster is a function that returns the color and height of the wall on a particular column on the screen. It figures out the angle at which to cast a ray and calls range to find the distance to the nearest wall at that angle. This is sufficient to determine the wall height for that column, but the first person effect is enhanced significantly if you tint the color according to the distance and the direction of the wall. Knowing the distance, we compute the exact point px, py that the ray hit. It's a wall in the x direction if the y coordinate is an integer and vice versa.

(defparameter +field-of-view+ (/ pi 4))

(defun column->theta (column)
  (- (* (/ column +window-width+) +field-of-view+) (/ +field-of-view+ 2)))

(defun vraster (fp-view column)
  (let* ((location (get-location fp-view))
         (theta (column->theta column))
         (distance (range location theta))
         (px (+ (get-x location) (* (sin (+ (get-theta location) theta)) distance)))
         (py (+ (get-y location) (* (cos (+ (get-theta location) theta)) distance)))
         (wx (< (abs (- py (round py))) 0.05))
         (wy (< (abs (- px (round px))) 0.05)))
    (values
     (min #xFF (floor (/ (if wx #xff #x00) distance)))
     (min #xFF (floor (/ #xFF distance)))
     (min #xFF (floor (/ (if wy #xff #x00) distance)))
     (min +window-height+ (/ (* +window-height+ 2) distance)))))

So we've factored the rendering of a frame into a procedure that draws on the screen and a function that returns what to draw. The direct advantage of this is that we can determine what we should draw without actually drawing it. As an example, suppose we wanted to generate a stereo pair of images. The only thing we need to change is the render! method. It will now compute the view from two slightly different locations and put one set of columns on the left and the other on the right.

(defmethod render! (renderer (fp-view fp-view))
  (dotimes (column (/ +window-width+ 2))
    (multiple-value-bind (r g b height)
        (vraster (left-eye (get-location fp-view)) (* column 2))
      (sdl2:set-render-draw-color renderer r g b #xFF)
      (sdl2:render-draw-line renderer
                             column (round (+ (/ +window-height+ 2) (/ height 2)))
                             column (round (- (/ +window-height+ 2) (/ height 2)))))
    (multiple-value-bind (r g b height)
        (vraster (right-eye (get-location fp-view)) (* column 2))
      (sdl2:set-render-draw-color renderer r g b #xFF)
      (sdl2:render-draw-line renderer
                             (+ column (/ +window-width+ 2)) (round (+ (/ +window-height+ 2) (/ height 2)))
                             (+ column (/ +window-width+ 2)) (round (- (/ +window-height+ 2) (/ height 2)))))))

In this and in a previous post I've gone through the effort of writing some graphics code while avoiding unnecessary side effects. Typical graphics examples and tutorials are stuffed to the brim with global variables, state, and side effects. I wanted to see which side effects were intrinsic to graphics and which are simply incidental to how the examples are coded. It appears that large amounts of the global state and side effects are unnecessary and a more functional approach is reasonable.

As promised, here is the code.

;;; -*- Lisp -*-

(defpackage "RAYCAST"
  (:shadowing-import-from "NAMED-LET" "LET")
  (:use "COMMON-LISP" "NAMED-LET"))

(in-package "RAYCAST")

(defparameter +window-height+ 480)
(defparameter +window-width+ 640)

(defgeneric next-state (object dt)
  (:method ((object t) dt) object))
(defgeneric render! (renderer thing))

(defun next-world (previous-world dt)
  (map 'list (lambda (object) (next-state object dt)) previous-world))

(defun render-world! (renderer world)
  (sdl2:set-render-draw-color renderer #x00 #x00 #x00 #xFF)
  (sdl2:render-clear renderer)
  (mapc (lambda (object) (render! renderer object)) world)
  (sdl2:render-present renderer))

(defun run (initial-world)
  (sdl2:with-init (:video)
    (sdl2:with-window (window
                       :h +window-height+
                       :w +window-width+
                       :flags '(:shown))
      (sdl2:with-renderer (renderer window :index -1 :flags '(:accelerated :presentvsync))

        (let ((last-ticks 0)
              (render-ticker 0)
              (title-ticker 0)
              (sim-count 0)
              (frame-count 0)
              (world initial-world))

          (flet ((title-tick! (dticks)
                   (incf title-ticker dticks)
                   (when (>= title-ticker 1000)
                     (decf title-ticker 1000)
                     (sdl2:set-window-title window
                                            (format nil "Sim rate: ~d, Frame rate: ~d"
                                                    sim-count frame-count))
                     (setq sim-count 0)
                     (setq frame-count 0)))

                 (world-tick! (dticks)
                   (incf sim-count)
                   (setq world (next-world world (/ dticks 1000))))

                 (render-tick! (dticks)
                   (incf render-ticker dticks)
                   (when (>= render-ticker 13)
                     (incf frame-count)
                     (decf render-ticker 13)
                     (render-world! renderer world))))

            (sdl2:with-event-loop (:method :poll)

              (:idle ()
                     (let ((this-ticks (sdl2:get-ticks)))
                       (if (= this-ticks last-ticks)
                           (sdl2:delay 1)
                           (let ((dticks (- this-ticks last-ticks)))
                             (setq last-ticks this-ticks)
                             (title-tick! dticks)
                             (world-tick! dticks)
                             (render-tick! dticks)))))

              (:keydown (:keysym keysym)
                        (case (sdl2:scancode keysym)
                          ((:scancode-x :scancode-escape) (sdl2:push-quit-event))
                          ((:scancode-left :scancode-right
                            :scancode-up :scancode-down
                            :scancode-pageup :scancode-pagedown)) ; polled via keyboard state
                          (t (format *trace-output* "~&Keydown: ~s" (sdl2:scancode keysym))
                             (force-output *trace-output*))))

              (:quit () t))))))))

(defparameter +maze+
  #2a((1 1 1 1 1 1 1 1 1 1 1 1)
      (1 0 0 1 0 0 0 0 0 0 0 1)
      (1 0 0 1 0 0 0 0 0 0 0 1)
      (1 0 1 1 0 0 0 1 0 1 0 1)
      (1 0 0 0 0 0 0 0 0 0 0 1)
      (1 0 0 0 0 0 0 1 0 1 0 1)
      (1 0 0 0 0 0 0 0 0 0 0 1)
      (1 0 0 0 0 0 0 0 0 0 0 1)
      (1 0 0 0 0 1 0 0 0 1 0 1)
      (1 0 0 0 0 0 0 0 0 0 0 1)
      (1 1 1 1 1 1 1 1 1 1 1 1)))

(defclass location ()
  ((maze :initarg :maze
         :initform +maze+
         :reader get-maze)
   (x :initarg :x
      :initform 2.5
      :reader get-x)
   (y :initarg :y
      :initform 2.5
      :reader get-y)
   (theta :initarg :theta
          :initform 0
          :reader get-theta)))

(defun ud-input ()
  (- (if (sdl2:keyboard-state-p :scancode-up) 1 0)
     (if (sdl2:keyboard-state-p :scancode-down) 1 0)))

(defun lr-input ()
  (- (if (sdl2:keyboard-state-p :scancode-right) 1 0)
     (if (sdl2:keyboard-state-p :scancode-left) 1 0)))

(defun pg-input ()
  (- (if (sdl2:keyboard-state-p :scancode-pageup) 1 0)
     (if (sdl2:keyboard-state-p :scancode-pagedown) 1 0)))

(defun canonicalize-angle (angle)
  (cond ((>= angle pi) (canonicalize-angle (- angle (* pi 2))))
        ((>= angle (- pi)) angle)
        (t (canonicalize-angle (+ angle (* pi 2))))))

(defparameter +translation-rate+ 3.0) ;; tiles per second
(defparameter +rotation-rate+ pi) ;; radians per second

(defmethod next-state ((location location) dt)
  (let ((fbstep (* (ud-input) +translation-rate+ dt))
        (lrstep (* (pg-input) +translation-rate+ dt))
        (thstep (* (lr-input) +rotation-rate+ dt))

        (old-x (get-x location))
        (old-y (get-y location))
        (cos-theta (cos (get-theta location)))
        (sin-theta (sin (get-theta location))))

    (let ((new-x (+ old-x (* sin-theta fbstep) (- (* cos-theta lrstep))))
          (new-y (+ old-y (* cos-theta fbstep) (+ (* sin-theta lrstep))))
          (new-theta (canonicalize-angle (+ (get-theta location) thstep))))
      (cond ((zerop (aref (get-maze location) (floor new-x) (floor new-y)))
             (make-instance 'location :x new-x :y new-y :theta new-theta))
            ((zerop (aref (get-maze location) (floor old-x) (floor new-y)))
             (make-instance 'location :x old-x :y new-y :theta new-theta))
            ((zerop (aref (get-maze location) (floor new-x) (floor old-y)))
             (make-instance 'location :x new-x :y old-y :theta new-theta))
            (t (make-instance 'location :x old-x :y old-y :theta new-theta))))))

(defclass fp-view ()
  ((location :initarg :location
             :reader get-location)))

(defmethod next-state ((fp-view fp-view) dt)
  (make-instance 'fp-view :location (next-state (get-location fp-view) dt)))

(defun range (location relative-theta)
  (let* ((angle (+ (get-theta location) relative-theta))

         (dx/dS (sin angle))
         (dy/dS (cos angle))

         (x-step (if (< dx/dS 0) -1 1))
         (y-step (if (< dy/dS 0) -1 1))

         (dS/dx (abs (/ 1 (if (zerop dx/dS) 1e-30 dx/dS))))
         (dS/dy (abs (/ 1 (if (zerop dy/dS) 1e-30 dy/dS)))))

    (let dda ((next-x (* dS/dx
                         (if (< dx/dS 0)
                             (- (get-x location) (floor (get-x location)))
                             (- (+ 1.0 (floor (get-x location))) (get-x location)))))
              (mapx (floor (get-x location)))
              (next-y (* dS/dy
                         (if (< dy/dS 0)
                             (- (get-y location) (floor (get-y location)))
                             (- (+ 1.0 (floor (get-y location))) (get-y location)))))
              (mapy (floor (get-y location)))
              (distance 0))
      (cond ((not (zerop (aref (get-maze location) mapx mapy))) distance)
            ((< next-x next-y)
             (dda (+ next-x dS/dx) (+ mapx x-step)
                  next-y mapy
                  next-x))
            (t
             (dda next-x mapx
                  (+ next-y dS/dy) (+ mapy y-step)
                  next-y))))))

(defparameter +field-of-view+ (/ pi 4))

(defun column->theta (column)
  (- (* (/ column +window-width+) +field-of-view+) (/ +field-of-view+ 2)))

(defun vraster (location column)
  (let* ((theta (column->theta column))
         (distance (range location theta))
         (px (+ (get-x location) (* (sin (+ (get-theta location) theta)) distance)))
         (py (+ (get-y location) (* (cos (+ (get-theta location) theta)) distance)))
         (wx (< (abs (- py (round py))) 0.05))
         (wy (< (abs (- px (round px))) 0.05)))
    (values
     (min #xFF (floor (/ (if wx #xff #x00) distance)))
     (min #xFF (floor (/ #xFF distance)))
     (min #xFF (floor (/ (if wy #xff #x00) distance)))
     (min +window-height+ (/ (* +window-height+ 2) distance)))))

(defmethod render! (renderer (fp-view fp-view))
  (dotimes (column +window-width+)
    (multiple-value-bind (r g b height)
        (vraster (get-location fp-view) column)
      (sdl2:set-render-draw-color renderer r g b #xFF)
      (sdl2:render-draw-line renderer
                             column (round (+ (/ +window-height+ 2) (/ height 2)))
                             column (round (- (/ +window-height+ 2) (/ height 2)))))))

;; (run (list (make-instance 'fp-view :location (make-instance 'location))))

Joe MarshallDrawing a circle

· 89 days ago

SDL is your bare bones graphics interface. It gives you primitives like draw-point, draw-line, and draw-rectangle, but you're on your own if you want to draw a circle. Naturally, I cribbed the code from stackoverflow.

Small circles didn't look right — they were a little squarish. The code worked by walking along pixels in one direction and accumulating an error term. When the error term got large enough, it would be reset and the code would advance a step along the other pixel axis. The error term was computed using integer math so that the circle was drawn quickly. The problem is that the integer math has rounding error and the rounding error is noticeable with small circles.
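For reference, that kind of integer algorithm looks like this. This is a sketch of the well-known midpoint method in the style of the stackoverflow answers, not the exact code from the post; it relies on the eightfold-point mirroring helper defined below.

```lisp
;; Sketch of the integer midpoint-circle method: walk y upward and use an
;; integer error term to decide when to step x inward.  No square roots,
;; but the rounding shows on small radii.
(defun draw-circle-integer (renderer center-x center-y radius)
  (do ((x radius)
       (y 0)
       (err 0))
      ((< x y))
    (eightfold-point renderer center-x center-y x y)
    (incf y)
    (incf err (+ 1 (* 2 y)))
    (when (plusp (+ (* 2 (- err x)) 1))
      (decf x)
      (incf err (- 1 (* 2 x))))))
```

For a radius of 3 this plots the octant points (3,0) and (2,1) but skips (2,2), which the exact sqrt version includes; that dropped pixel is exactly the squarish look described above.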

For reference, it is straightforward to draw an exact circle. A circle isn't a function, but a circle segment between 0 and 45 degrees is a function. If we mirror that segment eight ways (horizontally, vertically, and across the 45-degree diagonal), we'll get a full circle.

(defun eightfold-point (renderer center-x center-y x y)
  (sdl2:render-draw-point renderer (+ center-x x) (+ center-y y))
  (sdl2:render-draw-point renderer (+ center-x x) (- center-y y))
  (sdl2:render-draw-point renderer (- center-x x) (+ center-y y))
  (sdl2:render-draw-point renderer (- center-x x) (- center-y y))

  (sdl2:render-draw-point renderer (+ center-x y) (- center-y x))
  (sdl2:render-draw-point renderer (+ center-x y) (+ center-y x))
  (sdl2:render-draw-point renderer (- center-x y) (- center-y x))
  (sdl2:render-draw-point renderer (- center-x y) (+ center-y x)))

(defun draw-circle (renderer center-x center-y radius)
  (let ((r-squared (* radius radius)))
    (dotimes (x (1+ (ceiling (/ radius (sqrt 2)))))
      (eightfold-point renderer
                       center-x center-y
                       x (round (sqrt (- r-squared (* x x))))))))

This gives much rounder-looking circles than the code I cribbed from stackoverflow.

The problem, of course, is that this code computes a square root on each iteration. These days, computers are fast and that's not a big issue, but let's try to improve things.

On each iteration, we are computing the square root of r2-x2. This quantity changes between iterations, but not by much. You can compute a square root pretty quickly using Newton's method. The square root computed last iteration isn't that far from the new square root, so we can use the previous square root as the initial guess for Newton's method. Since we started pretty close to the right answer, we only need a single pass of Newton's method to get close enough to the square root for the current iteration.

(defun average (a b) (/ (+ a b) 2))

(defun draw-circle1 (renderer center-x center-y radius)
  (let ((x-limit (ceiling (/ radius (sqrt 2)))))
    (do ((x 0 (1+ x))
         (o 1 (+ o 2))
         (r2-x2 (* radius radius) (- r2-x2 o))
         (y radius (round (average y (/ (- r2-x2 o) y)))))
        ((> x x-limit))
      (eightfold-point renderer center-x center-y x y))))

This gives us pretty round circles — rounder than the integer methods, but not as round as the sqrt method. It requires less arithmetic than the sqrt method, but more than the integer method. What is more annoying, squarish circles or the amount of time it takes to draw round ones?

vindarelLisp for the web: deploying with Systemd, gotchas and solutions

· 95 days ago

How do you run your Common Lisp (web) application on your server? Nowadays most GNU/Linux distros have Systemd. I recently used it more, with a mix of applications running from source, from a binary, running locally or on my VPS. I had to bypass a few gotchas, so let’s recap what you need to know.

Also stay tuned: next, we’ll see how to build a standalone binary for your Common Lisp application with Deploy (so that we handle foreign libraries like libssl), and how to include your Djula HTML templates as well as your static assets. This, in turn, makes it straightforward to ship your web app into an Electron desktop window.

INFO: dear readers, here's the best price I can set for my Udemy course "Learn Lisp effectively"; it will be available for 5 days! Thanks all for your support (NB: I'm working on the chapter on condition handling and new chapters are available for existing students).

Before Systemd

Let’s say we can run our app like this: we load the system definition, its dependencies, and we call the entry point.

sbcl --load my-app.asd \
     --eval '(ql:quickload :my-app)' \
     --eval '(my-app:start-app)'

Now we need to put the app on the background, we must ensure that if it fails, it is restarted, if the server is restarted, our app too, we must ensure we get to see the logs, etc.

Maybe you run your app inside of Emacs on your VPS... maybe you run your app inside tmux. This works and it is convenient to get back to the Lisp REPL, but this doesn’t ensure a restart on failure.

Systemd: daemonizing, restarting in case of crashes, handling logs

Systemd (or the service system of your distro) can help with all that.

Write a service file like this:

$ emacs -nw /etc/systemd/system/my-app.service

[Unit]
Description=Lisp app example

[Service]
# Next, your command, with the full path to SBCL.
# This works, locally. Be sure to see the last section.
ExecStart=/usr/bin/sbcl --load run.lisp
# or: use a path to your binary.
# Use environment variables:

# Start or restart at boot:

When you run your app on the terminal, ensure that its webserver (Hunchentoot here) stays listening correctly in the foreground (otherwise see below):

$ sbcl --load run.lisp
Hunchentoot server is started.
Listening on localhost:9003.

Now run this command to start the service:

sudo systemctl start my-app.service

To check its status, use this:

systemctl status my-app.service

Systemd handles logs for you: our app writes to stdout or stderr, and Systemd collects the output. Read the logs with:

journalctl -u my-app.service

Use -f to follow the logs live and -n 30 to show the last 30 lines of context.

The following tells Systemd to handle crashes and to restart the app:

Restart=on-failure

and this makes it start the app after a reboot:

[Install]
WantedBy=multi-user.target
to enable it:

sudo systemctl enable my-app.service

Now keep in mind a couple things.

Make it stay on the foreground

The first gotcha is that your app must stay in the foreground.

If you run your app from source, you might have nothing to do, you’ll get a Lisp REPL, from which you can interact with your running application. Awesome.

But, if you build a binary, you might see this error when you run it with Systemd:

* ;
; compilation unit aborted
; caught 1 fatal error condition

This puzzled me: I thought I had a Lisp prompt (the * ;) and that my program had crashed, but no: it's simply Lisp quitting too early. Don't rush, and double-check that your binary runs correctly.

What you must do can be found elsewhere (the Cookbook!): in your main function where you start your app, in this example with Hunchentoot, put its thread in the foreground:

;; with bordeaux-threads. Also sb-ext: join-thread, thread-name, list-all-threads.
(bt:join-thread (find-if (lambda (th)
                           (search "hunchentoot" (bt:thread-name th)))
                         (bt:all-threads)))

Let it crash: --disable-debugger

We want our app to crash so that it can be re-started automatically: you'll want the --disable-debugger flag with SBCL, when you run your app from sources.

Relying on Quicklisp

When you run your apps locally, you most probably rely on Quicklisp being installed and being started in your init file (~/.sbclrc):

;;; The following lines added by ql:add-to-init-file:
(let ((quicklisp-init (merge-pathnames "quicklisp/setup.lisp"
                                       (user-homedir-pathname))))
  (when (probe-file quicklisp-init)
    (load quicklisp-init)))

There are 2 gotchas.

Systemd will, by default, run your app as root, so:

  • if it happened you did install Quicklisp on your production machine, you probably didn’t install it as root, so Systemd won’t find the init file that initializes Quicklisp (and so your startup scripts will fail).
    • you can use SBCL’s --userinit flag to tell SBCL where to find the init file.
    • you can set the Systemd user with User=xyz in the [service] section (disclaimer: untested).
  • the Quicklisp snippet will fail at (user-homedir-pathname), for a clash on usernames too, so Quicklisp won’t find its setup.lisp file. I replaced this function call with a hard path (/home/vindarel/), until I used a standalone binary.
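The workaround mentioned in that last point amounts to replacing the portable call with a hard-coded home directory in the init file; a sketch (with the path from the post):

```lisp
;; ~/.sbclrc with (user-homedir-pathname) replaced by a hard path,
;; so it also works when Systemd runs the app as another user.
(let ((quicklisp-init (merge-pathnames "quicklisp/setup.lisp"
                                       #p"/home/vindarel/")))
  (when (probe-file quicklisp-init)
    (load quicklisp-init)))
```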

That’s it. Now you can deploy in peace. I hope I saved you some hours. Now these issues are better google-able \o/

See you around and stay tuned.

Tim BradshawMacros (from Zyni)

· 99 days ago

It is the business of the future to be dangerous; and it is among the merits of science that it equips the future for its duties. — Alfred Whitehead

Once upon a time, long ago in a world far away, Lisp had many features which other languages did not have. Automatic storage management, dynamic typing, an interactive environment, lists, symbols … and macros, which allow you to seamlessly extend the language you have into the language you want and need.

But that was long long ago in a world far away where giants roamed the earth, trolls lurked under every bridge and, they say, gods yet lived on certain distant mountains.

Today, and in this world, many many languages have automatic storage management, are dynamically typed, have symbols, lists, interactive environments, and so on and so on. More of these languages arise from the thick, evil-smelling sludge that coats every surface each day: hundreds, if not thousands of them, like flies breeding on bad meat which must be swatted before they lay their eggs on your eyes.

Lisp, today and in this world not another, has exactly one feature which still distinguishes it from the endless buzz of these insect languages. That feature is seamless language extension by macros.

So yes, macros are dangerous, and they are hard and they are frightening. They are dangerous and hard and frightening because all powerful magic is dangerous and hard and frightening. They are dangerous because they are a thing which has escaped here from the future and it is the business of the future to be dangerous.

If macros are too dangerous, too hard and too frightening for you, do not use Lisp because macros are what Lisp is about.

This originated as a comment by my friend Zyni: it is used with her permission.

Colin Lupton: Learn Lisp The Hard Way is Back Online

· 103 days ago

The second draft (in progress) of Learn Lisp The Hard Way is back online and under active development, now hosted on GitLab Pages and powered by GitBook.

You can find it at

If you would like to show your support for LLTHW, please consider sponsoring me through GitHub Sponsors at:

Joe Marshall: Playing with graphics

· 108 days ago

I wanted to play with some graphics. I don't know much about graphics, so I wanted to start with the basics. I played around with a couple of demos and found that the easiest to get working reliably was SDL2.

After downloading the SDL binary library and installing the FFI bindings with Quicklisp, I was off and running. You can find numerous SDL demos and tutorials online, and I tried a number of them. After I felt confident, I decided to try something simple.

One thing I've noticed about graphics programs is the ubiquity of mutable state. Everything seems mutable and is freely modified and global variables abound. As a mostly functional programmer, I am alarmed by this. I wanted to see where we'd get if we tried to be more functional in our approach and avoid mutable data structures where practical.

Now the pixels on the screen had best be mutable, and I'm not trying to put a functional abstraction over the drawing primitives. We'll encapsulate the rest of the state in a state machine that is driven by the SDL event loop. The state machine will keep track of time and the current world. The current world is simply an immutable list of immutable objects. The state machine can transition through a render! phase, where it renders all the objects in the current world to a fresh frame. It attempts to do this about 75 times a second. The state machine can also transition through a next-world phase, where the current world and a delta-t are used to compute a new version of the world.
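The two-phase state machine described above can be sketched language-agnostically. Here is a minimal Python version; Counter, next_world, and run are invented stand-ins for the SDL-driven pieces, not the post's code:

```python
class Counter:
    """Toy immutable world object: counts down, vanishes at zero."""
    def __init__(self, n):
        self.n = n

    def next(self, dt):
        # return a fresh object (or None to disappear); never mutate
        return Counter(self.n - 1) if self.n > 1 else None


def next_world(world, dt):
    """next-world phase: map the immutable world to a new world."""
    out = []
    for obj in world:
        stepped = obj.next(dt)
        if stepped is not None:
            out.append(stepped)
    return out


def run(world, steps, render):
    """Drive the state machine: alternate next-world and render phases."""
    for _ in range(steps):
        world = next_world(world, dt=1.0)
        render(world)  # render phase: draw the world, never mutate it
    return world


frames = []
final = run([Counter(3), Counter(1)], steps=3,
            render=lambda w: frames.append(len(w)))
# frames == [1, 1, 0]: objects disappear as they count down
```

A real version would drive this from the SDL event loop and derive dt from the elapsed ticks, as the post goes on to do.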

Our run program will take the initial list of objects. We'll start by initializing SDL, creating a window, and allocating a renderer for that window:

(defun run (initial-world)
  (sdl2:with-init (:video)
    (sdl2:with-window (window
                       :h +window-height+
                       :w +window-width+
                       :flags '(:shown))
      (sdl2:with-renderer (renderer window :index -1 :flags '())
        ... )))

Now we need the event loop state. last-ticks records the value from sdl2:get-ticks from the last time we processed the :idle event. This will be used to compute the elapsed time in ticks. render-ticker will record how many ticks have elapsed since the last time we rendered a frame to the screen. When render-ticker exceeds a certain amount, we'll call (render! current-world) and reset the ticker to zero. title-ticker will record how many ticks have occurred since the last time the window title was updated. When title-ticker exceeds a certain amount, we'll call sdl2:set-window-title to update the window title with some stats. sim-count is simply the number of times we've iterated next-world and frame-count is the number of times we've called render!. These are reset to zero every time we refresh the window title, so we'll have the frames per second and the world steps per second in the window title.

        (let ((last-ticks 0)
              (render-ticker 0)
              (title-ticker 0)
              (sim-count 0)
              (frame-count 0)
              (world initial-world))

          (flet ((title-tick! (dticks)
                   (incf title-ticker dticks)
                   (when (>= title-ticker 1000)
                     (decf title-ticker 1000)
                     (sdl2:set-window-title window (format nil "Sim rate: ~d, Frame rate: ~d" sim-count frame-count))
                     (setq sim-count 0)
                     (setq frame-count 0)))

                 (world-tick! (dticks)
                   (incf sim-count)
                   (setq world (next-world world (/ dticks 1000))))

                 (render-tick! (dticks)
                   (incf render-ticker dticks)
                   (when (>= render-ticker 13)
                     (incf frame-count)
                     (decf render-ticker 13)
                     (render-world! renderer world))))

Now we can run the event loop. The idle event is where the action happens:

          (sdl2:with-event-loop (:method :poll)

              (:idle ()
                     (let ((this-ticks (sdl2:get-ticks)))
                       (if (= this-ticks last-ticks)
                           (sdl2:delay 1)
                           (let ((dticks (- this-ticks last-ticks)))
                             (setq last-ticks this-ticks)
                             (title-tick! dticks)
                             (world-tick! dticks)
                             (render-tick! dticks)))))

              (:keydown (:keysym keysym)
                        (case (sdl2:scancode keysym)
                          (:scancode-escape (sdl2:push-quit-event))
                          (:scancode-x      (sdl2:push-quit-event))))

              (:quit () t))

Now that's a bunch of state, but it's more or less under control because what we have is a state machine and the state variables aren't accessible to anything.

render-world! is straightforward. It clears the renderer, calls render! on every object in the world, and presents the renderer for display.

(defun render-world! (renderer world)
  (sdl2:set-render-draw-color renderer #x00 #x00 #x00 #xFF)
  (sdl2:render-clear renderer)
  (mapc (lambda (object) (render! renderer object)) world)
  (sdl2:render-present renderer))

next-world is a function that maps the current world to the next. It basically calls next on each object in the world and accumulates the results. We want objects to be able to go away, so if (next object) returns nil, we don't accumulate anything in the new world. If next returns the object unchanged, it will be accumulated unchanged in the next world. (next object) can also return a new version of an object to simulate an update to the object. Finally, we want to be able to increase the number of objects, so we allow (next object) to return a list of objects to be accumulated.

(defun next-world (previous-world dt)
  (reduce (lambda (items item)
            (let ((more (next item dt)))
              (cond ((null more) items)
                    ((consp more) (append more items))
                    (t (cons more items)))))
          previous-world
          :initial-value '()))
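To make the drop/splice/keep rules concrete, here is a hedged Python sketch of the same fold; toy_next and the string "objects" are invented purely for illustration:

```python
from functools import reduce

def step_object(items, stepped):
    """Fold step: None drops the object, a list splices several objects
    in, and anything else is carried into the next world."""
    if stepped is None:
        return items
    if isinstance(stepped, list):
        return stepped + items
    return [stepped] + items

def next_world(world, dt, next_fn):
    """Left fold of step_object over the world, starting from []."""
    return reduce(lambda items, obj: step_object(items, next_fn(obj, dt)),
                  world, [])

def toy_next(obj, dt):
    if obj == "gone":
        return None                  # object goes away
    if obj == "spawner":
        return ["spawner", "enemy"]  # one object becomes two
    return obj                       # object persists unchanged

new = next_world(["spawner", "bullet", "gone"], 0.016, toy_next)
# new == ["bullet", "spawner", "enemy"]
```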

We'll start with a user-controlled player.

(defclass player ()
  ((x :initarg :x
      :reader get-x)
   (y :initarg :y
      :reader get-y)))

Everything that is to be displayed needs a render! method. This one just draws a little green triangle facing up.

(defmethod render! (renderer (player player))
  (let ((x (floor (get-x player)))
        (y (floor (get-y player))))
    (sdl2:set-render-draw-color renderer #x00 #xFF #x00 #xFF)
    (sdl2:render-draw-line renderer (- x 8) (+ y 8) (+ x 8) (+ y 8))
    (sdl2:render-draw-line renderer (- x 8) (+ y 8) (- x 1) (- y 16))
    (sdl2:render-draw-line renderer (+ x 8) (+ y 8) x (- y 16))
    (sdl2:render-draw-point renderer x y)))

The next method computes the player in the next world:

(defun x-input ()
  (- (if (sdl2:keyboard-state-p :scancode-right) 1 0)
     (if (sdl2:keyboard-state-p :scancode-left) 1 0)))

(defun y-input ()
  (- (if (sdl2:keyboard-state-p :scancode-down) 1 0)
     (if (sdl2:keyboard-state-p :scancode-up) 1 0)))

(defparameter +player-speed+ 200.0) ;; pixels per second

(defmethod next ((player player) dt)
  (let ((new-x (max 8  (min (- +window-width+ 8)
                            (+ (get-x player)
                               (* (x-input) +player-speed+ dt)))))
        (new-y (max 16 (min (- +window-height+ 8)
                            (+ (get-y player)
                               (* (y-input) +player-speed+ dt))))))
    (make-instance 'player :x new-x :y new-y)))

Once we've defined a render! method and a next method, we're ready to go. If we call run on a list containing a player object, we'll have our little player on the screen controllable with the arrow keys.

An enemy ship can be defined.

(defclass enemy ()
  ((x :initarg :x :reader get-x)
   (y :initarg :y :reader get-y)
   (dx :initarg :dx :reader get-dx)
   (dy :initarg :dy :reader get-dy)))

(defmethod next ((enemy enemy) dt)
  (let ((new-x (+ (get-x enemy) (* (get-dx enemy) dt)))
        (new-y (+ (get-y enemy) (* (get-dy enemy) dt))))
    (when (and (>= new-x 8)
               (< new-x (+ +window-width+ 8))
               (>= new-y 8)
               (< new-y (- +window-height+ 16)))
      (make-instance 'enemy
                     :x new-x
                     :y new-y
                     :dx (get-dx enemy)
                     :dy (get-dy enemy)))))

;;; Render method omitted

As given, enemy ships will drift at constant speed until they run off the screen. We'd like to replenish the supply, so we'll make an enemy spawner:

(defclass enemy-spawner ()
  ((timer :initarg :timer :initform 0 :reader get-timer)))

(defmethod next ((spawner enemy-spawner) dt)
  (let ((new-time (- (get-timer spawner) dt)))
    (if (> new-time 0)
        (make-instance 'enemy-spawner :timer new-time)
        (list (make-instance 'enemy-spawner :timer (+ 1 (random 4)))
              (make-instance 'enemy :x (random (+ (- +window-width+ 32) 16))
                                    :y 16
                                    :dx (- 25 (random 50))
                                    :dy (+ (random 100) 50))))))

(defmethod render! (renderer (enemy-spawner enemy-spawner))
  nil)

The render! method doesn't do anything, so a spawner doesn't have an image. It simply has a timer. To compute the next spawner, we subtract dt and create a new spawner with the reduced amount of time. If that's not a positive amount of time, though, we create two objects: a new spawner with somewhere between 1 and 5 seconds on its timer and a new enemy ship.

We'll modify our player to allow him to shoot at the enemy:

(defclass player ()
  ((x :initarg :x
      :reader get-x)
   (y :initarg :y
      :reader get-y)
   (fire-cycle :initarg :fire-cycle
               :initform 0
               :reader get-fire-cycle)))

(defmethod next ((player player) dt)
  (let ((new-x (limit 8 (- +window-width+ 8) (+ (get-x player) (* (x-input) +player-speed+ dt))))
        (new-y (limit 16 (- +window-height+ 8) (+ (get-y player) (* (y-input) +player-speed+ dt))))
        (next-fire-cycle (- (get-fire-cycle player) dt)))
    (if (and (sdl2:keyboard-state-p :scancode-space)
             (< next-fire-cycle 0))
        (list (make-instance 'player
                             :x new-x
                             :y new-y
                             :fire-cycle .1)
              (make-instance 'bullet
                             :x (- (get-x player) 8)
                             :y (- (get-y player) 16)
                             :dx 0
                             :dy (- +bullet-speed+))
              (make-instance 'bullet
                             :x (+ (get-x player) 8)
                             :y (- (get-y player) 16)
                             :dx 0
                             :dy (- +bullet-speed+)))
        (make-instance 'player
                       :x new-x
                       :y new-y
                       :fire-cycle next-fire-cycle))))

A bullet is a simple moving object:

(defclass bullet ()
  ((x :initarg :x :reader get-x)
   (y :initarg :y :reader get-y)
   (dx :initarg :dx :reader get-dx)
   (dy :initarg :dy :reader get-dy)))

(defmethod next ((bullet bullet) dt)
  (let ((new-x (+ (get-x bullet) (* (get-dx bullet) dt)))
        (new-y (+ (get-y bullet) (* (get-dy bullet) dt))))
    (when (and (>= new-x 0)
               (< new-x +window-width+)
               (>= new-y 0)
               (< new-y +window-height+))
      (make-instance 'bullet
                     :x new-x
                     :y new-y
                     :dx (get-dx bullet)
                     :dy (get-dy bullet)))))

At this point we can move around the screen and shoot at enemies that spawn periodically. The problem is that the bullets go right through the enemy. We need to handle object collisions. We'll modify the next-world function. As it loops over the objects in the world, it will perform an inner loop that checks for collisions with other objects. If two objects collide, a function is called to get the collision results and those results are added to the list of objects in the world. If an object doesn't collide with anything, the next method is called to get the next version of the object.

(defun next-world (previous-world dt)
  (let outer ((tail previous-world)
              (next-world '()))
    (cond ((consp tail)
           (let ((this (car tail)))
             (let inner ((those (cdr tail)))
               (cond ((consp those)
                      (let ((that (car those))
                            (others (cdr those)))
                        (if (collides? this that)
                            (outer (append (collide this that) (delete that (cdr tail)))
                                   next-world)
                            (inner others))))
                     ((null those)
                      (outer (cdr tail)
                             (let ((more (next this dt)))
                               (cond ((consp more) (append more next-world))
                                     ((null more) next-world)
                                     (t (cons more next-world))))))
                     (t (error "Bad list."))))))
          ((null tail) next-world)
          (t (error "Bad list.")))))

We define collides? as a generic function that returns nil by default:

(defgeneric collides? (this that)
  (:method ((this t) (that t)) nil))

so that most objects don't collide. In the case where something does collide, we'll define collide as a generic function that also returns nil by default:

(defgeneric collide (this that)
  (:method ((this t) (that t)) nil))

so that when two objects collide, they simply disappear.

collides? will be called on pairs of objects in no particular order, so method pairs will be needed to handle both orders. We'll define collides? methods on bullets and enemies that check whether the bullet is within the bounding box of the enemy:

(defmethod collides? ((this bullet) (that enemy))
  (and (> (get-x this) (- (get-x that) 8))
       (< (get-x this) (+ (get-x that) 8))
       (> (get-y this) (- (get-y that) 8))
       (< (get-y this) (+ (get-y that) 8))))

(defmethod collides? ((this enemy) (that bullet))
  (collides? that this))

At this point, we can shoot enemy ships. The default method for collide between an enemy and a bullet returns nil so the enemy and the bullet simply disappear. If we were fancy, we could arrange for it to return an explosion object or several debris objects.

It would be nice to keep a tally of the number of enemy ships we have shot. We don't have to add any extra machinery for this. We create a score class and a point class:

(defclass score ()
  ((font :initarg :font
         :reader get-font)
   (value :initarg :value
          :reader get-value)))

(defmethod next ((score score) dt) score)

;;; Render method prints score on screen.

(defclass point ()
  ((value :initarg :value
          :reader get-value)))

(defmethod next ((point point) dt) point)

(defmethod render! (renderer (point point)) nil)

Scores and points are immutable objects without positions, but we'll define methods so that when a score and a point collide, the result is a higher score.

(defmethod collides? ((this point) (that score)) t)
(defmethod collides? ((this score) (that point)) t)

(defmethod collide ((this point) (that score))
  (list (make-instance 'score
                       :font (get-font that)
                       :value (+ (get-value this) (get-value that)))))

(defmethod collide ((this score) (that point))
  (collide that this))
Now we'll define a bullet colliding with an enemy to produce a point:

(defmethod collide ((this bullet) (that enemy))
  (list (make-instance 'point :value 1)
        ;; add explosion object here
        ))

(defmethod collide ((this enemy) (that bullet))
  (collide that this))

So when you shoot an enemy, the bullet and enemy disappear, to be replaced by a point. On the next update, the point will collide with the score and be replaced with an updated score.

At this point we've got a little demo game where we can fly a ship around and shoot enemies and a running score is kept. The world model is immutable and worlds are functions of previous worlds. I'll call it a successful proof of concept.

But did this buy us anything? We don't have mutable state per se, but we've kind of cheated. When we create new versions of an object, each version is immutable, but the sequence of versions taken as a whole seem to be evolving over time. For example, consider the score. At each time step, there is an immutable score object, but over time what is considered the current score changes. We've eliminated the direct problems of mutation, but we've introduced the problem of keeping track of what series of immutable versions correspond to a single evolving instance.

In this small example, we're not keeping track of the evolving objects. For instance, each bullet, as it is updated from step to step, is actually created anew at its new position on each step. The old bullet instance is dropped, and the new instance really has no idea how it got there. Bullets are such simple objects that this doesn't matter, but the current score is different. It makes sense for there to be a singleton score object that increases over time, but we haven't built that into our model. Instead, we've designed a set of collision interactions that drive the score.

We've eliminated the direct mutable state in our objects and our world, but sometimes we want to model stateful objects. We therefore create objects that represent state transitions (e.g. points) and then use the collision mechanism to combine the transition objects with the objects that represent physical entities. That seems a bit convoluted, and I don't think it will scale.

On the other hand, we do gain the immediate benefits of the world and the objects being immutable. Saving and restoring a world is trivial. Reasoning about objects at each update is easy because the objects don't change, but we now have to reason about how objects appear to change in the long run.

The tradeoff of immutable objects is increased allocation. Although a lot more consing happens, most of it can be quickly reclaimed by the generational collector, so noticeable GC pauses are infrequent. I haven't measured the performance, but it is certainly adequate for the little example I wrote. If you had a mechanism to reason about the objects as linear types (a Sufficiently Smart Compiler), you could determine when you can update objects in place and avoid reallocating.

The world model, simply a list of objects, is flexible, but not well structured. For instance, the current score and the bullets are among the elements mixed together in this list. You'd have to search this list to find specific elements or filter this list to find elements of a certain type.

The simple collision model is O(n²), so it can't handle a ton of objects. A more sophisticated world model would be needed to keep track of different classes of collidable objects to avoid the O(n²) search. For example, if bullets were kept separately, we could avoid checking whether they collide with each other.
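As a sketch of that suggested refactoring, partitioning objects by class means only pairs that can actually interact get tested. The function names and toy 1-D positions here are invented for illustration:

```python
from itertools import product

def collisions(bullets, enemies, hits):
    """Test only bullet-vs-enemy pairs: bullets never collide with
    bullets, nor enemies with enemies, so those pairs are skipped."""
    return [(b, e) for b, e in product(bullets, enemies) if hits(b, e)]

# toy 1-D positions; a "hit" is being within 8 units
bullets = [3, 50, 90]
enemies = [10, 95]
pairs = collisions(bullets, enemies, lambda b, e: abs(b - e) < 8)
# pairs == [(3, 10), (90, 95)]
```

This is still proportional to bullets times enemies; a full fix would add spatial partitioning, but skipping same-class pairs is already a large win over checking every pair in one flat list.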

The point of this exercise was to play with graphics and see what you can do without the mutable state that is alarmingly ubiquitous. It turns out that you can go pretty far, but it's kind of strange.

Kaveh Kardan: 41. World Building And Alternate Reality Common Lisp

· 110 days ago

Time for another Interlude, wherein I take a break from my career journey and reflect on what we're doing. I say we, not because I have been taken by delusions of royalty, but because there is more than just me involved now, as you will find out below, in the second half of this post.

It occurred to me the other day that I am developing alternate reality software. And what, I hear you cry, do I mean by that?

I have been a science fiction fan ever since I can remember. I was a nerdy kid who watched "Star Trek (TOS)", "Lost in Space", and "Land of the Giants" on TV. I read all the science fiction I could get my hands on. Later in life I even wrote stories for SF movies, and created an SF TV show which was never picked up. If I were an optimist, I would say "has not yet been picked up."

One of the subgenres of science fiction is alternate reality (aka alternate history), where history took a different turn from our world. The most famous example is probably Philip K Dick's novel "The Man in the High Castle", where the Axis powers are victorious in World War II and partition the USA.

From a purely practical point of view, the software we are developing does not make much sense. Common Lisp is a niche language, with very weak support for graphics and graphical user interfaces, and mediocre tooling at best.

No one has really tried doing serious computer graphics and animation with Common Lisp in the past 30 years. It may be seen by some as a quixotic quest at best. But in an alternate reality, one in which Common Lisp retained its popularity and remained a mainstream language, this development effort would make perfect sense.

For me, the most fun part of writing science fiction is the world building. Pure imagination running free. So let's engage in a few scenarios in which Common Lisp could be a mainstream language today.

Scenario one: the US does not get involved in the Vietnam war, there is no budget deficit, and ARPA funding for computer science academic research is not cut. Well-funded Lisp-based projects continue and prosper.

Scenario two: AT&T does not sign a decree with the US government which forces Bell Labs to give away the UNIX source code for free. Instead AT&T tries to sell UNIX as a commercial operating system, becoming just one of many such vendors. Unix's popularity (and therefore that of C/C++) is curtailed and it does not take over the world. Lisp Machines and a variety of other operating systems survive in this ecosystem.

Scenario three: the government decides that instead of inventing a new language known as ADA, it will require Common Lisp as the standard language for government and military work. Common Lisp becomes a mainstay of government contract work and ensures a steady supply of funding and Lisp development opportunities. (Yeah, I know, this one is kind of out there.)

In these scenarios (and if you buy me a coffee I can come up with a half a dozen other ones), Lisp trudges along nicely, perhaps along with Fortran, Cobol, and C/C++.

That is the software we are writing. The one that comes out of these timelines. One where people naturally expect their software to be extensible and customizable and programmable, and not regularly crash due to things like buffer overflows.

I can dream. It's one of the perks of being a science-fiction fan.

Code & Design

When I start a project, I make a "to do" list. This list always grows much faster than I can accomplish the items on it. I use it to keep track of features, ideas, or even just vague notions of what I might want to work on. When I finish implementing one of these items, it goes into the "done" section. The listing above is that section for my graphics project, and represents several months' worth of work.

The to do list remains well-populated and represents a lot of work for us. Yes, us. I have gone ahead and publicly announced this system as an open source project. As a result, a small team is gathering around and we are planning the future development of the system.

It was always my hope that this project would grow beyond a personal endeavor of mine into something that could have some real-world impact. I'm not sure what that impact might be yet, but the first step is to get it into the hands of people, both artists and developers, who can use it. So our first task is to port the system beyond the Macintosh, to the Linux and Windows platforms. That work is currently in progress.

If I were to be honest, and that is something I try to do in this blog (otherwise what is the point), I would have to admit that I am equal parts nervous and excited. I had kept putting off making the open-source announcement, but then I remembered a quote I read somewhere to the effect that the biggest mistake we can make is to wait until we think we are ready. So I wondered what I was waiting for. I am not getting any younger. Just get the ball rolling and see what happens.

For those who are geeky enough to want to see the project on Github, here is a link.

John Jacobsen: Adding Tail Call Optimization to A Lisp Written in Go

· 118 days ago

The last few days have been devoted to improving l1, the homegrown lisp I wrote about earlier this year. A number of changes have landed in the last week:

I also implemented the bulk of the automated tests in the language itself. This was a decisive step forward in both ease of creating new tests and confidence that the language was approaching something usable.

The work I'm happiest with, though, because it taught me the most, was implementing tail call optimization (TCO) in the language, which the rest of this post will be about.


The need for some form of TCO became clear as I started to write more small programs in l1. Perhaps the simplest example is one that sums all the natural numbers up to $n$:

(defn sum-to-acc (n acc)
  (cond ((zero? n) acc)
        (t (sum-to-acc (- n 1) (+ n acc)))))

(defn sum-to (n)
  (sum-to-acc n 0))

Calling sum-to for small $n$ worked fine:

(sum-to 100)

However, larger $n$ blew up spectacularly:

(sum-to (* 1000 1000))
runtime: goroutine stack exceeds 1000000000-byte limit
runtime: sp=0x14020500360 stack=[0x14020500000, 0x14040500000]
fatal error: stack overflow

runtime stack:
runtime.throw({0x10289aa2b?, 0x10294ddc0?})
	/opt/homebrew/Cellar/go/1.18.3/libexec/src/runtime/panic.go:992 +0x50
	/opt/homebrew/Cellar/go/1.18.3/libexec/src/runtime/stack.go:1101 +0x46c
	/opt/homebrew/Cellar/go/1.18.3/libexec/src/runtime/asm_arm64.s:314 +0x70

goroutine 1 [running]:
	/opt/homebrew/Cellar/go/1.18.3/libexec/src/strings/reader.go:66 +0x98 fp=0x14020500360 sp=0x14020500360 pc=0x102883348
math/big.nat.scan({0x0, 0x1401c2820e0?, 0x0}, {0x1028dc7c8, 0x1401c2820e0}, 0xa, 0x0)
	/opt/homebrew/Cellar/go/1.18.3/libexec/src/math/big/natconv.go:126 +0x80 fp=0x14020500430 sp=0x14020500360 pc=0x10288b1e0
;; ...

This happens, of course, because sum-to-acc calls itself a million times, each time storing a copy of its local bindings on the stack, which eventually consumes all the space on the stack.

Getting simple recursive functions like this to work for large $n$ is especially important because l1 doesn't have loops (yet)!

The Optimization

The solution is hinted at already in my test case. Note that I did not write sum-to as a single recursive function, as follows:

(defn sum-to-notail (n)
  (cond ((zero? n) 0)
        (t (+ n (sum-to-notail (- n 1))))))

While this function looks slightly simpler, it is harder for a compiler or interpreter to optimize. The difference is that, whereas sum-to-notail does some work after calling itself (by adding n to the result), sum-to-acc calls itself from the tail position; that is, the function returns immediately after calling itself.

People have long realized that a function call in the tail position can be replaced by updating the return address and then jumping directly to the new function without adding new information to the stack. This is something I had heard about for years and used in various "functional" languages, without ever really implementing myself (and therefore fully understanding). It's an easy thing to take for granted without knowing anything about how it's actually implemented under the hood. The failure of sum-to-acc and similar recursive functions, described above, meant I would have to learn.

Two very different blog posts were helpful to me in pointing the way forward: Adding tail call optimization to a Lisp interpreter in Ruby, and How Tail Call Optimization Works. The posts focus on very different languages (Ruby vs. C / assembler), but they each revolve around what are effectively GOTO statements. I'm old enough to remember BASIC and the pernicious GOTO statement leading to "spaghetti code." I doubt I've ever used a GOTO statement in production code; its use in modern programming languages fell out of favor in the aftermath of Dijkstra's famous Go To Statement Considered Harmful paper. But the ability to transfer control to another part of your program without invoking a function call is key to the optimization.

The Approach

Since the strategy is general, let's lose all the parentheses for a moment and rewrite sum-to-acc in language-agnostic pseudo-code:

function sum-to-acc(n sum)
   if n == 0, then return sum
   return sum-to-acc(n - 1, n + sum)

In most languages (without TCO), when this function is called, the values of n and sum, as well as the return address, will be put on the stack, whose evolution looks something like the following1:

 first invocation: [n=5, sum=0,  ret=sum-to:...]

second invocation: [n=4, sum=5,  ret=sum-to-acc:...]
                   [n=5, sum=0,  ret=sum-to:...]

 third invocation: [n=3, sum=9,  ret=sum-to-acc:...]
                   [n=4, sum=5,  ret=sum-to-acc:...]
                   [n=5, sum=0,  ret=sum-to:...]

fourth invocation: [n=2, sum=12, ret=sum-to-acc:...]
                   [n=3, sum=9,  ret=sum-to-acc:...]
                   [n=4, sum=5,  ret=sum-to-acc:...]
                   [n=5, sum=0,  ret=sum-to:...]

 fifth invocation: [n=1, sum=14, ret=sum-to-acc:...]
                   [n=2, sum=12, ret=sum-to-acc:...]
                   [n=3, sum=9,  ret=sum-to-acc:...]
                   [n=4, sum=5,  ret=sum-to-acc:...]
                   [n=5, sum=0,  ret=sum-to:...]

 sixth invocation: [n=0, sum=15, ret=sum-to-acc:...]
                   [n=1, sum=14, ret=sum-to-acc:...]
                   [n=2, sum=12, ret=sum-to-acc:...]
                   [n=3, sum=9,  ret=sum-to-acc:...]
                   [n=4, sum=5,  ret=sum-to-acc:...]
                   [n=5, sum=0,  ret=sum-to:...]

At the sixth invocation, our terminating condition is reached, and 15 is returned, with all the pending stack frames popped off the stack.

With TCO, the implementation looks more like the following:

function sum-to-acc(n sum)
   top:
   if n == 0, then return sum
   sum = sum + n
   n = n - 1
   goto top

as a result, the evolution of the stack looks as follows:

 first invocation: [n=5, sum=0, ret=sum-to:...]

second invocation: [n=4, sum=5, ret=sum-to:...]

 third invocation: [n=3, sum=9, ret=sum-to:...]

fourth invocation: [n=2, sum=12, ret=sum-to:...]

 fifth invocation: [n=1, sum=14, ret=sum-to:...]

 sixth invocation: [n=0, sum=15, ret=sum-to:...]

All those extra stack frames are gone: recursion has turned into a form of iteration.

Implementing TCO, then, has two ingredients:

  1. Replace the values of the current arguments with their new values directly.
  2. Jump straight to the next call of the function without adding to the stack.

This low-level, imperative optimization makes high-level, functional, recursive implementations efficient.
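The two ingredients read exactly like a loop: overwrite the arguments, then jump back to the top. Python (which, like Go, does not perform TCO for you) makes the hand transformation easy to see; this is a sketch of the idea, not l1's code:

```python
def sum_to_acc(n, acc=0):
    # manual tail-call optimization: instead of recursing, reassign
    # the arguments and jump back to the top of the function
    while True:
        if n == 0:
            return acc
        n, acc = n - 1, acc + n  # old n is used on the right-hand side

print(sum_to_acc(1000 * 1000))  # no stack overflow: prints 500000500000
```

The naive recursive version would hit Python's recursion limit long before a million calls; the loop runs in constant stack space.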


In thinking about the implementation for l1, I was pleased to learn that Go actually has a goto statement. However, my implementation was poorly set up to use it.

Early in the implementation of l1, I noticed that each data type (numbers, atoms, and lists) had its own evaluation rules, so it made sense to make use of Go's features supporting polymorphism, namely interfaces and receivers. I had a Sexpr interface which looked like the following:

type Sexpr interface {
	String() string
	Eval(*env) (Sexpr, error)  // <--------
	Equal(Sexpr) bool
}
Numbers and atoms had fairly simple Eval implementations. For example:

func (a Atom) Eval(e *env) (Sexpr, error) {
	if a.s == "t" {
		return a, nil
	}
	ret, ok := e.Lookup(a.s)
	if ok {
		return ret, nil
	}
	ret, ok = builtins[a.s]
	if ok {
		return ret, nil
	}
	return nil, fmt.Errorf("unknown symbol: %s", a.s)
}

And, of course, numbers eval to themselves:

func (n Number) Eval(e *env) (Sexpr, error) {
	return n, nil
}

Lists, as you would expect, were more complicated - evaluating a list expression needs to handle special forms2, user-defined functions, and built-in functions. Following the classic Structure and Interpretation of Computer Programs, I separated the core logic for function application into separate Eval and Apply phases. And to prevent the Eval for lists from getting too large, I broke out the evaluation rules for different cases (e.g. for let and cond special forms and for function application) into their own functions.

In other words, I had evaluation logic spread over ten functions in five files. Sadly, the need to jump back to the beginning of an evaluation rather than recursively calling Eval again meant that several of those nicely broken out functions had to be brought together into a single function, because goto does not support jumping from one function to another. (C has setjmp and longjmp, which effectively do this, but I would want to upgrade my IQ by a few points before applying them in this situation.)

There were actually three cases where I was performing an evaluation step right before returning, and the goto pattern could be used:

  1. When evaluating code in the tail position of a user-defined function;
  2. When evaluating code in the last expression in a let block;
  3. When evaluating code in the chosen branch of a cond clause.

I wound up with code that looks like the following. Several steps are indicated only with comments. Note the tiny, easy-to-miss top: label at the very beginning:

// lisp.go
func eval(expr Sexpr, e *env) (Sexpr, error) {
top:
	switch t := expr.(type) {
	case Atom:
		return evAtom(t, e)
	case Number:
		return expr, nil
	// ...
	case *ConsCell:
		if t == Nil {
			return Nil, nil
		}
		// special forms:
		if carAtom, ok := t.car.(Atom); ok {
			switch {
			case carAtom.s == "quote":
				return t.cdr.(*ConsCell).car, nil
			case carAtom.s == "cond":
				pairList := t.cdr.(*ConsCell)
				if pairList == Nil {
					return Nil, nil
				}
				for {
					if pairList == Nil {
						return Nil, nil
					}
					pair := pairList.car.(*ConsCell)
					ev, err := eval(pair.car, e)
					if err != nil {
						return nil, err
					}
					if ev == Nil {
						pairList = pairList.cdr.(*ConsCell)
						continue
					}
					expr = pair.cdr.(*ConsCell).car
					goto top
				}
			// ...

The code so far shows the evaluation for atoms, numbers, and cond statements. cond does not introduce any new bindings, but when the first truthy condition is encountered, it evaluates the next argument as its final act. So the code above simply replaces the expression to be evaluated, expr, with the expression from the matching clause, and then restarts the evaluation via goto, without the overhead of a separate function call.

The code for let is somewhat similar:

			case carAtom.s == "let":
				args := t.cdr.(*ConsCell)
				if args == Nil {
					return nil, fmt.Errorf("let requires a binding list")
				}
				// ... code to set up let bindings ...
				body := args.cdr.(*ConsCell)
				var ret Sexpr = Nil
				for {
					var err error
					if body == Nil {
						return ret, nil
					}
					// Implement TCO for `let`:
					if body.cdr == Nil {
						expr = body.car
						e = &newEnv
						goto top
					}
					ret, err = eval(body.car, &newEnv)
					if err != nil {
						return nil, err
					}
					body = body.cdr.(*ConsCell)
				}

The for loop invokes a new eval for each expression in the body of the let, except for the last one: when the last expression is reached (its cdr is Nil), the final eval is done by jumping to the beginning of the function, after updating the environment to include the new bindings.

The last use of this pattern is in function invocation proper, which looks similar:

			// (... code to set up new environment based on passed arguments ...)
			var ret Sexpr = Nil
			for {
				if lambda.body == Nil {
					return ret, nil
				}
				// TCO:
				if lambda.body.cdr == Nil {
					expr = lambda.body.car
					e = &newEnv
					goto top
				}
				ret, err = eval(lambda.body.car, &newEnv)
				if err != nil {
					return nil, err
				}
				lambda.body = lambda.body.cdr.(*ConsCell)
			}

I've skipped various parts of eval that aren't relevant for TCO optimization - if you're interested, you can check out the code yourself.

To be clear, what we are optimizing is all tail calls, not just recursive ones - though the recursive ones were the primary objective due to the stack overflows reported above.

The end result is that sum-to can now complete for large values of $n$:

(sum-to (* 1000 1000))

Incidentally, a variant of our test case failed before I added the TCO optimization to let shown above; this now works, as well:

(defn sum-to-acc-with-let (n acc)
  (let ((_ 1))
    (cond ((zero? n) acc)
          (t (sum-to-acc-with-let (- n 1) (+ n acc))))))

(defn sum-to-with-let (n) (sum-to-acc-with-let n 0))

(sum-to-with-let (* 1000 1000))


Getting tail-call optimization to work was very satisfying... though the eval implementation is certainly more complex than before. (Ah, optimization!)

To ensure TCO continues to work, variants of sum-to with and without let are run on every build, along with a few other short example programs.

After implementing TCO in my own code, I can appreciate and understand the optimization better when I see it in the wild. I fully expect to use the pattern again when implementing future lisps (yes, I hope there will be more).


Note that this is a somewhat abstract representation: the details are language-specific. The ret=sum-to:... notation means that when the function returns, control will pass back to where it left off inside the sum-to function.


A special form is one that does not follow the normal evaluation rule for functions - it may evaluate its arguments once, many times, or not at all. (I am glossing over macros for the time being; l1 does not have them yet.)

Nicolas Hafner: Return to Development - August Kandria Update

· 124 days ago

The Kickstarter is finally well behind us, and we're back to working on the game full-time. Not that we ever really stopped working on it, mind you, but at least there's a bunch of distractions now well behind us. But, let's still quickly summarise all that went down, eh?

If you backed us, I once again wanted to extend my heartfelt thanks to you. It's always very moving and greatly motivating to see people support what you do! You should have received an email from Backerkit by now that lets you set the details of your backer rewards. Please fill that survey out as soon as you can! If you didn't get any such email, you can manually trigger it again on Backerkit:

In case you missed the Kickstarter but would still like to support us, you can also do so by preordering Kandria or its soundtrack through Backerkit. Unlike Kickstarter, this also accepts PayPal, if you don't have access to a credit card.

Gamescom & Devcom

Gamescom is coming up at the end of the month, and Kandria will be there, at the Swiss Games booth in the business area! We'll have the game ready to play, and some chocolates for you to enjoy as well, so if you're around, please stop on by! I'd love to have a chat!

We'll be there throughout the entire week (22nd-26th) so there's plenty of time to check out the booth as well.


There's roughly four months left until our planned release, and lord, there's still a tonne left to do. The game is fully playable, but a lot of the areas are still unfinished and need a ton of polish. That's what we're going to focus on now.

We've also started playtesting of the full game. So far we haven't encountered any breaking bugs, just minor issues that we'll iron out in the coming days. This is, at least to me, a very good sign, and it seems we shouldn't hit any major roadblocks and things are still proceeding according to schedule.

So what exactly is left to do? Well, here's an excerpt:

Alright, that's about it for now. If you want to receive more frequent updates and news, along with occasional in-depth emails about specific problems and solutions encountered during development, sign up for the newsletter!

Joe Marshall: Let's Play Wordle

· 127 days ago

Wordle is popular these days. Let's teach the computer how to play.

As usual, I'm using the series library. Also, if you are coding along, there's a function or two I omitted that you'll have to write yourself. Note that I use a named-let, too.

To play Wordle, you try to deduce the secret word by making guesses. After each guess, you are told which letters you got exactly right and which are right, but in the wrong position. Each guess narrows down the number of possible answers until there is one left. It's a simple matter of making good guesses.

Here I define a word as a simple string of five characters and a predicate for testing that the word is all lowercase, alphabetic, ascii characters.

(deftype word () `(simple-string 5))

(defun valid-word? (thing)
  (and (typep thing 'word)
       (str:downcase? thing)
       (every #'alpha-char-p thing)
       (every #'str:ascii-char-p thing)))

I don't use a satisfies clause in the word type. satisfies can cause issues with optimization and performance because it can be hard to control where the compiler inserts type checks. I just manually call valid-word? when necessary.

To read a word file, we read each line in the file, trim it, and select only the valid words. This works on the canonical word files, but you can read words from the system dictionary or other places if you want.

(defun read-word-file (pathname)
  (collect 'bag
    (choose-if #'valid-word?
      (map-fn 'string #'str:trim
        (scan-file pathname #'read-line)))))

(defparameter +word-file+ "~/wordle/words.txt")
(defparameter +answer-file+ "~/wordle/answers.txt")

(defparameter +all-words+ (read-word-file +word-file+))
(defparameter +all-answers+ (read-word-file +answer-file+))

We need to score a guess. When you make a guess, the squares under the letters turn green if the letter is correct, yellow if the letter is incorrect, but appears in the answer, and gray if the letter does not appear in the answer. We'll just return a list of the colors (as keywords). For example, (score-guess "react" "relay") => (:green :green :yellow :gray :gray)
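The subtle part of the rule is the pool of unmatched answer letters: a yellow consumes one copy of a letter, so duplicated letters are scored correctly. Here is a stand-alone sketch of that rule in Go (scoreGuess and remove are illustrative names, not part of the Lisp code below):

```go
package main

import "fmt"

// scoreGuess sketches the scoring rule: first collect the answer letters
// that are NOT exact matches, then walk the guess, emitting "green" for
// exact matches, "yellow" for letters found in (and removed from) the
// unmatched pool, and "gray" otherwise.
func scoreGuess(guess, answer string) []string {
	// Pool of answer characters that don't match positionally.
	unmatched := []byte{}
	for i := 0; i < len(answer); i++ {
		if guess[i] != answer[i] {
			unmatched = append(unmatched, answer[i])
		}
	}
	score := make([]string, len(guess))
	for i := 0; i < len(guess); i++ {
		switch {
		case guess[i] == answer[i]:
			score[i] = "green"
		case remove(&unmatched, guess[i]):
			score[i] = "yellow"
		default:
			score[i] = "gray"
		}
	}
	return score
}

// remove deletes one occurrence of c from pool, reporting whether it was found.
func remove(pool *[]byte, c byte) bool {
	for i, b := range *pool {
		if b == c {
			*pool = append((*pool)[:i], (*pool)[i+1:]...)
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(scoreGuess("react", "relay")) // [green green yellow gray gray]
}
```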

score-guess first needs a list of the letters in the answer that don't match the guess:

(let ((sg (scan 'string guess))
      (sa (scan 'string answer)))
  (collect 'bag
    (choose (map-fn 'boolean #'char/= sg sa) sa)))

then we walk the guess. If the guess character equals the answer character, we cons a :green on to the score. If the guess character is a member of the unmatched answer characters, we cons a :yellow on to the score and delete that character from the unmatched characters. Otherwise, we cons a :gray on to the score.

(defun score-guess (guess answer)
  (declare (type word guess answer))
  (let walk ((index 0)
             (score '())
             (unmatched-chars (let ((sg (scan 'string guess))
                                    (sa (scan 'string answer)))
                                (collect 'bag
                                  (choose (map-fn 'boolean #'char/= sg sa) sa)))))
    (if (>= index 5)
        (nreverse score)
        (let ((guess-char (schar guess index)))
          (cond ((char= guess-char (schar answer index))
                 (walk (1+ index) (cons :green score) unmatched-chars))
                ((member guess-char unmatched-chars)
                 (walk (1+ index) (cons :yellow score) (delete guess-char unmatched-chars)))
                (t
                 (walk (1+ index) (cons :gray score) unmatched-chars)))))))

Once we've made a guess and have a score, we'll want to narrow down the possible words. We just go over the word list and keep the words that have a matching score.

(defun prune-words (guess score words)
  (declare (optimizable-series-function) (off-line-port words))
  (choose-if (lambda (word)
               (equal (score-guess guess word) score))
             words))

We'll need a strategy for picking a word to guess. Here's an easy, naive one to start with: if there is only one possible word left, guess that one, otherwise guess a completely random word and narrow down the possibility list.

(defun strategy/random-word (possibilities)
  (if (= (length possibilities) 1)
      (car possibilities)
      (random-word +all-words+)))

So let's imagine the top level. The play function will play a single round of Wordle. We'll be keeping track of the possible words as we go. We choose a guess based on our strategy, then score the guess. If we got the right answer, we're done; otherwise we narrow down the list of possibilities to those that have the same score and play the next round.

(defun play (strategy &optional (round 1)
                        (possibilities +all-answers+)
                        (secret-word (random-word +all-answers+)))
  (let* ((guess (funcall strategy possibilities))
         (score (score-guess guess secret-word)))
    (format t "~&~d guessing ~s ~s ..." round guess score)
    (if (equal score '(:green :green :green :green :green))
        (progn (format t "correct.") round)
        (let ((new-possibilities
                (collect 'bag (prune-words guess score (scan 'list possibilities)))))
          (format t "narrowed to ~d possibilities." (length new-possibilities))
          (play strategy (+ round 1) new-possibilities secret-word)))))

WORDLE> (play #'strategy/random-word)

1 guessing "culty" (:GRAY :GRAY :GRAY :GRAY :GRAY) ...narrowed to 519 possibilities.
2 guessing "hings" (:GRAY :GRAY :GRAY :GRAY :GRAY) ...narrowed to 101 possibilities.
3 guessing "india" (:GRAY :GRAY :YELLOW :GRAY :GRAY) ...narrowed to 9 possibilities.
4 guessing "lauds" (:GRAY :GRAY :GRAY :YELLOW :GRAY) ...narrowed to 8 possibilities.
5 guessing "stedd" (:GRAY :GRAY :GRAY :GRAY :GREEN) ...narrowed to 2 possibilities.
6 guessing "khets" (:GRAY :GRAY :GRAY :GRAY :GRAY) ...narrowed to 2 possibilities.
7 guessing "bared" (:GREEN :GRAY :YELLOW :GRAY :GREEN) ...narrowed to 1 possibilities.
8 guessing "brood" (:GREEN :GREEN :GREEN :GREEN :GREEN) ...correct.

It plays Wordle. Not very well, but it plays. This strategy seems to average a bit more than seven guesses a game. A better strategy should reduce this average.

When you guess a word, you divide the space of possible answers into a set of equivalence classes by score. I picture these as a set of bins, each labeled with a different score, like (:green :gray :gray :yellow :gray). Making a guess divides the list of possible words among the bins. A bad guess will only use a few bins and have uneven bins. A good guess will use a larger set of bins and divide things more evenly.

We'll need a function to collect the counts of an item in a series

(defun collect-counts (test items)
  (declare (optimizable-series-function))
  (collect-fn t
              (lambda () (make-hash-table :test test))
              (lambda (table item)
                (incf (gethash item table 0))
                table)
              items))

So now we go through a series of words, score the guess against each one, and count how many times we get each score.

(defun partition-words (guess words)
  (declare (optimizable-series-function))
  (collect-counts 'equal
                  (map-fn 'list #'score-guess
                          (series guess)
                          words)))

This returns a hash table that maps scores to the number of words matching that score. We need to measure how good a job this table does at narrowing down the word list.

We'll need a couple of helpers:

(defun weighted-sum (weights elements)
  (declare (optimizable-series-function))
  (collect-sum (map-fn 'real #'* weights elements)))

(defun scan-hash-values (hash-table)
  (declare (optimizable-series-function))
  (multiple-value-bind (keys values) (scan-hash hash-table)
    (declare (ignore keys))
    values))

Now we have to decide how to evaluate how well a partition (set of bins) narrows down the possible word list. Suppose our word list originally had 128 words. That's 2⁷ items, so it would take seven binary splits to single out a word. Now suppose that after narrowing, we find we're down to 16 words. That's 2⁴ items, so the narrowing is equivalent to three binary splits. The value of an entire set of bins is the weighted average of the narrowing of each bin.
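As a stand-alone sketch of this arithmetic (evaluatePartition here is a hypothetical Go function, separate from the series code below):

```go
package main

import (
	"fmt"
	"math"
)

// evaluatePartition computes the expected information gain (in bits) of a
// partition, given the sizes of its bins: landing in a bin narrows an
// N-word list to binSize words, a gain of log2(N) - log2(binSize) bits,
// weighted by the probability binSize/N of landing in that bin.
func evaluatePartition(binSizes []int) float64 {
	total := 0
	for _, s := range binSizes {
		total += s
	}
	originalBits := math.Log2(float64(total))
	gain := 0.0
	for _, s := range binSizes {
		weight := float64(s) / float64(total)
		gain += weight * (originalBits - math.Log2(float64(s)))
	}
	return gain
}

func main() {
	// One bin: no narrowing at all.
	fmt.Println(evaluatePartition([]int{128})) // 0
	// 128 words split evenly into two bins of 64: one binary split.
	fmt.Println(evaluatePartition([]int{64, 64})) // 1
	// The example from the text: the bin of 16 alone is worth 3 bits.
	fmt.Println(evaluatePartition([]int{16, 112}))
}
```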

(defun evaluate-partition1 (partition)
  (let* ((original-size (collect-sum (scan-hash-values partition)))
         (original-bits (log2 original-size)))

    (flet ((info-gain (bin-size)
             (- original-bits (log2 bin-size)))

           (weight (bin-size)
             (/ (coerce bin-size 'double-float)
                (coerce original-size 'double-float))))

      (let ((bin-sizes (scan-hash-values partition)))
        (weighted-sum
         (map-fn 'real #'weight bin-sizes)
         (map-fn 'real #'info-gain bin-sizes))))))

(defun evaluate-guess (guess possibilities)
  (evaluate-partition (partition-words guess (scan 'list possibilities))))

(defun best-guess (guesses possibilities)
  (best #'> guesses :key (lambda (guess) (evaluate-guess guess possibilities))))

WORDLE> (play #'strategy/best-word)

1 guessing "soare" (:GRAY :GREEN :GRAY :GRAY :GRAY) ...narrowed to 87 possibilities.
2 guessing "culty" (:GRAY :GRAY :YELLOW :GRAY :GRAY) ...narrowed to 1 possibilities.
3 guessing "login" (:GREEN :GREEN :GREEN :GREEN :GREEN) ...correct.

With this strategy, we seem to average about 3.5 guesses per game. This is much better than the seven-plus we had before.

Kaveh Kardan: 40. My Face On YouTube — Is That A Good Idea?

· 128 days ago

While the filming for Encounter was in pre-production, I had started work on the graphics for the game. What today we would call the assets.

Using a Power Mac 9500, I modeled the spaceship interiors for the QuickTime VR scenes. Multiple renderings of each scene were done and stitched together to create 360-degree panoramas.

Once the video footage was edited, the scenes were composited on CG backgrounds and all the assets, complete with some basic gameplay, were assembled into a demo of the game using Macromedia Director.

I recall using a multiple-brick-sized external hard drive with a massive 9GB capacity to store the game assets and video files.

John and I travelled to the E3 (Electronic Entertainment Expo) in LA (either 1996 or 1997) to meet with potential distributors. We returned to Montreal without any serious leads, and around that time the production company which had been bankrolling our project ran into financial troubles.

So another of my projects got mothballed, though recently John was kind enough to send me a copy of the screenplay and artwork for the game, which I had lost in the house fire. Never at a loss for crazy ideas, it has occurred to me that Encounter would make an excellent VR game! Now I just need to assemble a team and find investors. Cue maniacal laughter.

Not for the first time, my life and career would take a major turn with a phone call. This time from Honolulu.

Code & Design

The above listing shows some of the macros whose output I showed last time. I have found that having a GUI, rudimentary as it may be, does allow me to more easily explore the possibilities of the system. It also has allowed me to give the first ever demo of the system, an event which was recorded and can be seen here.

Yes, so late in life I have created a YouTube channel. Kind of like being dragged kicking and screaming into the 21st century. Perhaps unexpected from a guy who develops software in a half-century-old language, but the fame and fortune of celebrity culture proved too tempting. It can't be far behind. Can it?

Once I created the channel, I began thinking of the Common Lisp lessons which launched this whole to-do, and how it had been suggested that I make videos of them.

So after my usual overthinking — no one will want to watch this! — that's what I have been doing recently.

I downloaded some screen recording software, the kind that allows you to record a video of yourself, reworked the lessons, and went for it. The whole "should I record video of myself?" question stymied me for a while, but here you have it, in glorious 21st-century digital video: my face on YouTube. Enjoy.

Next Episode

Joe Marshall: Named Lambda and Named Let

· 132 days ago

Suppose you want to map the "add three" function over a list. You don't need to define a name for the function, you can just use a lambda expression: (lambda (n) (+ n 3)). This creates an anonymous "add three" function.

Now suppose you want to map the "factorial" function over a list. You start with (lambda (x) (if (zerop x) 1 ... , but how can you recursively call an anonymous function? We need the function to have a local name within its own body. One option is to use the Y operator:

* (map 'list (Y (lambda (fact)
                  (lambda (x)
                    (if (zerop x)
                        1
                        (* x (funcall fact (1- x)))))))
       '(3 5 7))
(6 120 5040)

but another popular option is to provide a new special form:

* (map 'list (named-lambda fact (x)
               (if (zerop x)
                   1
                   (* x (fact (1- x)))))
       '(3 5 7))
(6 120 5040)

The name fact is bound to the lambda expression only within the body of the lambda expression. You don't need to defun a factorial procedure, you can just use a named-lambda.

A little puzzle for you: write named-lambda. My answer below.

Just as a let is syntactic sugar for a lambda application, a named-let is syntactic sugar for a named-lambda application. The name is bound to the lambda expression that performs the variable binding, so you can use that name to make a recursive call. In effect, you can re-invoke the named-let expression with a fresh set of values.

Scheme hackers will be familiar with named-let, but it isn't usual in Common Lisp. It's an easy transformation:

(named-let recur ((x '(a list))
                  (y 22))
   (body)) =>

(funcall (named-lambda recur (x y) (body)) '(a list) 22)

named-let is the bee's knees for ad hoc iteration. The iteration variables are bound to their initial values by the named-let, and the body can initiate the next iteration by tail-calling the named-let with the updated values for the iteration variables. Since there is no constraint on where or when you iterate, or how you update the iteration variables, this allows very general iteration.

I have seen a tendency for Scheme hackers to overdo it with named lets. You don't need a named-let when map will do. It's usually better to express an iteration through a higher order function than to write yet another ad hoc loop in your code. But named-let is a useful and powerful tool when the simpler methods aren't cutting it.

Here's what I came up with for named-lambda:

(defmacro named-lambda (name (&rest arglist) &body body)
  `(labels ((,name ,arglist ,@body))
     (function ,name)))

Tim Bradshaw: Two simple pattern matchers for Common Lisp

· 136 days ago

I’ve written two pattern matchers for Common Lisp: dsm, or destructuring-match, and spam, a simple pattern matcher.

Both dsm and spam strive to be simple and correct.


Both dsm and spam are simple: they do exactly one thing, and try to do that one thing well.

You could think of dsm as being to some other CL pattern matchers as Unix once was to Multics: dsm is the result of me looking at those other systems and thinking ‘please, not that’.

Those systems are vast, have several levels, and are extensible: some subset of them might do what I wanted to be able to do — make writing macros less unpleasant — but I’m not sure1. They are obsessed with performance.

dsm does one thing, and exports a single macro. If you know how to use destructuring-bind and case you already know almost all there is to know about dsm: it’s a case construct whose cases are destructuring-bind lambda lists. dsm doesn’t care about performance at all, because macroexpansion performance never matters.

At least one of those matchers has almost as many commits in its repo as dsm has lines of code.

Like Multics was, those hairy pattern matchers are fine systems. But there was a good reason that Thompson and Ritchie wrote something very different2.

destructuring-match / dsm

In CL, destructuring-bind and, mostly equivalently, macro argument lists are both a blessing and a curse. They’re a blessing because they support destructuring, so you can write, for instance:

(defmacro with-foo ((var &optional init) &body forms)
  ...)

They’re a curse because they are so fragile: with-foo can only support that syntax and will fail with an ugly error message from the implementation when it is fed anything else.

Writing robust macros in CL, especially macros which expect various different argument patterns, then turns into a great saga of manually checking argument patterns before using destructuring-bind to actually bind things. The result of that, of course, is that very many CL macros are not robust and have terrible error reporting.

destructuring-match does away with all this unpleasantness. It supports a slightly extended version of the lambda lists that destructuring-bind supports, has ‘guard’ clauses which allow additional checks, and will match a form against any number of lambda lists until one matches, with a fallback case.

As an example here is a version of with-foo which allows two patterns:

(defmacro with-foo (&body forms)
  (destructuring-match forms
    (((var &optional init) &body body)
     (:when (symbolp var))
     ...)
    ((((var &optional type) &optional init) &body body)
     (:when (symbolp var))
     ...)
    (otherwise
     (error ...))))

The guard clauses check that var is a symbol before the match succeeds, and will therefore ensure that the second match is the one chosen for (with-foo ((x y) 1) ...).

destructuring-match also supports ‘blank’ variables: any variable whose name is _ (in any package) is ignored, and all such variables are distinct. So for instance

(destructuring-match l
  ((_ _ _) ...))

will match if l is a proper list with exactly three elements.

Using destructuring-match it’s easy to write this macro3:

(defmacro define-matching-macro (name &body clauses)
  (let ((<whole> (make-symbol "WHOLE"))
        (<junk> (make-symbol "JUNK")))
    (destructuring-match clauses
      ((doc . the-clauses)
       (:when (stringp doc))
       `(defmacro ,name (&whole ,<whole> &rest ,<junk>)
          ,doc
          (destructuring-match ,<whole> ,@the-clauses)))
      (the-clauses
       `(defmacro ,name (&whole ,<whole> &rest ,<junk>)
          (destructuring-match ,<whole> ,@the-clauses))))))

And this then allows the above with-foo macro to be written like this:

(define-matching-macro with-foo
  ((_ (var &optional init) &body forms)
   (:when (symbolp var))
   ...)
  ((_ ((var &optional type) &optional init) &body forms)
   (:when (symbolp var))
   ...)
  (otherwise
   (error "~S is bad syntax for with-foo" form)))

dsm was not written with performance in mind but it seems to be, typically, around a tenth to a half the speed of destructuring-bind while being far more powerful of course.

dsm can be found here. It will probably end up in Quicklisp in due course but currently it isn’t there, and some of its dependencies are also not up to date there.

spam, the simple pattern matcher

dsm has a lot of cases where it needs to check what the lambda list it is parsing and compiling looks like. To do this I wrote a bunch of predicate constructors and combinators, which return predicates which will check things. So for example:

There are several other predicate constructors and predicate combinators, but spam can use any predicate.

There is then a matches macro which uses these to match things, and a matchp function which simply invokes a predicate.

As an example, here’s part of a matcher for &rest specifications in lambda lists.

(matching ll
  ((head-matches (some-of (is '&rest) (is '&body))
                 (is '&key))
   ;; &rest x &key ...
   ...)
  ((head-matches (some-of (is '&rest) (is '&body)))
   ;; &rest x with something else
   ...)
  ((list-matches (some-of (is '&rest) (is '&body)))
   ;; &rest x and no more
   ...)
  (otherwise
   (error "oops")))

spam is pretty useful, and code written using it is much easier to read than doing the equivalent checks manually. It is used extensively in the implementation of dsm.

spam is now one of my CL hax.

  1. At the time of writing Trivia supports lambda lists I think, but not destructuring-lambda lists: (match '(1 (1)) ((lambda-list a (b)) (values a b))) will fail, for instance. I don’t know whether it is meant to support destructuring lambda lists — comments in the sources imply it is, but it clearly does not in fact. 

  2. I am aware of Gabriel’s ‘worse is better’ paper and its various afterthoughts. dsm is not like that: it is smaller and simpler, but is not intended to be worse. dsm is to these other systems perhaps as Scheme was to CL. Gabriel also talks about these two options, of course. 

  3. Note this macro is 12 lines, half of which are handling the possible docstring. 

Lispjobs: Senior Common Lisp Developer | 3E | Remote (with some travel to Brussels)

· 136 days ago

More info here

Writing elegant, clean code is your thing and you are not afraid to pick things up fast and be proactive in a team. Someone creative and innovative is welcome - we embrace new technologies where they can be applied and make sense.

The ideal candidate has the following qualifications:

For older items, see the Planet Lisp Archives.

Last updated: 2022-12-03 19:59