Nicolas Martyanoff: Custom Common Lisp indentation in Emacs

· 10 days ago

While SLIME indents Common Lisp correctly most of the time, it will sometimes trip on custom forms. Let us see how we can customize indentation.

In the process of writing my PostgreSQL client in Common Lisp, I wrote a READ-MESSAGE-CASE macro which reads a message from a stream and executes code depending on the type of the message:

(defmacro read-message-case ((message stream) &rest forms)
  `(let ((,message (read-message ,stream)))
     (case (car ,message)
       (:error-response
        (backend-error (cdr ,message)))
       (:notice-response
        nil)
       ,@forms
       (t
        (error 'unexpected-message :message ,message)))))

This macro is quite useful: all message loops can use it to automatically handle error responses, notices, and signal unexpected messages.

But SLIME does not know how to indent READ-MESSAGE-CASE, so by default it will align all message forms on the first argument:

(read-message-case (message stream)
                   (:authentication-ok
                     (return))
                   (:authentication-cleartext-password
                     (unless password
                       (error 'missing-password))
                     (write-password-message password stream)))

While we want it aligned the same way as HANDLER-CASE:

(read-message-case (message stream)
  (:authentication-ok
    (return))
  (:authentication-cleartext-password
    (unless password
      (error 'missing-password))
    (write-password-message password stream)))

Good news: SLIME indentation is defined as a list of rules. Each rule associates an indentation specification (an S-expression describing how to indent the form) with a symbol, and stores it as the common-lisp-indent-function property of the symbol.

You can obtain the indentation rule of a Common Lisp symbol easily. For example, executing (get 'defun 'common-lisp-indent-function) (e.g. in IELM or with eval-expression) yields (4 &lambda &body). This indicates that DEFUN forms are to be indented as follows:

  • The first argument of DEFUN (the function name) is indented by four spaces.
  • The second argument (the list of function arguments) is indented as a lambda list.
  • The rest of the arguments are indented based on the lisp-body-indent custom variable, which controls the indentation of the body of a lambda form (two spaces by default).

You can refer to the documentation of the common-lisp-indent-function Emacs function (defined in SLIME of course) for a complete description of the format.

We want READ-MESSAGE-CASE to be indented the same way as HANDLER-CASE, whose indentation specification is (4 &rest (&whole 2 &lambda &body)) (in short, an argument and a list of lambda lists). Fortunately there is a way to specify that a form must be indented the same way as another form, using (as <symbol>).

Let us first define a function to set the indentation specification of a symbol:

(defun g-common-lisp-indent (symbol indent)
  "Set the indentation of SYMBOL to INDENT."
  (put symbol 'common-lisp-indent-function indent))

Then use it for READ-MESSAGE-CASE:

(g-common-lisp-indent 'read-message-case '(as handler-case))
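
To verify the result, query the property the same way as earlier:

(get 'read-message-case 'common-lisp-indent-function)
;; => (as handler-case)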

While it is in general best to avoid custom indentation, exceptions are sometimes necessary for readability. And SLIME makes it easy.

TurtleWare: Method Combinations

· 16 days ago

Table of Contents

  1. Introduction
  2. Defining method combinations - the short form
  3. Defining method combinations - the long form
    1. The Hooker
    2. The Memoizer
  4. Conclusions

Update [2023-01-23]

Christophe Rhodes pointed out that "The Hooker" method combination is not conforming because there are multiple methods with the same "role" that can't be ordered and that have different qualifiers:

Note that two methods with identical specializers, but with different qualifiers, are not ordered by the algorithm described in Step 2 of the method selection and combination process described in Section 7.6.6 (Method Selection and Combination). Normally the two methods play different roles in the effective method because they have different qualifiers, and no matter how they are ordered in the result of Step 2, the effective method is the same. If the two methods play the same role and their order matters, an error is signaled. This happens as part of the qualifier pattern matching in define-method-combination.

http://www.lispworks.com/documentation/HyperSpec/Body/m_defi_4.htm

So instead of using qualifier patterns we should use qualifier predicates. Predicates are not covered by the above paragraph because of its last sentence (and there is also an example in the spec that has multiple methods matched by a predicate). So instead of

(define-method-combination hooker ()
  (... (hook-before (:before*)) ...) ...)

the method combination should use:

(defun hook-before-p (method-qualifier)
  (typep method-qualifier '(cons (eql :before) (cons t null))))

(define-method-combination hooker ()
  (... (hook-before hook-before-p) ...) ...)

and other "hook" groups should also use predicates.

Another thing worth mentioning is that both ECL and SBCL addressed issues with the qualifier pattern matching and :arguments since the publication of this blog post.

Introduction

Method combinations are used to compute the effective method for a generic function. An effective method is a body of the generic function that combines a set of applicable methods computed based on the invocation arguments.

For example we may have a function responsible for reporting the object status and each method focuses on a different aspect of the object. In that case we may want to append all results into a list:

(defgeneric status (object)
  (:method-combination append))

(defclass base-car ()
  ((engine-status :initarg :engine :accessor engine-status)
   (wheels-status :initarg :wheels :accessor wheels-status)
   (fuel-level :initarg :fuel :accessor fuel-level))
  (:default-initargs :engine 'ok :wheels 'ok :fuel 'full))

(defmethod status append ((object base-car))
  (list :engine (engine-status object)
        :wheels (wheels-status object)
        :fuel (fuel-level object)))

(defclass premium-car (base-car)
  ((gps-status :initarg :gps :accessor gps-status)
   (nitro-level :initarg :nitro :accessor nitro-level))
  (:default-initargs :gps 'no-signal :nitro 'low))

(defmethod status append ((object premium-car))
  (list :gps (gps-status object)
        :nitro (nitro-level object)))

CL-USER> (status (make-instance 'premium-car))
(:GPS NO-SIGNAL :NITRO LOW :ENGINE OK :WHEELS OK :FUEL FULL)

CL-USER> (status (make-instance 'base-car))
(:ENGINE OK :WHEELS OK :FUEL FULL)

The effective method may look like this:

(append (call-method #<method status-for-premium-car>)
        (call-method #<method status-for-base-car>   ))

Note that append is a function so all methods are called. It is possible to use other operators (for example the macro and), and then the invocation of particular methods may be conditional:

(and (call-method #<method can-repair-p-for-premium-car>)
     (call-method #<method can-repair-p-for-base-car>   ))
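
A minimal sketch of such a generic function, reusing the car classes from above with the standard and method combination (the actual checks are made up for illustration):

(defgeneric can-repair-p (object)
  (:method-combination and))

(defmethod can-repair-p and ((object base-car))
  (not (eq (fuel-level object) 'empty)))

(defmethod can-repair-p and ((object premium-car))
  (eq (gps-status object) 'ok))

;; (can-repair-p (make-instance 'premium-car)) => NIL
;; The PREMIUM-CAR method returns NIL (GPS defaults to NO-SIGNAL),
;; so the BASE-CAR method is never called.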

Defining method combinations - the short form

The short form allows us to define a method combination in the spirit of the previous example:

(OPERATOR (call-method #<m1>)
          (call-method #<m2>)
          ...)

For example we may want to return as the second value the count of odd numbers:

(defun sum-and-count-odd (&rest args)
  (values (reduce #'+ args)
          (count-if #'oddp args)))

(define-method-combination sum-and-count-odd)

(defclass a () ())
(defclass b (a) ())
(defclass c (b) ())

(defgeneric num (o)
  (:method-combination sum-and-count-odd)
  (:method sum-and-count-odd ((o a)) 1)
  (:method sum-and-count-odd ((o b)) 2)
  (:method sum-and-count-odd ((o c)) 3)
  (:method :around ((o c))
    (print "haa!")
    (call-next-method)))

(num (make-instance 'b)) ;; (values 3 1)
(num (make-instance 'c)) ;; (values 6 2)

Note that the short form also supports around methods. It is also important to note that effective methods are cached: unless the generic function or the method combination changes, the effective method may be computed only once per set of applicable methods.

Admittedly these examples are not very useful. Usually we operate on data stored in instances and this is not a good abstraction to achieve that. Method combinations are useful to control method invocations and their results. Here is another example:

(defmacro majority-vote (&rest method-calls)
  (let* ((num-methods (length method-calls))
         (tie-methods (/ num-methods 2)))
    `(prog ((yes 0) (no 0))
        ,@(loop for invocation in method-calls
                append `((if ,invocation
                             (incf yes)
                             (incf no))
                         (cond
                           ((> yes ,tie-methods)
                            (return (values t yes no)))
                           ((> no ,tie-methods)
                            (return (values nil yes no))))))
        (error "we have a tie! ~d ~d" yes no))))

(define-method-combination majority-vote)

(defclass a () ())
(defclass b (a) ())
(defclass c (b) ())
(defclass d (c) ())

(defgeneric foo (object param)
  (:method-combination majority-vote)
  (:method majority-vote ((o a) param) nil)
  (:method majority-vote ((o b) param) t)
  (:method majority-vote ((o c) param) t)
  (:method majority-vote ((o d) param) nil))

(foo (make-instance 'a) :whatever) ; (values nil 0 1)
(foo (make-instance 'b) :whatever) ; #<error tie 1 1>
(foo (make-instance 'c) :whatever) ; (values t 2 0)
(foo (make-instance 'd) :whatever) ; #<error tie 2 2>

Defining method combinations - the long form

The long form is much more interesting. It allows us to specify numerous qualifiers and handle methods without any qualifiers at all.

The Hooker

Here we will define a method combination that allows us to define named hooks that are invoked before or after the method. It is possible to have any number of hooks for the same set of arguments (something we can't achieve with the standard :before and :after auxiliary methods):

(defun combine-auxiliary-methods (primary around before after)
  (labels ((call-primary ()
             `(call-method ,(first primary) ,(rest primary)))
           (call-methods (methods)
             (mapcar (lambda (method)
                       `(call-method ,method))
                     methods))
           (wrap-after (the-form)
             (if after
                 `(multiple-value-prog1 ,the-form
                    ,@(call-methods after))
                 the-form))
           (wrap-before (the-form)
             (if before
                 `(progn
                    ,@(call-methods before)
                    ,the-form)
                 the-form))
           (wrap-around (the-form)
             (if around
                 `(call-method ,(first around)
                               (,@(rest around)
                                (make-method ,the-form)))
                 the-form)))
    (wrap-around (wrap-after (wrap-before (call-primary))))))

(define-method-combination hooker ()
  ((normal-before (:before))
   (normal-after  (:after)
                  :order :most-specific-last)
   (normal-around (:around))
   (hook-before   (:before *))
   (hook-after    (:after  *)
                  :order :most-specific-last)
   (hook-around   (:around *))
   (primary () :required t))
  (let ((around (append hook-around normal-around))
        (before (append hook-before normal-before))
        (after  (append normal-after hook-after)))
    (combine-auxiliary-methods primary around before after)))

With this we may define a generic function and associated methods just as with other generic functions, with one extra feature: we may provide named :before, :after and :around methods. Named auxiliary methods take precedence over unnamed ones; only after that is the specialization considered. There is one caveat - PCL-derived CLOS implementations (clasp, cmucl, ecl, sbcl) currently ([2023-01-18 Wed]) have a bug preventing the wildcard qualifier pattern symbol * from working. So better download ccl or wait for fixes. Here's an example of using it:

;;; The protocol.
(defgeneric note-buffer-dimensions-changed (buffer w h)
  (:method (b w h)
    (declare (ignore b w h))
    nil))

(defgeneric change-dimensions (buffer w h)
  (:method-combination hooker))

;;; The implementation of unspecialized methods.
(defmethod change-dimensions :after (buffer w h)
  (note-buffer-dimensions-changed buffer w h))

;;; The standard class.
(defclass buffer ()
  ((w :initform 0 :accessor w)
   (h :initform 0 :accessor h)))

;;; The implementation for the standard class.
(defmethod change-dimensions ((buffer buffer) w h)
  (print "... Changing the buffer size ...")
  (setf (values (w buffer) (h buffer))
        (values w h)))

(defmethod note-buffer-dimensions-changed ((buffer buffer) w h)
  (declare (ignore buffer w h))
  (print "... Resizing the viewport ..."))

;;; Some dubious-quality third-party code that doesn't want to interfere with
;;; methods defined by the implementation.
(defmethod change-dimensions :after system (buffer w h)
  (print `(log :something-changed ,buffer ,w ,h)))

(defmethod change-dimensions :after my-hook ((buffer buffer) w h)
  (print `(send-email! :me ,buffer ,w ,h)))

CL-USER> (defvar *buffer* (make-instance 'buffer))
*BUFFER*
CL-USER> (change-dimensions *buffer* 10 30)

"... Changing the buffer size ..." 
"... Resizing the viewport ..." 
(LOG :SOMETHING-CHANGED #<BUFFER #x30200088220D> 10 30) 
(SEND-EMAIL! :ME #<BUFFER #x30200088220D> 10 30) 
10
30

The Memoizer

Another example (this time it will work on all implementations) is optional memoization of the function invocation. If we define a method with the qualifier :memoize then the result will be cached depending on the arguments. The method combination also allows "normal" auxiliary methods by reusing the function combine-auxiliary-methods from the previous section.

The function ensure-memoized-result accepts the following arguments:

  • test: compare generations
  • memo: a form that returns the current generation
  • cache-key: a list composed of a generic function and its arguments
  • form: a form implementing the method to be called

When the current generation is NIL, caching is disabled and we remove the result from the cache before calling the method. Otherwise we use the test to compare the generation of the cached value with the current one - if they are the same, then the cached value is returned; otherwise the method is called and its result is stored in the cache, along with the new generation, and returned.

(defparameter *memo* (make-hash-table :test #'equal))
(defun ensure-memoized-result (test memo cache-key form)
  `(let ((new-generation ,memo))
     (if (null new-generation)
         (progn
           (remhash ,cache-key *memo*)
           ,form)
         (destructuring-bind (old-generation . cached-result)
             (gethash ,cache-key *memo* '(nil))
           (apply #'values
                  (if (,test old-generation new-generation)
                      cached-result
                      (rest
                       (setf (gethash ,cache-key *memo*)
                             (list* new-generation (multiple-value-list ,form))))))))))

The method with the qualifier :memoize is used to compute the current generation key. When there is no such method then the function behaves as if the standard method combination is used. The method combination accepts a single argument test, so it is possible to define different predicates for deciding whether the cache is up-to-date or not.

(define-method-combination memoizer (test)
  ((before (:before))
   (after  (:after) :order :most-specific-last)
   (around (:around))
   (memoize (:memoize))
   (primary () :required t))
  (:arguments &whole args)
  (:generic-function function)
  (let ((form (combine-auxiliary-methods primary around before after))
        (memo `(call-method ,(first memoize) ,(rest memoize)))
        (ckey `(list* ,function ,args)))
    (if memoize
        (ensure-memoized-result test memo ckey form)
        form)))

Now let's define a function with "our" method combination. We will use a counter to verify that values are indeed cached.

(defparameter *counter* 0)

(defgeneric test-function (arg &optional opt)
  (:method-combination memoizer eql))

(defmethod test-function ((arg integer) &optional opt)
  (list* `(:counter ,(incf *counter*)) arg opt))

CL-USER> (test-function 42)
((:COUNTER 1) 42)
CL-USER> (test-function 42)
((:COUNTER 2) 42)
CL-USER> (defmethod test-function :memoize ((arg integer) &optional (cache t))
           (and cache :gen-z))
#<STANDARD-METHOD TEST-FUNCTION :MEMOIZE (INTEGER)>
CL-USER> (test-function 42)
((:COUNTER 3) 42)
CL-USER> (test-function 42)
((:COUNTER 3) 42)
CL-USER> (test-function 42 nil)
((:COUNTER 4) 42)
CL-USER> (test-function 42)
((:COUNTER 3) 42)
CL-USER> (test-function 43)
((:COUNTER 5) 43)
CL-USER> (test-function 43)
((:COUNTER 5) 43)
CL-USER> (defmethod test-function :memoize ((arg (eql 43)) &optional (cache t))
           (and cache :gen-x))
#<STANDARD-METHOD TEST-FUNCTION :MEMOIZE ((EQL 43))>
CL-USER> (test-function 43)
((:COUNTER 6) 43)
CL-USER> (test-function 43)
((:COUNTER 6) 43)
CL-USER> (test-function 42)
((:COUNTER 3) 42)

Conclusions

Method combinations are a feature that is often overlooked but gives a great deal of control over the generic function invocation. The fact that ccl is the only implementation, of the few that I've tried, which got method combinations "right" doesn't surprise me - I've always had the impression that it shines in many unexpected places.

Nicolas Martyanoff: ANSI color rendering in SLIME

· 17 days ago

I was working on the terminal output for a Common Lisp logger, and I realized that SLIME does not interpret ANSI escape sequences.

This is not the end of the world, but having at least colors would be nice. Fortunately there is a library to do just that.

First let us install the package, here using use-package and straight.el.

(use-package slime-repl-ansi-color
  :straight t)

While in theory we are supposed to just add slime-repl-ansi-color to slime-contribs, it did not work for me, and I had to enable the minor mode manually.

If you already have a SLIME REPL hook, simply add (slime-repl-ansi-color-mode 1). If not, write an initialization function, and add it to the SLIME REPL initialization hook:

(defun g-init-slime-repl-mode ()
  (slime-repl-ansi-color-mode 1))
  
(add-hook 'slime-repl-mode-hook 'g-init-slime-repl-mode)

To test that it works as intended, fire up SLIME and print a simple message using ANSI escape sequences:

(let ((escape (code-char 27)))
  (format t "~C[1;33mHello world!~C[0m~%" escape escape))

While it is tempting to use the #\Esc character, it is not part of the Common Lisp standard; therefore we use CODE-CHAR to obtain it from its ASCII numeric value. We use two escape sequences, the first one to set the bold flag and foreground color, and the second one to reset the display status.
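
If you print colored messages regularly, a small helper keeps the escape handling in one place. This is just a convenience sketch, not part of any library:

(defun format-ansi (stream code text)
  "Write TEXT to STREAM wrapped in the ANSI escape sequence CODE, then reset."
  (let ((escape (code-char 27)))
    (format stream "~C[~Am~A~C[0m~%" escape code text escape)))

;; (format-ansi t "1;33" "Hello world!") prints a bold yellow message.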

If everything works well, you should see a nice bold yellow message:

ANSI escape sequence rendering

Lispjobs: DevOps Engineer | HRL Laboratories | Malibu, CA

· 20 days ago

Job posting: https://jobs.lever.co/dodmg/85221f38-1def-4b3c-b627-6ad26d4f5df7?lever-via=CxJdiOp5C6

HRL has been on the leading edge of technology, conducting pioneering research and advancing the state of the art. This position is integrated with a growing team of scientists and engineers on HRL's quantum computing research program.

GENERAL DESCRIPTION:

As a DevOps/DevSecOps engineer, you’ll be focused on maintaining reliable systems for testing and delivery of HRL’s quantum software. (You will not be directly responsible for developing the software or its tests.)

Specifically, you will be responsible for:

* Monitoring the status of CI/CD infrastructure on open and air-gapped networks.

* Building and maintaining infrastructure for synchronizing software between open and air-gapped networks.

* Working closely with developers and IT staff to ensure continued reliability of integration and deployment infrastructure.

* Tracking and vetting software dependencies.

* Looking for and implementing improvements to DevSecOps practices.

Among other candidate requirements, we highly value expertise in Lisp, Python, and C++.


Nicolas Martyanoff: Switching between implementations with SLIME

· 21 days ago

While I mostly use SBCL for Common Lisp development, I regularly switch to CCL or even ECL to run tests.

This is how I do it with SLIME.

Starting implementations

SLIME lets you configure multiple implementations using the slime-lisp-implementations setting. In my case:

(setq slime-lisp-implementations
   '((sbcl ("/usr/bin/sbcl" "--dynamic-space-size" "2048"))
     (ccl ("/usr/bin/ccl"))
     (ecl ("/usr/bin/ecl"))))

Doing so means that running M-x slime will execute the first implementation, i.e. SBCL. There are two ways to run other implementations.

First you can run C-u M-x slime which lets you type the path and arguments of the implementation to execute. This is a bit annoying because the prompt starts with the content of the inferior-lisp-program variable, i.e. "lisp" by default, meaning it has to be deleted manually each time. Therefore I set inferior-lisp-program to the empty string:

(setq inferior-lisp-program "")

Then you can run C-- M-x slime (or M-- M-x slime which is easier to type) to instruct SLIME to use interactive completion (via completing-read) to let you select the implementations among those configured in slime-lisp-implementations.

To make my life easier, I bind C-c C-s s to a function which always prompts for the implementation to start:

(defun g-slime-start ()
  (interactive)
  (let ((current-prefix-arg '-))
    (call-interactively 'slime)))
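
The binding itself is a plain global one (assuming the C-c C-s prefix is otherwise free in your configuration):

(global-set-key (kbd "C-c C-s s") 'g-slime-start)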

Using C-c C-s as prefix for all my global SLIME key bindings helps me remember them.

Switching between multiple implementations

Running the slime function several times will create multiple connections as expected. Commands executed in Common Lisp buffers are applied to the current connection, which is by default the most recent one.

There are two ways to change the current implementation:

  1. Run M-x slime-next-connection.
  2. Run M-x slime-list-connections, which opens a buffer listing connections, and lets you choose the current one with the d key.

I find both impractical: the first one does not let me choose the implementation, forcing me to run it potentially several times before getting the one I want. The second one opens a buffer but does not switch to it.

All I want is a prompt with completion. So I wrote one.

First we define a function to select a connection among the existing ones:

(defun g-slime-select-connection (prompt)
  (interactive)
  (let* ((connections-data
          (mapcar (lambda (process)
                    (cons (slime-connection-name process) process))
                  slime-net-processes))
         (completion-extra-properties
          '(:annotation-function
            (lambda (string)
              (let* ((process (alist-get string minibuffer-completion-table
                                         nil nil #'string=))
                     (contact (process-contact process)))
                (if (consp contact)
                    (format "  %s:%s" (car contact) (cadr contact))
                  (format "  %S" contact))))))
         (connection-name (completing-read prompt connections-data)))
    (let ((connection (cl-find connection-name slime-net-processes
                               :key #'slime-connection-name
                               :test #'string=)))
      (or connection
          (error "Unknown SLIME connection %S" connection-name)))))

Then use it to select a connection as the current one:

(defun g-slime-switch-connection ()
  (interactive)
  (let ((connection (g-slime-select-connection "Switch to connection: ")))
    (slime-select-connection connection)
    (message "Using connection %s" (slime-connection-name connection))))

I bind this function to C-c C-s c.

In a perfect world, we could format nice columns in the prompt and highlight the current connection, but the completing-read interface is really limited, and I did not want to use an external package such as Helm.

Stopping implementations

Sometimes it is necessary to stop an implementation and kill all associated buffers. It is not something I use a lot, but when I need it, it is frustrating to have to switch to the REPL buffer, run slime-quit-lisp, then kill the REPL buffer manually.

Adding this feature is trivial with the g-slime-select-connection defined earlier:

(defun g-slime-kill-connection ()
  (interactive)
  (let* ((connection (g-slime-select-connection "Kill connection: "))
         (repl-buffer (slime-repl-buffer nil connection)))
    (when repl-buffer
      (kill-buffer repl-buffer))
    (slime-quit-lisp-internal connection 'slime-quit-sentinel t)))

Finally I bind this function to C-c C-s k.
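
As with g-slime-start, both bindings mentioned above are plain global ones:

(global-set-key (kbd "C-c C-s c") 'g-slime-switch-connection)
(global-set-key (kbd "C-c C-s k") 'g-slime-kill-connection)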

It is now much more comfortable to manage multiple implementations.

Tim Bradshaw: A case-like macro for regular expressions

· 22 days ago

I often find myself wanting a simple case-like macro where the keys are regular expressions. regex-case is an attempt at this.

I use CL-PPCRE for the usual things regular expressions are useful for, and probably for some of the things they should not really be used for as well. I often find myself wanting a case-like macro, where the keys are regular expressions. There is a contributed package for Trivia which will do this, but Trivia is pretty overwhelming. So I gave in and wrote regex-case which does what I want.

regex-case is a case-like macro. It looks like

(regex-case <thing>
  (<pattern> (...)
   <form> ...)
  ...
  (otherwise ()
   <form> ...))

Here <pattern> is a literal regular expression, either a string or in CL-PPCRE’s s-expression parse-tree syntax for them. Unlike case there can only be a single pattern per clause: allowing the parse-tree syntax makes it hard to do anything else. otherwise (which can also be t) is optional but must be last.

The second form in a clause specifies what, if any, variables to bind on a match. As an example

(regex-case line
  ("fog\\s+(.*)\\s$" (:match m :registers (v))
    ...)
  ...)

will bind m to the whole match and v to the substring corresponding to the first register. You can also bind match and register positions. A nice (perhaps) thing is that you can choose not to bind some register variables:

(regex-case line
  (... (:registers (_ _ v))
   ...)
  ...)

will bind v to the substring corresponding to the third register. You can use nil instead of _.

The current state of regex-case is a bit preliminary: in particular I don’t like the syntax for binding things very much, although I can’t think of a better one. Currently therefore it’s in my collection of toys: it will probably migrate from there at some point.

Currently documentation is here and source code is here.

Nicolas Hafner: Kandria is now out!

· 22 days ago
https://filebox.tymoon.eu//file/TWpZME1RPT0=

Kandria is now finally available for purchase and play!

I recommend buying it on Steam, as the algorithm there will help us bring the game in front of more people as well. However, if that isn't a possibility for you, there are also options on Itch.io and through Xsolla on our webpage.

I am also live on Steam, Twitch, and YouTube right now, to celebrate the launch! Come on and hang out in the chat: https://stream.shinmera.com

I hope you all enjoy the game, and thank you very much for sticking with us for all this time!

Nicolas Hafner: Kandria launches tomorrow!

· 23 days ago
https://filebox.tymoon.eu//file/TWpZek9BPT0=

​Kandria launches tomorrow, on Wednesday the 11th, at 15:00 CET / 9:00 EST!

There'll be a launch stream for the occasion as well. It'll be live on Twitch! I'll be happy to answer any questions you may have about the game, and hope to see you there!​

Last opportunity to wishlist the game, too: https://kandria.com/steam

vindarel: These Years in Common Lisp: 2022 in review

· 24 days ago

And 2022 is over. The Common Lisp language and environment are solid and stable, yet evolve. Implementations, go-to libraries, best practices, communities evolve. We don’t need a “State of the Ecosystem” every two weeks but still, what happened and what did you miss in 2022?

This is my pick of the most exciting, fascinating, interesting or just cool projects, tools, libraries and articles that popped-up during that time (with a few exceptions that appeared in late 2021).

This overview is not a “State of the CL ecosystem” like the one I did in 2020 (HN comments: 133), for which you can find complementary comments on HN.

I think this article (of sorts) is definitely helpful for onlookers to Common Lisp, but doesn’t provide the full “story” or “feel” of Common Lisp, and I want to offer to HN my own perspective.

And, suffice to say, I tried to talk about the most important things, but this article (of sorts) is by no means a compilation of all new CL projects or all the articles published on the internet. Look on Reddit, Quicklisp releases, Github, and my favourite resources:

If I had to pick 3 achievements they would be:

  • SBCL developments: SBCL is now callable as a shared library. See below in “Implementations”.
  • a new 3D graphics project: Kons-9: “The idea would be to develop a system along the lines of Blender/Maya/Houdini, but oriented towards the strengths of Common Lisp”. And the project progresses at a good pace.
  • CLOG, the Common Lisp Omnificent GUI. It’s like a GUI framework to create web apps. Based on websockets, it offers a light abstraction to create fully-dynamic web applications, in Common Lisp. It has lots of demos to create websites, web apps, games, and it ships a complete editor. For development, we can connect our Lisp REPL to the browser, and see changes on the fly. The author had a similar commercial product written in Ada, discovered Common Lisp, and is now super active on this project.

Let’s go for more.

Thanks to @k1d77a, @Hexstream, @digikar and @stylewarning for their feedback.


Documentation

A newcomer to Lisp came, asked a question, and suddenly he created a super useful rendering of the specification. Check it out!

But that’s not all, he also started work on a new Common Lisp editor, built in Rust and Tauri, see below.

We continue to enrich the Common Lisp Cookbook. You are welcome to join, since documentation is best built by newcomers for newcomers.

A resurrected project:

Also:

Implementations

We saw achievements in at least 8 implementations.

New implementation! It’s 2022 and people start new CL implementations.

  • NPT - an implementation of ANSI Common Lisp in C.

See also:

  • LCL, Lua Common Lisp - The goal of this project is to provide an implementation of Common Lisp that can be used wherever an unmodified Lua VM is running.
    • not a complete implementation.

They are doing great work to revive a Lisp machine:

Medley Interlisp is a project aiming to restore the Interlisp-D software environment of the Lisp Machines Xerox produced since the early 1980s, and rehost it on modern operating systems and computers. It’s unique in the retrocomputing space in that many of the original designers and implementors of major parts of the system are participating in the effort.

Paolo Amoroso blog post: my encounter with Medley Interlisp.

Jobs

I won’t list expired job announcements, but this year Lispers could apply for jobs in: web development (WebCheckout, freelance announcements), cloud service providers (Keepit), big-data analysis (Ravenpack, and chances are they are still hiring), quantum computing (HRL Laboratories), AI (Mind AI, SRI International), real-time data aggregation and alerting engines for energy systems (3E); for a startup building autism tech (and using CLOG already); there was a job posting seeking to rewrite a Python backend to Common Lisp (RIFFIT); there have been some bounties; etc.

Prior Lisp experience was not 100% necessary. There were openings for junior and senior levels, remote and not remote (Australia for “a big corp”, U.S., Spain, Ukraine...).

Comes a question:

I remind the reader that most Lisp jobs do not have a public job posting, instead candidates are often found organically on the community channels: IRC, Twitter, Discord, Reddit... or teams simply train their new developer.

In 2022 we added a few companies to the ongoing, non-official list on awesome-lisp-companies. If your company uses Common Lisp, feel free to tell us on an issue or in the comments!

For example, Feetr.io “is entirely Lisp”.

Lisp was a conscious decision because it allows a small team to be incredibly productive, plus the fact that it’s a live image allows you to connect to it over the internet and poke and prod the current state, which has really allowed a much clearer understanding of the data.

They post SLY screenshots on their Twitter^^

Evacsound (HN):

We’re using CL in prod for an embedded system for some years now, fairly smooth sailing. It started out as an MVP/prototype so implementation was of no concern, then gained enough velocity and market interest that a rewrite was infeasible. We re-train talent on the job instead.

Pandorabots, or barefootnetworks, designing the Intel Tofino programmable switches, and more.

Projects

Language libraries

Editors, online editors, REPLs, plugins

New releases:

Concurrency

See also lisp-actors, which also does networking. It looks like more of a research project, as it doesn’t have unit-tests nor documentation, but it was used for the (stopped) Emotiq blockchain.

Discussions:

Databases

More choices: awesome-cl#databases.

Delivery tools

There has been outstanding work done there. It is also great to see the different entities working on this. That includes SBCL developers, Doug Katzman of Google, and people at HRL Laboratories (also responsible for Coalton, a Haskell-like language on top of CL).

Have you ever wanted to call into your Lisp library from C? Have you ever written your nice scientific application in Lisp, only to be requested by people to rewrite it in Python, so they can use its functionality? Or, maybe you’ve written an RPC or pipes library to coordinate different programming languages, running things in different processes and passing messages around to simulate foreign function calls.

[...] If you prefer using SBCL, you can now join in on the cross-language programming frenzy too.

Games

Kandria launches on Steam on the 11th of January, in two days!

🎥 Kandria trailer.

Graphics, GUIs

We saw the release of fresh bindings for Gtk4.

We had bindings for Qt5... but they are still very rough, hard to install so far.

Also:

History:

But an awesome novelty of 2022 is Kons-9.

Kons-9, a new 3D graphics project

🚀 A new 3D graphics project: Kons-9.

The idea would be to develop a system along the lines of Blender/Maya/Houdini, but oriented towards the strengths of Common Lisp.

I’m an old-time 3D developer who has worked in CL on and off for many years.

I don’t consider myself an expert [...] A little about me: · wrote 3D animation software used in "Jurassic Park" · software R&D lead on "Final Fantasy: The Spirits Within" movie · senior software developer on "The Hobbit" films.

Interfaces with other languages

  • py4cl2-cffi: CFFI based alternative to py4cl2.
    • it does one big new thing: it supports passing CL arrays by reference. That means we actually have access to numpy, scipy, and friends.
    • “If py4cl2-cffi reaches stability, and I find that the performance of (i) cffi-numpy, (ii) magicl, as well as (iii) a few BLAS functions I have handcrafted for numericals turn out to be comparable, I might no longer have to reinvent numpy.” @digikar
  • Small update to RDNZL (CL .NET bridge by Edi Weitz)
    • forked project, added support for Int16, fixed Int64, re-building the supporting DLLs.
    • see also: Bike
  • jclass: Common Lisp library for Java class file manipulation

For more, see awesome-cl.

Numerical and scientific

  • 🚀 new Lisp Stats release
    • “emphasis on plotting and polishing of sharp edges. data-frames, array operations, documentation.”
    • HN comments (55)
    • ” I’ve been using lisp-stat in production as part of an algorithmic trading application I wrote. It’s been very solid, and though the plotting is (perhaps was, in light of this new release) kinda unwieldy, I really enjoyed using it. Excited to check out the newest release.”
    • “For example, within Lisp-Stat the statistics routines [1] were written by an econometrician working for the Austrian government (Julia folks might know him - Tamas Papp). It would not be exaggerating to say his job depending on it. These are state of the art, high performance algorithms, equal to anything available in R or Python. So, if you’re doing econometrics, or something related, everything you need is already there in the tin.”
    • “For machine learning, there’s CLML, developed by NTT. This is the largest telco in Japan, equivalent to ATT in the USA. As well, there is MGL, used to win the Higgs Boson challenge a few years back. Both actively maintained.”
    • “For linear algebra, MagicCL was mention elsewhere in the thread. My favourite is MGL-MAT, also by the author of MGL. This supports both BLAS and CUBLAS (CUDA for GPUs) for solutions.”
    • “Finally, there’s the XLISP-STAT archive. Prior to Luke Tierney, the author of XLISP-Stat joining the core R team, XLISP-STAT was the dominate statistical computing platform. There’s heaps of stuff in the archive, most at least as good as what’s in base R, that could be ported to Lisp-Stat.”
    • “Common Lisp is a viable platform for statistics and machine learning. It isn’t (yet) quite as well organised as R or Python, but it’s all there.”
  • numericals - Performance of NumPy with the goodness of Common Lisp
  • MGL-MAT - a library for working with multi-dimensional arrays which supports efficient interfacing to foreign and CUDA code. BLAS and CUBLAS bindings are available.
  • hbook - Text-based histograms in Common Lisp inspired by the venerable HBOOK histogramming library from CERN.

New releases:

  • Maxima 5.46 was released.
    • “Maxima is a Computer Algebra System comparable to commercial systems like Mathematica and Maple. It emphasizes symbolic mathematical computation: algebra, trigonometry, calculus, and much more.”
    • see its frontends, for example WxMaxima.

Call to action:

Web

Screenshotbot (Github) was released. It is “a screenshot testing service to tie with your existing Android, iOS and Web screenshot tests”.

It is straightforward to install with a Docker command. They offer more features and support with their paid service.

LicensePrompt was released. It is “a single place to track all recurring software and IT expenses and send relevant reminders to all interested people”. It’s built in CL, with an HTMX interface.

Libraries:

  • jingle: Common Lisp web framework with bells and whistles (based on ningle)
    • jingle demo: OpenAPI 3.x spec, Swagger UI, Docker and command-line interface app with jingle.
  • ciao: Ciao is an easy-to-use Common Lisp OAuth 2.0 client library. It is a port of the Racket OAuth 2.0 Client to Common Lisp.
  • stepster: a web scraping library, on top of Plump and Clss (new in QL)
  • openrpc: Automatic OpenRPC spec generation, automatic JSON-RPC client building
  • HTTP/2 implementation in Common Lisp

Skeletons:

  • cl-cookieweb: my project skeleton to start web projects. Demo in video. I am cheating, the bulk of it was done in 2021.
    • “Provides a working toy web app with the Hunchentoot web server, easy-routes, Djula templates, styled with Bulma, based on SQLite, with migrations and an example table definition.”
    • if you don’t know where to start for web dev in CL, enjoy all the pointers of this starter kit and find your best setup.
    • see also this web template by @dnaeon, and check out all his other Lisp libraries.

Bindings:

  • 👍 lisp-pay: Wrappers around various Payment Processors (Paypal, Stripe, Coinpayment)
  • lunamech-matrix-api: Implementation of the Matrix API, LunaMech a Matrix bot

Apps:

  • Ackfock - a platform of mini agreements and mini memos of understanding (built with CLOG, closed source).
  • todolist-cl: a nice looking todolist with a web UI, written in Common Lisp (and by a newcomer to CL, to add credit)

I don’t have lots of open-source apps to show. Mine are running in production and all is going well. I share everything on my blog posts. I also have an open-source one in development, but that’s for the 2023 showcase :D

CLOG

🚀 The awesome novelty of 2022 I spoke of in the introduction is CLOG, the Common Lisp Omnificent GUI:

The CLOG system browser

I know of one substantial open-source CLOG app: mold-desktop, in development.

I’m developing a programmable desktop and a bookmarks manager application with CLOG. I think I know one of the things that make CLOG user interfaces so easy to develop. It is that they are effortlessly composable. That’s it for now :)

@mmontone

New releases

There are lots of awesome projects in music composition, including OpusModus and OpenMusic which saw new releases. I also like to cite ScoreCloud, a mobile app built with LispWorks, where you whistle, sing or play your instrument, and the app writes the music score O_o

See awesome-cl and Cliki for more.

(re) discoveries

Articles

Graphics

Tooling

Scripting

Around the language

History:

Call for action:

Screencasts and podcasts

New videos by me:

by Gavin Freeborn:

KONS-9 series:

CLOG series:

CL study group:

Others:

and of course, find 3h48+ of condensed Lisp content on my Udemy video course! (I’m still working on new content, as a student you get updates).

Aside from screencasts, some podcasts:

Other discussions

Community

Learning Lisp

Common Lisp VS ...


Thanks everyone, happy lisping and see you around!

Nicolas Martyanoff: Improving Git diffs for Lisp

· 25 days ago

All my code is stored in various Git repositories. When Git formats a diff between two objects, it generates a list of hunks, or groups of changes.

Each hunk can be displayed with a title which is automatically extracted. Git ships with support for multiple languages, but Lisp dialects are not part of it. Fortunately Git lets users configure their own extraction.

The first step is to identify the language using a pattern applied to the filename. Edit your Git attribute file at $HOME/.gitattributes and add entries for both Emacs Lisp and Common Lisp:

*.lisp diff=common-lisp
*.el diff=elisp

Then edit your Git configuration file at $HOME/.gitconfig and configure the path of the Git attribute file:

[core]
    attributesfile = ~/.gitattributes

Finally, set the regular expression used to match a top-level function name:

[diff "common-lisp"]
    xfuncname="^\\((def\\S+\\s+\\S+)"
    
[diff "elisp"]
    xfuncname="^\\((((def\\S+)|use-package)\\s+\\S+)"

For Lisp dialects, we do not just identify function names: it is convenient to identify hunks for all sorts of top-level definitions. We use a regular expression which captures the first symbol of the form and the name that follows.

Of course you can modify these expressions to identify more complex top-level forms. For example, for Emacs Lisp, I also want to identify use-package expressions.

You can see the result in all tools displaying Git diffs, for example in Magit with Common Lisp code:

Common Lisp diff

Or for my Emacs configuration file:

Emacs Lisp diff

Hunk titles, highlighted in blue, now contain the type and name of the top-level construction the changes are associated with.

A simple change, but one which really helps reading diffs.

Nicolas Hafner: Kandria releases in one week on January 11!

· 29 days ago
https://filebox.tymoon.eu//file/TWpZek5RPT0=

In case you missed the yearly update last week: Kandria will release in one week from today, on January 11th, 15:00 CET / 09:00 EST. I hope you're as excited to play it as we are to finally get it into your hands!

Please remember to wishlist it on Steam to make sure you don't miss it!

Nicolas Hafner: 2022 for Kandria in Review

· 34 days ago
https://studio.tymoon.eu/api/studio/file?id=2327

It's that time of the year again! The end of it. And what a year it's been for Kandria. We're now less than two weeks away from the release. Yikes! Or should I say, woah! Well, let's take a moment and look at some of all of the things that happened, before we look at what the future may possibly hold in store for us. At least, if I have anything to say about it.

Honestly, so many things happened that I barely remember most of them. I had to go back through the monthly reviews to remember all of it. But then again, I've always been rather terrible at remembering things that far back in any chronologically complete manner. I won't be going over stuff in chronological order, either, but instead will touch on a bunch of individual topics. Let's start out with

Conferences

In 2022 we were present in person at quite a number of conferences:

  • European Lisp Symposium in Portugal

  • Digital Dragons in Poland

  • Develop: Brighton in England

  • Gamescom & Devcom in Germany

  • HEROFest in Switzerland

We got a lot of useful feedback from random people trying the game out at the events, and also got to meet a lot of friendly and great developers from around the World. That all said, these conferences are also quite taxing and costly. We got the booth sponsored for all of them, but travel expenses are still not nothing, not to mention the work time. Travelling is also quite exhausting to me in general, so I hope I won't have to zip around the place quite as much next year.

https://filebox.tymoon.eu//file/TWpVM05nPT0=The Swiss Games booth at Gamescom

However, I can already say that - unforeseen circumstances notwithstanding - I will be at the European Lisp Symposium in the Netherlands, and Tokyo Game Show in Japan.

Kickstarter & Steam Next Fest

In July we had a big double whammy of our Kickstarter and the Steam Next Fest, both launched at the same time. This was also our first big attempt at pushing for some marketing. We tried out Facebook ads, which weren't of much use at all. We also contacted a number of streamers and influencers, a few of whom actually gave the new demo we had at the time a shot. It was a lot of fun to watch them play through it and chat with them as they did so.

https://filebox.tymoon.eu//file/TWpRNU9RPT0=A gif I made in an attempt to illustrate Kandria's large vertical map

Leading up to release and during it I imagine we'll have a few more such stream appearances. If you see a streamer playing Kandria, please don't hesitate to notify us in the Discord and I'd be happy to drop by in chat. Assuming I'm not asleep at the time, of course!

Anyway, the Kickstarter went rather well for us, and we managed to get funded in the first week. After that it was mostly coasting along, giving an update every now and again to keep spirits up and push for those stretch goals (more on that later).

I'm really happy that things weren't as hectic as they are often described as being, as I was still able to focus on developing the game. Losing a month of work would have made things quite a bit more troublesome later on.

Still, I'm also very well aware that the reason things went so well for us is mostly down to the fact that we had a rather low goal set, and that we had a lot of support from the programming, and especially Common Lisp, community, many of whom chipped in rather large sums. Thank you all very much!

I'm not sure that launching your Kickstarter alongside the Next Fest was a good idea. It's definitely a good idea to have a demo available for your Kickstarter, so that people can trust in your abilities to deliver a complete product, but I don't know if the cross-promotion idea worked out. It might be better to have the next fest part way through the Kickstarter or even at the end of it, or entirely separate them, to have two marketing beats rather than one bigger one. Still, it's impossible to say whether it would have gone better or worse overall if we had done it differently, so I'm not complaining. More just thinking about how I'd do this if there's going to be another similar thing in the future.

Development

At the start of the year we didn't even have the full game map ready yet, let alone all the assets, quests, or dialogue. A number of important features were also missing still, both in the engine and in the game itself.

Thinking back to it now it is kind of insane how much of the game was still missing. I know there's folks that can put together a full game in less than a year total, but that's usually much larger teams, or far smaller games.

There were quite a few painful stretches of arduous work. Filling out the entire map with interesting challenges was one, then going back and tiling it all was another. And finally going back again to add details and flairs everywhere was yet another. But, the game feels a lot livelier and more interesting now, so it was definitely worth all that extra work.

If you want to read up on all the nitty gritty of the development that happened during the year, you can browse back in time on the blog, or for even more detail, hop on by the mailing list.

https://pbs.twimg.com/media/Fd1mVMkXwAUexcc?format=jpg&name=largeKandria running on the Steam Deck

One of the coolest parts for me was finally getting a Steam Deck (they're still not officially available in Switzerland), and seeing Kandria just... work for the most part. Having it be portable is really, really sweet. And it only took a couple of tweaks with the menuing to make it all run well. I would still love to also have the game on Switch, but we'll have to see about that later down the road.

Working up to Release

The game's been pretty much done since the end of November, and in the remaining time since then I've been working on translating the game into German. That took quite a bit of work, there's some 60'000 words, and I'm not the best at translating to begin with. The first draft of that is now done, and we should be ready to get the game out there in both English and German by the release date.

Unfortunately there won't be any other languages for the foreseeable future. My funds have run very dry, and I need to save up again to be able to support the development of the next game (more on that later). However, if you're interested in localising the game yourself, you'll be able to do so soon. Please keep your eyes and ears open!

https://filebox.tymoon.eu//file/TWpZeE5nPT0=

Aside from localisation work I've also ironed out some more bugs, cleaned up some stuff in the code base, developed an independent key distribution system so I can sell copies without being attached to a third-party platform, and added some more minor enhancements and changes along the way.

I really hope the release will go well, as far as I know there's only very minor outstanding issues.

The Release

So. Kandria is releasing on Wednesday, January 11th, 15:00 CET. It'll be released on Steam, Itch.io, and our website. All versions will be DRM-free, though we get the biggest cut of the revenue if you buy it directly from our website.

In addition to this, the Soundtrack will also be available on Steam, Itch.io, and Bandcamp, and on various streaming services such as Spotify.

If you were a backer of the Kickstarter campaign, you will receive your keys for the game and OST in the coming days.

Immediately on release I will be streaming the game on Steam and Twitch, so please join me there for a little celebration. After that I will be looking at any and all feedback that's coming in, and working on patches to address any fires that may be unveiled. And after I've addressed what I can, I think I'll take some more holidays to recenter myself and consider the coming year properly.

2023

Even after the release, my work on Kandria will not be done quite yet. There's two big post-release updates thanks to the Kickstarter stretch goals that will be coming:

  1. Level Editor. The initial release already includes the development level editor, but it is a bit rough around the edges and needs more usability and stability improvements. Once that is done, there'll be another big patch update along with a community event to encourage people to make and share their own levels.

  2. Modding Support. While the game's source code will be available on release already, the second post-launch update will focus on two things: an explicit, documented API for people to write their own mods for Kandria, and an in-game mod browser supported by mod.io.

I cannot yet make any promises about when these updates will land, especially as I also need to start gearing up work for the next game project. That's right, I'm already planning and working on the next game, and I'm really excited about it. I don't want to reveal anything about it yet, but I think you'll be positively surprised when I do!

Since things are still a bit under covers at the moment I don't know if I'll be able to keep doing monthly roundups like this, though rest assured that I will keep you in the loop with any important developments, don't you worry about that.

You

I wanted to reserve this last section right at the very end of both the article and the year here just for you. Thank you so much. I know this is sappy, and I know this is cliche, and I know it is all of these things and many others, but I do genuinely feel blessed at this moment to have you reading about my work, and following along for such a long time. And the better you know me, the better you'll know how rare it is for me to express such genuine positivity, so I hope you will take it to heart and believe me when I say that I am very thankful to you, and I hope that you'll continue to follow my endeavours in the future as well.

Before I go, I have one last favour to ask of you: please share Kandria with your friends, colleagues, and groups. I know it may not seem like much, and I know it can feel awkward, but it is invaluable for someone like me that's just starting out in this industry. Even just a few more people can make a big difference. So please, share the Steam page, itch site, or our website with people.

And again, thank you. I hope you have a great new year.

Nicolas Martyanoff: Configuring SLIME cross-referencing

· 36 days ago

The SLIME Emacs package for Common Lisp supports cross-referencing: one can list all references pointing to a symbol, move through this list and jump to the source code of each reference.

Removing automatic reference jumps

While cross-referencing is very useful, the default configuration is frustrating: moving through the list in the Emacs buffer triggers the jump to the reference under the cursor. If you are interested in a reference in the middle of the list, you will have to move to it, opening multiple buffers you do not care about as a side effect. I finally took the time to fix it.

Key bindings for slime-xref-mode are stored in the slime-xref-mode-map keymap. After a quick look in slime.el, it is easy to remove the bindings for slime-xref-prev-line and slime-xref-next-line:

(define-key slime-xref-mode-map (kbd "n") nil)
(define-key slime-xref-mode-map [remap next-line] nil)
(define-key slime-xref-mode-map (kbd "p") nil)
(define-key slime-xref-mode-map [remap previous-line] nil)

If you are using use-package, it is even simpler:

(use-package slime
  :config
  (unbind-key "n" slime-xref-mode-map)
  (unbind-key [remap next-line] slime-xref-mode-map)
  (unbind-key "p" slime-xref-mode-map)
  (unbind-key [remap previous-line] slime-xref-mode-map))

Changing the way references are used

SLIME supports two ways to jump to a reference:

  1. With return or space, it spawns a buffer containing the source file and closes the cross-referencing buffer.
  2. With v, it spawns the source file buffer but keeps the cross-referencing buffer open and keeps it current.

This is not practical for me, so I made a change. The default action, triggered by return, now keeps the cross-referencing buffer open and switches to the source file in the same window. This way, I can switch back to the cross-referencing buffer with C-x b to select another reference without spawning buffers in other windows (I do not like having my windows hijacked by commands).

To do that, I need a new function:

(defun g-slime-show-xref ()
  "Display the source file of the cross-reference under the point
in the same window."
  (interactive)
  (let ((location (slime-xref-location-at-point)))
    (slime-goto-source-location location)
    (with-selected-window (display-buffer-same-window (current-buffer) nil)
      (goto-char (point))
      (g-recenter-window))))

Note the use of g-recenter-window, a custom function to move the current point at eye level. Feel free to use the builtin recenter function instead.

I then bind the function to return and remove other bindings:

(define-key slime-xref-mode-map (kbd "RET") 'g-slime-show-xref)
(define-key slime-xref-mode-map (kbd "SPC") nil)
(define-key slime-xref-mode-map (kbd "v") nil)

Much better now!

Tim BradshawThe empty list

· 48 days ago

My friend Zyni pointed out that someone has been getting really impressively confused and cross on reddit about empty lists, booleans and so on in Common Lisp, which led us to a discussion about what the differences between CL and Scheme really are here. Here’s a summary which we think is correct.

A peculiar object in Common Lisp1

In Common Lisp there is a single special object, nil.

  • This represents both the empty list, and the special false value, all other objects being true.
  • This object is a list and is the only list object which is not a cons.
  • As such this object is an atom, and again it is the only list object which is an atom.
  • You can take the car and cdr of this object: both of these operations return the object itself.
  • This object is also a symbol, and it is the only object which is both a list and a symbol.
  • The empty list when written as an empty list, (), is self-evaluating.
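
A short REPL transcript, included here purely as an illustration, makes these properties concrete (the results are the same in any conforming implementation):

(consp nil)    ; => NIL, the empty list is not a cons
(atom nil)     ; => T
(listp nil)    ; => T
(symbolp nil)  ; => T, it is also a symbol
(car nil)      ; => NIL, CAR and CDR of NIL return NIL itself
(cdr nil)      ; => NIL
(eq nil '())   ; => T, NIL and () are the same object
()             ; => NIL, the empty list is self-evaluating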

Some comments.

  • It is necessary that there be a special empty-list object which is a list but not a cons: the things which are not necessary are that it be a symbol, and that it represent falsity.
  • Combining the empty list and the special false object can perhaps lead to particularly good implementations.
  • The implementation of this object is always going to be a bit weird.
  • It is clear that the empty list cannot be any kind of compound form so requiring it to be quoted — requiring you to write '() really — serves no useful purpose. Nevertheless I (Tim) would probably rather CL did that.
  • Not having to quote nil on the other hand is not at all strange: any symbol can be made self-evaluating simply by (defconstant s 's), for instance.
  • The graph of types in CL is a DAG, not a tree: it is not at all strange that there is an object whose type is both list and symbol.

Some entirely mundane things in Common Lisp

  • There is a symbol, t which represents the canonical true value. Nothing is magic about this symbol in any way: it could be defined by (defconstant t 't).
  • There is a type, boolean which could be defined by (deftype boolean () '(member nil t)), except that it is required that boolean be a recognisable subtype of symbol. All implementations we have tried recognise (member nil t) as a subtype of symbol, but the standard does not require them to do so. Additionally (type-of 't) must return boolean we think.
  • There is a type, null, which could be defined by (deftype null () '(member nil)) or (deftype null () '(eql nil)), with the same caveats as above, and (type-of nil) should return null.
  • There are types named t (top of the type graph) and nil (bottom of type graph).
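
A quick REPL sketch of these mundane things (boolean and null are the standard types; the subtypep answers are the ones the implementations we tried give):

(type-of t)                         ; => BOOLEAN
(type-of nil)                       ; => NULL
(typep nil 'boolean)                ; => T
(subtypep 'null 'boolean)           ; => T, T
(subtypep '(member nil t) 'symbol)  ; => T, T on the implementations we tried
(subtypep 'nil 'integer)            ; => T, T, the type nil is a subtype of every type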

These mundane things are just that: they don’t require implementational magic at all.

Three peculiar objects in Scheme

In Scheme there is an object, ().

  • () is the special object that represents the empty list.
  • It does not represent false.
  • It is not a symbol.
  • It is the only list object which is not a pair (cons): list? is true of it but pair? is false.
  • You can’t take the car or cdr of it.
  • It is not self-evaluating.

There is another object, #f.

  • #f is the distinguished false value and is the only false value in Scheme, all other objects being true.
  • It is not a symbol or a list but satisfies the boolean? predicate.
  • It is self-evaluating.

There is another object, #t.

  • #t represents the canonical true value, but all objects other than #f are true.
  • It is not a symbol or a list but satisfies the boolean? predicate.
  • It is self-evaluating.

Some comments.

  • Scheme does not have such an elaborate type system as CL and, apart from numbers, doesn’t really have subtype relations the way CL does.

A summary

CL’s treatment of nil clearly makes some people very unhappy indeed. In particular they seem to think CL is somehow inconsistent, which it clearly is not. Generally this is either because they don’t understand how it works, because it doesn’t work the way they want it to work, or (usually) both. Scheme’s treatment is often cited by these people as being better. But CL requires precisely one implementationally-weird object, while Scheme requires two, or three if you count #t which you probably should. Both languages have idiosyncratic evaluation rules around these objects. Additionally it’s worth understanding that things like CL’s boolean type mean essentially nothing implementationally: boolean is just a name for a set of symbols. The only thing preventing you from defining a type like this yourself is the requirement for type-of to return the type.

Is one better than the other? No: they’re just not the same. Certainly the CL approach carries more historical baggage. Equally certainly it is perfectly consistent, and changing it would break essentially all CL programs that exist.


Thanks to Zyni for most of this: I’m really writing it up just so we can remember it. We’re pretty confident about the CL part, less so about the Scheme bit.


  1. peculiar, adjective: having eccentric or individual variations in relation to the general or predicted pattern, as in peculiar motion or velocity. noun: a parish or church exempt from the jurisdiction of the ordinary or bishop in whose diocese it is placed; anything exempt from ordinary jurisdiction. 

Nicolas MartyanoffFixing unquote-splicing behaviour with Paredit

· 51 days ago

Paredit is an Emacs package for structural editing. It is particularly useful in Lisp languages to manipulate expressions instead of just characters.

One of the numerous little features of Paredit is the automatic insertion of a space character before a delimiting pair. For example, if you are typing (length, typing ( will have Paredit automatically insert a space character before the opening parenthesis, to produce the expected (length ( content.

Paredit is smart enough to avoid doing so after quote, backquote or comma characters, but not after an unquote-splicing sequence (,@) which is quite annoying in languages such as Scheme or Common Lisp. As almost always in Emacs, this behaviour can be customized.

Paredit decides whether to add a space or not using the paredit-space-for-delimiter-p function, ending up with applying a list of predicates from paredit-space-for-delimiter-predicates.

Let us add our own. For more flexibility, we will start by defining a list of prefixes which are not to be followed by a space:

(defvar g-paredit-no-space-prefixes (list ",@"))

We then write our predicate which simply checks if we are right after one of these prefixes:

(defun g-paredit-space-for-delimiter (endp delimiter)
  (let ((point (point)))
    (or endp
        (seq-every-p
         (lambda (prefix)
           (and (> point (length prefix))
                (let ((start (- point (length prefix)))
                      (end point))
                  (not (string= (buffer-substring start end) prefix)))))
         g-paredit-no-space-prefixes))))

Finally we add a Paredit hook to append our predicate to the list:

(defun g-init-paredit-space-for-delimiter ()
  (add-to-list 'paredit-space-for-delimiter-predicates
               'g-paredit-space-for-delimiter))

(add-hook 'paredit-mode-hook 'g-init-paredit-space-for-delimiter)

Not only does it fix the problem for unquote-splicing, but it makes it easy to add new prefixes. For example I immediately added #p (used for pathnames in Common Lisp, e.g. #p"/usr/bin/") to the list.

Nicolas MartyanoffSLIME compilation tips

· 52 days ago

I recently went back to Common Lisp to solve the daily problems of the Advent of Code. Of course it started with installing and configuring SLIME, the main major mode used for Common Lisp development in Emacs.

The most useful feature of SLIME is the ability to load sections of code into the Common Lisp implementation currently running. One can use C-c C-c to evaluate the current top-level form, and C-c C-k to reload the entire file, making incremental development incredibly convenient.

However I found the default configuration frustrating. Here are a few tips which made my life easier.

Removing the compilation error prompt

If the Common Lisp implementation fails to compile the file, SLIME will ask the user if they want to load the fasl file (i.e. the compiled form of the file) anyway.

I cannot find a reason why one would want to load the output of a file that failed to compile, and having to decline every time is quite annoying.

Disable the prompt by setting slime-load-failed-fasl to 'never:

(setq slime-load-failed-fasl 'never)

Removing the SLIME compilation buffer on success

When compilation fails, SLIME creates a new window containing the diagnostic reported by the Common Lisp implementation. I use display-buffer-alist to make sure the window is displayed on the right side of my three-column split, and fix my code in the middle column.

However if the next compilation succeeds, SLIME updates the buffer to indicate the absence of error, but keeps the window open even though it is not useful anymore, meaning that I have to switch to it and close it with q.

One can look at the slime-compilation-finished function to see that SLIME calls the function referenced by the slime-compilation-finished-hook variable right after the creation or update of the compilation buffer. The default value is slime-maybe-show-compilation-log which does not open a new window if there is no error, but does not close an existing one.

Let us write our own function and use it:

(defun g-slime-maybe-show-compilation-log (notes)
  (with-struct (slime-compilation-result. notes successp)
      slime-last-compilation-result
    (when successp
      (let ((name (slime-buffer-name :compilation)))
        (when (get-buffer name)
          (kill-buffer name))))
    (slime-maybe-show-compilation-log notes)))
    
(setq slime-compilation-finished-hook 'g-slime-maybe-show-compilation-log)

Nothing crazy here: we obtain the compilation status (in a very SLIME-specific way; with-struct is not a standard Emacs Lisp macro) and, when compilation succeeded, kill the compilation buffer if there is one.

Making compilation less verbose

Common Lisp specifies two variables, *compile-verbose* and *load-verbose*, which control how much information is displayed during compilation and loading respectively.

My implementation of choice, SBCL, is quite chatty by default. So I always set both variables to nil in my $HOME/.sbclrc file.

However SLIME forces *compile-verbose*; this is done in SWANK, the Common Lisp part of SLIME. When compiling a file, SLIME instructs the running Common Lisp implementation to execute swank:compile-file-for-emacs, which forces *compile-verbose* to t around the call to a list of functions that may handle the file. The one we are interested in is swank::swank-compile-file*.

First, let us write some Common Lisp code to replace the function with a wrapper which sets *compile-verbose* to nil.

(let ((old-function #'swank::swank-compile-file*))
  (setf (fdefinition 'swank::swank-compile-file*)
        (lambda (pathname load-p &rest options &key policy &allow-other-keys)
          (declare (ignore policy))
          (let ((*compile-verbose* nil))
            (apply old-function pathname load-p options)))))

We save it to a file in the Emacs directory.

In Emacs, we use the slime-connected-hook hook to load the code into the Common Lisp implementation as soon as Slime is connected to it:

(defun g-slime-patch-swank-compilation-function ()
  (let* ((path (expand-file-name "swank-patch-compilation-function.lisp"
                                 user-emacs-directory))
         (lisp-path (slime-to-lisp-filename path)))
    (slime-eval-async `(swank:load-file ,lisp-path))))
    
(add-hook 'slime-connected-hook 'g-slime-patch-swank-compilation-function)

Quite a hack, but it works.

Jonathan GodboutHello World gRPC Server

· 53 days ago

In our last post we discussed the basics of gRPC, gave an example flow, and discussed why you would want to use it over different communication protocols. In this post we will create a Hello World server using gRPC. This will involve the mixed-use of both gRPC and cl-protobufs. The code here can be found in my Hello World gRPC Github repo and specifically in the first commit.

HelloWorld Service

Defining the Protocol

Before we start writing code we must define:

  1. The messages we wish to send from the client to server and back.
  2. The server and method name.

We will start off simple. The request will simply contain a string name and the response will contain a string message. Creating these messages has been discussed before and is not very interesting. The new portion is the server. In Proto parlance a server is a service and that service contains a set of callable methods called RPCs.

Looking at the proto file in our Hello World repo we see:

service HelloWorld {
  rpc SayHello(HelloRequest) returns (HelloReply) {}
}

This says we are creating a server named HelloWorld. It will export one callable method called SayHello, which will accept one (serialized) HelloRequest proto message and respond with one HelloReply proto message.

The Protocol, Macroexpanded

We've created our proto file. Next we add it to a Lisp library with an ASD file, along with all of the defsystem requirements to process the proto file. We detailed an example in our post Proto Over HTTPS if you need a refresher. Now we would like to create a server that can be called. To understand how to do this we will go over the generated service code, so please load your ASD file (or just follow along).

Cl-protobufs expands the proto service into several callable functions in the CL-PROTOBUFS.{FILENAME}-RPC package; for us this is CL-PROTOBUFS.HELLO-RPC. For each RPC it creates two functions:

  1. CL-PROTOBUFS.HELLO-RPC:CALL-SAY-HELLO, which takes:
    1. a CHANNEL argument (an object defined by the gRPC library), and
    2. a REQUEST argument (a CL-PROTOBUFS.HELLO-REQUEST message).
  2. CL-PROTOBUFS.HELLO-RPC:SAY-HELLO, which takes:
    1. a CL-PROTOBUFS.HELLO-REQUEST message, and
    2. a CALL object.

The CL-PROTOBUFS.HELLO-RPC:CALL-SAY-HELLO function will let clients call our service. The channel is created with the gRPC library and we will discuss this later. The request message is a HELLO-REQUEST object.

The CL-PROTOBUFS.HELLO-RPC:SAY-HELLO is a generic function. The user will have to implement a method overriding this generic. It takes a  (deserialized) CL-PROTOBUFS.HELLO-REQUEST message and a gRPC call object created by the gRPC library.

gRPC Objects

There are two internal book-keeping objects we need to talk about: the CHANNEL object and the CALL object. 

CHANNEL

The channel is an object created by gRPC over which a user can send messages. There are several options for channels - please see the gRPC documentation. We will see a brief example below when we call our complete server.

Call

The call object contains metadata about a call created by the gRPC server, such as whether the call has been canceled. It is currently unused, but will be more useful in the future. For now it will remain ignored in our server implementation.

Server Implementation

Now the fun part: cl-protobufs created the scaffolding for making a server and gRPC created the scaffolding for hosting a server and servicing calls, but we need to implement our server. All we have to do is implement our RPC stub (SAY-HELLO) and start the server!

Implementing our RPC

Since the cl-protobufs scaffolding creates a generic function we just make a method implementing that generic:

(defmethod hello-rpc:say-hello ((request hello:hello-request) call)
  ;; The RPC contains useful data for more intricate requests.
  (declare (ignore call))
  (hello:make-hello-reply
   :message (concatenate 'string "Hello " (hello:hello-request.name request))))

Notice we don't have any serialization calls: this is all done by the gRPC/cl-protobufs scaffolding. Instead, we make the protos and implement our logic.

Starting our Server

Starting our server requires:

  1. Calling (grpc:init-grpc)
    1. This is done once to initialize pieces of gRPC.
  2. Calling grpc::run-grpc-proto-server

The grpc::run-grpc-proto-server function needs, at a minimum, the host:port and the service symbol, here cl-protobufs.hello:hello-world. It offers more functionality, allowing for SSL, a user-defined number of threads, etc. See the gRPC code for details.

Full Example

Now that we have created our server we will show an example of starting the server and calling it. First clone the Hello World Repo. To start the server, just load the grpc-server package defined in that repo and call grpc-server:main. Your server has now been started! You must specify the hostname and port; in our example we use 127.0.0.1 and 8080, defined in the constants +hostname+ and +port-number+ in server.lisp.

(defun main ()
  ;; Before we use gRPC we need to init-grpc, this sets up
  ;; low-level gRPC internals.
  (grpc:init-grpc)
  ;; This starts the server.
  (grpc::run-grpc-proto-server
   "127.0.0.1:8080"
   cl-protobufs.hello:hello-world))

Next we need to call the server. In a REPL, load the grpc-server example. This is just to get the #:grpc and #:cl-protobufs.hello-rpc and #:cl-protobufs.hello packages. Next call:

(grpc:with-insecure-channel
             (channel "127.0.0.1:8080")
           (cl-protobufs.hello-rpc:call-say-hello
            channel
            (cl-protobufs.hello:make-hello-request :name "Bob")))

We will discuss the grpc:with-insecure-channel function in the next post. Just note that here we specify a binding argument - channel - and the host and port. Finally, we call our server using cl-protobufs.hello-rpc:call-say-hello over the channel with a protocol buffer message. This returns:

#S(CL-PROTOBUFS.HELLO:HELLO-REPLY
   :%%SKIPPED-BYTES NIL
   :%%BYTES NIL
   :%%IS-SET #*
   :%-MESSAGE #S(CL-PROTOBUFS.IMPLEMENTATION::ONEOF
                 :VALUE "Hello Bob"
                 :SET-FIELD 0))

Wrapping Up

Here we have seen the creation of a full gRPC server, and we called it with a gRPC client, seeing it receive and respond with Protocol Buffer messages over the wire. This requires none of the serialization and deserialization scaffolding we created in our previous HTTP servers, as we get it for free! In future posts we will discuss client calls as well as bidirectional streaming.


Thanks goes to Ron Gut and Carl Gay for making edits and comments.

vindarelDebugging Lisp: fix and resume a program from any point in stack &#127909;

· 58 days ago

You are doing god’s work on a time-intensive computation, but your final step errors out :S Are you doomed to start everything from zero, and wait again for this long process? No! Find out.

I show this with Emacs and Slime, then with the Lem editor (ready-to-use for CL, works with many more languages thanks to its LSP client).

(This video is so cool :D Sound on)

We use the built-in Common Lisp interactive debugger that lets us restart one precise frame from the call stack. Once you get the debugger:

  • press “v” on a frame to go to the corresponding source file
  • fix the buggy function, compile it again (C-c C-c in Slime)
  • come back to the debugger, place the point on the frame you want to try again, press “r” (sldb-restart-frame, see your editor’s menu)
  • see everything succeed. You did not restart everything from zero.
  • if you don’t see some function calls and variables in the debugger, compile your code with max debug settings (C-u C-c C-c in Slime).

For more:

Other videos:

Hope you learned something!


Debugging a complex stacktrace in Python VS Common Lisp (unknown author)

Tim BradshawClosed as duplicate considered harmful

· 59 days ago

The various Stack Exchange sites, and specifically Stack Overflow, seem to be some of the best places for getting reasonable answers to questions on a wide range of topics from competent people. They would be a lot better if they were not so obsessed about closing duplicates.

Closing duplicates seems like a good idea: having a single, canonical, question on a given topic with a single, canonical, answer seems like a good thing. It’s not.

The reason it’s not is that it makes two false assumptions:

  • that a given question has a single best answer;
  • that this answer does not change over time.

Neither of these assumptions is true for a large number of interesting questions.

Questions can have several good answers. I have at least three introductory books on analysis, and not because I didn’t find the good one on the first try: I have several because they give different perspectives — different answers, in the sense of Stack Exchange — to various aspects of the subject. I have several books on introductory quantum mechanics, several books on introductory general relativity, and so it goes on. It is, simply, a delusion that there exists a single most helpful answer to many questions: pretending that there is one is stupidly limiting.

And what constitutes a good answer can change over time. If you asked, for instance, what a macro was in Lisp and what macros are good for, you would have got very different answers in 1982 than in 20221. The same is true for many other subjects: human knowledge is not static.

All of this is made worse as only the person asking a question can accept an answer: they may not do so at all or, worse, they may be asking in bad faith and accept wrong or misleading answers (yes, this happens in various Stack Exchanges).

The true Stack Exchange believer will now explain in great detail2 why none of this matters: people should just spend their time adding improved answers to questions which already have accepted answers rather than to new questions which will be closed as duplicates. Because, of course, the accepted answer will not be the one almost everyone looks at, and even if they don’t care about increasing their karma on Stack Exchange, they will be very happy to write answers that, in the real world, almost nobody will ever look at.

Yeah, right.

This is such a shame: Stack Exchange is a good thing, but it’s seriously damaged by this unnecessary problem. The answer is not simply to allow unrestricted duplicates, but to wait for a bit and see if a question which is, or is nearly, a duplicate has attracted new and interesting answers, and to not close it as a duplicate in that case. This would not be hard to do.


  1. And even in 2022 you will get answers from people who seem not to have learned anything since 1982. 

  2. Please, don’t: I don’t have a Stack Exchange account any more and, even if I did, I would not be interested. 

Nicolas HafnerRounding Up - December Kandria Update

· 59 days ago
https://filebox.tymoon.eu//file/TWpZeU1RPT0=

This is a shorter update, as this month was primarily spent on translation and bugfixing, neither of which we can really tell you much about. If you missed last month's though, please be aware that the game will release on January 11th!

Various Things

The translation is going slow, as I'm doing it myself and am not used to a good workflow for that. I expect I'll get the hang of it yet though and I've already made pretty significant progress on it.

Other than that, the achievement icons are done, thanks to Blob! I'm really happy with how they turned out, and it's nice to have some of his high-res art associated with the game.

Finally, I took two days to develop a new webservice for key distribution, which will allow you to buy the game DRM-free directly on our website once it releases. This will also ensure that 95% of your money goes to us, rather than the 70% we get on Steam.

The Soundtrack is also now fully mastered and done, and is ready to be released at the same time as the game, which is 1500 CET, January 11th. Those that backed us via Kickstarter can expect to get their keys sometime early January, though the game won't unlock until the official release date.

Roadmap

Since last month things have not changed drastically, though the game is now "done" as far as the English version is concerned. Most of the remaining time until release in January will be spent on very niche bugfixes and the German translation.

  • Spruce up some of the sound effects and music tracks

  • Create achievement icons and integrate them into the game

  • Translate into German

  • Release the full game

  • Backport re-usable components into Trial

  • Separate out the assets from the main repository

  • Publish the source code under a permissive license

  • Fix a plethora of bugs and inconveniences in Alloy

  • Polish the editor and make it more stable

  • Release the editor

  • Develop a modding system for Trial

  • Finish the Forge build system at least to an extent where it's usable

  • Integrate the mod.io API with the modding system

  • Create a mod manager and browser UI

  • Document more parts of Trial and Kandria

  • Release an official modding kit

Well then until that fateful day, 11th of January, please continue to share the steam page with your friends and other communities! We really need all the help we can get leading up to release.

Please also look forward to a yearly roundup at the end of the month, and until then I hope you have a great holiday season!

Jonathan GodboutgRPC Basics

· 61 days ago

Howdy Hackers, it's finally time to talk to you about gRPC for Common Lisp. In this post we will discuss the basics of gRPC. We will go through an example request/response flow from the perspective of the client and server. In future posts we will make a gRPC server and call it from a client. 

Lyra and Faye Looking Forward to gRPC Discussion?

Background

gRPC is a general RPC framework developed by Google. It is often used to pass Protocol Buffer messages from one system to another, though an advanced user could use it to pass byte vectors. It sits over HTTP/2 and allows for bidirectional message streaming between a client and a server.

For these posts we assume some knowledge of Protocol Buffers.

Why would I use it?

gRPC allows for simple communication between clients and servers. It allows for language-agnostic message passing of complex structured objects.

First let’s look at a simple call flow for a client and server.

  1. Service implementor publishes a gRPC Service description and Request Message as well as a public URL.
  2. Client uses the URL and gRPC library to create a channel.
  3. Client instantiates a request object.
  4. Client uses Protocol Buffer generated code to call the server passing in the channel and request object.
  5. Server receives the request object, does required processing, and returns a response object.
  6. The client receives a response message based on the published service descriptor.

The client and server need language specific Protocol Buffer and gRPC libraries. The language of these libraries for the client and server need not be identical. In our examples we will use qitab/grpc and qitab/cl-protobufs both written for Common Lisp.

The protocol buffer library takes care of many of the low-level details for you. Once you specify the request and response message fields, it provides convenient constructors in multiple languages and takes care of serializing and deserializing each message field to the correct type.

The gRPC library is in charge of transmission of the underlying bytes from one client to server. It delegates to the Protocol Buffer library for serialization of the request and response messages.

Alternatives

HTTP(/2)

One option to consider is bare HTTP calls; in fact, this is what gRPC itself is built on! This still leaves a system designer with the need to choose what to send over the wire, often JSON or XML. Then one must determine how to share the API schema, devise authentication schemes, and do all the other work that creating a good API requires. gRPC gives much of this for free.

Apache Thrift

gRPC has a larger market share. The ecosystem you want to work with will often determine your choice of Thrift vs gRPC.

Note:

There are many different RPC frameworks; these are just the most common. Your software environment will often determine your framework: if you work at Google you will probably use gRPC, whereas if you work at Facebook you'll probably use Thrift. Also, not every language is supported by every RPC framework.

Conclusion

We now understand gRPC and its use case. We discussed the different types of libraries we need and saw a simple call flow with these libraries. In our next post we will create a gRPC server using qitab/grpc and call it.


Thanks go to Carl Gay for edits!

vindarelDebugging Lisp: trace options, break on conditions

· 62 days ago

Those are useful Common Lisp debugging tricks. Did you know about trace options?

We see how trace accepts options. Especially, we see how we can break and invoke the interactive debugger before or after a function call, how we can break on a condition (“this argument equals 0”) and how we can enrich the trace output. But we only scratch the surface, more options are documented on their upstream documentation:

INFO: You'd better read this on the Common Lisp Cookbook, that's where it will receive updates.

Table of Contents

  • Trace - basics
  • Trace options - break and invoke the debugger
  • break on a condition
  • Other options
  • Closing remark

Trace - basics

But let’s first see a recap of the trace macro. Compared to the previous Cookbook content, we just added that (trace) alone returns a list of traced functions.

trace allows us to see when a function was called, what arguments it received, and the value it returned.

(defun factorial (n)
  (if (plusp n)
    (* n (factorial (1- n)))
    1))

To start tracing a function, just call trace with the function name (or several function names):

(trace factorial)

(factorial 3)
  0: (FACTORIAL 3)
    1: (FACTORIAL 2)
      2: (FACTORIAL 1)
        3: (FACTORIAL 0)
        3: FACTORIAL returned 1
      2: FACTORIAL returned 1
    1: FACTORIAL returned 2
  0: FACTORIAL returned 6
6

(untrace factorial)

To untrace all functions, just evaluate (untrace).

To get a list of currently traced functions, evaluate (trace) with no arguments.
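
For example (output from SBCL; the exact printing may vary between implementations):

CL-USER> (trace factorial)
(FACTORIAL)
CL-USER> (trace)     ; which functions are currently traced?
(FACTORIAL)
CL-USER> (untrace)   ; stop tracing everything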

In Slime we have the shortcut C-c M-t to trace or untrace a function.

If you don’t see recursive calls, that may be because of the compiler’s optimizations. Try this before defining the function to be traced:

(declaim (optimize (debug 3)))  ;; or C-u C-c C-c to compile with maximal debug settings.

The output is printed to *trace-output* (see the CLHS).

In Slime, we also have an interactive trace dialog with M-x slime-trace-dialog bound to C-c T.

But we can do many more things than calling trace with a simple argument.

Trace options - break and invoke the debugger

trace accepts options. For example, you can use :break t to invoke the debugger at the start of the function, before it is called (more on break below):

(trace factorial :break t)
(factorial 2)

We can define many things in one call to trace. For instance, options that appear before the first function name to trace are global: they affect all traced functions that we add afterwards. Here, :break t is set for every function that follows: factorial, foo and bar:

(trace :break t factorial foo bar)

On the contrary, if an option comes after a function name, it acts as a local option, only for its preceding function. That is what we did first. Below, foo and bar come after; they are not affected by :break:

(trace factorial :break t foo bar)

But do you actually want to break before the function call or just after it? With :break as with many options, you can choose. These are the options for :break:

:break form  ;; before
:break-after form
:break-all form ;; before and after
TIP: form can be any form that evaluates to true. You can add any custom logic here.

Note that we have described SBCL’s trace. Other implementations may offer the same features with a different syntax and other option names. For example, in LispWorks it is “:break-on-exit” instead of “:break-after”, and we write (trace (factorial :break t)).

Below are some other options but first, a trick with :break.

break on a condition

The argument to an option can be any form. Here’s a trick, on SBCL, to get the break window when we are about to call factorial with 0. (sb-debug:arg 0) refers to n, the first argument.

CL-USER> (trace factorial :break (equal 0 (sb-debug:arg 0)))
;; WARNING: FACTORIAL is already TRACE'd, untracing it first.
;; (FACTORIAL)

Running it again:

CL-USER> (factorial 3)
  0: (FACTORIAL 3)
    1: (FACTORIAL 2)
      2: (FACTORIAL 1)
        3: (FACTORIAL 0)

breaking before traced call to FACTORIAL:
   [Condition of type SIMPLE-CONDITION]

Restarts:
 0: [CONTINUE] Return from BREAK.
 1: [RETRY] Retry SLIME REPL evaluation request.
 2: [*ABORT] Return to SLIME's top level.
 3: [ABORT] abort thread (#<THREAD "repl-thread" RUNNING {1003551BC3}>)

Backtrace:
  0: (FACTORIAL 1)
      Locals:
        N = 1   <---------- n is still 1, we break before the call with 0.

Other options

Trace on conditions

:condition enables tracing only if the condition in form evaluates to true.

:condition form
:condition-after form
:condition-all form

If :condition is specified, then trace does nothing unless Form evaluates to true at the time of the call. :condition-after is similar, but suppresses the initial printout, and is tested when the function returns. :condition-all tries both before and after.
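
For example, still on SBCL and reusing the sb-debug:arg trick shown above for :break, we can trace only the calls that receive an even argument (a sketch, adapt the condition to your needs):

(trace factorial :condition (evenp (sb-debug:arg 0)))
;; (factorial 3) now only reports the calls where N is even,
;; that is (FACTORIAL 2) and (FACTORIAL 0), with their return values.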

Trace if called from another function

:wherein can be super useful:

:wherein Names

If specified, Names is a function name or list of names. trace does nothing unless a call to one of those functions encloses the call to this function (i.e. it would appear in a backtrace.) Anonymous functions have string names like “DEFUN FOO”.
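
A small sketch: COMPUTE-STATS below is a hypothetical caller defined only for this example. FACTORIAL is then reported only when it is called somewhere below COMPUTE-STATS:

(defun compute-stats (n)
  ;; hypothetical caller, defined only for this example
  (list n (factorial n)))

(trace factorial :wherein compute-stats)

(factorial 3)      ; prints no trace output
(compute-stats 3)  ; prints the usual FACTORIAL trace output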

Enrich the trace output

:report Report-Type

If Report-Type is trace (the default) then information is reported by printing immediately. If Report-Type is nil, then the only effect of the trace is to execute other options (e.g. print or break). Otherwise, Report-Type is treated as a function designator and, for each trace event, funcalled with 5 arguments: trace depth (a non-negative integer), a function name or a function object, a keyword (:enter, :exit or :non-local-exit), a stack frame, and a list of values (arguments or return values).

See also :print to enrich the trace output:

In addition to the usual printout, the result of evaluating Form is printed at the start of the function, at the end of the function, or both, according to the respective option. Multiple print options cause multiple values to be printed.

Example:

(defparameter *counter* 0)
(defun factorial (n)
  (incf *counter*)
  (if (plusp n)
    (* n (factorial (1- n)))
    1))
CL-USER> (trace factorial :print *counter*)
CL-USER> (factorial 3)
(FACTORIAL 3)
  0: (FACTORIAL 3)
  0: *COUNTER* = 0
    1: (FACTORIAL 2)
    1: *COUNTER* = 1
      2: (FACTORIAL 1)
      2: *COUNTER* = 2
        3: (FACTORIAL 0)
        3: *COUNTER* = 3
        3: FACTORIAL returned 1
      2: FACTORIAL returned 1
    1: FACTORIAL returned 2
  0: FACTORIAL returned 6
6

Closing remark

As they say:

it is expected that implementations extend TRACE with non-standard options.

and we didn’t list all available options or parameters, so you should check out your implementation’s documentation.

For more debugging tricks see the Cookbook and the links in it, the Malisper series have nice GIFs.

I am also preparing a short screencast to show what we can do inside the debugger, stay tuned!

vindarelLisp for the web: building one standalone binary with foreign libraries, templates and static assets

· 66 days ago

In our previous entry, we saw how to deploy our web application with Systemd, either from sources or with a binary. Now we’ll speak more about this building process to produce one binary that contains everything for our web app. We’ll tackle 3 issues:

  • ship foreign libraries alongside your binary, such as libreadline.so or libsqlite3.so,
  • include your Djula templates into your binary,
  • serve static files from your binary, without reading the filesystem,
  • and we’ll see my Gitlab CI recipe.

This allows us to create a binary that is really easy to deploy, to ship to users or to embed in an external process such as an Electron window (more on that later). Coming from Python and JS, what a dream!

Now, I want to thank the people that helped me figure these issues out and who wrote, fixed and extended these libraries: special shout-out to @mmontone for writing a Djula patch so quickly, @shinmera for Deploy and answering my many questions on Discord, @zulu.inoe for finding Hunchentoot answers, and everybody else on Discord for their help (@gavinok, @fstamour et all, sorry if I forgot) and all who dare asking questions to let everybody learn!

Table of Contents

  • Ship foreign libraries: the need of Deploy
  • Configuring Deploy: ignore libssl, verbosity
  • Remember: your program runs on another user’s machine.
  • Telling ASDF to calm down
  • Embed HTML Djula templates in your binary
  • Serve static assets
  • Gitlab CI
  • Closing remarks

Ship foreign libraries: the need of Deploy

Deploy is the way to go. If you used asdf:make in your .asd system definition to create a binary already, you just need to change two things:

;; my-project.asd
:defsystem-depends-on (:deploy)  ;; so you need to quickload deploy sometime before.
:build-operation "deploy-op"  ;; instead of program-op for asdf:make

and those two lines stay the same:

:build-pathname "my-application-name"
:entry-point "my-package:my-start-function"
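
For context, here is a minimal .asd sketch putting these pieces together; the system, package and file names are made up for the example:

;; my-project.asd, a minimal sketch with made-up names.
;; Remember to (ql:quickload "deploy") at some point before loading this file.
(defsystem "my-project"
  :defsystem-depends-on (:deploy)
  :build-operation "deploy-op"               ;; instead of program-op for asdf:make
  :build-pathname "my-application-name"      ;; name of the produced binary in bin/
  :entry-point "my-package:my-start-function"
  :components ((:file "main")))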

Here’s my Makefile target, where I quickload Deploy before loading my app and calling asdf:make:

LISP ?= sbcl

build:
	$(LISP)	--non-interactive \
		--eval '(ql:quickload "deploy")' \
		--load openbookstore.asd \
		--eval '(ql:quickload :openbookstore)' \
		--eval '(asdf:make :openbookstore)'

This creates a bin/ directory with our binary and the foreign libraries:

  -rwxr-xr-x  1 vindarel vindarel 130545752 Nov 25 18:48 openbookstore
  -rw-rw-r--  1 vindarel vindarel    294632 Aug  3 13:06 libreadline.so.7.0
  -rw-rw-r--  1 vindarel vindarel    319528 Aug 23 18:01 libreadline.so.8.0
  -rw-rw-r--  1 vindarel vindarel   1212216 Aug 24 16:42 libsqlite3.so.0.8.6
  -rw-rw-r--  1 vindarel vindarel    116960 Aug  3 13:06 libz.so.1.2.11

We need to deploy this directory.

When we start the binary, Deploy tells us what it is doing:

$ ./bin/openbookstore --datasource argentina lisp
 ==> Performing warm boot.
   -> Runtime directory is /home/vindarel/projets/openbookstore/openbookstore/bin/
   -> Resource directory is /home/vindarel/projets/openbookstore/openbookstore/bin/
 ==> Running boot hooks.
 ==> Reloading foreign libraries.
   -> Loading foreign library #<LIBRARY READLINE>.
   -> Loading foreign library #<LIBRARY SQLITE3-LIB>.
   -> Loading foreign library #<LIBRARY LIBSSL>.
   -> Loading foreign library #<LIBRARY LIBCRYPTO>.
 ==> Launching application.
OpenBookStore version 0.2-d2ac5f2
[...]
==> Epilogue.
==> Running quit hooks.

We can configure Deploy.

Configuring Deploy: ignore libssl, verbosity

You can silence the Deploy statuses by pushing :deploy-console into the *features* list, before calling asdf:make. Add this to the Makefile:

--eval '(push :deploy-console *features*)'

Now all seems well, you rsync your app to your server, run it and... you get a libssl error:

=> Deploying files to /home/vindarel/projets/myapp/commandes-collectivites/bin/
Unhandled SIMPLE-ERROR in thread #<SB-THREAD:THREAD "main thread" RUNNING
                                    {10007285B3}>:
  #<LIBRARY LIBCRYPTO> does not have a known shared library file path.

Nicolas (@shinmera) explained that we typically want to import libssl or libcrypto from the target system, that “deploying these libraries without them blowing up on Linux is hard”. To do this, we ask Deploy to not handle them. In the .asd:

#+linux (deploy:define-library cl+ssl::libssl :dont-deploy T)
#+linux (deploy:define-library cl+ssl::libcrypto :dont-deploy T)

As a consequence, you now need to quickload or require :cl+ssl before loading the .asd file, because of the cl+ssl::libssl/libcrypto symbols at the top level.

Nicolas built all this for his needs when working on his Trial game engine and on his Kandria game (soon on Steam!), check them out!

Remember: your program runs on another user’s machine.

By this I mean that if you are in the habit of using functions that locate a project’s source directory (asdf:system-source-directory or asdf:system-relative-pathname, for example when asking Hunchentoot to serve static assets, more on that below), then you need to rewrite them: your binary runs on another machine and doesn’t run from sources, so ASDF, Quicklisp and friends are not installed there, and your project(s) don’t have source directories; they are embedded in the binary.

Use a deploy:deployed-p runtime check if needed.
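
As a minimal sketch of such a runtime check (deploy:runtime-directory is my assumption for the accessor, check the Deploy documentation; deploy:deployed-p and the ASDF lookup appear in the examples below):

(defun project-directory ()
  "Directory to resolve files against, both from sources and from a deployed binary."
  (if (deploy:deployed-p)
      ;; Deployed binary: resolve against the directory the binary runs from
      ;; (deploy:runtime-directory is assumed here, see the Deploy docs).
      (deploy:runtime-directory)
      ;; Running from sources: the usual ASDF lookup.
      (asdf:system-source-directory :openbookstore)))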

Telling ASDF to calm down

Now, we are very happy and confident; what could possibly go wrong? We run our app once again on our naked VPS:

$ ./bin/myapp
 ==> Performing warm boot.
   -> Runtime directory is /home/debian/websites/app/myapp/bin/
   -> Resource directory is /home/debian/websites/app/myapp/bin/
 ==> Running boot hooks.
 ==> Reloading foreign libraries.
   -> Loading foreign library #<LIBRARY LIBSSL>.
   -> Loading foreign library #<LIBRARY LIBRT>.
   -> Loading foreign library #<LIBRARY LIBOSICAT>.
   -> Loading foreign library #<LIBRARY LIBMAGIC>.
 ==> Launching application.
WARNING:
   You are using ASDF version 3.3.4.15 from
   #P"/home/vindarel/common-lisp/asdf/asdf.asd" and have an older version of ASDF
   (and older than 2.27 at that) registered at
   #P"/home/vindarel/common-lisp/asdf/asdf.asd".

  [ long message ellided ]

;
; compilation unit aborted
;   caught 1 fatal ERROR condition
An error occured:
 Error while trying to load definition for system asdf from pathname
 /home/vindarel/common-lisp/asdf/asdf.asd:
    Couldn't load #P"/home/vindarel/common-lisp/asdf/asdf.asd": file does not
    exist. ==> Epilogue.
 ==> Running quit hooks.

Now ASDF wants to do what, update itself? Whatever it tries to do, it crashes. Yes, this happens on the target host, when we run the binary. Damn!

The solution is easy, but it had to be documented or google-able... Add this in your .asd to tell ASDF to not try to upgrade itself:

(deploy:define-hook (:deploy asdf) (directory)
  ;; Thanks again to Shinmera.
  (declare (ignorable directory))
  #+asdf (asdf:clear-source-registry)
  #+asdf (defun asdf:upgrade-asdf () nil))

By the way, if you want a one-liner to upgrade ASDF to 3.3.5 so that you can use package-local nicknames, check this lisp tip

Embed HTML Djula templates in your binary

Our binary now runs fine on our server: super great. But our app has another issue.

I like Djula templates, maintained by @mmontone, very much. It is a traditional, no-surprises HTML templating system, very similar to Django templates. It is easy to set up, it is very easy to create custom filters, and it has good error messages, both in the browser window and on the Lisp REPL. It’s one of the most downloaded Quicklisp libraries. Like Django templates, its philosophy is that it doesn’t allow a lot of computation in the templates: it encourages you to prepare your data on the back-end, so it is straightforward to process it in the templates. Sometimes this is limiting, so for more flexibility I’d look at Ten. It isn’t as much used and tested though (and I didn’t try it myself). If you want lispy templates, look at Spinneret. You can say goodbye to copy-pasting nice-looking HTML examples, though.

However, by using Spinneret you would not have faced the following issue:

Djula reads templates from your file system.

and when your application runs on someone else’s machine, this is undefined behaviour.

Until now, you had to deploy your web app from sources or, at least, you had to send the HTML files to the server. This was the case until I talked about this issue to Mariano. He sent a patch the day after.

Now, we can choose: by default, Djula reads the HTML files from disk: very well. But now, when we build our binary, we can ask Djula to build the templates in memory, so they are saved into the Lisp binary.

Normally, you only need to tell Djula where to find templates (“add a template directory”), then to compile them into a variable:

;; normal, file-system case.
(djula:add-template-directory (asdf:system-relative-pathname "webapp" "templates/"))
(defparameter +base.html+ (djula:compile-template* "base.html"))

;; and then, we render the template with (djula:render-template* nil +base.html+ ...)

This uses a filesystem-template-store. In addition, it recompiles templates on change. This can be turned off as we’ll see.

For our binary, we need to set Djula’s *current-store* to a memory-template-store AND we need to turn off the djula:*recompile-templates-on-change* setting. Then, we need to compile all the templates of our application, and save our binary.

I actually do all this at the top-level of my web.lisp file. By default I load the app for development, and if we find a custom “feature”, that is added by the “build” target of the Makefile, we compile templates in memory.

So, in order:

  1. in the “build” target of my Makefile, I push a new setting in the *features* list:

    --eval '(push :djula-binary *features*)'
    
  2. in my web.lisp, I check for this setting (with #+djula-binary) and I create either a filesystem template store or a memory store. This is written at the top level so it will be executed when we load the file. We can probably come up with better ergonomics.

This will be executed when I quickload my app in the build target of the Makefile, following the one above.

    --eval '(ql:quickload :openbookstore)'
(setf djula:*current-store*
      (let ((search-path (list (asdf:system-relative-pathname "openbookstore"
                                                              "src/web/templates/"))))
        #-djula-binary
        (progn
          (uiop:format! t "~&Openbookstore: compiling templates in file system store.~&")
          ;; By default, use a file-system store and reload templates during development.
          (setf djula:*recompile-templates-on-change* t)
          (make-instance 'djula:filesystem-template-store
		         :search-path search-path))

        ;; But, if this setting is set to NIL at load time, from the Makefile,
        ;; we are building a standalone binary: use an in-memory template store.
        ;;
        ;; We also need to NOT re-compile templates on change.
        #+djula-binary
        (progn
          (uiop:format! t "~&Openbookstore: compiling templates in MEMORY store.~&")
          (setf djula:*recompile-templates-on-change* nil)
          (make-instance 'djula:memory-template-store :search-path search-path))))
  3. compile all the templates. If you used a web framework (or started to develop your own), you might have used a shortcut: calling a render function which takes the name of a template as a string argument. I'm thinking about Caveman:
@route GET "/"
(defun index ()
  (render #P"index.tmpl"))

This string denotes the name of the template. For a standalone binary, we need to compile the template beforehand. That’s why Djula shows how to define and compile our templates:

(defparameter +base.html+ (djula:compile-template* "base.html"))

You need this line for every template of your application:

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;; Compile and load templates (as usual).
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(defparameter +base.html+ (djula:compile-template* "base.html"))
(defparameter +dashboard.html+ (djula:compile-template* "dashboard.html"))
(defparameter +search.html+ (djula:compile-template* "search.html"))
(defparameter +stock.html+ (djula:compile-template* "stock.html"))
(defparameter +card-page.html+ (djula:compile-template* "card-page.html"))
(defparameter +card-stock.html+ (djula:compile-template* "card-stock.html"))
(defparameter +card-create.html+ (djula:compile-template* "card-create.html"))
(defparameter +card-update.html+ (djula:compile-template* "card-edit.html"))
;; and so on.
  4. let asdf:make (and Deploy) save your binary. To try it, rename your templates/ directory to something else and run your app.

Addendum.

An .asd system definition can reference static files, so they are part of the build process and included into the delivered application. That’s how you can ship a README file:

  :components ((:static-file "README.md")
               ...)

I did the same for my templates. To be honest, I don’t recall if there is a solid reason, since they are compiled and saved into the image anyhow with the compilation step above. I show this anyways, it looks like a good practice to me:

            (:module "src/web/templates"
                        :components
                        ;; Order is important.
                        ((:static-file "login.html")
                         (:static-file "404.html")
                         (:static-file "base.html")
                         (:static-file "dashboard.html")
                         (:static-file "history.html")
                         (:static-file "search.html")
                         (:static-file "sell.html")
                         ...))

Now we need to programmatically get the list of files from this src/web/templates module and compile everything:

(let ((paths (djula:list-asdf-system-templates "bookshops" "src/web/templates")))
  (loop for path in paths
     do (uiop:format! t "~&Compiling template file: ~a...~&" path)
       (djula:compile-template* path))
  (values t :all-done))

This snippet and the general instructions are documented: https://mmontone.github.io/djula/djula/Deployment.html#Deployment

Feel free to show how you do it.

Bonus: here’s the list-asdf-system-templates function. We use asdf functions to get a system name, its components, their names...

(defun list-asdf-system-templates (asdf-system component)
  "List djula templates in ASDF-SYSTEM at COMPONENT.
  A list of template PATHNAMEs is returned."
  (let* ((sys (asdf:find-system asdf-system))
         (children (asdf:component-children sys))
         (module (or (find component children :key #'asdf:component-name :test #'equal)
                     (error "Component ~S not found in system named ~S.~&Available components are: ~S" component asdf-system (mapcar #'asdf:component-name children))))
         (alltemplates (remove-if-not (lambda (x) (typep x 'asdf:static-file))
                                      (asdf:module-components module))))
    (mapcar (lambda (it) (asdf:component-pathname it))
            alltemplates)))

Serve static assets

At that point in time, I figured out static assets would need to be worked on too. Fortunately, the people on Discord helped me and it was quickly solved.

This is how I served static assets with Hunchentoot. We use a “folder dispatcher and handler”:

https://common-lisp-libraries.readthedocs.io/hunchentoot/#create-folder-dispatcher-and-handler

(defun serve-static-assets ()
  "Serve static assets under the /src/static/ directory when called with the /static/ URL root."
  (push (hunchentoot:create-folder-dispatcher-and-handler
         "/static/" (merge-pathnames *default-static-directory*
                                     (asdf:system-source-directory :openbookstore) ;; => NOT src/
                                     ))
        hunchentoot:*dispatch-table*))

But when your app is on another machine... hence the need to ship the static assets into the standalone binary, and to ask Hunchentoot to serve them.

What we do is pretty obvious: we save our static files into a data structure, so that it is saved in the image. We use a couple of Lisp tricks though, so I comment the code below.

You’ll see that this time I hardcoded the file names and I didn’t declare them on the .asd file... clearly there is room for improvement, be my guest.

You can find the file I use for my application here.

;;; pre-web.lisp
;;; Parameters and functions required before loading web.lisp
;;;
;;; We read the content of our static files and put them into variables, so that they can be saved in the Lisp image.
;;; We define %serve-static-file to simply return their content (as string),
;;; and because we use with the #. reader macro, we need to put these functions in another file than web.lisp.

;;; Where my static files are:
(defparameter *default-static-directory* "src/static/"
  "The directory where to serve static assets from (STRING). If it starts with a slash, it is an absolute directory. Otherwise, it will be a subdirectory of where the system :abstock is installed.
  Static assets are reachable under the /static/ prefix.")

;;; We simply use a hash-table that maps a file name to its content, as a string.
;;; I love Serapeum's dict which is a readable hash-table, that's what I use:
(defparameter *static-files-content* (dict)
  "Content of our JS and CSS files.
  Hash-table with file name => content (string).")

;;; I read all my static files and I save them into the hash-table:
(defun %read-static-files-in-memory ()
  "Save the JS and CSS files in a variable in memory, so they can be saved at compile time."
  (loop for file in (list "openbookstore.js"
                          "card-page.js")
     with static-directory = (merge-pathnames *default-static-directory*
                                              (asdf:system-source-directory :bookshops))
     for content = (uiop:read-file-string (merge-pathnames file static-directory))
     do (setf (gethash file *static-files-content*) content)
     finally (return *static-files-content*)))

;; AT COMPILE TIME, read the content of our static files.
(%read-static-files-in-memory)

(defun %serve-static-file (path)
  "Return the content as a string of this static file.
  For standalone binaries delivery."
  ;; "alert('yes, compiled in pre-web.lisp');"  ;; JS snippet to check if this dispatcher works.
  (gethash path *static-files-content*))  ;; this would not work without the #. reader macro.

It is inside “web.lisp” that I set other rules for Hunchentoot. If it recognizes my static files, we simply return their content, as a string.

I don’t know if this works well with very big or with numerous files. But I-want-a-standalone-binary! For serious needs, I’d serve the static files with a proper server... I guess.

We use the #. reader macro to get our files’ content at compile time, this is why we needed to define our helper functions in another file, that is loaded before this one.

;;; web.lisp
(defun serve-static-assets-for-release ()
  "In a binary release, Hunchentoot can not serve files under the file system: we are on another machine and the files are not there.
  Hence we need to get the content of our static files into memory and give them to Hunchentoot."
  (push
   (hunchentoot:create-regex-dispatcher "/static/openbookstore\\.js"
                                        (lambda ()
                                          ;; Returning the result of the function calls silently fails. We need to return a string.
                                          ;; Here's the string, read at compile time.
                                          #.(%serve-static-file "openbookstore.js")))
   hunchentoot:*dispatch-table*)

  (push
   (hunchentoot:create-regex-dispatcher "/static/card-page\\.js"
                                        (lambda ()
                                          #.(%serve-static-file "card-page.js")))
   hunchentoot:*dispatch-table*))

Finally, it is inside my start-app function that I decide how to serve my static assets:

  (hunchentoot:start *server*)
  (if (deploy:deployed-p)
      ;; Binary release: don't serve files by reading them from disk.
      (serve-static-assets-for-release)
      ;; Normal setup, running from sources: serve static files as usual.
      (serve-static-assets))
  (uiop:format! t "~&Application started on port ~a.~&" port)

Find my web.lisp file here.

Gitlab CI

I build my binary on Gitlab.

image: clfoundation/sbcl

# Uncomment (together with the stage: keys below) to run the jobs in sequential
# stages (test, then build). Without stages, both jobs run in the default test stage.
# stages:
  # - test
  # - build

# We need to install some system dependencies,
# to clone libraries not in Quicklisp,
# and to update ASDF to >= 3.3.5 in order to use package-local nicknames.
before_script:
  - apt-get update -qy
  - apt-get install -y git-core sqlite3 tar
  # The image doesn't have Quicklisp installed by default.
  - QUICKLISP_ADD_TO_INIT_FILE=true /usr/local/bin/install-quicklisp
  # clone libraries not in Quicklisp or if we need the latest version.
  - make install
  # Upgrade ASDF (UIOP) to 3.3.5 because we want package-local-nicknames.
  - mkdir -p ~/common-lisp/asdf/
  - ( cd ~/common-lisp/ && wget https://asdf.common-lisp.dev/archives/asdf-3.3.5.tar.gz  && tar -xvf asdf-3.3.5.tar.gz && mv asdf-3.3.5 asdf )
  - echo "Content of ~/common-lisp/asdf/:" && ls ~/common-lisp/asdf/

qa:
  allow_failure: true
  # stage: test
  script:
    # QA tools:
    # install Comby:
    - apt-get install -y sudo
    # - bash <(curl -sL get.comby.dev)
    - bash <(curl -sL https://raw.githubusercontent.com/vindarel/comby/set-release-1.0.0/scripts/install.sh)
    # install Colisper for simple lisp checks:
    - git clone https://github.com/vindarel/colisper ~/colisper
    - chmod +x ~/colisper/colisper.sh
    - cd src/ && ~/colisper/colisper.sh

build:
  # stage: build
  script:
    - make build
  artifacts:
    name: "openbookstore"
    # Publish the bin/ directory (todo: rename, include version...)
    paths:
      - bin/
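
The make build target invoked above is not shown in this excerpt. As a rough sketch, assuming the binary is built with Deploy (which the deploy:deployed-p call earlier suggests) and that the .asd file declares the corresponding build operation, it boils down to running something like this in SBCL:

;; Sketch of what a "make build" target presumably runs. Assumptions: the
;; system's .asd uses Deploy, i.e. :defsystem-depends-on ("deploy"),
;; :build-operation "deploy-op", plus :build-pathname and :entry-point.
(ql:quickload :bookshops)   ; load the system and its dependencies
(asdf:make :bookshops)      ; run the build operation, producing bin/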

Closing remarks

I am so excited by the possibilities this brings.

I knew it was possible to do this in CL but I admit I thought it would be simpler... it turned out it is not a very crowded path. Now the steps are documented and google-able, here and everywhere else I could leave a comment, but it will be nice to come up with shorter and friendlier ready-to-use utilities. In a new web framework? And again, please share how you do all of this in the comments.

Having this standalone binary dramatically simplifies my deployment process. With a small web app, running from sources was easy (once you set up Quicklisp, and ASDF, and...). But with a growing application, that uses my local forks or code not yet pushed to GitHub, deployment was becoming tedious, and it is now greatly simplified. rsync, systemctl restart and done.

Its only limitation is that the target OS needs a libc at least as recent as the one on your build machine. So, back in August I could build on my machine and send the result to my VPS, but I upgraded my Debian-ish system and left the server with its (very) old Ubuntu version, so I can’t run a binary built on my machine there any more... I must resort to a CI pipeline that uses a matrix of Ubuntu versions, or build with Docker or a virtual machine. Or run from sources... Maybe soon I will build (truly) static executables: they are coming to SBCL.

I’ll repeat the good side: my friends (on Debian so far) can download the app, run bin/openbookstore and it works \o/

One last thing I’d like to do is to be able to double-click an executable to start the app, and to have one single file (and not an archive that extracts as a directory, although it is not too bad!). This looks possible with Makeself.

If you want to try the standalone binary on your GNU/Linux system (does it actually work on other distros?), download the artifacts of the latest passing build on the pipelines page, or grab it with this direct link. Un-zip, run bin/bookshops and go to the localhost address shown in the output (and also create an admin user as shown in the readme). You can leave me a comment here, on Gitter, on Discord or with a good ol’ email.

Stay tuned, OpenBookStore is still a work in progress but it will be a Common Lisp application flagship ;)

Nicolas HafnerRelease Date Announcement! - November Kandria Update

· 79 days ago
https://kandria.com/media/trailer%20cover.png

This update's an important one! The final release date, a new trailer, and some more announcements. Dang! Well, without further ado:

Release Date: 11th of January!

Alright, I'm happy to announce that we got a final release date for Kandria, which is, as the title says, Wednesday, the 11th of January 2023! To celebrate the release date announcement, and the large amounts of progress we've made polishing the game, please enjoy this brand new trailer as well:

The game will release on Steam, itch.io, and onto the website as a direct sale. All copies of the game will be DRM-free. I'm also excited to say that the game will release both in English and in German, translated by yours truly. We have enough time left over to do the localisation, so I really want to do it.

Shevalin Single

The full soundtrack of the game (which is excellent, by the way!) will be released together with the game in January. However, you can enjoy a single from the soundtrack right now:

https://hypeddit.com/q89kb4

This is Shevalin, the ending credits song, composed by our very talented Mikel Dale, and sung by the incredible Julie Elven. I hope you enjoy it!

User Feedback

Currently we're still polishing everything we can find and responding to user feedback. One of the most prominent things we noticed watching people play was that they were confused by the locked doors in the first region of the game. So hey, we finally added crashable doors:

https://filebox.tymoon.eu//file/TWpZeE5nPT0=

If you're part of the beta programme, please give the game a try! We still have some time to include more changes if you have any suggestions.

Development

Okey, last month we got a new roadmap, so let's look at that now:

  • Add even more detail tiles, foliage, and animal spawners throughout the world

  • Fine-tune the levelling, trade prices, and enemy difficulty

  • Create new key art

  • Create a new trailer

  • Spruce up some of the sound effects and music tracks

  • Create achievement icons and integrate them into the game

  • Translate everything (over 50'000 words) into German

  • Release the full game

  • Backport re-usable components into Trial

  • Separate out the assets from the main repository

  • Publish the source code under a permissive license

  • Fix a plethora of bugs and inconveniences in Alloy

  • Polish the editor and make it more stable

  • Release the editor

  • Develop a modding system for Trial

  • Finish the Forge build system at least to an extent where it's usable

  • Integrate the mod.io API with the modding system

  • Create a mod manager and browser UI

  • Document more parts of Trial and Kandria

  • Release an official modding kit

Alright! Until the 11th of January finally hits, please continue to share the steam page with your friends and other communities!

Quicklisp newsNovember 2022 Quicklisp dist update now available

· 87 days ago

 New projects:

  • 40ants-asdf-system — Provides a class for being used instead of asdf:package-inferred-system. — BSD
  • action-list — An implementation of action lists — zlib
  • adp — Add Documentation, Please. A documentation generator. — The Unlicense
  • anatevka — A distributed blossom algorithm for minimum-weight perfect matching. — MIT
  • cl-annot-revisit — Re-implementation of 'cl-annot', an annotation syntax library for Common Lisp. — WTFPL
  • cl-bloom-filter — Just another Common Lisp bloom filter implementation, enjoy it! — 
  • cl-cblas — A cl-autowrap generated wrapper around CBLAS which provides a C interface to the Basic Linear Algebra Subprograms. — MIT
  • cl-djula-svg — Handle SVGs in Djula Templates — MIT
  • cl-djula-tailwind — Tailwind classes for Djula templates — MIT
  • cl-facts — in-memory graph database — ISC
  • cl-glib — GLib binding for Common Lisp. — lgpl3
  • cl-gobject-introspection-wrapper — Wrap and call GObject Introspection FFI function in LISP style, based on cl-gobject-introspection. — lgpl3
  • cl-lessp — Generic order predicate — ISC
  • cl-oju — Common Lisp equivalents of core Clojure functions, especially sequence-related ones — MIT
  • cl-rollback — rollback functions — ISC
  • cl-sentry-client — Sentry client — MIT
  • cl-union-find — An implementation of UNION-FIND datastructure — LGPL
  • climc — A common lisp Instant Messaging client. — MIT License
  • clog-plotly — New CLOG System — BSD
  • clog-terminal — CLOG Terminal — BSD
  • de-mock-racy — Simplistic mocking library. — BSD simplified
  • distributions — Random numbers and distributions — MS-PL
  • dsm — Destructuring match — MIT
  • easy-macros — An easier way to write 90% of your macros — Apache License, Version 2.0
  • filesystem-utils — A collection of utilities for filesystem interaction. — zlib
  • filter-maker — CLIM program for letting users make filters out of predicates and keys. — BSD 2-Clause
  • fiveam-matchers — An extensible matchers library for FiveAM — Apache License, Version 2.0
  • infix-reader — A reader macro to allow for infix syntax with { ... } — Unlicence
  • input-event-codes — Port of all constants from input-event-codes.h from both Linux and FreeBSD — MIT
  • instance-tracking — Defines a class that tracks its instances — MIT
  • json-lib — A simple and relatively fast JSON parser and encoder — MIT
  • lineva — Linear evaluation macro system — GPLv3
  • luckless — Lockless data structures — zlib
  • more-cffi — Extension of the CFFI project. A facility to wrap C bindings and write documentation. — The Unlicense
  • music-spelling — Automatic pitch and rhythm spelling. — Apache 2.0
  • nail — library providing convenient functions for working with linalg, statistics and probability. — MIT
  • ndebug — A toolkit to construct interface-aware yet standard-compliant debugger hooks. — BSD 3-Clause
  • numericals — A high performance numerical computing library for Common Lisp (focus: basic math operations) — MIT
  • ospm — OS package manager interface — BSD 3-Clause
  • pero — Logging and text file operations library — MIT
  • pk-serialize — Serialization of Common Lisp data structures — MIT
  • statistics — A consolidated system of statistical functions — MS-PL
  • stepster — Web scraping library — MIT
  • testiere — Up Front Testing for DEFUN and DEFMETHOD — GPLv3
  • trivial-sanitize — clean html strings: "foo" → "foo" — LLGPL
  • tsqueue — Thread Safe Queue — MIT
  • typo — A portable type inference library for Common Lisp — MIT
  • wayflan — From-scratch Wayland client implementation — BSD 3-Clause
  • yah — Yet Another Heap — BSD-3

Updated projects: 3d-quaternions, 3d-vectors, abstract-arrays, acclimation, agnostic-lizard, alexandria-plus, architecture.builder-protocol, array-utils, assoc-utils, auto-restart, bdef, bit-smasher, blackbird, bp, bst, caveman, cephes.cl, cerberus, cffi, chunga, ci, ci-utils, cl+ssl, cl-all, cl-async, cl-autowrap, cl-bmas, cl-charms, cl-collider, cl-confidence, cl-cron, cl-data-structures, cl-form-types, cl-forms, cl-gamepad, cl-generator, cl-git, cl-gserver, cl-i18n, cl-info, cl-interpol, cl-isaac, cl-json-pointer, cl-kaputt, cl-las, cl-lib-helper, cl-liballegro, cl-liballegro-nuklear, cl-libuv, cl-lzlib, cl-marshal, cl-migratum, cl-mixed, cl-mock, cl-naive-store, cl-openal, cl-patterns, cl-pdf, cl-protobufs, cl-randist, cl-random-forest, cl-replica, cl-scsu, cl-semver, cl-sendgrid, cl-ses4, cl-steamworks, cl-str, cl-telegram-bot, cl-tls, cl-torrents, cl-unix-sockets, cl-utils, cl-wav, cl-webkit, cl-xkb, cl-yaml, cl-yxorp, cl-zstd, clack, clgplot, clingon, clj-re, clobber, clog, clog-ace, closer-mop, clsql, clss, cluffer, clunit2, clx, cmd, coleslaw, common-lisp-jupyter, commondoc-markdown, compiler-macro-notes, conduit-packages, consfigurator, croatoan, css-lite, cytoscape-clj, damn-fast-priority-queue, data-frame, data-lens, data-table, datamuse, defmain, dense-arrays, depot, dexador, dfio, dissect, doc, docparser, docs-builder, eclector, erudite, extensible-compound-types, fast-io, fiveam-asdf, flare, float-features, font-discovery, for, functional-trees, github-api-cl, gtirb-capstone, gtirb-functions, gtwiwtg, gute, harmony, http2, hunchensocket, hunchentoot-errors, imago, in-nomine, ironclad, jp-numeral, json-schema, jsonrpc, kekule-clj, lack, latter-day-paypal, lift, linear-programming, linear-programming-glpk, lisp-binary, lisp-critic, lisp-namespace, lisp-stat, lisp-unit2, literate-lisp, log4cl-extras, ltk, lunamech-matrix-api, markup, math, mcclim, mito, mnas-graph, mnas-package, multiposter, mutility, myway, neural-classifier, nfiles, nhooks, nkeymaps, nodgui, numcl, numerical-utilities, nyxt, omglib, one-more-re-nightmare, osc, osicat, overlord, papyrus, parachute, pathname-utils, periods, petalisp, pgloader, piping, plot, plump, polymorphic-functions, posix-shm, postmodern, pp-toml, query-fs, quick-patch, quri, random-state, replic, rutils, sel, select, serapeum, shasht, shop3, simple-neural-network, sketch, skippy-renderer, slite, sly, snakes, special-functions, speechless, spinneret, staple, stripe-against-the-modern-world, stumpwm, stumpwm-dynamic-float, tfeb-lisp-hax, tfeb-lisp-tools, trace-db, trivial-clipboard, trivial-extensible-sequences, trivial-file-size, trivial-mimes, uax-15, uiop, usocket, utilities.print-items, utilities.print-tree, vellum, vellum-binary, vellum-postmodern, vk, with-c-syntax, wuwei, xml-emitter, yason, zippy.

Removed projects: cl-json-template, cl-schedule, cl-splicing-macro, mito-attachment, trivial-timers.

To get this update, use (ql:update-dist "quicklisp")

I apologize for the long gap between this update and the last. I intend to get back on a monthly schedule.

Pascal CostanzaNew blog address

· 91 days ago

I am moving my blog away from blogspot / blogger. I am going to host my new blog at micro.blog. You can subscribe to an RSS feed of Lisp-related posts if you only care about those. micro.blog also acts as a social network and, although it is its own platform, is compatible with Mastodon. My Mastodon handle is @costanza@micro.blog.


Joe MarshallLisp: Second impression

· 103 days ago

My first impressions of Lisp were not good. I didn't see how navigating list structure was of any use. It seemed to be just a more cumbersome way of getting at the data.

In fact, my first impressions of computer science were not very positive. I enjoyed hobbyist coding on my TRS-80, but “real” programming was tedious and the proscriptions of “doing it the correct way” took the joy out of it. I explored other options for my major. Fate intervened. Over the next year I realized my calling was EECS, so in my sophomore year I took all the intro courses.

I had heard that the introductory computer science course used Lisp. That was disappointing, but I started hearing things about Lisp that made me think I should take a second look. I learned that Lisp was considered the premier language of MIT's Artificial Intelligence Laboratory. It was invented by hackers and designed to be a programmable programming language that was infinitely customizable. The lab had developed special computers that ran Lisp on the hardware. The OS was even written in Lisp. I wasn't looking forward to car and cdr'ing my way through endless cons cells, but I figured that there had to be more going on.

6.001 was unlike the prior computer courses I had taken. The course was not about how to instruct a computer to perform a task — the course was about expressing ideas as computation. To me, this seemed a much better way to approach computers. Professor Hal Abelson was co-lecturing the course. He said that he chose Lisp as the teaching language because it was easier to express ideas clearly.

Two things stood out to me in the first lecture. Professor Abelson showed the recursive and iterative versions of factorial. Of course I had seen recursive factorial from the earlier course and I knew how it worked. Clearly the iterative version must work the same way. (One of my early hangups about Lisp was all the recursion.) I was surprised to find out that the Lisp system would automatically detect tail-recursive cases and turn them into iteration. Evidently, the makers of Lisp had put some thought into this.
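
For illustration, here is roughly what those two versions look like, sketched in Common Lisp rather than the Scheme used in the course. (Unlike Scheme, the Common Lisp standard does not require tail-call elimination, though implementations such as SBCL perform it under their default compilation settings.)

(defun fact-recursive (n)
  ;; Not a tail call: the multiplication happens after the recursive call returns.
  (if (zerop n)
      1
      (* n (fact-recursive (- n 1)))))

(defun fact-iterative (n &optional (acc 1))
  ;; Tail call: the recursive call is the last thing done, so it can compile to a loop.
  (if (zerop n)
      acc
      (fact-iterative (- n 1) (* n acc))))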

Professor Abelson also demonstrated first class functions. He wrote a procedure that numerically approximates the derivative of a function. He then used that in a generic Newton's method solver. This is all straightforward stuff, but to a newbie like me, I thought it was amazing. In just a few lines of code we were doing simple calculus.

It was a mystery to me how first class functions were implemented, but I could see how they were used in the Newton's method solver. The code Professor Abelson wrote was clear and obvious. It captured the concept of derivatives and iterative improvement concisely, and it effectively computed answers to boot. I had to try it. Right after the lecture I went to lab and started typing examples at the REPL. Sure enough, they worked as advertised. A tail-recursive loop really didn't push any stack. It didn't leak even the tiniest bit of memory, no matter how long the loop. I tried the Newton's method solver to take cube roots. I passed the cube function to the derivative function and the result was a function that was numerically close to the derivative.
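
Sketched in Common Lisp rather than the course's Scheme (the names, step size, and tolerance here are mine, not the lecture's), the idea looks something like this:

(defun deriv (f &optional (dx 1d-6))
  "Return a function that numerically approximates the derivative of F."
  (lambda (x)
    (/ (- (funcall f (+ x dx)) (funcall f x)) dx)))

(defun newton-solve (f guess &key (tolerance 1d-9) (max-steps 100))
  "Find an approximate root of F by Newton's method, starting from GUESS."
  (let ((df (deriv f)))
    (loop for x = guess then (- x (/ (funcall f x) (funcall df x)))
          repeat max-steps
          when (< (abs (funcall f x)) tolerance)
            return x
          finally (return x))))

;; Cube root of 27, as the root of x^3 - 27:
(newton-solve (lambda (x) (- (* x x x) 27d0)) 2d0)   ; => approximately 3.0d0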

Now I was a bit more impressed with Lisp than I was earlier. I wasn't completely sold, but I could see some potential here. I wanted to learn a bit more before I dismissed it entirely. It took me several months to become a Lisp fan. The parentheses were a small hurdle — it took me a couple of weeks to get the hang of let forms. There was a week or two of navigating cons cells to wade through. But I eventually came to love the language.

My first impression of Lisp was poor. The uselessness of traversing random list structure was unmotivating. My second impression was better. Professor Abelson teaching directly from preprints of S&ICP might have had something to do with it.

Joe MarshallLisp: First Impressions

· 106 days ago

My first exposure to Lisp was in the summer of 1981. I was taking a summer school intro to computers. The course was taught on a PDP-11, and for the first few weeks we programmed in Macro-11 assembly language. For the last couple of weeks they introduced Lisp.

Frankly, I wasn't impressed.

The course started by talking about linked lists and how you could navigate them with car and cdr. We then went on to build more complicated structures like alists and plists. This was an old-fashioned lisp, so we used things like getprop and putprop to set symbol properties.

The subject matter wasn't difficult to understand (though chasing pointers around list structure is error prone). Since we had just been learning Macro-11, it was natural to play with linked list structure in assembly code. We wrote assembly code to look things up in a plist.

My impression was that Lisp was centered around manipulating these rather cumbersome data structures called cons cells. Linked lists of cons cells have obvious disadvantages when compared to arrays. This makes the language tedious to work with.

The summer school course was my first “real” college course in computers. I was put off. “Real” computing wasn't as much fun as I had hoped it would be. I definitely wouldn't be considering it as a major, let alone a career. I wasn't interested in Lisp at all.

to be continued

Tim BradshawPackage-local nicknames

· 111 days ago

What follows is an opinion. Do not under any circumstances read it. Other opinions are available (but wrong).

Package-local nicknames are an abomination. They should be burned with nuclear fire, and their ashes launched into space on a trajectory which will leave the Solar System.

The only reason why package-local nicknames matter is if you are writing a lot of code with lots of package-qualified names in it. If you are doing that then you are writing code which is hard to read: the names in your code are longer than they need to be and the first several characters of them are package name noise (people read, broadly from left to right). Imagine me:a la:version ge:of oe:English oe:where la:people wrote like that: it’s just horrible. If you are writing code which is hard to read you are writing bad code.

Instead you should do the work to construct a namespace in which the words you intend to use are directly present. This means constructing suitable packages: the files containing the package definitions are then almost the only place where package names occur, and are a minute fraction of the total code. Doing this is a good practice in itself because the package definition file is then a place which describes just what names your code needs, from where, and what names it provides. Things like conduit packages (shameless self-promotion) can help with this, which is why I wrote them: being able to say ‘this package exports the combination of the exports of these packages, except …’ or ‘this package exports just the following symbols from these packages’ in an explicit way is very useful.
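
As a concrete sketch of the kind of package definition this means (a plain DEFPACKAGE here rather than conduit packages, with made-up package and library choices purely for illustration), the definition file is the one place that says which names come from where, and the code in the package then uses them unqualified:

;; Sketch only: the package name and the imported libraries are illustrative.
(defpackage :org.example.http-client
  (:use :cl)
  (:import-from :drakma #:http-request)
  (:import-from :yason #:parse)
  (:export #:fetch-json))

(in-package :org.example.http-client)

(defun fetch-json (url)
  ;; HTTP-REQUEST and PARSE read cleanly, with no package prefixes in sight.
  (parse (http-request url :want-stream t)))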

If you are now rehearsing a litany of things that can go wrong with this approach in rare cases1, please don’t: this is not my first rodeo and, trust me, I know about these cases. Occasionally, the CL package system can make it hard or impossible to construct the namespace you need, with the key term here being being occasionally: people who give up because something is occasionally hard or impossible have what Erik Naggum famously called ‘one-bit brains’2: the answer is to get more bits for your brain.

Once you write code like this then the only place package-local nicknames can matter is, perhaps, the package definition file. And the only reason they can matter there is because people think that picking a name like ‘XML’ or ‘RPC’ or ‘SQL’ for their packages is a good idea. When people in the programming section of my hollowed-out-volcano lair do this they are … well, I will not say, but my sharks are well-fed and those things on spikes surrounding the crater are indeed their heads.

People should use long, unique names for packages. Java, astonishingly, got this right: use domains in big-endian order (org.tfeb.conduit-packages, org.tfeb.hax.metatronic). Do not use short nicknames. Never use names without at least one dot, which should be reserved for implementations and perhaps KMP-style substandards. Names will now not clash. Names will be longer and require more typing, but this will not matter because the only place package names are referred to are in package definition files and in in-package forms, which are a minute fraction of your code.

I have no idea where or when the awful plague of using package-qualified names in code arose: it’s not something people used to do, but it seems to happen really a lot now. I think it may be because people also tend to do this in Python and other dotty languages, although, significantly, in Python you never actually need to do this if you bother, once again, to actually go to the work of constructing the namespace you want: rather than the awful

import sys

... sys.argv ...

...

sys.exit(...)

you can simply say

from sys import argv, exit

... argv ...

exit(...)

and now the very top of your module lets anyone reading it know exactly what functionality you are importing and from where it comes.

It may also be because the whole constructing namespaces thing is a bit hard. Yes, it is indeed a bit hard, but designing programs, of which it is a small but critical part, is a bit hard.

OK, enough.


If, after reading the above, you think you should mail me about how wrong it all is and explain some detail of the CL package system to me: don’t, I do not want to hear from you. Really, I don’t.


  1. in particular, if your argument is that someone has used, for instance, the name set in some package to mean, for instance, a set in the sense it is used in maths, and that this clashes with cl:set and perhaps some other packages, don’t. If you are writing a program and you think, ‘I know, I’ll use a symbol with the same name as a symbol exported from CL to mean something else’ in a context where users of your code also might want to use the symbol exported by CL (which in the case of cl:set is ‘almost never’, of course), then my shark pool is just over here: please throw yourself in. 

  2. Curiously, I think that quote was about Scheme, which I am sure Erik hated. But, for instance, Racket’s module system lets you do just the things which are hard in the package system: renaming things on import, for instance. 

Nicolas HafnerMapping the Road to Release - October Kandria Update

· 115 days ago
https://filebox.tymoon.eu//file/TWpVNU1nPT0=

So, we've been in beta for over a month and gotten lots of useful feedback. Thanks a bunch! We'll continue to listen eagerly for feedback as we move towards the end of the development.

In case you missed the Kickstarter but would still like to support us ahead of the release in January, you can do so by preordering Kandria or its soundtrack through Backerkit. Unlike Kickstarter, this also accepts PayPal, if you don't have access to a credit card.

HeroFest

https://filebox.tymoon.eu//file/TWpVNU13PT0=

Another convention!

We'll be at the HeroFest in Bern, October 14-16! You'll be able to play the latest Kandria release there and chat about whatever. If you're in the area, please stop on by and check out the rest of the Swiss indie games presenting there as well.

Soundtrack

The release of the soundtrack has been delayed by a bit, as our composer got swamped with work, and we're still trying to hash out the complicated stuff behind royalties and all. However, a single of the soundtrack should be out soon. Please keep an ear out for that!

Steam Deck Support

Gaben finally delivered a Steam Deck to me, and I've tested Kandria on it. There were a couple of minor fixes to make it more usable, but now it seems to run flawlessly on the deck! Nice!

https://pbs.twimg.com/media/Fd1mVMkXwAUexcc?format=jpg&name=large

The Deck is a wonderful piece of tech, and I've been enjoying playing other games on it as well. While I still would like for Kandria to run on the Switch as well, this is the next best thing for now.

Development

We're still rounding out the last bits of polish and bugs, and focusing on playing through the game more to ensure the balance and progression also work well. Development of the core game will officially end at the end of November, after which focus will shift towards adding the stretch goals we promised during the Kickstarter, preparing promotional materials, and so on.

To that end, here's a new rough roadmap of all the stuff left to do including post-release updates:

  • Add even more detail tiles, foliage, and animal spawners throughout the world

  • Fine-tune the levelling, trade prices, and enemy difficulty

  • Spruce up some of the sound effects and music tracks

  • Create achievement icons and integrate them into the game

  • Release the full game

  • Backport re-usable components into Trial

  • Separate out the assets from the main repository

  • Publish the source code under a permissive license

  • Fix a plethora of bugs and inconveniences in Alloy

  • Polish the editor and make it more stable

  • Release the editor

  • Develop a modding system for Trial

  • Finish the Forge build system at least to an extent where it's usable

  • Integrate the mod.io API with the modding system

  • Create a mod manager and browser UI

  • Document more parts of Trial and Kandria

  • Release an official modding kit

We're also planning some cool events to celebrate the three big release milestones, though more info about that as we actually get closer to them.

For now, please continue to share the steam page with friends and other groups. It would help a lot to ensure that we can continue to make games in the future!


For older items, see the Planet Lisp Archives.


Last updated: 2023-01-23 18:00