vindarel: I published 17 videos about Common Lisp macros - learn Lisp with a code-first tutorial

· 10 days ago

For those who don’t know and who didn’t see the banner :D I am creating a Common Lisp course on the Udemy platform (with complementary videos on Youtube). I wanted to do something different from, and complementary to, writing on the Cookbook.

I worked on new videos this summer and I just finished editing the subtitles. I have added 17 videos (worth 1h30+ of code-driven content) about Common Lisp macros!

We cover a lot of content: quote, backquote and comma, “,@”, comparison with C macros, comparison with functions, GENSYM and variable capture, useful patterns (call-with...), compile-time computing, read-time evaluation... (full summary below)

New: 17 videos to learn Lisp macros

I recorded the last one, about the MACROSTEP tool, inside the Lem editor. It’s short; have a look at what this new editor looks like. (I’m very excited about it. Did I mention I started developing a Magit-like plugin for it?)

Who is this course for?

The whole course is for beginners in Lisp, although not total beginners in programming. This chapter is, logically, a bit more difficult than the others. If you haven’t written small Common Lisp programs yet, be gentle with yourself and stop if you don’t understand (you can ask questions in the Udemy forum, of course). In that case I would advise watching the introductory one, the comparison with C macros, the video on QUOTE, the “functions VS macros” one, and then carrying on at your own pace. Be sure to work through the previous chapters before tackling this one.

Content

This is what we cover on the topic of macros. For a full overview of the course, what I plan to do next (if you subscribe now, you’ll get new content for the same price), and feedback from others, see its GitHub project page (there are six more chapters, including getting started, functions, iteration, condition handling...).

Table of Contents

7.1 A quick intro (FREE PREVIEW)

Macros do not evaluate their arguments and expand to new code at compile time. What does that mean? A quick intro before diving deeper.

7.2. A comparison with C macros (FREE PREVIEW)

Lisp macros do NOT manipulate text, unlike C ones. Text substitution leads to many unnecessary problems. We take a fun tour of a need that is trivial to state yet complicated to meet in C, and easily handled in Common Lisp.

7.3 QUOTE (FREE PREVIEW)

QUOTE does not evaluate its argument.

What we see: how to use QUOTE outside macros. Data takes the shape of code. We pair it with eval and we go full circle. We introduce the need to extrapolate values inside a quote.
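
For instance, a minimal REPL sketch of that full circle (illustrative, not the course’s exact code):

CL-USER> '(+ 1 2)         ; 'x is shorthand for (quote x): no evaluation
(+ 1 2)
CL-USER> (eval '(+ 1 2))  ; EVAL brings the data back to life as code
3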

7.4 Backquote and comma

What we see: how we extrapolate variable values. How they can help create data structures. Real world examples.
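
For example, a backquoted template with commas extrapolating values (a minimal illustration):

CL-USER> (let ((x 10))
           `(a b ,x ,(* x 2)))
(A B 10 20)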

7.5 How to spot you are using a macro

Four tips to recognize if you are using a function or a macro, and why it matters.

7.6 Functions vs macros

Macros do NOT replace functions!

What we see: they are not higher-level functions. The subtle but logical need to re-compile functions that use macros.

Introducing MACROEXPAND.

Keeping compile-time computing in mind (more on that later). A look at a function’s disassembly. So... you might not need a macro yet ;)

7.7 COMMA SPLICE ,@: the third most important macro mechanism

What we see: when to use it, understanding the common error messages, passing body forms to our macro. Our first macro model.
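
A small sketch of the difference between comma and comma-splice (illustrative, not the course’s exact code):

CL-USER> (let ((body '((print 1) (print 2))))
           `(progn ,body))    ; comma inserts the list as one element
(PROGN ((PRINT 1) (PRINT 2)))
CL-USER> (let ((body '((print 1) (print 2))))
           `(progn ,@body))   ; ,@ splices the elements in
(PROGN (PRINT 1) (PRINT 2))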

7.8 &body and other macro parameters. Our second macro model.

What we see: how &body differs from &rest. Macro parameters: lots of possibilities, but some conventions carry meaning. Our own DOLIST macro. Our second macro model you can follow.
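
As a sketch of the shape such a macro can take (a simplified, hypothetical model; not necessarily the course’s version):

;; &body behaves like &rest but tells editors to indent calls as a body.
(defmacro my-dolist ((var list) &body body)
  "Evaluate BODY with VAR bound to each element of LIST in turn."
  `(mapc (lambda (,var) ,@body) ,list))

;; (my-dolist (x '(1 2 3)) (print x)) prints 1, 2 and 3.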

7.9 Putting this together: with-echo macro. Macroexpand in use.

We build our first macro with backquote and comma-splice, even a quote followed by a comma. We use macroexpand.

7.10 GENSYM: the simple fix to the most dangerous macro gotcha

What we see: what is variable capture and how to avoid it. Writing our own REPEAT macro. A little discussion about Common Lisp VS Scheme macros. GENSYM can be used outside macros too.
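
A sketch of the gotcha and the fix, with a REPEAT-style macro (simplified; names are illustrative):

;; Naive version: the literal variable I can capture a user's own I.
(defmacro repeat-buggy (n &body body)
  `(dotimes (i ,n) ,@body))

;; Fixed version: GENSYM returns a fresh symbol that cannot clash.
(defmacro repeat (n &body body)
  (let ((counter (gensym "COUNTER")))
    `(dotimes (,counter ,n) ,@body)))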

At this point you know enough to write all common macros. See the exercises for easy and not-so-easy ones.

7.11 CALL-WITH pattern: simplifying macros

We saw there can be subtle pitfalls when we write a macro. This pattern lets us offload most of the work to a function, which has many advantages. We demo it with our REPEAT macro.
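
The shape of the pattern, as a hedged sketch (CALL-WITH-REPEAT is a hypothetical helper name):

;; The function does the real work; it can be redefined, traced and
;; debugged like any ordinary function.
(defun call-with-repeat (n thunk)
  (dotimes (i n)
    (funcall thunk)))

;; The macro becomes a thin layer of syntax over the function.
(defmacro repeat (n &body body)
  `(call-with-repeat ,n (lambda () ,@body)))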

7.12 Compile time computing

When writing macros, we have the full power of Common Lisp at compile time. This gives great tools to the developer: early type errors and warnings, faster runtime.

What we see: a simple example, writing a scientific macro that converts units at compile time, existing libraries for that, and an introduction to dispatching macro characters and reader macros.
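
The core idea, as a tiny hypothetical sketch (KM->M is an invented example macro): when the argument is a literal number, the arithmetic runs at macro-expansion time and only a constant remains in the compiled code.

(defmacro km->m (kilometers)
  (if (numberp kilometers)
      (* kilometers 1000)       ; compute now: (km->m 3) expands to 3000
      `(* ,kilometers 1000)))   ; fall back to runtime for variables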

7.13 Lists VS AST

What we see: other languages don’t have macros but can manipulate Abstract Syntax Trees. Code as lists of symbols is not the same thing; we would need a third-party library to manipulate a proper Lisp AST. This doesn’t prevent us from developing crazy macros though: see this library that adds Haskell-like type checking on top of Common Lisp, in pure CL macros.

7.14 Two example macros for compile-time computing

defstar lets us specify the types of a function’s arguments; Serapeum’s ecase-of does exhaustiveness type checking. At compile time, of course.

7.15 SYMBOL-MACRO

A symbol macro is not your everyday Lisp development tool, but it expands your toolbelt. Again.
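
For the curious, a minimal sketch (NEXT-ID and *ID-COUNTER* are invented names):

(defvar *id-counter* 0)

;; A symbol macro makes a bare symbol expand into a form.
(define-symbol-macro next-id (incf *id-counter*))

;; Evaluating NEXT-ID anywhere now expands to (incf *id-counter*).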

7.16 Read-time evaluation with #.

Macros occur at compile time. But Common Lisp blurs the lines between read time, compile time and run time. This lets us execute code at READ time.
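
A tiny sketch: with #., the reader itself evaluates the following form.

;; The timestamp is computed when this source is READ, not at run time.
;; (*READ-TIME* is an invented name for illustration.)
(defvar *read-time* #.(get-universal-time))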

7.17 EDITOR TOOL: macrostep (FREE PREVIEW, Lem demo)

Macrostep is an editor extension that helps understand our macro expansions. It is only available in Sly and Lem. We demo with the Lem editor.

Thanks

Thanks for your support, it does make a difference (I am self-employed, I don’t earn millions and I’d love to spend *even more time* on CL resources and projects). If you want to learn what I do for the Lisp community and why you should buy my course, read more on Github.

My complementary Lisp videos are on Youtube.

Don’t hesitate to share the link with a friend or a colleague :) Thanks, and happy lisping.

A demo about web development has been recorded and is coming.


ps: we just got a Dockerfile for CIEL, which makes it easier to test, thanks to a “student” of my course. Thanks, @themarcelor. It will be on Dockerhub in due time.

The Udemy course by @vindarel is the best introductory material for a fast and practical intro to Common Lisp.

(thanks <3)

A wonderful course for someone with cursory knowledge of lisp. I’ve dipped my feet many times now, but always struggled to wrap my head around everything. This course really helped give me greater confidence in how to start a project. I really enjoyed the focus on having an executable early. The Lisp-2 reveal was beautiful and made me finally understand the difference. Thanks a lot!

Simon, August of 2023. (thanks <3 )

Nicolas Hafner: I've opened up a Patreon - Confession 93

· 31 days ago

I've been debating opening up a Patreon for many years and I've always been hesitant about accepting donations from people, but I think it's finally time to change my mind on that!

Why make a Patreon now?

I've been working full time on Kandria and associated projects since 2020, and continue to do so today. All of the work that I've done as part of that has been released as open source software, including Kandria itself as well as the engine it runs on, Trial.

Since the release, I've mostly focused on support and the pre-pre-production of my next title, which primarily involves adding new features to Trial that are necessary to create a full-3D game. I can't yet announce much about the game itself, other than that it is a character action game, meaning it features third-person hack and slash focused on slick and satisfying combat.

Unfortunately the release of Kandria has not gone as well as I would have liked, and revenue from it is minimal. Most months I receive only about 200 bucks from Steam, which as you might imagine is not enough to sustain myself full-time, let alone any other artists that are necessary to produce another high-quality game.

So I am finally opening myself up for continued public funding. I know people have wanted to support me in the past, and I've always been hesitant about accepting that. But now with the financial pressure increasing, I think it's finally time to let people that want to be generous, actually be generous!

What can I expect from this?

Aside from simply funding my existence and allowing me to continue to produce high-quality open source libraries and applications, art, writing, and games, I'm also committing to a couple of extra features:

  • Every month I'll produce a patron-only update about what's currently happening with the development. This will also include development insight and details that won't be published elsewhere.

  • I'll also commit to a monthly art stream where I doodle around, and higher-tier patrons can request sketches from me.

  • Any patron will be able to submit their name or a name of their choosing for inclusion in the credits of any game in production during their backing.

  • Higher-tier patrons will also receive access to early game prototypes and demos.

  • You'll be able to directly ask me questions in the comments of the monthly updates and in the stream chat.

  • If you use Discord, you'll receive access to a special role and patron-exclusive chatroom on my Discord server.

  • An eternal feeling of debt and gratitude towards you.

What now?

Now I'm going to go back to working on Trial and the unannounced game. In the meantime, please consider backing me. There should already be a monthly update about the state of things out that's only accessible to patrons. In any case, thank you very much for your continued support, and I hope I'll be able to see you among the backer list soon!

Gábor Melis: On Multifaceted Development and the Role of Documentation

· 40 days ago

Catchy title, innit? I came up with it while trying to name the development style PAX enables. I wanted something vaguely self-explanatory in a straight out of a marketing department kind of way, with tendrils right into your unconscious. Documentation-driven development sounded just the thing, but it's already taken. Luckily, I came to realize that neither documentation nor any other single thing should drive development. Less luckily for the philosophically disinclined, this epiphany unleashed my inner Richard P. Gabriel. I reckon if there is a point to what follows, it's abstract enough to make it hard to tell.

In programming, there is always a formalization step involved: we must go from idea to code. Very rarely, we have a formal definition of the problem, but apart from purely theoretical exercises, formalization always involves a jump of faith. It's like math word problems: the translation from natural to formal language is out of the scope of formal methods.

We strive to shorten the jump by looking at the solution carefully from different angles (code, docs, specs), and by poking at it and observing its behaviour (tests, logs, input-output, debugging). These facets (descriptive or behavioural) of the solution are redundant with the code and each other. This redundancy is our main tool to shorten the jump. Ultimately, some faith will still be required, but the hope is that if a thing looks good from several angles and behaves well, then it's likely to be a good solution. Programming is empirical.

Tests, on the abstract level, have the same primary job as any other facet: constrain the solution by introducing redundancy. If automatic, they have useful properties: 1. they are cheap to run; 2. inconsistencies between code and tests are found automatically; 3. they exert pressure to keep the code easily testable (when tracking test coverage); 4. sometimes it's easiest to start with writing the tests. On the other hand, tests incur a maintenance cost (often small compared to the gains).

Unlike tests, documentation is mostly in natural language. This has the following considerable disadvantages: documentation is expensive to write and to check (must be read and compared to the implementation, which involves humans for a short while longer), consequently, it easily diverges from the code. It seems like the wrong kind of redundancy. On the positive side, 1. it is valuable for users (e.g. user manual) and also for the programmer to understand the intention; 2. it encourages easily explainable designs; 3. sometimes it's easiest to start with writing the documentation.

Like tests or any other facet, documentation is not always needed, it can drive the development process, or it can lag. But it is a tremendously useful tool to encourage clean design and keep the code comprehensible.

Writing and maintaining good documentation is costly, but the cost can vary greatly. Knuth's Literate Programming took the very opinionated stance of treating documentation of internals as the primary product, which is a great fit for certain types of problems. PAX is much more mellow. It does not require a complete overhaul of the development process or tooling; giving up interactive development would be too high a price. PAX is chiefly about reducing the distance between code and its documentation, so that they can be changed together. By doing so, it reduces the maintenance cost, improves both the design and the documentation, while making the code more comprehensible.

In summary,

  • Multiple, redundant facets are needed to have confidence in a solution.

  • Maintaining them has a cost.

  • This cost shapes the solution.

  • There is no universally good set of facets.

  • There need not be a primary facet to drive development.

  • We mentally switch between facets frequently.

  • Our tools should make working with multiple facets easier.

This is what PAX tries to do for documentation and code.
And that's the best 4KiB name I could come up with.

Marco Antoniotti: Documenting/debugging HEΛP

· 40 days ago

Hello

just a quick summer update about some documentation cleanup and some checks on debugging HEΛP.

Have a look at the (small) changes and keep sending feedback.



Cheers

Gábor Melis: Try in Emacs

· 43 days ago

Try, my test anti-framework, has just got light Emacs integration. Consider the following test:

(deftest test-foo ()
  (is (equal "xxx" 5))
  (is (equal 7 7))
  (with-failure-expected (t)
    (is (same-set-p '(1) '(2)))))

The test can be run from Lisp with (test-foo) (interactive debugging) or (try 'test-foo) (non-interactive), but now there is a third option: run it from Emacs and get a couple of conveniences in return. In particular, with M-x mgl-try then entering test-foo, a new buffer pops up with the test output, which is font-locked based on the type of the outcome. The buffer also has outline minor mode, which matches the hierarchical structure of the output.

[screenshot: try-emacs]

The buffer's major mode is Lisp, so M-. and all the usual key bindings work. In addition, a couple of keys bound to navigation commands are available. See the documentation for the details. Note that Quicklisp has an older version of Try that does not have Emacs integration, so you'll need to use https://github.com/melisgl/try until the next Quicklisp release.

Joe Marshall: Off-sides Penalty

· 51 days ago

Many years ago I was under the delusion that if Lisp were more “normal looking” it would be adopted more readily. I thought that maybe inferring the block structure from the indentation (the “off-sides rule”) would make Lisp easier to read. It does, sort of. It seems to make smaller functions easier to read, but it seems to make it harder to read large functions — it's too easy to forget how far you are indented if there is a lot of vertical distance.

I was feeling pretty good about this idea until I tried to write a macro. A macro’s implementation function has block structure, but so does the macro’s replacement text. It becomes ambiguous whether the indentation is indicating block boundaries in the macro body or in its expansion.

A decent macro needs a templating system. Lisp has backquote (aka quasiquote). But notice that unquoting comes in both a splicing and non-splicing form. A macro that used the off-sides rule would need templating that also had indenting and non-indenting unquoting forms. Trying to figure out the right combination of unquoting would be a nightmare.

The off-sides rule doesn’t work for macros that have non-standard indentation. Consider if you wanted to write a macro similar to unwind-protect or try…finally. Or if you want to have a macro that expands into just the finally clause.

It became clear to me that there were going to be no simple rules. It would be hard to design, hard to understand, and hard to use. Even if you find parentheses annoying, they are relatively simple to understand and simple to use, even in complicated situations. This isn’t to say that you couldn’t cobble together a macro system that used the off-sides rule; it would just be much more complicated and klunkier than Lisp’s.

Joe Marshall: The Garden Path

· 61 days ago

Follow me along this garden path (based on true events).

We have a nifty program and we want it to be flexible, so it has a config file. We make up some sort of syntax that indicates key/value pairs. Maybe we’re hipsters and use YAML. Life is good.

But we find that we need to configure something dynamically, say based on the value of an environment variable. So we add some escape syntax to the config file to indicate that a value is a variable rather than a literal. But sometimes the string needs a little work done to it, so we add some string manipulation features to the escape syntax.

And when we deploy the program, we find that we want to conditionalize part of the configuration based on the deployment, so we add a conditional syntax to our config language. But conditionals are predicated on boolean values, so we add booleans to our config syntax. Or maybe we make strings do double duty. Of course we need the basic boolean operators, too.

But there’s a lot of duplication across our configurations, so we add the ability to indirectly refer to other config files. That helps to some extent, but there’s a lot of stuff that is almost duplicated, except for a little variation. So we add a way to make a configuration template. Templating needs variables and quoting, so we invent a syntax for those as well.

We’re building a computer language by accident, and without a clear plan it is going to go poorly. Are there data types (aside from strings)? Is there a coherent type system? Are the variables lexically scoped? Is it call-by-name or call-by-value? Is it recursive? Does it have first class (or even second class) procedures? Did we get nested escaping right? How about quoted nested escaping? And good grief our config language is in YAML!

If we had some forethought, we would have realized that we were designing a language and we would have put the effort into making it a good one. If we’re lazy, we’d just pick an existing good language. Like Lisp.

Gábor Melis: DRef and PAX v0.3

· 62 days ago

DEFSECTION needs to refer to definitions that do not create a first-class object (e.g. stuff like (*DOCUMENT-LINK-TO-HYPERSPEC* VARIABLE)), and since its original release in 2014, a substantial part of PAX dealt with locatives and references, which reify definitions. This release finally factors that code out into a library called DRef, allowing PAX to focus on documentation. Being very young, DRef lives under adult supervision, in a subdirectory of the PAX repository.

DREF> (definitions 'pax:document-object*)
(#<DREF DOCUMENT-OBJECT* GENERIC-FUNCTION>
 #<DREF DOCUMENT-OBJECT* (METHOD NIL (MGL-PAX-BLOG::CATEGORY T))>
 #<DREF DOCUMENT-OBJECT* (METHOD NIL (UNKNOWN-DREF T))>
 #<DREF DOCUMENT-OBJECT* (METHOD NIL (MGL-PAX::CLHS-DREF T))>
 #<DREF DOCUMENT-OBJECT* (METHOD NIL (MGL-PAX::INCLUDE-DREF T))>
 #<DREF DOCUMENT-OBJECT* (METHOD NIL (MGL-PAX::GO-DREF T))>
 #<DREF DOCUMENT-OBJECT* (METHOD NIL (GLOSSARY-TERM T))>
 #<DREF DOCUMENT-OBJECT* (METHOD NIL (SECTION T))>
 #<DREF DOCUMENT-OBJECT* (METHOD NIL (ASDF-SYSTEM-DREF T))>
 #<DREF DOCUMENT-OBJECT* (METHOD NIL (CLASS-DREF T))>
 #<DREF DOCUMENT-OBJECT* (METHOD NIL (STRUCTURE-ACCESSOR-DREF T))>
 #<DREF DOCUMENT-OBJECT* (METHOD NIL (WRITER-DREF T))>
 #<DREF DOCUMENT-OBJECT* (METHOD NIL (READER-DREF T))>
 #<DREF DOCUMENT-OBJECT* (METHOD NIL (ACCESSOR-DREF T))>
 #<DREF DOCUMENT-OBJECT* (METHOD NIL (METHOD-DREF T))>
 #<DREF DOCUMENT-OBJECT* (METHOD NIL (SETF-DREF T))>
 #<DREF DOCUMENT-OBJECT* (METHOD NIL (VARIABLE-DREF T))>
 #<DREF DOCUMENT-OBJECT* (METHOD NIL (DREF T))>
 #<DREF DOCUMENT-OBJECT* (METHOD NIL (T T))>)

DREF> (dref 'pax:document-object* '(method nil (class-dref t)))
#<DREF DOCUMENT-OBJECT* (METHOD NIL (CLASS-DREF T))>

DREF> (arglist *)
(DREF STREAM)
:ORDINARY

DREF> (docstring **)
"For definitions with a CLASS locative, the arglist printed is the
  list of immediate superclasses with STANDARD-OBJECT, CONDITION and
  non-exported symbols omitted."

DREF> (pax:document ***)
- [method] DOCUMENT-OBJECT* (DREF CLASS-DREF) STREAM

    For definitions with a CLASS locative, the arglist printed is the
    list of immediate superclasses with STANDARD-OBJECT, CONDITION and
    non-exported symbols omitted.

During the refactoring, the references API was cleaned up. How to write extensions has seen lots of changes (see Extending DRef and Extending PAX), but normal use is the same. DRef is similar to Shinmera's Definitions library but is more tailored to the needs of PAX.


Also in this release:

  • Apropos got a detailed view feature, which includes the docstrings of all listed definitions, not just the reference itself. This is very useful for getting an overview of a package.

    [screenshot: alexandria-apropos]

  • The detailed view often has to render docstrings which have not been written with PAX in mind and are not proper markdown. These docstrings are now sanitized aggressively in an unavoidably heuristic manner.

  • There are now two supported CSS styles for HTML output: :DEFAULT with sans-serif, and :CHARTER with Charter as the main font (which is bundled). The :CHARTER style is used in the linked PAX World documentation on this blog. See PAX:*BROWSE-HTML-STYLE* and PAX:UPDATE-ASDF-SYSTEM-HTML-DOCS.

  • As usual, quite a few bug fixes and some optimizations also found their way into this release.

Joe Marshall: Playing with AI

· 79 days ago

Write a polemic about how omitting conditional expressions from a language is lazy and stupid.

Conditional expressions are a fundamental feature of any programming language that claims to be expressive, concise and elegant. They allow programmers to write complex logic in a single line, without the need for verbose and repetitive if-else statements. They also enable functional programming paradigms, such as pattern matching and recursion, that are essential for solving many problems efficiently and elegantly.
Omitting conditional expressions from a language is a sign of laziness and stupidity on the part of the language designers. It shows that they do not care about the needs and preferences of their users, who have to write more code, introduce more bugs, and suffer more frustration. It also shows that they do not understand the principles and benefits of functional programming, which have been proven by decades of research and practice. They are stuck in an imperative mindset, where everything has to be done with side effects and mutations.
A language without conditional expressions is a crippled language, that forces programmers to use hacks and workarounds to achieve what should be simple and natural. It is a language that discourages creativity, elegance and readability. It is a language that should be avoided by anyone who values their time, sanity and reputation.
Some examples of programming languages that don't have conditional expressions are:
  • CoffeeScript
  • Go
  • MATLAB
  • Pascal
  • Rust

This isn't bad, but I wanted more vitriol. I asked, and I got a screed that could have come out of comp.lang.functional. Unfortunately, it thought better of it and erased its own output before I could snapshot it.

Joe Marshall: Fails Static Type Check, but Runs Anyway

· 89 days ago

Here’s a function that fails a static type check, but has no runtime type error:

(defun foo ()
  (sqrt (if (static-type-check? #'foo)
            "bogus"
            2.0)))

I suspect most people who favor static types will argue that this sort of program doesn’t count for some reason or other. I think this is more an example (albeit contrived) of the limitations of static type checking.

Joe Marshall: Tail recursion in REBOL

· 90 days ago

Many years ago I worked on a language called REBOL. REBOL was notable in that it used a variation of Polish notation. Function names came first, followed by the arguments in left to right order. Parentheses were generally not needed as the subexpression boundaries could be deduced from the arguments. It’s a bit complicated to explain, but pretty easy to code up.

An interpreter environment will be a list of frames, and each frame is an association list of variable bindings.

(defun lookup (environment symbol)
  (cond ((consp environment)
         (let ((probe (assoc symbol (car environment))))
           (if probe
               (cdr probe)
               (lookup (cdr environment) symbol))))
        ((null environment) (error "Unbound variable."))
        (t (error "Bogus environment."))))

(defun extend-environment (environment formals values)
  (cons (map 'list #'cons formals values) environment))

define mutates the topmost frame of the environment.

(defun environment-define! (environment symbol value)
  (cond ((consp environment)
         (let ((probe (assoc symbol (car environment))))
           (if probe
               (setf (cdr probe) value)
               (setf (car environment) (acons symbol value (car environment))))))
        ((null environment) (error "No environment."))
        (t (error "Bogus environment."))))

We’ll use Lisp procedures to represent REBOL primitives. The initial environment will have a few built-in primitives:

(defun initial-environment ()
  (extend-environment
   nil
   '(add
     lessp
     mult
     print
     sub
     sub1
     zerop)
   (list #'+
         #'<
         #'*
         #'print
         #'-
         #'1-
         #'zerop)))

A closure is a three-tuple

(defclass closure ()
  ((arguments :initarg :arguments :reader closure-arguments)
   (body :initarg :body :reader closure-body)
   (environment :initarg :environment :reader closure-environment)))

An applicable object is either a function or a closure.

(deftype applicable () '(or closure function))

We need to know how many arguments a function takes. We keep a table of the argument counts for the primitives:

(defparameter +primitive-arity-table+ (make-hash-table :test #'eq))

(eval-when (:load-toplevel :execute)
  (setf (gethash #'* +primitive-arity-table+) 2)
  (setf (gethash #'< +primitive-arity-table+) 2)
  (setf (gethash #'+ +primitive-arity-table+) 2)
  (setf (gethash #'- +primitive-arity-table+) 2)
  (setf (gethash #'1- +primitive-arity-table+) 1)
  (setf (gethash #'print +primitive-arity-table+) 1)
  (setf (gethash #'zerop +primitive-arity-table+) 1)
  )

(defun arity (applicable)
  (etypecase applicable
    (closure (length (closure-arguments applicable)))
    (function (or (gethash applicable +primitive-arity-table+)
                  (error "Unrecognized function.")))))

REBOL-EVAL-ONE takes a list of REBOL expressions and returns two values: the value of the leftmost expression in the list, and the list of remaining expressions.

(defun rebol-eval-one (expr-list environment)
  (if (null expr-list)
      (values nil nil)
      (let ((head (car expr-list)))
        (etypecase head

          ((or number string) (values head (cdr expr-list)))

          (symbol
           (case head

             (define
              (let ((name (cadr expr-list)))
                (multiple-value-bind (value tail) (rebol-eval-one (cddr expr-list) environment)
                  (environment-define! environment name value)
                  (values name tail))))

             (if
              (multiple-value-bind (pred tail) (rebol-eval-one (cdr expr-list) environment)
                (values (rebol-eval-sequence (if (null pred)
                                                 (cadr tail)
                                                 (car tail))
                                             environment)
                        (cddr tail))))

             (lambda
                 (values (make-instance 'closure
                          :arguments (cadr expr-list)
                          :body (caddr expr-list)
                          :environment environment)
                  (cdddr expr-list)))

             (otherwise
              (let ((value (lookup environment head)))
                (if (typep value 'applicable)
                    (rebol-eval-application value (cdr expr-list) environment)
                    (values value (cdr expr-list)))))))))))

If the leftmost symbol evaluates to something applicable, we find out how many arguments are needed, gobble them up, and apply the applicable:

(defun rebol-eval-n (n expr-list environment)
  (if (zerop n)
      (values nil expr-list)
      (multiple-value-bind (value expr-list*) (rebol-eval-one expr-list environment)
        (multiple-value-bind (values* expr-list**) (rebol-eval-n (1- n) expr-list* environment)
          (values (cons value values*) expr-list**)))))

(defun rebol-eval-application (function expr-list environment)
  (multiple-value-bind (arglist expr-list*) (rebol-eval-n (arity function) expr-list environment)
    (values (rebol-apply function arglist) expr-list*)))

(defun rebol-apply (applicable arglist)
  (etypecase applicable
    (closure  (rebol-eval-sequence (closure-body applicable)
                                   (extend-environment (closure-environment applicable)
                                                       (closure-arguments applicable)
                                                       arglist)))
    (function (apply applicable arglist))))

Evaluating a sequence is simply calling rebol-eval-one over and over until you run out of expressions:

(defun rebol-eval-sequence (expr-list environment)
  (multiple-value-bind (value expr-list*) (rebol-eval-one expr-list environment)
    (if (null expr-list*)
        value
        (rebol-eval-sequence expr-list* environment))))

Let’s try it:

(defun testit ()
  (rebol-eval-sequence
   '(
     define fib
       lambda (x)
        (if lessp x 2
            (x)
            (add fib sub1 x
                 fib sub x 2))

     define fact
       lambda (x)
       (if zerop x
           (1)
           (mult x fact sub1 x))

     define fact-iter
       lambda (x answer)
       (if zerop x
           (answer)
           (fact-iter sub1 x mult answer x))

     print fib 7
     print fact 6
     print fact-iter 7 1
    )
   (initial-environment)))

CL-USER> (testit)

13 
720 
5040

This little interpreter illustrates how basic REBOL evaluation works. But this interpreter doesn’t support iteration. There are no iteration special forms and tail calls are not “safe for space”. Any iteration will run out of stack for a large enough number of repetitions.

We have a few options:

  • choose a handful of iteration special forms like do, repeat, loop, for, while, until etc.
  • invent some sort of iterators
  • make the interpreter tail recursive (safe-for-space).
It seems a no-brainer. Making the interpreter tail recursive doesn’t preclude the other two. In fact, it makes them easier to implement.

To effectively support continuation passing style, you need tail recursion. This alone is a pretty compelling reason to support it.

But it turns out that this is easier said than done. Are you a cruel TA? Give your students this interpreter and ask them to make it tail recursive. The problem is that key recursive calls in the interpreter are not in tail position. These are easy to identify, but you’ll find that fixing them is like flattening a lump in a carpet. You’ll fix tail recursion in one place only to find your solution breaks tail recursion elsewhere.

If our interpreter is written in continuation passing style, it will be syntactically tail recursive, but it won’t be “safe for space” unless the appropriate continuations are η-reduced. If we look at the continuation passing style version of rebol-eval-sequence we’ll see a problem:

(defun rebol-eval-sequence-cps (expr-list environment cont)
  (rebol-eval-one-cps expr-list environment
    (lambda (value expr-list*)
      (if (null expr-list*)
          (funcall cont value)
          (rebol-eval-sequence-cps expr-list* environment cont)))))

We cannot η-reduce the continuation. We cannot make this “safe for space”.

But the continuation contains a conditional, and one arm of the conditional simply invokes the containing continuation, so we can η-convert this if we unwrap the conditional. We’ll do this by passing two continuations to rebol-eval-one-cps as follows

(defun rebol-eval-sequence-cps (expr-list environment cont)
  (rebol-eval-one-cps expr-list environment
     ;; first continuation
     (lambda (value expr-list*)
       (rebol-eval-sequence-cps expr-list* environment cont))
     ;; second continuation, eta converted
     cont))

rebol-eval-one-cps will call the first continuation if there are any remaining expressions, and it will call the second continuation if it is evaluating the final expression.

This interpreter, with the dual continuations to rebol-eval-one-cps, is safe for space, and it will interpret tail recursive functions without consuming unbounded stack or heap.

But we still have a bit of an implementation problem. We’re allocating an extra continuation per function call. This doesn’t break tail recursion because we discard one of the continuations almost immediately. Our continuations are not allocated and deallocated in strict stack order anymore. We cannot easily convert this back into a stack machine implementation.

To solve this problem, I rewrote the interpreter using Henry Baker’s Cheney on the M.T.A technique where the interpreter functions were a set of C functions that tail called each other and never returned. The stack would grow until it overflowed and then we’d garbage collect it and reset it. The return addresses pushed by the C function calls were ignored. Instead, continuation structs were stack allocated. These contained function pointers to the continuation. Essentially, we would pass two return addresses on the stack, each in its own struct. Once the interpreter figured out which continuation to invoke, it would invoke the function pointer in the struct and pass a pointer to the struct as an argument. Thus the continuation struct would act as a closure.

This technique is pretty portable and not too bad to implement, but writing continuation passing style code in portable C is tedious. Even with macros to help, there is a lot of pointer juggling.

One serendipitous advantage of an implementation like this is that first-class continuations are essentially free. Now I’m not wedded to the idea of first-class continuations, but they make it much easier to implement error handling and advanced flow control, so if you get them for free, in they go.

With its Polish notation, tail recursion, and first-class continuations, REBOL was described as an unholy cross between Tcl and Scheme. “The result of Ousterhout and Sussman meeting in a dark alley.”

Current versions of REBOL use a simplified interpreter that does not support tail recursion or first-class continuations.

ABCL Dev: A Midsummer's Eve with ABCL 1.9.2

· 97 days ago
On the threshold of the Northern Hemisphere's Midsummer's Eve, we unveil the second revision of the Tenth Edition of the Armed Bear Common Lisp implementation, viz. abcl-1.9.2.

Most notably, we are pleased to present the fruits of Alejandro Zamora Fonseca's labors on an interpreted implementation of CL:STEP with the first release of the ABCL-STEPPER contrib.  See <https://abcl.org/svn/tags/1.9.2/contrib/abcl-stepper/README.markdown> for more details.  This implementation was the subject of his demonstration <https://zenodo.org/record/7815887> at this year's European Lisp Symposium.

The POSIX-SYSCALLS contrib now provides the ability to set environment variables under UNIX systems <https://abcl.org/trac/changeset/15700>, as well as providing an example of how best to add additional abstract functionality to the core implementation.

This release also features a substantially re-worked Gray Stream implementation which fixes enough of our implementation to make it a first-class citizen of SLIME usage.  With the next release of SLIME, one will have to use abcl-1.9.2 in order to use the slime-repl.  See <https://github.com/slime/slime/pull/765> for more details.

The ASDF-JAR contrib has been restored to a usable functionality, which includes re-packaging of ASDF systems from within jar files as well as better abstraction for finding non-source artifacts.  Please refer to <https://gitlab.common-lisp.net/abcl/abcl/-/blob/master/contrib/asdf-jar/README.markdown> for a refresher on what facilities ASDF-JAR provides for the aspiring ASDF packager.

A complete list of changes may be viewed at <https://abcl.org/svn/tags/1.9.2/CHANGES>.

The release itself is available at <https://abcl.org/releases/1.9.2/>.

Thanks to everyone who continues to support the Bear.  Enjoy yer evening...

Quicklisp news: June 2023 Quicklisp dist update now available

· 98 days ago

New projects

  • 3d-spaces — A library implementing spatial query structures — zlib
  • 40ants-slynk — Utilities to start SLYNK if needed and to track active connections. — Unlicense
  • binary-structures — A library for reading, writing, and representing structures from binary representations — zlib
  • cl-atelier — An atelier for Lisp developers — MIT License
  • cl-bmp — A library for dealing with Windows bitmaps (BMP, DIB, ICO, CUR) — zlib
  • cl-def-properties — Common Lisp definitions introspection library — MIT
  • cl-fast-ecs — Blazingly fast Entity-Component-System microframework. — MIT
  • cl-fbx — Bindings to ufbx, a simple and free FBX model decoding library — zlib
  • cl-id3 — A Common Lisp implementation of the ID3 machine learning algorithm by R. Quinlan. — BSD-2-Clause
  • cl-jschema — Common Lisp implementation of JSON Schema — MIT
  • cl-jsonl — Lazy read JSONL files with each line a separate definition. — MIT
  • cl-ktx — An implementation of the Khronos KTX image file format — zlib
  • cl-opensearch-query-builder — Common Lisp implementation of a builder for the OpenSearch query DSL — *CL-OPENSEARCH-QUERY-BUILDER-LICENSE*
  • cl-opus — Bindings to libopusfile, a simple and free OGG/Opus decoding library — zlib
  • cl-slugify — converts a string into a slug representation. — unlicense
  • cl-tqdm — Simple And Fast Progress Bar Library for Common Lisp — MIT
  • cl-unac — bindings for lib unac(3). — unlicense
  • cl-wavefront — A library to parse the Wavefront OBJ file format. — zlib
  • cl-webmachine — HTTP Semantic Awareness on top of Hunchentoot — MIT License
  • decompress — A defensive and fast Deflate decompressor in pure CL. — MIT
  • extensible-optimizing-coerce — `extensible-optimizing-coerce` primarily provides a `extensible-optimizing-coerce:coerce` function intended as an extensible alternative to `cl:coerce`. — MIT
  • kdlcl — KDL reader/printer for common lisp — MIT No Attribution
  • kdtree-jk — KD-TREE package for searching for nearest neighbors in N points in M dimensions in N log(N) time. — MIT
  • khazern — A portable and extensible Common Lisp LOOP implementation — BSD
  • letv — The LETV Package. Exports two macros, LETV and LETV* that allow to combine standard LET and LET* constructs with MULTIPLE-VALUE-BIND in a possibly less verbose way that also requires less indentation. — BSD
  • logging — Functions to configure log4cl for different contexts: REPL, Backend, Command Line Application. — Unlicense
  • lru-cache — A least-recently-used cache structure — zlib
  • memory-regions — Implementation of a memory region abstraction — zlib
  • native-lazy-seq — Lazy sequence using user-extensible sequence protocol. — GPLv3.0+
  • nytpu.lisp-utils — A collection of miscellaneous standalone utility packages. — MPL-2.0
  • openapi-generator — Parse OpenAPI into CLOS object for client generation — AGPLv3-later
  • prettier-builtins — A lightweight library to pretty print builtin arrays and hash-tables. — MIT
  • prometheus-gc — This is a Prometheus collector for Common Lisp implementation garbage collector. — Unlicense
  • punycode — Punycode encoding/decoding — zlib
  • quickhull — An implementation of the Quickhull convex hull construction algorithm — zlib
  • reblocks — A Common Lisp web framework, successor of the Weblocks. — LLGPL
  • reblocks-auth — A system to add an authentication to the Reblocks based web-site. — Unlicense
  • reblocks-file-server — A Reblocks extension allowing to create routes for serving static files from disk. — Unlicense
  • reblocks-lass — A helper for Reblocks framework to define CSS dependencies in LASS syntax. — Unlicense
  • reblocks-navigation-widget — A container widget which switches between children widgets when user changes an url. — Unlicense
  • reblocks-parenscript — An utility to define JavaScript dependencies for Weblocks widgets using Parenscript. — Unlicense
  • reblocks-prometheus — This is an addon for Reblocks Common Lisp framework which allows to gather metrics in Prometheus format. — Unlicense
  • reblocks-typeahead — A Reblocks widget implementing typeahead search. — Unlicense
  • reblocks-ui — A set of UI widgets for Reblocks web framework! — BSD
  • reblocks-websocket — Reblocks extension allowing to add a bidirectional communication via Websocket between a backend and Reblocks widgets. — Unlicense
  • rs-json — Yet another JSON decoder/encoder. — Modified BSD License
  • si-kanren — A micro-Kanren implementation in Common Lisp — MIT
  • sly-macrostep — Expand CL macros inside source files — GPL 3
  • sly-named-readtables — NAMED-READTABLES support for SLY — GPL 3
  • statusor — A library for graceful handling of errors in common lisp inspired by absl::StatusOr — BSD
  • stopclock — stopclock is a library for measuring time using (stop)clocks — Apache 2.0
  • unboxables — A simple wrapper around CFFI to enable contiguously allocated arrays of structures in Common Lisp. — MIT
  • vellum-binary — vellum custom binary format. — BSD simplified

Updated projects: 3bmd, 3bz, 3d-matrices, 3d-quaternions, 3d-transforms, 3d-vectors, abstract-arrays, acclimation, adhoc, alexandria, april, arc-compat, architecture.builder-protocol, array-utils, asdf-dependency-graph, aserve, async-process, atomics, bdef, big-string, bordeaux-threads, bp, cari3s, cffi, chanl, chipz, chirp, chlorophyll, ci, cl+ssl, cl-all, cl-apertium-stream-parser, cl-async, cl-bmas, cl-charms, cl-clon, cl-collider, cl-colors2, cl-confidence, cl-cpus, cl-cram, cl-data-structures, cl-dbi, cl-feedparser, cl-form-types, cl-forms, cl-gamepad, cl-gap-buffer, cl-git, cl-glib, cl-gltf, cl-gobject-introspection, cl-gobject-introspection-wrapper, cl-gserver, cl-i18n, cl-lib-helper, cl-liballegro, cl-liballegro-nuklear, cl-libuv, cl-locatives, cl-lzlib, cl-markless, cl-mixed, cl-mlep, cl-modio, cl-mpg123, cl-naive-store, cl-openapi-parser, cl-out123, cl-patterns, cl-ppcre, cl-protobufs, cl-rashell, cl-replica, cl-semver, cl-sentry-client, cl-steamworks, cl-stopwatch, cl-str, cl-string-complete, cl-telegram-bot, cl-threadpool, cl-tiled, cl-unix-sockets, cl-utils, cl-veq, cl-webkit, cl-zstd, clack, clad, classimp, clingon, clog, closer-mop, cluffer, clx, cmd, codex, com-on, common-lisp-jupyter, commondoc-markdown, computable-reals, concrete-syntax-tree, consfigurator, cover, croatoan, crypto-shortcuts, css-lite, csv-validator, ctype, data-lens, deeds, definitions-systems, deflate, defrec, dense-arrays, deploy, depot, dexador, djula, dns-client, doc, docs-builder, draw-cons-tree, dynamic-collect, eclector, esrap, extensible-compound-types, factory-alien, file-select, filesystem-utils, fiveam-matchers, float-features, for, fresnel, functional-trees, gendl, geodesic, glsl-toolkit, gtirb-capstone, gtirb-functions, harmony, http2, iclendar, imago, in-nomine, interface, journal, json-lib, jzon, lack, letrec, lichat-tcp-client, lichat-tcp-server, lichat-ws-server, lime, linear-programming, linewise-template, lispcord, literate-lisp, lla, log4cl, log4cl-extras, lquery, maiden, map-set, markup, math, mcclim, messagebox, metabang-bind, mgl, mgl-mat, mgl-pax, micmac, mmap, mnas-graph, mnas-hash-table, mnas-package, mnas-path, mnas-string, modularize, mystic, named-closure, named-readtables, nibbles-streams, nodgui, north, omglib, osc, ospm, overlord, parachute, parameterized-function, pathname-utils, petalisp, plump, polymorphic-functions, ppath, print-licenses, promise, protobuf, py4cl2, py4cl2-cffi, queen.lisp, quick-patch, quri, random-sample, random-state, recur, sc-extensions, scheduler, sel, serapeum, shasht, shop3, simple-config, simple-inferiors, simple-tasks, sly, speechless, spinneret, staple, stepster, stmx, studio-client, stumpwm, swank-client, synonyms, template, ten, tfeb-lisp-hax, tooter, trace-db, trivia, trivial-arguments, trivial-clipboard, trivial-extensible-sequences, trivial-features, trivial-indent, trivial-package-locks, trivial-timeout, trivial-with-current-source-form, trucler, try, typo, uax-9, ubiquitous, ucons, usocket, utm-ups, vellum, vellum-postmodern, verbose, webapi, zacl, zippy, zpb-ttf.

Removed projects: cl-data-frame, cl-facts, cl-lessp, cl-libfarmhash, cl-libhoedown, cl-num-utils, cl-random, cl-rollback, colleen, gfxmath, glsl-metadata, halftone, history-tree, lionchat, monomyth, nclasses, neo4cl, nfiles, nhooks, njson, nkeymaps, nsymbols, numericals, nyxt, osmpbf, plain-odbc, trivial-coerce, trivial-string-template.

I removed Nyxt because it uses its own style of build system (nasdf) that doesn't work very well with Quicklisp. I recommend getting it directly if you want to use it. Other removed projects stopped building and did not respond to bug reports or disappeared from the Internet.

To get this update, use (ql:update-dist "quicklisp"). Enjoy!


Gábor Melis: PAX Live Documentation Browser

· 108 days ago

PAX got a live documentation browser to make documentation generation a more interactive experience. A great thing about Lisp development is changing a single function and quickly seeing how it behaves without the delay of a full recompile. Previously, editing a docstring required regenerating the full documentation to see how the changes turned out. The live documentation browser does away with this step, which tightens the edit/document loop.

PAX also got an apropos browser. It could always generate documentation for stuff not written with PAX in mind, so with the live browser already implemented, this was just a small add-on.

The trouble with interactivity is, of course, that it's difficult to get the point across in text, so I made two short videos that demonstrate the basics.

Thomas Fitzsimmons: ulisp-repl

· 108 days ago

Read-Evaluate-Print Loops are great for doing quick experiments. I recently released two new REPL packages for Emacs to GNU ELPA. This is the second in a two-part series. Here is part 1.

For microcontroller projects, uLisp is a great option. It provides a Lisp REPL on top of the Arduino libraries. It implements a subset of Common Lisp and adds microprocessor-specific functions.

I previously built and blogged about a handheld computer designed by uLisp’s creator. I also ported uLisp to the SMART Response XE.

uLisp is controlled over a serial port. People on the uLisp forum have posted various ways to do this, including some Emacs methods. They required external software though, and I wanted something that would run in Emacs with no external dependencies. Emacs has make-serial-process and serial-term built-in, so I wondered if I could make a REPL using those. The result is ulisp-repl, which I published to GNU ELPA. Here is an asciinema screencast of installing and using it. You can pause the video and copy text out of it to try in your Emacs session.

This inline player uses only free and open source JavaScript. Or you can download ulisp-repl-1.cast and play it with the asciinema command line player.

It has syntax highlighting on the current line. It might be cool to also implement a SLIME server in Emacs itself (and have SLIME connect to the current Emacs process instead of an external one) but uLisp programs are usually small, so it’s easy enough to copy-n-paste Lisp snippets into the REPL.

Joe Marshall: Lisp Essential, But Not Required

· 109 days ago

Here’s a weird little success story involving Lisp. The code doesn’t rely on anything specific to Lisp. It could be rewritten in any language. Yet it wouldn’t have been written in the first place if it weren’t for Lisp.

I like to keep a Lisp REPL open in my Emacs for tinkering around with programming ideas. It only takes a moment to hook up a REST API or scrape some subprocess output, so I have a library of primitives that can talk to our internal build tools and other auxiliary tools such as GitHub or CircleCI. This comes in handy for random ad hoc scripting.

I found out that CircleCI is written in Clojure, and if you connect to your local CircleCI server, you can start a REPL and run queries on the internal CircleCI database. Naturally, I hooked up my local REPL to the Clojure REPL so I could send expressions over to be evaluated. We had multiple CircleCI servers running, so I could use my local Lisp to coordinate activity between the several CircleCI REPLs.

Then a need arose to transfer projects from one CircleCI server to another. My library had all the core capabilities, so I soon had a script for transferring projects. But after transferring a project, we had to fix up the branch protection in GitHub. The GitHub primitives came in handy. Of course our internal systems had to be informed that the project moved, but I had scripting primitives for that system as well.

More requirements arose: package the tool into a docker image, deploy it as a microservice, launch it as a kubernetes batch job, etc. At each point, the existing body of code was 90% of the solution, so it only required small changes to the code to handle the new requirements. As of now, the CircleCI migration tool is deployed as a service used by dozens of our engineers.

Now Lisp isn’t directly necessary for this project. It could easily (for some definitions of easy) be rewritten in another language. But the initial idea of connecting to a Clojure REPL from another Lisp is an obvious thing to try out and only takes moments to code up. If I were coding in another language, I could connect to the REPL, but then I’d have to translate between my other language and Lisp. It’s not an obvious thing to try out and would take a long time to code up. So while this project could be written in another language, it never would have been. And Lisp’s flexibility meant that there was never a reason for a rewrite, even as the requirements were changing.

Nicolas Martyanoff: Reduce vs fold in Common Lisp

· 115 days ago

Introduction

If you have already used functional languages, you are probably familiar with fold, a higher-order function used to iterate on a collection of values, combining them to return a result. You may be surprised that Common Lisp does not have a fold function, but provides REDUCE, which works a bit differently. Let us see how they differ.

Understanding REDUCE

In its simplest form, REDUCE accepts a function and a sequence (meaning either a list or a vector). It then applies the function to successive pairs of sequence elements.

You can easily check what happens by tracing the function:

CL-USER> (trace +)
CL-USER> (reduce #'+ '(1 2 3 4 5))
  0: (+ 1 2)
  0: + returned 3
  0: (+ 3 3)
  0: + returned 6
  0: (+ 6 4)
  0: + returned 10
  0: (+ 10 5)
  0: + returned 15
15

In this example, the call to REDUCE evaluates (+ (+ (+ (+ 1 2) 3) 4) 5).

You can reverse the order using the :from-end keyword argument:

CL-USER> (trace +)
CL-USER> (reduce #'+ '(1 2 3 4 5) :from-end t)
  0: (+ 4 5)
  0: + returned 9
  0: (+ 3 9)
  0: + returned 12
  0: (+ 2 12)
  0: + returned 14
  0: (+ 1 14)
  0: + returned 15
15

In which case you will evaluate (+ 1 (+ 2 (+ 3 (+ 4 5)))). The result is of course the same since the + function is associative.

You can of course provide an initial value, in which case REDUCE will behave as if this value had been present at the beginning (or the end, with :from-end) of the sequence.

The surprising aspect of REDUCE is its behaviour when called on a sequence with less than two elements:

  • If the sequence contains a single element:
    • if there is no initial value, the function is not called and the element is returned directly;
    • if there is one, the function is called on both the initial value and the single element.
  • If the sequence is empty:
    • if there is no initial value, the function is called without any argument;
    • if there is one, the function is not called and the initial value is returned directly.

As a result, any function passed to REDUCE must be able to handle being called with zero or two arguments. Most examples found on the Internet use + or append, and these functions happen to handle it (e.g. (+) returns the identity element of the addition, zero). If you write your own functions, you will have to deal with it using the &OPTIONAL lambda list keyword.

This can lead to unexpected behaviours. If you compute the sum of a sequence of floats using (reduce #'+ floats), you may find it logical to obtain a float. But if FLOATS is an empty sequence, you will get 0 which is not a float. Something to keep in mind.
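
To make the gotcha concrete (REPL transcript, illustrative):

CL-USER> (reduce #'+ (vector 1.5 2.5))
4.0
CL-USER> (reduce #'+ (vector))
0                    ; (+) returns the integer 0
CL-USER> (reduce #'+ (vector) :initial-value 0.0)
0.0                  ; an initial value forces a float result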

Differences with fold

The fold function is traditionally defined as accepting three arguments: a function, an initial value — or accumulator — and a list. The function is called repeatedly with both the accumulator and a list element, using the value returned by the function as next accumulator.

For example in Erlang:

lists:foldl(fun(X, Sum) -> Sum + X end, 0, [1, 2, 3, 4, 5]).

An interesting consequence is that fold functions are always called with the same type of arguments (the list value and the accumulator), while REDUCE functions can be called with zero or two list values. This makes it harder to write functions when the accumulated value has a different type from sequence values.

Fold is also simpler than REDUCE since it does not have any special case, making it easier to reason about its behaviour.

It would be interesting to know why a function as fundamental as fold was not included in the Common Lisp standard.

Implementing FOLDL

We can of course implement a fold function in Common Lisp. We will concentrate on the most common (and most efficient) left-to-right version. Let us start with a simple implementation for lists:

(defun foldl/list (function value list)
  (declare (type (or function symbol) function)
           (type list list))
  (if list
      (foldl/list function (funcall function value (car list)) (cdr list))
      value))

As is clearly visible, the recursive call to FOLDL/LIST is in tail position, and SBCL will happily perform tail-call elimination.

For vectors we use an iterative approach:

(defun foldl/vector (function value vector)
  (declare (type (or function symbol) function)
           (type vector vector))
  (do ((i 0 (1+ i))
       (accumulator value))
      ((>= i (length vector))
       accumulator)
    (setf accumulator (funcall function accumulator (aref vector i)))))

Finally we write the main FOLDL function which operates on any sequence:

(defun foldl (function value sequence)
  (declare (type (or function symbol) function)
           (type sequence sequence))
  (etypecase sequence
    (list (foldl/list function value sequence))
    (vector (foldl/vector function value sequence))))

At this point we can already use FOLDL for various operations. We could of course improve it with the addition of the usual :START, :END and :KEY keyword arguments for more flexibility.
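
Usage mirrors fold in other functional languages, with the accumulator always passed first. For example:

CL-USER> (foldl #'+ 0 '(1 2 3 4 5))
15
CL-USER> (foldl (lambda (acc x) (cons x acc)) '() '(1 2 3))
(3 2 1)              ; the classic left-fold list reversal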

vindarel: Pretty GUIs now: nodgui comes with a pre-installed nice-looking theme

· 116 days ago

Being able to load a custom theme is great, but it would be even better if we didn’t have to manually install one.

Well, recent changes in nodgui from yesterday and today just dramatically improved the GUI situation for Common Lisp[0].

nodgui now ships the yaru theme

@cage committed the Yaru theme from ttkthemes to nodgui’s repository, and we added QoL improvements. To use it, you can now simply do:

(with-nodgui ()
  (use-theme "yaru")
  ...)

or

(with-nodgui (:theme "yaru")
  ...)

or

(setf nodgui:*default-theme* "yaru")
(with-nodgui ()
  ...)

Yaru looks like this:

No, it isn’t native, but it doesn’t look like the 50s either.

See my previous post for more themes, screenshots and instructions to load a third-party theme. Forest Light is nice too!

Try the demos

Try the demos with this theme:

(setf nodgui:*default-theme* "yaru")
(nodgui.demo:demo)
;; or
(nodgui.demo:demo :theme "yaru")
;; a precise demo
(nodgui.demo::demo-widget :theme "yaru")

Themes directory

@cage also made it easier to load a theme.

I have added the special variable *themes-directory* (default is the directory themes under the directory where the asdf system is) where the library looks for themes.

Each theme must be placed in its own directory as a subdirectory of the aforementioned variable; the name of the directory must be the name of the theme. Moreover, the TCL file that specifies the theme must have the same name as the theme, with the extension “tcl” appended

For example, the theme “foo” has to be: “foo/foo.tcl”

Provided these conditions are met, using a new theme should be as simple as typing (nodgui:use-theme "foo"), without calling (nodgui:eval-tcl-file) first.
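
In other words, something like this should be all that is needed (a sketch; the path is hypothetical):

;; Hypothetical directory containing one subdirectory per theme:
(setf nodgui:*themes-directory* #p"/home/me/tk-themes/")
(with-nodgui ()
  ;; Looks for /home/me/tk-themes/foo/foo.tcl:
  (use-theme "foo"))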

Otherwise, just clone a theme repository somewhere, and call

(eval-tcl-file "path/to/the/theme.tcl")
(use-theme "theme")

I can very well imagine using small GUI tools built with Tk and this theme. I’ll have to try nodgui’s auto-complete widget too. If you do build a little something, please share; it will help and inspire me and those who come after you.


[0]: be more grandiose if you can.

Joe MarshallRaymarching in Lisp

· 118 days ago

It turns out there’s a nice functional variation of raytracing called raymarching. The algorithms involved are simple and elegant. The learning curve is shallow and you can generate great looking images without hairy trig or linear algebra.

We’ll follow the example of Georges Seurat and simply compute the color independently for each of myriads of pixels. This is efficiently done in parallel in real time on a GPU, but then you have to use shader language and I want to use Lisp. It is insanely inefficient to do this serially on the CPU in Lisp, but still fast enough to render an image in a couple of seconds.

Imagine you have some scene you want to render. There is a volume of 3-dimensional space the scene occupies. Now imagine we know for every point in 3-dimensional space how far away that point is from the nearest surface. This is a scalar value that can be assigned to any point. It is zero for every point that lies on a surface, positive for points above surfaces, and negative for points below. This is the SDF (Signed Distance Field). The SDF is all we need to know to generate a raytraced image of the scene.

We’ll use the SDF to feel our way through the scene. We’ll start at the tip of the ray we’re tracing. We don’t know where the surface is, but if we consult the SDF, we can determine a distance we can safely extend the ray without hitting any surface. From this new point, we can recur, again stepping along no further than the SDF at this new location permits. One of two things will happen: we either step forever or we converge on a surface.

(defun raymarch (sdf origin direction)
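  ;; Note: this post's snippets assume a Scheme-style named LET for
  ;; iteration and generic arithmetic (+, *, normalize, dot) on vectors;
  ;; these are not standard Common Lisp and come from the author's own
  ;; utilities.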
  (let iter ((distance 0)
             (count 0))
    (let* ((position (+ origin (* direction distance)))
           (free-path (funcall sdf position)))
      (if (< free-path +min-distance+)
          position  ;; a hit, a very palpable hit
          (unless (or (> count +max-raymarch-iterations+)
                      (> free-path +max-distance+))
            (iter (+ distance free-path) (+ count 1)))))))

To convert an SDF to a Seurat function, we trace an imaginary ray from our eye, through the screen, and into the scene. The ray origin is at your eye, and we’ll say that is about 3 units in front of the window. The ray will travel 3 units to the screen and hit the window at point (i,j), so the ray direction is (normalize (vector i j 3)). We march along the ray to find if we hit a surface. If we did, we compute the amount of light the camera sees using the Lambert shading model.

(defun sdf->seurat (sdf)
  (let ((eye-position (vector 0 0 -4))
        (light-direction (normalize (vector 20 40 -30))))
    (lambda (i j)
      (let* ((ray-direction (normalize (vector i j 3)))
             (hit (raymarch sdf eye-position ray-direction)))
        (if hit
            (* #(0 1 0) (lambert sdf hit light-direction))
            (vector 0 0 0))))))

Lambert shading is proportional to the angle between the surface and the light falling on it, so we take the dot product of the light direction with the normal to the surface at the point the light hits it. If we know the SDF, we can approximate the normal vector at a point by probing the SDF nearby the point and seeing how it changes.

(defun lambert (sdf hit light-direction)
  (dot (pseudonormal sdf hit) light-direction))

(defun pseudonormal (sdf position)
  (let ((o (funcall sdf position))
        (dsdx (funcall sdf (+ #(0.001 0 0) position)))
        (dsdy (funcall sdf (+ #(0 0.001 0) position)))
        (dsdz (funcall sdf (+ #(0 0 0.001) position))))
      (normalize (vector (- dsdx o) (- dsdy o) (- dsdz o)))))

These are all you need to generate good looking 3-d images from an SDF. Now the SDFs for primitive geometric shapes are pretty simple. Here is the SDF for a sphere.

(defun sdf-sphere (position radius)
  (lambda (vector)
    (- (length (- vector position)) radius)))

and the SDF for the ground plane

(defun sdf-ground (h)
  (lambda (vector)
    (+ (svref vector 1) h)))

Given the SDF for two objects, you can use higher order functions to compose them into a scene. Taking the minimum of two SDFs will give you the union of the shapes. Taking the maximum will give you the intersection of two shapes. Other higher order functions on SDFs can blend two SDFs. This has the effect of morphing the shapes together in the image.
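
For instance, such combinators could look like this (a sketch; the names are ours, following the closure-returning convention of sdf-sphere above):

(defun sdf-union (sdf-a sdf-b)
  ;; Distance to the union is the distance to the nearest shape.
  (lambda (v) (min (funcall sdf-a v) (funcall sdf-b v))))

(defun sdf-intersection (sdf-a sdf-b)
  ;; Distance to the intersection is the larger of the two distances.
  (lambda (v) (max (funcall sdf-a v) (funcall sdf-b v))))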

I like this approach to raytracing because the arithmetic is straightforward and obvious. You only need the simplest of vector arithmetic, and you don’t need linear algebra or matrix math to get started (although you’ll want projection matrices later on when you want to move your camera around). I’m more comfortable with recursive functions than 3x3 matrices.

This approach to raytracing is best done on a graphics card. These algorithms are pretty straightforward to code up in shader language, but shader language is fairly primitive and doesn’t have higher order functions or closures. Code written in shader language has to be converted to not use closures and HOFs.

Stelian IonescuBordeaux Threads APIv2

· 121 days ago
From APIv1 to v2

Sometime last year I took a look at the bug reports and feature requests that had accumulated on Bordeaux-Threads and I decided it was time to bring the code back into shape. The code had insufficient test coverage, which had allowed bugs to make their way into releases, with some egregious examples like with-lock-held having incongruent signatures on some implementations. That wasn’t the only case though: join-thread was inconsistently implemented, in some cases not returning the thread function’s return values at all, or returning only the first value.

vindarelPretty GUI in Common Lisp with nodgui's Tk themes

· 122 days ago

Do you think Tcl/Tk GUIs are doomed to look outdated?

Fear not!

Forest light theme

A treeview widget:

The official example of Forest Light:

The ttkthemes gallery

Plus, Tk itself has a little choice of built-in themes:


We can use these themes with nodgui, the Ltk fork.

In June of 2020, @cage added a little function to load a .tcl file:

(defun eval-tcl-file (file-path)
  "This function will feed the TCL interpreter with the contents
   of the file `path'.
   Please, as this function will load  and execute a script, ensure to
   load files only from trusted sources otherwise severe security problem
   may arise."
  (assert (stringp file-path))
  (format-wish "source {~a}" file-path))

As a consequence, we can load a .tcl script that defines a theme, and use it. Themes generally consist of a .tcl script and a directory of png or gif images (when images are not defined in-line).

Considering we cloned the ttkthemes repo locally:

  (with-nodgui ()
    (eval-tcl-file "ttkthemes/ttkthemes/png/yaru/yaru.tcl")
    (use-theme "yaru"))

and that’s all there is to it.


For now, some themes are not supported: scalable themes won’t load, nor will the .gif-based themes of ttkthemes (the “scid” and “smog” themes; the Sun Valley theme didn’t work either). This could change when tksvg lands in Debian (or maybe if you install it yourself? I didn’t try), or with the next release of Tcl/Tk, which will include SVG support (read #13).

Frankly, that was the great news of the day. Yes, I think some themes are pleasant to the eyes! This makes me want to use little Tk UIs here and there.

Here’s the code for the little media player of the screenshots. It is based on Peter Lane’s extensive examples.

Kudos to @cage o/

vindareli18n in my Lisp web app with Djula templates and gettext

· 141 days ago

I finally added translations to my Lisp web app \o/

A welcome screen with text translated to french, yiha!

I wanted to do it with gettext and Djula templates. There seemed to be some support for this, but it turned out... not straightforward. After two failed attempts, I decided to offer a little 90 USD bounty for the task (I announced it on the project’s issues and on Discord; watch them for future bounties ;) ).

@fstamour took up the challenge and is someone to whom I’ll be eternally grateful :D He kindly set up everything, answered my questions and tracked down annoying bugs. BTW, I recommend you have a look at his ongoing breeze project (towards refactoring tools for CL) and local-gitlab.

Many thanks go as usual to @mmontone for incorporating changes to Djula after our feedback. See Djula’s documentation.

Djula’s gettext backend is based on the rotatef/gettext library. It worked fine. I left some feedback there anyway.

Why gettext

GNU gettext is the canonical tool to bring translations to software projects. Using it ensures we have access to its range of localization features, and it unlocks the possibility of using modern web-based translation tools (like Weblate), assuming you are ambitious enough to have external translators for your project.

I looked at other Lisp libraries.

  • cl-i18n: an i18n library. It loads translations from GNU gettext text or binary files, or from its native format, and provides localisation helpers for plural forms.

It may ship improvements upon gettext, but @fstamour ultimately chose gettext over it:

I ended up with so much less code with gettext than with cl-i18n and I found gettext’s code much easier to read if the documentation was lacking.

(BTW, @cage has been really helpful in answering many questions, hello o/ ) He explained:

Seems that the library you pointed out does not support any files but MO (binary) files. cl-i18n can parse a couple more formats, like its own, and includes an extractor for translatable strings in source files, so it can be used without any of the gettext toolchain. But they address the same problem in more or less the same way. :)

  • translate also is not gettext-compatible, it has a function to find missing translations, it got a Djula backend last April. Look, it is this easy to add a backend:
;; translation-translate.lisp
(in-package :djula)

(defmethod backend-translate ((backend (eql :translate)) string language &rest args)
  (apply #'translate:translate string language args))
  • cl-locale, a “Simple i18n library for Common Lisp”, works with hand-written dictionaries. It is not gettext-compatible either; it has a Djula backend, but no tool to collect all the translatable strings.

Djula is a very nice templating library that works with HTML templates, much like Django’s. It has support for two translation backends, although I found it hard to get started with. It should be better now, but you’re welcome to improve things further.

To translate a string in a template, we enclose it between {_ _} marks like so:

<p> {_ "Please login to continue" _} </p>

We will set up what’s necessary to collect those strings and handle them with gettext.

Extracting strings from .lisp source files

We need to extract strings from .lisp source files and from HTML templates.

xgettext can already collect strings for a lot of languages. It understands Lisp syntax; we only need to tell it which marker we use for strings to translate. We will use the underscore function, as is the convention in many languages out there:

(_ "welcome")

We have to setup the gettext library:

(setf (gettext:textdomain) "bookshops")
;;                          ^^ a meaningful name for gettext's catalogue.

(gettext:setup-gettext #.*package* "bookshops")

This creates new functions under the hood in the current package:

(defmacro setup-gettext (package default-domain)
  (setf package (find-package package))
  (check-type default-domain string)
  `(progn
     (defun ,(intern "GETTEXT" package) (msgid &optional domain category locale)
       (gettext* msgid (or domain ,default-domain) category locale))
     (defun ,(intern "_" package) (msgid &optional domain category locale)
       (gettext* msgid (or domain ,default-domain) category locale))
     (defun ,(intern "NGETTEXT" package) (msgid1 msgid2 n &optional domain category locale)
       (ngettext* msgid1 msgid2 n (or domain ,default-domain) category locale))
     (defun ,(intern "N_" package) (msgid)
       msgid)
     (defun ,(intern "CATALOG-META" package) (&optional domain category locale)
       (catalog-meta* (or domain ,default-domain) category locale))))

So yes, it creates the _ function. It does this in a macro so that the generated functions use our domain by default. You can now export it:

(defpackage :bookshops.i18n
  (:use :cl)
  (:import-from :gettext #:*current-locale*)
  (:export
   #:_
   #:n_
   #:*current-locale*
   #:list-loaded-locales
   #:set-locale
   #:with-locale
   #:update-djula.pot)
  (:documentation "Internationalisation utilities"))

We can now call xgettext:

xgettext --language=lisp --from-code=UTF-8 --keyword=_ --output=locale/ie.pot --sort-output

The --keyword argument (-k) tells it we are using the underscore. We also want to collect the N_ ones; N_ is for ngettext, which handles grammatical forms that depend on a number (typically, plurals):

xgettext -k_ -kN_

OK, now we want to find all our .lisp sources and extract strings from them all. We’ll search them with a call to find . -iname "*.lisp" .... You have an example in Djula’s doc, here’s how we did (ahem, how Francis did) with a Makefile target:

# List .lisp files under our src/ directory, unless they contain a #
SRC := $(shell find src/ -name '*.lisp' -a ! -name '*\#*')
HTML := $(shell find src/ -name '*.html' -a ! -name '*\#*')
DEPS := $(SRC) $(HTML) bookshops.asd # and some more...

# list of supported locales
LOCALES := fr_fr
# Example of how the variable should look after adding a new locale:
# LOCALES := fr_FR en_GB

.PHONY: tr
tr: ${MO_FILES}

PO_TEMPLATE_DIR := locale/templates/LC_MESSAGES
PO_TEMPLATE := ${PO_TEMPLATE_DIR}/bookshops.pot

# Rule to extract translatable strings from SRC
${PO_TEMPLATE_DIR}/lisp.pot: $(SRC)
	mkdir -p $(@D)
	xgettext -k_ -kN_ --language=lisp -o $@ $^

# and then, come the rules to extract strings from HTML templates
# and build everything.

Extracting strings from HTML templates

Now, we need to fire up a Lisp and call the Djula function that knows how to collect marked strings.

The Djula doc shows how to do it with djula:xgettext-templates:

sbcl --eval '(ql:quickload :my-project)'
     --eval '(djula::xgettext-templates
               :my-project-package
               (asdf:system-relative-pathname :my-project "i18n/xgettext.lisp"))'
     --quit

This function receives two arguments: your project package and the output file where results are stored. It stores them in a .lisp file in regular gettext syntax, so this .lisp file is then read by a regular xgettext command (looking for _ strings), and this command ultimately creates the .pot file:

find src -iname "*.lisp" | xargs xgettext --from-code=UTF-8 --keyword=_ --output=i18n/my-project.pot --sort-output

We did it a bit differently with two other functions, in order to keep track of the source filename of each string (our source here):

#|
This could technically be just
(mapcan #'djula.locale:file-template-translate-strings
        (djula:list-asdf-system-templates "bookshops" "src/web/templates"))

But I (fstamour) made it just a bit more complex in order to keep track of the source (just the
filename) of each translatable string. Hence the hash-table returned is named `locations`.
|#
(defun extract-translate-strings ()
  "Extract all {_ ... _} string from the djula templates."
  (loop
    :with locations = (make-hash-table :test 'equal)
    :for path :in (djula:list-asdf-system-templates "bookshops" "src/web/templates")
    :for strings = (djula.locale:file-template-translate-strings path)
    :do (loop :for string :in strings
              :unless (gethash string locations)
                :do (setf (gethash string locations) path))
    :finally (return locations)))


(defun update-djula.pot ()
  "Update djula.pot from *.html files."
  (with-open-file (s (asdf:system-relative-pathname "bookshops" "locale/templates/LC_MESSAGES/djula.pot")
                     :direction :output
                     :if-exists :supersede
                     :if-does-not-exist :create)
    (let* ((locations (extract-translate-strings))
           (strings (alexandria:hash-table-keys locations)))
      (loop
        :for string :in strings
        :for location = (gethash string locations)
        :do
           (format s "~%#: ~a~%#, lisp-format~%msgid ~s~%msgstr \"\" ~%"
                   (enough-namestring location (asdf:system-relative-pathname "bookshops" ""))
                   string)))))

So this is our Makefile:

# Rule to extract translatable strings from djula templates
${PO_TEMPLATE_DIR}/djula.pot: $(HTML) src/i18n.lisp
	$(LISP) --non-interactive \
		--eval '(ql:quickload "deploy")' \
		--eval '(ql:quickload "cl+ssl")' \
		--eval '(asdf:load-asd (truename "bookshops.asd"))' \
		--eval '(push :djula-binary *features*)' \
		--eval '(ql:quickload :bookshops)' \
		--eval '(bookshops.i18n:update-djula.pot)'

# Rule to combine djula.pot and lisp.pot into bookshops.pot
${PO_TEMPLATE}: ${PO_TEMPLATE_DIR}/djula.pot ${PO_TEMPLATE_DIR}/lisp.pot
	msgcat --use-first $^ > $@

# Rule to generate or update the .po files from the .pot file
locale/%/LC_MESSAGES/bookshops.po: ${PO_TEMPLATE}
	mkdir -p $(@D)
	[ -f $@ ] || msginit --locale=$* \
          -i $< \
          -o $@ \
	&& msgmerge --update $@ $<

# Rule to create the .mo files from the .po files
locale/%/LC_MESSAGES/bookshops.mo: locale/%/LC_MESSAGES/bookshops.po
	mkdir -p $(@D)
	msgfmt -o $@ $<

Ultimately, this is the one make target we, as developers, have to use:

.PHONY: tr
tr: ${MO_FILES}

Loading the translations

Once gettext has run and we have added a couple of translations, we have to load them inside our Lisp app. We use gettext:preload-catalogs, as in:

;; Only preload the translations into the image if we're not deployed yet.
(unless (deploy:deployed-p)
  (format *debug-io* "~%Reading all *.mo files...")
  (gettext:preload-catalogs
   ;; Tell gettext where to find the .mo files
   #.(asdf:system-relative-pathname :bookshops "locale/")))

This is a top-level instruction. We want it to run on our machine during development and when building the binary (situations where ASDF will find the required directories), but not when we run the binary (the location ASDF wants would not exist on another machine). We can do this with the help of Deploy (Cookbook recipe).

The gettext hash-table is saved into the binary, so we correctly find our translations when we deploy it.

During development

Set the current locale:

(defun set-locale (locale)
  "Setf gettext:*current-locale* and djula:*current-language* if LOCALE seems valid."
  ;; It is valid to set the locale to nil.
  (when (and locale
             (not (member locale (list-loaded-locales)
                          :test 'string=)))
    (error "Locale not valid or not available: ~s" locale))
  (setf *current-locale* locale
        djula:*current-language* locale))


(defmacro with-locale ((locale) &body body)
  "Calls BODY with gettext:*current-locale* and djula:*current-language* set to LOCALE."
  `(let (*current-locale*
         djula:*current-language*)
     (set-locale ,locale)
     ,@body))

BOOKSHOPS> (set-locale "fr_fr")   ;; <-- same as the locales declared in the Makefile

The change takes effect immediately.

However, run this when developing to reload the translations into gettext’s catalogue:

#+ (or)
(progn
  ;; Clear gettext's cache (it's a hash table)
  (clrhash gettext::*catalog-cache*)
  (gettext:preload-catalogs
   ;; Tell gettext where to find the .mo files
   #.(asdf:system-relative-pathname :bookshops "locale/")))

I’ll add a file watcher to automatically reload them later, when I work more with the system.

and:

;; Run this to see the list of loaded message for a specific locale
#+ (or)
(gettext::catalog-messages
 (gethash '("fr_fr" :LC_MESSAGES "bookshops")  ;; yes, a list for the HT key.
	  gettext::*catalog-cache*))

and:

;; Test the translation of a string
#+ (or)
(with-locale ("fr_fr")
  (_ "Please login to continue"))

Usage

From our readme:

make tr takes care of extracting the strings (generating .pot files) and generating or updating (with msgmerge) .po and .mo files for each locale. The .mo files are loaded in the lisp image at compile-time (or run-time, when developing the application).

How to add a new locale?

  1. Add the new locale to the LOCALES variable in the makefile.
  2. Call make tr. This will generate the .po file (and directory) for the new locale.

How to add a translation for an existing string?

  1. Update the .po file for the locale.
    1. Find the msgid that corresponds to the string you want to translate.
    2. Fill the msgstr.
  2. Call make tr to update the .mo file for the locale.

Another blog post I wish I had read a couple years ago o/

You are welcome to make everything even easier to use.

Happy lisping!

Tim BradshawA horrible solution

· 145 days ago

Yesterday I wrote an article describing one of the ways traditional Lisp macros can be unhygienic even when they appear to be hygienic. Here’s a horrible solution to that.

The problem I described is that the expansion of a macro can refer to the values (usually the function values) of names, which the user of the macro can bind, causing the macro to fail. So, given a function

(defun call-with-foo (thunk)
  ...
  (funcall thunk))

Then the macro layer on top of it

(defmacro with-foo (&body forms)
  `(call-with-foo (lambda () ,@forms)))

is not hygienic so long as local functions named call-with-foo are allowed:

(flet ((call-with-foo (...) ...))
  (with-foo ...))
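
To make the failure concrete, here is a hypothetical self-contained version of the clash (the body forms are ours):

(defun call-with-foo (thunk)
  (funcall thunk))

(defmacro with-foo (&body forms)
  `(call-with-foo (lambda () ,@forms)))

(flet ((call-with-foo (thunk)
         (declare (ignore thunk))
         'hijacked))
  (with-foo 42))
;; => HIJACKED: the expansion's call is captured by the local function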

The sensible solution to this is to say, just as the standard does about symbols in the CL package, that you are not allowed to do that.

Here’s another solution:

(defmacro with-foo (&body forms)
  `(funcall (symbol-function 'call-with-foo) (lambda () ,@forms)))

This is robust against anything short of top-level redefinition of call-with-foo. And you can be mostly robust even against that:

(defmacro with-foo (&body forms)
  `(funcall (load-time-value (symbol-function 'call-with-foo))
            (lambda () ,@forms)))

This still isn’t safe against really malignant users, since the load time of the macro’s definition and its uses are not generally the same. But it’s probably fairly good.

I hope I never feel I have to use techniques like this.

Tim BradshawTwo sides to hygiene

· 146 days ago

It’s tempting to think that by being sufficiently careful about names bound by traditional Lisp macros you can write macros which are hygienic. This is not true: it’s much harder than that.

Hygienic macros

I do not fully understand all the problems which Scheme-style hygienic macros try to solve, and the implementation of the solutions is usually sufficiently difficult to understand that I have always been put off doing so, especially as the details of the implementation in Racket, the Scheme-related language I use most, seems to change every few years. I’m happy enough that I am mostly competent to write the macros I need in Racket, without understanding the details of the implementation.

Traditional Lisp macros are, to me, far more appealing because they work in such an explicit and simple way: you could pretty easily write a macroexpander which did most of what the Common Lisp macroexpander does, for instance. I have written several toy versions of such a thing: I’m sure most Lisp people have. Traditional Lisp macros are just functions between bits of language expressed explicitly as s-expressions: what could be simpler?

In fact I am reasonably confident that, if I had to choose one, I’d choose CL’s macros over Racket’s: writing macros in raw CL is a bit annoying because you need explicit gensyms and you need to do pattern matching yourself. But you can write, and I have written, tools to make most of this go away. With these, writing macros in CL can often be very pleasant. And it’s easy to understand what is going on.

What is far harder though, is to make it completely hygienic. Here’s one reason why.

Several versions of a macro in Common Lisp

Let’s imagine I want a macro which allows you to select actions based on the interval a real number is in. It might look like this:

(interval-case x
  ((0 1) ...)
  (((1) 2) ...)
  (otherwise ...))

Here intervals are specified the way they are in type specifiers for reals:

  • (a b) where a and b are reals means \([a,b]\);
  • ((a) b) where a and b are reals means \((a,b]\);
  • and so on.

There can be only one interval per clause, for simplicity.

I will write several versions of this macro. For all of them I will use dsm and, later, metatronic macros to make things better.

First of all here’s a function1 which, given an interval specification, returns a form which will match numbers in that interval:

(defun compute-interval-form (v iv)
  (destructuring-match iv
    (((l) (h))
     (:when (and (realp l) (realp h)))
     `(< ,l ,v ,h))
    ((l (h))
     (:when (and (realp l) (realp h)))
     `(and (<= ,l ,v) (< ,v ,h)))
    (((l) h)
     (:when (and (realp l) (realp h)))
     `(and (< ,l ,v) (<= ,v ,h)))
    ((l h)
     (:when (and (realp l) (realp h)))
     `(<= ,l ,v ,h))
    (default
     (:when (member default '(t otherwise)))
     t)
    (otherwise
     (error "~S is not an interval designator" iv))))

A hopeless version

Here is a version of this macro which is entirely hopeless:

(defmacro interval-case (n &body clauses)
  ;; Hopeless
  `(cond
    ,@(mapcar (lambda (clause)
                (destructuring-bind (iv &body forms) clause
                  `(,(compute-interval-form n iv) ,@forms)))
              clauses)))

It’s hopeless because of this:

> (let ((x 1))
    (interval-case (incf x)
      ((1 (2)) '(1 (2)))
      ((2 (3)) '(2 (3))))
NIL

So (incf x) where x is initially 1 is apparently neither in \([1,2)\) nor \([2,3)\) which is strange. This is happening, of course, because the macro is multiply-evaluating its argument, which it should not do.

An obviously unhygienic repair

So let’s try to fix that:

(defmacro interval-case (n &body clauses)
  ;; Unhygenic
  `(let ((v ,n))
     (cond
      ,@(mapcar (lambda (clause)
                  (destructuring-bind (iv &body forms) clause
                    `(,(compute-interval-form 'v iv) ,@forms)))
                clauses))))

Well, this is better:

> (let ((x 1))
    (interval-case (incf x)
      ((1 (2)) '(1 (2)))
      ((2 (3)) '(2 (3)))))
(2 (3))

but … not much better:

> (let ((x 1) (v 10))
    (interval-case (incf x)
      ((1 (2)) nil)
      ((2 (3)) v)))
2

The macro binds v, which shadows the outer binding of v and breaks everything.

A repair which might be hygienic

Here is the normal way to fix that:

(defmacro interval-case (n &body clauses)
  ;; OK
  (let ((vn (make-symbol "V")))
    `(let ((,vn ,n))
       (cond
        ,@(mapcar (lambda (clause)
                    (destructuring-bind (iv &body forms) clause
                      `(,(compute-interval-form vn iv) ,@forms)))
                  clauses)))))

And now

> (let ((x 1) (v 10))
    (interval-case (incf x)
      ((1 (2)) nil)
      ((2 (3)) v)))
10

Good. I think it is possible to argue that this version of the macro is hygienic, at least in terms of names.

A simpler repair using metatronic macros

Here is the previous macro written using metatronic macros:

(defmacro/m interval-case (n &body clauses)
  ;; OK, easier
  `(let ((<v> ,n))
     (cond
      ,@(mapcar (lambda (clause)
                  (destructuring-bind (iv &body forms) clause
                    `(,(compute-interval-form '<v> iv) ,@forms)))
                clauses))))

This is simpler to read and should be as good.

An alternative approach …

Although it is not entirely natural in the case of this macro, many macros can be written by having the macro expand into a call to a function, passing another function whose body is the body of the macro as an argument. These things often exist as pairs of with-* (the macro) and call-with-* (the function).

We can persuade interval-case to work like that: it’s not a natural macro to write that way, and writing it that way will end up with something almost certainly less efficient (at least the way I’ve written it), as it needs to interpret the interval specifications at runtime rather than compile them2. But I wanted to have just one example.

Here is call/intervals, the function layer:

(defun call/intervals (n ivs/thunks)
  ;; Given a real n and a list of (interval-spec thunk ...), find the
  ;; first spec that n matches and call its thunk.
  (if (null ivs/thunks)
      nil
    (destructuring-bind (iv thunk . more) ivs/thunks
      (if (destructuring-match iv
            (((l) (h))
             (:when (and (realp l) (realp h)))
             (< l n h))
            ((l (h))
             (:when (and (realp l) (realp h)))
             (and (<= l n) (< n h)))
            (((l) h)
             (:when (and (realp l) (realp h)))
             (and (< l n) (<= n h)))
            ((l h)
             (:when (and (realp l) (realp h)))
             (<= l n h))
            (default
             (:when (member default '(t otherwise)))
             t)
            (otherwise
             (error "~S is not an interval designator" iv)))
          (funcall thunk)
        (call/intervals n more)))))

As well, here is a ‘nospread’ variation on call/intervals which serves as an impedance matcher:

(defun call/intervals* (n &rest ivs/thunks)
  ;; Impedance matcher
  (declare (dynamic-extent ivs/thunks))
  (call/intervals n ivs/thunks))

Now here’s the macro layer:

(defmacro interval-case (n &body clauses)
  ;; Purports to be hygienic
  `(call/intervals*
    ,n
    ,@(mapcan (lambda (clause)
                `(',(car clause)
                  (lambda () ,@(cdr clause))))
              clauses)))

So we can test this:

> (let ((x 1) (v 10))
    (interval-case (incf x)
      ((1 (2)) nil)
      ((2 (3)) v)))
10

So, OK, that’s good, right? This is another hygienic macro. Not so fast.

… which is not hygienic

> (flet ((call/intervals* (&rest junk)
           (declare (ignore junk))
           86))
    (interval-case 2
      ((1 2) 'two)))
86

Not so hygienic, then.

The alternative approach in Racket

Here is a similar alternative approach implemented in Racket:

(define (call/intervals n ivs/thunks)
  ;; Here ivs/thunks is a list of (iv thunk) pairs, which is not the same
  ;; as the CL version: that's because I can't work out how to do the
  ;; syntax rule otherwise.
  (match ivs/thunks
    ['() #f]
    [(list (list iv thunk) more ...)
     (if
      (match iv
        [(list (list (? real? l))
               (list (? real? h)))
         (< l n h)]
        [(list (? real? l)
               (list (? real? h)))
         (and (<= l n) (< n h))]
        [(list (list (? real? l))
               (? real? h))
         (and (< l n) (<= n h))]
        [(list (? real? l) (? real? h))
         (<= l n h)]
        [(or 'otherwise #t)
         #t]
        [_
         (error 'call/intervals "~S is not an interval designator" iv)])
      (thunk)
      (call/intervals n more))]))

(define (call/intervals* n  . ivs/thunks)
  ;; impedance matcher (not so useful here)
  (call/intervals n ivs/thunks))

(define-syntax-rule (interval-case n (key body ...) ...)
  (call/intervals* n (list 'key (thunk body ...)) ...))

And now:

> (call/intervals* 1 (list '(0 1) (thunk 3)))
3
> (interval-case 2
    ((1 2) 'two))
'two
> (let ([call/intervals* (thunk* 86)])
    (interval-case 2
      ((1 2) 'two)))
'two
> (let ([call/intervals* (thunk* 86)])
    (call/intervals* 2))
86

In Racket this macro is hygienic.

Two sides to hygiene

So the problem here is that there are at least two sides to hygiene for macros:

  • names they use, usually by binding variables but also in other ways, must not interfere with names used in the program where the macro is used;
  • the program where the macro is used must not be able to alter what the names the macro refers to mean.

In both cases, of course, there need to be exceptions which are part of the macro’s contract with its users: with-standard-io-syntax is allowed (and indeed required) to bind *print-case* and many other variables.

I think almost everyone understands the first of these problems, but the second is much less often thought about.

Dealing with this problem in Common Lisp

I think a full solution to this problem in CL would be very difficult: macros would have to refer to the names they rely on by names which were somehow unutterable by the programs that used them. Short of actually writing a fully-fledged hygienic macro system for CL this sounds impractical.

In practice the solution is to essentially extend what CL already does. For symbols (so, names) in the CL package there are strong restrictions on what conforming programs may do. This program is not legal CL3 for instance:

(flet ((car (x) x))
  ... (car ...))

So the best answer is then, I think, to:

  • use packages with well-defined interfaces in the form of exported symbols;
  • disallow or strongly discourage the use of internal symbols of packages by programs which are not part of the implementation of the package;
  • and finally place restrictions similar to those placed on the CL package on exported symbols of your packages.

Note that package locks don’t answer this problem: they usually forbid the modification of various attributes of symbols and the creation or deletion of symbols, but what is needed is considerably stronger than that: it needs to be the case that you can’t establish any kind of binding, even a lexical one, for symbols in the package.

Is this a problem in practice? Probably not often. Do I still prefer traditional Lisp macros? Yes, I think so.


  1. This function is what you would want to make more complicated to allow multiple intervals per clause. 

  2. This interpretation could be avoided by having the compiler turn the interval specifications into one-argument functions. I think it’s still not a natural way to write this macro. 

  3. Assuming that car means ‘the symbol whose name is "CAR" in the "COMMON-LISP" package’. 

Tim BradshawNirvana

· 146 days ago

An article constructed from several emails from my friend Zyni, reproduced with her permission. Note that Zyni’s first language is not English.

Many people have tried to answer what is so special about Lisp by talking about many things.

Such as interactive development, a thing common now to many languages of course, and if you use Racket with DrRacket not in fact how development usually works there at all. Are we to cast Racket into the outer darkness?1

Such as CLOS, a thing specific to Common Lisp: can you not achieve Lisp enlightenment unless you program in Common Lisp? Was Lisp enlightenment impossible before CLOS existed? What stupid ideas. Could you implement CLOS in a language which was not Lisp? Certainly you could.

Such as the CL condition system: a thing also specific to Common Lisp. Something also which could be implemented in any sufficiently dynamic language. Something almost nobody who writes in Common Lisp understands I think.

And so it goes on.

None of this is the answer. None of this is close to the answer. To find the answer ask why did these things arise in Lisp first? What is the property of Lisp which is in fact unique to Lisp and which defines Lisp in strict sense that if any other language had this property it would be a Lisp? To see answer to this you must understand Bradshaw’s law and my corollary to it:

Bradshaw’s law. All sufficiently large software systems end up being programming languages.

Zyni’s corollary. At whatever size you think Bradshaw’s law applies, it applies sooner than that.

This means that all programming is language construction.2 When you write a program you are writing a language in which to express the problem you wish to solve.

Now you can begin understand what is so interesting about Lisp. In almost all programming languages when you solve a problem you define a lot of new words for the language you have, and perhaps you define elaborate classifications of the nouns of the language you will allow. But you can do nothing with the structure of the language you must use because the language will not allow that: it has a fixed grammar handed down by the great and good who designed it who are sometimes not fools. And indeed you are fiercely discouraged from even understanding what it is you are doing: discouraged from understanding that you are building a new language.

And quite soon (sooner than you think and in fact immediately) you find you must actually have new structure, new grammar. But you cannot do this easily both because the language you use does not allow it and also because you do not know what it is you are doing - you do not realise that you are making a language. So probably you use a templating system or something and build an awful horror. Often this horror will have nested languages where inner languages appear in strings in outer languages. Often it will have evaluation rules so obscure and inconsistent that it is impossible for humans to write safe large programs in this language (Unix shells: I look at you). We have all seen these things.

And so you live out your life crawling in the dirt, never understanding what thing it is of which you are making a very bad, very unsafe, very ugly version. Because you have been taught there is only mud so all you do is pile up structures out of mud, to be washed away by the next rain. A little way over is a tribe who knows only straw and they build structures from straw which blow away in the first wind. You hate them; they hate you. Sometimes you have little wars.

What, on the other hand, do you do in Lisp? Well, few days ago I needed a way to express the idea of searching some (very) large structure and being able to fail in a structured way. So after ten minutes work, my program now says things like this:

(defun big-search-thing (thing)
  (attempting
    (quick-and-dirty thing)
    (try-harder thing)))

(defun try-harder (thing)
  (walking-thing (node thing :level 0)
    (attempting
      (first-pass thing)
      (desperate-fallback thing))))

(defun first-pass (thing)
  ...
  (when doom (fail))
  ...)

Well it does not matter what this does and this is not what my program is actually like, but what is clear just by looking is that this language is not Common Lisp. Instead it is Common Lisp extended with at least two new grammatical constructs: attempting with its friend fail which looks like a verb but in fact is a control construct really, and walking-thing which is some kind of new iteration construct perhaps.

And there is more: when you look at attempting you will find it is implemented (by a function which) uses a construct called looping which is another extension to Common Lisp. And similarly for walking-thing (which is not really called that) which uses I think four separate new grammatical constructs I do not remember.

And there is more: when I started this essay these constructs were mostly as I showed above, but we have decided this was wrong, so the new language is now somewhat different and somewhat richer. A few more tens of minutes of work, most of it altering the existing programs in the old language to use the new language. The new language is even defined using a language-extending construct which itself is an extension to CL’s provided ones.

And this is how you program in Lisp. In Lisp, writing programs is building languages: in Lisp to solve a problem is to first build a language in which the problem may be solved. And because doing this is so easy in Lisp, this is what you do even for very small problems: you incrementally extend the grammar of the language — not just its lexicon — to create a language in which to describe the problem.

Well, this is not surprising, is it? This is what the laws imply: programming is constructing languages, and this applies even for very small programs. What is surprising is that so few languages encourage this. And because they do not we end up with the horror we all know. Perhaps even this is not surprising: any language which supports this well will have all the characteristics of Lisp, will in fact be a Lisp. So no other languages do this because to do it requires being Lisp. So why is Lisp not more popular? Well, answer is fairly easy but this is discussion for another day, I think.

And now we see why Lisp got features first: because it could. Let us say you wish to explore an object system in Lisp. Well, perhaps you will want a class-defining construct, so you write a macro, define-class or something. And you wish to be able to send messages, so you write a send function and then you modify the readtable so [o message ...] is (send o message ...). And perhaps you wish some new binding construct for fields so you write with-fields and so, and so.

And now you have a new language. If you were careful you may even have constructed that new language inside a single running Lisp image. And this took, perhaps, some hours. And later, you decide that no, you wish your new language to be different, so you change it. Another few hours. Eventually, in a different world, you call this part of the language ZLOS and there is a standard.

And this is why these linguistic innovations happen in Lisp: because Lisp is a machine for linguistic innovation. It is that feature of Lisp which makes it interesting, and it is only that feature: both because all other features derive from that one and because to have that feature is to be Lisp.

That is all.


  1. Do not answer this or I will kill you with a stale loaf of bread. 

  2. This is exaggeration: if you define no names in your program you are, perhaps, not constructing a language. 

vindarelHow to parse command line arguments in Common Lisp (bis)

· 160 days ago

In 2018, I wrote a blog post and the Cookbook page on how to build Common Lisp binaries, and how to parse command-line arguments with the unix-opts library.

But since then, new libraries were created and they are pretty good! They are simpler to use and have many more features. I had a good experience with Clingon: its usage is clear, its documentation is very good, it is very flexible (it has hooks and generic functions waiting to have an :around method), and this is not @dnaeon’s first great CL project.

You might give adopt a look, or maybe defmain, though I felt a little something was missing there.

So I updated the guide to use Clingon. Let’s go.

=> This article is best read on the Common Lisp Cookbook where it will receive updates.

As a reminder to this often-asked question: my SBCL standalone binaries, with dozens of dependencies (SBCL’s compiler and debugger (very useful to load code during the application’s lifecycle), a web server, static assets and other libraries), weigh about 30MB and start in ±0.4s with SBCL core compression. Without compression, it’s more like 130MB and 0.01s.

Parsing command line arguments

SBCL stores the command line arguments into sb-ext:*posix-argv*.

But that variable name differs between implementations, so we want a way to handle the differences for us.

We have (uiop:command-line-arguments), shipped in ASDF and included in nearly all implementations. From anywhere in your code, you can simply check if a given string is present in this list:

(member "-h" (uiop:command-line-arguments) :test #'string-equal)

That’s good, but we also want to parse the arguments, have facilities to check short and long options, build a help message automatically, etc.

We chose the Clingon library, because it may have the richest feature set:

  • it handles subcommands,
  • it supports various kinds of options (flags, integers, booleans, counters, enums...),
  • it generates Bash and Zsh completion files as well as man pages,
  • it is extensible in many ways,
  • we can easily try it out on the REPL
  • etc

Let’s download it:

(ql:quickload "clingon")

As often, work happens in two phases:

  • we first declare the options that our application accepts, their kind (flag, string, integer...), their long and short names, and whether they are required.
  • we ask Clingon to parse the command-line options and run our app.

Declaring options

We want to represent a command-line tool with this possible usage:

$ myscript [-h, --help] [-n, --name NAME]

Ultimately, we need to create a Clingon command (with clingon:make-command) to represent our application. A command is composed of options and of a handler function that implements the logic.

So first, let’s create options. Clingon already handles “--help” for us, but not the short version. Here’s how we use clingon:make-option to create an option:

(clingon:make-option
 :flag                ;; <--- option kind. A "flag" does not expect a parameter on the CLI.
 :description "short help"
 ;; :long-name "help" ;; <--- long name, sans the "--" prefix, but here it's a duplicate.
 :short-name #\h      ;; <--- short name, a character
 ;; :required t       ;; <--- is this option always required? In our case, no.
 :key :help)          ;; <--- the internal reference to use with getopt, see later.

This is a flag: if “-h” is present on the command-line, the option’s value will be truthy, otherwise it will be falsy. A flag does not expect an argument, it’s here for itself.

Similar kind of options would be:

  • :boolean: that one expects an argument, which can be “true” or 1 to be truthy. Anything else is considered falsy.
  • :counter: a counter option counts how many times the option is provided on the command line. Typically, use it with -v / --verbose, so the user could use -vvv to have extra verbosity. In that case, the option value would be 3. When this option is not provided on the command line, Clingon sets its value to 0.
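
For instance, a verbosity option of the :counter kind might look like this (a sketch following the same pattern as above; the names are ours):

(clingon:make-option
 :counter              ;; <--- counts occurrences: -vvv gives 3
 :description "verbosity level"
 :short-name #\v
 :long-name "verbose"
 :key :verbose)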

We’ll create a second option (“--name” or “-n”, with a parameter) and we put everything in a little function.

;; The naming with a "/" is just our convention.
(defun cli/options ()
  "Returns a list of options for our main command"
  (list
   (clingon:make-option
    :flag
    :description "short help."
    :short-name #\h
    :key :help)
   (clingon:make-option
    :string              ;; <--- string type: expects one parameter on the CLI.
    :description "Name to greet"
    :short-name #\n
    :long-name "name"
    :env-vars '("USER")     ;; <-- takes this default value if the env var exists.
    :initial-value "lisper" ;; <-- default value if nothing else is set.
    :key :name)))

The second option we created is of kind :string. This option expects one argument, which will be parsed as a string. There is also :integer, to parse the argument as an integer.

There are more option kinds in Clingon, which you will find in its good documentation: :choice, :enum, :list, :filepath, :switch and so on.

Top-level command

We have to tell Clingon about our top-level command. clingon:make-command accepts some descriptive fields, and two important ones:

  • :options is a list of Clingon options, each created with clingon:make-option
  • :handler is the function that will do the app’s logic.

And finally, we’ll use clingon:run in our main function (the entry point of our binary) to parse the command-line arguments, and apply our command’s logic. During development, we can also manually call clingon:parse-command-line to try things out.

Here’s a minimal command. We’ll define our handler function afterwards:

(defun cli/command ()
  "A command to say hello to someone"
  (clingon:make-command
   :name "hello"
   :description "say hello"
   :version "0.1.0"
   :authors '("John Doe <john.doe@example.org>")
   :license "BSD 2-Clause"
   :options (cli/options) ;; <-- our options
   :handler #'null))  ;; <--  to change. See below.

At this point, we can already test things out on the REPL.

Testing options parsing on the REPL

Use clingon:parse-command-line: it wants a top-level command, and a list of command-line arguments (strings):

CL-USER> (clingon:parse-command-line (cli/command) '("-h" "-n" "me"))
#<CLINGON.COMMAND:COMMAND name=hello options=5 sub-commands=0>

It works!

We can even inspect this command object; we would see its properties (name, hooks, description, context...), its list of options, etc.

Let’s try again with an unknown option:

CL-USER> (clingon:parse-command-line (cli/command) '("-x"))
;; => debugger: Unknown option -x of kind SHORT

In that case, we are dropped into the interactive debugger, which says

Unknown option -x of kind SHORT
   [Condition of type CLINGON.CONDITIONS:UNKNOWN-OPTION]

and we are provided a few restarts:

Restarts:
 0: [DISCARD-OPTION] Discard the unknown option
 1: [TREAT-AS-ARGUMENT] Treat the unknown option as a free argument
 2: [SUPPLY-NEW-VALUE] Supply a new value to be parsed
 3: [RETRY] Retry SLIME REPL evaluation request.
 4: [*ABORT] Return to SLIME's top level.

which are very practical. If we needed to, we could create an :around method for parse-command-line, handle Clingon’s conditions with handler-bind and use its restarts, to do something different with unknown options. But we don’t need that yet, if ever: we want our command-line parsing engine to warn us about invalid options.

Last but not least, we can see how Clingon prints our CLI tool’s usage information:

CL-USER> (clingon:print-usage (cli/command) t)
NAME:
  hello - say hello

USAGE:
  hello [options] [arguments ...]

OPTIONS:
      --help          display usage information and exit
      --version       display version and exit
  -h                  short help.
  -n, --name <VALUE>  Name to greet [default: lisper] [env: $USER]

AUTHORS:
  John Doe <john.doe@example.org>

LICENSE:
  BSD 2-Clause

We can tweak the “USAGE” part with the :usage key parameter of the top-level command.

Handling options

When the parsing of command-line arguments succeeds, we need to do something with them. We introduce two new Clingon functions:

  • clingon:getopt is used to get an option’s value by its :key
  • clingon:command-arguments gets us the free arguments remaining on the command-line.

Here’s how to use them:

CL-USER> (let ((command (clingon:parse-command-line (cli/command) '("-n" "you" "last"))))
           (format t "name is: ~a~&" (clingon:getopt command :name))
           (format t "free args are: ~s~&" (clingon:command-arguments command)))
name is: you
free args are: ("last")
NIL

It is with them that we will write the handler of our top-level command:

(defun cli/handler (cmd)
  "The handler function of our top-level command"
  (let ((free-args (clingon:command-arguments cmd))
        (name (clingon:getopt cmd :name)))  ;; <-- using the option's :key
    (format t "Hello, ~a!~%" name)
    (format t "You have provided ~a more free arguments~%" (length free-args))
    (format t "Bye!~%")))

We must tell our top-level command to use this handler:

;; from above:
(defun cli/command ()
  "A command to say hello to someone"
  (clingon:make-command
   ...
   :handler #'cli/handler))  ;; <-- changed.

We now only have to write the main entry point of our binary and we’re done.

By the way, clingon:getopt returns three values:

  • the option’s value
  • a boolean, indicating whether this option was provided on the command-line
  • the command which provided the option for this value.

See also clingon:opt-is-set-p.
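
For instance, a minimal sketch using the second return value to distinguish a user-supplied name from the default:

(multiple-value-bind (name suppliedp)
    (clingon:getopt cmd :name)
  (if suppliedp
      (format t "You chose to greet: ~a~%" name)
      (format t "Greeting the default: ~a~%" name)))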

Main entry point

This can be any function, but to use Clingon, use its run function:

(defun main ()
  "The main entrypoint of our CLI program"
  (clingon:run (cli/command)))

To use this main function as your binary entry point, see above how to build a Common Lisp binary. A reminder: set it in your .asd system declaration:

:entry-point "my-package::main"
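
For context, here is a minimal sketch of such a system declaration (system and file names are hypothetical; these are the standard ASDF program-building options):

(defsystem "my-project"
  :depends-on ("clingon")
  :components ((:file "main"))
  :build-operation "program-op"      ;; build a standalone executable
  :build-pathname "myscript"         ;; name of the resulting binary
  :entry-point "my-package::main")   ;; function called at startup

Build it with (asdf:make :my-project).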

And that’s about it. Congratulations, you can now properly parse command-line arguments!

Go check Clingon’s documentation, because there is much more to it: sub-commands, contexts, hooks, handling a C-c (see also the Cookbook for that), developing new options such as an email kind, Bash and Zsh completion...


Thanks for reading and thanks again to @dnaeon.

Tim BradshawSomething unclear in the Common Lisp standard

· 161 days ago

There is what I think is a confusion about bound declarations in the Common Lisp standard. I may be wrong about this, but I think I’m correct.

Bound and free declarations

Declarations in Common Lisp can be either bound or free:

  • a bound declaration appears at the head of a binding form and applies to a variable or function binding made by that form;
  • a free declaration is any declaration which is not bound.

There are declarations which do not apply to bindings, such as optimize: these are always free.

Examples of bound and free declarations

In the form

(let ((x 1))
  (declare (type integer x))
  ...)

the declaration is bound and applies to the binding of x. In the form

(let ((/x/ 1))
  (declare (special /x/)
           (optimize (speed 3)))
  ...)

the special declaration is bound and applies to the binding of /x/, while the optimize declaration is free.

In the form

(let ((x 1))
  (locally
    (declare (type integer x)
             (optimize speed))
    ...)
  ...)

Both declarations are free and apply only to the body of the locally form.

Declarations which may not be ignored

Most declarations may be ignored by the implementation: this is the case for all type declarations, for instance. Two may not be:

  • notinline forbids inline compilation of the functions it names;
  • special requires dynamic bindings to be made when it is bound, and requires references to be to dynamic, not lexical, bindings when it is free.

I’m going to exploit the non-ignorability of special declarations to show a case where the confusion arises.

The confusion

Forms like let* bind sequentially:

(let* ((x 1) (y x))
  ...)

first binds x and then binds y to the value of x. Now, I am not sure the standard ever says this, but all implementations I have tried take this to mean that the same name can be bound several times by let*:

(let* ((x 1) (x x))
  ...)

is legal, if stylistically awful. That’s because the obvious transformation of let* into nested lets turns this into:

(let ((x 1))
  (let ((x x))
    ...))

which is clearly fine.

So now we come to the problem: what should this mean?

(let* ((x 1) (x x))
  (declare (type fixnum x))
  ...)

Which binding of x does the declaration apply to? The standard does not say. In this case it might not matter, because this declaration can be ignored, but here is a case where it does matter:

(let (c)
  (let* ((/x/ 1)
         (/x/ (progn
                (setf c (lambda () /x/))
                2)))
    (declare (special /x/))
    (values c (lambda () /x/))))

This expression returns two values, both of which are functions:

  • if the first /x/ is special then calling the first function will result in an error;
  • if the second /x/ is special then calling the second function will result in an error.

So using this trick you can know whether the first binding, second binding, or both bindings are affected by the special declaration.

And, again, the standard does not say which binding is affected, or whether both should be. And implementations differ. Given the following file

(in-package :cl-user)  

(defun call-ok-p (f)
  (multiple-value-bind (v c)
      (ignore-errors
       (funcall f)
       t)
    (declare (ignore c))
    v))

(defun ts ()
  (multiple-value-bind (one two)
      (let (c)
        (let* ((/x/ 1)
               (/x/ (progn
                      (setf c (lambda () /x/))
                      2)))
          (declare (special /x/))
          (values c (lambda () /x/))))
    (values (call-ok-p one)
            (call-ok-p two))))

(multiple-value-bind (first-lexical second-lexical) (ts)
  (format t "~&first  ~:[special~;lexical~]~%~
               second ~:[special~;lexical~]~%"
          first-lexical second-lexical))

SBCL

first  lexical
second special

CCL

first  special
second special

LispWorks

first  special
second special

What should the answer be?

I think that the interpretation taken by CCL and LispWorks is better: in forms like this declarations should apply to all the bindings made by the form. An alternative answer is that the declarations should apply to the visible bindings at the point of the declaration, which is the approach taken by SBCL.

It’s tempting to say that the obvious rewrite of let* as nested lets gives you the SBCL answer, but it does not. In a form like

(let* ((x 3) (y x))
  (declare (type integer x)
           (type (integer 0) y))
  ...)

this must be rewritten as

(let ((x 3))
  (declare (type integer x))
  (let ((y x))
    (declare (type (integer 0) y))
    ...))

So the declaration for x must be raised out of the inner let so that it remains a bound declaration: the implementation already has to do work to get declarations in the right place and can’t just naïvely rewrite the form.

I prefer the first interpretation, both because I think it more closely represents what people are likely to want, and because I think the standard could be interpreted as meaning that without being rewritten.
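
If you actually need to control which binding is special, the portable way is to sidestep the question and write the nested lets yourself, so that each bound declaration unambiguously belongs to a single binding:

(let ((/x/ 1))
  (declare (special /x/))  ; only this binding is special
  (let ((/x/ 2))           ; this binding is lexical
    ...))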

Does this matter?

Probably only in very obscure cases! I just thought it was interesting.


Thanks to various people on the Lisp-HUG mailing list for coming up with this.

Nicolas Hafner: 次は...何？ (Next is... what?) - April Kandria Update

· 176 days ago

Another month gone by. There are no major updates this month, since we just had the level editor release last month. However, there is a somewhat major update coming out next week!

You may have guessed from the title: we're adding Japanese localisation to the game! It'll release on Wednesday, the 12th of April. As someone who's been trying to learn Japanese for a long time, this is another step in the development that I've been looking forward to very much.

https://filebox.tymoon.eu//file/TWpZM09BPT0=

I'm also really glad that we can release this already, so it'll be well out by the time I visit Tokyo Game Show in September!

Other news

With the level editor and the beginnings of modding support already in place, I've taken a bit of a step back from Kandria development, and instead focused on a lot of other stuff that's been way overdue, like moving and upgrading our servers, taking care of my mental health, and starting work on pre-pre-production for the next game.

However, Kandria development is not yet done. The second and final major update with an official modding API is still coming. I don't have a release date for that yet, so I hope you remain patient!

In the meantime, my paper for the European Lisp Symposium in Amsterdam was accepted, so I will be presenting Kandria there. The paper is quite technical, but if you're interested in Common Lisp, it might be worth a read. And if you're going to be in Amsterdam, let me know. I'd be happy to chat!

That's it for now

A rather short update this month, but if you're interested in the details of the pre-pre-production process, come hang out in the Shirakumo channel with the rest of us! We're discussing all the improvements to Trial there.

Tim Bradshaw: Measuring some tree-traversing functions

· 184 days ago

In a previous article my friend Zyni wrote some variations on a list-flattening function, some of which were ‘recursive’ and some of which ‘iterative’, managing the stack explicitly. We thought it would be interesting to see what the performance differences were, both for this function and a more useful variant which searches a tree rather than flattening it.

What we measured

The code we used is here[1]. We measured five variations of each of two functions.

List flattening

All these functions use collecting to build their results forwards. They live in flatten-variants.lisp.

  • flatten/implicit-stack works in the obvious recursive way, with an implicit stack. This uses iterate to express the local recursive function.
  • flatten/explicit-stack uses an explicit stack (called agenda in the code) represented as a vector, and uses looping to express iteration.
  • flatten/explicit-stack/adja is like the previous function but it is willing to extend the explicit stack, which it does by using adjust-array and assignment.
  • flatten/explicit-stack/adjb is like flatten/explicit-stack/adja but uses a local tail-recursive function to bind the extended stack rather than assignment.
  • Finally flatten/consy-stack is very close to Zyni’s original iterative solution: it represents the stack as a list. This version necessarily conses fairly copiously.
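
For concreteness, here is a plain-CL sketch of the implicit-stack approach. It is an approximation, not the measured code: the measured flatten/implicit-stack builds its result forwards with collecting and expresses the recursion with iterate, while this version pushes and reverses.

(defun flatten-sketch (tree)
  (let ((result '()))
    (labels ((walk (node)
               (cond ((null node) nil)    ; end of a list: nothing to do
                     ((consp node)
                      (walk (car node))   ; recurse into the car ...
                      (walk (cdr node)))  ; ... then into the cdr
                     (t (push node result)))))
      (walk tree))
    (nreverse result)))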

Searching cons trees

These functions, in treesearch-variants.lisp, correspond to the flattening variants, except they are searching for some atomic value in the tree of conses:

  • search/implicit-stack uses an implicit stack;
  • search/explicit-stack uses a vector;
  • search/explicit-stack/adja uses a vector and adjusts by assignment;
  • search/explicit-stack/adjb uses a vector and adjusts by binding;
  • search/consy-stack uses a consy stack.
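
Again for concreteness, a sketch of the explicit-stack idea. The measured search/explicit-stack differs in detail: it uses looping, and the adjustable variants can grow the agenda, whereas this sketch will simply signal an error if the tree is deeper than the agenda.

(defun search-sketch (item tree &key (stack-size 1000))
  (let ((agenda (make-array stack-size))
        (sp 0)                 ; index of the next free slot
        (this tree))
    (loop
      (cond ((eql this item)
             (return t))
            ((consp this)
             (setf (aref agenda sp) (cdr this))  ; remember the cdr ...
             (incf sp)
             (setf this (car this)))             ; ... and descend the car
            ((zerop sp)
             (return nil))     ; agenda empty: not found
            (t
             (decf sp)
             (setf this (aref agenda sp)))))))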

Notes on the code

The functions all have (declare (optimize (speed 3))) but specifically don’t turn off safety or use implementation-specific settings: we wanted to test code we felt we’d be happy running, and that means code compiled with reasonable settings for safety: if you turn safety off you’re brave, foolish, or both.

We did not compare looping with do or loop: we probably should. However the expansion of looping is pretty straightforward:

(looping ((this o) (depth 0))
  (declare ...)
  ...)

turns into

(let ((this o) (depth 0))
  (declare ...)
  (block nil
    (tagbody
      #:start
      (multiple-value-setq (this depth) ...)
      (go #:start))))

The only real question here, we think, is whether multiple-value-setq is compiled well: brief inspection implies it is. We should probably still compare the current version with more ‘native CL’ variants.

The variants which use a vector as a stack maintain the index of the top element themselves: that’s because we tested using a fill pointer with vector-push / vector-pop, and that was really significantly slower in both implementations.

What we did

The Lisp implementations we used

We used LispWorks 8.0 and very recent SBCL builds, compiled from the master branch no more than a few days before we ran the tests in mid-March 2023.

In the case of SBCL we paid attention to notes and warnings during compilation. The significant one we did not address was that it complained vociferously about not being able to optimize calls to eql. That’s unavoidable: we don’t know the type of the thing we are searching for, so the general comparison the compiler wants to avoid is exactly the work that has to be done. Apart from this the only warnings were about the computation of the new length of the agenda, which never actually happens in the tests we ran.

The machines we benchmarked on

We both have M1-based MacBook Airs so this is what we used. In particular we have not run any benchmarks on x64.

What we ran

make-car-cdr, in common.lisp, makes a list where each element is a chain linked by cars, finally terminating in a specified element. Controlling the length of the list and the depth of the chains gives the functions more iterative or more recursive work to do respectively. The benchmarking code then made a series of suitable structures of increasing size and timed many iterations of each function on the same structure, computing the time per call. We then wrote a program in Racket to plot the results on axes of ‘breadth’ (length of the list) and ‘depth’ (depth of the car-linked chain). For the search functions the element being searched for was not in the tree so they had to do as much work as possible.
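
Our reading of that, as a sketch (the real make-car-cdr is in common.lisp and may differ in detail):

(defun make-car-cdr-sketch (breadth depth element)
  ;; A list of BREADTH elements, each of which is a chain of conses
  ;; DEPTH deep, linked by cars and terminating in ELEMENT.
  (flet ((chain (n)
           (let ((c element))
             (loop repeat n do (setf c (cons c nil)))
             c)))
    (loop repeat breadth collect (chain depth))))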

Life was usually arranged so that the initial agenda was big enough for the functions which used a vector as the agenda, so none of that aspect of them was tested, except for one case below. Apart from that case, the ‘vector stack’ timings refer to flatten/explicit-stack and search/explicit-stack, not the adjustable-stack variants.

Some results

We timed 1,000 iterations of each call, for list lengths (breadth in the plots and below) from 30 to 1,000 in steps of 10 and depths (depth in the plots and below) from 10 to 300 in steps of 10, computing times in μs per iteration. Neither of us knows much about how data like this is best presented, but simply plotting the performance surfaces seemed reasonable. We used bilinear interpolation to make the surface from the points[2].

LispWorks

Treesearch: implicit compared with vector stack

This is nicely linear in both breadth and depth, and so quadratic in breadth × depth. And it’s easy to see that for LW using the implicit stack is faster than the manually-managed stack.

Treesearch: vector stack compared with consy stack

This compares the vector stack with the consy stack, for treesearch. The consy stack is slightly faster, which surprised us. It conses a list as long as the depth of the tree for each ‘leftward’ branch, and then immediately unwinds that and throws the whole list away. So it creates significant garbage, but allocating and collecting that garbage still costs less than using a vector. Consing really is (almost) free.

Treesearch compared with flatten, both with implicit stacks

Here is more evidence that consing is very cheap: the difference between treesearch (which does not cons) and flatten (which does) is tiny.

SBCL

Treesearch: implicit compared with vector stack

So here is SBCL. For SBCL explicitly managing the stack as a vector is significantly faster than the implicit stack. Something that is also apparent here is how variable SBCL’s timings are compared with LW’s: we don’t know why that is, although we suspect it might be because SBCL’s garbage collector is more intrusive than LW’s. We also don’t know whether this variation is repeatable, or whether it’s due to a single very slow run or something like that.

Treesearch: vector stack compared with consy stack

For SBCL the consy stack is significantly slower than the vector stack, so for SBCL the vector stack is the fastest.

Treesearch compared with flatten, both with implicit stacks

SBCL has a slightly larger difference between treesearch and flatten, with flatten being slower. There are also curious ‘waves’ in the plot as depth increases.

LispWorks compared with SBCL

Treesearch: SBCL compared with LispWorks, implicit stacks

LW is significantly faster than SBCL for implicit stacks except for very small depths.

Treesearch: SBCL compared with LispWorks, best stacks

This compares LW using an implicit stack with SBCL using an explicit vector stack. The difference is pretty small now.

Flatten: SBCL compared with LispWorks, consy stacks

This was meant to be the worst-case for both: flattening and a consy stack. But it’s not particularly informative, I think.

The outer reaches: LispWorks with a deep tree

We did one run with the maximum depth set to 10,000 with a step of 500, and maximum breadth set to 1,000 with a step of 100, averaged over 100 iterations instead of 1,000. This is too deep for LW’s stack, but LW allows stack extension, and we wrote what later became this to extend the stack as required. Note that the extension happens only during the first recursion into the left-hand branch of the tree, so it has minimal effect on performance. This run also used search/explicit-stack/adjb for the vector stack.

Treesearch: implicit compared with consy stack, deep tree

As before, the implicit stack is much better for LW. The surface is much more bumpy than it was for smaller depths; this might have been because the machine was doing other things while the run was in progress, but we don’t think so.

Some conclusions

None of the differences were really large. In particular there’s no enormous advantage from managing the stack yourself.

Consing and the resulting garbage collection really do seem to be very cheap, especially in LispWorks: the days of long GC pauses are long gone.

We were surprised that LispWorks was fairly reliably faster than SBCL: surprised enough that we ran everything several times to be sure. It’s also interesting how much smoother LW’s performance surface is in most cases.

It is possible that our implementations just suck, of course.

Mostly it’s just some pretty pictures.


  1. All of the functions should be portable CL. Some of the mechanism for expressing dependencies and loading things is not. However it should be easy for anyone to run this if they wish to. 

  2. Getting the bilinear interpolation right took longer than anything else, and perhaps longer than everything else put together. 

Tim Bradshaw: The absurdity of stacks

· 185 days ago

Very often people regard the stack as a scarce, expensive resource, while the heap is plentiful and very cheap. This is absurd: the stack is memory, the heap is also memory. Deforming programs so they are ‘iterative’ in order that they do not run out of the stack we imagine to be so costly is ridiculous: if you have a program which is inherently recursive, let it be recursive.

In a previous article my friend Zyni wrote some variations on a list-flattening function[1], some of which were ‘recursive’ and some of which ‘iterative’. Of course, the ones which claim to be iterative are, in fact, recursive: any procedure which traverses a recursively-defined data structure such as a tree of conses is necessarily recursive. The ‘iterative’ versions just use an explicitly-maintained stack rather than the implicit stack provided by the language. That makes sense only if stack space is very small compared to the heap and must therefore be conserved. And, well, for many systems that’s true. But it is small only because we have administratively decided it should be small: the stack is just memory. If there is plenty of memory for the heap, there is plenty for the stack.

There are, or may be, arguments for why stacks needed to be small on ancient machines. The history is fascinating, but it is not relevant to today’s systems, other than tiny embedded ones. The persistent view of modern machines as giant PDP-11s has been a blight for well over two decades now: it needs to stop.

The argument that the stack should be small often seems to be that, if it’s not, people will write programs which run away. That’s spurious: if such a program is, in fact, iterative, then good compilers will eliminate the tail calls and it will not use stack: a small limit on the stack will not help. If it’s really recursive then why should it run out of storage before its conversion to a program which manages the stack explicitly does? Of course that’s exactly what compilers which do CPS conversion already do: programs written using compilers which do that won’t have these weird stack limits in the first place. But it should not be necessary to rely on a CPS-converting compiler, or to write in continuation-passing style manually to avoid stack usage: it should be used for other reasons, because the stack is not, in fact, expensive.
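
For instance, this length function is written recursively, but it is iterative in the sense above: the recursive call is a tail call, so a compiler which eliminates tail calls will run it in constant stack space however long the list is.

(defun len (list &optional (n 0))
  ;; The recursive call is the last thing the function does, so it
  ;; can be compiled as a jump rather than a call: no stack growth.
  (if (null list)
      n
      (len (cdr list) (+ n 1))))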

Still less should people feel the need to write programs which explicitly manage a stack except in extraordinary cases.

There need to be some limits on stack size, just as there need to be some limits on heap size, but making the limit on stack size far smaller than the limit on heap size simply encourages people to believe things which aren’t true, and to live in fear of recursive programs.


  1. I still want to know how often functions like this are used in real life. 


For older items, see the Planet Lisp Archives.


Last updated: 2023-09-15 15:07