#### Michał Herda — Detecting non-local exits in Common Lisp

· 25 hours ago

#CommonLisp #Lisp #Tip

Sometimes you need to do something only if control in your code zips by so fast that you cannot grab it... and you do not really care if it goes by slowly. Like, you know, when you are suspicious that it would be doing something funky rather than going by and minding its own business.

In other words, sometimes you are in need of detecting non-local exits from a block of code while ignoring normal returns.

There's an idiom for that - using LET over UNWIND-PROTECT.

;;; © Michał "THEN PAY WITH YOUR BLOOD" Herda 2022

(defun oblivion (thunk)
  (let ((successfulp nil))
    (unwind-protect (multiple-value-prog1 (funcall thunk)
                      (setf successfulp t))
      (unless successfulp
        (error "STOP RIGHT THERE CRIMINAL SCUM")))))

CL-USER> (oblivion (lambda () (+ 2 2)))
4

CL-USER> (block nil (oblivion (lambda () (return-from nil 42))))
;;; Error: STOP RIGHT THERE CRIMINAL SCUM
;;;     [Condition of type SIMPLE-ERROR]



The explanation is simple: we bind a variable to a default value which assumes that a non-local exit will happen.

Then we execute our block of code inside an UNWIND-PROTECT, and only after it finishes successfully do we set that variable to denote that running our code succeeded and that we are ready to return its values.

The cleanup forms of the UNWIND-PROTECT are conditionalized on the same variable and will only trigger if the (SETF SUCCESSFULP T) did not execute - and that only happens if a non-local exit prevented it from occurring.
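If this idiom comes up in more than one place, it can be wrapped in a macro. Here's a minimal sketch; the macro name and argument convention are mine, not from the original post:

```lisp
;;; Hypothetical macro sugar over the LET-over-UNWIND-PROTECT idiom.
;;; CLEANUP-FORM is evaluated only when BODY performs a non-local exit.
(defmacro on-non-local-exit (cleanup-form &body body)
  (let ((successfulp (gensym "SUCCESSFULP")))
    `(let ((,successfulp nil))
       (unwind-protect
            (multiple-value-prog1 (progn ,@body)
              (setf ,successfulp t))
         (unless ,successfulp
           ,cleanup-form)))))

;; (on-non-local-exit (error "STOP RIGHT THERE CRIMINAL SCUM")
;;   (+ 2 2))
;; => 4
```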

In fact, there's an Alexandria utility that does just that! The macro ALEXANDRIA:UNWIND-PROTECT-CASE is capable of supporting this behavior.

;;; (ql:quickload :alexandria)

(catch 'foo
  (alexandria:unwind-protect-case ()
      (throw 'foo 1)
    (:abort (format t "ABORTED"))))
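Besides :ABORT, the macro also supports :NORMAL and :ALWAYS clauses, and can bind a variable telling whether the protected form was aborted. A sketch from memory (double-check against the Alexandria documentation; the protected form here is a placeholder):

```lisp
(alexandria:unwind-protect-case (aborted-p)
    (random 42)                               ; the protected form
  (:normal (format t "Returned normally~%")) ; runs only on normal return
  (:abort  (format t "Non-local exit~%"))    ; runs only on non-local exit
  (:always (format t "Aborted: ~a~%" aborted-p))) ; runs either way
```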



Thanks to Stelian Ionescu for the heads-up!

#### Marco Antoniotti — CDR is Next...

· 5 days ago

Hi

as many may know, I have been nursing (almost to death!) the Common Lisp Document Repository (CDR). Pascal Costanza et al. started the project many years ago, and then I sat on top of it for many more.

I finally found some time to work on it and the result is a revamped site (cdr.common-lisp.dev) with the addition of stashing documents in a Zenodo Community (CDR), which has the benefit of producing a DOI for each write-up.

Moreover, Pascal Bourguignon, Michał "phoe" Herda and Didier Verna have agreed to become CDR editors.  Many thanks to them.

So, if anyone wants to submit a "specification" for something of interest to the CL community, she/he is most welcome to do so.  Just remember that good specifications are not so easy to write.

(cheers)

#### Michał Herda — Forever Stable Branch

· 6 days ago

#CommonLisp #Lisp

I wrote a kinda-long post in which:

• I try to give an overview of how I perceive the current troublesome situation regarding ASDF and SBCL and everyone and everything else,
• I try to brainstorm and describe some ways forward and out of this impasse.

Because of its length (and because of current Movim technical issues wrt rendering Markdown and Common Lisp), it's on its own separate page. You can read it here.

#### Eitaro Fukamachi — Day 4: Roswell: How to make Roswell scripts faster

· 9 days ago

Hi, all Common Lispers.

In the previous post, I introduced "Roswell scripts", the powerful scripting integration of Roswell. Not only does it allow you to provide a command-line interface, it also makes your script easy to install via Roswell. From this point of view, Roswell can be regarded as a distribution system for Common Lisp applications.

Today's topic is related — the speed of scripts.

## Why is my simple Roswell script so slow?

Let's begin with the following Roswell script - a pretty simple one that just prints "Hello" and quits.

#!/bin/sh
#|-*- mode:lisp -*-|#
#|
exec ros -Q -- $0 "$@"
|#
(progn ;;init forms
  (ros:ensure-asdf))

(defpackage :ros.script.hello.3850273748
  (:use :cl))
(in-package :ros.script.hello.3850273748)

(defun main (&rest argv)
  (declare (ignorable argv))
  (write-line "Hello"))
;;; vim: set ft=lisp lisp:

$ ./hello.ros
Hello

Though people expect this simple script to end instantly, it takes longer.

$ time ./hello.ros
Hello

real    0m0.432s
user    0m0.315s
sys    0m0.117s


0.4 seconds to print "Hello". It feels like an early scripting language.

For the record, an equivalent script run with sbcl --script is much faster.

$ time ./hello.lisp
Hello

real    0m0.006s
user    0m0.006s
sys     0m0.000s

Fortunately, there are several solutions to this problem.

## Hacks to speed up

I have to admit that a Roswell script can't be faster than sbcl --script, since Roswell does many more things, but it's possible to get closer to it.

### Stop loading Quicklisp

The first bottleneck is Quicklisp. Quicklisp is a de facto standard and available everywhere today, so we may not realize the cost of loading it. But it can't be ignored in scripting.

Fortunately, it's easy to disable Quicklisp in a Roswell script. Just replace -Q with +Q in the exec line.

 #!/bin/sh
 #|-*- mode:lisp -*-|#
 #|
-exec ros -Q -- $0 "$@"
+exec ros +Q -- $0 "$@"
 |#
 (progn ;;init forms
   (ros:ensure-asdf))

Let's see the difference.

# No Quicklisp version
$ time ./hello.ros
Hello

real    0m0.142s
user    0m0.119s
sys    0m0.020s


It's approximately 0.3 seconds faster - conversely, that is how long it takes to load Quicklisp. This is not a negligible amount of time for a program that starts many times, like a script.

Additionally, we can omit ros:ensure-asdf, since ASDF is unnecessary in this script.

 exec ros +Q -- $0 "$@"
 |#
 (progn ;;init forms
-  (ros:ensure-asdf))
+  )

 (defpackage :ros.script.hello.3850273748
   (:use :cl))
$ time ./hello.ros
Hello

real    0m0.072s
user    0m0.052s
sys     0m0.020s

ASDF seems to take approximately 0.07 seconds to load. Now the script is 6 times faster than the initial version.

These changes are effective for small scripts which don't require Quicklisp or ASDF. Even for scripts that do use Quicklisp and ASDF, this method can be applied partially by loading them conditionally.

For example, suppose a script has several subcommands, like run and help: run requires Quicklisp to load the main application, and help doesn't. With the -Q option, Quicklisp will make the help command slow even though that command doesn't require Quicklisp. In this case, it is better to use the +Q option and load Quicklisp only when necessary. ros:quicklisp is a function that loads Quicklisp manually, even when ros started with +Q. By calling this function right before Quicklisp is needed, the rest of the script stays fast.

### Dump a core with -m

What about fairly complicated applications which must load external dependencies via Quicklisp? Building a binary is the prevailing solution. I suppose it won't surprise you - it's a common technique even outside the Roswell world. I've also mentioned ros build in the previous article, which makes a binary executable from a Roswell script.

However, we can't assume people will always run ros build to speed up your application after installation. Roswell takes care of this: it has a feature to implicitly dump a core for the script to speed up its execution.

Add the -m option to the exec ros line. Then, re-enable Quicklisp and ASDF to see how practical this feature is.

@@ -1,10 +1,10 @@
 #!/bin/sh
 #|-*- mode:lisp -*-|#
 #|
-exec ros +Q -- $0 "$@"
+exec ros -Q -m hello -- $0 "$@"
 |#
 (progn ;;init forms
-  )
+  (ros:ensure-asdf))

 (defpackage :ros.script.hello.3850273748
   (:use :cl))

And install the script.

$ ros install hello.ros
/home/fukamachi/.roswell/bin/hello


Let's try it. The first time, it'll take a little while for Roswell to dump a core named hello.core.

$ hello
Making core for Roswell...
building dump:/home/fukamachi/.roswell/impls/arm64/linux/sbcl-bin/2.2.0/dump/hello.core
WARNING: :SB-EVAL is no longer present in *FEATURES*
Hello

The second time, it's way faster.

$ time hello
Hello

real    0m0.032s
user    0m0.009s
sys    0m0.024s


It's approximately 13 times faster than the initial version. Of course, it includes the load time of Quicklisp and ASDF.

Remember that this requires the script to be installed at ~/.roswell/bin via ros install.

A living example is "lem", a text editor written in Common Lisp.

In the case of "lem", it requires lots of dependencies to run, and people expect a text editor to launch instantly. The dumping core works nicely for it.

# Installation
$ ros install lem-project/lem

# Takes a little time for the first time
$ lem
Making core for Roswell...
building dump:/home/fukamachi/.roswell/impls/arm64/linux/sbcl-bin/2.2.0/dump/lem-ncurses.core
WARNING: :SB-EVAL is no longer present in *FEATURES*

# === lem is opened in fullscreen ===
# Type C-x C-c to quit


It takes a little time to boot up the first time, but the second time is quicker. Also, it'll be dumped again when Roswell detects some file changes.

Note that the name of the core needs to be unique. If there is a conflict, Roswell will load a different core.

Actually, this behavior doesn't play well with Qlot. If there's an application installed user-locally and another installed project-locally, Roswell can't distinguish between their cores. So even if you think you have pinned the version of a library, you may end up running a core with a different version loaded.

This is not a problem for independent software like lem, but you should be careful with applications that load other software while running. A bad example of this problem is "Lake".

## Conclusion

In this article, I introduced a technique to speed up the startup of Roswell scripts.

• To start up faster:
  • Add +Q to disable loading Quicklisp
  • Dump a core with the -m option

Both have pros and cons.

The nice thing about the -m option is that the end user doesn't need to be aware of it, which is a good part of Roswell as a distribution system for Common Lisp applications.

#### Michał Herda — The mystery of :UNINTERN

· 9 days ago

#CommonLisp #Lisp

> Let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, 'I don't see the use of this; let us clear it away.' To which the more intelligent type of reformer will do well to answer: 'If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.'
>
> -- Wikipedia, Chesterton's fence

UIOP:DEFINE-PACKAGE is the part of UIOP that I personally use the most - it fills (IMO) the biggest hole in the Common Lisp package system, which is CLHS Macro DEFPACKAGE saying:

> If the new definition is at variance with the current state of that package, the consequences are undefined; (...)

This means that removing an export from a DEFPACKAGE can cause your implementation to wag a finger at you, and also ignore your attempt at removing it.

CL-USER> (defpackage #:foo (:use) (:export #:bar))
#<PACKAGE "FOO">

CL-USER> (defpackage #:foo (:use) (:export))
;; WARNING: FOO also exports the following symbols:
;;   (FOO:BAR)
;; See also:
;;   The ANSI Standard, Macro DEFPACKAGE
;;   The SBCL Manual, Variable *ON-PACKAGE-VARIANCE*
#<PACKAGE "FOO">

CL-USER> (loop for sym being the external-symbols of :foo
               collect sym)
(FOO:BAR)



The solution is to manually call UNEXPORT on FOO::BAR, at which point SBCL will calm down and let you evaluate the second DEFPACKAGE form in peace.

DEFINE-PACKAGE, in the same situation, will do "the right thing" (read: the thing I personally expect it to) and adjust the package's export list to be consistent with the one provided to it.

CL-USER> (uiop:define-package #:foo (:use) (:export #:bar))
#<PACKAGE "FOO">

CL-USER> (uiop:define-package #:foo (:use) (:export))
#<PACKAGE "FOO">

CL-USER> (loop for sym being the external-symbols of :foo
               collect sym)
NIL



There are plenty of other useful options, such as :MIX, :REEXPORT and so on, but one of them looks... a bit off.

## Mystery time

The option :UNINTERN is specified to call CL:UNINTERN on some symbols when the package is defined.

Hold up, wait a second, though. Uninterning symbols? During package definition?

When a package is defined for the first time, there are no symbols to unintern. This means that this option is only useful when a package already exists, and therefore UIOP:DEFINE-PACKAGE is used to redefine it.

This, and uninterning cannot be used to achieve a "partial :USE" - that is, to remove some symbols from a package that is :USEd in order to only "use a part of" it. That simply isn't doable in Common Lisp: :USE makes accessible all of the symbols exported by the other package, except those that are explicitly :SHADOWed in the using package.
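To illustrate with a small sketch (both package names here are invented for the example):

```lisp
(defpackage #:provider
  (:use)
  (:export #:alpha #:beta))

(defpackage #:consumer
  (:use #:provider)
  (:shadow #:beta))

;; All of PROVIDER's exports are accessible in CONSUMER...
;; (eq 'consumer::alpha 'provider:alpha) => T
;; ...except the explicitly shadowed one, which is CONSUMER's own,
;; freshly interned symbol:
;; (eq 'consumer::beta 'provider:beta) => NIL
```

There is no clause that would make CONSUMER see only ALPHA and nothing else from PROVIDER.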

So, again, what's the point? Scroll down only if you'd like the mystery to be spoiled.

## Story time

Let's assume a very simple situation:

(defpackage #:bar
  (:use)
  (:export #:symbol))



We have a single package which exports a single symbol. That package was created by some software which we use, and the symbol BAR:SYMBOL is useful to us in some way.

And then, while our Lisp image is still running, we'd like to upgrade this software to a new version. That is, we'd like to load a new version of that software and disregard the old one. In the new version of our software, the package structure looks like this:

(defpackage #:foo
  (:use)
  (:export #:symbol))

(defpackage #:bar
  (:use #:foo)
  (:export #:symbol))



It seems that the symbol named SYMBOL was moved into another package, possibly because that is where the implementation of that symbol has been moved to. Oh well, looks understandable from a software architecture point of view!

...and then trying to load the upgraded version will fail at the very beginning. Worse - it only might fail, since we have just stepped into undefined-behavior territory, as stated at the beginning of this post.

In particular, DEFPACKAGE FOO will be evaluated without any problem, but a keen eye will notice an error which will be signaled the moment we evaluate DEFPACKAGE BAR. The currently existing package contains its own version of the symbol named SYMBOL, whereas the new requirement is to :USE the package FOO, which has its own symbol named SYMBOL - a classic package name conflict.

What is the producer of this piece of software to do now in order to ensure a smooth transition?

One way forward is to DELETE-PACKAGE before moving on with the upgrade, but that's pretty explosive - if BAR exported any other symbols, naming e.g. class definitions, then this means trouble for us. Another way forward is to manually call UNINTERN before calling DEFPACKAGE, but only if the package already exists - and that is a little bit messy.
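For reference, the messy manual approach might look something like this (a sketch, not code from the original post):

```lisp
;; By hand: unintern BAR's own SYMBOL, but only if the package
;; already exists from a previously loaded version.
(let ((package (find-package "BAR")))
  (when package
    (let ((symbol (find-symbol "SYMBOL" package)))
      (when symbol
        (unintern symbol package)))))

;; Only now is it safe to evaluate the new definition:
(defpackage #:bar
  (:use #:foo)
  (:export #:symbol))
```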

And this is exactly the problem that is meant to be solved by UIOP:DEFINE-PACKAGE. In particular, this utility is capable of automatically changing the structure of the underlying package to resolve conflicts in favor of the newly added symbols. We can simply use it as a drop-in replacement for DEFPACKAGE, like this:

(defpackage #:foo
  (:use)
  (:export #:symbol))

(uiop:define-package #:bar
  (:use #:foo)
  (:export #:symbol))



That change allows this code to compile and load without errors. In particular, we can verify that BAR:SYMBOL correctly resolves to the new symbol from package FOO:

CL-USER> 'bar:symbol
FOO:SYMBOL



So, that's one upgrading problem less, solved by using UIOP:DEFINE-PACKAGE instead of DEFPACKAGE.

...but, uh, what about DEFINE-PACKAGE :UNINTERN? That's still not the end of the story.

## Edge case time

Let us assume that you are the developer of a piece of Lisp software, and you are testing the scenario in which you upgrade one version of it to another. The technique described above works well for upgrading software, but let's say that your package definition looked like this:

(defpackage #:foo
  (:use)
  (:intern #:some #:totally-random #:stuff))



And you want to replace it with the following:

(uiop:define-package #:foo
  (:use)
  (:intern #:some #:totally-randomized #:stuff))



The explanation is that TOTALLY-RANDOM was a symbol that was useful (and used) in the previous version of software, but the new version uses something better, which also has a better name - TOTALLY-RANDOMIZED.

And all is fine and well, until you go into your REPL and see this:

The syntax completion suggests the old symbol even though it no longer bears any meaning. This means that you, as the programmer, need to hit the ↓ key to navigate downwards and select the proper symbol, which can annoy you to no end. That's a pet peeve.

But it also means that you have the possibility of introducing bugs into the system by using the old version of a function - or, worse, breaking the build by using a symbol that is only present in images upgraded from the old version, and not in ones which had the new version loaded from scratch.

That's actually scary.

And that's the concrete edge case solved by :UNINTERN!

(uiop:define-package #:foo
  (:use)
  (:intern #:totally-randomized)
  (:unintern #:totally-random))



Using this fixes the syntax completion:

Evaluating this :UNINTERN option inside DEFINE-PACKAGE will either be a no-op (if the symbol doesn't exist, e.g. when defining the package from scratch) or automatically unintern the old symbol from the system (if it exists, e.g. when upgrading the package to a newer version).

In particular, the second option will happen even if the current shape of the source code no longer has any other mentions of it and even if this :UNINTERN call seems to make no sense.

In this context, :UNINTERN is something protecting the programmer from a danger that may no longer be relevant for current versions of the software, but was once something that the programmer considered important enough to remove during a software upgrade. This :UNINTERN should stay in the source code for as long as upgrades from the versions of software which still used this symbol are meant to be supported.

Hell of an edge case, eh? As always, it's an edge case until you hit it and need a tool for solving it - and :UNINTERN fits that description pretty damn well.

And let's not think about the scenario where your software needs to reintroduce that symbol later on, possibly for different purposes... and support all the upgrade paths along the way.

This, and I heard that it's useful when developing, especially with one-package-per-file style (which also includes ASDF's package-inferred systems); I heard that it's more convenient to jump to the top of the file, add a (:UNINTERN #:FOO) clause to the UIOP:DEFINE-PACKAGE there, reevaluate the form, remove the clause, and keep on hacking, rather than change Emacs buffers in order to jump into the REPL and evaluate a (UNINTERN '#:FOO) form there.

Personally, though, I don't share the sentiment - I can use C-↓ or C-↑ anywhere in the file to go out of whatever form my cursor is in, write a (UNINTERN '#:FOO), C-c C-c that form to get Slime to evaluate it, and then delete the form and continue hacking.

## Conclusion

UIOP:DEFINE-PACKAGE's :UNINTERN option is useful in the rare and obscure situations when all of the following are true:

• you are hot-patching an existing Lisp image and do not want to restart it,
• you need to redefine a package (possibly as a part of a software upgrade),
• you need to ensure that, after such a redefinition, a symbol with a given name is not internal in a given package.

This is useful e.g. for avoiding invalid syntax completions inside your Lisp image.

## Thanks

Thanks to Robert Goldman and Phoebe Goldman for helping me solve the mystery of :UNINTERN.

Thanks to Francis St-Amour for his long and painful review of this post.

Thanks to Catie from #lispcafe on Libera Chat and Gnuxie for shorter, less painful reviews of this post.

#### Eric Timmons — CL Community(?) Norms

· 10 days ago

In case you haven't seen it, the ASDF maintainer is considering resigning. The reason is pretty straightforward: continued antagonization toward the ASDF team from a prominent CL developer (and maintainer of a number of widely used libraries) and the seeming acceptance of this by the CL community.

The only other forum I'm aware of that's discussing this is this Reddit thread. However, I found most of the conversation in that thread simultaneously depressing and wildly missing the point. So I decided to take advantage of this space to speak my thoughts clearly, plainly, and without interruption or other noise surrounding them.

Side note: if you know of some other place this is being discussed, I'd love to know. Bonus points if it's not a SOS's pool.

Full disclosure: I am an ASDF developer, but I was not a developer when most of the relevant events happened. I am u/daewok on the Reddit thread. Last, I think Robert has done a great job shepherding ASDF and don't want to see him resign, especially over this.

## The Issue

The current flash point is this flexi-streams GitHub issue. However, the tensions have been building for quite a while.

Basically, ASDF 3.mumble improved upon an under-specified area of defining multiple systems per .asd file. The new method improved reliability, improved safety, and reduced user surprise. The cost is that a certain system naming convention needs to be followed. The naming convention is even backward compatible (if you adopt the new convention, it'll still work exactly as expected on older ASDF versions).

But even then, ASDF didn't even break extant naming schemes: all it does is signal a warning telling the user about the updated naming scheme. I personally would love it if, at some point, ASDF stops supporting anything other than the new scheme. But we are years away from considering that (ideally after everyone has adopted the new (and I can't emphasize this enough: backward compatible) naming scheme).
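For context, the convention in question concerns secondary systems: a system defined in foo.asd other than foo itself is expected to be named with the primary name plus a slash, so ASDF can locate it from the file name alone. A sketch with invented system names:

```lisp
;;; foo.asd

(defsystem "foo"
  :components ((:file "foo")))

;; Old style - a secondary system whose name is unrelated to the
;; .asd file name; newer ASDF signals a warning for definitions
;; like this:
;; (defsystem "foo-test" ...)

;; New style - primary name plus a slash; also works as expected
;; on older ASDF versions:
(defsystem "foo/test"
  :depends-on ("foo")
  :components ((:file "test")))
```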

A mixture of ASDF and non-ASDF developers have submitted patches to projects to use the updated naming scheme. The fact that non-ASDF developers have gotten involved shows that the warning works. Most projects have accepted these patches. However, there was a notable holdout in the edicl-verse. Not only did this maintainer refuse to apply the trivial patches, they openly expressed their hope that as many people as possible would complain to the ASDF devs. This latter behavior is what Robert is unspeakably frustrated by and is what prompted his resignation consideration.

## The Existing Discussion

Let me get this out of the way first: could Robert's initial interaction on the flexi-streams issue have been better? Almost certainly. But I'm willing to cut him a little slack, given that his previous interactions in other threads were collegial, I know he's been antagonized a lot over this issue and similar ones throughout the years, and we are (still) in the middle of a pandemic that's affecting everyone in different ways.

I think this antagonization of the volunteer team maintaining a widely used piece of CL infrastructure is something that very much needs to be discussed. Like I said in the intro, the Reddit thread is the only place I've really seen it discussed in any depth. And that's a shame, because the discussion there missed the point in two major ways.

First, there was a group of people that focused on the technical issues at hand. Basically things like: "should ASDF have made this change?", "the warning being signaled is unnecessary in this specific case!", "why is ASDF signaling warnings at all!?" Which, in addition to missing the point of Robert's email entirely, also managed to demonstrate that people have shockingly strong opinions on what they want the world to look like, but have made little to no effort to make it happen in a positive way. I can definitely say that the ASDF team would love it if more people were involved in developing and testing the shared resource that is ASDF!

Second, there was the group that believed that the maintainer had the absolute right to ignore/reject the patches, and that that ends the discussion. I give this group credit for at least discussing a non-technical aspect of this. And while they are correct that he could ignore the patches, this stance misses the more interesting questions of whether he should have rejected the patches and whether his behavior in calling for as many complaints as possible to the ASDF team was reasonable.

## My thoughts

Frankly, I think the call to brigade the ASDF team was out of line and I definitely expect better from a prominent CL developer. Additionally, while I agree that he had the right to not merge the patches, I still think he should have and am upset that he didn't.

There were at least three ways to remove the warning. All three were offered at one point or another. All three were backward compatible (one maybe needed a bit of reader macro magic to be so). Two did not require changing the names of the systems.

I know this developer is competent enough to understand the improvements the new naming scheme brought, so why was he a stick in the mud about it? The technical arguments for the change were strong and a PR was waiting for approval, so the two most obvious explanations are that it was some personal vendetta or he wanted to punish the ASDF developers for not getting the issue correct on the first try and wanted to force us to continue to support a horribly broken feature.

Maybe it was something else, but in any case, it makes me upset that he would prioritize whatever that reason was over supporting another CL project that is attempting to make things easier and more reliable for nearly every CL developer.

That being said, there is one place where I think I disagree with Robert. He said that the CL community tacitly accepts this behavior. But I'm really starting to think that this can't be true because there really is no CL community to speak of.

fe[nl]ix's blog post on IDEs is probably what planted this idea in my head. I didn't believe (or want to believe) it at the time, but the more I've thought about it the more sense it makes. There is no big, happy CL community. Instead there's this diaspora of small communities that only tangentially interact with each other. So it's true that the edicl community tacitly (or explicitly) accepted this behavior. But there are other communities that find the behavior abhorrent. But because they're not the same community, and each community's resources (especially manpower) are finite, there's not much they could do about it. Heck, they may not have known the issue existed until Robert's email!

To be fair, one person in the Reddit thread pointed out that there is no CL community. I downvoted him at the time, but it was done out of anger and I have since turned it into an upvote. That broke my spirit a bit, but it needed to be done.

This realization is very sobering and distressing. One thing I've learned about myself is that I work the best when part of a supportive community and I am willing to make some personal sacrifices to help my community. I'm lucky enough to have such a community at the moment -- my research group. I've done a lot of work in CL that I didn't need to do for my own personal goals, but I found enjoyable because I was invested in helping others and improving our shared situation and ability to make progress.

However, I won't be a part of this research group forever. So what happens when I look to find a new community? Will I be forced to reside in multiple small, fractured communities? Or is it more likely that I'll drift away from CL forever?

So far, I have found the ASDF community, Robert in particular, to be supportive. But there's a decent chance he's going to resign. I've also found the Common Lisp Foundation folks to be extremely supportive and, like me, willing to take on small personal costs for the greater good. But their reach is somewhat limited (again, mostly due to the low manpower inherent in having many small communities).

But I want more. I want a broader CL community that supports one another, gives constructive feedback, uses each others' projects, and contributes code and issues. I want a community that is willing to make small individual sacrifices in order to improve everyone's situation. I want a community that realizes that because our language is frozen in time, we can devote more efforts to continuously improving our software, even if it means there are breaking changes (so long as those changes are communicated in advance :D). That last one is particularly important to me because, let's face it, most CL projects don't have a brilliant committee designing them and didn't get their interfaces perfect the first time.

So, how do we move forward? For the immediate issue, the edicl community has grown a little bit with the addition of new maintainers. At least some of those new maintainers care about this issue and are working to improve their system definitions.

But how to build a bigger, better CL community escapes me. I personally think the CLF has the best chance at being the seed crystal of such a community. They have a nonprofit set up, they are already providing shared infrastructure (such as Gitlab, project web site hosting (side note: there's some exciting news coming down the pipe soon on that front), mailing lists, and fundraising), and it seems to be run by level-headed folks that truly want to see CL succeed and a community grow. So I highly recommend that more people join that community by taking advantage of what they offer and floating any community building ideas you have on their fora or at their monthly meetings.

Beyond that, I think the best advice may be to try and broaden out any community you find yourself a part of. Give more people commit rights (after making sure they're trustworthy, of course). File issues and PRs instead of forking a project (and be responsive when you receive them!). Plan for project succession by hosting projects in a shared org instead of in your personal namespace. If you've got a single person project, consider hosting it on CLF's Gitlab so that if you drop off the face of the Earth an admin can step in and make sure someone else is able to continue working on it.

If we all grow our communities enough maybe they'll merge and we'll get our one big happy community. Then again, maybe not, but I think it's the best idea I've got at the moment.

#### Michał Herda — Macroexpand-time branching

· 17 days ago

#CommonLisp #Lisp

Let's consider the following function:

(defun make-adder (x huge-p)
  (lambda (y) (+ x y (if huge-p 1000 0))))



The result of calling (MAKE-ADDER 10 NIL) closes over HUGE-P and makes a runtime check for its value.

CL-USER> (disassemble (make-adder 10 nil))
; disassembly for (LAMBDA (Y) :IN MAKE-ADDER)
; Size: 65 bytes. Origin: #x53730938                          ; (LAMBDA (Y) :IN MAKE-ADDER)
; 38:       488975F8         MOV [RBP-8], RSI
; 3C:       488BD3           MOV RDX, RBX
; 3F:       E8EC012DFF       CALL #x52A00B30                  ; GENERIC-+
; 44:       488B75F8         MOV RSI, [RBP-8]
; 48:       4881FE17011050   CMP RSI, #x50100117              ; NIL
; 4F:       BFD0070000       MOV EDI, 2000
; 54:       B800000000       MOV EAX, 0
; 59:       480F44F8         CMOVEQ RDI, RAX
; 5D:       E8CE012DFF       CALL #x52A00B30                  ; GENERIC-+
; 62:       488BE5           MOV RSP, RBP
; 65:       F8               CLC
; 66:       5D               POP RBP
; 67:       C3               RET
; 68:       CC10             INT3 16                          ; Invalid argument count trap
; 6A:       6A20             PUSH 32
; 6C:       E8FFFA2CFF       CALL #x52A00470                  ; ALLOC-TRAMP
; 71:       5B               POP RBX
; 72:       E958FFFFFF       JMP #x537308CF
; 77:       CC10             INT3 16                          ; Invalid argument count trap
NIL



It would be better for performance if the test was only made once, in MAKE-ADDER, rather than on every call of the adder closure. MAKE-ADDER could then return one of two functions depending on whether the check succeeds.

            (defun make-adder (x huge-p)
              (if huge-p
                  (lambda (y) (+ x y 1000))
                  (lambda (y) (+ x y 0))))



A brief look at the disassembly of this fixed version shows us that we're right:

            CL-USER> (disassemble (make-adder 10 nil))
; disassembly for (LAMBDA (Y) :IN MAKE-ADDER)
; Size: 21 bytes. Origin: #x53730BC7                          ; (LAMBDA (Y) :IN MAKE-ADDER)
; C7:       488BD1           MOV RDX, RCX
; CA:       E861FF2CFF       CALL #x52A00B30                  ; GENERIC-+
; CF:       31FF             XOR EDI, EDI
; D1:       E85AFF2CFF       CALL #x52A00B30                  ; GENERIC-+
; D6:       488BE5           MOV RSP, RBP
; D9:       F8               CLC
; DA:       5D               POP RBP
; DB:       C3               RET
NIL



Still, with more than one flag, this style of writing code is likely to become unwieldy. For three flags, we would need to write something like this for the runtime version:

            (defun make-adder (x huge-p enormous-p humongous-p)
              (lambda (y) (+ x y
                             (if huge-p 1000 0)
                             (if enormous-p 2000 0)
                             (if humongous-p 3000 0))))



But it would look like this for the macroexpand-time version:

            (defun make-adder (x huge-p enormous-p humongous-p)
              (if huge-p
                  (if enormous-p
                      (if humongous-p
                          (lambda (y) (+ x y 1000 2000 3000))
                          (lambda (y) (+ x y 1000 2000 0)))
                      (if humongous-p
                          (lambda (y) (+ x y 1000 0 3000))
                          (lambda (y) (+ x y 1000 0 0))))
                  (if enormous-p
                      (if humongous-p
                          (lambda (y) (+ x y 0 2000 3000))
                          (lambda (y) (+ x y 0 2000 0)))
                      (if humongous-p
                          (lambda (y) (+ x y 0 0 3000))
                          (lambda (y) (+ x y 0 0 0))))))



The total number of combinations for n boolean flags is 2^n, making it hard to write and maintain code with so many branches. This is where WITH-MACROEXPAND-TIME-BRANCHING comes into play. Using it, we can write our code in a way that looks similar to the runtime-check version:

            (defun make-adder (x huge-p enormous-p humongous-p)
              (with-macroexpand-time-branching (huge-p enormous-p humongous-p)
                (lambda (y) (+ x y
                               (macroexpand-time-if huge-p 1000 0)
                               (macroexpand-time-if enormous-p 2000 0)
                               (macroexpand-time-if humongous-p 3000 0)))))



This code gives us the clarity of the runtime-checked version and the performance of the macroexpand-time-checked version. A total of eight versions of the body (and therefore, eight possible LAMBDA forms) are generated. At runtime, only one of them is selected, based on the boolean values of the three flags we provided.
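The library's actual implementation differs, but the underlying trick can be sketched with MACROLET: generate one copy of the body per flag combination, with the macroexpand-time conditional locally defined to splice in a constant branch, and dispatch between the copies at runtime. A hypothetical two-flag version (WITH-TWO-BRANCHES and its internals are my own illustrative names, not the library's code):

```lisp
;; Hypothetical sketch of macroexpand-time branching for two flags.
;; Not the real WITH-MACROEXPAND-TIME-BRANCHING; illustration only.
(defmacro with-two-branches ((flag-a flag-b) &body body)
  (flet ((expansion (a b)
           ;; One copy of BODY in which MACROEXPAND-TIME-IF is a local
           ;; macro that picks a constant branch at expansion time.
           `(macrolet ((macroexpand-time-if (flag then else)
                         (cond ((eq flag ',flag-a) (if ,a then else))
                               ((eq flag ',flag-b) (if ,b then else))
                               (t (error "Unknown branch ~S" flag)))))
              ,@body)))
    ;; Runtime dispatch selects one of the four precompiled bodies.
    `(if ,flag-a
         (if ,flag-b ,(expansion t t) ,(expansion t nil))
         (if ,flag-b ,(expansion nil t) ,(expansion nil nil)))))
```

With this sketch, a two-flag MAKE-ADDER would compile four lambda bodies, each containing only constants where the conditionals used to be.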

Three conditional operators are provided - MACROEXPAND-TIME-IF, MACROEXPAND-TIME-WHEN, and MACROEXPAND-TIME-UNLESS, mimicking the syntax of, respectively, IF, WHEN, and UNLESS.

It is possible to use the variable *MACROEXPAND-TIME-BRANCH-BYPASS* for bypassing macroexpand-time branching; this is useful e.g. when trying to read the macroexpansions or when debugging. If that variable is set to true, the behavior of the macroexpander is modified:

• WITH-MACROEXPAND-TIME-BRANCHING expands into a PROGN form,
• MACROEXPAND-TIME-IF expands into an IF form,
• MACROEXPAND-TIME-WHEN expands into a WHEN form,
• MACROEXPAND-TIME-UNLESS expands into an UNLESS form.

Trying to use MACROEXPAND-TIME-IF, MACROEXPAND-TIME-WHEN, or MACROEXPAND-TIME-UNLESS outside the lexical environment established by WITH-MACROEXPAND-TIME-BRANCHING will signal a PROGRAM-ERROR.

Trying to use a branch name in MACROEXPAND-TIME-IF, MACROEXPAND-TIME-WHEN, or MACROEXPAND-TIME-UNLESS that wasn't declared in WITH-MACROEXPAND-TIME-BRANCHING will signal a PROGRAM-ERROR.

Grab the code from GitHub.

#### Joe Marshall — Idle puzzles 2: Revenge of the Shift

· 20 days ago

The idle puzzles got some web traffic, so here are a couple more in the same vein. Not much new, just a variation on a theme. They can be done in your head, but I spent a few minutes coding up some solutions to see what was involved.

In the previous puzzles, you were given these numeric primitives:

(import 'cl:zerop (find-package "PUZZLE"))
(defun puzzle::shr (n) (floor n 2))
(defun puzzle::shl0 (n) (* n 2))
(defun puzzle::shl1 (n) (1+ (* n 2)))

and the task was to implement basic arithmetic on non-negative integers.

These puzzles extend the task to include negative numbers. We are given one additional primitive:

(defun puzzle::-1? (n) (= n -1))

If you want to challenge yourself, you could speedrun the problems from start to finish. Or try to adapt the solutions to the prior puzzles with the minimal amount of editing and new code (minimize the diff). Another thing you could try is instrumenting shr, shl0, and shl1 to count the amount of shifting taking place and try to minimize that.
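For the shift-counting variant, one simple approach (my own sketch, not part of the puzzle) is to wrap each primitive so it bumps a counter before delegating; the `*SHIFT-COUNT*` variable and the `COUNTING-` names are mine:

```lisp
;; Sketch: count how often the shifting primitives are invoked.
;; *SHIFT-COUNT* and the COUNTING- prefixes are illustrative names.
(defvar *shift-count* 0
  "Number of shift operations performed so far.")

(defun counting-shr (n)
  (incf *shift-count*)
  (floor n 2))

(defun counting-shl0 (n)
  (incf *shift-count*)
  (* n 2))

(defun counting-shl1 (n)
  (incf *shift-count*)
  (1+ (* n 2)))
```

Reset `*SHIFT-COUNT*` to 0, run a solution built on the counting versions, and compare the totals between candidate solutions.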

Here is the puzzle:

;;; -*- Lisp -*-

(defpackage "PUZZLE"
  (:use)
  (:import-from "COMMON-LISP"
                "AND"
                "COND"
                "DEFUN"
                "IF"
                "FUNCALL"
                "FUNCTION"
                "LAMBDA"
                "LET"
                "MULTIPLE-VALUE-BIND"
                "NIL"
                "NOT"
                "OR"
                "T"
                "VALUES"
                "ZEROP"))

(defun puzzle::shr (n) (floor n 2))
(defun puzzle::shl0 (n) (* n 2))
(defun puzzle::shl1 (n) (1+ (* n 2)))

(defun puzzle::-1? (n) (= n -1)) ;; new primitive

(in-package "PUZZLE")

;;; You can only use the symbols you can access in the PUZZLE package.

;;; Problem -1 (Example).  Fix = to handle negative numbers.

(defun = (l r)
  (cond ((zerop l) (zerop r))
        ((-1? l) (-1? r))     ;; new base case
        ((zerop r) nil)
        ((-1? r) nil)         ;; new base case
        (t (multiple-value-bind (l* l0) (shr l)
             (multiple-value-bind (r* r0) (shr r)
               (if (zerop l0)
                   (and (zerop r0)
                        (= l* r*))
                   (and (not (zerop r0))
                        (= l* r*))))))))

;;; Problem 0.  Implement minusp and plusp.

;;; Problem 1.  Fix > to handle negative numbers.

;;; Problem 2.  Fix inc and dec to handle negative numbers.

;;; Problem 3.  Implement logand, logior, and logxor.

;;; Problem 4.  Implement neg (unary minus).

;;; Problem 5.  Fix add and sub to handle negative numbers.

;;; Problem 6.  Fix mul to handle negative numbers.

;;; Problem 7.  Implement floor and ceiling for both positive and
;;;             negative numbers


## My Solutions

The reason we're given a new primitive, -1?, is because the shr function has two fixed points: 0, and -1. So when we write code that recurs over a shr, the recursion is going to bottom out in one of those two base cases and we need to distinguish between them. The earlier puzzles ensured we'd bottom out at zero by specifying non-negative numbers, but if we allow negative numbers, our recursions could bottom out at -1.
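You can see both fixed points directly, since shr is just FLOOR by 2:

```lisp
;; SHR is FLOOR by 2, which has exactly two fixed points: 0 and -1.
(floor 0 2)   ; quotient 0  -- shifting 0 stays at 0
(floor -1 2)  ; quotient -1 -- shifting -1 stays at -1
(floor 5 2)   ; quotient 2  -- everything else moves toward a fixed point
```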

;;; Problem 0.  Implement minusp and plusp

(defun minusp (n)
  (cond ((zerop n) nil)
        ((-1? n) t)
        (t (minusp (shr n)))))

(defun plusp (n)
  (not (or (zerop n)
           (minusp n))))

We have to handle both base cases for both arguments:

;;; Problem 1.  Fix > to handle negative numbers.

(defun > (l r)
  (cond ((zerop l) (minusp r))
        ((-1? l) (and (minusp r)
                      (not (-1? r))))
        ((or (zerop r)
             (-1? r)) (plusp l))
        (t (multiple-value-bind (l* l0) (shr l)
             (multiple-value-bind (r* r0) (shr r)
               (if (and (not (zerop l0)) (zerop r0))
                   (not (> r* l*))
                   (> l* r*)))))))

This is interesting. The base case handles when one or the other argument is 0 or -1, but the recursive case doesn't know if the arguments are positive or negative. It doesn't seem to care, either. What is going on? This is the result of using floor on a negative number. The remainder is still a positive number, so when we operate on l0 and r0 we treat them as positive numbers regardless of whether l or r are positive or negative.

;;; Problem 2.  Fix inc and dec to handle negative numbers.

(defun inc (n)
  (if (-1? n)
      0
      (multiple-value-bind (n* n0) (shr n)
        (if (zerop n0)
            (shl1 n*)
            (shl0 (inc n*))))))

(defun dec (n)
  (if (zerop n)
      -1
      (multiple-value-bind (n* n0) (shr n)
        (if (zerop n0)
            (shl1 (dec n*))
            (shl0 n*)))))

Well that was easy, we just had to handle the zero crossing.

No doubt you've noticed that shr shifts a number to the right as if it were held in a register. If you shift the bits out of a negative number, you will notice that the bits come out as if the number were "stored" in two's complement form, with negative numbers being infinitely extended to the left with 1s. This is curious because we didn't design or choose a two's complement representation, it just sort of appears.
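A quick way to watch this happen, using FLOOR directly as shr:

```lisp
;; Shifting -5 rightward: the remainders (low bits) come out as
;; 1, 1, 0, and then 1 forever once we reach the -1 fixed point --
;; i.e. ...11111011, the two's complement representation of -5.
(floor -5 2)  ; => -3, remainder 1
(floor -3 2)  ; => -2, remainder 1
(floor -2 2)  ; => -1, remainder 0
(floor -1 2)  ; => -1, remainder 1
```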

;;; Problem 3.  Implement logand, logior, and logxor.

(defun logand (l r)
  (cond ((zerop l) 0)
        ((-1? l) r)
        ((zerop r) 0)
        ((-1? r) l)
        (t (multiple-value-bind (l* l0) (shr l)
             (multiple-value-bind (r* r0) (shr r)
               (if (or (zerop l0) (zerop r0))
                   (shl0 (logand l* r*))
                   (shl1 (logand l* r*))))))))

(defun logior (l r)
  (cond ((zerop l) r)
        ((-1? l) -1)
        ((zerop r) l)
        ((-1? r) -1)
        (t (multiple-value-bind (l* l0) (shr l)
             (multiple-value-bind (r* r0) (shr r)
               (if (and (zerop l0) (zerop r0))
                   (shl0 (logior l* r*))
                   (shl1 (logior l* r*))))))))

(defun complement (n)
  (cond ((zerop n) -1)
        ((-1? n) 0)
        (t (multiple-value-bind (n* n0) (shr n)
             (if (zerop n0)
                 (shl1 (complement n*))
                 (shl0 (complement n*)))))))

(defun logxor (l r)
  (cond ((zerop l) r)
        ((-1? l) (complement r))
        ((zerop r) l)
        ((-1? r) (complement l))
        (t (multiple-value-bind (l* l0) (shr l)
             (multiple-value-bind (r* r0) (shr r)
               (if (or (and (zerop l0) (zerop r0))
                       (and (not (zerop l0)) (not (zerop r0))))
                   (shl0 (logxor l* r*))
                   (shl1 (logxor l* r*))))))))

;;; Problem 4.  Implement neg (unary minus).

(defun neg (n)
  (cond ((zerop n) 0)
        ((-1? n) 1)
        (t (multiple-value-bind (n* n0) (shr n)
             (if (zerop n0)
                 (shl0 (neg n*))
                 (shl1 (complement n*)))))))

This is basically (inc (complement n)), which is how you negate a two's complement number, but inc and the complement step have been folded together to reduce the amount of shifting.
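You can sanity-check the identity with CL's built-in LOGNOT, which computes the same complement as the function above:

```lisp
;; Two's complement negation: -n = (lognot n) + 1, because
;; (lognot n) is -n-1 for integers of either sign.
(assert (= (- 7)  (1+ (lognot 7))))
(assert (= (- -4) (1+ (lognot -4))))
(assert (= 0      (1+ (lognot 0))))
```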

;;; Problem 5.  Fix add and sub to handle negative numbers.

(defun add (l r)
  (cond ((zerop l) r)
        ((-1? l) (dec r))
        ((zerop r) l)
        ((-1? r) (dec l))
        (t (multiple-value-bind (l* l0) (shr l)
             (multiple-value-bind (r* r0) (shr r)
               (if (zerop l0)
                   (if (zerop r0)
                       (shl0 (add l* r*))
                       (shl1 (add l* r*)))
                   (if (zerop r0)
                       (shl1 (add l* r*))
                       (shl0 (addc l* r*)))))))))

(defun addc (l r)
  (cond ((zerop l) (inc r))
        ((-1? l) r)
        ((zerop r) (inc l))
        ((-1? r) l)
        (t (multiple-value-bind (l* l0) (shr l)
             (multiple-value-bind (r* r0) (shr r)
               (if (zerop l0)
                   (if (zerop r0)
                       (shl1 (add l* r*))
                       (shl0 (addc l* r*)))
                   (if (zerop r0)
                       (shl0 (addc l* r*))
                       (shl1 (addc l* r*)))))))))

(defun sub (l r) (add l (neg r)))

The great thing about two's complement is that you can handle negative numbers without changing how you handle the low order bits. For add and addc, I only had to add the two additional base cases for l or r being -1.

By the way, you shouldn't define sub this way. It's double the number of shifts.

;;; Problem 6.  Fix mul to handle negative numbers.

(defun fma (l r a)
  (cond ((zerop r) a)
        ((-1? r) (sub a l))   ;; added this line
        (t (multiple-value-bind (r* r0) (shr r)
             (fma (shl0 l) r* (if (zerop r0) a (add l a)))))))

(defun mul (l r) (fma l r 0))

Two's complement to the rescue again. The loop can treat the low-order bits of r the same way regardless of whether r is positive or negative.

;;; Problem 7.  Implement floor and ceiling for both positive and
;;;             negative numbers

(defun floor0 (n d)
  (if (> d n)
      (values 0 n)
      (multiple-value-bind (q r) (floor0 n (shl0 d))
        (if (> d r)
            (values (shl0 q) r)
            (values (shl1 q) (sub r d))))))

(defun ceil0 (n d)
  (if (not (> n d))
      (values 1 (sub n d))
      (multiple-value-bind (q r) (ceil0 n (shl0 d))
        (let ((r1 (add d r)))
          (if (plusp r1)
              (values (shl0 q) r)
              (values (dec (shl0 q)) r1))))))

(defun floor (n d)
  (if (minusp n)
      (multiple-value-bind (q r) (ceiling (neg n) d)
        (values (neg q) (neg r)))
      (if (minusp d)
          (multiple-value-bind (q r) (ceil0 n (neg d))
            (values (neg q) r))
          (floor0 n d))))

(defun ceiling (n d)
  (if (minusp n)
      (multiple-value-bind (q r) (floor (neg n) d)
        (values (neg q) (neg r)))
      (if (minusp d)
          (multiple-value-bind (q r) (floor0 n (neg d))
            (values (neg q) r))
          (ceil0 n d))))

My original method for division doesn't work too well with negative numbers. I worked around that by converting the problem to positive numbers and converting the answer back to negative numbers where appropriate. Supporting negative numbers for division is an exercise in combinatorics. All this checking for minusp and calls to neg cause a lot of shifting of numbers. There is no doubt a better way, but my brain hurts now.

#### Marco Antoniotti — My List of Common Lisp Libraries and SW 2022 (Plus a Couple of Other Things)

· 22 days ago

Hello

I have been bumping on a few "lists of Common Lisp libraries and tools" written by many people.  I feel like rantin... pardon, blogging about this state of affairs.

First of all we have CLiki, which is a rather comprehensive list of CL libraries and whatnot, and then we have a few, unnamed, "state of the CL ecosystem", "preferred list of Common Lisp libraries", etc, etc.

I have nothing against people blogging or making lists, of course, but I tend not to make generalized statements, especially in order to avoid disrespecting some people's work just by not knowing of its existence. This is a hint.

So, in order to proceed with my ran... blog post, here is the list of CL libraries I use.  Turns out there is quite a bit of NIH syndrome here, but I never claimed not to suffer from it.  Also remember that I have a Mac, a Windows and a Linux system at hand.  I always try to have stuff that works on all of them.

## Distribution Systems

I use, obviously, Quicklisp.  It works.  It can be improved, but hey!  Let's just give Zach our love! 💓 (And money!)

## Implementations

• I work mostly on Lispworks.  I have the luck to be able to afford the Enterprise edition (or better, my funding does) and I am very happy with it.  The folks at Lispworks know that I can be a pest (environments? rounding modes?) but they are just doing a fantastic job.
• I use, of course, SBCL.  The implementation is rock solid and it has all the bells and whistles you need for a world class system.  Recompiling and deploying for SBCL usually uncovers bugs and potential pitfalls in your code.  I have a small rant about it though.  Guys, do not write code that "works on SBCL" and just assume all is fine. It is not.
• I have also CMUCL installed and I use it as well.  It still has a few good things to it.
• Next I use Armed Bear Common Lisp, an excellent testbed for checking portability.
• Of course, I have the free edition of Allegro CL: the other excellent commercial alternative.
• Last but not least (with memories going back decades) I have CCL installed and always ready to use.
• I am very intrigued by CLASP, but I admit, I have not had the strength to install it (too much of a production; remember that I try to get things working on three platforms); any help will be appreciated.
• Same for the latest incarnation of ECL.  Sorry guys, I will get back to installing it soon.
• Finally, we always have CLisp: old but good.
I know I forgot many.  Apologies for my senescence.

## System Building

If you checked any of my libraries, you will have noticed that I still have .system files everywhere.  I use ASDF of course, but I have been nursing (with Madhu) a version of MK:DEFSYSTEM; it is clunkier (and old) internally, but I feel it has a simpler and more straightforward interface, especially when it comes to its footprint on your system (i.e., somewhat simpler working of the registry).

## Concurrency

I use Bordeaux Threads for portability.  There are a few open issues about it that are very difficult to solve, but it is Good Enough (it has a couple of warts: a bad .asd file for example - but this is a separate rant, pardon, blog post).  Of course I also like very much Lispworks multiprocessing library, especially mailboxes, but I understand that not everybody has access to it.

## Testing

I must say that none of the libraries I checked does ONE thing I really need: running something and stopping after a timeout (see the open issue I mentioned just before for Bordeaux Threads).
Having said that, I use FiveAM; I have to muck a bit around it to get the timeout I need, and the documentation, especially about macr... pardon, fixtures is lacking, but it does the job.  I am experimenting with Parachute, but have not made the switch yet.

## Portability

Most of my libraries try to be portable.  One major library that helps you a lot with this thankless job ("let's have filename case insensitive anyone?") is of course UIOP.  You have it pretty much by default, but it would be nice if it were eventually decoupled from ASDF.
Another little library I advocate (yes, Virginia, here they come!) is CLAD: it does not do much, but it gives you a bedrock upon which to build your work.

## Editing

I use the Lispworks IDE or the Editor (as in "thou shalt not have other Editor than...") with SLIME.  End of story.

## HTML Generation

Here I start with the rantin..., thoughtful exposition of various libraries.
To generate (X)HTML (or HTML5) I use (X)HTMΛ.  It is a pretty solid library with a couple of bells and whistles: mainly, it has an object model and it leverages the sorrily underleveraged Common Lisp pretty printer to produce readable HTML.  The web page for (X)HTMΛ is produced using itself as a building block; see below.

## Web Server and Client

In this case I stick with the tried and true Hunchentoot and its counterpart Drakma, although for simpler, one-shot things I do use Lispworks facilities when needed.

## Documentation Generation

As I said, all my libraries suffer from a severe case of NIH, which is reflected in my use of the hacked up HEΛP documentation system.  It is still not perfect, but, again, by using (X)HTMΛ and, again, leveraging the pretty printer, I find that the results are quite pleasant (for the neapolitans: ogni scarrafone...).  Of course, if you agree to format your doc string in a Hyperspec way.  How can it be improved?  Four things.
1. Incorporating Eclector as a "reader" (as of now the tool uses more than a kludge to get past the non standardized way of dealing with READER- and PACKAGE- errors).
2. Finishing the HTML5 generation scheme.
3. Adding Texinfo generation (as per Didier Verna's declt library).
4. Fixing cross-referencing, especially with standard CL (cfr., Hyperspec).

## Mathematics

Why did I start HEΛP?  Many years ago (or many Kgs ago) a post on comp.lang.lisp stated that you could shadow +, -, *, and / in a package and make them generic functions (as an aside, I recently found out that the idea of generics dates back to PL/I - but this is a different rabbit's hole).
The result was what is now a forever embryonic Common Math, which led to my forays in "let's do what R can do" -- Ρ (greek Rho) -- and to the stuff I am talking about in "Why you cannot write an Interval Arithmetic Library...".  All of this another rabbit hole: bottom line, this is what I am working on and off right now, apart from my day job.

## Other Libraries

### Laziness as a Way of Life

Of course, rabbits' holes are never linear.  They branch in several directions at once.  E.g., once you look at other "modern" languages (I am partial to Haskell) you want to have some of their features in Common Lisp as well; hence the CLAZY library (laziness as a way of life).

### Code Handling

Needless to say, CLAZY eventually needed some proper code walking, or better it needed some proper Abstract Syntax Tree (AST) handling, which led to the CLAST library: this last library allows you to code walk and to properly inspect a piece of Common Lisp code, striving very hard to portably manage the environments that were present in CLtL2 but did not make it in the ANSI spec.

### Programming Common Lisp

As we all know, Lisp is like a ball of mud.  Over the years I threw a lot of mud at it.
Some of the mud I liked the most is embedded in the following libraries
• definer is a small hack to have Pythonesque def available in Common Lisp, of course, in an extensible way (documentation being updated soon).
• cl-unification is a full blown unification machine for Common Lisp objects and "templates"; not as fast as a simpler pattern matcher (like several listed in CLiki Pattern Matching), but very general (it also relies on CL-PPCRE).
• cl-enumeration was the first full blown Java-like enumeration/iterator library for Common Lisp; it works.
• defenum is another Java-ism ported to Common Lisp; it essentially creates enums that work like Java, i.e., with quite a bit of underlying machinery.  Of course you do not have Haskell data definitions available, but it was fun to write.
• with-contexts was a bout of Python-envy, but then again, I think I was able to show that you really can program Common Lisp in a fun way.

### Literary Programming in Common Lisp

One final library I am quite fond of expresses my love of (certain) books.  And to deal with books, you need a Library and a Librarian.  Hence the OOK compiler in Common Lisp.  Don't say Monkey!

## Not Done Yet...

Although I am dabbling with Julia (a Common Lisp in drag) and Rust (I have a day job which also involves programming languages and not only Cancer Research) I still have a lot of "not-quite-there" stuff I wrote in Common Lisp.
All I can say about it is... stay tuned,  I do put out stuff and I do pester people about Common Lisp.  You may find some of the stuff interesting.

(cheers)

#### Eric Timmons — CL-TAR v0.2.0

· 23 days ago

I just released the first version of cl-tar (previously) that I consider to have the minimum set of features necessary to make it usable and useful.

Unfortunately, there seems to be something wrong with the Gitlab Pages setup for its documentation so you can't see the very beautiful docs :(. I'll update this post when it's fixed. For now, the README will likely give you everything you need to know, particularly the Quickstart section.

This release brings:

• Transparent gzip compression and decompression.
• A convenience function for creating an archive from the file system, while preserving hard links, symlinks, block and character devices, fifos, and most metadata ({a,c,m}time, uname, gname, uid, gid). And, yes, sym and hardlinks are preserved even on Windows!
• A precompiled executable for Linux AMD64. This implements extraction and creation and could (if you're daring) replace GNU (or BSD) tar for those purposes.

It's still v0.2.0, so interfaces may break in the future. But, I'm going to do my best to prevent that (yay keyword args!).

You can find the new versions at https://gitlab.common-lisp.net/cl-tar/cl-tar/-/releases/v0.2.0 and https://gitlab.common-lisp.net/cl-tar/cl-tar-file/-/releases/v0.2.0. I've also requested that both cl-tar and cl-tar-file be added to Quicklisp.

# Known Issues

• Sym and hard link extraction do not work on Windows (yet). This was just a matter of the amount of time I had to work on this (it was my xmas present to myself). It should not be technically hard to implement.
• Extracting to the file system can be pretty slow (regardless of compression). I've profiled it to see the hotspots and have a bit of an idea how to speed it up.
• Creation of archives from the file system on Windows requires https://github.com/osicat/osicat/pull/57. On another note: I barely use Windows, so if there's anyone with more Windows experience out there I'd appreciate it if you looked at the PR with a fresh set of eyes.
• I'd like to make more precompiled binaries. Again, this was mostly a matter of time (setting up all the correct CI runners and debugging them).

#### Eric Timmons — Roswell and Walled Gardens

· 23 days ago

Recently, Eitaro Fukamachi has been sharing blog posts about Roswell, "a launcher for a major lisp environment that just works." Like many in the CL community, I've heard of Roswell and even dabbled with it a bit. I'm not sure how many people actually use Roswell, but I do know it's non-negligible.

Roswell certainly solves some real problems for folks, but I could never get into it myself. There are two primary reasons for that. First, I use a Linux distro that a) stays relatively up to date with upstreams and b) makes it trivial to carry my own patches to CL implementations (which I frequently do). Second, Roswell feels like a walled garden to me (I doubt this was an intentional decision by its authors, however).

The purpose of this post is to dig more into the second reason. This is mostly for my own benefit. I have not really progressed beyond broad "feelings" on this subject and I'd be doing myself and the Roswell authors a disservice if I keep not using it based on mere feelings without some concrete issues backing it up. Perhaps it will benefit others as well by finding others with concerns similar to mine and getting a concrete set of issues laid out that we could work on contributing fixes for.

Roswell authors: If you read this, please know this isn't meant to be a dig at you. I'm writing this as a sincere effort at exploring why I don't like Roswell with an eye toward coming up with solutions that would make it more palatable to me and (hopefully) others with a similar mindset.

# User/Environment Intercession

My core complaint is that Roswell interposes itself between the user and the CL environment in a highly visible and intrusive way: through the ros executable. Let's look at what this means in terms of both being an implementation manager and scripting.

## Implementation Manager

Roswell bills itself largely as an implementation manager. It makes it trivial to install just about any version of any major CL implementation on any computer. That's a huge win for folks running Debian or Ubuntu LTSs (as they tend to have packages that are extremely out of date) or on odd arch/OS combinations (if binary packages are not provided, Roswell can build the implementation for you).

But what does it mean to be an implementation manager? To me, that means after installing the implementation, I should be able to use it freely, as if it were installed by my native package manager. So let's give that a try:

user@rocinante:~$ ros install sbcl-bin
No SBCL version specified. Downloading sbcl-bin_uri.tsv to see the available versions...
[##########################################################################]100%
Installing sbcl-bin/2.2.0...
Downloading https://github.com/roswell/sbcl_bin/releases/download/2.2.0/sbcl-2.2.0-x86-64-linux-binary.tar.bz2
[##########################################################################]100%
Extracting sbcl-bin-2.2.0-x86-64-linux.tar.bz2 to /home/user/.roswell/src/sbcl-2.2.0-x86-64-linux/
Building sbcl-bin/2.2.0... Done.
Install Script for sbcl-bin...
Making core for Roswell...
Installing Quicklisp... Done 7169
user@rocinante:~$ export PATH="/home/user/.roswell/bin:$PATH"
user@rocinante:~$ sbcl
zsh: command not found: sbcl

Huh. Well that's disappointing. It seems that the only (out of the box) way to run an implementation is via ros run.

user@rocinante:~$ ros run
* (find-package :ql)
#<PACKAGE "QUICKLISP-CLIENT">

What does this mean? Virtually everything CL related needs to know you use Roswell. Switching from an OS-managed SBCL install to Roswell-managed? Better update your SLIME/Sly config to use ros run instead of sbcl. Writing documentation for a cool hack? Better include directions for Roswell as well as stock implementations (or hope that your users are confident enough in CL to figure it out on their own). You know those bad jokes that go something like "How do you know if someone is X? Don't worry, they'll tell you!"? This kind of feels like a real-world instantiation of that.

Not only that, but Roswell is imposing its opinions on its users. See that #<PACKAGE "QUICKLISP-CLIENT"> in the REPL? That certainly doesn't come from my .sbclrc, so where does it come from? Let's look at what ros run invokes under the hood:

user@rocinante:~$ ps aux | grep sbcl
user       43354  0.4  0.5 1238788 93548 pts/1   Sl+  10:27   0:00 /home/user/.roswell/impls/x86-64/linux/sbcl-bin/2.2.0/bin/sbcl --core /home/user/.roswell/impls/x86-64/linux/sbcl-bin/2.2.0/lib/sbcl/sbcl.core --noinform --no-sysinit --no-userinit --eval (progn #-ros.init(cl:load "/etc/roswell/init.lisp")) --eval (ros:run '((:eval"(ros:quicklisp)")))

Yikes. It looks like ros run modifies your CL image a decent amount by default. Not only does it load its own init file at /etc/roswell/init.lisp (while ignoring your own!), it also loads the Quicklisp client for you. And it's not obvious here, but the QL client it loads is located at ~/.roswell/quicklisp/, not the standard ~/quicklisp/ folder.

I dislike this for several reasons. First, I'm definitely biased here, but Quicklisp isn't the only dependency management solution out there. Second, this can make providing support to Roswell users a nightmare. If something goes wrong with one of my programs on a Roswell user's computer, I need to become an expert in Roswell to help them! Third, it really rubs me the wrong way that it just blithely ignores a user's standard customization file by default. Fourth, I think almost every feature added on top of the vanilla implementation should default to off. That reduces cognitive burden when worrying about if Roswell will ever change a default on me when they add a new feature.

So, how do we get a bog standard REPL from Roswell? Based on the help messages, there is an option to disable loading QL and most of Roswell's own init files. But there's no option to load the default RC files or skip /etc/roswell/init.lisp. So the best we can do seems to be:

user@rocinante:~$ ros run +Q +R --load /etc/sbclrc --load ~/.sbclrc

Which ends up invoking SBCL as:

/home/user/.roswell/impls/x86-64/linux/sbcl-bin/2.2.0/bin/sbcl --core /home/user/.roswell/impls/x86-64/linux/sbcl-bin/2.2.0/lib/sbcl/sbcl.core --noinform --no-sysinit --no-userinit --eval (progn #-ros.init(cl:load "/etc/roswell/init.lisp")) --eval (ros:run '((:load "/etc/sbclrc")(:load "/home/user/.sbclrc")))

Not terrible, but not great either.

### Solution?

We already have a standard way for a user to shadow programs of the same name: the $PATH variable. This is already used by other programming language environment managers out there. Let's take a look at RVM, which is probably the closest analog to Roswell that I know of.

user@rocinante:~$ rvm install 2.6.9
[OUTPUT CUT]
user@rocinante:~$ which ruby
/home/user/.rvm/rubies/ruby-2.6.9/bin/ruby

That's nice! Every tutorial or piece of documentation out there that uses Ruby should Just Work. No need to modify anything because you're using RVM-managed Ruby instead of an OS-managed one.

So, Roswell can keep its opinionated setup if it really wants to, no matter how much I disagree with it (hey, that's how opinions go). But I think it would do its users a great service if installing an implementation also placed that implementation on the PATH, with the standard name. The easiest way of doing it is probably a shell script that looks something like:

#!/bin/sh

# A hypothetical Roswell command that resolves which version of SBCL we should
# use. This could look at config files, envvars, whatever.
_SBCL_PATH="$(ros which sbcl)"
exec "$_SBCL_PATH" "$@"

UPDATE: Opened an issue to discuss this more.

## Scripting

Let's turn to scripting now: the other big place where it feels like Roswell is making a land grab and then building a wall around it.

First, let's get this out of the way: each CL implementation has different CLI options, some of them are persnickety about order, and it really sucks. This does make writing portable scripts difficult and is something that really needs improvement. But, again, with Roswell's solution we see it forcing itself between the user and the CL implementation.

First, let's consider a script that works only on SBCL. Starting that script with the following is a great way of doing that (assuming you don't care that Busybox's env doesn't support the -S option).

#!/usr/bin/env -S sbcl --script

If Roswell added its managed implementations to the PATH, this would even work with Roswell! As it currently stands though, you need to start with something like:

#!/bin/sh
#|-*- mode:lisp -*-|#
#|
exec ros --$0 "$@"
|#

You additionally need to define a main function which Roswell calls for you. Explicitly calling a function in the file, whether it be main or another, might work (so you could use sbcl --script or similar which does not automatically call main for you), but I suspect it'd break ros build and might result in some weird error messages if you ever return from that function.

With this approach we run into many of the same issues as above. Support is difficult for projects maintained by non-Roswell users, it requires a Roswell and non-Roswell version of any given script, and you're subject to Roswell's opinionated defaults. The defaults issue is particularly thorny here, as I have seen Roswell scripts in the wild that depend on the existence of the ros or ql packages and functions they export. This means that folks using SBCL can't count on being able to do sbcl --load some-script.ros --eval '(main)' and have it work.
This does not smell like portability to me.

### Solution?

The CL community really needs a portable way of running scripts across multiple implementations and OSes. Unfortunately, I don't have a concrete solution in mind. If I did, I would have started trying to implement it already!

I think cl-launch is pretty nice, but I have issues with its insistence on loading ASDF and upgrading it. ASDF is not needed for every script under the sun, and I'd sometimes prefer to use a specific version of ASDF I ship with the script instead of whatever the end user has lying around in their ~/common-lisp/asdf/ folder. I also dislike that it's written as a shell script, which makes it a non-starter on Windows.

If Roswell aimed to be less of a monolith, perhaps its scripting facilities could be broken out into a separate project and adapted to call implementations directly instead of via ros. This might be a tough sell, though, given that the current defaults load a decent chunk of Roswell code into the image.

Honestly, I worry that there's never going to be a single implementation of CL scripting that satisfies everyone. Which leads to the next point...

# Importance of interfaces

I don't know if you've noticed or not, but directing CL'ers (myself included) is a lot like herding cats. If you tell them to do one thing, you'll have a couple follow along, some start and then get lost on the way, some start to explore different options and then do what you suggested (or get lost), and some do the exact opposite of what you want or invent a new way of doing it purely out of spite (I jest about the spite part... mostly).

Given Roswell's current state, if someone told me that I had to install Roswell to run their fancy program which is run through a .ros script, then, by God, I will find a different way to run it. Roswell is a black box to me. I don't know what utilities are loaded with ros run or when running a script. And even if I did, Roswell could choose to change them at any time.
Similarly, I expect a certain contract from the implementation's CLI when I run it, which ros run breaks.

If we instead had some community specifications that described things like "CL scripts" (written with an honest attempt at considering competing needs and desires), and some project told me "you can run this script with any CL script runner that conforms to v1 of the CL script spec. Oh, by the way, Roswell contains one such implementation," I'd be much more likely to just say "OK" and install Roswell to get it to work. Knowing that there's the possibility I could make my own implementation of a conforming CL script runner would make me more likely to follow the crowd in the short term and then split off later if I really needed to. I speak only for myself, of course, but my gut tells me a lot of CL'ers would feel the same way. Maybe it's because of the way our favorite language is designed :).

Anyways, this post has grown too long. But it has achieved its primary purpose of helping me organize my thoughts on this topic. Now I can idly daydream about "CL script" specs and how to get Roswell to install unsullied CL implementations on the user's PATH.

#### Eitaro Fukamachi — Day 3: Roswell: Common Lisp scripting

· 23 days ago

Hi, all Common Lispers. 🎊 And, happy new year in 2022! 🎍 I'm happy that you came to this blog again this year 😆

I explained how to install Lisp implementations, libraries, and applications with Roswell in the previous articles. Today, I can finally introduce a different aspect of Roswell -- as a scripting supporter.

## Scripting in Common Lisp world

Let's think about writing a script in Common Lisp. It has to accept input as a shell command, execute some code, and quit. How do we achieve this in a REPL-first language?

In SBCL, the --script option is just for this purpose:

$ sbcl --script hello.lisp


Used in a shebang, it works as a shell command.

#!/usr/local/bin/sbcl --script

(write-line "Hello, World!")


Save this code as a "hello.lisp" file and give it execute permission.

$ ./hello.lisp
Hello, World!

Good work. But this requires SBCL to be installed at "/usr/local/bin/sbcl". If it's installed at another location, the shell raises an error:

$ ./hello.lisp
-bash: ./hello.lisp: /usr/local/bin/sbcl: bad interpreter: No such file or directory


This problem is not limited to Common Lisp; it also affects Python, Ruby, and other languages. A standard solution is to rewrite the shebang using /usr/bin/env.

#!/usr/bin/env -S sbcl --script

(write-line "Hello, World!")


The -S option is specified to make /usr/bin/env accept options. If you don't specify it, an error will occur because env tries to find a program named "sbcl --script".
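The splitting that -S performs is easy to observe with a command other than sbcl (this assumes a GNU coreutils env that supports -S):

```shell
# Without -S, env would look for a program literally named "echo splitting works".
# With -S, the string is first split into words, so this runs echo with two arguments.
env -S "echo splitting works"
```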

Command-line arguments can be accessed via sb-ext:*posix-argv*. Looks good, huh?
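For instance, a hypothetical args.lisp (the name is mine, not from the article) could echo its arguments back; note that the first element of sb-ext:*posix-argv* is the program name itself:

```lisp
#!/usr/bin/env -S sbcl --script
;; Print each command-line argument on its own line.
;; (rest ...) skips the first element, which is the program name.
(dolist (arg (rest sb-ext:*posix-argv*))
  (write-line arg))
```

Running `./args.lisp foo bar` then prints foo and bar on separate lines.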

## Is it portable enough?

Of course, this only works with SBCL, but it's not so bad for UNIX-like OSes where SBCL is installed. Not often, but I do use this method in Docker containers.

However, what if you want to write a Common Lisp application with a command-line interface that is supposed to run in various environments? Is the SBCL script portable enough?

First of all, it's not easy to load libraries via Quicklisp, because sbcl --script doesn't read .sbclrc and thus doesn't load Quicklisp even if it's installed.

Besides, it also becomes more difficult if you are aiming for a general-purpose script, such as if you want to run it on other implementations or if you want to support Windows.
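For SBCL-only scripts, one common workaround for the Quicklisp problem is to load Quicklisp's setup file yourself. A sketch, assuming the default ~/quicklisp install location (the alexandria call is just an illustrative example):

```lisp
#!/usr/bin/env -S sbcl --script
;; sbcl --script skips .sbclrc, so load Quicklisp manually.
(load (merge-pathnames "quicklisp/setup.lisp" (user-homedir-pathname)))
;; :silent t keeps Quicklisp's load messages out of the script's output.
(ql:quickload :alexandria :silent t)
(write-line (alexandria:lastcar '("Hello" "World")))
```

This still won't help with other implementations or Windows, which is where the next section comes in.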

## Roswell Script

"Roswell script" is a good option in that case.

Let's start by writing a Roswell script that outputs "Hello, World!". ros init is the command to create a template for a Roswell script.

$ ros init hello-world
Successfully generated: hello-world.ros

The output file ending with .ros will look like the following:

#!/bin/sh
#|-*- mode:lisp -*-|#
#|
exec ros -Q --$0 "$@"
|#
(progn ;;init forms
  (ros:ensure-asdf)
  #+quicklisp(ql:quickload '() :silent t)
  )

(defpackage :ros.script.hello-world.3848003839
  (:use :cl))
(in-package :ros.script.hello-world.3848003839)

(defun main (&rest argv)
  (declare (ignorable argv)))
;;; vim: set ft=lisp lisp:

It will be bewildering at first, so let's take a look at what it does, part by part.

### Lines 1-5: Hack to make the startup portable

The first 5 lines are a hack to make the startup process portable. You don't need to know how these lines work, but I'm writing this for curious people.

#!/bin/sh
#|-*- mode:lisp -*-|#
#|
exec ros -Q --$0 "$@"
|#

The shell reads the first line as the shebang and launches the program with /bin/sh. # starts a comment in the shell, so lines 2-3 are skipped. Next, the shell comes to the exec ros line and finally invokes ros.

The second launch is with ros. ros skips the first shebang line. Also, everything from #| to |# is ignored, since it's a multi-line comment in Common Lisp. Then the code below is executed as Common Lisp.

### Lines 6-9: Initial forms

Lines 6 to 9 are the place to write initialization code:

(progn ;;init forms
  (ros:ensure-asdf)
  #+quicklisp(ql:quickload '() :silent t)
  )

Mainly, this is for loading external libraries. I asked the Roswell author, @snmsts, something like, "Is there any significance in writing external dependencies here?" He said, "I think it would be treated specially when it's built into a binary, but I don't remember much. That is for fast startup, maybe?" After some investigation, he found it didn't work as intended. Nowadays, it seems to remain merely good practice to list all dependencies in one place.

### After line 10: Main part

The rest of the file is a normal Common Lisp program.
(defpackage :ros.script.hello-world.3848003839
  (:use :cl))
(in-package :ros.script.hello-world.3848003839)

(defun main (&rest argv)
  (declare (ignorable argv)))

The main function is always required. It is the entry function when the Roswell script is executed, and it takes the command-line arguments as a list of strings. There are no restrictions on defining other functions and so on. This part can be written like a regular program.

## Benefits of Roswell script

Why should you write a script in the Roswell manner? There are 3 benefits to writing Roswell scripts.

First of all, a Roswell script is independent of the Lisp implementation. sbcl --script obviously doesn't work with anything other than SBCL; however, a Roswell script works on every Lisp supported by Roswell.

The second benefit is automatic script installation with ros install. I introduced ros install in "Day 2", which is for installing Common Lisp applications/libraries. Besides placing files, it also treats Roswell scripts specially: when the application has a directory named roswell, all scripts under it are copied into ~/.roswell/bin. This is the reason why you can use the qlot command right after running ros install fukamachi/qlot.

The last one is binary building. Roswell scripts can be built into a binary by running ros build. This can be a boon, as it skips reading the file, compilation, and loading external libraries.

To summarize:

• Lisp-implementation portable
• Automatic installation with ros install
• Allows building a binary with ros build

These benefits make it the perfect way to distribute a general-purpose application and provide its command-line interface.

## Examples

At last, let me introduce some real examples of Roswell scripts. See inside the roswell directory of those projects.

• Qlot
  • A project-local library installer
  • Provides the qlot command to install/update project dependencies and utilize them.
• Clack
  • Web server abstraction layer
  • Provides the clackup command to start a web application.
• lem
  • Common Lisp editor/IDE with high extensibility
  • Provides the lem command to start an editor.

#### Michał Herda — FILL-POINTER-OUTPUT-STRING

· 24 days ago

#CommonLisp #Lisp

Someone noted that they'd like a stream that can append to an existing string with a fill pointer, like the stream that with-output-to-string can produce, except with indefinite extent. A little bit of Lisp hackery produced something that seems to work, even if greatly untested (yet).

;;; A fill pointer output stream with indefinite extent

(defclass fill-pointer-output-stream
    (trivial-gray-streams:fundamental-character-output-stream)
  ((string :accessor fill-pointer-output-stream-string :initarg :string))
  (:default-initargs :string (a:required-argument :string)))

(defmethod trivial-gray-streams:stream-line-column
    ((stream fill-pointer-output-stream)))

(defmethod trivial-gray-streams:stream-start-line-p
    ((stream fill-pointer-output-stream)))

(defmethod trivial-gray-streams:stream-write-char
    ((stream fill-pointer-output-stream) char)
  (vector-push-extend char (fill-pointer-output-stream-string stream)))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

CL-USER> (let* ((string (make-array 0 :element-type 'character :fill-pointer 0))
                (stream (make-instance 'fill-pointer-output-stream :string string)))
           (write-string "asdf" stream)
           (close stream)
           string)
"asdf"

#### Nicolas Hafner — 2021 for Kandria in Review

· 27 days ago

Wow, it's already been another year! When I was thinking about writing this year's roundup for Kandria, I started getting the sweats, because I realised just how many things changed and happened during it. Even now, writing this, I'm not sure how good a job I'll be able to do summarising it all without forgetting important details. However, since you can go read all the monthly updates, I'll just take this as a good sign that we made a lot of progress!
## Vertical Slice

The first thing we did in the year was start up the actual production phase of Kandria. I'd like to remind you at this point that we had only drafted up the overall locations and plot of the game, but not made any content for it. All we had at this point was the pre-production demo. I might still have a build of this lying around somewhere...

It took us a little over three months to complete the slice, which contained about an hour of content, multiple NPCs, multiple tilesets, complex pathing AI, and several quests. Look at all that content!

We then took a retreat to create a very small game very quickly.

## Eternia: Pet Whisperer

In just two weeks we built and released a visual novel game called Eternia: Pet Whisperer. You can find and get the game on Steam (currently half off!)

The purpose of this exercise was both to give us a break from the rush of developing the vertical slice, and to give us the run-down on a full game production cycle, including release on Steam and all that entailed. It was quite an adventure, but I'm still pretty happy with the level of quality we managed to achieve in the final product!

## Team Expansion

After Tim and Fred joined the team last year, we had another expansion happen this year, with Mikel joining us as the composer, and Cai as the sound designer. With their help the game really started to expand its atmosphere. There's still a lot more to do to improve Kandria's soundscape, but we're already so much further from where we started off, when I was trying to make my own effects and we only had a royalty-free sample track for the music.

Anyway, with the two on board as well, we are now at maximum capacity. Unfortunately, with my meagre funds I can't afford to hire any other people, no matter how useful to the project they might be. Heck, I can't even afford to hire the current team members full-time.
Making games is ridiculously expensive, and since we currently aren't making any money, this is how it's going to be.

## A New Trailer

After Mikel joined the team, we launched into a two-week production of a new trailer, which required a lot of extra art, editing, custom music, and voice acting as well. I'm really glad we took those two weeks to do it, though, as the trailer was invaluable for all the upcoming pitches I had to do to publishers. By now I've seen the trailer so many times that it has worn off a bit, and I'd like to make a new one soon. Probably once the horizontal slice is complete, eh?

## Events

Thanks to the very generous support from the Swiss arts council Pro Helvetia, we were able to attend a number of events and conferences this year!

The Swiss Games booth at the Game Industry Conference. It was a lot of fun!

• Global Games Pitch
• Pocket Gamer Connects Digital
• Game Developer Conference
• Gamescom, Devcom, and Indie Arena Booth
• Game Industry Conference
• Nordic Games

This was tremendously helpful to get our name out there a little and start building a network of contacts, especially among publishers. Quite a few of them have shown interest in Kandria, and we're currently in deeper talks with one in particular. We'll keep you posted as soon as we can.

At the GIC we took part in the first ever Polish-Swiss game jam, which the team Tim and I were a part of ended up winning! It was really cool to get to know a couple of our fellow Swiss devs more closely, and I hope to meet them again sometime!

The Polish-Swiss game jam awards ceremony

We also applied for the GDC 2022 call Pro Helvetia put out. If we get accepted, we will also be at GDC in person next time! How exciting!

## Digital Dragons Accelerator

Thanks to our attendance at Gamescom, we were put into contact with the organisers of the Digital Dragons Accelerator, a new programme by the KPT Poland Prize team.
This programme offers both mentoring and a significant grant for non-Polish studios, under the requirement that you first establish a company in Poland. Out of more than 90 applicants, 13 were selected to be a part of the programme, and we are one of them, which is super exciting.

Things aren't entirely free, though, as setting up a company in Poland is quite an involved affair, and so I've had to spend a lot of time communicating with the accelerator team and a law firm in Poland to get that process going. We're now getting close to being done - we have and fully own a company in Poland, but still need to handle some further affairs such as setting up a bank account, and filling in quite a bit of extra paperwork for the acceleration contract itself. However, this is all progressing, and we're excited to use the accelerator funds and support to further the marketing of Kandria in the coming year. We'll have more details to share on that soon, so please keep on the lookout!

## Horizontal Slice

Over summer, while we were busy with all the events and marketing and all sorts of other stuff, we also kept polishing the vertical slice we had, added an interactive tutorial, started user testing it in full, and generally improved its overall flow, look, and feel a lot. We then started production on a horizontal slice.

A look at the new second region of the game

The slice required writing out the main storyline of the game, implementing it in quests, designing the new tilesets for the other game regions, designing the looks of the new NPCs, implementing new platforming mechanics for gameplay variance, and ultimately implementing a tonne of new platforming challenge rooms to fill out the world.

The horizontal slice map.
The uppermost part you see is the vertical slice content.

We should be able to complete the slice in February, and then use it to create a new demo, which should be shorter, more focused, and give a quicker in on the action than the slow ramp-up of the actual game's start that we used for the vertical slice demo.

## Looking on to 2022

As I'm writing this on the last day of 2021, another heck year on heck planet, I have quite a lot of things on my mind. One of the things bothering me in particular is the planned release schedule for Kandria. So far, the date has always been March of 2023, but I'm very strongly considering moving that to something still within 2022. I've been avoiding thinking about the project during the holidays, so I haven't formed a decision on that yet. As with all the other things that are in flight right now, though, I'll be sure to let you know as soon as I can if you subscribe to our mailing list.

I hope you had a good holiday season and make a great start into the new year. As always, I want to thank you for sticking around and following Kandria's development. It hasn't been an easy journey so far, and I'm often plagued with doubts about it all working out, so it means the world to hear of people following its development and being excited about its eventual release. Thank you!

#### Quicklisp news — (Second) December 2021 Quicklisp dist update now available

· 27 days ago

New projects:

• adhoc — Another Declarative Hierarchical Object-centric CLOS Customization — GPLv3
• amb — An implementation of John McCarthy's ambiguous operator — MIT
• fsocket — Franks socket API — MIT
• lisp-interface-library — Long name alias for lil — MIT
• polymorphic-functions — Type based dispatch for Common Lisp — MIT
• purgatory — A simple implementation of the 9p filesystem protocol. — LLGPL
• quux-hunchentoot — Thread pooling for hunchentoot — MIT
• schannel — CFFI wrapper to SChannel — MIT
• trivial-package-locks — A standard interface to the various package lock implementations. — MIT

Updated projects: alexandria-plus, bitfield, cl+ssl, cl-ana, cl-collider, cl-data-structures, cl-enumeration, cl-form-types, cl-gserver, cl-incognia, cl-info, cl-kraken, cl-sdl2, cl-unification, cl-webdriver-client, clad, clast, clazy, clingon, clog, closer-mop, consfigurator, contextl, croatoan, cserial-port, dartsclhashtree, defenum, definer, defmain, doc, fare-scripts, fof, fresnel, gendl, glsl-toolkit, hash-set, helambdap, hu.dwim.asdf, hu.dwim.def, hu.dwim.defclass-star, hu.dwim.graphviz, hu.dwim.logger, hu.dwim.presentation, hu.dwim.reiterate, hu.dwim.stefil, hu.dwim.util, hu.dwim.web-server, imago, lack, lichat-protocol, literate-lisp, log4cl-extras, math, mcclim, mgl-pax, mnas-string, monomyth, neural-classifier, new-op, ningle, nyaml, nyxt, omglib, ook, opticl, petalisp, pgloader, polisher, printv, promise, random-sample, safe-read, sc-extensions, scheduler, sel, serapeum, slite, sly, smug, stumpwm, tfeb-lisp-tools, trivial-cltl2, trivial-garbage, trucler, uncursed, vecto, vellum, vgplot.

To get this update, use (ql:update-dist "quicklisp")

Enjoy!

edit: Oops. I forgot I already made a December release. Oh well, enjoy a double-update month!

#### Joe Marshall — Idle puzzles

· 29 days ago

I've been amusing myself with these little puzzles. They're simple enough that you can do them in your head, but I coded them up just for fun and to see if they worked.

;;; -*- Lisp -*-

(defpackage "PUZZLE"
  (:use)
  (:import-from "COMMON-LISP"
                "AND" "COND" "DEFUN" "IF" "FUNCALL" "FUNCTION"
                "LAMBDA" "LET" "MULTIPLE-VALUE-BIND" "NIL" "NOT"
                "OR" "T" "VALUES" "ZEROP"))

(defun puzzle::shr (n) (floor n 2))
(defun puzzle::shl0 (n) (* n 2))
(defun puzzle::shl1 (n) (1+ (* n 2)))

(in-package "PUZZLE")

;;; You can only use the symbols you can access in the PUZZLE package.
;;; Problem -1 (Example). Define =

(defun = (l r)
  (cond ((zerop l) (zerop r))
        ((zerop r) nil)
        (t (multiple-value-bind (l* l0) (shr l)
             (multiple-value-bind (r* r0) (shr r)
               (if (zerop l0)
                   (and (zerop r0) (= l* r*))
                   (and (not (zerop r0)) (= l* r*))))))))

;;; Problem 0. Define >

;;; Problem 1. Define (inc n), returns n + 1 for any non-negative n.

;;; Problem 2. Define (dec n), returns n - 1 for any positive n.

;;; Problem 3. Define (add l r), returns the sum of l and r where l
;;; and r each are non-negative numbers.

;;; Problem 4. Define (sub l r), returns the difference of l and r
;;; where l and r are non-negative numbers and l >= r.

;;; Problem 5. Define (mul l r), returns the product of l and r where
;;; l and r are non-negative integers.

;;; Problem 6. Define (pow l r), returns l raised to the r power,
;;; where l is positive and r is non-negative.

;;; Problem 7. Define (div l r), returns the quotient and remainder
;;; of l/r. l is non-negative and r is positive.

;;; Problem 8. Define (log l r), returns the integer logarithm of l
;;; base r

### My Solutions

;;; Problem 0. Define >

(defun > (l r)
  (cond ((zerop l) nil)
        ((zerop r) t)
        (t (multiple-value-bind (l* l0) (shr l)
             (multiple-value-bind (r* r0) (shr r)
               (if (and (not (zerop l0)) (zerop r0))
                   (not (> r* l*))
                   (> l* r*)))))))

This one turned out to be trickier than I thought. I figured you'd basically discard the low-order bit and just compare the high ones. And you do, but for this one case where the left bit is one and the right bit is zero. In this case, l > r if l* >= r*, so we swap the arguments and invert the sense of the conditional.

;;; Problem 1. Define (inc n), returns n + 1 for any non-negative n.

(defun inc (n)
  (multiple-value-bind (n* n0) (shr n)
    (if (zerop n0)
        (shl1 n*)
        (shl0 (inc n*)))))

;;; Problem 2. Define (dec n), returns n - 1 for any positive n.

(defun dec (n)
  (multiple-value-bind (n* n0) (shr n)
    (if (zerop n0)
        (shl1 (dec n*))
        (shl0 n*))))

;;; Problem 3. Define (add l r), returns the sum of l and r where l
;;; and r each are non-negative numbers.

(defun add (l r)
  (cond ((zerop l) r)
        ((zerop r) l)
        (t (multiple-value-bind (l* l0) (shr l)
             (multiple-value-bind (r* r0) (shr r)
               (if (zerop l0)
                   (if (zerop r0)
                       (shl0 (add l* r*))
                       (shl1 (add l* r*)))
                   (if (zerop r0)
                       (shl1 (add l* r*))
                       (shl0 (addc l* r*)))))))))

(defun addc (l r)
  (cond ((zerop l) (inc r))
        ((zerop r) (inc l))
        (t (multiple-value-bind (l* l0) (shr l)
             (multiple-value-bind (r* r0) (shr r)
               (if (zerop l0)
                   (if (zerop r0)
                       (shl1 (add l* r*))
                       (shl0 (addc l* r*)))
                   (if (zerop r0)
                       (shl0 (addc l* r*))
                       (shl1 (addc l* r*)))))))))

;;; Problem 4. Define (sub l r), returns the difference of l and r
;;; where l and r are non-negative numbers and l >= r.

(defun sub (l r)
  (cond ((zerop l) 0)
        ((zerop r) l)
        (t (multiple-value-bind (l* l0) (shr l)
             (multiple-value-bind (r* r0) (shr r)
               (if (zerop l0)
                   (if (zerop r0)
                       (shl0 (sub l* r*))
                       (shl1 (subb l* r*)))
                   (if (zerop r0)
                       (shl1 (sub l* r*))
                       (shl0 (sub l* r*)))))))))

(defun subb (l r)
  (cond ((zerop l) 0)
        ((zerop r) (dec l))
        (t (multiple-value-bind (l* l0) (shr l)
             (multiple-value-bind (r* r0) (shr r)
               (if (zerop l0)
                   (if (zerop r0)
                       (shl1 (subb l* r*))
                       (shl0 (subb l* r*)))
                   (if (zerop r0)
                       (shl0 (sub l* r*))
                       (shl1 (subb l* r*)))))))))

The presence of a carry or borrow is encoded by which procedure you are in. Effectively, we're encoding the carry or borrow in the program counter.

;;; Problem 5. Define (mul l r), returns the product of l and r where
;;; l and r are non-negative integers.

(defun fma (l r a)
  (if (zerop r)
      a
      (multiple-value-bind (r* r0) (shr r)
        (fma (shl0 l) r* (if (zerop r0) a (add l a))))))

(defun mul (l r) (fma l r 0))

This has a nice iterative solution if we define a "fused multiply-add" operation that, given l, r, and a, computes (+ (* l r) a).

Exponentiation has an obvious analogy to multiplication, but instead of doubling l each iteration, we square it, and we multiply into the accumulator rather than adding into it.
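Assuming the definitions above have been loaded into the PUZZLE package, a quick sanity check at the REPL (I traced these by hand; the results agree with the standard operators):

```lisp
;; Increment: shl1 on the first zero bit found.
(inc 41)     ; => 42
;; Carry propagation runs through addc.
(add 19 23)  ; => 42
;; Borrowing runs through subb.
(sub 23 19)  ; => 4
;; 6 * 7 via the fused multiply-add: (fma 6 7 0).
(mul 6 7)    ; => 42
```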
;;; Problem 6. Define (pow l r), returns l raised to the r power,
;;; where l is positive and r is non-negative.

(defun fem (b e m)
  (if (zerop e)
      m
      (multiple-value-bind (e* e0) (shr e)
        (fem (mul b b) e* (if (zerop e0) m (mul b m))))))

(defun pow (b e) (fem b e 1))

For division we use a curious recursion. To divide a big number n by a divisor d, we first pass the buck and divide by (* d 2) and get a quotient and remainder. The quotient we return is twice the quotient we got back, plus 1 if the remainder we got back is bigger than d. We either return the remainder we got back or we subtract d from it.

I find this curious because one usually performs a recursion by making one of the arguments in some way smaller on each recursive call. The recursion bottoms out when the argument can get no smaller. In this recursion, however, we keep trying to divide by bigger and bigger divisors until we cannot anymore.

;;; Problem 7. Define (div l r), returns the quotient and remainder
;;; of l/r. l is non-negative and r is positive.

(defun div (n d)
  (if (> d n)
      (values 0 n)
      (multiple-value-bind (q r) (div n (shl0 d))
        (if (> d r)
            (values (shl0 q) r)
            (values (shl1 q) (sub r d))))))

Logarithm should be analogous (mutatis mutandis).

;;; Problem 8. Define (log l r), returns the integer logarithm of l
;;; base r

(defun log (l r)
  (if (> r l)
      (values 0 l)
      (multiple-value-bind (lq lr) (log l (mul r r))
        (if (> r lr)
            (values (shl0 lq) lr)
            (values (shl1 lq) (div lr r))))))

#### Quicklisp news — December 2021 Quicklisp dist update now available

· 48 days ago

New projects:

• chain — Two chaining/piping macros, one of them setfing its first argument — BSD-3
• cl-getopt — CFFI wrapper to the libc getopt_long function — Public Domain
• cl-tls — An implementation of the Transport Layer Security Protocols — BSD-3-Clause
• dotenv — Ease pain with working with .env files. — MIT
• latter-day-paypal — Paypal api wrapper. — MIT
• lunamech-matrix-api — An implementation of the Matrix API taken from LunaMech see https://lunamech.com — MIT
• stripe-against-the-modern-world — Implementation of the Stripe API. — MIT
• verlet — Verlet is a simple physics engine based on verlet integration. It supports particles with position and direction, springs between particles, global gravity as well as gravity between particles, and spacial constraints. — BSD-3

Updated projects: adopt, aether, alexandria, anaphora, arrival, aserve, basic-binary-ipc, bdef, bodge-host, caveman, chameleon, check-bnf, cl+ssl, cl-apertium-stream-parser, cl-autowrap, cl-bus, cl-collider, cl-conllu, cl-cron, cl-cxx-jit, cl-data-structures, cl-decimals, cl-enchant, cl-etcd, cl-form-types, cl-gamepad, cl-gcrypt, cl-general-accumulator, cl-gpio, cl-gserver, cl-info, cl-just-getopt-parser, cl-kraken, cl-l10n, cl-liballegro, cl-liballegro-nuklear, cl-mixed, cl-mpg123, cl-opencl, cl-patterns, cl-permutation, cl-progress-bar, cl-prolog2, cl-sparql, cl-string-match, cl-tld, cl-utils, cl-webdriver-client, cl-webkit, cl-yxorp, clack, clack-static-asset-middleware, clad, clingon, clip, clog, closer-mop, cmd, common-lisp-jupyter, commondoc-markdown, compiler-macro-notes, consfigurator, croatoan, cserial-port, data-frame, defconfig, defmain, dexador, djula, doc, docs-builder, easy-audio, fiveam, fiveam-asdf, gadgets, gendl, glacier, gtirb-capstone, gtirb-functions, gute, harmony, helambdap, hu.dwim.defclass-star, hu.dwim.perec, hu.dwim.presentation, hu.dwim.web-server, jingoh, lack, lichat-protocol, lichat-tcp-client, lift, lisp-stat, log4cl, log4cl-extras, maiden, mcclim, mgl-pax, micmac, millet, mito, mnas-package, mnas-string, mutility, named-readtables, nibbles, nodgui, numcl, numerical-utilities, nyxt, omglib, opticl, osicat, overlord, petalisp, plot, portal, postmodern, pp-toml, py4cl2, qlot, quilc, retrospectiff, rove, sel, serapeum, shop3, sly, snappy, static-dispatch, stefil-, structure-ext, stumpwm, tfeb-lisp-tools,
trivial-features, trivial-timeout, trivial-utf-8, trivialib.bdd, uax-15, vas-string-metrics, vellum, vellum-csv, vellum-postmodern, vivid-colors, vivid-diff, wallstreetflets, woo, xhtmlambda, zippy.

Removed projects: lisp-interface-library, quux-hunchentoot.

To get this update, use (ql:update-dist "quicklisp").

Enjoy!

#### Max-Gerd Retzlaff — uLisp on M5Stack (ESP32): support for the LED matrix of the M5Atom Matrix

· 49 days ago

I got a good friend to join the uLisp fun, and he extended my support for the single LED of the M5Atom Lite to support the 25 LEDs of the M5Atom Matrix. The single LED has just the same interface as the LED matrix, as expected. Thanks, Thorsten!

It has a nice backwards-compatible interface: the functions atomled (for C) and atom-led (for Lisp) just have a new second argument, index, which is 0 by default, for the first (or, in the case of the M5Atom Lite, only) LED.

The C function you can call like this:

atomled(0x00ff00); /* or: */ atomled(0x00ff00, 23);

where 0x00ff00 describes an RGB color in 32 bits. And the uLisp function you can call very similarly like this:

(atom-led #xffff00) #| or: |# (atom-led #xffff00 23)

I have merged it into my repository ulisp-esp-m5stack already. Activate the new flag #define enable_m5atom_led_matrix in addition to #define enable_m5atom_led to use the whole LED matrix of the M5Atom Matrix instead of just the first LED. See also built-in LED of the M5Atom Lite.

Read the whole article.

#### Max-Gerd Retzlaff — uLisp on M5Stack (ESP32): built-in LED of the M5Atom Lite

· 50 days ago

I just published support for the M5Atom Lite LED at ulisp-esp-m5stack. There is a C function that you can call like this:

atomled(0x00ff00);

where 0x00ff00 describes an RGB color in 32 bits. And a uLisp function that you can call very similarly like this:

(atom-led #xffff00)

Activate #define enable_m5atom_led to get it. That will also automatically run init_atomled(); in setup() after booting the ESP32.
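A tiny usage sketch with the documented atom-led call (untested on my part; assumes the enable_m5atom_led build flag and uLisp's standard delay function):

```lisp
; Blink the M5Atom Lite's built-in LED three times.
; delay is uLisp's standard millisecond delay.
(dotimes (i 3)
  (atom-led #xff0000)   ; red on
  (delay 500)
  (atom-led #x000000)   ; off
  (delay 500))
```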
I have actually tried the libraries FastLED (by Daniel Garcia, version 3.4.0), Easy Neopixels (by Evelyn Masso, version 0.2.3), and NeoPixelBus (by Makuna, version 2.6.9) as well, but settled on the library Adafruit NeoPixel (by Adafruit, version 1.10.0). It is small, doesn't have tons of bloat, works for me, and has a nice interface that makes my implementation so tiny you would think it was almost no work.

Read the whole article.

#### vindarel — Lisp Interview: Arnold Noronha of Screenshotbot: from Facebook and Java to Common Lisp.

· 52 days ago

I have come to like asking questions of people running companies that use CL, and I have had Arnold on my radar for quite some time. He contributed a while back to my cl-str library, and at that time, I don't recall how many Lisp projects he had on his GitHub, but not as many as today. Since then, he created Screenshotbot (an open-source screenshot testing service) and he released a few very useful Lisp (and Elisp) libraries. Wow! I'll try to investigate what happened :)

## First, Screenshotbot. It's an open-source project and a company. Its website shows a team of four. Can you tell us what's the state of the project, of the company, what are your goals with it? Who are your clients?

Sure. So the team of four isn't fully active at the moment. We're all still friends, but Screenshotbot didn't take off the way we wanted it to, so currently it's just me and Reuben. I'm the primary developer; Reuben 1 is actually my brother, and he manages marketing. But he's tech savvy. I'll talk about how he contributes to code without knowing any Lisp in a second. Lili 2 and Francesco 3 are awesome designers we worked with in building our marketing pages. Reuben and I are also prototyping some other ideas at the moment.

By the way, I think cl-str was probably one of the first CL libraries I contributed to. It's also easily my most used library :)

## You pinned the project facebook/screenshot-tests-for-android on your GH profile.
Maybe you wrote about it already. What's your relation with this project? (we read you "built the infrastructure for running Screenshot Tests at Facebook, which still runs tests from iOS, Android and React at Scale.")

I was the original author of that project when I worked at Facebook. It's the de-facto screenshot testing library for Android. (The other one, called Shot, uses this library under the hood.) It's maintained by other engineers at Facebook at the moment.

At Facebook, I also ended up building the infrastructure to run these screenshot tests, and that infrastructure ended up being used for iOS and React and ElectronJS etc. The infrastructure stored the screenshots, bisected, sent tasks, notified on diff reviews, etc. With Screenshotbot I wanted to build a similar infrastructure that can be used outside of Facebook.

## So... do you come from Facebook and from Java? (ah, you worked at Google too!)

Yes, Java! I also did C++ and lots of Python at Facebook.

## What made you start a new project/company, and...

Speaking of Java... Java is ridiculously slow to work with. Especially on Android, every single change requires you to re-build the entire app. My initial goal when I started my own company was to solve this problem. I wanted to build CL-style interactive development for Java, but without forcing developers to switch from Java. (Take a look at an early demo here: https://www.jipr.io/demo). I haven't completely abandoned this project; I might get back to it soon. But this is a good segue to your next question.

## (the famous question) how did you end up with Common Lisp?

So I've been toying with Lisp for a while. I started using StumpWM around 2010, and have been using it ever since (I think I was looking for Emacs-level customizability for my WM, which CL and StumpWM gave me). My own personal website has run on CL since about 2015. Using CL encouraged me to build small tools for personal use without much friction.
(For instance, I have my own weight tracker tool, and tools for tracking my car mileage, my investment portfolio, etc.) I just had my server running on my desktop, and Emacs was perpetually connected to the Lisp server, so making changes would take less than a minute.

Even before I started using CL professionally, I attempted to build a shitty Lisp interpreter way back in 2012 4. The goal here was mostly to learn about compilers. Now, when I started working on Jipr, at some point I realized I needed to define some kind of "bytecode". But then I thought, why not just use Lisp as the "bytecode", i.e. the Java gets compiled to Lisp? I needed a Lisp-like language that works with Android and Java to build the actual Java interpreter. At that time, I could not find any CL that gave me that ability, so I boldly took the Lisp that I built in 2012 and actually built a shit tonne of real code on top of it. (The demo I linked to earlier used my homegrown Lisp.)

At some point, that stopped scaling. Every small change required lots of debugging because of the lack of SLIME and other debugging tools. I gave up for a while, until one day I realized I could actually pay for LispWorks, and they had already built Android support for me. Paying for LispWorks was the best thing I've done in my startup journey. Initially, I just wanted the Android support for building Jipr, and I was able to remove all references to my homegrown Lisp with a month of effort. Their support is spectacular. And having access to Java libraries meant I could move a lot faster on my future ideas, including Screenshotbot.

## Are you the only (Lisp) developer on Screenshotbot, did you have to make a colleague work on the Lisp codebase and how did that go?

I'm the solo Lisp developer. However: I've worked on multiple websites collaborating with my wife (who has some experience with Python, but isn't technical), and my brother (who is tech savvy, but not a programmer).
Both helped me build websites in a Lisp codebase because of Markup! I helped them set up Atom + SLIMA (with SBCL, not LispWorks, since I couldn't afford another license), and since it looked like HTML from that point on, they would mostly work with a theme that we would purchase to build complex UIs. When they took over the UI, that left me more time to work on building the core components and interactions for the apps.

## We can find you on reddit. Besides it, is there another Lisp circle where you are active? (a pro mailing list?) Do you have interaction with the "pro" side of the CL community?

Hmm, I'm definitely active on Reddit. I follow the LispWorks HUG, but I'm not a super active participant. Do you have suggestions of where else I should be active?

## If I recall correctly, you use LispWorks, and you appreciate its Java FFI. Can you give details on your stack? When do you turn to the Java side?

I briefly described this above. For Jipr, which is literally a Java interpreter, Java interop was a necessity (the interpreter, which runs in Lisp, keeps calling back and forth into Java). And it helps that LispWorks also has Android support (which CCL and ABCL, the other two implementations that support Java, don't have).

For Screenshotbot, the app has multiple integrations with external tools such as Slack, Asana, etc. Sometimes, interacting with the services' APIs isn't that hard, and the integration can be written in plain old Lisp. But often, I need to prototype a solution quickly, and having access to existing well-tested Java libraries is a necessity. (For instance, I had a few sales calls where the client was using a specific external tool for which I didn't have the integration, and I would have to work overnight to build it.)

Over-reliance on Java can eventually cause pain and suffering (I have to restart the Lisp process if I make a Java change), so I avoid writing any Java code, and instead talk to the Java libraries directly, even if it's slower.
Usually these Java libraries are not in the hot path anyway, so performance isn't critical. Occasionally I rewrite code that relied on Java to just use plain CL.

## Any more feedback on LispWorks? It seems you also extensively use Emacs and Slime.

Yeah, I definitely don't use the LispWorks IDE. :) I tried it for a while, but it wasn't for me. Sly (I prefer Sly over Slime these days) is definitely a more "complete" experience for me.

I'm the kind of hacker that is always optimizing their workflow. Emacs just makes it way easier to do that. Slite is obviously one of my bigger examples, but there are always smaller things. (My most recent one was automatically guessing which package to import a symbol from, so I can press C-c i on a symbol, and it'll give me a list of package candidates using ido-completing-read.) Perhaps there are ways to do this with the LispWorks IDE, but I'm more experienced with Emacs Lisp, and there's a wealth of documentation and blogs about how to do different things in Emacs.

## How's your CI and deployment story going, is everything fine, is there anything particular to CL?

I have a mono-repo. Almost all my Lisp code is in one single repo. Some of my newer open source libraries get copied over from the mono-repo to GitHub using Copybara.

I run my own Jenkins server, but plan to switch to Buildbot. Jenkins is annoying beyond a certain point. But it's a great starting point for anybody who doesn't have CI. I have two desktops, and use my older desktop exclusively as a Jenkins worker. But LispWorks adds a complication because of licensing issues, so my primary desktop runs all the LispWorks jobs.

Deployment is interesting. I heavily use bknr.datastore 5, which, while awesome, adds a little pain point to deployment. Restarting a service can take 10-20s because we have to load all the objects into memory. So I avoid restarting the service. LispWorks also adds a quirk.
A deployed LispWorks image doesn't have a compiler, so I can't just git pull and (asdf:load-system ...). So instead, from my desktop I build a bundled fasl file, and I have scripts to upload it to my server and have the server load the fasl file.

Finally, bknr.datastore has some issues with reloading code, which causes existing indices to become empty. I haven't debugged why this happens, but I'm close. I have workaround scripts that can correct a bad index, but because of the potential of bringing my datastore into a bad state, deployment is pretty manual at the moment.

## Some CL libraries you particularly like? Some you wish existed?

Apart from the usual suspects (cl-str, alexandria, cl-json, hunchentoot etc):

• bknr.datastore: This is a game changer. If you're the type of person that likes prototyping things quickly, this is for you. If you need to scale things, you can always do it when you have to. (In my experience, not all ideas reach the stage where you need to scale to over 100 servers.) It does take a bit of getting used to (particularly dealing with how to recover old snapshots, or replaying bad transactions). I'm using my own patches for LispWorks that aren't merged upstream yet (I have a PR for it). I think the failing-index issue might be in my own patches, but I don't know for sure yet.
• dexador (as opposed to Drakma): For the longest time I avoided Dexador because I thought Drakma was the more well-established library. But Drakma, for all its history, still doesn't handle SSL correctly, at least on LispWorks. And Dexador does.
• Qlot also sounds awesome, and I want to start using it.

Some I wish existed:

• A carefully designed algorithms library, with all the common algorithms, and consistent APIs.
• ...in particular graph algorithms. There are libraries out there, and I have used them, but they are clunky and not very well documented.
• Better image processing libraries. Opticl is fine, but confusing to work with. Maybe it's just me.
• A modern and complete Selenium library. I use the Java library with the Java FFI.
• An extensible test matchers library, similar to Java's Hamcrest. There's a cl-hamcrest, but it's not very extensible, and you can't configure the test result output IIRC. I attempted a solution here (copied over as part of my mono-repo): https://github.com/screenshotbot/screenshotbot-oss/tree/main/src/fiveam-matchers, but it's not ready for publishing yet.

I also think there's a verbosity problem with classes and methods that still needs to be solved in the Lispy way. For instance, in Java or Python, method names don't have to be explicitly imported, but in CL we have to import each method that we need to use, which makes it hard to define what the object's "interface" is. I am not proposing a solution here, I'm just identifying this as a problem that slows me down.

## Is that a baby alligator you caught yourself on your GH profile picture?

It's one of those pictures they take of you at a tourist-trap alligator tour. :) The alligator's jaw is taped shut.

## Anything more to add?

Nothing I can think of :)

Thanks and the best for your projects!

notes:

#### Max-Gerd Retzlaff — uLisp on M5Stack (ESP32): new version published

· 52 days ago

I got notified that I hadn't updated ulisp-esp-m5stack at GitHub for quite a while. Sorry for that. Over the last months I worked on a commercial project using uLisp and forgot to update the public repository.

At least I have now bumped ulisp-esp-m5stack to my version of it from May 13th, 2021. It is a then-unpublished version of uLisp named 3.6b which contains a bug fix for a GC bug in with-output-to-string and a bug fix for lispstring, both authored by David Johnson-Davies, who sent them to me via email for testing. Thanks a lot again! It seems they are also included in the uLisp version 3.6b that David published on 20th June 2021.
I know David has published a couple of new releases of uLisp in the meantime with many more interesting improvements, but this is the version I have been using since May, together with a lot of changes by me which I hope to find time to release as well in the near future.

## Error handling in uLisp by Goheeca

I have been using Goheeca's error-handling code since June and I couldn't work without it anymore. I just noticed that he already allowed me, back in July, to push his work to my repository. So I have now also published my branch error-handling to ulisp-esp-m5stack/error-handling. It's Goheeca's patches together with a few small commits by me on top, mainly to achieve this (as noted in the linked forum thread already):

To circumvent the limitation of the missing multiple values that you mentioned with regard to ignore-errors, I have added a GlobalErrorString to hold the last error message and a function get-error to retrieve it. I consider this to be a workaround but it is good enough to show error messages in the little REPL of the Lisp handheld.

Read the whole article.

#### Nicolas Hafner — Slicing the horizon - December Kandria Update

· 52 days ago

November has been filled with horizontal slice development! Nearly all our time was spent working on new content, which is fantastic! The world is already four times as big as the demo, and there's still plenty more to go.

## Horizontal Slice Development

We've been busy working on the horizontal slice content, hammering out quests, art, music, levels, and new mechanics. We now have an overview of the complete game map, and it clocks in at about 1.2 x 2.2 km, divided up into 265 unique rooms. This is pretty big already, but not quite the full map size yet. Once we're done with the horizontal slice, we'll be branching out with sidequests and new side areas that are going to make the map even more dense and broad.

The map is split up into four distinct areas, which we call the Surface and Regions 1-3.
Each of those areas has its own unique tileset, music tracks, NPCs, and platforming mechanics. The demo already shows off the Surface as well as the upper part of Region 1. We can also give you a peek at the visuals for Region 2.

I'm really excited to see everything come together, but there are still a lot more levels for me to design before that. I'm glad that I finally managed to get up to speed doing that, but it's still surprisingly hard. Coming up with fresh ideas for each room and making sure the challenges are properly balanced is very time consuming. As such, progress has been a bit slower than I would have liked, and that's been eating at me. Still, I think we can get the horizontal slice done without too much of a delay, and we still have a lot of development time scheduled in our budget, so I think we'll be fine.

## Tim

I've been working on the horizontal slice, and act 2's mainline quests are all but done to first-draft quality, with a decent first pass on the dialogue. This contains several significant new quests, which send the player far and wide around the lower part of Region 1, which Nick has greyboxed out. It's been fun getting into the headspaces and voices of the new characters you'll meet here, and spinning up again on the scripting language.

There was some tricky functionality to script, since we want some of the quests to be encountered naturally by the player, even if they're not at that part of the story yet; it needed some extra thought to make sure these hang together based on the different ways the player might approach them. This should be good learning going into act 3, another meaty act. Things should get faster to implement for the following acts 4 and 5, though, since the plot there is getting railroaded towards the climax.

## The bottom line

As always, let's look at the roadmap from last month.
• Fix reported crashes and bugs
• Explore platforming items and mechanics
• Practise platforming level design
• Draft out region 2 main quest line levels
• Revise some of the movement mechanics
• Animate more NPC characters and add an AI for them
• Implement RPG mechanics for levelling and upgrades (partially done)
• Draft out region 3 main quest line levels (partially done)
• Complete the horizontal slice

December is going to be a short month as we have two weeks of holidays ahead of us, which I'm personally really looking forward to. I will be writing a year wrap-up for the end of December though, just like last year.

As always, I sincerely hope you give the new demo a try if you haven't yet. Let us know what you think when you do or if you have already!

#### Tim Bradshaw — The endless droning: corrections and clarifications

· 63 days ago

It seems that my article about the existence in the Lisp community of rather noisy people who seem to enjoy complaining rather than fixing things has attracted some interest. Some things in it were unclear, and some other things seem to have been misinterpreted: here are some corrections and clarifications.

First of all, some people pointed out, correctly, that LispWorks is expensive if you live in a low-income country. That's true: I should have been clearer that I believe the phenomenon I am describing is an exclusively rich-world one. I may be incorrect, but I have never heard anyone from a non-rich-world country doing this kind of destructive whining.

It may also have appeared that I am claiming that all Lisp people do this: I'm not. I think the number of people is very small, and that it has always been small. But they are very noisy, and even a small number of noisy people can be very destructive.

Some people seem to have interpreted what I wrote as saying that the current situation was fine and that Emacs / SLIME / SLY was in fact the best possible answer.
Given that my second sentence was

[Better IDEs] would obviously be desirable.

this is a curious misreading. Just in case I need to make the point any more strongly: I don't think that Emacs is some kind of be-all and end-all: better IDEs would be very good. But I also don't think Emacs is this insurmountable barrier that people pretend it is, and I also very definitely think that some small number of people are claiming it is because they want to lose.

I should point out that this claim that it is not an insurmountable barrier comes from some experience: I have taught people Common Lisp, for money, and I've done so based on at least three environments:

• LispWorks;
• Something based around Emacs and a CL running under it;
• Genera.

None of those environments presented any significant barrier. I think that LW was probably the most liked, but none of them got in the way or put people off.

In summary: I don't think that the current situation is ideal, and if you read what I wrote as saying that, you need to read more carefully. I do think that the current situation is not going to deter anyone seriously interested and is very far from the largest barrier to becoming good at Lisp. I do think that, if you want to do something to make the situation better, then you should do it, not hang around on reddit complaining about how awful it is; but there are a small number of noisy people who do exactly that because, for them, no situation would be ideal: what they want is to avoid being able to get useful work done. Those people, unsurprisingly, often become extremely upset when you confront them with this awkward truth about themselves. They are also extremely destructive influences on any discussion around Lisp. (Equivalents of these noisy people exist in other areas, of course.)

That's one of the reasons I no longer participate in the forums where these people tend to exist.

(Thanks to an ex-colleague for pointing out that I should perhaps post this.)
#### vindarel — Lisp for the web: pagination and cleaning up HTML with LQuery

· 63 days ago

I maintain a web application written in Common Lisp, used by real world© clients© (incredible, I know), and I finally got to finish two little additions:

• add pagination to the list of products
• clean up the HTML I get from web scraping (so we finally fetch a book summary, how cool)

(for those who pay for it, we can also use a third-party book database).

The HTML cleanup part is about how to use LQuery for the task. Its doc shows the remove function from the beginning, but I had difficulty finding out how to use it. Here's how. (see issue #11)

## Cleanup HTML with lquery

https://shinmera.github.io/lquery/

LQuery has remove, remove-attr, remove-class, remove-data. It seems pretty capable.

Let's say I got some HTML and I parsed it with LQuery. There are two buttons I would like to remove (you know, the "read more" and "close" buttons that are inside the book summary):

(lquery:$ *node* ".description" (serialize))
;; HTML content...
<button type=\"button\" class=\"description-btn js-descriptionOpen\"><span class=\"mr-005\">Lire la suite</span><i class=\"far fa-chevron-down\" aria-hidden=\"true\"></i></button>
<button type=\"button\" class=\"description-btn js-descriptionClose\"><span class=\"mr-005\">Fermer</span><i class=\"far fa-chevron-up\" aria-hidden=\"true\"></i></button></p>")


On GitHub, @shinmera tells us we can simply do:

($ *node* ".description" (remove "button") (serialize))

Unfortunately, when I try this I still see the two buttons in the node and in the output.

What worked for me is the following:

• first I check that I can access these HTML nodes with a CSS selector:

(lquery:$ *NODE* ".description button" (serialize))
;; => output

• now I use remove. This returns the removed elements at the REPL, but they are correctly removed from the node (a global var passed as a parameter):

(lquery:$ *NODE* ".description button" (remove) (serialize))
;; #("<button type=\"button\" class=\"description-btn js-descriptionOpen\"><span class=\"mr-005\">Lire la suite</span><i class=\"far fa-chevron-down\" aria-hidden=\"true\"></i></button>"

Now if I check the description field:

(lquery:$ *NODE* ".description" (serialize))
;; ...
;; </p>")


I have no more buttons \o/
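Putting the steps above together, here is a minimal self-contained sketch. The HTML string and the variable name are illustrative, not the article's real data; the lquery calls (initialize, remove, serialize) are the ones used above.

```lisp
;; Minimal end-to-end sketch: parse, remove the buttons, serialize.
;; (ql:quickload :lquery)
(defvar *node*
  (lquery:$ (initialize "<div class=\"description\">Summary text
                         <button>Lire la suite</button>
                         <button>Fermer</button></div>")))

;; Select the buttons inside .description and remove them from the DOM.
;; The removed elements are returned, but the point is the side effect:
(lquery:$ *node* ".description button" (remove))

;; The serialized description no longer contains any <button> elements:
(lquery:$ *node* ".description" (serialize))
```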

Now to pagination.

## Pagination

This is my 2c, hopefully this will help someone do the same thing quicker, and hopefully we’ll abstract this in a library...

On my web app I display a list of products (books). We have a search box with a select input in order to filter by shelf (category). If no shelf was chosen, we displayed only the 200 most recent books. No need for pagination, yet... There were only a few thousand books in total, so we could show a whole shelf: it was a few hundred books per shelf at most. But the bookshops grow, and my app crashed once (thanks, Sentry and cl-sentry). Here's how I added pagination. You can find the code here and the Djula template there.

The goal is to get this and if possible, in a re-usable way:

I simply create a dict object with required data:

• the current page number
• the page size
• the total number of elements
• the max number of buttons we want to display
• etc
(defun make-pagination (&key (page 1) (nb-elements 0) (page-size 200)
                             (max-nb-buttons 5))
  "From a current page number, a total number of elements and a page size,
  return a dict with all of that, plus the total number of pages.

  Example:

  (make-pagination :nb-elements 1001)
  ;; =>
  (dict
    :PAGE 1
    :NB-ELEMENTS 1001
    :PAGE-SIZE 200
    :NB-PAGES 6
    :TEXT-LABEL \"Page 1 / 6\"
  )
  "
  (let* ((nb-pages (get-nb-pages nb-elements page-size))
         (max-nb-buttons (min nb-pages max-nb-buttons)))
    (serapeum:dict :page page
                   :nb-elements nb-elements
                   :page-size page-size
                   :nb-pages nb-pages
                   :max-nb-buttons max-nb-buttons
                   :text-label (format nil "Page ~a / ~a" page nb-pages))))

(defun get-nb-pages (length page-size)
  "Given a total number of elements and a page size, compute how many pages fit in there.
  (if there's a remainder, add 1 page)"
  (multiple-value-bind (nb-pages remainder)
      (floor length page-size)
    (if (plusp remainder)
        (1+ nb-pages)
        nb-pages)))

#+(or)
(assert (and (= 30 (get-nb-pages 6000 200))
             (= 31 (get-nb-pages 6003 200))
             (= 1 (get-nb-pages 1 200))))


You call it:

(make-pagination :page page
                 :page-size *page-length*
                 :nb-elements (length results))


then pass it to your template, which can {% include %} the template given above, which will create the buttons (we use Bulma CSS there).

When you click a button, the new page number is given as a GET parameter. You must catch it in your route definition, for example:

(easy-routes:defroute search-route ("/search" :method :get) (q shelf page)
  ...)


Finally, I updated my web app while it runs (it's more fun, and why shut it down?). I've been doing this for two years now and so far all goes well, though I try not to upgrade the Quicklisp dist on a live app: it went badly once, because of external, system-wide dependencies (see this demo-web-live-reload).

That’s exactly the sort of things that should be extracted in a library, so we can focus on our application, not on trivial things. I started that work, but I’ll spend more time next time I need it... call it “needs driven development”.

Happy lisping.

#### Stelian Ionescu — On New IDEs

· 65 days ago
There has been some brouhaha about the state of Common Lisp IDEs, and a few notable reactions to that, so I'm adding my two Euro cents to the conversation. What is a community? It's a common mistake to refer to some people doing a certain thing as a "community", and it's easy to imagine ridiculous examples: the community of suburban lawn-mowing dwellers, the community of wearers of green jackets, the community of programmers-at-large, etc.

#### Tim Bradshaw — The endless droning

· 66 days ago

Someone asked about better Lisp IDEs on reddit. Such things would obviously be desirable. But the comments are entirely full of the usual sad endless droning from people who need there always to be something preventing them from doing what they pretend to want to do, and who are happy to invent such barriers where none really exist. comp.lang.lisp lives on in spirit if not in fact.

[The rest of this article is a lot ruder than the above and I’ve intentionally censored it from the various feeds. See also corrections and clarifications.]

More…

#### Wimpie Nortje — Set up Verbose for multi-threaded standalone applications.

· 68 days ago

Although Verbose is one of the few logging libraries that work with threaded applications (see Comparison of Common Lisp Logging Libraries), I had some trouble getting it to work in my application: a Hunchentoot web application, built as a standalone executable, which handles each request in a separate thread. Getting Verbose to work in Slime was trivial, but once I built the standalone, it kept crashing.

The Verbose documentation provides all the information needed to make this setup work but not in a step-by-step fashion so this took me some time to figure out.

To work with threaded applications Verbose must run inside a thread of its own. It tries to make life easier for the majority case by starting its thread as soon as it is loaded. Creating a standalone application requires that the running lisp image contains only a single running thread. The Verbose background thread prevents the binary from being built. This can be remedied by preventing Verbose from immediately starting its background thread and then manually start it inside the application.

When Verbose is loaded inside Slime it prints to the REPL's *standard-output* without fuss, but when I loaded it inside my standalone binary it caused the application to crash. I did not investigate the *standard-output* connection logic, but I discovered that you must tell Verbose explicitly about the current *standard-output* in a binary, otherwise it won't work.

Steps:

1. (pushnew :verbose-no-init *features*)

This feature must be set before the Verbose system is loaded. It prevents Verbose from starting its main background thread, which it does by default immediately when it is loaded.

I added this form in the .asd file immediately before my application system definition. While executing code inside the .asd file is considered bad style, it provided the cleanest way for me to do this; otherwise I would have to do it in multiple places to cover all the use cases for development flows and for building the production binary. There may be a better way to set *features* before a system is loaded, but I have not yet discovered it.

2. (v:output-here *standard-output*)

This form makes Verbose use the *standard-output* as it currently exists. Leaving out this line was the cause of my application crashes. I am not sure what the cause is but I suspect Verbose tries to use Slime's version of *standard-output* if you don't tell it otherwise, even when it is not running in Slime.

This must be done before starting the Verbose background thread.

3. (v:start v:*global-controller*)

Start the Verbose background thread.

4. (v:info :main "Hello world!")

Start logging.

I use systemd to run my applications. Systemd recommends that applications run in the foreground and print logs to the standard output. The application output is captured and logged in whichever way systemd is configured. On default installations this is usually in /var/log/syslog in the standard logging format which prepends the timestamp and some other information. Verbose also by default prints the timestamp in the logged message, which just adds noise and makes syslog difficult to read.

Verbose's logging format can be configured to be any custom format by subclassing its message class and providing the proper formatting method. This must be done before any other Verbose configuration.

Combining all the code looks like below.

In app.asd:

(pushnew :verbose-no-init *features*)

(defsystem #:app
  ...)


In app.lisp:

(defclass log-message (v:message) ())

(defmethod v:format-message ((stream stream) (message log-message))
  (format stream "[~5,a] ~{<~a>~} ~a"
          (v:level message)
          (v:categories message)
          (v:format-message NIL (v:content message))))

(defun run ()
  (setf v:*default-message-class* 'log-message)
  (v:output-here *standard-output*)
  (v:start v:*global-controller*)
  (v:info :main "Hello world!")

  ...)


#### Eitaro Fukamachi — Day 2: Roswell: Install libraries/applications

· 75 days ago

Hi, all Common Lispers.

In the previous article, I introduced the management of Lisp implementations with Roswell.

One of the readers asked me how to install Roswell itself. Sorry, I forgot to mention it. Please look into the official article at GitHub Wiki. Even on Windows, it recently has become possible to install it with a single command. Quite easy.

Today, I'm going to continue with Roswell: the installation of Common Lisp libraries and applications.

## Install from Quicklisp dist

Quicklisp is the de-facto library registry. When you install Roswell, the latest versions of SBCL and Quicklisp are automatically set up.

Let's try to see the value of ql:*quicklisp-home* in REPL to check where Quicklisp is loaded from.

$ ros run
* ql:*quicklisp-home*
#P"/home/fukamachi/.roswell/lisp/quicklisp/"

You see that Quicklisp is installed in ~/.roswell/lisp/quicklisp/. To install a Common Lisp project using this Quicklisp, execute the ros install command:

# Install a project from Quicklisp dist
$ ros install <project name>


You probably remember that the ros install command is also used to install Lisp implementations. If you specify something other than the name of an implementation, Roswell assumes that it's the name of an ASDF project. If the project is available in the Quicklisp dist, it will be installed from Quicklisp.

Installed files will be placed under ~/.roswell/lisp/quicklisp/dists/quicklisp/software/ along with its dependencies.

If it's installed from Quicklisp, it may seem to be the same as ql:quickload. So you would think that this is just a command to be run from the terminal.

In most cases, that's true. However, if the project being installed ships command-line programs in a directory named roswell/, Roswell will perform an additional action.

For example, Qlot provides qlot command. By running ros install qlot, Roswell installs the executable at ~/.roswell/bin/qlot.

This shows that Roswell can be used as an installer not only for simple projects but also for command-line applications.

Other examples of such projects are "lem", a text editor written in Common Lisp, and "mondo", a REPL program.

I'll explain how to write such a project in another article someday.

## Install from GitHub

How about installing a project that is not in Quicklisp? Or, in some cases, the monthly Quicklisp dist is outdated, and you may want to use the newer version.

By specifying GitHub's user name and project name for ros install, you can install the project from GitHub.

$ ros install <user name>/<project name>

# In the case of Qlot
$ ros install fukamachi/qlot


Projects installed from GitHub will be placed under ~/.roswell/local-projects.

To update it, run ros update:

# Note that it is not "fukamachi/qlot".
$ ros update qlot

Besides, you can also install a specific version by specifying a tag name or a branch name.

# Install Qlot v0.11.4 (tag name)
$ ros install fukamachi/qlot/0.11.4

# Install the development version (branch name)
\$ ros install fukamachi/qlot/develop


### Manual installation

How about installing a project that exists in neither Quicklisp nor GitHub?

It's also easy. Just place the files under ~/.roswell/local-projects, and run ros install <project name>.

Let me explain a little about how it works.

This mechanism is based on the local-projects mechanism provided by Quicklisp.

The "~/.roswell/local-projects" directory can be treated just like the local-projects directory of Quicklisp.

As a side note, if you want to treat other directories like local-projects, just add the path to ros:*local-project-directories*. This is accomplished by adding Roswell-specific functions to asdf:*system-definition-search-functions*. Check it out if you are interested.
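As a sketch (the directory and system name here are hypothetical), registering an extra directory looks like this:

```lisp
;; Hypothetical directory containing your ASDF projects.
(push #p"/path/to/my-projects/" ros:*local-project-directories*)

;; Systems under that directory are now findable,
;; where :my-project is a hypothetical system name.
(ql:quickload :my-project)
```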

You can place your personal projects there or symbolically link to them to make them loadable.

But, I personally think that this directory should be used with caution.

### Caution on the operation of local-projects

Projects placed under the local-projects directory can be loaded immediately after starting the REPL. I suppose many users use it for this convenience.

However, this becomes a problem when developing multiple projects on the same machine. Quicklisp's local-projects directory is user-local, which means all projects share it. Therefore, even if you think you are loading a library from Quicklisp, you may actually be loading a version previously installed from GitHub.

To avoid these dangers, I recommend using Qlot. If you are interested, please look into it.

Anyway, it is better to keep the number of local-projects to a minimum to avoid problems.

If you suspect that an unintended version of the library is loaded, you can check where the library is loaded by executing (ql:where-is-system :<project name>).
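For example, after installing Qlot from GitHub as above, the result should point under local-projects rather than the Quicklisp dist (the exact pathname depends on your setup):

```lisp
(ql:where-is-system :qlot)
;; => a pathname like #P"/home/<user>/.roswell/local-projects/qlot/"
;;    (illustrative; the actual path depends on your installation)
```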

### Conclusion

I introduced how to install Common Lisp projects with Roswell.

• From Quicklisp
• ros install <project name>
• From GitHub
• ros install <user name>/<project name>
• ros install <user name>/<project name>/<tag>
• ros install <user name>/<project name>/<branch>
• Manual installation
• Place files under ~/.roswell/local-projects

#### Tim Bradshaw — The proper use of macros in Lisp

· 77 days ago

People learning Lisp often try to learn how to write macros by taking an existing function they have written and turning it into a macro. This is a mistake: macros and functions serve different purposes and it is almost never useful to turn functions into macros, or macros into functions.

Let’s say you are learning Common Lisp1, and you have written a fairly obvious factorial function based on the natural mathematical definition: if $$n \in \mathbb{N}$$, then

$n! = \begin{cases} 1 &n \le 1\\ n \times (n - 1)! &n > 1 \end{cases}$

So this gives you a fairly obvious recursive definition of factorial:

(defun factorial (n)
  (if (<= n 1)
      1
      (* n (factorial (1- n)))))

And so, since you want to learn about macros, you might try to write factorial as a macro. You could end up with something like this:

(defmacro factorial (n)
  `(if (<= ,n 1)
       1
       (* ,n (factorial ,(1- n)))))

And this superficially seems as if it works:

> (factorial 10)
3628800

But it doesn’t, in fact, work:

> (let ((x 3))
(factorial x))

Error: In 1- of (x) arguments should be of type number.

Why doesn’t this work and can it be fixed so it does? If it can’t what has gone wrong and how are macros meant to work and what are they useful for?

It can’t be fixed so that it works. Trying to rewrite functions as macros is a bad idea, and if you want to learn what is interesting about macros you should not start there.

To understand why this is true you need to understand what macros actually are in Lisp.

## What macros are: a first look

A macro is a function whose domain and range is syntax.

Macros are functions (quite explicitly so in CL: you can get at the function of a macro with macro-function, and this is something you can happily call the way you would call any other function), but they are functions whose domain and range is syntax. A macro is a function whose argument is a language whose syntax includes the macro and whose value, when called on an instance of that language, is a language whose syntax doesn’t include the macro. It may work recursively: its value may be a language which includes the same macro but in some simpler way, such that the process will terminate at some point.
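You can see this directly at the REPL: a macro function takes a form (and an environment) and returns a new form. The exact expansion returned is implementation-dependent, but the shape of the call is standard:

```lisp
;; A macro function maps syntax to syntax: call it on a form and
;; an environment (nil here) and you get back the expansion.
(funcall (macro-function 'when) '(when (> x 0) (print x)) nil)
;; => (if (> x 0) (progn (print x))), or similar, depending on the implementation
```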

So the job of macros is to provide a family of extended languages built on some core Lisp which has no remaining macros, only functions and function application, special operators & special forms involving them and literals. One of those languages is the language we call Common Lisp, but the macros written by people serve to extend this language into a multitude of variants.

As an example of this I often write in a language which is like CL, but is extended by the presence of a number of extra constructs, one of which is called ITERATE (but it predates the well-known one and is not at all the same):

(iterate next ((x 1))
  (if (< x 10)
      (next (1+ x))
      x))

is equivalent to

(labels ((next (x)
           (if (< x 10)
               (next (1+ x))
               x)))
  (next 1))

Once upon a time when I first wrote iterate, it used to manually optimize the recursive calls to jumps in some cases, because the Symbolics I wrote it on didn’t have tail-call elimination. That’s a non-problem in LispWorks2. Anyone familiar with Scheme will recognise iterate as named let, which is where it came from (once, I think, it was known as nlet).

iterate is implemented by a function which maps from the language which includes it to a language which doesn’t include it, by mapping the syntax as above.
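A minimal sketch of such a mapping (not the author's actual implementation) might be:

```lisp
;; Sketch only: maps (iterate name ((var init) ...) body ...)
;; onto labels, following the equivalence shown above.
(defmacro iterate (name (&rest bindings) &body forms)
  `(labels ((,name ,(mapcar #'first bindings)
              ,@forms))
     (,name ,@(mapcar #'second bindings))))
```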

So compare this with a factorial function: factorial is a function whose domain is natural numbers and whose range is also natural numbers, and it has an obvious recursive definition. Well, natural numbers are part of the syntax of Lisp, but they’re a tiny part of it. So implementing factorial as a macro is, really, a hopeless task. What should

(factorial (+ x y (f z)))

actually do when considered as a mapping between languages? Assuming you are using the recursive definition of the factorial function, the answer is that it can’t map to anything useful at all: a function which implements that recursive definition simply has to be called at run time. The very best you could do would seem to be this:

(defun fact (n)
  (if (< n 3)
      n
      (* n (fact (1- n)))))

(defmacro factorial (expression)
  `(fact ,expression))

And that’s not a useful macro (but see below).

So the answer is, again, that macros are functions which map between languages and they are useful where you want a new language: not just the same language with extra functions in it, but a language with new control constructs or something like that. If you are writing functions whose range is something which is not the syntax of a language built on Common Lisp, don’t write macros.

## What macros are: a second look

Macroexpansion is compilation.

A function whose domain is one language and whose range is another is a compiler for the language of the domain, especially when that language is somehow richer than the language of the range, which is the case for macros.

But it’s a simplification to say that macros are this function: they’re not; they’re only part of it. The actual function which maps between the two languages is made up of the macros and the macroexpander provided by CL itself. The macroexpander is what arranges for the functions defined by macros to be called in the right places, and it is also the thing which arranges for various recursive macros to actually make up a recursive function. So it’s important to understand that the macroexpander is a critical part of the process: macros on their own provide only part of it.

## An example: two versions of a recursive macro

People often say that you should not write recursive macros, but this prohibition on recursive macros is pretty specious: they’re just fine. Consider a language which only has lambda and doesn’t have let. Well, we can write a simple version of let, which I’ll call bind as a macro: a function which takes this new language and turns it into the more basic one. Here’s that macro:

(defmacro bind ((&rest bindings) &body forms)
  `((lambda ,(mapcar #'first bindings) ,@forms)
    ,@(mapcar #'second bindings)))

And now

> (bind ((x 1) (y 2))
(+ x y))
(bind ((x 1) (y 2)) (+ x y))
-> ((lambda (x y) (+ x y)) 1 2)
3

(These example expansions come via use of my trace-macroexpand package, available in a good Lisp near you: see appendix for configuration).

So now we have a language with a binding form which is more convenient than lambda. But maybe we want to be able to bind sequentially? Well, we can write a let* version, called bind*, which looks like this

(defmacro bind* ((&rest bindings) &body forms)
  (if (null (rest bindings))
      `(bind ,bindings ,@forms)
      `(bind (,(first bindings))
         (bind* ,(rest bindings) ,@forms))))

And you can see how this works: it checks if there’s just one binding in which case it’s just bind, and if there’s more than one it peels off the first and then expands into a bind* form for the rest. And you can see this working (here both bind and bind* are being traced):

> (bind* ((x 1) (y (+ x 2)))
(+ x y))
(bind* ((x 1) (y (+ x 2))) (+ x y))
-> (bind ((x 1)) (bind* ((y (+ x 2))) (+ x y)))
(bind ((x 1)) (bind* ((y (+ x 2))) (+ x y)))
-> ((lambda (x) (bind* ((y (+ x 2))) (+ x y))) 1)
(bind* ((y (+ x 2))) (+ x y))
-> (bind ((y (+ x 2))) (+ x y))
(bind ((y (+ x 2))) (+ x y))
-> ((lambda (y) (+ x y)) (+ x 2))
(bind* ((y (+ x 2))) (+ x y))
-> (bind ((y (+ x 2))) (+ x y))
(bind ((y (+ x 2))) (+ x y))
-> ((lambda (y) (+ x y)) (+ x 2))
4

You can see that, in this implementation, which is LW again, some of the forms are expanded more than once: that’s not uncommon in interpreted code. Since macros should generally behave as functions (so, not have side-effects) it does not matter that they may be expanded multiple times. Compilation will expand macros and then compile the result, so all the overhead of macroexpansion happens ahead of run time:

> (defun foo (x)
    (bind* ((y (1+ x)) (z (1+ y)))
      (+ y z)))
foo

> (compile *)
(bind* ((y (1+ x)) (z (1+ y))) (+ y z))
-> (bind ((y (1+ x))) (bind* ((z (1+ y))) (+ y z)))
(bind ((y (1+ x))) (bind* ((z (1+ y))) (+ y z)))
-> ((lambda (y) (bind* ((z (1+ y))) (+ y z))) (1+ x))
(bind* ((z (1+ y))) (+ y z))
-> (bind ((z (1+ y))) (+ y z))
(bind ((z (1+ y))) (+ y z))
-> ((lambda (z) (+ y z)) (1+ y))
foo
nil
nil

> (foo 3)
9

There’s nothing wrong with macros like this, which expand into simpler versions of themselves. You just have to make sure that the recursive expansion process is producing successively simpler bits of syntax and has a well-defined termination condition.

Macros like this are often called ‘recursive’ but they’re actually not: the function associated with bind* does not call itself. What is recursive is the function implicitly defined by the combination of the macro function and the macroexpander: the bind* function simply expands into a bit of syntax which it knows will cause the macroexpander to call it again.
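You can watch the two collaborating by expanding one step at a time: macroexpand-1 calls the macro function exactly once, while repeated expansion is what the macroexpander does for you. Using the first version of bind* above:

```lisp
;; One step of expansion: bind* expands into a simpler bind* inside a bind;
;; the macroexpander will then be called again on the inner form.
(macroexpand-1 '(bind* ((x 1) (y 2)) (+ x y)))
;; => (bind ((x 1)) (bind* ((y 2)) (+ x y)))
```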

It is possible to write bind* such that the macro function itself is recursive:

(defmacro bind* ((&rest bindings) &body forms)
  (labels ((expand-bind (btail)
             (if (null (rest btail))
                 `(bind ,btail
                    ,@forms)
                 `(bind (,(first btail))
                    ,(expand-bind (rest btail))))))
    (expand-bind bindings)))

And now compiling foo again results in this output from tracing macroexpansion:

(bind* ((y (1+ x)) (z (1+ y))) (+ y z))
-> (bind ((y (1+ x))) (bind ((z (1+ y))) (+ y z)))
(bind ((y (1+ x))) (bind ((z (1+ y))) (+ y z)))
-> ((lambda (y) (bind ((z (1+ y))) (+ y z))) (1+ x))
(bind ((z (1+ y))) (+ y z))
-> ((lambda (z) (+ y z)) (1+ y))

You can see that now all the recursion happens within the macro function for bind* itself: the macroexpander calls bind*’s macro function just once.

While it’s possible to write macros like this second version of bind*, it is normally easier to write the first version and to allow the combination of the macroexpander and the macro function to implement the recursive expansion.

## Two historical uses for macros

There are two uses for macros — both now historical — where they were used where functions would be more natural.

The first of these is function inlining, where you want to avoid the overhead of calling a small function many times. This overhead was a lot on computers made of cardboard, as all computers were, and also if the stack got too deep the cardboard would tear and this was bad. It makes no real sense to inline a recursive function such as the above factorial: how would the inlining process terminate? But you could rewrite a factorial function to be explicitly iterative:

(defun factorial (n)
(do* ((k 1 (1+ k))
(f k (* f k)))
((>= k n) f)))

And now, if you have very many calls to factorial, you wanted to optimise the function call overhead away, and it was 1975, you might write this:

(defmacro factorial (n)
  `(let ((nv ,n))
     (do* ((k 1 (1+ k))
           (f k (* f k)))
          ((>= k nv) f))))

And this has the effect of replacing (factorial n) by an expression which will compute the factorial of n. The cost of that is that (funcall #'factorial n) is not going to work, and (funcall (macro-function 'factorial) ...) is never what you want.

Well, that’s what you did in 1975, because Lisp compilers were made out of the things people found down the sides of sofas. Now it’s no longer 1975 and you just tell the compiler that you want it to inline the function, please:

(declaim (inline factorial))
(defun factorial (n) ...)

and it will do that for you. So this use of macros is now purely historical.

The second reason for macros where you really want functions is computing things at compile time. Let’s say you have lots of expressions like (factorial 32) in your code. Well, you could do this:

(defmacro factorial (expression)
  (typecase expression
    ((integer 0)
     (factorial/fn expression))
    (number
     (error "factorial of non-natural literal ~S" expression))
    (t
     `(factorial/fn ,expression))))

So the factorial macro checks to see if its argument is a literal natural number and will compute the factorial of it at macroexpansion time (so, at compile time or just before compile time). So a function like

(defun foo ()
(factorial 32))

will now compile to simply return 263130836933693530167218012160000000. And, even better, there’s some compile-time error checking: code which is, say, (factorial 12.3) will cause a compile-time error.

Well, again, this is what you would do if it was 1975. It’s not 1975 any more, and CL has a special tool for dealing with just this problem: compiler macros.

(defun factorial (n)
(do* ((k 1 (1+ k))
(f k (* f k)))
((>= k n) f)))

(define-compiler-macro factorial (&whole form n)
(typecase n
((integer 0)
(factorial n))
(number
(error "literal number is not a natural: ~S" n))
(t form)))

Now factorial is a function and works the way you expect — (funcall #'factorial ...) will work fine. But the compiler knows that if it comes across (factorial ...) then it should give the compiler macro for factorial a chance to say what this expression should actually be. And the compiler macro does an explicit check for the argument being a literal natural number, and if it is, computes the factorial at compile time; it does the same check for a literal number which is not a natural, and finally just says ‘I don’t know, call the function’. Note that the compiler macro itself calls factorial, but since the argument isn’t a literal there’s no recursive doom.

So this takes care of the other antique use of macros where you would expect functions. And of course you can combine this with inlining and it will all work fine: you can write functions which will handle special cases via compiler macros and will otherwise be inlined.
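Putting the pieces together, a combined sketch (reusing the function and compiler macro shown above) might look like this:

```lisp
;; Combining both tools: inlining for the general case,
;; a compiler macro to fold literal naturals at compile time.
(declaim (inline factorial))

(defun factorial (n)
  (do* ((k 1 (1+ k))
        (f k (* f k)))
       ((>= k n) f)))

(define-compiler-macro factorial (&whole form n)
  (typecase n
    ((integer 0) (factorial n))  ; literal natural: compute now
    (number (error "literal number is not a natural: ~S" n))
    (t form)))                   ; anything else: leave the (inlined) call
```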

That leaves macros serving the purpose they are actually useful for: building languages.

## Appendix: setting up trace-macroexpand

(use-package :org.tfeb.hax.trace-macroexpand)

;;; Don't restrict print length or level when tracing
(setf *trace-macroexpand-print-level* nil
*trace-macroexpand-print-length* nil)

;;; Enable tracing
(trace-macroexpand)

;;; Trace the macros you want to look at ...
(trace-macro ...)

;;; ... and untrace them
(untrace-macro ...)

1. All the examples in this article are in Common Lisp except where otherwise specified. Other Lisps have similar considerations, although macros in Scheme are not explicitly functions in the way they are in CL.

2. This article originated as a message on the lisp-hug mailing list for LispWorks users. References to ‘LW’ mean LispWorks, although everything here should apply to any modern CL. (In terms of tail call elimination I would define a CL which does not eliminate tail self-calls in almost all cases under reasonable optimization settings as pre-modern: I don’t use such implementations.)

#### Nicolas Hafner — GIC, Digital Dragons, and more - November Kandria Update

· 81 days ago

An event-ful month passed by for Kandria! Lots of developments in terms of conferences and networking. This, in addition to falling ill for a few days, left little time for actual dev again, though even despite everything we still have some news to share on that front as well!

## Swiss-Polish Game Jam

One of the major events this month was the Swiss-Polish game jam alongside GIC, which was organised largely by the Swiss embassy in Poland. Tim and I partnered up with three good fellows from Blindflug Studios, and made a small game called Eco Tower. The jam lasted only 48 hours, so it's nothing grand, but I'm still quite happy with how it turned out, and it was a blast working with the rest of the team!

You can find the game on itch.io.

## Game Industry Conference

The Game Industry Conference was pretty great! I had a fun time talking to the rest of Pro Helvetia and the other delegated teams, as well as the various attendees that checked out our booth. I wrote a lot more about it and the game jam in a previous weekly mailing list update, which, as an exception, you can see here.

## Digital Dragons

Over the course of our Poland visit we were also informed that we'd been accepted into the Digital Dragons Accelerator programme, which is very exciting! Digital Dragons is a Polish conference and organisation to support games, and with this new accelerator programme they're now also reaching out to non-polish developers to support their projects. Only 13 teams out of 97 from all over Europe were chosen, so we're really happy to have been accepted!

As part of the programme we'll be partnered with a Polish publishing company to settle on and then together achieve a set of milestones, over which the grant money of over 50k€ will be paid out. The partner will not be our publisher, just a partner, for the duration of this programme.

Now, you may be wondering what's in it for Poland, as just handing out a load of money to external studios sounds a bit too good to be true, and indeed there's a small catch. As part of the programme we have to first establish a company in Poland, to which the grant will be paid out, and with the hopes that you'll continue using this company after the accelerator ends. We're now in the process of establishing this company, and have already signed a contract with a law firm to help us out with everything involved.

In any case, this is all very exciting, and I'm sure we'll have more to share about all of this as time goes on.

## Nordic Games

Then this week was the Nordic Games Winter conference, with another MeetToMatch platform. We were also accepted into its "publisher market", which had us automatically paired up with 10 publishing firms for pitches on Tuesday. That, combined with law firm meetings, meant that on Tuesday I had 12 meetings almost back to back. Jeez!

I'm not hedging my bets on getting any publishing deals out of this yet, but it is still a great opportunity to grow our network and get our name and game out there into the collective mind of the industry. The response from the recruiters also generally seems favourable, which is really cool.

I do wish we had a new trailer though. While I still think our current VS trailer is good, I've now had to listen to it so many times during pitches and off that I really can't stand it anymore, ha ha! We'll hold off on that though, creating new content and hammering out that horizontal slice is far more important at this stage.

## Hotfix Release

There was a hotfix release along the line that clears out a bunch of critical bugs, and adds a few small features as well. You can get it from your usual link, or by signing up.

## Horizontal Slice

We're now well into the horizontal slice development, and I've started hammering out the level design for the lower part of region 1. I'm still very slow-going on that since I just lack the experience to do it easily, which in turn makes me loathe doing it, which in turn makes me do less of it, which in turn does not help my experience. Woe is me! Anyway, I'll just grit my teeth for now and get as much done as I can - I'll get better over time I'm sure!

As part of the level design process I've also started implementing more platforming mechanics such as the slide move, lava and oil liquids, a dash-recharge element, and recallable elevators. I'll have to add a few more things still, such as crumbling platforms, springs and springboards, wind, exhaust pipes, and conveyor belts.

## Tim

This month has been horizontal slice quest development, with the trip to Poland for GIC sandwiched in the middle. I'm sure Nick has covered this in depth above, but I wanted to add that it was an amazing experience for me: travelling to Poland and seeing a new country and culture (St. Martin's croissants / Rogals are AMAZING); the game jam where although as a writer I was somewhat limited (helped a bit with design, research and playtesting), it was nevertheless a great experience with the best result - and I got to shake hands with the Swiss ambassador!; the GIC conference itself, where it was a great feeling with Kandria live on the show floor, and watching players and devs get absorbed; the studio visit with Vile Monarch and 11 bit (Frostpunk is one of my favourite games). But the best thing was the people: getting to meet Nick in real life and see the man behind the magic, not to mention all the other devs, industry folk, and organisers from Switzerland and Poland. It was a real privilege to be part of the group.

I've also been continuing to help with the meet-to-match platform for both GIC, and Nordic Game this past week, filtering publishers to suit our needs and booking meetings. Aside from that, it's now full steam ahead on the horizontal slice! With the quest document updated with Nick's feedback, it's a strong roadmap for me to follow. I'm now back in-game getting my hands dirty with the scripting language - it feels good to be making new content, and pushing the story into the next act beyond the vertical slice.

## Fred

Fred's been very busy implementing the new moves for the Stranger, as well as doing all the animations for new NPC characters that we need in the extended storyline. One thing I'm very excited about is the generic villagers, as I want to add a little AI to them to make them walk about and really make the settlements feel more alive!

## Mikel

Similarly, Mikel's been hard at work finalising the tracks for the next regions and producing variants for the different levels of tension. I'm stoked to see how they'll work in-game! Here's a peek at one of the tracks:

## A minor note

I'll take this moment to indulge in a little side project. For some years now I've been producing physical desktop calendars, doing my own art, design, and distribution. If you like the art I make, or would simply like to support what we do and get something small out of it, consider getting one on Gumroad.

## The bottom line

As always, let's look at the roadmap from last month.

• Fix reported crashes and bugs

• Add an update notice to the main screen to avoid people running outdated versions

• Implement some more accessibility options

• Implement more combat and platforming moves

• Implement RPG mechanics for levelling and upgrades (partially done)

• Explore platforming items and mechanics (partially done)

• Practise platforming level design (partially done)

• Draft out region 2 main quest line levels and story

• Draft out region 3 main quest line levels and story

• Complete the horizontal slice

Well, we're starting to crunch away at that horizontal slice content. Still got a long way to go, though!

As always, I sincerely hope you give the new demo a try if you haven't yet. Let us know what you think when you do or if you have already!

For older items, see the Planet Lisp Archives.

Last updated: 2022-01-26 18:33