Planet Lisp

Max-Gerd Retzlaff: uLisp on M5Stack (ESP32): new version published

· 12 hours ago

I got notified that I haven't updated ulisp-esp-m5stack on GitHub for quite a while. Sorry for that. Over the last few months I have been working on a commercial project using uLisp and forgot to update the public repository. At least I have now bumped ulisp-esp-m5stack to my version from May 13th, 2021.

It is a then-unpublished version of uLisp, named 3.6b, which contains a fix for a GC bug in with-output-to-string and a fix for lispstring, both authored by David Johnson-Davies, who sent them to me via email for testing. Thanks a lot again! It seems they are also included in the uLisp version 3.6b that David published on 20th June 2021.

I know that David has published a couple of new releases of uLisp in the meantime, with many more interesting improvements, but this is the version I have been using since May, together with a lot of changes of my own which I hope to find time to release in the near future.

Error-handling in uLisp by Goheeca

I have been using Goheeca's error-handling code since June and I couldn't work without it anymore. I just noticed that he already allowed me to push his work to my repository back in July. So I have now also published my branch error-handling to ulisp-esp-m5stack/error-handling. It's Goheeca's patches together with a few small commits by me on top of them, mainly to achieve this (as noted in the linked forum thread already):

To circumvent the limitation of the missing multiple-values that you mentioned with regard to ignore-errors, I have added a GlobalErrorString to hold the last error message and a function get-error to retrieve it. I consider this to be a workaround but it is good enough to show error messages in the little REPL of the Lisp handheld.


See also "Stand-alone uLisp computer (with code!)".

Read the whole article.

Nicolas Hafner: Slicing the horizon - December Kandria Update

· 18 hours ago
https://filebox.tymoon.eu//file/TWpRd05RPT0=

November has been filled with horizontal slice development! Nearly all our time was spent working on new content, which is fantastic! The world is already four times as big as the demo, and there's still plenty more to go.

Horizontal Slice Development

We've been busy working on the horizontal slice content, hammering out quests, art, music, levels, and new mechanics. We now have an overview of the complete game map, and it clocks in at about 1.2 x 2.2 km, divided up into 265 unique rooms.

https://filebox.tymoon.eu//file/TWpRd013PT0=

This is pretty big already, but not quite the full map size yet. Once we're done with the horizontal slice, we'll be branching things out with sidequests and new side areas that are going to make the map even more dense and broad.

The map is split up into four distinct areas, which we call the Surface and Regions 1-3. Each of those areas has its own unique tileset, music tracks, NPCs, and platforming mechanics.

The demo already shows off the Surface as well as the upper part of Region 1:

https://kandria.com/press/screenshot%202.png
https://kandria.com/press/screenshot%204.png

We can also give you a peek at the visuals for Region 2:

https://filebox.tymoon.eu//file/TWpRd05BPT0=

I'm really excited to see everything come together, but there are still a lot more levels for me to design before that. I'm glad that I finally managed to get up to speed doing that, but it's still surprisingly hard. Coming up with fresh ideas for each room and making sure the challenges are properly balanced is very time-consuming.

As such, progress has been a bit slower than I would have liked, and that's been eating at me. Still, I think we can get the horizontal slice done without too much of a delay, and we still have a lot of development time scheduled in our budget, so I think we'll be fine.

Tim

I've been working on the horizontal slice, and act 2's mainline quests are all but done to first-draft quality, with a decent first pass on the dialogue. This contains several significant new quests, which send the player far and wide around the lower part of region 1, which Nick has greyboxed out. It's been fun getting into the headspaces and voices of the new characters you'll meet here, and spinning up again on the scripting language. There was some tricky functionality to script, since we want some of the quests to be encountered naturally by the player even if they're not at that part of the story yet; it needed some extra thought to make sure these hang together across the different ways the player might approach them. This should be good learning going into act 3, another meaty act. Things should get faster to implement for the following acts 4 and 5, though, since the plot there is getting railroaded towards the climax.

The bottom line

As always, let's look at the roadmap from last month.

  • Fix reported crashes and bugs

  • Explore platforming items and mechanics

  • Practise platforming level design

  • Draft out region 2 main quest line levels

  • Revise some of the movement mechanics

  • Animate more NPC characters and add an AI for them

  • Implement RPG mechanics for levelling and upgrades (partially done)

  • Draft out region 3 main quest line levels (partially done)

  • Complete the horizontal slice

December is going to be a short month as we have two weeks of holidays ahead of us, which I'm personally really looking forward to. I will be writing a year wrap-up for the end of December though, just like last year.

As always, I sincerely hope you give the new demo a try if you haven't yet. Let us know what you think when you do or if you have already!

Tim Bradshaw: The endless droning: corrections and clarifications

· 11 days ago

It seems that my article about the existence in the Lisp community of rather noisy people who seem to enjoy complaining rather than fixing things has attracted some interest. Some things in it were unclear, and some other things seem to have been misinterpreted: here are some corrections and clarifications.


First of all, some people pointed out, correctly, that LispWorks is expensive if you live in a low-income country. That’s true: I should have been clearer that I believe the phenomenon I am describing is exclusively a rich-world one. I may be incorrect, but I have never heard of anyone from a non-rich-world country engaging in this kind of destructive whining.

It may also have appeared that I am claiming that all Lisp people do this: I’m not. I think the number of people is very small, and that it has always been small. But they are very noisy and even a small number of noisy people can be very destructive.

Some people seem to have interpreted what I wrote as saying that the current situation was fine and that Emacs / SLIME / SLY was in fact the best possible answer. Given that my second sentence was

[Better IDEs] would obviously be desirable.

this is a curious misreading. Just in case I need to make the point any more strongly: I don’t think that Emacs is some kind of be-all and end-all: better IDEs would be very good. But I also don’t think Emacs is this insurmountable barrier that people pretend it is, and I also very definitely think that some small number of people are claiming it is because they want to lose.

I should point out that this claim that it is not an insurmountable barrier comes from some experience: I have taught people Common Lisp, for money, and I’ve done so based on at least three environments:

  • LispWorks;
  • Something based around Emacs and a CL running under it;
  • Genera.

None of those environments presented any significant barrier. I think that LW was probably the most liked but none of them got in the way or put people off.

In summary: I don’t think that the current situation is ideal, and if you read what I wrote as saying that, you need to read more carefully. I do think that the current situation is not going to deter anyone seriously interested, and that it is very far from the largest barrier to becoming good at Lisp. I do think that, if you want to do something to make the situation better, you should do it, not hang around on reddit complaining about how awful it is; but there are a small number of noisy people who do exactly that because, for them, no situation would be ideal, since what they want is to avoid being able to get useful work done. Those people, unsurprisingly, often become extremely upset when you confront them with this awkward truth about themselves. They are also extremely destructive influences on any discussion around Lisp. (Equivalents of these noisy people exist in other areas, of course.) That’s one of the reasons I no longer participate in the forums where these people tend to exist.


(Thanks to an ex-colleague for pointing out that I should perhaps post this.)

vindarel: Lisp for the web: pagination and cleaning up HTML with LQuery

· 11 days ago

I maintain a web application written in Common Lisp, used by real world© clients© (incredible I know), and I finally got to finish two little additions:

  • add pagination to the list of products
  • clean up the HTML I get from web scraping (so we finally fetch a book summary, how cool) (for those who pay for it, we can also use a third-party book database).

The HTML cleanup part is about how to use LQuery for the task. Its doc shows the remove function from the beginning, but I had difficulty finding out how to use it. Here’s how. (See issue #11.)

Cleanup HTML with lquery

https://shinmera.github.io/lquery/

LQuery has remove, remove-attr, remove-class, remove-data. It seems pretty capable.

Let’s say I got some HTML and I parsed it with LQuery. There are two buttons I would like to remove (you know, the “read more” and “close” buttons that are inside the book summary):

(lquery:$ *node* ".description" (serialize))
   ;; HTML content...
        <button type=\"button\" class=\"description-btn js-descriptionOpen\"><span class=\"mr-005\">Lire la suite</span><i class=\"far fa-chevron-down\" aria-hidden=\"true\"></i></button>
        <button type=\"button\" class=\"description-btn js-descriptionClose\"><span class=\"mr-005\">Fermer</span><i class=\"far fa-chevron-up\" aria-hidden=\"true\"></i></button></p>")

On GitHub, @shinmera tells us we can simply do:

($ *node* ".description" (remove "button") (serialize))

Unfortunately, I tried it and I still saw the two buttons in the node and in the output. What worked for me is the following:

  • first I check that I can access these HTML nodes with a CSS selector:
(lquery:$ *NODE* ".description button" (serialize))
;; => output
  • now I use remove. This returns the removed elements at the REPL, but they are correctly removed from the node (a global var passed as parameter):
(lquery:$ *NODE* ".description button" (remove) (serialize))
;; #("<button type=\"button\" class=\"description-btn js-descriptionOpen\"><span class=\"mr-005\">Lire la suite</span><i class=\"far fa-chevron-down\" aria-hidden=\"true\"></i></button>"

Now if I check the description field:

(lquery:$ *NODE* ".description" (serialize))
;; ...
;; </p>")

I have no more buttons \o/

Now to pagination.

Pagination

This is my 2c, hopefully this will help someone do the same thing quicker, and hopefully we’ll abstract this in a library...

On my web app I display a list of products (books). We have a search box with a select input in order to filter by shelf (category). If no shelf was chosen, we displayed only the 200 most recent books. No need for pagination, yet... There were only a few thousand books in total, so we could show a shelf entirely; it was a few hundred books per shelf at most. But the bookshops grow, and my app crashed once (thanks, Sentry and cl-sentry). Here’s how I added pagination. You can find the code here and the Djula template there.

The goal is to get this, if possible in a re-usable way:

I simply create a dict object with required data:

  • the current page number
  • the page size
  • the total number of elements
  • the max number of buttons we want to display
  • etc
(defun make-pagination (&key (page 1) (nb-elements 0) (page-size 200)
                         (max-nb-buttons 5))
  "From a current page number, a total number of elements, a page size,
  return a dict with all of that, and the total number of pages.

  Example:

(make-pagination :nb-elements 1001)
;; =>
 (dict
  :PAGE 1
  :NB-ELEMENTS 1001
  :PAGE-SIZE 200
  :NB-PAGES 6
  :TEXT-LABEL \"Page 1 / 6\"
 )
"
  (let* ((nb-pages (get-nb-pages nb-elements page-size))
         (max-nb-buttons (min nb-pages max-nb-buttons)))
    (serapeum:dict :page page
                   :nb-elements nb-elements
                   :page-size page-size
                   :nb-pages nb-pages
                   :max-nb-buttons max-nb-buttons
                   :text-label (format nil "Page ~a / ~a" page nb-pages))))

(defun get-nb-pages (length page-size)
  "Given a total number of elements and a page size, compute how many pages fit in there.
  (if there's a remainder, add 1 page)"
  (multiple-value-bind (nb-pages remainder)
      (floor length page-size)
    (if (plusp remainder)
        (1+ nb-pages)
        nb-pages)))
#+(or)
(assert (and (= 30 (get-nb-pages 6000 200))
             (= 31 (get-nb-pages 6003 200))
             (= 1 (get-nb-pages 1 200))))

You call it:

(make-pagination :page page
    :page-size *page-length*
    :nb-elements (length results))

then pass it to your template, which can {% include %} the template given above, which will create the buttons (we use Bulma CSS there).

When you click a button, the new page number is given as a GET parameter. You must catch it in your route definition, for example:

(easy-routes:defroute search-route ("/search" :method :get) (q shelf page)
   ...)
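One detail worth noting: the page query parameter arrives as a string (or NIL when absent), so it needs to be coerced before reaching make-pagination. A minimal sketch, with a helper name of my own invention:

```lisp
(defun ensure-page-number (page)
  "Coerce the PAGE query parameter (a string or NIL) to a positive
integer, falling back to page 1 on missing or malformed input."
  (let ((n (and page (ignore-errors (parse-integer page)))))
    (if (and (integerp n) (plusp n))
        n
        1)))
```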

Finally, I updated my web app while it runs: it’s more fun, and why shut it down? I’ve been doing this for two years and so far all goes well (though I try not to upgrade the Quicklisp dist in the process; it went badly once, because of external, system-wide dependencies). (See this demo-web-live-reload.)


That’s exactly the sort of thing that should be extracted into a library, so we can focus on our application, not on trivial things. I started that work, but I’ll spend more time on it next time I need it... call it “needs-driven development”.

Happy lisping.

Stelian Ionescu: On New IDEs

· 13 days ago
There has been some brouhaha about the state of Common Lisp IDEs, and a few notable reactions to that, so I’m adding my two Euro cents to the conversation.

What is a community?

It’s a common mistake to refer to some people doing a certain thing as a “community”, and it’s easy to imagine ridiculous examples: the community of suburban lawn-mowing dwellers, the community of wearers of green jackets, the community of programmers-at-large, etc.

Tim Bradshaw: The endless droning

· 14 days ago

Someone asked about better Lisp IDEs on reddit. Such things would obviously be desirable. But the comments are entirely full of the usual sad endless droning from people who need there always to be something preventing them from doing what they pretend to want to do, and who are happy to invent such barriers where none really exist. comp.lang.lisp lives on in spirit if not in fact.

[The rest of this article is a lot ruder than the above and I’ve intentionally censored it from the various feeds. See also corrections and clarifications.]

More…

Wimpie Nortje: Set up Verbose for multi-threaded standalone applications.

· 17 days ago

Although Verbose is one of the few logging libraries that work with threaded applications (see Comparison of Common Lisp Logging Libraries), I had some trouble getting it to work in my application: a Hunchentoot web application, built as a standalone executable, which handles each request in a separate thread. Getting Verbose to work in Slime was trivial, but once I built the standalone, it kept crashing.

The Verbose documentation provides all the information needed to make this setup work but not in a step-by-step fashion so this took me some time to figure out.

To work with threaded applications, Verbose must run inside a thread of its own. It tries to make life easier for the majority case by starting this thread as soon as it is loaded. However, building a standalone application requires that the running Lisp image contain only a single running thread, so the Verbose background thread prevents the binary from being built. The remedy is to prevent Verbose from starting its background thread immediately, and then to start it manually inside the application.

When Verbose is loaded inside Slime it prints to the REPL's *standard-output* without fuss, but when I loaded it inside my standalone binary it caused the application to crash. I did not investigate the *standard-output* connection logic, but I discovered that in a binary you must tell Verbose explicitly about the current *standard-output*, otherwise it won't work.

Steps:

  1. (pushnew :verbose-no-init *features*)

    This feature must be set before the Verbose system is loaded. It prevents Verbose from starting its main background thread, which it otherwise does immediately upon loading.

    I added this form in the .asd file immediately before my application's system definition. While executing code inside an .asd file is considered bad style, it provided the cleanest way for me to do this; otherwise I would have to do it in multiple places to cover all the use cases for development flows and building the production binary. There may be a better way to set *features* before a system is loaded, but I have not yet discovered it.

  2. (v:output-here *standard-output*)

    This form makes Verbose use *standard-output* as it currently exists. Leaving out this line was the cause of my application crashes. I am not sure of the exact cause, but I suspect Verbose tries to use Slime's version of *standard-output* if you don't tell it otherwise, even when it is not running in Slime.

    This must be done before starting the Verbose background thread.

  3. (v:start v:*global-controller*)

    Start the Verbose background thread.

  4. (v:info :main "Hello world!")

    Start logging.

I use systemd to run my applications. Systemd recommends that applications run in the foreground and print logs to the standard output. The application output is captured and logged in whichever way systemd is configured. On default installations this is usually in /var/log/syslog in the standard logging format which prepends the timestamp and some other information. Verbose also by default prints the timestamp in the logged message, which just adds noise and makes syslog difficult to read.

Verbose's logging format can be configured to be any custom format by subclassing its message class and providing the proper formatting method. This must be done before any other Verbose configuration.

Combining all the code looks like below.

In app.asd:

(pushnew :verbose-no-init *features*)

(defsystem #:app
  ...)

In app.lisp:

(defclass log-message (v:message) ())

(defmethod v:format-message ((stream stream) (message log-message))
  (format stream "[~5a] ~{<~a>~} ~a"
          (v:level message)
          (v:categories message)
          (v:format-message NIL (v:content message))))

(defun run ()
  (setf v:*default-message-class* 'log-message)
  (v:output-here *standard-output*)
  (v:start v:*global-controller*)
  (v:info :main "Hello world!")
  
  ...)
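To see what this produces, the format control can be exercised on its own with sample arguments of my choosing (a level keyword, a list of categories, and a message string; ~5a is the equivalent of the ~5,a spelling):

```lisp
;; The control string pads the level to five characters, wraps each
;; category in angle brackets, and appends the message content.
(format nil "[~5a] ~{<~a>~} ~a" :info '(:main) "Hello world!")
;; => "[INFO ] <MAIN> Hello world!"
```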

Eitaro Fukamachi: Day 2: Roswell: Install libraries/applications

· 23 days ago

Hi, all Common Lispers.

In the previous article, I introduced the management of Lisp implementations with Roswell.

One of the readers asked me how to install Roswell itself. Sorry, I forgot to mention it. Please look at the official article on the GitHub Wiki. Even on Windows, it has recently become possible to install it with a single command. Quite easy.

Today, I'm going to continue with Roswell: the installation of Common Lisp libraries and applications.

Install from Quicklisp dist

Quicklisp is the de-facto library registry. When you install Roswell, the latest versions of SBCL and Quicklisp are automatically set up.

Let's try to see the value of ql:*quicklisp-home* in REPL to check where Quicklisp is loaded from.

$ ros run
* ql:*quicklisp-home*
#P"/home/fukamachi/.roswell/lisp/quicklisp/"

You see that Quicklisp is installed in ~/.roswell/lisp/quicklisp/.

To install a Common Lisp project using this Quicklisp, execute ros install command:

# Install a project from Quicklisp dist
$ ros install <project name>

You probably remember that the ros install command is also used to install Lisp implementations. If you specify something other than the name of an implementation, Roswell assumes that it's the name of an ASDF project. If the project is available in the Quicklisp dist, it will be installed from Quicklisp.

Installed files will be placed under ~/.roswell/lisp/quicklisp/dists/quicklisp/software/ along with its dependencies.

If it's installed from Quicklisp, it may seem to be the same as ql:quickload, so you might think this is just a command to be run from the terminal.

In most cases, that's true. However, if the project being installed contains command-line programs in a directory named roswell/, Roswell will perform an additional action.

For example, Qlot provides the qlot command. When you run ros install qlot, Roswell installs the executable at ~/.roswell/bin/qlot.

This shows that Roswell can be used as an installer not only for simple projects but also for command-line applications.

Other examples of such projects are "lem", a text editor written in Common Lisp, and "mondo", a REPL program.

I'll explain how to write such a project in another article someday.

Install from GitHub

How about installing a project that is not in Quicklisp? Or perhaps the monthly Quicklisp dist is outdated and you want to use a newer version.

By specifying GitHub's user name and project name for ros install, you can install the project from GitHub.

$ ros install <user name>/<project name>

# In the case of Qlot
$ ros install fukamachi/qlot

Projects installed from GitHub will be placed under ~/.roswell/local-projects.

To update it, run ros update:

# Note that it is not "fukamachi/qlot".
$ ros update qlot

Besides, you can also install a specific version by specifying a tag name or a branch name.

# Install Qlot v0.11.4 (tag name)
$ ros install fukamachi/qlot/0.11.4

# Install the development version (branch name)
$ ros install fukamachi/qlot/develop

Manual installation

How about installing a project that exists neither in Quicklisp nor on GitHub?

It's also easy. Just place the files under ~/.roswell/local-projects, and run ros install <project name>.

Let me explain a little about how it works.

This mechanism is based on the local-projects mechanism provided by Quicklisp.

The "~/.roswell/local-projects" directory can be treated just like the local-projects directory of Quicklisp.

As a side note, if you want to treat other directories like local-projects, just add the path to ros:*local-project-directories*. This is accomplished by adding Roswell-specific functions to asdf:*system-definition-search-functions*. Check it out if you are interested.

You can place your personal projects there or symbolically link to them to make them loadable.

But, I personally think that this directory should be used with caution.

Caution on the operation of local-projects

Projects placed under the local-projects directory can be loaded immediately after starting the REPL. I suppose many users use it for this convenience.

However, this becomes a problem when developing multiple projects on the same machine. Quicklisp's local-projects directory is user-local, which means all projects share it. Therefore, even if you think you are loading from Quicklisp, you may actually be loading a previously installed version from GitHub.

To avoid these dangers, I recommend using Qlot. If you are interested, please look into it.

Anyway, it is better to keep the number of local-projects to a minimum to avoid problems.

If you suspect that an unintended version of the library is loaded, you can check where the library is loaded by executing (ql:where-is-system :<project name>).

Conclusion

I introduced how to install Common Lisp projects with Roswell.

  • From Quicklisp
    • ros install <project name>
  • From GitHub
    • ros install <user name>/<project name>
    • ros install <user name>/<project name>/<tag>
    • ros install <user name>/<project name>/<branch>
  • Manual installation
    • Place files under ~/.roswell/local-projects

Tim Bradshaw: The proper use of macros in Lisp

· 25 days ago

People learning Lisp often try to learn how to write macros by taking an existing function they have written and turning it into a macro. This is a mistake: macros and functions serve different purposes and it is almost never useful to turn functions into macros, or macros into functions.


Let’s say you are learning Common Lisp1, and you have written a fairly obvious factorial function based on the natural mathematical definition: if \(n \in \mathbb{N}\), then

\[ n! = \begin{cases} 1 &n \le 1\\ n \times (n - 1)! &n > 1 \end{cases} \]

So this gives you a fairly obvious recursive definition of factorial:

(defun factorial (n)
  (if (<= n 1)
      1
    (* n (factorial (1- n)))))

And so, since you want to learn about macros, you wonder: can you write factorial as a macro? You might end up with something like this:

(defmacro factorial (n)
  `(if (<= ,n 1)
      1
    (* ,n (factorial ,(1- n )))))

And this superficially seems as if it works:

> (factorial 10)
3628800

But it doesn’t, in fact, work:

> (let ((x 3))
    (factorial x))

Error: In 1- of (x) arguments should be of type number.

Why doesn’t this work, and can it be fixed so it does? If it can’t, what has gone wrong? How are macros meant to work, and what are they useful for?

It can’t be fixed so that it works. Trying to rewrite functions as macros is a bad idea, and if you want to learn what is interesting about macros you should not start there.

To understand why this is true you need to understand what macros actually are in Lisp.

What macros are: a first look

A macro is a function whose domain and range is syntax.

Macros are functions (quite explicitly so in CL: you can get at the function of a macro with macro-function, and this is something you can happily call the way you would call any other function), but they are functions whose domain and range is syntax. A macro is a function whose argument is a language whose syntax includes the macro and whose value, when called on an instance of that language, is a language whose syntax doesn’t include the macro. It may work recursively: its value may be a language which includes the same macro but in some simpler way, such that the process will terminate at some point.
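This is quite literal in CL: a macro function receives the whole macro form and an environment, and returns new syntax. A small self-contained illustration (the twice macro is mine, for demonstration only):

```lisp
;; A toy macro: its "function" maps the syntax (TWICE FORM) to the
;; syntax (PROGN FORM FORM).
(defmacro twice (form)
  `(progn ,form ,form))

;; MACRO-FUNCTION retrieves that function, and it can be called like
;; any other: form in, expansion out.
(funcall (macro-function 'twice) '(twice (print 1)) nil)
;; => (PROGN (PRINT 1) (PRINT 1))
```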

So the job of macros is to provide a family of extended languages built on some core Lisp which has no remaining macros: only functions and function application, special operators and special forms involving them, and literals. One of those languages is the language we call Common Lisp, but the macros written by people serve to extend this language into a multitude of variants.

As an example of this I often write in a language which is like CL, but is extended by the presence of a number of extra constructs, one of which is called ITERATE (but it predates the well-known one and is not at all the same):

(iterate next ((x 1))
 (if (< x 10)
     (next (1+ x))
   x))

is equivalent to

(labels ((next (x)
          (if (< x 10)
              (next (1+ x))
            x)))
 (next 1))

Once upon a time when I first wrote iterate, it used to manually optimize the recursive calls to jumps in some cases, because the Symbolics I wrote it on didn’t have tail-call elimination. That’s a non-problem in LispWorks2. Anyone familiar with Scheme will recognise iterate as named let, which is where it came from (once, I think, it was known as nlet).

iterate is implemented by a function which maps from the language which includes it to a language which doesn’t include it, by mapping the syntax as above.
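The article doesn’t show that implementation, but such a mapping can be sketched as a macro along these lines (an assumption on my part; the real iterate may well differ):

```lisp
(defmacro iterate (name (&rest bindings) &body forms)
  "Named-LET style iteration: expand into the LABELS form shown in
the equivalence above."
  `(labels ((,name ,(mapcar #'first bindings)
              ,@forms))
     (,name ,@(mapcar #'second bindings))))
```

With this definition, (iterate next ((x 1)) (if (< x 10) (next (1+ x)) x)) expands into the labels form and evaluates to 10.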

So compare this with a factorial function: factorial is a function whose domain is natural numbers and whose range is also natural numbers, and it has an obvious recursive definition. Well, natural numbers are part of the syntax of Lisp, but they’re a tiny part of it. So implementing factorial as a macro is, really, a hopeless task. What should

(factorial (+ x y (f z)))

actually do when considered as a mapping between languages? Assuming you are using the recursive definition of the factorial function, the answer is that it can’t map to anything useful at all: a function which implements that recursive definition simply has to be called at run time. The very best you could do would seem to be this:

(defun fact (n)
 (if (<= n 1)
     1
   (* n (fact (1- n)))))

(defmacro factorial (expression)
 `(fact ,expression))

And that’s not a useful macro (but see below).

So the answer is, again, that macros are functions which map between languages, and they are useful where you want a new language: not just the same language with extra functions in it, but a language with new control constructs or something like that. If you are writing functions whose range is something which is not the syntax of a language built on Common Lisp, don’t write macros.

What macros are: a second look

Macroexpansion is compilation.

A function whose domain is one language and whose range is another is a compiler for the language of the domain, especially when that language is somehow richer than the language of the range, which is the case for macros.

But it’s a simplification to say that macros are this function: they’re not; they’re only part of it. The actual function which maps between the two languages is made up of the macros and the macroexpander provided by CL itself. The macroexpander is what arranges for the functions defined by macros to be called in the right places, and it is also the thing which arranges for various recursive macros to actually make up a recursive function. So it’s important to understand that the macroexpander is a critical part of the process: macros on their own provide only part of it.

An example: two versions of a recursive macro

People often say that you should not write recursive macros, but this prohibition is pretty specious: they’re just fine. Consider a language which has only lambda and doesn’t have let. Well, we can write a simple version of let, which I’ll call bind, as a macro: a function which takes this new language and turns it into the more basic one. Here’s that macro:

(defmacro bind ((&rest bindings) &body forms)
 `((lambda ,(mapcar #'first bindings) ,@forms)
   ,@(mapcar #'second bindings)))

And now

> (bind ((x 1) (y 2))
    (+ x y))              
(bind ((x 1) (y 2)) (+ x y))
 -> ((lambda (x y) (+ x y)) 1 2)
3

(These example expansions come via use of my trace-macroexpand package, available in a good Lisp near you: see appendix for configuration).

So now we have a language with a binding form which is more convenient than lambda. But maybe we want to be able to bind sequentially? Well, we can write a let* version, called bind*, which looks like this

(defmacro bind* ((&rest bindings) &body forms)
 (if (null (rest bindings))
     `(bind ,bindings ,@forms)
   `(bind (,(first bindings))
      (bind* ,(rest bindings) ,@forms))))

And you can see how this works: it checks whether there’s just one binding, in which case it’s just bind; if there’s more than one, it peels off the first and then expands into a bind* form for the rest. And you can see this working (here both bind and bind* are being traced):

> (bind* ((x 1) (y (+ x 2)))
    (+ x y))
(bind* ((x 1) (y (+ x 2))) (+ x y))
 -> (bind ((x 1)) (bind* ((y (+ x 2))) (+ x y)))
(bind ((x 1)) (bind* ((y (+ x 2))) (+ x y)))
 -> ((lambda (x) (bind* ((y (+ x 2))) (+ x y))) 1)
(bind* ((y (+ x 2))) (+ x y))
 -> (bind ((y (+ x 2))) (+ x y))
(bind ((y (+ x 2))) (+ x y))
 -> ((lambda (y) (+ x y)) (+ x 2))
(bind* ((y (+ x 2))) (+ x y))
 -> (bind ((y (+ x 2))) (+ x y))
(bind ((y (+ x 2))) (+ x y))
 -> ((lambda (y) (+ x y)) (+ x 2))
4

You can see that, in this implementation (LW again), some of the forms are expanded more than once: that’s not uncommon in interpreted code. Since macro functions should generally be pure (so, free of side-effects) it does not matter that they may be run multiple times. Compilation expands macros and then compiles the result, so all the overhead of macroexpansion happens ahead of run time:

> (defun foo (x)
    (bind* ((y (1+ x)) (z (1+ y)))
      (+ y z)))
foo

> (compile *)
(bind* ((y (1+ x)) (z (1+ y))) (+ y z))
 -> (bind ((y (1+ x))) (bind* ((z (1+ y))) (+ y z)))
(bind ((y (1+ x))) (bind* ((z (1+ y))) (+ y z)))
 -> ((lambda (y) (bind* ((z (1+ y))) (+ y z))) (1+ x))
(bind* ((z (1+ y))) (+ y z))
 -> (bind ((z (1+ y))) (+ y z))
(bind ((z (1+ y))) (+ y z))
 -> ((lambda (z) (+ y z)) (1+ y))
foo
nil
nil

> (foo 3)
9

There’s nothing wrong with macros like this, which expand into simpler versions of themselves. You just have to make sure that the recursive expansion process is producing successively simpler bits of syntax and has a well-defined termination condition.

Macros like this are often called ‘recursive’ but they’re actually not: the function associated with bind* does not call itself. What is recursive is the function implicitly defined by the combination of the macro function and the macroexpander: the bind* function simply expands into a bit of syntax which it knows will cause the macroexpander to call it again.
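This difference is easy to poke at directly. The sketch below repeats the bind and bind* definitions from above so it is self-contained: macroexpand-1 runs the macro function exactly once, while macroexpand keeps expanding the top-level form until its head is no longer a macro.

```lisp
;; Repeated from above so this snippet stands alone.
(defmacro bind ((&rest bindings) &body forms)
  `((lambda ,(mapcar #'first bindings) ,@forms)
    ,@(mapcar #'second bindings)))

(defmacro bind* ((&rest bindings) &body forms)
  (if (null (rest bindings))
      `(bind ,bindings ,@forms)
    `(bind (,(first bindings))
       (bind* ,(rest bindings) ,@forms))))

;; One step: bind*'s macro function runs once and hands back a form which
;; mentions bind* again, relying on the macroexpander to come back to it.
(macroexpand-1 '(bind* ((x 1) (y 2)) (+ x y)))
;; => (bind ((x 1)) (bind* ((y 2)) (+ x y)))

;; Repeated steps: macroexpand stops when the head is no longer a macro,
;; here a lambda application which still contains an unexpanded bind*.
(macroexpand '(bind* ((x 1) (y 2)) (+ x y)))
;; => ((lambda (x) (bind* ((y 2)) (+ x y))) 1)
```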

It is possible to write bind* such that the macro function itself is recursive:

(defmacro bind* ((&rest bindings) &body forms)
  (labels ((expand-bind (btail)
             (if (null (rest btail))
                 `(bind ,btail
                    ,@forms)
               `(bind (,(first btail))
                  ,(expand-bind (rest btail))))))
    (expand-bind bindings)))

And now compiling foo again results in this output from tracing macroexpansion:

(bind* ((y (1+ x)) (z (1+ y))) (+ y z))
 -> (bind ((y (1+ x))) (bind ((z (1+ y))) (+ y z)))
(bind ((y (1+ x))) (bind ((z (1+ y))) (+ y z)))
 -> ((lambda (y) (bind ((z (1+ y))) (+ y z))) (1+ x))
(bind ((z (1+ y))) (+ y z))
 -> ((lambda (z) (+ y z)) (1+ y))

You can see that now all the recursion happens within the macro function for bind* itself: the macroexpander calls bind*’s macro function just once.

While it’s possible to write macros like this second version of bind*, it is normally easier to write the first version and to allow the combination of the macroexpander and the macro function to implement the recursive expansion.


Two historical uses for macros

There are two uses for macros — both now historical — where they were used where functions would be more natural.

The first of these is function inlining, where you want to avoid the overhead of calling a small function many times. This overhead was a lot on computers made of cardboard, as all computers were, and also if the stack got too deep the cardboard would tear and this was bad. It makes no real sense to inline a recursive function such as the above factorial: how would the inlining process terminate? But you could rewrite a factorial function to be explicitly iterative:

(defun factorial (n)
 (do* ((k 1 (1+ k))
       (f k (* f k)))
      ((>= k n) f)))

And now, if you had very many calls to factorial and wanted to optimise away the function-call overhead, and it was 1975, you might write this:

(defmacro factorial (n)
 `(let ((nv ,n))
    (do* ((k 1 (1+ k))
          (f k (* f k)))
         ((>= k nv) f))))

And this has the effect of replacing (factorial n) by an expression which will compute the factorial of n. The cost of that is that (funcall #'factorial n) is not going to work, and (funcall (macro-function 'factorial) ...) is never what you want.

Well, that’s what you did in 1975, because Lisp compilers were made out of the things people found down the sides of sofas. Now it’s no longer 1975 and you just tell the compiler that you want it to inline the function, please:

(declaim (inline factorial))
(defun factorial (n) ...)

and it will do that for you. So this use of macros is now purely historical.

The second reason for macros where you really want functions is computing things at compile time. Let’s say you have lots of expressions like (factorial 32) in your code. Well, assuming the function version of factorial above has been renamed factorial/fn, you could do this:

(defmacro factorial (expression)
 (typecase expression
   ((integer 0)
    (factorial/fn expression))
   (number
    (error "factorial of non-natural literal ~S" expression))
   (t
    `(factorial/fn ,expression))))

So the factorial macro checks to see if its argument is a literal natural number and will compute the factorial of it at macroexpansion time (so, at compile time or just before compile time). So a function like

(defun foo ()
 (factorial 32))

will now compile to simply return 263130836933693530167218012160000000. And, even better, there’s some compile-time error checking: code which is, say, (factorial 12.3) will cause a compile-time error.

Well, again, this is what you would do if it was 1975. It’s not 1975 any more, and CL has a special tool for dealing with just this problem: compiler macros.

(defun factorial (n)
 (do* ((k 1 (1+ k))
       (f k (* f k)))
      ((>= k n) f)))

(define-compiler-macro factorial (&whole form n)
 (typecase n
   ((integer 0)
    (factorial n))
   (number
    (error "literal number is not a natural: ~S" n))
   (t form)))

Now factorial is a function and works the way you expect: (funcall #'factorial ...) will work fine. But the compiler knows that if it comes across a form like (factorial ...) it should give the compiler macro for factorial a chance to say what that expression should actually be. The compiler macro explicitly checks for the argument being a literal natural number, and if it is, computes the factorial at compile time; it makes the same check for a literal number which is not a natural, and finally just says ‘I don’t know, call the function’. Note that the compiler macro itself calls factorial, but since its argument is not a literal there’s no recursive doom.
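You can poke at this without compiling anything: the expander installed by define-compiler-macro can be fetched with compiler-macro-function and called by hand on a form. This sketch repeats the definitions above so it is self-contained.

```lisp
;; The function and its compiler macro, repeated from above.
(defun factorial (n)
  (do* ((k 1 (1+ k))
        (f k (* f k)))
       ((>= k n) f)))

(define-compiler-macro factorial (&whole form n)
  (typecase n
    ((integer 0)
     (factorial n))
    (number
     (error "literal number is not a natural: ~S" n))
    (t form)))

;; Calling the expander by hand shows what the compiler would substitute.
;; A literal natural is folded at expansion time:
(funcall (compiler-macro-function 'factorial) '(factorial 10) nil)
;; => 3628800

;; A non-literal argument is declined: the original form comes back,
;; meaning "just call the function".
(funcall (compiler-macro-function 'factorial) '(factorial n) nil)
;; => (factorial n)
```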

So this takes care of the other antique use of macros where you would expect functions. And of course you can combine this with inlining and it will all work fine: you can write functions which will handle special cases via compiler macros and will otherwise be inlined.

That leaves macros serving the purpose they are actually useful for: building languages.


Appendix: setting up trace-macroexpand

(use-package :org.tfeb.hax.trace-macroexpand)

;;; Don't restrict print length or level when tracing
(setf *trace-macroexpand-print-level* nil
      *trace-macroexpand-print-length* nil)

;;; Enable tracing
(trace-macroexpand)

;;; Trace the macros you want to look at ...
(trace-macro ...)

;;; ... and untrace them
(untrace-macro ...)

  1. All the examples in this article are in Common Lisp except where otherwise specified. Other Lisps have similar considerations, although macros in Scheme are not explicitly functions in the way they are in CL. 

  2. This article originated as a message on the lisp-hug mailing list for LispWorks users. References to ‘LW’ mean LispWorks, although everything here should apply to any modern CL. (In terms of tail call elimination I would define a CL which does not eliminate tail self-calls in almost all cases under reasonable optimization settings as pre-modern: I don’t use such implementations.) 

Nicolas HafnerGIC, Digital Dragons, and more - November Kandria Update

· 29 days ago
https://filebox.tymoon.eu//file/TWpNNU53PT0=

An event-ful month passed by for Kandria! Lots of developments in terms of conferences and networking. This, in addition to falling ill for a few days, left little time for actual dev again, though even despite everything we still have some news to share on that front as well!

Swiss-Polish Game Jam

One of the major events this month was the Swiss-Polish game jam alongside GIC, which was organised largely by the Swiss embassy in Poland. Tim and I partnered up with three good fellows from Blindflug Studios, and made a small game called Eco Tower. The jam lasted only 48 hours, so it's nothing grand, but I'm still quite happy with how it turned out, and it was a blast working with the rest of the team!

You can find the game on itch.io.

Game Industry Conference

The Game Industry Conference was pretty great! I had a fun time talking to the rest of Pro Helvetia and the other delegated teams, as well as the various attendees that checked out our booth. I wrote a lot more about it and the game jam in a previous weekly mailing list update, which, as an exception, you can see here.

https://filebox.tymoon.eu//file/TWpNNU5RPT0=

Digital Dragons

Over the course of our Poland visit we were also informed that we'd been accepted into the Digital Dragons Accelerator programme, which is very exciting! Digital Dragons is a Polish conference and organisation to support games, and with this new accelerator programme they're now also reaching out to non-Polish developers to support their projects. Only 13 teams out of 97 from all over Europe were chosen, so we're really happy to have been accepted!

As part of the programme we'll be partnered with a Polish publishing company to settle on and then together achieve a set of milestones, over which the grant money of over 50k€ will be paid out. The partner will not be our publisher, just a partner, for the duration of this programme.

Now, you may be wondering what's in it for Poland, as just handing out a load of money to external studios sounds a bit too good to be true, and indeed there's a small catch. As part of the programme we have to first establish a company in Poland, to which the grant will be paid out, and with the hopes that you'll continue using this company after the accelerator ends. We're now in the process of establishing this company, and have already signed a contract with a law firm to help us out with everything involved.

In any case, this is all very exciting, and I'm sure we'll have more to share about all of this as time goes on.

Nordic Games

Then this week was the Nordic Games Winter conference, again using the MeetToMatch platform. We were also accepted into its "publisher market", which had us automatically paired up with 10 publishing firms for pitches on Tuesday. That, combined with law firm meetings, meant that on Tuesday I had 12 meetings almost back to back. Jeez!

I'm not hedging my bets on getting any publishing deals out of this yet, but it is still a great opportunity to grow our network and get our name and game out there into the collective mind of the industry. The response from the recruiters also generally seems favourable, which is really cool.

I do wish we had a new trailer though. While I still think our current VS trailer is good, I've now had to listen to it so many times during pitches and off that I really can't stand it anymore, ha ha! We'll hold off on that though, creating new content and hammering out that horizontal slice is far more important at this stage.

Hotfix Release

There was a hotfix release along the line that clears out a bunch of critical bugs, and adds a few small features as well. You can get it from your usual link, or by signing up.

https://filebox.tymoon.eu//file/TWpNNU5BPT0=

Horizontal Slice

We're now well into the horizontal slice development, and I've started hammering out the level design for the lower part of region 1. I'm still very slow-going on that since I just lack the experience to do it easily, which in turn makes me loathe doing it, which in turn makes me do less of it, which in turn does not help my experience. Woe is me! Anyway, I'll just grit my teeth for now and get as much done as I can - I'll get better over time I'm sure!

As part of the level design process I've also started implementing more platforming mechanics such as the slide move, lava and oil liquids, a dash-recharge element, and recallable elevators. I'll have to add a few more things still, such as crumbling platforms, springs and springboards, wind, exhaust pipes, and conveyor belts.

Tim

This month has been horizontal slice quest development, with the trip to Poland for GIC sandwiched in the middle. I'm sure Nick has covered this in depth above, but I wanted to add that it was an amazing experience for me: travelling to Poland and seeing a new country and culture (St. Martin's croissants / Rogals are AMAZING); the game jam where although as a writer I was somewhat limited (helped a bit with design, research and playtesting), it was nevertheless a great experience with the best result - and I got to shake hands with the Swiss ambassador!; the GIC conference itself, where it was a great feeling with Kandria live on the show floor, and watching players and devs get absorbed; the studio visit with Vile Monarch and 11 bit (Frostpunk is one of my favourite games). But the best thing was the people: getting to meet Nick in real life and see the man behind the magic, not to mention all the other devs, industry folk, and organisers from Switzerland and Poland. It was a real privilege to be part of the group.

I've also been continuing to help with the meet-to-match platform for both GIC, and Nordic Game this past week, filtering publishers to suit our needs and booking meetings. Aside from that, it's now full steam ahead on the horizontal slice! With the quest document updated with Nick's feedback, it's a strong roadmap for me to follow. I'm now back in-game getting my hands dirty with the scripting language - it feels good to be making new content, and pushing the story into the next act beyond the vertical slice.

Fred

Fred's been very busy implementing the new moves for the Stranger, as well as doing all the animations for new NPC characters that we need in the extended storyline. One thing I'm very excited about is the generic villagers, as I want to add a little AI to them to make them walk about and really make the settlements feel more alive!

https://filebox.tymoon.eu//file/TWpNNU1RPT0=https://filebox.tymoon.eu//file/TWpNNU1nPT0=https://filebox.tymoon.eu//file/TWpNNU13PT0=

Mikel

Similarly, Mikel's been hard at work finalising the tracks for the next regions and producing variants for the different levels of tension. I'm stoked to see how they'll work in-game! Here's a peek at one of the tracks:

A minor note

I'll take this moment to indulge in a little side project. For some years now I've been producing physical desktop calendars, with my own art, design, and distribution thereof. If you like the art I make or would simply like to support what we do and get something small out of it, consider getting one on Gumroad.

https://filebox.tymoon.eu//file/TWpNNU5nPT0=

The bottom line

As always, let's look at the roadmap from last month.

  • Fix reported crashes and bugs

  • Add an update notice to the main screen to avoid people running outdated versions

  • Implement some more accessibility options

  • Implement more combat and platforming moves

  • Implement RPG mechanics for levelling and upgrades (partially done)

  • Explore platforming items and mechanics (partially done)

  • Practise platforming level design (partially done)

  • Draft out region 2 main quest line levels and story

  • Draft out region 3 main quest line levels and story

  • Complete the horizontal slice

Well, we're starting to crunch away at that horizontal slice content. Still got a long way to go, though!

As always, I sincerely hope you give the new demo a try if you haven't yet. Let us know what you think when you do or if you have already!

Tim BradshawThe best Lisp

· 33 days ago

People sometimes ask which is the best Lisp dialect? That’s a category error, and here’s why.


Programming in Lisp — any Lisp — is about building languages: in Lisp the way you solve a problem is by building a language — a jargon, or a dialect if you like — to talk about the problem and then solving the problem in that language. Lisps are, quite explicitly, language-building languages.

This is, in fact, how people solve large problems in all programming languages: Greenspun’s tenth rule isn’t really a statement about Common Lisp, it’s a statement that all sufficiently large software systems end up having some hacked-together, informally-specified, half-working language in which the problem is actually solved. Often people won’t understand that the thing they’ve built is in fact a language, but that’s what it is. Everyone who has worked on large-scale software will have come across these things: often they are very horrible, and involve much use of language-in-a-string1.

The Lisp difference is two things: when you start solving a problem in Lisp, you know, quite explicitly, that this is what you are going to do; and the language has wonderful tools which let you incrementally build a series of lightweight languages, ending up with one or more languages in which to solve the problem.

So, after that preface, why is this question the wrong one to ask? Well, if you are going to program in Lisp you are going to be building languages, and you want those languages not to be awful. Lisp makes it far easier to build languages which are not awful, but it doesn’t prevent you from building awful ones if you insist. And again, anyone who has dealt with enough languages built on Lisps will have come across some which are, in fact, awful.

If you are going to build languages then you need to understand how languages work — what makes a language habitable to its human users (the computer, with very few exceptions, does not care). That means you will need to be a linguist. So the question then is: how do you become a linguist? Well, we know the answer to that, because there are lots of linguists and lots of courses on linguistics. You might say that, well, those people study natural languages, but that’s irrelevant: natural languages have been under evolutionary pressure for a very long time and they’re really good for what they’re designed for (which is not the same as what programming languages are designed for, but the users — humans — are the same).

So, do you become a linguist by learning French? Or German? Or Latin? Or Cuzco Quechua? No, you don’t. You become a linguist by learning enough about enough languages that you can understand how languages work. A linguist isn’t someone who speaks French really well: they’re someone who understands that French is a Romance language, that German isn’t but has many Romance loan words, that English is closer to German than it is French but got a vast injection of Norman French, which in turn wasn’t that close to modern French, that Swiss German has cross-serial dependencies but Hochdeutsch does not and what that means, and so on. A linguist is someone who understands things about the structure of languages: what do you see, what do you never see, how do different languages do equivalent things? And so on.

The way you become a linguist is not by picking a language and learning it: it’s by looking at lots of languages enough to understand how they work.

If you want to learn to program in Lisp, you will need to become a linguist. The very best way to ensure you fail at that is to pick a ‘best’ Lisp and learn that. There is no best Lisp, and in order to program well in any Lisp you must be exposed to as many Lisps and as many other languages as possible.


If you think there’s a distinction between a ‘dialect’, a ‘jargon’ and a ‘language’ then I have news for you: there is. A language is a dialect with a standards committee. (This is stolen from a quote due to Max Weinreich that all linguists know:

אַ שפּראַך איז אַ דיאַלעקט מיט אַן אַרמיי און פֿלאָט

a shprakh iz a dyalekt mit an armey un flot.)


  1. ‘Language-in-a-string’ is where a programming language has another programming language embedded in strings in the outer language. Sometimes programs in that inner programming language will be made up by string concatenation in the outer language. Sometimes that inner language will, in turn, have languages embedded in its strings. It’s a terrible, terrible thing. 

vindarelLisp Interview: questions to Alex Nygren of Kina Knowledge, using Common Lisp extensively in their document processing stack

· 45 days ago

Recently, the awesome-lisp-companies list was posted on HN, more people got to know it (look, this list is fan-cooked and we add companies when we learn about one, often by chance, don’t assume it’s anything “official” or exhaustive), and Alex Nygren informed us that his company Kina Knowledge uses Common Lisp in production:

We use Common Lisp extensively in our document processing software core for classification, extraction and other aspects of our service delivery and technology stack.

He very kindly answered more questions.

Thanks for letting us know about Kina Knowledge. A few more words if you have time? What implementation(s) are you using?

We use SBCL for all our Common Lisp processes. It’s easier with the standardization on a single engine, but we also have gotten tied to it in some of our code base due to using the built in SBCL specific extensions. I would like, but have no bandwidth, to evaluate CCL as well, especially on the Windows platform, where SBCL is weakest. Since our clients use Windows systems attached to scanners, we need to be able to support it with a client runtime.

Development is on MacOS with Emacs or Ubuntu with Emacs for CL, and then JetBrains IDEs for Ruby and JS and Visual Studio for some interface code to SAP and such. We develop the Kina UI in Kina itself using our internal Lisp, which provides a similar experience to Emacs/SLY.

What is not Lisp in your stack? For example, in “Kina extracts information from PDF, TIFFs, Excel, Word and more” as we read on your website.

Presently we use a Rails/Ruby environment for driving our JSON based API, and some legacy web functions. However, increasingly, once the user is logged in, they are interacting with a Common Lisp back end via a web socket (Hunchentoot and Hunchensocket) interacting with a Lisp based front end. Depending on the type of information extraction, the system uses Javascript, Ruby and Common Lisp. Ideally, I’d like to get all the code refactored into a prefix notation, targeting Common Lisp or DLisp (what we call our internal Lisp that compiles into Javascript).

What’s your position on open-source: do you use open-source Lisp libraries, do you (plan to) open-source some?

Yes. We recently put our JSON-LIB (https://github.com/KinaKnowledge/json-lib) out on Github, which is our internal JSON parser and encoder and we want to open source DLisp after some clean-up work. Architecturally, DLisp can run in the browser, or in sandboxed Deno containers on the server side, so we can reuse libraries easily. It’s not dependent on a server-side component though to run.

Library wise, we strictly try and limit how many third party (especially from the NPM ecosystem) libraries we are dependent on, especially in the Javascript world. In CL, we use the standard stuff like Alexandria, Hunchentoot, Bordeaux Threads, and things like zip.

How did hiring and training Lisp or non-Lisp developers go? Did you look for experienced Lispers, or did you seek experienced engineers, even with little to no prior Lisp background?

Because we operate a lot in Latin America, I trained non-Lisper engineers who speak Spanish to program in Lisp, specifically our DLisp, since most customizations occur specifically for user interfaces and workflows around document-centric processes, such as presenting linked documents and their data in specific ways. How well they took to the Lisp way of thinking really depended on their aptitude for programming, and on their English capabilities to understand me and the system. The user system is multilingual, but the development documentation is all in English. But it was really amazing when I saw folks who are experienced with Javascript and .Net get the ideas of Lisp and how compositional it can be as you build up towards a goal.

Besides, with DLisp, you can on the fly construct a totally new UI interaction - live - in minutes and see changes in the running app without the dreadful recompile-and-reload everything cycle that is typical. Instead, just recompile the function (analogous to C-c, C-c in Emacs), in the browser, and see the change. Then these guys would go out and interact with clients and build stuff. I knew once I saw Spanish functions and little DSLs showing up in organizational instances that they were able to make progress. I think it is a good way to introduce people to Lisp concepts without having to deal with the overhead of learning Emacs at the same time. I pushed myself through that experience when I first was learning CL, and now use Emacs every day for a TON of work tasks, but at the beginning it was tough, and I had to intentionally practice getting to the muscle memory that is required to be truly productive in a tool.

How many lispers are working together, how big a codebase do you manage?

Right now, in our core company we have three people, two here in Virginia and one in Mexico City. We use partners that provide services such as scanning and client integration work. We are self-funded and have grown organically, which is freeing because we are not beholden to investor needs. We maintain maximum flexibility, at the expense of capital. Which is OK for us right now. Lisp allows us to scale dramatically and manage a large code base. I haven’t line counted recently, but it exceeds 100K lines across server and client, with > 50% in Lisp.

Do you sometimes wish the CL (pro) world was more structured? (we have a CL Foundation but not so much active).

I really like the Common Lisp world. I would like it to be more popular, but at the same time, it is a differentiator for us. It is fast - our spatial classifier takes only milliseconds to come to a conclusion about a page (there is additional time prior to this step due to the OpenCV processing - but not too much) and identify it and doesn’t require expensive hardware. Most of our instances run on ARM-64, which at least at AWS, is 30% or so cheaper than x86-64. The s-expression structures align to document structures nicely and allow a nice representation that doesn’t lose fidelity to the original layouts and hierarchies. I am not as active as I would like to be in the Common Lisp community, mainly due to time and other commitments. I don’t know much about the CL foundation.

And so, how did you end up with CL?

Our UI came first, with the DLisp concepts. I was intrigued by Clojure for the server portion, but I couldn’t come to terms with the JVM and the heavyweight nature of it. The server-side application was outgrowing the Rails architecture in terms of what we wanted to do with it, and, at the time, 4 years ago, Ruby was slower. In fact, Ruby had become a processing bottleneck for us (though I am certain the code could have been improved too). I liked the idea of distributing binary applications as well, which we needed to do in some instances, and building a binary runtime of the software was a great draw, too.

I also liked how well CL is thought out, from a spec standpoint. It is stable both in terms of performance and change. I had been building components with TensorFlow and Python 3, but for what I wanted to do, I couldn’t see how I could get there with back propagation and the traditional “lets calculate the entire network state”. If you don’t have access to high end graphic cards, it’s just too slow and too heavy. I was able to get what we needed to do in CL after several iterations and dramatically improve speed and resource utilization. I am very happy with that outcome. We are in what I consider to be a hard problem space: we take analog representations of information, a lot of it being poor quality and convert it to clean, structured digital information. CL is the core of that for us.

Here is an example of our UI, where extractions and classification can be managed. This is described in DLisp which interacts with a Common Lisp back end via a web socket.

Here is the function for the above view being edited in Kina itself. We do not obfuscate our client code, and all code that runs on our clients’ computers is fully available to view and, with the right privileges, to modify and customize. You can see the Extract Instruction Language in the center pane, which takes ideas from the Logo language in terms of a cursor (aka the turtle) that can be moved around relative to the document. We build this software to be used by operations teams and having a description language that is understandable by non-programmers such as auditors and operations personnel, is very useful. You can redefine aspects of the view or running environment and the change can take effect on the fly. Beyond the Javascript boot scaffolding to get the system started up in the browser, everything is DLisp communicating with Common Lisp and, depending on the operation, Rails.

I hope this information is helpful!


It is, thanks again!

Quicklisp newsOctober 2021 Quicklisp dist update now available

· 47 days ago

 New projects

  • alexandria-plus — A conservative set of extensions to Alexandria utilities — Microsoft Public License
  • autoexport — A small library to automatically export definitions — BSD-3-Clause
  • cephes.cl — Wrapper for the Cephes Mathematical Library — Microsoft Public License
  • cl-apertium-stream-parser — Apertium stream parser written in Common Lisp — Apache-2.0
  • cl-bus — A(n almost) referentially transparent interface for streams — BSD-3
  • cl-cram — A simple, Progress bar for Common Lisp — MIT
  • cl-earley-parser — Natural language parser using Jay Earleys well-known algorithm — MIT
  • cl-etcd — Run etcd as an asynchronous inferior process. — AGPL3
  • cl-gcrypt — Common Lisp bindings for libgcrypt — LGPLv2.1
  • cl-termbox — Bindings for termbox library, a minimalistic library for building text-mode applications without curses — MIT license
  • cl-with — WITH- group with- macros, allocate objects and rebind slots — BSD 3-clause
  • cl-yxorp — A reverse proxy server that supports WebSocket, HTTP, HTTPS, HTTP to HTTPS redirecting, port and host forwarding configuration using a real programming language, HTTP header and body manipulation (also using a real programming language). — AGPL3
  • claxy — Simple proxy middleware for clack — Apache License, version 2.0
  • clerk — A cron-like scheduler with sane DSL — MIT
  • clingon — Command-line options parser system for Common Lisp — BSD 2-Clause
  • clutter — Cluttering classes and slots with annotations/decorators/attributes metadata — LGPL
  • commondoc-markdown — Converter from Markdown to CommonDoc. — Unlicense
  • compiler-macro-notes — Provides a macro and some conditions for use within macros and compiler-macros. — MIT
  • ctype — An implementation of the Common Lisp type system. — BSD
  • docs-builder — A meta documentation builder for Common Lisp projects. — Unlicense
  • funds — portable, purely functional data structures in Common Lisp — Apache 2.0
  • geodesic — Library for geodesic calculations. — ISC
  • hashtrie — An implementation of the Hash Trie datastructure, based on Clojure's — Eclipse 2.0
  • mcase — Control frow macros with case comprehensiveness checking. — Public domain
  • mnas-path — Describe mnas-path here — GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 or later
  • parsnip — Parser combinator library — BSD 3-Clause
  • promise — A small, independent promise library for asynchronous frameworks — zlib
  • quick-patch — Easily override quicklisp projects without using git submodules — Mozilla Public License 2.0
  • strict-function — Utility of function definition — MIT
  • vivid-colors — colored object printer — MIT
  • vivid-diff — Colored object diff viewer. — MIT

Updated projects: 3d-matrices, also-alsa, april, architecture.builder-protocol, bdef, beast, bike, bnf, bp, chameleon, check-bnf, chirp, ci-utils, cl+ssl, cl-ana, cl-ansi-term, cl-ansi-text, cl-async, cl-bloggy, cl-collider, cl-colors2, cl-cron, cl-data-structures, cl-dbi, cl-digraph, cl-environments, cl-form-types, cl-forms, cl-gearman, cl-gserver, cl-info, cl-kraken, cl-liballegro-nuklear, cl-libsvm, cl-marshal, cl-megolm, cl-mixed, cl-opencl, cl-opencl-utils, cl-patterns, cl-pdf, cl-permutation, cl-png, cl-readline, cl-schedule, cl-sdl2-mixer, cl-ses4, cl-telebot, cl-utils, cl-wave-file-writer, cl-webdriver-client, cl-webkit, cletris, clj-re, clog, closer-mop, cluffer, clunit2, clx, cmd, colored, common-lisp-jupyter, concrete-syntax-tree, consfigurator, core-reader, croatoan, cytoscape-clj, dartsclhashtree, data-frame, defmain, dfio, djula, dns-client, doc, doplus, easy-routes, eclector, esrap, fare-scripts, fof, fresnel, functional-trees, gadgets, gendl, generic-cl, glacier, gtirb-capstone, gute, harmony, hash-table-ext, helambdap, hunchenissr, imago, ironclad, jingoh, kekule-clj, lack, lambda-fiddle, lass, legit, lisp-namespace, lisp-stat, literate-lisp, log4cl, log4cl-extras, lsx, maiden, markup, math, matrix-case, mcclim, messagebox, mgl-pax, micmac, millet, mito, mnas-graph, mnas-hash-table, mnas-package, mnas-string, mutility, null-package, numerical-utilities, nyxt, omglib, osicat, parachute, petalisp, physical-quantities, plot, portal, postmodern, pp-toml, prompt-for, qlot, query-repl, quilc, read-as-string, resignal-bind, rove, rpcq, salza2, sel, serapeum, sha1, shasht, shop3, sketch, slite, smart-buffer, spinneret, staple, static-dispatch, stealth-mixin, structure-ext, swank-protocol, sycamore, tfeb-lisp-hax, tfeb-lisp-tools, tooter, trace-db, trestrul, trivia, trivial-with-current-source-form, uax-15, uncursed, vellum, vellum-postmodern, vgplot, vk, whirlog, with-c-syntax, zippy.

Removed projects: adw-charting, cl-batis, cl-bunny, cl-dbi-connection-pool, cl-reddit, cl-server-manager, corona, gordon, hemlock, hunchenissr-routes, prepl, s-protobuf, submarine, torta, trivial-swank, weblocks-examples, weblocks-prototype-js, weblocks-tree-widget, weblocks-utils.

To get this update, use (ql:update-dist "quicklisp").

There are a lot of removed projects this month. These projects no longer build with recent SBCLs, and all bug reports have gone ignored for many months. If one of these projects is important to you, consider contributing to its maintenance and help it work again.

Incidentally, this is the eleventh anniversary of the first Quicklisp dist release back in October 2010.

TurtleWare: Selective waste collection

· 47 days ago

When an object in Common Lisp is no longer reachable it is garbage collected. Some implementations provide the functionality to set finalizers for such objects. A finalizer is a function that runs when the object becomes unreachable.

Whether the finalizer is run before the object is deallocated or after is a nuance differing between implementations.

On ABCL, CMU CL, LispWorks, Mezzano, SBCL and Scieneer CL the finalizer does not accept any arguments and it can't capture the finalized object (because otherwise the object would always be reachable); effectively it may already be deallocated when the finalizer runs. As the least common denominator, this is the approach taken in the portability library trivial-garbage.

(let* ((file (open "my-file"))
       (object (make-instance 'pseudo-stream :file file)))
  (flet ((finalize () (close file)))
    (trivial-garbage:set-finalizer object #'finalize)))

On the contrary, on ACL, CCL, Clasp, CLISP, Corman and ECL the finalizer accepts one argument: the finalized object. This relieves the programmer of the concern of what should be captured, but puts the burden on them to ensure that there are no circular dependencies between finalized objects.

(let ((object (make-instance 'pseudo-stream :file (open "my-file"))))
  (flet ((finalize (stream) (close (slot-value stream 'file))))
    (another-garbage:set-finalizer object #'finalize)))

The first approach may, for instance, store weak pointers to objects with registered finalizers; when a weak pointer is broken, the finalizer is called.
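To make this concrete, the weak-pointer strategy could be sketched as follows. This is my own illustration using SBCL's SB-EXT weak pointers; the names and the explicit polling function are mine, and real implementations hook into the collector itself rather than polling:

```lisp
;; A registry of (weak-pointer . thunk) pairs.
(defvar *finalizers* '())

(defun set-finalizer (object thunk)
  (push (cons (sb-ext:make-weak-pointer object) thunk) *finalizers*))

(defun run-pending-finalizers ()
  ;; Imagine this runs after each GC cycle: keep entries whose
  ;; object is still reachable, run the thunk for broken pointers.
  (setf *finalizers*
        (loop for entry in *finalizers*
              if (sb-ext:weak-pointer-value (car entry))
                collect entry
              else do (funcall (cdr entry)))))
```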

The second approach requires more synchronization with the GC and, for some strategies, may delay objects from being collected, for instance by stipulating that finalizers are executed in topological order, one per garbage collection cycle.

In this post I want to discuss a certain problem related to finalizers I've encountered in an existing codebase. Consider the following code:

(defclass pseudo-stream ()
  ((resource :initarg :resource :accessor resource)))

(defun open-pseudo-stream (uri)
  (make-instance 'pseudo-stream :resource (make-resource uri)))

(defun close-pseudo-stream (object)
  (destroy-resource (resource object)))

(defvar *pseudo-streams* (make-hash-table))

(defun reload-pseudo-streams ()
  (loop for uri in *uris*
        do (setf (gethash uri *pseudo-streams*)
                 (open-pseudo-stream uri))))

The function reload-pseudo-streams may be executed e.g. to invalidate caches. Its main problem is that it leaks resources by not closing the old pseudo stream before opening a new one. If each resource consumes a file descriptor then we'll eventually run out of them.

A naive solution is to close a stream after assigning a new one:

(defun reload-pseudo-streams/incorrect ()
  (loop for uri in *uris*
        for old = (gethash uri *pseudo-streams*)
        do (setf (gethash uri *pseudo-streams*)
                 (open-pseudo-stream uri))
           (close-pseudo-stream old)))

This solution is not good enough because it is prone to race conditions. In the example below, the old stream (which gets closed) may still be referenced after the new one is put in the hash table.

(defun nom-the-stream (uri)
  (loop
    (let ((stream (gethash uri *pseudo-streams*)))
      (some-long-computation-1 stream)
      ;; reload-pseudo-streams/incorrect called, the stream is closed
      (some-long-computation-2 stream) ;; <-- aaaa
      )))

This is a moment when you should consider abandoning the function reload-pseudo-streams/incorrect and using a finalizer. The new version of the function open-pseudo-stream destroys the resource only when the stream is no longer reachable, so the function nom-the-stream can safely nom.

When the finalizer accepts the object as an argument then it is enough to register the function close-pseudo-stream. Otherwise, since we can't close over the stream, we close over the resource and open-code destroying it.

(defun open-pseudo-stream (uri)
  (let* ((resource (make-resource uri))
         (stream (make-instance 'pseudo-stream :resource resource)))

    #+trivial-garbage ;; closes over the resource (not the stream)
    (flet ((finalizer () (destroy-resource resource)))
      (set-finalizer stream #'finalizer))

    #+another-garbage ;; doesn't close over anything
    (set-finalizer stream #'close-pseudo-stream)

    stream))

Story closed, the problem is fixed. It is late Friday afternoon, so we eagerly push the commit to the production system and leave home with a warm feeling of fulfilled duty. Two hours later all hell breaks loose and the system fails. The problem is the following function:

(defun run-client (stream)
  (assert (pseudo-stream-open-p stream))
  (loop for message = (read-message stream)
        do (process-message message)
        until (eql message :server-closed-connection)
        finally (close-pseudo-stream stream)))

The resource is released twice! The first time when the function run-client closes the stream and the second time when the stream is finalized. A fix for this issue depends on the finalization strategy:

#+trivial-garbage ;; just remove the reference
(defun close-pseudo-stream (stream)
  (setf (resource stream) nil))

#+another-garbage ;; remove the reference and destroy the resource
(defun close-pseudo-stream (stream)
  (when-let ((resource (resource stream)))
    (setf (resource stream) nil)
    (destroy-resource resource)))

With this, closing the stream doesn't interfere with the finalization. Hurray! Hopefully nobody noticed; it was late Friday afternoon, after all. This little incident taught us to never push code before testing it.

We build the application from scratch, test it a little and... it doesn't work. After some investigation we find the culprit: a function that creates a new stream with the same resource and closes it.

(defun invoke-like-a-good-citizen-with-pseudo-stream (original-stream fn)
  (let* ((resource (resource original-stream))
         (new-stream (make-instance 'pseudo-stream :resource resource)))
    (unwind-protect (funcall fn new-stream)
      (close-pseudo-stream new-stream))))

Thanks to our previous provisions, closing the stream doesn't collide with finalization; however, the resource is destroyed for each finalized stream because it is shared between distinct instances.

When the finalizer accepts the collected object as an argument then the solution is easy because all we need is to finalize the resource instead of the pseudo stream (and honestly we should do it from the start!):

#+another-garbage
(defun open-pseudo-stream (uri)
  (let* ((resource (make-resource uri))
         (stream (make-instance 'pseudo-stream :resource resource)))
    (set-finalizer resource #'destroy-resource)
    stream))

#+another-garbage
(defun close-pseudo-stream (stream)
  (setf (resource stream) nil))

When the finalizer doesn't accept the object, we need to do a trick and finalize a shared wrapper instead of the verbatim resource. This has the downside that we always need to unwrap it when the resource is used.

#+trivial-garbage
(defun open-pseudo-stream (uri)
  (let* ((resource (make-resource uri))
         (wrapped (list resource))
         (stream (make-instance 'pseudo-stream :resource wrapped)))
    (flet ((finalize () (destroy-resource resource)))
      (set-finalizer wrapped #'finalize))
    stream))

#+trivial-garbage
(defun close-pseudo-stream (stream)
  (setf (resource stream) nil))
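Since the resource slot now holds the wrapper rather than the resource itself, every reader has to unwrap it. A hypothetical accessor (the name is mine, not from the original code) might look like:

```lisp
#+trivial-garbage
(defun stream-resource (stream)
  ;; The slot holds the (LIST resource) wrapper, or NIL once the
  ;; stream has been closed.
  (let ((wrapped (resource stream)))
    (and wrapped (first wrapped))))
```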

When writing this post I got a little too enthusiastic and dramatized about the production systems, but it is a fact that I once proposed a fix similar to the first finalization attempt in this post, and when it got merged it broke the production system. That didn't last long though, because the older build was redeployed almost immediately. Cheers!

Eitaro Fukamachi: Day 1: Roswell, as a Common Lisp implementation manager

· 49 days ago

This is my first public article in English. I've been sending out newsletters about what I've been doing only to sponsors, but there have been requests to publish my know-how on my blog, so I'm writing this way.

However, my English skills are still developing, so I can't suddenly deliver a lot of information at once. So instead, I'm going to start writing fragments of knowledge in the form of technical notes, little by little. The articles may not be in order. But I suppose each one would help somehow as a tip for your Common Lisp development.

When I thought about what to start from, "Roswell" seemed appropriate, because most of the topics I want to cover depend on it.

It's been six years since Roswell was born. Although its usage has been expanding, I still feel that Roswell is underestimated, especially in the English-speaking community.

Not because of you. I think a lot of the reason for this is that the author is Japanese, like me, and has neglected to send out information in English.

If you are not familiar with Roswell or have tried it before but didn't get as much use out of it as you wanted, I hope this article will make you interested.

What's Roswell

Roswell has the following features:

  • Install Common Lisp implementations of specific versions and switch between them as needed
  • Install libraries from GitHub
  • Common Lisp scripting (aka. Roswell script)
  • Enthusiastic CI support

It would be too much work to explain everything in a single article, so today I will start with the first one: installation of Common Lisp implementations.

Installation

See the official installation guide.

Installation of Common Lisp implementations

To install implementations with Roswell, use its "install" subcommand.

$ ros help install
Usage:

To install a new Lisp implementaion:
   ros install impl [options]
or a system from the GitHub:
   ros install fukamachi/prove/v2.0.0 [repository... ]
or an asdf system from quicklisp:
   ros install quicklisp-system [system... ]
or a local script:
   ros install ./some/path/to/script.ros [path... ]
or a local system:
   ros install ./some/path/to/system.asd [path... ]

For more details on impl specific options, type:
   ros help install impl

Candidates impls for installation are:
abcl-bin
allegro
ccl-bin
clasp-bin
clasp
clisp
cmu-bin
ecl
mkcl
sbcl-bin
sbcl-head
sbcl
sbcl-source

For instance, SBCL, currently the most popular implementation, can be installed with sbcl-bin or sbcl.

# Install the latest SBCL binary
$ ros install sbcl-bin

# Install the SBCL 2.1.7 binary
$ ros install sbcl-bin/2.1.7

# Build and install the latest SBCL from the source
$ ros install sbcl

Since Roswell's author builds and hosts his own SBCL binaries, it can install more binary versions than the official distribution supports. So in most cases, you can just run ros install sbcl-bin/<version> to install a specific version of SBCL.

After installing a new Lisp, it will automatically become the active one. To switch implementations/versions, the ros use command is available.

# Switch to SBCL 2.1.7 binary version
$ ros use sbcl-bin/2.1.7

# Switch to ECL of the latest installed version
$ ros use ecl

To see what implementations/versions are installed, ros list installed is available.

$ ros list installed
Installed implementations:

Installed versions of ecl:
ecl/21.2.1

Installed versions of sbcl-bin:
sbcl-bin/2.1.7
sbcl-bin/2.1.9

Installed versions of sbcl-head:
sbcl-head/21.9.21

To check the active implementation, run ros run -- --version.

# Print the active implementation and its version
$ ros run -- --version
SBCL 2.1.7

Run REPL with Roswell

To start a REPL, execute ros run.

# Start the REPL of the active Lisp
$ ros run

# Start the REPL of a specific implementation/version
$ ros -L sbcl-bin/2.1.7 run

"sbcl" command needed?

For those of you who have been installing SBCL from a package manager, the lack of the sbcl command may be disconcerting. Some people rely on the "sbcl" command in their editor settings. As a workaround, a command such as the following installs an "sbcl" wrapper:

# Installation of "sbcl" command at /usr/local/bin/sbcl
$ printf '#!/bin/sh\nexec ros -L sbcl-bin run -- "$@"\n' | \
    sudo tee /usr/local/bin/sbcl \
  && sudo chmod +x /usr/local/bin/sbcl

Though once you get used to it, I'm sure you'll naturally start using ros run.

Conclusion

I introduced the following subcommands and options in this article.

  • Subcommand
    • install <impl>
      • Install a new Lisp implementation
    • use <impl>
      • Switch another installed Lisp implementation
    • run
      • Start a REPL
  • Options
    • -L
      • Specify the Lisp implementation to run a command

(Rough) Troubleshooting

If you have a problem like "Roswell worked fine at first but won't work after I updated SBCL," simply delete ~/.roswell .

Roswell writes all related files under that directory: configurations, Lisp implementations, Quicklisp libraries, etc. When the directory doesn't exist, Roswell creates and initializes it implicitly, so it's safe to delete ~/.roswell.

TurtleWare: A curious case of HANDLER-CASE

· 49 days ago

Common Lisp is known among Common Lisp programmers for its excellent condition system. There are two operators for handling conditions: handler-case and handler-bind:

(handler-case (do-something)
  (error (condition)
    (format *debug-io* "The error ~s has happened!" condition)))

(handler-bind ((error
                 (lambda (condition)
                   (format *debug-io* "The error ~s has happened!" condition))))
  (do-something))

Their syntax is different, as are their semantics. The most important semantic difference is that handler-bind doesn't unwind the dynamic state (i.e. the stack) and doesn't return on its own. On the other hand, handler-case first unwinds the dynamic state, then executes the handler and finally returns.

What does it mean? When do-something signals an error, then:

  • handler-case prints "The error ... has happened!" and returns nil
  • handler-bind prints "The error ... has happened!" and does nothing

By "doing nothing" I mean that it does not handle the condition, and the control flow invokes the next visible handler (ultimately the debugger). To prevent that, it is enough to return from a block:

(block escape
  (handler-bind ((error
                   (lambda (condition)
                     (format *debug-io* "The error ~s has happened!" condition)
                     (return-from escape))))
    (do-something)))

With this, it looks at a glance like both handler-case and handler-bind behave in a similar manner. That brings us to the essential part of this post: handler-case is not suitable for printing the backtrace! Try the following:

(defun do-something ()
  (error "Hello world!"))

(defun try-handler-case ()
  (handler-case (do-something)
    (error (condition)
      (trivial-backtrace:print-backtrace condition))))

(defun try-handler-bind ()
  (handler-bind ((error
                   (lambda (condition)
                     (trivial-backtrace:print-backtrace condition)
                     (return-from try-handler-bind))))
    (do-something)))

When we invoke try-handler-case then the top of the backtrace is

1: (TRIVIAL-BACKTRACE:PRINT-BACKTRACE #<SIMPLE-ERROR "Hello world!" {1002D77DD3}> :OUTPUT NIL :IF-EXISTS :APPEND :VERBOSE NIL)
2: ((FLET "FUN1" :IN TRY-HANDLER-CASE) #<SIMPLE-ERROR "Hello world!" {1002D77DD3}>)
3: (TRY-HANDLER-CASE)
4: (SB-INT:SIMPLE-EVAL-IN-LEXENV (TRY-HANDLER-CASE) #<NULL-LEXENV>)
5: (EVAL (TRY-HANDLER-CASE))

While when we invoke try-handler-bind then the backtrace contains the function do-something:

0: (TRIVIAL-BACKTRACE:PRINT-BACKTRACE-TO-STREAM #<SYNONYM-STREAM :SYMBOL SWANK::*CURRENT-DEBUG-IO* {1001860B63}>)
1: (TRIVIAL-BACKTRACE:PRINT-BACKTRACE #<SIMPLE-ERROR "Hello world!" {1002D9CE23}> :OUTPUT NIL :IF-EXISTS :APPEND :VERBOSE NIL)
2: ((FLET "H0" :IN TRY-HANDLER-BIND) #<SIMPLE-ERROR "Hello world!" {1002D9CE23}>)
3: (SB-KERNEL::%SIGNAL #<SIMPLE-ERROR "Hello world!" {1002D9CE23}>)
4: (ERROR "Hello world!")
5: (DO-SOMETHING)
6: (TRY-HANDLER-BIND)
7: (SB-INT:SIMPLE-EVAL-IN-LEXENV (TRY-HANDLER-BIND) #<NULL-LEXENV>)
8: (EVAL (TRY-HANDLER-BIND))

Printing the backtrace of where the error was signaled is certainly more useful than printing the backtrace of where it was handled.
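The reason becomes clear from what handler-case roughly expands into: a handler-bind whose handler immediately performs a non-local exit, so the stack between the signaling point and the handler-case form is already unwound when the handler body runs. A simplified sketch (not the exact macroexpansion; handle is a hypothetical function standing in for the handler body):

```lisp
;; Roughly what (handler-case (do-something) (error (c) (handle c)))
;; behaves like, simplified for illustration.
(block done
  (let (condition)
    (tagbody
       (handler-bind ((error (lambda (c)
                               (setf condition c)
                               (go handler))))  ; GO unwinds the stack
         (return-from done (do-something)))
     handler
       (return-from done (handle condition)))))
```

By the time (handle condition) runs, the frames of do-something are gone, which is exactly why the backtrace printed there is useless.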

This post doesn't exhibit all practical differences between both operators. I hope that it will be useful for some of you. Cheers!

Max-Gerd Retzlaff: uLisp on M5Stack (ESP32): controlling relays connected to I2C via a PCF8574

· 49 days ago

relay module connected to I2C via a PCF8574; click for a larger version (180 kB).

Looking at the data sheet of the PCF8574 I found that it is trivially simple to use it to control a relay board without any lower-level Arduino library: just write a second byte, in addition to the address, directly to the I2C bus with uLisp's WITH-I2C.

Each bit of the byte describes the state of one of the eight outputs, or rather its inverted state, as the PCF8574 has open-drain outputs: setting an output to LOW opens a connection to ground (sinking up to 25 mA), while HIGH disables the relay. (The data sheets actually say they are push-pull outputs, but as a high-level output the maximum current is just 1 mA, which is not much and for this purpose certainly not enough.)

The whole job can basically be done with one or two lines. Here is switching on the fourth relay (that is, number 3 with zero-based counting):

(with-i2c (str #x20)
  (write-byte (logand #xff (lognot (ash 1 3))) str))
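Building on that one-liner, a small helper to switch an individual relay on or off might look like the following. This is my own sketch, not the library from the article: it assumes the PCF8574 answers at address #x20 and tracks the last written byte in a global so that the other relays keep their state.

```lisp
(defvar *relay-state* #xff)  ; all outputs HIGH = all relays off

(defun relay (n state)
  ;; Relays are active-low: clear bit N to switch relay N (0-7) on,
  ;; set the bit to switch it off, then write the whole byte out.
  (setq *relay-state*
        (if state
            (logand *relay-state* (lognot (ash 1 n)))
            (logior *relay-state* (ash 1 n))))
  (with-i2c (str #x20)
    (write-byte *relay-state* str)))
```

With this, (relay 3 t) switches the fourth relay on and (relay 3 nil) switches it off again.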

Here is my whole initial library:

Read the whole article.

TurtleWare: How do you DO when you do DO?

· 53 days ago

In this short post I'll explain my understanding of the following quote describing the iteration construct do:

The Common Lisp do macro can be thought of as syntactic sugar for tail recursion, where the initial values for variables are the argument values on the first function call, and the step values are argument values for subsequent function calls.

-- Peter Norvig and Kent Pitman, Tutorial on Good Lisp Programming Style

Writing a recursive function usually involves three important parts:

  1. The initial values - arguments the programmer passes to the function
  2. The base case - a case when function may return without recurring
  3. The step values - arguments the function passes to itself when recurring

An example of a recursive function is this (inefficient) definition:

(defun fib (n)
  (cond
    ((= n 0) 0)
    ((= n 1) 1)
    (t (+ (fib (- n 1))
          (fib (- n 2))))))

The initial value here is n, the base cases are (= n 0) and (= n 1), and the step values are (- n 1) and (- n 2).

To make a function tail-recursive there is one more important requirement: the recursive call must be in a tail position, that is, it must be the last thing the function does. The definition above is not tail-recursive, because we first need to call the function and then add the results. A proper tail-recursive version requires a little gymnastics:

(defun fib* (n)
  (labels ((fib-inner (n-2 n-1 step)
             (if (= step n)
                 (+ n-2 n-1)
                 (fib-inner n-1
                            (+ n-2 n-1)
                            (1+ step)))))
    (cond
      ((= n 0) 0)
      ((= n 1) 1)
      (t (fib-inner 0 1 2)))))

The initial values are 0, 1 and 2, the base case is (= step n) and the step values are n-1, (+ n-2 n-1) and (1+ step). The function fib-inner is in tail position because there is no more computation after its invocation.

A quick reminder of how do works:

(do ((a 1 (foo a))
     (b 3 (bar b)))
    ((= a b) 42)
  (side-effect! a b))

First assign to a and b the initial values 1 and 3, then check the base case (= a b) and if true return 42; otherwise execute the body (side-effect! a b) and finally update a and b by assigning to them the step values (foo a) and (bar b). Then repeat from checking the base case. The last step can be likened to an implicit tail call of a function. Let's now put it in terms of the function we've defined earlier:

(defun fib** (n)
  (cond
    ((= n 0) 0)
    ((= n 1) 1)
    (t (do ((n-2 0 n-1)
            (n-1 1 (+ n-2 n-1))
            (step 2 (1+ step)))
           ((= step n)
            (+ n-2 n-1))))))

This do form is a direct translation of the function fib-inner defined earlier.

I hope that you've enjoyed this short explanation. If you did then please let me know on IRC - my handle is jackdaniel @ libera.chat.

Joe Marshall: Update October 2021

· 53 days ago

Here's a few things I've been playing with lately.

jrm-code-project/utilities has a few utilities that I commonly use. Included are utilities/lisp/promise and utilities/lisp/stream which provide S&ICP-style streams (lazy lists). utilities/lisp/utilities is a miscellaneous grab bag of functions and macros.

jrm-code-project/homographic is a toolkit for linear fractional transforms (homographic functions). In addition to basic LFT functionality, it provides examples of exact real arithmetic using streams of LFTs.

jrm-code-project/LambdaCalculus has some code for exploring lambda calculus.

jrm-code-project/CLRLisp is an experimental Lisp based on the .NET Common Language Runtime. The idea is that instead of trying to adapt a standard Lisp implementation to run on the .NET CLR, we just add a bare-bones eval and apply that use the CLR reflection layer and see what sort of Lisp naturally emerges. At this point, it only just shows signs of life: there are lambda expressions and function calls, but no definitions, conditionals, etc. You can eval lists: (System.Console.WriteLine "Hello World."), but I haven't written a reader and printer, so it is impractical for coding.

Thomas Fitzsimmons: Mezzano on Librebooted ThinkPads

· 54 days ago

I decided to try running Mezzano on real hardware. I figured my Librebooted ThinkPads would be good targets, since, thanks to Coreboot and the Linux kernel, I have reference source code for all the hardware.

On boot, these machines load Libreboot from SPI flash; included in this Libreboot image is GRUB, as a Coreboot payload.

Mezzano, on the other hand, uses the KBoot bootloader. I considered chainloading KBoot from GRUB, but I wondered if I could have GRUB load the Mezzano image directly, primarily to save a video mode switch.

I didn’t want to have to reflash the Libreboot payload on each modification (writing to SPI flash is slow and annoying to recover from if something goes wrong), so I tried building a GRUB module “out-of-tree” and loading it in the existing GRUB. Eventually I got this working, at which point I could load the module from a USB drive, allowing fast development iteration. (I realize out-of-tree modules are non-ideal so if there’s interest I may try to contribute this work to GRUB.)

The resulting GRUB module, mezzano.mod, is largely the KBoot Mezzano loader code, ported to use GRUB facilities for memory allocation, disk access, etc. It’s feature-complete, so I released it to Sourcehut. (I’ve only tested it on Libreboot GRUB, not GRUB loaded by other firmware implementations.)

Here’s a demo of loading Mezzano on two similar ThinkPads:

GRUB Mezzano module demo

For ease of use, mezzano.mod supports directly loading the mezzano.image file generated by MBuild — instead of requiring that mezzano.image be dd‘d to a disk. It does so by skipping the KBoot partitions to find the Mezzano disk image. The T500 in the video is booted this way. Alternatively, mezzano.mod can load the Mezzano disk image from a device, as is done for the W500 in the video. Both methods look for the Mezzano image magic — first at byte 0 and, failing that, just after the KBoot partitions.

I added the set-i8042-bits argument because Coreboot does not set these legacy bits, yet Mezzano’s PS/2 keyboard and mouse drivers expect them; at this point Mezzano does not have a full ACPI device tree implementation.

Vsevolod Dyomkin: Watching a Model Train

· 55 days ago

Last week, I did a quick hack that quite delighted me: I added a way to visually watch the progress of training my MGL-based neural networks inside Emacs. And then people on twitter asked me to show the code. So, it will be here, but first I wanted to rant a bit about one of my pet peeves.

Low-Tech

In the age of Jupyter and TensorBoard, adding a way to see an image that records the value of a loss function blinking on the screen — "huh, big deal" you would say. But I believe this example showcases a difference between low-tech and high-tech approaches. Just recently I chatted with one of my friends who is entering software engineering at a rather late age (30+), and we talked of how frontend development became even more complicated than backend one (while, arguably, the complexity of tasks solved on the frontend is significantly lower). And that discussion just confirmed to me that the tendency to overcomplicate things is always there, with our pop-culture industry, surely, following it. But I always tried to stay on the simple side, on the side of low-tech solutions. And that's, by the way, one of the reasons I chose to stick with Lisp: with it, you would hardly be forced into some nonsense framework hell, or playing catch-up with the constant changes of your environment, or following crazy "best practices". Lisp is low-tech just like the Unix command-line or vanilla Python or JS. Contrary to the high-tech Rust, Haskell or Java. Everything text-based is also low-tech: text-based data formats, text-based visualization, text-based interfaces.

So, what is low-tech, after all? I saw the term popularized by Kris De Decker from the Low-Tech Magazine, which focuses on using simple (perhaps, outdated by some standards) technologies for solving serious engineering problems. Most people, and the software industry is no exception, are after high-tech, right? Progress of technology enables solving more and more complex tasks. And, indeed, that happens. Sometimes, not always. Sometimes, the whole thing crumbles, but that's a different story. Yet, even when it happens, there's a catch, a negative side-effect: the barrier of entry rises. If 5 or 10 years ago it was enough to know HTML, CSS, and JavaScript to be a competent frontend developer, now you have to learn a dozen more things: convoluted frameworks, complicated deploy toolchains, etc., etc. Surely, sometimes it's inevitable, but it really delights me when you can avoid all the bloat and use simple tools to achieve the same result. OK, maybe not completely the same, maybe not a perfect one. But good enough. The venerable 80% solution that requires 20% effort.

Low-tech is not low-quality, it's low-barrier of entry.

And I would argue that, in the long run, better progress in our field will be made if we strive towards lowering the bar to more people in, than if we continue raising it (ensuring our "job security" this way). Which doesn't mean that the technologies should be primitive (like BASIC). On the contrary, the most ingenious solutions are also the simplest ones. So, I'm going to continue this argument in the future posts I'd like to write about interactive programming. And now, back to our hacks.

Getting to Terms with MGL

In my recent experiments I returned to MGL — an advanced, although pretty opinionated, machine learning library by the prolific Gabor Melis — for playing around with neural networks. Last time, a few years ago, I stumbled when I tried to use it to reproduce a very advanced (by that time's standards) recurrent neural network and failed. Yet, before that, I was very happy using it (or rather, its underlying MGL-MAT library) for running in Lisp (in production) some of the neural networks that were developed by my colleagues. I know it's usually the other way around: Lisp for prototyping, some high-tech monstrosity for production, but we managed to turn the tides for some time :D

So, this time, I decided to approach MGL step by step, starting from simple building blocks. First, I took on training a simple feed-forward net with a number of word inputs converted to vectors using word2vec-like approach.

This is the network I created. Jumping slightly ahead, I've experimented with several variations of the architecture, starting from a single hidden layer MLP, and this one worked the best so far. As you see, it has 2 hidden layers (l1/l1-l and l2/l2-l) and performs 2-class classification. It also uses dropout after each of the layers as a standard means of regularization in the training process.


(defun make-nlp-mlp (&key (n-hidden 100))
  (mgl:build-fnn (:class 'nlp-mlp)
    (in (->input :size *input-len*))
    (l1-l (->activation in :size n-hidden))
    (l1 (->relu l1-l))
    (d1 (->dropout l1 :dropout 0.5))
    (l2-l (->activation d1 :size (floor n-hidden 2)))
    (l2 (->relu l2-l))
    (d2 (->dropout l2 :dropout 0.5))
    (out-l (->activation d2 :size 2))
    (out (->softmax-xe-loss out-l))))

MGL model definition is somewhat different from the approach one might be used to with Keras or TF: you don't imperatively add layers to the network, but, instead, you define all the layers at once in a declarative fashion. A typical Lisp style it is. Yet, what still remains not totally clear to me is the best way to assemble layers when the architecture is not a straightforward one-directional or recurrent one, but combines several parts in nonstandard ways. That's where I stumbled previously. I plan to get to that over time, but if someone has good examples already, I'd be glad to take a look at those. Unfortunately, despite the proven high quality of MGL, there's very little open-source code that uses it.

Now, to make the model train (and to watch it), we have to pass it to mgl:minimize alongside a learner:


(defun train-nlp-fnn (&key data (batch-size 100) (epochs 1000) (n-hidden 100)
                       (random-state *random-state*))
  (let ((*random-state* random-state)
        (*agg-loss* ())
        (opt (make 'mgl:segmented-gd-optimizer
                   :termination (* epochs batch-size)
                   :segmenter (constantly
                                (make 'mgl:adam-optimizer
                                      :n-instances-in-batch batch-size))))
        (fnn (make-nlp-mlp :n-hidden n-hidden)))
    (mgl:map-segments (lambda (layer)
                        (mgl:gaussian-random!
                         (mgl:nodes layer)
                         :stddev (/ 2 (reduce '+ (mgl:mat-dimensions (mgl:nodes layer))))))
                      fnn)
    (mgl:monitor-optimization-periodically
     opt
     `((:fn mgl:reset-optimization-monitors :period ,batch-size :last-eval 0)
       (:fn draw-test-error :period ,batch-size)))
    (mgl:minimize opt (make 'mgl:bp-learner
                            :bpn fnn
                            :monitors (mgl:make-cost-monitors
                                       fnn :attributes `(:event "train")))
                  :dataset (sample-data data (* epochs batch-size)))
    fnn))

This code is rather complex, so let me try to explain each part.

  • We use (let ((*random-state* random-state)) ...) to ensure that we can reproduce training in exactly the same way if needed.
  • mgl:segmented-gd-optimizer is a class that allows us to specify a different optimization algorithm for each segment (layer) of the network. Here we use the same standard mgl:adam-optimizer with vanilla parameters for each segment (constantly).
  • The following mgl:map-segments call performs the Xavier initialization of the layers. It is crucial to properly initialize the layers of the network before training, or, at least, to ensure that they are not all set to zeroes.
  • The next part is, finally, responsible for WATCHING THE MODEL TRAIN. mgl:monitor-optimization-periodically is a hook to make MGL invoke some callbacks that will help you peek into the optimization process (and, perhaps, do other needful things). That's where we insert our draw-test-error function, which will run once per batch. There's also an out-of-the-box cost-monitor attached directly to the mgl:bp-learner, which collects the data for us and also prints it to the screen. I guess we could build the draw-test-error monitor in a similar way, but I opted for my favorite Lisp magic wand — a special variable *agg-loss*.
  • And last but not least, we need to provide the dataset to the model: (sample-data data (* epochs batch-size)). The simple approach that I use here is to pre-sample the necessary number of examples beforehand. However, streaming sampling may also be possible with a different dataset-generating function.

Now, let's take a look at the function that is drawing the graph:


(defun draw-test-error (opt learner)
  ;; here, we print out the architecture and parameters of
  ;; our model and learning algorithm
  (when (zerop (mgl:n-instances opt))
    (describe opt)
    (describe (mgl:bpn learner)))
  ;; here, we rely on the fact that there's
  ;; just a single cost monitor defined
  (let ((mon (first (mgl:monitors learner))))
    ;; using some of RUTILS' syntax sugar here to make the code terser
    (push (pair (+ (? mon 'counter 'denominator)
                   (if-it (first *agg-loss*)
                          (lt it)
                          0))
                (? mon 'counter 'numerator))
          *agg-loss*)
    (redraw-loss-graph)))

(defun redraw-loss-graph (&key (file "/tmp/loss.png") (smoothing 10))
  (adw-charting:with-chart (:line 800 600)
    (adw-charting:add-series "Loss" *agg-loss*)
    (adw-charting:add-series
     (fmt "Smoothed^~a Loss" smoothing)
     (loop :for i :from 0
           :for off := (* smoothing (1+ i))
           :while (< off (length *agg-loss*))
           :collect (pair (? *agg-loss* (- off (floor smoothing 2)) 0)
                          (/ (reduce ^(+ % (rt %%))
                                     (subseq *agg-loss* (- off smoothing) off)
                                     :initial-value 0)
                             smoothing))))
    (adw-charting:set-axis :y "Loss" :draw-gridlines-p t)
    (adw-charting:set-axis :x "Iteration #")
    (adw-charting:save-file file)))

Using this approach, I could also draw the change of the validation loss on the same graph. And I'll do that in the next version.

ADW-CHARTING is my go-to library when I need to draw a quick-and-dirty chart. As you can see, it is very straightforward to use and doesn't require much explanation. I've looked into a couple of other charting libraries and liked their demo screenshots (probably more than ADW-CHARTING's style), but there were some blockers that prevented me from switching to them. Maybe next time I'll have more inclination.

To complete the picture, we now need to display our learning progress not just as text running in the console (produced by the standard cost-monitor), but also by updating the graph. This is where Emacs' nature as a swiss-army knife for any interactive workflow comes into play. Sure enough, there is already an existing auto-revert-mode that updates the contents of an Emacs buffer on any change or periodically. For my purposes, I've added these lines to my Emacs config:


(setq auto-revert-use-notify nil)
(setq auto-revert-interval 6)  ; refresh every 6 seconds

Obviously, this can be abstracted away into a function which could be invoked by pressing some key or upon other conditions occurring.
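For instance, such a helper could look like this (a sketch of my own; the function name and key binding are invented, and the file path matches the redraw-loss-graph default above):

```elisp
;; Hypothetical helper: open the loss graph in a buffer and keep it
;; refreshing via auto-revert-mode, using the settings configured above.
(defun my/watch-loss-graph ()
  "Open the training loss graph and auto-revert it periodically."
  (interactive)
  (find-file "/tmp/loss.png")
  (auto-revert-mode 1))

;; Bind it to a key for convenience:
(global-set-key (kbd "C-c l") #'my/watch-loss-graph)
```

With this in place, a single keystroke brings up the graph, and Emacs keeps redrawing it as the Lisp process overwrites the file.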

Nicolas HafnerPatch update, holidays, and the GIC - October Kandria Update

· 63 days ago
https://filebox.tymoon.eu//file/TWpNM013PT0=

A shorter monthly update for once, as this month included two weeks of holidays, in which we were — morally, legally, and by higher powers unbeknownst to us — forbidden from working on Kandria. Regardless, progress was made, and news is here to be shared, so an update was written!

Kandria Patch Update

A patch update is live that fixes the issues that were reported to us through the automated crash system. You can get the update from the usual link on the mailing list. I'm happy to say that there were not a lot of these to fix!

Of course, if you haven't yet had time to check out the new demo release, we hope you'll do so soon! We're always very excited to hear people's thoughts on what we have so far.

Holidays

We don't have too much to show for this update, as we had a nice two weeks of holiday. I, for my part, spent some days down in Italy, which was really nice. Great weather for the most part, and it was great to have a proper change of scenery and schedule for once!

https://filebox.tymoon.eu//file/TWpNM05BPT0=

Because I'm me and can't help it, I did do some work as well during my holidays. I made some good progress on a project I've had in the slow cooker for years now, which will ultimately also be useful for Kandria, to support its modding system. But, more importantly to me, I finally got back into drawing more regularly again and made some pieces I'm actually quite happy with.

Wow! If what I wrote sounded any more confident, I'd have to mistake it for a toilet paper advertisement!

Game Industry Conference

Later this month is the Game Industry Conference, in which we'll be taking part. Once again, sponsored thanks to Pro Helvetia! I submitted a talk for the conference and actually got accepted for it as well, so I'll be presenting there in person. I don't know the exact date of my talk yet, but I'll be sure to announce it ahead of time on Twitter as soon as I do know.

If you're in Poland or are attending the conference yourself, let me know! I'd be happy to meet up!

Tim

This was a shorter month for me as I was on holiday for weeks. However, I've been researching key streamers and influencers that I'd previously highlighted for us, since they'd covered games similar to Kandria, and sending out emails to promote the new demo. Nick got some great feedback from Chris Zukowski on how to format these, which involved finding a hook in an influencer's content that I could latch onto, to show I'd actually engaged with their content. Nick also fed back on the horizontal slice quest designs I've been doing, which map out the rest of the narrative to the end of the game. This has been great to get some new eyes and steers on, and will help tighten up the content and manage the scope.

Fred & Mikel

These two have already started into content for the new areas. We're currently hashing out the look and feel, for which the second region is already getting close to final. We don't have any screenshots or music to show you yet though, you'll have to be a bit more patient for that.

Roadmap

As always, let's look at the roadmap from last month.

  • Fix reported crashes and bugs

  • Add telemetry to allow detecting problematic gameplay behaviour

  • Add more stats tracking for later achievements and ranking

  • Allow changing press actions to toggle actions in the input mapper

  • Add unlockable items and a fishing encyclopedia

  • Implement RPG mechanics for levelling and upgrades

  • Explore platforming items and mechanics

  • Practise platforming level design

  • Draft out region 2 main quest line levels and story

  • Draft out region 3 main quest line levels and story

  • Complete the horizontal slice

Oops! None of the items we had on there last time changed yet. But, some other important things were added and fixed already. Anyway, we'll start to get to the other things this month.

Until that's ready, I sincerely hope you give the new demo a try if you haven't yet. Let us know what you think if you do or have already!

Eric TimmonsNew Project: cl-tar

· 74 days ago

I have just published the first release of a new project: cl-tar. This was supposed to be my summer side-project, but it ran long as they often do :).

The goal of this project is to provide a Common Lisp interface to tar archives. It has its foundations in Nathan Froyd's archive library, but has been significantly extended and improved.

cl-tar-file

There are actually two subprojects under the cl-tar umbrella. The first is cl-tar-file, which provides the ASDF system and package tar-file. This project provides low-level access to physical entries in tar files. As a consequence, two tar files that extract to the same set of files on your filesystem may have two very different sets of entries from tar-file's point of view, depending on the tar format used (PAX vs ustar vs GNU vs v7).

The cl-tar-file project is technically a fork of archive, except that all non-portable bits have been removed (such as code to create symlinks), better support for the various archive variants has been added, better blocking support has been added (tar readers/writers are supposed to read/write in some multiple of 512 bytes), cpio support has been removed, and a test suite has been added, along with other miscellaneous fixes and improvements.

cl-tar

The second sub project is cl-tar itself, which provides three ASDF systems and packages: tar, tar-simple-extract, and tar-extract.

The tar system provides a thin wrapper over the tar-file system that operates on logical entries in tar files. That is, a regular file is represented as a single entry, no matter how many entries it is composed of in the actual bits that get written to the tar file. This system is useful for analyzing a tar file or creating one from data that does not come directly from the file system.

The tar-simple-extract system provides a completely portable interface to extract a tar archive to your file system. The downside of portability is that there is information loss. For example, file owners, permissions, and modification times cannot be set. Additionally, symbolic links cannot be extracted as symbolic links (but they can be dereferenced).

The tar-extract system provides a more lossless extraction capability. The downside of being lossless is that it is more demanding (osicat must support your implementation and OS) and it raises security concerns.

A common security concern is that a malicious tar file can extract a symlink that points to an arbitrary location in your filesystem and then trick you into overwriting files at the location by extracting later files through that symlink. This system tries its best to mitigate that (but makes no guarantees), so long as you use its default settings. If you find a bug that allows an archive to extract to an arbitrary location in your filesystem, I'd appreciate it if you report it!
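To illustrate the lexical half of that class of defense (this is a sketch of my own, not cl-tar's actual mitigation logic, and the function name is invented): a first line of defense is rejecting entry names that are absolute or that climb out of the extraction root via ".." components, which is what attacks like "link/../../etc/passwd" rely on.

```lisp
;; Sketch only: a lexical path-safety check, not tar-extract's real
;; implementation.  It rejects absolute entry names and any ":up"
;; component, the simplest guard against path-traversal entries.
;; (Symlink-based attacks additionally require checks at extraction
;; time, after each link is created.)
(defun entry-name-safe-p (name)
  "Return true if tar entry NAME stays within the extraction root."
  (let ((path (uiop:parse-unix-namestring name)))
    (and (uiop:relative-pathname-p path)
         (notany (lambda (component)
                   (member component '(:up :back)))
                 (pathname-directory path)))))

;; (entry-name-safe-p "docs/readme.txt") => T
;; (entry-name-safe-p "../outside")      => NIL
```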

Also note that tar-extract currently requires a copy of osicat that has the commits associated with this PR applied.

next steps

First, close the loop on the osicat PR. It started off as a straightforward PR that just added new functions. However, when I tested on Windows, I realized I couldn't load osicat. So I added a commit that fixed that. There may be some feedback and changes requested on how I actually accomplished that.

Second, integrate tar-extract into CLPM. CLPM currently shells out to a tar executable to extract archives. I'd like to use this pure CL solution instead. Plus, using it with CLPM will act as a stress test by exposing it to many tar files.

Third, add it to Quicklisp. tar-extract won't compile without the osicat changes, so those definitely need to be merged first. Additionally, I want to have at least some experience with real world tar files before making this project widely available.

Fourth, add support for creating archives from the filesystem.

Fifth, add the ability to compile to an executable so you could use this in place of GNU or BSD tar :).

If the fourth and fifth steps excite you, I'd love to have your help making them a reality! They're not on my critical path for anything at the moment, so it'll likely be a while before I can get to them.

Eric TimmonsCLPM 0.4.0 released

· 84 days ago

I have tagged CLPM 0.4.0 and posted the build artifacts at https://files.clpm.dev/clpm/. This release brings quite the laundry list of bug fixes and enhancements, including the much awaited Mac M1 support. The full changelog summary is below the break.

Additionally, the burgeoning CLPM community now has more spaces to interact. If you're interested in learning about or getting help on CLPM, I encourage you to join #clpm on Libera.chat. We have a Matrix room as well (clpm:matrix.org), but the Libera room is currently more active and preferred.

If you are already using CLPM, I encourage you to subscribe to the clpm-announce mail list. This is a low traffic list where new releases will be announced.

  • Changed layout of release tarballs.
  • Published tarballs now contain a static executable (#11).
  • No longer using the deploy library to build releases (#15 #11).
  • Updated build script to more easily build static or dynamic executables (#11).
  • Fixed bug in computing the source-registry.d file for the clpm client (#16)
  • Starting to build up a test suite (#3)
  • Added automated testing on Gitlab CI.
  • Added clpm-client:*activate-asdf-integration* to control default integration with ASDF upon context activation.
  • The default directories for CLPM cache, config, and data have changed on Windows. They are now %LOCALAPPDATA%\clpm\cache\, %LOCALAPPDATA%\clpm\config\, and %LOCALAPPDATA%\clpm\data\.
  • Added new config option (:grovel :lisp :command). This string is split via shlex into a list of arguments that can be used to start the child lisp process.
  • Deprecated (:grovel :lisp :path) in favor of (:grovel :lisp :command).
  • Added new value for (:grovel :lisp :implementation) - :custom. When :custom is used, no arguments are taken from the lisp-invocation library, the user must specify a command that completely starts the child lisp in a clean state.
  • Better support for using MSYS2's git on Windows.
  • Support for Mac M1 (#20).
  • Fixed bug causing groveling to take an inordinately long time for systems with :defsystem-depends-on or direct calls to asdf:load-system in their system definition files (!9).
  • Fixed bug causing unused git and asd directives to linger in clpmfile.lock (#32).
  • Add support for bare git repos in clpmfile (not from Github or Gitlab) (#22).
  • Add clpmfile :api-version "0.4". Remains backwards compatible with 0.3. (#22).
  • Fix bug saving project metadata on Windows.
  • Fix client's UIOP dependency to better suit ECL's bundled fork of ASDF.
  • Fix issue READing strings in client from a lisp that is not SBCL (#13).
  • Parse inherited CL_SOURCE_REGISTRY config in client using ASDF (#14).

Nicolas Hafner0.2.2 Demo Release, Gamescom & more - September Kandria Update

· 92 days ago
https://filebox.tymoon.eu//file/TWpNMU5BPT0=

Big news this month: there's a new free demo release (!!), news on the Pro Helvetia grant and on Gamescom, and a lot of stuff to look at that we worked on!

0.2.2 Demo Release

There's a new demo release for Kandria that you can now get for free! If you're on our mailing list, you should have gotten a download link already. There's a lot of changes for this version since the last release in April, but let me highlight a few of the big ones:

  • Major revisions to all the dialogue and quest logic to make it flow better

  • A proper tutorial sequence to introduce the controls and the game's world

  • UI for everything, including a map, main menu, quest log, etc

  • A new fishing minigame to relax and take in the world

  • Major changes to the combat to improve the flow and feel

  • Lots and lots of bugfixes and tweaks based on user feedback. Thank you!

  • Custom sound effects for everything

  • Custom music, including horizontal mixing of music tracks!

The game is now not only on Steam, but also on Itch.io, if you prefer to follow updates on that platform!

Again, to get the new demo, sign up here: https://kandria.com/prototype

Pro Helvetia Grant

The Pro Helvetia Interactive Media Grant deadline was on the 1st, and we've submitted our stuff for it! I'm quite happy with the game design pitch doc that we put together, but since there's probably quite a few very capable studios applying for the grant, who knows whether we'll have any luck with it.

I'm definitely keeping my fingers crossed for it, as getting the grant would not only be very important for us financially, but also be a huge boost in confidence, to have support like that. It would also make negotiations with publishers easier, as another organisation has then already given their vote of confidence. Getting the first foot in the door like that is always the hardest!

Anyway, it'll probably take a few months before we know anything, so there's no point in worrying about the result of it now, we'll just keep on trucking in the meantime.

Gamescom, Devcom, IndieArenaBooth

So, Gamescom was this month, which took up a week with meetings and such. We've contacted a bunch of publishers and met with a few that expressed interest in Kandria as well, which is cool. I'm not expecting anything of course, competition in this area is extremely fierce, but it's nice that we're at least getting warm receptions and actual interest.

They'll look at the new demo release now and hopefully get back to us on what they think of the game in the next few weeks. We'll be sure to let you know if we hear anything about that!

Aside from that, Kandria and Eternia both were part of the Steam listing for Gamescom, which gave us a nice spike on our wishlist numbers:

https://filebox.tymoon.eu//file/TWpNMU13PT0=

We definitely need a bunch more spikes like that though, so I'm keeping my eyes open for other festival opportunities like that!

UI

This month was a "UI month" and as such there's a lot of changes for that. Overall I'm really glad we had time for this finally, as it really improved the overall polish of the game by a lot.

There's now a main menu and load screen:

A handy shop UI:

An in-game map showing you where you and NPCs are, as well as where you've gone:

Finally a way to change key mappings to your liking:

And all sorts of other improvements to things like the item menu, options menus, etc.

Tim

Since this was the last month before the Pro Helvetia submission and new demo, I've spent a lot of time playtesting the questline, reporting and fixing bugs, and just polishing the content as much as possible - such as adding descriptions for all items and fish, and a first-pass economy for when you buy/sell with Sahil. I also proofed Nick's pitch doc and did a little market research for this on similar games. Since we want to make a bit of a fuss over this new demo, I've also researched reddit and prepped a couple of stories, which we could post to announce the release.

Since Gamescom also happened this month, I filtered the list of attending publishers and researched them, trying to find those who'd be most suitable for our game. Nick was then able to use this to prioritise meetings.

Finally, I've started to look beyond the demo to the horizontal slice, which we're planning to tackle next. We'd already done some work outlining the remaining acts of the story (the current demo is essentially act 1 of 5), writing some backstory and lore for the other factions and regions (which Fred has been concepting); but now we need to bring these acts to life with actual mainline quest content. I'm still in the middle of this, but I've already made a good start at planning out the main quests for acts 2 and 3. Next I'll be getting feedback on these from the team, but already it's nice to be pushing back the fog of war, and defining the rest of the story in more concrete terms.

Fred

Fred's been working on the horizontal slice content already, doing concept work for the second and third areas:

https://filebox.tymoon.eu//file/TWpNMU1RPT0=https://filebox.tymoon.eu//file/TWpNMU1nPT0=

We're trying to keep all the areas very visually distinct, and I think that's working out quite well so far! I'm really excited to see it in-game.

Mikel

Mikel's got all the necessary tracks for the demo finished in record time, and in the past week has been going over all of them to revise them and make them fit even better to the game's feel. Here's some of the tracks:

Each of these tracks has many variants that adapt to the mood of the game in the moment, so there's a lot more work behind this than might first be apparent!

Cai

Cai's been hard at work implementing all the needed sound effects that I'd been slacking on so far. We now have sounds for almost every interaction in the game, and it has contributed a ton to make the game feel more alive. We even have stuff like distinguished footsteps based on the terrain you're walking on.

There's still a few revisions left to be done, but otherwise this first batch of sounds got done really well. We'll probably wait a bit more until we do another batch, as we have to focus on actually making some more content, first.

Roadmap

And with that, let's look at the roadmap from last month with the updates from this month added on top:

  • Implement a main menu, map, and other UI elements

  • Implement a load screen and fix UI issues

  • Create and integrate new sound effects for 90% of the interactions in the game

  • Start work on the horizontal slice

  • Complete and revise all the music tracks for the current regions

  • Update marketing materials like the capsule image

  • Start work on modding integration

  • Implement RPG mechanics for levelling and upgrades

  • Explore platforming items and mechanics

  • Practise platforming level design

  • Draft out region 2 main quest line levels and story

  • Draft out region 3 main quest line levels and story

  • Complete the horizontal slice

But, before we get to all that we got a well deserved two weeks of holidays ahead of us. Once we're all back though, I'm sure we'll get that horizontal slice knocked out in no time!

In the meantime, I sincerely hope you give the new demo a try, and let us know what you think if you do!

Eric TimmonsToward a New CL Project Index

· 92 days ago

Quicklisp has had a profound impact on the CL community. It's transformed the way CL devs share libraries, made it easier and encouraged devs to re-use existing code instead of implementing everything in house, and is widely used. While Quicklisp took the CL community a huge step forward, I nevertheless think we can and should do better.

To that end, I've been working on two interlinked projects, CLPM and the Common Lisp Project Index (CLPI). I've posted about CLPM in various places before and awareness of it is already growing in the CL community. Therefore, this post will focus on CLPI and why I think it is important. My ultimate goal is to find like-minded people to collaborate with on bringing CLPI (or something similar) to reality.

I've been meaning to make a post like this for a while, but life kept putting it on the back burner. However, I've recently found more CLPM users in the wild, which always gets my energy levels up for this type of work. Plus, discussions I've seen in various Lisp forums (including this tweet that was brought to my attention) have made me think that the time may finally be ripe to start discussing this topic more broadly.

Before continuing, I want to make clear that I have the utmost respect for Xach, Quicklisp, and the services he provides to the community. This post does critique what is probably Xach's most well known work, but it is by no means an attack against either QL or him and I will not tolerate any comments or discussion that cross that line.

What is a Project Index?

First, let's clarify at a high level what I mean by a project index. Basically, a project index is a listing of projects and ASDF systems. For every project, it contains information on what releases are available (and how to actually get the code), along with what systems are included in each release and what the dependencies of those systems are.

A project index lets you quickly answer questions like "what is the latest version of cffi?", "what are the dependencies of the latest version of cffi?", or "where can I download the latest version of ironclad?" without needing to load any code.
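To make that concrete, here is a toy illustration of the kind of query an index answers (the data layout and names here are invented for illustration; the dependency list is abbreviated and this is not the CLPI or QL format):

```lisp
;; Toy in-memory "index": a list of project plists, each with its
;; releases.  Data is invented; a real index would be read from files.
(defparameter *index*
  '((:project "cffi"
     :releases ((:version "0.24.1"
                 :url "https://example.org/cffi-0.24.1.tgz"
                 :systems ((:name "cffi"
                            :deps ("alexandria" "babel"))))))))

(defun latest-release (project-name)
  "Return the plist of the newest release of PROJECT-NAME, or NIL."
  (let ((project (find project-name *index*
                       :key (lambda (p) (getf p :project))
                       :test #'string=)))
    (first (getf project :releases))))

;; (getf (latest-release "cffi") :version) => "0.24.1"
```

The point is that answering such questions requires only this metadata, never compiling or loading the projects themselves.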

Quicklisp Issues

Now let's look at what I consider to be flaws of the Quicklisp project index model.

  1. Conflation of project manager and project index. When I mention Quicklisp, what do you think of first? Perhaps the quicklisp-client that gets loaded into your image and provides ql:quickload? Or is it the distinfo.txt, systems.txt, and releases.txt files that contain all the projects known to Quicklisp?
    The problem is that it's both! I think that there needs to be a clear separation between the project index (distinfo.txt and friends) and the consumers of the project index (quicklisp-client). Such a separation both makes it clearer to what is being referred in casual conversation, and makes it easier to build competing consumers or servers of the project index.

  2. The project index format is not documented. I believe this is a consequence of the previous issue. To the best of my knowledge, the only documentation of the QL project index format is the quicklisp client code that interacts with it. This makes it harder to implement both competing clients (I had to do quite a bit of code diving to get CLPM to understand the QL project index) and competing servers (there exist several forks of the quickdist project, yet none of them seem to create the dist versions file that CLPM needs).

  3. Not a lot of data is provided. The QL project index does not contain critical information, such as license, ASDF system version number, location of the canonical VCS repo, or author names.

  4. Not easily extensible. The only way to include more information in a QL project index is to add more files. Information cannot be added to releases.txt nor systems.txt without breaking the QL client. Additionally, if the current aesthetics are to be maintained, each line in a file must represent one entry that consists of tokens to be interpreted as strings, integers, or a list of strings (but only one list per entry).

  5. Enforces a particular style of development. A QL project index is rolling: it always grabs the latest version of projects. This forces projects to always use the latest and "greatest" (scare quotes intended) versions of their dependencies or risk being cut from the index. Additionally, it makes it difficult for developers to continue supporting old versions of their code that they would like to maintain; if version 1.0.0 of system A is released, then version 2.0.0, followed by 1.1.0, version 1.1.0 will never show up in a QL project index.

  6. Takes control of releases away from developers. Not only does the QL project index preclude releasing bug fixes to older, stable code, it also takes away the choice of when to perform a release. A developer cannot say "oh crap, I just realized 1.0.0 had a huge bug, I need to get 1.0.1 out today!", instantly publish 1.0.1, and then have others immediately use it. Instead, they have to wait until the next time the QL project index maintainer decides to poll them and see if a new version is available. For the primary QL index, this process can take about a month.

  7. The index is not space efficient. There is a lot of duplicated information in a QL project index. If a project had new releases in QL versions M and N, then the information for the release in version M is replicated identically in index versions M through N-1. This is an issue if you want to make a consumer that can show when things changed, can install any version of a project, or just wants to efficiently sync index data over the internet.

Ultralisp

A side note on Ultralisp. Ultralisp largely seeks to address issue 6. However, as far as I can tell, it still polls, so developers cannot push new versions to it on demand (please correct me if I'm wrong here!). However, even if it does allow pushes, it still falls victim to all the other issues except 1. Additionally, Ultralisp is very affected by issue 7 given its update frequency.

CLPI

To address these concerns, I've been slowly developing the Common Lisp Project Index (CLPI) specification. Additionally, I currently have two instances of the index running. One mirrors the data available in the primary QL index, the other is for internal use with my coworkers. Last, CLPM can efficiently consume an index that follows the CLPI spec.

I'm not claiming that CLPI is perfect, but I think it is a significant step forward from QL project indices. Plus, I have some experience running it so I also know that it works (albeit for relatively small audiences). The QL mirror is located at https://quicklisp.common-lisp-project-index.org/.

Now, let's take a brief dive into each of the issues I raised with the QL project index and see how CLPI addresses them.

Conflation of project manager and project index

There is no project manager named CLPI. I do not ever plan on creating one. In any case, Common Lisp Project Index would be a weird name for a project manager.

The project index format is not documented

The current specification of the format of CLPI indices is located at https://gitlab.common-lisp.net/clpm/clpi/-/blob/master/specs/clpi-0.4.org.

The current object model used by CLPI is located at https://gitlab.common-lisp.net/clpm/clpi/-/blob/master/specs/clpi-object-model.org.

Not a lot of data is provided

CLPI allows a project's canonical VCS to be provided. Each system can have the author, license, description, and version specified. System dependencies can include ASDF's (:version ...) and (:feature ...) specifiers.

Not easily extensible

Every file must contain one or more forms that are suitable for READing. Additionally, all the non trivial files consist of plists. This makes it trivial to both write a parser for each file and to extend files with extra information without breaking consumers (so long as the extra information does not change the semantics on which older versions are relying).
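A consumer for such files can therefore be a few lines of standard Common Lisp (a sketch of my own, not CLPM's actual parser):

```lisp
;; Sketch: read an append-only, CLPI-style file of READable forms.
;; Unknown plist keys are simply carried along in the returned forms,
;; which is what makes the format extensible without breaking older
;; consumers.
(defun read-index-forms (path)
  "Return a list of all forms in the file at PATH."
  (let ((*read-eval* nil))  ; never evaluate #. in untrusted index data
    (with-open-file (in path)
      (loop :for form := (read in nil :eof)
            :until (eq form :eof)
            :collect form))))
```

Note the *read-eval* binding: since index data comes over the network, disabling read-time evaluation is a prudent default.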

Enforces a particular style of development

Every release of every project is made available. Additionally, with the preservation of (:version ...) specifiers from ASDF's dependency lists, developers can easily provide version constraints and project managers can also take those constraints into account.

Takes control of releases away from developers

The proof of concept CLPI server I have developed for my internal use allows a developer to push releases on demand. I am using this in conjunction with Gitlab CI to push releases when tags are created on our git repos.

The index is not space efficient

CLPI borrows a lot of ideas from Rubygems' compact_index. While it is not required as part of the spec, CLPI instances can signal that they intend to only append to the files served to consumers. This lets consumers easily use HTTP headers to download only the new parts of each file that they have yet to see.
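The append-only property means a consumer that already has the first N bytes of a file only needs to fetch the bytes from N onward. A sketch of what that looks like with the drakma HTTP client (drakma is my choice for illustration, not mandated by the spec; error handling and cache bookkeeping are omitted):

```lisp
;; Fetch only the tail of an append-only index file using an HTTP
;; Range request.  OFFSET is the number of bytes already cached
;; locally; the server returns just the bytes we have yet to see.
(defun fetch-new-bytes (url offset)
  (drakma:http-request
   url
   :additional-headers
   `(("Range" . ,(format nil "bytes=~D-" offset)))))

;; e.g. (fetch-new-bytes "https://quicklisp.common-lisp-project-index.org/clpi/v0.4/projects/fiveam/releases-0" 2048)
```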

Additionally, instead of a monolithic file like releases.txt that contains release information, CLPI splits this info into project specific files. For example, you can get all the known releases for fiveam by downloading https://quicklisp.common-lisp-project-index.org/clpi/v0.4/projects/fiveam/releases-0. To do the same thing with a QL index, you'd have to download releases.txt for every dist version (currently 117 in the primary QL index). For comparison, the CLPI file is currently 2183 bytes, while a single releases.txt file (from the 2021-08-07 dist) is 506134 bytes, or over 200 times bigger. Additionally, the CLPI version also tells you the dependencies. To get that from QL, you also need to download systems.txt (the 2021-08-07 version is 374391 bytes).

Next Steps

Does CLPI excite you? Do you want it to become reality? Awesome, I'd love to collaborate with you to bring CLPI or something similar to the CL community at large! Please reach out to me here, on #commonlisp or #clpm on Libera.chat (I'm etimmons), on Matrix at #clpm:matrix.org (I'm @eric:davidson-timmons.com), or via email at clpm-devel@common-lisp.net.

There's a lot to do and I really want to make this a community effort.

  • I'd love it if people could provide feedback on both the object model and the index format!

  • I'd love to work with people excited to help take my proof of concept CLPI server and make it production ready (or just make a new one from scratch)! This would include implementing a database backend, support for multiple users, and a permissions system.

  • I'd love to work with others interested in standing up a CLPI server for the whole community to craft a set of community guidelines and policies that address concerns such as when and how projects can be pulled (remember NPM's left-pad incident?), project ownership, etc!

  • I'd love to have feedback from all the people out there that are unsatisfied by both the QL client and CLPM! If you're making your own project manager, is there anything we can do with the CLPI spec to make your life easier? Do you have something like CLPI that we can learn from/build off?

  • But perhaps most of all, I'd love to hear if developers would be interested in publishing their code to a community CLPI server! This is one place where QL's model shines. Xach does all the work, so it is nearly effortless for individual developers to get their latest releases into the QL index. Under the CLPI model, someone (ideally the developer, but potentially a proxy maintainer) would have to perform an action on every release to get it into the CLPI instance.

It's likely that I'll continue putzing along with CLPI, even if I don't get any help. But it'll likely never get to the point of being usable by the community at large without input from others. And even if I somehow managed to get a CLPI server that is usable by the whole community, I wouldn't host it without a team willing to help maintain it, both policy- and tech-wise. I run enough projects with a bus factor of one as it is.

Eric Timmons: CLPM 0.4.0-rc.1 Available

· 92 days ago

I have just tagged CLPM 0.4.0-rc.1 and posted the build artifacts at https://files.clpm.dev/clpm/. Assuming there are no show stoppers discovered, I plan to release v0.4.0 next weekend.

This release will bring quite the laundry list of bug fixes and enhancements, including the much awaited Mac M1 support. The complete list (at this point in time) is included below.

But first, I want to inform you that I plan on getting v0.5.0 out the door ASAP. And that 0.5 will likely include some breaking changes to clpmfiles. For more information, see this issue. The breaking change is necessary to fix issue 39 and will in general lead to CLPM being cleaner and faster.

Additionally, the two other big-ticket features I plan to add are groups inside clpmfiles (e.g. you can have dependencies that are only installed for dev/testing purposes) and support for versioning ASDF/UIOP in bundles. The latter change is going to be difficult due to both ASDF being a dependency of clpm-client and the special relationship UIOP and ASDF enjoy. However, I have most of a plan and think it will be feasible without placing too much of a burden on the end user.

I would have liked to include all of these changes in v0.4, but I've been sitting on v0.4 for a long time, have been telling enough people variants of "use the latest v0.4 alpha, it's good to go except M1 support!", and I told people to use v0.4 along with my demo paper (page 21) at ELS '21. So I'd feel pretty bad about breaking clpmfiles at the moment.

If you like CLPM, have feedback, or just want to chat about it, please join us on Matrix (preferred) or #clpm on Libera.chat.

The current changelog entry for v0.4.0 is:

  • Changed layout of release tarballs.
  • Published tarballs now contain a static executable (#11).
  • No longer using the deploy library to build releases (#15 #11).
  • Updated build script to more easily build static or dynamic executables (#11).
  • Fixed bug in computing the source-registry.d file for the clpm client (#16).
  • Starting to build up a test suite (#3).
  • Added automated testing on Gitlab CI.
  • Added clpm-client:*activate-asdf-integration* to control default integration with ASDF upon context activation.
  • The default directories for CLPM cache, config, and data have changed on Windows. They are now %LOCALAPPDATA%\clpm\cache\, %LOCALAPPDATA%\clpm\config\, and %LOCALAPPDATA%\clpm\data\.
  • Added new config option (:grovel :lisp :command). This string is split via shlex into a list of arguments that can be used to start the child lisp process.
  • Deprecated (:grovel :lisp :path) in favor of (:grovel :lisp :command).
  • Added new value for (:grovel :lisp :implementation) - :custom. When :custom is used, no arguments are taken from the lisp-invocation library, the user must specify a command that completely starts the child lisp in a clean state.
  • Better support for using MSYS2's git on Windows.
  • Support for Mac M1 (#20).
  • Fixed bug causing groveling to take an inordinately long time for systems with :defsystem-depends-on or direct calls to asdf:load-system in their system definition files (!9).
  • Fixed bug causing unused git and asd directives to linger in clpmfile.lock (#32).
  • Add support for bare git repos in clpmfile (not from Github or Gitlab) (#22).
  • Add clpmfile :api-version "0.4". Remains backwards compatible with 0.3. (#22).
  • Fix bug saving project metadata on Windows.
  • Fix client's UIOP dependency to better suit ECL's bundled fork of ASDF.
  • Fix issue READing strings in client from a lisp that is not SBCL (#13).
  • Parse inherited CL_SOURCE_REGISTRY config in client using ASDF (#14).

Joe Marshall: Tail recursion and fold-left

· 98 days ago

fold-left has this basic recursion:

(fold-left f init ())      = init
(fold-left f init (a . d)) = (fold-left f (f init a) d)

A straightforward implementation of this is
(defun fold-left (f init list)
  (if (null list)
      init
      (fold-left f (funcall f init (car list)) (cdr list))))

The straightforward implementation uses slightly more space than necessary. The call to f occurs in a subproblem position, so the stack frame for fold-left is preserved on each call and the result of the call is returned to that stack frame.

But the result of fold-left is the result of the last call to f, so we don't need to retain the stack frame for fold-left on the last call. We can end the iteration on a tail call to f on the final element by unrolling the loop once:

(defun fold-left (f init list)
  (if (null list)
      init
      (fold-left-1 f init (car list) (cdr list))))

(defun fold-left-1 (f init head tail)
  (if (null tail)
      (funcall f init head)
      (fold-left-1 f (funcall f init head) (car tail) (cdr tail))))

There aren't many problems where this would make a difference (a challenge to readers is to come up with a program that runs fine with the unrolled loop but causes a stack overflow with the straightforward implementation), but depending on how extreme your position on tail recursion is, this might be worthwhile.

Joe Marshall: A Floating-point Problem

· 101 days ago

Here's a 2x2 matrix:

[64919121   -159018721]
[41869520.5 -102558961]

We can multiply it by a two-element vector like this:
(defun mxv (a b
            c d

            x
            y

            receiver)
  (funcall receiver
           (+ (* a x) (* b y))
           (+ (* c x) (* d y))))

* (mxv 64919121     -159018721
       41869520.5d0 -102558961
 
       3
       1

       #'list)

(35738642 2.30496005d7)

Given a matrix and a result, we want to find the two-element vector that produces that result. To do this, we compute the inverse of the matrix:
(defun m-inverse (a b
                  c d

                  receiver)
  (let ((det (- (* a d) (* b c))))
    (funcall receiver
             (/ d det) (/ (- b) det)
             (/ (- c) det) (/ a det))))
and multiply the inverse matrix by the result:
(defun solve (a b
              c d

              x
              y

              receiver)
  (m-inverse a b
             c d
             (lambda (ia ib
                      ic id)
               (mxv ia ib
                    ic id

                    x
                    y
                    receiver))))

So we can try this on our matrix
* (solve 64919121     -159018721
         41869520.5d0 -102558961

         1
         0
         #'list)

(1.02558961d8 4.18695205d7)
and we get the wrong answer.

What's the right answer?

* (solve 64919121         -159018721
         (+ 41869520 1/2) -102558961

         1
         0
         #'list)

(205117922 83739041)
If we use double precision floating point, we get the wrong answer by a considerable margin.

I'm used to floating-point calculations being off a little in the least significant digits, and I've seen how the errors can accumulate in an iterative calculation, but here we've lost all the significant digits in a straightforward non-iterative calculation. Here's what happened: the determinant of our matrix is computed by subtracting the product of the two diagonals. One diagonal is (* 64919121 -102558961) = -6658037598793281, while the other diagonal is (* (+ 41869520 1/2) -159018721) = -6658037598793280.5. This second diagonal product cannot be represented in double-precision floating point, so it is rounded to -6658037598793280. This is where the error is introduced. An error of .5 in a quantity of -6658037598793280.5 is small indeed, but we amplify this error when we subtract out the other diagonal. We still have an absolute error of .5, but now it occurs within a quantity of 1, which makes it relatively huge. This is called “catastrophic cancellation” because the subtraction “cancelled” all the significant digits (the “catastrophe” is presumably the amplification of the error).
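You can watch the rounding happen at the REPL by comparing the exact rational product with what the double-precision product actually holds:

```lisp
;; The exact product, using rational arithmetic:
* (* (+ 41869520 1/2) -159018721)
-13316075197586561/2        ; i.e. -6658037598793280.5

;; The same product in double precision, converted back to an
;; exact rational so we can see precisely what was stored:
* (rational (* 41869520.5d0 -159018721))
-6658037598793280           ; the .5 has been rounded away
```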

I don't care for the term “catastrophic cancellation” because it places the blame on the operation of subtraction. But the subtraction did nothing wrong. The difference between -6658037598793280 and -6658037598793281 is 1, and that is the result we got. It was the rounding in the prior step that introduced an incorrect value into the calculation. The subtraction just exposed this and made it obvious.

One could be cynical and reject floating-point operations as being too unreliable. When we used exact rationals, we got the exactly correct result. But rational numbers are much slower than floating point, and they have a tendency to occupy larger and larger amounts of memory as the computation continues. Floating point is fast and efficient, but you have to be careful when you use it.
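One pragmatic middle ground, if you know a computation is this ill-conditioned, is to do just the dangerous step exactly. A sketch (the function name is mine, not from the post): convert the floats to the rationals they represent, form the determinant exactly, and only drop back to floating point afterwards if needed.

```lisp
;; Compute the determinant ad - bc exactly.  RATIONAL converts
;; each float to the exact rational it represents, so the
;; subtraction cannot cancel away digits that were never wrong.
(defun exact-det (a b c d)
  (- (* (rational a) (rational d))
     (* (rational b) (rational c))))

;; (exact-det 64919121 -159018721 41869520.5d0 -102558961)
;; => -1/2
```

Only the intermediate is exact here; the inputs can stay as the floats you were handed.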

Joe Marshall: Fold right

· 110 days ago

fold-left takes arguments like this:

(fold-left function init list)
and computes
* (fold-left (lambda (l r) `(f ,l ,r)) 'init '(a b c))
(F (F (F INIT A) B) C)
Notice how init is the leftmost of all the arguments to the function, and each argument appears left to right as it is folded in.

Now look at the usual way fold-right is defined:

(fold-right function init list)
It computes
* (fold-right (lambda (l r) `(f ,l ,r)) 'init '(a b c))
(F A (F B (F C INIT)))
Although init appears first and to the left of '(a b c) in the arguments to fold-right, it is actually used as the rightmost argument of the innermost application.

It seems to me that the arguments to fold-right should be in this order:

; (fold-right function list final)
* (fold-right (lambda (l r) `(f ,l ,r)) '(a b c) 'final)
(F A (F B (F C FINAL)))
The argument lists to fold-left and fold-right would no longer match, but I think switching things around so that the anti-symmetry of the arguments matches the anti-symmetry of the folding makes things clearer.
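With the proposed argument order, the implementation reads naturally as well. A direct (non-tail-recursive) sketch:

```lisp
;; fold-right with the argument order proposed above:
;; (fold-right function list final)
(defun fold-right (function list final)
  (if (null list)
      final
      (funcall function (car list)
               (fold-right function (cdr list) final))))

;; (fold-right (lambda (l r) `(f ,l ,r)) '(a b c) 'final)
;; => (F A (F B (F C FINAL)))
```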


For older items, see the Planet Lisp Archives.


Last updated: 2021-12-06 15:32