Planet Lisp

Nicolas Hafner: Updates Galore - June Kandria Update

· 18 days ago
[Header image: https://filebox.tymoon.eu//file/TWpNeU1RPT0=]

There's a lot of different news to talk about this month, so strap in!

The New Trailer

The most important thing to come out of this month is the new trailer! Check it out if you haven't yet:

I'm overall really happy with how it came together, and we all had a part in the end result. I'd also like to give a special commendation to Elissa Park who did the amazing voice over for the trailer. It was a pleasure to work together!

It's also been great to finally get some custom music by Mikel into an official part of the game. He's also been working on the first music tracks that'll be in the game, and I've been working on a music system to support horizontal mixing with the tracks. I'm very excited to get all that together into the game and see how it all feels! I hope that by next month's update we'll have a short preview of that for you.

0.1.1 Release

Meanwhile we also pushed out an update to the vertical slice release that makes use of the new linear quest system we put together. It should overall also be a lot more stable and includes many fixes for issues people reported to us. Thanks!

As always, if you want to have a look at the demo yourself, you can do so free of charge.

I think this will be the last patch we put out until September. I can't afford to backport fixes even if more bug reports come in, as the overhead of managing that is just too high. Nor can I simply push out new versions that follow internal development, as those are frequently in flux and carry regressions that we typically stamp out over time but that would, in the meantime, make for a buggier experience.

[Image: https://filebox.tymoon.eu//file/TWpNeU1BPT0=] We started working on fishing just recently!

Dev Streams

I'm heavily considering doing regular weekly development streams, both to see if we can attract some more interest for the project, and to be more open about the process in general. I feel like we're already very open about everything with our weekly updates, but having an immediate insight into how the game is made is another thing entirely. I think it would be really cool to show that side of development off more often!

In order to coordinate what time would suit the most people, please fill out this Doodle form. The exact dates don't matter, just watch for the day of the week and the time. Don't worry about the name it asks for either, it won't be public!

I'll probably close the poll in a week, so make sure to submit an answer soon if you're interested. Streams will happen both on https://twitch.tv/shinmera and https://youtube.com/shinmera, with both being reachable through the official stream page at https://stream.shinmera.com . See you there!

Palestinian Aid Bundle

Some good folks have put together a bundle on itch.io gathering money for Palestinian aid. I'm very happy to say that our game, Eternia: Pet Whisperer is a part of this bundle!

If you want to support this cause and get a huge collection of amazing games in the process, head on over to itch.io!

Tim

This month I had a varied mix of tasks: working on the script for the new trailer; updating the quests in the vertical slice demo to work closer to our original vision; researching press and influencer contacts as we plan more of Kandria's marketing.

The trailer came out great, and I'm really happy with the voice acting that Nick produced with Elissa Park. It was a great idea Nick had to use the character of Fi as the narrator here (originally we were going to use Catherine) - her serious outlook, and reflection on the events of the story, was just what we needed to fit the epic music from Mikel, and the epic gameplay and exploration that Nick captured on screen. The whole thing just screams epic.

The quests were vastly improved too. The first quest now uses Nick's new sequencing system, so that triggers fire automatically (and more robustly) when the player arrives at the correct location, and when combat encounters are completed. The logic is also much quicker to write, so linear quests will be much faster to produce in the future. The mushroom quest also had a big refactor; now you can organically collect mushrooms out in the world, rather than going to specific enabled points. You can even sell what you find to the trader, including those poisonous black knights. It really makes the world feel more interactive. There have been general tweaks to the other quests from playtesting, and I'll continue to refine them from my own playing and players' feedback up until the Pro Helvetia submission later in the year. I'm also planning to add a couple more sidequest diversions based on the new fishing (!) minigame being added at the moment; we think a combat-focused sidequest will work well too.

Finally on the marketing side, it's been rewarding to collect tons of potential press and influencer contacts we could approach in the future. I've basically been taking games that are strong influences and have similarities to Kandria - from hugely popular games like Celeste and Dead Cells, to lesser known ones like Kunai and Armed with Wings - then cataloguing key journalists and influencers who've streamed, made videos, or written about them. This will hopefully highlight some of the right people we can contact to help spread the word about the game.

Roadmap

Let's look at the roadmap from last month with the updates from this month:

  • Make the combat more flashy

  • Finish a new trailer

  • Revise the quest system's handling of linear quest lines

  • Design and outline the factions for the rest of the game

  • Develop the soundscape for Kandria and start working on the tracks for region 1

  • Add a music system that can do layering and timed transitions

  • Build and add the tutorial sequence to the beginning of the game

  • Finish the region 1 music tracks

  • Reach out to journalists, streamers, and other communities

  • Polish the ProHelvetia submission material

  • Polish and revise the combat design

  • Explore platforming items and mechanics

  • Practise platforming level design

  • Start work on the horizontal slice

As always there are some smaller tasks that aren't in the overall roadmap. We seem to be doing pretty well keeping on track with what needs to be done, which is really good! It's all too easy to misjudge the time required to complete things, especially in games.

In any case, time is flying fast, and there's a lot to do. In the meantime be sure to check our mailing list for weekly updates, and the discord community for fan discussions!

Eric Timmons: ASDF 3.3.5 Release Candidate

· 20 days ago

ASDF 3.3.4.19 has been tagged. This is a release candidate for 3.3.5. As the announcement says, please give it a spin on your setup and report any regressions. Bugs can be reported to the Gitlab issue tracker (preferred) or to the asdf-devel mailing list.

Changes

The full(ish) Changelog can be found here.

In addition to assorted bug fixes, there are several new features. Both user facing:

  • Support for package local nicknames in uiop:define-package (see the sketch below).
  • SBCL should now be able to find function definitions nested in the with-upgradability macro.
  • package-inferred-system source files can use extensions other than .lisp.

And developer facing:

  • Building out a fairly extensive CI pipeline.
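For the first of the user-facing items, here is a hypothetical usage sketch of my own (not taken from the announcement). It assumes the clause is spelled :local-nicknames, mirroring the defpackage extension on implementations that support package-local nicknames:

(uiop:define-package :my-app
  (:use :cl)
  ;; :ALX becomes a nickname for ALEXANDRIA, visible only inside :MY-APP.
  (:local-nicknames (:alx :alexandria)))

Within :my-app, alx:flatten would then denote alexandria:flatten.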

Future

This is planned to be the last release in the 3.3 series. We are excited to get this out the door because we already have several focal points for the 3.4 series in mind, including:

  • Support for more expressive version strings and version constraints. issue draft MR.
  • A new package defining form that is explicitly designed to better tie in with package-inferred-system. issue draft MR.

Please join in the conversation if any of these features excite you, you have features you'd like to see added, or you have bugs that need to be squashed.

Michał Herda: Current Common Lisp IRC situation

· 23 days ago

Because of the upheaval at Freenode, I've migrated to Libera Chat along with a bunch of other Lisp programmers. We have used that as a chance to make some small changes to the channel structure:

  • #commonlisp is the on-topic Common Lisp channel (formerly #lisp),
  • #lisp is the somewhat on-topic discussion about all Lisp dialects (formerly ##lisp),
  • the rest of the channel names should work the same.

The first two entries above are called out because #lisp on Freenode used to have a non-trivial volume of people asking questions about Scheme or Emacs Lisp due to the too-generic channel name. Naming the Common Lisp channel #commonlisp resolves this issue, at the cost of sacrificing a lucrative and attractive five-character channel name.

Quicklisp news: May 2021 Quicklisp dist update now available

· 24 days ago

 New projects: 

  • adopt-subcommands — Extend the Adopt command line processing library to handle nested subcommands. — MIT
  • cl-cerf — Lisp wrapper to libcerf — Public Domain
  • cl-cxx-jit — Common Lisp Cxx Interoperation — MIT
  • cl-form-types — Library for determining types of Common Lisp forms. — MIT
  • cl-incognia — Incognia API Common Lisp Client — MIT
  • cl-info — A helper to answer a question about OS, Lisp and Everything. — BSD
  • cl-mimeparse — Library for parsing MIME types, in the spirit of http://code.google.com/p/mimeparse/, with a Common Lisp flavor. — MIT
  • cl-opencl — CFFI for OpenCL and Lisp wrapper API — Public Domain
  • cl-schedule — cl-schedule is a cron-like scheduling library in common-lisp. It subsumes and replaces traditional cron managers thanks to richer expressiveness of Lisp. — MIT
  • cl-vorbis — Bindings to stb_vorbis, a simple and free OGG/Vorbis decoding library — zlib
  • claw-olm — Thin wrapper over OLM — MIT
  • context-lite — A CLOS extension to support specializing methods on special/dynamic variables. — MIT
  • defmain — A wrapper around net.didierverna.clon which makes command line arguments parsing easier. — BSD
  • defrest — defrest: expose functions as REST webservices for ajax or other stuff — BSD
  • doc — Documentation generator, based on MGL-PAX. Allows to put documentation inside lisp files and cross-reference between different entities. — MIT
  • ec2-price-finder — Quickly find the cheapest EC2 instance that you need across regions — BSD-3-Clause
  • file-notify — Access to file change and access notification. — zlib
  • fresnel — Bidirectional translation with lenses — MIT
  • log4cl-extras — A bunch of addons to LOG4CL: JSON appender, context fields, cross-finger appender, etc. — BSD
  • mnas-package — The mnas-package system is intended for preparing documentation extracted from ASDF systems. Motivation: the Codex system is convenient enough for documenting systems written in Common Lisp and produces documentation of acceptable quality, but its documentation template is not generated automatically: the directives that include documentation sections for individual entities (systems; packages; classes; functions and setf functions; generic functions, methods and setf methods; macros; and so on) have to be written by hand. This project tries to remove that drawback of Codex by defining functions and methods that generate the code to be passed to Codex and that represent individual parts of a system as graphs. — GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 or later
  • mnas-string — The mnas-string system is intended for: parsing a real number; splitting a string into substrings; replacing all occurrences of a substring in a string; replacing multiple occurrences of a pattern with a single one; preparing a string as an argument for an SQL LIKE query; wrapping a string in a prefix and a postfix; printing a date-and-time representation to a stream or string; transliterating a string. — GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 or later
  • osmpbf — Library to read OpenStreetMap PBF-encoded files. — MIT
  • plot — Plots for Common Lisp — MS-PL
  • scheduler — Extensible task scheduler. — BSD-2-Clause
  • speechless — A dialogue system language implementation. — zlib
  • stumpwm-sndioctl — Interface to OpenBSD's sndioctl for StumpWM. — ISC
  • vellum — Data Frames for Common Lisp — BSD simplified
  • vellum-clim — Simplistic vellum data frames viewer made with mcclim. — BSD simplified
  • vellum-csv — CSV support for Vellum Data Frames — BSD simplified
  • vellum-postmodern — Postgres support for Vellum Data Frames (via postmodern). — BSD simplified
  • vk — Common Lisp bindings for the Vulkan API. — MIT
  • wasm-encoder — Library for serializing WebAssembly modules to binary .wasm files — MIT

Updated projects: adopt, agutil, also-alsa, anypool, april, architecture.builder-protocol, async-process, audio-tag, bdef, bnf, burgled-batteries.syntax, caveman, chirp, cl+ssl, cl-ana, cl-argparse, cl-async, cl-collider, cl-covid19, cl-cuda, cl-data-frame, cl-data-structures, cl-environments, cl-forms, cl-gamepad, cl-glfw3, cl-gserver, cl-kraken, cl-liballegro, cl-liballegro-nuklear, cl-markless, cl-mixed, cl-naive-store, cl-num-utils, cl-patterns, cl-pdf, cl-prevalence, cl-rfc4251, cl-sendgrid, cl-slice, cl-smt-lib, cl-ssh-keys, cl-steamworks, cl-str, cl-tiled, cl-typesetting, cl-utils, cl-webkit, clack, clack-pretend, clath, clavier, clog, closer-mop, cmd, coleslaw, common-lisp-jupyter, configuration.options, consfigurator, croatoan, cytoscape-clj, damn-fast-priority-queue, data-frame, dataloader, defconfig, definitions, deploy, dfio, diff-match-patch, dissect, djula, dns-client, doplus, dufy, duologue, easy-routes, eclector, erudite, file-attributes, flare, fmt, functional-trees, gendl, generic-cl, golden-utils, gute, harmony, herodotus, hu.dwim.presentation, hunchenissr, hunchensocket, hunchentoot-errors, iterate, kekule-clj, lack, language-codes, lichat-protocol, lisp-stat, literate-lisp, magicffi, maiden, math, mcclim, messagebox, mnas-graph, mnas-hash-table, multiposter, mutility, named-readtables, nibbles, nodgui, numcl, numerical-utilities, nyaml, nyxt, overlord, parseq, pathname-utils, plump-sexp, plump-tex, portable-threads, postmodern, pzmq, qlot, qt-libs, rpcq, sb-cga, sc-extensions, screamer, sel, serapeum, shadow, shop3, specialized-function, spinneret, split-sequence, static-dispatch, static-vectors, stumpwm, sxql, ten, tfeb-lisp-hax, trivial-indent, trivial-timer, trivial-with-current-source-form, trucler, uax-15, vgplot, wild-package-inferred-system, with-c-syntax.


To get this update, use (ql:update-dist "quicklisp"). Enjoy!

Pavel Korolev: :claw honing - Beta milestone and alien-works

· 25 days ago

Long time no see, my C++ autowrapping rants. Several bindings later, :claw has reached the stage where it is ready to leave the garage and see the outside world. Since my last post, some approaches were revised, though the autowrapping process is still not fully cemented. I wouldn't expect :claw to be out of beta for at least a year. That doesn't mean it is unusable, but rather that I cannot guarantee a stable interface and a trivial setup procedure.

In other big news, the :alien-works system got all required foreign libraries wrapped and integrated, including some complex and peculiar C++ ones (Skia 👀). The next step is to write a game based on the :alien-works framework, to see how much lispification of autowrapped systems is possible without losing any performance, and what is required for a solid game delivery.

Joe Marshall: Stupid Y operator tricks

· 33 days ago

Here is the delta function: δ = (lambda (f) (f f)). Delta takes a function and tail calls that function on itself. What happens if we apply the delta function to itself? Since the delta function is the argument, it is tail called and applied to itself. Which leads again to itself being tail called and applied to itself. We have a situation of infinite regression: the output of (δ δ) ends up being a restatement of the output of (δ δ). Now in this case, regression is infinite and there is no base case, but imagine that somehow there were a base case, or that somehow we identified a value that an infinite regression equated to. Then each stage of the infinite regression just replicates the previous stage exactly. It is like having a perfectly silvered mirror: it just replicates the image presented to it exactly. By calling delta on delta, we've arranged our perfectly silvered mirror to reflect an image of itself. This leads to the “infinite hall of mirrors” effect.

So let's tweak the delta function so that instead of perfectly replicating the infinite regression, it applies a function g around the replication: (lambda (f) (g (f f))). If we apply this modified delta function to itself, each expansion of the infinite regression ends up wrapping an application of the g around it: (g (f f)) = (g (g (f f))) = (g (g (g (f f)))) = (g (g (g (g … )))). So our modified delta function gives us a nested infinite regression of applications of g. This is like our perfectly silvered mirror, but now the reflected image isn't mirrored exactly: we've put a frame on the mirror. When we arrange for the mirror to reflect itself, each nested reflection also has an image of the frame around the reflection, so we get a set of infinitely nested frames.

An infinite regression of (g (g (g (g … )))) is confusing. What does it mean? We can untangle this by unwrapping an application. (g (g (g (g … )))) is just a call to g. The argument to that call is weird, but we're just calling (g <something>). The result of the infinite regression (g (g (g (g … )))) is simply the result of the outermost call to g. We can use this to build a recursive function.

;; If factorial = (g (g (g (g … )))), then
;; factorial = (g factorial), where

(defun g (factorial)
  (lambda (x)
    (if (zerop x)
        1
        (* x (funcall factorial (- x 1))))))
The value returned by an inner invocation of g is the value that will be funcalled in the alternative branch of the conditional.

Y is defined thus:

Y = λg.(λf.g(f f))(λf.g(f f))

A straightforward implementation attempt would be
;; Non working y operator
(defun y (g)
  (let ((d (lambda (f) (funcall g (funcall f f)))))
    (funcall d d)))
but since lisp is a call-by-value language, it will attempt to (funcall f f) before funcalling g, and this will cause runaway recursion. We can avoid the runaway recursion by delaying the (funcall f f) with a strategically placed thunk
;; Call-by-value y operator
;; returns (g (lambda () (g (lambda () (g (lambda () … ))))))
(defun y (g)
  (let ((d (lambda (f) (funcall g (lambda () (funcall f f))))))
    (funcall d d)))
Since the recursion is now wrapped in a thunk, we have to funcall the thunk to force the recursive call. Here is an example where we see that:
* (funcall (Y (lambda (thunk)
                (lambda (x)
                  (if (zerop x)
                      1
                      (* x (funcall (funcall thunk) (- x 1)))))))
           6)
720
the (funcall thunk) invokes the thunk in order to get the actual recursive function, which we then funcall on (- x 1).

By wrapping the self-application with a thunk, we've made the call site where we use the thunk more complicated. We can clean that up by wrapping the call to the thunk in something nicer:

* (funcall
    (y (lambda (thunk)
         (flet ((factorial (&rest args)
                  (apply (funcall thunk) args)))

           (lambda (x)
             (if (zerop x)
                 1
                 (* x (factorial (- x 1))))))))
      6)
720
And we can even go so far as to hoist that wrapper back up into the definition of y:
(defun y1 (g)
  (let ((d (lambda (f) (funcall g (lambda (&rest args) (apply (funcall f f) args))))))
    (funcall d d)))

* (funcall
    (y1 (lambda (factorial)
          (lambda (x)           
            (if (zerop x)
                1
                (* x (funcall factorial (- x 1)))))))
    6)
720
y1 is an alternative formulation of the Y operator where we've η-expanded the recursive call to avoid the runaway recursion.

The η-expanded version of the applicative order Y operator has the advantage that it is convenient for defining recursive functions. The thunkified version is less convenient because you have to force the thunk before using it, but it allows you to use the Y operator to define recursive data structures as well as functions:

(Y
  (lambda (delayed-ones)
    (cons-stream 1 (delayed-ones))))
{1 …}

The argument to the thunkified Y operator is itself a procedure of one argument, the thunk. Y returns the result of calling its argument. Y should return a procedure, so the argument to Y should return a procedure. But it doesn't have to immediately return a procedure, it just has to eventually return a procedure, so we could, for example, print something before returning the procedure:

* (funcall (Y (lambda (thunk)
                (format t "~%Returning a procedure")
                (lambda (x)
                  (if (zerop x)
                      1
                      (* x (funcall (funcall thunk) (- x 1)))))))
         6)
Returning a procedure
Returning a procedure
Returning a procedure
Returning a procedure
Returning a procedure
Returning a procedure
720
There is one caveat. You must be able to return the procedure without attempting to make the recursive call.
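For instance (my illustration, not from the post), the following diverges, because the outer function forces the thunk while constructing the procedure instead of inside it:

;; Runaway recursion: (funcall thunk) is evaluated before a procedure
;; has ever been returned, so the recursion never bottoms out.
;; (y (lambda (thunk)
;;      (let ((f (funcall (funcall thunk))))
;;        (lambda (x) (funcall f x)))))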

Let's transform the returned function before returning it by applying an arbitrary function h to it:

(Y (lambda (thunk)
     (h (lambda (x)
          (if (zerop x)
              1
              … )))))
Ok, so now when we (funcall thunk) we don't get what we want, we've got an invocation of h around it. If we have an inverse to h, h-1, available, we can undo it:
(y (lambda (thunk)
      (h (lambda (x)
           (if (zerop x)
               1
               (* x (funcall (h-1 (funcall thunk)) (- x 1))))))))
As a concrete example, we return a list and at the call site we extract the first element of that list before calling it:
* (funcall (car (y (lambda (thunk)
                     (list (lambda (x)
                             (if (zerop x)
                                 1
                                 (* x (funcall (car (funcall thunk)) (- x 1)))))))))
           6)
720
So we can return a list of mutually recursive functions:
(y (lambda (thunk)
    (list
      ;; even?
      (lambda (n)
        (or (zerop n)
            (funcall (cadr (funcall thunk)) (- n 1))))

      ;; odd?
      (lambda (n)
        (and (not (zerop n))
             (funcall (car (funcall thunk)) (- n 1))))
      )))
If we use the η-expanded version of the Y operator, then we can adapt it to expect a list of mutually recursive functions on the recursive call:
(defun y* (&rest g-list)
  (let ((d (lambda (f)
             (map 'list (lambda (g)
                          (lambda (&rest args)
                            (apply (apply g (funcall f f)) args)))
                  g-list))))
     (funcall d d)))
which we could use like this:
* (let ((eo (y* (lambda (even? odd?)
                  (declare (ignore even?))
                  (lambda (n)
                    (or (zerop n)
                        (funcall odd? (- n 1)))))

                (lambda (even? odd?)
                  (declare (ignore odd?))
                  (lambda (n)
                    (and (not (zerop n))
                         (funcall even? (- n 1))))))))
     (let ((even? (car eo))
           (odd?  (cadr eo)))
       (do ((i 0 (+ i 1)))
           ((>= i 5))
         (format t "~%~d, ~s ~s"
                 i
                 (funcall even? i)
                 (funcall odd? i)))))
                 
0, T NIL
1, NIL T
2, T NIL
3, NIL T
4, T NIL
Instead of returning a list of mutually recursive functions, we could return them as multiple values. We just have to be expecting multiple values at the call site:
(defun y* (&rest gs)
  (let ((d (lambda (f)
             (apply #'values
                    (map 'list
                         (lambda (g)
                           (lambda (&rest args)
                             (apply (multiple-value-call g (funcall f f)) args)))
                         gs)))))
    (funcall d d)))

MIT Scheme used to have a construct called a named lambda. A named lambda has an extra first argument that is automatically filled in with the function itself. So during evaluation of the body of a named lambda, the name is bound to the named lambda, enabling the function to call itself recursively:

(defmacro named-lambda ((name &rest args) &body body)
  `(y1 (lambda (,name)
         (lambda ,args
           ,@body))))

* (funcall (named-lambda (factorial x)
            (if (zerop x)
                1
                (* x (funcall factorial (- x 1)))))
         6)
720
This leads us to named let expressions. In a named let, the implicit lambda that performs the let bindings is a named lambda. Using that name to invoke the lambda on a different set of arguments is like recursively re-doing the let.
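Common Lisp has no built-in named let, but a minimal one can be layered on the named-lambda macro above. This is a sketch of my own, not the post's definition:

(defmacro named-let (name bindings &body body)
  ;; Expands into a named-lambda applied to the initial values, so NAME
  ;; is funcallable inside BODY, as in the example below.
  `(funcall (named-lambda (,name ,@(mapcar #'first bindings))
              ,@body)
            ,@(mapcar #'second bindings)))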
* (named-let fact ((x 6)) (if (zerop x) 1 (* x (funcall fact (- x 1)))))

720

In Scheme, you use letrec to define recursive or mutually recursive procedures. Internal definitions expand into an appropriate letrec. letrec achieves the necessary circularity not through the Y operator, but through side effects. It is hard to tell the difference, but there is a difference. Using the Y operator would allow you to have recursion, but avoid the implicit side effects in a letrec.
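As a rough illustration of that side-effect approach (my example, not from the post), the circular reference is typically created by binding a name first and assigning the closure afterwards:

;; Bind FACT to a placeholder, then assign the closure; the closure's
;; reference to FACT goes through the now-mutated binding.
(let ((fact nil))
  (setf fact (lambda (x)
               (if (zerop x)
                   1
                   (* x (funcall fact (- x 1))))))
  (funcall fact 6))   ; => 720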

Oleg Kiselyov has more to say about the Y operator at http://okmij.org/ftp/Computation/fixed-point-combinators.html

Joe Marshall: β-conversion

· 40 days ago

If you have an expression that is an application, and the operator of the application is a lambda expression, then you can β-reduce the application by substituting the arguments of the application for the bound variables of the lambda within the body of the lambda.

(defun beta (expression if-reduced if-not-reduced)
  (if (application? expression)
      (let ((operator (application-operator expression))
            (operands (application-operands expression)))
        (if (lambda? operator)
            (let ((bound-variables (lambda-bound-variables operator))
                  (body (lambda-body operator)))
              (if (same-length? bound-variables operands)
                  (funcall if-reduced
                           (xsubst body
                                   (table/extend* (table/empty)
                                                  bound-variables
                                                  operands)))
                  (funcall if-not-reduced)))
            (funcall if-not-reduced)))
      (funcall if-not-reduced)))

* (beta '((lambda (x y) (lambda (z) (* x y z))) a (+ z 3))
        #'identity (constantly nil))
(LAMBDA (#:Z460) (* A (+ Z 3) #:Z460))

A large, complex expression may or may not have subexpressions that can be β-reduced. If neither an expression nor any of its subexpressions can be β-reduced, then we say the expression is in “β-normal form”. We may be able to reduce an expression to β-normal form by β-reducing where possible. A β-reduction can introduce further reducible expressions if we substitute a lambda expression for a symbol in operator position, so reducing to β-normal form is an iterative process in which we continue to reduce any reducible expressions that arise from substitution.

(defun beta-normalize-step (expression)
  (expression-dispatch expression

    ;; case application
    (lambda (subexpressions)
      ;; Normal order reduction
      ;; First, try to beta reduce the outermost application,
      ;; otherwise, recursively descend the subexpressions, working
      ;; from left to right.
      (beta expression
            #'identity
            (lambda ()
              (labels ((l (subexpressions)
                         (if (null subexpressions)
                             '()
                             (let ((new-sub (beta-normalize-step (car subexpressions))))
                               (if (eq new-sub (car subexpressions))
                                   (let ((new-tail (l (cdr subexpressions))))
                                     (if (eq new-tail (cdr subexpressions))
                                         subexpressions
                                         (cons (car subexpressions) new-tail)))
                                   (cons new-sub (cdr subexpressions)))))))
                (let ((new-subexpressions (l subexpressions)))
                  (if (eq new-subexpressions subexpressions)
                      expression
                      (make-application new-subexpressions)))))))

    ;; case lambda
    (lambda (bound-variables body)
      (let ((new-body (beta-normalize-step body)))
        (if (eql new-body body)
            expression
            (make-lambda bound-variables new-body))))

    ;; case symbol
    (constantly expression)))

;;; A normalized expression is a fixed point of the
;;; beta-normalize-step function.
(defun beta-normalize (expression)
  (do ((expression expression (beta-normalize-step expression))
       (expression1 '() expression)
       (count 0 (+ count 1)))
      ((eq expression expression1)
       (format t "~%~d beta reductions" (- count 1))
       expression)))

You can compute just by using β-reduction. Here we construct an expression that reduces to the factorial of 3. We only have β-reduction, so we have to encode numbers with Church encoding.

(defun test-form ()
  (let ((table
         (table/extend*
          (table/empty)
          '(one
            three
            *
            pred
            zero?
            y)
          '(
            ;; Church numeral one
            (lambda (f) (lambda (x) (f x)))

            ;; Church numeral three 
            (lambda (f) (lambda (x) (f (f (f x)))))

            ;; * (multiply Church numerals)
            (lambda (m n)
              (lambda (f)
                (m (n f))))

            ;; pred (subtract 1 from Church numeral)
            (lambda (n)
              (lambda (f)
                (lambda (x) (((n (lambda (g)
                                   (lambda (h)
                                     (h (g f)))))
                              (lambda (u) x))
                             (lambda (u) u)))))

            ;; zero? (test if Church numeral is zero)
            (lambda (n t f) ((n (lambda (x) f)) t))

            ;; Y operator for recursion
            (lambda (f)
              ((lambda (x) (f (x x)))
               (lambda (x) (f (x x)))))

            )))
        (expr
         '((lambda (factorial)
             (factorial three))
           (y (lambda (fact)
                (lambda (x)
                  (zero? x
                   one
                   (* (fact (pred x)) x))))))))
    (xsubst expr table)))

* (test-form)

((LAMBDA (FACTORIAL) (FACTORIAL (LAMBDA (F) (LAMBDA (X) (F (F (F X)))))))
 ((LAMBDA (F) ((LAMBDA (X) (F (X X))) (LAMBDA (X) (F (X X)))))
  (LAMBDA (FACT)
    (LAMBDA (X)
      ((LAMBDA (N T F) ((N (LAMBDA (X) F)) T)) X
       (LAMBDA (F) (LAMBDA (X) (F X)))
       ((LAMBDA (M N) (LAMBDA (F) (M (N F))))
        (FACT
         ((LAMBDA (N)
            (LAMBDA (F)
              (LAMBDA (X)
                (((N (LAMBDA (G) (LAMBDA (H) (H (G F))))) (LAMBDA (U) X))
                 (LAMBDA (U) U)))))
          X))
        X))))))

* (beta-normalize (test-form))

127 beta reductions
(LAMBDA (F) (LAMBDA (X) (F (F (F (F (F (F X))))))))

This is the Church numeral for 6.

I find it pretty amazing that we can bootstrap ourselves up to arithmetic just by repeatedly β-reducing where we can. It doesn't seem like we're actually doing any work. We're just replacing names with what they stand for.

The β-substitution above replaces all the bound variables with their arguments if there is the correct number of arguments. One could easily implement a partial β-substitution that replaced only some of the bound variables. You'd still have an application, but some of the bound variables in the lambda would be eliminated and the corresponding argument would be removed.
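Here is a rough sketch of what such a partial reduction could look like (my code, not the post's), reusing the helpers defined above. It substitutes only the bound variables listed in vars and keeps the rest bound by a smaller lambda; a full treatment would also α-rename to avoid capturing free variables of the substituted operands.

(defun beta-partial (expression vars if-reduced if-not-reduced)
  ;; Like BETA above, but only the bound variables in VARS are replaced.
  (if (application? expression)
      (let ((operator (application-operator expression))
            (operands (application-operands expression)))
        (if (and (lambda? operator)
                 (same-length? (lambda-bound-variables operator) operands))
            (let ((table (table/empty))
                  (kept-variables '())
                  (kept-operands '()))
              (loop for variable in (lambda-bound-variables operator)
                    for operand in operands
                    do (if (member variable vars)
                           (setf table (table/extend table variable operand))
                           (progn (push variable kept-variables)
                                  (push operand kept-operands))))
              (funcall if-reduced
                       (make-application
                        (cons (make-lambda (nreverse kept-variables)
                                           (xsubst (lambda-body operator) table))
                              (nreverse kept-operands)))))
            (funcall if-not-reduced)))
      (funcall if-not-reduced)))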

Joe Marshall: Substitution

· 45 days ago

In McCarthy's early papers on Lisp, he notes that he needs a modified version of subst that is aware of quoted expressions (and avoids substituting within them). He would also need a subst that is aware of lambda expressions: it would have to avoid substituting within the lambda if the name being substituted matches one of the bound variables. To be useful for evaluation, it would also have to deal with accidental variable capture when substituting within a lambda.

The root problem is that expressions are actually structured objects, but we are working with the list representation of those objects. Instead of substituting by operating on objects, we substitute on the list representation. We have to arrange for the syntactic substitution on the list representation to preserve the semantics of substitution on the objects they represent.

In the substitution model, we take a symbolic expression and replace some of the atoms in the expression with other expressions. We first need a way to discriminate between the different kinds of expressions. An expression is either an atomic symbol, or a list of expressions called an application. There are no other kinds of expressions.

(defun expression-dispatch (expression if-symbol if-application)
  (cond ((symbolp expression) (funcall if-symbol expression))
        ((consp expression)   (funcall if-application expression))
        (t (error "~s is not an expression." expression))))
Substitution is straightforward:
(defun xsubst (table expression)
  (expression-dispatch expression
    (lambda (symbol)
      (funcall table symbol #'identity (constantly symbol)))

    (lambda (subexpressions)
      (map 'list (lambda (subexpression) (xsubst table subexpression)) subexpressions))))

* (let ((table (table/extend (table/empty) 'x '(* a 42))))
    (xsubst table '(+ x y)))  
(+ (* A 42) Y)
We need a table of multiple substitutions so that we can substitute in parallel:
* (let ((table (table/extend
                 (table/extend (table/empty) 'x 'y)
                 'y 'x)))
    (xsubst table '(+ x y)))
(+ Y X)

So far, so good. Let's add lambda expressions. First, we need to add a new expression kind:

(defun expression-dispatch (expression if-symbol if-lambda if-application) 
  (cond ((symbolp expression) (funcall if-symbol expression))
        ((consp expression)
         (cond ((eq (car expression) 'lambda)
                (funcall if-lambda (cadr expression) (caddr expression)))
               (t (funcall if-application expression))))
        (t (error "~s is not an expression." expression))))

Substitution within a lambda expression is a bit tricky. First, you don't want to substitute a symbol if it is one of the bound variables of the lambda expression. Second, substituting a symbol may introduce more symbols. We don't want the new symbols to be accidentally captured by the bound variables in the lambda. If any new symbol has the same name as a bound variable, we have to rename the bound variable (and all its occurrences) to a fresh name so that it doesn't capture the new symbol being introduced. We'll need a helper function

(defun free-variables (expression)
   (expression-dispatch expression
     (lambda (symbol) (list symbol))
     (lambda (bound-variables body)
       (set-difference (free-variables body) bound-variables))
     (lambda (subexpressions)
       (fold-left #'union '() (map 'list #'free-variables subexpressions)))))
Now when we substitute within a lambda, we first find each free variable in the lambda, look it up in the substitution table, and collect the free variables of the substituted value:
(map 'list (lambda (var)
             (funcall table var #'free-variables (constantly '())))
      (free-variables expression))
This gives us the new free variables for each substitution. The union of all of these is the set of all the new free variables
(fold-left #'union '()
           (map 'list (lambda (var)
                        (funcall table var #'free-variables (constantly '())))
                 (free-variables expression)))
We have to rename the bound variables that are in this set:
 
(intersection 
  bound-variables
  (fold-left #'union '()
             (map 'list (lambda (var)
                          (funcall table var #'free-variables (constantly '())))
                  (free-variables expression))))
So we make a little table for renaming:
(defun make-alpha-table (variables)
  (fold-left (lambda (table variable)
               (table/extend table variable (gensym (symbol-name variable))))
             (table/empty)
             variables))

(let ((alpha-table
       (make-alpha-table
        (intersection 
          bound-variables
          (fold-left #'union '()
                     (map 'list (lambda (var)
                                  (funcall table var #'free-variables (constantly '())))
                          (free-variables expression)))))))
  …)
We rename the bound variables as necessary:
  (make-lambda
    (map 'list (lambda (symbol)
                 (funcall alpha-table symbol #'identity (constantly symbol)))
         bound-variables)
    …)
Finally, we redact the bound variables from the substitution table and append the alpha-table to make the substitutions we need for the lambda body
  (make-lambda
   (map 'list (lambda (symbol)
                (funcall alpha-table symbol #'identity (constantly symbol)))
        bound-variables)
   (xsubst (table/append alpha-table (table/redact* table bound-variables))
           body))))
The entire definition of xsubst is now this:
(defun xsubst (table expression)
  (expression-dispatch expression
    (lambda (symbol)
      (funcall table symbol #'identity (constantly symbol)))

    (lambda (bound-variables body)
      (let ((alpha-table
             (make-alpha-table
              (intersection
               bound-variables
               (fold-left #'union '()
                          (map 'list (lambda (var)
                                       (funcall table var
                                                #'free-variables
                                                (constantly '())))
                               (set-difference (free-variables body) bound-variables)))))))
        (make-lambda
         (map 'list (lambda (symbol)
                      (funcall alpha-table symbol #'identity (constantly symbol)))
              bound-variables)
         (xsubst (table/append alpha-table (table/redact* table bound-variables))
                 body))))

    (lambda (subexpressions)
      (make-application
       (map 'list (lambda (subexpression)
                    (xsubst table subexpression))
            subexpressions)))))
This is certainly more complicated than simple substitution, but we can see it does the right thing here:
* (xsubst (table/extend (table/empty) 'x '(* a y)) '(lambda (y) (+ x y)))
(LAMBDA (#:Y234) (+ (* A Y) #:Y234))

It should be obvious how to add quoted forms. This would require adding a new kind of expression to expression-dispatch and a new handling clause in xsubst that avoids substitution.
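For example, here is a sketch of the shape this could take (my code, not the post's): expression-dispatch grows an if-quote continuation, and the corresponding clause in xsubst rebuilds the quotation without substituting inside it.

(defun expression-dispatch (expression if-symbol if-quote if-lambda if-application)
  (cond ((symbolp expression) (funcall if-symbol expression))
        ((consp expression)
         (cond ((eq (car expression) 'quote)
                (funcall if-quote (cadr expression)))
               ((eq (car expression) 'lambda)
                (funcall if-lambda (cadr expression) (caddr expression)))
               (t (funcall if-application expression))))
        (t (error "~s is not an expression." expression))))

;; ... and the new clause in xsubst simply reconstructs the quotation:
;;   (lambda (quoted) (list 'quote quoted))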

I'm not completely happy with how we've added lambda expressions to the expression syntax. Using the symbol lambda as a syntactic marker for lambda expressions causes problems if we also want to use that symbol as an argument or variable. Initially, it seems reasonable to be able to name an argument “lambda”. Within the body of the function, references to the variable lambda would refer to that argument. But what about references in the operator position? By defining lambda expressions as three element lists beginning with the symbol lambda we've made it ambiguous with two-argument applications whose operator is the variable lambda. We have to resolve this ambiguity. The current behavior is that we always interpret the symbol lambda as a syntactic marker so you simply cannot use a variable named lambda as a function.

Nicolas Hafner: Eternia release and updated plans - May Kandria Update

· 46 days ago
[Header image: https://filebox.tymoon.eu//file/TWpNd01RPT0=]

Another hectic month gone by. Getting Eternia: Pet Whisperer done and published on Steam in such a short time was no small feat. There were a few hurdles along the way as well, especially when it came to getting it ready for Steam, but overall I'm really glad we managed to get it out. Having a bona fide released title is amazing!

Most of the trouble with getting it put on Steam was the manual review process they have. It turns out, real people actually check your builds before you're allowed to sell the game. And not only do they check whether it launches, they also do other tests like whether gamepad support works, whether captions are included, etc. It's a lot more extensive than I had expected, which is really nice!

Unfortunately I was also confused by a couple of options, and misunderstood others, so I had to go through the review a bunch of times, which caused a bunch of stress for me and ultimately delayed the release of Eternia by two days. Welp! At least now I know for the future that I'll have to start with the Steam review process well in advance of the actual release. With Eternia everything was so back to back that there really wasn't any more time for that than we had.

The days since the release haven't been much less hectic either. There were a few bugs that had to be ironed out that prevented people from playing the game. Most notably:

  • People with surround headphones could not play as those headphones announce themselves as 7.1 surround systems, which I didn't expect anyone would have.

  • Some bad versions of shared libraries snuck themselves into the release builds on Linux.

  • The save menu was bugged due to a regression in the UI library.

  • Avast antivirus causes random Windows exceptions in the game (this is not fixed and I don't know how to fix it).

There were also some minor content fixes for typos and such along the way. In any case, the automated crash report system I instated for Kandria helped a lot as it told me when people were having issues. Still, it was heartbreaking every time I got a report, as knowing people can't even play the game and are likely very frustrated or disappointed stings a lot.

I really hope the Kandria release will be more smooth and less prone to issues now that we have a better handle on the process. Still, the amount of issues that are only uncovered by having people with various PC setups try to run your game is... well, it's not great. One of the many perils of the PC platform, I suppose.

It hasn't yet been a full week so I don't really want to go into the statistics of the Eternia release, but I'll be sure to talk about that stuff in the next weekly update!

Anyway, aside from supporting the Eternia review and taking care of the marketing around that, I also did some planning for Kandria. I asked ProHelvetia, and it turns out the submission deadline for the grant is going to be 1st of September of this year. This gives us roughly four months to graft a tutorial onto the vertical slice, and polish the fuck out of it to make it look as good as possible.

A first step in that is to iron out egregious bugs that were reported from people playing the public vertical slice, and making a new trailer that shows off all the new content we made. I've now started working on both of those things and we should be able to finish the new trailer in the next week.

I initially also wanted to put out a patch for the vertical slice already, but while we did fix a number of bugs and there's a few improvements, I'd rather wait until the rest of the other known and egregious bugs are ironed out at least. Regardless though, if you're on the Discord or the mailing list, we'll let you know when that patch hits!

Composers

In the last monthly I announced that you'll get to hear the pieces the three finalists put together, so here they are! I hope you enjoy listening to them!

While all three of them did a fantastic job and the decision was a hard one, we ultimately went with...

Mikel

Hey everyone, my name is Mikel - nice to meet you! I'll be working as Kandria's composer from now on, which means I'll bang some notes on the piano and hope they sound good.

In my first week as part of the team, I've been working side by side with Shinmera on the game's new trailer. Taking inspiration from OSTs like Astral Chain and Octopath Traveller (as well as using our secret weapon, Julie Elven), we're cooking up something real good!

If you have any suggestions, any soundtracks you'd like me to check out, or any weird instruments/sounds you demand I incorporate in the music, please let me know on the Discord! It'd be great to have some feedback from you too :)

Ah yes, time for shameless plug: if you'd like to listen to some other games I'm working on, feel free to peruse Underdesert (rogue-lite dungeon FPS game), Waves of Steel (naval battle!), Cryptonom (if you like monster RPGs), or Heredity (good ol' fantasy ARPG).

You can also check out my website for more deets.

Look forward to showing you what's next!

Tim

The majority of this month has been occupied by our jam game Eternia. It was actually my first game jam, and I've really enjoyed it - I'm not exaggerating when I say it's been some of the most fun I've had in game writing so far. We agreed on the broad concept early on, and then I was given a lot of creative freedom on the story and character dialogue. I'm really pleased with how much content I produced (around 20k words) in such a short time frame, which was definitely helped by the quick-iteration dialogue system, and forced by the necessary constraints.

The game is incredibly non-linear, so the challenge was writing things in a modular way, such that they made sense whichever direction you came at them from. With some crafty narrative design and plotting, I think it holds together remarkably well; players would be surprised how little, if any, conditional branching there is behind the scenes. With such a tight dev time, it was important the game wasn't a spiderweb of quest logic, for both my own sanity, and for playtesting.

Thankfully I didn't have to do too much research either, since my wife Lesleyann and I have rescued animals from shelters in the past (especially rats!). Les used to be an animal behaviourist too, so I've gained a base awareness of things like calming signals in dogs, by osmosis over the years; she was also a big help when I needed a steer on certain things. There really wasn't time for anything more than a cursory amount of research, so this was definitely a plus for this idea when Nick first suggested it.

Towards the end of the month I jumped back on Kandria, fleshing out the lore for the other world factions ready for Fred to concept. This was really beneficial to give me a feel for the wider world, and allowed some tweaking of the plot. As we progress with the content for the rest of this year, these factions will get fleshed out further with their own unique characters and quest lines.

I've also worked with Nick on the script for the new trailer, and helped in sourcing a voice actor. I've also gotten back up to speed on the vertical slice content, playtesting the quests to make sure they're still working correctly after the restructuring of the quest system. With fresh eyes, this has also let me tweak dialogue as I went.

What's next

Here's a rough roadmap for the rest of the year:

  • Make the combat more flashy

  • Finish a new trailer

  • Design and outline the factions for the rest of the game

  • Develop the soundscape for Kandria and start working on the tracks for region 1

  • Add a music system that can do layering and timed transitions

  • Build and add the tutorial sequence to the beginning of the game

  • Polish the ProHelvetia submission material

  • Polish and revise the combat design

  • Explore platforming items and mechanics

  • Practise platforming level design

  • Start work on the horizontal slice

Of course the size of the items varies a lot, but hopefully we should be able to begin work on the horizontal slice by November. If we extrapolate the time it took for the vertical slice and cut some content, this should leave us with enough time to finish by early 2023... provided we don't run outta money.

Anyway, that's why the grant is the primary target for now. Though we were also accepted by ProHelvetia for GDC Online and the SwissNex US Game Industry Week, so that'll give us some more opportunities to look for a publisher. Having a new trailer for that should definitely help a lot!

Well, see you next time then! Remember that there's a mailing list with weekly updates, a discord for the community, and my twitter with short videos and images on the development! Oh, and do check out Eternia if you haven't yet. We've gotten some really nice reviews for it, which has been a joy to see!

Joe Marshall: Lightweight table

· 52 days ago

You don't need a data structure to make a lookup table. You can make a table just out of the lookup function. In this example, we start with a continuation passing style lookup function:

lookup (key if-found if-not-found)
  
    Invokes (funcall if-found value) if key is in the table,
    invokes (funcall if-not-found) otherwise.
An empty table just invokes the if-not-found continuation:
(defun table/empty ()
  (lambda (key if-found if-not-found)
    (declare (ignore key if-found))
    (funcall if-not-found)))
A table can be extended by wrapping it:
(defun table/extend (table key* value)
  (lambda (key if-found if-not-found)
    (if (eql key key*)
        (funcall if-found value)
        (funcall table key if-found if-not-found))))
So let's try it out:
(defvar *table-1* (table/extend 
                    (table/extend
                      (table/empty)
                      'foo 42)
                    'bar 69))

* (funcall *table-1* 'foo #'identity (constantly 'not-found))
42

* (funcall *table-1* 'quux #'identity (constantly 'not-found))
NOT-FOUND
You can also redact an entry from a table by wrapping the table:
(defun table/redact (table redacted)
  (lambda (key if-found if-not-found)
    (if (eql key redacted)
        (funcall if-not-found)
        (funcall table key if-found if-not-found))))

(defvar *table-2* (table/redact *table-1* 'foo))

* (funcall *table-2* 'foo #'identity (constantly 'not-found))
NOT-FOUND

Are there any advantages to implementing a table in this curious manner? Building a table by nesting a series of lookup steps leads to a linear lookup in linear space, so this kind of table should be more or less comparable to an alist for individual entries. Unlike a traditional table made with a data structure, you cannot enumerate the keys and values in the table. On the other hand, you gain the ability to map keys to values without having to enumerate the keys:

(defun table/bind-predicate (table predicate value)
  (lambda (key if-found if-not-found)
    (if (funcall predicate key)
        (funcall if-found value)
        (funcall table key if-found if-not-found))))

;;; bind all even numbers to the symbol 'EVEN
(defvar *table-3* 
  (table/bind-predicate *table-2* (lambda (n) (and (numberp n) (evenp n))) 'even))

* (funcall *table-3* 6 #'identity (constantly 'not-found))
EVEN
Or you can add a default value to an existing table:
(defun table/add-default (table default-value)
  (lambda (key if-found if-not-found)
    (declare (ignore if-not-found))
    (funcall table key
      if-found
      (lambda () (funcall if-found default-value)))))

(defvar *table-4* (table/add-default *table-3* 'default))

* (funcall *table-4* 'bar #'identity (constantly 'not-found))
69
    
* (funcall *table-4* 'xyzzy #'identity (constantly 'not-found))
DEFAULT

Perhaps the biggest disadvantage of this implementation is the difficulty in inspecting a table.

* *table-4*
#<CLOSURE (LAMBDA (KEY IF-FOUND IF-NOT-FOUND) :IN TABLE/ADD-DEFAULT) {1003CD641B}>
We can use the object inspector to peek inside the closure and maybe sleuth out what this table is made out of, but it isn't just an alist where we can print out the entries.

So far, we've defined a table as being a procedure with the (key if-found if-not-found) signature, but we can flip this around and say that any procedure with a (key if-found if-not-found) signature can be thought of as a table. For example, a regular expression matcher could be considered to be a table of strings (if that were a more useful model).
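For instance, here is a quick sketch (my example, not from the post) that wraps a cl-ppcre regular expression in that same signature, assuming the cl-ppcre library is loaded; every matching string is "in" the table and maps to a single value:

(defun table/match (regex value)
  (lambda (key if-found if-not-found)
    (if (and (stringp key) (cl-ppcre:scan regex key))
        (funcall if-found value)
        (funcall if-not-found))))

* (funcall (table/match "^foo" 'starts-with-foo) "foobar"
           #'identity (constantly 'not-found))
STARTS-WITH-FOO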

Wimpie Nortje: User feedback during long running external processes.

· 53 days ago

Sometimes it may be necessary to execute an external command that takes a long time to complete, long enough that the user needs visual feedback while it is running to show that the process is still alive.

UIOP provides fantastic tools for running external commands in a portable manner but it was not obvious to me how to show the external command's output to the user while it was still busy. I also wanted to execute the external command in a synchronous fashion, i.e. my lisp application must wait for the external command to finish before continuing. The need for synchronicity sent me down the wrong path of using the synchronous uiop:run-program. It only returns when the external command has finished, which means you can only process the command output once it is completed.

I eventually realised I should use uiop:launch-program, the asynchronous version, and I came up with the following solution. In the example below the (ping) function pings a website and prints the results as they become available. Afterwards it returns the exit code of the ping command.

(defun ping ()
  (let (proc out exit-code)
    (unwind-protect
         (progn
           (setf proc (uiop:launch-program
                       (list "ping" "-c" "5" "www.darkchestnut.com")
                       :ignore-error-status t
                       :output :stream))
           (setf out (uiop:process-info-output proc))
           (iter
             (while (uiop:process-alive-p proc))
             (iter
               (while (listen out))
               (write-char (read-char out) *STANDARD-OUTPUT*))
             ;; ... Maybe do something here
             (sleep 0.5))
           (uiop:copy-stream-to-stream out *STANDARD-OUTPUT* 
                                       :linewise t))
      (setf exit-code (uiop:wait-process proc))
      (uiop:close-streams proc))
    exit-code))

In the first example the command's output is shown to the user but it is not processed in any other way. If you need to do some extra processing on it after completion then the next example should provide a good starting point.

(defun ping-processing ()
  (let (proc out exit-code output)
    (with-open-stream (output-stream (make-string-output-stream))
      (with-open-stream (broadcast-stream 
                         (make-broadcast-stream *STANDARD-OUTPUT* 
                                                output-stream))
        (unwind-protect
             (progn
               (setf proc (uiop:launch-program
                           (list "ping" "-c" "5" 
                                 "www.darkchestnut.com")
                           :ignore-error-status t
                           :output :stream))
               (setf out (uiop:process-info-output proc))
               (iter
                 (while (uiop:process-alive-p proc))
                 (iter
                   (while (listen out))
                   (write-char (read-char out) broadcast-stream))
                 (sleep 0.5))
               (uiop:copy-stream-to-stream out broadcast-stream 
                                           :linewise t))
          (setf exit-code (uiop:wait-process proc))
          (uiop:close-streams proc))
        (setf output (get-output-stream-string output-stream))
        ;; ... process output here
        exit-code))))

Eric Timmons: New Project: adopt-subcommands

· 63 days ago

I have just released a new project: adopt-subcommands. This project extends the excellent Adopt library with support for arbitrarily nested subcommands. See the README for more information.

I have just asked that it be included in Quicklisp, so hopefully it will be present in the next QL release.

History

After bouncing around between CL command line processing libraries for a while (including CLON, unix-opts, and another I forget), I tried Adopt shortly after it was released and immediately got hooked. It was just super easy to use and it used functions as the default way to define interfaces (which encouraged reuse and programmatic generation). To be fair, other libraries have similar features, but there's just something about Adopt that clicked with me.

The big thing missing for me was easy support for subcommands. Libraries like CLON support that out of the box, but (at least in CLON's case) require that you completely specify every option at the terminal nodes. I wanted to define a folder-like hierarchy where options defined at some level get automatically applied to everything below it as well.

I was able to hack together a solution using Adopt, but I built it in a hurry and it was definitely not fit for general consumption. Since then, I was inspired by Steve Losh's Reddit comment giving an example of how he'd make a simple subcommand CLI parser using Adopt. His post made me realize I missed the existence of the adopt:treat-as-argument restart (d'oh!) and after that, all the pieces fell into place on how to cleanly rewrite my solution. This library is the result!

Nifty Features

I work with a number of programs written in golang that (IMO) have atrocious CLI handling (like helmfile and Kaniko). Maybe it's the individual program's fault, but it's endemic enough that I suspect whatever CLI parser the golang community has landed on is just terrible.^1

For instance, position of the options matters. "Global" options have to come before the subcommand is even specified. So foo --filter=hi run can have a completely different meaning than foo run --filter=hi. Additionally, some of the subcommand style programs I work with don't print all the options if you ask for help, they only print the options associated with the most recent subcommand.

Needless to say, I made sure adopt-subcommands didn't exhibit any of these behaviors. As this library is parsing the command line, it builds up a path of the folders (and eventually the terminal command) it passes through. This path can be passed to adopt-subcommands:print-help to print a help string that includes all the associated options. Additionally, options can come at any point after the subcommand that defines them.^2

There are two major differences between Adopt and this library:

  1. You need to provide a function when you define a terminal subcommand. This function will be called with the results of the parsing when you dispatch.
  2. The dispatch function has a keyword argument :print-help-and-exit. If you provide the symbol naming your help option, then this library will automatically print the help and exit if that option is specified, and after doing as much parsing as possible.

Give it a try and let me know of any issues that you find!

1: Although it wouldn't surprise me if some gophers started arguing that it's totally on purpose, is actually quite elegant, blah blah blah. I kid. I'm just salty about golang's lack of conditions and its insistence on using tabs, and I let that color my take on the entire language.

2: It would be possible to let them come before as well, but at the risk of introducing ambiguity. It's not clear to me that it's worth it.

Joe Marshall&#951;-conversion and tail recursion

· 66 days ago

Consider this lambda expression: (lambda (x) (sqrt x)). This function simply calls sqrt on its argument and returns whatever sqrt returns. There is no argument you could provide to this function that would cause it to return a different result than you would get from calling sqrt directly. We say that this function and the sqrt function are extensionally equal. We can replace this lambda expression with a literal reference to the sqrt function without changing the value produced by our code.

You can go the other way, too. If you find a literal reference to a function, you can replace it with a lambda expression that calls the function. This is η-conversion. η-reduction is removing an unnecessary lambda wrapper; η-expansion is introducing one.

η-conversion comes with caveats. First, it only works on functions. If I have a string "foo", and I attempt to η-expand this into (lambda (x) ("foo" x)), I get nonsense. Second, a reduction strategy that incorporates η-reduction can be weaker than one that does not. Consider this expression: (lambda (x) ((compute-f) x)). We can η-reduce this to (compute-f), but this makes a subtle difference. When wrapped with the lambda, (compute-f) is evaluated just before it is applied to x. In fact, we won't call (compute-f) unless we invoke the result of the lambda expression somewhere. However, once η-reduced, (compute-f) is evaluated at the point the original lambda was evaluated, which can be quite a bit earlier.
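
Here is a small Common Lisp sketch of that second caveat (compute-f is a name I made up for illustration): the η-expanded form delays the call, while the η-reduced form performs it immediately.

(defun compute-f ()
  (format t "computing f~%")  ; visible side effect to show when it runs
  #'sqrt)

;; η-reduced: COMPUTE-F runs right now, when this form is evaluated.
(defparameter *reduced* (compute-f))

;; η-expanded: COMPUTE-F runs only when the resulting function is called.
(defparameter *expanded* (lambda (x) (funcall (compute-f) x)))

(funcall *expanded* 4.0)  ; prints "computing f" here, then returns 2.0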


When a function foo calls another function bar as a subproblem, an implicit continuation is passed to bar. bar invokes this continuation on the return value that it computes. We can characterize this continuation like this:

kbar = (lambda (return-value)
         (kfoo (finish-foo return-value)))
this just says that when bar returns, we'll finish running the code in foo and further continue by invoking the continuation supplied to foo.

If foo makes a tail call to bar, then foo is just returning what bar computes. There is no computation for foo to finish, so the continuation is just

kbar = (lambda (return-value)
         (kfoo return-value))
But this η-reduces to just kfoo, so we don't have to allocate a new trivial continuation when foo tail calls bar, we can just pass along the continuation that was passed to foo.

Tail recursion is equivalent to η-reducing the implicit continuations to functions where possible. A Scheme aficionado might prefer to say avoiding η-expanding where unnecessary.

This is a mathematical curiosity, but does it have practical significance? If you're programming in continuation passing style, you should be careful to η-reduce (or avoid η-expanding) your code.
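
Here is a small sketch of the point in explicit continuation passing style (my own example, not from the post): when nothing remains to be done after a call, pass the continuation you received along unchanged instead of wrapping it in a fresh lambda.

(defun sum-cps (list k)
  (if (null list)
      (funcall k 0)
      ;; Work remains after the recursive call, so a new continuation is needed.
      (sum-cps (cdr list)
               (lambda (rest-sum) (funcall k (+ (car list) rest-sum))))))

(defun total-cps (list k)
  ;; Tail call: nothing left to do here, so pass K itself.
  ;; Writing (lambda (v) (funcall k v)) instead would be the needless η-expansion.
  (sum-cps list k))

;; (total-cps '(1 2 3) #'identity) => 6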

Years ago I was writing an interpreter for the REBOL language. I was getting frustrated trying to make it tail recursive. I kept finding places in the interpreter where the REBOL source code was making a tail call, but the interpreter itself wasn't, so the stack would grow without bound. I decided to investigate the problem by rewriting the interpreter in continuation passing style and seeing why I couldn't η-convert the tail calls. Once in CPS, I could see that eval took two continuations and I could achieve tail recursion by η-reducing one of them.

Wimpie NortjeProcess sub-command style command line options with Adopt.

· 66 days ago

How to process sub-command style command line arguments is a question that arises more and more. Many of the basic option handling libraries cannot handle this at all, or they make it very difficult to do so.

One of the newer libraries in the option processing field is Adopt by Steve Losh. It was not designed to handle sub-commands but it is in fact quite capable of doing this without having to jump through too many hoops.

In a Reddit thread someone asked if Adopt can handle sub-command processing and Steve answered with the following example:

(eval-when (:compile-toplevel :load-toplevel :execute)
  (ql:quickload '(:adopt) :silent t))

(defpackage :subex
  (:use :cl)
  (:export :toplevel *ui*))

(in-package :subex)

;;;; Global Options and UI ----------------------------------------------------
(defparameter *o/help*
  (adopt:make-option 'help :long "help" :help "display help and exit" :reduce (constantly t)))

(defparameter *o/version*
  (adopt:make-option 'version :long "version" :help "display version and exit" :reduce (constantly t)))

(defparameter *ui/main*
  (adopt:make-interface
    :name "subex"
    :usage "[subcommand] [options]"
    :help "subcommand example program"
    :summary "an example program that uses subcommands"
    :contents (list *o/help* *o/version*)))

(defparameter *ui* *ui/main*)


;;;; Subcommand Foo -----------------------------------------------------------
(defparameter *o/foo/a*
  (adopt:make-option 'a :result-key 'mode :short #\a :help "run foo in mode A" :reduce (constantly :a)))

(defparameter *o/foo/b*
  (adopt:make-option 'b :result-key 'mode :short #\b :help "run foo in mode B" :reduce (constantly :b)))

(defparameter *ui/foo*
  (adopt:make-interface
    :name "subex foo"
    :usage "foo [-a|-b]"
    :summary "foo some things"
    :help "foo some things"
    :contents (list *o/foo/a* *o/foo/b*)))

(defun run/foo (mode)
  (format t "Running foo in ~A mode.~%" mode))


;;;; Subcommand Bar -----------------------------------------------------------
(defparameter *o/bar/meow*
  (adopt:make-option 'meow :long "meow" :help "meow loudly after each step" :reduce (constantly t)))

(defparameter *ui/bar*
  (adopt:make-interface
    :name "subex bar"
    :usage "bar [--meow] FILE..."
    :summary "bar some files"
    :help "bar some files"
    :contents (list *o/bar/meow*)))

(defun run/bar (paths meow?)
  (dolist (p paths)
    (format t "Bar-ing ~A.~%" p)
    (when meow?
      (write-line "meow."))))


;;;; Toplevel -----------------------------------------------------------------
(defun toplevel/foo (args)
  (multiple-value-bind (arguments options) (adopt:parse-options-or-exit *ui/foo* args)
    (unless (null arguments)
      (error "Foo does not take arguments, got ~S" arguments))
    (run/foo (gethash 'mode options))))

(defun toplevel/bar (args)
  (multiple-value-bind (arguments options) (adopt:parse-options-or-exit *ui/bar* args)
    (when (null arguments)
      (error "Bar requires arguments, got none."))
    (run/bar arguments (gethash 'meow options))))

(defun lookup-subcommand (string)
  (cond
    ((null string) (values nil *ui/main*))
    ((string= string "foo") (values #'toplevel/foo *ui/foo*))
    ((string= string "bar") (values #'toplevel/bar *ui/bar*))
    (t (error "Unknown subcommand ~S" string))))

(defun toplevel ()
  (sb-ext:disable-debugger)
  (multiple-value-bind (arguments global-options)
      (handler-bind ((adopt:unrecognized-option 'adopt:treat-as-argument))
        (adopt:parse-options *ui/main*))
    (when (gethash 'version global-options)
      (write-line "1.0.0")
      (adopt:exit))
    (multiple-value-bind (subtoplevel ui) (lookup-subcommand (first arguments))
      (when (or (null subtoplevel)
                (gethash 'help global-options))
        (adopt:print-help-and-exit ui))
      (funcall subtoplevel (rest arguments)))))
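
To get a feel for the dispatch without dumping an executable, the sub-toplevels can be exercised directly from the REPL. The expected output below is my own reading of the code above, not taken from the original post:

(subex::toplevel/foo '("-b"))
;; prints: Running foo in B mode.

(subex::toplevel/bar '("--meow" "a.txt" "b.txt"))
;; prints: Bar-ing a.txt.
;;         meow.
;;         Bar-ing b.txt.
;;         meow.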

Quicklisp newsApril 2021 Quicklisp dist update now available

· 66 days ago

 New projects

  • cluffer — Library providing a protocol for text-editor buffers. — FreeBSD, see file LICENSE.text
  • data-frame — Data frames for Common Lisp — MS-PL
  • dfio — Common Lisp library for reading data from text files (eg CSV). — MS-PL
  • herodotus — Wrapper around Yason JSON parser/encoder with convenience methods for CLOS — BSD
  • lisp-stat — A statistical computing environment for Common Lisp — MS-PL
  • numerical-utilities — Utilities for numerical programming — MS-PL
  • nyxt — Extensible web-browser in Common Lisp — BSD 3-Clause
  • shop3 — SHOP3 Git repository — Mozilla Public License
  • special-functions — Special functions in Common Lisp — MS-PL
  • tfeb-lisp-hax — TFEB.ORG Lisp hax — MIT

Updated projects: 3bmd, 3d-matrices, alexandria, algae, anypool, april, array-operations, async-process, audio-tag, bdef, bp, canonicalized-initargs, cffi, chanl, ci-utils, cl+ssl, cl-autowrap, cl-change-case, cl-clon, cl-collider, cl-colors2, cl-coveralls, cl-cxx, cl-data-structures, cl-digraph, cl-environments, cl-gamepad, cl-gserver, cl-heredoc, cl-json-pointer, cl-kraken, cl-las, cl-liballegro, cl-liballegro-nuklear, cl-markless, cl-marshal, cl-maxminddb, cl-mixed, cl-mock, cl-patterns, cl-rabbit, cl-ses4, cl-shlex, cl-ssh-keys, cl-str, cl-strings, cl-typesetting, cl-utils, cl-webkit, clack, clods-export, clog, closer-mop, common-lisp-jupyter, computable-reals, concrete-syntax-tree, consfigurator, cricket, croatoan, cubic-bezier, cytoscape-clj, damn-fast-priority-queue, dataloader, defconfig, definitions-systems, dexador, doplus, eazy-documentation, eclector, enhanced-defclass, femlisp, file-attributes, flac-metadata, freesound, functional-trees, gadgets, gendl, glacier, golden-utils, gtirb-capstone, gtirb-functions, gtwiwtg, harmony, helambdap, hunchenissr, hyperluminal-mem, imago, ironclad, json-mop, kekule-clj, lake, lass, lichat-protocol, linear-programming, linux-packaging, lisp-binary, listopia, magicl, maiden, markup, mcclim, mgl-pax, mito, multiposter, mutility, neural-classifier, nodgui, north, omer-count, origin, parachute, parsley, patchwork, perceptual-hashes, petalisp, plump, pngload, postmodern, qlot, quicklisp-stats, quilc, quri, random-uuid, sc-extensions, seedable-rng, sel, select, serapeum, shadow, shasht, slot-extra-options, sly, staple, static-dispatch, stripe, stumpwm, taglib, tfeb-lisp-tools, tfm, trivia, trivial-features, trivial-timer, ttt, umbra, umlisp, utilities.print-items, validate-list, vgplot, with-user-abort, zippy.

Removed projects: its

To get this update, use (ql:update-dist "quicklisp"). Enjoy!

Wimpie NortjeA list of Common Lisp command line argument parsers.

· 67 days ago

I was searching for a command line option parser that can handle git-style sub-commands and found a whole bunch of libraries. It appears as if libraries on this topic proliferate more than usual.

I evaluated them only to the point where I could decide to skip them or give them a cursory test. The information I gathered is summarised below.

If you only need the usual flag and option processing, i.e. not sub-commands, then I would suggest unix-opts. It appears to be the accepted standard and is actively maintained. It is also suggested by both Awesome Common Lisp and the State of the Common Lisp Ecosystem Survey 2020.

If your needs are very complex or specific you can investigate clon, utility-arguments or ace.flag.

For basic flags and options with sub-commands, there are a few libraries that explicitly support sub-command processing but you should be able to make it work with many of the other options and a bit of additional code.

Name                    | Print help | Native sub-commands | Notes
------------------------|------------|---------------------|------------------------------------------------------------
ace.flag                | ?          | ?                   | Not in QL.
adopt                   | Yes        | No                  | Can generate man files.
apply-argv              | No         | No                  | Does not handle -xyz as three flags.
cl-just-getopt-parser   | No         | No                  | Easy to use.
cl-cli                  | Yes        | Yes                 |
cl-argparse             | Yes        | Yes                 |
cli-parser              | No         | No                  | Does not handle free arguments, not in QL.
clon                    | ?          | Yes                 | Very complex, most feature rich.
command-line-arguments  | ?          | ?                   | Not well documented.
getopt                  | No         | No                  | Does not handle -xyz as three flags, not well documented.
parse-args              | No         | No                  | Not in QL.
utility-arguments       | ?          | ?                   | Complex to set up.
unix-options            | Yes        | No                  | Easy to use.
unix-opts               | Yes        | No                  | The standard recommendation.

Nicolas HafnerSlicing Up the Game - April Kandria Update

· 74 days ago

header
What a hell of a month! We got a lot done, all of it culminating in the release of the new vertical slice demo! This demo is now live, and you can check it out for free! This slice includes an hour or more of content for you to explore, so we hope you enjoy it!

Visuals and Level Design

Like last month, a good chunk of this month was spent designing the remaining areas we needed for the slice. However, this is also the part that got the most shafted compared to how much time I should be investing in it. I'm going to have to dedicate a month or two at some point to just doing rough levels and figuring out what works, both for platforming challenges and for combat. So far I've never actually taken the time to do this, so I still feel very uncertain when it comes to designing stuff.

Still, I'm fairly happy with at least the visual look of things. Fred has done some excellent work with the additional tile work I've requested from him, and I'm starting to learn how to mash different tiles together to create new environments without having to create new assets all the time.

areas

I've also spent some time on the side making new palettes for the stranger. This was mostly for fun, but I think allowing this kind of customisation for the player is also genuinely valuable. At least I always enjoy changing the looks of the characters I play to my liking. There's 32 palettes already, but I'm still open for more ideas if you have any, by the way!

palettes

We're not quite sure yet how we want to present the palettes in-game. Probably allowing you to pick between a few in the settings, and having some others as items you have to discover first.

Gameplay changes

We've gone over the combat some more and tweaked it further. It's still a good shot away from what I'd like it to be, and I'll probably have to spend a full month at some point to improve it. Whatever the case, what we have now is already miles ahead of how things started out.

The player movement has also been slightly tweaked to fit better for the exploration and kinds of levels we've built, and to overall feel a bit smoother. The exact changes are very subtle, though I hope you'll still notice them, even if just subconsciously!

roll

I've also added elevators back into the game. That led to a bunch of days of frustrated collision problem fixes again, but still, elevators are an important part of the game, so I'm glad I've gotten around to adding them back in.

There's also been a bunch of improvements and fixes to the movement AI so NPCs can find their way better through the complicated mess of underground tunnels and caved in complexes.

Optimisation

Due to a number of people reporting problems with stutter, and generally the game showing slowdown even on my beefy machine, I put a bit of time into various optimisations. Chief among those is the reduction of produced garbage, which means the garbage collector will be invoked far less often, leading to fewer GC pauses stuttering up the framerate. There's still a lot left to be done for that, but I'll do that another time.

I also finally got around to implementing a spatial query data structure - this is extremely useful as it massively reduces the time needed to do collision testing and so forth. What I've gone with is a much simplified bounding volume hierarchy (BVH), mostly because the concept is very simple to understand: you put every object in the scene into a box that encompasses it. You then group two such boxes at a time into another box that encompasses both. You keep doing that until you get one last box that encompasses everything.

bvh

If you now want to know which objects are contained in a region, you start testing the biggest box, and descend into the smaller boxes as long as that region is still a part of the box. If this tree of boxes is well balanced (meaning the closest objects are grouped together), it should reduce the number of tests you need to make drastically.
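
Here is a minimal Common Lisp sketch of that query (just an illustration of the idea, not Kandria's actual code): each node stores a box, leaves store an object, and the query descends only into children whose boxes still overlap the region.

(defstruct bvh-node box object left right)

(defun box-overlaps-p (a b)
  ;; Boxes are (x-min y-min x-max y-max) lists.
  (destructuring-bind (ax0 ay0 ax1 ay1) a
    (destructuring-bind (bx0 by0 bx1 by1) b
      (and (<= ax0 bx1) (<= bx0 ax1)
           (<= ay0 by1) (<= by0 ay1)))))

(defun query-bvh (node region)
  "Collect all objects whose boxes overlap REGION."
  (when (and node (box-overlaps-p (bvh-node-box node) region))
    (if (bvh-node-object node)
        (list (bvh-node-object node))
        (append (query-bvh (bvh-node-left node) region)
                (query-bvh (bvh-node-right node) region)))))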

Implementing this was a surprisingly painless task that only took me about a day. Even if the BVH I have is most definitely not ideally balanced at every point in time, it's still good enough for now.

Editor

As you may or may not know, Kandria is built with a custom engine, and includes a fully featured editor of its own. This editor is shipped along with every version of the game, and you can open it up at any time by pressing the section key (below Escape).

This month I've made a number of improvements to the editor to add extra tools and fix a lot of stability issues. This was necessary to make my own life designing levels not completely miserable, but I think the editor is now also approaching a level of usability that should make it approachable by people not in the dev team, like you!

There's a bit of public documentation on the editor, so if you're interested in messing around with the existing levels, or even building your own, check it out! We're still intending on organising a level design contest as well, though for that I want to take some time to polish the editor even more, so you'll have to wait a bit longer for that. If that sounds exciting to you though, be sure to join our Discord, as we'll be organising the event through there whenever it comes to be.

UI

There's been a number of improvements to the game's user interface. Chief among them being that dialogue choices are now displayed in a less confusing manner, but there's also been some additions to the main menu to allow you to save & quit the game, check your quest log and inventory, and check the button mappings.

We've also included some more accessibility options so that you can change the UI scaling to your liking, pick between different fonts if the default is hard to read, and to disable or tweak things like the gamepad rumble strength or the camera shake intensity.

controls

Unfortunately we haven't had time to build a button remapping UI yet, though the game is already capable of doing the remapping for you. We'll definitely build such a UI in time for the full first act demo, though.

If you have other suggestions for accessibility improvements, please do let me know. Accessibility is very important to me, and I'd like to make Kandria a good example in that domain.

Composers

Last month we put out a listing for a composer for Kandria. The response to that was frankly astounding. Within two days we had gotten over a hundred applications, and within the week I had to close the listing down again as we were getting close to three hundred in total!

I knew there were going to be lots of applications, but still, I didn't expect this big of a response. Processing everything and evaluating all the applications took a fair amount of time out of the month, and it was really, really hard, too. So many of the pieces I listened to over the course of doing this were really fantastic!

We're still not quite done with the evaluation, though. We managed to whittle the list down to 10 for interviews, and from there to 3 for a third round. This third round is still going on now; the three were paid to produce a one minute track of music for a specific section of Kandria. The production process, communication, and how well the piece ultimately fits to our vision are going to help us decide who to pick.

The three finalists, Jacob Lincke, João Luís, and Mikel Dale, have all agreed to be named publicly, and to have their pieces published once they're done. The deadline for that is 18th of April, so you'll get to hear what they made in the next monthly update! After the deadline we hope to also finalise a contract with our pick until the end of the month, so that they can start with us in May, or shortly after.

I've heard some drafts from each of them already, and what they've produced is really good stuff. It has made me so excited to finally be able to not only see, but also properly hear Kandria!

Events

Gamedev isn't all about just developing though, as you also have to worry about organisation, management, planning, marketing, and funding. The last is another thing that ate some days' worth of time this month. We were chosen by ProHelvetia to participate in the Global Games Pitch and Pocket Gamer Connects Digital. We're of course very grateful for these opportunities, and it's fantastic to be able to present Kandria at some events despite Corona!

Still, pitching is a very stressful affair for me, so preparing for GGP and actually executing it took a good bite out of me. On the flipside, we now have some good quality pitching material that we can much more easily adapt and re-use in the future as well. I haven't heard back from anyone about the pitch I did, so I don't have any feedback on what was good or bad about it, which is a shame. I didn't really expect to get any feedback from it though, so I can't say I'm upset about it either.

In any case, PG Connects Digital is happening in a little less than two weeks from now, so I'll have to make sure to be ready for that whenever it comes about.

Tim's recount

We've reached the vertical slice deadline - the quests are done now and feeling pretty good I think. The dialogue and structures have been refined with feedback from Nick; there's also been a fair amount of self-testing, and a couple of weeks' testing from our Discord, which has all helped tighten things up. I feel like there's a good balance between plot, character development, player expression, and non-linearity, while also teasing aspects of the wider setting and story. I'm still not totally sure how much playtime the quests constitute right now; I think it largely depends on how fast a player is at the gameplay, and how much they want to engage with the dialogue - but they do take them to the four corners of the current map, and there's some replayability in there too. It feels like a good chunk of content and a major part of the first act. I'm looking forward to seeing how people get on with them, and to learn from their feedback to tweak things further.

I've learnt lots of new scripting tricks in the dialogue engine to bring this together, which will be useful going forwards, and should make generating this amount of content much quicker in the future. Nick and I also have some ideas to improve the current quests, which we should be able to do alongside the next milestone's work.

This month I also helped Nick prepare for the Global Games Pitch event; it was great to watch the stream, and see how other developers pitched their projects. Hopefully this leads to some new opportunities for Kandria too!

Fred's recount

Added a lot of little things this month. Happy with the new content we got, though I wish I had been able to finish polishing the animations and attack moves on the Stranger for the vertical slice. I had kinda left those anims behind for a while, but I feel it's pretty helpful to gauge the combat feel better.

Otherwise, I am really stoked to get started with the game jam coming up. I love those, last one I did was for my birthday in 2019 and it was the best birthday present ever. :D

Going forward

As Fred mentioned, the next two weeks we'll be working on a new, secret project! But don't worry, it won't stay secret for very long, and we won't be putting Kandria off for long either. It's going to be a short two-week jam-type project, which we'll release at the end of the month, so you'll know what it is and get to play it by the next monthly update! If you're really curious though, you should sign up to our mailing list where we'll talk about the project next week already!

If you want to try out the new demo release, you'll get a download link when you subscribe, as well. I hope you enjoy it!

Joe MarshallCan continuation passing style code perform well?

· 75 days ago

Continuation passing style is a powerful technique that allows you to abstract over control flow in your program. Here is a simple example: We want to look things up in a table, but sometimes the key we use is not associated with any value. In that case, we have to do something different, but the lookup code doesn't know what the caller wants to do, and the caller doesn't know how the lookup code works. Typically, we would arrange for the lookup code to return a special “key not found” value:

(let ((answer (lookup key table)))
   (if (eq answer 'key-not-found)
       ... handle missing key ...    
       ... compute something with answer...))

There are two minor problems with this approach. First, the “key not found” value has to be within the type returned by lookup. Consider a table that can only contain integers. Unfortunately, we cannot declare answer to be an integer because it might be the “key not found” value. Alternatively, we might decide to reserve a special integer to indicate “key not found”. The answer can then be declared an integer, but there is now a magic integer that cannot be stored in the table. Either way, answer is a supertype of what can be stored in the table, and we have to project it back down by testing it against “key not found”.

The second problem is one of redundancy. Presumably, somewhere in the code for lookup there is a conditional for the case that the key hasn't been found. We take a branch and return the “key not found” value. But now the caller tests the return value against “key not found” and it, too, takes a branch. We only take the true branch in the caller if the true branch was taken in the callee and we only take the false branch in the caller if the false branch was taken in the callee. In essence, we are branching on the exact same condition twice. We've reified the control flow, injected the reified value into the space of possible return values, passed it through the function call boundary, then projected and reflected the value back into control flow at the call site.

If we write this in continuation passing style, the call looks like this

(lookup key table
   (lambda (answer)
     …compute something with answer)
   (lambda ()
     …handle missing key…))
lookup will invoke the first lambda expression on the answer if it is found, but it will invoke the second lambda expression if the answer is not found. We no longer have a special “key not found” value, so answer can be exactly the type of what is stored in the table and we don't have to reserve a magic value. There is also no redundant conditional test in the caller.

This is pretty cool, but there is a cost. The first is that it takes practice to read continuation passing style code. I suppose it takes practice to read any code, but some languages make it extra cumbersome to pass around the lambda expressions. (Some seem actively hostile to the idea.) It's just more obscure to be passing around continuations when direct style will do.

The second cost is one of performance and efficiency. The lambda expressions that you pass in to a continuation passing style program will have to be closed in the caller's environment, and this likely means storage allocation. When the callee invokes one of the continuations, it has to perform a function call. Finally, the lexically scoped variables in the continuation will have to be fetched from the closure's environment. Direct style performs better because it avoids all the lexical closure machinery and can keep variables in the local stack frame. For these reasons, you might have reservations about writing code in continuation passing style if it needs to perform.

Continuation passing style looks complicated, but you don't need a Sufficiently Smart compiler to generate efficient code from it. Here is lookup coded up to illustrate:

(defun lookup (key table if-found if-not-found)
   (labels ((scan-entries (entries)
              (cond ((null entries) (funcall if-not-found))
                    ((eq (caar entries) key) (funcall if-found (cdar entries)))
                    (t (scan-entries (cdr entries))))))
     (scan-entries table)))
and a sample use might be
(defun probe (thing)
  (lookup thing *special-table*
    (lambda (value) (format t "~s maps to ~s." thing value))
    (lambda () (format t "~s is not special." thing))))

Normally, probe would have to allocate two closures to pass in to lookup, and the code in each closure would have to fetch the lexical value of key from the closure. But without changing either lookup or probe we can (declaim (inline lookup)). Obviously, inlining the call will eliminate the overhead of a function call, but watch what happens to the closures:

(defun probe (thing)
  ((lambda (key table if-found if-not-found)
     (labels ((scan-entries (entries)
                (cond ((null entries) (funcall if-not-found))
                      ((eq (caar entries) key) (funcall if-found (cdar entries)))
                      (t (scan-entries (cdr entries))))))
        (scan-entries table)))
    thing *special-table*
    (lambda (value) (format t "~s maps to ~s." thing value))
    (lambda () (format t "~s has no mapping." thing))))
A Decent Compiler will easily notice that key is just an alias for thing and that table is just an alias for *special-table*, so we get:
(defun probe (thing)
  ((lambda (if-found if-not-found)
     (labels ((scan-entries (entries)
                (cond ((null entries) (funcall if-not-found))
                      ((eq (caar entries) thing) (funcall if-found (cdar entries)))
                      (t (scan-entries (cdr entries))))))
        (scan-entries *special-table*)))
    (lambda (value) (format t "~s maps to ~s." thing value))
    (lambda () (format t "~s has no mapping." thing))))
and the expressions for if-found and if-not-found are side-effect free, so they can be inlined (and we expect the compiler to correctly avoid unexpected variable capture):
(defun probe (thing)
  ((lambda ()
     (labels ((scan-entries (entries)
                (cond ((null entries)
                       (funcall (lambda () (format t "~s has no mapping." thing))))
                      ((eq (caar entries) thing)
                       (funcall (lambda (value) (format t "~s maps to ~s." thing value))
                                (cdar entries)))
                      (t (scan-entries (cdr entries))))))
        (scan-entries *special-table*)))))
and the immediate calls to literal lambdas can be removed:
(defun probe (thing)
   (labels ((scan-entries (entries)
              (cond ((null entries) (format t "~s has no mapping." thing))
                    ((eq (caar entries) thing)
                     (format t "~s maps to ~s." thing (cdar value))))
                    (t (scan-entries (cdr entries))))))
      (scan-entries *special-table*)))

Our Decent Compiler has removed all the lexical closure machinery and turned the calls to the continuations into direct code. This code has all the features we desire: there is no special “key not found” value to screw up our types, there is no redundant branch: the (null entries) test directly branches into the appropriate handling code, we do not allocate closures, and the variables that would have been closed over are now directly apparent in the frame.

It's a bit vacuous to observe that an inlined function performs better. Of course it does. At the very least you avoid a procedure call. But if you inline a continuation passing style function, any Decent Compiler will go to town and optimize away the continuation overhead. It's an unexpected bonus.

On occasion I find that continuation passing style is just the abstraction for certain code that is also performance critical. I don't give it a second thought. Continuation passing style can result in high-performance code if you simply inline the critical calls.

Joe MarshallEarly LISP Part II (Apply redux)

· 82 days ago

By April of 1959, issues with using subst to implement β-reduction became apparent. In the April 1959 Quarterly Progress Report of the Research Laboratory of Electronics, McCarthy gives an updated definition of the universal S-function apply:

    apply[f;args]=eval[cons[f;appq[args]];NIL]
where
    appq[m]=[null[m]→NIL;T→cons[list[QUOTE;car[m]];appq[cdr[m]]]]
and
       eval[e;a]=[
atom[e]→eval[assoc[e;a];a];
atom[car[e]]→[
car[e]=QUOTE→cadr[e];
car[e]=ATOM→atom[eval[cadr[e];a]];
car[e]=EQ→[eval[cadr[e];a]=eval[caddr[e];a]];
car[e]=COND→evcon[cdr[e];a];
car[e]=CAR→car[eval[cadr[e];a]];
car[e]=CDR→cdr[eval[cadr[e];a]];
car[e]=CONS→cons[eval[cadr[e];a];eval[caddr[e];a]];
T→eval[cons[assoc[car[e];a];evlis[cdr[e];a]];a]];
caar[e]=LABEL→eval[cons[caddar[e];cdr[e]];cons[list[cadar[e];car[e]];a]];
caar[e]=LAMBDA→eval[caddar[e];append[pair[cadar[e];cdr[e]];a]]

and
    evcon[c;a]=[eval[caar[c];a]→eval[cadar[c];a];T→evcon[cdr[c];a]]
and
    evlis[m;a]= [null[m]→NIL;T→cons[list[QUOTE;eval[car[m];a]];
evlis[cdr[m];a]]

I find this a lot easier to understand if we transcribe it into modern Common LISP:

;;; Hey Emacs, this is -*- Lisp -*-

(in-package "CL-USER")

;; Avoid smashing the standard definitions.
(shadow "APPLY")
(shadow "ASSOC")
(shadow "EVAL")

(defun apply (f args)
  (eval (cons f (appq args)) nil))

(defun appq (m)
  (cond ((null m) nil)
        (t (cons (list 'QUOTE (car m)) (appq (cdr m))))))

(defun eval (e a)
  (cond ((atom e) (eval (assoc e a) a))
        ((atom (car e))
         (cond ((eq (car e) 'QUOTE) (cadr e))
               ((eq (car e) 'ATOM)  (atom (eval (cadr e) a)))
               ((eq (car e) 'EQ)    (eq (eval (cadr e) a) (eval (caddr e) a)))
               ((eq (car e) 'COND)  (evcon (cdr e) a))
               ((eq (car e) 'CAR)   (car (eval (cadr e) a)))
               ((eq (car e) 'CDR)   (cdr (eval (cadr e) a)))
               ((eq (car e) 'CONS)  (cons (eval (cadr e) a) (eval (caddr e) a)))
               (t (eval (cons (assoc (car e) a) (evlis (cdr e) a)) a))))
        ((eq (caar e) 'LABEL) (eval (cons (caddar e) (cdr e))
                                    (cons (list (cadar e) (car e)) a)))
        ((eq (caar e) 'LAMBDA) (eval (caddar e)
                                     (append (pair (cadar e) (cdr e)) a)))))

(defun evcon (c a)
  (cond ((eval (caar c) a) (eval (cadar c) a))
        (t (evcon (cdr c) a))))

(defun evlis (m a)
  (cond ((null m) nil)
        (t (cons (list 'QUOTE (eval (car m) a)) (evlis (cdr m) a)))))

;;; Modern helpers
(defun assoc (k l)
  (cadr (cl:assoc k l)))

(defun pair (ls rs)
  (map 'list #'list ls rs))

(defun testit ()
  (apply '(label ff (lambda (x) (cond ((atom x) x) ((quote t) (ff (car x))))))
         (list '((a . b) . c))))

There are a few things to notice about this. First, there is no code that inspects the value cell or function cell of a symbol. All symbols are evaluated by looking up the value in the association list a, so this evaluator uses one namespace. Second, the recursive calls to eval when evaluating combinations (the last clause of the inner cond and the LABEL and LAMBDA clauses) are in tail position, so this evaluator could be coded up tail-recursively. (It is impossible to say without inspecting the IBM 704 assembly code.)

What is most curious about this evaluator is the first clause in the outer cond in eval. This is where variable lookup happens. As you can see, we look up the variable by calling assoc, but once we obtain the value, we call eval on it. This LISP isn't storing values in the environment, but rather expressions that evaluate to values. If we look at the LAMBDA clause of the cond, the one that handles combinations that begin with lambda expressions, we can see that it does not evaluate the arguments to the lambda but instead associates the bound variables with the arguments' expressions. This therefore has call-by-name semantics rather than the modern call-by-value semantics.
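
As a quick sanity check of the transcription (my own REPL note, not from the memo), the FF example finds the leftmost atom, and it does so even though the environment binds x to the expression (QUOTE ((A . B) . C)) rather than to the value itself:

(testit)  ; => A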

By April 1960 we see these changes:

(defun eval (e a)
  (cond ((atom e) (assoc e a))
        ((atom (car e))
         (cond ((eq (car e) 'QUOTE) (cadr e))
               ((eq (car e) 'ATOM)  (atom (eval (cadr e) a)))
               ((eq (car e) 'EQ)    (eq (eval (cadr e) a) (eval (caddr e) a)))
               ((eq (car e) 'COND)  (evcon (cdr e) a))
               ((eq (car e) 'CAR)   (car (eval (cadr e) a)))
               ((eq (car e) 'CDR)   (cdr (eval (cadr e) a)))
               ((eq (car e) 'CONS)  (cons (eval (cadr e) a) (eval (caddr e) a)))
               (t (eval (cons (assoc (car e) a) (evlis (cdr e) a)) a))))
        ((eq (caar e) 'LABEL) (eval (cons (caddar e) (cdr e))
                                    (cons (list (cadar e) (car e)) a)))
        ((eq (caar e) 'LAMBDA) (eval (caddar e)
                                     (append (pair (cadar e) (evlis (cdr e) a)) a)))))
Note how evaluating an atom now simply looks up the value of the atom in the association list and evaluation of a combination of a lambda involves evaluating the arguments eagerly. This is a call-by-value interpreter.

Max-Gerd RetzlaffuLisp on M5Stack (ESP32):<br />Stand-alone uLisp computer (with code!)

· 83 days ago

Last Thursday, I started to use the m5stack faces keyboard I mentioned before and wrote a keyboard interpreter and REPL, so that this makes another little handheld, self-contained uLisp computer. Batteries are included, so this makes it stand-alone and take-along. :)

I have made this as a present to my nephew who just turned eight last Saturday. Let's see how this can be used to actually teach a bit of Lisp. The first programming language needs to be Lisp, of course!

Programming “Hello World!” on the M5Stack with Faces keyboard; click for a larger version (252 kB).

Read the whole article.

Joe MarshallEarly LISP

· 87 days ago

In AI Memo 8 of the MIT Research Laboratory of Electronics (March 4, 1959), John McCarthy gives a definition of the universal S-function apply:

     apply is defined by
     apply[f;args]=eval[combine[f;args]]
     eval is defined by
eval[e]=[
first[e]=NULL→[null[eval[first[rest[e]]]]→T;1→F]
first[e]=ATOM→[atom[eval[first[rest[e]]]]→T;1→F]
first[e]=EQ→[eval[first[rest[e]]]=eval[first[rest[rest[e]]]]→T;
     1→F]
first[e]=QUOTE→first[rest[e]];
first[e]=FIRST→first[eval[first[rest[e]]]];
first[e]=REST→rest[eval[first[rest[e]]]];
first[e]=COMBINE→combine[eval[first[rest[e]]];eval[first[rest[rest
     [e]]]]];
first[e]=COND→evcon[rest[e]]
first[first[e]]=LAMBDA→evlam[first[rest[first[e]]];first[rest[rest
    [first[e]]]];rest[e]];
first[first[e]]=LABELS→eval[combine[subst[first[e];first[rest
    [first[e]]];first[rest[rest[first[e]]]]];rest[e]]]]
where: evcon[c]=[eval[first[first[c]]]=1→eval[first[rest[first[c]]]];
           1→evcon[rest[c]]]
and
evlam[vars;exp;args]=[null[vars]→eval[exp];1→evlam[
     rest[vars];subst[first[vars];first[args];exp];rest[args]]]
McCarthy asserts that “if f is an S-expression for an S-function φ and args is a list of the form (arg1, …, argn) where arg1, …, argn are arbitrary S-expressions then apply[f,args] and φ(arg1, …, argn) are defined for the same values of arg1, …, argn and are equal when defined.”

I find it hard to puzzle through these equations, so I've transcribed them into S-expressions to get the following:

;;; Hey Emacs, this is -*- Lisp -*-

(in-package "CL-USER")

;; Don't clobber the system definitions.
(shadow "APPLY")
(shadow "EVAL")

(defun apply (f args)
  (eval (combine f args)))

(defun eval (e)
  (cond ((eq (first e) 'NULL)    (cond ((null (eval (first (rest e)))) t)
                                       (1 nil)))
        ((eq (first e) 'ATOM)    (cond ((atom (eval (first (rest e)))) t)
                                       (1 nil)))
        ((eq (first e) 'EQ)      (cond ((eq (eval (first (rest e)))
                                            (eval (first (rest (rest e))))) t)
                                       (1 nil)))
        ((eq (first e) 'QUOTE)   (first (rest e)))
        ((eq (first e) 'FIRST)   (first (eval (first (rest e)))))
        ((eq (first e) 'REST)    (rest  (eval (first (rest e)))))
        ((eq (first e) 'COMBINE) (combine (eval (first (rest e)))
                                          (eval (first (rest (rest e))))))
        ((eq (first e) 'COND)    (evcon (rest e)))
        ((eq (first (first e)) 'LAMBDA) (evlam (first (rest (first e)))
                                               (first (rest (rest (first e))))
                                               (rest e)))
        ((eq (first (first e)) 'LABELS) (eval (combine (subst (first e)
                                                              (first (rest (first e)))
                                                              (first (rest (rest (first e)))))
                                                       (rest e))))))

(defun evcon (c)
  (cond ((eval (first (first c))) (eval (first (rest (first c)))))
        (1 (evcon (rest c)))))

(defun evlam (vars exp args)
  (cond ((null vars) (eval exp))
        (1 (evlam (rest vars)
                  (subst (first args)
                         (first vars)
                         exp)
                  (rest args)))))
We just have to add a definition for combine as a synonym for cons and this should run:
* (eval '(eq (first (combine 'a 'b)) (first (combine 'a 'c))))
T

As Steve “Slug” Russell observed, eval is an interpreter for Lisp. This version of eval uses an interesting evaluation strategy. If you look carefully, you'll see that there is no conditional clause for handling variables. Instead, when a lambda expression appears as the operator in a combination, the body of the lambda expression is walked and the bound variables are substituted with the expressions (not the values!) that represent the arguments. This is directly inspired by β-reduction from lambda calculus.

This is buggy, as McCarthy soon discovered. In the errata published one week later, McCarthy points out that the substitution process doesn't respect quoting, as we can see here:

* (eval '((lambda (name) (combine 'your (combine 'name (combine 'is (combine name nil))))) 'john))
(YOUR 'JOHN IS JOHN)
With a little thought, we can easily generate other name collisions. Notice, for example, that the substitution will happily substitute within the bound variable list of nested lambdas.

Substitution like this is inefficient. The body of the lambda is walked once for each bound variable to be substituted, then finally walked again to evaluate it. Later versions of Lisp will save the bound variables in an environment structure and substitute them incrementally during a single evaluation pass of the lambda body.

Jonathan GodboutCl-Protobufs Enumerations

· 88 days ago

In the last few posts we discussed family life, and before that we created a toy application using cl-protobufs and the ACE lisp libraries. Today we will dive deeper into the cl-protobufs library by looking at Enumerations. We will first discuss enumerations in Protocol Buffers, then we will discuss Lisp Protocol Buffer enums.

Enums:

Most modern languages have a concept of enums. In C++ enumerations are compiled down to integers and you are free to use integer equality. For example

#include <iostream>

enum Fish {
  salmon,
  trout,
};

int main() {
  std::cout << std::boolalpha << (salmon == 0) << std::endl;
}

Will print true. This is in many ways wonderful: enums compile down to integers and there's no cost to using them. It is baked into the language! 

Protocol Buffers are available for many languages, not just C++. You can find the documentation for Protocol Buffer enums here: 

https://developers.google.com/protocol-buffers/docs/proto#enum

Each language has its own way to support enumeration types. Languages like C++ and Java, which have built-in support for enumeration types, can treat protobuf enums like any other enum. The above enum could be written (with some caveats) in Protocol Buffer as:

enum Fish {
  salmon = 0;
  trout = 1;
}

You should be careful though: protoc will give a compile warning that the enum value 0 should be a default value, so

enum Fish {
  default = 0;
  salmon = 1;
  trout = 2;
}

is preferred.

Let’s get into some detail for the two variants of Protocol Buffers in use.

// Example message to use below.
enum Fish {
  default = 0;
  salmon = 1;
  trout = 2;
}

message Meal {
  {optional} Fish fish = 1;
}

The `optional` label will only be written for proto 2.

Proto 2:

In proto 2 we can always tell whether `Meal.fish` was set. If the field has the `required` label then it must be set, by definition. (But the `required` label is considered harmful; don’t use it.) If the field has an `optional` label then we can check if it has been set or not, so again a default value isn’t necessary.

If the enum is updated to:

// Example message to use below.
enum Fish {
  default = 0;
  salmon = 1;
  trout = 2;
  tilapia = 3;
}

and someone sends fish = tilapia to a system where tilapia isn't a valid entry, the library is allowed to do whatever it wants! In Java it sets it to the first entry, so Meal.fish would be default! 

Proto 3

In proto3 if the value of Meal.fish is not set, calling its accessor will return the default value which is always the zero value. There is no way to check whether the field was explicitly set. A default value (i.e., a name that maps to the value zero) must always be given, else the user will get a compile error.

If the Fish enum was updated to contain tilapia as above, and someone sent a proto message containing tilapia to a system with an older program that had the message not containing tilapia, the deserializer should save the enum value. That is, the underlying data structure should know it received a "3" for the fish field in Meal. How the accessors return this value is language dependent. Re-serializing the message should preserve this "unrecognized" value.

A common example is: A gateway system wants to do something with the message and then forward it to another system. Even though the middle system has an older schema for the Fish message it needs to forward all the data to the downstream system.

Cl-protobufs:

Now that we understand the basics of enumerations, it is important to understand how cl-protobufs records enumeration values.

Lisp as a language does not have a concept of enumerations; what it does understand is keywords. Taking fish as above and running protoc we will get (see readme https://github.com/qitab/cl-protobufs/#enums):

(deftype fish () '(member :default :salmon :trout))

(defun fish-to-int (keyword) 
  (ecase keyword
    (:default 0)
    (:salmon 1)
    (:trout 2)))

(defun int-to-fish (int)
  (ecase int
    (0 :default)
    (1 :salmon)
    (2 :trout)))

Looking at the tilapia example, the enum deserializer preserves the unknown field in both proto2 and proto3. Calling an accessor on a field containing an unknown value will return :%undefined-n. So for tilapia we will see :%undefined-3.

Warning: To get this to work properly we have to remove type checks from protocol buffer enumerations. You can set the field value in a lisp protocol buffer message to any keyword you want, but you will get a serialization error when you try to serialize. This was a long discussion internally, but that design discussion could turn into a blog post of its own.

Conclusion:

The enumeration fields in cl-protobufs are fully proto2 and proto3 compliant. To do this we had to remove type checking. As a consumer, it is suggested that you always type check and handle undefined enumeration values in your usage of protocol buffer enums. We give you a deftype to easily check.
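
For instance, with the generated definitions from the readme example loaded, the suggested check is just a typep call (a small sketch of the idea, not code from the library itself):

(typep :salmon 'fish)        ; => T
(typep :%undefined-3 'fish)  ; => NIL, so handle unknown values explicitly
(fish-to-int :salmon)        ; => 1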

I hope you have enjoyed this deep dive into cl-protobuf enums. We strive to remove as many gotchas as possible.


Thanks to Ron and Carl for the continual copy edits and improvements!

Max-Gerd RetzlaffuLisp on M5Stack (ESP32):<br /> temperature sensors via one wire

· 88 days ago

I added support for Dallas temperature sensors to ulisp-esp-m5stack. Activate #define enable_dallastemp in order to use it. It is based on the Arduino libraries OneWire.h and DallasTemperature.h.

I used pin 16 to connect my sensors but you can change ONE_WIRE_BUS to use a different pin. As the OneWire library uses simple bit banging and no hardware support, e.g. UART, any general-purpose input/output (GPIO) pin will work.

The interface consists of four uLisp functions: INIT-TEMP, GET-TEMP, SET-TEMP-RESOLUTION, and GET-TEMP-DEVICES-COUNT. Here is their documentation:

Function init-temp
 
Syntax:
   init-temp
     => result-list

Arguments and values:
   result-list---a list of device addresses; each address being a list of 8 integer values specifying a device address.

Description:
   Detects all supported temperature sensors connected via one wire bus to the pin ONE_WIRE_BUS and returns the list of the sensors' device addresses.

   All sensors are configured to use the default resolution DEFAULT_TEMPERATURE_PRECISION via a broadcast. Note that a sensor might choose a different resolution if the desired resolution is not supported. See also: set-temp-resolution.

Function get-temp
 
Syntax:
   get-temp address
     => temperature

Arguments and values:
   address---a list of 8 integer values specifying a device address.

   temperature---an integer value; the measured temperature in Celsius.

Description:
   Requests the sensor specified by address to measure and compute a new temperature reading, retrieves the value from the sensor device and returns the temperature in Celsius.

Function set-temp-resolution
 
Syntax:
   set-temp-resolution address [resolution]
     => actual-resolution

Arguments and values:
   address---a list of 8 integer values specifying a device address.

   resolution---an integer value.

   actual-resolution---an integer value.

Description:
   Tries to configure the sensor specified by address to use the given resolution and returns the actual resolution that the device is set to after the attempt.

   Note that a sensor might choose a different resolution if the desired resolution is not supported. In this case, the returned actual-resolution differs from the argument resolution.

If the argument resolution is missing, instead the default given by DEFAULT_TEMPERATURE_PRECISION is used.

Function get-temp-devices-count
 
Syntax:
   get-temp-devices-count
     => count

Arguments and values:
   count---an integer value; the number of detected supported temperature sensors.

Description:
   Returns the number of temperature sensors supported by this interface that were detected by the last call to INIT-TEMP. Note that this might not be the correct current count if sensors were removed or added since the last call to INIT-TEMP.
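
A short uLisp session sketch using these functions (the addresses and readings are made up for illustration):

(defvar addrs (init-temp))      ; e.g. ((40 97 ... 22) (40 14 ... 180))
(get-temp-devices-count)        ; => 2
(dolist (a addrs)
  (set-temp-resolution a 9)     ; fastest conversion, coarsest resolution
  (print (get-temp a)))         ; prints each temperature in Celsius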

Findings from reading DallasTemperature.h and DallasTemperature.cpp

These are the notes I wrote down when reading the source code of the Dallas temperature sensor library and my conclusion how to best use it which lead to my implementation for uLisp.

1. The process of counting the number of devices is efficiently done in parallel by a binary tree algorithm.

2. The result of the search is the number of devices with their addresses.

3. The DallasTemperature library keeps only a count of devices and a count of supported temperature sensors (ds18Count) in memory, not an indexed list of addresses. This is done in DallasTemperature::begin() by doing a search but only the counts are kept, no addresses are stored. Sadly, it also does not return anything.

4. getAddress() does a search again to determine the address for a device index. So it is faster to just get a sensor reading by using the address, not the index; it saves one search.

5. Sadly, there is no command to get the list of addresses in one go. So at least once you have to call getAddress() to actually get the addresses of all devices.

6. requestTemperature() can be applied to a single device only or to all devices in parallel. It is as fast to request a temperature from all devices as from only one device.

7. Actually getting the temperature reading works only one device at a time. getTemp*(deviceAddress) is faster than getTemp*ByIndex(index) as the latter has to do a search first (see 4.).

8. There are these temperature resolutions: 9, 10, 11, and 12 bits. The conversion (=reading) times are:
9 bit – 94 ms
10 bit – 188 ms
11 bit – 375 ms
12 bit – 750 ms

9. setResolution() can either set all devices in parallel or only set one device at a time (only by address, there is no setResolutionByIndex()).

10. The temperatures are internally stored in 1/128 degree steps. These are the "raw" readings returned by DallasTemperature::getTemp() as int16_t.

DallasTemperature::getTempC returns "(float) raw * 0.0078125f" and
DallasTemperature::getTempF returns "((float) raw * 0.0140625f) + 32.0f".

In case of an error,
getTempC() will return DEVICE_DISCONNECTED_C which is "(float)-127",
getTempF() will return DEVICE_DISCONNECTED_F which is "(float)-196.6", and
getTemp() will return DEVICE_DISCONNECTED_RAW which is "(int16_t)-7040", respectively.

11. If you don't need the actual temperature but just want to monitor that the temperature is in a defined range, it is not necessary to read the temperatures at all (which has to happen one sensor at a time). Instead, you can use the alarm signaling.

For that, you can set a high and a low alarm temperature per device and then you can do an alarm search to determine in parallel if there are sensors with alarms. The range can be half open, that is you can also only define high and low alarm temperatures.

DallasTemperature::alarmSearch() returns one device address with an alarm at a time. It is also possible to install an alarm handler and then call DallasTemperature::processAlarms() which will do repeated alarm searches and call the handler for each device with an alarm.

12. isConnected(deviceAddress) can be used to determine if a certain sensor is still available. It will return quickly when it is not, but transfers a full sensor reading in case it is still available. The library currently does not support a case where a parallel search is used to determine if known devices are still present.

13. The search is deterministic, it seems, so as long as you don't change sensors, the indices stay the same. If you add or remove a sensor, existing sensors might get new indices. So it seems actually not safe to use the *ByIndex() functions.

14. getDeviceCount() gives you the number of all devices, getDS18Count() the number of all supported DS18 sensors. But no function gives you the list of indices or addresses of all supported DS18 sensors.

validFamily(deviceAddress) lets you check by address if a device is supported. Supported are DS18S20MODEL (also DS1820), DS18B20MODEL (also MAX31820), DS1822MODEL, DS1825MODEL, and DS28EA00MODEL.

getAddress() only checks that the address is valid (using validAddress(deviceAddress)), not that the device is actually supported. As getAddress() already calls validAddress() for you, there should be no need to ever call validAddress() from user code. If you simply request a temperature from every index up to getDeviceCount() - 1, you will also send requests to unsupported devices.

In conclusion, this seems to be the best approach to set up all devices (a minimal sketch follows right after this list):

  1. Call getDS18Count() once to determine whether there are any supported temperature sensors at all.
  2. Iterate over all devices, that is, from index "0" up to "getDeviceCount() - 1".
  3. Call getAddress() for each index (this will also check validAddress())
  4. and then call validFamily() for the address.
  5. If validFamily() returns true, store the address for later temperature readings.
  6. This is also a good time to call setResolution(), as by default each device is left at its own current resolution, which can differ if you have sensors of different kinds. Either call setResolution(newResolution) once to set all devices in parallel, or call setResolution(address, newResolution) in the loop right after each call to validFamily() to set individual resolutions.
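
As a concrete illustration, here is a minimal Arduino sketch of that setup procedure. It is untested and only meant as a summary of the steps above; the pin number, the 12-bit resolution, and the MAX_SENSORS bound are placeholders I introduce for the example:

    #include <OneWire.h>
    #include <DallasTemperature.h>

    #define ONE_WIRE_PIN 4                 // placeholder: data pin of the 1-Wire bus
    #define MAX_SENSORS  8                 // arbitrary upper bound for this sketch

    OneWire oneWire(ONE_WIRE_PIN);
    DallasTemperature sensors(&oneWire);

    DeviceAddress addresses[MAX_SENSORS];  // stored addresses of the supported DS18 sensors
    uint8_t ds18Found = 0;

    void setup() {
      Serial.begin(115200);
      sensors.begin();                                   // initial search; only the counts are kept

      if (sensors.getDS18Count() == 0) return;           // step 1: any supported sensors at all?

      uint8_t all = sensors.getDeviceCount();            // step 2: iterate over all devices
      for (uint8_t i = 0; i < all && ds18Found < MAX_SENSORS; i++) {
        DeviceAddress addr;
        if (!sensors.getAddress(addr, i)) continue;      // step 3: search the address for index i
        if (!sensors.validFamily(addr)) continue;        // step 4: skip unsupported 1-Wire devices
        memcpy(addresses[ds18Found], addr, sizeof(DeviceAddress)); // step 5: store it for later
        sensors.setResolution(addresses[ds18Found], 12); // step 6: per-device resolution
        ds18Found++;
      }
    }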

To read sensor values (a matching sketch follows after the note below):

  1. Call requestTemperatures() to request all sensors to do new conversions in parallel,
  2. then iterate over the stored list of DS18 addresses and
  3. call getTempC(address), getTempF(address), or getTemp(address) for each address and
  4. check for error return values (see Finding 10).

Note: getTempC() and getTempF() call getTemp() internally, and that one also uses isConnected(). So there should be no need to call isConnected() from user code if you check for the error return values of these functions (see Finding 10).
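
And here is a matching read loop, continuing the setup sketch above (again only an untested sketch; the one-second delay is arbitrary):

    void loop() {
      sensors.requestTemperatures();                  // step 1: start conversions on all sensors in parallel
      // by default this call blocks until the conversion time for the configured
      // resolution has elapsed (750 ms at 12 bits)

      for (uint8_t i = 0; i < ds18Found; i++) {       // step 2: iterate over the stored DS18 addresses
        float tempC = sensors.getTempC(addresses[i]); // step 3: read one sensor at a time, by address
        if (tempC == DEVICE_DISCONNECTED_C) {         // step 4: check the error return value
          Serial.println("sensor disconnected");
          continue;
        }
        Serial.println(tempC);
      }

      delay(1000);                                    // arbitrary pause between readings
    }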



This is the last thing I promised to release in my previous post of February 15, 2021. Documentation takes time! But I programmed new features last Thursday, so stay tuned.


See also "Curl/Wget for uLisp", time via NTP, lispstring without escaping and more space, flash support, muting of the speaker and backlight control and uLisp on M5Stack (ESP32).

Read the whole article.

Didier VernaClon 1.0b25 is out

· 92 days ago

Today, I'm releasing the next beta version of Clon, my command-line options management library.

The previous official release occurred 6 years ago. Since then, a number of changes had been quietly sleeping in the trunk but never made their way into Quicklisp. More recently, I have also applied a number of changes that are worth mentioning here.

First of all, a large part of the infrastructure has been updated, following the evolution of the 8 supported compilers, and that of ASDF and CFFI as well. This should normally be transparent to the user though, provided that one uses reasonably recent compiler / ASDF versions ("reasonably" intentionally left undefined). Other than that...

  • The constraints on termio support auto-detection had become slightly too restrictive, so they have been relaxed.
  • The exit function has been deprecated in favor of uiop:quit.
  • The support for running in scripts rather than in dumped executables has been improved, notably by offering the possibility to provide an alternate program name when argv0 is not satisfactory.
  • Clon is now compatible with executables dumped via ASDF's program-op operation, or dumped natively. The demonstration programs in the distribution have been updated to illustrate both dumping methods (ASDF, and Clon's dump function).
  • The documentation on application delivery has been largely rewritten, and has become a full chapter rather than a thin appendix.

There are also a few bug fixes in this release.

  • Several custom readtable problems have been fixed for CCL, CLISP, and ECL (thanks to Thomas Fitzsimmons). Note that Clon depends on named-readtables now.
  • Clon now compiles its termio support correctly with a C++ based ECL (thanks to Pritam Baral).
  • One problem in the conversion protocol for path options has been corrected (thanks to Olivier Certner).

All entry points are on Clon's web page.

Enjoy!

Marco AntoniottiHEΛPing ASDF

· 96 days ago

 ... more fixing and, ça va sans dire, more creeping features.

I got prodded to integrate HEΛP with other tools; mostly, of course, ASDF.  A simple approach was to define a document-op for a system.  After jumping through a few hoops, the solution was to use the :properties of a system to pile up arguments for the main HEΛP document function (well, only one for the time being).  Bottom line, suppose you have:

  (asdf:defsystem "foosys"
     :pathname #P"D:/Common Lisp/Systems/foosys/")

now you just issue

  (asdf:operate 'hlp:document-op "foosys")

and the documentation for the system "foosys" will appear in the "docs/html/" subfolder.

If you want to pass a title to the document function, you set up your system as:

  (asdf:defsystem "foosys"
     :properties (:documentation-title "The FOO Omnipotent Tool")
     :pathname #P"D:/Common Lisp/Systems/foosys/")

and the parameter will be used (instead of the bare system name).

It works! 😁

Some more fixing and more extensions may be needed (hlp:document takes a lot of parameters) but it is already usable.

All the necessary bits and pieces are in the HEΛP repository, and they should get into Quicklisp in the next release.

Enjoy


(cheers)

Marco AntoniottiNeed more HEΛP?

· 100 days ago

Just a quick note for people following these... parentheses.

I have carved out some time to do some more Lisp hacking and this led me to look at the very nice usocket library (I want to do some network programming).  The usocket library documentation page has a bit of an "old" and "handcrafted" look and feel to it, so I tried to produce a version of the documentation with help from my HEΛP library.

Well, it turns out that usocket has some more than legitimate code within it that my HEΛP library was not handling; even worse, it unearthed a bug in the Lambda List parsing routines.

As an example, usocket uses the following idiom to set some of the documentation strings.

    (setf (documentation 'fun 'function) "Ain't this fun?") 

This is perfectly fine, but it needed an extra twist to get HEΛP to do what is, IMHO, the right thing: in this case it meant ensuring that the lambda list of the function was properly rendered in the final documentation.

Apart from that, a few not so nice buglets were exposed in the code parsing lambda lists.  The result is that now the logic of that piece of code is simpler and somewhat cleaner.

So, if you want to get HEΛP to document your Common Lisp code, give it a spin.



(cheers)

Nicolas HafnerGoing Underground - March Kandria Update

· 110 days ago

header
I can't believe it's been two months already since the year started. Time moves extremely quickly these days. Anyway, we have some solid progress to show, and some important announcements to make this month, so strap in!

Overall progress

Last month was a big update with a lot of new content, particularly all the custom buildings Fred and I had put together to build the surface camp. This month involved a lot more of that, but for the first underground region. This region is still very close to the surface, so it'll be composed out of a mix of ruins of modern corporate architecture, and natural caves.

office dorms

As before, figuring out a fitting style was very challenging, even disregarding the fact that it has to be in ruins as well. Still, I think what we put together, especially combined with Kandria's lighting system, creates a great amount of atmosphere and evokes that feeling of eerie wonder that I've always wanted to hit.

Mushrooms are a big part of the ecosystem in Kandria, being the primary food source for the underground dwellers, so I couldn't resist adding giant mushrooms to the caves.

cave

On the coding side there's been a bunch of bugfixing and general improvement going on. The movement AI can now traverse the deep underground regions seemingly without problem. Game startup speed is massively improved thanks to some caching of the movement data, and NPCs can now climb ropes and use teleporters when navigating.

We've also spent some time working on the combat again, adding some extra bits that, while seemingly small, change the feel quite a lot. Attacks now have a cooldown that forces you to consider the timing, and inputs are no longer buffered for the entire duration of an animation, which eliminates the feeling of lag that was prevalent before. Fred also tuned some of the player's attack animations some more and while I couldn't tell you what exactly changed, when I first tried it out I immediately noticed that it felt a lot better!

All of this just further reaffirms my belief that making a good combat system involves a ton of extremely subtle changes that you wouldn't notice at all unless you did a frame-by-frame analysis. It all lies in the intuition the system builds up within you, which makes it hard to tune. I'm sure we'll need to do more rounds of tuning like that as we progress.

wolf

Then I've also reinstated the wolf enemy that I first worked on close to a year ago. The AI is a lot simpler now, but it also actually works a lot better. It's still a bit weird though, especially when interacting with slopes and obstacles, but it does make for a nice change of pace compared to the zombie enemy. We'll have to see how things turn out when they're placed in the context of actual exploration and quests, though.

Another feature I resurrected and finally got to work right is the ability to save and load regions from zip files. This makes it easy to exchange custom levels. The editor used for the game is shipped with the game and always available at the press of a button, so we're hoping to use that in combination with the zip capability to organise a small level design contest within the community. We'll probably launch that in April, once the new demo hits. If you like building or playing levels, keep an eye out!

The biggest chunk of work this month went into doing level design. I've been putting that off for ages and ages, because it's one of those things that I'm not very familiar with myself, so it seems very daunting. I don't really know where to start or how to effectively break down all the constraints and requirements and actually start building a level around them, let alone a level that's also fun to traverse and interesting to look at! It was so daunting to me in fact, that I couldn't work on anything at all for one day because I was just stuck in a sort of stupor.

Whatever the case though, the only way to break this mould and get experience, and thus some confidence and ability in making levels, is to actually do it. I've put together the first part of the first region now, though it's all still very rough and needs a ton more detail and playtesting.

region1 upper

The part above is the surface settlement, with the city ruins to the right. Below the camp lies the central hub of the first region, which links up to a variety of different rooms - an office, a market, an apartment complex, and several natural caves that formed during the calamity. The sections below the city ruins don't belong to the slice, but will be part of the full "first act demo" that we plan to release some months after the slice.

Even with all the tooling I built that allows you to easily drag out geometry and automatically tile a large chunk of it, it still takes a ton of time to place all the little details like chairs, doors, railings, machines, plants, broken rubble, and background elements, and to vary the elements and break up repetition. It also takes a lot of extra effort to ensure that the tiles work correctly in this pseudo-isometric view we have going on for the rooms. Still, the rooms do look a lot better like this than they did with my initial head-on view, so I think we'll stick with it even if it costs us more time to build.

Speechless

I've finally gotten around to documenting the dialogue system I've developed for Kandria. I've given it the name Speechless, since it's based on Markless. It's designed to be engine-independent, so if you have your own game in Lisp and need a capable dialogue logic system, you should be able to make use of it. If you do, please tell me about it, I'd be all ears!

I'd also be interested to hear from other narrative designers on what they think of it. I can't say I'm familiar with the tools that are used in other engines - a lot of it seems to be in-house, and frequently based around flow-charts from what I can tell. Having things completely in text does remove some of the visual clarity, but I think it also makes it a lot quicker to put things together.

Now, I know that the Lisp scene is very small, and the games scene within it even smaller, so I don't think Speechless will gain much traction, but even if it itself won't, I hope that seeing something like it will at least inspire some to build similar systems, as I think this text based workflow can be extremely effective.

Hiring a musician

I'm hiring again! Now that Kandria's world is properly coming together it's time to look at a composer to start with a soundtrack to really bring the world to life. Music is extremely important to me, so I wanted to wait until we had enough of the visuals together to properly inform the mood and atmosphere. I'm still having a lot of trouble imagining what the world should actually sound like, and there's a broad range of music I like, so I hope that I can find someone that can not only produce a quality score, but also help figure out the exact sound aesthetics to go for.

If you are a musician, or know musicians that are looking for work, have a look at the listing!

Tim's recount

Quests, quests, and quests! I've got the core gameplay scripting done for most of the vertical slice quests now. The last couple are still using placeholder dialogue, but for the others I've done several drafts in the voice of the characters, sprinkling in player choices here and there and yeah - it feels like it's coming together. Hopefully it's familiarising the player with the characters, their unique voices, and their motivations, whilst keeping the gameplay and plot momentum moving forwards. I've now written in anger for all of the main hub characters, and feel like I'm getting into their headspace.

Some of the scripting functionality has been more complex than I anticipated - but with help from Nick creating new convenience functions, and showing me the best way to structure things, I feel like I've gotten most of the design patterns down now that I'm going to need going forwards.

The rest of the month will involve rounding out these quests, iterating on feedback, and transposing the triggers (which are still using debug locations) into the main region layout.

Fred's recount

Quite a lot of character anims in! It'll be exciting to see the camp characters come to life in the game and not just in my animation software. 🙂

This month feels like an important milestone in making Kandria's world more immersive. There's still more work to do on the buildings and on getting convincing yet fun-to-explore ruins, but overall it feels like a lot of stuff is coming together.

The future

This is the last month we had in our plan for the vertical slice. Unfortunately it turns out that we had way underestimated the amount of time it would take to create the required tilesets and design the levels. Still, it seems much more important to avoid crunch, and to deliver a quality slice, so we're looking to extend the deadline.

We'll still try to release an early slice for our testers by the end of this month, but then we'll take two additional weeks for bugfixing and polish, so the updated public demo should be out mid-April. We'll be sure to make an announcement when it comes out, or if there are other problems that'll further delay it. Please bear with us!

The remainder of April though we're planning to completely switch gears away from Kandria and catch a mental breather. We'll instead work on a new, very small jam project, that we hope to build and release within the two weeks. We're not entirely certain yet what exactly we'll do, but it should be a lot of fun to do a jam again one of these days.

As always, thank you very much for reading and in general for your interest in Kandria! Starting from scratch like we are (in multiple ways at that!) isn't easy, and it's been really nice to see people respond and support the project.

If you'd like to support us, it would help a lot to wishlist Kandria on Steam, and to join the Discord! There's also a lot of additional information on the development and our current thoughts in the weekly mailing list updates and my Twitter.

Quicklisp newsFebruary 2021 Quicklisp dist update now available

· 115 days ago

 New projects

  • audio-tag — tool to deal with audio tags. read and write — BSD-2-Clause License
  • canonicalized-initargs — Provides a :canonicalize slot option accepting an initarg canonicalization function. — Unlicense
  • cl-debug-print — A reader-macro for debug print — MIT
  • cl-json-schema — Describe cl-json-schema here — Specify license here
  • cl-ses4 — AWS SES email sender using Signature Version 4 of Amazon's API — Public Domain
  • cl-telebot — Common Lisp Telegram Bot API — MIT
  • consfigurator — Lisp declarative configuration management system — GPL-3+
  • cricket — A library for generating and manipulating coherent noise — MIT
  • cubic-bezier — A library for constructing and evaluating cubic Bézier curve paths. — MIT
  • defconfig — A configuration system for user exposed variables — GPLv3
  • enhanced-defclass — Provides a truly extensible version of DEFCLASS that can accurately control the expansion according to the metaclass and automatically detect the suitable metaclass by analyzing the DEFCLASS form. — Unlicense
  • freesound — A client for Freesound.org. — MIT
  • mnas-graph — Defines basic functions for creating a graph data structure and displaying it via graphviz. — GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 or later
  • mnas-hash-table — Defines some functions for working with hash tables. — GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 or later
  • nyaml — Native YAML Parser — MIT
  • pvars — easily define persistent variables — MIT
  • random-uuid — Create and parse RFC-4122 UUID version 4 identifiers. — MIT
  • sanity-clause — Sanity clause is a data contract and validation library. — LGPLv3
  • seedable-rng — A seedable random number generator. — MIT
  • slot-extra-options — Extra options for slots using MOP. — LGPL-3.0-or-later
  • tailrec — Guaranteed tail call optimization. — LLGPL
  • tfeb-lisp-tools — TFEB.ORG Lisp tools — MIT

Updated projects: algae, april, async-process, black-tie, cepl, cl+ssl, cl-ana, cl-async, cl-change-case, cl-coveralls, cl-data-structures, cl-dbi, cl-fxml, cl-grip, cl-gserver, cl-html-readme, cl-ipfs-api2, cl-kraken, cl-liballegro-nuklear, cl-libusb, cl-patterns, cl-pdf, cl-prevalence, cl-reexport, cl-shlex, cl-smtp, cl-string-generator, cl-threadpool, cl-typesetting, cl-unicode, cl-utils, cl-webkit, cl-yesql, clog, closer-mop, clsql, cmd, common-lisp-jupyter, core, cover, croatoan, datum-comments, defenum, dexador, easy-audio, eclector, fast-websocket, feeder, file-select, flare, float-features, freebsd-sysctl, functional-trees, fxml, geco, gendl, gtirb-capstone, gtirb-functions, gtwiwtg, harmony, hu.dwim.bluez, hu.dwim.common-lisp, hu.dwim.defclass-star, hu.dwim.logger, hu.dwim.quasi-quote, hu.dwim.reiterate, hu.dwim.sdl, hu.dwim.walker, hu.dwim.zlib, hunchenissr, iterate, jingoh, lass, lichat-protocol, linear-programming, lisp-chat, lmdb, magicl, maiden, mailgun, markdown.cl, mcclim, mgl-pax, mito, monomyth, named-read-macros, nodgui, num-utils, numcl, open-location-code, origin, orizuru-orm, osicat, periods, petalisp, plump-sexp, portal, py4cl, py4cl2, qlot, quri, read-as-string, repl-utilities, rpcq, rutils, s-sysdeps, sel, select, serapeum, shared-preferences, sly, spinneret, studio-client, stumpwm, ten, trivia, trivial-clipboard, trivial-features, ttt, uax-15, ucons, umlisp, uncursed, utm-ups, with-contexts, zacl, zippy.

To get this update, use (ql:update-dist "quicklisp"). Enjoy!

Eric TimmonsStatic Executables with SBCL v2

· 120 days ago

It's taken me much longer than I hoped, but I finally have a second version of my patches to build static executables tested and ready to go! This set of patches vastly improves upon the first by reducing the amount of compilation needed at the cost of sacrificing a little purity. Additionally I have created a system that automates the process of building a static executable, along with other release related tasks.

At a Glance

  • The new patch set can be found on the static-executable-v2 branch of my SBCL fork or at https://www.timmons.dev/static/patches/sbcl/$VERSION/static-executable-support-v2.patch with a detached signature available at https://www.timmons.dev/static/patches/sbcl/$VERSION/static-executable-support-v2.patch.asc signed with GPG key 0x9ACF6934.
  • You'll definitely want to build SBCL with the :sb-prelink-linkage-table feature (newly added by the patch). You'll probably also want the :sb-linkable-runtime feature (already exists, but the patch also enables it on arm/arm64).
  • The new patch lets you build a static executable with less compilation of Lisp code.
  • The asdf-release-ops system automates the process of building a static executable by tying it into ASDF.

What's New?

If you need a refresher about what static executables are or what use cases they're good for, see my previous post on this topic.

With my previous patch, the only way you could create a static executable was to perform the following steps:

  1. Determine the foreign symbols needed by your code. The easiest way to do this is to compile all your Lisp code and then dump the information from the image.
  2. From that list of foreign symbols, create a C file that fills an array with references to those symbols.
  3. Recompile the SBCL core and runtime with this new file, additionally disabling libdl support and linking against your foreign libraries.
  4. (Re)compile all your Lisp code with the new runtime (if you made an image in step 1 it will not be compatible with the new runtime due to feature and build ID mismatches).
  5. Dump the executable.

In the most general case, this involved compiling your entire Lisp image twice. After some #lisp discussions, I realized there was a better way of doing this. While the previous process still works, the new recommended process now looks like:

  1. Build the image you would like to make into a static executable and save it.
  2. Dump the foreign symbol info from this image and write the C file that SBCL can use to prelink itself.
  3. Compile that C file and link it into an existing sbcl.o file to make a new runtime. sbcl.o is the SBCL runtime in object form, created when building with the :sb-linkable-runtime feature.
  4. Load the image from step 1 into your new runtime. It will be compatible because the build ID and feature set are the same!
  5. Dump your now static executable.

This new process can significantly reduce the amount of time needed to make an executable. Plus it lets you take more advantage of image based development. It's fairly trivial to build an image exactly like you want, dump it, and then pair it with a custom static runtime to make a static executable.

There were two primary challenges that needed to be overcome for this version of the patch set.

First, the SBCL core had to be made robust to every libdl function unconditionally returning an error. Since we want the feature set to remain constant we can't recompile the runtime with #-os-provides-dlopen. Instead, we take advantage of the fact that Musl libc lets you link static executables against libdl, but all those functions are noops. This is the "purity" sacrifice I alluded to above.

Second, since we are reusing an image, the prelink info table (the generated C file) needed to order the symbols exactly as the image expects them to be ordered. The tricky bit here is that some libraries (like cl-plus-ssl) add symbols to the linkage table that will always be undefined. cl-plus-ssl does this in order to support a wide range of openssl versions. The previous patch set unconditionally filtered out undefined symbols, which horribly broke things in the new approach.

More Documentation

As before, after applying the patch you'll find a README.static-executable file in the root of the repo. You'll also find a Dockerfile and an example of how to use it in the README.static-executable.

You can also check out the tests and documentation in the asdf-release-ops system.

Known Issues

  • The :sb-prelink-linkage-table feature does not work on 32-bit ARM + Musl libc >= 1.2. Musl switched to 64-bit time under the hood while still maintaining compatibility with everything compiled for 32-bit time.

The issue is how they maintained backwards compatibility. Every time-related symbol still exists and implements everything on top of the 32-bit time interface. However, if you include the standard header file where the symbol is defined, or you look up the symbol via dlsym, you actually get a pointer to the 64-bit time version of the symbol. We can't use dlsym (it doesn't work in static executables). And the generated C file doesn't include any headers.

This could be fixed if someone is motivated enough to create/find a complete, easy to use map between libc symbols and the headers that define them and integrate it into the prelink info generator.

  • The :sb-prelink-linkage-table feature works on Windows but causes test failures. The root issue is that mingw64 has implemented its own libm. Their trig functions are fast, but use inaccurate instructions (like FSIN) under the hood. When prelinking, these inaccurate implementations are used instead of the more accurate ones (from msvcrt.dll?) found when using dlsym to look up the symbol.

Next Steps

  1. I would love to get feedback on this approach and any ideas on how to improve it! Please drop me a line (etimmons on Freenode or daewok on Github/Gitlab) if you have suggestions.

  2. I've already incorporated static executables into CLPM and will be distributing them starting with v0.4.0! I'm going to continue rolling out static executables in my other projects.

  3. Pieces of the patch set are now solid enough that I think they can be submitted for upstream consideration. I'll start sending them after the current 2.1.2 freeze.

Max-Gerd Retzlaff"Curl/Wget for uLisp"<br />Or: An HTTP(s) get/post/put function for uLisp

· 121 days ago

Oh, I forgot to continue posting… I just published a quite comprehensive HTTP function supporting put, post, get, auth, HTTP and HTTPS, and more for uLisp at ulisp-esp-m5stack.

Activate #define enable_http and #define enable_http_keywords to get it; the keywords used by the http function are to be enabled separately as they might be used more generally and not just by this function.

Note that you need to connect to the internet first. Usually with WIFI-CONNECT.

Here is the full documentation with example calls:

Function http
 
Syntax:
   http url &key verbose
                 (https t)
                 auth
                 (user default_username)
                 (password default_password)
                 accept
                 content-type
                 (method :get)
                 data
     => result-string

Arguments and values:
   verbose---t, or nil (the default); affects also debug output of the argument decoding itself and should be put in first position in a call for full effect.

   https---t (the default), nil, or a certificate as string; uses the default certificate in C string root_ca if true; the url needs to match: "https://..." for true and "http://..." for false.

   auth---t, or nil (the default).

   user---a string, or nil (the default); uses default value in C string default_username if nil; only used if :auth t.

   password---a string, or nil (the default); uses default value in C string default_password if nil; only used if :auth t.

   accept---nil (the default), or a string.

   content-type---nil (the default), or a string.

   method---:get (the default), :put, or :post.

   data---nil (the default), or a string; only necessary in case of :method :put or :method :post; error for :method :get.

Examples:
   ;; HTTP GET:
   (http "http://192.168.179.41:2342" :https nil)
 
   ;; HTTP PUT:
   (http "http://192.168.179.41:2342"
         :https nil
         :accept "application/n-quads"
         :content-type "application/n-quads"
         :auth t :user "foo" :password "bar"
         :method :put
         :data (format nil "<http://example.com/button> <http://example.com/pressed> \"~a\" .~%"
                           (get-time)))

It can be tested with a minimal HTTP server simulation using bash and netcat:

while true; do echo -e "HTTP/1.1 200 OK\n\n $(date)" | nc -l -p 2342 -q 1; done
(To test with HTTPS in a similar fashion you can use openssl s_server, as explained, for example, in the article Create a simple HTTPS server with OPENSSL S_SERVER by Joris Visscher on July 22, 2015, but then you need to use certificates.)


See also Again more features for uLisp on M5Stack (ESP32): time via NTP, lispstring without escaping and more space; More features for uLisp on M5Stack (ESP32): flash support, muting of the speaker and backlight control; and uLisp on M5Stack (ESP32).

Read the whole article.


For older items, see the Planet Lisp Archives.


Last updated: 2021-06-06 15:49