Planet Lisp

Joe Marshall: η-conversion and tail recursion

· 2 days ago

Consider this lambda expression: (lambda (x) (sqrt x)). This function simply calls sqrt on its argument and returns whatever sqrt returns. There is no argument you could provide to this function that would cause it to return a different result than you would get from calling sqrt directly. We say that this function and the sqrt function are extensionally equal. We can replace this lambda expression with a literal reference to the sqrt function without changing the value produced by our code.

You can go the other way, too. If you find a literal reference to a function, you can replace it with a lambda expression that calls the function. This is η-conversion. η-reduction is removing an unnecessary lambda wrapper; η-expansion is introducing one.

η-conversion comes with caveats. First, it only works on functions. If I have a string "foo", and I attempt to η-expand this into (lambda (x) ("foo" x)), I get nonsense. Second, a reduction strategy that incorporates η-reduction can be weaker than one that does not. Consider this expression: (lambda (x) ((compute-f) x)). We can η-reduce this to (compute-f), but this makes a subtle difference. When wrapped with the lambda, (compute-f) is evaluated just before it is applied to x. In fact, we won't call (compute-f) unless we invoke the result of the lambda expression somewhere. However, once η-reduced, (compute-f) is evaluated at the point the original lambda was evaluated, which can be quite a bit earlier.
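
To see the evaluation-time difference concretely, here is a small Common Lisp sketch (compute-f is a hypothetical function that prints when it runs, so we can observe when it is called):

(defun compute-f ()
  (format t "~&computing f...~%")
  #'sqrt)

;; η-expanded: compute-f runs anew on every call to the wrapper.
(defparameter *wrapped* (lambda (x) (funcall (compute-f) x)))

;; η-reduced: compute-f runs once, right here, at definition time.
(defparameter *unwrapped* (compute-f))

Each (funcall *wrapped* 4.0) prints the message again; *unwrapped* printed it exactly once, when it was defined.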


When a function foo calls another function bar as a subproblem, an implicit continuation is passed to bar. bar invokes this continuation on the return value that it computes. We can characterize this continuation like this:

kbar = (lambda (return-value)
         (kfoo (finish-foo return-value)))
this just says that when bar returns, we'll finish running the code in foo and further continue by invoking the continuation supplied to foo.

If foo makes a tail call to bar, then foo is just returning what bar computes. There is no computation for foo to finish, so the continuation is just

kbar = (lambda (return-value)
         (kfoo return-value))
But this η-reduces to just kfoo, so we don't have to allocate a new trivial continuation when foo tail calls bar; we can just pass along the continuation that was passed to foo.
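
Here is a sketch of that optimization in explicitly continuation-passing Common Lisp (my own illustration, not from the post):

;; Each function takes its continuation K as an argument.
(defun cps-bar (x k)
  (funcall k (* x x)))

(defun cps-foo (x k)
  ;; Tail call: no (lambda (v) (funcall k v)) wrapper is allocated;
  ;; the η-reduced continuation K is passed along unchanged.
  (cps-bar (+ x 1) k))

(cps-foo 3 #'print)   ; prints 16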

Tail recursion is equivalent to η-reducing the implicit continuations to functions where possible. A Scheme aficionado might prefer to say that we avoid η-expanding where unnecessary.

This is a mathematical curiosity, but does it have practical significance? If you're programming in continuation passing style, you should be careful to η-reduce (or avoid η-expanding) your code.

Years ago I was writing an interpreter for the REBOL language. I was getting frustrated trying to make it tail recursive. I kept finding places in the interpreter where the REBOL source code was making a tail call, but the interpreter itself wasn't, so the stack would grow without bound. I decided to investigate the problem by rewriting the interpreter in continuation passing style and seeing why I couldn't η-convert the tail calls. Once in CPS, I could see that eval took two continuations and I could achieve tail recursion by η-reducing one of them.

Wimpie Nortje: Process sub-command style command line options with Adopt.

· 3 days ago

How to process sub-command style command line arguments is a question that arises more and more. Many of the basic option handling libraries cannot handle this at all, or they make it very difficult to do so.

One of the newer libraries in the option processing field is Adopt by Steve Losh. It was not designed to handle sub-commands, but it is in fact quite capable of doing so without having to jump through too many hoops.

In a Reddit thread someone asked if Adopt can handle sub-command processing and Steve answered with the following example:

(eval-when (:compile-toplevel :load-toplevel :execute)
  (ql:quickload '(:adopt) :silent t))

(defpackage :subex
  (:use :cl)
  (:export :toplevel :*ui*))

(in-package :subex)

;;;; Global Options and UI ----------------------------------------------------
(defparameter *o/help*
  (adopt:make-option 'help :long "help" :help "display help and exit" :reduce (constantly t)))

(defparameter *o/version*
  (adopt:make-option 'version :long "version" :help "display version and exit" :reduce (constantly t)))

(defparameter *ui/main*
  (adopt:make-interface
    :name "subex"
    :usage "[subcommand] [options]"
    :help "subcommand example program"
    :summary "an example program that uses subcommands"
    :contents (list *o/help* *o/version*)))

(defparameter *ui* *ui/main*)


;;;; Subcommand Foo -----------------------------------------------------------
(defparameter *o/foo/a*
  (adopt:make-option 'a :result-key 'mode :short #\a :help "run foo in mode A" :reduce (constantly :a)))

(defparameter *o/foo/b*
  (adopt:make-option 'b :result-key 'mode :short #\b :help "run foo in mode B" :reduce (constantly :b)))

(defparameter *ui/foo*
  (adopt:make-interface
    :name "subex foo"
    :usage "foo [-a|-b]"
    :summary "foo some things"
    :help "foo some things"
    :contents (list *o/foo/a* *o/foo/b*)))

(defun run/foo (mode)
  (format t "Running foo in ~A mode.~%" mode))


;;;; Subcommand Bar -----------------------------------------------------------
(defparameter *o/bar/meow*
  (adopt:make-option 'meow :long "meow" :help "meow loudly after each step" :reduce (constantly t)))

(defparameter *ui/bar*
  (adopt:make-interface
    :name "subex bar"
    :usage "bar [--meow] FILE..."
    :summary "bar some files"
    :help "bar some files"
    :contents (list *o/bar/meow*)))

(defun run/bar (paths meow?)
  (dolist (p paths)
    (format t "Bar-ing ~A.~%" p)
    (when meow?
      (write-line "meow."))))


;;;; Toplevel -----------------------------------------------------------------
(defun toplevel/foo (args)
  (multiple-value-bind (arguments options) (adopt:parse-options-or-exit *ui/foo* args)
    (unless (null arguments)
      (error "Foo does not take arguments, got ~S" arguments))
    (run/foo (gethash 'mode options))))

(defun toplevel/bar (args)
  (multiple-value-bind (arguments options) (adopt:parse-options-or-exit *ui/bar* args)
    (when (null arguments)
      (error "Bar requires arguments, got none."))
    (run/bar arguments (gethash 'meow options))))

(defun lookup-subcommand (string)
  (cond
    ((null string) (values nil *ui/main*))
    ((string= string "foo") (values #'toplevel/foo *ui/foo*))
    ((string= string "bar") (values #'toplevel/bar *ui/bar*))
    (t (error "Unknown subcommand ~S" string))))

(defun toplevel ()
  (sb-ext:disable-debugger)
  (multiple-value-bind (arguments global-options)
      (handler-bind ((adopt:unrecognized-option 'adopt:treat-as-argument))
        (adopt:parse-options *ui/main*))
    (when (gethash 'version global-options)
      (write-line "1.0.0")
      (adopt:exit))
    (multiple-value-bind (subtoplevel ui) (lookup-subcommand (first arguments))
      (when (or (null subtoplevel)
                (gethash 'help global-options))
        (adopt:print-help-and-exit ui))
      (funcall subtoplevel (rest arguments)))))
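
One way to turn this into an actual binary (my addition, not part of Steve's example) is to dump an executable whose entry point is the exported toplevel function, here on SBCL:

;; After loading the code above:
(sb-ext:save-lisp-and-die "subex"
                          :toplevel #'subex:toplevel
                          :executable t)

The resulting subex binary then dispatches on its first argument, so invocations like subex foo -a or subex bar --meow FILE behave like a git-style command line tool.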

Quicklisp news: April 2021 Quicklisp dist update now available

· 3 days ago

 New projects

  • cluffer — Library providing a protocol for text-editor buffers. — FreeBSD, see file LICENSE.text
  • data-frame — Data frames for Common Lisp — MS-PL
  • dfio — Common Lisp library for reading data from text files (eg CSV). — MS-PL
  • herodotus — Wrapper around Yason JSON parser/encoder with convenience methods for CLOS — BSD
  • lisp-stat — A statistical computing environment for Common Lisp — MS-PL
  • numerical-utilities — Utilities for numerical programming — MS-PL
  • nyxt — Extensible web-browser in Common Lisp — BSD 3-Clause
  • shop3 — SHOP3 Git repository — Mozilla Public License
  • special-functions — Special functions in Common Lisp — MS-PL
  • tfeb-lisp-hax — TFEB.ORG Lisp hax — MIT

Updated projects: 3bmd, 3d-matrices, alexandria, algae, anypool, april, array-operations, async-process, audio-tag, bdef, bp, canonicalized-initargs, cffi, chanl, ci-utils, cl+ssl, cl-autowrap, cl-change-case, cl-clon, cl-collider, cl-colors2, cl-coveralls, cl-cxx, cl-data-structures, cl-digraph, cl-environments, cl-gamepad, cl-gserver, cl-heredoc, cl-json-pointer, cl-kraken, cl-las, cl-liballegro, cl-liballegro-nuklear, cl-markless, cl-marshal, cl-maxminddb, cl-mixed, cl-mock, cl-patterns, cl-rabbit, cl-ses4, cl-shlex, cl-ssh-keys, cl-str, cl-strings, cl-typesetting, cl-utils, cl-webkit, clack, clods-export, clog, closer-mop, common-lisp-jupyter, computable-reals, concrete-syntax-tree, consfigurator, cricket, croatoan, cubic-bezier, cytoscape-clj, damn-fast-priority-queue, dataloader, defconfig, definitions-systems, dexador, doplus, eazy-documentation, eclector, enhanced-defclass, femlisp, file-attributes, flac-metadata, freesound, functional-trees, gadgets, gendl, glacier, golden-utils, gtirb-capstone, gtirb-functions, gtwiwtg, harmony, helambdap, hunchenissr, hyperluminal-mem, imago, ironclad, json-mop, kekule-clj, lake, lass, lichat-protocol, linear-programming, linux-packaging, lisp-binary, listopia, magicl, maiden, markup, mcclim, mgl-pax, mito, multiposter, mutility, neural-classifier, nodgui, north, omer-count, origin, parachute, parsley, patchwork, perceptual-hashes, petalisp, plump, pngload, postmodern, qlot, quicklisp-stats, quilc, quri, random-uuid, sc-extensions, seedable-rng, sel, select, serapeum, shadow, shasht, slot-extra-options, sly, staple, static-dispatch, stripe, stumpwm, taglib, tfeb-lisp-tools, tfm, trivia, trivial-features, trivial-timer, ttt, umbra, umlisp, utilities.print-items, validate-list, vgplot, with-user-abort, zippy.

Removed projects: its

To get this update, use (ql:update-dist "quicklisp"). Enjoy!

Wimpie Nortje: A list of Common Lisp command line argument parsers.

· 4 days ago

I was searching for a command line option parser that can handle git-style sub-commands and found a whole bunch of libraries. It appears as if libraries on this topic proliferate more than usual.

I evaluated each library only to the point where I could decide to skip it or give it a cursory test. The information I gathered is summarised below.

If you only need the usual flag and option processing, i.e. not sub-commands, then I would suggest unix-opts. It appears to be the accepted standard and is actively maintained. It is also suggested by both Awesome Common Lisp and the State of the Common Lisp Ecosystem Survey 2020.

If your needs are very complex or specific you can investigate clon, utility-arguments or ace.flag.

For basic flags and options with sub-commands, there are a few libraries that explicitly support sub-command processing but you should be able to make it work with many of the other options and a bit of additional code.

Name                    Print help  Native sub-commands  Notes
ace.flag                ?           ?                    Not in QL.
adopt                   Yes         No                   Can generate man files.
apply-argv              No          No                   Does not handle -xyz as three flags.
cl-just-getopt-parser   No          No                   Easy to use.
cl-cli                  Yes         Yes
cl-argparse             Yes         Yes
cli-parser              No          No                   Does not handle free arguments; not in QL.
clon                    ?           Yes                  Very complex, most feature rich.
command-line-arguments  ?           ?                    Not well documented.
getopt                  No          No                   Does not handle -xyz as three flags; not well documented.
parse-args              No          No                   Not in QL.
utility-arguments       ?           ?                    Complex to set up.
unix-options            Yes         No                   Easy to use.
unix-opts               Yes         No                   The standard recommendation.

Nicolas Hafner: Slicing Up the Game - April Kandria Update

· 10 days ago

What a hell of a month! We got a lot done, all of it culminating in the release of the new vertical slice demo! This demo is now live, and you can check it out for free! This slice includes an hour or more of content for you to explore, so we hope you enjoy it!

Visuals and Level Design

Like last month, a good chunk of this month was spent designing the remaining areas we needed for the slice. However, this is also the part that got the most shafted compared to how much time I should be investing in it. I'm going to have to dedicate a month or two at some point to just doing rough levels and figuring out what works, both for platforming challenges and for combat. So far I've never actually taken the time to do this, so I still feel very uncertain when it comes to designing stuff.

Still, I'm fairly happy with at least the visual look of things. Fred has done some excellent work with the additional tile work I've requested from him, and I'm starting to learn how to mash different tiles together to create new environments without having to create new assets all the time.

[image: areas]

I've also spent some time on the side making new palettes for the stranger. This was mostly for fun, but I think allowing this kind of customisation for the player is also genuinely valuable. At least I always enjoy changing the looks of the characters I play to my liking. There's 32 palettes already, but I'm still open for more ideas if you have any, by the way!

[image: palettes]

We're not quite sure yet how we want to present the palettes in-game. Probably allowing you to pick between a few in the settings, and having some others as items you have to discover first.

Gameplay changes

We've gone over the combat some more and tweaked it further. It's still a good shot away from what I'd like it to be, and I'll probably have to spend a full month at some point to improve it. Whatever the case, what we have now is already miles ahead of how things started out.

The player movement has also been slightly tweaked to fit better for the exploration and kinds of levels we've built, and to overall feel a bit smoother. The exact changes are very subtle, though I hope you'll still notice them, even if just subconsciously!

[image: roll]

I've also added elevators back into the game. That led to a bunch of days of frustrated collision problem fixes again, but still, elevators are an important part of the game, so I'm glad I've gotten around to adding them back in.

There's also been a bunch of improvements and fixes to the movement AI so NPCs can find their way better through the complicated mess of underground tunnels and caved in complexes.

Optimisation

Due to a number of people reporting problems with stutter, and generally the game showing slowdown even on my beefy machine, I put a bit of time into various optimisations. Chief among those is the reduction of produced garbage, which means the garbage collector will be invoked far less often, leading to fewer GC pauses stuttering up the framerate. There's still a lot left to be done for that, but I'll do that another time.

I also finally got around to implementing a spatial query data structure - this is extremely useful as it massively reduces the time needed to do collision testing and so forth. What I've gone with is a much simplified bounding volume hierarchy tree (BVH), mostly because the concept is very simple to understand: every object in the scene you put into a box that encompasses it. You then group two such boxes at a time into another box that encompasses both. You keep doing that until you get one last box that encompasses everything.

[image: bvh]

If you now want to know which objects are contained in a region, you start testing the biggest box, and descend into the smaller boxes as long as that region is still a part of the box. If this tree of boxes is well balanced (meaning the closest objects are grouped together), it should reduce the number of tests you need to make drastically.
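
Here is a rough Common Lisp sketch of that query walk (my own illustration, not Kandria's actual code; box-overlaps-p stands in for whatever box intersection test the engine provides):

(defstruct bvh-node
  box          ; bounding box of everything at or below this node
  object       ; the object itself, for leaf nodes
  left right)  ; child nodes, for inner nodes

(defun query-region (node region collect)
  ;; Call COLLECT on every object whose box overlaps REGION.
  (when (and node (box-overlaps-p (bvh-node-box node) region))
    (if (bvh-node-object node)
        (funcall collect (bvh-node-object node))
        (progn
          (query-region (bvh-node-left node) region collect)
          (query-region (bvh-node-right node) region collect)))))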

Implementing this was a surprisingly painless task that only took me about a day. Even if the BVH I have is most definitely not ideally balanced at every point in time, it's still good enough for now.

Editor

As you may or may not know, Kandria is built with a custom engine, and includes a fully featured editor of its own. This editor is shipped along with every version of the game, and you can open it up at any time by pressing the section key (below Escape).

This month I've made a number of improvements to the editor to add extra tools and fix a lot of stability issues. This was necessary to make my own life designing levels not completely miserable, but I think the editor is now also approaching a level of usability that should make it approachable by people not on the dev team, like you!

There's a bit of public documentation on the editor, so if you're interested in messing around with the existing levels, or even building your own, check it out! We're still intending on organising a level design contest as well, though for that I want to take some time to polish the editor even more, so you'll have to wait a bit longer for that. If that sounds exciting to you though, be sure to join our Discord, as we'll be organising the event through there whenever it comes to be.

UI

There's been a number of improvements to the game's user interface. Chief among them being that dialogue choices are now displayed in a less confusing manner, but there's also been some additions to the main menu to allow you to save & quit the game, check your quest log and inventory, and check the button mappings.

We've also included some more accessibility options so that you can change the UI scaling to your liking, pick between different fonts if the default is hard to read, and to disable or tweak things like the gamepad rumble strength or the camera shake intensity.

[image: controls]

Unfortunately we haven't had time to build a button remapping UI yet, though the game is already capable of doing the remapping for you. We'll definitely build such a UI in time for the full first act demo, though.

If you have other suggestions for accessibility improvements, please do let me know. Accessibility is very important to me, and I'd like to make Kandria a good example in that domain.

Composers

Last month we put out a listing for a composer for Kandria. The response to that was frankly astounding. Within two days we had gotten over a hundred applications, and within the week I had to close the listing down again as we were getting close to three hundred in total!

I knew there were going to be lots of applications, but still, I didn't expect this big of a response. Processing everything and evaluating all the applications took a fair amount of time out of the month, and it was really, really hard, too. So many of the pieces I listened to over the course of doing this were really fantastic!

We're still not quite done with the evaluation, though. We managed to whittle the list down to 10 for interviews, and from there to 3 for a third round. This third round is still going on now; the three were paid to produce a one minute track of music for a specific section of Kandria. The production process, communication, and how well the piece ultimately fits to our vision are going to help us decide who to pick.

The three finalists, Jacob Lincke, João Luís, and Mikel Dale, have all agreed to be named publicly, and to have their pieces published once they're done. The deadline for that is the 18th of April, so you'll get to hear what they made in the next monthly update! After the deadline we hope to finalise a contract with our pick by the end of the month, so that they can start with us in May, or shortly after.

I've heard some drafts from each of them already, and what they've produced is really good stuff. It has made me so excited to finally be able to not only see, but also properly hear Kandria!

Events

Gamedev isn't all about just developing though, as you also have to worry about organisation, management, planning, marketing, and funding. The last is another thing that ate some days' worth of time this month. We were chosen by ProHelvetia to participate in the Global Games Pitch and Pocket Gamer Connects Digital. We're of course very grateful for these opportunities, and it's fantastic to be able to present Kandria at some events despite Corona!

Still, pitching is a very stressful affair for me, so preparing for GGP and actually executing it took a good bite out of me. On the flipside, we now have some good quality pitching material that we can much more easily adapt and re-use in the future as well. I haven't heard back from anyone about the pitch I did, so I don't have any feedback on what was good or bad about it, which is a shame. I didn't really expect to get any feedback from it though, so I can't say I'm upset about it either.

In any case, PG Connects Digital is happening in a little less than two weeks from now, so I'll have to make sure to be ready for that whenever it comes about.

Tim's recount

We've reached the vertical slice deadline - the quests are done now and feeling pretty good I think. The dialogue and structures have been refined with feedback from Nick; there's also been a fair amount of self-testing, and a couple of weeks' testing from our Discord, which has all helped tighten things up. I feel like there's a good balance between plot, character development, player expression, and non-linearity, while also teasing aspects of the wider setting and story. I'm still not totally sure how much playtime the quests constitute right now; I think it largely depends on how fast a player is at the gameplay, and how much they want to engage with the dialogue - but they do take them to the four corners of the current map, and there's some replayability in there too. It feels like a good chunk of content and a major part of the first act. I'm looking forward to seeing how people get on with them, and to learn from their feedback to tweak things further.

I've learnt lots of new scripting tricks in the dialogue engine to bring this together, which will be useful going forwards, and should make generating this amount of content much quicker in the future. Nick and I also have some ideas to improve the current quests, which we should be able to do alongside the next milestone's work.

This month I also helped Nick prepare for the Global Games Pitch event; it was great to watch the stream, and see how other developers pitched their projects. Hopefully this leads to some new opportunities for Kandria too!

Fred's recount

Added a lot of little things this month. Happy with the new content we got, though I wish I had been able to finish polishing the animations and attack moves on the Stranger for the vertical slice. I had kinda left those anims behind for a while, but I feel it's pretty helpful to gauge the combat feel better.

Otherwise, I am really stoked to get started with the game jam coming up. I love those, last one I did was for my birthday in 2019 and it was the best birthday present ever. :D

Going forward

As Fred mentioned, the next two weeks we'll be working on a new, secret project! But don't worry, it won't stay secret for very long, and we won't be putting Kandria off for long either. It's going to be a short two-week jam-type project, which we'll release at the end of the month, so you'll know what it is and get to play it by the next monthly update! If you're really curious though, you should sign up to our mailing list where we'll talk about the project next week already!

If you want to try out the new demo release, you'll get a download link when you subscribe, as well. I hope you enjoy it!

Joe Marshall: Can continuation passing style code perform well?

· 11 days ago

Continuation passing style is a powerful technique that allows you to abstract over control flow in your program. Here is a simple example: We want to look things up in a table, but sometimes the key we use is not associated with any value. In that case, we have to do something different, but the lookup code doesn't know what the caller wants to do, and the caller doesn't know how the lookup code works. Typically, we would arrange for the lookup code to return a special “key not found” value:

(let ((answer (lookup key table)))
   (if (eq answer 'key-not-found)
       ... handle missing key ...
       ... compute something with answer ...))

There are two minor problems with this approach. First, the “key not found” value has to be within the type returned by lookup. Consider a table that can only contain integers. Unfortunately, we cannot declare answer to be an integer because it might be the “key not found” value. Alternatively, we might decide to reserve a special integer to indicate “key not found”. The answer can then be declared an integer, but there is now a magic integer that cannot be stored in the table. Either way, answer is a supertype of what can be stored in the table, and we have to project it back down by testing it against “key not found”.

The second problem is one of redundancy. Presumably, somewhere in the code for lookup there is a conditional for the case that the key hasn't been found. We take a branch and return the “key not found” value. But now the caller tests the return value against “key not found” and it, too, takes a branch. We only take the true branch in the caller if the true branch was taken in the callee and we only take the false branch in the caller if the false branch was taken in the callee. In essence, we are branching on the exact same condition twice. We've reified the control flow, injected the reified value into the space of possible return values, passed it through the function call boundary, then projected and reflected the value back into control flow at the call site.

If we write this in continuation passing style, the call looks like this

(lookup key table
   (lambda (answer)
     …compute something with answer)
   (lambda ()
     …handle missing key…))
lookup will invoke the first lambda expression on the answer if it is found, but it will invoke the second lambda expression if the answer is not found. We no longer have a special “key not found” value, so answer can be exactly the type of what is stored in the table and we don't have to reserve a magic value. There is also no redundant conditional test in the caller.

This is pretty cool, but there is a cost. The first is that it takes practice to read continuation passing style code. I suppose it takes practice to read any code, but some languages make it extra cumbersome to pass around the lambda expressions. (Some seem actively hostile to the idea.) It's just more obscure to be passing around continuations when direct style will do.

The second cost is one of performance and efficiency. The lambda expressions that you pass in to a continuation passing style program will have to be closed in the caller's environment, and this likely means storage allocation. When the callee invokes one of the continuations, it has to perform a function call. Finally, the lexically scoped variables in the continuation will have to be fetched from the closure's environment. Direct style performs better because it avoids all the lexical closure machinery and can keep variables in the local stack frame. For these reasons, you might have reservations about writing code in continuation passing style if it needs to perform.

Continuation passing style looks complicated, but you don't need a Sufficiently Smart compiler to generate efficient code from it. Here is lookup coded up to illustrate:

(defun lookup (key table if-found if-not-found)
   (labels ((scan-entries (entries)
              (cond ((null entries) (funcall if-not-found))
                    ((eq (caar entries) key) (funcall if-found (cdar entries)))
                    (t (scan-entries (cdr entries))))))
     (scan-entries table)))
and a sample use might be
(defun probe (thing)
  (lookup thing *special-table*
    (lambda (value) (format t "~s maps to ~s." thing value))
    (lambda () (format t "~s has no mapping." thing))))

Normally, probe would have to allocate two closures to pass in to lookup, and the code in each closure would have to fetch the lexical value of key from the closure. But without changing either lookup or probe we can (declaim (inline lookup)). Obviously, inlining the call will eliminate the overhead of a function call, but watch what happens to the closures:

(defun probe (thing)
  ((lambda (key table if-found if-not-found)
     (labels ((scan-entries (entries)
                (cond ((null entries) (funcall if-not-found))
                      ((eq (caar entries) key) (funcall if-found (cdar entries)))
                      (t (scan-entries (cdr entries))))))
        (scan-entries table)))
    thing *special-table*
    (lambda (value) (format t "~s maps to ~s." thing value))
    (lambda () (format t "~s has no mapping." thing))))
A Decent Compiler will easily notice that key is just an alias for thing and that table is just an alias for *special-table*, so we get:
(defun probe (thing)
  ((lambda (if-found if-not-found)
     (labels ((scan-entries (entries)
                (cond ((null entries) (funcall if-not-found))
                      ((eq (caar entries) thing) (funcall if-found (cdar entries)))
                      (t (scan-entries (cdr entries))))))
        (scan-entries *special-table*)))
    (lambda (value) (format t "~s maps to ~s." thing value))
    (lambda () (format t "~s has no mapping." thing))))
and the expressions for if-found and if-not-found are side-effect free, so they can be inlined (and we expect the compiler to correctly avoid unexpected variable capture):
(defun probe (thing)
  ((lambda ()
     (labels ((scan-entries (entries)
                (cond ((null entries)
                       (funcall (lambda () (format t "~s has no mapping." thing))))
                      ((eq (caar entries) thing)
                       (funcall (lambda (value) (format t "~s maps to ~s." thing value))
                                (cdar entries)))
                      (t (scan-entries (cdr entries))))))
        (scan-entries *special-table*)))))
and the immediate calls to literal lambdas can be removed:
(defun probe (thing)
  (labels ((scan-entries (entries)
             (cond ((null entries) (format t "~s has no mapping." thing))
                   ((eq (caar entries) thing)
                    (format t "~s maps to ~s." thing (cdar entries)))
                   (t (scan-entries (cdr entries))))))
    (scan-entries *special-table*)))

Our Decent Compiler has removed all the lexical closure machinery and turned the calls to the continuations into direct code. This code has all the features we desire: there is no special “key not found” value to screw up our types; there is no redundant branch, since the (null entries) test branches directly into the appropriate handling code; we do not allocate closures; and the variables that would have been closed over are now directly apparent in the frame.

It's a bit vacuous to observe that an inlined function performs better. Of course it does. At the very least you avoid a procedure call. But if you inline a continuation passing style function, any Decent Compiler will go to town and optimize away the continuation overhead. It's an unexpected bonus.

On occasion I find that continuation passing style is just the abstraction for certain code that is also performance critical. I don't give it a second thought. Continuation passing style can result in high-performance code if you simply inline the critical calls.

Joe Marshall: Early LISP Part II (Apply redux)

· 18 days ago

By April of 1959, issues with using subst to implement β-reduction became apparent. In the April 1959 Quarterly Progress Report of the Research Laboratory of Electronics, McCarthy gives an updated definition of the universal S-function apply:

    apply[f;args]=eval[cons[f;appq[args]];NIL]
where
    appq[m]=[null[m]→NIL;T→cons[list[QUOTE;car[m]];appq[cdr[m]]]]
and
       eval[e;a]=[
atom[e]→eval[assoc[e;a];a];
atom[car[e]]→[
car[e]=QUOTE→cadr[e];
car[e]=ATOM→atom[eval[cadr[e];a]];
car[e]=EQ→[eval[cadr[e];a]=eval[caddr[e];a]];
car[e]=COND→evcon[cdr[e];a];
car[e]=CAR→car[eval[cadr[e];a]];
car[e]=CDR→cdr[eval[cadr[e];a]];
car[e]=CONS→cons[eval[cadr[e];a];eval[caddr[e];a]];
T→eval[cons[assoc[car[e];a];evlis[cdr[e];a]];a]];
caar[e]=LABEL→eval[cons[caddar[e];cdr[e]];cons[list[cadar[e];car[e]];a]];
caar[e]=LAMBDA→eval[caddar[e];append[pair[cadar[e];cdr[e]];a]]

and
    evcon[c;a]=[eval[caar[c];a]→eval[cadar[c];a];T→evcon[cdr[c];a]]
and
    evlis[m;a]= [null[m]→NIL;T→cons[list[QUOTE;eval[car[m];a]];
evlis[cdr[m];a]]

I find this a lot easier to understand if we transcribe it into modern Common LISP:

;;; Hey Emacs, this is -*- Lisp -*-

(in-package "CL-USER")

;; Avoid smashing the standard definitions.
(shadow "APPLY")
(shadow "ASSOC")
(shadow "EVAL")

(defun apply (f args)
  (eval (cons f (appq args)) nil))

(defun appq (m)
  (cond ((null m) nil)
        (t (cons (list 'QUOTE (car m)) (appq (cdr m))))))

(defun eval (e a)
  (cond ((atom e) (eval (assoc e a) a))
        ((atom (car e))
         (cond ((eq (car e) 'QUOTE) (cadr e))
               ((eq (car e) 'ATOM)  (atom (eval (cadr e) a)))
               ((eq (car e) 'EQ)    (eq (eval (cadr e) a) (eval (caddr e) a)))
               ((eq (car e) 'COND)  (evcon (cdr e) a))
               ((eq (car e) 'CAR)   (car (eval (cadr e) a)))
               ((eq (car e) 'CDR)   (cdr (eval (cadr e) a)))
               ((eq (car e) 'CONS)  (cons (eval (cadr e) a) (eval (caddr e) a)))
               (t (eval (cons (assoc (car e) a) (evlis (cdr e) a)) a))))
        ((eq (caar e) 'LABEL) (eval (cons (caddar e) (cdr e))
                                    (cons (list (cadar e) (car e)) a)))
        ((eq (caar e) 'LAMBDA) (eval (caddar e)
                                     (append (pair (cadar e) (cdr e)) a)))))

(defun evcon (c a)
  (cond ((eval (caar c) a) (eval (cadar c) a))
        (t (evcon (cdr c) a))))

(defun evlis (m a)
  (cond ((null m) nil)
        (t (cons (list 'QUOTE (eval (car m) a)) (evlis (cdr m) a)))))

;;; Modern helpers
(defun assoc (k l)
  (cadr (cl:assoc k l)))

(defun pair (ls rs)
  (map 'list #'list ls rs))

(defun testit ()
  (apply '(label ff (lambda (x) (cond ((atom x) x) ((quote t) (ff (car x))))))
         (list '((a . b) . c))))
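
Running the transcribed test case, ff walks down the cars of its argument until it reaches an atom:

* (testit)
A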

There are a few things to notice about this. First, there is no code that inspects the value cell or function cell of a symbol. All symbols are evaluated by looking up the value in the association list a, so this evaluator uses one namespace. Second, the recursive calls to eval when evaluating combinations (the last clause of the inner cond and the LABEL and LAMBDA clauses) are in tail position, so this evaluator could be coded up tail-recursively. (Whether it actually was is impossible to say without inspecting the IBM 704 assembly code.)

What is most curious about this evaluator is the first clause in the outer cond in eval. This is where variable lookup happens. As you can see, we look up the variable by calling assoc, but once we obtain the value, we call eval on it. This LISP isn't storing values in the environment, but rather expressions that evaluate to values. If we look at the LAMBDA clause of the cond, the one that handles combinations that begin with lambda expressions, we can see that it does not evaluate the arguments to the lambda but instead associates the bound variables with the arguments' expressions. This therefore has call-by-name semantics rather than the modern call-by-value semantics.
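
We can watch the call-by-name behavior directly: below, x is bound to the expression (car (quote (a b))), which is then re-evaluated at each of the two references to x in the body:

* (eval '((lambda (x) (cons x x)) (car (quote (a b)))) nil)
(A . A)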

By April 1960 we see these changes:

(defun eval (e a)
  (cond ((atom e) (assoc e a))
        ((atom (car e))
         (cond ((eq (car e) 'QUOTE) (cadr e))
               ((eq (car e) 'ATOM)  (atom (eval (cadr e) a)))
               ((eq (car e) 'EQ)    (eq (eval (cadr e) a) (eval (caddr e) a)))
               ((eq (car e) 'COND)  (evcon (cdr e) a))
               ((eq (car e) 'CAR)   (car (eval (cadr e) a)))
               ((eq (car e) 'CDR)   (cdr (eval (cadr e) a)))
               ((eq (car e) 'CONS)  (cons (eval (cadr e) a) (eval (caddr e) a)))
               (t (eval (cons (assoc (car e) a) (evlis (cdr e) a)) a))))
        ((eq (caar e) 'LABEL) (eval (cons (caddar e) (cdr e))
                                    (cons (list (cadar e) (car e)) a)))
        ((eq (caar e) 'LAMBDA) (eval (caddar e)
                                     (append (pair (cadar e) (evlis (cdr e) a)) a)))))
Note how evaluating an atom now simply looks up its value in the association list, and evaluating a combination whose operator is a lambda expression evaluates the arguments eagerly. This is a call-by-value interpreter.

Max-Gerd Retzlaff: uLisp on M5Stack (ESP32): Stand-alone uLisp computer (with code!)

· 19 days ago

Last Thursday, I started to use the M5Stack Faces keyboard I mentioned before and wrote a keyboard interpreter and REPL, so this makes another little handheld self-contained uLisp computer. Batteries are included, so this makes it stand-alone and take-along. :)

I have made this as a present to my nephew who just turned eight last Saturday. Let's see how this can be used to actually teach a bit of Lisp. The first programming language needs to be Lisp, of course!

[image: Programming “Hello World!” on the M5Stack with Faces keyboard]

Read the whole article.

Joe Marshall: Early LISP

· 23 days ago

In AI Memo 8 of the MIT Research Laboratory of Electronics (March 4, 1959), John McCarthy gives a definition of the universal S-function apply:

     apply is defined by
     apply[f;args]=eval[combine[f;args]]
     eval is defined by
eval[e]=[
first[e]=NULL→[null[eval[first[rest[e]]]]→T;1→F]
first[e]=ATOM→[atom[eval[first[rest[e]]]]→T;1→F]
first[e]=EQ→[eval[first[rest[e]]]=eval[first[rest[rest[e]]]]→T;
     1→F]
first[e]=QUOTE→first[rest[e]];
first[e]=FIRST→first[eval[first[rest[e]]]];
first[e]=REST→rest[eval[first[rest[e]]];
first[e]=COMBINE→combine[eval[first[rest[e]]];eval[first[rest[rest
     [e]]]]];
first[e]=COND→evcon[rest[e]]
first[first[e]]=LAMBDA→evlam[first[rest[first[e]]];first[rest[rest
    [first[e]]]];rest[e]];
first[first[e]]=LABELS→eval[combine[subst[first[e];first[rest
    [first[e]]];first[rest[rest[first[e]]]]];rest[e]]]]
where: evcon[c]=[eval[first[first[c]]]=1→eval[first[rest[first[c]]]];
           1→evcon[rest[c]]]
and
evlam[vars;exp;args]=[null[vars]→eval[exp];1→evlam[
     rest[vars];subst[first[vars];first[args];exp];rest[args]]]
McCarthy asserts that “if f is an S-expression for an S-function φ and args is a list of the form (arg1, …, argn) where arg1, ---, argn are arbitrary S-expressions then apply[f,args] and φ(arg1, …, argn) are defined for the same values of arg1, … argn and are equal when defined.”

I find it hard to puzzle through these equations, so I've transcribed them into S-expressions to get the following:

;;; Hey Emacs, this is -*- Lisp -*-

(in-package "CL-USER")

;; Don't clobber the system definitions.
(shadow "APPLY")
(shadow "EVAL")

(defun apply (f args)
  (eval (combine f args)))

(defun eval (e)
  (cond ((eq (first e) 'NULL)    (cond ((null (eval (first (rest e)))) t)
                                       (1 nil)))
        ((eq (first e) 'ATOM)    (cond ((atom (eval (first (rest e)))) t)
                                       (1 nil)))
        ((eq (first e) 'EQ)      (cond ((eq (eval (first (rest e)))
                                            (eval (first (rest (rest e))))) t)
                                       (1 nil)))
        ((eq (first e) 'QUOTE)   (first (rest e)))
        ((eq (first e) 'FIRST)   (first (eval (first (rest e)))))
        ((eq (first e) 'REST)    (rest  (eval (first (rest e)))))
        ((eq (first e) 'COMBINE) (combine (eval (first (rest e)))
                                          (eval (first (rest (rest e))))))
        ((eq (first e) 'COND)    (evcon (rest e)))
        ((eq (first (first e)) 'LAMBDA) (evlam (first (rest (first e)))
                                               (first (rest (rest (first e))))
                                               (rest e)))
        ((eq (first (first e)) 'LABELS) (eval (combine (subst (first e)
                                                              (first (rest (first e)))
                                                              (first (rest (rest (first e)))))
                                                       (rest e))))))

(defun evcon (c)
  (cond ((eval (first (first c))) (eval (first (rest (first c)))))
        (1 (evcon (rest c)))))

(defun evlam (vars exp args)
  (cond ((null vars) (eval exp))
        (1 (evlam (rest vars)
                  (subst (first args)
                         (first vars)
                         exp)
                  (rest args)))))
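
The memo's combine is just cons by another name, so one more definition completes the transcription:

(defun combine (left right)
  (cons left right))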
With that in place, this should run:
* (eval '(eq (first (combine 'a 'b)) (first (combine 'a 'c))))
T

As Steve “Slug” Russell observed, eval is an interpreter for Lisp. This version of eval uses an interesting evaluation strategy. If you look carefully, you'll see that there is no conditional clause for handling variables. Instead, when a lambda expression appears as the operator in a combination, the body of the lambda expression is walked and the bound variables are substituted with the expressions (not the values!) that represent the arguments. This is directly inspired by β-reduction from lambda calculus.

This is buggy, as McCarthy soon discovered. In the errata published one week later, McCarthy points out that the substitution process doesn't respect quoting, as we can see here:

* (eval '((lambda (name) (combine 'your (combine 'name (combine 'is (combine name nil))))) 'john))
(YOUR 'JOHN IS JOHN)
With a little thought, we can easily generate other name collisions. Notice, for example, that the substitution will happily substitute within the bound variable list of nested lambdas.

Substitution like this is inefficient. The body of the lambda is walked once for each bound variable to be substituted, then finally walked again to evaluate it. Later versions of Lisp will save the bound variables in an environment structure and substitute them incrementally during a single evaluation pass of the lambda body.

Jonathan Godbout: Cl-Protobufs Enumerations

· 25 days ago

In the last few posts we discussed family life, and before that we created a toy application using cl-protobufs and the ACE lisp libraries. Today we will dive deeper into the cl-protobufs library by looking at Enumerations. We will first discuss enumerations in Protocol Buffers, then we will discuss Lisp Protocol Buffer enums.

Enums:

Most modern languages have a concept of enums. In C++ enumerations are compiled down to integers and you are free to use integer equality. For example

#include <iostream>

enum Fish {
  salmon,
  trout,
};

int main() {
  std::cout << std::boolalpha << (salmon == 0) << std::endl;
}

Will print true. This is in many ways wonderful: enums compile down to integers and there's no cost to using them. It is baked into the language! 

Protocol Buffers are available for many languages, not just C++. You can find the documentation for Protocol Buffer enums here: 

https://developers.google.com/protocol-buffers/docs/proto#enum

Each language has its own way to support enumeration types. Languages like C++ and Java, which have built-in support for enumeration types, can treat protobuf enums like any other enum. The above enum could be written (with some caveats) in Protocol Buffer as:

enum Fish {
  salmon = 0;
  trout = 1;
}

You should be careful though: protoc will give a compile warning that enum value 0 should be a default value, so

enum Fish {
  default = 0;
  salmon = 1;
  trout = 2;
}

Is preferred.

Let’s get into some detail for the two variants of Protocol Buffers in use.

// Example message to use below.
enum Fish {
  default = 0;
  salmon = 1;
  trout = 2;
}

message Meal {
  {optional} Fish fish = 1;
}

The `optional` label will only be written for proto 2.

Proto 2:

In proto 2 we can always tell whether `Meal.fish` was set. If the field has the `required` label then it must be set, by definition. (But the `required` label is considered harmful; don’t use it.) If the field has an `optional` label then we can check if it has been set or not, so again a default value isn’t necessary.

If the enum is updated to:

// Example message to use below.
enum Fish {
  default = 0;
  salmon = 1;
  trout = 2;
  tilapia = 3;
}

and someone sends fish = tilapia to a system where tilapia isn't a valid entry, the library is allowed to do whatever it wants! In Java it sets it to the first entry, so Meal.fish would be default! 

Proto 3:

In proto3 if the value of Meal.fish is not set, calling its accessor will return the default value which is always the zero value. There is no way to check whether the field was explicitly set. A default value (i.e., a name that maps to the value zero) must always be given, else the user will get a compile error.

If the Fish enum was updated to contain tilapia as above, and someone sent a proto message containing tilapia to a system with an older program that had the message not containing tilapia, the deserializer should save the enum value. That is, the underlying data structure should know it received a "3" for the fish field in Meal. How the accessors return this value is language dependent. Re-serializing the message should preserve this "unrecognized" value.

A common example is: A gateway system wants to do something with the message and then forward it to another system. Even though the middle system has an older schema for the Fish message it needs to forward all the data to the downstream system.

Cl-protobufs:

Now that we understand the basics of enumerations, it is important to understand how cl-protobufs records enumeration values.

Lisp as a language does not have a concept of enumerations; what it does understand is keywords. Taking fish as above and running protoc we will get (see readme https://github.com/qitab/cl-protobufs/#enums):

(deftype fish () '(member :default :salmon :trout))

(defun fish-to-int (keyword) 
  (ecase keyword
    (:default 0)
    (:salmon 1)
    (:trout 2)))

(defun int-to-fish (int)
  (ecase int
    (0 :default)
    (1 :salmon)
    (2 :trout)))
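
A quick sanity check of what these generated helpers give you (with the deftype above, typep performs the membership test):

(fish-to-int :salmon)   ; => 1
(int-to-fish 2)         ; => :trout
(typep :salmon 'fish)   ; => T
(typep :tilapia 'fish)  ; => NIL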

Looking at the tilapia example, the enum deserializer preserves the unknown field in both proto2 and proto3. Calling an accessor on a field containing an unknown value will return :%undefined-n. So for tilapia we will see :%undefined-3.

Warning: To get this to work properly we have to remove type checks from protocol buffer enumerations. You can set the field value in a lisp protocol buffer message to any keyword you want, but you will get a serialization error when you try to serialize. This was a long discussion internally, but that design discussion could turn into a blog post of its own.

Conclusion:

The enumeration fields in cl-protobufs are fully proto2 and proto3 compliant. To do this we had to remove type checking. As a consumer, it is suggested that you always type check and handle undefined enumeration values in your usage of protocol buffer enums. We give you a deftype to easily check.
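
For example, a defensive read might look like this sketch (meal-fish is a hypothetical accessor name standing in for whatever accessor your generated code provides):

(defun known-fish (meal)
  ;; Return MEAL's fish field, or NIL if it holds an enum value
  ;; this schema does not know about (e.g. :%undefined-3).
  (let ((value (meal-fish meal)))
    (when (typep value 'fish)
      value)))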

I hope you have enjoyed this deep dive into cl-protobuf enums. We strive to remove as many gotchas as possible.


Thanks to Ron and Carl for the continual copy edits and improvements!

Max-Gerd Retzlaff: uLisp on M5Stack (ESP32): temperature sensors via one wire

· 25 days ago

I added support for Dallas temperature sensors to ulisp-esp-m5stack. Activate #define enable_dallastemp in order to use it. It is based on the Arduino libraries OneWire.h and DallasTemperature.h.

I used pin 16 to connect my sensors but you can change ONE_WIRE_BUS to use a different pin. As the OneWire library uses simple bit banging and no hardware support, e.g. UART, any general-purpose input/output (GPIO) pin will work.

The interface consists of four uLisp functions: INIT-TEMP, GET-TEMP, SET-TEMP-RESOLUTION, and GET-TEMP-DEVICES-COUNT. Here is their documentation:

Function init-temp
 
Syntax:
   init-temp
     => result-list

Arguments and values:
   result-list---a list of device addresses; each address being a list of 8 integer values specifying a device address.

Description:
   Detects all supported temperature sensors connected via one wire bus to the pin ONE_WIRE_BUS and returns the list of the sensors' device addresses.

   All sensors are configured to use the resolution specified by default DEFAULT_TEMPERATURE_PRECISION via a broadcast. Note that a sensor might choose a different resolution if the desired resolution is not supported. See also: set-temp-resolution.

Function get-temp
 
Syntax:
   get-temp address
     => temperature

Arguments and values:
   address---a list of 8 integer values specifying a device address.

   temperature---an integer value; the measured temperature in Celsius.

Description:
   Requests the sensor specified by address to measure and compute a new temperature reading, retrieves the value from the sensor device and returns the temperature in Celsius.

Function set-temp-resolution
 
Syntax:
   set-temp-resolution address [resolution]
     => actual-resolution

Arguments and values:
   address---a list of 8 integer values specifying a device address.

   resolution---an integer value.

   actual-resolution---an integer value.

Description:
   Tries to configure the sensor specified by address to use the given resolution and returns the actual resolution that the device is set to after the attempt.

   Note that a sensor might choose a different resolution if the desired resolution is not supported. In this case, the returned actual-resolution differs from the argument resolution.

   If the argument resolution is missing, the default given by DEFAULT_TEMPERATURE_PRECISION is used instead.

Function get-temp-devices-count
 
Syntax:
   get-temp-devices-count
     => count

Arguments and values:
   count---an integer value; the number of detected supported temperature sensors.

Description:
   Returns the number of temperature sensors supported by this interface that were detected by the last call to INIT-TEMP. Note that this might not be the correct current count if sensors were removed or added since the last call to INIT-TEMP.
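
Putting the interface together, a minimal uLisp snippet to detect all connected sensors and print one reading per sensor might look like this (a sketch; it assumes at least one supported sensor is wired to ONE_WIRE_BUS):

(defun read-all-temps ()
  (dolist (addr (init-temp))   ; detect sensors; returns their addresses
    (print (get-temp addr))))  ; measure and print, one sensor at a time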

Findings from reading DallasTemperature.h and DallasTemperature.cpp

These are the notes I wrote down when reading the source code of the Dallas temperature sensor library and my conclusion how to best use it which lead to my implementation for uLisp.

1. The process of counting the number of devices is efficiently done in parallel by a binary tree algorithm.

2. The result of the search is the number of devices with their addresses.

3. The DallasTemperature library keeps only a count of devices and a count of supported temperature sensors (ds18Count) in memory, not an indexed list of addresses. This is done in DallasTemperature::begin() by doing a search but only the counts are kept, no addresses are stored. Sadly, it also does not return anything.

4. getAddress() does a search again to determine the address for a device index. So it is faster to just get a sensor reading by using the address, not the index; it saves one search.

5. Sadly, there is no command to get a list of addresses in a row. So at least once you have to do getAddress() to actually get the addresses of all devices.

6. requestTemperature() can be applied to a single device only or to all devices in parallel. It is as fast to request a temperature from all devices as from a single device.

7. Actually getting the temperature reading works only one device at a time. getTemp*(deviceAddress) is faster than getTemp*ByIndex(index) as the latter has to do a search first (see 4.).

8. There are these temperature resolutions: 9, 10, 11, and 12 bits. The conversion (=reading) times are:
9 bit – 94 ms
10 bit – 188 ms
11 bit – 375 ms
12 bit – 750 ms

9. setResolution() can either set all devices in parallel or only set one device at a time (only by address; there is no setResolutionByIndex()).

10. The temperatures are internally stored in 1/128 degree steps. This is the "raw" reading returned by DallasTemperature::getTemp() as int16_t.

DallasTemperature::getTempC returns "(float) raw * 0.0078125f" and
DallasTemperature::getTempF returns "((float) raw * 0.0140625f) + 32.0f".

In case of an error,
getTempC() will return DEVICE_DISCONNECTED_C which is "(float)-127",
getTempF() will return DEVICE_DISCONNECTED_F which is "(float)-196.6", and
getTemp() will return DEVICE_DISCONNECTED_RAW which is "(int16_t)-7040", respectively.

11. If you don't need the actual temperature but just to monitor that the temperature is in a defined range, it is not necessary to read the temperatures at all (which has to happen one sensor at a time). Instead, you can use the alarm signaling.

For that, you can set a high and a low alarm temperature per device and then you can do an alarm search to determine in parallel if there are sensors with alarms. The range can be half open, that is, you can also define only a high or only a low alarm temperature.

DallasTemperature::alarmSearch() returns one device address with an alarm at a time. It is also possible to install an alarm handler and then call DallasTemperature::processAlarms() which will do repeated alarm searches and call the handler for each device with an alarm.

12. isConnected(deviceAddress) can be used to determine if a certain sensor is still available. It will return quickly when it is not, but transfers a full sensor reading in case it is still available. The library currently does not support a case where parallel search is used to determine if known devices are still present.

13. The search is deterministic, it seems, so as long as you don't change sensors, the indices stay the same. If you add or remove a sensor, existing sensors might get new indices. So it seems actually not to be safe to use the *ByIndex() functions.

14. getDeviceCount() gives you the number of all devices, getDS18Count() the number of all supported DS18 sensors. But no function gives you the list of indices or addresses of all supported DS18 sensors.

validFamily(deviceAddress) lets you check by address if a device is supported. Supported are DS18S20MODEL (also DS1820), DS18B20MODEL (also MAX31820), DS1822MODEL, DS1825MODEL, and DS28EA00MODEL.

getAddress() just checks if the address is valid (using validAddress(deviceAddress)) but not if the device is actually known. As getAddress() already calls validAddress() for you, there should be no need to ever call validAddress() from user code. If you just request a temperature from all devices till getDeviceCount() you'll also send requests to unsupported devices.

In conclusion, this seems to be the best approach to setup all devices:

  1. Call getDS18Count() once to determine that there are any supported temperature sensors at all.
  2. Iterate over all devices, that is, from index "0" up to "getDeviceCount() - 1".
  3. Call getAddress() for each index (this will also check validAddress())
  4. and then call validFamily() for the address.
  5. If validFamily() returns true, store the address for later temperature readings.
  6. This is also a good time to call setResolution() as per default each device is left at its individual default resolution if you have sensors of different kinds. Either call setResolution(newResolution) to set all devices in parallel, or setResolution(address, newResolution) in the loop right after each call to validFamily() to set up individual resolutions.

To read sensor values:

  1. Call requestTemperature() to request all sensors to do new readings in parallel,
  2. then iterate over the stored list of DS18 addresses and
  3. call getTempC(address), getTempF(address), or getTemp(address) for each address and
  4. check for error return values (see Finding 10.).

Note: getTempC() and getTempF() will call getTemp() internally and that one will also use isConnected(). So there should be no need to call isConnected() from user code if you check for the error return values of the functions (see Finding 10.)



This is the last thing I promised to release in my previous post of February 15, 2021. Documentation takes time! But I programmed new features last Thursday so stay tuned.


See also "Curl/Wget for uLisp", time via NTP, lispstring without escaping and more space, flash support, muting of the speaker and backlight control and uLisp on M5Stack (ESP32).

Read the whole article.

Didier Verna: Clon 1.0b25 is out

· 29 days ago

Today, I'm releasing the next beta version of Clon, my command-line options management library.

The previous official release occurred 6 years ago. Since then, a number of changes had been quietly sleeping in the trunk but never made their way into Quicklisp. More recently, I have also applied a number of changes that are worth mentioning here.

First of all, a large part of the infrastructure has been updated, following the evolution of the 8 supported compilers, and that of ASDF and CFFI as well. This should normally be transparent to the user though, provided that one uses a reasonably recent compiler / ASDF version ("reasonably" intentionally left undefined). Other than that...

  • The constraints on termio support auto-detection had become slightly too restrictive, so they have been relaxed.
  • The exit function has been deprecated in favor of uiop:quit.
  • The support for running in scripts rather than in dumped executables has been improved, notably by offering the possibility to provide an alternate program name when argv0 is not satisfactory.
  • Clon is now compatible with executables dumped via ASDF's program-op operation, or dumped natively; see the sketch after this list. The demonstration programs in the distribution have been updated to illustrate both dumping methods (ASDF, and Clon's dump function).
  • The documentation on application delivery has been largely rewritten, and has become a full chapter rather than a thin appendix.
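
As a minimal sketch of the ASDF side of this (the system and entry-point names here are illustrative, not taken from the Clon distribution):

(asdf:defsystem "clon-demo"
  :depends-on ("net.didierverna.clon")
  :components ((:file "main"))
  :build-operation "program-op"
  :build-pathname "clon-demo"
  :entry-point "clon-demo::main")

;; Build the executable with:
;;   (asdf:make "clon-demo")
;; The native alternative is Clon's own dump, along the lines of:
;;   (net.didierverna.clon:dump "clon-demo" clon-demo::main)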

There are also a few bug fixes in this release.

  • Several custom readtable problems have been fixed for CCL, CLISP, and ECL (thanks to Thomas Fitzsimmons). Note that Clon depends on named-readtables now.
  • Clon now compiles its termio support correctly with a C++ based ECL (thanks to Pritam Baral).
  • One problem in the conversion protocol for path options has been corrected (thanks to Olivier Certner).

All entry points are on Clon's web page.

Enjoy!

Marco AntoniottiHEΛPing ASDF

· 32 days ago

 ... more fixing and, it goes without saying, more creeping features.

I got prodded to integrate HEΛP with other tools; mostly, of course, ASDF.  A simple solution was to define a document-op for a system.  After jumping through a few hoops, I settled on using the :properties of a system to pile up arguments for the main HEΛP document function (well, only one argument for the time being).  Bottom line, suppose you have:

  (asdf:defsystem "foosys"
     :pathname #P"D:/Common Lisp/Systems/foosys/")

now you just issue

  (asdf:operate 'hlp:document-op "foosys")

and the documentation for the system "foosys" will appear in the "docs/html/" subfolder.

If you want to pass a title to the document function, you set up your system as:

  (asdf:defsystem "foosys"
     :properties (:documentation-title "The FOO Omnipotent Tool")
     :pathname #P"D:/Common Lisp/Systems/foosys/")

and the parameter will be used (instead of the bare system name).

It works! 😁

Some more fixing and more extensions may be needed (hlp:document takes a lot of parameters) but it is already usable.

All the necessary bits and pieces are in the HEΛP repository, and they should get into Quicklisp in the next release.

Enjoy


(cheers)

Marco AntoniottiNeed more HEΛP?

· 36 days ago

Just a quick note for people following these... parentheses.

I have carved out some time to do some more Lisp hacking, and this led me to look at the very nice usocket library (I want to do some network programming).  The usocket library documentation page has a bit of an "old" and "handcrafted" look and feel to it, so I tried to produce a version of the documentation with help from my HEΛP library.

Well, it turns out that usocket has some more than legitimate code within it that my HEΛP library was not handling; even worse, it unearthed a bug in the Lambda List parsing routines.

As an example, usocket uses the following idiom to set some of the documentation strings.

    (setf (documentation 'fun 'function) "Ain't this fun?") 

This is perfectly fine, but it needed some extra twisting to get HEΛP to do what is, IMHO, the right thing: in this case, ensuring that the lambda list of the function is properly rendered in the final documentation.

Apart from that, a few not so nice buglets were exposed in the code parsing lambda lists.  The result is that now the logic of that piece of code is simpler and somewhat cleaner.

So, if you want to get HEΛP to document your Common Lisp code, give it a spin.



(cheers)

Nicolas HafnerGoing Underground - March Kandria Update

· 46 days ago

I can't believe it's been two months already since the year started. Time moves extremely quickly these days. Anyway, we have some solid progress to show, and some important announcements to make this month, so strap in!

Overall progress

Last month was a big update with a lot of new content, particularly all the custom buildings Fred and I had put together to build the surface camp. This month involved a lot more of that, but for the first underground region. This region is still very close to the surface, so it'll be composed out of a mix of ruins of modern corporate architecture, and natural caves.

office dorms

As before, figuring out a fitting style was very challenging, even disregarding the fact that it has to be in ruins as well. Still, I think what we put together, especially combined with Kandria's lighting system, creates a great amount of atmosphere and evokes that feeling of eerie wonder that I've always wanted to hit.

Mushrooms are a big part of the ecosystem in Kandria, being the primary food source for the underground dwellers, so I couldn't resist adding giant mushrooms to the caves.

cave

On the coding side there's been a bunch of bugfixing and general improvement going on. The movement AI can now traverse the deep underground regions seemingly without problem. Game startup speed is massively improved thanks to some caching of the movement data, and NPCs can now climb ropes and use teleporters when navigating.

We've also spent some time working on the combat again, adding some extra bits that, while seemingly small, change the feel quite a lot. Attacks now have a cooldown that forces you to consider the timing, and inputs are no longer buffered for the entire duration of an animation, which eliminates the feeling of lag that was prevalent before. Fred also tuned some of the player's attack animations some more and while I couldn't tell you what exactly changed, when I first tried it out I immediately noticed that it felt a lot better!

All of this just further reaffirms my belief that making a good combat system involves a ton of extremely subtle changes that you wouldn't notice at all unless you did a frame-by-frame analysis. It all lies in the intuition the system builds up within you, which makes it hard to tune. I'm sure we'll need to do more rounds of tuning like that as we progress.

wolf

Then I've also reinstated the wolf enemy that I first worked on close to a year ago. The AI is a lot simpler now, but it also actually works a lot better. It's still a bit weird though, especially when interacting with slopes and obstacles, but it does make for a nice change of pace compared to the zombie enemy. We'll have to see how things turn out when they're placed in the context of actual exploration and quests, though.

Another feature I resurrected and finally got to work right is the ability to save and load regions from zip files. This makes it easy to exchange custom levels. The editor used for the game is shipped with the game and always available at the press of a button, so we're hoping to use that in combination with the zip capability to organise a small level design contest within the community. We'll probably launch that in April, once the new demo hits. If you like building or playing levels, keep an eye out!

The biggest chunk of work this month went into doing level design. I've been putting that off for ages and ages, because it's one of those things that I'm not very familiar with myself, so it seems very daunting. I don't really know where to start or how to effectively break down all the constraints and requirements and actually start building a level around them, let alone a level that's also fun to traverse and interesting to look at! It was so daunting to me in fact, that I couldn't work on anything at all for one day because I was just stuck in a sort of stupor.

Whatever the case though, the only way to break this mould and get experience, and thus some confidence and ability in making levels, is to actually do it. I've put together the first part of the first region now, though it's all still very rough and needs a ton more detail and playtesting.

region1 upper

The part above is the surface settlement, with the city ruins to the right. Below the camp lies the central hub of the first region, which links up to a variety of different rooms - an office, a market, an apartment complex, and several natural caves that formed during the calamity. The sections below the city ruins don't belong to the slice, but will be part of the full "first act demo" that we plan to release some months after the slice.

Even with all the tooling I built that allows you to easily drag out geometry and automatically tile a large chunk of it, it still takes a ton of time to place all the little details like chairs, doors, railings, machines, plants, broken rubble, and background elements, and to vary the elements and break up repetition. It also takes a lot of extra effort to ensure that the tiles work correctly in this pseudo-isometric view we have going on for the rooms. Still, the rooms do look a lot better like this than they did with my initial head-on view, so I think we'll stick with it even if it costs us more time to build.

Speechless

I've finally gotten around to documenting the dialogue system I've developed for Kandria. I've given it the name Speechless, since it's based on Markless. It's designed to be engine-independent, so if you have your own game in Lisp and need a capable dialogue logic system, you should be able to make use of it. If you do, please tell me about it, I'd be all ears!

I'd also be interested to hear from other narrative designers on what they think of it. I can't say I'm familiar with the tools that are used in other engines - a lot of it seems to be in-house, and frequently based around flow-charts from what I can tell. Having things completely in text does remove some of the visual clarity, but I think it also makes it a lot quicker to put things together.

Now, I know that the Lisp scene is very small, and the games scene within it even smaller, so I don't think Speechless will gain much traction, but even if it itself won't, I hope that seeing something like it will at least inspire some to build similar systems, as I think this text based workflow can be extremely effective.

Hiring a musician

I'm hiring again! Now that Kandria's world is properly coming together it's time to look at a composer to start with a soundtrack to really bring the world to life. Music is extremely important to me, so I wanted to wait until we had enough of the visuals together to properly inform the mood and atmosphere. I'm still having a lot of trouble imagining what the world should actually sound like, and there's a broad range of music I like, so I hope that I can find someone that can not only produce a quality score, but also help figure out the exact sound aesthetics to go for.

If you are a musician, or know musicians that are looking for work, have a look at the listing!

Tim's recount

Quests, quests, and quests! I've got the core gameplay scripting done for most of the vertical slice quests now. The last couple are still using placeholder dialogue, but for the others I've done several drafts in the voice of the characters, sprinkling in player choices here and there, and yeah - it feels like it's coming together. Hopefully it's familiarising the player with the characters, their unique voices, and their motivations, whilst keeping the gameplay and plot momentum moving forwards. I've now written in anger for all of the main hub characters, and feel like I'm getting into their headspace.

Some of the scripting functionality has been more complex than I anticipated - but with help from Nick creating new convenience functions, and showing me the best way to structure things, I feel like I've gotten most of the design patterns down now that I'm going to need going forwards.

The rest of the month will involve rounding out these quests, iterating on feedback, and transposing the triggers (which are still using debug locations) into the main region layout.

Fred's recount

Quite a lot of character anims in! It'll be exciting to see the camp characters come to life in the game and not just in my animation software. 🙂

This month felt like an important milestone in making Kandria's world more immersive. There's still more work to do on the buildings and on getting convincing yet fun-to-explore ruins, but overall it feels like a lot of stuff is coming together.

The future

This is the last month we had in our plan for the vertical slice. Unfortunately it turns out that we had way underestimated the amount of time it would take to create the required tilesets and design the levels. Still, it seems much more important to avoid crunch, and to deliver a quality slice, so we're looking to extend the deadline.

We'll still try to release an early slice for our testers by the end of this month, but then we'll take two additional weeks for bugfixing and polish, so the updated public demo should be out mid-April. We'll be sure to make an announcement when it comes out or if there's other problems that'll further delay it. Please bear with us!

The remainder of April though we're planning to completely switch gears away from Kandria and catch a mental breather. We'll instead work on a new, very small jam project, that we hope to build and release within the two weeks. We're not entirely certain yet what exactly we'll do, but it should be a lot of fun to do a jam again one of these days.

As always, thank you very much for reading and in general for your interest in Kandria! Starting from scratch like we are (in multiple ways at that!) isn't easy, and it's been really nice to see people respond and support the project.

If you'd like to support us, it would help a lot to wishlist Kandria on Steam, and to join the Discord! There's also a lot of additional information on the development and our current thoughts in the weekly mailing list updates and my Twitter.

Quicklisp newsFebruary 2021 Quicklisp dist update now available

· 52 days ago

 New projects

  • audio-tag — tool to deal with audio tags. read and write — BSD-2-Clause License
  • canonicalized-initargs — Provides a :canonicalize slot option accepting an initarg canonicalization function. — Unlicense
  • cl-debug-print — A reader-macro for debug print — MIT
  • cl-json-schema — Describe cl-json-schema here — Specify license here
  • cl-ses4 — AWS SES email sender using Signature Version 4 of Amazon's API — Public Domain
  • cl-telebot — Common Lisp Telegram Bot API — MIT
  • consfigurator — Lisp declarative configuration management system — GPL-3+
  • cricket — A library for generating and manipulating coherent noise — MIT
  • cubic-bezier — A library for constructing and evaluating cubic Bézier curve paths. — MIT
  • defconfig — A configuration system for user exposed variables — GPLv3
  • enhanced-defclass — Provides a truly extensible version of DEFCLASS that can accurately control the expansion according to the metaclass and automatically detect the suitable metaclass by analyzing the DEFCLASS form. — Unlicense
  • freesound — A client for Freesound.org. — MIT
  • mnas-graph — Defines basic functions for creating a graph data structure and displaying it via graphviz. — GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 or later
  • mnas-hash-table — Defines some functions for working with hash tables. — GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 or later
  • nyaml — Native YAML Parser — MIT
  • pvars — easily define persistent variables — MIT
  • random-uuid — Create and parse RFC-4122 UUID version 4 identifiers. — MIT
  • sanity-clause — Sanity clause is a data contract and validation library. — LGPLv3
  • seedable-rng — A seedable random number generator. — MIT
  • slot-extra-options — Extra options for slots using MOP. — LGPL-3.0-or-later
  • tailrec — Guaranteed tail call optimization. — LLGPL
  • tfeb-lisp-tools — TFEB.ORG Lisp tools — MIT

Updated projects: algae, april, async-process, black-tie, cepl, cl+ssl, cl-ana, cl-async, cl-change-case, cl-coveralls, cl-data-structures, cl-dbi, cl-fxml, cl-grip, cl-gserver, cl-html-readme, cl-ipfs-api2, cl-kraken, cl-liballegro-nuklear, cl-libusb, cl-patterns, cl-pdf, cl-prevalence, cl-reexport, cl-shlex, cl-smtp, cl-string-generator, cl-threadpool, cl-typesetting, cl-unicode, cl-utils, cl-webkit, cl-yesql, clog, closer-mop, clsql, cmd, common-lisp-jupyter, core, cover, croatoan, datum-comments, defenum, dexador, easy-audio, eclector, fast-websocket, feeder, file-select, flare, float-features, freebsd-sysctl, functional-trees, fxml, geco, gendl, gtirb-capstone, gtirb-functions, gtwiwtg, harmony, hu.dwim.bluez, hu.dwim.common-lisp, hu.dwim.defclass-star, hu.dwim.logger, hu.dwim.quasi-quote, hu.dwim.reiterate, hu.dwim.sdl, hu.dwim.walker, hu.dwim.zlib, hunchenissr, iterate, jingoh, lass, lichat-protocol, linear-programming, lisp-chat, lmdb, magicl, maiden, mailgun, markdown.cl, mcclim, mgl-pax, mito, monomyth, named-read-macros, nodgui, num-utils, numcl, open-location-code, origin, orizuru-orm, osicat, periods, petalisp, plump-sexp, portal, py4cl, py4cl2, qlot, quri, read-as-string, repl-utilities, rpcq, rutils, s-sysdeps, sel, select, serapeum, shared-preferences, sly, spinneret, studio-client, stumpwm, ten, trivia, trivial-clipboard, trivial-features, ttt, uax-15, ucons, umlisp, uncursed, utm-ups, with-contexts, zacl, zippy.

To get this update, use (ql:update-dist "quicklisp"). Enjoy!

Eric TimmonsStatic Executables with SBCL v2

· 56 days ago

It's taken me much longer than I hoped, but I finally have a second version of my patches to build static executables tested and ready to go! This set of patches vastly improves upon the first by reducing the amount of compilation needed at the cost of sacrificing a little purity. Additionally I have created a system that automates the process of building a static executable, along with other release related tasks.

At a Glance

  • The new patch set can be found on the static-executable-v2 branch of my SBCL fork or at https://www.timmons.dev/static/patches/sbcl/$VERSION/static-executable-support-v2.patch with a detached signature available at https://www.timmons.dev/static/patches/sbcl/$VERSION/static-executable-support-v2.patch.asc signed with GPG key 0x9ACF6934.
  • You'll definitely want to build SBCL with the :sb-prelink-linkage-table feature (newly added by the patch). You'll probably also want the :sb-linkable-runtime feature (already exists, but the patch also enables it on arm/arm64).
  • The new patch lets you build a static executable with less compilation of Lisp code.
  • The asdf-release-ops system automates the process of building a static executable by tying it into ASDF.

What's New?

If you need a refresher about what static executables are or what use cases they're good for, see my previous post on this topic.

With my previous patch, the only way you could create a static executable was to perform the following steps:

  1. Determine the foreign symbols needed by your code. The easiest way to do this is to compile all your Lisp code and then dump the information from the image.
  2. From that list of foreign symbols, create a C file that fills an array with references to those symbols.
  3. Recompile the SBCL core and runtime with this new file, additionally disabling libdl support and linking against your foreign libraries.
  4. (Re)compile all your Lisp code with the new runtime (if you made an image in step 1 it will not be compatible with the new runtime due to feature and build ID mismatches).
  5. Dump the executable.

In the most general case, this involved compiling your entire Lisp image twice. After some #lisp discussions, I realized there was a better way of doing this. While the previous process still works, the new recommended process now looks like:

  1. Build the image you would like to make into a static executable and save it.
  2. Dump the foreign symbol info from this image and write the C file that SBCL can use to prelink itself.
  3. Compile that C file and link it into an existing sbcl.o file to make a new runtime. sbcl.o is the SBCL runtime in object form, created when building with the :sb-linkable-runtime feature.
  4. Load the image from step 1 into your new runtime. It will be compatible because the build ID and feature set are the same!
  5. Dump your now static executable.

This new process can significantly reduce the amount of time needed to make an executable. Plus it lets you take more advantage of image based development. It's fairly trivial to build an image exactly like you want, dump it, and then pair it with a custom static runtime to make a static executable.
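
To make the flow concrete, here is a rough sketch of what steps 1, 4, and 5 look like from the Lisp side (the system name and file names are illustrative, and the shell commands in the comments are placeholders; the authoritative instructions ship in README.static-executable):

;; Step 1: build the image you want to ship and save it as a plain core.
(asdf:load-system "my-app")
(sb-ext:save-lisp-and-die "my-app.core")

;; Steps 2 and 3 happen outside Lisp: dump the prelink info from the
;; core, compile the generated C file, and link it against sbcl.o to
;; produce a new static runtime.
;;
;; Steps 4 and 5: the core loads into the new runtime because the build
;; ID and feature set match, so you can dump the final executable, e.g.:
;;
;;   $ ./my-app-runtime --core my-app.core
;;   * (sb-ext:save-lisp-and-die "my-app" :executable t)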

There were two primary challenges that needed to be overcome for this version of the patch set.

First, the SBCL core had to be made robust to every libdl function unconditionally returning an error. Since we want the feature set to remain constant, we can't recompile the runtime with #-os-provides-dlopen. Instead, we take advantage of the fact that Musl libc lets you link static executables against libdl, but all those functions are noops. This is the "purity" sacrifice I alluded to above.

Second, since we are reusing an image, the prelink info table (the generated C file) needed to order the symbols exactly as the image expects them to be ordered. The tricky bit here is that some libraries (like cl-plus-ssl) add symbols to the linkage table that will always be undefined. cl-plus-ssl does this in order to support a wide range of openssl versions. The previous patch set unconditionally filtered out undefined symbols, which horribly broke things in the new approach.

More Documentation

As before, after applying the patch you'll find a README.static-executable file in the root of the repo. You'll also find a Dockerfile and an example of how to use it in the README.static-executable.

You can also check out the tests and documentation in the asdf-release-ops system.

Known Issues

  • The :sb-prelink-linkage-table feature does not work on 32-bit ARM + Musl libc >= 1.2. Musl switched to 64-bit time under the hood while still maintaining compatibility with everything compiled for 32-bit time.

The issue is how they maintained backwards compatibility. Every time-related symbol still exists and implements everything on top of the 32-bit time interface. However, if you include the standard header file where the symbol is defined, or you look up the symbol via dlsym, you actually get a pointer to the 64-bit time version of the symbol. We can't use dlsym (it doesn't work in static executables), and the generated C file doesn't include any headers.

This could be fixed if someone is motivated enough to create or find a complete, easy-to-use map between libc symbols and the headers that define them, and to integrate it into the prelink info generator.

  • The :sb-prelink-linkage-table feature works on Windows but causes test failures. The root issue is that mingw64 has implemented its own libm. Its trig functions are fast, but use inaccurate instructions (like FSIN) under the hood. When prelinking, these inaccurate implementations are used instead of the more accurate ones (from msvcrt.dll?) found when using dlsym to look up the symbol.

Next Steps

  1. I would love to get feedback on this approach and any ideas on how to improve it! Please drop me a line (etimmons on Freenode or daewok on Github/Gitlab) if you have suggestions.

  2. I've already incorporated static executables into CLPM and will be distributing them starting with v0.4.0! I'm going to continue rolling out static executables in my other projects.

  3. Pieces of the patch set are now solid enough that I think they can be submitted for upstream consideration. I'll start sending them after the current 2.1.2 freeze.

Max-Gerd Retzlaff"Curl/Wget for uLisp"<br />Or: An HTTP(s) get/post/put function for uLisp

· 57 days ago

Oh, I forgot to continue posting… I just published a quite comprehensive HTTP function supporting put, post, get, auth, HTTP and HTTPS, and more for uLisp at ulisp-esp-m5stack.

Activate #define enable_http and #define enable_http_keywords to get it; the keywords used by the http function are enabled separately, as they might be used more generally and not just by this function.

Note that you need to connect to the internet first. Usually with WIFI-CONNECT.

Here is the full documentation with example calls:

Function http
 
Syntax:
   http url &key verbose
                 (https t)
                 auth
                 (user default_username)
                 (password default_password)
                 accept
                 content-type
                 (method :get)
                 data
     => result-string

Arguments and values:
   verbose---t, or nil (the default); also affects debug output of the argument decoding itself, so it should be put in the first position of a call for full effect.

   https---t (the default), nil, or a certificate as string; uses the default certificate in the C string root_ca if true; the url needs to match: "https://..." for true and "http://..." for false.

   auth---t, or nil (the default).

   user---a string, or nil (the default); uses default value in C string default_username if nil; only used if :auth t.

   password---a string, or nil (the default); uses default value in C string default_password if nil; only used if :auth t.

   accept---nil (the default), or a string.

   content-type---nil (the default), or a string.

   method---:get (the default), :put, or :post.

   data---nil (the default), or a string; only needed with :method :put or :method :post; supplying data with :method :get is an error.

Examples:
   ;; HTTP GET:
   (http "http://192.168.179.41:2342" :https nil)
 
   ;; HTTP PUT:
   (http "http://192.168.179.41:2342"
         :https nil
         :accept "application/n-quads"
         :content-type "application/n-quads"
         :auth t :user "foo" :password "bar"
         :method :put
         :data (format nil "<http://example.com/button> <http://example.com/pressed> \"~a\" .~%"
                           (get-time)))

It can be tested with a minimal HTTP server simulation using bash and netcat:

while true; do echo -e "HTTP/1.1 200 OK\n\n $(date)" | nc -l -p 2342 -q 1; done
(To test with HTTPS in a similar fashion you can use openssl s_server, as explained, for example, in the article Create a simple HTTPS server with OPENSSL S_SERVER by Joris Visscher on July 22, 2015, but then you need to use certificates.)


See also Again more features for uLisp on M5Stack (ESP32): time via NTP, lispstring without escaping and more space, More features for uLisp on M5Stack (ESP32): flash support, muting of the speaker and backlight control, and uLisp on M5Stack (ESP32).

Read the whole article.

Max-Gerd RetzlaffAgain more features for uLisp on M5Stack (ESP32): time via NTP, lispstring without escaping and more space

· 64 days ago

I just pushed three small things to ulisp-esp-m5stack: getting time via NTP, an optional escape parameter for the function lispstring, and increased WORKSPACESIZE and SYMBOLTABLESIZE for the M5Stack.

Getting time via NTP

Enable #define enable_ntptime to get time via NTP. New functions INIT-NTP and GET-TIME. Note that you need to connect to the internet first. Usually with WIFI-CONNECT.

Function init-ntp
 
Syntax:
   init-ntp
     => nil

Description:
   Initializes and configures NTP.

Function get-time
 
Syntax:
   get-time
     => timestamp

Arguments and values:
   timestamp---a string; contains a timestamp in the format of xsd:dateTime.

Description:
   Returns a timestamp in the format of xsd:dateTime.
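
A hypothetical session (the credentials and the returned value are placeholders; WIFI-CONNECT is uLisp's usual way to get online first):

(wifi-connect "my-ssid" "my-password") ; connect to the internet first
(init-ntp)                             ; configure NTP once
(get-time)                             ; => e.g. "2021-02-20T14:02:33Z"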

Add optional escape parameter to function lispstring

I have changed the function lispstring to have an optional escape parameter to switch off the default behavior of handling the backslash escape character. The default behavior is not changed.

The C function lispstring takes a C char* string and returns a uLisp string object. When parsing data in the n-triples format retrieved via HTTP, I noticed that the data had already been modified by lispstring, which broke my parser implemented in uLisp.

As lispstring might be used in other contexts that expect this behavior, I just added the option to switch the un-escaping off.

Increased WORKSPACESIZE and SYMBOLTABLESIZE for M5Stack

The M5Stack ESP32 has 320 kB of usable DRAM in total, although with a lot of restrictions (see next section). I increased WORKSPACESIZE to 9000 cells, which equals 72,000 bytes, and SYMBOLTABLESIZE to 2048 bytes. These sizes seem to still work safely even with bigger applications and a lot of consing.

Warning: You cannot load images created with different settings!

The SRAM of the M5Stack ESP32

In total the M5Stack ESP32 comes with 520 kB of SRAM. The catch is that the ESP32 is based on the Harvard architecture and 192 kB is in the SRAM0 block intended(!) for instructions (IRAM). There is another 128 kB block in block SRAM1 which can be used either for instructions or data (DRAM). The third block SRAM2 has got a size of 200 kB and is for data only. But 8 kB of SRAM2 is lost for ROM mappings.

The ESP-IDF and thus also the Arduino environment use only SRAM0 for instructions and SRAM1 and SRAM2 for data, which is fine for uLisp as it is an interpreter and therefore more RAM for data is perfect. SRAM0 will just hold the machine code of the uLisp implementation but no code written in the language uLisp.

Of the remaining 320 kB another 54 kB will be dedicated for Bluetooth if Bluetooth is enabled in ESP-IDF (which it is by default, #define CONFIG_BT_RESERVE_DRAM 0xdb5c) in the SRAM2 block. And if trace memory is enabled, another 32 kB of SRAM1 are reserved (by default it is disabled, #define CONFIG_TRACEMEM_RESERVE_DRAM 0x0).

So, by default with Bluetooth enabled and trace memory disabled, 266 kB are left. At the bottom of SRAM2, right after the 62 kB used for Bluetooth and ROM, are the application's data and BSS segments. Sadly, at around the border between SRAM1 and SRAM2 there seem to be two small reserved regions of a bit more than 1 kB each, limiting statically allocated memory.

Thus, the "shared data RAM" segment dram0_0_seg in the linker script memory layout is configured to have a default length of 0x2c200 -; CONFIG_BT_RESERVE_DRAM. That is, 176.5 kB (= 180,736 bytes) without Bluetooth and 121.66 kB (= 124,580 bytes) with Bluetooth enabled.

But actually I have already written more than I have intended for this blog post and the rest of my notes, calculations and experiments will have to wait for a future article. For now, I just increased the size of the statically allocated uLisp workspace to make more use of the available memory of the ESP32 in the M5Stack.


See also More features for uLisp on M5Stack (ESP32): flash support, muting of the speaker and backlight control and uLisp on M5Stack (ESP32).

References

Espressif Systems, ESP32 Technical Reference Manual, Shanghai, 2020, section 2.3.2 Embedded Memory.

Read the whole article.

Tycho Garen Programming in the Common Lisp Ecosystem

· 65 days ago

I've been writing more and more Common Lisp recently; I reflected a bunch on the experience in a recent post, which I have since followed up on.

Why Ecosystems Matter

Most of my thinking and analysis of CL comes down to the ecosystem: the language has some really compelling (and fun!) features, so the question really comes down to the ecosystem. There are two main reasons to care about ecosystems in programming languages:

  • a vibrant ecosystem cuts down the time that an individual developer or team has to spend on infrastructural work to get started. Ecosystems provide everything from libraries for common tasks to conventions and established patterns for the big fundamental application choices, not to mention things like easily discoverable answers to common problems.

    The time between "I have an idea" and "I have (proof-of-concept quality) code running" matters a lot. Everything is possible to a point, but too much friction between "idea" and "working prototype" can be a big problem.

  • a bigger and more vibrant ecosystem makes it more tenable for companies/sponsors (of all sizes) to choose Common Lisp for various projects, and there's a little bit of a chicken-and-egg problem here, admittedly. Companies and sponsors want to be confident that they'll be able to efficiently replace engineers if needed, integrate Lisp components into larger systems, or get support when problems arise. These concerns are all kind of intangible (and reasonable!), and the larger and more vibrant the ecosystem, the less risk there is.

    In many ways, recent developments in technology more broadly make Lisp slightly more viable, as a result of making it easier to build applications that use multiple languages and tools. Things like microservices, better generic deployment orchestration tools, and greater adoption of IDLs (including swagger, thrift, and GRPC) all make language choice less monolithic at the organization level.

Great Things

I've really enjoyed working with a few projects and tools. I'll probably write more about these individually in the near future, but in brief:

  • chanl provides CSP-style channels for concurrency. As a current/recovering Go programmer, this library is very familiar and great to have. In some ways, the API provides a bit more introspection and flexibility than I've ever had in Go.
  • lake is a buildsystem tool, in the tradition of make, but with a few additional great features, like target namespacing, a clear definition between "file targets" and "task targets," as well as support for SSH operations, which makes it a reasonable replacement for things like fabric, and other basic deployment tools.
  • cl-docutils provides the basis for a document processing system. I'm particularly partial because I've been using the python (reference) implementation for years, but the implementation is really quite good and quite easy to extend.
  • roswell is really great for getting started with CL, and also for making it possible to test library code against different implementations and versions of the language. I'm a touch iffy on using it to install packages into its own directory, but it's pretty great.
  • ASDF is the "buildsystem" component of CL, comparable to setuptools in python, and it (particularly in its latest versions) is really great. I like the ability to produce binaries directly from asdf, and the "package-inferred" system style is a great addition (basically giving python-style automatic package discovery).
  • There's a full Apache Thrift implementation. While I'm not presently working on anything that would require a legit RPC protocol, being able to integrate CL components into larger ecosystem, having the option is useful.
  • Hunchensocket adds websockets! Web sockets are a weird little corner of any stack, but it's nice to have the option of being able to do this kind of programming. Also, CL seems like a really good platform for it.
  • make-hash makes constructing hash tables easier, which is sort of needlessly gawky otherwise (see the sketch after this list).
  • ceramic provides bridges between CL and Electron for delivering desktop applications based on web technologies in CL.
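
For contrast, this is the "gawky" vanilla route the make-hash bullet alludes to, using only standard Common Lisp (I won't vouch for make-hash's exact API here):

(defun alist->hash (alist &key (test 'equal))
  "Build a hash table from ALIST the standard, verbose way."
  (let ((h (make-hash-table :test test)))
    (loop :for (k . v) :in alist
          :do (setf (gethash k h) v))
    h))

;; (alist->hash '(("one" . 1) ("two" . 2))) => #<HASH-TABLE ...>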

I kept thinking that there wouldn't be good examples of various things (there's a Kafka driver! there's support for various other Apache ecosystem components!), but there are, and that's great. There are gaps, of course, but fewer, I think, than you'd expect.

The Dark Underbelly

The biggest problem in CL is probably discoverability: lots of folks are building great tools and it's hard to really know about the projects.

I thought about phrasing this as a kind of list of things that would be good for supporting bounties or something of the like. Also if I've missed something, please let me know! I've tried to look for a lot of things, but discovery is hard.

Quibbles

  • rove doesn't seem to report results effectively when running tests multi-threaded. It's listed in the readme, but I was able to write really trivial tests that crashed the test harness.
  • Chanl would be super lovely with some kind of concept of cancellation (like contexts in Go), and while it's nice to have a bit more thread introspection, given that the threads are somewhat heavier weight, being able to avoid resource leaks seems like a good plan.
  • There doesn't seem to be any library capable of producing YAML formatted data. I don't have a specific need, but it'd be nice.
  • it would be nice to have some way of configuring the quicklisp client to prefer quicklisp (stable) but also use ultralisp (or another source) if that's available.
  • Putting the capacity into asdf to produce binaries easily is great, and the only thing missing from buildapp/cl-launch is multi-entry binaries. That'd be swell. It might also be easier, as an alternative, to have support for git-style sub-commands in a commandline parser (which doesn't easily exist at the moment), but one-command-per-binary seems difficult to manage.
  • there are no available implementations of a multi-reader single-writer mutex, which seems like an oversight, and yet, here we are.

Bigger Projects

  • There are no encoders/decoders for data formats like Apache Parquet, and the protocol buffers implementation doesn't support proto3. Neither of these is a particular deal breaker, but having good tools for dealing with common developments lowers the cost and risk of using CL in more applications.
  • No support for http/2 and therefore gRPC. Having the ability to write software in CL with the knowledge that it'll be able to integrate with other components, is good for the ecosystem.
  • There is no great modern MongoDB driver. There were a couple of early implementations, but there have since been important changes to the MongoDB protocol. A clearer interface for producing BSON might be useful too.
  • I've looked for libraries and tools to integrate and manage aspects of things like systemd, docker, and k8s. k8s seems easiest to close, as things like cube can be generated from updated swagger definitions, but there's less for the others.
  • Application delivery remains a bit of an open problem. I'm particularly interested in being able to produce binaries that target other platforms/systems (cross compilation), but there is also a class of problems related to being able to ship tools once built.
  • I'm eagerly awaiting, and concerned about, the plight of the current implementations around Darwin's move to ARM in the intermediate term. My sense is that the transition won't be super difficult, but it seems like a thing.

Max-Gerd RetzlaffMore features for uLisp on M5Stack (ESP32): flash support, muting of the speaker and backlight control

· 65 days ago

I finished the IoT sensor device prototype and shipped it last Thursday. It just has a stub bootstrap system in the flash via uLisp's Lisp Library and downloads the actual application in a second boot phase via HTTPS. More on that later.

To make it happen I've added a bunch of things to ulisp-esp-m5stack: flash support, fixes for some quirks of the M5Stack, time via NTP, an HTTP function supporting methods PUT, POST, GET, Auth, HTTP and HTTPS, temperature sensors via one wire, and more. I plan to publish all these features in the next days.

Today you get: flash support, muting of the builtin speaker and control of the LED backlight of the builtin display.

Read the whole article.

Quicklisp newsNewer Quicklisp client available

· 67 days ago

 I had to revert the change that allows slashes in dist names for Ultralisp. If your Quicklisp directory has a lot of files and subdirectories (which is normal), the wild-inferiors file search for dist info is unacceptably slow. 

You can get an updated client with the feature reverted with (ql:update-client).

Quicklisp newsNew Quicklisp client available

· 69 days ago

 I've just published a new version of the Quicklisp client. You can get it with (ql:update-client).

This version updates the fallback ASDF from 2.26 to 3.2.1. (This will not have any effect on any implementation except CLISP, which does not come with ASDF of any version.)

It also includes support for dists with slashes in the name, as published by Ultralisp.

Thanks to those who contributed pull requests incorporated in this update.

Nicolas HafnerSetting Up Camp - February Kandria Update

· 72 days ago

I hope you've all started well into the new year! We're well into production now, with the vertical slice slowly taking shape. Much of the work in January has been on concept and background work, which is now done, so we are moving forward on the implementation of the new features, assets, and writing. This entry will have a lot of pictures and videos to gander at, so be warned!

The vertical slice will include three areas - the central camp, or hub location, the first underground area, and the desert ruins. We're now mostly done implementing the central camp. Doing so was a lot of work, since it requires a lot of unique assets. It still requires a good amount of polish before it can be called well done, but for the vertical slice I think we're good at the point we are now.

camp-1 camp-3

The camp is where all the main cast are (Fi, Jack, Catherine, and Alex), and where you'll return to after most missions. As such, it's important that it looks nice, since this is where you'll spend a lot of your time. It also has to look believable and reasonable for the cast to try and live here, so we spent a good amount of time thinking about what buildings there would be, what purpose they should fulfil, and so forth.

We also spent a good deal of time figuring out the visual look. Since Kandria is set quite far into the future, with that future also having undergone a calamity, the buildings both have to look suitably modern for a future society to have built, but at the same time ruined and destroyed, to fit the calamity event.

camp-2 camp-4

I also finished the character redesign for Fi. Her previous design no longer really fit with her current character, so I really wanted to get that done.

fi-draft fi

On the gameplay side the movement AI has been revised to be able to deal with far more complicated scenarios. Characters can now follow you along, move to various points on the map independently, or lead the player to a destination.

Quests now also automatically track your time to completion, which allows us to do some nice tracking for score and speedrun purposes, and also to implement a 'race' quest. We have a few ideas on those, and it should serve as a nice challenge to try and traverse the various areas as quickly as possible.

We're also thinking of setting up leaderboards or replays for this, but that's gonna have to wait until after the vertical slice.

For look and feel there have also been a bunch of changes. First, there's now a dedicated particle system for effects like explosions, sparks, and so forth. Adding such details really enhances the feel of the combat, and gives a nice, crunchy oomph to your actions. I still have a few more ideas for additional effects to pile on top, and I'll see that I can get to those in due time.

particles

Also on the combat side, there's now a quick-use menu so you can access your healing items and so forth easily during combat. It even has a nice slow-mo effect!

Since we're not making a procedural game, we do have to have a way of gating off far areas in a way that feels at least somewhat natural. To do this I've implemented a shader effect that renders a sandstorm on top of everything. The strength of the effect can be fine-tuned, so we could also use it for certain setpieces or events.

The effect looks a lot better in-game. Video compression does not take kindly to very noisy and detailed effects like this. Having the sand howl around really adds a lot to the feel of the game. In a similar vein, there's also grass and other foliage that can be placed now, which reacts to the wind and characters stepping on it. You can see that in action in this quick run-down of the camp area:

There's a bunch of other things we can't show off quite yet, especially a bunch of excellent animations by Fred. I haven't had the time to integrate all of those yet!

We've also been thinking more about how to handle the marketing side of things. I'm now doing a weekly screenshotsaturday thing on Twitter, and semi-regularly post quick progress gifs and images as well. Give me a follow if you haven't yet!

Then I took advantage of Rami Ismail's excellent consulting service and had a talk with him about what we should do to improve the first impressions for Kandria and how to handle the general strategy. He gave some really excellent advice, though I wish I had had more time to ask other questions, too! I'll probably schedule a consultancy hour with him later this year to catch up with all of that.

Anyway, I think a lot of the advice he gave us isn't necessarily specific to Kandria, so I thought it would be good to share it here, in case you're a fellow developer, or just interested in marketing in general:

  • Make sure to keep a consistent tone throughout your paragraph or trailer. This means that you want to avoid going back and forth between advertising game features or narrative elements, for instance. In Kandria's case we had a lot of back and forth in our press kit and steam page texts, which we've now gone over and revised to be more consistent.
  • Marketing is as much about attracting as many people as possible as it is about pushing people away. You want to be as efficient as possible at advertising to your target group. This also means being as up-front as possible about what your game is and who it is for, so you immediately pull in the people that would care about it, and push away the people that would not.
  • You need to figure out which part of your game best appeals to your core audience, and how you need to put it to make it attractive. Having an advertisement platform that gives you plenty of statistics and targeting features is tremendously helpful for this. Rami specifically suggested using short Facebook ads, since those can be targeted towards very specific groups. Do many small ads using different copy texts and trailers to see which work the best at attracting people to your Steam page.
  • Always use a call to action at the end of your top of the funnel (exposure) marketing. In fact, don't just use one link, use one for every way people have to interact with your game, if you have several. For us in specific this means I'll now include a link to our mailing list, our discord, and our steam page in our material.
  • Only use community/marketing platforms that you're actually comfortable with engaging with yourself. This means don't force yourself to make a Discord or whatever if you're not going to really engage with it. I'm fairly comfortable with where we are now, though I'm considering also branching out to imgur for more top of the funnel marketing. We'll see.
  • Two years is plenty of time to get marketing going. Generally you want to really up the hype train about three months before release. The wishlist peak about one month before release should give you a rough idea of whether the game is going to be successful or not - 5-10k is good, 15-20k should be very good.
  • Three weeks before release is when you want to start contacting press - write emails to people that have reviewed the games that inspired yours and seem to generally fit the niche you're targeting. Let them know you'll send a final build a week before release.
  • Actually do that exactly a week before release. Ideally your game will be done and you won't fudge with it until after release.
  • On the day before release, log onto gamespress.com and submit your game. Actual journalists don't tend to look there it seems, since they already get way more than enough mail, but third parties and independent people might!

And that's about what we managed to discuss in the 20 minutes we had. As mentioned, I'll probably schedule another consultancy later in the year. I'll be sure to let you know how it went!

Alright, I've run my mouth for long enough now, here's some words from Tim about his experience for January:

It's been a documentation-heavy month for me: designing the vertical slice quests on paper (which will become the first act of the narrative), making some tweaks to the characters and plots to fit the game's pillars, and also tweaking the press kit and marketing copy from Rami's feedback.

The last two weeks I've also started implementing the first quest, reminding myself how to use the scripting language and editor (it's amazing how much you forget after a couple of weeks away from it). This has also involved familiarising myself with the "proper" quest structure, using the hierarchy of quest > task > trigger (for the demo quest it was more like task > trigger, trigger, trigger, etc. you get the idea). What's been most fun though is getting into the headspace for Jack and Catherine, writing their initial dialogues, and threading in some player choice. Catherine is quickly becoming my favourite character.

It's also been great to see the level design and art coming along - Nick's sketched layouts, and now the pixel art for the ruined buildings which he and Fred have been working on. Oh, and seeing the AI in action, with Catherine bounding along after The Stranger blew my mind.

Well, that's about it for this month. It's been exciting to finally see a change in the visuals, and I'm excited to start tackling the first underground area. I see a lot more pixel work ahead of us...

Anyway, in the meantime until the next monthly update, do consider checking out the mailing list if you want more in-depth, weekly updates on things. We cover a lot of stuff there that never makes it into the monthlies, too! If you want to get involved in discussions and feedback around the game, hop onto the discord. We're slowly building a community of fans there, and are trying to post more actively about the process. For a more casual thing, there's also my twitter with plenty of gifs and images. Finally, please do wishlist Kandria on Steam! It might seem like it isn't much, but it really does help out a lot!

Thanks for reading, and see you next time!

Vsevolod Dyomkin"Programming Algorithms in Lisp" Is Out!

· 72 days ago

The updated version of my book "Programming Algorithms" has been released by Apress recently. It has undergone a number of changes that I want to elaborate on in this post.

But first, I'd like to thank all the people who contributed to the book or supported my work on it in other ways. It was an honor for me to be invited to Apress, as "Practical Common Lisp", published by them a decade ago, was my one-way ticket to the wonderful world of Lisp. Writing "Programming Algorithms" was, in a way, an attempt to give something back. Also, I was very curious to see how the cooperation with the publisher would go. And I can say that they have done a very professional job and helped significantly improve the book through the review process. That 5-10% change contributed by the editors, although it may seem insignificant, is very important for bringing any book up to a standard high enough not to annoy its readers. Unfortunately, I am not a person who can produce a flawless result at once, so help with correcting those flaws is very valuable. Part of the gratitude for that also, surely, goes to the many readers who have sent their suggestions.

I was very pleased that Michał "phoe" Herda has agreed to become the technical reviewer. He has found a number of bugs and suggested lots of improvements, of which I could implement, maybe, just a third. Perhaps, the rest will go into the second edition :)

Now, let's speak about some of those additions to Programming Algorithms in Lisp.

Curious Fixes

First of all, all the executable code from the book was published in a github repo (and also republished to the official Apress repo). As suggested by Michał, I have added automated tests to ensure (for now, partially, but we plan to make the test suite all-encompassing) that everything compiles and runs correctly. Needless to say, some typos and other issues were found in the process, especially ones connected with handling different corner cases. So, if you have trouble running some code from the book, you can use the github version. Funnily enough, I got into a similar situation recently, when I tried to utilize the dynamic programming example in writing a small tool for aligning outputs of different ASR systems and found a bug in it. The bug was in the matrix initialization code:


-    (dotimes (k (1+ (length s1))) (setf (aref ld k 0) 0))
-    (dotimes (k (1+ (length s2))) (setf (aref ld 0 k) 0)))
+    (dotimes (k (1+ (length s1))) (setf (aref ld k 0) k))
+    (dotimes (k (1+ (length s2))) (setf (aref ld 0 k) k)))

Another important fix that originated from the review process touched not only the book but also the implementation of the slice function in RUTILS! It turned out that I had naively assumed that displaced arrays would automatically recursively point into the original array, and thus, inadvertently, created a possibility for O(n) slice performance instead of O(1). It explains the strange performance of the array sorting algorithms at the end of Chapter 5. After fixing slice, the measurements started to perfectly resemble the theoretical expectations! Also, the performance improved by an order of magnitude :D


CL-USER> (let ((vec (random-vec 10000)))
           (print-sort-timings "Insertion " 'insertion-sort vec)
           (print-sort-timings "Quick" 'quicksort vec)
           (print-sort-timings "Prod" 'prod-sort vec))
= Insertion sort of random vector (length=10000) =
Evaluation took:
  0.632 seconds of real time
...
= Insertion sort of sorted vector (length=10000) =
Evaluation took:
  0.000 seconds of real time
...
= Insertion sort of reverse sorted vector (length=10000) =
Evaluation took:
  1.300 seconds of real time
...
= Quicksort of random vector (length=10000) =
Evaluation took:
  0.039 seconds of real time
...
= Quicksort of sorted vector (length=10000) =
Evaluation took:
  1.328 seconds of real time
...
= Quicksort of reverse sorted vector (length=10000) =
Evaluation took:
  1.128 seconds of real time
...
= Prodsort of random vector (length=10000) =
Evaluation took:
  0.011 seconds of real time
...
= Prodsort of sorted vector (length=10000) =
Evaluation took:
  0.011 seconds of real time
...
= Prodsort of reverse sorted vector (length=10000) =
Evaluation took:
  0.021 seconds of real time
...
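
To make the displaced-array pitfall concrete, here is a minimal sketch of a "flattening" slice (illustrative only; RUTILS's actual fix may differ in its details). The trick is to resolve an existing displacement with array-displacement and point the new slice directly at the root array instead of stacking displacements:

(defun flat-slice (arr start end)
  "Return a displaced subvector of ARR from START below END,
always displacing to the root (non-displaced) array."
  (multiple-value-bind (target offset) (array-displacement arr)
    (let ((root (or target arr))
          (base (if target (+ offset start) start)))
      (make-array (- end start)
                  :element-type (array-element-type arr)
                  :displaced-to root
                  :displaced-index-offset base))))

As long as every slice is made this way, the root is never itself displaced, so access through a slice costs a single indirection no matter how many times it has been re-sliced.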

Also, there were some missing or excess closing parens in a few code blocks. This probably resulted from incorrectly copying the code from the REPL after finishing experimenting with it. :)

New Additions

I have also added more code to complete the full picture, so to say, in several parts where it was lacking, from the reviewers' point of view. Most new additions went into expanding "In Action" sections where it was possible. Still, unfortunately, some parts remain on the level of general explanation of the solution as it was not possible to include whole libraries of code into the book. You can see a couple of snippets below:

Binary Search in Action: a Fast Specialized In-Memory DB

We can outline the operation of such a datastore with the following key structures and functions.

A dictionary *dict* will be used to map words to numeric codes. (We'll discuss hash-tables that are employed for such dictionaries several chapters later. For now, it will be sufficient to say that we can get the index of a word in our dictionary with (rtl:? *dict* word)). The number of entries in the dictionary will be around 1 million.

All the ngrams will be stored alphabetically sorted in 2-gigabyte files with the following naming scheme: ngram-rank-i.bin. rank is the ngram word count (we were specifically using ngrams of ranks from 1 to 5) and i is the sequence number of the file. The contents of the files will constitute the alternating ngram indices and their frequencies. The index for each ngram will be a vector of 32-bit integers with the length equal to the rank of an ngram. Each element of this vector will represent the index of the word in *dict*. The frequency will also be a 32-bit integer.

All these files will be read into memory. As the structure of the file is regular — each ngram corresponds to a block of (1+ rank) 32-bit integers — it can be treated as a large vector.

For each file, we know the codes of the first and last ngrams. Based on this, the top-level index will be created to facilitate efficiently locating the file that contains a particular ngram.

Next, binary search will be performed directly on the contents of the selected file. The only difference with regular binary search is that the comparisons need to be performed rank times: for each 32-bit code.

A simplified version of the main function get-freq, intended to retrieve the ngram frequency for ranks 2-5, will look something like this:


(defun get-freq (ngram)
  (rtl:with ((rank (length ngram))
            (codes (ngram-codes ngram))
            (vec index found?
                 (bin-search codes
                             (ngrams-vec rank codes)
                             :less 'codes<
                             :test 'ngram=)))
     (if found?
         (aref vec rank)
         0)))

where


(defun ngram-codes (ngram)
  (map-vec (lambda (word) (rtl:? *dict* word))
           ngram))

(defun ngrams-vec (rank codes)
  (loop :for ((codes1 codes2) ngrams-vec) :across *ngrams-index*
        :when (and (<= (aref codes1 0) (aref codes 0))
                   (codes< codes codes2 :when= t))
        :do (return ngrams-vec)))
             
(defun codes< (codes1 codes2 &key when=)
  (dotimes (i (length codes1)
              ;; this will be returned when all
              ;; corresponding elements of codes are equal
              when=)
    (cond ((< (aref codes1 i)
              (aref codes2 i))
           (return t))
          ((> (aref codes1 i)
              (aref codes2 i))
           (return nil)))))

(defun ngram= (block1 block2)
  (let ((rank (1- (length block1))))
    (every '= (rtl:slice block1 0 rank)
              (rtl:slice block2 0 rank))))
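
The bin-search function used in get-freq isn't shown in this post. A simplified, hypothetical version of the block-wise search it performs, built on the codes< comparator above, might look like this:


;; Binary search over a flat vector of ngram blocks. Each block
;; occupies (1+ rank) 32-bit integers: rank word codes followed by
;; a frequency. Returns the matching block, its index, and a found?
;; flag, mirroring the multiple values destructured in get-freq.
(defun bin-search-blocks (codes vec)
  (let* ((rank (length codes))
         (block-size (1+ rank))
         (lo 0)
         (hi (floor (length vec) block-size)))
    (loop :while (< lo hi) :do
      (let* ((mid (floor (+ lo hi) 2))
             (start (* mid block-size))
             (mid-codes (subseq vec start (+ start rank))))
        (cond ((every '= codes mid-codes)  ; all rank codes match
               (return-from bin-search-blocks
                 (values (subseq vec start (+ start block-size)) mid t)))
              ((codes< codes mid-codes)    ; key sorts before this block
               (setf hi mid))
              (t (setf lo (1+ mid))))))
    (values nil lo nil)))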

We assume that the *ngrams-index* array, containing for each file a pair of the first and last ngram codes alongside the ngram data from the file itself, was already initialized. This array should be sorted by the codes of the first ngram in each pair. A significant drawback of the original version of this program was that it took quite some time to read all the files (tens of gigabytes) from disk. During this operation, which took several dozen minutes, the application was not responsive. This created a serious bottleneck in the system as a whole, complicated updates, and put normal operation at additional risk. The solution we utilized to counteract this issue was a common one for such cases: switching to lazy loading using the Unix mmap facility. With this approach, the bounding ngram codes for each file should be precalculated and stored as metadata, so that *ngrams-index* can be initialized before loading the data itself.
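
To make the idea concrete, here is a minimal SBCL-specific sketch of mapping one ngram file (an illustration only, not the system's actual code; it assumes the sb-posix bindings and native-endian 32-bit data):


;; Map an ngram file into memory; the OS pages the data in on demand,
;; so startup no longer blocks on reading tens of gigabytes.
(defun mmap-ngrams-file (path)
  (let* ((fd (sb-posix:open path sb-posix:o-rdonly))
         (size (sb-posix:stat-size (sb-posix:fstat fd)))
         ;; Keep fd open for the lifetime of the mapping.
         (sap (sb-posix:mmap nil size sb-posix:prot-read
                             sb-posix:map-shared fd 0)))
    ;; Return an accessor for the i-th 32-bit integer in the file.
    (lambda (i)
      (sb-sys:sap-ref-32 sap (* 4 i)))))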

Pagerank MapReduce Explanation


;; this function will be executed by mapper workers
(defun pr1 (node n p &key (d 0.85))
  (let ((pr (make-array n :initial-element 0))
        (m (hash-table-count (node-children node))))
    (rtl:dokv (j child (node-children node))
      (setf (aref pr j) (* d (/ p m))))
    pr))

(defun pagerank-mr (g &key (d 0.85) (repeat 100))
  (rtl:with ((n (length (nodes g)))
             (pr (make-array n :initial-element (/ 1 n))))
    (loop :repeat repeat :do
      (setf pr (map 'vector (lambda (x)
                              (+ (/ (- 1 d) n) x))
                    (reduce 'vec+ (map 'vector (lambda (node p)
                                                 (pr1 node n p :d d))
                                       (nodes g)
                                       pr)))))
    pr))
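
The vec+ function used as the reduction step above isn't shown in the post; a minimal version (a hypothetical helper, assuming equal-length numeric vectors) could be:


;; Elementwise vector addition used by the reduce step.
(defun vec+ (v1 v2)
  (map 'vector '+ v1 v2))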

Here, we have used the standard Lisp map and reduce functions, but a map-reduce framework will provide replacement functions which, behind the scenes, orchestrate parallel execution of the provided code. We will talk a bit more about map-reduce and see such a framework in the last chapter of this book.

One more thing to note is that this approach differs from the original version in that each mapper operates independently on an isolated version of the pr vector, and thus the execution of Pagerank for subsequent nodes during a single iteration will see an older input value p. However, since the algorithm is stochastic and the order of calculations is not deterministic, this is acceptable: it may impact only the speed of convergence (and hence the number of iterations needed) but not the final result.

Other Significant Changes

My decision to rely heavily on syntactic utilities from my RUTILS library was a controversial one from the start, and, surely, I understood that. But my motivation always was, and still remains, not self-promotion but a desire to present Lisp code in a way that doesn't seem cumbersome, old-fashioned, or cryptic (and, thankfully, the language provides all the possibilities to tune its surface look to your preferences). However, as it bugged so many people, including the reviewers, for the new edition we have come to a compromise: all RUTILS code is used only qualified with the rtl prefix, so that it is apparent. Besides, I have changed some of the minor, purely convenience abbreviations to their standard counterparts (like returning to funcall instead of call).

Finally, the change that I regret the most, but understand was inevitable, is the change of the title and the new cover, which is in the standard Apress style. However, they have preserved the Draco tree in the top right corner, and it's like a window through which you can glance at the original book :)


So, that is an update on the status of the book.

For those who were waiting for the Apress release to come out, this is your chance to get it. The price is quite affordable: basically the same as the one I asked for the original (individual shipping via post is a huge expense).

And for those who have already gotten the original version of the book, all the major changes and fixes are listed in this post. Please take notice if you had any issues.

I hope the book turns out to be useful to the Lisp community and serves both Lisp old-timers and newcomers.

Max-Gerd Retzlaff – StumpWM: vsplit-three

· 74 days ago

A good ten months ago I switched away from a full desktop environment, finally tired of user software that gets ever more features and tries to anticipate more and more of what I might want, while in the end my own computer never actually does what I want and only that. PulseAudio is the most dreaded example: a piece of code that gets more and more magic and complexity, and in the end it never does what you actually want, while telling it to do so has become completely impossible because of the many layers of abstraction and magic. PulseAudio has a lot of "rules", "profiles", "device intended roles", "autodetecting", "automatic setup and routing" and "other housekeeping actions". Look at the article PulseAudio under the hood by Victor Gaydov (which is also the source of the terms I just quoted): it has 174 occurrences of words starting with "auto-": automatically – 106, automatic – 27, autoload – 16, autospawn – 14, autodetect – 4, autoexit – 2, automate – 2, auto timing – 1, auto switch – 1, and once "magically", when it is even too much for the author.

So: more control and less clutter instead. After years, I again use just a good old window manager and individual programs, and I got rid of PulseAudio.

I switched to StumpWM, which is written in Common Lisp. It is easy to modify and to try stuff while it's running. I have it run Slime so that I can connect to it from Emacs and hack whatever is missing. From time to time I got StumpWM hanging while hacking, so I added a signal handler for the POSIX signal SIGHUP to force a hard StumpWM restart. (There is a new version of that signal handler without the CFFI dependency, but that pull request is not merged yet.) When I do something stupid, I switch to a console and fire a killall -HUP stumpwm to have it reset hard. Since then I haven't lost an X11 session even while changing quite a bit.
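
The gist of such a handler, as a minimal SBCL-specific sketch (the version in the pull request differs, and the exact restart entry point here is an assumption):


;; Install a SIGHUP handler that forces a hard StumpWM restart.
;; Sketch only: a real handler should avoid doing heavy work
;; directly in signal context.
(sb-sys:enable-interrupt
 sb-unix:sighup
 (lambda (signal info context)
   (declare (ignore signal info context))
   (stumpwm::restart-hard)))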

Read the whole article.

Jonathan Godbout – Proto Cache: Flags and Hooks

· 77 days ago

Today’s Updates

Last week we made our Pub/Sub application use protocol buffer objects for most of its internal state. This week we'll take advantage of that change by adding startup and shutdown hooks to load and save state, respectively. We will add flags so that someone starting our application can set the load and save files on the command line. We will then package our application into an executable with a new asdf command.

Code Changes

Proto-cache.lisp

Defpackage Updates:

We will use ace.core.hook to implement our load and exit hooks. The code below shows how to define methods that will run at load and exit time using this library. In the defpackage we give it the nickname hook. The library is available in the ace.core repository.

We use ace.flag as our command line flag parsing library. This is the command line flag library used extensively at Google for our Lisp executables. It can be found in the ace.flag repository.

Flag definitions:

We define four command line flags:

  • flag::*load-file*
  • flag::*save-file*
  • flag::*new-subscriber* 
    • This flag is used for testing purposes. It should be removed in the future.
  • flag::*help*

The definitions all look the same, we will look at flag::*load-file* as an example:

(flag:define flag::*load-file* ""
  "Specifies the file from which to load the PROTO-CACHE on start up."
  :type string)
  • We use the flag:define macro to define a flag. Please see the code for the complete documentation of this macro (README.md update coming). We only use a small subset of the ace.flag package.
  • flag::*load-file*: This is the global where the parsed command line flag will be stored.
  • The documentation string documents the flag. If flag:print-help is called, this documentation will be printed:

    --load-file (Determines the file to load PROTO-CACHE from on startup)

     Type: STRING

  • :type: The type of the flag. Here we have a string.

We use the lowercased symbol-name string of the global as the command line input.

For example:

  1. flag::*load-file* becomes --load-file
  2. flag::*load_file* becomes --load_file

The :name or :names key in the flag:define macro will let users select their own names for the command line input instead of this default.
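
For example, something like this might expose the flag under a custom name (a hypothetical snippet: the exact syntax of the :name key is an assumption here, so check the ace.flag sources for the real contract):


(flag:define flag::*load-file* ""
  "Specifies the file from which to load the PROTO-CACHE on start up."
  :name "input-file"  ; hypothetical: would expose the flag as --input-file
  :type string)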

Main definition:

We want to create a binary for our application. Since we have no way to add publishers and subscribers outside of the REPL, we define a dummy main that adds publishers and subscribers for us:

(defun main ()
  (register-publisher "pika" "chu")
  (register-subscriber "pika" flag::*new-subscriber*)
  (update-publisher-any
    "pika" "chu"
    (google:make-any :type-url "a"))
  ;; Sleep to make sure running threads exit.
  (sleep 2))

After running the application we can check for a new subscriber URL in the saved proto-cache application state file. I will show this shortly.

Load/Exit hooks:

We have several pre-made hooks defined in ace.core.hook. Two useful functions are ace.core.hook:at-restart and ace.core.hook:at-exit. As one can imagine, at-restart runs when the lisp image starts up, and at-exit runs when the lisp image is about to exit.

The first thing we do when we start our application is parse our command line:

(defmethod hook::at-restart parse-command-line ()
  "Parse the command line flags."
  (flag:parse-command-line)
  (when flag::*help*
    (flag:print-help)))

You MUST call flag:parse-command-line for the defined command line flags to have non default values.

We also print a help menu if --help was passed in.

Then we can load our proto if the load-file flag was passed in:

(defmethod hook::at-restart load-proto-cache :after parse-command-line ()
  "Load the command line specified file at startup."
  (when (string/= flag::*load-file* "")
    (load-state-from-file :filename flag::*load-file*)))

We see an :after clause in this defmethod: we want the load-proto-cache method to be called during start-up, but after we have parsed the command line, so that flag::*load-file* has been properly set.

Note: The defmethod here uses a special defmethod syntax added in ace.core.hook. Please see the hook-method documentation for complete details.

Finally we save our image state at exit:

(defmethod hook::at-exit save-proto-cache ()
  "Save the command line specified file at exit."
  (when (string/= flag::*save-file* "")
    (save-state-to-file :filename flag::*save-file*)))

The attentive reader will notice that our main function never explicitly calls any of these hook functions: they are run automatically when the image starts up and exits.

Proto-cache.asd:

We add code to build an executable using asdf:

(defsystem :proto-cache ...
  :build-operation "program-op"
  :build-pathname "proto-cache"
  :entry-point "proto-cache:main")

This uses asdf's program-op. The build pathname is relative: we save the binary as "proto-cache" in the same directory as our proto-cache code. The entry point function is proto-cache:main.

We may then call: 

sbcl --eval "(asdf:operate 'asdf:build-op :proto-cache)"

at the command line to create our binary.
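
Alternatively, recent ASDF versions provide the asdf:make shorthand, which respects the :build-operation declared in the system definition (assuming ASDF 3.1 or later):


sbcl --eval "(asdf:make :proto-cache)"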

Running our binary:

With our binary built we can call:

./proto-cache --save-file /tmp/first.pb --new-subscriber http://www.google.com

Trying cat /tmp/first.pb:

pika'
http://www.google.com
a?pika"chujg

These are serialized values, so one shouldn't try to understand the output too much. We can see that "http://www.google.com", "pika", and "chu" are all saved.

Calling

./proto-cache   --load-file /tmp/first.pb --save-file /tmp/first.pb --new-subscriber http://www.altavista.com

And then cat /tmp/first.pb:

I
pikaA
?http://www.altavista.com
http://www.google.com
a?pika"chujg
"

Finally, calling ./proto-cache --help

We get:

Flags from ace.flag:

    --lisp-global-flags
     (When provided, allows specifying global and special variables as a flag on the command line.
       The values are NIL - for none, :external - for package external, and T - for all flags.)
     Type: ACE.FLAG::GLOBAL-FLAGS

    --help (Whether to print help) Type: BOOLEAN Value: T

    --load-file (Determines the file to load PROTO-CACHE from on startup)
     Type: STRING
     Value: ""

    --new-subscriber (URL for a new subscriber, just for testing)
     Type: STRING
     Value: ""

    --lisp-normalize-flags
     (When non-nil the parsed flags will be transformed into a normalized form.
       The normalized form contains hyphens in place of underscores, trims '*' characters,
       and puts the name into lower case for flags names longer than one character.)
     Type: BOOLEAN

    --save-file (Determines the file to save PROTO-CACHE from on shutdown)
     Type: STRING
     Value: ""

This shows the documentation we provided for the command line flags, as expected.

Conclusions:

Today we added command line flags, added load and exit hooks, and made our application buildable as an executable. We can build our executable and distribute it as we see fit. We can direct it to load and save the application state to user-specified files without updating the code. There is still much to do before it’s done, but this is slowly becoming a usable application.

There are a few additions I would like to make, but I have a second child coming soon. This may (or may not) be my last technical blog post for quite some time. I hope this sequence of Proto Cache posts has been useful thus far, and I hope to have more in the future.

Thanks to Ron Gut and Carl Gay for copious edits and comments.

ECL News – ECL 21.2.1 release

· 80 days ago

Dear Community,

We are announcing a new stable ECL release which fixes a number of bugs from the previous release. Changes include, amongst others:

  • working generational and precise garbage collector modes
  • support for using precompiled headers to improve compilation speed
  • the bytecompiler correctly implements the ANSI specification for load time forms of literal objects in compiled files
  • fixes for encoding issues when reading in the output of the MSVC compiler
  • issues preventing ECL from compiling on Xcode 12 and running on ARM64 versions of Mac OS have been rectified

More detailed information can be obtained from the CHANGELOG file and git commit logs. We'd like to thank all people who contributed to this release. Some of them are listed here (without any particular order): Paul Ruetz, Karsten Poeck, Eric Timmons, Vladimir Sedach, Dima Pasechnik, Matthias Köppe, Yuri Lensky, Tobias Hansen, Pritam Baral, Marius Gerbershagen and Daniel Kochmański.

This release is available for download in the form of a source code archive (we do not ship prebuilt binaries).

Happy Hacking,
The ECL Developers

Pascal Costanza – The Slick programming language

· 82 days ago

I'm happy to announce the release of the Slick programming language, an s-expression surface syntax for Go, with some extensions inspired by Common Lisp and Scheme. See the Slick programming language repository for more details.

Max-Gerd Retzlaff – uLisp on M5Stack (ESP32)

· 83 days ago

About two years ago I bought a couple of M5Stack ESP32 computers at a Maker Faire. I have made little use of them so far, but now I have started to put uLisp on them and suddenly they are much more fun.

“Hello uLisp!” on the M5Stack.

Read the whole article.


For older items, see the Planet Lisp Archives.


Last updated: 2021-04-19 18:28