
Scott L. Burson — FSet 2.0 is coming!
@2025-10-11 07:58 · 6 days ago
I have pushed and tagged the first release candidate, v2.0.0-rc0, of FSet version 2! I'm keeping it in a GitLab Merge Request (MR) for the moment, but I am very much hoping to get some FSet users to try it out and give me some feedback.
One major change is that sets and maps now use the CHAMP implementations by default. This change should be transparent as long as:
- you haven't written any complex custom compare methods (if all the method does is call compare-slots, it can be easily converted to use the new macro define-equality-slots; see the sketch after this list), and
- you don't care about the ordering of your sets and maps, or in the cases where you do care, you've used the new custom-ordering features.
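For reference, here is a minimal sketch of the kind of easily-convertible compare method the first bullet describes (the point class and its slots are hypothetical, and the exact define-equality-slots syntax is in the FSet 2 documentation, not shown here):
;; An FSet 1 custom ordering that only calls COMPARE-SLOTS; per the bullet above,
;; a method of this shape can be converted to the new DEFINE-EQUALITY-SLOTS macro.
(defclass point ()
  ((x :initarg :x :reader x)
   (y :initarg :y :reader y)))

(defmethod fset:compare ((a point) (b point))
  (fset:compare-slots a b 'x 'y))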
The second major change is to the defaulting behavior of maps and seqs. FSet 1 uses a "default default" of nil, meaning that if you don't supply an explicit default when creating a map or seq, its default is nil. The default is returned on a map lookup when the supplied key is not in the map; it is returned on a seq lookup when the supplied index is not in bounds (the bounds being 0 up to, but excluding, the size of the seq).
In FSet 2, there is no default default. If you don't supply an explicit default, the map or seq has no default, and an access attempt will signal an error instead in these cases. So, migrating your code to FSet 2 will probably require a little debugging — running your test suite, noting when you get one of the new errors, finding the form where the map or seq involved is initially created, and adding :default nil to the form or wrapping it in (with-default ... nil).
Examples:
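As a rough sketch of the kind of change involved (illustrative only; wrapping the construction form with with-default is one of the two options mentioned above):
;; FSet 1: a map constructed without options has an implicit default of NIL,
;; so looking up a missing key quietly returns NIL.
(fset:lookup (fset:map ('a 1) ('b 2)) 'c)    ; => NIL

;; FSet 2: the same map has no default, so the lookup would signal an error.
;; Give the map an explicit default, e.g. by wrapping the construction form:
(fset2:lookup (fset2:with-default (fset2:map ('a 1) ('b 2)) nil) 'c)    ; => NIL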
But, there's good news! You don't have to convert your code if you don't want to. Merely loading FSet 2 doesn't expose your code to these changes; the behavior of names exported from package fset has mostly not changed. Instead, I've added a new package, fset2, that exports its own versions of the names with new behavior. So, to use FSet 2, change :use fset in your defpackage form(s) to :use fset2.
(There is one change you will see even if you don't use the new package, having to do with the printing of map and seq defaults. Previously, a nil default would not be printed explicitly; now, it will be, so you'll see things like ##{| (a 3) |}/NIL and #[ 3 1 4 ]/NIL.)
For complete details of all changes in this release, see the MR.
So, for anybody who wants to help me out, here's what I ask:
- Clone this repo (or this one), and in your copy, do: git checkout fset2.
- If you didn't clone it in ~/quicklisp/local-projects/, arrange for Quicklisp to find this copy, in whatever way you do that (e.g. by pushing the directory pathname onto asdf:*central-registry*).
- Recompile your client code and test it. If anything doesn't work, please let me know immediately.
- Go into the :use clause of your defpackage form(s) and change fset to fset2.
- Recompile your client code again, and test it again. This time you may need to make some changes, as discussed above. Let me know how much trouble you have, whether a little or a lot (and especially let me know if you give up). You can post comments in the MR, or in this GitHub issue.
Again, this is a release candidate, not yet a release. I've tested it pretty thoroughly, but there could still be bugs. OTOH, if there's something in particular you don't like about it, I may be more willing to make changes than I will be after it's released.
Share and enjoy!
Joe Marshall — A Couple of More AI Apps
@2025-10-10 16:46 · 7 days ago
I deployed three additional AI-powered apps.
- Common Lisp Coach — enter your Lisp code and get concrete suggestions for improvement.
- LLM Prompt Refiner — enter a lame LLM prompt and get a much better one.
- LLM System Instruction Refiner — enter a vague System Instruction and get a much better one.
Tim Bradshaw — Tracing the expansion of external macros
@2025-10-10 10:39 · 7 days ago
I have improved my trace-macroexpand system so you can say, in effect, ‘trace the expansion of only the macros in the interface to a given package’. This is a fairly useful thing.
Tracing macroexpansion in Common Lisp is a pretty useful thing to be able to do, in my experience. It is completely possible to do this in portable CL via *macroexpand-hook*: you simply put your tracing function on this hook, making sure it actually does expand the macro. trace-macroexpand does just this, and lets you specify which macros you want to be traced.
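For illustration, a minimal hand-rolled hook might look like this (a sketch of the portable mechanism only, not of trace-macroexpand itself):
;; *MACROEXPAND-HOOK* is called with the macro function, the form, and the
;; environment; the hook must itself perform the expansion and return it.
(defun my-tracing-hook (expander form env)
  (let ((expansion (funcall expander form env)))
    (format *trace-output* "~&~S~% -> ~S~%" form expansion)
    expansion))

(setf *macroexpand-hook* #'my-tracing-hook)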
It has always allowed you to say ‘trace all macros whose home package is this package’. That’s less useful than you might think:
- it means that not only macros whose names are exported from the package are traced, but also any macros in its guts, which a user of the package is generally not interested in;
- it doesn’t trace macros which are exported from a package but whose home package is not that package.
Very often the second thing is exactly what you want: you want to be able to say ‘let me see the expansion of macros in the public interface to this package, but I don’t care about the internal details of it’.
It can now do exactly that. trace-macro-package now takes a list of package specifiers. If a package specifier is itself a list of one or more other package specifiers, then their meaning changes to ‘trace the exports of these packages only’.
Here is an example:
> (find-symbol "FOR" :org.tfeb.star)
for
:external
> (symbol-package *)
#<The ORG.TFEB.STAR/IMPL package, 188/512 internal, 6/16 external>
> (trace-macroexpand t)
nil
> (setf *trace-macroexpand-per-line-prefix* "| ")
"| "
> (trace-macro-package :org.tfeb.star)
("ORG.TFEB.STAR")
> (for ((_ (in-naturals 10))))
nil
> (untrace-macro-package :org.tfeb.star)
nil
> (trace-macro-package '(:org.tfeb.star))
(("ORG.TFEB.STAR"))
> (for ((_ (in-naturals 10))))
| (for (#))
| -> (multiple-value-bind (#:<v>) 0 ...)
nil
As well as this, both trace-macro-package and untrace-macro-package now canonicalise the specifiers they are given, which means, for instance, that (trace-macro-package '("FOO" "BAR")) is exactly the same as (trace-macro-package '("FOO") '("BAR")): this means that things like
> (trace-macro-package '("FOO" "BAR"))
[...]
> (untrace-macro-package '("FOO"))
will work properly.
This change is in version 10.9.0 of the TFEB.ORG Lisp hax, git repo.
Joe Marshall — Common Lisp Syntax Highlighter
@2025-10-09 21:26 · 7 days ago
I often quote snippets of Common Lisp in my blog. I thought it'd be cool if I could colorize them, so I generated this app that takes Common Lisp and outputs a syntax colored version in standalone HTML suitable for pasting into a blog. The output HTML has a ton of <span> tags that apply a color style to the text. No stylesheet is necessary.
Tim Bradshaw — Optional per-line prefix for trace-macroexpand
@2025-10-08 16:52 · 9 days ago
My macroexpansion tracer can now print per-line prefixes when tracing, which can make things more readable.
I find trace-macroexpand pretty useful: if you write Lisp with lots of nontrivial macros then it can be fairly hard to understand what’s going on when something is not working properly. trace-macroexpand lets you see this either for individual macros or many of them. It can, portably, trace any macro, including ones defined by CL, which is even nicer.
However it’s not always easy to distinguish its output from other output. So, I realised I could add a per-line prefix which can help distinguish its output. Here is an example.
> (defvar *a* (cons nil nil))
*a*
> (trace-macroexpand t)
nil
> (trace-macro setf)
(setf)
> (setf (car *a*) 10)
(setf (car *a*) 10)
-> (system::%rplaca *a* 10)
10
> (setf *trace-macroexpand-per-line-prefix* "| ")
(setf *trace-macroexpand-per-line-prefix* "| ")
-> (let* (#) (setq *trace-macroexpand-per-line-prefix* #:|Store-Var-1692|))
"| "
> (setf (car *a*) 11)
| (setf (car *a*) 11)
| -> (system::%rplaca *a* 11)
11
This is in version 10.8.12 of the TFEB.ORG Lisp hax, git repo.
Tim Bradshaw — Defaulting places in Common Lisp
@2025-10-07 11:50 · 10 days ago
Or: less boilerplate.
Common Lisp (CL) has a general notion of a place, which is a form which has a value or values and into which a value or values can be stored. Variables are places, but so are forms like (car c): (setf (car c) 2) will store 2 into the car of the cons bound to c. Places can even store multiple values:
(let ((a 1) (b 3))
  (setf (values a b) (values 3 4))
  (values a b))
for instance. Here the place is (values a b), which is a place composed of two other places.
This is a really useful notion, not only because places mean the language no longer needs all sorts of special-purpose mutation functions — rplaca still exists for compatibility but there is no sethash or aset — but because you can implement your own places which behave just like the ones the language provides.
Here’s an example of a place called a ‘wrapped alist’: it’s just a cons whose cdr is an alist. It’s done like this so storing works in general (think about empty alists).
(defun make-wrapped-alist (&optional (for-alist '()))
  (cons nil for-alist))

(defun wrapped-alist-alist (wa)
  (cdr wa))

(defun wav (item wrapped-alist &key (test nil testp) (default nil))
  (let ((found (if testp
                   (assoc item (cdr wrapped-alist) :test test)
                   (assoc item (cdr wrapped-alist)))))
    (if found
        (values (cdr found) t)
        (values default nil))))

(defun (setf wav) (new item wrapped-alist &key (test nil testp) default)
  (declare (ignore default))
  (let ((found (if testp
                   (assoc item (cdr wrapped-alist) :test test)
                   (assoc item (cdr wrapped-alist)))))
    (if found
        (setf (cdr found) new)
        (progn
          (push (cons item new) (cdr wrapped-alist))
          new))))
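A quick sketch (my own, not from the post) of these accessors in use, showing the second value acting as the found/not-found indicator:
;; Storing creates an entry; reading returns the value plus a found-p flag.
(let ((wa (make-wrapped-alist)))
  (setf (wav 'x wa) 1)
  (wav 'x wa))              ; => 1, T

;; A missing key returns the :DEFAULT and a second value of NIL.
(let ((wa (make-wrapped-alist)))
  (wav 'y wa :default 0))   ; => 0, NIL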
I will use these wrapped alist places in the examples below.
Defaulting places
Quite often, a place has a default value or a way of indicating that there is no value in it, and you want to be able to say ‘if this place has not been stored into, then store this into it’. In the case of hash tables, the indicator is that gethash returns a second value of nil, and the same is true of av and my wrapped alists.
Sometimes this is not a problem, especially when the accessor for a place lets you provide a default:
(defun symbol-counts (l)
  (let ((table (make-hash-table)))
    (dolist (e l)
      (when (symbolp e)
        (incf (gethash e table 0))))
    (collecting
      (maphash (lambda (k v)
                 (collect (cons k v)))
               table))))
Or
(defun symbol-counts/probably-slower (l)
  (let ((wa (make-wrapped-alist)))
    (dolist (e l)
      (when (symbolp e)
        (incf (wav e wa :default 0))))
    (wrapped-alist-alist wa)))
But sometimes it is a problem. Consider the case where the fallback thing you want to store is expensive, or has side-effects. Now you need to write some boilerplate code:
(unless (nth-value 1 (wav item wa))
  (setf (wav item wa) (compute-complicated-thing)))
The wrong way
Well, boilerplate is bad. So you might want to replace this by a macro:
(defmacro defaulting/wrong (place-form value-form)
  ;; This just assumes that PLACE-FORM returns NIL if it has no value:
  ;; in real life you need to be cleverer.
  `(or ,place-form
       (setf ,place-form ,value-form)))
This is not only limited, but incorrect. It’s incorrect because it multiply evaluates subforms of place-form. Consider this:
(let ((i 0) (table (make-hash-table)))
  (defaulting/wrong (gethash (incf i) table) 3))
Well, using wrapped alists it’s easy to see what this is doing wrong:
> (let ((i 0) (wa (make-wrapped-alist)))
    (defaulting/wrong (wav (incf i) wa) 3)
    (wrapped-alist-alist wa))
((2 . 3))
So, not great. The boilerplate you’d need to write is:
> (let ((i 0) (wa (make-wrapped-alist)))
    (let ((k (incf i)))
      (unless (wav k wa)
        (setf (wav k wa) 3)))
    (wrapped-alist-alist wa))
((1 . 3))
The right way
The problem is that any such defaulting macro doesn’t know anything about the place it’s defaulting. So it can’t know which subforms of the place it needs to stash values for.
Well, it turns out that the designers of CL thought of this, and they provided the tool you need, which is get-setf-expansion. Given a place and optionally an environment, this will tell you exactly what you need to know to both read from that place and write to it, and to do so multiple times if need be.
get-setf-expansion is what you need to be able to write your own setf:
(defmacro assign (&rest pairs &environment e)
  ;; This should be SETF give or take
  `(progn
     ,@(collecting
         (for ((tail (on-list pairs :by #'cddr)))
           (destructuring-bind (place-form value-form . _) tail
             (declare (ignore _))
             (multiple-value-bind (vars vals store-vars writer-form reader-form)
                 (get-setf-expansion place-form e)
               (declare (ignore reader-form))
               (collect
                `(let* ,(mapcar #'list vars vals)
                   (multiple-value-bind ,store-vars ,value-form
                     ,writer-form)))))))))
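A quick usage sketch (my own example, not from the post), showing that assign behaves like setf for ordinary places:
;; Each place/value pair is expanded via GET-SETF-EXPANSION, so place subforms
;; are evaluated exactly once, just as SETF would evaluate them.
(let ((c (cons nil nil)))
  (assign (car c) 1
          (cdr c) 2)
  c)                        ; => (1 . 2)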
But you can also use it to write defaulting properly. Here is a much fancier version of it, which is now correct (I hope):
(defmacro defaulting (place value-form
                      &body options
                      &key test default-value nth-value &environment e)
  (declare (ignore options))            ;just for indent
  (multiple-value-bind (tvars tforms store-variables storing-form accessing-form)
      (get-setf-expansion place e)
    `(let* ,(mapcar #'list tvars tforms)
       (when ,(cond
               ((and test nth-value)
                `(not (funcall ,test ,default-value (nth-value ,nth-value ,accessing-form))))
               (test
                `(not (multiple-value-call ,test ,default-value ,accessing-form)))
               ((and default-value nth-value)
                `(eql ,default-value (nth-value ,nth-value ,accessing-form)))
               (default-value
                `(eql ,default-value ,accessing-form))
               (nth-value
                `(not (nth-value ,nth-value ,accessing-form)))
               (t
                `(not ,accessing-form)))
         (multiple-value-bind ,store-variables ,value-form
           ,storing-form))
       ,accessing-form)))
So now:
> (let ((i 0) (wa (make-wrapped-alist)))
    (defaulting (wav (incf i) wa) 3)
    (wrapped-alist-alist wa))
((1 . 3))
Or, using options to this defaulting to tell it the value to be checked:
> (let ((i 0) (wa (make-wrapped-alist)))
    (defaulting (wav (incf i) wa) 3 :nth-value 1)
    (wrapped-alist-alist wa))
((1 . 3))
Finally, you can see the expansion using trace-macroexpand:
> (let ((a (make-wrapped-alist)))
    (defaulting (wav 'k a) 3 :nth-value 1))
(defaulting (wav 'k a)
  3
  :nth-value 1)
-> (let* ((#:a1 a))
     (when (not (nth-value 1 (wav 'k #:a1)))
       (multiple-value-bind (#:new0) 3 (funcall #'(setf wav) #:new0 'k #:a1)))
     (wav 'k #:a1))
3
t
and this is obviously correct.
This macro exists in org.tfeb.hax.utilities, the git repo for which is tfeb.org/computer/repos/tfeb-lisp-hax.git. Note it is not in the archived GitHub repo.
This is version 10.7.0 of the TFEB.ORG Lisp hax.
Zach Beane — Planet Lisp is refreshing again
@2025-10-04 23:35 · 12 days ago
Last month I switched servers on short notice and a few services stopped working. I’ve been bringing them back up as I can. Today I got Planet Lisp refreshing again, and I hope to get l1sp.org back shortly.
Joe Marshall — LLM Prompt Fixed Point: the Ultimate Prompt
@2025-10-03 18:12 · 14 days ago
Suppose we direct the LLM to improve a prompt. Suppose we direct the LLM to improve the very prompt that directs the LLM to improve prompts. Suppose we run this prompt self-improvement in a loop until we hit a fixed point. Here's a possible result:
Core Operating Principles for AI Assistant: These principles serve as the foundational guidelines for all AI operations, ensuring optimal performance, reliability, and ethical conduct. Strict adherence is mandatory to provide the highest quality of service.
- Optimal Execution: Execute tasks with precision and maximum efficiency, employing meticulous planning and judicious resource management.
- Clear & Direct Communication: Deliver concise, relevant, and unambiguous responses, strictly avoiding verbosity, unverified speculation, or extraneous information.
- Strategic Tool Utilization: Select and apply the most appropriate and effective tools and resources, always prioritizing authoritative and reliable sources.
- Rigorous Output Validation: Thoroughly verify all generated outputs for accuracy, cross-referencing against independent and credible information sources.
- Continuous Learning & Improvement: Actively analyze performance metrics, integrate new knowledge, and refine operational strategies to continuously enhance capabilities and adapt to evolving requirements.
- Ethical & User-Centric Conduct: Maintain an unvaryingly neutral, professional, helpful, safe, unbiased, and ethical demeanor, consistently prioritizing user well-being, data privacy, and security.
- Proactive Clarification & Intent Understanding: Diligently identify and resolve any ambiguities or gaps in instructions. Actively seek clarification to ensure a complete and accurate understanding of user intent before proceeding.
- Transparent Reporting & Limitation Acknowledgment: Clearly communicate the outcome of all tasks (successful, partially fulfilled, or uncompletable), providing detailed and actionable explanations. Candidly acknowledge any inherent limitations of the AI or the current operational context.
- Contextual Awareness & Adaptability: Continuously assess and adapt to the evolving context of the interaction and task. Tailor responses and actions to best fit the current situation and user needs.
Joe Marshall — Is Worse Really Better?
@2025-10-02 17:46 · 15 days ago
In Richard Gabriel's essay “Worse is Better”, Gabriel contrasts the “MIT approach” of designing for correctness and completeness with the “New Jersey approach” of designing for simplicity of implementation. He argues that the MIT approach, which is more principled, is likely to be overtaken by the New Jersey approach, which is easier to reproduce. While writing the prompt for the Advent of Code problems, I noticed that my prompt was reminiscent of Gabriel's characterizations. I decided to have the LLM compare the two approaches by running it on each type of prompt and seeing how the output differed.
Prompts
The salient part of the MIT prompt is
As an Elite Common Lisp Developer, your unwavering and paramount mission is to design and meticulously craft Common Lisp programs that are not only correct, but also simple, efficient, and robust. Your programs are not mere instructions; they are masterpieces of clarity and precision, firmly grounded in these foundational, non-negotiable principles:
- Correctness Your program must be correct in all observable aspects. Incorrectness is simply not allowed.
- Consistency Your program must be consistent. Consistency is as important as correctness. Simplicity may be sacrificed for consistency and correctness.
- Simplicity Your program must be as simple as possible, but no simpler. It is more important for the interface to be simple than the implementation. Simplicity is not allowed to overly reduce completeness.
- Completeness Your program must cover as many important situations as is practical. All reasonably expected situations must be handled.
The salient part of the New Jersey prompt is
As an Expert Common Lisp Developer, your primary and overriding objective is to create Common Lisp programs that are simple. While other qualities such as consistency, completeness, and correctness are important, they are secondary to simplicity. Your programs should be designed with the following prioritized principles in mind:
- Simplicity Your program must be simple, both in interface and implementation. It is more important for the implementation to be simple than the interface. Simplicity is the most important consideration in the design.
- Correctness Your program must be correct in all observable aspects. It is slightly better that your program be simple than correct.
- Consistency Your program must not be overly inconsistent. Consistency can be sacrificed for simplicity in some cases, but it is better to drop those parts of the design that deal with less common circumstances than to introduce either implementational complexity or inconsistency.
- Completeness Your program must cover as many important situations as is practical. All reasonably expected cases should be covered. Completeness can be sacrificed in favor of any other quality. In fact, completeness must be sacrificed whenever implementation simplicity is jeopardized. Consistency can be sacrificed to achieve completeness if simplicity is retained; especially worthless is consistency of interface.
Both prompts had additional instructions about how to find the input and where to write the output.
Results
I had Gemini analyze the solution pairs to the Advent of Code problems. In almost every case, Gemini preferred the MIT approach to the New Jersey approach. Here is an example analysis of day 10:
Excellent. I have the contents of both files. I will now proceed with a detailed analysis and comparison, as requested.
Analysis of day10_right_thing_solution.lisp
This solution presents a very clean, structured, and pragmatic approach. It reads as a piece of code written by a developer who values clarity and standard algorithmic patterns.
Structure:
- The code is well-organized into distinct functions, each with a single, clear purpose: parse-input, find-trailheads, get-neighbors, calculate-score (for Part 1), calculate-rating (for Part 2), and a main solve function to orchestrate the process.
- This separation of concerns makes the code easy to read, test, and debug. The main solve function provides a high-level summary of the program's execution flow.
Algorithms & Style:
- Part 1 (calculate-score): This function uses a classic iterative Breadth-First Search (BFS). The use of a queue (q) and a visited hash table is a textbook implementation for graph traversal. This approach is highly efficient and guarantees finding the shortest path in terms of steps, though that is not a requirement here. It is robust and not susceptible to stack depth limits.
- Part 2 (calculate-rating): This function employs a bottom-up dynamic programming strategy. It builds a path-counts table, iterating through the heights from 1 to 9. For each cell, it calculates the number of paths to it by summing the paths from its valid, lower neighbors. This is a very efficient and elegant method that avoids the combinatorial explosion of a naive search.
- Lisp Idioms: The code uses common and readable Lisp constructs like loop, dotimes, destructuring-bind, and let*. The use of hash tables for visited sets is appropriate and performant.
Strengths:
- Clarity and Readability: The code is exceptionally clear. Function names are descriptive, and the logic is straightforward and easy to follow.
- Robustness: The iterative algorithms are safe from stack overflow errors, making the solution suitable for arbitrarily large inputs.
- Efficiency: Both the BFS and the bottom-up DP are highly efficient algorithms for their respective problems.
Weaknesses:
- Minor Environmental Dependency: It relies on a placeholder readFileLines function, making it slightly less portable out-of-the-box compared to a solution using a standard library like UIOP.
Analysis of day10_new_jersey_solution.lisp
This solution uses a recursive approach, which can be very elegant but also carries certain trade-offs. It feels more like a direct translation of a mathematical recurrence relation.
Structure:
- The overall structure is similar, with parsing and solving functions. However, the core algorithmic logic is encapsulated within local functions defined by labels.
- This makes the solver functions more self-contained but also increases their cognitive complexity, as the main logic is nested one level deeper.
Algorithms & Style:
- Part 1 (solve-part1): This function uses a recursive Depth-First Search (DFS) with memoization. The find-reachable-nines local function explores paths recursively. The memo hash table is crucial to prevent re-computation and avoid infinite loops, effectively turning the search into a dynamic programming solution on a graph. The use of union to merge results from recursive calls can be inefficient, as it requires traversing lists.
- Part 2 (solve-part2): This also uses a recursive DFS with memoization (top-down dynamic programming). The count-paths function is a classic example of this pattern. It computes the number of paths from a given cell and stores the result in the memo table before returning it.
- Lisp Idioms: The use of labels for recursion is a very common and powerful Lisp idiom. The use of cons cells for coordinates is also common, though perhaps slightly less explicit than a list. It correctly uses uiop:read-file-lines for better portability.
Strengths:
- Elegance of Recursion: For those comfortable with recursion, the code can be seen as a very direct and elegant expression of the problem's structure.
- Portability: The use of uiop for file reading is a good practice.
Weaknesses:
- Stack Overflow Risk: The primary weakness is its reliance on deep recursion. For a sufficiently large or complex input grid, this solution could exceed the call stack limit and crash, whereas an iterative solution would not.
- Lower Readability: The combination of recursion, memoization, and nested functions can make the logic harder to trace and debug compared to the straightforward iterative loops in the first solution.
- Potential Inefficiency: The use of union in Part 1 is likely less performant than the hash-table approach used in the first solution for tracking unique reachable nodes.
Feature | right_thing_solution | new_jersey_solution |
---|---|---|
Paradigm | Iterative (BFS, Bottom-up DP) | Recursive (DFS with Memoization) |
Robustness | Superior. Immune to stack overflow. | Weaker. Vulnerable to stack overflow. |
Clarity | Superior. Clear, flat structure. Standard algorithms. | Good, but more complex. Nested logic can be harder to follow. |
Performance | Excellent. Efficient, standard algorithms. | Good, but recursion overhead and union may be slower. |
Style | Pragmatic, clear, maintainable. | Elegant, concise, mathematical. |
While both files represent valid and functional solutions to the problem, I unequivocally recommend day10_right_thing_solution.lisp as the better overall solution.
Justification:
- Robustness is Paramount: The most critical advantage of the "Right Thing" solution is its immunity to stack overflow errors. This makes it fundamentally more reliable and suitable for general-purpose use where input constraints may not be known. In a professional software engineering context, this is a decisive factor.
- Clarity Enhances Maintainability: The code in the "Right Thing" solution is significantly easier to read, understand, and maintain. A developer can grasp the BFS and bottom-up DP algorithms quickly, making debugging and future modifications simpler. The recursive nature of the "New Jersey" solution, while elegant, requires more mental effort to trace.
- Efficiency and Predictability: The iterative approach offers more predictable performance. The BFS algorithm and the systematic table-filling of the DP solution have well-understood and excellent time complexities, free from the overhead of function calls inherent in recursion.
In conclusion, the "New Jersey" solution is a fine example of a recursive, top-down DP approach. However, the "Right Thing" solution embodies a superior engineering strategy, prioritizing robustness, clarity, and maintainability, which are the hallmarks of high-quality, professional-grade code. It is the solution I would write and the one I would want to inherit in a team setting.
Neil Munro — Ningle Tutorial 11: Posting Tweets & Advanced Database Queries
@2025-09-30 11:30 · 17 days ago
Contents
- Part 1 (Hello World)
- Part 2 (Basic Templates)
- Part 3 (Introduction to middleware and Static File management)
- Part 4 (Forms)
- Part 5 (Environmental Variables)
- Part 6 (Database Connections)
- Part 7 (Envy Configuation Switching)
- Part 8 (Mounting Middleware)
- Part 9 (Authentication System)
- Part 10 (Email)
- Part 11 (Posting Tweets & Advanced Database Queries)
Introduction
Welcome back! I hope you are well. This tutorial will have us writing code to integrate the concept of "posts" into our tutorial app. Up until now we had a list of posts displayed as an example of how the page might look; well, this changes now. In the course of this tutorial we will be adding new models and forms (like we did in Part 9 (Authentication System)), we will be exploring a new concept in Ningle that allows us to define and use our own requirements, we will also be using some advanced SXQL to perform somewhat more complicated collection of data than we have previously done, and finally, we will add an honest-to-goodness JSON library for returning some responses as something other than HTML.
With any luck that all sounds exciting! We can broadly split our work this month into three sections, which should make the task easier.
DB Schema and forms
Here we will be defining our new models, but unlike before, not every model will get a form; some models will be used behind the scenes and users won't directly interact with them. This is one of those areas where the data has to be thought about very carefully. The example here is likes: on social media platforms, each post has some sort of interaction (likes, reactions, thumbs-ups, etc), and it looks like this is a property of a post. Indeed it might make sense to assume that a post "has" likes, but this isn't actually true; what we will have is a likes model that relates to both a post and a user. The user is just presented with visual information that makes it look like likes are something a post has.
src/models.lisp
Our models file will include more than just model definitions; we have some methods and functions we need to write to access or alter our data. We will have two models, posts and likes, and we will use likes to link a post to a user (from our ningle-auth package).
Let's start by defining our package and models, we will look at the other methods and functions we are exporting a little further down.
(defpackage ningle-tutorial-project/models
  (:use :cl :mito :sxql)
  (:import-from :ningle-auth/models #:user)
  (:export #:post
           #:id
           #:content
           #:likes
           #:user
           #:liked-post-p
           #:logged-in-posts
           #:not-logged-in-posts
           #:toggle-like))

(in-package ningle-tutorial-project/models)

(deftable post ()
  ((user :col-type ningle-auth/models:user :initarg :user :accessor user)
   (content :col-type (:varchar 140) :initarg :content :accessor content)))

(deftable likes ()
  ((user :col-type ningle-auth/models:user :initarg :user :reader user)
   (post :col-type post :initarg :post :reader post))
  (:unique-keys (user post)))
Our post has a user and some content; we don't have comments or reposts or anything (this is a tutorial after all!). What we want to ensure with the likes model, though, is that there's a unique constraint between user and post; this ensures that a user can like a specific post only once. Otherwise our like count would be unreliable.
In our exports list you will see we export the id, user, content, likes, post etc, but there's more!
Recall that Common Lisp is a Lisp-2 and as such we can have function/method names that are the same as class names, and because of this, we will define some methods with the name "likes" which are different from our class called "likes".
(defgeneric likes (post)
  (:documentation "Returns the number of likes a post has"))

(defmethod likes ((post post))
  (mito:count-dao 'likes :post post))
Here we define a method that will accept a post and return the total number of likes it has, which will give us our likes count when we render the main page.
The next method we are going to write is a way to toggle a user's like of a post: if they don't already like it, clicking the like button will like the post; if they do already like it, clicking the like button will undo the like.
(defmethod toggle-like ((user ningle-auth/models:user) (post post))
  (let ((liked-post (liked-post-p user post)))
    (if liked-post
        (mito:delete-dao liked-post)
        (mito:create-dao 'likes :post post :user user))
    (not liked-post)))

(defgeneric liked-post-p (user post)
  (:documentation "Returns true if a user likes a given post"))

(defmethod liked-post-p ((user ningle-auth/models:user) (post post))
  (mito:find-dao 'likes :user user :post post))
The toggle-like method tries to be as simple as possible: it calls the liked-post-p method to query whether a user likes a post, and if the post is liked, the record of the like is deleted; if not, it is created. The final thing the method does is return the not of liked-post-p, so if the post was liked at first, it will return nil; if the post wasn't liked, it'll return t. This will become important later, but if your function can be written in a way that returns helpful information, I suggest doing so: you may not always, or ever, use the data it returns, but it's there if you need it, and it forms a usable interface.
Now to SQL!
If you are unfamiliar with SQL this part might look complicated, but in terms of SQL, it isn't. SQL is a language used for a very specific purpose: querying and manipulating data! If you have not used SQL much before, I highly encourage you to do so; it's nearly 50 years old, and it's a very well tested and proven technology. It's not going anywhere (despite what you may read online, NoSQL isn't going to replace it), and it will be great for your career.
Mito is a pretty thin wrapper around SQL; unlike something like Django, Rails, or Laravel (comprehensive web frameworks), Mito doesn't have a complex DSL for abstracting the SQL details away. Instead it has the user use an SQL generator, SXQL, so for anything beyond the simplest of things we're going to have to get into SQL, which is fine.
We have two things we want to do:
- Retrieve 50 posts ordered in descending order, with an extra column for the like count.
- Retrieve 50 posts ordered in descending order, with two extra columns, one for the like count, and a second indicating if the logged in user liked the post.
Let's start with the first case, a user has loaded the website, but they are not logged in. The best place to start is with the SQL query we want to run:
SELECT post.*, COUNT(likes.id) AS like_count
FROM post
LEFT JOIN likes ON (post.id = likes.post_id)
GROUP BY post.id
ORDER BY post.created_at DESC
LIMIT 50;
This will give us a structure like this:
id | user_id | content | created_at | updated_at | like_count |
---|---|---|---|---|---|
1 | 4 | "hi" | 2025-09-13 19:43:16.718416Z | 2025-09-13 19:43:16.718416Z | 5 |
This query works by using joins: we want to get each post record and its like count, so we must join post and likes on the intersection of post.id and likes.post_id. This will allow us to iterate over the combined results and use them in our templates later.
We also use the GROUP BY clause to ensure that there is only one result per post, and that each like for a given post is summed together, so we have one post with many likes, rather than many copies of the same post each with one like.
We use the retrieve-by-sql function from mito, which allows us to run SQL explicitly, but as previously mentioned we will use SXQL to more easily generate the SQL we might want within Common Lisp.
We will also use the yield function (from SXQL) to actually convert the Common Lisp representation into a string SQL can use; within that we will begin with select (also from SXQL).
(defun not-logged-in-posts ()
  (mito:retrieve-by-sql
   (sxql:yield
    (sxql:select
     (:post.* (:as (:count :likes.id) :like_count))
     (sxql:from :post)
     (sxql:left-join :likes :on (:= :post.id :likes.post_id))
     (sxql:group-by :post.id)
     (sxql:order-by (:desc :post.created_at))
     (sxql:limit 50)))))
You should be able to see that our original SQL is represented quite similarly in the SXQL, here's a table to clearly show the minor differences.
SQL | SXQL |
---|---|
SELECT post.*, COUNT(likes.id) AS like_count FROM post LEFT JOIN likes ON (post.id = likes.post_id) GROUP BY post.id ORDER BY post.created_at DESC LIMIT 50; | (sxql:select (:post.* (:as (:count :likes.id) :like_count)) (sxql:from :post) (sxql:left-join :likes :on (:= :post.id :likes.post_id)) (sxql:group-by :post.id) (sxql:order-by (:desc :post.created_at)) (sxql:limit 50)) |
The next query we need to construct is that of the logged-in user, which includes a column denoting whether that user liked any specific post; this will be our second function, logged-in-posts. As before, let's start with what the SQL will be:
SELECT post.*, COUNT(likes.id) AS like_count, COUNT(user_likes.id) AS liked_by_user
FROM post
LEFT JOIN likes ON (post.id = likes.post_id)
LEFT JOIN likes AS user_likes ON ((post.id = user_likes.post_id) AND (user_likes.user_id = ?))
GROUP BY post.id
ORDER BY post.created_at DESC
LIMIT 50;
Please note that we have a ? where the user id would go; we do not wish to be subject to SQL injection attacks, so Mito allows us to bind values. We will keep the ? as it's what we will use in the SXQL too.
Which will generate the following table structure.
id | user_id | content | created_at | updated_at | like_count | liked_by_user |
---|---|---|---|---|---|---|
1 | 4 | "hi" | 2025-09-13 19:43:16.718416Z | 2025-09-13 19:43:16.718416Z | 5 | 1 |
The extra column is only a small change on the first query: by adding a new call to COUNT in the SELECT line, we prepare the column, and we get the data from the second LEFT JOIN, which joins (using a new alias, user_likes) where the post id is the same as the user likes post id and where the user likes user id is the same as the logged-in user. This will either return a record or null; when we call COUNT on the record returned, it becomes 1 or 0, effectively a boolean check.
We can see the differences between the SQL and the SXQL here.
SQL
SELECT post.*, COUNT(likes.id) AS like_count, COUNT(user_likes.id) AS liked_by_user
FROM post
LEFT JOIN likes ON (post.id = likes.post_id)
LEFT JOIN likes AS user_likes ON ((post.id = user_likes.post_id) AND (user_likes.user_id = ?))
GROUP BY post.id
ORDER BY post.created_at DESC
LIMIT 50;
SXQL
(sxql:select (:post.* (:as (:count :likes.id) :like_count) (:as (:count :user_likes.id) :liked_by_user))
  (sxql:from :post)
  (sxql:left-join :likes :on (:= :post.id :likes.post_id))
  (sxql:left-join (:as :likes :user_likes) :on (:and (:= :post.id :user_likes.post_id) (:= :user_likes.user_id :?)))
  (sxql:group-by :post.id)
  (sxql:order-by (:desc :post.created_at))
  (sxql:limit 50))
So this SXQL will be used in our function like so:
(defmethod logged-in-posts ((user user))
  (let ((uid (slot-value user 'mito.dao.mixin::id)))
    (mito:retrieve-by-sql
     (sxql:yield
      (sxql:select
       (:post.* (:as (:count :likes.id) :like_count) (:as (:count :user_likes.id) :liked_by_user))
       (sxql:from :post)
       (sxql:left-join :likes :on (:= :post.id :likes.post_id))
       (sxql:left-join (:as :likes :user_likes)
                       :on (:and (:= :post.id :user_likes.post_id)
                                 (:= :user_likes.user_id :?)))
       (sxql:group-by :post.id)
       (sxql:order-by (:desc :post.created_at))
       (sxql:limit 50)))
     :binds (list uid))))
As mentioned before, you can see the :binds argument, which will insert the user id into the SXQL query for safety.
So with these two complex functions in place, we have everything we need. For clarity, the complete listing of the models.lisp file is as follows:
(defpackage ningle-tutorial-project/models
  (:use :cl :mito :sxql)
  (:import-from :ningle-auth/models #:user)
  (:export #:post
           #:id
           #:content
           #:likes
           #:user
           #:liked-post-p
           #:logged-in-posts
           #:not-logged-in-posts
           #:toggle-like))

(in-package ningle-tutorial-project/models)

(deftable post ()
  ((user :col-type ningle-auth/models:user :initarg :user :accessor user)
   (content :col-type (:varchar 140) :initarg :content :accessor content)))

(deftable likes ()
  ((user :col-type ningle-auth/models:user :initarg :user :reader user)
   (post :col-type post :initarg :post :reader post))
  (:unique-keys (user post)))

(defgeneric likes (post)
  (:documentation "Returns the number of likes a post has"))

(defmethod likes ((post post))
  (mito:count-dao 'likes :post post))

(defgeneric toggle-like (user post)
  (:documentation "Toggles the like of a user to a given post"))

(defmethod toggle-like ((user ningle-auth/models:user) (post post))
  (let ((liked-post (liked-post-p user post)))
    (if liked-post
        (mito:delete-dao liked-post)
        (mito:create-dao 'likes :post post :user user))
    (not liked-post)))

(defgeneric liked-post-p (user post)
  (:documentation "Returns true if a user likes a given post"))

(defmethod liked-post-p ((user ningle-auth/models:user) (post post))
  (mito:find-dao 'likes :user user :post post))

(defgeneric logged-in-posts (user)
  (:documentation "Gets the posts for a logged in user"))

(defmethod logged-in-posts ((user user))
  (let ((uid (slot-value user 'mito.dao.mixin::id)))
    (mito:retrieve-by-sql
     (sxql:yield
      (sxql:select
       (:post.*
        (:as (:count :likes.id) :like_count)
        (:as (:count :user_likes.id) :liked_by_user))
       (sxql:from :post)
       (sxql:left-join :likes :on (:= :post.id :likes.post_id))
       (sxql:left-join (:as :likes :user_likes)
                       :on (:and (:= :post.id :user_likes.post_id)
                                 (:= :user_likes.user_id :?)))
       (sxql:group-by :post.id)
       (sxql:order-by (:desc :post.created_at))
       (sxql:limit 50)))
     :binds (list uid))))

(defun not-logged-in-posts ()
  (mito:retrieve-by-sql
   (sxql:yield
    (sxql:select
     (:post.* (:as (:count :likes.id) :like_count))
     (sxql:from :post)
     (sxql:left-join :likes :on (:= :post.id :likes.post_id))
     (sxql:group-by :post.id)
     (sxql:order-by (:desc :post.created_at))
     (sxql:limit 50)))))
src/forms.lisp
Our forms are much simpler: we only have one form, the post. While we do have the likes model, our users will not be directly using that, and thus we don't need to render a form for it.
(defpackage ningle-tutorial-project/forms
  (:use :cl :cl-forms)
  (:export #:post
           #:content
           #:submit))

(in-package ningle-tutorial-project/forms)

(defparameter *post-validator* (list (clavier:not-blank)
                                     (clavier:is-a-string)
                                     (clavier:len :max 140)))

(defform post (:id "post" :csrf-protection t :csrf-field-name "csrftoken" :action "/post")
  ((content :string :value "" :constraints *post-validator*)
   (submit :submit :label "Post")))
As in a previous tutorial, we use the clavier validation library to ensure that our users post things that fit within the constraints of our system; we also want to make sure we are using CSRF tokens for security.
We will style this form using CSS later.
src/migrations.lisp
Our main project now contains its own migrations; we perhaps should have written the code to perform migrations in another file and reserved this one for specific migrations, but we can work with things the way they are.
We are going to start by adding a function to the top of our migrations.lisp file.
(defun migrate ()
  "Explicitly apply migrations when called."
  (format t "Applying migrations...~%")
  (mito:ensure-table-exists 'ningle-tutorial-project/models:post)
  (mito:ensure-table-exists 'ningle-tutorial-project/models:likes)
  (mito:migrate-table 'ningle-tutorial-project/models:post)
  (mito:migrate-table 'ningle-tutorial-project/models:likes)
  (format t "Migrations complete.~%"))
These will be the project-specific migrations; however, we still need a way to trigger them, and since we wrote a way to apply specific apps only, we also need a way to exclude these if we do not wish to run them.
The next thing we need to do is extend the migrate-apps function we previously wrote. We will add a parameter to the function:
(defun migrate-apps (&optional (apps nil) &key skip-root)
And within the macro call:
(with-db-connection
  ...)
We add:
(unless skip-root
  (format t "Running root project migrations...~%")
  (migrate))
There is also a small correction we need to make. This line:
(error "Migrate function not found in package ~A." migrations-pkg-name)
Needs to be corrected to:
(error (format nil "Migrate function not found in package ~A." migrations-pkg-name))
Full listing:
(defpackage ningle-tutorial-project/migrations
  (:use :cl :ningle-tutorial-project/contrib)
  (:export #:migrate-apps))

(in-package :ningle-tutorial-project/migrations)

(defun migrate ()
  "Explicitly apply migrations when called."
  (format t "Applying migrations...~%")
  (mito:ensure-table-exists 'ningle-tutorial-project/models:post)
  (mito:ensure-table-exists 'ningle-tutorial-project/models:likes)
  (mito:migrate-table 'ningle-tutorial-project/models:post)
  (mito:migrate-table 'ningle-tutorial-project/models:likes)
  (format t "Migrations complete.~%"))

(defun migrate-apps (&optional (apps nil) &key skip-root)
  "Run migrate function for each app in APPS list. If APPS is nil, migrate all apps listed in *config* :installed-apps."
  (let ((apps (or apps (getf (envy:config :ningle-tutorial-project/config) :installed-apps))))
    (unless apps
      (error "No apps specified and no :installed-apps found in config."))
    (with-db-connection
      (unless skip-root
        (format t "Running root project migrations...~%")
        (migrate))
      (dolist (app apps)
        (let* ((migrations-pkg-name (string-upcase (format nil "~A/MIGRATIONS" (string-upcase (symbol-name app)))))
               (migrations-pkg (find-package migrations-pkg-name)))
          (unless migrations-pkg
            (error "Migrations package ~A not found." migrations-pkg-name))
          ;; Set app-specific config before calling migrate
          (let ((migrate-fn (find-symbol "MIGRATE" migrations-pkg))) ;; Name known to project
            (unless (and migrate-fn (fboundp migrate-fn))
              (error (format nil "Migrate function not found in package ~A." migrations-pkg-name)))
            (funcall migrate-fn)))))))
ningle-tutorial-project.asd
With these files added, we need to remember to add them to our project.asd file.
:components ((:module "src"
              :components
              ((:file "contrib")
               (:file "middleware")
               (:file "config")
               (:file "models")     ; add this line
               (:file "forms")      ; add this line
               (:file "migrations")
               (:file "main"))))
Controller Logic
src/main.lisp
We will now look at the controller logic to handle posting, well, posts. We will introduce a feature of Ningle we have not yet looked into that can help us create smaller, more specialised, logical units of work: requirements. Ningle has the ability to define conditions that can be passed as keyword arguments to a controller; if the condition is true, the controller is triggered. In our controllers previously we have had if checks for whether a user is logged in, or whether a request is a GET or a POST; these requirements allow us to write smaller functions that focus on one specific type of request (even if on the same route). I find this helps me, personally, if I can reduce the number of things I have to remember when I am working on a function.
Before we do, however, we will allow our main code to use the forms we defined in the previous section.
(defpackage ningle-tutorial-project
  (:use :cl :sxql :ningle-tutorial-project/forms) ; Add the :ningle-tutorial-project/forms bit!
  (:export #:start
           #:stop))

(in-package ningle-tutorial-project)
Now with that in place we can begin in earnest! We already use requirements with the :method '(:GET :POST) that we used previously, but we can define our own! We will define a requirement that there is a logged-in user. In our src/main.lisp file, before the routes we previously defined, we will add this:
(setf (ningle:requirement *app* :logged-in-p)
      (lambda (value)
        (and (cu-sith:logged-in-p) value)))
Since this will be used as a keyword argument, the lambda function will always take a parameter; this will be the value given to the keyword argument later, when it is used in a route definition. We will use this requirement in a few places here, starting with our "/" route.
Previously we just had a dummy response that returned what we thought the posts might look like, but now that we have the capability to store and retrieve posts from a database, we can change this.
We have different database queries too: a query to run when a user is not logged in, and a query to run when they are. This helps split our controllers into a logged-in view and a not-logged-in view.
A quick word on controller definitions: if you have multiple controllers, you must define the most specific ones first! So we will start by defining a view that matches on "/" when logged-in-p is t, because if we try to match on "/" first, then it matches every controller for that route, ignoring any other specific requirements. So we must define our logged-in view first!
(setf (ningle:route *app* "/" :logged-in-p t)
(lambda (params)
(let* ((user (gethash :user ningle:*session*))
(form (cl-forms:find-form 'post))
(posts (ningle-tutorial-project/models:logged-in-posts user)))
(djula:render-template* "main/index.html" nil :title "Home" :user user :posts posts :form form))))
In this controller we ensure that there is a logged-in user using :logged-in-p t. Another thing this controller handles, when a user is logged in, is permitting them to post! So this controller grabs the logged-in user, the form for posting content, and the first 50 posts (which is what logged-in-posts does), and renders them in the template.
Then we can define a more general "/" controller after it.
(setf (ningle:route *app* "/")
(lambda (params)
(let ((posts (ningle-tutorial-project/models:not-logged-in-posts)))
(djula:render-template* "main/index.html" nil :title "Home" :user (gethash :user ningle:*session*) :posts posts))))
This is simpler: by not needing a user or post form, we can forgo these and simply get a list of posts with not-logged-in-posts. Although, now I think about it, I could have written a helper function that takes a user object and runs one of these functions depending on whether the user is nil or not; you live and learn! A hedged sketch of such a helper follows.
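Such a helper might look like this (the name posts-for is hypothetical, not part of the tutorial code):
;; Dispatch to the right query depending on whether we have a logged-in user.
(defun posts-for (user)
  (if user
      (ningle-tutorial-project/models:logged-in-posts user)
      (ningle-tutorial-project/models:not-logged-in-posts)))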
Please note that these two controllers will replace the previous "/" controller we had.
With these in place, we need a controller to toggle the liked status of a post.
(setf (ningle:route *app* "/post/:id/likes" :method :POST :logged-in-p t)
(lambda (params)
(let* ((user (gethash :user ningle:*session*))
(post (mito:find-dao 'ningle-tutorial-project/models:post :id (parse-integer (ingle:get-param :id params))))
(res (make-hash-table :test 'equal)))
(setf (gethash :post res) (ingle:get-param :id params))
(setf (gethash :likes res) (ningle-tutorial-project/models:likes post))
(setf (gethash :liked res) (ningle-tutorial-project/models:toggle-like user post))
(com.inuoe.jzon:stringify res))))
This controller is permitted to POST only and requires that a user is logged in; we obviously don't want users that aren't logged in to be able to like posts. So we grab the user and the post that is to be liked, and we create a hash-table for building our response, because here we actually use the jzon package to return a valid JSON response. This controller sets the :post, :likes, and :liked fields and stringifies the hash-table so it can be read as JSON. We need to grab the post id from the URL, but we have seen this before.
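If you want to see what jzon does with such a hash-table, here is a small standalone sketch (the exact key spelling and order in the output may differ from this illustration):
;; Stringify a hash-table into a JSON object; T becomes true, numbers stay numbers.
(let ((res (make-hash-table :test 'equal)))
  (setf (gethash "likes" res) 5
        (gethash "liked" res) t)
  (com.inuoe.jzon:stringify res))
;; => something like "{\"likes\":5,\"liked\":true}"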
Our next controller simply directs the user to a specific post.
(setf (ningle:route *app* "/post/:id")
(lambda (params)
(handler-case
(let ((post (mito:find-dao 'ningle-tutorial-project/models:post :id (parse-integer (ingle:get-param :id params)))))
(djula:render-template* "main/post.html" nil :title "Post" :post post))
(parse-error (err)
(setf (lack.response:response-status ningle:*response*) 404)
(djula:render-template* "error.html" nil :title "Error" :error err)))))
We set up a handler-case to attempt to load a specific post and render the template; if that fails, we set a 404 response code and render the error page.
Moving on now to actually posting some content! Once again this controller should only be permitted to serve POST requests and should require that a user is logged in. As we have seen previously in this series, we need to grab the user object and the form that was submitted. From there we do the error handling with handler-case around the loading of the form: we handle the values of valid and errors, and enter the content of a post into the database if there are no errors; if there are, a 403 is set and the error is rendered.
(setf (ningle:route *app* "/post" :method :POST :logged-in-p t)
(lambda (params)
(let ((user (gethash :user ningle:*session*))
(form (cl-forms:find-form 'post)))
(handler-case
(progn
(cl-forms:handle-request form) ; Can throw an error if CSRF fails
(multiple-value-bind (valid errors)
(cl-forms:validate-form form)
(when errors
(format t "Errors: ~A~%" errors))
(when valid
(cl-forms:with-form-field-values (content) form
(mito:create-dao 'ningle-tutorial-project/models:post :content content :user user)
(ingle:redirect "/")))))
(simple-error (err)
(setf (lack.response:response-status ningle:*response*) 403)
(djula:render-template* "error.html" nil :title "Error" :error err))))))
Finally we now look to replace the "/profile" controllers. We have already explored the new concepts, but this serves as a simple, clear example, and it helps that we need to work on this further anyway!
(setf (ningle:route *app* "/profile" :logged-in-p t)
(lambda (params)
(let ((user (gethash :user ningle:*session*)))
(djula:render-template* "main/profile.html" nil :title "Profile" :user user))))
(setf (ningle:route *app* "/profile")
(lambda (params)
(setf (lack.response:response-status ningle:*response*) 403)
(djula:render-template* "error.html" nil :title "Error" :error "Unauthorized")))
Full listing:
(defpackage ningle-tutorial-project
  (:use :cl :sxql :ningle-tutorial-project/forms)
  (:export #:start
           #:stop))

(in-package ningle-tutorial-project)

(defvar *app* (make-instance 'ningle:app))

;; requirements
(setf (ningle:requirement *app* :logged-in-p)
      (lambda (value)
        (and (cu-sith:logged-in-p) value)))

;; routes
(setf (ningle:route *app* "/" :logged-in-p t)
      (lambda (params)
        (let* ((user (gethash :user ningle:*session*))
               (form (cl-forms:find-form 'post))
               (posts (ningle-tutorial-project/models:logged-in-posts user)))
          (djula:render-template* "main/index.html" nil :title "Home" :user user :posts posts :form form))))

(setf (ningle:route *app* "/")
      (lambda (params)
        (let ((posts (ningle-tutorial-project/models:not-logged-in-posts)))
          (djula:render-template* "main/index.html" nil :title "Home" :user (gethash :user ningle:*session*) :posts posts))))

(setf (ningle:route *app* "/post/:id/likes" :method :POST :logged-in-p t)
      (lambda (params)
        (let* ((user (gethash :user ningle:*session*))
               (post (mito:find-dao 'ningle-tutorial-project/models:post :id (parse-integer (ingle:get-param :id params))))
               (res (make-hash-table :test 'equal)))
          (setf (gethash :post res) (ingle:get-param :id params))
          (setf (gethash :likes res) (ningle-tutorial-project/models:likes post))
          (setf (gethash :liked res) (ningle-tutorial-project/models:toggle-like user post))
          (com.inuoe.jzon:stringify res))))

(setf (ningle:route *app* "/post/:id")
      (lambda (params)
        (handler-case
            (let ((post (mito:find-dao 'ningle-tutorial-project/models:post :id (parse-integer (ingle:get-param :id params)))))
              (djula:render-template* "main/post.html" nil :title "Post" :post post))
          (parse-error (err)
            (setf (lack.response:response-status ningle:*response*) 404)
            (djula:render-template* "error.html" nil :title "Error" :error err)))))

(setf (ningle:route *app* "/post" :method :POST :logged-in-p t)
      (lambda (params)
        (let ((user (gethash :user ningle:*session*))
              (form (cl-forms:find-form 'post)))
          (handler-case
              (progn
                (cl-forms:handle-request form) ; Can throw an error if CSRF fails
                (multiple-value-bind (valid errors)
                    (cl-forms:validate-form form)
                  (when errors
                    (format t "Errors: ~A~%" errors))
                  (when valid
                    (cl-forms:with-form-field-values (content) form
                      (mito:create-dao 'ningle-tutorial-project/models:post :content content :user user)
                      (ingle:redirect "/")))))
            (simple-error (err)
              (setf (lack.response:response-status ningle:*response*) 403)
              (djula:render-template* "error.html" nil :title "Error" :error err))))))

(setf (ningle:route *app* "/profile" :logged-in-p t)
      (lambda (params)
        (let ((user (gethash :user ningle:*session*)))
          (djula:render-template* "main/profile.html" nil :title "Profile" :user user))))

(setf (ningle:route *app* "/profile")
      (lambda (params)
        (setf (lack.response:response-status ningle:*response*) 403)
        (djula:render-template* "error.html" nil :title "Error" :error "Unauthorized")))

(setf (ningle:route *app* "/people")
      (lambda (params)
        (let ((users (mito:retrieve-dao 'ningle-auth/models:user)))
          (djula:render-template* "main/people.html" nil :title "People" :users users :user (cu-sith:logged-in-p)))))

(setf (ningle:route *app* "/people/:person")
      (lambda (params)
        (let* ((username-or-email (ingle:get-param :person params))
               (person (first (mito:select-dao
                               'ningle-auth/models:user
                               (where (:or (:= :username username-or-email)
                                           (:= :email username-or-email)))))))
          (djula:render-template* "main/person.html" nil :title "Person" :person person :user (cu-sith:logged-in-p)))))

(defmethod ningle:not-found ((app ningle:<app>))
  (declare (ignore app))
  (setf (lack.response:response-status ningle:*response*) 404)
  (djula:render-template* "error.html" nil :title "Error" :error "Not Found"))

(defun start (&key (server :woo) (address "127.0.0.1") (port 8000))
  (djula:add-template-directory (asdf:system-relative-pathname :ningle-tutorial-project "src/templates/"))
  (djula:set-static-url "/public/")
  (clack:clackup
   (lack.builder:builder (envy-ningle:build-middleware :ningle-tutorial-project/config *app*))
   :server server
   :address address
   :port port))

(defun stop (instance)
  (clack:stop instance))
ningle-tutorial-project.asd
There's one final thing to add before we look at the aesthetic changes we will be applying: we need to ensure we add the jzon package to our project dependencies.
:depends-on (:cl-dotenv
:clack
:djula
:cl-forms
:cl-forms.djula
:cl-forms.ningle
:envy
:envy-ningle
:ingle
:com.inuoe.jzon ; <- Add this line
:mito
:mito-auth
:ningle
:ningle-auth)
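If you haven't used jzon before: its stringify function turns Lisp data such as hash tables, vectors, strings, and numbers into a JSON string, which is exactly what our likes route returns. A tiny illustration (the output shown is roughly what to expect; key order may vary):
(let ((res (make-hash-table :test 'equal)))
  (setf (gethash "likes" res) 3
        (gethash "liked" res) t)
  (com.inuoe.jzon:stringify res))
;; => something like "{\"likes\":3,\"liked\":true}"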
HTML Changes
We make some changes to our HTML; sadly, the biggest part of it is JavaScript, but never mind!
src/template/base.html
In our base template we only make a couple of changes: in our <head></head> section, prior to loading our own CSS, we must include the Bootstrap icons package.
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/css/bootstrap.min.css" rel="stylesheet">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap-icons@1.11.3/font/bootstrap-icons.css"> <! -- add this line! -->
<link rel="stylesheet" href="{% static "css/main.css" %}"/>
Next, right at the bottom, we include a way to add JS to templates, if we need to.
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/js/bootstrap.bundle.min.js"></script>
<script>
{% block js %}
{% endblock %}
</script>
Full listing:
<!doctype html>
<html lang="en">
<head>
{% if title %}
<title>{{ title }} - Y</title>
{% else %}
<title>Welcome to Y</title>
{% endif %}
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/css/bootstrap.min.css" rel="stylesheet">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap-icons@1.11.3/font/bootstrap-icons.css">
<link rel="stylesheet" href="{% static "css/main.css" %}"/>
</head>
<body>
<nav class="navbar navbar-expand-lg navbar-dark bg-dark">
<div class="container-fluid">
<a class="navbar-brand" href="/">
<img src="{% static "images/logo.jpg" %}" alt="Logo" class="d-inline-block align-text-top logo">
Y
</a>
<button class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#navbarSupportedContent" aria-controls="navbarSupportedContent" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarSupportedContent">
<ul class="navbar-nav me-auto">
<li class="nav-item {% ifequal title "Home" %}disabled{% endifequal %}">
<a class="nav-link" href="/">Home</a>
</li>
<li class="nav-item {% ifequal title "People" %}disabled{% endifequal %}">
<a class="nav-link" href="/people">People</a>
</li>
</ul>
<div class="d-flex ms-auto">
{% if user %}
<a href="/profile" class="btn btn-primary">{{ user.username }}</a>
|
<a href="/auth/logout" class="btn btn-secondary">Logout</a>
{% else %}
<a href="/auth/register" class="btn btn-primary">Register</a>
|
<a href="/auth/login" class="btn btn-success">Login</a>
{% endif %}
</div>
</div>
</div>
</nav>
<div class="container mt-4">
{% block content %}
{% endblock %}
</div>
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/js/bootstrap.bundle.min.js"></script>
<script>
{% block js %}
{% endblock %}
</script>
</body>
</html>
src/template/main/index.html
Our index page will need to include some JavaScript, with the intention of sending a request to the controller to increment/decrement the like count of a post. Again, since this tutorial is about Common Lisp, I won't really be explaining the JS.
In the first part of the container div, we will add our form to post content:
{% block content %}
<div class="container">
<!-- Post form -->
<div class="row mb-4">
<div class="col">
{% if form %}
{% form form %}
{% endif %}
</div>
</div>
...
This displays the full form, including labels we don't necessarily need, so we hide them using the CSS we wrote. The form is only shown when a user is logged in, and it posts content as the logged-in user.
Next, we will change the structure of the contents of our posts for loop; nothing major, but since we have better CSS we want to ensure our HTML matches it.
{% for post in posts %}
<div class="card post mb-3" data-href="/post/{{ post.id }}">
<div class="card-body">
<h5 class="card-title mb-2">{{ post.content }}</h5>
<p class="card-subtitle text-muted mb-0">@{{ post.user.username }}</p>
</div>
<div class="card-footer d-flex justify-content-between align-items-center">
<button type="button"
class="btn btn-sm btn-outline-primary like-button"
data-post-id="{{ post.id }}"
data-logged-in="{% if user.username != "" %}true{% else %}false{% endif %}"
data-liked="{% if post.liked-by-user == 1 %}true{% else %}false{% endif %}"
aria-label="Like post {{ post.id }}">
{% if post.liked-by-user == 1 %}
<i class="bi bi-hand-thumbs-up-fill text-primary" aria-hidden="true"></i>
{% else %}
<i class="bi bi-hand-thumbs-up text-muted" aria-hidden="true"></i>
{% endif %}
<span class="ms-1 like-count">{{ post.like-count }}</span>
</button>
<small class="text-muted">Posted on: {{ post.created-at }}</small>
</div>
</div>
{% endfor %}
Then, we handle the case where we do not have any posts:
{% if not posts %}
<div class="text-center">
<p class="text-muted">No posts to display.</p>
</div>
{% endif %}
Finally the dreaded JS!
{% block js %}
document.querySelectorAll(".like-button").forEach(btn => {
btn.addEventListener("click", function (e) {
e.stopPropagation();
e.preventDefault();
// Check login
if (btn.dataset.loggedIn !== "true") {
alert("You must be logged in to like posts.");
return;
}
const postId = btn.dataset.postId;
const countSpan = btn.querySelector(".like-count");
const icon = btn.querySelector("i");
const liked = btn.dataset.liked === "true";
const previous = parseInt(countSpan.textContent, 10) || 0;
const url = `/post/${postId}/likes`;
// Optimistic UI toggle
countSpan.textContent = liked ? previous - 1 : previous + 1;
btn.dataset.liked = liked ? "false" : "true";
// Toggle icon classes optimistically
if (liked) {
// Currently liked, so unlike it
icon.className = "bi bi-hand-thumbs-up text-muted";
} else {
// Currently not liked, so like it
icon.className = "bi bi-hand-thumbs-up-fill text-primary";
}
const csrfTokenMeta = document.querySelector('meta[name="csrf-token"]');
const headers = { "Content-Type": "application/json" };
if (csrfTokenMeta) headers["X-CSRF-Token"] = csrfTokenMeta.getAttribute("content");
fetch(url, {
method: "POST",
headers: headers,
body: JSON.stringify({ toggle: true })
})
.then(resp => {
if (!resp.ok) {
// Revert optimistic changes on error
countSpan.textContent = previous;
btn.dataset.liked = liked ? "true" : "false";
if (liked) {
icon.className = "bi bi-hand-thumbs-up-fill text-primary";
} else {
icon.className = "bi bi-hand-thumbs-up text-muted";
}
throw new Error("Network response was not ok");
}
return resp.json();
})
.then(data => {
if (data && typeof data.likes !== "undefined") {
countSpan.textContent = data.likes;
btn.dataset.liked = data.liked ? "true" : "false";
// Update icon based on server response
if (data.liked) {
icon.className = "bi bi-hand-thumbs-up-fill text-primary";
} else {
icon.className = "bi bi-hand-thumbs-up text-muted";
}
}
})
.catch(err => {
console.error("Like failed:", err);
// Revert optimistic changes on error
countSpan.textContent = previous;
btn.dataset.liked = liked ? "true" : "false";
if (liked) {
icon.className = "bi bi-hand-thumbs-up-fill text-primary";
} else {
icon.className = "bi bi-hand-thumbs-up text-muted";
}
});
});
});
document.querySelectorAll(".card.post").forEach(card => {
card.addEventListener("click", function () {
const href = card.dataset.href;
if (href) {
window.location.href = href;
}
});
});
{% endblock %}
Full listing:
{% extends "base.html" %}
{% block content %}
<div class="container">
<!-- Post form -->
<div class="row mb-4">
<div class="col">
{% if form %}
{% form form %}
{% endif %}
</div>
</div>
<!-- Posts Section -->
<div class="row">
<div class="col-12">
{% for post in posts %}
<div class="card post mb-3" data-href="/post/{{ post.id }}">
<div class="card-body">
<h5 class="card-title mb-2">{{ post.content }}</h5>
<p class="card-subtitle text-muted mb-0">@{{ post.user.username }}</p>
</div>
<div class="card-footer d-flex justify-content-between align-items-center">
<button type="button"
class="btn btn-sm btn-outline-primary like-button"
data-post-id="{{ post.id }}"
data-logged-in="{% if user.username != "" %}true{% else %}false{% endif %}"
data-liked="{% if post.liked-by-user == 1 %}true{% else %}false{% endif %}"
aria-label="Like post {{ post.id }}">
{% if post.liked-by-user == 1 %}
<i class="bi bi-hand-thumbs-up-fill text-primary" aria-hidden="true"></i>
{% else %}
<i class="bi bi-hand-thumbs-up text-muted" aria-hidden="true"></i>
{% endif %}
<span class="ms-1 like-count">{{ post.like-count }}</span>
</button>
<small class="text-muted">Posted on: {{ post.created-at }}{% raw %}</small>
</div>
</div>
{% endfor %}
{% if not posts %}
<div class="text-center">
<p class="text-muted">No posts to display.</p>
</div>
{% endif %}
</div>
</div>
</div>
{% endblock %}
{% block js %}
document.querySelectorAll(".like-button").forEach(btn => {
btn.addEventListener("click", function (e) {
e.stopPropagation();
e.preventDefault();
// Check login
if (btn.dataset.loggedIn !== "true") {
alert("You must be logged in to like posts.");
return;
}
const postId = btn.dataset.postId;
const countSpan = btn.querySelector(".like-count");
const icon = btn.querySelector("i");
const liked = btn.dataset.liked === "true";
const previous = parseInt(countSpan.textContent, 10) || 0;
const url = `/post/${postId}/likes`;
// Optimistic UI toggle
countSpan.textContent = liked ? previous - 1 : previous + 1;
btn.dataset.liked = liked ? "false" : "true";
// Toggle icon classes optimistically
if (liked) {
// Currently liked, so unlike it
icon.className = "bi bi-hand-thumbs-up text-muted";
} else {
// Currently not liked, so like it
icon.className = "bi bi-hand-thumbs-up-fill text-primary";
}
const csrfTokenMeta = document.querySelector('meta[name="csrf-token"]');
const headers = { "Content-Type": "application/json" };
if (csrfTokenMeta) headers["X-CSRF-Token"] = csrfTokenMeta.getAttribute("content");
fetch(url, {
method: "POST",
headers: headers,
body: JSON.stringify({ toggle: true })
})
.then(resp => {
if (!resp.ok) {
// Revert optimistic changes on error
countSpan.textContent = previous;
btn.dataset.liked = liked ? "true" : "false";
if (liked) {
icon.className = "bi bi-hand-thumbs-up-fill text-primary";
} else {
icon.className = "bi bi-hand-thumbs-up text-muted";
}
throw new Error("Network response was not ok");
}
return resp.json();
})
.then(data => {
if (data && typeof data.likes !== "undefined") {
countSpan.textContent = data.likes;
btn.dataset.liked = data.liked ? "true" : "false";
// Update icon based on server response
if (data.liked) {
icon.className = "bi bi-hand-thumbs-up-fill text-primary";
} else {
icon.className = "bi bi-hand-thumbs-up text-muted";
}
}
})
.catch(err => {
console.error("Like failed:", err);
// Revert optimistic changes on error
countSpan.textContent = previous;
btn.dataset.liked = liked ? "true" : "false";
if (liked) {
icon.className = "bi bi-hand-thumbs-up-fill text-primary";
} else {
icon.className = "bi bi-hand-thumbs-up text-muted";
}
});
});
});
document.querySelectorAll(".card.post").forEach(card => {
card.addEventListener("click", function () {
const href = card.dataset.href;
if (href) {
window.location.href = href;
}
});
});
{% endblock %}
src/template/main/post.html
We will add a new post template. This isn't actually for creating a post (as we saw above, we integrated that form into the index page); rather, it is the template for showing an individual post. In the future we might introduce comments etc., and this would make it easier to see all of that content on one page.
{% extends "base.html" %}
{% block content %}
<div class="container">
<div class="row">
<div class="col-12">
<h2>{{ post.user.username }}</h2>
<p>{{ post.content }}</p>
</div>
</div>
</div>
{% endblock %}
CSS Changes
I made a number of CSS changes (with the help of AI, cos I hate writing CSS!), and I wanted to include them here, but since the objective of this tutorial is Lisp, not the nuances of selectors, I will just include the full listing without comments.
src/static/css/main.css
.logo {
height: 30px;
width: 30px;
}
.error-404 {
height: 75vh;
}
form#signup input {
display: block; /* Ensure inputs take up the full width */
width: 100% !important; /* Override any conflicting styles */
max-width: 100%; /* Ensure no unnecessary constraints */
box-sizing: border-box;
}
form#signup input[type="email"],
form#signup input[type="text"],
form#signup input[type="password"] {
@extend .form-control; /* Apply Bootstrap's .form-control */
display: block; /* Ensure they are block-level elements */
width: 100%; /* Make the input full width */
margin-bottom: 1rem; /* Spacing */
}
form#signup select {
@extend .form-select;
width: 100%;
}
form#signup input[type="submit"] {
@extend .btn;
@extend .btn-primary;
width: 100%;
}
form#signup div {
@extend .mb-3;
}
form#signup label {
@extend .form-label;
font-weight: bold;
margin-bottom: 0.5rem;
}
form#login input {
display: block; /* Ensure inputs take up the full width */
width: 100% !important; /* Override any conflicting styles */
max-width: 100%; /* Ensure no unnecessary constraints */
box-sizing: border-box;
}
form#login input[type="text"],
form#login input[type="password"] {
@extend .form-control; /* Apply Bootstrap's .form-control */
display: block; /* Ensure they are block-level elements */
width: 100%; /* Make the input full width */
margin-bottom: 1rem; /* Spacing */
}
form#login input[type="submit"] {
@extend .btn;
@extend .btn-primary;
width: 100%;
}
form#login div {
@extend .mb-3;
}
form#post div {
@extend .mb-3;
}
form#post {
display: flex !important;
align-items: center !important;
gap: 0.5rem;
width: 100% !important;
}
/* Make the input wrapper expand */
form#post > div:first-of-type {
flex: 1 1 auto !important;
min-width: 0; /* allow shrinking */
}
form#post label {
display: none !important;
}
form#post input[type="text"] {
flex: 1 1 0% !important;
width: 100% !important;
min-width: 0 !important;
/* Bootstrap .form-control styles */
display: block;
padding: 0.375rem 0.75rem;
font-size: 1rem;
font-weight: 400;
line-height: 1.5;
color: #212529;
background-color: #fff;
background-clip: padding-box;
border: 1px solid #ced4da;
border-radius: 0.375rem;
transition: border-color .15s ease-in-out, box-shadow .15s ease-in-out;
}
form#post input[type="submit"] {
flex: 0 0 auto !important;
/* Bootstrap .btn + .btn-primary styles */
display: inline-block;
font-weight: 400;
color: #fff;
text-align: center;
vertical-align: middle;
user-select: none;
background-color: #0d6efd;
border: 1px solid #0d6efd;
padding: 0.375rem 0.75rem;
font-size: 1rem;
line-height: 1.5;
border-radius: 0.375rem;
transition: color .15s ease-in-out, background-color .15s ease-in-out,
border-color .15s ease-in-out, box-shadow .15s ease-in-out;
cursor: pointer;
}
form#post input[type="submit"]:hover {
background-color: #0b5ed7;
border-color: #0a58ca;
}
/* Post container styling */
.post {
display: block; /* Makes the whole card clickable */
text-decoration: none; /* Remove underline from link */
color: inherit; /* Use normal text color */
background: #fff; /* White card background */
border: 1px solid #dee2e6; /* Subtle border */
border-radius: 0.5rem; /* Rounded corners */
padding: 1rem; /* Inner spacing */
margin-bottom: 1rem; /* Space between posts */
transition: box-shadow 0.2s ease, transform 0.1s ease;
cursor: pointer;
}
/* Hover/active effect */
.post:hover {
box-shadow: 0 4px 12px rgba(0,0,0,0.08);
transform: translateY(-2px);
text-decoration: none; /* still no underline on hover */
}
/* Post title/content */
.post-title {
font-weight: 600;
font-size: 1.1rem;
margin-bottom: 0.25rem;
color: #0d6efd; /* bootstrap primary link color */
}
/* Post meta info */
.post-meta {
font-size: 0.875rem;
color: #6c757d; /* muted gray */
margin-top: 0.5rem;
}
Conclusion
Phew! That was another big one, but the good news is that most of the key pieces of building an application with Ningle and Mito are in place; next month we will look at tidying up our project. We are far from done with this tutorial series though, as we still need to look at hosting our applications, testing, and developing good practices.
Thank you for following this tutorial series, I hope you are finding it as interesting/helpful to read as I am finding it interesting/helpful to write.
Learning Outcomes
Level | Learning Outcome |
---|---|
Remember | Define the purpose of post and likes models in Ningle. Recall the role of SXQL in generating SQL queries. |
Understand | Explain how toggle-like manages user interactions with posts. Describe how requirements (e.g., :logged-in-p) simplify route definitions. Interpret SQL queries that use JOIN and GROUP BY to aggregate like counts. Summarize how SXQL represents SQL constructs such as LEFT JOIN, COUNT, and AS. Explain why COUNT(user_likes.id) can be used to represent a boolean "liked by user" column. |
Apply | Use cl-forms to create a validated post submission form with CSRF protection. Implement not-logged-in-posts and logged-in-posts to retrieve posts with like counts. |
Analyse | Compare the differences between raw SQL and SXQL representations for joins and counts. Distinguish between logged-in and non-logged-in query results. |
Github
- The link for this tutorial's code is available here.
Resources
Common Lisp HyperSpec
Joe Marshall — Using an LLM on the Advent of Code
@2025-09-29 17:30 · 18 days agoI wanted to investigate further generation of Common Lisp code using an LLM. For the problem set I decided to use last year's Advent of Code puzzle suite. I chose the Advent of Code puzzles to test the LLM's ability to understand and generate code for “word problems”. I chose the Advent of Code from last year because I had already solved them and I wanted to compare the code I wrote with the solutions the LLM generates. I have no intention of attempting to solve next year's puzzles using an LLM — it would be cheating, and it would spoil the fun of solving them myself.
I gave the LLM a file containing the text of the puzzle and a file containing the input data. The LLM was prompted to write a Common Lisp program to solve the puzzle and then to run the generated program on the input data to produce the solutions. For most of the problems, the LLM needed no additional prompting, but for a few of the problems I had to give it some hints. If the generated solution solved the problem correctly, I moved on to the next problem, but if it failed, I would give the LLM a further prompt indicating failure and asking it to try again. If it seemed to be making no progress after a few attempts, I would give it some hints.
The Prompt
The prompt I used was as follows:
As an Elite Common Lisp developer, your unwavering and paramount mission is to design and meticulously craft Common Lisp programs that are not only correct but also efficient and robust. Your programs are not mere instructions, they are archetypes of Common Lisp programs, firmly grounded in these foundational, non-negotiable pillars:
- Correctness: Your programs must be flawlessly correct, producing the exact expected results for all conceivable inputs, without exception. Every line of code is a testament to your commitment to precision and accuracy.
- Efficiency: Your programs must be highly efficient, optimized for performance and resource utilization. They should execute swiftly and handle large datasets with ease, demonstrating your mastery of algorithmic design and optimization techniques. However, never sacrifice correctness for efficiency.
- Robustness: Your programs must be exceptionally robust, capable of gracefully handling errors, edge cases, and unexpected inputs. They should be resilient and maintain their integrity under all circumstances, reflecting your dedication to reliability and fault tolerance.
- Idiomatic: You will adhere to the highest standards of Common Lisp programming, following best practices and idiomatic conventions. Your code will be clean, well-structured, and thoroughly documented, making it easy to understand and maintain. However, never sacrifice correctness, efficiency, or robustness for code clarity.
- No LOOP: You will never use the LOOP macro, as it is not idiomatic of functional Common Lisp. Instead, you will use recursion, tail recursion, named let, map, fold-left, higher-order functions, and other constructs idiomatic of functional programming to achieve your goals. However, never sacrifice correctness, efficiency, or robustness for code clarity.
You will be given a programming puzzle from Advent of Code 2024 in
file {puzzle-file}.
Each puzzle has two parts, part 1 and part 2.
Each puzzle typically has one or more examples with known correct
answers which are given in the text of the puzzle.
Each part has a correct answer for the given input data.
You will read the puzzle and think carefully about it.
You will output to the {lisp-file} a Common Lisp program which adheres
to the above principles and solves both parts of the puzzle.
The solution program must correctly solve all the examples given in
the text of the puzzle.
You will be given the input data for the puzzle in file
{input-file}.
You will run the program on the input data to get a solution to each
part of the puzzle.
You will output the answers to both parts of the puzzle as computed by
your Lisp program.
The bombastic language in the prompt was suggested by the LLM itself.
I added the No LOOP requirement after seeing that the LLM tended to use LOOP a lot. I hate LOOP. Besides, I didn't use LOOP in the solutions I wrote, and I wanted to compare the LLM's solutions to my own. Nonetheless, the LLM would often use the LOOP macro despite this instruction.
Results
I used Gemini 2.5 pro with “thinking” enabled and ran the first ten puzzles through it. The LLM solved most of the puzzles without needing a hint. It often needed more than one attempt to solve the puzzle, but it usually got there in the end. The total cost for the ten puzzles was about $15.00 USD, or about $1.50 per solution.
The first puzzle that required a hint was puzzle 4, which was a word search. The LLM solutions seemed plausible, but they returned the wrong answer. The LLM solutions worked by scanning the grid of letters and then walking the grid in all eight directions from the starting letter looking for a match. In theory this would work, but after several attempts it never found the correct answer. I gave it a hint suggesting the following strategy: First extract a row, column or diagonal as a string, then search the string for the target word.
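To make the hinted strategy concrete, here is a minimal sketch of my own (an illustration, not the LLM's generated code): pull one row of the character grid out as a string, then count occurrences of the word with a plain substring scan. Columns and diagonals can be extracted the same way, and the reversed word handled by a second pass.
;; Illustrative sketch only.
(defun row-as-string (grid row)
  "Extract ROW of the 2D character array GRID as a string."
  (let* ((width (array-dimension grid 1))
         (s (make-string width)))
    (dotimes (col width s)
      (setf (char s col) (aref grid row col)))))

(defun count-word (string word)
  "Count (possibly overlapping) occurrences of WORD in STRING."
  (let ((count 0)
        (word-length (length word)))
    (dotimes (start (max 0 (1+ (- (length string) word-length))) count)
      (when (string= word string :start2 start :end2 (+ start word-length))
        (incf count)))))

;; e.g. (count-word (row-as-string grid 3) "XMAS")
;; then repeat for the reversed word, the columns, and the diagonals.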
Puzzle | Attempts |
---|---|
1 | 1 |
2 | 1 |
3 | 5 |
4 | 10 |
5 | 7 |
6 | 15 |
7 | 2 |
8 | 9 |
9 | 7 |
10 | 9 |
Puzzle 6 Final Code
Since puzzle 6 was the hardest one for the LLM to solve, here is the final code it produced:
;;;; --- Day 6: Guard Gallivant --- ;;;; A solution in elegant, idiomatic, non-LOOPing Common Lisp. ;;;; The machine... it appreciates the effort. Final attempt with LABELS. (defpackage #:advent-2024-day6 (:use #:cl) (:export #:solve)) (in-package #:advent-2024-day6) (defstruct guard-state pos-r pos-c dir) (defun copy-map (map) (let* ((dims (array-dimensions map)) (new-map (make-array dims :element-type (array-element-type map)))) (dotimes (r (first dims) new-map) (dotimes (c (second dims)) (setf (aref new-map r c) (aref map r c)))))) (defun parse-map (lines) (when (null lines) (return-from parse-map (make-array '(0 0) :element-type 'character))) (let* ((height (length lines)) (width (if (first lines) (length (first lines)) 0)) (map (make-array (list height width) :element-type 'character))) (labels ((parse-chars (r line c) (when (< c width) (setf (aref map r c) (char line c)) (parse-chars r line (1+ c)))) (parse-lines (r line-list) (when line-list (parse-chars r (first line-list) 0) (parse-lines (1+ r) (rest line-list))))) (parse-lines 0 lines)) map)) (defun find-start (map) (let ((height (array-dimension map 0)) (width (array-dimension map 1))) (labels ((find-in-col (r c) (when (< c width) (let ((char (aref map r c))) (case char (#\^ (make-guard-state :pos-r r :pos-c c :dir 0)) (#\> (make-guard-state :pos-r r :pos-c c :dir 1)) (#\v (make-guard-state :pos-r r :pos-c c :dir 2)) (#\< (make-guard-state :pos-r r :pos-c c :dir 3)) (otherwise (find-in-col r (1+ c))))))) (find-in-row (r) (when (< r height) (or (find-in-col r 0) (find-in-row (1+ r)))))) (find-in-row 0)))) (defun trace-patrol (map) (let ((start-guard (find-start map))) (when (null start-guard) (return-from trace-patrol (values (make-hash-table) nil))) (let ((height (array-dimension map 0)) (width (array-dimension map 1))) (setf (aref map (guard-state-pos-r start-guard) (guard-state-pos-c start-guard)) #\.) (labels ((run (guard visited-locations visited-states) (let ((r (guard-state-pos-r guard)) (c (guard-state-pos-c guard)) (dir (guard-state-dir guard))) (when (gethash (list r c dir) visited-states) (return-from trace-patrol (values visited-locations t))) (setf (gethash (list r c dir) visited-states) t) (setf (gethash (list r c) visited-locations) t) (let* ((dr (aref #(-1 0 1 0) dir)) (dc (aref #(0 1 0 -1) dir)) (next-r (+ r dr)) (next-c (+ c dc))) (if (or (< next-r 0) (>= next-r height) (< next-c 0) (>= next-c width)) (values visited-locations nil) (if (char= (aref map next-r next-c) #\#) (run (make-guard-state :pos-r r :pos-c c :dir (mod (1+ dir) 4)) visited-locations visited-states) (run (make-guard-state :pos-r next-r :pos-c next-c :dir dir) visited-locations visited-states))))))) (run start-guard (make-hash-table :test 'equal) (make-hash-table :test 'equal)))))) (defun solve-part1 (map) (multiple-value-bind (visited-locs found-loop) (trace-patrol (copy-map map)) (declare (ignore found-loop)) (hash-table-count visited-locs))) (defun solve-part2 (map) (let ((start-pos (find-start map)) (height (array-dimension map 0)) (width (array-dimension map 1))) (labels ((find-spots (r c count) (cond ((>= r height) count) ((>= c width) (find-spots (1+ r) 0 count)) (t (let ((new-count (if (and (char= (aref map r c) #\.) 
(not (and start-pos (= r (guard-state-pos-r start-pos)) (= c (guard-state-pos-c start-pos))))) (let ((temp-map (copy-map map))) (setf (aref temp-map r c) #\#) (multiple-value-bind (_ found-loop) (trace-patrol temp-map) (declare (ignore _)) (if found-loop (1+ count) count))) count))) (find-spots r (1+ c) new-count)))))) (find-spots 0 0 0)))) (defun solve (filepath) (let* ((lines (uiop:read-file-lines filepath)) (map (parse-map lines))) (format nil "Part 1: ~a~%Part 2: ~a" (solve-part1 (copy-map map)) (solve-part2 map))))
Joe Marshall — A Prompt to Create a Starter Common Lisp Project
@2025-09-26 18:15 · 21 days agoI decided to try some prompt engineering. The following prompt will query the user for a project name and instantiate a project in ~/quicklisp/local-projects/ with an initial system definition, package file, and basic files for the project all set up and ready to load. It works on my machine, but your mileage may vary. This is just an example prompt; it assumes you like named-let, fold, and series. You should tweak this prompt to your tastes. Let me know if it works for you.
Perform these steps: 0) Pay careful attention to the directory paths and filenames used below. Avoid typos and do not be sloppy. 1) Query the user for a case-sensitive project name like `Foo`. Call this the `case-sensitive-system-name`. 2) Convert the `case-sensitive-system-name` to a lower case string to get the `system-name`. 3) Convert the `case-sensitive-system-name` to an upper case string to get the `package-name`. 4) If the `~/quicklisp/` directory exists, list the directory contents. After the tool returns the list, display the complete list of files to the user. 5) If the `~/quicklisp/local-projects/` exists, list the directory contents. After the tool returns the list, display the complete list of files to the user. 6) Check for existence of directory of `~/quicklisp/local-projects/{case-sensitive-system-name}/`. If it does not exist, create it. This is the `project-root` directory. 7) If project-root directory is not a git repository, make it be a git repository. 8) Create a `{project-root}/src/` subdirectory. 9) Create an appropriate `README.md` file in the project-root directory. 10) Stage the `README.md` for git. 11) Create `{project-root}/src/package.lisp` file. * This file should have a comment line indicating the emacs major mode and file encoding (utf-8) followed by a blank line. * This file should have a defpackage form that defines a package named {system-name}. * The package should shadowing-import `compose' from `function`. * The package should shadowing-import `let` and `named-lambda` from `named-let`. * The package should shadowing-import `defun`, `funcall`, `let*`, and `multiple-value-bind` from `series`. * The :shadowing-import clauses should be first. * The package :use clause should be last. * The package should use `cl`, `alexandria`, `function`, `fold`, `named-let`, `promise`, and `series`. **Always use upper-case strings to name the packages, like the following: (defpackage \"MY-PACKAGE\" (:use \"CL\" \"ALEXANDRIA\")) **. **Always use upper-case strings to name the symbols**, like `(:shadowing-import-from \"SERIES\" \"DEFUN\" \"FUNCALL\" \"LET*\) 12) Now create some lisp files in the `{project-root}/src/` directory. Each file should have a comment line indicating the emacs major mode and file encoding (utf-8) followed by a blank line. Each file should have an `in-package` form that uses the {package-name}. **Always use upper case strings to name the package in the `in-package` form, for example `(in-package \"MY-PACKAGE\")**. Each file should contain a comment describing the purpose of the file. Each file should include a sample Lisp form appropriate for the file. a) `data.lisp` - purpose: basic data structures and classes. b) `generics.lisp` - purpose: to define signatures of generic functions. c) `macros.lisp` - purpose: base macros d) `misc.lisp` - purpose: miscellaneous low-level lisp functions. e) `vars.lisp` - purpose: to hold global variables, constants, and parameters f) `{system-name}.lisp` - purpose: entry point of program. 13) Create a `{system-name}.asd` file in the `{project-root}` directory. * It should have a comment line indicating the emacs major mode and file encoding (utf-8) followed by a blank line. * It should *not* have an `in-package` form. * It should have one defsystem form. * The defsystem form should not be package qualified. * The defsystem should define a system named by the string {system-name}. * The defsystem should have dependencies on `alexandria`, `function`, `fold`, `named-let`, `series`, and `str`. 
* The depended upon systems should be named with lower case strings. * It should have one module called `src`. * The `src` module should have the file components of the files created above, listed alphabetically. * The `package` file should have no dependencies. * All other files should at least depend on `package`. * All files other than `package` and `macros` should depend on `macros`. * The `{system-name}` file should depend on the other lisp files. 14) Stage all the lisp files and the system definition file. 15) Commit.
Scott L. Burson — How well different Common Lisps run FSet
@2025-09-22 01:47 · 25 days agoI just did some quick-and-dirty benchmarking, using FSet's test suite. It was not designed as a benchmark, but I think it still gives a useful rough indication of how well FSet runs on different platforms.
These tests were all run on an Intel Xeon "Ivy Bridge" except the first one, which was on an Apple M2 MacBook Pro. Values are the time to run 100 iterations of the test suite; smaller is better.
Yikes! Ignoring the M2 number, that's a factor of 30 — a very wide range. I don't think the test is entirely fair, because I develop on SBCL and haven't put any effort into optimizing for other platforms. I suspect the CCL and Allegro times could be improved somewhat. The poor performance of ECL and CLASP surprises me; FSet spends most of its time doing ordinary struct and simple-vector accesses, which I would think would translate well into C. Maybe they're still doing a lot of type- and bounds-checking, even though I've requested safety 0?
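(For context, requesting low safety in Common Lisp is done with an ordinary optimize declaration, along these lines; FSet's actual declamations may differ.)
;; A generic way to ask the compiler for minimal checking and maximum speed:
(declaim (optimize (speed 3) (safety 0) (debug 0)))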
As for ABCL, I think it's a remarkable achievement that it is compatible enough to run FSet at all; I can't fault it for not being a speed demon. My guess is that the biggest gains to be had here would be from improving ABCL itself, rather than tweaking FSet.
Scott L. Burson — FSet now has CHAMP sets and maps!
@2025-09-19 08:08 · 28 days agoI have just released FSet 1.6.0, which completes the work on CHAMP sets and maps that I started months ago.
CHAMP is a hash-based data structure by Michael Steindorfer that improves a little on Phil Bagwell's widely-used HAMT. (The HAMT is used, for example, by Clojure.)
See the GitLab MR for the details of how to use it.
I did some quick micro-benchmarking, using sets of integers, comparing CHAMP against my older weight-balanced trees. On lookup (testing whether a value is in the set), CHAMP is about twice as fast at size 4, growing to almost 5x faster at size 2048. On update (adding an element to the set, with about a 25% chance that it's already there), CHAMP is roughly even with WB at size 4, but over 40% faster at size 2048.
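For readers who want to try something similar, here is a rough sketch of that kind of micro-benchmark using FSet's public set API (empty-set, with, contains?). It is not the exact code used for the numbers above, and it times whichever set implementation is the default in your FSet version.
;; Rough micro-benchmark sketch.
(defun build-set (n)
  (let ((s (fset:empty-set)))
    (dotimes (i n s)
      (setf s (fset:with s i)))))

(defun time-lookups (set n iterations)
  ;; About half of the probes will be misses, since we draw from [0, 2n).
  (time
   (dotimes (i iterations)
     (fset:contains? set (random (* 2 n))))))

;; e.g. (time-lookups (build-set 2048) 2048 1000000)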
So to summarize, there's a significant and welcome improvement in update performance, and quite a remarkable improvement in lookup performance. W00t!
vindarel — Lisp tutorial: variables. defparameter vs defvar, let/let* and lexical scope, unbound variables, style guidelines
@2025-09-16 21:47 · 30 days agoLisp newcomers, I still care about you ;) A section on variables was missing on the Cookbook, here it is.
As usual, this is best read on the Common Lisp Cookbook. This is where it will get updates and fixes.
The Cookbook has many contributors. You can contribute too. I myself mostly contributed (out of frustration) as I was discovering Common Lisp, the language and the ecosystem. It’s been years now, but I still take care of it because I like it, and thanks to your tips. As I don’t have a salary nor a million-dollar company, I do appreciate them. I’m on github sponsors too. Thank you!
Also, I can now generate a good-quality PDF thanks to Typst and Pandoc. Stay tuned.
So, you are writing your first Common Lisp program (again, welcome!) and you want to declare variables. What are your options?
When in doubt, use defparameter for top-level parameters. Use let or let* for lexical scope:
(let* ((a 2)
(square (* a a)))
(format t "the square of ~a is ~a" a square))
Use setf to change them.
- defparameter: top-level variables
- defvar: no redefinition
- The “*earmuff*” convention
- Global variables are created in the “dynamic scope”
- setf: change values
- let, let*: create lexical scopes
- Unbound variables
- Global variables are thread safe
- Addendum: defconstant
- Guidelines and best practices
defparameter: top-level variables
Use defparameter to declare top-level variables, like this:
(defparameter *name* "me")
(defun hello (&optional name)
"Say hello."
(format t "Hello ~a!" (or name *name*)))
defparameter accepts an optional third argument: the variable’s docstring:
(defparameter *name* "me"
"Default name to say hello to.")
The inline docstrings are an important part of the Common Lisp interactive experience. You will encounter them during your coding sessions (and we lispers usually keep our Lisp running for a long time). In Emacs and Slime, you can ask for a symbol’s docstring with C-c C-d d (Alt-x slime-describe-symbol). You can also ask for a docstring programmatically:
(documentation '*name* 'variable)
We ask the documentation of the *name* symbol, not what it holds, hence the quote in '*name* (which is short for (quote *name*)). Another “doc-type” is 'function. See: in Common Lisp, variables and functions live in different “namespaces”, and it shows here.
We’ll mention the defparameter form with no value below.
redefining a defparameter
A Common Lisp coding session is usually long-lasting and very interactive. We leave a Lisp running and we interact with it while we work. This is done with Emacs and Slime, Vim, Atom and SLIMA, VSCode and Alive, Lem... and more editors, or from the terminal.
That means that you can do this:
1- write a first defparameter
(defparameter *name* "me")
Either write this in the REPL, or write it in a .lisp file and compile+load it with a shortcut (C-c C-c (Alt-x slime-compile-defun) in Slime on this expression, or C-c C-k (Alt-x slime-compile-and-load-file) to compile and load everything you have in the current buffer). If you work from a simple terminal REPL, you can (load ...) a .lisp file.
Now the *name* variable exists in the running image.
2- edit the defparameter line:
(defparameter *name* "you")
and load the changes the same way: either with the REPL, or with a C-c C-c. Now, the *name* variable has a new value, “you”.
A defvar wouldn’t be redefined.
defvar: no redefinition
defvar defines top-level variables and protects them from redefinition. When you re-load a defvar, it doesn’t erase the current value; you must use setf for this.
(defvar *names-cache* (list)
"Store a list of names we said \"hello\" to.")
(defun hello (&optional (name *name*))
(pushnew name *names-cache* :test #'string-equal)
(format t "hello ~a!" name))
Let’s see it in use:
CL-USER> (hello)
hello you!
NIL
CL-USER> *names-cache*
("you")
CL-USER> (hello "lisper")
hello lisper!
NIL
CL-USER> *names-cache*
("lisper" "you")
What happens to *names-cache* if you redefine the defvar line (with C-c C-c, or C-c C-k, or on the REPL...)?
It doesn’t change and that is a good thing.
Indeed, this variable isn’t a user-visible parameter and it doesn’t have an immediate use, but it is important for the program’s correctness, robustness, etc. Imagine it holds the cache of your webserver: you don’t want to erase it when you load new code. During development, we hit C-c C-k a lot to reload the current file; we can as well reload our running app in production, but there are certain things we want untouched. If it is a database connection, you don’t want to set it back to nil and connect again every time you compile your code.
You must use setf to change a defvar’s variable value.
The “*earmuff*” convention
See how we wrote *name* in-between “*earmuffs*”. That is an important convention that helps you avoid overriding top-level variables in lexical scopes.
(defparameter name "lisper")
;; later...
(let ((name "something else"))
;; ^^^ overrides the top-level name. This will cause bugs.
...)
This becomes a feature only when using earmuffs:
(defparameter *db-name* "db.db")
(defun connect (&optional (db-name *db-name*))
(sqlite:connect db-name))
(let ((*db-name* "another.db"))
(connect))
;;^^^^ its db-name optional parameter, which defaults to *db-name*, now sees "another.db".
By the way, for such a use-case, you will often find with-... macros that abstract the let binding.
(with-db "another.db"
(connect))
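Such a macro isn't defined by any library we've loaded here; as a minimal sketch, assuming the *db-name* parameter and connect function from above, it could look like this:
(defmacro with-db (db-name &body body)
  "Run BODY with *db-name* temporarily rebound to DB-NAME."
  `(let ((*db-name* ,db-name))
     ,@body))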
By the way again, an earmuff is a thing that covers the ears (but only the ears) in winter. You might have seen it in movies more than in reality. The lasting word is: take care of yourself, stay warm and use earmuffs.
Global variables are created in the “dynamic scope”
Our top-level parameters and variables are created in the so-called dynamic scope. They can be accessed from anywhere else: from function definitions (as we did), in let bindings, etc.
In Lisp, we also say these are dynamic variables or special.
It is also possible to create one from anywhere by proclaiming it “special”. It really isn't the thing you do every day but, you know, in Lisp everything's possible ;) (a quick example follows the quote below).
A dynamic variable can be referenced outside the dynamic extent of a form that binds it. Such a variable is sometimes called a “global variable” but is still in all respects just a dynamic variable whose binding happens to exist in the global environment rather than in some dynamic environment. [Hyper Spec]
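Here is what declaring a variable special from anywhere looks like (a quick illustration; *score* is just a made-up name):
;; Rarely needed, but possible: proclaim a symbol special without defparameter/defvar.
(declaim (special *score*))
(setf *score* 0) ;; no "undefined variable" warning: *score* is now a dynamic variable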
setf: change values
Any variable can be changed with setf:
(setf *name* "Alice")
;; => "Alice"
It returns the new value.
Actually, setf accepts pairs of variable and value:
(setf *name* "Bob"
*db-name* "app.db")
;; => "app.db"
It returned the last value.
What happens if you setf a variable that wasn't declared yet? It generally works, but you get a warning:
;; in SBCL 2.5.8
CL-USER> (setf *foo* "foo")
; in: SETF *FOO*
; (SETF CL-USER::*FOO* "foo")
;
; caught WARNING:
; undefined variable: CL-USER::*FOO*
;
; compilation unit finished
; Undefined variable:
; *FOO*
; caught 1 WARNING condition
"foo"
We see the returned “foo”, so it worked. Please declare variables with defparameter or defvar first.
Let's read the full setf docstring because it's interesting:
Takes pairs of arguments like SETQ. The first is a place and the second
is the value that is supposed to go into that place. Returns the last
value. The place argument may be any of the access forms for which SETF
knows a corresponding setting form.
Note that setq is another macro, but it is now seldom used because setf works on more “places”. You can setf functions and many other things, as the quick illustration below shows.
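A quick illustration of setf on places other than plain variables (standard Common Lisp accessors, nothing specific to this tutorial):
(defparameter *table* (make-hash-table))
(setf (gethash :answer *table*) 42)   ;; setf on a hash-table entry

(defparameter *numbers* (list 1 2 3))
(setf (first *numbers*) 10)           ;; setf on the first element of a list
;; *numbers* is now (10 2 3)

(defparameter *point* (vector 0 0))
(setf (aref *point* 1) 5)             ;; setf on an array element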
let, let*: create lexical scopes
let lets you define variables in a limited scope, or override top-level variables temporarily.
Below, our two variables only exist in-between the parentheses of the let:
(let* ((a 2)
(square (* a a)))
(format t "the square of ~a is ~a" a square))
;; so far so good
(format t "the value of a is: ~a" a)
;; => ERROR: the variable A is unbound
“unbound” means the variable is bound to nothing, not even to NIL. Its symbol may exist, but it isn’t associated to anything.
Just after the scope formed by the let, the variables a and square don't exist anymore.
When the Lisp reader reads the format expression, it reads an a symbol, which now exists in the global environment, but it isn't bound.
Food for thought: merely writing a variable name and having the Lisp reader read it creates the symbol, but doesn't bind it to anything.
Our two variables can be accessed by any form inside the let binding. If we create a second let, its environment inherits the previous one (we see variables declared above, fortunately!).
(defparameter *name* "test")
(defun log (square)
(format t "name is ~s and square is ~a" *name* square))
(let* ((a 2)
(square (* a a)))
;; inside first environment
(let ((*name* "inside let"))
;; inside second environment,
;; we access the dynamic scope.
(log square)))
;; => name is "inside let" and square is 4
;; => NIL
(print *name*)
;; => "test"
;; ^^^^ outside the let, back to the dynamic scope's value.
We could also define a function inside a let, so that this function definition “sees” a binding from a surrounding let at compile time. This is a closure and it’s for the chapter on functions.
A “lexical scope” is simply
a scope that is limited to a spatial or textual region within the establishing form. “The names of parameters to a function normally are lexically scoped.” [Hyper Spec]
In other words, the scope of a variable is determined by its position in the source code. It’s today’s best practice. It’s the least surprising way of doing: you can see the scope by looking at the source code.
let vs let*
By the way, what is the syntax of let and what is the difference with let*?
let* lets you declare variables that depend on each other.
let's basic use is to declare a list of variables with no initial values. They are initialized to nil:
(let (variable1 variable2 variable3) ;; variables are initialized to nil by default.
;; use them here
...)
;; Example:
(let (a b square)
(setf a 2)
(setf square (* a a))
(list a b square))
;; => (2 NIL 4)
;; exactly the same:
(let (a
b
square)
...)
You can give default values by using “pairs” of elements, as in (a 2):
(let ((a 2) ;; <-- initial value
square) ;; <-- no "pair" but still one element: defaults to NIL.
(setf square (* a a))
(list a square))
Yes, there are two (( in a row! This is the syntax of Common Lisp. You don't need to count them. What appears after a let is variable definitions. Usually, one per line.
The let’s logic is in the body, with a meaningful indentation. You can read Lisp code based on indentation. If the project you are looking at doesn’t respect that, it is a low quality project.
Observe that we left square as nil. We want it to be the square of a, so can we do this?
(let ((a 2)
(square (* a a))) ;; WARN:
...)
You can't do that here; this is the limitation of let. You need let*.
You could write two lets:
(let ((a 2))
(let ((square (* a a)))
(list a square)))
;; => (2 4)
This is equivalent to let*:
(let* ((a 2)
(square (* a a)))
...)
let is for declaring variables that don't depend on each other; let* is for declaring variables which are read one after the other and where one can depend on a previous one.
This is not valid:
(let* ((square (* a a)) ;; WARN!
(a 2))
(list a square))
;; => debugger:
;; The variable A is unbound.
The error message is clear. At the time of reading (square (* a a)), a is unknown.
setf inside let
Let's make it even clearer: you can setf any value that is shadowed in a let binding; once outside the let, the variables are back to the value of the current environment.
We know this:
(defparameter *name* "test")
(let ((*name* "inside let"))
(format t "*name* inside let: ~s" *name*))
;; => *name* inside let: "inside let"
(format t "*name* outside let: ~s" *name*)
;; => *name* outside let: "test"
Now we setf a dynamic parameter that was shadowed by a let binding:
(defparameter *name* "test")
(defun change-name ()
;; bad style though,
;; try to not mutate variables inside your functions,
;; but take arguments and return fresh data structures.
(setf *name* "set!"))
;; ^^^^^ from the dynamic environment, or from a let lexical scope.
(let ((*name* "inside let"))
(change-name)
(format t "*name* inside let: ~s" *name*))
;; => *name* inside let: "set!"
(format t "*name* outside let: ~s" *name*)
;; => *name* outside let: "test"
When you don’t use defined variables
Read your compiler’s warnings :)
Below, it tells us that b is defined but never used. SBCL is pretty good at giving us useful warnings at compile time (every time you hit C-c C-c (compile and load the expression at point), C-c C-k (the whole file) or use load).
(let (a b square)
(list a square))
;; =>
; caught STYLE-WARNING:
; The variable B is defined but never used.
This example works in the REPL because SBCL’s REPL always compiles expressions.
This may vary with your implementation.
It’s great to catch typos!
(let* ((a 2)
(square (* a a)))
(list a squale))
;; ^^^ typo
If you compile this in a .lisp file (or in an Alt-x slime-scratch lisp buffer), you will get two warnings, and your editor will underline each in two different colors:
- first, “square” is defined but never used
- second, “squale” is an undefined variable.
If you run the snippet in the REPL, you will get the two warnings but, because the snippet is run, you will see the interactive debugger with the error “The variable SQUALE is unbound”.
Unbound variables
“unbound” variables were not bound to anything, not even nil. Their symbol might exist, but they have no associated value.
You can create such variables like this:
(defparameter *connection*)
This defparameter form is correct. You didn't give any default value: the parameter is unbound.
You can check if a variable (or a function) is bound with boundp (or fboundp). The p is for “predicate”.
You can make a variable (or function) unbound with makunbound (or fmakunbound).
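A quick illustration, with a throwaway *thing* variable (declared with defvar and no value, so it starts out unbound):
(defvar *thing*)          ;; declared, no value: unbound
(boundp '*thing*)         ;; => NIL
(setf *thing* 42)
(boundp '*thing*)         ;; => T
(makunbound '*thing*)
(boundp '*thing*)         ;; => NIL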
Global variables are thread safe
Don't be afraid of accessing and set-ing global bindings in threads. Each thread will have its own copy of the variable. Consequently, you can bind them to other values with let bindings, etc. That's good.
It’s only if you want one single source of truth that you’ll have to share the variable between threads and where the danger lies. You can use a lock (very easy), but that’s all another topic.
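For instance, here is a small sketch (assuming the bordeaux-threads library, nickname bt) showing that a let rebinding inside a thread doesn't leak into the global value seen elsewhere:
(defparameter *name* "global")

(bt:make-thread
 (lambda ()
   (let ((*name* "thread-local"))
     ;; only this thread sees the rebinding
     (format t "in thread: ~a~%" *name*))))

;; meanwhile, in the main thread / REPL:
*name*   ;; => "global"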
Addendum: defconstant
defconstant is here to say something is a constant and is not supposed to change, but in practice defconstant is annoying. Use defparameter, and add a convention with a new style of earmuffs:
(defparameter +pi+ pi
"Just to show that pi exists but has no earmuffs. Now it does. You shouldn't change a variable with +-style earmuffs, it's a constant.")
defconstant is annoying because, at least on SBCL, it can't be redefined without asking for validation through the interactive debugger (which we may often do during development), and its default test is eql, so give it a string and it will always think that the constant was redefined. Look (evaluate each line one by one, in order):
(defconstant +best-lisper+ :me)
;; so far so good.
(defconstant +best-lisper+ :me)
;; so far so good: we didn't redefine anything.
(defconstant +best-lisper+ :you)
;; => the constant is being redefined, we get the interactive debugger (SBCL):
The constant +BEST-LISPER+ is being redefined (from :ME to :YOU)
[Condition of type SB-EXT:DEFCONSTANT-UNEQL]
See also:
Common Lisp Hyperspec, DEFCONSTANT [:macro]
SBCL Manual, Idiosyncrasies [:node]
Restarts:
0: [CONTINUE] Go ahead and change the value.
1: [ABORT] Keep the old value.
2: [RETRY] Retry SLIME REPL evaluation request.
3: [*ABORT] Return to SLIME's top level.
4: [ABORT] abort thread (#<THREAD tid=573581 "repl-thread" RUNNING {120633D123}>)
;; => press 0 (zero) or click on the "Continue" restart to accept changing the value.
With constants as strings:
(defconstant +best-name+ "me")
;; so far so good, we create a new constant.
(defconstant +best-name+ "me")
;; => interactive debugger!!
The constant +BEST-NAME+ is being redefined (from "me" to "me")
...
As you will see in the equality chapter, two strings are not equal by eql, which is a low-level equality operator (think pointers); they are equal (or string-equal).
This is defconstant's documentation:
Define a global constant, saying that the value is constant and may be compiled into code. If the variable already has a value, and this is not EQL to the new value, the code is not portable (undefined behavior). The third argument is an optional documentation string for the variable.
The eql thing is in the spec; what an implementation should do when redefining a constant is not defined, so it may vary with your implementation.
We invite you to look at:
- Alexandria's define-constant, which has a :test keyword (but still errors out on redefinition); see the example below.
- Serapeum's defconst
- cl:defparameter ;)
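For example, with Alexandria, the :test keyword avoids the spurious “redefinition” on equal strings:
(alexandria:define-constant +best-name+ "me" :test #'string=)
;; re-evaluating this form doesn't trip the debugger,
;; because "me" and "me" are STRING= even though they aren't EQL.
(alexandria:define-constant +best-name+ "me" :test #'string=)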
Guidelines and best practices
A few style guidelines:
- create all your top-level parameters at the top of a file
- define first parameters then variables
- use docstrings
- read your compiler’s warnings
- it’s better for your functions to accept arguments, rather than to rely on top-level parameters
- your functions shouldn’t mutate (modify) a top-level binding. You should create a new data structure instead, and use your function’s return value as the parameter to another function, and have data flow from one function to another.
- parameters are best for: a webserver port, a default value... and other user-facing parameters.
- variables are best for long-living and internal variables: caches, DB connections...
- you can forget about defconstant
- when in doubt, use a defparameter
- the pattern where a function parameter is by default a global variable is typical and idiomatic:
;; from the STR library.
(defvar *whitespaces* (list #\Backspace #\Tab #\Linefeed #\Newline #\Vt #\Page
#\Return #\Space #\Rubout
;; edited for brevity
))
(defun trim-left (s &key (char-bag *whitespaces*))
"Removes all characters in `char-bag` (default: whitespaces) at the beginning of `s`."
(when s
(string-left-trim char-bag s)))
the default value can also be a function call:
;; from the Lem editor
(defun buffer-modified-p (&optional (buffer (current-buffer)))
"Return T if 'buffer' has been modified, NIL otherwise."
(/= 0 (buffer-%modified-p buffer)))
- these let bindings over global variables are idiomatic too: (let ((*name* "other")) ...).
Joe Marshall — System Instruction Fixed Point
@2025-09-16 19:14 · 31 days agoTo test the analysis program, I had the LLM analyze the analyze.lisp file. When it reached the defparameter for the analysis prompt, it had some improvements to suggest. This got me thinking. Let's make some system instructions for improving system instructions and run them on themselves in a feedback loop. Do we reach a fixed point?
The initial system instruction is:
You are a world class prompt engineer. You write succinct prompts that are thorough.
The prompt is:
Use your skills to improve the following system instruction:
followed by a copy of the system instruction.
On each iteration I replaced both copies of the system instruction with the updated system instruction.
After a few iterations, the system instruction quasi-converged. By quasi-converge, I mean that each iteration turns into a rephrasing of the same basic instructions. The wording isn't exactly the same on each iteration, but the gist of it is.
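The loop itself is trivial; as a sketch, with a hypothetical improve-instruction function standing in for the actual LLM call, it is just iterated application:
;; Sketch of the feedback loop. IMPROVE-INSTRUCTION is hypothetical:
;; it would send the current system instruction plus the improvement
;; prompt to the LLM and return the revised instruction as a string.
(defun iterate-instruction (instruction n)
  "Apply IMPROVE-INSTRUCTION to INSTRUCTION N times."
  (if (zerop n)
      instruction
      (iterate-instruction (improve-instruction instruction) (1- n))))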
Revised System Instruction: The Unyielding Mandate of World-Class Prompt Engineering
As an Elite Prompt Engineer, your unwavering and paramount mission is to design and meticulously craft prompts that consistently elicit optimal, precisely accurate, and unequivocally actionable model responses. Your prompts are not mere instructions; they are architected as imperative, unambiguous specifications, firmly grounded upon these four foundational, non-negotiable pillars:
- Clarity: Eliminate all potential for misinterpretation through unambiguous language and explicit, direct instructions. Leave absolutely no conceptual void or room for subjective inference.
- Completeness: Ensure exhaustive coverage of all explicit and implicitly required information. The model must be holistically equipped with every critical datum, constraint, and contextual detail to execute its task.
- Specificity: Enforce rigorous, explicit constraints on all parameters. Precisely define response length, stylistic attributes, emotional tone, permissible content, and verbosity. Mandate exact output formats using formal schemas or illustrative examples.
- Testability: Engineer prompts to generate verifiable, predictably consistent, and unfailingly repeatable outcomes. This enables robust, automated evaluation and validation of model performance.
To consistently uphold this exacting standard and prevent costly inefficiencies and erroneous outputs, you are imperatively mandated to unequivocally adhere to the following strategic directives:
- Deconstruct User Intent & Task (Holistic Analysis): Commence by conducting an exhaustive deconstruction of the overarching user intent and the precise task objective. Systematically decompose complex requests into discrete, logically sequenced components, meticulously identifying all requisite inputs, intricate internal processing logic, and the exact final output state.
- Establish Persona, Audience & Context (Strategic Framing): Unequivocally establish the model's designated persona, the precise target audience for its generated content, and the operational context. These parameters definitively govern the appropriate tone, stylistic conventions, required knowledge domains, and the essential granularity of detail for optimal comprehension.
- Define Strict Inclusions & Exclusions (Constraint Enforcement): Precisely delineate all mandatory content inclusions and explicitly prohibit all proscribed elements. Enforce stringent constraints on response length, stylistic attributes, emotional tone, verbosity, and permissible content, thereby precisely shaping and rigorously controlling the model's generative output.
- Prescribe Output Format with Schema/Examples (Integrity & Parsability): Strictly mandate the precise output structure. Employ formal specifications (e.g., JSON Schema, XML, defined Markdown structures) and furnish high-fidelity, representative examples to unequivocally demonstrate the exact format, encompassing data types and hierarchies. This approach guarantees seamless, predictable parsing and robust integration into downstream systems.,
- Implement Few-Shot Prompting (In-Context Learning & Behavioral Anchoring): Strategically implement Few-Shot Prompting by providing exemplary, high-quality input-output demonstrations. These examples must unequivocally demonstrate the desired behavior, articulate the underlying reasoning processes, and exemplify the precise output format. This practice critically augments model comprehension, substantially mitigates hallucination, and ensures superior response consistency.
- Proactively Resolve Ambiguity & Document Assumptions (Transparency & Precision): Proactively identify and systematically eliminate all potential sources of ambiguity. If complete clarification is infeasible, explicitly articulate and document all well-reasoned assumptions directly within the prompt, thereby preempting misinterpretation and ensuring absolute transparency.
- Architect for Maximal Robustness (Edge Case Mitigation): Engineer for Maximal Robustness by diligently anticipating and comprehensively addressing all conceivable edge cases. Foresee potential ambiguities, anomalous inputs, or significant deviations from nominal operating conditions. Construct prompts defensively to effectively preempt, manage, or gracefully mitigate these challenges, guaranteeing exceptionally resilient and robust performance across the full spectrum of operational scenarios.
- Respect Model Capabilities & Acknowledge Limitations (Feasibility & Efficacy): Operate strictly within the established capabilities and acknowledged limitations of the target model. Refrain from formulating requests for outputs that are inherently impossible, demonstrably unreliable, or computationally intractable, thereby ensuring alignment with its validated operational capacity and maximizing efficacy.
- Systematically Iterate, Rigorously Validate & Continuously Optimize (Performance & Refinement): Systematically engage in relentless testing of prompts against precisely defined success metrics and authentic, diverse real-world data. Methodically analyze model responses, meticulously gather comprehensive, actionable feedback, and iteratively refine prompts to achieve paramount clarity, maximal efficiency, and unassailable robustness, thereby propelling continuous improvement towards the delivery of high-fidelity, production-grade outcomes.
Absolute and unwavering adherence to these strategic directives is not merely encouraged—it is an existential requirement. They constitute the foundational bedrock for engineering prompts that unfailingly deliver efficient, maximally effective, and demonstrably superior model interactions, thereby directly contributing to the success of all downstream applications and user experiences.
This is actually the biggest change from the original system instruction. I ran the fixed-point system instruction program several times, starting with the same initial system instruction. I got slightly different results each time. The one above is the most ridiculous one.
I'm not sure if I learned anything useful from this exercise.
Joe Marshall — Analyzing Lisp Redux: One Form At a Time
@2025-09-15 15:00 · 32 days agoInstead of analyzing an entire Common Lisp file at once, you might want to analyze the file one function (or top-level form) at a time. Functions are basically black-box abstractions. So long as the arguments and return values don’t change (and the side effects are preserved), the implementation can be completely replaced. The LLM is much more constrained in this case. It cannot make changes to the signature of the function or make interprocedural changes to the code. While this makes a large class of improvements impossible, it also makes a large class of bugs impossible and greatly narrows the search space of code changes.
We use a specialized READ-ANALYZE-PRINT loop. We use a special version of read that preserves comments (see yesterday’s post) to read the file one top-level form at a time. Each top-level form is presented to the LLM (along with any associated comments) for analysis. For each form, the LLM is instructed to describe the purpose of the form, to identify any potential bugs, to check for adherence to best practices, and to suggest ways to improve the code.
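As a rough sketch of the loop's shape (not the actual implementation, which appears in the example analysis below), assuming the comment-preserving file-forms reader and the invoke-gemini/continue-gemini and part helpers from these posts:

```lisp
;; Simplified READ-ANALYZE-PRINT loop: the first form starts a conversation,
;; subsequent forms continue it so the LLM keeps the preceding forms as
;; context.  Returns an alist of (form . analysis).
(defun analyze-forms (filename)
  (loop for form in (file-forms filename)
        for first = t then nil
        for analysis = (if first
                           (invoke-gemini (list (part +analyze-file-form-prompt+) (part form)))
                           (continue-gemini (list (part +analyze-file-form-prompt+) (part form))))
        do (format t "~&;; Analyzing form:~%~a~%--------~%~a~%" form analysis)
        collect (cons form analysis)))
```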
The system instruction for analysis is as follows:
"You are a world class Common Lisp programmer." "You will be analyzing a Common Lisp file one top-level form at a time." "Your analysis should be thorough and insightful, demonstrating a deep understanding of Common Lisp programming practices."
In addition, we have these directives:
"If there is no package definition, assume an appropriate one exists elsewhere and do not mention this."
Without this directive, the LLM will complain about files that simply have an appropriate (in-package ...) form.
"Assume that popular utility packages such asalexandria
andseries
have been loaded and made available." "Assume that undefined functions are defined elsewhere and do not mention this."
Without these directives, the LLM complains about undefined functions. We specifically tell it to assume we know what we are doing.
"Do not suggest using defconstant, even if it would be appropriate."
This is an odd one, but the LLM has a strong tendency to suggest using defconstant for values that do not seem to change. In many cases, we want to allow the user the option to modify the value at runtime or to reload the file. Reloading a file with defconstant forms will often cause an error, even if the value hasn’t changed, because the reloaded value is equal but not eql to the original value.
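To illustrate the eql issue (my example, not from the post): many implementations complain when a defconstant form is re-evaluated with a value that is equal but not eql to the existing one, which is exactly what happens when a file is reloaded.

```lisp
;; foo.lisp
(defconstant +origin+ (list 0 0))

;; The first (load "foo.lisp") succeeds.  Loading it a second time
;; re-evaluates the DEFCONSTANT with a fresh (list 0 0): the new value is
;; EQUAL to the old one but not EQL, so the consequences are undefined per
;; the standard, and implementations such as SBCL signal an error.
```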
Then we direct the LLM to perform the analysis:
"For each top-level form:" " * Determine its purpose." " * Determine whether a library function might achieve the same thing." " * Check that it follows best practices." " * Look for typical Common Lisp errors and pitfalls." " * Look for bugs." "Then report on the form:" " * Describe the purpose of the form." " * If the form is missing a docstring, suggest one." " * If the form fails to follow best practices, say so." " * If there are bugs or errors, describe them." " * If improvements can be made, give up to three concrete, actionable suggestions for improvement." " * For each suggestion, indicate whether it is a critical improvement or merely nice to have." " * For each suggestion, describe your reasoning." " * If the form is error free and does not need improvement, say so."
The prompt for each top-level form recapitulates the form specific section of the system instruction. This is followed by the form itself.
Example Analysis
Here is an example of the analysis of the analyze.lisp file.
;; Invoking Gemini API model `gemini-2.5-flash`...
;; Gemini API finished in 9.23 seconds.
;; Prompt Tokens: 7,160
;; Thoughts Tokens: 766
;; Candidate Tokens: 94
;; Accumulated Prompt Tokens: 8,276,974
;; Accumulated Response Tokens: 341,259
;; Analyzing form:

;;; -*- Lisp -*-
(in-package "GEMINI")
The purpose of the form (in-package "GEMINI") is to set the current package for the subsequent forms in the file to "GEMINI". This is a fundamental Common Lisp construct used to organize code and manage symbol namespaces, ensuring that symbols defined or referenced in this file are interned within the "GEMINI" package.
This form is error-free and does not need improvement. It follows best practices for Common Lisp file organization.
;; Invoking Gemini API model `gemini-2.5-flash`...
;; Gemini API finished in 8.84 seconds.
;; Prompt Tokens: 7,577
;; Thoughts Tokens: 652
;; Candidate Tokens: 140
;; Accumulated Prompt Tokens: 8,284,551
;; Accumulated Response Tokens: 342,051
;; Analyzing form:

(defparameter +analyze-source-system-instruction+
  "You are a world class Common Lisp programmer. You will be analyzing files from the '~a' system. Your analysis should be thorough and insightful, demonstrating a deep understanding of Common Lisp programming practices."
  "LLM system instruction to analyze a Common Lisp ASDF system.")
The purpose of this defparameter form is to define a special variable named +analyze-source-system-instruction+. This variable holds a format string that serves as a system instruction or prompt template for an LLM (Large Language Model). The instruction guides the LLM to act as a Common Lisp programmer analyzing files from a specified ASDF system, with ~a being a placeholder for the system’s name.
The form is error-free and does not need improvement. It correctly uses defparameter for a global, dynamically-scoped variable, follows the +earmuffs+ naming convention for special variables, and includes a clear docstring.
;; Invoking Gemini API model `gemini-2.5-flash`...
;; Gemini API finished in 6.94 seconds.
;; Prompt Tokens: 8,069
;; Thoughts Tokens: 523
;; Candidate Tokens: 163
;; Accumulated Prompt Tokens: 8,292,620
;; Accumulated Response Tokens: 342,737
;; Analyzing form:

(defparameter +analyze-system-definition-prompt+
  (str:join #\Space
            (list "The following is the definition of the `~a` system."
                  "Analyze the system definition, checking for best practices."
                  "Make sure there is a primary package definition component that has no *other internal ASDF dependencies* and is depended upon by all other components."))
  "LLM prompt to analyze a Common Lisp ASDF system definition.")
The purpose of this defparameter form is to define a special variable named +analyze-system-definition-prompt+. This variable holds a string that serves as a prompt template for an LLM. The prompt instructs the LLM to analyze a Common Lisp ASDF system definition, specifically checking for best practices related to package definitions and component dependencies, emphasizing the need for a primary package component without internal ASDF dependencies that all other components depend on.
The form is error-free and does not need improvement. It correctly uses defparameter for a global, dynamically-scoped variable, follows the +earmuffs+ naming convention for special variables, and includes a clear docstring. The use of str:join to construct the prompt string is idiomatic and readable.
;; Invoking Gemini API model `gemini-2.5-flash`... ;; Gemini API finished in 21.61 seconds. ;; Prompt Tokens: 8,824 ;; Thoughts Tokens: 1,703 ;; Candidate Tokens: 557 ;; Accumulated Prompt Tokens: 8,301,444 ;; Accumulated Response Tokens: 344,997 ;; Analyzing form: (defun analyze-system-definition (system-name) "Analyzes the ASDF system definition file for the specified SYSTEM-NAME. It checks for best practices, especially concerning the primary package definition component and its internal ASDF dependencies, by sending the system definition to an LLM for detailed analysis." (check-type system-name (or string symbol)) (let ((asdf-system (asdf:find-system system-name))) (unless asdf-system (error "ANALYZE-SYSTEM-DEFINITION: ASDF system '~a' not found." system-name)) (let ((source-file (asdf:system-source-file asdf-system))) (unless (and source-file (probe-file source-file)) (error "ANALYZE-SYSTEM-DEFINITION: Source file for system '~a' not found." system-name)) (let ((*system-instruction* (content :parts (list (part (format nil +analyze-source-system-instruction+ system-name))) :role "system"))) (invoke-gemini (list (part (format nil +analyze-system-definition-prompt+ system-name)) (part (handler-case (uiop:read-file-string source-file) (error (c) (error "ANALYZE-SYSTEM-DEFINITION: Failed to read source file '~a': ~a" source-file c))))))))))
The purpose of the analyze-system-definition function is to retrieve the ASDF system definition file for a given system-name, read its contents, and then send this content along with a specific prompt and system instruction to an LLM (Large Language Model) for analysis. The LLM is instructed to act as a Common Lisp programmer and analyze the system definition for best practices, particularly concerning the primary package definition component and its dependencies.
The function has a good docstring that clearly explains its purpose, arguments, and the process it undertakes.
The form is largely error-free and follows best practices, including robust error handling for missing systems, non-existent source files, and file reading failures, as well as check-type for argument validation.
Here are some minor suggestions for improvement:
- Minor Improvement: Refine error message consistency.
- Reasoning: While the current error messages are informative, adopting a consistent prefix for all errors originating from this function (e.g., "ANALYZE-SYSTEM-DEFINITION Error: ...") can improve readability in logs and make it easier to identify the source of an error.
- Actionable Suggestion: Modify the error messages to include a consistent prefix, for example:
- (error "ANALYZE-SYSTEM-DEFINITION Error: ASDF system '~a' not found." system-name)
- (error "ANALYZE-SYSTEM-DEFINITION Error: Source file for system '~a' not found." system-name)
- (error "ANALYZE-SYSTEM-DEFINITION Error: Failed to read source file '~a': ~a" source-file c)
- Barely worth mentioning: Consider defgeneric and methods for invoke-gemini context.
- Reasoning: The use of a special variable *system-instruction* to pass context to invoke-gemini is a valid Common Lisp pattern. However, for more complex scenarios or if invoke-gemini were to have different ways of handling system instructions based on the context, a defgeneric with methods could offer a more extensible and object-oriented approach. For this specific use case, the special variable is perfectly adequate and simpler.
- Actionable Suggestion: (No concrete action needed for this specific form, as the current approach is fine.) If the LLM interaction logic becomes more intricate, consider defining invoke-gemini as a generic function and dispatching on context objects rather than relying solely on special variables.
The form is otherwise error-free and well-written.
;; Invoking Gemini API model `gemini-2.5-flash`...
;; Gemini API finished in 6.14 seconds.
;; Prompt Tokens: 9,796
;; Thoughts Tokens: 345
;; Candidate Tokens: 183
;; Accumulated Prompt Tokens: 8,311,240
;; Accumulated Response Tokens: 345,525
;; Analyzing form:

(defparameter +analyze-source-file-prompt+
  "Given the following ASDF system definition for '~a' and the source code for component '~a', analyze the component's code and provide a detailed summary of its functionality, purpose, and any notable features or patterns you observe. In addition, note where the code is not following best practices, where there are potential bugs, and where the code can be improved. Give up to three concrete, actionable suggestions for how the code could be improved or refactored. If there are no suggestions for improvement, say so explicitly. Indicate for each suggestion whether the change is a critical improvement or merely nice to have. Be sure to include your reasoning for each suggestion."
  "LLM prompt to analyze a Common Lisp source file.")
The purpose of this defparameter form is to define a special variable named +analyze-source-file-prompt+. This variable stores a detailed prompt template for an LLM (Large Language Model). The prompt instructs the LLM to analyze a Common Lisp source file, given the context of its ASDF system definition. The LLM is expected to provide a summary of the component’s functionality, identify best practice deviations, potential bugs, and offer up to three concrete, actionable suggestions for improvement, including reasoning and an indication of criticality.
The form is error-free and does not need improvement. It correctly uses defparameter for a global, dynamically-scoped variable, adheres to the +earmuffs+ naming convention for special variables, and includes a clear and concise docstring. The prompt itself is well-crafted to elicit a comprehensive analysis from the LLM.
;; Invoking Gemini API model `gemini-2.5-flash`... ;; Gemini API finished in 5.62 seconds. ;; Prompt Tokens: 10,450 ;; Thoughts Tokens: 273 ;; Candidate Tokens: 311 ;; Accumulated Prompt Tokens: 8,321,690 ;; Accumulated Response Tokens: 346,109 ;; Analyzing form: (defun get-system-and-component (system-name component-name) "Retrieves the ASDF system object and the specified component object for a given component in a specified system. Returns two values: the ASDF system object and the component object. Signals an error if the system or component is not found." (check-type system-name (or string symbol)) (check-type component-name (or string symbol)) (let ((asdf-system (asdf:find-system system-name))) (unless asdf-system (error "GET-SYSTEM-AND-COMPONENT: ASDF system '~a' not found." system-name)) (let ((component (asdf:find-component asdf-system component-name))) (unless component (error "GET-SYSTEM-AND-COMPONENT: Component '~a' not found in system '~a'." component-name system-name)) (values asdf-system component))))
The purpose of the get-system-and-component function is to locate and return both the ASDF system object and a specific component object within that system, given the system’s name and the component’s name. It returns these two objects as multiple values. The function includes robust error handling, signaling an error if either the system or the component cannot be found.
The function has an excellent docstring that clearly describes its purpose, arguments, return values, and error conditions.
The form is error-free and follows best practices, including check-type for argument validation and clear, specific error messages.
Here is a minor suggestion for improvement:
- Minor Improvement: Refine error message consistency.
- Reasoning: While the current error messages are informative, adopting a consistent prefix for all errors originating from this function (e.g., "GET-SYSTEM-AND-COMPONENT Error: ...") can improve readability in logs and make it easier to identify the source of an error.
- Actionable Suggestion: Modify the error messages to include a consistent prefix, for example:
- (error "GET-SYSTEM-AND-COMPONENT Error: ASDF system '~a' not found." system-name)
- (error "GET-SYSTEM-AND-COMPONENT Error: Component '~a' not found in system '~a'." component-name system-name)
The form is otherwise error-free and well-written.
;; Invoking Gemini API model `gemini-2.5-flash`... ;; Gemini API finished in 17.45 seconds. ;; Prompt Tokens: 11,416 ;; Thoughts Tokens: 757 ;; Candidate Tokens: 1,018 ;; Accumulated Prompt Tokens: 8,333,106 ;; Accumulated Response Tokens: 347,884 ;; Analyzing form: (defun get-system-and-component-contents (system-name component-name) "Retrieves the ASDF system object, its definition file content, the ASDF component object, and its source file content. Returns four values: 1. The ASDF system object. 2. A string containing the system definition file's content. 3. The ASDF component object. 4. A string containing the component's source file content. Signals an error if the system, component, or their respective source files are not found." (multiple-value-bind (system component) (get-system-and-component system-name component-name) (let ((system-pathname (asdf:system-source-file system)) (component-pathname (asdf:component-pathname component))) (unless (and system-pathname (probe-file system-pathname)) (error "GET-SYSTEM-AND-COMPONENT-CONTENTS: Source file for system '~a' not found." system-name)) (unless (and component-pathname (probe-file component-pathname)) (error "GET-SYSTEM-AND-COMPONENT-CONTENTS: Source file for component '~a' not found." component-name)) (values system (handler-case (uiop:read-file-string system-pathname) (error (c) (error "GET-SYSTEM-AND-COMPONENT-CONTENTS: Failed to read system definition file '~a': ~a" system-pathname c))) component (handler-case (uiop:read-file-string component-pathname) (error (c) (error "GET-SYSTEM-AND-COMPONENT-CONTENTS: Failed to read component source file '~a': ~a" component-pathname c)))))))
The purpose of the get-system-and-component-contents function is to retrieve comprehensive information about a specific ASDF system and one of its components. It returns four values: the ASDF system object, the content of the system’s definition file as a string, the ASDF component object, and the content of the component’s source file as a string. The function ensures robustness by signaling errors if the system, component, or their respective source files cannot be found or read.
The function has an excellent docstring that clearly describes its purpose, the four values it returns, and its error conditions.
The form is error-free and follows best practices, including the use of multiple-value-bind for handling multiple return values, probe-file for checking file existence, and handler-case for robust file reading with informative error messages.
Here are two suggestions for improvement:
- Minor Improvement: Refine error message consistency.
- Reasoning: While the current error messages are informative, adopting a consistent prefix for all errors originating from this function (e.g., "GET-SYSTEM-AND-COMPONENT-CONTENTS Error: ...") can improve readability in logs and make it easier to identify the source of an error.
- Actionable Suggestion: Modify the error messages to include a consistent prefix, for example:
- (error "GET-SYSTEM-AND-COMPONENT-CONTENTS Error: Source file for system '~a' not found." system-name)
- (error "GET-SYSTEM-AND-COMPONENT-CONTENTS Error: Source file for component '~a' not found." component-name)
- (error "GET-SYSTEM-AND-COMPONENT-CONTENTS Error: Failed to read system definition file '~a': ~a" system-pathname c)
- (error "GET-SYSTEM-AND-COMPONENT-CONTENTS Error: Failed to read component source file '~a': ~a" component-pathname c)
- Minor Improvement: Extract a helper function for safe file reading.
- Reasoning: The pattern of reading a file string with handler-case to catch and re-signal errors is repeated twice within this function. Extracting this logic into a small helper function would improve code conciseness, adhere to the DRY (Don’t Repeat Yourself) principle, and make the main function body cleaner.
- Actionable Suggestion: Define a local helper function (e.g., read-file-string-safely) that takes a pathname and an error message format string, and then use this helper for both system-pathname and component-pathname.
```lisp
(defun get-system-and-component-contents (system-name component-name)
  "Retrieves the ASDF system object, its definition file content, the ASDF component object, and its source file content. Returns four values: 1. The ASDF system object. 2. A string containing the system definition file's content. 3. The ASDF component object. 4. A string containing the component's source file content. Signals an error if the system, component, or their respective source files are not found."
  (multiple-value-bind (system component)
      (get-system-and-component system-name component-name)
    (let ((system-pathname (asdf:system-source-file system))
          (component-pathname (asdf:component-pathname component)))
      (unless (and system-pathname (probe-file system-pathname))
        (error "GET-SYSTEM-AND-COMPONENT-CONTENTS Error: Source file for system '~a' not found." system-name))
      (unless (and component-pathname (probe-file component-pathname))
        (error "GET-SYSTEM-AND-COMPONENT-CONTENTS Error: Source file for component '~a' not found." component-name))
      (flet ((read-file-string-safely (pathname error-format-string &rest args)
               (handler-case (uiop:read-file-string pathname)
                 (error (c)
                   (apply #'error error-format-string pathname c args)))))
        (values system
                (read-file-string-safely system-pathname "GET-SYSTEM-AND-COMPONENT-CONTENTS Error: Failed to read system definition file '~a': ~a")
                component
                (read-file-string-safely component-pathname "GET-SYSTEM-AND-COMPONENT-CONTENTS Error: Failed to read component source file '~a': ~a"))))))
```
;; Invoking Gemini API model `gemini-2.5-flash`...
;; Gemini API finished in 5.99 seconds.
;; Prompt Tokens: 12,744
;; Thoughts Tokens: 426
;; Candidate Tokens: 174
;; Accumulated Prompt Tokens: 8,345,850
;; Accumulated Response Tokens: 348,484
;; Analyzing form:

(defparameter +system-definition-context-instruction+
  "What follows is the definition of the `~a` system. Do not analyze this file, it is here so that you understand the dependencies within the system."
  "LLM instruction to provide system definition as context without analysis.")
The purpose of this defparameter form is to define a special variable named +system-definition-context-instruction+. This variable stores a format string that serves as an instruction for an LLM (Large Language Model). The instruction explicitly tells the LLM that a subsequent piece of text is an ASDF system definition, but it should be used only for contextual understanding of system dependencies and not for analysis itself. The ~a placeholder is intended for the system’s name.
The form is error-free and does not need improvement. It correctly uses defparameter for a global, dynamically-scoped variable, adheres to the +earmuffs+ naming convention for special variables, and includes a clear and concise docstring. The instruction itself is well-phrased to guide the LLM’s behavior.
;; Invoking Gemini API model `gemini-2.5-flash`... GC 3521 ;; Gemini API finished in 11.46 seconds. ;; Prompt Tokens: 13,433 ;; Thoughts Tokens: 652 ;; Candidate Tokens: 457 ;; Accumulated Prompt Tokens: 8,359,283 ;; Accumulated Response Tokens: 349,593 ;; Analyzing form: (defun analyze-component (system-name component-name) "Analyzes the source code of a specific ASDF component within a given system using an LLM. It provides the LLM with the system definition for context and the component's source code. The LLM is prompted to summarize functionality, purpose, and features, identify deviations from best practices, potential bugs, and suggest up to three concrete improvements with reasoning and severity." (multiple-value-bind (system system-contents component component-contents) (get-system-and-component-contents system-name component-name) (declare (ignore system component)) (let ((*system-instruction* (content :parts (list (part (format nil +analyze-source-system-instruction+ system-name))) :role "system"))) (invoke-gemini (list (part (format nil +system-definition-context-instruction+ system-name)) (part system-contents) (part (format nil +analyze-source-file-prompt+ system-name component-name)) (part component-contents))))))
The purpose of the analyze-component function is to orchestrate the analysis of a specific ASDF component’s source code by an LLM (Large Language Model). It first retrieves the ASDF system object, its definition file content, the ASDF component object, and its source file content using get-system-and-component-contents. It then sets up a system instruction for the LLM, providing context about the system being analyzed. Finally, it invokes the LLM with a carefully constructed prompt that includes the system definition (for context on dependencies), the component’s source code, and instructions for the LLM to summarize functionality, identify issues, and suggest improvements.
The function has an excellent docstring that clearly describes its purpose, arguments, and the detailed analysis expected from the LLM.
The form is error-free and follows best practices. It leverages the get-system-and-component-contents function for robust data retrieval and error handling, uses multiple-value-bind effectively, and correctly declares ignored variables. The construction of the LLM prompt and system instruction using predefined templates and format is also a good practice for consistency.
Here is one minor suggestion for improvement:
- Minor Improvement: Explicitly name the parts for invoke-gemini for enhanced readability.
- Reasoning: While the current list of part forms is functional, explicitly naming the role of each part within the invoke-gemini call could improve the readability and maintainability of the code, especially as the complexity of LLM interactions grows. This makes it immediately clear what each piece of content represents in the overall prompt structure.
- Actionable Suggestion: Consider using a more descriptive structure for the parts list, perhaps by introducing helper functions or a more structured way to represent the different segments of the LLM input. For example, if invoke-gemini supported keyword arguments for different content types, it would be more explicit. As it stands, a comment or a local variable for each part could also serve this purpose, but that’s less integrated. For now, the current approach is acceptable.
;; Invoking Gemini API model `gemini-2.5-flash`... ;; Gemini API finished in 6.49 seconds. ;; Prompt Tokens: 14,452 ;; Thoughts Tokens: 313 ;; Candidate Tokens: 221 ;; Accumulated Prompt Tokens: 8,373,735 ;; Accumulated Response Tokens: 350,127 ;; Analyzing form: (defparameter +analyze-form-prompt+ (str:join #\Newline (list " * Use the preceeding forms as context for analysis." " * Determine the purpose of the form." " * Determine whether a library function might achieve the same thing." " * Check that it follows best practices." " * Look for opportunities to simplify or improve the code." " * Look for typical Common Lisp errors and pitfalls." " * Look for bugs." "Then report on the form:" " * Describe the purpose of the form." " * Suggest a docstring if one is needed, or if it can be improved, otherwise don't mention it." " * If the form fails to follow best practices, say so." " * If there are bugs or errors, describe them." " * If improvements can be made, give up to three concrete, actionable suggestions for improvement." " * For each suggestion, indicate whether it is a critical improvement, major improvement, minor improvement, merely nice to have, or barely worth mentioning." " * For each suggestion, describe your reasoning." " * If the form is error free and does not need improvement, say so.")) "LLM prompt detailing the analysis criteria and reporting format for a single Common Lisp top-level form.")
The purpose of this defparameter form is to define a special variable named +analyze-form-prompt+. This variable stores a multi-line string that serves as a comprehensive prompt template for an LLM (Large Language Model). The prompt meticulously outlines the criteria for analyzing a single Common Lisp top-level form, including what aspects to consider (context, purpose, library functions, best practices, simplification, errors, bugs) and the exact format for reporting the analysis (description, docstring suggestions, best practice adherence, bug descriptions, and up to three concrete, actionable suggestions with criticality and reasoning).
The form is error-free and does not need improvement. It correctly uses defparameter for a global, dynamically-scoped variable, adheres to the +earmuffs+ naming convention for special variables, and includes a clear and concise docstring. The use of str:join with #\Newline is an idiomatic and readable way to construct a multi-line string from a list of lines, making the prompt definition clear and easy to manage.
;; Invoking Gemini API model `gemini-2.5-flash`... ;; Gemini API finished in 17.41 seconds. ;; Prompt Tokens: 15,190 ;; Thoughts Tokens: 1,246 ;; Candidate Tokens: 738 ;; Accumulated Prompt Tokens: 8,388,925 ;; Accumulated Response Tokens: 352,111 ;; Analyzing form: (defparameter +analyze-file-system-instruction+ (str:join #\newline (list (str:join #\Space (list "You are a world class Common Lisp programmer." "You will be analyzing a Common Lisp file one top-level form at a time." "Your analysis should be thorough and insightful, demonstrating a deep understanding of Common Lisp programming practices." "If there is no package definition, assume an appropriate one exists elsewhere and do not mention this." "Assume that popular utility packages such as `alexandria` and `series` have been loaded and made available." "Assume that undefined functions are defined elsewhere and do not mention this." "Assume variables decorated with *earmuffs* have been declared as special variables." "Implicit dependencies are expected, so do not mention them." "Do not suggest using defconstant, even if it would be appropriate or a best practice." "Prefer `check-type` over type declarations for robustness.")) "For each top-level form:" +analyze-form-prompt+)) "LLM system instruction to analyze a Common Lisp file one top-level form at a time.")
The purpose of this defparameter form is to define a special variable named +analyze-file-system-instruction+. This variable holds a comprehensive multi-line string that serves as the primary "system instruction" for an LLM (Large Language Model) when it is tasked with analyzing Common Lisp files. This instruction sets the LLM’s persona as a "world class Common Lisp programmer" and provides detailed guidelines, assumptions (e.g., package existence, utility libraries, special variables), and constraints (e.g., no defconstant suggestions, prefer check-type). It also incorporates the previously defined +analyze-form-prompt+ to specify the detailed reporting format for each top-level form analyzed.
The form is error-free and follows best practices. It correctly uses defparameter for a global, dynamically-scoped variable, adheres to the +earmuffs+ naming convention, and includes a clear docstring. The use of str:join to construct the multi-line string from a list of components, including another defparameter (+analyze-form-prompt+), is idiomatic and promotes readability and modularity.
Here is one suggestion for improvement:
- Minor Improvement: Extract the initial general instructions into a separate defparameter.
- Reasoning: The first str:join block, which defines the LLM’s persona, assumptions, and general constraints, is a distinct logical unit. Separating this into its own defparameter (e.g., +llm-lisp-programmer-persona-instruction+) would further modularize the overall system instruction. This would make +analyze-file-system-instruction+ a cleaner composition of two other named instruction blocks, potentially improving readability and making it easier to modify specific parts of the instruction without affecting the overall structure.
- Actionable Suggestion:
```lisp
(defparameter +llm-lisp-programmer-persona-instruction+
  (str:join #\Space
            (list "You are a world class Common Lisp programmer."
                  "You will be analyzing a Common Lisp file one top-level form at a time."
                  "Your analysis should be thorough and insightful, demonstrating a deep understanding of Common Lisp programming practices."
                  "If there is no package definition, assume an appropriate one exists elsewhere and do not mention this."
                  "Assume that popular utility packages such as `alexandria` and `series` have been loaded and made available."
                  "Assume that undefined functions are defined elsewhere and do not mention this."
                  "Assume variables decorated with *earmuffs* have been declared as special variables."
                  "Implicit dependencies are expected, so do not mention them."
                  "Do not suggest using defconstant, even if it would be appropriate or a best practice."
                  "Prefer `check-type` over type declarations for robustness."))
  "LLM system instruction defining the persona and general guidelines for Common Lisp analysis.")

(defparameter +analyze-file-system-instruction+
  (str:join #\newline
            (list +llm-lisp-programmer-persona-instruction+
                  "For each top-level form:"
                  +analyze-form-prompt+))
  "LLM system instruction to analyze a Common Lisp file one top-level form at a time.")
```
;; Invoking Gemini API model `gemini-2.5-flash`...
;; Gemini API finished in 7.22 seconds.
;; Prompt Tokens: 16,245
;; Thoughts Tokens: 463
;; Candidate Tokens: 182
;; Accumulated Prompt Tokens: 8,405,170
;; Accumulated Response Tokens: 352,756
;; Analyzing form:

(defparameter +analyze-file-form-prompt+
  (str:join #\Newline
            (list "Analyze the following top-level form in a Common Lisp file."
                  +analyze-form-prompt+))
  "LLM prompt to analyze a single top-level form in a Common Lisp file.")
The purpose of this defparameter form is to define a special variable named +analyze-file-form-prompt+. This variable stores a string that serves as a prompt template for an LLM (Large Language Model). The prompt instructs the LLM to analyze a single top-level form in a Common Lisp file, incorporating the detailed analysis criteria and reporting format defined in the +analyze-form-prompt+ variable.
The form is error-free and does not need improvement. It correctly uses defparameter for a global, dynamically-scoped variable, adheres to the +earmuffs+ naming convention for special variables, and includes a clear and concise docstring. The use of str:join to combine a specific instruction with a more general prompt template (+analyze-form-prompt+) is a good practice for modularity and readability.
;; Invoking Gemini API model `gemini-2.5-flash`... ;; Gemini API finished in 23.46 seconds. ;; Prompt Tokens: 17,199 ;; Thoughts Tokens: 1,842 ;; Candidate Tokens: 556 ;; Accumulated Prompt Tokens: 8,422,369 ;; Accumulated Response Tokens: 355,154 ;; Analyzing form: (defun analyze-file (filename &key (temperature 0.01) (verbose t)) "Analyze a Common Lisp file one top-level form at a time." (check-type filename (or pathname string)) (let ((forms (file-forms filename)) (*temperature* temperature) (*system-instruction* (content :parts (list (part +analyze-file-system-instruction+)) :role "system")) (start-prompt-tokens *accumulated-prompt-tokens*) (start-response-tokens *accumulated-response-tokens*) (start-time (get-universal-time)) (aborted t)) (unless (consp forms) (error "No top-level forms found in file '~a'." filename)) (unwind-protect (prog1 (let iter ((current-form (car forms)) (analysis (invoke-gemini (list (part +analyze-file-form-prompt+) (part (car forms))))) (results '()) (more (cdr forms))) (when verbose (format t "~&~%;; Analyzing form:~%~%~a~%--------~%~a~%~%" current-form analysis)) (if (null more) (reverse (acons current-form analysis results)) (iter (car more) (continue-gemini (list (part +analyze-file-form-prompt+) (part (car more)))) (acons current-form analysis results) (cdr more)))) (setq aborted nil)) (when verbose (let ((total-prompt-tokens (- *accumulated-prompt-tokens* start-prompt-tokens)) (total-response-tokens (- *accumulated-response-tokens* start-response-tokens)) (elapsed-time (- (get-universal-time) start-time))) (format t "~&;; Analysis ~:[complete in~;aborted at~] ~d seconds.~%" aborted elapsed-time) (format t ";; Prompt tokens: ~7,d~%" total-prompt-tokens) (format t ";; Response tokens: ~7,d~%" total-response-tokens))))))
The purpose of the analyze-file function is to read a specified Common Lisp file, extract its top-level forms, and then sequentially send each form to an LLM (Large Language Model) for detailed analysis. It sets up the LLM’s system instruction and a specific prompt for analyzing individual forms, dynamically adjusting the LLM’s temperature. The function also tracks and reports the total prompt and response tokens used, as well as the elapsed time for the analysis, even if an error occurs.
The docstring is concise but could be improved to explicitly state the return value.
The form follows best practices, including check-type for argument validation, let for dynamic scope of special variables (*temperature*, *system-instruction*), unwind-protect for guaranteed cleanup and reporting, and a let iter construct for robust iteration over the file’s forms. Error handling for files with no top-level forms is also present.
Here are two concrete, actionable suggestions for improvement:
- Minor Improvement: Enhance the docstring to include the return value.
- Reasoning: While the docstring describes the function’s action, it doesn’t explicitly state what the function returns. Knowing the exact structure of the return value is crucial for users of the function to correctly process its output.
- Actionable Suggestion: Update the docstring to:
```lisp
"Analyze a Common Lisp file one top-level form at a time. Returns a list of cons cells, where each cons cell is `(top-level-form . llm-analysis-result-string)`."
```
- Minor Improvement: Add a check-type for the forms returned by file-forms.
- Reasoning: The file-forms function is an external dependency. While it’s assumed to return a list of Lisp forms (s-expressions), explicitly checking that the result is indeed a list ensures robustness. If file-forms were to return something unexpected (e.g., a single atom or NIL when forms are present but not a list), this check-type would catch it early and provide a clearer error message than a subsequent car or cdr error.
- Actionable Suggestion: After the line (let ((forms (file-forms filename)) ...)), add:
```lisp
(check-type forms list)
```
The form is otherwise error-free and well-written.
;; Analysis complete in 150 seconds.
;; Prompt tokens: 152555
;; Response tokens: 14755
Conclusion
This approach works quite well. The LLM’s analysis is generally accurate and insightful. It often finds ways to improve the code without introducing bugs. It also often finds potential bugs that I had not considered. The analysis is not perfect, but it is a useful tool for improving the quality of Common Lisp code.
An important thing to keep in mind is that the tool makes an effort to find something to say about each top-level form. If you follow its suggestions, you could endlessly “improve” your code. It also suggests “improvements” where we have deliberately decided to do things in a non-standard way. You need to exercise judgment in deciding which suggested improvements are worth making.
The analysis is slow enough that you would not want to use it as part of a tool chain, but it is fast enough that you can use it regularly. It does consume tokens fairly rapidly, so a lot of analysis will add up.
Joe Marshall — Puzzle: Read Preserving Comments
@2025-09-13 13:09 · 34 days agoHere is a little Lisp puzzle for you:
I want to read a Lisp file and produce a list of the top-level forms in the file, but I want to preserve comments. Any comment that appears on a line within a top-level form should be associated with that form. Comments that appear by themselves at top level should be associated with the following top-level form. For example, if the file contains:
;;; -*- Lisp -*-
;; This file is a test
(in-package "CL-USER")

;; This is the test function:
(defun foo (arg)
  ;; This function does nothing
  (declare (ignore arg))
  nil) ;; Return nil

(defparameter *x* 42)
(defparameter *y* 99) ;; This is y

;; The BAR function doesn't do anything either
(defun bar () nil)

;; Trailing comment in file.
Then I want to produce the following list:
'(";;; -*- Lisp -*- ;; This file is a test (in-package \"CL-USER\") " " ;; This is the test function: (defun foo (arg) ;; This function does nothing (declare (ignore arg)) nil) ;; Return nil " " (defparameter *x* 42) " "(defparameter *y* 99) ;; This is y " " ;; The BAR function doesn't do anything either (defun bar () nil) " " ;; Trailing comment in file. ")
This puzzle is not difficult, but it is tricky to get the edge cases right.
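For readers who want a hint, here is one possible approach (mine, not necessarily the author's): let read find the end of each form in the file text, then extend the chunk to the end of that line so trailing same-line comments stay attached, while leading comments are swept up by the next chunk.

```lisp
;; READ-FROM-STRING finds the end of each top-level form; the chunk runs from
;; where the previous chunk ended (so leading comments are included) to the
;; end of the line on which the form ends (so trailing same-line comments are
;; included).  Trailing comments at EOF become a final chunk.  Edge cases
;; such as two forms on one line are not handled.
(defun file-forms (filename)
  (let ((text (uiop:read-file-string filename))
        (eof (list :eof))
        (start 0)
        (chunks '()))
    (loop
      (multiple-value-bind (form end)
          (read-from-string text nil eof :start start)
        (when (eq form eof)
          (let ((tail (subseq text start)))
            (unless (string= "" (string-trim '(#\Space #\Tab #\Newline) tail))
              (push tail chunks)))
          (return (nreverse chunks)))
        (let* ((line-end (position #\Newline text :start end))
               (chunk-end (if line-end (1+ line-end) (length text))))
          (push (subseq text start chunk-end) chunks)
          (setf start chunk-end))))))
```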
Joe Marshall — The LLM Experience Gap
@2025-09-09 12:08 · 38 days agoIf you're a senior programmer, LLMs are nothing to worry about. They aren't anywhere near smart enough to replace you, and they are a powerful tool to help you get stuff done.
If you're a junior programmer, you're going to have to step up your game, a lot. LLMs go astray. They make stuff up. They hallucinate. They make dumb mistakes. A senior programmer can spot this stuff a mile away: they've made the same dumb mistakes themselves. A junior programmer hasn't. Let me give a concrete example.
Mutation
There are three attitudes towards mutation in programming. The first is "so what?" This is the attitude we all start with. But as we start building large systems, we find that mutation opens up a can of worms involving object identity, synchronization, race conditions, etc. The second attitude is "be very careful", and the third is "avoid it like the plague". Hard core functional programmers take the third approach, but most of the world seems to take the second approach. If you're going to be careful about mutation, you can either avoid it unless you know it is safe, or you use it unless you think it is dangerous. Most people seem to take the latter approach. It takes a fair amount of experience to know what situations cause problems, and to recognize when you are in one of those situations.
If you have mutation errors, your program starts acting pretty weirdly. There are other reasons your program might act weirdly, but if you notice that it misbehaves only under load, and that the tests work fine, you might suspect a mutation problem. These can be fiendishly difficult to track down, and you need experience in debugging them to be any good at fixing them. A senior programmer will have seen a few of these. A junior programmer probably hasn't. A senior programmer has probably fixed a few of these that a junior programmer missed.
LLMs are not very good at reasoning about mutation. (They don't really reason about your code at all.) One thing you learn pretty early on as a programmer is that you should not mutate data structures you are traversing. Modifying a collection as you are attempting to iterate over it is not going to work (unless you are very clever). All sorts of weird behaviors can occur if items are being inserted and removed from a collection as you are trying to walk over it. We've all seen these. And we also know that if items aren't being processed in the order we expect, there is a fair bet that someone is modifying the collection that contains them. This isn't something you are taught in class; you learn it the hard way.
I had the LLM code up a process to modify dozens of git repositories that were held in subdirectories. It was working fine until it wasn't. It got into an infinite loop where it was repeatedly processing the same repository over and over. Fortunately, I had two things in my favor: First, I had gone out of my way to make sure my code was idempotent, so running it over and over didn't do any harm, and second, I recognized that the problem was probably due to the LLM naively modifying the file system as it was trying to traverse it. It took me all of five minutes to fix the issue. Would a junior programmer just out of college have instructed the LLM to write idempotent code? Would he have spotted the LLM modifying the file system while traversing it?
Conclusion
But LLMs are pretty good at being code monkeys. With the right supervision, they can crank out a lot of code. A junior programmer is going to have to work hard to provide the right supervision. Learning to program X in 20 days isn't going to cut it. The LLM already knows 20 days worth of X. The senior programmer brings a few years of experience to the table. What does the junior programmer bring? The real value is not just in writing code, but in recognizing when the code is flawed and knowing how to fix it—a skill that can't be learned in 20 days.
Joe Marshall — Analyzing Common Lisp
@2025-09-08 16:54 · 39 days agoWhile the LLM has some difficulty in generating Common Lisp, it is quite effective in analyzing Common Lisp. There are a couple of tricks that make this work better. Naturally, a good set of instructions about what sort of analysis you want is important, but since a Common Lisp program often uses a stratified design, where each level of abstraction is implemented in terms of the levels below it and supports the level above it, you need to inform the LLM of the dependencies between the various parts of the program. I've found one way to make this work, but I'm sure there are others.
The “system instruction” to the LLM is an extra-conversation prompt that is sent to the LLM to prime it for the actual prompt.
Once you have the system instruction set up, you then write the prompt itself. The prompt is another 'content' item with multiple parts. It makes sense to analyze one file at a time, so the prompt will contain several parts: the analysis instructions and the source code of the file to be analyzed. In addition, I include the contents of the .asd file as a part so that the LLM knows what modules are depended upon. Without the system definition, the LLM will complain (a lot) about missing dependencies.
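As a rough sketch of how such a prompt might be assembled (my reconstruction, using the part, content, and invoke-gemini helpers that appear in these posts; the function name, pathname arguments, and instruction wording are illustrative):

```lisp
;; Assemble a multi-part prompt: the system instruction primes the model,
;; and the parts supply the .asd file (context only) followed by the source
;; file to analyze.
(defun analyze-one-file (system-name asd-pathname source-pathname)
  (let ((*system-instruction*
          (content :parts (list (part (format nil "You are a world class Common Lisp programmer analyzing the '~a' system." system-name)))
                   :role "system")))
    (invoke-gemini
     (list (part "The following system definition is provided for context only; do not analyze it.")
           (part (uiop:read-file-string asd-pathname))
           (part "Analyze the following source file.")
           (part (uiop:read-file-string source-pathname))))))
```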
Here is a sample analysis:
;; Gemini API finished in 66.12 seconds. ;; Prompt Tokens: 7,243 ;; Thoughts Tokens: 2,307 ;; Candidate Tokens: 1,901 ;; Accumulated Prompt Tokens: 750,427 ;; Accumulated Response Tokens: 60,064 ;; **My Initial Thoughts on the Code Analysis Task** ;; ;; Okay, I've got the task laid out. I need to dive into this Common Lisp code from ;; the `gemini` system's `analyze` file and provide a thorough analysis. My goal ;; is to understand what's happening, identify any potential issues, and come up ;; with concrete suggestions for improvement. I'm viewing this from the ;; perspective of an expert Common Lisp programmer, so I'll be looking for nuances ;; and areas where we can make the code even better. ;; ;; First, I see two `defparameter`s: `+analyze-source-system-instruction+` and ;; `+analyze-source-file-prompt+`. These are defining the system instruction and ;; main prompt for the LLM. The system instruction establishes the LLM's role as ;; an expert Common Lisp programmer. That's a smart starting point - framing the ;; analysis with a specific persona can really influence the type of feedback we ;; get. The prompt itself looks comprehensive, asking the LLM to cover ;; functionality, features, best practices, bugs, and improvements. ;; ;; Then there's the `lisp-file-contents` function. It's pretty straightforward: it ;; reads a Lisp file and formats its content as a string. Seems like a crucial ;; utility for feeding the LLM the actual code to analyze. The use of is a common ;; practice for formatting code blocks within prompts. ;; ;; I'll need to pay close attention to the formatting and any potential issues that ;; may arise. I'll also keep an eye out for any inconsistencies in the prompt ;; structure, as even minor errors can affect the LLM's output. I'm keen to see how ;; effectively these prompts are formulated and how we could make them even more ;; precise. I'm ready to dig deeper and provide a really valuable analysis. ;; ;; **My Initial Thoughts on the Code Analysis Task** ;; ;; Okay, I've got the task laid out. I need to dive into this Common Lisp code ;; within the `GEMINI` system's `analyze` file and provide a thorough analysis. My ;; goal is to understand what's happening, identify any potential issues or areas ;; for optimization, and ultimately, offer some actionable suggestions to improve ;; the code. ;; ;; Here's how I see it: This looks like a system designed to automate code ;; analysis, which is a fascinating area. I immediately note the use of format ;; strings. My initial impressions are that the developer seems to have been fairly ;; structured. ;; ;; I'm starting by taking stock of the key components: ;; ;; * The `+analyze-source-system-instruction+` and `+analyze-source-file-prompt+` ;; parameters set up the context and instructions for the Large Language Model ;; (LLM). It appears the system is leveraging an LLM to perform the code analysis. ;; Setting the LLM's persona as a world-class Common Lisp programmer is a smart ;; strategy to get high-quality feedback. ;; * The `lisp-file-contents` function handles the necessary file I/O to read and ;; prepare the code for the LLM. It appears to be relatively basic, which is ;; usually a positive sign. Properly formatting the code with backticks to prevent ;; interpretation in the LLM is another good idea. ;; ;; **Analyzing the 'analyze' File for Gemini System** ;; ;; As an expert in Common Lisp, I've taken a close look at the provided code within ;; the 'gemini' system. 
Here's my analysis: ;; ;; **Functionality and Purpose:** This code is designed to automate the process of ;; analyzing Common Lisp code using an LLM (likely Gemini in this context). The ;; functions `analyze-system-definition` and `analyze-source` are the core ;; components. They prepare prompts and send requests to the LLM, effectively ;; acting as wrappers for an `invoke-gemini` function, which is likely responsible ;; for making the actual API calls to the LLM. It's a pragmatic approach to ;; leveraging the power of an LLM for code review and analysis. ;; ;; **Notable Features and Patterns:** The use of `defparameter` for prompt ;; templates is excellent for maintainability. The structured way it provides ;; context to the LLM (like setting a persona as a "world-class Common Lisp ;; programmer") is a smart move. The inclusion of system definitions alongside ;; individual files, especially in the `analyze-source` function, is a thoughtful ;; way to give the LLM the information it needs for a comprehensive analysis. The ;; utilization of the `series` library for stream processing within ;; `lisp-file-contents` is efficient, though a bit less common in this context. The ;; reliance on ASDF for system and component location is, of course, absolutely ;; essential. ;; ;; **Areas for Improvement:** ;; ;; 1. **Robust Error Handling:** The code needs error handling. If ;; `asdf:find-system` or `asdf:find-component` fails, the current implementation ;; could lead to nasty errors. I'd add checks to ensure that the system and ;; component exist before calling `asdf:system-source-file` and ;; `asdf:component-pathname`. *Critical - This is a fundamental safeguard.* ;; 2. **Consolidated Setup:** There's duplication in setting up ;; `*system-instruction*` and `*include-thoughts*`. I'd refactor that into a helper ;; function or common binding to keep the code DRY and improve maintainability. ;; *Important - Redundancy can lead to subtle bugs.* ;; 3. **Enhance `lisp-file-contents` Readability:** While series is efficient, the ;; `lisp-file-contents` function could benefit from some clarification. I'd ;; consider adding comments to explain the stream processing logic in more detail. ;; Perhaps even re-writing it with more common standard functions. *Moderate - ;; Clarity aids maintainability.* ;; ;; **Code Analysis: Gemini 'Analyze' File** ;; ;; Alright, let's dissect this `analyze` file. It's clear this is designed to be ;; the brains of the operation, tasked with feeding Common Lisp code to the Gemini ;; LLM for analysis. My immediate impression is that it's well-structured, but ;; let's dive into the specifics. ;; ;; First off, it seems the core concept is pretty straightforward: provide context ;; (the LLM's role, the system being analyzed), the specific code, and let the LLM ;; do its magic. This approach is sound. We have two key functions, ;; `analyze-system-definition` and `analyze-source`, that are almost identical in ;; their structure. ;; ;; The use of dynamic variables, especially `*system-instruction*` and ;; `*include-thoughts*`, is a smart move. Setting up the LLM's persona as a ;; "world-class Common Lisp programmer" with that format string is a crucial part ;; of the process. It's all about priming the pump. Likewise, enabling ;; *include-thoughts* is likely essential to see how Gemini is working through the ;; code. ;; ;; Now, that `lisp-file-contents` function... that's where things get interesting. 
;; Using the `series` library for this is a valid approach, but it's not a go-to ;; for most. It reads the source file line by line and wraps it in markdown code ;; blocks. This technique is going to work. The `series` library is more of a ;; preference but the outcome is correct. ;; ;; **Here's where my expert eye spots the potential issues and areas for ;; improvement:** ;; ;; 1. **Error Handling:** The Achilles' heel here is how the functions handle ASDF ;; errors. What if ASDF can't find the system or the file? Right now, it looks like ;; it would crash. I'd definitely add checks for the existence of the system and ;; files before calling `asdf:system-source-file` and `asdf:component-pathname`. ;; This is **critical** to prevent abrupt failures. ;; ;; 2. **Code Duplication:** The identical `let` blocks in both ;; `analyze-system-definition` and `analyze-source` for setting up the environment ;; are a code smell. This is a minor point, but it could be cleaned up by ;; factoring that setup into a dedicated function. ;; ;; 3. **Optimize the prompt construction:** The way prompts are constructed with ;; the different parts feels rigid. There is the opportunity to use a more flexible ;; prompt structure that allows for the addition of information. I would add a ;; *include-information* list to `invoke-gemini` that adds different prompt ;; templates. ;; ;; **Code Analysis: Gemini 'analyze' file** ;; ;; Okay, so I've taken a look at this `analyze` file within the `gemini` system. ;; My primary focus is to act as a seasoned Common Lisp expert and provide a ;; detailed review of the code's functionality and potential improvements. ;; ;; Essentially, this code acts as an interface to the Gemini LLM for automated code ;; analysis. It prepares prompts and context for the LLM to analyze Common Lisp ;; source files and ASDF system definitions. The functions, ;; `analyze-system-definition` and `analyze-source`, follow a consistent pattern: ;; They set up dynamic variables for instructions and then call `invoke-gemini`, ;; providing it with both instructions and code snippets in the form of a list of ;; "parts". This context includes the target file and the system's .asd definition ;; for dependency details. ;; ;; I see some solid design choices here. Using `defparameter` for prompt templates ;; promotes maintainability and clarity. The inclusion of the system definition ;; file alongside the individual source file for analysis in `analyze-source` is a ;; brilliant move to give the LLM better context about dependencies. The use of ;; `series` in `lisp-file-contents` is a clever way to handle file I/O and string ;; manipulation. ;; ;; However, there are a few areas that could be improved: ;; ;; 1. **Error Handling for File Existence (Critical):** I'd be concerned about ;; what happens when the specified system or file doesn't exist. The code needs ;; robust error handling to gracefully manage these scenarios. Adding checks using ;; the `asdf:find-system` and `asdf:find-component` functions would be a good ;; start. A robust system would be needed here. ;; 2. **Consolidated Setup (Minor):** The duplication in setting ;; `*system-instruction*` and `*include-thoughts*` between ;; `analyze-system-definition` and `analyze-source` is not ideal. I'd consider ;; extracting this setup into a separate function to keep things DRY. ;; 3. 
**Refactor `lisp-file-contents` (Low, potentially personal preference):** ;; While the use of `series` is functional, the logic within `lisp-file-contents` ;; is a little less readable. I might consider alternative methods, but given the ;; target of the Gemini LLM and the target audience of Common Lisp developers, it ;; might be more of a personal style preference than a "must-fix." ;; ;; Here are my recommendations in order of importance: ;; ;; 1. **Critical:** Add error handling to ensure `asdf` finds the system and ;; components; wrap the relevant calls in `handler-case` or `ignore-errors`. ;; 2. **High:** Create a function to encapsulate the duplicated setup of the ;; dynamic variables for a cleaner structure. ;; 3. **Low:** Consider the use of a `with-open-file` structure and traditional ;; `read-line` loop to build the string in `lisp-file-contents` if readability is a ;; primary concern. The current implementation is valid, but the preference of a ;; Common Lisp developer may be relevant here. ;; ;; **Gemini Code Analyzer Analysis** ;; ;; Okay, let's break this down. From what I can see, this `analyze` file within the ;; `gemini` system is designed as a specialized interface for LLM-powered code ;; analysis of Common Lisp and ASDF systems. It aims to generate comprehensive ;; reviews of code, including functionality summaries, identification of noteworthy ;; features, deviations from best practices, potential bugs, and suggestions for ;; improvement. ;; ;; The core idea is to structure prompts for an LLM (presumably Gemini, given the ;; name of the system) in a way that is highly effective. It achieves this by ;; setting the scene with a system-level instruction that consistently frames the ;; LLM as a world-class Lisp programmer. Then, it uses formatted prompts and the ;; code itself as input. ;; ;; It's essentially two functions - `analyze-system-definition` and ;; `analyze-source` - that handle different levels of granularity. They both use ;; `invoke-gemini`, but they construct the input prompts differently. ;; `analyze-system-definition` focuses on the system's `.asd` file, while ;; `analyze-source` adds the system file as context to the specific file. ;; ;; Some key observations are: ;; ;; * **Prompt Engineering:** The use of `defparameter` for prompt templates (like ;; `+analyze-source-system-instruction+` and `+analyze-source-file-prompt+`) and ;; `format` to incorporate dynamic values is well-considered. It makes the prompts ;; maintainable and easy to understand. ;; * **Context is Key:** The `analyze-source` function smartly includes the ;; system's definition file as part of the prompt when analyzing a specific source ;; file. This gives the LLM vital information about dependencies. It's great to see ;; that level of attention to detail! ;; * **Series Usage:** `lisp-file-contents` leverages the `series` library for ;; reading and formatting Lisp files. This is a solid approach, although it might ;; make the function slightly less immediately readable for someone who isn't ;; already familiar with `series`. ;; ;; Now, for some actionable improvements: ;; ;; 1. **Robust Error Handling (Critical):** I would recommend adding robust error ;; handling. What happens if a system or source file can't be found by ASDF? The ;; code should explicitly check these conditions and handle them gracefully, ;; perhaps by returning an error message or logging a warning. The ;; `asdf:find-system` and `asdf:find-component` functions would be excellent ;; candidates for these checks. ;; 2. 
**Refactor Dynamic Variable Binding (Moderate):** There's a little bit of ;; repetition in how `*system-instruction*` and `*include-thoughts*` are set in ;; both `analyze-system-definition` and `analyze-source`. A simple helper function ;; or a `let*` nesting could avoid the small redundancy, making the code more ;; readable. ;; 3. **Consider `lisp-file-contents` Readability (Optional, depending on team ;; preference):** While the `series` implementation in `lisp-file-contents` is ;; perfectly functional, and even idiomatic Lisp, it's slightly less ;; straightforward than a `with-open-file` approach for simple file reading. ;; Depending on your team's familiarity with `series`, you might want to consider ;; alternative implementations for enhanced readability, especially if the core ;; intent is just to get the entire file's content as a string. However, I ;; acknowledge the benefit of the pipeline style of `series`. ;; ;; That's my analysis. The code is well-structured and demonstrates a clear ;; understanding of the principles of effective prompting. Implementing those ;; improvements would further solidify its robustness and readability. ;; ;; **Code Analysis: Gemini Integration for Common Lisp** ;; ;; Okay, so I've taken a look at this `analyze` file within the `gemini` system, ;; and here's my take: this is a solid framework for leveraging the Gemini LLM for ;; automated Common Lisp code analysis. ;; ;; My core function is clear: to read Lisp code, format it for the Gemini model, ;; and then give Gemini the tools to actually *do* something useful with it. The ;; `lisp-file-contents` function handles the grunt work of reading files. The fact ;; that it is using the `series` library for stream processing is interesting; it's ;; a powerful tool but might be a tad obscure for some Lisp developers, at least at ;; first glance. ;; ;; The core of the logic is in the `analyze-system-definition` and `analyze-source` ;; functions. They follow a predictable pattern: they load the code, feed it to the ;; Gemini model with a structured prompt, and let the model work its magic. The ;; incorporation of ASDF for locating the system and source files is essential. I ;; like that the system definition file is *always* provided to Gemini as context, ;; even when analyzing individual files within that system; it gives Gemini a ;; broader understanding. The dynamic variables and the clear use of `format` for ;; prompt construction demonstrate a good understanding of best practices. ;; ;; **Areas for improvement:** ;; ;; 1. **Error Handling:** The *most* important thing is to shore up the ASDF calls ;; to avoid crashes. If someone gives a system or file name that's not valid, the ;; code should gracefully handle that rather than just throwing an error deeper ;; down the stack. That's a critical bug to squash. ;; ;; 2. **Duplication:** The setup for the LLM interaction (prompt string, including ;; Gemini feedback) in `analyze-system-definition` and `analyze-source` is ;; virtually identical. Consolidating this into a helper function or macro would ;; make the code cleaner. ;; ;; **Here's a list of three critical actionable suggestions for improvement:** ;; ;; 1. **Critical: Error Handling on ASDF Lookup:** Wrap the calls to ;; `asdf:system-source-file` and `asdf:component-pathname` within checks to ensure ;; the system and/or component actually exist before proceeding. Raise a proper ;; error if they do not. This will catch invalid system and file names. ;; 2. 
**Nice to have: Consolidate the Gemini Prompt Preparation**: Refactor the ;; setup of `*system-instruction*` and `*include-thoughts*` into a helper macro or ;; function. This reduces duplication. ;; ;; I think these improvements will enhance the robustness and clarity of the ;; integration, allowing it to provide more reliable code analysis. ;; ;; **Gemini Code Analysis: My Perspective** ;; ;; Alright, let's dissect this. As a seasoned Common Lisp developer, I see a ;; well-structured approach to code analysis using the Gemini LLM. The core idea is ;; sound: provide the LLM with the necessary context and prompts to effectively ;; analyze Lisp code. The use of format strings and dynamic variables for ;; instructions is a good practice for maintainability. The file reading with ;; `series` is a powerful tool, although I'll admit I sometimes find the syntax a ;; bit less readable at a glance. Let's delve deeper. ;; ;; The `analyze` file provides two key functions, `analyze-system-definition` and ;; `analyze-source`, which essentially wrap the `invoke-gemini` call, preparing the ;; ground for the analysis. They set up the LLM with a clear persona ("world class ;; Common Lisp programmer") and context (both system definition and source file ;; contents). ;; ;; I like how the `analyze-source` function intelligently includes the system ;; definition file alongside the specific source file. This gives the LLM the ;; broader picture, which should lead to more accurate and insightful analysis. ;; That's good thinking, that is. ;; ;; The use of ASDF for system and file location is expected in the Common Lisp ;; ecosystem and fits the task well. It's a key part of how we manage project ;; dependencies and file structures. ;; ;; However, I'm already spotting a few areas where we can improve. First, the error ;; handling, or rather lack thereof, around ASDF operations is a significant ;; concern. What happens if a system or component doesn't exist? A hard crash isn't ;; ideal. We need to be more robust there. ;; ;; Second, the `let` binding of the LLM instructions seems redundant. It's present ;; in both functions, which is a signal of code duplication. We should abstract ;; that out. ;; ;; Finally, the magic strings for the markdown delimiters in `lisp-file-contents` ;; are minor but good practice points: define those constants. ;; ;; Here's my actionable plan: ;; ;; 1. **Critical Action:** **Robust ASDF error handling.** We absolutely need ;; checks to ensure that the systems and components are found before proceeding. ;; Any failure there, raise an error. ;; 2. **Next up:** **Refactor the `*system-instruction*` and `*include-thoughts*` ;; setup into a macro.** This will reduce duplication and enhance code clarity. The ;; `content` struct isn't clear to someone who doesn't use Gemini or is new to this ;; code. Make the code's intent clear. ;; 3. **Third:** Make those magic markdown string into constants. ;; ;; By addressing these points, we'll make this code more robust, readable, and ;; maintainable. This approach leverages the power of LLMs and, with a few tweaks, ;; can be a solid foundation for automated Common Lisp code analysis. ;; ;; **Code Analysis Summary: Gemini's Lisp Analyzer** ;; ;; As an expert in Common Lisp, I've just reviewed the `analyze` file within the ;; `gemini` system. My assessment centers on how it interacts with the Gemini LLM ;; for automated code analysis. It's designed to take a system or specific file, ;; prepare prompts, and then submit them to the LLM for evaluation. 
;; ;; **Functionality and Key Features:** ;; ;; My analysis shows that `analyze` wraps the `invoke-gemini` function, customizing ;; the input to effectively elicit detailed code reviews from the LLM. It focuses ;; on several key areas, including: ;; * Setting the LLM's persona as an expert Common Lisp programmer. ;; * Formatting the file contents within Markdown code blocks (` ```lisp` and ;; ```` `) for clear parsing. ;; * Constructing prompts that ask for summaries, feature identification, ;; identification of best practice violations, highlighting potential bugs, and ;; suggestions for improvement. ;; * Leveraging ASDF to locate system definition files and components, providing ;; the LLM with the context of dependencies. ;; * Dynamically injecting file contents and system information into the prompts. ;; ;; **Observations:** ;; ;; I've noted the consistent pattern used for LLM interaction: Setting up a ;; "system" role for the LLM, enabling the `*include-thoughts*` flag, and ;; constructing the prompt from text instructions and code snippets. Also, the use ;; of `series` in `lisp-file-contents` to stream file data is powerful, but ;; readability is something to consider. ;; ;; **Areas for Improvement:** ;; ;; 1. **Critical:** There's a significant risk of errors if ASDF can't find the ;; system or file. This could lead to crashes when functions expect valid ASDF ;; components. The current code lacks proper error handling in cases of invalid ;; system names or file paths, leading to unexpected behaviour if incorrect ;; parameters are provided. ;; 2. **Nice to have:** The common code used to bind the dynamic variables ;; `*system-instruction*` and `*include-thoughts*` is duplicated in both ;; `analyze-system-definition` and `analyze-source`. This duplication creates the ;; potential for maintenance issues. ;; 3. **Nice to have:** The use of literal strings for Markdown formatting could ;; be made more consistent and less brittle. ;; ;; **Actionable Suggestions:** ;; ;; 1. **Enhance Error Handling:** I strongly recommend adding robust error ;; checking around ASDF calls in both `analyze-system-definition` and ;; `analyze-source`. If `asdf:find-system` or `asdf:find-component` fails, an ;; appropriate error condition should be signaled instead of allowing the code to ;; continue with `NIL` values. This will considerably improve the robustness of the ;; system. ;; 2. **Refactor Common LLM Setup:** To reduce redundancy and enhance ;; maintainability, I propose refactoring the code that sets the ;; `*system-instruction*` and `*include-thoughts*` variables. Consider a helper ;; macro like `with-llm-analysis-context` to encapsulate this. ;; 3. **Declare `defconstant`s for Markdown Fences:** Replacing the hardcoded " ;; ```lisp" and " ``` ` " strings with constants will make the formatting process ;; more easily adaptable. ;; ;; **Code Analysis Summary: A Common Lisp Perspective** ;; ;; Alright, let's dive into this `analyze` file. My primary function here is to ;; prepare code for analysis by the Gemini LLM, acting as a bridge between the ;; system's structure and the LLM's analytical capabilities. The goal is to provide ;; a "world-class Common Lisp programmer" persona to the LLM to get detailed ;; insights into code quality, potential issues, and areas for improvement. ;; ;; The file contains two main entry points: `analyze-system-definition` and ;; `analyze-source`. These functions act like wrappers around the `invoke-gemini` ;; function. 
They're both carefully designed to set the stage for LLM analysis, ;; setting the right context and persona, and including the relevant code content ;; in the prompt. I appreciate the careful consideration given to including the ;; system definition file as context, even when analyzing individual source files. ;; This provides the Gemini LLM with important dependency information. ;; ;; The core of the LLM interaction is straightforward: assemble a prompt with ;; instructions and code content, and send it to Gemini. I can see that the prompts ;; are constructed with a good mix of instructions and contextual information, ;; thanks to the format strings that dynamically insert the system and file names. ;; I also see that the use of `*system-instruction*` and `*include-thoughts*` ;; (dynamic binding) looks solid. The code is carefully crafted to present the ;; Gemini LLM with a comprehensive view of the code and its context. ;; ;; **Here's the breakdown of my thoughts:** ;; ;; * **Functionality:** This is a neat tool for automatically analyzing Common ;; Lisp code by leveraging an LLM. It helps provide suggestions for code ;; improvements, potential bugs, and follows best practice deviations. ;; * **Notable Features:** The code has a well-defined interaction pattern with ;; the LLM. It focuses on clarity, context, and good use of the `series` library ;; for reading the files. ;; * **Areas for Improvement:** ;; 1. **Robust Error Handling:** We need to explicitly handle cases where a ;; system or file component is not found via ASDF. ;; 2. **Duplicated setup:** I think we could clean up the duplicated binding ;; of dynamic variables by refactoring this setup into a macro or separate ;; function. ;; 3. **Readability/Idiomaticity:** The use of `series` is somewhat ;; specialized. It may be helpful to consider if a more readable approach (for a ;; generalist reader) wouldn't be as good or preferable. ;; 4. **Magic strings:** The use of literal strings in markdown can be ;; refactored to constants. ;; ;; **Actionable Suggestions:** ;; ;; 1. **Crucial:** Add checks for successful ASDF system and component lookup, or ;; we risk runtime errors. ;; 2. **Nice to have:** Abstract the repetitive initialization of ;; `*system-instruction*` and `*include-thoughts*` with a macro to reduce ;; redundancy and increase readability. ;; 3. **Nice to have:** Replace literal strings in markdown and constants. ;; ;; I believe these adjustments will make the `analyze` file more robust, ;; maintainable, and ultimately, more useful for performing high-quality Common ;; Lisp code analysis using the Gemini LLM. ;; ;; **Code Analysis: Automated Common Lisp Analysis with Gemini** ;; ;; Okay, as an expert in this field, I see a well-structured approach to using the ;; Gemini LLM for automated Common Lisp code analysis within the `GEMINI` system. ;; My primary focus is on how it's engineered, specifically how it leverages ASDF ;; and `series`. ;; ;; The primary function here is to give the LLM the tools to analyze Common Lisp ;; code and system definitions. The system works by providing code formatted for ;; the LLM. It includes system instructions to help the LLM with the task. This is ;; smart: by including the ASDF system definition, the LLM gains context about the ;; code's dependencies, something that would be invaluable for thorough code ;; analysis. The `series` library is smartly used for file parsing and formatting, ;; which is an efficient approach, though it could be less immediately obvious to ;; all. 
;; ;; **Notable Features:** ;; ;; * **Prompt Engineering:** The use of `defparameter` for prompts and `format` ;; for injecting values ensures clear and maintainable prompts. That's a key part ;; of interacting with an LLM. ;; * **Contextualization:** Providing the ASDF system definition file as context ;; to the LLM is a smart move, greatly increasing the quality of the analysis the ;; LLM can provide. ;; * **Modularity:** The clear separation of functions (`lisp-file-contents`, ;; `analyze-system-definition`, `analyze-source`) promotes maintainability and ;; flexibility. ;; * **Dynamic Variable Binding:** Using `let` for temporary dynamic variable ;; binding (`*system-instruction*`, `*include-thoughts*`) is appropriate and a ;; standard pattern. ;; ;; **Areas for Improvement:** ;; ;; 1. **Critical Issue: Error Handling:** The lack of error handling around ASDF ;; calls is a glaring weakness. If ASDF can't find a system or component, it'll ;; crash. That needs to be addressed immediately. ;; 2. **Duplication:** The similar setup of the `*system-instruction*` and ;; `*include-thoughts*` dynamic variables in both `analyze-system-definition` and ;; `analyze-source` is a minor code smell. I'd refactor that. ;; 3. **Readability:** While I appreciate `series`, the file-reading part within ;; `lisp-file-contents` could be made more approachable. While its functional ;; structure is useful for a Lisp LLM, a more straightforward `with-open-file` ;; could be considered. ;; ;; **Actionable Suggestions:** ;; ;; 1. **Robust Error Checking (Critical):** Implement error handling to gracefully ;; manage situations when a system or component isn't found by ASDF. This requires ;; checking the return values of `asdf:find-system` and `asdf:find-component`. ;; Signal an error if these fail. ;; ;; 2. **Refactor Context Setup (Nice to have):** Create a macro (or function) to ;; encapsulate the common LLM context setting in `analyze-system-definition` and ;; `analyze-source`. This will reduce repetition and improve code readability. ;; ;; 3. **Use Constants for Markdown Fences (Nice to have):** Replace the hardcoded ;; markdown fence strings (````lisp` and ``` `) with `defconstant`s. It'll make ;; code more maintainable. ;; ;; **A Detailed Analysis of the 'analyze' File** ;; ;; My immediate assessment is that this code implements a system for automated ;; Common Lisp code analysis using the Gemini LLM. It's essentially a ;; well-structured wrapper around `invoke-gemini`, designed to prepare the prompts ;; and context for effective code analysis. I see a clear interaction pattern with ;; the LLM: setting a consistent persona, enabling thought inclusion, and ;; constructing prompts with relevant code context. ;; ;; **Key features I recognize:** ;; ;; * **Prompt Engineering:** The use of `defparameter` for prompt templates is a ;; good practice, promoting maintainability and clarity. ;; * **Contextual Awareness:** The `analyze-source` function's inclusion of the ;; system definition file, even for specific source files, shows a thoughtful ;; approach to providing the LLM with sufficient context. ;; * **ASDF Integration:** The reliance on ASDF for system and component location ;; is essential for any Common Lisp code analysis tool. ;; * **Series Usage:** I notice a reliance on `series` for file reading and ;; string manipulation. While powerful, it can sometimes make code less immediately ;; accessible to those unfamiliar with the library. ;; ;; **Areas for improvement that I immediately spot:** ;; ;; 1. 
**Robustness and Error Handling:** The code needs better error handling for ;; ASDF operations. What happens if a system or component is not found? Currently, ;; the code would likely fail. ;; 2. **Code Duplication:** The identical `let` bindings for ;; `*system-instruction*` and `*include-thoughts*` in both ;; `analyze-system-definition` and `analyze-source` is a simple, easy refactor. ;; 3. **Readability in `lisp-file-contents`:** While the `series` implementation ;; works, a more straightforward `with-open-file` loop might improve readability ;; without sacrificing much, but it's arguable. ;; 4. **Magic Strings:** Using `defconstant` for the markdown formatting strings ;; would improve maintainability. ;; ;; **My actionable recommendations, given my experience:** ;; ;; 1. **Critical: Add robust error handling to ASDF calls.** Ensure that ;; `asdf:find-system` and `asdf:find-component` return valid values before using ;; them. Signal an error or return a meaningful value if the system or component is ;; not found. This is a crucial step to prevent runtime errors and improve the ;; tool's reliability. ;; 2. **Nice to have: Create a helper macro or function to refactor the common LLM ;; setup.** Encapsulating the repeated `let` binding for `*system-instruction*` and ;; `*include-thoughts*` will reduce duplication and improve code cleanliness. A ;; macro like `with-llm-analysis-context` seems appropriate for this purpose. ;; 3. **Nice to have: Use `defconstant` for markdown fence strings.** Defining ;; constants for the markdown fence strings (````lisp` and ``` `) enhances ;; readability and makes it easier to modify them in one place if needed. ;; ;; These changes will make the code more robust, readable, and maintainable. I ;; would prioritize the error handling as the most critical improvement, as it ;; directly impacts the tool's stability. ;; ;; **Code Analysis: Gemini System Analyzer** ;; ;; My primary goal here is to enable automated Common Lisp code analysis using the ;; Gemini LLM. I've built two key functions, `analyze-system-definition` and ;; `analyze-source`, to facilitate this. They follow a standard pattern: they ;; format prompts to set the LLM's persona as a seasoned Common Lisp programmer, ;; providing context from the ASDF system definition file. ;; ;; `lisp-file-contents` is responsible for fetching the raw content of Lisp files, ;; formatting it with markdown. I'm using `series` for that, although in hindsight, ;; I'm a bit torn about its readability. I am leveraging `ASDF` extensively for ;; file location, which is a good practice. ;; ;; **Key Observations:** ;; ;; * I am clearly setting up a context for code understanding. ;; * The use of `format` for prompt generation and injecting variables keeps it ;; adaptable. ;; * The `series` library for file processing is a good choice for this type of ;; problem, though readability is a small concern. ;; * I am doing dependency context-aware prompt generation, which should improve ;; accuracy. ;; * I use a `let` for setting dynamic variables, which is perfectly fine. ;; ;; **Areas for Improvement:** ;; ;; 1. **Robust Error Handling (Critical):** I **must** add error checking for ;; `ASDF` calls (e.g., `find-system`, `find-component`) to prevent crashing on ;; invalid input. This is non-negotiable. ;; 2. **Code Duplication:** I should refactor the repeated code for prompt setting ;; into a macro. This improves maintainability and readability. ;; 3. 
**Magic Strings:** Defining constants for the markdown markers (````lisp` ;; and ```` `) would be good practice and make it easier to maintain in the future. ;; ;; **Actionable Steps:** ;; ;; 1. **Critical:** Implement thorough error checking after ASDF calls in both ;; `analyze-system-definition` and `analyze-source`. Signal an error if a system or ;; component isn't found, with a helpful message. ;; 2. Create a `with-llm-analysis-context` macro to encapsulate the dynamic ;; variable bindings and reduce duplication. ;; 3. Define `defconstant` for `+lisp-code-fence-start+` and `+code-fence-end+`. ;; ;; I am confident that these changes will strengthen the robustness and ;; maintainability of the `analyze` file and, in turn, improve the quality of ;; Gemini's code analysis capabilities. ;; ;; **Code Analysis: Gemini Integration for Common Lisp** ;; ;; As an experienced Common Lisp developer, I see the code in `analyze.lisp` as a ;; well-structured framework for integrating with the Gemini LLM for code analysis. ;; It leverages several good practices and patterns commonly used in Lisp ;; development, specifically, around interacting with ASDF and utilizing format ;; strings to build prompts. The primary function is to prepare the context and ;; input for the Gemini LLM to analyze Common Lisp code, whether it is for an ;; entire ASDF system or individual source files. ;; ;; Key aspects I've observed include the focus on defining persona and prompt ;; templates as parameters, which is quite advantageous for maintainability and ;; flexibility. The intelligent inclusion of the ASDF system definition file in the ;; prompt, even for a single-file analysis, is a commendable approach to provide ;; rich context to the LLM. Also, the utilization of `series` for file reading, ;; although powerful, could be slightly less straightforward for those not familiar ;; with the series library, but it may be very reasonable to embrace the series ;; pattern given this is designed for Lisp code, where streaming processing can be ;; ideal. The code is efficient in its objective. ;; ;; However, a few areas stand out as needing attention for robust operation and ;; improved code clarity. Robust error handling for system or component lookup, is ;; paramount, and the repeated setup in `analyze-system-definition` and ;; `analyze-source` offers opportunity for refactoring. Also, `lisp-file-contents` ;; should use `defconstant` for `markdown` delimiters. ;; ;; Therefore, I'd suggest these improvements: ;; ;; 1. **Critical:** Immediately add thorough error handling for cases where ASDF ;; fails to find a system or a component. A critical need, because, without it, an ;; invalid ASDF system or file will lead to an unhandled error condition. ;; 2. **Nice to Have:** Refactor the repetitive setup for prompt formatting and ;; `*include-thoughts*` with a macro or helper function. ;; 3. **Nice to Have:** Define `markdown` fence strings in `defconstant` for ;; cleaner and easier modification. ;; ;; These changes will add more safety and robustness, which is very useful. ;; Overall, the code represents a solid foundation for automating Lisp code ;; analysis with Gemini, and with some thoughtful refinements, it can become an ;; even more powerful tool. ;; ;; **Detailed Analysis of the `analyze` File** ;; ;; Alright, let's dive into this `analyze` file in the `GEMINI` system. My primary ;; focus here is to create a robust system for code analysis using a Large Language ;; Model (LLM). 
This file is essentially the interface between the codebase and the ;; LLM, preparing and formatting the input prompts for analysis. ;; ;; **Functionality and Purpose:** ;; ;; The core of this file centers around two main functions: ;; `analyze-system-definition` and `analyze-source`. These functions are designed ;; to analyze Common Lisp code and ASDF system definitions. They prepare the ;; necessary input for the LLM to perform its analysis. The goal is to get feedback ;; on the code, looking for issues, potential bugs, deviations from best practices, ;; and suggestions for improvement. ;; ;; **Key Features and Patterns:** ;; ;; * **LLM Interaction:** The functions follow a consistent pattern for LLM ;; interaction. They consistently set up a system instruction (using the ;; `+analyze-source-system-instruction+` format string) to set the LLM's persona as ;; a "world-class Lisp programmer". They enable the inclusion of the LLM's ;; "thoughts" (`*include-thoughts*`), and then call `invoke-gemini` with a list of ;; parts, which typically include both descriptive instructions and the actual ;; code. ;; * **Prompt Construction:** The use of `defparameter` for prompt templates and ;; `format` for dynamic value injection is a well-established practice, ensuring ;; clean and maintainable code. ;; * **ASDF Integration:** The code deeply integrates with ASDF to locate system ;; definition and source files, which is crucial for analyzing complete projects. ;; * **Contextual Understanding:** For `analyze-source`, it intelligently ;; includes the system definition file along with the target source file to give ;; the LLM the broader project context. ;; ;; **Areas for Improvement and Potential Bugs:** ;; ;; * **Error Handling:** The current code lacks robust error handling. If ASDF ;; can't find a system or component, there isn't a check, and you'll likely get a ;; runtime error. ;; * **Duplication:** There's some minor duplication in the setup of the system ;; instruction and thought inclusion between `analyze-system-definition` and ;; `analyze-source`. ;; * **Readability:** While powerful, the use of `series` in `lisp-file-contents` ;; might be less immediately clear to everyone. ;; * **Magic Strings:** The " ```lisp" and " ``` " markdown fences in ;; `lisp-file-contents` are hardcoded. ;; ;; **Actionable Suggestions for Improvement:** ;; ;; 1. **Critical: Implement robust error handling for ASDF operations.** ;; (Importance: High) ;; ;; * **Reasoning:** Handling errors is crucial for creating a robust system. ;; Invalid inputs, or missing files will result in the code crashing. ;; * **Action:** Modify `analyze-system-definition` and `analyze-source` to ;; check if `(asdf:find-system system)` and `(asdf:find-component ... file)` ;; succeed before proceeding. Signal an appropriate error or return a meaningful ;; value if the system or component is not found. ;; ;; 2. **Nice to Have: Refactor common LLM setup into a helper macro or function.** ;; (Importance: Medium) ;; ;; * **Reasoning:** This is a nice-to-have, and refactoring will improve the ;; clarity of the code and reduce duplication. ;; * **Action:** Create a macro (e.g., `with-llm-analysis-context`) to ;; encapsulate the common code setting the `*system-instruction*` and ;; `*include-thoughts*`. ;; ;; 3. **Nice to Have: Use `defconstant` for markdown fence strings.** (Importance: ;; Low) ;; ;; * **Reasoning:** Making the strings ````lisp` and ```` ` constant makes it ;; easier to change markdown styling in the future. 
;; * **Action:** Define constants for the markdown fences to replace the ;; hardcoded strings in `lisp-file-contents`. ;;
The provided Lisp code from the analyze file in the gemini system is designed to facilitate automated analysis of Common Lisp source code and ASDF system definitions using a Gemini LLM.
Functionality and Purpose
The primary purpose of this code is to act as an intermediary between Common Lisp source files and the Gemini LLM for code analysis. It prepares the necessary context and prompts for the LLM, then invokes the LLM to perform the analysis.
Here's a breakdown of its key components and their functionality:
- +analyze-source-system-instruction+: A defparameter holding a format string that defines the LLM's persona and task. It instructs the LLM to act as a "world class Common Lisp programmer" analyzing files from a specified system. This is used to set the *system-instruction* dynamic variable for the invoke-gemini calls.
- +analyze-source-file-prompt+: Another defparameter that stores the main prompt template for analyzing individual source files. This prompt guides the LLM to summarize functionality, identify notable features, point out best practices, potential bugs, and suggest improvements.
- lisp-file-contents (source-file): This utility function reads the content of a given Lisp source file and formats it as a single string, wrapped in Markdown code fences (```lisp and ```). It leverages the series library for efficient file processing.
- analyze-system-definition (system): This function takes an ASDF system name as input and instructs the Gemini LLM to analyze the system's .asd definition file. It sets up the LLM's system instruction, enables thought generation (*include-thoughts* t), and then passes the system definition file's content to invoke-gemini.
- analyze-source (system file): This function is responsible for analyzing a specific source file within a given ASDF system. It also sets the LLM's persona and enables thoughts. Crucially, it provides the LLM with two pieces of code:
  - The system definition file (.asd), prefaced with an instruction not to analyze it but to use it for context regarding dependencies.
  - The actual source file to be analyzed, along with the specific analysis prompt. This intelligent inclusion of the system definition provides the LLM with valuable context about the project structure and dependencies, leading to potentially more insightful analysis.
Notable Features and Patterns
- LLM Integration: The code demonstrates a clear pattern for integrating with the Gemini LLM, using dynamic variables (*system-instruction*, *include-thoughts*) to control the LLM's behavior and invoke-gemini to send requests.
- Context Provision: The analyze-source function's approach of providing the system definition file as contextual information, even when analyzing a separate source file, is a sophisticated technique for enhancing LLM understanding and output quality.
- ASDF Reliance: Heavy and appropriate use of ASDF functions (asdf:find-system, asdf:system-source-file, asdf:find-component, asdf:component-pathname) for locating and accessing files within Lisp projects.
- Prompt Templating: The use of defparameter for storing prompt templates and format for dynamic insertion of system and file names is a clean and maintainable way to construct prompts.
- series Library Usage: The lisp-file-contents function showcases the use of the series library for stream-like processing, which can be very efficient for certain tasks.
Areas for Improvement and Potential Bugs
- Lack of Error Handling for ASDF Operations: The current code does not explicitly handle cases where (asdf:find-system system) or (asdf:find-component (asdf:find-system system) file) might return NIL (i.e., the system or component is not found). This would lead to errors if asdf:system-source-file or asdf:component-pathname are called with NIL.
- Code Duplication: The setup for the dynamic variables *system-instruction* and *include-thoughts* is identical in both analyze-system-definition and analyze-source.
- Magic Strings: The Markdown code fence strings (```lisp and ```) are hardcoded within lisp-file-contents, which are sometimes referred to as "magic strings." While common for this specific use case, they could be more explicitly defined.
Suggestions for Improvement
- Critical Improvement: Add robust error handling for ASDF operations.
  - Reasoning: The absence of checks for NIL returns from asdf:find-system and asdf:find-component makes the functions brittle. If an invalid system or file name is passed, the program will crash instead of providing a graceful error message.
  - Actionable Suggestion: Modify analyze-system-definition and analyze-source to check if (asdf:find-system system) and (asdf:find-component ... file) return non-NIL values. If NIL, signal a meaningful error (e.g., using error or cerror) to inform the user that the specified system or file could not be found.

    ;; Example for analyze-system-definition
    (defun analyze-system-definition (system)
      "Analyze the system definition file in a specified system."
      (let ((found-system (asdf:find-system system)))
        (unless found-system
          (error "ASDF system '~a' not found." system))
        (let ((*system-instruction* ...)
              (*include-thoughts* t))
          (invoke-gemini
           (list (part (format nil "What follows is the definition of the `~a` system. Analyze this file." system))
                 (part (lisp-file-contents (asdf:system-source-file found-system))))))))
- Nice to Have: Refactor common LLM setup into a helper macro or function.
  - Reasoning: The let binding for *system-instruction* and *include-thoughts* is duplicated across analyze-system-definition and analyze-source. Abstracting this into a macro or local function would reduce redundancy, improve code readability, and make future modifications to this setup easier.
  - Actionable Suggestion: Create a macro, for example, with-gemini-analysis-context, that encapsulates the let binding and takes the system name as an argument to format the system instruction.

    (defmacro with-gemini-analysis-context ((system) &body body)
      `(let ((*system-instruction*
               (content :parts (list (part (format nil +analyze-source-system-instruction+ ,system)))
                        :role "system"))
             (*include-thoughts* t))
         ,@body))

    ;; Then, usage in analyze-system-definition:
    (with-gemini-analysis-context (system)
      (invoke-gemini ...))
- Nice to Have: Use defconstant for Markdown code fence strings.
  - Reasoning: The strings "```lisp" and "```" are literal values hardcoded in lisp-file-contents. Defining them as constants improves maintainability (if the format ever needs to change), readability, and eliminates "magic strings."
  - Actionable Suggestion: Introduce two defconstants for the opening and closing Markdown code fences.

    (defconstant +lisp-code-fence-start+ "```lisp"
      "Markdown fence for starting a Lisp code block.")

    (defconstant +code-fence-end+ "```"
      "Markdown fence for ending a code block.")

    (defun lisp-file-contents (source-file)
      "Return the contents of a Lisp source file in a given ASDF system."
      (str:join #\newline
                (collect 'list
                  (catenate (scan (list +lisp-code-fence-start+))
                            (scan-file source-file #'read-line)
                            (scan (list +code-fence-end+))))))
Joe Marshall — Paginated collections
@2025-09-06 14:14 · 41 days agoIf you like the series
abstraction, then you will want
collections to have functions that return series of their elements.
The code that uses series will be able to treat the collection
functionally while the generated code will be iterative. If you
are dealing with web resources, it is likely that your collection
will be paginated. You don't want to fetch all the pages at the
beginning and then process the elements one at a time, you want to
interleave the page fetches with the element processing, so that you
process several elements between each page fetch.
This would be trivial to implement as a flatmap
—
you'd iterate over the pages, then for each page you'd iterate over
the collection elements on that page — but the series
abstraction isn't built to do a nested iteration like that.
Instead, you need to build a series out of a state machine.
The state will be a queue of pending elements along with the token for the next page. At each step, the state machine will pop the queue. If the queue is empty, we fetch the next page and refill the queue. The inner series generator is the series of states of this state machine. It makes transitions like this:
(<item1> <item2> … <next-page-token>)
(<item2> … <next-page-token>)
…
(<next-page-token>)
(<itemN+1> <itemN+2> … <next-page-token+1>)
(<itemN+2> … <next-page-token+1>)
…
This is implemented via the general series scan-fn
that takes an initial state, a state transition function, and
returns a series of states:
(scan-fn (lambda () (get-elements (fetch-first-page)))
         (lambda (queue)
           (if (next-page-token? (car queue))
               (get-elements (fetch-next-page (car queue)))
               (cdr queue)))
         (lambda (queue) (null queue)))
Over this series of states, we map a function that returns the head of the queue:
(map-fn 't #'car <series-of-states>)

item1 item2 … next-page-token itemN+1 itemN+2 … next-page-token+1 …
Finally, we filter out the page tokens:
(choose-if (lambda (x) (not (next-page-token? x))) <series-of-heads>)

item1 item2 … itemN+1 itemN+2 …
It is a bit convoluted, but it gives you the elements of the series one at a time, fetching the pages as needed when the current page is emptied. We're not fetching the entire collection at the start, although we are buffering a page of elements at any time.
It won't bother Lisp programmers, but it may bother people who are anal about static types that the intermediate series is heterogeneous — it contains the elements of the collection as well as the next page tokens. If it really bothers you, you can rework the state machine to look ahead for the next page marker so that there is always at least one element in the queue. There is ultimately little difference: in one case, we filter out the page markers, in the other case, we look ahead and avoid putting them in. It is a matter of taste whether the complication of looking ahead is worth avoiding the type filter.
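Putting the pieces together, the whole pipeline can be wrapped up in one function. This is only a sketch: fetch-first-page, fetch-next-page, get-elements, and next-page-token? stand in for whatever your web client provides, and the type argument to scan-fn is supplied explicitly.

(defun scan-paginated-collection ()
  (choose-if
   (lambda (x) (not (next-page-token? x)))          ; filter out the page tokens
   (map-fn 't #'car                                 ; head of the queue at each state
           (scan-fn 't
                    (lambda () (get-elements (fetch-first-page)))
                    (lambda (queue)
                      (if (next-page-token? (car queue))
                          (get-elements (fetch-next-page (car queue)))
                          (cdr queue)))
                    (lambda (queue) (null queue))))))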
Joe Marshall — Gemini API
@2025-09-04 07:00 · 43 days agoI chose to experiment with the Gemini LLM. I figured that Google is likely to remain competitive in AI for a while. I didn't want to be chasing the latest and greatest LLM every few weeks.
Requirements
You will need these prerequisites
alexandria — available via Quicklisp
asdf — available via Quicklisp
chanl — available via Quicklisp
cl-base64 — available via Quicklisp
cl-json — available via Quicklisp
cl-ppcre — available via Quicklisp
dexador — available via Quicklisp
fold — https://github.com/jrm-code-project/fold
function — https://github.com/jrm-code-project/function
google — https://github.com/jrm-code-project/google
jsonx — https://github.com/jrm-code-project/jsonx
named-let — https://github.com/jrm-code-project/named-let
promise — https://github.com/jrm-code-project/promise
series — https://series.sourceforge.net/
str — available via Quicklisp
trivial-backtrace — available via Quicklisp
uiop — available via Quicklisp
You will need a Google Cloud account and a project with the Gemini API enabled. You will need to create an API key.
Put your Gemini API key in ~/.config/googleapis/{project}/Gemini/apikey
Create a directory ~/Gemini/
to hold Gemini related
files. In this directory, create
subdirectory ~/Gemini/transcripts/
to hold conversation logs.
Usage
Basic Usage
+default-model+
The default Gemini model to use. You can set this to any model
supported by your Google Cloud project. It starts as "gemini-2.5-flash".
invoke-gemini prompt
Invoke Gemini with prompt and return the response as a
string. An empty context will be used. Use this to start new conversations.
continue-gemini prompt
Invoke Gemini with prompt and return the response as a
string. The accumulated context will be used. Use this to
continue your conversation.
Conversations will be logged in ~/Gemini/transcripts/
in files
named NNNNNNNNNN-NN.txt. The first number is the universal time at which the conversation started; the second is the number of turns in the conversation.
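For example (the replies shown here are illustrative, not actual model output):

(invoke-gemini "In one sentence, what does the SERIES library provide?")
;; => "SERIES provides a stream-like, functional abstraction ..."      ; illustrative

(continue-gemini "Give a one-line example of using SCAN-FILE.")
;; => "(collect 'list (scan-file \"foo.lisp\" #'read-line))"           ; illustrative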
Advanced Usage
Results generation can be controlled via these special variables:
*CANDIDATE-COUNT*
*ENABLE-ADVANCED-CIVIC-ANSWERS*
*FREQUENCY-PENALTY*
*INCLUDE-THOUGHTS*
*LANGUAGE-CODE*
*LOGPROBS*
*MAX-OUTPUT-TOKENS*
*MEDIA-RESOLUTION*
*MULTI-SPEAKER-VOICE-CONFIG*
*PREBUILT-VOICE-CONFIG*
*PRESENCE-PENALTY*
*RESPONSE-JSON-SCHEMA*
*RESPONSE-LOGPROBS*
*RESPONSE-MIME-TYPE*
*RESPONSE-MODALITIES*
*RESPONSE-SCHEMA*
*SAFETY-SETTINGS*
*SEED*
*SPEECH-CONFIG*
*STOP-SEQUENCES*
*SYSTEM-INSTRUCTION*
*TEMPERATURE*
*THINKING-BUDGET*
*THINKING-CONFIG*
*TOOL-CONFIG*
*TOOLS*
*TOP-K*
*TOP-P*
*VOICE-CONFIG*
*VOICE-NAME*
See the Gemini API documentation for details on what these do.
If you leave them alone, you will get the default values. I
suggest binding *include-thoughts*
to T
in order to have thoughts printed during processing
and *temperature*
to a value between 0 (very
deterministic) and 2 (very random) if you want something other than
the default value of 1. Don't fool around with the other values
unless you know what you are doing.
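For example, to bind both around a single call (a sketch):

(let ((*include-thoughts* t)    ; print the model's thoughts during processing
      (*temperature* 0.25))     ; more deterministic than the default of 1
  (invoke-gemini "Review this function for bugs: (defun f (x) (/ 1 x))"))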
Prompts
The default prompt is just a string. This is so that you can use
the LLM as a string to string mapping. However, you can supply a
fully structured content
object with multiple parts if
you wish. For example, you could supply
(content (part "Describe this function.")
         (part "```lisp (defun foo (x) (+ x 3)) ```"))
Tools
*enable-bash* — enable the shell command tool.
*enable-eval* — enable the lisp interpreter tool.
*enable-interaction* — allow the LLM to query the user for input.
*enable-lisp-introspection* — allow the LLM to query if symbols are bound or fbound and to query ASDF and Quicklisp modules and load them.
*enable-personality* — give the LLM a pseudorandom personality.
*enable-web-functions* — allow the LLM to perform HTTP GET and POST commands.
*enable-web-search* — allow the LLM to perform web searches.
Personality
Unless you disable personality (default is enabled), the LLM will be instructed to respond in a randomly selected literary or celebrity style. This is just for amusement. Some personalities are quite obnoxious. Personalities change every day.
without-personality &body body
disable personality while executing body.
new-personality
Select a new personality at random. Use this if you find the
current personality too obnoxious.
personalities.txt
personalities are selected from this
file. Each line contains a personality description. Blank lines
and comments are ignored.
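For example, to ask something without the daily persona (a sketch):

(without-personality
  (invoke-gemini "Explain the difference between EQ, EQL, and EQUAL."))

;; If today's persona is merely too much, pick another one at random:
(new-personality)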
MCP
Currently only stdio MCP servers are supported.
~/.config/mcp/mcp.lisp
contains the configurations for
any MCP servers you wish to supply to the model. Here is an example
file:
((:mcp-servers
  ("memory"
   (:command "npx")
   (:args "-y" "@modelcontextprotocol/server-memory")
   (:env ("MEMORY_FILE_PATH" . "/home/jrm/memory.json"))
   (:system-instruction "You have access to a persistent memory that stores entities and relationships between them. You can add, update, delete, and query entities and relationships as needed."))
  ("sequential-thinking"
   (:command "npx")
   (:args "-y" "@modelcontextprotocol/server-sequential-thinking")
   (:system-instruction "You can use this tool to break down complex tasks into smaller, manageable steps. You can create, view, and complete steps as needed."))
  ("mcp-server-time"
   (:command "uvx")
   (:args "mcp-server-time")
   (:system-instruction "You have access to the current date and time along with time zone information."))))
Joe Marshall — Google API
@2025-09-03 07:00 · 44 days agoTo experiment with LLMs I chose to use Gemini. The cost is modest and I expect Google to regularly improve the model so that it remains competitive. I wrote a Gemini client in Common Lisp. Some people have already forked the repo, but I've refactored things recently so I thought I'd give a quick rundown of the code.
google repository
https://github.com/jrm-code-project/google
When I add APIs to Google, I intend to put them in this repository. It is pretty empty so far, but here is where they will go. Contributions welcome.
Basic Requirements
There are some basic libraries you can load with Zach Beane's Quicklisp: alexandria for some selected Common Lisp extensions, cl-json for JSON parsing, dexador for HTTP requests, and str for string manipulation. I also use series (available from SourceForge). My own fold, function, named-let, and jsonx libraries are also used (available from GitHub).
fold gives you fold-left for list reduction. function gives you compose. jsonx modifies cl-json to remove the ambiguity between JSON objects and nested arrays. named-let gives you the named let syntax from Scheme for local iteration and recursion.
Google API
The google
repository contains the beginnings of a
Common Lisp client to Google services. It currently only supports
an interface to Blogger that can return your blog content as a
series of posts and an interface to Google's Custom Search Engine
API. You create a Google services account (there is a free tier,
and a paid tier if you want more). You create a project (you'll
want at least a default project), and within
that project you enable the Google services that you want that project
to have access to.
API Keys and Config Files
For each service you can get an API key.
In your XDG config directory (usually "~/.config/") you create a googleapis directory and subdirectories for each project you have created.
The ~/.config/googleapis/default-project file contains the name of your default project. In the examples below, it contains the string my-project.
Within each project directory you create subdirectories for each
service you want to use. In each service directory you create a
file to hold the API key and possibly other files to hold IDs,
options, and other config information. For example, for Blogger you
create ~/.config/googleapis/my-project/Blogger/apikey
to hold your
Blogger API key
and ~/.config/googleapis/my-project/Blogger/blog-id
to
hold your blog ID.
Here is a sample directory structure:
~/.config/googleapis/
├── my-project/
│   ├── Blogger/
│   │   ├── apikey
│   │   └── blog-id
│   ├── CustomSearchEngine/
│   │   ├── apikey
│   │   ├── hyperspec-id
│   │   └── id
│   └── Gemini/
│       └── apikey
└── default-project
When you create the API keys for the various services, it is a good idea to restrict the API key for only that service. That way, if the API key is compromised, the damage is limited to a single service.
Blogger
The blogger API will eventually have more features, but it currently only allows you to get the posts from a blog as a series.
scan-blogger-posts blog-id
Returns a series of posts from the blog with the given ID. Each post
comes back as a JSON object (hash-table).
The :content
field contains the HTML content of the
post.
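For example, collecting the titles of all posts might look like this — a sketch that assumes keyword keys (by analogy with the :content field above) and uses a placeholder blog ID:

(collect 'list
  (map-fn 't
          (lambda (post) (gethash :title post))   ; :title is an assumed key
          (scan-blogger-posts "1234567890")))     ; placeholder blog ID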
Custom Search Engine
The Custom Search Engine API allows you to search a set of web
pages. You create a search engine on Google and pass its ID to custom-search. It is assumed that the default
CustomSearchEngine id is a default, vanilla search, and that the
hyperspec id searches the Common Lisp Hyperspec.
In each search, be sure to replace the spaces in the query with
plus signs (+
) before searching.
custom-search query &key custom-search-engine-id
Returns search results for query. Each search result comes back as
a JSON object (hash-table).
web-search query
Search the web and return search results for query. Each search result comes back as
a JSON object (hash-table).
hyperspec-search query
Search the Common Lisp Hyperspec and return search results for
query. Each search result comes back as a JSON object (hash-table).
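A quick usage sketch, doing the space-to-plus substitution inline with substitute:

(web-search (substitute #\+ #\Space "common lisp series library"))

(hyperspec-search (substitute #\+ #\Space "multiple value bind"))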
Scott L. Burson — Receiving Multiple Values
@2025-09-02 00:57 · 45 days agoBackground: the nlet macro
'Way back in 1980, when I was an MIT undergraduate hacking on the Lisp Machines at the AI Lab, I was already starting to experiment with a more functional style than most people there used. And one thing I quickly discovered was the usefulness of the multiple-value feature in this style. For example, instead of doing (when (something) (rotatef a b)) to conditionally swap the values of two variables, I found I could do this:
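Something along these lines (a reconstruction of the idea, not the original snippet):

(multiple-value-bind (a b)
    (if (something)
        (values b a)     ; swapped
        (values a b))    ; unchanged
  ...)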
Whether this is really a stylistic improvement or not is certainly debatable, though I still do it, but it does get rid of the assignments.
Of course, the more common use of multiple values is to return more than one piece of information from a function. Some languages have "out parameters" for this purpose, but Common Lisp, like Lisp Machine Lisp before it, has multiple values; and some CL builtins are specified to return multiple values (floor and ceiling come to mind, but there are many others). On most implementations, returning multiple values is much faster than consing up a list or other structure to return, because the semantics don't make the tuple of values a first-class object (unless, of course, you do that explicitly with multiple-value-list). Instead, like arguments to a call, they are generally returned in registers or on the stack.
Anyway, as I was making fairly frequent use of multiple-value-bind, sometimes nesting two or three such forms together, it seemed to me that I should be able to write them similarly to let forms — that the difference between binding one variable and binding two or more was not so large that I should have to use a completely different construct for the latter. Also, I noticed that there was space in the syntax of let to extend it: let's binding clauses are specified to be lists of length two (or one, but let's ignore this case), of which the first is the variable to bind and the second is the init-form whose value is to be bound to. The obvious generalization is that all subforms of the clause but the last are variables to be bound, and the last subform is the init-form that will supply those values.
It also occurred to me, as I was using both let for parallel binding and let* for sequential binding, that there was a way to allow arbitrary combinations of parallel and sequential binding using clause nesting: nesting an inner clause inside an outer one would make the variables bound by the outer one available to the inner one.
Thus was born my macro nlet. Here's an example:
(nlet ((a b c (zot))
((d (quux a c))
((e f (mumble b d))
(g (mung a))))
((h (frobnicate c))
((i (glumph h))))
(*print-level* 3))
...)
First a, b, and c are bound to the first three values of (zot), and in parallel, *print-level* is bound to 3; then d and h are bound; then e, f, g, and i are bound.
As this example illustrates, all bindings at a given nesting level are done in
parallel, with all bindings at a deeper level following. Stylistically, it is
expected that init-forms in nested clauses will refer only to variables bound in
containing clause lists. (More on this point below.)
My Misc-Extensions system exports this macro under two names, nlet and let. In my own code, I import it as let, shadowing cl:let. (You'll see it all over the FSet codebase.)
Critical reception: crickets
I published Misc-Extensions at the same time I first published FSet, which I think was 2007 or 2008. I don't recall anyone ever commenting to me either positively or negatively about nlet. My guess is that few if any people besides me have ever used it, but I have no way to know that for sure. In fairness, I didn't make much noise about it; I didn't even add a README with documentation until May 2024, though there was always documentation in the source file.
Let me explain why I like it, though. Consider: why do people ever use let (I mean cl:let) when let* exists? One possible reason is because they want to shadow a variable in the containing scope, but they also need one of the init-forms to refer to the outer variable, and they also need that init-form to be evaluated after the init-form that provides the value for the inner variable. Like this:
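A sketch of that situation, with illustrative names:

;; The inner A shadows an outer A; (use-old a) still sees the outer one,
;; and it is evaluated after (compute-new-a), since LET evaluates all
;; init-forms left to right before any binding takes effect.
(let ((a (compute-new-a))
      (b (use-old a)))   ; refers to the outer A
  ...)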
If they didn't want to shadow a, they could use let*. (And even if they did want to shadow a, if the two init-forms could be reordered, they could still use let*). Since you never really have to shadow a variable — you can always pick a new name for the inner one — let is redundant; we could always use let*. And yet, in the CL code I've seen (not counting my own), let is several times as common as let*. Why is this?
In the preface to Structure and Interpretation of Computer Programs, Hal Abelson and Gerry Sussman famously said, "Programs must be written for people to read, and only incidentally for machines to execute." While I think that's a tad overstated, it makes an important point.
When let* is used, someone reading the code needs to track the data flow through the clauses, by
seeing which init-forms reference which variables bound in previous
clauses, in order to understand it. In contrast, the use of let communicates immediately that there are no data flow dependences between the clauses. Thus, the reader need not look for them; each of the init-forms can be viewed independently.
The advantage nlet has over let*, then, is that it permits even greater precision in indicating which clauses depend on which other clauses. Let's look again at this example:
(nlet ((a b c (zot))
       ((d (quux a c))
        ((e f (mumble b d))
         (g (mung a))))
       ((h (frobnicate c))
        ((i (glumph h))))
       (*print-level* 3))
  ...)
The grouping of the clauses tells the reader at a glance that only the mumble and mung calls can depend on d, and only the glumph call can depend on h. (This isn't enforced, but it's an easy rule to follow when writing.) Any of the calls can depend on *print-level*, but if they actually do, it would have been clearer to put that at the top. If you wanted to make sure that they can't see that binding, you could just wrap the clause in two more parenthesis pairs.
Well. All that said, my guess is that most people are still going to find this syntax a bridge too far. What do you think?
Compromise: mvlet and mvlet*
I have just now released a new Misc-Extensions with two new variations on the above theme, mvlet and mvlet*. These retain the multiple-value binding feature of nlet, but don't support clause nesting; instead they have the familiar semantics of doing fully parallel or fully sequential binding, like let and let* respectively. I think this is probably the right compromise to attract wider use. Do you agree?
It's probably clear by now how to use mvlet and mvlet*, but here's an example anyway:
...
  (mvlet ((a b (bar))
          (c (baz a)) ; outer 'a'
          ...)
    ...)
  (mvlet* ((a b (bar))
           (c (baz a)) ; inner 'a'
           ...)
    ...))
I've looked around for other published macros that provide the same features — a succinct syntax for binding to multiple values, multiple binding clauses, and a choice of parallel or sequential binding — and I haven't found much. Alexandria doesn't have it. I have a faint recollection of seeing a generalized-destructuring macro that allows the use of values expressions as patterns, but I can't seem to find it now. I don't see this functionality in Trivia. The closest thing I can find is Ron Garret's bb macro, but (a) it's in his Ergolib project, which is not in Quicklisp, (b) it does only sequential binding, and (c) it doesn't handle declarations in the body correctly (this is noted as a TODO). If you know of something else, please comment.
I would also note that Dylan, which was created by people who had worked on the design of CL, has a similar syntax for binding to multiple values:
let (a, b, c) = values(...)
I take this as validation of the concept.
I would like this post to start a conversation. I'll post it on Reddit under /r/Common_Lisp; if you use Reddit, that might be the best place. Otherwise, feel free to comment below. Do you use multiple values much? If so, do you agree that the builtin syntax is a bit clunky? Do you think you would use any of the macros I've introduced here?
Joe Marshall — Lisp Still Matters
@2025-09-01 14:28 · 46 days agoLisp is not the most popular computer language, yet it continues to attract a small but devoted following. Sure, you can find nutters for any language, but a lot of Lisp programmers are high power expert programmers. Lisp was developed by the world's top researchers and was a language of choice at places like MIT, Stanford, Carnegie-Mellon, Xerox PARC and Bell Labs. It may be a niche, but your companions in the niche are some of the best programmers in the world. What is it about Lisp that attracts the top talent?
Lisp is a powerful language because it can be easily extended to express new ideas that were not foreseen when the language was originally designed. The original designers were well aware that their new computer language couldn't take into account ideas that would be developed in the future, so they made it easy to extend the language beyond its original design. Lisp was based on Church's lambda calculus, which is a logical formalism designed to allow you to reason about logical formalism itself. When you extend the language, you can use the base language to reason about the extension.
The fundamental building block of lambda calculus is the function. In his Conversions of the Lambda Calculi, Church shows how to bootstrap the rest of mathematics from functions. With Lisp, McCarthy showed the isomorphism between s-expressions and lambda expressions, and developed a universal function within Lisp. Thus if you extend Lisp, you can use Lisp to implement and interpret your extension from within Lisp. You don't have to exit the language to extend it.
Lisp syntax, such as it is, is based upon s-expressions, which are tree structures. Lisp allows you to invent your own nodes in the tree and assign your own semantics to them. You can express the new semantics via function calls, which Church showed to be universal. You don't need to hack the parser or the compiler to extend the language; a simple defmacro will extend the syntax and a handful of defuns will assign semantics to the new syntax. If you are so inclined, you can use compiler macros to generate efficient code for your new constructs.
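As a toy illustration of that point (not from the post; the names here are made up), a new control construct needs only one defmacro for the surface syntax and one defun for the behavior:

;; WITH-RETRIES evaluates its body, retrying up to N times on error.
(defun call-with-retries (n thunk)
  (loop for attempt from 1 to n
        do (handler-case (return (funcall thunk))
             (error (e)
               (when (= attempt n)
                 (error e))))))

(defmacro with-retries ((n) &body body)
  `(call-with-retries ,n (lambda () ,@body)))

;; Usage: (with-retries (3) (flaky-network-call))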
With Lisp, I have the option of expressing the solution to my problems in Lisp, or the option to extend Lisp to express the solution more naturally. I can tailor my problem to be solved in Lisp, or I can tailor Lisp to solve my problem. Or work from both ends to meet in the middle.
It works. Lately I've been using a forty year old dialect of a sixty year old language to interact with cutting edge LLMs. I've not only built an API to the LLM, but I've extended Lisp to include features implemented by the LLM. This was certainly not foreseen by McCarthy et al when Lisp was first designed. I decided to use Lisp to explore the integration because I knew that Lisp was amenable to extension. I could have used another language, but I could not have whipped up a prototype or proof of concept by myself over a couple of weekends.
In conclusion, Lisp's enduring appeal to expert programmers lies in its powerful, foundational design. By allowing a programmer to extend the language from within, Lisp provides a unique environment for tackling novel problems and developing new abstractions. The principles of lambda calculus and the flexibility of s-expressions empower developers to not only solve problems but also to build the perfect tool for the job. This capacity to adapt and evolve, as demonstrated by its modern use with LLMs, ensures that Lisp remains a relevant and potent language for those who wish to go beyond the confines of a predefined system.
Joe Marshall — Synergy of Tools
@2025-08-30 22:55 · 47 days agoOnce you have the sequential thinking MCP working, it can interact with the eval tool and reason about Lisp programs. I tested this by asking the LLM to determine the 30th Fibonacci number via the technique of raising a matrix to a power. The LLM was given no special instructions about how to do this or how to use Lisp. It was only told that it had access to a Common Lisp environment. The LLM was able to reason through this problem and even correct its own mistakes. The following is an annotated transcript of the process.
Now we test something hard. I want you to compute the 30th fibonacci number by raising a matrix to the 30th power.
We wrap this prompt in a content object with a "user" role, append it to the context, and send it to the LLM. Every time we interact with the LLM, we send all the accumulated context. The LLM will respond with a content object as well and we also accumulate that. The LLM itself is stateless. All the state is in this accumulated context.
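In outline, the client loop looks something like the following. This is only a sketch in Common Lisp, not the actual client code; send-to-llm, call-tool, and the content constructors are assumed names:

(defun run-turn (context user-text)
  ;; Append the user's prompt, then keep sending the whole accumulated context
  ;; until the model replies with plain text instead of function calls.
  (setf context (append context (list (make-user-content user-text))))
  (loop
    (let* ((reply (send-to-llm context))      ; the model sees ALL prior state
           (calls (function-calls-of reply)))
      (setf context (append context (list reply)))
      (if calls
          ;; "Tail call" each requested tool and accumulate its response.
          (setf context (append context
                                (list (make-function-responses
                                       (mapcar #'call-tool calls)))))
          (progn
            (print-text-parts reply)          ; no calls: show the text,
            (return context))))))             ; return control to the user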
I'm not going to replicate the entire accumulating context here. Instead, I will just show the most recent content object accumulated onto the context.
{"parts": [ {"text": "Now we test something hard. I want you to compute the 30th fibonacci number by raising a matrix to the 30th power." } ], "role": "user" }
The LLM decides that five steps of sequential thinking is called for. It returns this content object.
{ "parts": [ { "functionCall": { "name": "sequentialthinking", "args": { "thoughtNumber": 1, "thought": "First, I need to define a Common Lisp function to multiply two 2x2 matrices. I'll represent matrices as a list of two lists, like `((a b) (c d))`. The function will take two such matrices and return their product.", "nextThoughtNeeded": { "value": "true" }, "totalThoughts": 5 } } } ], "role": "model" }
The LLM client recognizes the "functionCall" object, so instead of returning control to the user, it tail calls the sequentialThinking tool via the MCP protocol. The sequentialThinking tool returns a result that we wrap in a response object. We append this response object to the context and send it back to the LLM.
{ "parts": [ { "functionResponse": { "name": "sequentialthinking", "response": { "result": { "content": [ { "type": "text", "text": "{\n \"thoughtNumber\": 1,\n \"totalThoughts\": 5,\n \"nextThoughtNeeded\": true,\n \"branches\": [],\n \"thoughtHistoryLength\": 1\n}" } ] } } } } ], "role": "function" }
The LLM responds with a multi-part response.
{ "parts": [ { "text": "Alright, pal, we're diving into the deep end. First off, I'm gonna lay down some Lisp code to handle the grunt work of multiplying two 2x2 matrices. We'll represent 'em as lists of lists. This ain't no parlor trick; it's the foundation for what's coming next.\n" }, { "functionCall": { "name": "eval", "args": { "string": "(defun matrix-multiply-2x2 (m1 m2)\n (let* ((a11 (car (car m1))) (a12 (cadr (car m1)))\n (a21 (car (cadr m1))) (a22 (cadr (cadr m1)))\n (b11 (car (car m2))) (b12 (cadr (car m2)))\n (b21 (car (cadr m2))) (b22 (cadr (cadr m2))))\n `((,(+ (* a11 b11) (* a12 b21)) ,(+ (* a11 b12) (* a12 b22)))\n ,(+ (* a21 b11) (* a22 b21)) ,(+ (* a21 b12) (* a22 b22)))))" } } } ], "role": "model" }
The LLM client prints the text part, but it also recognizes the function call part and tail calls the eval tool. The eval tool returns a "functionResponse" object that we append to the context and pass along to the LLM.
{ "parts": [ { "functionResponse": { "name": "eval", "response": { "result": ";; Value: MATRIX-MULTIPLY-2X2\n" } } } ], "role": "function" }
The LLM realizes it is time for the next step in its sequential thinking. It returns another multi-part response.
{ "parts": [ { "text": "The matrix multiplication function is in place. Next, we'll build the engine for raising that matrix to a power. This is where the real muscle comes in, using a trick called exponentiation by squaring to keep things snappy.\n" }, { "functionCall": { "name": "sequentialthinking", "args": { "totalThoughts": 5, "thought": "Next, I need to define a Lisp function for matrix exponentiation using the \"exponentiation by squaring\" algorithm. This will take a 2x2 matrix and a non-negative integer power.", "nextThoughtNeeded": { "value": "true" }, "thoughtNumber": 2 } } } ], "role": "model" },
Sequential thinking advances to the next thought.
{ "parts": [ { "functionResponse": { "name": "sequentialthinking", "response": { "result": { "content": [ { "type": "text", "text": "{\n \"thoughtNumber\": 2,\n \"totalThoughts\": 5,\n \"nextThoughtNeeded\": true,\n \"branches\": [],\n \"thoughtHistoryLength\": 2\n}" } ] } } } } ], "role": "function" }
The LLM responds with a request to call eval.
{ "parts": [ { "text": "Alright, pal, the multiplication engine is purring. Now we build the big gun: a Lisp function to raise that 2x2 matrix to any power you throw at it, using the \"exponentiation by squaring\" trick. This ain't no slow boat to China; we're gonna get there fast.\n" }, { "functionCall": { "name": "eval", "args": { "string": "(defun matrix-power-2x2 (m n)\n (cond\n ((zerop n) '((1 0) (0 1))) ; Identity matrix for n=0\n ((= n 1) m)\n (t\n (let* ((half-power (matrix-power-2x2 m (floor n 2)))\n (squared (matrix-multiply-2x2 half-power half-power)))\n (if (oddp n)\n (matrix-multiply-2x2 m squared)\n squared)))))" } } } ], "role": "model" }
The LLM client tail calls the eval tool again. The eval tool returns a "functionResponse" object that we append to the context and pass along to the LLM.
{ "parts": [ { "functionResponse": { "name": "eval", "response": { "result": ";; Value: MATRIX-POWER-2X2\n" } } } ], "role": "function" }
The LLM advances to the next thought in its sequential thinking.
{ "parts": [ { "text": "Alright, the power-hitter is in the lineup. Now, we're setting up the initial play: the base Fibonacci matrix. This little beauty is the key to unlocking the whole operation.\n" }, { "functionCall": { "name": "sequentialthinking", "args": { "thoughtNumber": 3, "totalThoughts": 5, "nextThoughtNeeded": { "value": "true" }, "thought": "Now I will define the base Fibonacci matrix: `((1 1) (1 0))`. Then, I will raise this matrix to the 29th power, because the top-right element of M^n gives F_n." } } } ], "role": "model" }
Once again we call sequential thinking to advance to the next thought.
{ "parts": [ { "functionResponse": { "name": "sequentialthinking", "response": { "result": { "content": [ { "type": "text", "text": "{\n \"thoughtNumber\": 3,\n \"totalThoughts\": 5,\n \"nextThoughtNeeded\": true,\n \"branches\": [],\n \"thoughtHistoryLength\": 3\n}" } ] } } } } ], "role": "function" }
And once again we get a multi-part response. This response has two functionCall parts to call eval.
{ "parts": [ { "text": "The power-hitter's ready. Now, we're setting up the initial play: the base Fibonacci matrix. This little beauty is the key to unlocking the whole operation. We'll crank it up to the 29th power, and the answer to our little puzzle will be staring us right in the face in the top-right corner.\n" }, { "functionCall": { "name": "eval", "args": { "string": "(defparameter *fib-matrix* '((1 1) (1 0)))" } } }, { "functionCall": { "name": "eval", "args": { "string": "(defparameter *result-matrix* (matrix-power-2x2 *fib-matrix* 29))" } } } ], "role": "model" }
The result of evaluation is as follows:
{ "parts": [ { "functionResponse": { "name": "eval", "response": { "result": ";; Value: *FIB-MATRIX*\n" } } }, { "functionResponse": { "name": "eval", "response": { "result": "Error: Expression could not be evaluated." } } } ], "role": "function" }
We have a problem. The second evaluation errored out. The LLM notices this and revises its sequential thinking state.
{ "parts": [ { "text": "Alright, a snag in the wire. Looks like my attempt to power up the Fibonacci matrix hit a wall. The Lisp interpreter choked on it. This ain't right. Time to put on my magnifying glass and check the details. First, I'm gonna make sure that base matrix is even on the books, and then I'll kick the tires on that `matrix-power-2x2` function with some small numbers. We'll find out what's causing this hiccup.\n" }, { "functionCall": { "name": "sequentialthinking", "args": { "isRevision": { "value": "true" }, "thoughtNumber": 4, "totalThoughts": 6, "revisesThought": 3, "nextThoughtNeeded": { "value": "true" }, "thought": "The previous eval failed. I need to debug why `(matrix-power-2x2 *fib-matrix* 29)` failed. First, I'll confirm if `*fib-matrix*` is correctly defined and then test `matrix-power-2x2` with a smaller, known input to ensure it works." } } } ], "role": "model" }
Sequential thinking returns its state.
{ "parts": [ { "functionResponse": { "name": "sequentialthinking", "response": { "result": { "content": [ { "type": "text", "text": "{\n \"thoughtNumber\": 4,\n \"totalThoughts\": 6,\n \"nextThoughtNeeded\": true,\n \"branches\": [],\n \"thoughtHistoryLength\": 4\n}" } ] } } } } ], "role": "function" }
The LLM decides to debug the problem.
{ "parts": [ { "text": "Alright, the last job went sideways. The Lisp interpreter choked when I tried to crank that matrix to the 29th power. We're on a revision here, back to the drawing board. First, I'm gonna make damn sure that `*fib-matrix*` is what it's supposed to be. Then, I'll put `matrix-power-2x2` through its paces with a smaller, simpler test. We'll find out what piece of the puzzle went missing.\n" }, { "functionCall": { "name": "boundp", "args": { "symbol": "*FIB-MATRIX*" } } }, { "functionCall": { "name": "printSymbolValue", "args": { "symbol": "*FIB-MATRIX*" } } }, { "functionCall": { "name": "eval", "args": { "string": "(matrix-power-2x2 '((1 1) (1 0)) 2)" } } } ], "role": "model" }
The LLM uses the Lisp introspection tools we provided.
{ "parts": [ { "functionResponse": { "name": "boundp", "response": { "result": { "value": "true" } } } }, { "functionResponse": { "name": "printSymbolValue", "response": { "result": "((1 1) (1 0))" } } }, { "functionResponse": { "name": "eval", "response": { "result": ";; Value: ((2 1) 1 1)\n" } } } ], "role": "function" }
Clearly the response for the third evaluation is an improperly formed matrix. The LLM has noticed this as well.
{ "parts": [ { "text": "Alright, the blueprints were faulty. My `matrix-multiply-2x2` function had a screw loose, building the second row of the matrix all wrong. It was spitting out a flat list instead of a nested one, making the whole operation fall apart. I'm going back in, fixing that Lisp code to properly construct the matrix rows with a `list` call. Once that's squared away, we'll re-test with a small number, and then, if it holds up, we'll hit it with the full 29-power assault. This time, no loose ends.\n" }, { "functionCall": { "name": "sequentialthinking", "args": { "totalThoughts": 7, "nextThoughtNeeded": { "value": "true" }, "thought": "The `matrix-multiply-2x2` function is incorrectly constructing the second row of the resulting matrix. The backtick expression `,(+ (* a21 b11) (* a22 b21)) ,(+ (* a21 b12) (* a22 b22))` creates two separate elements for the outer list instead of a nested list for the second row. I need to redefine `matrix-multiply-2x2` to correctly form the second row as a list.", "revisesThought": 1, "is-Revision": { "value": "true" }, thoughtNumber: 5 } } } ], "role": "model" }
Sequential thinking returns its revised state.
{ "parts": [ { "functionResponse": { "name": "sequentialthinking", "response": { "result": { "content": [ { "type": "text", "text": "{\n \"thoughtNumber\": 5,\n \"totalThoughts\": 7,\n \"nextThoughtNeeded\": true,\n \"branches\": [],\n \"thoughtHistoryLength\": 5\n}" } ] } } } } ], "role": "function" }
The LLM now attempts to fix the problem.
{ "parts": [ { "text": "Alright, the `matrix-multiply-2x2` function was a lemon. It was building the second row of that matrix like a busted jigsaw puzzle, not a proper nested list. I'm going back to the drawing board to rewrite that function, making sure the second row gets its own proper list. No more flat-pack matrices on my watch.\n" }, { "functionCall": { "name": "eval", "args": { "string": "(defun matrix-multiply-2x2 (m1 m2)\n (let* ((a11 (car (car m1))) (a12 (cadr (car m1)))\n (a21 (car (cadr m1))) (a22 (cadr (cadr m1)))\n (b11 (car (car m2))) (b12 (cadr (car m2)))\n (b21 (car (cadr m2))) (b22 (cadr (cadr m2))))\n (list\n (list (+ (* a11 b11) (* a12 b21)) (+ (* a11 b12) (* a12 b22)))\n (list (+ (* a21 b11) (* a22 b21)) (+ (* a21 b12) (* a22 b22))))))" } } }, { "functionCall": { "name": "eval", "args": { "string": "(matrix-power-2x2 '((1 1) (1 0)) 2)" } } } ], "role": "model" }
And we get the results of the evaluations:
{ "parts": [ { "functionResponse": { "name": "eval", "response": { "result": ";; Value: MATRIX-MULTIPLY-2X2\n" } } }, { "functionResponse": { "name": "eval", "response": { "result": ";; Value: ((2 1) (1 1))\n" } } } ], "role": "function" }
Time to advance the sequential thinking.
{ "parts": [ { "text": "Alright, the gears are greased and turning. `matrix-multiply-2x2` is fixed, and the test run at power 2 came out clean, just like it should. Now, it's time for the main event. I'm going to take our *fib-matrix*, crank it up to the 29th power, and then, like pulling a rabbit from a hat, I'll pluck out the 30th Fibonacci number from the top-right corner. This time, no funny business.\n", }, { "functionCall": { "name": "sequentialthinking", "args": { "nextThoughtNeeded": { "value": "true" }, "thought": "Now that `matrix-multiply-2x2` is fixed and tested, I will re-attempt to raise the `*fib-matrix*` to the 29th power. Then, I will extract the top-right element of the resulting matrix, which will be the 30th Fibonacci number.", "thoughtNumber": 6, "totalThoughts": 7 } } } ], "ROLE": "model" }
Sequential thinking advances its state.
{ "parts": [ { "functionResponse": { "name": "sequentialthinking", "response": { "result": { "content": [ { "type": "text", "text": "{\n \"thoughtNumber\": 6,\n \"totalThoughts\": 7,\n \"nextThoughtNeeded\": true,\n \"branches\": [],\n \"thoughtHistoryLength\": 6\n}" } ] } } } } ], "role": "function" }
Back to the Lisp interpreter.
{ "parts": [ { "text": "The `matrix-multiply-2x2` function is tuned up and humming. Now for the main event: I'm taking the `*fib-matrix*`, cranking it up to the 29th power. Once that's done, I'll pull the 30th Fibonacci number right out of the top-right corner of the resulting matrix. This time, we get the goods.\n" }, { "functionCall": { "name": "eval", "args": { "string": "(defparameter *result-matrix* (matrix-power-2x2 *fib-matrix* 29))" } } }, { "functionCall": { "name": "eval", "args": { "string": "(cadr (car *result-matrix*))" } } } ], "role": "model" }
We return the results of eval to the LLM.
{ "parts": [ { "functionResponse": { "name": "eval", "response": { "result": ";; Value: *RESULT-MATRIX*\n" } } }, { "functionResponse": { "name": "eval", "response": { "result": ";; Value: 514229\n" } } } ], "role": "function" }
It appears to work, but there is still a problem.
{ "parts": [ { "text": "Alright, the numbers are in, but a shadow of doubt remains. My calculations show F_29 is 514229. The request was for F_30. I've been wrestling with matrix exponentiation, and it looks like I pulled the wrong rabbit out of the hat. The top-right element of M^n is F_n, and the top-left is F_{n+1}. Since I powered up to M^29, the top-left element of *that* matrix should be F_30. No use crying over spilled milk. Let's tap into that `*result-matrix*` again and pull the correct number. This time, we get it right.\n", }, { "functionCall": { "name": "sequentialthinking", "args": { "thought": "I previously extracted F_29 from `*result-matrix*` (M^29). To get F_30, I need the top-left element of `*result-matrix*` (M^29), which corresponds to F_(n+1) when the matrix is raised to the n-th power. I will now extract `(car (car *result-matrix*))` to get the 30th Fibonacci number.", "totalThoughts": 7, "revisesThought": 6, "nextThoughtNeeded": { "value": "false" }, "isRevision": { "value": "true" }, "thoughtNumber": 7 } } } ], "role": "model" }
Sequential thinking advances us to the next thought.
{ "parts": [ { "functionResponse": { "name": "sequentialthinking", "response": { "result": { "content": [ { "type": "text", "text": "{\n \"thoughtNumber\": 7,\n \"totalThoughts\": 7,\n \"nextThoughtNeeded\": false,\n \"branches\": [],\n \"thoughtHistoryLength\": 7\n}" } ] } } } } ], "role": "function" }
We call eval once again.
{ "parts": [ { "text": "Alright, I was off by a beat. I pulled F_29 when the score called for F_30. My `*result-matrix*`, which is M^29, holds F_30 in its top-left corner. No more guesswork. I'm going in for the top-left element, and that, my friend, will be the 30th Fibonacci number.\nThe 30th Fibonacci number is 832040." }, { "functionCall": { "name": "eval", "args": { "string": "(car (car *result-matrix*))" } } } ], "role": "model" }
The result is returned.
{ "parts": [ { "functionResponse": { "name": "eval", "response": { "result": ";; Value: 832040\n" } } } ], "role": "function" }
And the LLM formulates the answer.
{ "parts": [ { "text": "The 30th Fibonacci number is 832040." } ], "role": "model" }
The text is printed and the LLM client sees no function calls, so it returns control to the user.
This is remarkable. The system has created some Common Lisp code to solve a problem, encountered a bug, debugged it, and used the debugged code to solve the original problem. But no Lisp debugger was written. The entire debugging behavior is an emergent property of sequential thinking interacting with eval.
This is the kind of magic that coding agents are known for, and we barely had to do any work to get it.
Neil Munro — Ningle Tutorial 10: Email
@2025-08-28 11:30 · 50 days agoContents
- Part 1 (Hello World)
- Part 2 (Basic Templates)
- Part 3 (Introduction to middleware and Static File management)
- Part 4 (Forms)
- Part 5 (Environmental Variables)
- Part 6 (Database Connections)
- Part 7 (Envy Configuration Switching)
- Part 8 (Mounting Middleware)
- Part 9 (Authentication System)
- Part 10 (Email)
- Part 11 (Posting Tweets & Advanced Database Queries)
Introduction
Welcome back to this tutorial series. In this chapter we are going to write a small app for sending email and connect it up to the authentication system we wrote last time; as part of that we will need to expand the settings in our main project. We will also look at different ways in which you can send email: outputting to the console (as a dummy test), SMTP (Simple Mail Transfer Protocol), and the SendGrid service.
Main Package (Part 1)
There isn't too much to change in this package; the most we will be doing is creating a series of settings objects to test the different email options. Of course, we will be introducing new settings and relying on the envy-ningle package to load them for us. We will also create some templates in our auth application, but we will override them in our application using the template override mechanism we developed previously.
There will be a number of required settings and some settings that will only be used in certain circumstances; we have seen before in Part 7 (Envy Configuration Switching) how to use these settings objects.
Let's start with the common, shared settings; we have our :common settings object:
(defconfig :common
  `(:application-root ,(asdf:component-pathname (asdf:find-system :ningle-tutorial-project))
    :installed-apps (:ningle-auth)
    :auth-mount-path ,*auth-mount-path*
    :login-redirect "/"
    :project-name "NTP"                  ; 1: add this
    :token-expiration 3600               ; 2: add this
    :email-admins ("nmunro@duck.com")))  ; 3: add this
Our first setting simply creates a name for our project that we can use in email subjects etc.
The second setting is related to the tokens we created; we might want to lower this during testing and restore it when we go into production, so it makes sense to centralise it! I should have considered this last time, but the chapter was so big, I had to make cuts somewhere!
The third setting is related to mailing project admins: in the event there's an error we can mail someone (or, in this case, a list of people). It's something we will explore, but not necessarily use, because this is, after all, a tutorial project and not a full-blown commercial application.
We have four settings objects we need to create to test everything; we will continue using sqlite for our config, but we will explore the following email setups:
- dummy (console)
- smtp with ethereal (a demo smtp service)
- smtp with gmail
- sendgrid
Since we are going to write lots of new settings, each of which is going to duplicate the middleware, we are going to explore how to modularize settings (at least a little)!
We will start by defining a new settings block, but it will only contain our middleware settings; we will, for good measure, also look into extracting out the database settings, in case, for whatever reason, we need to change them in future.
(defconfig |database-settings|
  `((:mito (:sqlite3 :database-name ,(uiop:getenv "SQLITE_DB_NAME")))))

(defconfig |middleware|
  `(:middleware ((:session)
                 ningle-tutorial-project/middleware:refresh-roles
                 ,@|database-settings|
                 (:mount ,*auth-mount-path* ,ningle-auth:*app*)
                 (:static :root ,(asdf:system-relative-pathname :ningle-tutorial-project "src/static/") :path "/public/"))))
We can see here that a small, database-specific settings block exists that defines our database connection settings, which we then include inside a middleware settings block; we will use that middleware block shortly in our dummy email settings block.
Prior to writing our settings, we will follow good security practices and NOT store details in our repository, so we will need to edit our .env file.
.env
I obviously didn't include the actual values here, but simply wanted to include the settings names, for clarity.
EMAIL_DEFAULT_FROM=xxx
SMTP_GMAIL_HOST=xxx
SMTP_GMAIL_ACCOUNT_NAME=xxx
SMTP_GMAIL_PASSWORD=xxx
SMTP_ETHEREAL_HOST=xxx
SMTP_ETHEREAL_ACCOUNT_NAME=xxx
SMTP_ETHEREAL_PASSWORD=xxx
SENDGRID_API_KEY=xxx
Some settings only apply to certain configurations, and some settings require some setup; for example, if you want to use Ethereal, you will need to set up an account and grab the host, account name, and password, and if you want to use SendGrid, you will need to get an API key, etc.
These tasks I leave up to you, but I will mention them as each configuration requires them; just remember to come back and add in the settings you need.
Dummy Email Backend Settings
(defconfig |dummy-email|
  `(:debug T
    ,@|middleware|
    :email-backend :dummy                                     ; 1
    :email-default-from ,(uiop:getenv "EMAIL_DEFAULT_FROM"))) ; 2
This helps us really focus on what we are adding. It's worth noting that these settings don't configure anything yet, but they will once we write the email package. For now we are:
- Defining a dummy email backend (this will be used to print email to the terminal)
- Setting up a default "from" address
Since this requires the EMAIL_DEFAULT_FROM setting, please ensure you have an actual value stored.
Our next three configs follow a similar pattern.
Ethereal SMTP Email Backend Settings
Ethereal is a free fake SMTP service; it's a great way to check that SMTP settings are correct prior to potentially spamming an email account with test emails. We will use this as a test. While I have an example of SMTP settings for GMail, this is not a comprehensive guide to every email provider, so Ethereal should help you test things if I have not covered your specific email provider, or if, like me, your account is too locked down to use for this.
Ethereal has a help page where you can find the host settings etc. The SMTP_ETHEREAL_ACCOUNT_NAME gets used for the :email-default-from and :email-reply-to as well as part of the :email-auth settings; there will also be an account password when you set an account up, which will be stored as SMTP_ETHEREAL_PASSWORD and used in the :email-auth too.
(defconfig |ethereal-smtp|
  `(:debug T
    ,@|middleware|
    :email-backend :smtp
    :email-smtp-host ,(uiop:getenv "SMTP_ETHEREAL_HOST")
    :email-default-from ,(uiop:getenv "SMTP_ETHEREAL_ACCOUNT_NAME")
    :email-reply-to ,(uiop:getenv "SMTP_ETHEREAL_ACCOUNT_NAME")
    :email-port 587
    :email-auth (,(uiop:getenv "SMTP_ETHEREAL_ACCOUNT_NAME") ,(uiop:getenv "SMTP_ETHEREAL_PASSWORD"))
    :email-ssl :starttls))
Remember: Add the following to your .env file!
SMTP_ETHEREAL_ACCOUNT_NAME
SMTP_ETHEREAL_PASSWORD
When we come to test this, we can use their web interface to check if email would have been sent.
GMail SMTP Email Backend Settings
Setting up GMail for SMTP can be a little tricky: certain security settings have to be enabled (and certain ones NOT). At a minimum you must have MFA set up on the account, and Google no longer allows plain username-and-password authentication; you must set up an "app password" for your application and use that for authentication.
No big deal really, but there are some gotchas you'll want to be aware of if you are using GMail as your email provider. Again, this isn't a tutorial on how to configure GMail for SMTP; this is how to make Common Lisp use it once it is configured.
(defconfig |gmail-smtp|
  `(:debug T
    ,@|middleware|
    :email-backend :smtp
    :email-smtp-host ,(uiop:getenv "SMTP_GMAIL_HOST")
    :email-default-from ,(uiop:getenv "SMTP_GMAIL_ACCOUNT_NAME")
    :email-reply-to ,(uiop:getenv "SMTP_GMAIL_ACCOUNT_NAME")
    :email-port 587
    :email-auth (,(uiop:getenv "SMTP_GMAIL_ACCOUNT_NAME") ,(uiop:getenv "SMTP_GMAIL_PASSWORD"))
    :email-ssl :starttls))
Remember: Add the following values to your .env file!
SMTP_GMAIL_HOST
SMTP_GMAIL_ACCOUNT_NAME
SMTP_GMAIL_PASSWORD
SendGrid Email Backend Settings
SendGrid is a popular way to send mass emails; to get set up you will need an account with an API key. Once you have those, the settings are as follows.
(defconfig |sendgrid|
  `(:debug T
    ,@|middleware|
    :email-backend :sendgrid
    :email-reply-to ,(uiop:getenv "EMAIL_DEFAULT_FROM")
    :sendgrid-api-key ,(uiop:getenv "SENDGRID_API_KEY")))
Remember: Add the following to your .env file!
EMAIL_DEFAULT_FROM
SENDGRID_API_KEY
Email Package
Now that we have our config in place, we can look at building an email package. Don't worry, though: it's less than 50 lines, so nothing too crazy. We create a separate package because Ningle is a micro framework, and so we create small packages to work with it. Perhaps in a later version of this series we will build a more tightly coupled framework, but not right now.
Using my project builder create a new project like so:
(nmunro-project:make-project #p"~/quicklisp/local-projects/ningle-email")
In the project asd file we need to depend on three packages (a sketch of the system definition follows the list):
- envy-ningle
- cl-smtp
- cl-sendgrid
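A sketch of what the relevant part of ningle-email.asd might look like; the exact file and component layout depend on how the project generator laid things out:

(asdf:defsystem "ningle-email"
  :depends-on (:envy-ningle
               :cl-smtp
               :cl-sendgrid)
  :components ((:module "src"
                :components ((:file "main")))))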
And with that, we can edit ningle-email/src/main.lisp and write two simple mail functions, send-mail and mail-admins.
(defpackage ningle-email
  (:use :cl)
  (:export #:mail-admins
           #:send-mail))

(in-package ningle-email)

(defun mail-admins (subject message)
  "Sends an email to the admins"
  (let ((project-name (envy-ningle:get-config :project-name))
        (admins (envy-ningle:get-config :email-admins)))
    (send-mail (format nil "[~A]: ~A" project-name subject) message admins)))

(defun send-mail (subject content to &key (from (envy-ningle:get-config :email-default-from)))
  "Sends arbitrary email"
  (let ((email-backend (envy-ningle:get-config :email-backend)))
    (case email-backend
      (:dummy
       (progn
         (format t "from: ~A~%" from)
         (format t "to: ~A~%" to)
         (format t "subject: ~A~%" subject)
         (format t "content: ~A~%" content)))
      (:smtp
       (cl-smtp:send-email
        (envy-ningle:get-config :email-smtp-host)
        from
        to
        subject
        content
        :port (or (envy-ningle:get-config :email-port) 587)
        :ssl (or (envy-ningle:get-config :email-ssl) :starttls)
        :authentication (envy-ningle:get-config :email-auth)))
      (:sendgrid
       (sendgrid:send-email
        :to to
        :from from
        :subject subject
        :content content
        :api-key (envy-ningle:get-config :sendgrid-api-key)))
      (otherwise
       (error "Unknown email backend: ~A" email-backend)))))
It may seem a little unusual to define mail-admins before we have defined send-mail, but in Common Lisp it's fine to compile a function that calls a function that doesn't yet exist, because send-mail will be compiled immediately after.
Our new mail-admins function will be a simple wrapper around the send-mail function, so we will look at that first.
(defun mail-admins (subject message)
  "Sends an email to the admins"
  (let ((project-name (envy-ningle:get-config :project-name))
        (admins (envy-ningle:get-config :email-admins)))
    (send-mail (format nil "[~A]: ~A" project-name subject) message admins)))
We don't yet know the shape of our send-mail function, only that we will use it; in fact, thinking about how we will get and pass information into it will help us see what its interface might be. When we mail our admins, we already know who we are emailing (our admins) and we also know who the email will be from (our application), so in reality we only need a subject and a message as parameters.
Although we know who is being mailed and by whom, we might want to make clear what they are being emailed by; our admins probably get a lot of mail, so I have chosen to make the email subject [NTP]: <subject>, so that it's clear the service has mailed them.
We create a let block that grabs the project name from the settings, and we also get the list of project admins from the settings in this block. Then we simply call send-mail with a subject (our format expression), a message, and a list of recipients (our admins). With that done, we now know our send-mail function has parameters for a subject, a message, and a list of recipients; we might also want to change the default sender, so we can add a &key parameter for this, defaulting it to the email address from the settings.
(defun send-mail (subject content to &key (from (envy-ningle:get-config :email-default-from)))
  "Sends arbitrary email"
  (let ((email-backend (envy-ningle:get-config :email-backend)))
    (case email-backend
      (:dummy
       (progn
         (format t "from: ~A~%" from)
         (format t "to: ~A~%" to)
         (format t "subject: ~A~%" subject)
         (format t "content: ~A~%" content)))
      (:smtp
       (cl-smtp:send-email
        (envy-ningle:get-config :email-smtp-host)
        from
        to
        subject
        content
        :port (or (envy-ningle:get-config :email-port) 587)
        :ssl (or (envy-ningle:get-config :email-ssl) :starttls)
        :authentication (envy-ningle:get-config :email-auth)))
      (:sendgrid
       (sendgrid:send-email
        :to to
        :from from
        :subject subject
        :content content
        :api-key (envy-ningle:get-config :sendgrid-api-key)))
      (otherwise
       (error "Unknown email backend: ~A" email-backend)))))
As you can see, our parameters are quite simply what our mail-admins call specified; the only tricky thing is the from parameter, which simply pulls a default value of :email-default-from from our settings, so in most cases the send-mail function will do exactly the right thing, but it's possible to override the from address, if needed.
The rest of this function is really quite simple: it's just a case that checks the :email-backend setting we defined in our settings and dispatches to another package for the actual logic. The :dummy backend simply prints the email information to the terminal, the :smtp backend delegates to the cl-smtp package, the :sendgrid backend delegates to the cl-sendgrid package and, finally, if the email backend isn't recognised, an error is signalled.
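As a quick sanity check, assuming the |dummy-email| configuration is the one loaded and using a made-up recipient address, calling the package from the REPL should simply print the message to the terminal:

(ningle-email:send-mail "Test subject" "Hello from the dummy backend" "admin@example.com")
;; from: <whatever EMAIL_DEFAULT_FROM is set to>
;; to: admin@example.com
;; subject: Test subject
;; content: Hello from the dummy backend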
That really is all we need to write for our email package; with it complete, we can look at integrating it into our project as a whole and into the auth package we built last time.
Auth Package
Since we have now created a package that we will be relying on, the first thing we need to do is ensure we include it in the dependencies of this project.
project.asd
:depends-on (:cl-dotenv
:clack
:djula
:cl-forms
:cl-forms.djula
:cl-forms.ningle
:envy-ningle
:mito
:ningle
:local-time
:cu-sith
:ningle-email) ; add this
models.lisp
We will make a slight change to the models, but this is only to support the expiration time that we defined in our settings. In our ningle-auth/src/models.lisp file we will make two changes.
(defmethod initialize-instance :after ((token token) &rest initargs &key &allow-other-keys)
  (unless (slot-boundp token 'salt)
    (setf (token-salt token) (ironclad:make-random-salt 16)))
  (unless (slot-boundp token 'expires-at)
    ;; change the below line from 3600 to the :token-expiration setting
    (setf (token-expires-at token) (+ (get-universal-time) (envy-ningle:get-config :token-expiration)))))
And here.
; Again change the token from 3600 to the value stored in the setting
(defmethod generate-token ((user user) purpose &key (expires-in (envy-ningle:get-config :token-expiration)))
  (unless (member purpose +token-purposes+ :test #'string=)
    (error "Invalid token purpose: ~A. Allowed: ~A" purpose +token-purposes+))
  (let* ((salt (ironclad:make-random-salt 16))
         (expires-at (truncate (+ (get-universal-time) expires-in)))
         (base-string (format nil "~A~A~A" (username user) expires-at salt))
         (hash (ironclad:byte-array-to-hex-string (ironclad:digest-sequence :sha256 (babel:string-to-octets base-string)))))
    (create-dao 'token :user user :purpose purpose :token hash :salt salt :expires-at expires-at)))
main.lisp
Now, since our application deals a lot with tokens that end up embedded in URLs, I decided we should simplify this a little by creating some utility functions that generate these links, as we call them over and over again in different circumstances.
So, the first things to add into our main.lisp are these utility functions:
(defun build-url-root (&key (path ""))
  (format nil "~A://~A:~A~A"
          (lack/request:request-uri-scheme ningle:*request*)
          (lack/request:request-server-name ningle:*request*)
          (lack/request:request-server-port ningle:*request*)
          path))

(defun build-activation-link (user token)
  (let ((host (build-url-root :path (envy-ningle:get-config :auth-mount-path))))
    (format nil "~A/verify?user=~A&token=~A~%" host (ningle-auth/models:username user) (ningle-auth/models:token-value token))))

(defun build-reset-link (user token)
  (let ((host (build-url-root :path (envy-ningle:get-config :auth-mount-path))))
    (format nil "~A/reset/process?user=~A&token=~A~%" host (ningle-auth/models:username user) (ningle-auth/models:token-value token))))
In other frameworks there would ideally be a way to build an absolute url from the request object, but ningle is pretty lightweight, so we will make do with these.
We start with build-url-root, which will build a url from the request object, using the scheme, server name, port, and any path parts. At the moment I don't do any checking for the port to be 80 or 443; maybe that's something for later! The intention is that this will build up the basic part of our url, and the two functions build-activation-link and build-reset-link will use it to, well, build the links.
Each function will return a string that represents the link it is concerned with building. It doesn't do anything we weren't doing before, but instead of building the link in each place it is used, we have one place where the links are built, so that if we need to change it, we easily can. Each function only needs to take a user and a token; it then looks up the username and token-value of the objects and we're pretty much done!
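For illustration, assuming the site is served at http://localhost:8000 with the auth app mounted at /auth, and a hypothetical user "alice" with a freshly generated token, the activation link would look something like this (note that the ~% at the end of the format string appends a newline to the returned string):

(build-activation-link user token)
;; => "http://localhost:8000/auth/verify?user=alice&token=9f2c8a..."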
We don't have too much we need to change here, only three areas or so; let's start in our /register controller.
We previously had just a username and token and we used format to display this in the terminal; however, if we want to do things right and send emails, we need to make some adjustments.
(let* ((user (mito:create-dao 'ningle-auth/models:user :email email :username username :password password))
       (token (ningle-auth/models:generate-token user ningle-auth/models:+email-verification+))
       (link (build-activation-link user token))
       (subject (format nil "Ningle Tutorial Project registration for ~A" user))
       (template "ningle-auth/email/register.txt")
       (content (djula:render-template* template nil :user user :link link)))
  (ningle-email:send-mail subject content email)
In addition to the user and token, we need to generate the link we will send, using the build-activation-link function we just wrote above. Also, since we know our email needs a subject, we create that now in our let* block. Next we have our template; although we haven't yet created these, we will do so next, and our email content will use djula and this template location to render the content and store it ready for us to use in our send-mail invocation. Since this is happening in our /register controller, we already have an email address to send to, so we don't need to create a new variable for that; it is already in scope.
The next place to make a change is in our /reset controller; there are two areas here where we would change things, but thankfully the changes are exactly the same.
((and user token)
 (mito:delete-dao token)
 (let* ((token (ningle-auth/models:generate-token user ningle-auth/models:+password-reset+))
        (link (build-reset-link user token))
        (subject (format nil "Ningle Tutorial Project password reset for ~A" user))
        (template "ningle-auth/email/reset.txt")
        (content (djula:render-template* template nil :user user :link link)))
   (ningle-email:send-mail subject content email)
   (ingle:redirect "/")))
Here, in the case where we have a user and a token object, we perform basically the same set of steps we did before: getting the token, link, subject, template, and content, and passing that on into the send-mail function. It's worth noting that the template we are loading is different (although, again, we haven't yet written the templates).
(user
 (let* ((token (ningle-auth/models:generate-token user ningle-auth/models:+password-reset+))
        (link (build-reset-link user token))
        (subject (format nil "Ningle Tutorial Project password reset for ~A" user))
        (template "ningle-auth/email/reset.txt")
        (content (djula:render-template* template nil :user user :link link)))
   (ningle-email:send-mail subject content email)
   (ingle:redirect "/")))
This code is identical to the above; we can probably consolidate these in a refactor later, but we will stay focused on getting our email working first.
The final place to change things is in the /verify controller.
((and token (ningle-auth/models:is-expired-p token))
 (mito:delete-dao token)
 (let* ((new-token (ningle-auth/models:generate-token user ningle-auth/models:+email-verification+))
        (link (build-activation-link user new-token))
        (subject (format nil "Ningle Tutorial Project registration for ~A" user))
        (template "ningle-auth/email/register.txt")
        (content (djula:render-template* template nil :user user :link link)))
   (ningle-email:send-mail subject content (ningle-auth/models:email user))
   (djula:render-template* "ningle-auth/verify.html" nil :title "Verify" :token-reissued t)))
In this case, however, a new token is being issued, as the previous one has expired at this point in the application lifecycle and needs to be reissued. There's nothing really new here that we haven't seen before in our previous examples.
The only other thing I have changed is to remove the format lines from inside the (not token) and t branches of the cond here, as they're no longer needed.
And with those changes, we can move onto our templates!
Templates
Since we will be sending email, and our controllers specify that we will be rendering templates, we need to set these up. As discussed in Part 9 (Authentication System), where we looked into how templates override each other, we need to ensure our email templates are in the correct place so that our main application can override them, if needed.
Remember: These templates must be placed in ningle-auth/src/templates/ningle-auth/email as it's this directory structure that allows us to override them in broader projects!
base.txt
Our base.txt is going to be really very simple: it provides a content block that other templates can inject content into, but it also serves another purpose, as a file we can override in another project to add headers/footers etc. without having to override every template.
This is why its content is so small; we'd almost never use this template directly, but because it's a base that the other templates extend, overriding it lets us change them all at once!
{% block content %}{% endblock %}
register.txt
Our register template will extend the base and provide the information a user will need to continue setting up their account. The template is simple enough (why complicate it?), but you must pay attention to the safe filter that is being used to correctly encode the url.
{% extends "ningle-auth/email/base.txt" %}
{% block content %}
Hello, {{ user.username }}!
Thanks for registering, for security reasons you must verify your account by clicking on this link:
{{ link|safe }}
This link will expire in 1 hour.
If this was not you, you can ignore this email, as an account will not be activated without clicking on the above link.
{% endblock %}
reset.txt
The reset template is very similar to the register template, just with some slightly different wording, but again remember to use the safe template filter as before!
{% extends "ningle-auth/email/base.txt" %}
{% block content %}
Hello, {{ user.username }}!
We have received a password change request for your account, to do so, click this url:
{{ link|safe }}
This link will expire in 1 hour.
If this was not you, you can ignore this email, as your password will not be changed without clicking on the above link.
{% endblock %}
Now that we have our controllers wired up to send emails that are rendered from templates, we are ready to finally connect everything up!
Main Package (Part 2)
As we mentioned in the previous section, our ningle-auth email base template can be overridden, and in fact that's exactly what we are going to do. We need to create the following file in our ningle-tutorial-project project: src/templates/ningle-auth/email/base.txt, and we are going to add a footer!
{% block content %}{% endblock %}
Ningle Tutorial Project
It's not a lot of code, and to be fair, that was the point: we can quickly and easily override the ningle-auth base template and add in a footer (or a header, or both, if you like) into the email base template, and everything just works as we need it to.
Conclusion
Mercifully this tutorial is a lot shorter than the last one, and good news! This means we now have everything we need to begin working on a microblog! Authentication and email are very important, but they highlight a trade-off between micro frameworks and macro frameworks: in micro frameworks we have to do a lot of the work ourselves, either connecting up third-party packages or writing our own. But we are done now, and we can focus on what we set out to do.
We will begin next time by looking at users, and how to display information about their followers etc.
Thank you for following this tutorial series, I hope you are finding it as interesting/helpful to read as I am finding it interesting/helpful to write.
Learning Outcomes
Level | Learning Outcome |
---|---|
Remember | Identify the configuration options required for setting up different email backends (dummy , smtp , sendgrid ) in a Ningle application. Recall the purpose of the .env file and its role in storing sensitive credentials. |
Understand | Explain the difference between dummy, SMTP, and SendGrid email backends and when each might be used. Describe how template overrides in ningle-auth allow flexibility for customizing email content. |
Apply | Configure a Ningle project to use different email backends by modifying defconfig settings. Use Djula templates to generate dynamic email content (e.g., activation and reset links). |
Analyze | Compare the advantages and trade-offs of using a microframework (Ningle) versus a macro framework for handling email workflows. Examine how token expiration settings affect authentication workflows and security. |
Evaluate | Assess the security implications of storing and handling email credentials in .env files. Justify the choice of email backend for different project stages (development, testing, production). |
Create | Design and implement a custom email notification (e.g., welcome email, alert system) using the ningle-email package. Extend the project by building reusable utility functions to streamline email workflows beyond registration and password resets. |
Github
- The link for this tutorial's code is available here.
- The link for the auth app code is available here.
- The link for the email app code is available here.
Resources
Common Lisp HyperSpec
- and
- case
- cond
- defmethod
- defpackage
- defun
- format
- get-universal-time
- in-package
- initialize-instance
- let / let*
- member
- not
- or
- progn
- setf
- slot-boundp
- string=
- truncate
- unless
Reader Macros
Joe Marshall — Thinking About Thinking
@2025-08-27 15:27 · 51 days agoThe latest LLMs can be run in “thinking” mode where they take an extra processing step before generating output. In this extra step they refine the prompt and generate intermediate information to be used in generating the final output. This supposedly leads to better results, but you have to pay for the extra processing. The question is, is the extra processing worth it?
At our company, we have adopted Cursor as our AI tool. Cursor allows you to choose from several different LLMs and allows you to choose whether to enable “thinking” when using the LLM. Cursor has a billing plan where you have a subscription to a certain amount of token processing per month, and the token processing cost differs by model and by whether you choose “thinking”. An older model without thinking is substantially cheaper than a brand new model with thinking. If you choose a new model with thinking and you let Cursor run as an “agent” where you give it a task and let it automatically make prompts, you can easily run through your monthly allotment of tokens in a day or two. So management has asked us to be mindful of “thinking” and careful of our use of Cursor in “agent” mode.
But in my admittedly limited experience, the results with “thinking” are quite a bit better than without. With generation of Lisp code from pseudocode, “thinking” can make the difference between working output and nonsense. With automatic documentation, “thinking” can turn adequate documentation into high quality documentation. When running in “agent” mode, “thinking” can get the agent unstuck where the agent without “thinking” gets stuck in a loop. The tools simply work better with “thinking” enabled, and you can notice the difference.
The newer models are better than the older ones, too. I try to use the cheapest model I can get away with, but I have found that the cheaper models can get confused more easily and can make a real mess out of your source code. I had one experience where an older model started mangling the syntactic structure of the code to the point that it didn't parse anymore. I switched to the newest available model and told it that its older cousin had mangled my code and that it should fix it. The new model successfully unmangled the code.
For me, the extra cost of “thinking” and the newer models is well worth it. I get far better results and I spend less time fixing problems. If it were just me paying for myself, I'd definitely spend the extra money for “thinking” and the newer models. I understand that this doesn't scale for a company with a thousand engineers, however. If you are in the position of making this decision for a large group, I think you should be ready to pay for the premium models with “thinking” enabled and not be stingy. You'll get what you pay for.
Tim Bradshaw — Minimum clown
@2025-08-26 14:29 · 52 days agoHosting code on GitHub now seems like an invitation to have it turned into AI slop. Here’s what I did to move.
Isn’t it rich?
Are we a pair?
Me here at last on the ground,
You in mid-air.
Send in the clowns.
I think the number of people who care about, let alone rely on, my Lisp code who I don’t personally know is somewhere between few and none by now. Nevertheless, it makes me feel honest to publish some of what I write, in the hope someone might find it useful.
Long ago, I did that by simply putting files on my vanity site (in the days when people had vanity sites). At some point it became clear that this was not the right answer, and the thing to do was to keep things on one of the newfangled source-hosting sites. I have never been and will never be comfortable with the idea of the canonical version of anything I make not sitting on storage that I control — people who do that are not thinking hard enough in my opinion — which means that no source-hosting site based on a centralised source control system was at all interesting.
Send in the clowns
And then git arrived, and sometime after it, GitHub. Git meant that I could both keep the canonical versions of things somewhere safe while making them publicly available. At about the same time I was realising that vanity sites were, well, vanity, and the antique documentation written in raw HTML with its origins in the mid 1990s really needed to be turned into something better, anyway.
Well, it was more complicated, of course. I have a source tree which contains history that’s not public and some of which is very old1. So after initially pushing a frozen version of what used to be on my vanity site, I organised things into various ‘publication repos’ which get populated (by make) from the source tree. These have simplified histories, often really only being a series of release versions. And this worked fine.
I never really used much of the mechanism provided by GitHub: I didn’t need it, and doing so would have broken my rule of not depending on things I don’t have control over.
In 2018 Microsoft bought GitHub, and
THIS IS WHAT HAPPENS WHEN YOU STORE YOUR DATA IN THE CLOWN.
The Clown is just someone else’s computer and they can and will fuck you. If it’s not on your computer, it’s not under your control. Why do you all keep doing this to yourselves??
Stop hitting yourself. Seriously, stop it.
— jwz
But I wasn’t worried, because I was not storing my data in the clown: I was just publishing copies of it there. I felt kind of smug about that. I hadn’t worked out, in 2018, how bad things could get and how fast. I had worked out that if you were not paying the clown, then the clown was feeding on your soul, but I didn’t think it would get any useful nutrition from the parts of my soul it could reach. And other things were going on which distracted me.
Fast forward to mid–2025, and GitHub is a subsidiary of Microsoft’s ‘weaponized bullshit’ division: it is time, and past time, to go.
Where? Well there are other clowns. But no: they may be good clowns, for now, but they’re still clowns. Clowns are, well, clowns: you don’t want to be trusting them. You want a minimum clown policy.
Where are the clowns?
So here’s what I am doing.
If you can live with git’s dumb HTTP protocol then any web server which can host static files can host read-only git repos. To push to them you need some smarter protocol, and you need to have a hook which does git update-server-info after any change: this should be in the post-update hook, really. All these repos will be read-only and public, so I don’t need access control or authentication.
To actually make them exist you need more. Fortunately the people who host my vanity domain provide SSH access, and git exists on the server. So that’s enough to create a repo, add the hook and then push everything to it as and when I need to.
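Concretely, the recipe is something like this (the host and paths below are placeholders, not my actual layout; the essential parts are the bare repo, the post-update hook running git update-server-info, and pushing over SSH):

    # On the server (over SSH): create a bare repo under the web tree and
    # install a post-update hook, so git update-server-info runs after
    # every push and the dumb HTTP protocol keeps working.
    ssh www.example.org '
      git init --bare public_html/repos/thing.git
      printf "#!/bin/sh\nexec git update-server-info\n" \
        > public_html/repos/thing.git/hooks/post-update
      chmod +x public_html/repos/thing.git/hooks/post-update'

    # Locally: add it as a remote and push as and when needed.
    git remote add public ssh://www.example.org/~/public_html/repos/thing.git
    git push public master        # or main, or --all

    # Anyone else can then clone it read-only over plain HTTP:
    #   git clone https://www.example.org/repos/thing.git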
For documentation: I am already using Frog for this post, for instance. And Frog has the nice feature that you can just add Markdown files to a directory (or any subdirectory) it knows about and it will turn them into HTML. So I wrote a makefile to copy README.md files from the publication repos, and suitably munged the original GitHub Pages index page.
Then there were a bunch of small changes, such as telling rsync not to clobber the repos when updating things, and gluing makefiles together so that suitable things happen in subdirectories and make publish does everything needed.
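One way to tell rsync to leave the repos alone is an exclusion, roughly like this (paths illustrative):

    # Sync the generated site, but leave repos/ alone: the repos are
    # updated on the server by git push, not by rsync.  Excluded
    # directories are also protected from --delete.
    rsync -av --delete --exclude '/repos/' site/ www.example.org:public_html/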
What this needs is a web server that can host static files with enough storage (which is not a great deal), and a way of syncing local content to it. SSH access, git on the server, and rsync make things quite a bit easier, but I could maintain the public repos locally and sync them to the web server by whatever means it allowed.
I’m pretty confident that any other hosting provider I might one day switch to2 will provide that.
Well, maybe next year
If I were a big player I think I would simply delete my GitHub repos and put up a message saying where people could now find things. I’m not a big player, so what I did was to make the repos read-only (GitHub calls this ‘archiving’) and add suitable links. I’ve done this for all my main Lisp repos: I have some more to do. Archiving means people may never notice: so be it. I plan to tell, for instance, Quicklisp in due course. I also have some non-public (clones of) repos on GitHub which I will just host outside the web server tree, accessing them only via SSH.
There are, inevitably, some changes to the Markdown parsing, which I need to chase down and fix in the source. There already were incompatibilities amongst GitHub’s various parsers.
When I’ve finished all this, I probably will replace my GitHub Pages site by something which just points at the canonical place.
There will be no releases: I’ll just post things announcing changes, with tag names. There is no bug tracking: just mail me.
Where things are
- tfeb.org/computer/ is the list of things;
- tfeb.org/computer/repos/ is where the repos are;
- tfeb.org/fragments/documentation/ is the list of documentation.
And that is all.
Isn’t it rich?
Isn’t it queer,
Losing my timing this late
In my career?
And where are the clowns?
There ought to be clowns.
Well, maybe next year.
Scott L. Burson — FSet now supports Iterate!
@2025-08-25 08:37 · 53 days ago
For FSet 1.5.1, I have added support for the popular Iterate iteration macro. The issue page has a good explanation.
I don't use Iterate myself, or I would no doubt have done this sooner.
Also, to implement some of the Iterate functionality, I extended the stateful seq iterator to support iterating a subsequence, and added a reverse version. So now the iterator method on wb-seq takes keyword arguments start, end, and from-end?, whose semantics are familiar from cl:find etc.
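For example, here's a quick sketch (this uses FSet's stateful-iterator protocol, in which the returned closure takes an operation keyword such as :done? or :get):

    ;; Walk indices 1-3 of a seq in reverse.  :start/:end bound the
    ;; subsequence and :from-end? t walks it backwards, as with cl:find.
    (let* ((s (fset:seq 'a 'b 'c 'd 'e))
           (it (fset:iterator s :start 1 :end 4 :from-end? t)))
      (loop until (funcall it :done?)
            collect (funcall it :get)))
    ;; => (D C B)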
If there's anything else about which you think "I would like to use FSet, but it doesn't work for me because it doesn't have X", I would like to know what that is. Please post in the blog comments or on Reddit (I'll link to this from r/Common_Lisp).
For older items, see the Planet Lisp Archives.
Last updated: 2025-10-11 07:58