Planet Lisp

Lispjobs: Software Developer, Shareablee, New York, NY

· 7 days ago

SOFTWARE DEVELOPER

Shareablee engineers grapple with some of the hardest challenges in social media analytics and big data. We are involved in every aspect of the product development process from brainstorming to coding and deployment. Products that we create drive decision-making at some of the world's most influential brands such as NFL, ESPN, Facebook, HBO, Ebay, Subway and Samsung.

Responsibilities

  • Build features and modules that our customers fall in love with
  • Work on high volume data collection, storage, and transformation
  • Develop reusable code and libraries for future use
  • Optimize various processes for maximum speed and scalability
  • Develop and maintain data collectors and aggregators
  • Collaborate with product managers and team leads to build the right thing
  • Collaborate with directors and senior developers to build things the right way

Qualifications

  • 2+ years of work experience as a software developer.
  • A degree in computer science or a related field.
  • Proficiency in one or more of the following:
    • Clojure and/or another functional language
    • Clojurescript and/or Javascript 
    • Python or a similar high-level general purpose language
  • Experience with (or a very strong interest in learning) the following frameworks and tools: Storm, Hadoop, Cassandra, PostgreSQL, Redis, RabbitMQ, Kafka, Django, Elasticsearch
  • Basic understanding of distributed computing paradigms
  • Experience with Amazon Web Services, Big Data processing and Map/Reduce
  • Integration of multiple data sources and databases into one system
  • Data migration, transformation, and scripting
  • Experience implementing automated testing platforms and unit tests
  • Proficiency with code versioning tools such as Git
  • The ability to map continuous learning to every aspect of your life
  • Humble confidence

Please send resumes and cover letter to pierre@shareablee.com


McCLIM: Fundraiser success

· 9 days ago

Dear Lispers,

We are delighted and overwhelmed by the support that many of you have shown for our effort to crowdfund the maintenance and continued development of McCLIM. Never in our wildest dreams had we imagined that we would reach our highest monthly goal of 2000 USD in only a few days.

In addition to our regular maintenance and programmed improvements, at this level of funding, we will be able to post so-called "bounties", i.e., specific sums of cash for solving specific problems. Once posted, they will be available at this URL:

https://www.bountysource.com/teams/mcclim/bounties

As we mentioned in our initial appeal for contributions, this campaign has an important side effect beyond providing a budget for improvements, namely that of creating new excitement around McCLIM. This excitement will ultimately result in contributions in the form of code, thereby multiplying the importance of the direct monetary support.

In order to make our work on this project as transparent as possible, we will do our utmost to provide monthly reports on our progress, and we will provide details on how the cash was put to work.

We sincerely hope to make sufficient progress, sufficiently fast, that you will consider additional contributions, in the form of cash or code, in the future.

Sincerely,

Robert Strandh and Daniel Kochmański

McCLIM: Crowdfunding McCLIM maintenance and development

· 13 days ago

McCLIM is currently the only free native toolkit for developing GUI applications in Common Lisp. A while ago, I took over the maintenance, because I did not want to see McCLIM abandoned.

But since I have many other projects going, I decided to hire Daniel Kochmański part time (25%) to do some of the urgent work. This experience turned out so well that I would like for him to continue this work. Not only am I very pleased with the quality of Daniel's work, but I was also very surprised about the additional excitement that his work generated.

While I could continue funding Daniel myself for a while, I cannot do so indefinitely. For that reason, I decided that we should try crowdfunding. Steady funding at the 25% level for a year or more would allow us to address some of the issues that often come up, such as the numerous remaining defects, the lack of a modern visual appearance, and more.

If you are interested in seeing this happen, I strongly urge you to consider contributing to this project. Our target is modest. We would like to obtain at least 600 USD per month for at least one year. Even a small monthly contribution from a significant number of people can make this happen.

To contribute, please visit https://salt.bountysource.com/teams/mcclim and follow the instructions.

Sincerely,

Robert Strandh

McCLIM: CLIM, the galaxy's most advanced graphics toolkit

· 14 days ago

Setting up the basic environment

We assume that you have already configured the Common Lisp environment (including Quicklisp) and that you know the basics of Common Lisp programming. The necessary systems to load are mcclim and clim-listener; you may load them with Quicklisp. After that, launch the listener in a separate thread:

(ql:quickload '(mcclim clim-listener))
(clim-listener:run-listener :new-process t)

(Screenshot: the Listener)

Finding your way around CLIM

CLIM is the global minimum in graphics toolkits: geometry, and the means of abstracting it. Switch the listener into the CLIM-INTERNALS package to get started. Type (in-package climi) in the Listener REPL.

Animations

Evaluate the four following forms in the Slime REPL, then call (cos-animation) in the Listener REPL to demonstrate CLIM's animation capabilities. You cannot evaluate (cos-animation) in the Slime REPL, as *standard-output* there is bound to its output stream, which isn't a sheet and thus cannot be drawn on.

(in-package climi)

(defparameter *scale-multiplier* 150
  "try modifying me while running!")

(defparameter *sleep-time* 0.0001
  "modify and eval to speed or slow the animation, set to `nil' to stop")

(defun cos-animation ()
  (let* ((range (loop for k from 0 to (* 2 pi) by 0.1 collect k)) ; length of 62
         (idx 0)
         (record
          (updating-output (*standard-output*)
            (loop for x from (nth idx range) to (+ (nth idx range) (* 2 pi)) by 0.01
               with y-offset = 150
               for x-offset = (- 10 (* *scale-multiplier* (nth idx range)))
               for y-value = (+ y-offset (* *scale-multiplier* (cos x)))
               for x-value = (+ x-offset (* *scale-multiplier* x))
               do (draw-point* *standard-output*
                               x-value
                               y-value
                               :ink +green+
                               :line-thickness 3)))))
    (loop while *sleep-time*
       do (progn (sleep *sleep-time*)
                 (if (= 61 idx) (setq idx 0) (incf idx))
                 (redisplay record *standard-output*)))))

If you want to stop the animation, issue in the Slime REPL:

(setf *sleep-time* nil)

If it wasn't already obvious, you can plot whatever you like.

(CLIM-LISTENER::DRAW-FUNCTION-FILLED-GRAPH
 #'tanh :min-x (- 0 pi pi) :max-x pi :min-y -1.1 :max-y 1.5 :ink +blue+)

Drawing the class hierarchy

Type ,clear output history in the Listener REPL and press RET to clear the screen.

"," indicates that you are activating a command. Try typing comma, then C-/ to activate completion. C-c C-c to dismiss.

Children of the class CLIMI::SHEET can be drawn on using arbitrary geometry. Try (clim-listener::com-show-class-subclasses 'sheet) in the Listener REPL to view the subclasses of it.

Commands and presentations

The name COM-whatever indicates that the function in question is a clim command, which you can define in the Slime REPL like so,

(in-package clim-listener)

;;; Runme! We will need these in a moment.
(dolist (image-name '("mp-avatar.png"
                      "vulpes-avatar.png"
                      "stas-avatar.png"
                      "suit-avatar.png"
                      "rainbow-dash-avatar.png"
                      "chaos-lord-avatar.png"))
  (uiop:run-program (format nil "curl https://common-lisp.net/project/mcclim/static/media/tutorial-1/~A --output /tmp/~A" 
                            image-name image-name)))

(define-listener-command (com-ls :name t)
  ((path 'string))
  (clim-listener::com-show-directory path))

Try ,ls /tmp/, then ,display image <SPACE> and click on one of the displayed paths to supply it as an argument. At the core of CLIM is the notion of a presentation. Objects have presentation methods, i.e., some arbitrary rendering geometry, and when PRESENT'd on the screen CLIM remembers the type. Thus one can supply objects of the appropriate types as arguments to a command simply by clicking on them. Read about CLIMI::DEFINE-COMMAND in the specification to learn more. Let's define our first presentation method.

Intermixing S-expressions with presentation types

Evaluate the forms below in the Slime REPL:

(in-package climi)

(defvar lords '("mircea_popescu" "asciilifeform" "ben_vulpes"))

(defclass wot-identity ()
  ((name :accessor name :initarg :name :initform nil)
   (avatar :accessor avatar :initarg :avatar :initform nil)))

(defmethod lord? ((i wot-identity))
  (member (name i) lords  :test #'string=))

(define-presentation-type wot-identity ())

(defun make-identity (name avatar-pathname)
  (make-instance 'wot-identity
         :name name
         :avatar avatar-pathname))

(defparameter *identities*
  (mapcar (lambda (l) (apply #'make-identity l))
          '(("mircea_popescu" #P"/tmp/mp-avatar.png")
        ("ben_vulpes" #P"/tmp/vulpes-avatar.png")
        ("asciilifeform" #P"/tmp/stas-avatar.png")
        ("Suit" #P"/tmp/suit-avatar.png")
        ("RainbowDash" #P"/tmp/rainbow-dash-avatar.png")
        ("ChaosLord" #P"/tmp/chaos-lord-avatar.png"))))

(define-presentation-method present (object (type wot-identity)
                                            stream
                                            (view textual-view)
                                            &key acceptably)
  (declare (ignorable acceptably))
  (multiple-value-bind (x y)
      (stream-cursor-position stream)
    (with-slots (name avatar) object      
      (draw-pattern* stream
                     (climi::make-pattern-from-bitmap-file avatar :format :png)
                     (+ 150 x)
                     (+ 30 y))
      (draw-text* stream name (+ 153 x) (+ 167 y)
                  :ink +black+
                  :text-size 20)
      (draw-text* stream name (+ 152 x) (+ 166 y)
                  :ink (if (lord? object)
                           +gold+
                           +blue+)
                  :text-size 20))
    (setf (stream-cursor-position stream)
          (values x (+ 200 y)))
    object))

(defun eval-and-then-call-me-in-the-listener ()
  (let* ((n 8)
         (sheet *standard-output*))
    (labels ((gen (i)
                (let* ((out-and-start '(f x)))
                  (loop
                   for k from 0 to i
                   do (setq out-and-start 
                            (apply #'append
                                   (mapcar 
                                    (lambda (s)
                                      (case s
                                        ;; (x '(y f + f f + + x)) 
                                        ;; (y '(y f + f f x - - f f x))
                                        (x '(+ y f + f f + y y +))
                                        (y '(f - y f f x f f))
                                        )) out-and-start))))
                  (remove-if (lambda (sym)
                               (member sym '(x y) :test 'eq))
                             out-and-start))))

      (let* ((x 300) (y 300) (new-x x) (new-y y) (a 1) (step 15))
        (loop
         for r in (gen n)
         do (progn (cond ((= a 1) (setq new-y (+ step y)))
                         ((= a 2) (setq new-x (- x step)))
                         ((= a 3) (setq new-y (- y step)))
                         ((= a 4) (setq new-x (+ step x))))
                   (case r
                     (f (clim:draw-line* sheet x y new-x new-y
                                         :ink clim:+blue+
                                         :line-thickness 6
                                         :line-cap-shape :round)
                        (setq x new-x y new-y))
                     (- (setq a (if (= 1 a) 4 (1- a))))
                     (+ (setq a (if (= 4 a) 1 (1+ a))))
                     (t nil)))))
      
      (let* ((x 300) (y 300) (new-x x) (new-y y) (a 1) (step 15))
        (loop
         for r in (gen n)
         do (progn (cond ((= a 1) (setq new-y (+ step y)))
                         ((= a 2) (setq new-x (- x step)))
                         ((= a 3) (setq new-y (- y step)))
                         ((= a 4) (setq new-x (+ step x))))
                   (case r
                     (f (clim:draw-line* sheet x y new-x new-y
                                         :ink clim:+white+
                                         :line-thickness 2
                                         :line-cap-shape :round)
                        (setq x new-x y new-y))
                     (- (setq a (if (= 1 a) 4 (1- a))))
                     (+ (setq a (if (= 4 a) 1 (1+ a))))
                     (t nil))))))))

Now type (dolist (i *identities*) (present i)) at the CLIM Listener.

Try typing (lord? at the listener and then clicking on one of the identities. Add a closing paren and RET. Notice how objects can be seamlessly intermixed with S-expressions. If this example fails for you, it may be that your version of McCLIM is not recent enough.

Notes

Unripe fruits. The future isn't what it used to be - some assembly required.

  • (CLIM-DEMO::DEMODEMO) (available with system clim-examples)

  • The essential machinery of a 'live' GUI builder

  • Navigator (essentially an extended `apropos')

Marco Antoniotti: CL-UNIFICATION bug fixing

· 36 days ago

Thanks to Masayuki Takagi, a not-so-obvious bug was fixed in the basic unification machinery of CL-UNIFICATION.  This led to the addition of a couple of new utility functions and some other cleanups (hopefully).

The result should be in Quicklisp in the next round.  Otherwise you can always clone/update/pull from the main repository.


(cheers)

Nicolas Hafner: Go FORth - Confession 66

· 43 days ago

As part of yet another yak-shaving quest I've come to write a library that I only now realise I've been missing for a while: an extensible and generic iteration macro.

Now, you might be familiar with the Iterate library that is supposed to fill some of the same niches as For does. Despite being available for a very long time, Iterate is not very commonplace as far as I've been able to tell. Most people still use the Common Lisp standard loop, do*, or map* variants; I do too.

I've tried to get into iterate multiple times, but I never really was able to get along with it, especially when it came to figuring out how to extend it for further constructs. Now, naturally this may just be my problem, but nevertheless I'm bold enough to consider that enough justification for me to go ahead and write my own attempt at a solution for the problem. I won't elaborate why I don't like iterate here as I believe there isn't much constructive or useful input to be gained from doing so.

Instead I will try to illustrate what For does, why it does it, and how it does it. A good part of that is already covered in the documentation but I will allow myself to be a bit more prosaic rather than declarative here. So let's dive in and have a look at the simplest of loops: an infinite one!

(for ())

Absolutely breathtaking.

The main idea behind For, in contrast to Loop and Iterate, is to mirror the structure of let. As such we always have a list of bindings and a body. Unlike let however, every variable is followed by a symbol that describes what kind of binding it is: how it is initialised and stepped, whether (and how) it terminates the loop, and whether it delivers a return value. So let's take a look at something a bit more sophisticated.

(for ((i ranging 1 10))
  (print i))

Here we see an actual binding in action. We bind a variable i using the ranging type with the arguments 1 and 10. As probably expected this will go through the numbers 1 to 10 and print each of them. Bindings can accept any number and kind of argument they want to:

(for ((i ranging 1 10 :by 2))
  (print i))

And now it will step by twos. Stunning.

Similar to Loop, For can of course also iterate over various sequences and other objects. Out of the box, iteration bindings for lists, vectors, hash-tables, and packages are provided. Also, just like Loop, we can accumulate values in various ways. Here too we support the same things Loop does, namely collecting, appending, nconcing, counting, summing, maximising, and minimising.

Additionally however, For provides a generic iterator mechanism for the cases where you do not really know or care what type your sequence is. This can also be used to update the sequence in-place if doing so makes sense. Let's see a practical example of converting a generic kind of sequence into a list:

(for ((item over my-sequence)
      (list collecting item)))

This will function without any work required from you for lists, vectors, arrays, streams, wild pathnames, packages, and hash-tables. It can also be extended to be able to iterate over any kind of sequence you might want by writing a new iterator class and the implementation for three simple methods.
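As a small sketch of my own (not taken from the For documentation), the over binding can be combined with the accumulation bindings named above; all accumulated values are returned together, though the exact order of the returned values is an assumption on my part:

(for ((x over '(1 2 3 4))
      (squares collecting (* x x))
      (total summing x)))
;; => (1 4 9 16) and 10, returned as multiple values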

Sometimes it's also necessary to terminate the loop according to some condition, and a binding does not seem like the correct place to put this kind of constraint. This is why in addition to bindings we have clauses that can appear in the body of the For.

(for ((i from 0))
  (while (< i 10))
  (print i))

This is nice and can easily be implemented by a macrolet. At least that's what I went for until I started trying to wrap my brain around the problem of return values. Some clauses like thereis would like to return a value; in this case, whether the test has succeeded at all.

In the case of bindings where we have full control over the expressions and literals we can easily transform them however we want. This allows bindings to establish forms that wrap around the For body, add return values, and so on. However, in the case of clauses implemented through a macrolet the expansion happens within the body and at a different time. It is thus impossible¹ for the clause macro to communicate that it would like to hook into the mechanism surrounding the body. We could do it at run-time of course, but that would mean run-time consing and unnecessary tests every iteration- way too costly.

So, unfortunately I had to retract that idea and instead go for a minimal code-walking. It is so minimal that I don't know if it can even really be called that. What For now does is look through each item in its body, test whether it is a cons with a symbol in its car that refers to a clause, and if so call the clause expander function for that. This allows us to give clauses the same amount of power as bindings have and fixes the issues we had before. As you can see however there is a cost associated with it. In order to avoid full-blown code-walking (something understandably frowned upon) we can only recognise literal top-level For body expressions as clauses.
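To make the idea concrete, here is a rough sketch of such a minimal walk. This is not For's actual code; clause-table is a hypothetical hash table mapping clause names to expander functions:

(defun expand-body-clauses (body clause-table)
  "Expand only literal top-level BODY forms whose CAR names a known clause."
  (loop for form in body
        collect (if (and (consp form)
                         (symbolp (first form))
                         (gethash (first form) clause-table))
                    ;; A recognised clause: hand the whole form to its expander.
                    (funcall (gethash (first form) clause-table) form)
                    ;; Anything else is left untouched, with no deeper walking.
                    form)))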

Nevertheless I think this is a small price to pay. The amount of times I would want to use a clause within another form seems very minimal to me at this point in time. Who knows though, I may come to eat my words at a later date.

Another thing worth mentioning I think is the actual extension mechanism of For itself. As per usual for my systems there's varying levels of support to help you, but you can always ignore them and just get full control so you can define exactly what's going on how.

So- the lowest we can go is defining a clause or binding function directly. We can define the above while clause like that simply enough. After all, all it needs to do is expand to a test that ends the loop if passed instead of the clause form.

(define-direct-clause while (form)
  (values NIL `(unless ,form (end-for))))

Generally, if we think about what an iteration is about, we can distinguish three sections: an initialisation that introduces some values and initialises them, a body section that is executed on every step, and an end section that determines a return value. This is reflected in the three values that a binding or clause function must return: a form to wrap around the rest of the For block, a form to evaluate every step, and an optional form to evaluate as a return value if we think we have data that would be useful to return. To illustrate the return value we'll also look at the returning clause, which is useful if we have a non-standard value we'd like to give back, or if we want to force the primary value to something else.

(define-direct-clause returning (form)
  (values NIL NIL form))

All return values from bindings and clauses are gathered together into a single values form at the end of the For. This allows us to have multiple collect bindings or combine an accumulation with a clause and things like that. In general it's just convenient to allow the loop to return multiple values.

Most bindings and clauses outside of the most primitive ones will want to establish some kind of helper variables around the loop to keep, say, the head and tail of the list being accumulated. In order to provide this conveniently the &aux arguments in the lambda-list of the next definition macros are rewritten such that their value inside the definition body is a gensym and it automatically expands to a let that binds the gensym to the specified value. Thus writing our collecting binding becomes very simple:

(define-form-binding collecting (var form &aux (head (cons NIL NIL)) (tail head))
  `(setf ,tail (setf (cdr ,tail) (cons ,form NIL))
         ,var (cdr ,head)))

The head and tail are bound to fresh gensyms in the body, so they insert gensyms into our backquote expression. Simultaneously the definition takes care of the first return value for us by constructing an appropriate let* form that binds the gensym contained in head and tail to (cons NIL NIL) and head respectively. This form is then wrapped around the rest of the For so that the variables are available within the body.

Finally, often times we also know that all of the arguments passed to the binding need to only be evaluated once before the loop. To make this convenient we have define-value-binding which treats the actual binding arguments similar to how the &aux arguments work. Using this, defining something like across becomes trivial as well:

(define-value-binding across (var vector &aux (i -1) (length (length vector)))
  `(if (= ,length (incf ,i))
       (end-for)
       (update ,var (aref ,vector ,i))))

But just as mentioned before, if you don't trust the system to do this for you or simply don't like it, you can always return to the low-level definition macros and do the plumbing yourself.

Given that For allows you to both expand into body forms, surrounding forms, and return value forms, I think it is safe to say that pretty much every feature you might need to express in an iterator can be done and without much to write either. Take a look at the definitions of the standard bindings and clauses to get a feel for it.

Finally I'd like to take a look at the previously mentioned iterator system that For bundles with it. As stated, there's very little you need to do to add support for a new data type. Subclass iterator, add methods for has-more, next, and make-iterator and you're done. With that in place, you can directly go ahead and use the over binding to go through your sequence. If it makes sense you can also add support for (setf current), enabling you to use the updating binding which permits setting the current element in the sequence as well.
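As a concrete illustration, here is a rough sketch of what such an extension might look like for a toy queue structure. The for package prefix, the iterator slot layout, and the exact lambda lists are assumptions on my part; consult For's documentation for the real protocol:

(defstruct queue (items '()))

(defclass queue-iterator (for:iterator)
  ;; Keep the remaining items in a slot of our own rather than guessing at
  ;; the parent class's internals (hypothetical layout).
  ((remaining :initarg :remaining :accessor remaining)))

(defmethod for:make-iterator ((object queue) &key)
  (make-instance 'queue-iterator :remaining (queue-items object)))

(defmethod for:has-more ((iterator queue-iterator))
  (not (null (remaining iterator))))

(defmethod for:next ((iterator queue-iterator))
  (pop (remaining iterator)))

;; With this in place, something like
;;   (for ((item over (make-queue :items '(1 2 3)))
;;         (out collecting item)))
;; should collect the queue's items into a list.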

All in all I hope that I've figured out some good solutions to the problems presented by an extensible looping construct. Since the system is still very young, I don't really have too much experience with it myself yet and can't make any grand claims like this being the "be all end all iteration macro" or whatever. But it doesn't have to be that either. It solves the problem that drove me into this direction, as well as a few other ones on top of that, so I'm fine with it being what it is. If I've managed to convince you to look at For to see if it fits into your toolbelt, then I would already have achieved much more than I initially set out to do.


[1] This is not quite true, as pointed out to me by Mark Cox. It is indeed possible to make macros communicate with a bit of ingenuity. Relevant to this are the COMPILER-LET-CONFUSION Issue in the CLHS, and an example he was kind enough to write up to illustrate it.

Luís Oliveira: Re: Querying plists

· 45 days ago
Zach's Querying plists blog post showcases a neat little querying DSL for plists. I couldn't shake the feeling that it looked an awful lot like pattern matching. I've often been impressed by optima, but I barely get to use it, so I thought I should try and see what querying plists looked like using pattern matching.

Here's what I came up with. Zach's example
(query-plists '(:and (:= :first-name "Zach") (:= :state "ME")
                     (:not (:= :last-name "Beane")))
              *people*)
becomes:
(remove-if-not (lambda-match ((plist :first-name "Zach" :state "ME"
                                     :last-name (not "Beane")) t))
               *people*)

It turned out more succinct than I initially expected! Also, it's trivially adaptable to other kinds of objects. E.g., given the following class:
(defclass person ()
  ((first-name :initarg :first-name)
   (last-name  :initarg :last-name)
   (state      :initarg :state)))
all we have to do is swap plist with the class name person and we're all set:
(remove-if-not (lambda-match ((person :first-name "Zach" :state "ME"
                                      :last-name (not "Beane")) t))
               *people*)

We can't quite define something exactly like Zach's query-plists because, AFAICT, optima's patterns are not first-class objects but perhaps we can cheat a little bit.
;; naming things is hard. :-/
(defmacro matchp (pattern) `(lambda-match (,pattern t)))
(defun filter (predicate list) (remove-if-not predicate list))

(filter (matchp (plist :first-name "Zach" :state "ME"
                       :last-name (not "Beane")))
        *people*)

Making this equally succinct when the query criteria are not constant is a challenge for another day and makes it clear that matchp is a lousy abstraction. ;-)

Zach Beane: Querying plists

· 48 days ago

For normal, “serious” data, I like to stick things in a database and use its query system to full effect. But sometimes I have a temporary need to work on a bunch of temporary data in plist form, and want to easily poke through it. Here’s the latest version of something I find myself doing pretty often for that:

(defun compile-plist-query (query)
  (labels ((callfun (object)
             (lambda (fun)
               (funcall fun object)))
           (compile-= (keyword value)
             (lambda (plist)
               (equal (getf plist keyword) value)))
           (compile-and (funs)
             (lambda (plist)
               (every (callfun plist) funs)))
           (compile-or (funs)
             (lambda (plist)
               (some (callfun plist) funs)))
           (compile-not (fun)
             (lambda (plist)
               (not (funcall fun plist)))))
    (let ((operator (first query))
          (operands (rest query)))
      (ecase operator
        (:=
         (compile-= (first operands) (second operands)))
        (:and
         (compile-and (mapcar #'compile-plist-query operands)))
        (:or
         (compile-or (mapcar #'compile-plist-query operands)))
        (:not
         (compile-not (compile-plist-query (first operands))))))))

(defun query-plists (query plists)
  (remove-if-not (compile-plist-query query) plists))

With the definitions above, I can use any combination of logic for querying the plists for particular property values:

(query-plists '(:and (:= :first-name "Zach") (:= :state "ME")
                     (:not (:= :last-name "Beane")))
              *people*)

This is similar to the MP3 query system described in Practical Common Lisp, but a little simpler and self-contained, suitable for random-ish plist structure.
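The *people* data isn't shown in the post; here's a hypothetical sample, just to make the query above runnable end to end:

(defparameter *people*
  '((:first-name "Zach" :last-name "Beane" :state "ME")
    (:first-name "Zach" :last-name "Smith" :state "ME")
    (:first-name "Jane" :last-name "Doe"   :state "NY")))

(query-plists '(:and (:= :first-name "Zach") (:= :state "ME")
                     (:not (:= :last-name "Beane")))
              *people*)
;; => ((:FIRST-NAME "Zach" :LAST-NAME "Smith" :STATE "ME"))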

Zach Beane: CCL "office hours"

· 53 days ago

Nice idea from Matt Emerson:

I don’t know if anyone will find it useful, but as an experiment, I’ve decided to try holding IRC “office hours” on #ccl on irc.freenode.net.

So, on Tuesday, July 5, from 10:00 am to 11:00 am and 4:00 pm to 5:00 pm Eastern time (that’s 14:00 to 15:00 and 20:00 to 21:00 UTC), I’ll be available in the channel to talk about whatever you want, as long as it relates to CCL somehow.

If you don’t have an IRC client, you can use http://webchat.freenode.net. Enter your preferred nickname (I use “rme” for myself, for example) in the nickname field, and use “#ccl” for the channels field. Leave the “Auth to services” checkbox unchecked.

Clozure CL Blog: Customizing the listener prompt

· 56 days ago

Clozure CL’s default prompt is “? “.  You can customize this by setting ccl:*listener-prompt-format* to a format control of your choice.

Note that a format control “string” can be a function.  Here’s an example that makes the prompt contain the current package name.

(defun prompt-formatter (stream break-level)
  (princ (package-name *package*) stream)
  (if (plusp break-level)
    (format stream " ~d > " break-level)
    (write-string "> " stream)))

To start using this, do

(setq ccl:*listener-prompt-format* #'prompt-formatter)
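A plain string works too. The sketch below is mine, and assumes (consistent with the break-level argument received by the function above) that the format control is applied to the current break level:

;; "?" at top level, "N >" inside break level N.
(setq ccl:*listener-prompt-format* "~[?~:;~:*~d >~] ")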

Quicklisp news: June 2016 Quicklisp dist update now available

· 56 days ago
New projects:
  • cl-libfarmhash — Common Lisp Binding for Google's Farmhash. — MIT
  • cl-libhoedown — Common Lisp Binding for Hoedown. — MIT
  • cl-sat — Common Lisp API to Boolean SAT Solvers — LLGPL
  • cl-sat.glucose — CL-SAT instance to Glucose state-of-the-art SAT solver. This downloads the later 2014 version (2nd in the 2014 SAT competition). — LLGPL
  • cl-sat.minisat — Common Lisp API to minisat — LLGPL
  • floating-point-contractions — Numerically stable contractions of floating-point operations. — MIT
  • metering — Portable Code Profiling Tool — Public Domain
  • neo4cl — Basic library for interacting with Neo4J — MIT license
  • network-addresses — A network addresses manipulation library. — MIT
  • ugly-tiny-infix-macro — A tiny and simple macro to allow writing binary operations in infix notation — Apache License, Version 2.0
Updated projects: 3d-vectors, acclimation, architecture.service-provider, caveman2-widgets, city-hash, cl-ana, cl-async, cl-autowrap, cl-conspack, cl-hash-util, cl-htmlprag, cl-i18n, cl-messagepack-rpc, cl-monitors, cl-mysql, cl-oclapi, cl-ode, cl-redis, cl-scan, cl-sdl2, clack, clack-static-asset-middleware, clinch, closer-mop, copy-directory, css-selectors, deeds, defpackage-plus, djula, eazy-project, epigraph, esrap-liquid, exscribe, fast-io, fixed, glkit, glsl-spec, gsll, hash-set, inlined-generic-function, legit, local-time, lquery, map-set, mcclim, minheap, montezuma, oclcl, prometheus.cl, qt-libs, restas, scribble, sdl2kit, serapeum, smart-buffer, spinneret, squirl, trivia, trivial-nntp, trivial-openstack, trivial-ws, trivial-yenc, ubiquitous, utilities.binary-dump, utilities.print-tree, verbose, vom, weblocks, weblocks-utils, woo, wookie, wu-sugar, zs3.

Removed projects: backports, ext-blog, stp-query, and stringprep have all been abandoned by their author and deleted from github. cells no longer builds on SBCL, and ext-blog no longer builds at all.

To get this update, use (ql:update-dist "quicklisp").

The big Quicklisp fundraiser is still in limbo. Until it happens, if you want to contribute a little every month, see the (unfortunately PayPal-only) donations page.

ECL News: ECL Quarterly Volume IV

· 71 days ago

1 Preface

Hello,

I've managed to assemble the fourth volume of the ECL Quarterly. As always it's a bit off schedule, but I hope you'll find it interesting.

This issue will revolve around ECL news, some current undertakings, and plans. Additionally we'll talk about Common Lisp implementations in general and the portability layers. I believe it is important to keep things portable. Why? Keep reading!

Lately we've been working with David O'Toole on improving ECL's support for Android. He wants to distribute his games on this platform and was kind enough to write an article for ECL Quarterly. Thanks to his work we've discovered various rough edges and bugs in ECL and gained some invaluable insight into the cross-compilation problems of Common Lisp applications.

As a final remark, I've found some time to establish a proper RSS subscription feed for ECL and ECL Quarterly. I hope that this issue will finally land on Planet Lisp, a well-known aggregator of Lisp-related blog posts maintained by Zach Beane.

I want to thank the many people who provided valuable feedback and proofreading, especially Antoni Grzymała, Javier Olaechea, Michał Posta, Ilya Khaprov and David O'Toole.

Happy reading,


Daniel Kochmański ;; aka jackdaniel | TurtleWare
Poznań, Poland
June 2016

2 ECL's "what's going on"

I've added a milestone with a deadline for the ECL 16.1.3 release, listing the bugs I want to fix. You may find it here. I'm very happy to have received a lot of positive feedback, merge requests and awesome bug reports. Thank you for that! :-)

Backporting CLOS changes from CLASP was successful, but we won't incorporate them in the main branch. The recently resurrected cl-bench has shown that these changes impact performance and consing negatively (check the benchmarks). If you are curious about the changes, you may check out the branch feature-improve-clos in the repository.

I'm slowly working on the new documentation. This is a very mundane task which I'm not sure I'll be able to finish. Rewriting the DocBook to TexInfo and filling in the missing parts is hard. I'm considering giving up and improving the DocBook instead.

In the near future I plan to run a crowdfunding campaign to improve support for cross-compilation, Android, and Java interoperability in order to boost development. More details will probably be covered in the next Quarterly issue.

3 Porting Lisp Games to Android with Embeddable Common Lisp, Part 1

3.1 Introduction

Recently I ported my Common Lisp game engine "Xelf" to the Android operating system using Embeddable Common Lisp.

Some work remains to be done before I can do a proper beta test release, but ECL Quarterly provides a good opportunity to pause and share the results thus far. This is the first part of a two-part article. The focus of Part 2 will be on performance optimization, testing, and user interface concerns.

Special thanks to Daniel Kochmański, 3-B, oGMo, and the rest of the Lisp Games crew for their inspiration and assistance.

3.1.1 About the software

Xelf is a simple 2-D game engine written in Common Lisp. It is the basis of all the games I have released since 2008, and can currently be used with SBCL to deliver optimized standalone game executables for GNU/Linux, MS Windows, and Mac OSX.

I've also published a Git repository with all the work-in-progress scripts, patches, and libraries needed to compile Xelf for Android with Embeddable Common Lisp, OpenGL, and SDL.

Please note that this is a pre-alpha release and is mainly intended for Common Lisp developers looking to get a head start in building an Android game. Use with caution.

Xelf is not required; you can substitute your own Lisp libraries and applications and just use the repo as a springboard.

I would like to add support for CL-SDL2 as well, both as a prelude to porting Xelf to SDL 2.0, and as a way to help the majority who use SDL 2.0 for current projects.

3.2 Problems

3.2.1 Choosing an implementation

As I use only Free Software for my projects, I did not consider any proprietary Lisps.

Steel Bank Common Lisp now runs on Android, but SBCL as a whole cannot yet be loaded as a dynamic shared library. This is a show-stopper because Android requires the entry point of a native application to be in a shared library specially embedded in the app.

Xelf works very well with Clozure Common Lisp, but CCL's Android support is not fully functional at present. So I've been quite happy to discover Embeddable Common Lisp. Its technique of translating Common Lisp into plain C has made integration with the Android NDK toolchain relatively simple.

3.2.2 Cross-compilation

For performance reasons the Lisp stack (meaning LISPBUILDER-SDL, CL-OPENGL, CFFI, Xelf, the game, and all their dependencies) must be compiled to native ARM machine code and loaded as shared libraries.

There is a complication in this task as regards ECL. The latter produces native code by translating Common Lisp into plain C, and then invoking the C compiler. But the C compiler toolchain is not typically present on Android, and building one that is properly configured for this task has proved difficult so far.

Therefore we must cross-compile the entire Lisp stack. ECL's Android build procedure already cross-compiles the Lisp contained in ECL, but there were additional difficulties in compiling Lisp libraries which I'll cover below in the "Solutions" section.

3.2.3 Legacy code

Xelf has improved a lot over time and gained new features, but is now outdated in some respects. When I first wrote Xelf in the 2006-2007 period SDL 1.2 was current and OpenGL Immediate mode had not yet been officially deprecated. This hasn't been a terrible problem in practical terms, given that both are still widely supported on PC platforms. But porting to Android would mean I could not procrastinate any longer on updating Xelf's SDL and OpenGL support.

3.3 Solutions

3.3.1 CommanderGenius to the rescue

Help arrived for my SDL woes in the form of Sergii Pylypenko's "CommanderGenius", a fancy port of SDL 1.2/2.0 to Android. I can utilize the existing LISPBUILDER-SDL bindings for SDL, SDL-MIXER, SDL-TTF, SDL-IMAGE, and SDL-GFX. Not only that, there are extra features such as gamepad support, floating virtual joysticks, access to touchscreen gesture data and Android system events, support for the Android TV standard, and much more.

CommanderGenius is actually designed from the start to rebuild existing SDL 1.2 / 2.0 / OpenGL projects as Android applications, and includes dozens of examples to work with. So in mid-May this year I set about splicing together Daniel Kochmański's ECL-ANDROID Java wrapper and startup code (which together load ECL as a shared object from within the app) into the CommanderGenius SDL application code and build procedures.

The result is a fullscreen SDL/OpenGL application with Embeddable Common Lisp, optionally running Swank. There's even a configurable splash screen!

3.3.2 Do a little dance with ASDF

ECL can compile an entire system into one FASL file, but I ran into a snag with the ASDF-based build procedure. The typical way is to compile each Lisp file and then load the resulting compiled file. But on the cross-compiler,

(load (compile-file "myfile.lisp"))

fails because the output of COMPILE-FILE is a binary for the wrong architecture. Likewise, alien shared libraries cannot be loaded during Lisp compilation, which broke CL-OPENGL and LISPBUILDER-SDL.

My temporary solution was to redefine the function ASDF:PERFORM-LISP-LOAD-FASL in my build script. My modified version does something like this instead:

(compile-file "myfile.lisp")
(load "myfile.lisp")

I then invoke ECL's system builder, which spits out a big binary FASB file containing the whole system. But thanks to the LOAD statements, each Lisp file has had access to the macros and other definitions that preceded it in compilation.

I'm sure this is really wrong, but it works, and the resulting FASBs load very quickly. (App startup time went from over 30 seconds when loading byte-compiled FASCs, to about 3.5 seconds.)
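For illustration only, the redefinition might have looked roughly like the following. This is a guess at the shape of the hack rather than the author's actual build script, and it assumes ASDF's internal ASDF::PERFORM-LISP-LOAD-FASL can simply be overridden:

(defun asdf::perform-lisp-load-fasl (operation component)
  ;; Instead of loading the cross-compiled (ARM) fasl, load the Lisp source
  ;; on the host so that later files see the macros and definitions they need.
  (declare (ignore operation))
  (load (asdf:component-pathname component)))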

In the end, it was simple to deal with CL-OPENGL and LISPBUILDER-SDL wanting to open shared libraries during compilation. I used Grep to find and then comment out calls to CFFI:USE-FOREIGN-LIBRARY, leaving the DEFINE-FOREIGN-LIBRARY definitions intact. This allows cross-compilation to proceed normally.

Then on Android, after the FASBs are loaded I invoke USE-FOREIGN-LIBRARY on each of the required definitions.

So tricking ASDF works. But aside from being a hack, it's not enough for some of the things I'd like to do. The INLINED-GENERIC-FUNCTION technique looks like a highly promising way to increase performance, but my cross-compilation trick led in this case to invalid FASBs with embedded FASC bytecodes. Indeed, to work with ECL in this situation would require actually loading the ARM-architecture compiled INLINED-GENERIC-FUNCTION binary before compiling systems that use inlining, which as mentioned above cannot be done during cross-compilation.

I'm exploring other potential solutions, such as installing a GNU/Linux container on my Android development unit in order to give ECL access to a native C compiler toolchain (see below). I may even attempt to write a custom cross-compilation procedure using Clang and LLVM. But this is less urgent for now, because tweaking ASDF is sufficient to produce a working application.

3.3.3 Use OpenGL ESv1 with CL-OPENGL

Luckily, the path of least resistance could prevail here. OpenGL ES version 1 is widely supported on Android devices, and is easier to port to from Immediate mode than GLESv2 is. CL-OPENGL supports it right out of the box. (I'd like to thank 3-B and oGMo for their help in bridging the gap with my own code.)

Some tasks remain to be done here but most of Xelf's drawing functions are now working, including TrueType fonts and vertex coloring.

I've also written some code to partially emulate vertex coloring as a way of increasing render performance, and this will be covered in the forthcoming Part 2 of this article.

3.3.4 ProTip: Use the byte-compiler

One issue has gone unmentioned. How do I interactively redefine functions and set variables in order to develop the running game via SLIME/Swank, if everything must be cross-compiled on an X86 system?

The answer is that ECL's built-in bytecode compiler is used in these cases, and the bytecoded definitions replace the originals. I can freely use COMPILE-FILE, LOAD, and even ASDF:LOAD-SYSTEM during "live" development; under normal circumstances the only real difference is execution speed of the resulting code. The final game app will ship without Swank, of course, and with a fully native Lisp stack.

Now you have a new problem, which is how to edit the Lisp files on your Android device so that Swank can compile and load them.

3.3.5 ProTip: Use Emacs TRAMP with ADB

To make this useful you need a rooted Android device.

(add-to-list 'tramp-default-user-alist '("adb" nil "root"))
(find-file "/adb::/")

This can integrate with Emacs' "bookmarks" and "desktop" features for even more convenience.

3.3.6 ProTip: Use Emacs to inspect your APK package

They're just zip files. Missing libraries or assets? Check the APK by opening it as a file in GNU Emacs.

3.3.7 ProTip: Use a GNU/Linux container for SSH and native Emacs with X11!

You can actually install a GNU/Linux "container" with Debian, Ubuntu, or some other distribution on your Android development system in order to run the Secure Shell daemon and many other applications. I use it to run a graphical Emacs on the Android box, with Emacs' X11 connection forwarded through SSH so that its windows open on my desktop GNU/Linux PC's X server, right alongside my native Emacs. I use different color themes to avoid mixing them up.

This gives me full access to everything on both systems from a single mouse/keyboard/monitor, and I can cut and paste text freely between applications.

Setting up such a container is beyond the scope of this article, but I highly recommend it. It was pretty easy on a rooted device, and works very well.

3.4 Conclusion

In less than a month we went from "let's do it" to "wow, it works!" What more can you ask for?

This concludes Part 1 of my article on building Lisp games for Android with Embeddable Common Lisp. To read my running commentary and see news and test results as they are posted, you can visit the project README:

https://gitlab.com/dto/ecl-android-games-src/blob/master/README.org

More details and all scripts and configurations can be found in that repository.

Thanks for reading,


David O'Toole (dto@xelf.me)
11 June, 2016

4 Common Lisp implementations

Some time ago, with the help of many kind people (most notably Rainer Joswig and Fare Rideau), I created a graph presenting Common Lisp implementations and the relations between them. This version is improved over the drafts presented on twitter and linkedin. If you find any errors, please contact me.

(Figure: all-hierarchy.png, a graph of Common Lisp implementations and the relations between them)

It is worth noting that LispWorks and VAX share code with Spice Lisp, which later evolved into the Common Lisp implementation CMUCL. Striped lines lead to CMUCL because I didn't want to add pre-CL implementations.

There is also a suspicion that Lucid shares code with Spice Lisp and/or VAX, but I couldn't confirm that, so I'm leaving it as is.

"JavaScript Lisp Implementations" classifies some lisps as CL, but I've added only Acheron and parenscript to the list, because rest is just CL-ish, not even being a subset.

Resources I've found on the internet: CMU FAQ, ALU list, CLiki overview, Wikipedia article, JavaScript Lisp Implementations.

5 Building various implementations

I've built various lisps to perform some benchmarks and to have some material for comparison. Ultimately I've decided to polish the notes a little and publish them. I had some problems with Clasp and Mezzano so I've decided not to include them and leave building these as an exercise for the reader ;-). Also, if you feel adventurous, you may try to build Poplog, which has Common Lisp as one of the supported languages.

If you want to read about various implementations, please consult Daniel Weinreb's Common Lisp Implementations: A Survey (material from 2010, definitely worth reading).

First we create a directory for the lisp implementations (we'll build as an ordinary user) and download the sources. Each implementation has a list of building prerequisites, but it may not be comprehensive.

export LISPS_DIR=${HOME}/lisps
mkdir -p ${LISPS_DIR}/{src,bin}
pushd ${LISPS_DIR}/src

# Obtain sources
svn co http://abcl.org/svn/trunk/abcl/ abcl
# git clone git@github.com:drmeister/clasp.git
svn co http://svn.clozure.com/publicsvn/openmcl/trunk/linuxx86/ccl ccl
hg clone http://hg.code.sf.net/p/clisp/clisp clisp
git clone git@common-lisp.net:cmucl/cmucl.git cmucl
git clone https://gitlab.com/embeddable-common-lisp/ecl.git ecl
git clone git://git.sv.gnu.org/gcl.git gcl
git clone git@github.com:davazp/jscl.git jscl
# git clone https://github.com/froggey/Mezzano.git
git clone https://gitlab.common-lisp.net/mkcl/mkcl.git mkcl
git clone git://git.code.sf.net/p/sbcl/sbcl sbcl
git clone git@github.com:wadehennessey/wcl.git wcl
git clone https://github.com/gnooth/xcl.git

5.0.1 ABCL (Armed Bear Common Lisp)

Requires
jdk, ant
pushd abcl
ant
cp abcl ${LISPS_DIR}/bin/abcl-dev
popd

5.0.2 CCL (Clozure Common Lisp)

Requires
gcc, m4, gnumake
pushd ccl
echo '(ccl:rebuild-ccl :full t)' | ./lx86cl64 -n -Q -b

# installation script is inspired by the AUR's PKGBUILD
mkdir -p ${LISPS_DIR}/ccl-dev
cp -a compiler contrib level-* lib* lisp-kernel objc-bridge \
   tools x86-headers64 xdump lx86cl64* examples doc \
   ${LISPS_DIR}/ccl-dev

find ${LISPS_DIR}/ccl-dev -type d -name .svn -exec rm -rf '{}' +
find ${LISPS_DIR}/ccl-dev -name '*.o' -exec rm -f '{}' +
find ${LISPS_DIR}/ccl-dev -name '*.*fsl' -exec rm -f '{}' +

cat <<EOF > ${LISPS_DIR}/bin/ccl-dev
#!/bin/sh
exec ${LISPS_DIR}/ccl-dev/lx86cl64 "\$@"
EOF
chmod +x ${LISPS_DIR}/bin/ccl-dev
popd

5.0.3 CLISP

Requires
gcc, make
Notes
don't build with ASDF (it's old and broken)
pushd clisp
./configure --prefix=${LISPS_DIR}/clisp-dev/ \
            --with-threads=POSIX_THREADS \
            build/
cd build
make && make install
ln -s ${LISPS_DIR}/clisp-dev/bin/clisp ${LISPS_DIR}/bin/clisp-dev
popd

5.0.4 CMUCL (CMU Common Lisp)

Requires
cmucl binary, gcc, make, openmotif
Notes
it needs another CMUCL to bootstrap (release 21a)
pushd cmucl
mkdir -p prebuilt
pushd prebuilt
wget https://common-lisp.net/project/cmucl/downloads/release/21a/cmucl-21a-x86-linux.tar.bz2 \
     https://common-lisp.net/project/cmucl/downloads/release/21a/cmucl-21a-x86-linux.extra.tar.bz2
mkdir ${LISPS_DIR}/cmucl-21a
tar -xf cmucl-21a-x86-linux.tar.bz2 -C ${LISPS_DIR}/cmucl-21a/
tar -xf cmucl-21a-x86-linux.extra.tar.bz2 -C ${LISPS_DIR}/cmucl-21a/
cat <<EOF > ${LISPS_DIR}/bin/cmucl-21a
#!/bin/sh
exec ${LISPS_DIR}/cmucl-21a/bin/lisp "\$@"
EOF
chmod +x ${LISPS_DIR}/bin/cmucl-21a
# Note, that this is already a fully functional lisp now
popd
bin/build.sh -C "" -o "cmucl-21a"
bin/make-dist.sh -I ${LISPS_DIR}/cmucl-dev/ linux-4/
cat <<EOF > ${LISPS_DIR}/bin/cmucl-dev
#!/bin/sh
exec ${LISPS_DIR}/cmucl-dev/bin/lisp "\$@"
EOF
chmod +x ${LISPS_DIR}/bin/cmucl-dev
popd

5.0.5 ECL (Embeddable Common Lisp)

Requires
gcc, make
./configure --prefix=${LISPS_DIR}/ecl-dev/
make && make install
ln -s $LISPS_DIR/ecl-dev/bin/ecl ${LISPS_DIR}/bin/ecl-dev

5.0.6 JSCL (Java Script Common Lisp)

Requires
Conforming CL implementation, web browser, nodejs
Notes
Doesn't provide LOAD yet (no filesystem), but the author confirmed that this will be implemented (a virtual filesystem in the browser and the physical one on nodejs).
mkdir ${LISPS_DIR}/jscl-dev
pushd jscl
./make.sh

# Run in the console (node-repl)
cp jscl.js repl-node.js ${LISPS_DIR}/jscl-dev/
cat <<EOF > ${LISPS_DIR}/bin/jscl-dev
#!/bin/sh
exec node ${LISPS_DIR}/jscl-dev/repl-node.js
EOF
chmod +x ${LISPS_DIR}/bin/jscl-dev

# Run in the web browser (optional)
cp jscl.js repl-web.js jquery.js jqconsole.min.js jscl.html style.css \
   ${LISPS_DIR}/jscl-dev/
# replace surf with your favourite browser supporting JS
cat <<EOF > ${LISPS_DIR}/bin/jscl-dev-browser
#!/bin/sh
exec surf ${LISPS_DIR}/jscl-dev/jscl.html
EOF
chmod +x ${LISPS_DIR}/bin/jscl-dev-browser

popd

5.0.7 GCL (GNU Common Lisp)

Requires
gcc, make
# Doesn't work with either head or the release; luckily it works with
# the next pre-release branch
git checkout Version_2_6_13pre
./configure --prefix=${LISPS_DIR}/gcl-2.6.13-pre
make && make install
ln -s ${LISPS_DIR}/gcl-2.6.13-pre/bin/gcl ${LISPS_DIR}/bin/gcl-2.6.13-pre

5.0.8 MKCL (Man-Kai Common Lisp)

Requires
gcc, make
pushd mkcl
./configure --prefix=${LISPS_DIR}/mkcl-dev
make && make install
ln -s ${LISPS_DIR}/mkcl-dev/bin/mkcl ${LISPS_DIR}/bin/mkcl-dev
popd

5.0.9 SBCL (Steel Bank Common Lisp)

Requires
ANSI-compliant CL implementation
Notes
  • the host Lisp has to exit on EOF at the top level (CMUCL doesn't do that),
  • ECL has some bug in its Lisp-to-C compiler apparently triggered by the SBCL compilation - don't use it here,
  • we could use a precompiled SBCL as with CMUCL, but let's exploit the fact that we can compile from a C-bootstrapped implementation (we'll use the already built clisp-dev),
  • it is advised to run the script in a fast terminal (like xterm) or in a terminal multiplexer and to detach it - the SBCL compilation process is very verbose,
  • if you build SBCL on Windows, consider using MinGW to preserve POSIX compatibility.
pushd sbcl
export GNUMAKE=make
./make.sh "clisp"
INSTALL_ROOT=${LISPS_DIR}/sbcl-dev ./install.sh
cat <<EOF > ${LISPS_DIR}/bin/sbcl-dev
#!/bin/sh
SBCL_HOME=${LISPS_DIR}/sbcl-dev/lib/sbcl exec ${LISPS_DIR}/sbcl-dev/bin/sbcl "\$@"
EOF
chmod +x ${LISPS_DIR}/bin/sbcl-dev
popd

5.0.10 WCL

Requires
tcsh, gcc, git
Notes
very incomplete implementation
pushd wcl
REV=`git rev-parse HEAD`
sed -i -e "s/WCL_VERSION = \"3.0.*$/WCL_VERSION = \"3.0-dev (git-${REV})\"/" CONFIGURATION
LD_LIBRARY_PATH=`pwd`/lib make rebuild
mkdir ${LISPS_DIR}/wcl-dev
cp -a bin/ lib/ doc/ ${LISPS_DIR}/wcl-dev/
cat <<EOF > ${LISPS_DIR}/bin/wcl-dev
#!/bin/sh
LD_LIBRARY_PATH=${LISPS_DIR}/wcl-dev/lib exec ${LISPS_DIR}/wcl-dev/bin/wcl "\$@"
EOF
chmod +x ${LISPS_DIR}/bin/wcl-dev
popd

5.0.11 XCL

Requires
gcc
Notes
last commit in 2011
pushd xcl
mkdir ${LISPS_DIR}/xcl-dev
XCL_HOME=${LISPS_DIR}/xcl-dev make
cp -a clos compiler lisp COPYING README xcl ${LISPS_DIR}/xcl-dev
# This will build in XCL_HOME, even if run in source directory
./xcl <<EOF
(rebuild-lisp)
EOF

ln -s ${LISPS_DIR}/xcl-dev/xcl ${LISPS_DIR}/bin/xcl-dev
popd

6 Portability libraries

It is important to know the difference between the language standard, implementation-specific extensions and the portability libraries. The language standard is something you can depend on in any conforming implementation.

Sometimes it's just not enough. You may want to do threading, or to serialize data, which is very hard (or even impossible) to express in the language provided by the standard. That's where the implementation-specific extensions kick in. Why are they called "implementation-specific"? Because the API may be different between implementations - reaching consensus is a hard thing.

The most straightforward approach I can imagine is to reach for the documentation of the Common Lisp implementation you are currently using and to use the API provided by this implementation. I dare you not to do that! It's definitely the easiest thing to do at first, but mind the consequences. You lock yourself, and your users, into the implementation you prefer. What if you want to run it on the JVM or to make it a shared library? Nope, you're locked in.

"What can I do then?" - you may ask. Before I answer this question, I'll tell you how many people do it (or did it in the past) - they used read-time conditionals directly in the code. Something like the following:

(defun my-baz ()
  #+sbcl                        (sb-foo:do-baz-thing 'quux)
  #+ccl                         (ccl:baz-thing       'quux)
  #+(and ecl :baz-thing)        (ext:baz             'quux)
  #+abcl                        (ext:baz             'quux)
  #+(and clisp :built-with-baz) (ext:baz-thingie     'quux)
  #-(or sbcl ccl ecl abcl clisp)
  (error "Your implementation isn't supported. Fix me!"))

If the creator felt more fancy and had some extra time, they put it in the package my-app-compat. It's all great; now your application works on all supported implementations. If somebody wants their implementation to work, they send the creator a patch, who incorporates it into the code and voila, everything works as desired.

We have one problem however. Libraries tend to depend on one another. There is also a lot of software which uses features beyond the ANSI specification (it's all good, programmers need these!). Do you see code duplication everywhere? How many times does the snippet above have to be copy-pasted, or rewritten from scratch? It's not black magic after all. APIs between ad-hoc implementations don't exactly match, covered CL implementations differ…

So you quickload your favorite library which depends on 10 other libraries which implement BAZ functionality in their own unique way, with a slightly different API, on the unsupported implementation - that's why we have the my-baz abstraction after all, right? Now, to make it work, a user has to:

  1. Find which of the ten libraries don't work (not trivial!),
  2. find and clone the repositories (we want to use git for patches),
  3. fix each one of them (grep helps!) and commit the changes,
  4. push the changes to your own forked repository and create a pull request (or send a diff to the mailing list) - *ten times*,
  5. voila, you're done, profit, get rich, grab a beer.

It's a lot of work which the user probably won't bother to do. They will just drop the task, choose another implementation, or hack their own code, creating Yet Another Baz Library for the implementations they care about and reinventing the wheel once more. It's a hacker's mortal sin.

I'm going to tell you now what the Right Thing™ is here. Of course you are free to disagree. When you feel that there is a functionality you need which isn't covered by the standard, you should

  1. Look if there is a library which provides it.

    You may ask on IRC, the project's mailing list, check out the CLiki, do some research on the web. Names sometimes start with trivial-*, but it's not a rule. In other words: do your homework.

  2. If you can't find such a library, create one.

    And by creating such a library I mean comparing the API proposed by at least two CL implementations (three would be optimal IMHO), carefully designing your own API which covers the functionality (if it's trivial, this should be easy) and implementing it in your library.

    Preferably (if possible) add a fallback implementation for implementations not covered, with an appropriate warning that it may be inefficient or incomplete in one way or another. (A minimal sketch of such a library is shown below, after this list.)

    It may be worth reading the Maintaining Portable Lisp Programs paper written by Christophe Rhodes.

  3. Write beautiful documentation.

    A CL implementation's docs may be very rough. It takes time to write them, and programmers tend to prioritize code over documentation. It's really bad, but it's very common for the documentation to be incomplete or outdated.

    Document your library: describe what it does and how to use it. Don't be afraid of greatness! People will praise you, success will come, the world will be a better place. And most importantly, your library will be useful to others.

  4. Publish the library.
  5. Make that library your project's dependency.

I know it's not easy, but in the long term it's beneficial. I guarantee you that. That's how the ecosystem grows. Less duplication, more cooperation - pure benefit.
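
As an illustration of point 2 above, here is a minimal sketch of such a trivial portability library, using the classic example of reading an environment variable. The package name is made up, the implementation-specific calls are quoted from memory (check the respective manuals), and the fallback simply warns and gives up.

(defpackage #:trivial-getenv
  (:use #:cl)
  (:export #:getenv))

(in-package #:trivial-getenv)

(defun getenv (name)
  "Return the value of the environment variable NAME, or NIL."
  #+sbcl  (sb-ext:posix-getenv name)
  #+ccl   (ccl:getenv name)
  #+clisp (ext:getenv name)
  ;; Fallback for implementations not covered: warn and give up,
  ;; as suggested above.
  #-(or sbcl ccl clisp)
  (progn
    (warn "TRIVIAL-GETENV: unsupported implementation, returning NIL.")
    nil))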

Some people don't follow this path. Maybe they didn't think it through, maybe they did and decided that keeping the dependency list minimal was essential to their project, or maybe they were simply lazy and hacked their own solution. There are also some old projects which export a number of such features and are effectively a very big portability library and an application at the same time (ACL-compat, McCLIM and others). What to do then?

If it's a conscious decision of the developer (who doesn't want to depend on /anything/), you can do nothing but provide a patch adding your own implementation to the supported list. It's their project, their choice, we have to respect that.

But before doing that you may simply ask if they have something against plugging these hacks with the proper portability library. If they don't - do it, everybody will benefit.

There are a few additional benefits of the portability-library approach for the implementations themselves. Having these internal details in one place makes it more probable that your implementation is already supported. If the library has a bug, it's easier to fix it in one place. Also, if a CL implementation changes its API, it's easy to propagate the change to the corresponding portability libraries. And creators of new CL implementations have a simpler task when making their work usable with existing libraries.

It is worth noting that creating such a library paves the way for new quasi-standard functionality. For instance, Bordeaux Threads recently added the CONDITION-WAIT function, which isn't implemented on all implementations - a very good stimulus to add it. This is how library creators may have a real impact on implementation creators' decisions about what to implement next.
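
As a side note, this is roughly what that API looks like through the portability layer - a small illustrative sketch of the usual condition-variable idiom; the variable and function names are mine, not part of Bordeaux Threads:

(defvar *lock*  (bt:make-lock))
(defvar *cvar*  (bt:make-condition-variable))
(defvar *queue* '())

(defun pop-item ()
  "Block until *QUEUE* has an item, then pop it."
  (bt:with-lock-held (*lock*)
    (loop while (null *queue*)
          do (bt:condition-wait *cvar* *lock*))
    (pop *queue*)))

(defun push-item (item)
  (bt:with-lock-held (*lock*)
    (push item *queue*)
    (bt:condition-notify *cvar*)))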

6.1 Portability layer highlights

Here are some great projects that help CL implementations be part of a more usable ecosystem. Many of these are considered part of the de-facto standard:

  • bordeaux-threads: provides thread primitives, locks and condition variables
  • cl-store: serializing and deserializing CL objects from streams
  • cffi: foreign function interface (accessing foreign libraries)
  • closer-mop: meta-object protocol; provides its own closer-common-lisp-user package (which redefines, for instance, defmethod)
  • usocket: TCP/IP and UDP/IP socket interface
  • osicat: a lightweight operating system interface for Common Lisp on POSIX-like systems, including Windows
  • cl-fad: portable pathname library
  • trivial-garbage: a portable API to finalizers, weak hash-tables and weak pointers
  • trivial-features: ensures consistent *FEATURES* across multiple Common Lisp implementations
  • trivial-gray-streams: an extremely thin compatibility layer for Gray streams
  • external-program: enables running programs outside the Lisp process

There are many other very good libraries which span multiple implementations. Some of them have some drawbacks though.

For instance, IOlib is a great library, but it piggy-backs heavily on UN*X - if you develop for many platforms, you may want to consider alternatives.

UIOP is also a very nice set of utilities, but isn't documented well, does too many things at once and tries to deprecate other actively maintained projects - that is counterproductive and socially wrong. I'd discourage using it.

There are a few arguments supporting UIOP's state - it is a direct dependency of ASDF, so it can't (or doesn't want to) depend on other libraries, and many utilities are needed by this commonly used system definition library. My reasoning here is as follows: UIOP goes beyond ASDF's requirements and tries to make actively maintained projects obsolete. Additionally, it works only on the implementations it supports, even for features which could be implemented portably.

6.2 UIOP discussion

I'm aware that my opinion regarding UIOP may be a bit controversial. I've asked the library author and a few other people for feedback which I'm very grateful for. I'm publishing it here to keep opinions balanced.

6.2.1 Fare Rideau

Dear Daniel,

while there is a variety of valid opinions based on different interests and preferences, I believe your judgment of UIOP is based on incorrect premises.

First, I object to calling UIOP "not well documented". While UIOP isn't the best documented project around, all its exported functions and variables have pretty decent DOCSTRINGs, and there is at least one automatic document extractor, HEΛP, that can deal with the fact that UIOP is made of many packages, and extract the docstrings into a set of web pages, with a public heλp site listed in the UIOP README.md. The fact that some popular docstring extractors such as quickdocs can't deal with the many packages that UIOP creates with its own uiop:define-package doesn't mean that UIOP is less documented than other projects on which these extractors work well, it's a bug in these extractors.

Second, regarding the deprecation of other projects: yes, UIOP does try to deprecate other projects, but (a) it's a good thing, and (b) I don't know that any of the projects being deprecated is "actively maintained". It's a good thing to try to deprecate other lesser libraries, as I've argued in my article Consolidating Common Lisp libraries: whoever writes any library should work hard so it will deprecate all its rivals, or so that a better library will deprecate his and all rivals (such as optima deprecating my fare-matcher). That's what being serious about a library is all about. As for the quality of the libraries I'm deprecating, one widely-used project the functionality of which is completely covered by UIOP is cl-fad. cl-fad was a great improvement in its day, but some of its API is plain broken (e.g. the :directories argument to its walk-directory function has values with bogus names, while its many pathname manipulation functions get things subtly wrong in corner cases), and its implementation not quite as portable as UIOP (that works on all known actively used implementations). There is no reason whatsoever to ever choose cl-fad over UIOP for a new project. Another project is trivial-backtrace. I reproduced most of its functionality, except in a more stable, more portable way (to every single CL implementation). The only interface I didn't reproduce from it is map-backtrace, which is actually not portable in trivial-backtrace (only for SBCL and CCL), whereas serious portable backtrace users will want to use SLIME's or SLY's API, anyway. As for external-program, a good thing it has for it is some support for asynchronous execution of subprocesses; but it fails to abstract much over the discrepancies between implementations and operating systems, and is much less portable than uiop:run-program (as for trivial-shell, it just doesn't compete).

UIOP is also ubiquitous in a way that other libraries aren't: all implementations will let you (require "asdf") out of the box at which point you have UIOP available (exception: mostly dead implementations like Corman Lisp, GCL, Genera, SCL, XCL, may require you to install ASDF 3 on top of their code; still they are all supported by UIOP, whereas most portability libraries don't even bother with any of them). This ubiquity is important when writing scripts. Indeed, all the functionality in UIOP is so basic that ASDF needed it at some point — there is nothing in UIOP that wasn't itself required by some of ASDF's functionality, contrary to your claim that "UIOP goes beyond ASDF's requirements" (exception: I added one function or two to match the functionality in cl-fad, such as delete-directory-tree which BTW has an important safeguard argument :validate; but even those functions are used if not by ASDF itself, at least by the scripts used to release ASDF itself). I never decided "hey, let's make a better portability library, for the heck of it". Instead, I started making ASDF portable and robust, and at some point the portability code became a large chunk of ASDF and I made it into its own library, and because ASDF is targetting 16 different implementations and has to actually work on them, this library soon became much more portable, much more complete and much more robust than any other portability library, and I worked hard to achieve feature parity with all the libraries I was thereby deprecating.

Finally, a lot of the functionality that UIOP offers is just not offered by any other library, much less with any pretense of universal portability.

6.2.2 David Gu

For the documentation thing, I really think Quickdocs could do a better job. Bug #24 stated that problem; however, it remains to be solved. I will check this out if I have free time soon.

I used UIOP a lot at my previous company; the reason is simple and maybe a little naive: my manager didn't want to involve too many add-ons in the software. UIOP is shipped together with ASDF, it's really "convenient", and its robustness is the final reason why I will stick to it. If people understand how UIOP came about historically, from ASDF2 to ASDF3, I think they will understand why it appears to be deprecating several other projects – that was not its original idea.

But anyway, I really learned a lot from this post and also the comments. In my opinion, avoiding reinventing the wheel is the right idea and direction for this community. So from that perspective, I support @fare's idea that "it's a good thing to try to deprecate other lesser libraries". With this article, along with Maintaining Portable Lisp Programs and @fare's Consolidating Common Lisp Libraries, we should get more people involved in this topic.

Footnotes:

1

If you are a Common Lisp implementer and plan to add a feature beyond the ANSI specification, please consider writing a proposal and submitting it to the Common Lisp Document Repository. It will make everybody's life easier.

Michael MalisBuilding Fizzbuzz in Fractran from the Bottom Up

· 74 days ago

In this post, I am going to show you how to write Fizzbuzz in the programming language Fractran. If you don’t know, Fractran is an esoteric programming language. That means it is extraordinarily difficult to write any program in Fractran. To mitigate this difficulty, instead of writing Fizzbuzz in raw Fractran, what we are going to do is build a language that compiles to Fractran, and then write Fizzbuzz in that language.

This post is broken up into three parts. The first part covers what Fractran is and a way of understanding what a Fractran program does. Part 2 will go over the foundation of the language we will build and how it will map to Fractran. Finally, in Part 3, we will keep adding new features to the language until it becomes easy to write Fizzbuzz in it.


Part 1: Understanding Fractran

Before we can start writing programs in Fractran, we have to first understand what Fractran is. A Fractran program is represented as just a list of fractions. To execute a Fractran program, you start with a variable N=2. You then go through the list of fractions until you find a fraction F, such that N*F is an integer. You then set N=N*F and go back to the beginning of the list of fractions. You keep repeating this process until there is no fraction F such that N*F is an integer.

Since there is no way to print anything with the regular Fractran rules, we are going to add one additional rule on top of the ordinary ones. In addition to the list of fractions, each program will have a mapping from numbers to characters representing the “alphabet” of the program. After multiplying N by F, whenever the new N is a multiple of one of the numbers in the alphabet, that will “print” the character that the number maps to. I have written a function, run-fractran, which implements this version of Fractran and included it here. It takes a list of fractions and an alphabet as an alist and executes the program.
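
A minimal sketch of what such a run-fractran function might look like follows. It assumes the alphabet is passed as an alist of (character . number) pairs, which is what the alphabet function later in this post produces:

(defun run-fractran (fractions alphabet &optional (n 2))
  "Run the Fractran program FRACTIONS (a list of rationals) starting at N.
ALPHABET is an alist of (character . number) pairs: after every step,
each character whose number divides the new N is printed."
  (loop for f = (find-if (lambda (frac) (integerp (* n frac))) fractions)
        while f
        do (setf n (* n f))
           (loop for (char . num) in alphabet
                 when (zerop (mod n num))
                   do (princ char))
        finally (return n)))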

Let’s walk through a simple example. Let’s say we have the following Fractran program:

9/2, 1/5, 5/3

with the alphabet 5->’a’. To run this program, we start with N=2. We then go through the list of fractions until we find a fraction F such that N*F is an integer. On this first step, F becomes 9/2, since N*F = 2 * 9/2 = 9, which is an integer. We then set N to N*F so that N now becomes 9. Repeating this process again, we get F=5/3 and N=N*F=15. Since the number 5 is in the alphabet, and N is now a multiple of 5, we output the character that 5 maps to, ‘a’. If we keep repeating these steps, we eventually reach a point where N=1 and we have output the string “aa”. Since 1 times any of the fractions does not result in an integer, the program terminates with the output “aa”.

At this point, you may be thinking that writing any program in Fractran is nearly impossible. The truth is that there is a simple trick you can use that makes it much easier to program in Fractran. All you need to do is look at the prime factorization of all of the numbers. Let’s see what the above Fractran program looks like if we convert every number into a tuple (a,b,c), where a is how many times 2 divides the number, b is how many times 3 does, and c is how many times 5 does. The program then becomes:

(0, 2, 0) / (1, 0, 0)
(0, 0, 0) / (0, 0, 1)
(0, 0, 1) / (0, 1, 0)

We also have the tuple (0,0,1) mapping to ‘a’ for our alphabet. We start with N = (1,0,0). If you don’t know, multiplying two numbers is the same as adding the counts of each prime factor, and division is the same as subtracting the counts. For example, 2 * 6 = (1,0,0) + (1,1,0) = (2,1,0) = 12. With this way of looking at the program, finding a fraction F such that N*F is an integer becomes finding a “fraction” F such that each element in the tuple N is greater than or equal to the corresponding element in the tuple in the denominator of F. Once we find such an F, instead of multiplying N by it, we subtract from each element of N the corresponding value in the denominator of F (equivalent to dividing by the denominator), and add the corresponding value in the numerator (equivalent to multiplying by the numerator). Executing the program with this interpretation proceeds as follows.

We start with N = (1,0,0). Since every value in N is greater than or equal to their corresponding values in the denominator of the first fraction, we subtract every value in the first denominator and then add every value in the numerator to get N = (1,0,0) – (1,0,0) + (0,2,0) = (0,2,0). Repeating this again, F becomes the third fraction. Subtracting the denominator and adding the numerator gets us N = (0,1,1). Then since every value in N is greater than or equal to their corresponding element in (0,0,1), we print ‘a’. The program continues, just like it did for the original Fractran program.

Basically we can think of every prime number as having a “register” which can take on non-negative integer values. Each fraction is an instruction that operates on some of the registers. You can interpret a fraction as saying: if the current value of each register is greater than or equal to the value specified by the denominator (the number of times the prime for that register divides the denominator), you subtract from the registers all of the values in the denominator, add all the values specified in the numerator (the number of times the prime for each register divides the numerator), and then jump back to the first instruction. Otherwise, if any register is less than the value specified in the denominator, continue to the next fraction. For example, the fraction 9/2 can be translated into the following pseudocode:

;; If the register corresponding to the prime number 2 
;; is greater or equal to 1
if reg[2] >= 1
  ;; Decrement it by 1 and increment the register 
  ;; corresponding to 3 by 2. 
  reg[2] = reg[2] - 1
  reg[3] = reg[3] + 2
  goto the beginning of the program
;; Otherwise continue with the rest of the program.

Although programming Fractran is still difficult, this technique suddenly makes writing Fizzbuzz in Fractran tractable.


Part 2: Compiling to Fractran

For our compiler, we are going to need to generate a lot of primes. To do so, we will use a function, new-prime, which will generate a different prime each time it is called.1

(defun prime (n)
  "Is N a prime number?"
  (loop for i from 2 to (isqrt n)
        never (zerop (mod n i))))

(defparameter *next-new-prime* nil)

(defun new-prime ()
  "Returns a new prime we haven't used yet."
  (prog1 *next-new-prime*
    (setf *next-new-prime*
          (loop for i from (+ *next-new-prime* 1)
                if (prime i)
                  return i))))
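
One thing to note about the sketch above: *next-new-prime* has to be initialized before the first call (the assemble function shown later binds it to 2). After that, successive calls simply walk up through the primes:

(let ((*next-new-prime* 2))
  (list (new-prime) (new-prime) (new-prime)))
;; => (2 3 5)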

So now that we’ve got new-prime, we can start figuring out how we are going to compile to Fractran. The first detail we will need to figure out is how to express control flow in Fractran. In other words, we need a way to specify which fraction will execute after which. This is a problem because after a fraction executes, you always jump back to the first fraction.

Expressing control flow actually winds up being surprisingly easy. For each fraction we can designate a register. Then, we only execute a fraction if its register is set. It is easy to have a fraction conditionally execute depending on whether its register is set by using the trick we are using to interpret a Fractran program. All we need to do is multiply the denominator of each fraction by the prime for the register of that fraction. This way, we will pass over a fraction unless its register is set. Also, all we need to do to specify which fraction should execute after a given fraction is to multiply the numerator of the given fraction by the prime of the register for the next fraction. By doing this, after a fraction executes, it will set the register of the next fraction.

In order to keep track of the primes for the current fraction and for the next fraction, we will have two global variables. The first will be the prime number for the current instruction, and the second will be the prime number for the next instruction:

(defparameter *cur-inst-prime* nil)
(defparameter *next-inst-prime* nil)

We will also need a function advance which will advance the values of the variables once we move on to the next instruction.

(defun advance ()
  (setf *cur-inst-prime* *next-inst-prime*
        *next-inst-prime* (new-prime)))

Now that we’ve got a way of expressing control flow, we can start planning out what the language we will build will look like. From this point on, I am going to call the language we are building Lisptran. An easy way to represent a Lisptran program is as just a list of expressions. We can have several different kinds of expressions, each of which does something different.

The simplest kind of expression we will want is an inline fraction. If a Lisptran expression is just a fraction, we can just add that fraction to the Fractran program being generated.

Another kind of expression that would be useful is labels. Whenever a Lisptran expression is a Lisp symbol, we can interpret it as a label. Each label will be converted into the fraction that is the prime of the instruction after the label divided by the prime of the label. This way we can jump to the instruction after the label by setting the register for the label. In order to make keeping track of the primes of labels easy, we are going to keep a hash-table, *lisptran-labels*, mapping from labels to the primes for those labels. We will also have a function prime-for-label, which will look up the prime for a label or assign a new prime if one hasn’t been assigned yet:

(defparameter *lisptran-labels* nil)

(defun prime-for-label (label)
  (or (gethash label *lisptran-labels*)
      (setf (gethash label *lisptran-labels*)
            (new-prime))))

One last kind of expression that will be useful are macro calls. A macro call will be a list whose first element is the name of a macro followed by a list of arbitrary Lisp expressions (The expressions don’t have to be Fractran expressions. They can be interpreted however the macro wants them to be.). In order to compile a macro call, we will lookup the function associated with the macro, and call it on the expressions in the rest of the macro call. That function should then return a list of Lisptran expressions which will then be compiled in place of the macro call. After that we just continue compiling the new code generated by the macro expansion.

To keep track of the definitions of macros, we will keep a hash-table *lisptran-macros*, which will map from the name of the macro to the function for that macro. In order to make defining Lisptran macros easy, we can create a Lisp macro deftran, that works in a similar way to defmacro. When defining a macro with deftran, you are really just defining a function which will take the expressions in the macro call, and return a list of Lisptran instructions to be compiled in its place. Here is the definition for deftran:

(defparameter *lisptran-macros* (make-hash-table))

(defmacro deftran (name args &body body)
  "Define a Lisptran macro."
  `(setf (gethash ',name *lisptran-macros*)
         (lambda ,args ,@body)))

And that’s all of the different kinds of expressions we will need in Lisptran.

Although we now have all of the expressions we need, there are a few more pieces of the compiler we need to figure out. For example, we still haven’t figured out how we are going to represent variables yet. Ultimately this is trivial. We can just assign a register to every variable and keep a mapping from variable names to primes in the same way we have the mapping for labels:

(defparameter *lisptran-vars* nil)

(defun prime-for-var (var)
  (or (gethash var *lisptran-vars*)
      (setf (gethash var *lisptran-vars*)
            (new-prime))))

One last piece of the compiler we need to figure out is how we are going to represent the alphabet of the program. One way we can do this is just represent the characters in our alphabet as variables. The alphabet of a program could just be all of the variables that have characters for names and the primes of the registers for those variables. By doing it this way, we can print a character by just incrementing and then immediately decrementing a variable! Here is code that can be used to obtain the alphabet from *lisptran-vars*:

(defun alphabet (vars)
  "Given a hash-table of the Lisptran variables to primes, 
   returns an alist representing the alphabet."
  (loop for var being the hash-keys in vars 
        using (hash-value prime)
        if (characterp var)
          collect (cons var prime)))

Now that we can express control flow, variables, and macros, we have everything we need to write the actual Lisptran to Fractran compiler:

(defun assemble (insts)
  "Compile the given Lisptran program into Fractran. 
   Returns two values. The first is the Fractran program 
   and the second is the alphabet of the program."
  (let* ((*next-new-prime* 2)
         (*cur-inst-prime* (new-prime))
         (*next-inst-prime* (new-prime))
         (*lisptran-labels* (make-hash-table))
         (*lisptran-vars* (make-hash-table)))
    (values (assemble-helper insts)
            (alphabet *lisptran-vars*))))

(defun assemble-helper (exprs)
  (if (null exprs)
      '()
      (let ((expr (car exprs))
            (rest (cdr exprs)))
        (cond
          ;; If it's a number, we just add it to the
          ;; Fractran program and compile the rest
          ;; of the Lisptran program.
          ((numberp expr)
           (cons expr (assemble-helper rest)))

          ;; If it's a symbol, we divide the prime for
          ;; the next instruction by the prime for the
          ;; label.
          ((symbolp expr)
           (cons (/ *cur-inst-prime*
                    (prime-for-label expr))
                 (assemble-helper rest)))

          ;; Otherwise it's a macro call. We look up the
          ;; macro named by the first symbol in the
          ;; expression and call it on the rest of the
          ;; expressions in the macro call. We then
          ;; append all of the instructions returned by
          ;; it to the rest of the program and compile
          ;; that.
          (:else
           (let ((macrofn (gethash (car expr)
                                   *lisptran-macros*)))
             (assemble-helper (append (apply macrofn
                                             (cdr expr))
                                      rest))))))))

The function assemble takes a Lisptran program and returns two values. It returns the generated Fractran program and the alphabet of that program. assemble first initializes all of the global variables for the program and then goes to assemble-helper which recursively processes the Lisptran program according to the specification above. Using the function run-fractran that I mentioned above, we can write a function that will execute a given Lisptran program as follows:

(defun run-lisptran (insts)
  "Run the given Lisptran program."
  (multiple-value-call #'run-fractran (assemble insts)))

Part 3: Building Lisptran

Now that we’ve completed the core compiler, we can start adding actual features to it. From here on out, we will not touch the core compiler. All we are going to do is define a couple of Lisptran macros. Eventually we will have enough macros that programming in Lisptran feels like programming a high-level assembly language.

The first operations we should define are basic arithmetic operations - for example, addition. In order to add addition to Lisptran, we can define a macro addi, which stands for add immediate. Immediate just means that we know what number we are adding at compile time. The macro addi will take a variable and a number, and will expand into a fraction which will add the given number to the register for the variable. In this case, the denominator for the fraction will just be the prime for the current instruction (execute this instruction when that register is set) and the numerator will be the prime for the next instruction (execute the next instruction after this one) times the prime for the variable raised to the power of the number we are adding (add the immediate to the register). Here is what the definition for addi looks like:

(deftran addi (x y)
  (prog1 (list (/ (* *next-inst-prime* (expt (prime-for-var x) y))
                  *cur-inst-prime*))
    (advance)))
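
To make the arithmetic concrete: suppose the current instruction's register is the prime 2, the next instruction's register is 3, and x has been assigned the register 5 (the actual primes depend on the order in which new-prime hands them out). Then:

(addi x 3)
;; expands into the single fraction (3 * 5^3) / 2 = 375/2, i.e.
;; "when register 2 holds at least 1: clear it, add 3 to the register
;;  for X (prime 5), and set register 3 (the next instruction)".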

We are also going to want an operation that performs subtraction. It’s a bit tricky, but we can implement a macro subi (subtract immediate) in terms of addi, since subtracting a number is the same as adding the negative of that number:2

(deftran subi (x y) `((addi ,x ,(- y))))

Now that we’ve got some macros for performing basic arithmetic, we can start focusing on macros that allow us to express control flow. The first control flow macro we will implement is >=i (jump if greater than or equal to immediate). In order to implement >=i, we will have it expand into three fractions. The first fraction will test if the variable is greater or equal to the immediate. If the test succeeds, we will then advance to the second fraction which will restore the variable (since when a test succeeds, all of the values from the denominator are decremented from the corresponding registers), and then jump to the label passed in to >=i. If the test fails, we will fall through to the third fraction which will just continue onto the next fraction after that.

The denominator of the first fraction will be the prime for the current instruction (execute the instruction if that register is set) times the prime for the register raised to the power of the constant (how we test that the register is greater than or equal to the immediate), and the numerator will be the prime for the second instruction (so we go to the second instruction if the test succeeds). The second fraction is the prime for the label passed into >=i (so we jump to wherever the label designates) times the prime for the register raised to the power of the constant (restoring what the test subtracted), divided by the prime for that instruction. Lastly, the denominator of the third fraction is the prime for the current instruction (so we fall through to it if the test in the first fraction fails), and the numerator is just the prime for the next instruction so that we continue to that if the test fails:

(deftran >=i (var val label)
  (prog1 (let ((restore (new-prime)))
           (list (/ restore
                    (expt (prime-for-var var) val)
                    *cur-inst-prime*)
                 (/ (* (prime-for-label label)
                       (expt (prime-for-var var) val))
                    restore)
                 (/ *next-inst-prime* *cur-inst-prime*)))
    (advance)))

Believe it or not, after this point we won’t even need to think about fractions anymore. Lisptran now has enough of a foundation that all of the further macros we will need can be expressed in terms of addi, subi, and >=i. The only two macros that actually need to be implemented in terms of Fractran are addi and >=i. That means no more thinking about Fractran. From here on out, all we have is Lisptran!

We can easily define unconditional goto in terms of >=i. Since all of the registers start at 0, we can implement goto as greater than or equal to zero. We use the Lisp function gensym to generate a variable without a name so that the variable doesn’t conflict with any other Lisptran variables:

(deftran goto (label) `((>=i ,(gensym) 0 ,label)))

Then through a combination of >=i and goto, we can define <=i:

(deftran <=i (var val label)
  (let ((gskip (gensym))) 
    `((>=i ,var ,(+ val 1) ,gskip)
      (goto ,label)
      ,gskip)))

Now that we have several macros for doing control flow, we can start building some utilities for printing. As mentioned previously printing a character is the same as incrementing the variable with the character as its name and then immediately decrementing it:

(deftran print-char (char)
  `((addi ,char 1)
    (subi ,char 1)))

Then if we want to write a macro that prints a string, it can just expand into a series of calls to print-char, each of which prints a single character in the string:

(deftran print-string (str)
  (loop for char across str
        collect `(print-char ,char)))

We are also going to need a function to print a number. Writing this with the current state of Lisptran is fairly difficult since we haven’t implemented several utilities such as mod yet, but we can start by implementing a macro print-digit that prints the value of a variable that is between 0 and 9. We can implement it, by having it expand into a series of conditions. The first one will check if the variable is less than or equal to zero. If so it will print the character zero and jump past the rest of the conditions. Otherwise it falls through to the next condition which tests if the variable is less than or equal to one and so on. We don’t have to manually write the code for print-digit because we can use Lisp to generate the code for us:

(deftran print-digit (var)
  (loop with gend = (gensym)
        for i from 0 to 9
        for gprint = (gensym)
        for gskip = (gensym)
        append `((<=i ,var ,i ,gprint)
                 (goto ,gskip)
                 ,gprint
                 (print-char ,(digit-char i))
                 (goto ,gend)
                 ,gskip)
        into result
        finally (return `(,@result ,gend))))

At this point, now that we have macros for performing basic arithmetic, basic control flow, and printing, we can start writing some recognizable programs. For example here is a program that prints the numbers from zero to nine:

(start
 (>=i x 10 end)
 (print-digit x)
 (print-char #\newline)
 (addi x 1)
 (goto start)
 end)

If you are curious I have included the Fractran program generated by this Lisptran program here. It’s hard to believe that the above Lisptran program and the Fractran program are equivalent. They look completely different!

Now that we have a bunch of low level operations, we can start building some higher level ones. You may not have thought of it, but instructions don’t need to just have flat structure. For example, now that we have goto, we can use it to define while loops (just like in Loops in Lisp):

(deftran while (test &rest body)
  (let ((gstart (gensym))
        (gend (gensym)))
    `((goto ,gend)
      ,gstart
      ,@body
      ,gend
      (,@test ,gstart))))

In order to implement while, we are assuming that all predicates take a label as their last argument, which is where they will jump to if the predicate succeeds. Now that we have while loops, we can start writing some much more powerful macros for manipulating variables. Here are two useful ones: one that sets a variable to zero, and one that copies the value in one variable to another:

(deftran zero (var)
  `((while (>=i ,var 1)
      (subi ,var 1))))

(deftran move (to from)
  (let ((gtemp (gensym)))
    `((zero ,to)
      (while (>=i ,from 1)
        (addi ,gtemp 1)
        (subi ,from 1))
      (while (>=i ,gtemp 1)
        (addi ,to 1)
        (addi ,from 1)
        (subi ,gtemp 1)))))

For move, we first repeatedly decrement the variable we are moving from while incrementing a temporary variable. Then we drain the temporary variable, restoring the original variable and incrementing the variable we are moving the value to at the same time.

With all of these macros, we can finally start focusing on macros that are actually relevant to Fizzbuzz. One operation that is absolutely going to be necessary for Fizzbuzz is mod. We can implement a macro modi by repeatedly subtracting the immediate until the variable is less than the immediate.

(deftran modi (var val)
  `((while (>=i ,var ,val)
      (subi ,var ,val))))

We only need one more real feature before we can start writing Fizzbuzz. We are going to need a way of printing numbers. In order to print an arbitrary number, we are going to need a way of doing integer division. We can implement a macro divi by repeatedly subtracting the immediate until the variable is less than the immediate and keeping track of the number of times we’ve subtracted the immediate.

(deftran divi (x y)
  (let ((gresult (gensym)))
    `((zero ,gresult)
      (while (>=i ,x ,y)
        (addi ,gresult 1)
        (subi ,x ,y))
      (move ,x ,gresult))))

Now for the final macro we will need: a macro for printing numbers. Actually, we are going to cheat a little. Printing numbers winds up being pretty difficult since you have to print the digits from left to right, but you can only look at the lowest digit at a time. To make things easier, we are only going to write a macro that is able to print two-digit numbers. We won’t need to print 100 since “buzz” will be printed instead.

(deftran print-number (var)
  (let ((gtemp (gensym))
        (gskip (gensym)))
    `((move ,gtemp ,var)
      (divi ,gtemp 10)
      (<=i ,gtemp 0 ,gskip)
      (print-digit ,gtemp)
      ,gskip
      (move ,gtemp ,var)
      (modi ,gtemp 10)
      (print-digit ,gtemp)
      (print-char #\newline))))

Now our language is high-level enough that Fizzbuzz is going to be practically as easy as it will get. Here is an implementation of Fizzbuzz in Lisptran.

((addi x 1)
 (while (<=i x 100)
   (move rem x)
   (modi rem 15)
   (<=i rem 0 fizzbuzz)

   (move rem x)
   (modi rem 3)
   (<=i rem 0 fizz)

   (move rem x)
   (modi rem 5)
   (<=i rem 0 buzz)

   (print-number x)
   (goto end)

   fizzbuzz
   (print-string "fizzbuzz")
   (goto end)

   fizz
   (print-string "fizz")
   (goto end)

   buzz
   (print-string "buzz")
   (goto end)

   end
   (addi x 1)))

I’ve also included the generated Fractran program here and included all of the full source code for this blog post here.

I find it absolutely amazing that we were able to build a pretty decent language by repeatedly adding more and more features on top of what we already had. To recap, we implemented a basic arithmetic operation (addi) in terms of raw Fractran and then defined a second (subi) in terms of that. From there we defined three macros for doing control flow (>=i, goto, <=i), with the second two being defined in terms of the first. Then we were able to define macros for printing (print-char, print-string, print-digit). At this point we had all of the low-level operations we needed, so we could start implementing while loops (while), a high-level control flow construct. With while loops, we were able to define several macros for manipulating variables (zero, move). With these new utilities for manipulating variables we could define more advanced arithmetic operations (modi, divi). Then with these new operations we were able to define a way to print an arbitrary two-digit number (print-number). Finally, using everything we had up to this point, we were able to write Fizzbuzz. It’s just incredible that we could make a language by always making slight abstractions on top of the operations we already had.

The post Building Fizzbuzz in Fractran from the Bottom Up appeared first on Macrology.

CL Test Gridquicklisp 2016-05-31

· 76 days ago
The difference between this and previous months:

Grouped by lisp implementation first and then by library:
https://common-lisp.net/project/cl-test-grid/ql/quicklisp-2016-05-31-diff.html

Grouped by library first and then by lisp impl:
https://common-lisp.net/project/cl-test-grid/ql/quicklisp-2016-05-31-diff2.html

(Both reports show the same data, just arranged differently)

This time the diff is smaller than last month's. There are some regressions and some improvements.

If you're interested in some particular failure or need help investigating something, just ask in the comments or on the mailing list.

Wimpie NortjeDaemonizing Common Lisp services.

· 78 days ago

Update (2016-06-09):

During a discussion on Reddit I realised that the issues discussed in this post are not relevant to the systemd init system.

At the time of writing my servers run Ubuntu 14.04 which uses Upstart as the init system. Upstart detaches the tty of the started services. That is the cause of the issues discussed below.

Most Linux distributions seem to migrate to systemd as the init system. systemd does not detach the tty, it binds stdin to /dev/null and stdout and stderr to a logging stream.

If your OS uses systemd you can turn a foreground application into a service without using GNU Screen or the daemon package.

Deploying a Common Lisp server application in production requires that it runs as a daemon.

A Unix init system like Upstart or systemd can be used to ensure that the service is always running. In this case the application can daemonize itself or it can run in the foreground and let the init system take care of daemonization.

It is usually easier to create an init script for a self-daemonizing application but then the source code gets more complicated.

When the application runs in the foreground the init script is slightly more tricky but the application is easier to debug.

Self-daemonizing

The procedure to daemonize an application is well documented, but most of this documentation is focused on C. Implementing the complete procedure in Common Lisp can become a mission in itself because it involves forking and opening and closing the standard IO streams at the right times.

The daemon package implements all the necessary steps and provides a trivial API for daemonizing a program.

Though daemon is easy to use, all the forking makes it difficult to get backtraces when something goes wrong. The application then appears to be hanging while it is actually waiting in the debugger for input but the debugger can't be used because standard IO is closed.

When a service daemonizes itself it is advisable to devise some method to interact with the application while it is waiting in the debugger. One of the swank libraries could be useful.
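
For example, starting a Swank server when the application boots gives you a way back into the running image (the port number here is an arbitrary choice):

;; Load Swank (e.g. via Quicklisp) and start a listener inside the
;; daemonized image, so that when the application appears to hang in
;; the debugger you can attach from Emacs with M-x slime-connect.
(swank:create-server :port 4005 :dont-close t)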

Run in foreground

A foreground application can be used as-is for a service, but it cannot be used directly as the main application instantiated by the init system.

During the daemonizing process the init system closes all the standard IO streams. This causes the REPL to exit and the application to end.

GNU Screen can be used to keep the standard IO open for a daemonized service. However, when an error occurs one is in much the same situation as with a self-daemonizing application. The application appears to hang because it is waiting in the debugger but the debugger can't easily be reached.

Screen has options to log standard IO to a file. Debugging then consists of killing the service and working through the logged backtrace. This is not ideal but it has the advantage of not introducing any code complexity in order to make a daemon.

Like in the self-daemonizing case, a swank library can be used to get live debugging back.

Comparing the options

                            Self-daemonizing        Foreground
Daemonizing responsibility  Common Lisp code        Unix init system
Tool                        daemon library          GNU Screen
Init script complexity      Less complex            More complex
Code complexity             Increased complexity    Same complexity as normal program
Debugging options           Embedded swank server   Logged data or embedded swank server

Zach BeaneLisp stuff on YouTube

· 78 days ago

If you want to see some neat videos, subscribe to dto, Baggers, and WarWeasle on YouTube. They all regularly post neat graphical stuff done in Common Lisp.

If you know any more people I should follow on YouTube, let me know.

LispjobsSecure Outcomes, Contract Common Lisp programmer

· 82 days ago

Secure Outcomes builds and provides digital livescan fingerprinting systems for use by law enforcement, military, airports, schools, Fortune 500s, etc.

All of our systems are constructed in Common Lisp.

We are looking for a contract CL developer located anywhere in the world that can build software for us.

Strong CL/LW background, of course, but also knowledge of foreign function interfacing etc needed.

Work full or part time.

See what we do at http://www.secureoutcomes.net

Resumes to lisp@secureoutcomes.net. No calls please.


Timofei ShatrovAll you need is PROGV

· 82 days ago

I have never seen PROGV in use – Erik Naggum

Common Lisp is very, very old. Tagbody and progv, anyone? – Hacker News user pwnstigator

I haven't written anything on this blog lately, mostly because of lack of time to work on side projects and consequently the lack of Lisp things to talk about. However recently I've been working on various improvements to my Ichiran project, and here's the story of how I came to use the much maligned (or rather, extremely obscure) special operator PROGV for the first time.

Ichiran is basically a glorified Japanese dictionary (used as the backend for the web app ichi.moe) and it heavily depends on a Postgres database that contains all the words, definitions and so on. The database is based on a dump of an open JMdict dictionary, which is constantly updated based on the users' submissions.

Well, the last time I generated the database from this dump was almost a year ago, and I have wanted to update the definitions for a while. However, this tends to break the accuracy of my word-segmenting algorithm. For this reason I want to keep the old and the new database around at the same time and be able to run whatever code I need with either of the databases.

I'm using Postmodern to access the database, which has a useful macro named with-connection. If I have a special variable *connection* and consistently use (with-connection *connection* ...) in my database-accessing functions then I can later call


(let ((*connection* '("foo" "bar" "baz" "quux")))
  (some-database-accessing-function))

and it will use connection ("foo" "bar" "baz" "quux") instead of the default one. I can even encapsulate it as a macro

(defmacro with-db (dbid &body body)
  `(let ((*connection* (get-spec ,dbid)))
     (with-connection *connection*
       ,@body)))

(dbid and get-spec are just more convenience features, so that I can refer to the connection by a single keyword instead of a list of 4 elements).

So far so good, but there’s a flaw with this approach. For performance reasons, some of the data from the database is stored in certain global variables. For example I have a variable *suffix-cache* that contains a mapping between various word suffixes and objects in the database that represent these suffixes. Obviously if I run something with a different connection, I want to use *suffix-cache* that’s actually suitable for this connection.

I created a simple wrapper macro around defvar that looks like this:

(defvar *conn-vars* nil)

(defmacro def-conn-var (name initial-value &rest args)
  `(progn
     (defvar ,name ,initial-value ,@args)
     (pushnew (cons ',name ,initial-value) *conn-vars* :key 'car)))
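
The *suffix-cache* variable mentioned above would then be declared with this wrapper, for example (the docstring is mine):

(def-conn-var *suffix-cache* nil
  "Mapping from word suffixes to the database objects representing them.")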

Now with-db can potentially add new dynamic variable bindings together with *connection* based on the contents of *conn-vars*. It’s pretty trivial to add the new bindings at macro-expansion time. However, that poses another problem: now all the conn-vars need to be defined before with-db is expanded. Moreover, if I introduce a new conn-var, all instances of the with-db macro must be recompiled. This might not be a problem for something like a desktop app, but my web app usually runs for months without being restarted, with new code being hot-swapped into the running image. I certainly don’t need the extra hassle of having to recompile everything in a specific order.

Meanwhile I had the definition of let open in the Hyperspec, and there was a link to progv at the bottom. I had no idea what it does, and thinking that my Lisp had gotten rusty, clicked through to refresh my memory. Imagine my surprise when I found that 1) I had never used this feature before and 2) it was exactly what I needed. Indeed, if I can bind dynamic variables at runtime, then I don’t need to re-expand the macro every time the set of these variables changes.
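
If you have never met it either: progv takes a list of symbols and a list of values, both computed at run time, and establishes dynamic bindings for them - something let cannot do. A toy illustration (not code from Ichiran):

(defvar *a* 1)

(progv (list '*a* '*b*) (list 10 20)
  (list *a* (symbol-value '*b*)))
;; => (10 20) ; outside the PROGV, *A* is 1 again and *B* is unbound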

The final code ended up being pretty messy, but it worked:

(defvar *conn-var-cache* (make-hash-table :test #'equal))

(defmacro with-db (dbid &body body)
  (alexandria:with-gensyms (pv-pairs var vars val vals iv key exists)
    `(let* ((*connection* (get-spec ,dbid))
            (,pv-pairs (when ,dbid
                         (loop for (,var . ,iv) in *conn-vars*
                            for ,key = (cons ,var *connection*)
                            for (,val ,exists) = (multiple-value-list (gethash ,key *conn-var-cache*))
                            collect ,var into ,vars
                            if ,exists collect ,val into ,vals
                            else collect ,iv into ,vals
                            finally (return (cons ,vars ,vals))))))
       (progv (car ,pv-pairs) (cdr ,pv-pairs)
         (unwind-protect
              (with-connection *connection*
                ,@body)
           (loop for ,var in (car ,pv-pairs)
              for ,key = (cons ,var *connection*)
              do (setf (gethash ,key *conn-var-cache*) (symbol-value ,var))))))))

Basically the loop creates a pair of a list of variables and a list of their values (no idea why progv couldn’t have accepted an alist or something). The values are taken from *conn-var-cache*, which is keyed on the pairing of variable name and connection spec. Then I also add an unwind-protect to save the values of the variables that might have changed within the body back into the cache. Note that this makes nested with-db’s unreliable! The fix is possible, and left as an exercise to the reader. Another problem is that dynamic variable bindings don’t get passed into new threads, so no threads should be spawned within the with-db macro.

And this is how I ended up using progv in production. This probably dethrones displaced array strings as the most obscure feature in my codebase. Hopefully I’ll have more things to write about in the future. Until next time!

Quicklisp newsMay 2016 Quicklisp dist update now available

· 84 days ago
New projects:
  • cl-ecs — An implementation of the Entity-Component-System pattern mostly used in game development. — MIT
  • cl-htmlprag — A port of Neil Van Dyke's famous HTMLPrag library to Common Lisp. — LGPL 2.1
  • cl-messagepack-rpc — A Common Lisp implementation of the MessagePack-RPC specification, which uses MessagePack serialization format to achieve efficient remote procedure calls (RPCs). — MIT
  • cl-monitors — Bindings to libmonitors, allowing the handling of monitors querying and resolution changing. — Artistic
  • cl-oclapi — Yet another OpenCL API bindings for Common Lisp. — MIT
  • cl-pack — Perl compatible binary pack() and unpack() library — BSD-3-Clause
  • cl-scan — port scanner. — ISC
  • cl-scsu — An implementation of 'Standard Compression Scheme for Unicode'. — MIT
  • clack-static-asset-middleware — A cache busting static file middleware for the clack web framework. — MIT
  • easing — Easing functions. — MIT
  • fare-scripts — Various small programs that I write in CL in lieu of shell scripts — MIT
  • fixed — A fixed-point number type. — MIT
  • focus — Customizable FORMAT strings and directives — BSD
  • glsl-spec — The GLSL Spec as a datastructure — The Unlicense
  • injection — Dependency injection for Common Lisp — GPLv3
  • json-mop — A metaclass for bridging CLOS and JSON — LGPLv3+
  • leveldb — LevelDB bindings for Common Lisp. — BSD
  • moira — Monitor and restart background threads. — MIT
  • recursive-restart — Restarts that can invoke themselves. — MIT
  • trivial-nntp — Simple tools for interfacing to NNTP servers — MIT
  • trivial-openstack — A simple Common Lisp OpenStack REST client. — MIT
  • trivial-yenc — Decode yenc file to a binary file — MIT
  • weblocks-prototype-js — Weblocks JavaScript backend for PrototypeJs — LLGPL
Updated projects: 3d-vectors, backports, binfix, bit-smasher, blackbird, caveman2-widgets, cepl, cepl.camera, cepl.devil, cl-ana, cl-autowrap, cl-bloom, cl-cairo2, cl-charms, cl-dot, cl-erlang-term, cl-gamepad, cl-geometry, cl-hash-table-destructuring, cl-html5-parser, cl-i18n, cl-jpeg, cl-liballegro, cl-libssh2, cl-mediawiki, cl-mongo, cl-mpi, cl-ohm, cl-pango, cl-project, cl-qrencode, cl-redis, cl-sdl2, cl-selenium, cl-store, clack, closer-mop, coleslaw, croatoan, deeds, dissect, djula, esrap, exscribe, femlisp, fiasco, firephp, flare, form-fiddle, glkit, glop, graph, halftone, helambdap, hl7-parser, hu.dwim.partial-eval, hu.dwim.quasi-quote, hu.dwim.util, kenzo, legit, lisp-interface-library, lisp-namespace, lisp-unit2, mcclim, oclcl, pathname-utils, pzmq, qt-libs, qtools, qtools-ui, query-fs, queues, rcl, restas, rtg-math, rutils, scalpl, sdl2kit, serapeum, simple-tasks, sketch, skitter, slime, snakes, stp-query, stringprep, stumpwm, temporal-functions, trivia, trivial-backtrace, trivial-benchmark, trivial-string-template, trivialib.bdd, ubiquitous, utilities.print-tree, varjo, verbose, vgplot, weblocks, weblocks-stores, xhtmlambda, zs3.

To get this update, use: (ql:update-dist "quicklisp")

You didn't miss it -- there wasn't a fundraiser in April. Or May, either. I've got my fingers crossed to see one soon, but I'm not sure exactly when it might happen. Stay tuned!

McCLIMNew website

· 95 days ago

We are happy to announce that the McCLIM website has been refreshed. All broken links have been replaced and the infrastructure has been reworked into something easier to maintain.

Website comparison

On the left is the old version, while on the right is the current design. The website is responsive, has an RSS stream, and all the other goodies coleslaw provides.

You may expect frequent improvements to the codebase in the near future - stay tuned!

McCLIMOld news list

· 95 days ago

For posterity we are publishing the archival news:

  • 2008-04-23: McCLIM 0.9.6 "St. George's Day" released.

  • 2007-09-02: McCLIM 0.9.5 "Eastern Orthodox Lithurgical New Year" released.

  • 2007-01-14: McCLIM 0.9.4 "Orthodox New Year" released.

  • 2006-11-02: McCLIM 0.9.3 "All Souls' Day" released.

  • 2006-03-30: Highly-experimental binaries of McCLIM 0.9.2, set up to start up the McCLIM listener, and incorporating the McCLIM demos as well as a graphical debugger and inspector, are available for download. Supported platforms: PPC/OS X, x86/Linux.

  • 2006-03-26: McCLIM 0.9.2 "Laetare Sunday" released.

  • 2005-03-06: McCLIM 0.9.1 "Mothering Sunday" released.

  • 2004-12-09: McCLIM CVS hosting moved to common-lisp.net; if you're a developer you should already have heard about this (if not, mail admin@common-lisp.net).

  • 2002-10-29: Tim Moore presented a paper written by Robert Strandh and himself at the International Lisp Conference during the last week of October 2002.

Vsevolod DyomkinImproving Lisp UX One Form at a Time

· 99 days ago

At the recent ELS, I presented a lightning talk about RUTILS and how I see it as a way of "modernizing" CL, i.e. updating the basic language elements to be simpler, clearer and more generic. Thus improving the everyday user experience and answering the complaints of outsiders about "historical cruft" in the Lisp standard. Indeed, Lisp has a lot of unrecognizable names (like mapcar and svref) or just unnecessary long ones (multiple-value-bind or defparameter), and out-of-the-box it lacks a lot of things that many current programmers are used to: unified generic accessors, generators, literal syntax for defining hash-tables or dynamic vectors etc. This may not be a problem for the people working with the language on a regular basis (or if it is they probably have a personal solution for that already), but it impedes communication with the outside world. I'd paid extra attention to that recently as I was preparing code examples for the experimental course on algorithms, which I teach now using Lisp instead of pseudocode (actually, modulo the naming/generics issue, Lisp is a great fit for that).

Unfortunately, the lightning talk format is too short for a good presentation of this topic, so here's a more elaborate post, in which I want to show a few examples from the RUTILS library of using Lisp's built-in capabilities to introduce clear, uniform, and generic syntactic abstractions that may be used alongside the standard Lisp operators, as well as replace them in the cases when we want to get more concise and understandable code.

What's cool about this problem is that, in Lisp, besides the common way of extending the language with functions and methods (and even macros/templates, which find their way into more and more languages), there are several other approaches that allow one to tackle issues which can't be covered by functions or even macros. Those include, for instance, reader macros and aliasing. Aliasing is, actually, a rather simple idea (and can probably be implemented in other dynamic languages): duplicating the functionality of existing functions or macros under a new name. The idea for such an operator came from Paul Graham's "On Lisp", and it may be implemented in the following way (see a full implementation here):


(defmacro abbr (short long &optional lambda-list)
  `(progn
     (cond
       ((macro-function ',long)
        (setf (macro-function ',short) (macro-function ',long)))
       ((fboundp ',long)
        (setf (fdefinition ',short) (fdefinition ',long))
        ,(when lambda-list
           `(define-setf-expander ,short ,lambda-list
              (values ,@(multiple-value-bind
                              (dummies vals store store-form access-form)
                            (get-setf-expansion
                             (cons long (remove-if (lambda (sym)
                                                     (member sym '(&optional &key)))
                                                   lambda-list)))
                          (let ((expansion-vals (mapcar (lambda (x) `(quote ,x))
                                                        (list dummies
                                                              vals
                                                              store
                                                              store-form
                                                              access-form))))
                            (setf (second expansion-vals)
                                  (cons 'list vals))
                            expansion-vals))))))
       (t (error "Can't abbreviate ~a" ',long)))
     (setf (documentation ',short 'function) (documentation ',long 'function))
     ',short))

As you may have noticed, it is also capable of duplicating a setf-expander for a given function if the lambda-list is provided. Using abbr we can define a lot of shorthands or alternative names, and it is heavily used in RUTILS to provide more than 50 alternative names; we'll see some of them in this post. What this example shows is the malleability of Lisp, which allows approaching its own improvement from different angles depending on the problem at hand and the tradeoffs you're willing to make.
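
For instance, the overly long standard names mentioned at the beginning of this post could be given shorter aliases like this (the alias names here are just for illustration - see RUTILS itself for the ones it actually exports):

(abbr mv-bind multiple-value-bind)
(abbr defpar  defparameter)

(mv-bind (quotient remainder) (floor 10 3)
  (list quotient remainder))
;; => (3 1)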

Introducing generic element access

One example of historic baggage in CL is the substantial variety of different methods for accessing elements of collections, hash-tables, structures, and objects, with no generic function unifying them. That's not to say that other languages have a totally uniform accessor mechanism. Usually, there will be two or three general-purpose ways to organize it: dot notation for object field access, something square-bracketish for arrays and other collections, and some generic operator like get for all the other cases. And occasionally (e.g. in Python or C++) there are hooks to plug into the built-in operators. Still, it's a much smaller number than in Lisp, and, what's more important, it's sufficiently distinct and non-surprising.

In Lisp, actually, nothing prevents us from doing even better — better both than the current state and than other languages — i.e. from having a fully uniform and extensible solution. To a first approximation, it's just a matter of defining a generic function that works on different container types and utilizes all the existing optimized accessor functions in its methods. This interface is extensible to any container object. In RUTILSX (a part of RUTILS where any experiments are allowed) this function is called generic-elt:


(defgeneric generic-elt (obj key &rest keys)
  (:method :around (obj key &rest keys)
    (reduce #'generic-elt keys :initial-value (call-next-method obj key))))

One important aspect you can see in this definition is the presence of an :around method that allows chaining multiple accesses in one call, dispatching each one to the appropriate basic method via call-next-method. Thus, we may write something like (generic-elt obj 'children 0 :key) to access, for instance, the element indexed by :key in a hash-table that is the first element of a sequence that is the contents of the slot children of some object obj.
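
To make this concrete, a few basic methods might look roughly like the following. This is a hypothetical sketch, not the actual RUTILSX source, which covers more container types and edge cases:

;; Sketch of basic generic-elt methods dispatching to the built-in accessors.
(defmethod generic-elt ((obj list) key &rest keys)
  (declare (ignore keys))
  (nth key obj))

(defmethod generic-elt ((obj vector) key &rest keys)
  (declare (ignore keys))
  (aref obj key))

(defmethod generic-elt ((obj hash-table) key &rest keys)
  (declare (ignore keys))
  (gethash key obj))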

The only problem with this function is its long name. Unfortunately, most of the good short element-access names, like elt and nth, are already taken by the Common Lisp standard, and for RUTILS I've adopted the religious principle of retaining full backward compatibility and not altering anything from the standard. This is a critical point: not redefining CL, but building on top of it and extending it!

Moreover, element access has two notable features: it's a very common operation, and it's not a usual function that performs some computation, so ideally it should have a short but prominent look in the code. The perfect solution occurred to me at one point: introduce the alias ? for it. Lisp allows naming operations with any characters, and a question mark, in my opinion, matches the intent of this operation very well: query a container-like object using a certain key. With it, our previous example becomes very succinct and cool: (? obj 'children 0 :key).
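
Given the abbr macro from above, the alias could be introduced with a single form. The exact lambda-list to pass for duplicating the setf-expander is my guess here, not the actual RUTILSX spelling:

;; Hypothetical: alias ? to generic-elt, including its setf-expander.
(abbr ? generic-elt (obj key &rest keys))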

In addition to element reading, there's also element write access. This operation in Lisp, as in most other languages, has a unified entry point called setf, and there's a special interface for providing specific "methods" for it based on the accessor function. Yet what do we do when the access function is polymorphic? Well, provide a polymorphic setter companion: (defsetf generic-elt generic-setf). Like generic-elt, generic-setf defers work to already defined specific setters:


(defmethod generic-setf ((obj list) key &rest keys-and-val)
  (setf (nth key obj) (atomize keys-and-val)))

And it also supports key chaining, so you can write: (setf (? obj 'children 0 :key) new-value).
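
By analogy with the list method above, a setter for hash-tables might be sketched as follows (hypothetical; the real RUTILSX method may treat chained keys differently):

;; Sketch of a hash-table-specific setter companion.
(defmethod generic-setf ((obj hash-table) key &rest keys-and-val)
  (setf (gethash key obj) (atomize keys-and-val)))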

Having this unified access functionality is nice and cool, but some people may still long for the familiar dot syntax for object slot access. We can't blame them: habits are a basis of good UX. Unfortunately, this is contrary to the Lisp way... But Lisp is a pro-choice and future-proof language: if you want something badly, even something outside the usual ways, you can almost always find a clean and supported means of implementing it. And this case is no exception. If you can tolerate a small addition — an @-prefix to the object reference (which also serves as an extra prominent indicator that something unusual is going on) — you can define a reader macro that expands forms like @obj.slot into our (? obj 'slot) or a standard (slot-value obj 'slot). With it, we can write something like (? tokens @dep.govr.id), which is much more succinct and, arguably, more readable than (elt tokens (slot-value (slot-value dep 'govr) 'id)).
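
For illustration, a minimal reader macro in this spirit might look as follows. This is my own sketch, not the RUTILS implementation; it assumes the smart-slot-value function defined later in the post (plain slot-value could be substituted):

;; Minimal sketch: read @obj.slot1.slot2 and expand it into nested
;; smart-slot-value calls.
(defun at-reader (stream char)
  (declare (ignore char))
  (let* ((token (string (read stream t nil t)))  ; e.g. "OBJ.SLOT1.SLOT2"
         (parts (loop with start = 0
                      for dot = (position #\. token :start start)
                      collect (subseq token start (or dot (length token)))
                      while dot
                      do (setf start (1+ dot)))))
    ;; Fold the slot chain into nested accessor calls.
    (reduce (lambda (form slot)
              `(smart-slot-value ,form ',(intern slot)))
            (rest parts)
            :initial-value (intern (first parts)))))

(set-macro-character #\@ #'at-reader t)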

Still, one issue remains unsolved in this approach: the preferred way to access a slot in Lisp is not via slot-value, but through an exported accessor method. One of the reasons for this is that slot names, which are usually short and can clash, are kept private to the package where they are defined. It means that in most cases @obj.slot will not work across packages. (Unlike OO languages in which every class is its own namespace, in Lisp this function is not "complected" into the OO system: packages are the namespacing mechanism, while objects serve for encapsulation and inheritance.)

There are two ways to tackle this problem. As I said, Lisp is future-proof: being thoroughly dynamic and extensible, CLOS defines a method that is called when there's a problem accessing an object's slot — slot-missing. Once again, we can define an :around method that is a little smarter (?) and tries to look up the slot name not only in the current package, but also in the class's original package.


(defmethod slot-missing :around
    (class instance slot-name (operation (eql 'slot-value)) &optional new-value)
  (declare (ignore new-value))
  (let ((class-package (symbol-package (class-name (class-of instance)))))
    (if (eql class-package (symbol-package slot-name)) ;; to avoid infinite looping
        (call-next-method)
        (if-it (find-symbol (string-upcase slot-name) class-package)
               (slot-value instance it)
               (call-next-method)))))

This is a rather radical approach and comes at a cost: two additional virtual function calls (the slot-missing method itself and the extra slot-value call). But in most cases it may be worth paying for convenience's sake, especially since you can always optimize a particular call site by changing the code to the most direct (slot-value obj 'package::slot) variant. By the way, using a slot accessor method is also costlier than plain slot-value, so we are compensating somewhat here. Anyway, it's cool to have all the options on the table — the beautiful slow method and the ugly fast one — that our backward-compatibility approach allows us. As usual, you can't have your cake and eat it too...

Though sometimes you can. :) If you think about this more, it becomes apparent that slot-value could have been implemented this way from the start: look up the slot name in the class's original package. As classes and structs are defined together with their slots, it is very rare, if not almost impossible, to see slot names that are not available in the package where their class is defined (you would have to explicitly use a private name from another package when defining the class to pull off such a trick). So slot-value should always look for slot names in the class's package first. We can define a "smart" slot-value variant that does just that, and with our nice generic-elt frontend it can easily be integrated without breaking backward compatibility.


(defun smart-slot-value (object slot-name)
  (slot-value object
              (or (find-symbol (string-upcase slot-name)
                               (symbol-package (class-name (class-of object))))
                  slot-name)))
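
With that in place, the integration with generic-elt mentioned above can be as simple as adding one more method for CLOS objects (again, a sketch rather than the exact RUTILSX code):

;; Route symbol keys on CLOS objects through smart-slot-value,
;; so that (? obj 'slot) works across packages.
(defmethod generic-elt ((obj standard-object) (key symbol) &rest keys)
  (declare (ignore keys))
  (smart-slot-value obj key))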

Unifying variable binding with with

Almost everything in functional variable definition and binding was pioneered by Lisp at some point, including the concept of destructuring. Yet the CL standard, once again, lacks unification in this area. There are at least four major constructs: let and let*, destructuring-bind and multiple-value-bind, plus a few specialized ones like with-slots or ppcre:register-groups-bind. It's also worth mentioning that the parallel-assignment behavior of plain let can be emulated with destructuring-bind or multiple-value-bind. Overall, this just screams for unification in a single construct, and there have already been a few attempts to do that (like metabang-bind). In RUTILS, I present a novel implementation of generic bind that has two distinctive features: a more plausible name — with — and a simple method-based extension mechanism. The implementation is very simple: the binding construct is selected at compile time based on the structure of the clause and, optionally, the presence of special symbols in it:


(defmacro with ((&rest bindings) &body body)
  (let ((rez body))
    (dolist (binding (reverse bindings))
      (:= rez `((,@(call #'expand-binding binding rez)))))
    (first rez)))

Only a handful of methods covering the basic cases are defined:

  • the first one expands to let or multiple-value-bind depending on the number of symbols in the clause (i.e. for multiple values you should have more than 2)
  • the second group triggers when the first element of the clause is a list and defaults to destructuring-bind, but has special behaviors for the two symbols ? and @, generating clauses for our generic element access and smart slot access discussed in the previous section

(defun expand-binding (binding form)
  (append (apply #'bind-dispatch binding)
          form))

(defgeneric bind-dispatch (arg1 arg2 &rest args)
  (:method ((arg1 symbol) arg2 &rest args)
    (if args
        `(multiple-value-bind (,arg1 ,arg2 ,@(butlast args)) ,(last1 args))
        `(let ((,arg1 ,arg2)))))
  (:method ((arg1 list) (arg2 (eql '?)) &rest args)
    `(let (,@(mapcar (lambda (var-key)
                       `(,(first (mklist var-key))
                         (? ,(first args) ,(last1 (mklist var-key)))))
                     arg1))))
  (:method ((arg1 list) (arg2 (eql '@)) &rest args)
    (with-gensyms (obj)
      `(let* ((,obj ,(first args))
              ,@(mapcar (lambda (var-slot)
                          `(,(first (mklist var-slot))
                            (smart-slot-value ,obj ',(last1 (mklist var-slot)))))
                        arg1)))))
  (:method ((arg1 list) arg2 &rest args)
    `(destructuring-bind ,arg1 ,arg2)))

In a sense, it's a classic example of combining generic functions and macros to create a clean and extensible UI. Another great benefit of using with is reduced code nesting, which can become quite deep with the standard operators. Here's one of the examples from my codebase:

(with (((stack buffer ctx) @ parser)
       (fs (extract-fs parser interm))
       (((toks :tokens) (cache :cache)) ? ctx))
  ...)

And here's how it would have looked in plain CL:

(with-slots (stack buffer ctx) parser
  (let ((fs (extract-fs parser interm))
        (toks (gethash :tokens ctx))
        (cache (gethash :cache ctx)))
    ...))

Implementing simple generators on top of signals

One of my friends and a Lisp enthusiast, Valery Zamarayev, who's also a long-time Python user, once complained that the only thing he misses in CL from Python is generators. This feature is popular in many dynamic languages, such as Ruby or Perl, and even Java 8 has introduced something similar. Sure, there are multiple ways to implement lazy evaluation in Lisp, with many libraries for that, like SERIES, pygen or CLAZY, so we don't have to wait for another version of the spec (especially since it's not coming 8-)

In RUTILS I have discovered what I believe to be a novel and very clean way to implement generators — on top of the signal system. The signal, or condition, facility is, by the way, one of the most underappreciated assets of Common Lisp and often comes to the rescue in seemingly dead ends of control-flow implementation. And Kent Pitman's description of it is one of my favorite reads in computer science. Anyway, here's all you need to implement Python-style generators in Lisp:


(define-condition generated ()
  ((item :initarg :item :reader generated-item)))

(defun yield (item)
  (restart-case (signal 'generated :item item)
    (resume () item)))

(defmacro doing ((item generator-form &optional result) &body body)
  (with-gensyms (e)
    `(block nil
       (handler-bind ((generated (lambda (,e)
                                   (let ((,item (generated-item ,e)))
                                     ,@body
                                     (invoke-restart (find-restart 'resume))))))
         ,generator-form)
       ,result)))

The doing macro works just like dolist, but iterates over the values produced by the generator form instead of an existing sequence. As you can see from this example, restarts are like generators in disguise. Or, to be more precise, they are a more general way to handle such functionality, and it takes just a thin layer of syntactic sugar to adapt them to this particular usage style.
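
Here is a small usage sketch (my own example, not from RUTILS): a generator of Fibonacci numbers consumed with doing.

;; A generator function: yield Fibonacci numbers below LIMIT.
(defun gen-fibs (limit)
  (loop for a = 0 then b
        and b = 1 then (+ a b)
        while (< a limit)
        do (yield a)))

;; Consume the generated values one by one.
(doing (x (gen-fibs 100))
  (print x))
;; prints 0 1 1 2 3 5 8 13 21 34 55 89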

And a few mischiefs

We have seen three different approaches to extending CL in order to accommodate new popular syntactic constructs and approaches. Lastly, I wanted to tread a little into the "danger zone" that may be considered unconventional or plain bad style by many lispers — modifying the syntax at the reader level. One thing that Clojure (following other dynamic languages before it) has proven, I believe, is the importance of shorthand literal notation for popular operations. The CL standard predates this understanding: it has specific print representations for various important objects, and even a special syntax for static arrays, but not much beyond that. Yet the language is really future-proof in this respect, because it provides a way to hook into the reader mechanism by modifying the readtables. This was further smoothed and packaged by the popular NAMED-READTABLES library, which allows treating readtables similarly to packages. In RUTILS I have defined several extended readtables that implement a few shortcuts which are used in literally every second function or macro I define in my code. These include:

  • a shorthand notation for zero-, one- or two-argument lambda functions: ^(+ % %%) expands into (lambda (% %%) (+ % %%)) (see the short example after this list)
  • a literal syntax for hash-tables: #h(equal "key" "val") will create an EQUAL hash-table with one key-value pair
  • a syntax for heredoc-strings: #/this quote (") shouldn't be escaped/# (which, unfortunately, doesn't always work smoothly in the REPL)
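
Here's how the first two look in use, assuming one of the RUTILS readtables is active (e.g. enabled via NAMED-READTABLES) — a small illustrative sketch:

;; With the RUTILS readtable in effect:
(funcall ^(+ % %%) 1 2)                   ; => 3

(defparameter *ht* #h(equal "key" "val")) ; an EQUAL hash-table
(? *ht* "key")                            ; => "val"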

Overall, I have experimented a lot with naming — it was sort of an obsession of mine in this work to find short and obvious names for new things, many of which substitute for existing functionality, under the constraint of not altering what's already in the standard. To this end, I've ventured into names made of non-alphabetic characters and even into the keyword package — a major offence, I reckon... And here are a few of the findings I wanted to share (besides ? and with, mentioned previously):

  • call is a new alias for funcall — I suppose in the 70s it was a really fun experience to call a function, hence the name, but now it's just too clumsy
  • get#, set#, and getset# are aliases and new operations for #-tables (when you can't or won't use ? for that)
  • finally, the grandest mischief is := (alongside :+, :-, :*, :/), which is an alias for setf (and, you've guessed it, incf etc.). The justification is that everyone is confused about the -f, that setting a variable is a very important operation that we should immediately notice in our clean and functional code ;), and that := is a very familiar syntax for it, even used by some languages, such as Pascal or Go. It may be controversial, but it's super-convenient.

The only thing I have failed to find a proper renaming for so far is mapcar. It is another one of those emblematic operations that should be familiar to everyone, yet the -car part creates confusion. For now, I resist the temptation to rename map into map-into and make map smarter by using the first sequence's type for the result expression. However, I haven't been able to find a plausible alternative even among the zoo of other languages' names for this concept. Any thoughts?

PS. Those were a few prominent examples, but RUTILS, in fact, has much more to offer. A lot of stuff was borrowed from other utility projects, as well as implemented from scratch: anaphoric operators, the famous iter (a replacement for loop), Clojure-style threading macros, a new semantic pair data type to replace cons-cells, lots of utilities for working with the standard data structures (sequences, vectors, hash-tables, strings) that make them truly first-class, iteration with explicit indices, etc. With all that in the toolbox, there's no longer any ground to claim that Lisp is in any aspect inferior in terms of day-to-day UX to some other language, be it Haskell, Ruby or Clojure. Of course, I'm not talking about the semantic differences here.

Wimpie NortjeDo you really want to use conditional compilation?

· 100 days ago

Edit (2016-05-18): Use featurep to improve readability in run time check. Thanks to @ogamita for the pointer.

Edit (2016-05-21): Fix spelling mistakes. featurep not featuresp. Thanks to @ngnghm for pointing that out.

How do I conditionally include code?

When code must change behaviour based on build time settings people often reach for the conditional reader macros (#+ and #-).

These macros have two properties one must be aware of.

  1. The compiler sees different code based on the condition.
  2. The macros are only evaluated at compile time.

Another thing to be aware of is that ASDF only recompiles files when their modification timestamps have changed. This is an optimization to decrease compilation time.

There are two situations where conditional code inclusion is most often used. The first is writing code which is portable between different computing environments, and the second is setting behaviour options at build time.

In the first scenario the same source file must work on different platforms (e.g. 32 bit and 64 bit) and different compilers. The variance in target environments makes it a necessity to present different code to the compiler based on the environment. Once the file is compiled there is no reason to recompile it until it is moved to a new environment. For this scenario the two properties above (and ASDF's partial compilation) are exactly what is needed, and the correct solution is the #+ and #- macros.

The second scenario happens when the application's behaviour can be modified by setting appropriate variables at build time. This technique is often used to switch a code base between development and production modes.

If conditional macros are used to perform this task it is easy to end up with files which were compiled under different conditions. This will almost certainly result in transient bugs, i.e. bugs which disappear when the complete project is recompiled.

Since a complete recompile avoids transient bugs the next logical step is to do a complete recompile every time strange behaviour is encountered. Doing such a recompile negates much of the benefit of ASDF's partial compilation.

Another issue is that compiling different code based on the environment means that you can never test the complete code base in a single environment. One problem with this is that there is always uncertainty about the source of a bug which is present in only one of the environments. Another problem is that one can get into a situation where buggy code is only ever present in an environment with no debugging facilities1.

In summary, using conditional macros to implement build time settings has the following problems2:

  • Transient bugs,
  • Long compile times, and
  • Multiple execution environments.

These problems can be avoided while keeping the build time settings by using run time checks instead of compile time checks. The code examples below illustrate both the compile time and run time methods.

Conditional behaviour using reader macros

This method causes trouble.

(pushnew :app-release *FEATURES*)

#+app-release
(do-release-stuff)
#-app-release
(do-dev-stuff)

Conditional behaviour using run time checks

One possible solution to rid your code of conditional macros.

(pushnew :app-release *FEATURES*)

(if (uiop:featurep :app-release) 
    (do-release-stuff)
    (do-dev-stuff))

Conditional reader macros or run time detection?

Though I have not seen much discussion about this topic, I have seen a few projects which implement build time settings as I suggest in this post.

  Method               Use case
  #+ and #-            The same functionality is implemented by different pieces of code for different environments.
  Run time detection   Different behaviours are selected based on build time options.

  1. An example is to move between SLIME and Buildapp with debugging disabled.

  2. Also see Buildapp fails when using uncompiled libraries for another problem caused by conditional macros.

Nicolas Hafner9th European Lisp Symposium - Confession 62

· 105 days ago

header
As I'm writing this I'm still in Krakow. Sitting next to me is Till, who joined me for ELS this year. It was a blast, but I'm also really exhausted and my throat is still hurting a bit from talking all the time the past three days. Our flight back to Zürich is in about an hour from now and I have a test to study for on the coming Thursday; I actually would've really liked to stay a bit longer, especially considering there were a few people I would've loved to talk to a bit more. Alas, you can't always get what you want.

But before I go through the entire thing by backtracking, let's instead reverse time all the way back to Sunday. Our flight to Krakow was scheduled for 17:00, so we had ample time to lounge around at home and try to relax a bit before the inevitable stress that is airport security and flying in general. Till had also packed way too much stuff, so we unloaded a bunch to make the carrying lighter. In hindsight I'm really glad we did that, as it turned out that we had to walk around quite a bit in Krakow.

At around two we then set off for the airport, where we had a quick lunch and noticed that Till had left his boarding pass in a book he had packed but we then left at home. Fortunately enough -after a bit of trouble with the Swiss website- we managed to download a copy of it to his tablet, so that all turned out fine. I suppose you really can't go for any kind of journey without at least some kind of oversight that gives you a hefty scare.

The plane we flew in was a small jet, but it was still pretty packed. I assume it was mostly Polish people returning home after a quick holiday break. On the flight my stomach got upset a bit, but otherwise it went by just fine. Once we finally arrived in Krakow we got a bit confused about the airport layout, as it was under pretty heavy construction. After some wandering about we managed to find the proper bus stop and get our tickets. I also exchanged way too much money for zloty, most of which is still in my wallet now. I didn't get much of an opportunity to waste it.

The bus ride to the hotel took around three quarters of an hour, so we got to have a good look around the outskirts of the city and its landscapes. The hotel itself was located in an area that looked rather worn down, the streets were not up to par and long stretches of the sidewalk were opened up for construction. After check-in and a short look-see at our room we decided to head on down to the bar and wait for someone to show up. Most of the conference people had already arrived in Krakow before us and were having a jolly time at the pre-conference registration party from what I overheard.

About an hour later we were joined by Christian Schafmeister and Joram Schrijver and the discussions immediately fired up. We talked a lot about Clasp and its near future- Christian was pretty worried about what he could show for his talk. He had the impression that people wouldn't be impressed by a new Common Lisp alone. I can't say I agree with that viewpoint, Clasp brings a lot of new stuff to the table that makes it a great addition to the list of implementations. The C++ interop alone is already noteworthy enough, but there's lots of smaller features that could prove very useful for larger projects. The biggest problem with Clasp remains however; there's just not enough people working on it to move it along quicker. Even with Christian's incredible speed and dedication, there's only so much he can do on his own. I've been trying to push Clasp into a situation where it is more accessible to other people for quite a while now, but especially with recent changes there's a lot left to be done for that- something that was reflected again throughout the discussions we had during the conference.

Later we were joined by a group of other lispers that were just returning from their previous party to start a new one at the bar. Things got rather lively and all sorts of topics got brought up. At around midnight I had to excuse myself however, as I wanted to be at least somewhat fresh on the coming morning. The hotel room was alright, at least it didn't smell terribly and was otherwise nicely roomy. Unfortunately the heating was also turned up enough that I couldn't fall asleep for about an hour. Opening the window cooled things down sufficiently and we finally managed to get some good rest in.

Finding the conference building in the morning was a bit tricky, but we managed to discover a kiosk along the way to get some snacks and drinks in. The conference provided for plenty of that on its own, but I was still glad to have a nice bottle of ice tea in my bag at all times. We got to the conference hall on time for the registration, but the actual conference organisation was oddly delayed. Nevertheless, discussion between the few people that had already showed up sparked almost immediately, so it didn't feel like we had to wait at all.

It was great to see Robert Strandh again as well, although I barely got to talk to him this year, much to my dismay. I'm hoping to remedy that next year. I also met Masatoshi Sano again, but I only got to talk to him on the second day. I met a few other people that I already knew from the previous ELS and had the pleasure of talking to them, but I unfortunately am terrible at remembering names, so I can't list them all here. My apologies.

Moving on to the talks. The first was about Lexical Closures and Complexity, by Francis Sergeraert. At points it was unfortunately -despite the rather heavy focus on Math at ETH- a bit over my head or moving too quickly, so I had a bit of trouble following along what exactly was happening. From what I could gather he uses closures to model potentially infinite or very large problem spaces and then perform various computations and mappings on those, thus still being able to compute real-value results without wasting enormous amounts of resources to try and model it all.

Next up was the language design section of the talks, the first of which focused on automated refactoring tools to aid students in finding style problems in their Racket code. It was fairly interesting to see a brief introduction to the tools used to both analyse and restructure source forms automatically to determine more succinct and idiomatic ways of achieving the same semantic result. It was also rather depressing to see some real-world snippets of code they had gathered from actual students. I can't say I'm surprised that this kind of absolutely horrendous code gets written by people being introduced to a language or programming in general, but one can't help but wonder if these 20-level nested ifs might stem from something other than the writer being new to it.

Following this was the demonstration of a library that extended the CL type system for a way to type-check sequences, allowing you to express things like plists as a type that you otherwise could not. It appears to me that this system might prove useful for succinct pattern checking, but unfortunately because these type definitions don't actually really communicate with the compiler in any way it is rather useless for inference or other potential optimisations that could be done if the compiler had actual knowledge of what kind of structure is being described. The code presented also used (declare (type ..)), which is the wrong way to go about something like this. Declarations are intended as promises from the programmer to the compiler. That all sane compilers also insert checks for the type on standard optimisation levels is not something that should be relied upon. check-type on the other hand would be perfectly suitable for this. However, at that point you might as well drop the type charade altogether and just have something like a check-pattern function that performs the test. Still, the talk presented an interesting view into how the actual sequence type descriptors are compiled into efficient finite state machines. The mechanism behind the type set merging was very intriguing.

Afterwards we heard a talk from Robert about his implementation of editor buffers, presenting an efficient and extensible way to handle text editing. I had read his paper on it before so I was already familiar with the ideas behind it, but it was a nice refresher to hear about it again. I'll make sure to see about hooking in his system when I inevitably get to the point of writing a source editor for QUI. It was also pretty surprising to hear about a topic like this since usually when using editors, one doesn't think about the potential efficiency problems presented by them- it all just works so well most of the time already. The biggest grievance for me in Emacs at the moment isn't necessarily the editing of text itself, even if that slows down to a crawl sometimes when I'm cruising about with a couple hundred cursors at the same time, no, the problem is with dynamic line wrapping. Emacs goes down to a complete crawl if you have any kind of buffer with long lines without line breaks. This however I assume has more to do with the displaying of the buffer than the internal textual manipulation algorithm. Maybe Robert has ideas for a good way to solve that problem as well.

During Lunch I had the great pleasure of meeting and talking to Chris Bagley of CEPL fame. I'll really have to look into that myself to see which parts of it can be incorporated into Trial. I'll definitely incorporate the Varjo part so that we can use sexprs for GLSL code, but there might be lots of other little gems in there that can be repurposed.

Following Lunch was the session on DSLs, starting off with a system to describe statistical tests in lisp. For this talk too I felt like I wasn't familiar enough with the areas it touched upon to really be able to appreciate what was being done. Apparently the system was used to do some hefty number-crunching. Besides, it's always great to see new discoveries in language evolution that allow a convenient description of a problem without sacrificing computational power for it.

The following talk stepped right into that line as well, presenting a high-performance image processing language called CMera (I believe). It does some really nifty stuff like automatically unrolling loops to avoid having to do edge testing in your tight loops all the time, expanding a single loop over an image into nine different parts that are all optimised for their individual parts. When comparing the code written against an implementation of the same algorithm in hand-written C++, the difference is absolutely astounding. Not only does this reduction in code size make things much more readable and maintainable, it also allows them to move much faster and prototype things quickly, all without having to sacrifice any computation speed. If anything they can gain speed by letting the compiler use higher-level information about the code to transform it into specialised and large but performant code.

Finally the last session of the day focused on demonstrations, firing right off with CL-MPI, a library to use the MPI system from within lisp while taking care of all the usual C idiosyncrasies, thus presenting a very handy API to run highly parallel and distributed code. Really interesting to me was their system to synchronise memory between the individual machines automatically. While the system seems pretty neat I couldn't help but wonder whether this might a bit too easily lead to either assuming synchronisation when none is present, or introducing a bottleneck in the system when it has to synchronise too often. Either way, I'm glad I don't have worry about highly distributed systems myself- measly threading alone is enough of a headache for me.

After this we had a really nice showing of an interactive computer vision system in Racket using the Kinect. That sounded like a very fun way to introduce students to Racket and computer vision in general. The few demos he showed also seemed very promising. Given last year's talk on computer vision, I might really have to take the time to look into this stuff more closely some time.

The last talk for the day focused on the problem of lexical variables in CL when debugging. Since lexical variables are often compiled away completely, it's tough to see their values during debugging. SBCL often does a pretty good job at retaining this information when compiling with (debug 3) in my experience, but there's certainly times when it doesn't, and I can see the value in implementing a system that can ensure that this gets preserved in every case, especially one that's portable across implementations. Apparently there's some really nasty code walking necessary to get the job done, and there's apparently still no code walker around that actually works as expected on all major implementations, which is a bit of a downer.

As usual closing off the day was a session of lightning talks. This year mine was a form of continuation on my last year's talk about Qtools. I talked very briefly about Qtools-UI, the effort to replace Qt parts that aren't extensible enough and thus provide a more convenient base for people to work with. I'm not sure if I managed to convince anyone to contribute to it, but hopefully it'll at least linger around in some people's heads so that they might remember it if they ever come across the need to write a GUI.

The rest of the lightning talks I'm afraid to say I can't quite remember. My memory is rather shoddy at the moment and the only reason I remember all the other talks is because I looked up their titles on the website. So, my apologies for skipping out on this, but I think the article is already plenty long as it is so going into detail on all these would only make it all the longer.

The first day was concluded by Chris, Christian, Joram, Till, and I going back to the hotel for a brief chat at the bar, followed by a quest in search for pizza. We looked up a bunch of places near the hotel and went on our way. The first we encountered was too full and the other was near a campus that was filled with students drinking booze and doing BBQ; we deemed it a bit too lively for us. The third one was inside a student dorm building, but had enough space for us to spend some hours talking and eating. The pizza tasted very differently from what I'm used to. It wasn't bad, but also not really my kind of thing.

Some more talking and a good night's rest later it was already Tuesday. Time flies when you're having a blast. We got up a tad later this time around and walked through a convenience store. I was relieved to see that just like all the stores I'm used to the layout is as confusing as possible so that you have to waste lots of time walking by everything except what you're looking for.

Since we were close on time and there was a group photo to shoot we didn't get any time to talk before the first talk. It started off with a presentation of the Julia language by Stefan Karpinski. Julia seems like a really nice replacement for Matlab and I'd very much welcome it if it got more ground that way. However, some of the points that were presented here didn't really seem to make much sense to me. One thing that was emphasised as distinguishing Julia is that number types and arithmetic aren't in the specification, but rather defined in Julia code itself. This sounds like a neat little thing to do for curiosity's sake, but I just can't see the benefit of it. Not to mention that now instead of reading some pages of a spec you have to read some pages of code with possibly arcane interconnections and optimisations going on. Whether this makes anything more clear is really dubious to me. I'm guessing that this part was mostly mentioned at all to at least bring something new to the table since it would otherwise be pretty hard to impress lispers. Another thing I was confused about is that he seemed to hint at the possibility of writing functions that get the information about the inferred type of the compiler and can use that to generate different code, which is something that I've missed in CL in places where macro functions could be further optimised with that kind of information, but the example he showed didn't seem to use that in any way or even get it at all, so I'm not sure if I didn't catch that part or what exactly is going on with that.

A quick break later we got to the implementations part of the talks. Robert presented his modern implementation of the loop macro, which uses a system of combinatory parsing and full CLOS to allow it to be extensible. I'd love to have a portable way to extend loop as iterate really doesn't appeal to me much at all and there's currently nothing else that is extensible for custom sequences and clauses and the like. I'm not sure if his implementation will be adopted, but I would definitely welcome it.

The next two talks, which were about source translation in Racket and STM in Clojure, I'm sad to say I can't really talk about because I was distracted by a bug I had discovered momentarily and couldn't help myself but try to fix. I got absorbed all too easily, so I didn't catch much of it.

During the lunch break I got to talk to Masatoshi Sano for a good while, we mostly discussed the prospect of using Roswell for my Portacle project and talked about some of the difficulties or ways to deal with what I'll just call "The Windows Situation". I later talked to Joram a bit about potentially getting him involved in the Colleen3 or Trial projects, for which I'd heartily welcome some contributors or even just discussion partners. I'm very excited about the prospect of working with him on that.

And then came the big one. Christian's talk was, just like last year, pretty comparable to a bomb dropping. Some suspect that he's not of this world. Ignoring the question of his conception, hearing him in his element talking about Chemistry is always a treat. He showed off a nice demo of what CANDO is capable of and it really looks like a nicely lispy way of performing chemistry modelling. This is said from what I can tell with my practically nonexistent knowledge of chemistry, so I can't really claim to have a grasp on what you can actually do with it. Given that he's been at this for such a long time though, I'm convinced he knows what he needs to do in order to create things like what he presented to us- a perfect water filtering membrane. I'm still glad to have gotten involved with Clasp, it has given me lots of really great talking and thinking opportunities. It's exciting to listen with or discuss the in-depth details of what's going on inside Clasp. Now though Christian needs to get his chemistry stuff off the ground so that he can get enough funding to continue Clasp. Unfortunately grants have been hard to come by for him and that's a looming pressure that has been haunting the project many times before. Hopefully he'll be able to prove the worth of Clasp and Cando in the near future. I wish him all the luck.

Next up we had a presentation about the question of how different implementations of a depth of field effect perform on different hardware. This was mostly a concerning example as to how much code still needs to be tuned to the hardware it's being run on today. Maybe the effect is actually even more so now, since lots of hardware that lies in the same category is still very differing on what it is adept at. Thankfully I am mostly staying clear of such optimisation lunacy.

Some more coffee passed by and we were ready for the last session of talks for the conference. The debut was made by James Anderson, presenting his research into how source files are connected with each other and what kind of dependencies exist between them. He wrote a system that analysed the entire Quicklisp ecosystem's files for symbol references between things and then crunched all that data down into interesting graphs using his own database technology. The graphs for larger systems like qtools-ui look like a complete jumble as expected. He also mentioned the difficulties of trying to extract this kind of relationship information since he could only inspect code by reading it in. This is particularly a problem for methods, since they are likely to be defined from lots of different source files and potentially packages, but without at least type inference or even runtime information you can't really know where the dependency goes. Initially his idea for doing this seemed to be that he doesn't want to have to write the dependency information into his system definition files and the system should be able to infer it automatically. I'm not so sure that this is a good idea, or that it is in fact such a problem. It seems like a rather minor inconvenience to me, but then again I've never written systems on a very large scale.

Closing it all off we had a presentation of Bazel and how it can be used to build Lisp. The most promising feature of it all being that you are able to use it to statically link libraries into an SBCL binary. I'm not convinced that Bazel is the tool to use for building unless you have a gigantic project or ecosystem surrounding it already however. It seems ridiculously heavy-weight and paying the price for it and its different way of configuration and operation does not seem worth the benefits unless you really need static linking and cannot do with shared libraries. Still, it gave me some things to think about for the eventual effort of writing my own build system, whenever that will happen.

Then as before we had some more lightning talks to round it all off. I didn't do a second one this time around, mostly because I didn't really know what to talk about for Trial. It did not seem finished enough to present yet. Maybe next year.

Finally the conference was rounded off by some goodbye messages from the conference organisation and the announcement that the next ELS might be happening in Brussels. We then had two hours left before the banquet. Michal Herda guided us into the inner parts of the city where we got to see some nice architecture. Along the way Chris Bagley and I chatted about the problems in writing game engines and games in general and some other assorted topics.

Once we arrived at the banquet I was pretty beat. The place we stayed at looked oddly high-brow. The tables were set with all the usual you get in fancy restaurants- multiple wine glasses, forks and knives. It seemed in an odd conflict with the rest of the getup of the conference attendees. We all looked far too casual for this kind of thing. Due to my pickiness I couldn't eat much of the food that was being served either. It certainly looked fancy, but I don't think the taste was in accordance to that. From what I've heard from others or noticed in their expressions it was nothing exceptional. No matter for me either way though, since I came all this way not to eat, but to finally be with people that understood me and vice versa. And I got ample opportunity to do exactly that. During the dinner I mostly talked with Joram about Colleen3 and Markless.

Soon enough it hit 22:00 and we had to leave. Due to the long way back to the hotel and other delays in saying goodbye to everyone, we only arrived around two hours later, at which point I just slammed myself into bed after making sure that I got the boarding pass for next morning.

And so today I woke up at six, just to make sure that we had plenty of time for potential mistakes. Three quarters of an hour later we were on the bus to the airport. Half an hour later we had already passed through security and were waiting at the gate, at which point I started writing this. The flight after was rather annoying. It was pretty packed and there were lots of screaming children on board, the bane of any flight passenger. I have no idea why there were so many families on board, let alone on a Wednesday morning, let alone from Krakow to Zürich. Despite hellish screams of tortured souls haunting us along the way we made it back safely.

Now it's already 16:00 and aside from getting home and continuing to write this I only got the time to cook some nice lunch- the kitchen remains to be cleaned.

I suppose I should try to form some sort of a conclusion here to end this overly long article on a good note. If it wasn't already apparent from my descriptions, I had a grand time at the conference and I'm really glad that I could attend again this year. A huge thanks to everyone that was willing to talk to me and especially to all the people that got involved to make it all happen. Hopefully it'll happen again next year; I'm definitely looking forward to it.

For now though I have to get back to work hacking lisp. There's so much left to be done.

footer

Daniel Kochmański9th European Lisp Symposium in Kraków

· 106 days ago

The 9th European Lisp Symposium in Kraków has ended. It was my second ELS (I attended for the first time just a year before, in London). It is really cool to spend some time and talk with such knowledgeable people. The European Lisp Symposium is a unique event because it gathers people from all around the world who are passionate about what they're doing. The mixture was astonishing - university professors, professional programmers, individual hackers, visionaries, students.

I'm glad that I have met in person many people with whom I had only had contact over the internet. I've heard about various exciting projects and ideas, both during the sessions and during the breaks. I even have an autograph from Kathleen Callaway on my Lisp in Small Pieces book. I'm also very excited that this year there was a talk about Clojure, a modern incarnation of the Lisp idea.

During the event I had a chance to stand in front of this "angry crowd" (actually a crowd of very nice people - I was still very stressed though) during my lightning talk. I talked about my opinions on what contributing to the Common Lisp ecosystem should look like and how people can get involved in a productive and efficient way.

Yesterday we were at the banquet which officially closed the symposium. The food was delicious and the company was great. The only thing is that it could have lasted a little longer. In fact many of the attendees moved to another place to continue the meeting. I've heard they ended late in the night (or early in the morning). I was too sleepy, so I left for my kind host's flat.

I want to thank all the organizers and speakers for the effort they've put in to make the symposium happen. Michał Psota did a tremendous job as the local chair - he managed all the local arrangements and everything was perfect. Irène Durand and Didier Verna managed things very well - everything went very smoothly, and only a few people got shot by Didier during the lightning talks for exceeding the time frame. I hope that I'll be able to attend the next ELS, which will probably take place in Brussels.

Vsevolod DyomkinEuropean Lisp Symposium 2016

· 106 days ago

For the last two days, I've been at ELS2016. So far, it's been a great experience - I had actually forgotten the joy of being in one room with several dozen Lisp enthusiasts. The peculiarity of this particular event is that it sits somewhere in the middle between a scientific conference, like ACL, which I've had a chance to attend in recent years thanks to my work at Grammarly, and a tech gathering: it employs the same peer-reviewed approach and scientific presentation style you will find at research conferences, but most of the topics are very applied and engineering-related.

Anyway, the program was really entertaining, with several deep and insightful presentations (here are the proceedings). The highlights for me were Jim Newton's talk on a type-checker implementation for heterogeneous sequences based on the Lisp declare facility (which I'm growing more and more fond of) and Kai Selgrad's presentation of an image-processing DSL that's an excellent example of the state-of-the-art Lisp approach to DSL design. Other things, like the description of an editor-buffer protocol and a local-variable preservation technique, were also quite insightful. And more good stuff is coming...

It's also great to hear new people bringing fresh ideas alongside old-timers sharing their wisdom and perspective - one of the things I appreciate in the Common Lisp community.

Near the end, I'm going to present a lightning talk about RUTILS and how I view it as a vehicle for evolving the Common Lisp user experience.

Sugaring Lisp for the 21st Century from Vsevolod Dyomkin

Christophe Rhodesnot going to els2016

· 109 days ago

I'm not going to the European Lisp Symposium this year.

It's a shame, because this is the first one I've missed; even in the height of the confusion of having two jobs, I managed to make it to Hamburg and Zadar. But organizing ELS2015 took a lot out of me, and it feels like it's been relentless ever since; while it would be lovely to spend two days in Krakow to recharge my batteries and just listen to the good stuff that is going on, I can't quite spare the time or manage the complexity.

Some of the recent complexity: following one of those "two jobs" link might give a slightly surprising result. Yes, Teclo Networks AG was acquired by Sandvine, Inc. This involved some fairly intricate and delicate negotiations, sucking up time and energy; some minor residual issues aside, I think things are done and dusted, and it's as positive an outcome for all as could be expected.

There have also been a number of sadder outcomes recently; others have written about David MacKay's recent death; I had the privilege to be in his lecture course while he was writing Information Theory, Inference, and Learning Algorithms, and I can trace the influence of both the course material and the lecturing style on my thought and practice. I (along with many others) admire his book about energy and humanity; it is beautifully clear, and starts from the facts and argues from those. "Please don't get me wrong: I'm not trying to be pro-nuclear. I'm just pro-arithmetic." - a rallying cry for advocates of rationality. I will also remember David cheerfully agreeing to play the viola for the Jesus College Music Society when some preposterous number of independent viola parts were needed (my fallible memory says "Brandenburg 3"). David's last interview is available to view; iPlayer-enabled listeners can hear Alan Blackwell's (computer scientist and double-bassist) tribute on BBC Radio 4's Last Word.

So with regret, I'm not travelling to Krakow this year; I will do my best to make the 10th European Lisp Symposium (how could I miss a nice round-numbered edition?), and in the meantime I'll raise a glass of Croatian Maraschino, courtesy of my time in Zadar, to the success of ELS 2016.

drmeisterLinking LLVM bitcode files for a dynamic language

· 112 days ago

Clasp Common Lisp is a dynamic language in which every top-level form needs to be evaluated in the top-level environment. Clasp compiles Common Lisp code to bitcode files and then links them together into a shared library or an executable. When the library or the executable is loaded, each top-level form needs to be evaluated. This will remain the case until Clasp gains the ability to save a running environment to a file, which it doesn't have yet. Even then, the ability to play back the top-level forms will be needed to create the environment that gets written to a file.

So the compiled bitcode files need to keep track of the top-level forms so that they can be played back at startup. Clasp does this by defining a “main” function with internal linkage (called “run-all”) for each llvm::Module. “run-all” evaluates every top-level form that was compiled into that llvm::Module. The tricky part is how this main function gets exposed to the outside world so that it can be called both when a single bitcode file is loaded into Clasp and when several files are linked together into a library or executable and it has to be invoked along with the “run-all” functions from the other modules.

Clasp creates a global variable in each module called “global-run-all-array” that stores an array of initially one function pointer that points to the module’s “run-all” function.  The “global-run-all-array” global variable is defined with “appending” linkage. What this does is when bitcode files get linked together by the system linker, the “global-run-all-array” will have all of the “run-all” functions appended together and put back into the “global-run-all-array” global variable.

Then there is the problem of determining the number of entries in the “global-run-all-array”. Clasp solves that by ensuring that the last module linked in a list of modules has a two-element “global-run-all-array” whose second element is NULL, and by adding a second global variable called “global-epilogue”.

When a bitcode file or a shared library is loaded into Clasp, it checks for the “global-epilogue” symbol; if it finds it, then it knows that the “global-run-all-array” contains a NULL-terminated array of function pointers to call. If “global-epilogue” is not present, then it knows that “global-run-all-array” contains a single function pointer.

Clasp then invokes each of the “global-run-all-array” functions one after the other, and each of them invokes the compiled functions for the top-level forms of its bitcode file.

This only takes a few seconds when starting up Clasp.

Note: I haven’t been actively blogging – because I’m very, very actively programming.  If you want to say hi, I’m on IRC, freenode.org #clasp almost every day.


CL Test Gridquicklisp 2016-04-21

· 114 days ago
The difference between this and previous months:

Grouped by lisp implementation first and then by library:
https://common-lisp.net/project/cl-test-grid//ql/quicklisp-2016-04-21-diff.html

Grouped by library first and then by lisp impl:
https://common-lisp.net/project/cl-test-grid//ql/quicklisp-2016-04-21-diff2.html

(Both reports show the same data, just arranged differently)

As usual, some new libraries start to fail on old lisp implementations because they need a newer ASDF.

3d-vectors redefines constant to a value not eql to the previous value. clinch starts to fail on several lisps. hyperluminal-mem refers undefined variable MOST-POSITIVE-B+SIZE. rutils crashes CCL. And some other failures. There are improvements too of course.

If you're interested in some particular failure or need help investigating something, just ask in the comments or on the mailing list.

Wimpie NortjeBuildapp fails when using uncompiled libraries.

· 114 days ago

Note: I use CCL 64-bit on Linux. I have not checked this on anything else.

I discovered that Buildapp fails to build a project when the FASLs1 for some 'standard' libraries are not available. 'Standard' meaning well known, mature libraries like Alexandria.

When I started using Common Lisp I tended to use the reader macros #+ and #- to conditionally compile for development or production. Since ASDF uses file modification times instead of code dependencies to determine what to recompile it would often happen that the project's various files were compiled using different compilation conditions. This caused many mysterious bugs.

The solution to this environment mismatch is to force a recompile of the whole project. I use a two-phase process of generating a manifest and then building the application binary. The recompile is forced by deleting2 the project FASLs before each phase.

At one point I tracked an elusive bug to a library which uses conditional reader macros for conditions which differ between my development and production environments. I extended my FASL deletion to include all the Quicklisp libraries for both building phases. This caused Buildapp to fail.

By using (ql:quickload) in its verbose mode during the build process I saw that CCL emitted 'compilation failure' warnings for some libraries while generating the manifest. Many mature and often used libraries caused such warnings. Compilation completed successfully in spite of the warnings and these libraries have been used like that for years so it does not seem to be a serious problem.

During the binary creation phase Buildapp exited at the first occurrence of a 'compiler failure' warning with an error. It seems that Buildapp escalated all warnings to errors which caused it to fail.

When your own code triggers this behaviour it is useful because it helps you ship better software. However, when external libraries trigger the failure it is extremely annoying because it blocks your development effort.

The solution for making a complete build in a consistent environment is to do a full clean before generating the manifest and a project-only clean before building the binary. This enables Buildapp to load the libraries while still compiling the complete set in a known environment, but it requires that the environment conditions for the libraries remain constant during the manifest generation and building phases3.

  1. 'FASL' is short for 'FASt Loading'. It is a binary file containing compiled code. ASDF prefers to load code from a FASL rather than a source file when it determines that the source has not changed since being compiled.

  2. ASDF documentation provides three options for forcing a recompile: (1) The (clear-system) API call, (2) touching the system's .asd file, and (3) deleting the project FASLs.

  3. This can be a tricky requirement because some libraries use #+quicklisp which definitely changes from manifest to building.


For older items, see the Planet Lisp Archives.


Last updated: 2016-08-17 19:33