Planet Lisp

Pascal Costanza: A Lisper's first impression of Julia

· 3 days ago
I have recently looked at Julia, a new language developed at MIT that promises to be a dynamic programming language suitable for scientific computing, with a high-performance implementation. It is an interesting project that heavily borrows from Common Lisp, Dylan, and Scheme, and you can rightfully argue that Julia itself is actually a Lisp dialect. While I wasn't very impressed with some other recent Lisp dialects (Arc, Clojure), Julia puts a couple of very interesting features on the table. Below is a discussion of the more salient aspects of Julia, as seen from a Common Lisper's perspective. It is based on the documentation of the prerelease version 0.3 of Julia.

Julia is closest to Dylan in many regards. It uses a somewhat mainstream syntax rather than s-expressions. Unlike Dylan, you can nevertheless write 'full' macros, since macro definitions are implemented in Julia, not some template language, and backquote/quasiquote is integrated with the Julia syntax. Julia is a Lisp-1 (like Scheme or Dylan) rather than a Lisp-2 (like Common Lisp or ISLISP), which makes it necessary to add macro hygiene features. Fortunately, this does not mean you have to deal with the rather painful syntax-case construct of some Scheme dialects, but you can still use far simpler backquote/quasiquote constructions, just with macro hygiene taken care of by default. Julia also allows you to selectively break hygiene. Although I usually strongly prefer the simplicity of Common Lisp's non-hygienic macro system, the fact that Julia is a Lisp-1 turns macro hygiene into a real problem, so I guess this is a reasonable design.
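In Common Lisp, the programmer deals with capture by hand, typically with gensym; a minimal sketch of the idiom that hygienic systems automate (the macro name my-swap is invented for illustration):

```lisp
;; Without GENSYM, a temporary named TMP in the expansion could
;; capture a variable named TMP at the call site.
(defmacro my-swap (a b)
  (let ((tmp (gensym "TMP")))
    `(let ((,tmp ,a))
       (setf ,a ,b)
       (setf ,b ,tmp))))

;; (let ((x 1) (y 2)) (my-swap x y) (list x y)) => (2 1)
```

In a Lisp-2, accidental capture of *function* names is rare, which is part of why this manual discipline is tolerable; in a Lisp-1 like Julia, hygiene by default is the safer design.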

Julia provides object-oriented programming from the ground up, similar to Dylan. It centers on generic functions rather than classes, where methods are defined outside classes and allow for multiple dispatch, just like in Dylan and Common Lisp. Like Dylan, and unlike Common Lisp, it does not distinguish between functions and generic functions: All functions can have methods, you do not have to make up your mind whether you want methods or plain functions. Unlike in Common Lisp, there are no method combinations, no before/after/around methods, and call-next-method is not directly supported, but has to be done manually. This is probably to simplify method dispatch, maybe to have some performance advantages, though I find it hard to imagine that adding method combinations would make things substantially worse.
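For readers less familiar with CLOS, here is a minimal sketch of multiple dispatch as it works in Common Lisp (all class and function names are invented for illustration); Julia's methods behave analogously:

```lisp
;; Methods live outside any class and can specialize on more than
;; one argument.
(defgeneric collide (a b))

(defclass ship () ())
(defclass asteroid () ())

(defmethod collide ((a ship) (b ship))     :ship-ship)
(defmethod collide ((a ship) (b asteroid)) :ship-asteroid)
(defmethod collide ((a asteroid) (b ship)) :asteroid-ship)

(collide (make-instance 'ship) (make-instance 'asteroid)) ; => :SHIP-ASTEROID
```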

You still need a class hierarchy to drive method dispatch. Unlike in Common Lisp, there is no multiple inheritance, only single inheritance. In fact, there is no real inheritance at all, because in Julia, only leaf classes of the class hierarchy are allowed to define slots/fields. All superclasses are required to be "abstract," without any slot definitions. Also, Julia classes cannot be redefined at runtime, so in fact Julia classes are much closer to Common Lisp's structure types (defstruct) than to its classes.

Julia's execution model is based on dynamic compilation. As a user, you don't have to compile your code at all; source code is just compiled on the fly (similarly to Clozure Common Lisp). Julia inlines functions on the fly, including generic functions, and can de-optimize when function definitions change at runtime. This is more flexible than in Common Lisp, where inlined functions can get out of sync with their potentially changed definitions. Also, while the Common Lisp specification does not say anything about whether generic functions can be inlined, there are aspects of the CLOS MOP specification that prevent generic functions from being inlined, at least for user-defined extensions of generic functions. Julia definitely seems more "modern" here.

In Julia, there is no distinction between variable binding and variable assignment. If you assign to a variable that has not been used before in the same lexical environment, it is silently introduced. In Common Lisp/Scheme/Dylan, there is a distinction between 'let' forms that introduce variable bindings, and assignments (setq/setf/set!) that perform assignments. I'm highly skeptical of Julia's design here, because it potentially leads to bugs that are hard to find: a simple typo in your source code may just go unnoticed.
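For comparison, a minimal Common Lisp sketch of the let/setq distinction described above:

```lisp
(let ((counter 0))   ; LET introduces a new binding
  (setq counter 10)  ; SETQ assigns to an existing one
  counter)           ; => 10

;; A typo in the variable name does not silently create a fresh
;; variable; most implementations warn at compile time:
;; (let ((counter 0))
;;   (setq countr 10))  ; e.g. SBCL: "undefined variable: COUNTR"
```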

In Julia, all variables are lexically scoped (except for some seemingly quirky scoping semantics for global variables, see below). There are no special / dynamically scoped variables in Julia, which is a major omission in my book. Some academics don't like special scoping, but special variables in Common Lisp are incredibly useful in practice, especially but not only for multi-threading!
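A minimal sketch of what special variables buy you (variable and function names invented for illustration); most Common Lisp implementations additionally give each thread its own dynamic bindings, which is what makes them so useful for multi-threading:

```lisp
(defvar *log-stream* *standard-output*)  ; a special (dynamically scoped) variable

(defun log-line (text)
  (format *log-stream* "~A~%" text))

;; Rebinding is visible to everything called within the LET's dynamic
;; extent, and is undone automatically on exit:
(with-open-file (f "/tmp/log.txt" :direction :output :if-exists :supersede)
  (let ((*log-stream* f))
    (log-line "goes to the file")))
(log-line "goes to standard output again")
```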

Julia's default representation for integers is either 32-bit or 64-bit integers, depending on the target architecture, which silently wrap around. Julia also supports "BigInts" that can be arbitrarily large, but you have to ask for them explicitly. In Common Lisp, integers are by default arbitrarily large, which I think is an advantage. Due to type tagging in Common Lisp implementations, integers that fall into the "fixnum" range are typically represented as immediate values rather than allocated on the heap. I didn't find anything in the Julia documentation that discusses this aspect of "BigInt."
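A quick illustration of the Common Lisp behavior described above:

```lisp
;; Arithmetic silently promotes to bignums instead of wrapping around:
(* most-positive-fixnum 2)  ; exact, heap-allocated bignum
(expt 2 100)                ; => 1267650600228229401496703205376

;; Small results stay immediate fixnums; the exact type name returned
;; by TYPE-OF is implementation-dependent.
(typep (+ 1 2) 'fixnum)     ; => T
```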

In Julia, all mathematical operations are generic functions and can be extended by user-defined methods. This is a strong advantage for Julia. In Common Lisp, mathematical operations are "plain" functions which cannot be extended. Due to some aspects in the design of Common Lisp's generic functions, it's hard to inline (or open-code) them, which is why for performance reasons, it's better to express mathematical (and other such performance-critical functions) as "plain" functions. Apart from that, the support for number types seems to be on par between Julia and Common Lisp (complex types, rational numbers, floating point numbers, etc.)
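A common Common Lisp workaround, sketched here under an invented package name, is to shadow cl:+ in your own package and make the shadowing symbol a generic function that defaults to the standard one (note this sketch only handles the binary case):

```lisp
(defpackage :generic-math
  (:use :cl)
  (:shadow #:+))
(in-package :generic-math)

(defgeneric + (a b)
  (:method ((a number) (b number)) (cl:+ a b)))

;; Users of this package can now extend + to their own types:
(defclass vec2 () ((x :initarg :x :reader x)
                   (y :initarg :y :reader y)))

(defmethod + ((a vec2) (b vec2))
  (make-instance 'vec2 :x (cl:+ (x a) (x b))
                       :y (cl:+ (y a) (y b))))
```

The cost is exactly the one mentioned above: the shadowed + is now a generic function, which Common Lisp implementations find hard to inline.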

In Julia, strings are unicode strings by default. My knowledge about unicode support in Common Lisp implementations is limited, so I cannot really make any comparisons here. One interesting aspect in Julia's string support is that there can be user-defined macros to parse them and construct other syntactic entities out of them. This feels somewhat similar to read macros in Common Lisp, although with a slightly different scope.

Julia's support for functions is similar to Common Lisp: They can be first class and anonymous (lambda expressions). There are varargs (&rest), optional and keyword arguments. In Common Lisp, optional and keyword arguments cannot be dispatched on in methods. In Julia, optional arguments can be dispatched on, but not keywords. (This is a pity; dispatch on keyword arguments would be very interesting, and is something I have wanted to add as part of Closer to MOP for a long time!)

Julia's support for control flow is much more limited than in Common Lisp. There are equivalents for progn, cond/if, for and while loops. Unlike Common Lisp, there is no support for a full loop facility, or even for a simple goto construct. Common Lisp clearly wins here. Julia's support for exception handling is also limited: No handler-bind, no restarts, unlike in Common Lisp, which are also really useful features.
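A minimal sketch of the handler-bind/restart combination that Julia lacks (function and restart names invented for illustration):

```lisp
;; RESTART-CASE establishes a recovery strategy close to the error
;; site; HANDLER-BIND runs before the stack unwinds, so a handler
;; higher up can choose that strategy and let execution continue.
(defun parse-entry (s)
  (restart-case (parse-integer s)
    (use-zero () 0)))

(handler-bind ((error (lambda (c)
                        (declare (ignore c))
                        (invoke-restart 'use-zero))))
  (mapcar #'parse-entry '("1" "two" "3")))  ; => (1 0 3)
```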

Julia's type system has some interesting differences to Common Lisp: There is a distinction between mutable and immutable classes. Immutable classes disallow any side effects on their fields. This seems to be primarily directed at enabling stack allocation as an optimization. In Common Lisp, you would use dynamic extent declarations when allocating structs (or other data types) to achieve similar performance improvements. I'm not sure why it would matter that the fields need to be read-only for such an optimization, but if this covers most cases, maybe this is good enough.
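For comparison, a minimal sketch of the dynamic-extent idiom mentioned above; note that side effects on the stack-allocated object remain legal, which is why the read-only requirement in Julia seems unnecessary for this particular optimization:

```lisp
(defun sum-window (n)
  (let ((buf (make-array n :initial-element 0)))
    (declare (dynamic-extent buf))  ; permits stack allocation
    ;; BUF must not escape this function, but mutating it is fine.
    (dotimes (i n)
      (setf (aref buf i) i))
    (reduce #'+ buf)))

;; (sum-window 5) => 10
```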

Julia allows for returning and receiving multiple values from function calls, similar to Common Lisp. This is a feature I like a lot in Common Lisp, so I'm happy it's also in Julia. In Julia, this is achieved by an explicit tuple type representation which doesn't exist in this form in Common Lisp. In Lisp, you could also return/receive lists instead of multiple values, which would correspond to this kind of tuple type, but lists add an additional performance overhead, which multiple values and, presumably, tuples in Julia don't have.
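A minimal illustration of multiple values in Common Lisp:

```lisp
;; FLOOR itself already returns two values; this wrapper just makes
;; the producer side explicit.
(defun div-mod (a b)
  (values (floor a b) (mod a b)))

(multiple-value-bind (q r) (div-mod 17 5)
  (list q r))  ; => (3 2)
```

Neither value is consed into a list here, which is the performance point made above; a Julia tuple return plays the same role.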

Julia supports parametric types. I don't see at the moment why this is relevant. You could achieve similar functionality also with some macrology. Maybe I'm missing something here.

There is a whole section on constructors (make-instance / make-struct) in the Julia manual. This makes me suspicious; it should be an easier topic.

Julia has a module system. It supports export and automatic import of explicitly exported identifiers. You can also still access non-exported identifiers with additional syntax. This is good, because module designers may not always perfectly anticipate what users actually need to access. Common Lisp's package system supports a similar distinction between external and internal definitions that can be accessed in different ways. I slightly prefer Common Lisp's ability to use the package name as a prefix even for explicitly exported definitions. There is no feature in Julia to rename imported identifiers, which is something where Common Lisp's support for explicit package name prefixes can come in very handy. I would like to see something like Oberon's support for renaming identifiers on import in some Lisp dialect someday, because I believe that is the most complete solution for dealing with potential name conflicts.

In terms of meta-programming, apart from macros, Julia also supports (top-level) eval, like Common Lisp. Julia's support for "reflection" is much weaker than Common Lisp's CLOS MOP: You can only inspect types at runtime, but you cannot modify them (and language designers should stop calling something "reflection" that is clearly just "introspection").

Both Julia and Common Lisp support multi-dimensional arrays. Common Lisp's arrays are in row-major order, starting at index 0 in every dimension. Julia's arrays are column-major order, starting at index 1 in every dimension. Julia's support for multi-dimensional arrays is a library feature, whereas Common Lisp's support is built into the language. Julia supports both dense and sparse matrices, where Common Lisp supports only dense matrices out of the box.

There are a lot of libraries that ship with Julia targeted at scientific computing.

Julia supports parallel programming with a model built on top of message passing: If you want to run parallel algorithms, you essentially start several instances of Julia that communicate with each other. The model does not support shared memory, and there is no multi-threading within a Julia instance (although there seem to be discussions among the Julia designers to add this in the future). The model is built on top of MPI as an implementation backend. However, the actual programming model supports single-sided communication: You can ask for a function to be executed in some other Julia worker process, and can later synchronize with it to fetch results. On top of that, there are some high-level constructs provided as library features, such as parallel maps and loops. Julia's message passing model ensures that within a Julia instance, only one task is executed at a time, so there is as yet no need to provide low-level synchronization mechanisms, such as locks or atomic operations. The lack of shared-memory parallelism is problematic because many parallel algorithms that are very easy to express with shared memory become quite complicated in a distributed memory setting. On the other hand, Julia's model easily supports true distributed programming: You can configure Julia to run several instances across a cluster, and use them in a quite straightforward way: substantially easier than what you have to do with, say, MPI, and much closer in ease of use to modern PGAS languages like Chapel or X10.

The ANSI specification for Common Lisp does not mention anything about multi-threading or parallel programming at all, but many Common Lisp implementations add support for shared-memory parallelism. I will not go into details here, but let me just briefly state that, for example, the LispWorks implementation of Common Lisp provides excellent support for symmetric multiprocessing that is at least on par with what you can find in most other language implementations in terms of parallel programming support. However, support for true distributed memory models unfortunately seems almost non-existent in Common Lisp, apart from some basic support for MPI in a library that was maintained only for a very short period a couple of years ago. Julia looks like a good source of inspiration for adding such features to Common Lisp.

However, one aspect of Julia's message passing approach seems problematic, as far as I can tell: You can pass closures between different instances of Julia, but it's not clear how free variables in a lambda expression are bound. It seems that lexical variables are bound and serialized to another process, but global variables are not serialized and need to be present in any process that may execute the closure. Experiments with adding side effects to free variables in lambda expressions that are passed to other processes seem to suggest that the semantics of this combination of features are unpredictable. At least, I have not been able to figure out what happens when: sometimes variables seem to be updated on the sender's side, sometimes not. I didn't find a discussion of this topic in the Julia manual.

Anyway, as you can tell, there are a lot of interesting features in Julia, and it's definitely worth a try.

Colin Lupton: SLIME for Emacs Live

· 6 days ago

Emacs Live is a frakkin’ epic compilation of customizations and packages, streamlined for the ultimate hacking experience. As I mentioned in my last post, Adventures in Clojure, it’s under development by the same team as the Clojure bridge to SuperCollider, Overtone. The one downside? It was designed for hacking Clojure, so it doesn’t include SLIME and Common Lisp support out of the box, and of course, it completely replaces your ~/.emacs.d/ directory and ~/.emacs config file, so you lose your existing SLIME setup (and all your other customizations) when you install Emacs Live. Don’t panic, the Emacs Live installer is smart enough to move your existing ~/.emacs.d/ folder and ~/.emacs config file to a safe place.

Emacs Live does, however, offer a pretty neat interface for creating Live Packs, boxed collections of Emacs packages and customizations, that can all be loaded together as a single package via ~/.emacs-live.el, stored outside the managed ~/emacs.d/ directory so that they can be maintained across updates. This made it only slightly less trivial than normal to get SLIME set up and running in Emacs Live.

To get the full Emacs Live experience for Common Lisp, however, you also need another package, AC-SLIME. It provides the auto-completion and pop-up documentation for Common Lisp, both in file buffers and the REPL.

I have packaged both together in an Emacs Live pack, which you can get at: https://github.com/thephoeron/slime-pack. Installation is a cinch. After installing Emacs Live, just clone SLIME-PACK into your ~/.live-packs/ directory, and add the following line to your ~/.emacs-live.el config file:

(live-append-packs '(~/.live-packs/slime-pack))

The default inferior-lisp-program for SLIME-PACK is SBCL. You can change this, as normal, by setting inferior-lisp-program with setq to your chosen Lisp implementation after the code above.

Once that’s all done and saved, either restart Emacs or compile and load your modified config file. You can then simply M-x slime as normal, and enjoy all the extra Emacs Live awesomeness for Common Lisp!


Quicklisp news: July 2014 Quicklisp dist update now available

· 9 days ago
New projects:
Updated projects: 3bmd, access, aws-sign4, btrie, caveman, chirp, cl-ana, cl-async, cl-autowrap, cl-charms, cl-colors, cl-conspack, cl-coroutine, cl-ftp, cl-fuse-meta-fs, cl-html5-parser, cl-ltsv, cl-mustache, cl-plplot, cl-ply, cl-project, cl-qrencode, cl-rdfxml, cl-rethinkdb, cl-sdl2, cl-xul, clack, closer-mop, clsql-helper, coleslaw, colleen, com.informatimago, conium, contextl, crane, datafly, drakma-async, esrap, function-cache, gbbopen, glyphs, hctsmsl, ieee-floats, lisp-interface-library, lisp-unit2, marching-cubes, mime4cl, ningle, packet, paiprolog, plump, protobuf, racer, readable, repl-utilities, rutils, sexml, shelly, slime, spinneret, talcl, trivial-ldap, vgplot, weblocks, weblocks-utils, wookie.

SBCL 1.2.1 changed some internals that SLIME 2.7 relied on. This update includes SLIME 2.8, which works fine with SBCL 1.2.1.

To get the dist update, use (ql:update-dist "quicklisp").

Enjoy!

Zach Beane: International Lisp Conference 2014

· 10 days ago
I'm going to the International Lisp Conference in 2014 and so should you. It's great to meet Lispers in person and swap stories. Montreal is also a great city.

The deadline for early registration is tomorrow, Monday, July 14th. If you join the ALU for $25, early registration is only $300. (If you don't join the ALU, early registration is $400.) After tomorrow the registration fee goes up significantly!

Go register today! See you in Montreal next month!

Colin Lupton: Hacking Lisp in the Cloud, Part 2

· 15 days ago

This morning I got access to the new Cloud9 IDE beta—and I have to say… WOW. It’s slicker, it’s faster, it’s more stable, auto-complete recognizes Lisp definition forms from your open workspace files such as defun and defmacro, and most importantly, it only takes seconds to get your workspace set up with RLWRAP, SBCL and Quicklisp.

The new Cloud9 IDE is running on an Ubuntu backend workspace. Cloud9 has had terminal access to your project workspace for quite some time now, but I’ve found the terminal experience to be significantly smoother in the new beta. It stays connected now, no longer timing-out on you when switching tabs or stepping away from the computer for a minute. Users can also use sudo for root access, and as a result install any debian package from apt (amongst many other things, of course). Emacs 24 is already installed by default. I suspect that SSH tunneling to a remote SWANK server from the Cloud9 workspace is also now possible.

The Collaboration tools seem to be more streamlined. Workspace Members, Notifications, and Group Chat all appear together in one panel. I expect, with all the other improvements in the beta, that collaborative editing of your workspace files is likewise improved.

There’s a new Outline panel that lists the symbol-names of your top-level definition forms for the current active view—yes, even for Lisp. You can select a symbol-name and jump right to its definition in the file. Also, this functionality appears to be integrated with auto-complete, allowing you to jump to a definition of a function, macro, or variable from the auto-complete list as you type.

An interesting set of features I have not yet tried is the custom Run/Build configurations. These appear to allow you to write custom Run/Build scripts for arbitrary programming languages, so you should now be able to integrate Lisp into the IDE better, and write/test/debug/deploy your Lisp applications for the most part automatically.

One step closer to my hopes stated in my previous post on Cloud9 IDE from January, the Cloud9 IDE beta includes a JavaScript REPL. Combined with all the other helpful tools that they’ve included to support Lisp cloud development, it seems reasonable to suppose that a full REPL plugin is in the works.

I’ve barely scratched the surface here—there are so many new features to try out, I’ll probably be discovering new things every day for the next week. And to think, this is just the Beta! If you do your part and show your support for Cloud9 as a Lisp Hacker, I’m quite certain that the next full version of Cloud9 IDE will include everything we need to Hack on Lisp seamlessly in the Cloud.

If you want to get beta access to the new Cloud9 IDE yourself, all you have to do is follow the instructions on their blog post. If your Cloud9 username is different from your Twitter handle, you may need to provide that to them as well to get Beta access.

As always, Happy Hacking!


Zach Beane: Common Lisp bits

· 15 days ago
A collection of Lisp Usenet gems, including articles from Kent Pitman, Erik Naggum, Chris Riesbeck, Pascal Costanza, and Will Hartung.

Things I Want in Common Lisp by Robert Smith.

Video demo of cl-notebook by lnaimathi. By the same author: "[T]he primary endeavor of every programmer is twofold: To understand, and to be understood", which demonstrates generating code from a visual diagram.

Looking to start a band? Stumped on a name? See this twitter thread for inspiration.

simple-search is a "smaller alternative to Montezuma" that "allows you to index documents you have stored in-memory and query them in various ways." Looks good. From Andrew Lyon.

It is not hard to read Lisp code, by Jisang Yoo.

mathkit is "a purely math-related utility kit, providing functions which can be useful for games, 3D, and GL in general" by Ryan Pavlik.

Nick Levine: International Lisp Conference, Montréal

· 21 days ago
(Updated) Program and registration details of next month's International Lisp Conference can now be found at http://international-lisp-conference.org/. Note the changed dates; note that the deadline for early registration is July 14.

Ben Hyde: Docker, part 2

· 27 days ago


I played with Docker some more. It's still in beta so, unsurprisingly, I ran into some problems. It's cool, nonetheless.

I made a repository for running OpenMCL, aka ccl, inside a container. I set this up so the Lisp process expects to be managed using slime/swank, so it exposes the port where swank listens for clients to connect. When you run it you publish that port, i.e. "-p 1234:4005" in the example below.

Docker shines at making it easy to try things like this. Fire it up: "docker run --name=my_ccl -i -d -p 1234:4005 bhyde/crate-of-ccl". Docker will spontaneously fetch everything you need. Then you M-x slime-connect to :1234 and you are all set. Well, almost: the hard part is getting access to the exported port.

I have run this in two ways, on my Mac and on DigitalOcean. On the Mac you need to have a virtual machine running Linux that will hold your containers; the usual way to do that is the boot2docker package. On Digital Ocean you can either run a Linux droplet and then install Docker, or you can use the Docker application, which bundles that for you.

I ran into lots of challenges getting access to the exported port. In the end I settled on using good old ssh LocalForward statements in my ~/.ssh/config to bring the exported port back to my workstation. Something like "LocalForward 91234 172.17.42.1:1234", where that IP address is that of an interface (docker0, for example) on the machine where the container is running. Lots of other things look like they will work, but didn't.

Docker consists of a client and a server (i.e. daemon). Both are implemented in the same executable. The client chats with the server using HTTP (approximately). This usually happens over a Unix socket, but you can ask the daemon to listen on a TCP port, and if you LocalForward that back to your workstation you can manage everything from there. This is nice since you can avoid cluttering your container hosting machine with source files. I have bash functions like this one, "dfc () { docker -H tcp://localhost:2376 $@ ; }", which provides a shorthand for chatting with the docker daemon on my Digital Ocean machine.

OpenMCL/ccl doesn't really like to be run as a server. People work around this by running it under something like screen (or tmux, detachtty, etc.). Docker bundles this functionality; that's what the -i switch (for interactive) requests in that docker run command. Having done that, you can then use "docker logs my_ccl" or "docker attach my_ccl" to dump the output or open a connection to the Lisp process's REPL. You exit a docker attach session using control-C. That can be difficult if you are inside an Emacs comint session, in which case M-x comint-kill-subjob is sometimes helpful.

For reasons beyond my ken, doing "echo '(print :hi)' | docker attach my_ccl" gets slightly different results depending on Digital Ocean vs. boot2docker. Still, you can use that to do assorted simple things. UIOP is included in the image along with Quicklisp, so you can do uiop:run-program calls, for example to apt-get etc.

Of course if you really want to do apt-get, install a bundle of Lisp code, etc. you ought to create a new container built on this one.  That kind of layering is another place where Docker shines.

So far I haven't puzzled out how to run one-liners. Something like "docker run --rm bhyde/crate-of-ccl ccl -e '(print :hi)'" doesn't work out as I'd expect. It appears that argument pass-through, argument quoting, and the plumbing of standard IO et al. are full of personality which I haven't comprehended. Or maybe there are bugs.

That's frustrating: it undermines my desire to do sterile testing.

 

Lispjobs: Lisp Developer, RavenPack, Marbella, Spain

· 29 days ago

Location: Marbella, Spain
No. of positions available: 1

Position immediately available for an experienced software professional. You will work with an international team of developers skilled in Common Lisp, PL/SQL, Java and Python.

The ideal candidate will have excellent skills as a software engineer, with a strong computer science background and professional experience delivering quality software. You must be fluent in modern software development practices, including multi-threading, distributed systems, and cloud computing. If you are not already an expert in Common Lisp, you aspire to become one. Innovative problem solving and engaging human interaction drive you. With a high degree of independence, you will design and implement maintainable software in Common Lisp based on loose and changing specifications.

Familiarity with SQL including query optimization and PL/SQL is very much a plus. Comfort in a growing, fast-paced environment with a premium on problem solving is required. Must be adaptable and willing to learn new technologies. You work successfully in a small team environment, with a willingness to teach and to learn. Lead reviews of your code and participate in the reviews of others.

The ability to communicate effectively in English, both in writing and verbally is a must. Knowledge of Spanish is not a business requirement. European Union legal working status is strongly preferred.

Email CV and a Cover Letter to employment@ravenpack.com with subject “Lisp Developer”.


Colin Lupton: Announcing BIT-SMASHER

· 29 days ago

BIT-SMASHER is a lean, straightforward and admittedly naive Common Lisp library for handling the oft-overlooked bit-vector type, bit-vector arithmetic, and type conversion between bit-vectors, octet-vectors, hexadecimal strings, and non-negative integers, extending the related functionality in the Common Lisp standard. While of little use to the average Lisp project, it was designed for those cases where working with bit-vectors is either necessary, or would be ideal if it were not for the lack of the functions this library provides.

You can get BIT-SMASHER now at: https://github.com/thephoeron/bit-smasher — or wait for it to come out in the next Quicklisp release.

The most obvious use-case for BIT-SMASHER is when you need to convert universally between bit-vectors, octet-vectors, hexadecimal strings, and non-negative integers. The library provides manual conversion functions for all twelve cases, plus type-checking, type-casting style convenience functions:

For example:

; universal type-casting style functions
(bits<- "F0") => #*11110000
(bits<- 240) => #*11110000
(int<- #*11110000) => 240

; manual conversions without type-checking
(hex->bits "F0") => #*11110000
(int->bits 10) => #*00001010
(octets->bits (int->octets 244)) => #*11110100

The lack of bit-vector arithmetic in the Common Lisp standard was my main motivation to write this library. However, since the bit-vector type here represents only values in the set of non-negative integers, the same limitation is forced on the return results of the bit-vector arithmetic functions. That is to say, when an arithmetic function would normally return a negative integer, float, or fraction as one of its values, it returns the absolute ceiling value (as a bit-vector) instead. For example:

(bit+ #*0100 #*0010) => #*00000110 ; +6, as expected

(bit- #*0000 #*0010) => #*00000010 ; returns +2, not -2

(bit/ #*1010 #*0010 #*0010) => #*00000010 #*00000001 ; returns +2, remainder +1, not 0.5

The library also contains measurement and predicate utility functions, hopefully serving all your bit-vector needs excluded from the standard.

If you encounter any bugs, or have a feature you would like to see, let me know in the comments or create an issue on GitHub.


Hans Hübner: Berlin Lispers Meetup: Tuesday June 24th, 2014, 8.00pm

· 29 days ago
You are kindly invited to the next "Berlin Lispers Meetup", an informal gathering
for anyone interested in Lisp, beer or coffee:

Berlin Lispers Meetup
Tuesday, June 24th, 2014
8 pm onwards

St Oberholz, Rosenthaler Straße 72, 10119 Berlin
U-Bahn Rosenthaler Platz

We will try to occupy a large table on the first floor, but in case you don't see us,
please contact Christian: 0157 87 05 16 14.

Please join for another evening of parentheses!

Quicklisp news: June 2014 dist update now available

· 37 days ago
New projects:
Updated projects: bknr-web, cffi, cl-6502, cl-ana, cl-async, cl-async-future, cl-autowrap, cl-cairo2, cl-charms, cl-closure-template, cl-glfw3, cl-html5-parser, cl-launch, cl-mustache, cl-permutation, cl-protobufs, cl-reexport, cl-rethinkdb, cl-sdl2, cl-spark, cl-test-more, cl-xul, clickr, clos-fixtures, closer-mop, clss, coleslaw, colleen, com.informatimago, common-lisp-actors, crane, csv-parser, data-table, drakma-async, function-cache, gbbopen, gendl, graph, helambdap, ieee-floats, iolib, lisp-unit2, lquery, more-conditions, plump, postmodern, qmynd, repl-utilities, slime, stumpwm, trivial-download, verbose, vgplot, vom, weblocks, weblocks-stores, weblocks-utils, yason.

To get this update, use (ql:update-dist "quicklisp"). Enjoy!

Nick Levine: Moving to Spain

· 43 days ago

I've taken up a position with RavenPack, and so my 14 years at Ravenbrook come to an end. I'm sorry to be leaving, but I hadn't been able to generate anywhere near enough income recently and something had to change. I've never worked anywhere for so long, and although the dips in available work sometimes made life very stressy, I've also been very happy there. If you ever need someone to help you increase the value of the software industry to society, this is the place to look.

A job! Lisp! The lean times are over! This is all very exciting.

RavenPack are based in Marbella. We'd been wondering for some time whether we would ever get around to leaving Cambridge, and it turns out now that the answer is "yes". But it's all going to be very strange. My name badge might not be changing by very much, but there's no mistaking either consulting for full-time employment, or the Fens for the Costa del Sol.

We'll be staying somewhere temporary over the summer, and then when the tourists have gone home we'll find somewhere to live which has space for visitors. Enough said.

PS: leaving do probably Saturday July 12th

Zach Beane: CL news

· 43 days ago
Crane is a new Common Lisp ORM by Fernando Borretti. "Crane doesn't drink the ORM Kool Aid: You won't spend a single minute struggling with an interface that claims to be 'simple' yet forces you into a limited vision of how databases should work."

Mark Fedurin surveys ASDF system version strings in the wild. The cartoon at the end is great.

From Rainer Joswig: Why Lisp is Different (2007) and 30 years of Common Lisp (2014).

a-cl-logger is a Common Lisp logging library with node-logstash integration, support for swank presentations, context-sensitive logging, and more.

Lispjobs: Doctoral Studentship in Computational Musicology, London

· 50 days ago

(Note: this is not explicitly a Lisp job, but the student is free to use Lisp, and may even be encouraged to do so.)

AHRC Doctoral Studentship in Computational Musicology

http://www.transforming-musicology.org/news/2014-06-03_ahrc-doctoral-studentship-in-computational-musicology/

Award: fees and tax-free stipend at £15,726 p.a. (inc. of London weighting)
Application deadline: Tuesday 1 July 2014
Expected start date: October 2014

We invite applications for a Doctoral Studentship, funded by the Arts
and Humanities Research Council, in Computational Musicology, located
at Queen Mary University of London, under the supervision of Professor
Geraint Wiggins.

The studentship is part of the “Transforming Musicology” project,
including Goldsmiths, University of London, Queen Mary University of
London, the University of Oxford and Lancaster University. This
project, led by Prof Tim Crawford in the Computing Department of
Goldsmiths, University of London, brings together 15 researchers to
effect a Digital Transformation of the discipline of musicology.

The aim of the open studentship is to research and develop new methods
for the representation of, and inference about, music-theoretic and
perceptual aspects of music, based on, but not restricted to, past
work by Prof. Wiggins and colleagues. This will be deployed using
Semantic Web technology.

The studentship will be located in a very rich research environment,
first within the Transforming Musicology project, but also within the
Computational Creativity Lab at QMUL, and the successful candidate
will be encouraged to interact with other researchers in both of these
contexts.

This studentship, funded by an AHRC Doctoral Training Account, is for
fees plus a tax-free stipend starting at £15,726 per annum. Further
details of the AHRC scheme including terms and conditions can be found
here:

http://www.ahrc.ac.uk/Funding-Opportunities/Postgraduate-funding/Pages/Current-award-holders.aspx

Applicants must satisfy the AHRC’s UK residence requirements:

http://www.ahrc.ac.uk/Funding-Opportunities/Documents/Guide%20to%20Student%20Eligibility.pdf

Candidates must have a first class or 2.1 undergraduate degree or
equivalent, either with a significant component of music theory, in
which case evidence of exceptionally well-developed practical
expertise in computing, including programming, will be required, or in
computer science or equivalent, in which case evidence of formal
training in music theory (e.g. to grade V or equivalent) will be
required. Candidates with relevant postgraduate qualifications will be
particularly welcome, especially if they are qualified in both music
and computer science. Other relevant qualifications and/or areas of
expertise include (but are not limited to): artificial intelligence,
informatics, formal logic and automated reasoning, musicology,
knowledge representation, deductive database theory. The successful
applicant may be required to undertake relevant undergraduate and
postgraduate interdisciplinary courses as part of the programme of
study.

Informal enquiries can be made by email to Prof. Geraint Wiggins
(geraint.wiggins@qmul.ac.uk). Please note that Prof. Wiggins is unable
to advise, prior to interview, whether an applicant is likely to be
selected. To apply please follow the on-line process (see
http://www.qmul.ac.uk/postgraduate/howtoapply/) by selecting
“Electronic Engineering” in the “A-Z list of research opportunities”
and following the instructions on the right hand side of the web page.

Please note that instead of the ‘Research Proposal’ we request a
‘Statement of Research Interests’. Your Statement of Research Interest
should answer two questions: (i) Why are you interested in the
proposed area? (ii) What is your experience in the proposed area? Your
statement should be brief: no more than 500 words or one side of A4
paper. In addition we would also like you to send a sample of your
written work, such as your final year dissertation. More details can
be found at: http://www.eecs.qmul.ac.uk/phd/how-to-apply

Applications must be received by Tuesday 1 July 2014. Interviews are
expected to take place during July 2014.


Lispjobs: UIx expert, Clojure architect, remote/LA for movie startup

· 54 days ago

UIx expert, Clojure architect

You can be anywhere. We are located in Virginia Beach, but the company is likely to settle in Los Angeles. You would be coming in extremely early in the game.

Our startup will build a sort of Wikipedia for movies, supporting medium-form essays on movies and movie ideas. Differences from W:

– You annotate directly on the film using a unique space-time scrubber. Annotations include purely cinematic attributes.

– External content is interpreted and shown in-line, using a novel 'outliner.'

– Everything is reactive functions, specified by users via drawing typed links.

– We support 'situation theory' (per Barwise and Devlin) using a categoric second sort. (Every user sees something different.)

– Something like OWL will be used for dynamic ontology graph creation, supporting ontological federation.

– We love movies, and use a novel sense of narrative dynamics in reasoning about film and in constructing essays.

– We'll use Clojure for all the forward components and build a categoric, visual DSL. Some early work (now abandoned) was in Erlang.

– A second use (in a couple years) is multi-system biomedical research.

– We want to do something fun and significant. Maybe we will make someone rich. Maybe us, maybe not.

The project is inspired by work done for the intelligence community by folks that are now disgusted, and determined to repurpose what was invented.

Our current state is that many components are mocked up. Some LA heavyweights are involved. Some patents are granted, others pending. Funding is being sought. We think it better to work on prototype code than fake demos at this point, starting with the scrubber then outliner.

You would help design the overall architecture and code prototype bits. The position is expected to lead to being team leader for one of the components, and one responsibility will be to help build the team.

A downside is that we are inventing as we build. Some things will use simple good engineering principles; others will require new science and creative vision. Many of the UI conventions are experimental and will have to evolve as we go. You probably won't be able to use an off-the-shelf framework/stack. It will be hard, hard, hard.

It is possible that a partner may emerge that wants native apps so we'll have to pivot to include their stinky frameworks. We don't yet know what the balance will be between contributing open source and keeping things proprietary, but we will do both.

Contact Ted Goranson at tedg then the at sign then alum.MIT then dot then edu.

We will initially send two academic papers: one for the biomedical community and the other on the scrubber for a forthcoming ACM UI conference.


Pixel: FTP: Apparently, It's Not Dead Yet

· 55 days ago

(Why yes, I am still alive. Just very bad at writing lately. :'()

Much to my surprise, the FTP protocol has managed to not die yet, and I recently received a patch from Rafael Jesús Alcántara Pérez to get cl-ftp running on ABCL.

So thanks to Rafael's patch, I've finally taken the time to convert cl-ftp from darcs to git, and tossed it up on github like all the cool kids are doing these days. Supposedly it even now runs on ABCL.

Enjoy!

Don't you just love it when people move to a new blog? I'd 301 redirect you if I could, but since I can't you'll have to click through to read comments or leave your own.

Quicklisp news: May 2014 Quicklisp dist update now available

· 58 days ago
New projects:
Updated projects: 3bmd, alexandria, asdf-linguist, basic-binary-ipc, cells, city-hash, cl-algebraic-data-type, cl-ana, cl-async, cl-autowrap, cl-bert, cl-bibtex, cl-charms, cl-cron, cl-date-time-parser, cl-emb, cl-freetype2, cl-html5-parser, cl-launch, cl-mustache, cl-permutation, cl-plplot, cl-protobufs, cl-rethinkdb, cl-sdl2, cl-tcod, cl-tuples, clack, cleric, clinch, closer-mop, clsql-helper, clsql-orm, codata-recommended-values, coleslaw, collectors, colleen, com.google.base, com.informatimago, commonqt, crane, dbus, drakma, dynamic-collect, fare-quasiquote, fast-io, flexi-streams, gbbopen, gendl, helambdap, http-parse, hu.dwim.perec, hunchentoot, inferior-shell, inner-conditional, ip-interfaces, lfarm, lisp-gflags, lisp-matrix, local-time, lol-re, lparallel, lquery, more-conditions, named-readtables, nibbles, nst, optima, pgloader, plump, policy-cond, protobuf, quux-time, random, romreader, rutils, sdl2kit, sip-hash, slime, snappy, spinneret, static-vectors, stumpwm, swank-client, swank-crew, swap-bytes, sxql, symbol-munger, talcl, trivial-gray-streams, trivial-ldap, uiop, verbose, weblocks-stores, wookie.

To get this update, use (ql:update-dist "quicklisp").

Last month, there were some reports of people getting a badly-sized-local-archive error during update. I haven't seen that myself on this month's update, but it is safe to choose the delete-and-retry restart when that happens. It may get you past the error.


Nick Levine: Wukix? I think this one's for you.

· 58 days ago

From: "Daniel" <miller.app.packs.services@gmail.com>
Subject: Planet Lisp
To: ndl@ravenbrook.com
Date: Mon, 26 May 2014 15:21:37 +0200

Hello Wukix, Inc.,

I would very much like to work with you on your app Planet Lisp.

Lots of developers have worked with me before on a wide range of
projects. Better rankings can be achieved for almost any app.

Hans Hübner: Berlin Lispers Meetup: Tuesday, May 24th, 2014, 8:00pm

· 60 days ago
You are kindly invited to the next "Berlin Lispers Meetup", an informal gathering for anyone interested in Lisp, beer or coffee:
Berlin Lispers Meetup
Tuesday, May 24th, 2014
8 pm onwards
St Oberholz, Rosenthaler Straße 72, 10119 Berlin

U-Bahn Rosenthaler Platz
We will try to occupy a large table on the first floor, but in case you don't see us, please contact Christian: 0157 87 05 16 14.
Please join us for another evening of parentheses!

Zach Beane: Common Lisp bits

· 61 days ago
The Internet Archive has a giant tarball of old Lucid stuff, a 3.6GB download. Here's a file index, 1MB. The dump includes:
It's unclear who published this dump on archive.org and what they expect people to do with it. As Rainer Joswig points out, "this dump does not mean there is a usable license" for anything it contains.

Gábor Melis's mgl-pax is an exploratory programming environment and documentation generator. Gábor presented PAX at ELS.

Challenging Clojure in Common Lisp, by Chris Kohlhepp. Uses Kenny Tilton's Cells!

You can now create OS X apps in mocl. There's a 15-minute screencast demo of a Planet Lisp app.

SBCL now has an MPFR contrib that provides "arbitrary precision arithmetic on floating-point numbers". It will be available in the next release after 1.1.18.

SBCL now has an ARM port. It is incomplete but under active development.

Mariano Montone is working on CLDM, a distributed dependency manager.

Videos for the 7th European Lisp Symposium are now available. There are two sets: Monday's videos and Tuesday's videos.

Jeff Massung recently published a ton of useful LispWorks-specific libraries. Apache licensed.

Jorge Tavares: ELS2014: Proceedings, Videos and Slides

· 63 days ago

The organizers of this year’s European Lisp Symposium made available all the videos of the talks with the respective slides, as well as the PDF containing all the accepted papers.

This is really great, and everyone involved in making this happen should be congratulated! For everyone who didn't attend (like me), it's a good opportunity to see what happened at ELS. Even though the talks, discussions and networking between participants are the most valuable part of attending a conference, this is very nice for everyone who was not able to be there.

Once again, kudos to the organizers!


Filed under: Programming Tagged: Common Lisp, ELS, Lisp, Meetings

Christophe Rhodes: languages for teaching programming

· 69 days ago

There seems to be something about rainy weekends in May that stimulates academics in Computing departments to have e-mail discussions about programming languages and teaching. The key ingredients probably include houseboundness, the lull between the end of formal teaching and the start of exams, the beginning of contemplation of next year's workload, and enthusiasm; of course, the academic administrative timescale is such that any changes that we contemplate now, in May 2014, can only be put in place for the intake in September 2015. If you've ever wondered why University programmes in fashionable subjects seem to lag about two years behind the fashion, that's why (e.g. the high growth of Masters programmes in "Financial Engineering" or similar around 2007; I suspect that demand for paying surprisingly high tuition fees for a degree in Synthetic Derivative Construction weakened shortly after those programmes came on stream, but I don't actually have the data to be certain).

Meanwhile, I was at the European Lisp Symposium in Paris last week, where there was a presentation of a very apposite nature: the Computing Department at Middlesex University has implemented an integrated first year of undergraduate teaching, covering a broad range of the computing curriculum (possibly not as broad as at Goldsmiths, though) through an Arduino and Raspberry Pi robot with a Racket-based programmer interface. Students' progress is evaluated not through formal tests, courseworks or exams, but through around 100 binary judgments in the natural context of "student observable behaviours" at three levels ('threshold', which students must exhibit to progress to the second year; 'typical', and 'excellent').

This approach has a number of advantages, I think, over a more traditional division of the year into four thirty-credit modules (e.g. Maths, Java, Systems, and Profession): for one, it pretty much guarantees a coherent approach to the year, where in the divided modules case it is surprisingly easy for one module to be updated in syllabus or course content without adjustments to the others, leaving for example some programming material insufficiently supported by the maths (and some maths taught without motivation). The assessment method is in principle transparent to the students, who know what they have to do to progress (and to get better marks); I'm not convinced that this is always a good thing, but for an introductory and core course I think the benefits substantially outweigh the disadvantages. The use of Racket as the teaching language has an equalising effect - it's unlikely that students will have prior experience with it, so everyone starts off at the same point at least with respect to the language - and the use of a robot provides visceral feedback and a sense of achievement when it is made to do something in a way that text and even pixels on a screen might not. (This feedback and tangible sense of achievement is perhaps why the third-year option of Physical Computing at Goldsmiths is so popular: often oversubscribed by a huge margin).

With these thoughts bubbling around in my head, then, when the annual discussion kicked off at the weekend I decided to try to articulate my thoughts in a less ephemeral way than in the middle of a hydra-like discussion: so I wrote a wiki page, and circulated that. One of the points of having a personal wiki is that the content on it can evolve, but before I eradicate the evidence of what was there before, and since it got at least one response (beyond "why don't you allow comments on your wiki?") it's worth trying to continue the dialogue.

Firstly, Chris Cannam pulls me up on not including an Understand goal, or one like it: teaching students to understand and act on their understanding of computing artifacts, hardware and software. I could make the argument that this lives at the intersection of my Think and Experiment goals, but I think that would be retrospective justification and that there is a distinct aim there. I'm not sure why I left it out; possibly, I am slightly hamstrung in this discussion about pedagogy by a total absence of formal computing education; one course in fundamentals of computing as a 17-year-old, and one short course on Fortran and numerical methods as an undergraduate, and that's it. It's in some ways ironic that I left out Understand, given that in my use of computers as a hobby it's largely what I do: Lisp software maintenance is often a cross between debugger-oriented programming and software archaeology. But maybe that irony is not as strong as it might seem; I learnt software and computing as a craft, apprenticed to a master; I learnt the shibboleths ("OAOO! OAOO! OAOO!") and read the training manuals, but I learnt by doing, and I'm sure I'm far from alone, even in academic Computing let alone among programmers or computing professionals more generally.

Maybe the Middlesex approach gets closer, in the University setting, to the apprenticeship (or pre-apprenticeship) period; certainly, there are clear echoes in their approach of the switch by MIT from the SICP- and Scheme-based 6.001 to the Robotics- and Python-based introduction to programming - and Sussman's argument from the time of the switch points to a qualitative difference in the role of programmers, which happens to dovetail with current research on active learning. In my conversation with the lecturers involved after their presentation, they said that the students take a more usual set of languages in their second-years (Java, C++); obviously, since this is the first year of their approach, they don't yet know how the transition will go.

And then there was the slightly cynical Job vs Career distinction that I drew, being the difference between a graduate-level job six months after graduation as distinct from a fulfilling career. One can of course lead to the other, but it's by no means guaranteed, and I would guess that if asked most people would idealistically say we as teachers in Universities should be attempting to optimize for the latter. Unfortunately, we are measured in our performance in the former; one of the "Key Information Sets" collected by higher-education agencies and presented to prospective undergraduates is the set of student 'destinations'. Aiming to optimize this statistic is somewhat akin to schools optimizing their GCSE results with respect to the proportion of pupils gaining at least 5 passes: ideally, the measurement should be a reflection of the pedagogical practice, but the foreknowledge of the measurement has the potential to distort the allocation of effort and resources. In the case of the school statistic, there's evidence that extra effort is concentrated on borderline pupils, at the expense of both the less able and potential high-fliers; the distortion isn't so stark in the case of programming languages, because the students have significant agency between teaching and measurement, but there is certainly pressure to ensure that we teach the most common language used in assessment centres.

In Chris' case, C++ might have had a positive effect on both; the programming language snob in me, though, wants to believe that there are hordes of dissatisfied programmers out there, having got a job using their competence in some "industry-standard" language, desperately wanting to know about a better way of doing things. I might be overprojecting the relationship of a professional programmer with their tools, of course: for many a career in programming and development is just that, rather than a love-hate relationship with their compiler and linker. (Also, there's the danger of showing the students that there is a better way, but that the market doesn't let them use it...)

Is it possible to conclude anything from all of this? Well, as I said in my initial thoughts, this is an underspecified problem; I think that a sensible decision can only be taken in practice once the priorities for teaching programming at all have been established, in combination with the resources available for delivering teaching. I'm also not enough of a polyglot in practice to offer a mature judgment on many fashionable languages; I know enough of plenty of languages to modify and maintain code, but am comfortable writing from scratch in far fewer.

So with all those caveats, an ideal programming curriculum (reflecting my personal preferences and priorities) might include in its first two years: something in the Lisp family, for Think and Experiment; ARM Assembler, for Think and Understand; C++, for Job and Career (and all three together for Society). Study should probably be handled in individual third-year electives, and I would probably also include a course hybrid between programming, database and system administration to cover a Web programming stack for more Job purposes. Flame on (but not here, because system administration is hard).

Ben Hyde: MOCL demo

· 70 days ago

When I first heard about MOCL a few years ago I was pretty sure it wouldn't survive, but it looks like I was wrong. See this nice video that the folks at wukix.com have recently posted. It's an impressive 15-minute video demo of using MOCL to author, in Common Lisp, an application targeting iOS.

Love that remote REPL for debugging your application!

MOCL has a fair amount of extended syntax so it can play nice with Objective C.

I’m surprised they don’t have a free demo version.  But then, I’m a cheapskate!

So, go watch the video :) .

Dimitri Fontaine: Why is pgloader so much faster?

· 70 days ago

pgloader loads data into PostgreSQL. The new version is stable enough nowadays that it's soon to be released, the last piece of the 3.1.0 puzzle being full debian packaging of the tool.

The pgloader logo is a loader truck, just because.

As you might have noticed if you've read my blog before, I decided that pgloader needed a full rewrite in order for it to be able to enter the current decade as a relevant tool. pgloader used to be written in the python programming language, which is used by lots of people and generally quite appreciated by its users.

Why change?

Still, python is not without problems, the main ones I had to deal with being poor performance and a lack of threading capabilities. Also, the pgloader setup design was pretty hard to maintain, and adding compatibility with loader products from competitors was harder than it should have been.

As I said in my pgloader lightning talk at the 7th European Lisp Symposium last week, in searching for a modern programming language the best candidate I found was actually Common Lisp.

After some basic performance checks, as seen in my Common Lisp Sudoku Solver project where I got code up to ten times faster than the python version, it felt like the amazing set of features of the language could be put to good use here.

So, what about performance after the rewrite?

The main reason why I'm now writing this blog post is that I've been receiving emails from pgloader users with strange feelings about the speedup. Let's look at the numbers one user gave me, as a data point:

 select rows, v2, v3,
        round((  extract(epoch from v2)
               / extract(epoch from v3))::numeric, 2) as speedup
   from timing;
        
  rows   |        v2         |       v3        | speedup 
---------+-------------------+-----------------+---------
 4768765 | @ 37 mins 10.878  | @ 1 min 26.917  |   25.67
 3115880 | @ 36 mins 5.881   | @ 1 min 10.994  |   30.51
 3865750 | @ 33 mins 40.233  | @ 1 min 15.33   |   26.82
 3994483 | @ 29 mins 30.028  | @ 1 min 18.484  |   22.55
(4 rows)
The raw numbers have been loaded into a PostgreSQL table

So what we see in this quite typical CSV loading test case is a best case of 30 times faster import. Which raises some questions, of course.
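As a quick sanity check, the speedup column from the table above can be recomputed outside the database. A small Python sketch (timings transcribed by hand from the table; this is just arithmetic, not part of pgloader):

```python
# Each tuple: (rows, v2 in seconds, v3 in seconds), transcribed from the table.
timings = [
    (4768765, 37 * 60 + 10.878, 1 * 60 + 26.917),
    (3115880, 36 * 60 + 5.881,  1 * 60 + 10.994),
    (3865750, 33 * 60 + 40.233, 1 * 60 + 15.33),
    (3994483, 29 * 60 + 30.028, 1 * 60 + 18.484),
]

# Same computation as the SQL query: old duration divided by new duration.
speedups = [round(v2 / v3, 2) for _, v2, v3 in timings]
print(speedups)  # prints [25.67, 30.51, 26.82, 22.55]
```

The results match the `speedup` column produced by the SQL query, so the reported numbers are internally consistent.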

Wait, you're still using COPY right?

The PostgreSQL database system provides a really neat COPY command, which in turn exposes the COPY streaming protocol that pgloader uses.

So yes, pgloader is still using COPY. This time the protocol implementation is to be found in the Common Lisp Postmodern driver, which is really great. Before that, back when pgloader was python code, it was using the very good psycopg driver, which also exposes the COPY protocol.

So, what did happen here?

Well it happens that pgloader is now built using Common Lisp technologies, and those are really great, powerful and fast!

Not only is Common Lisp code compiled to machine code when using most Common Lisp Implementations such as SBCL or Clozure Common Lisp; it's also possible to actually benefit from parallel computing and threads in Common Lisp.

That's not how I did it!

In the pgloader case I've been using the lparallel utilities, in particular their queuing facility, to implement asynchronous I/O: one thread reads the source data and preprocesses it, filling a buffer one batch at a time, and each batch is then pushed down to the writer thread, which handles the COPY protocol and operations.

So my current analysis is that the new thread-based architecture, combined with a very powerful compiler for a high-level language, is allowing pgloader to reach a whole new level of data loading performance.
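The reader/writer architecture described above can be sketched with a bounded queue between two threads. This is an illustrative Python sketch, not pgloader's actual code (pgloader uses lparallel in Common Lisp); the names `reader`, `writer`, `BATCH_SIZE` and the `upper()` stand-in for preprocessing are all invented for the example, and the writer just collects batches where pgloader would drive the COPY protocol:

```python
import threading
import queue

BATCH_SIZE = 2
DONE = object()  # sentinel marking the end of the input stream

def reader(rows, q):
    """Reader thread: preprocess each row and push whole batches onto the queue."""
    batch = []
    for row in rows:
        batch.append(row.upper())       # stand-in for real preprocessing
        if len(batch) >= BATCH_SIZE:
            q.put(batch)
            batch = []
    if batch:                           # flush the final partial batch
        q.put(batch)
    q.put(DONE)

def writer(q, sink):
    """Writer thread: pgloader would speak COPY here; this sketch just collects."""
    while True:
        batch = q.get()
        if batch is DONE:
            break
        sink.extend(batch)

q = queue.Queue(maxsize=4)  # a bounded queue throttles the reader if the writer lags
sink = []
rows = ["a", "b", "c", "d", "e"]
t_reader = threading.Thread(target=reader, args=(rows, q))
t_writer = threading.Thread(target=writer, args=(q, sink))
t_reader.start(); t_writer.start()
t_reader.join(); t_writer.join()
print(sink)  # prints ['A', 'B', 'C', 'D', 'E']
```

The point of batching is that the writer amortizes per-row overhead across a whole batch, while the bounded queue keeps reading and writing overlapped without letting the reader run arbitrarily far ahead.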

Conclusion

Not only is pgloader much faster now, it's also full of new capabilities and supports several data sources such as dBase files, SQLite database files or even live MySQL connections.

Rather than a configuration file, the new pgloader is driven by a command language that has been designed to look as much like SQL as possible in the pgloader context, to make it easy for its users. Implementation-wise, it should now be straightforward to implement compatibility with the data loading tools that some competing PostgreSQL products offer.

Also, the new code base and feature set seems to attract way more users than the previous implementation ever did, despite using a less popular programming language.

You can already download pgloader binary packages for debian-based distributions and CentOS-based ones too, and you will even find a Mac OS X package file (.pkg) that will make /usr/local/bin/pgloader available for you on the command line. If you need a Windows binary, drop me an email.

The first stable release of the new pgloader utility is scheduled to be named 3.1.0 and to happen quite soon. We are hard at work on packaging the dependencies for debian, and you can have a look at the Quicklisp to debian project if you want to help us get there!

Christophe Rhodes: european lisp symposium 2014

· 70 days ago

My train ride to Paris was uneventful, and I arrived at my accommodation only several hours after bedtime. I did manage to write my talk, and it was good to discover the number of obsessive planet.lisp readers when I showed up to register - "good to see you again. How's the talk?" was the median greeting. For the record, I had not only written my talk on the train but also had a chance to relax a little. Go trains.

The conference was fun; gatherings of like-minded people usually are, but of course it takes substantial effort from determined people to create the conditions for that to happen. Kent's work as programme chair, both before and during the conference, came with a feeling of apparent serenity while never for a moment looking out of control, and the groundwork that the local organizing team (Didier Verna, Gérard Assayag and Sylvie Benoit) had done meant that even the problem of the registrations exceeding the maximum capacity of the room - nice problem to have! - could be dealt with.

I liked the variety in the keynotes. It was very interesting to hear Richard Gabriel's talk on his research in mood in Natural Language processing and generation; in many ways, those research directions are similar to those in the Transforming Musicology world. The distinction he drew between programming for a purpose and programming for exploration was very clear, too: he made it clear that he considered them two completely different things, and with my hat on as a researcher I have to agree: usually when I'm programming for research I don't know what I'm doing, and the domain of investigation is so obviously more unknown than the program structure that I had better have a malleable environment, so that I can minimize the cost of going down the wrong path. Pascal Costanza gave a clear and detailed view of the problems in parallel programming, drawing a distinction between parallelism and concurrency, and happened to use examples from several of my previous lives (Smoothed-Particle Hydrodynamics, Sequence Alignment) to illustrate his points. Gábor Melis talked about his own learning and practice in the context of machine learning, with a particular focus on his enviable competition record; his call to aim for the right-hand side of the curves (representing clear understanding and maximum use-case coverage) was accompanied by announcements of two libraries, mgl-pax and mgl-mat.

My own presentation was, I suppose, competent enough (slides). Afterwards, I had a good talk with my previous co-author in the generalizes research line, Jim Newton, about the details, and Gábor told me he'd like to try it "on by default". But the perils of trying to get across a highly-technical topic struck, and I got a number of comments of the form that the talk had been enjoyable but I'd "lost them at compute-applicable-methods-using-classes". I suppose I could still win if the talk was enjoyable enough for them to read and work through the paper; maybe next time I might risk the demo effect rather more than I did and actually do some programming live on stage, to help ground the ideas in people's minds. I did get a specific request: to write a blog post about eval-when in the context of metaobject programming, and hopefully I'll find time for that in the next few train journeys...

Meanwhile, highlights (for me) among the contributed papers: Nick Levine driving Lispworks' CAPI graphical user interface library from SBCL using his Common Lisp AUdience Expansion toolkit (preaching to the choir, though: his real target is Python developers); Faré Rideau's description of a decade-long exploration of defsystem design space; François-Xavier Bois' demonstration of web-mode.el, an Emacs mode capable of handling CSS, Javascript and PHP simultaneously; and two talks motivated by pedagogy: Pedro Ramos' discussion of the design tradeoffs involved in an implementation of Python in Racket, and the team presentation of the approach taken for a new robotics- and Scheme-oriented undergraduate first-year at Middlesex University, on which more in a subsequent post.

Lightning talks of particular note to me: Martin Simmons talking about Lispworks for mobile; Didier Verna and Marco Antoniotti talking about their respective documentation generation systems (my response); Mikhail Raskin's argument about the opportunity to push Julia in a lispy direction; and probably others which will come back to mind later.

I was also pleased to be able to contribute to the last full session of the symposium, a workshop/panel about Lisp in the area of music applications: an area which is serendipitously close to the day job. I worked on getting a version of audioDB, our feature-vector search engine for similarity matching, built and working on my laptop, and finding a sufficiently interesting search among my Gombert/Josquin collection to demo - and I also had the chance to talk about Raymond Whorley's work on using multiple viewpoint systems for hymn harmonizations, and what that teaches us about how people do it (slides, for what they're worth). Other systems of interest in the session included OpenMusic (of course, given where we were), PWGL, OMax, modalys, and overtone; there was an interesting conversation about whether the choice of implementation language was restricting the userbase, particularly for tools such as OpenMusic where the normal interface is a graphical one but a large fraction of users end up wanting to customize behaviour or implement their own custom patches.

And then it was all over bar the dinner! On a boat, sadly immobile, but with good conversation and good company. The walk home in company was fun, though in retrospect it was probably a mistake to stop off at a bar for a nightcap... the train journey back to the UK the following morning was definitely less productive than it could have been; closing eyes and letting the world go past was much more attractive.

But now I'm on another train, going off to record the complete works of Bernadino de Ribera. Productivity yay.

François-René RideauThe Great ASDF Bug Hunt

· 71 days ago

With the release of ASDF 3.1.2 this May 2014, I am now officially retiring not just from ASDF maintenance (Robert Goldman has been maintainer since ASDF 3.0.2 in July 2013), but also from active ASDF development. (NB: ASDF is the de facto standard Common Lisp build system, which I took over in November 2009.) I'm still willing to give information on where the code is coming from and advice on where it might go. I'm also still willing to fix any glaring bug that I may have introduced, especially in UIOP (indeed, I just committed a few simple fixes, for Genera of all platforms!). But I won't be writing new features anymore. (However, you will hopefully soon see a bunch of commits with my name on them, of code I have already written to address the issue of syntax modularity; the code is complete and committed in a branch, but not yet merged into the master branch, pending tests and approval by the new maintainer.)

Before I left, though, I wanted to leave the code base in order, so I made sure there are no open bugs beside wishlist items, I dumped all my ideas about what more could be done in the TODO file, and I did a video walkthrough of the more subtle parts of the code. I also wrote a 26-page retrospective article on my involvement with ASDF, a reduced version of which I submitted to ELS 2014. There, I gave a talk on Why Lisp is Now an Acceptable Scripting Language.

The talk I would have liked to give instead (and probably should have, since I felt like preaching to the converted) was about the great ASDF bug hunt, which corresponds to the last appendix of my paper (not in the reduced version), a traverse across the build. It would have been a classic monster-hunt story.

The final illumination is that inasmuch as software is "invented", it isn't created ex nihilo so much as discovered: Daniel Barlow, who wrote the initial version of ASDF, obviously didn't grok what he was doing, and can't be said to have created the ASDF algorithm as it now stands, since what he wrote had such deep conceptual flaws; instead, he was experimenting wildly, and his many successes overshadow and more than redeem his many failures. I, who wrote the correct algorithm, which required a complete deconstruction of what was done and reconstruction of what should have been done instead, cannot be said to have created it either, since in a strong sense I "only" debugged Daniel's implicit specification. And so, the code evolved, and as a result, an interesting algorithm was discovered. But no one created it.

An opposite take on the same insight, if you know Non-Standard Analysis, is that Daniel did invent the algorithm indeed, but specified it with a non-standard formula: his formula is simple (a few hundred lines of code), and captures the desired behaviour in simple enough cases with standard parameters (using SBCL on Unix, without non-trivial dependency propagation during an incremental build) but fails in non-standard cases (using other implementations, or dealing with timestamp propagation). My formula specifies the desired behaviour in all cases with all the details correct, and is much more elaborate (a few thousand lines of code), but is ultimately only a Standardization of Daniel's formula — a formal elaboration without any of Daniel's non-standard shortcuts, but one that doesn't contain information not already present in Daniel's version, only making it explicit rather than implicit.

The two interpretations together suggest the following strategy for future software development: There is a lot of untapped potential in doing more, more daring, experimentation, like Daniel Barlow did, to more quickly and more cheaply discover new interesting designs; and conceivably, less constrained non-standard representations could allow for more creativity. But this potential will remain unrealized unless Standardization is automated, i.e. the automatic specification of a "standard" formal program from a "non-standard" informal one; a more formal standard representation is necessary for robustly running the program. This process could be viewed as automated debugging: as the replacement of informal variables by sets of properly quantified formal variables; as an orthogonal projection onto the hyperplane of typed programs; as search of a solution to a higher-order constraint problem; as program induction or machine learning; etc. In other words, as good old-fashioned or newfangled AI. This process itself is probably hard to formalize; but maybe it can be bootstrapped by starting from a non-standard informal specification and formalizing that.

Paul KhuongInteger Division, Step 0: No Remainder

· 73 days ago

Exciting times in SBCL-land! Not only will Google Summer of Code support two students to work on SBCL (one will improve our support for correct Unicode manipulation, and the other our strength reduction for integer division), but we also sprouted a new Linux/ARM port! As Christophe points out, this is a nice coincidence: (most?) ARM chips lack hardware integer division units. I find the integer division project even more interesting because I believe we can cover all three standard division operators (floor, truncate, and ceiling) with a unified code generator.

I first looked into integer division by constants four years ago, and I was immediately struck by the ad hoc treatment of the transformation: I have yet to find a paper that summarises and relates the algorithms currently in use. Worse, the pseudocode tends to assume fixed-width integers, which drowns the interesting logic in bignum-management noise. Back when I had free time, I uploaded an early draft of what may become a more enlightening introduction to the topic. My goal was to unite all the simplification algorithms I’d seen and to generalise them to SBCL’s needs: our optimisers benefit from precise integer range derivation, and codegen ought to deal with tagged fixnums. The draft should take shape as the GSoC project progresses.

There is one widespread – but very specialised – integer division algorithm that does not fit in the draft: multiplication by modular inverses. I’m guessing it’s common because it’s the first thing that comes to mind when we say division-by-multiplication. The transformation is also so specialised that I find it’s usually mentioned in contexts where it wouldn’t work. Still, it’s a nice application of algebra and the coefficients are simple enough to generate at runtime (even in C or assembly language), so here goes.

Multiplicative inverses for integer division

Let \(a\) and \(m\) be naturals. The multiplicative inverse of \(a\) modulo \(m\) is a natural \(b\) such that \(a \times b \equiv 1 \mod m\). Machine arithmetic is naturally modular (e.g., mod \(m = 2\sp{32}\)). This seems perfect!

There are a few issues here:

  1. we have to find the modular inverse;
  2. the modular inverse only exists if \(\mathop{gcd}(a, m) = 1\);
  3. multiplicative inversion and integer division only coincide when the remainder is zero.

For a concrete example of the third issue, consider \(11\), the multiplicative inverse of \(3 \mod 16\): \(3\times 11 = 33 \equiv 1 \mod 16\) and \(6 \times 11 = 66 \equiv 2 \mod 16\). However, \(4 \times 11 = 44 \equiv 12 \mod 16\), and \(12\) is nowhere close to \(4 \div 3\).

This post addresses the first two points. There is no workaround for the last one.

We can generate a modular inverse with the extended Euclidean algorithm. Wikipedia shows the iterative version, which I can never remember, so I’ll instead construct the simple recursive one.

We already assume that \[\mathop{gcd}(a, b) = 1 \] and we wish to find \(x, y\) such that \[ax + by = 1.\] Bézout’s identity guarantees that such coefficients exist.

Things are simpler if we assume that \(a < b\) (they can only be equal if \(a = b = 1\), and that case is both annoying and uninteresting).

If \(a = 1\), then \(a \cdot 1 + b \cdot 0 = 1\).

Otherwise, let \(q = \lfloor b/a\rfloor\) and \(r = b - qa\). \[\mathop{gcd}(a, r) = \mathop{gcd}(a, b) = 1,\] and, given \[ax’ + ry’ = 1,\] we can revert our change to find \[ax’ + (b - qa)y’ = a(x’ - qy’) + by’ = 1.\]
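As a quick worked check (not in the original post), trace the recursion on the earlier example, \(a = 3\), \(b = 16\): here \(q = 5\) and \(r = 16 - 5 \times 3 = 1\), and the subproblem \(3x’ + 1y’ = 1\) is solved by \(x’ = 0, y’ = 1\). Substituting back, \(x = x’ - qy’ = -5\) and \(y = y’ = 1\); indeed \(3 \times (-5) + 16 \times 1 = 1\), and \(-5 \equiv 11 \mod 16\), the inverse of \(3\) we saw above.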

We’re working in modular arithmetic, so we can sprinkle mod m without changing the result. In C, this will naturally happen for unsigned integers, via overflows. In CL, we can still force modular reduction, just to convince ourselves that we don’t need bignums.

(defun inverse (a m)
  (labels ((egcd (a b)
             (cond ((= 1 a)
                    (values 1 0))
                   ((> a b)
                    (multiple-value-bind (y x)
                        (egcd b a)
                      (values x y)))
                   (t
                    (multiple-value-bind (q r)
                        (truncate b a)
                      (multiple-value-bind (y x)
                          (egcd r a)
                        (values (mod (- x (* y q)) m)
                                y)))))))
    (let* ((x (egcd a m))
           (i (if (< x 0) (+ x m) x)))
      ;; i better be a's inverse...
      (assert (= 1 (mod (* a i) m)))
      i)))

And a quick sanity check:

CL-USER> (loop for m from 2 upto (ash 1 10)
               do (loop for i from 1 below m
                        when (= 1 (gcd i m))
                        do (inverse i m)))
NIL ; no assertion failure

The second issue is that the multiplicative inverse only exists if our divisor and our modulus (e.g., \(2\sp{32}\)) are coprime. The good news is that \(\mathop{gcd}(a, 2\sp{w})\) can only be a power of two. We only have to factor our divisor as \(a = 2\sp{s} v\) with \(v\) odd, and find \(i\), the multiplicative inverse of \(v\). Division by \(a\) is then a right shift by \(s\) and a multiplication by \(i\).

(defun trailing-zeros (x)
  "Return the largest integer s such that 2^s | x"
  (assert (plusp x))
  (1- (integer-length (logxor x (1- x)))))

(defun divisor (d m)
  (let* ((zeros (trailing-zeros d))
         (inverse (inverse (ash d (- zeros)) m)))
    (lambda (x)
      (mod (* (ash x (- zeros)) inverse) m))))

And now, a final round of tests:

CL-USER> (defun test-divisor (d m)
           (let ((divisor (divisor d m)))
             (loop for i upfrom 0
                   for j from 0 by d below m
                   do (assert (= (funcall divisor j) i)))))
TEST-DIVISOR
CL-USER> (loop for width from 1 upto 20
               for mod = (ash 1 width)
               do (loop for x from 1 below mod
                        do (test-divisor x mod)))
NIL

A simple transformation from integer division to shift and multiplication... that works only under very specific conditions.

What are modular inverses good for, then?

I’ve only seen this transformation used for pointer subtractions in C-like languages: machines count in chars and programs in whatever the pointers point to. Pointer arithmetic is only defined within the same array, so the compiler can assume that the distance between the two pointers is a multiple of the object size.

The following program is deep in undefined behaviour, for example.

foo.c
#include <stdio.h>

struct foo {
  char buffer[7];
};

int main(void)
{
  struct foo *x = (struct foo *)0;
  struct foo *y = (struct foo *)9;

  printf("%zd %i\n", y - x, y < x);
  return 0;
}
pkhuong:tmp pkhuong $ clang foo.c && ./a.out
-2635249153387078801 0

What I find interesting is that, if we pay attention to the correctness analysis, it’s clear that general div-by-mul transformations benefit from known common factors between the divisor and the dividend. In the extreme case, when the dividend is always a multiple of the divisor, we can convert the division to a single double-wide multiplication, without any shift or additional multi-word arithmetic. On architectures with fast multipliers, or ones that let us compute the high half of a product without the low part, the general case (coupled with a tight analysis) may be marginally quicker than this specialised transformation. Yet, both GCC and clang convert pointer subtractions to shifts and multiplications by modular inverses.

In the end, multiplicative inverses seem mostly useful as a red herring, and as minimal-complexity low-hanging fruit. The only reason I use them is that it’s easy to generate the coefficients in C, which is helpful when allocation sizes are determined at runtime.

Nick LevineCLAUDE and CLAUDE-SETUP

· 79 days ago

CLAUDE (the Common Lisp Library Audience Expansion Toolkit) exports libraries written in Common Lisp, so that applications being developed in other languages can access them. CLAUDE co-operates with foreign runtimes in the management of CLOS objects, records, arrays and more primitive types. Lisp macros make the task of exporting a library simple and elegant; template documentation along with C headers and sample code files relieve some of the burden of explaining such exports to the application programmer.

CLAUDE-SETUP configures CLAUDE for your library.

Rapid Example

(defclass-external frob () ())

(defun-external (new-frob :result-type object) ()
  (make-instance 'frob))

and then in Python...

>>> claude.Frob()
<Claude Frob handle=0x200538b0>
>>>

Take a Closer Look

Brit ButlerBreaking Radio Silence

· 80 days ago

Long time, no blog.

I've been offline for a while. I burned out last July and only really started hacking on my lisp projects again in March. So what's changed in the last two months? Actually, kind of a lot.

Coleslaw 0.9.4

Coleslaw 0.9.4 is hereby released. I apologize that 0.9.3, which went out in the last Quicklisp release, had an embarrassing escaping bug.

The most fun part of Coleslaw is trying my hand at API design. Lisp is a great tool for writing extensible software and Coleslaw has been a good proving ground for that since everyone has a slightly different set of requirements for their blogware.

I've been reading Sonya Keene's Object Oriented Programming in CL lately which led to a large refactoring around the new Document Protocol. I'm not prepared to say anything intelligent about protocols yet, but thankfully plenty of people have done so elsewhere. This blog post by sykopomp isn't a bad place to start.

In addition to the document protocol and the usual litany of bugfixes, Coleslaw now has a new theme based on bootswatch readable, user-defined routing, support for static pages, and greatly expanded docs.

The main things to tackle before 1.0 are a plugin to support incremental compilation for very large sites and a twitter/tumblr cross-posting plugin.

cl-6502 0.9.7

Additionally, someone actually found a use for my Readable CPU emulator! Dustin Long was working on a homebrew Nintendo game and wanted a way to unit test his code, so he's been using cl-6502 to get cycle counts and otherwise check behavior. Naturally, the very basic assembler got on his nerves so he sent me a nice pull request adding support for labels, compile-time expressions, and decimal, hex, and binary literals. Thanks, Dustin!

I also rewrote the addressing modes again, reduced consing, and made debugging easier by using Alexandria's named-lambda for all the opcodes. The cl-6502 book has been updated, of course.

Upcoming

With any luck, I'll get back to work on famiclom or tools for analyzing old NES games like Super Mario Bros and Mega Man 2. It's good to be back.


For older items, see the Planet Lisp Archives.


Last updated: 2014-07-20 14:49