Does (loop for i below 10 finally (return i)) return 9 or 10?

Does (loop for i upto 10 finally (return i)) return 10 or 11?

What does (loop for i below 10 for j upto 10 finally (return (list i j))) return?

And what about (loop for i below 10 and j upto 10 finally (return (list i j)))?

FOR ... AND not only mimics LET (as opposed to LET*) in terms of binding visibility, it also influences when the loop termination checks take place. That was new to me. I initially expected examples 3 and 4 to return the same values. What about you? Which ones, if any, did you get wrong? :-)
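To check your answers, just evaluate the forms. I'll show the results of the first two, which follow from the stepping rules (the variable is stepped before the termination test, so the FINALLY clause sees the post-step value), and leave 3 and 4 unspoiled:

```lisp
;; Stepped-then-tested: by the time FINALLY runs, the iteration
;; variable already holds the value that failed the termination test.
(loop for i below 10 finally (return i))   ; => 10, not 9
(loop for i upto 10 finally (return i))    ; => 11, not 10

;; Evaluate these two yourself; on my SBCL they differ.
(loop for i below 10 for j upto 10 finally (return (list i j)))
(loop for i below 10 and j upto 10 finally (return (list i j)))
```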
A few days ago Jeff Shrager posted that James Markevitch translated some 1966 BBN paper tape source code with the oldest known Eliza program. (Jeff’s site, elizagen.org, tracks the genealogy of Eliza.)
Picture from elizagen.org
(doctor
 (lambda nil
   (prog (sentence keystack phraselist)
     (setsepr " " " " " ")
     (setbrk "." "," ? | - + "(" ")" L32 @ BS L14)
     (setq flipflop 0)
     (control t)
     (sentprint (quote (tell me your troubles"." please terminate input with an enter)))
     (setnone)
     a (prin1 xarr)
     (makesentence)
     (cond ((equal sentence (quote (goodbye)))
            (return (sentprint (quote (it's been my pleasure))))))
     (analyze)
     (terpri)
     (go a))))
The 1966 Eliza code is on GitHub.
Jeff’s post prompted some historical context from Jeff Barrett:
The original Eliza was moved to the ANFS Q32 at SDC (one of the (D)ARPA block grant sites) in the mid 1960’s. The programmer responsible was John Burger who was involved with many early AI efforts. Somehow, John talked to one of the Playboy writers and the next thing we knew, there was an article in Playboy much to Weizenbaum’s and everybody else’s horror. We got all sorts of calls from therapists who read the article and wanted to contribute their “expertise” to make the program better. Eventually we prepared a stock letter and phone script to put off all of this free consulting.
The crisis passed when the unstoppable John Burger invited a husband and wife, both psychology profs at UCLA, to visit SDC and see the Doctor in action. I was assigned damage control and about lost it when both visitors laughed and kept saying the program was perfect! Finally, one of them caught their breath and finished the sentence: “This program is perfect to show our students just exactly how NOT to do Rogerian* therapy.”

*I think Rogerian was the term used but it’s been a while.
A little later we were involved in the (D)ARPA Speech Understanding Research (SUR) Program and some of the group was there all hours of day and night. Spouses and significant others tended to visit particularly in the crazy night hours and kept getting in our way. We would amuse them by letting them use Eliza on the Q32 Time Sharing System. One day, the Q32 became unavailable in those off hours for a long period of time. We had a Raytheon 704 computer in the speech lab that I thought we could use to keep visitors happy some of the time. So one weekend I wrote an interpretive Lisp system for the 704 and debugged it the next Monday. The sole purpose of this Lisp was to support Eliza. Someone else adapted the Q32 version to run on the new 704 Lisp. So in less than a week, while doing our normal work, we had a new Lisp system running Eliza and keeping visitors happy while we did our research.
The 704 Eliza system, with quite a different script, was used to generate a conversation with a user about the status of a computer. The dialogue was very similar to one with a human playing the part of a voice recognition and response system where the lines are noisy. The human and Eliza dialogues were included/discussed in A. Newell, et al., “Speech Understanding Systems; Final Report of a Study Group,” Published for Artificial Intelligence by North-Holland/ American Elsevier (1973). The content of that report was all generated in the late 1960s but not published immediately.
The web site, http://www.softwarepreservation.org/projects/LISP/, has a little more information about the Raytheon 704 Lisp. The SUR program was partially funded and on-going by 1970.
After posting about the Quicklisp verbosity conundrum, a few people emailed me with variations on this theme: “Since Quicklisp knows what the dependencies of a system are, can’t you just load those quietly first and then load your project verbosely?”
The problem is that the premise is not true. Quicklisp has an idea about the dependencies of Quicklisp-provided systems, but not of any other systems available through ASDF.
And it’s actually pretty difficult to answer the question, for a given system, “What systems must be loaded first?” It’s not as simple as loading the system definition and then looking at it. The act of loading the system definition may trigger the loading of other systems, which then load other systems, which then load other systems. System definition files are not simply data files. They’re Lisp programs that can do arbitrary computation and manipulation of the environment.
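A contrived sketch of the problem (the system and file names here are made up): nothing prevents a system definition file from loading other systems as a side effect of merely being loaded.

```lisp
;;;; foo.asd -- hypothetical; "some-helper" and "bar" are made-up names.
;;;; Loading this *definition* already loads another system, so the
;;;; full set of prerequisites can't be read off the DEFSYSTEM form.

(eval-when (:compile-toplevel :load-toplevel :execute)
  (asdf:load-system "some-helper"))

(asdf:defsystem "foo"
  :depends-on ("bar")
  :components ((:file "foo")))
```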
Quicklisp knows about its system dependency structures because, for every system in Quicklisp, I load it, and record what got loaded to support it. That dependency structure is then saved to a file, and that file is fetched by the Quicklisp client as part of a Quicklisp dist. This data is computed and saved once, on my dist-constructing computer, not each time, on the Quicklisp client computer. The data is evident whenever you see something like “To load foo, installing 5 Quicklisp releases: …”
But that “installing 5 Quicklisp releases” only works when foo itself is provided by Quicklisp. No dependency info is printed otherwise.
Quicklisp then loads foo by calling asdf:load-system. If some system that foo requires isn’t present, ASDF signals an asdf:missing-dependency error, which Quicklisp handles. If Quicklisp knows how to fetch the missing dependency, it does so, then retries loading foo. Otherwise, the missing dependency error is fatal.
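In outline, that handling looks something like the following simplified sketch. This is not Quicklisp's actual source; INSTALL-MISSING-SYSTEM is an imaginary placeholder, and the MISSING-REQUIRES accessor is an ASDF internal whose package has moved around between ASDF versions.

```lisp
;; Simplified sketch of quickload's fetch-and-retry behavior.
(defun sketchy-quickload (system)
  (loop
    (handler-case
        (return (asdf:load-system system))
      (asdf:missing-dependency (condition)
        ;; The condition records which system was required.
        (let ((missing (asdf::missing-requires condition)))
          ;; Imaginary placeholder: fetch and install via Quicklisp
          ;; if possible; otherwise the error is fatal.
          (unless (install-missing-system missing)
            (error condition)))))))
```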
Ultimately, though, only the top-level asdf:load-system call can be wrapped with the verbosity-controlling settings. The fetching-on-demand error handling only happens the first time a system is installed, so it’s not a predictable point of interception. After that first time, the system is found via asdf:find-system and no error handling takes place.
Writing this up has given me some twisted ideas, so maybe a fix is possible. I’ll keep you posted.
Here’s the scoop: Quicklisp hides too much information when building software, and it can’t easily be controlled.
That is partly intentional. Remember the post about clbuild a few days ago? The information hiding is a reaction to the (often joyous) sense, when using clbuild, that you were on the cutting, unstable edge of library development, likely at any given time to hack on a supporting library in addition to (or instead of) your primary application.
To muffle that sense, I wanted the libraries Quicklisp provided to be loaded quietly. Loaded as though they were building blocks, infrastructure pieces that can be taken for granted. Loaded without seeing pages of style-warnings, warnings, notices, and other stuff that you shouldn’t need to care about. (I realize, now, that this voluminous output isn’t common to all CL implementations, but even so, one of the loudest implementations is also one of the most popular.)
I still feel good about the concept. I don’t usually want to see supporting library load output, but if I do, there’s always
(ql:quickload "foo" :verbose t).
But the default quiet-output mode of quickload interacts with something else in a way I didn’t expect, a way I don’t like, and a way that I don’t really know how to fix.
I switched from using
(asdf:load-system "foo") to using
(ql:quickload "foo"). This works because Quicklisp’s quickload can “see” any system that can be found via ASDF, even if it isn’t a system provided by Quicklisp. Quickload also automatically fetches, installs, and loads Quicklisp-provided systems on demand, as needed, to make the system load. It’s super-convenient.
Unfortunately, that now means that the quiet-output philosophy is being applied to very non-infrastructure-y code, the code I’m working on at the moment, the code where I really do want to know if I’m getting warnings, style-warnings, notes, and other stuff.
It didn’t bother me a lot at first. When you’re writing something interactively in slime, C-c C-c (for a single form) and C-c C-k (for an entire file) will highlight the things you need to care about. But over time I’ve really started to miss seeing the compile and load output of my own projects differently, and more verbosely, than the output from “infrastructure.” It would be nice to be able to see and fix new warnings I accidentally introduce, in code that I’m directly responsible for.
Unfortunately, I don’t know enough about ASDF to know if it’s possible, much less how to implement it.
The special variables and condition handlers that implement quiet-output are installed around a single toplevel call to
asdf:load-system. Everything after that point is handled by ASDF. Loading a given system may involve loading an unknown mix of Quicklisp-provided systems and other systems. I can think of many ways to identify systems as originating from Quicklisp, but even if they’re identified as such, I can’t think of a way to intercede and say “When loading a system provided by Quicklisp, be quiet, otherwise, be verbose.”
Ideally, of course, it would be nice to be able to be totally verbose, totally quiet, or a mix of the two, depending on some property of a system. But at the moment, I just don’t see where I can hook into things temporarily to implement the policy I want.
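To make the wished-for policy concrete, here is a purely hypothetical sketch, written as if a reliable QUICKLISP-PROVIDED-P predicate existed. That predicate is exactly the missing piece, and I don't know whether ASDF would tolerate being wrapped this way:

```lisp
;; Purely hypothetical: quiet for Quicklisp-provided systems,
;; verbose for everything else. QUICKLISP-PROVIDED-P does not exist.
(defmethod asdf:perform :around ((op asdf:compile-op) (c asdf:cl-source-file))
  (if (quicklisp-provided-p (asdf:component-system c))
      ;; Infrastructure: compile quietly and muffle warnings.
      (handler-bind ((warning #'muffle-warning))
        (let ((*compile-verbose* nil)
              (*compile-print* nil))
          (call-next-method)))
      ;; My own code: let everything through.
      (let ((*compile-verbose* t))
        (call-next-method))))
```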
If you have any ideas about how this might be done, please email me at firstname.lastname@example.org. Working proof-of-concept code would be the most welcome form of help; I don’t have much time to chase down a lot of “have-you-tried-this?” speculation. But I’ll gratefully take whatever I can get.
I wrote a bit about elPrep, a “high-performance tool for preparing SAM/BAM/CRAM files for variant calling in DNA sequencing pipelines,” back in August. The initial version was LispWorks-only.
There’s a new version 2.0 available, and, along with new features and bugfixes, elPrep now supports SBCL. (Charlotte Herzeel’s announcement cautions that “performance on LispWorks 64bit editions is generally better” and “the use of servers with large amounts of RAM is also more convenient with LispWorks.”)
Andrew Lyon has updated cl-async to use libuv as the backend, switching from libevent. This is an incompatible change, so if you use cl-async, be sure to check the upgrade guide.
There is some discussion about the change on reddit.
Listen, friends, to the story of clbuild and how it influenced the design and implementation of Quicklisp.
I can’t tell a full story of clbuild, since I didn’t use it very much, but here’s what I remember.
clbuild was trivial to install. Download a single file, a shell script, to get started. From there, clbuild could fetch the source code of dozens of interesting projects and set up an environment where it was easy to use that code to support your own projects. It was also trivial to hack on most of the projects, since in most cases you were getting a source control checkout. It was nice to be able to hack directly on a darcs or git checkout of a useful library and then send patches or pull requests upstream.
Luke Gorrie created it, and, like many of his projects, quickly encouraged a community of contributors and hackers that kept evolving and improving clbuild.
clbuild was fantastic in many ways. So why didn’t I use it? Why create Quicklisp, which lacks some of the best features of clbuild?
My biggest initial issue was the firewall at work.
Since clbuild checked out from various version control systems, some of them used ports outside of the range allowed by a typical corporate firewall. I was limited almost exclusively to HTTP or HTTPS service.
A subsequent problem was obtaining all the prerequisite version control tools. Although git and github are dominant today, in 2007, cvs, darcs, svn, and several other version control systems were more frequently used than today. It took a series of errors about missing commands before I could finally get things rolling.
In 2007, for 20 different projects, there might be 20 different computers hosting them. In 2014, it’s more likely that 18 are hosted on github. Because of the diversity of hosting back then, it wasn’t all that uncommon for a particular source code host to be unavailable. When that happened to a critical project’s host, it could mean that bootstrapping your project from clbuild was dead in the water, waiting for the host to come back.
Even if everything was available, there was no particular guarantee that everything actually worked together. If the package structure of a particular project changed, it could break everything that depended on it, until everything was updated to work together again.
Pulling from source control also meant that the software you got depended heavily on the time you got it. If you had separate clbuild setups on separate computers, things could get out of sync unless you made an effort to sync them.
One final, minor issue was that clbuild was Unix-only. If you wanted to use it on Windows, you had to set up a Unix-like environment alongside your Lisp environment so you could run shell scripts and run cvs, darcs, svn, etc. as though they were Unix command-line programs. This didn’t affect me personally, since I mostly used Linux and Mac OS X. But it did limit the audience of clbuild to a subset of CL users.
Elements of Quicklisp’s design are in reaction to these issues.
Quicklisp’s software structure shifts the task of fetching from source control, building, and distribution of software from the end user to a central server. Rather than continuously updating all sources all the time, the updates happen periodically, typically once per month.
This is based on the observation that although there are intermittent problems with software incompatibility and build-breaking bugs, most of the time things work out ok. So the Quicklisp process is meant to slow down the pace of updates and “freeze” a configuration of the Common Lisp project universe at a working, tested, known-good point in time.
In Quicklisp terms, that universe is called a dist, and a dist version represents its frozen state at a particular point in time. The software is checked out of every source control system, archived into a .tar.gz file, built and tested, and then finally frozen into a set of HTTP-accessible archive files with a few metadata and index files. Fetching libraries is then a matter of connecting to the central server via HTTP to get the metadata and archives. There are no source-control programs to install or firewall ports to open. The build testing means there is a reduced risk of one project’s updates being fatally out-of-sync with the rest of the project universe.
By default, a Quicklisp installation uses the latest version of the standard dist, but it’s a short, easy command to get a specific version instead either at installation time or at some later time. So even if you install Quicklisp multiple times on multiple computers, you can make sure each has the same software “universe” available for development. The uncertainty introduced by the time of installation or update can be completely managed.
This works even for the oldest dist versions; if you started a project in October, 2010, you can still go back to that state of the Common Lisp library world and continue work. That’s because no archive file is ever deleted; it’s made permanently available for just this purpose.
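Concretely, rolling back is a matter of installing a dated dist with ql-dist:install-dist, where the date in the distinfo URL selects the version (the date below is illustrative; substitute the one you want):

```lisp
;; Install a specific, dated version of the standard dist.
;; :REPLACE supersedes the currently installed dist version;
;; :PROMPT NIL skips the confirmation question.
(ql-dist:install-dist
 "http://beta.quicklisp.org/dist/quicklisp/2010-10-07/distinfo.txt"
 :replace t
 :prompt nil)
```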
In the mid-2000s, it would have been hard to make a design like that very reliable for a reasonable cost. Amazon web services have made it cheap and easy. I have had only a few minutes of HTTP availability issues with Amazon in the past four years. I’ve never lost a file.
Quicklisp mitigates the Unix-only issue by using Common Lisp for the installation script and Common Lisp as the library management program. It fetches via HTTP, decompresses, and untars archives with code that has been adapted to work on each supported Common Lisp on each platform. No Unix shell or command-line tools are required.
There are still some bugs and issues with Quicklisp on Windows, because it doesn’t receive as much testing and use as non-Windows platforms, but it’s just as easy to get started on Windows as it is anywhere else.
Despite fixing some of my personal issues with clbuild, Quicklisp is missing a big, key feature. When using clbuild, it’s easy to get to the forefront of development for the universe of CL software. You can work with the bleeding-edge sources easily and submit bug fixes and features. With Quicklisp, it’s harder to find out where a particular library came from, and it’s harder to get a source-control copy of it suitable for hacking and tweaking. It’s harder to be a contributor, rather than just a consumer, of projects that aren’t your own.
I’d like to improve the situation in Quicklisp, but some of the old obstacles remain. It would require a bunch of Unix-only or Unix-centric command-line tools to be installed and properly configured. Maybe that’s not such a big deal, but it’s loomed large in my mind and blocked progress. Maybe someone will take a look at the Quicklisp project metadata and write a nice program that makes it easy to combine the best of clbuild and Quicklisp. If you do, please send me a link.
PS. clbuild lives on in clbuild2. It looks like it’s still active, with commits from just a few months ago. Maybe that’s the right thing to use when the hacking urge strikes? I’ll have to give it a try.
I'm getting so used to the M-. plus documentation-generation hack that is MGL-PAX that I use it for all new code, which has highlighted an issue with code examples.
The problem is that the (ideally runnable) examples had to live in docstrings. Small code examples presented as verifiable transcripts within docstrings were great, but developing anything beyond a couple of forms of code in docstrings is insanity, and copy-pasting code from source files to docstrings is an OOAO violation.
In response to this, PAX got the INCLUDE locative (see the linked documentation) and became its own first user at the same time. In a nutshell, the INCLUDE locative can refer to non-lisp files and sections of lisp source files which makes it easy to add code examples and external stuff to the documentation without duplication. As always, M-. works as well.
Kwelia is building the most accurate, up-to-date and comprehensive data and analytics platform for rental housing data ever constructed. We're looking for our first non-founder software engineering hire to work closely with the founders to help grow our application, analytics stack and engineering team. The ideal candidate is an experienced full-stack developer with analytics experience who can hit the ground running, and thrives in a fast-paced early-stage startup environment.
In this position, you'll be called upon to lead by example and to help establish our engineering culture. Because of our early stage (and your level of responsibility in the company's growth) compensation will include a significant equity stake. Despite our early stage, we will also provide a competitive salary and benefits. Our current team is spread between Philadelphia, PA and Austin, TX. You're welcome to join us in either of those cities, or remotely from your preferred location.
Our Technology Stack:
PostgreSQL (OLTP database)
Clojure (data collection & processing)
Apache Spark (analytics cluster)
Amazon Web Services (hosting platform)
Please contact us at email@example.com if you are interested.
ELS'15 - 8th European Lisp Symposium
Goldsmiths College, London, UK
April 20-21, 2015
http://www.european-lisp-symposium.org/
Sponsored by EPITA, Franz Inc. and LispWorks Ltd.

The purpose of the European Lisp Symposium is to provide a forum for the discussion and dissemination of all aspects of design, implementation and application of any of the Lisp and Lisp-inspired dialects, including Common Lisp, Scheme, Emacs Lisp, AutoLisp, ISLISP, Dylan, Clojure, ACL2, ECMAScript, Racket, SKILL, Hop and so on. We encourage everyone interested in Lisp to participate.

The 8th European Lisp Symposium invites high quality papers about novel research results, insights and lessons learned from practical applications and educational perspectives. We also encourage submissions about known ideas as long as they are presented in a new setting and/or in a highly elegant way.

Topics include but are not limited to:
- Context-, aspect-, domain-oriented and generative programming
- Macro-, reflective-, meta- and/or rule-based development approaches
- Language design and implementation
- Language integration, inter-operation and deployment
- Development methodologies, support and environments
- Educational approaches and perspectives
- Experience reports and case studies

We invite submissions in the following forms:
- Papers: Technical papers of up to 8 pages that describe original results or explain known ideas in new and elegant ways.
- Demonstrations: Abstracts of up to 2 pages for demonstrations of tools, libraries, and applications.
- Tutorials: Abstracts of up to 4 pages for in-depth presentations about topics of special interest for at least 90 minutes and up to 180 minutes.

The symposium will also provide slots for lightning talks, to be registered on-site every day.

All submissions should be formatted following the ACM SIGS guidelines and include ACM classification categories and terms.
For more information on the submission guidelines and the ACM keywords, see http://www.acm.org/sigs/publications/proceedings-templates and http://www.acm.org/about/class/1998.

Important dates:
- 22 Feb 2015: Submission deadline
- 15 Mar 2015: Notification of acceptance
- 29 Mar 2015: Early registration deadline
- 05 Apr 2015: Final papers
- 20-21 Apr 2015: Symposium

Programme chair: Julian Padget, University of Bath, UK
Local chair: Christophe Rhodes, Goldsmiths, University of London, UK
Programme committee: To be announced

Search keywords: #els2015, ELS 2015, ELS '15, European Lisp Symposium 2015, European Lisp Symposium '15, 8th ELS, 8th European Lisp Symposium, European Lisp Conference 2015, European Lisp Conference '15
M-x package-install RET slime RET

Enjoy!

If you have an older copy of SLIME installed manually, remove it from your load-path, otherwise the old version will take precedence.
Amazon.com is the leading online retailer in the United States, with over $75bn in global revenue. At Amazon, we are passionate about using technology to solve business problems that have big customer impact.
Clojure SDE role
Location: Seattle, WA (cannot take remote SDEs; must be onsite. Amazon pays for relocation)
CORTEX is our next generation platform that handles real-time financial data flows and notifications. Our stateless event-driven compute engine for dynamic data transforms is built entirely in Clojure and is crucial to our ability to provide a highly agile response to financial events. We leverage AWS to operate on a massive scale and meet high-availability, low-latency SLAs.
Combining a startup atmosphere with the ambition to build and utilize cutting-edge reactive technology, the Cortex team at Amazon is looking for a passionate, results-oriented, innovative Sr. Software Engineer who wants to move fast, have fun and be deeply involved in solving business integration problems across various organizations within Amazon.
If this describes you - our team is a great fit:
Our technology stack: Clojure, JVM, AWS tools, Sable
If you are interested, contact Janney Jaxen, Technical Recruiter, Retail Systems firstname.lastname@example.org
I have uploaded a new version of my Alternatives library. In addition to the
ALTERNATIVES macro, there is an
ALTERNATIVES* macro which allows one to specify a name for the set of choices. Then, one can check the
DOCUMENTATION to see which alternative was last macroexpanded.
It's apparently been just 508 days since I first joined GitHub. In that time I've written a lot of Common Lisp code and made around 4000-5000 commits. I now want to take a retrospective look back over all the projects I've started. I'll omit some of the smaller, uninteresting ones though.
The projects are listed very roughly in the order I remember creating them. I can't recall the exact order, so things might be all over the place, but it matters not. An approximate order like this is sufficient.
This was my first big CL project, which I started as I was investigating tools for Radiance. Radiance already began conceptually before this, but I didn't write significant enough code for it to count. lQuery tries to bring the very convenient jQuery syntax for manipulating the DOM to CL. I did this because I knew jQuery and did not find the alternatives very appealing. Initially it was supposed to help mostly with templating, but it turned out to be more useful for other tasks in the end.
The first version of lQuery was written in a hotel room in Japan during my one-week holiday there. Time well spent! Don't worry though, I got out often enough as well.
lQuery was also my first library to be published and distributed via Quicklisp, so I needed it to have easy to read documentation. Docstrings are great, but I wanted that information to be on the documentation page as well, so I looked for libraries that allowed me to bundle that somehow. Given that I couldn't find anything I liked, I quickly wrote up my own thing that used lQuery to generate the page. It was a matter of some hours and made me very pleased at the time.
Radiance is sort of the reason I really got into CL to begin with. The previous versions of the TyNET framework that my websites run on were written in PHP and I got really sick of the sources, so I never really went to fix bugs or make necessary improvements. Things worked, but they didn't work great.
As I picked up CL I had to look for a project to really get used to the language and rewriting my framework seemed like the first, obvious step. I wanted Radiance to become a mature, stable and usable system that other people could profit from as well. So, unlike in previous attempts I tried to take good care to do things right, even if my understanding of the language at that point was questionable at best.
One and a half years and almost a complete re-write (again) later, I still don't regret choosing this as my major project, as I'm now fairly confident that it will become something that people can use in the future. It's not quite there yet, but well on its way.
I dislike breakpoints and love good logging, so the next step Radiance demanded was a good logging solution. I first tried my hand at log4cl, but didn't quite like it, mostly because I couldn't figure out how to make it work the way I wanted. So, rolling my own it was. I wanted something very flexible, so I thought up a pipeline system for log message processing and distribution.
That was this library; a very small thing that allowed you to create (albeit in a cumbersome fashion) pipelines that could be used to process and distribute arbitrary messages.
From there on out I went to write the actual logger mechanisms, including threading support. Verbose was the result, and I still use and like it today.
For a while then I was occupied with the task of writing a bot for the Encyclopedia Dramatica wiki that should handle new registrations and bannings by adding templates to the user pages. In order to make this possible I checked out a few IRC libraries and wrote a crude thing that would sit on a channel and accept simple commands.
In order for it to actually do its thing though, I had to interact with the mediawiki API, so I wrote a tiny wrapper library around some of the calls that I needed. I never put this on Quicklisp because it was never fleshed-out enough to be there and it still isn't. Maybe some day I'll revise this to be a properly usable thing.
After I finished the bot I wanted to extend it to be able to interact with the forums of ED, which ran on XenForo. Unfortunately that forum offered absolutely zero APIs to access. There was a plugin, but I couldn't get the admins to install it as the forum was apparently so twisted that doing anything could make it crash and burn. Oh well.
So, I set out the classic way of parsing webpage content. Thanks to lQuery this was not that huge of a pain in the butt, but it still took a lot of fiddling to get things to run. This library too is not on QL as it is a big hack and far from complete as well.
At this point I'm really unsure about the order of the projects. Either way, the little bot project I made for ED was a mess and I wanted a proper bot framework to replace my previous bot, Kizai. As I wasn't impressed by the available IRC libraries either, I wrote Colleen from scratch.
Colleen is still being worked on every now and again today, but (with some bigger and smaller rewrites along the way) it has proven to be a very good framework that I am very glad I took the time to write.
In order to test out Radiance and because I was sick of pastebin as a paste service, I set out to write my own. This, too, has proven to be a good investment of my time as I still use plaster as my primary pasting service today. There's a few things I'd like to improve about it whenever I do get the time to, but for the most part it just works.
At some point I noticed that I'd like to have twitter interaction for some of my web-services, so I looked around for API interfaces for that. However there wasn't anything that really worked well. So, once more I went to write something that fit my needs.
This was my first really frustrating project to get going, mostly because figuring out how OAuth is supposed to work is a huge pain. Web APIs are some of the worst things to interact with, as often enough there is absolutely no way to figure out what exactly went wrong, so you're left stumbling in the dark until you find something that works.
Even though I haven't really used Chirp much myself, it seems to have been of use to a couple of people at least, if Github stars are anything to go by.
Since OAuth is a repeating pattern on the web and it was sufficiently painful to figure out for Chirp, I segregated that part out into its own library. I'm not sure if anyone aside from myself has used South for anything though.
During one of my rewriting iterations of Colleen I noticed that a very common pattern was to save and load some kind of storage. Moving that pattern out into the framework and thus automating configuration and storage seemed like a good idea. However, since Colleen was also an end-user application, I needed to make sure that the configuration could be saved in a format that the user wanted, rather than simple sexprs.
And that's what Universal-Config is supposed to do: Generalise the access of configuration as well as the storage. It works really well on the code side; accessing parts and changing the config is very simple and convenient. It only works so-so on the configuration storage side of things though, as I needed to strike some gross compromises in the serialisation of the objects to ensure compatibility between formats.
Maybe some day I'll figure out a smarter solution to the problems UC has.
Deferred was an attempt at providing mechanisms for optional features in your code, meaning that your code would work differently depending on what kind of libraries are loaded at the time. That way I could, for example, provide local-server-based authentication in South without explicitly requiring Hunchentoot or some other webserver. Deferred is more a proof-of-concept than anything though, as I haven't actually utilised it in any of my projects.
However, the problem is an interesting one and whenever I do return to it, I want to try to tackle it from a different angle (extending ASDF to allow something like optional dependencies and conditional components).
The first version of lQuery used Closure-HTML, CXML, and css-selectors to do most of the work. However, CHTML and CXML suffered from big problems: CXML would not parse regular HTML (of course) and CHTML would not parse HTML5 as it required a strict DTD to conform to. Also, css-selectors' performance wasn't the greatest either.
So, in order to clean up all these issues I set out to write my own HT/X/ML parser that should both be fast and lenient towards butchered documents. Well, fast it is, and lenient it is as well. Plump is probably so far my best project in my opinion, as its code is straight-forward, extensible, and just does its job very well.
The next step was to build a CSS-selectors DOM search engine on top of Plump. This turned out to be quite simple, as I could re-use the tools from Plump to parse the selectors and searching the DOM efficiently was not that big of a deal either.
After these two were done, the last job was to re-write lQuery to work with the new systems Plump and CLSS provided. The re-write was a very good idea, as it made lQuery a lot more extensible and easier to read and test. It was quite funny to read such old code, after having worked with CL for about a year by then.
The templating engine I used in Radiance so far had been a combination of lQuery and “uibox”, which provided some very crude tools to fill in fields of nodes on the DOM. I didn't like this approach very much as there was too much lQuery clutter in the code that should've been in the template.
Clip now provides a templating system that hasn't been done in CL before and, I think, hasn't really been done ever. All the code that manipulates your template is in the template itself, but the template is a valid HTML5 document at all times. The trick is to take advantage of what HTML already allows you to do: custom tags and attributes. Clip picks those up, parses them, and then modifies the DOM according to their instructions. All you have to do in your CL code is pass in the data the page needs.
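To give a flavour of the approach described above, here is a minimal sketch. The function and attribute names approximate Clip's API from memory and should be treated as illustrative rather than authoritative:

```lisp
;; Illustrative sketch only -- attribute syntax and function names
;; approximate Clip's actual API and may not match it exactly.
;; The template is plain, valid HTML; the LQUERY attribute carries
;; the instruction that fills in the heading text.
(let ((template (plump:parse "<h1 lquery=\"(text title)\">placeholder</h1>")))
  (plump:serialize (clip:process template :title "Hello!") nil))
```

The point being that the CL side only supplies data (here, `:title`), while the manipulation logic lives in the template itself.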
lQuery-Doc left a lot to wish for, so another rewrite was in order. This time I took advantage of Clip's capabilities to provide a very straight-forward, no-bullshit tool to generate documentation.
The only drawback it has currently is that its default template doesn't have the greatest stylesheet in the world, but that hardly bothers me. Maybe I'll get to writing a fancy one some day.
I always wanted to write my own painting application, mostly because MyPaint and others were never completely to my liking. I had even made attempts at this before in Java. At some point, out of curiosity, I looked into how I would go about grabbing tablet input. Investigating the jPen library brought me absolutely nothing but confusion, so I looked for other ways. Luckily enough, it turned out that Qt already provides a built-in way to grab events from tablets, and from previous experience with a minor project I knew that CommonQt allowed me to use Qt rather easily from CL.
So, what started out as a quick test to see whether it would even be possible to make a painting application quickly turned into a big thing that had a lot of potential. You can read more about it here.
A lot of time had passed since I last worked on Radiance. I took time off as I noticed that the framework had turned into something uncanny and I needed to fix that. And the way to fix it was to write a lot of design drafts and work out all the issues that came to mind on paper.
My conclusion after all this was: Radiance needed a complete, from-scratch rewrite. Oh boy. The first part that needed to be done was a proper library to provide the encapsulation into modules. Modules are Radiance's primary abstraction; they allow you to neatly separate parts, but also unify the access and interaction between them.
Modularize was the solution for this and it works pretty well. In fact, it works so well that I don't even think about it anymore nowadays; it just does its job as I expect it to. Aside from Modularize itself I wrote two extensions that tack on support for triggers and the much-needed interfaces and implementations mechanism that is vital to Radiance. I won't explain what these do exactly right now; that'll be for when I write the comprehensive guide to Radiance.
After a long time of rewriting Radiance's core and contribs, it was time to rewrite another component from the old version of TyNET: my blog. This time I tried to focus on simplicity and getting it done well. Simple it is indeed, it's barely 200 lines of code. And as you can probably see as you read this, it works quite nicely.
Writing CSS is a pain in the butt, as it involves a lot of duplication and other annoyances. At some point I had the idea of writing a Lisp to CSS compiler. Taking inspiration from Sass this idea grew into LASS in a matter of.. a day or two, I think?
I now use LASS for all of my style sheet writing concerns as it just works very well and with some minor emacs fiddling I don't even have to worry about compiling it to CSS myself.
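As a taste of what this looks like, a small LASS sketch: nested rules compile down to flat CSS selectors. The exact output formatting may differ between LASS versions:

```lisp
;; Nested LASS rules compile to flat CSS. COMPILE-AND-WRITE prints
;; the resulting stylesheet.
(lass:compile-and-write
 '(body
   :background "#111"
   :color "#EEE"
   (p :font-size "14px")))
;; producing CSS along the lines of:
;;   body{ background: #111; color: #EEE; }
;;   body p{ font-size: 14px; }
```

The nesting removes the selector duplication that makes hand-written CSS so tedious.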
Sometimes Xach would talk on IRC about wanting to interact with Tumblr through CL. As Tumblr is a service I use too and the biggest hurdle (oAuth) was already handled by South I took the challenge of writing yet another web-API client.
Humbler turned out a lot nicer than Chirp did in terms of end-user experience, I would say. However, I cannot at all say the same about my own experience while writing it. Tumblr's API “documentation” is quite bad, to be blunt. A lot of the returned fields are not noted on the page, some things are plain wrong (probably out of date) and in general there's just not enough things actually being documented. The worst part about it all was the audacity that the staff had to proclaim in a blog post that they wanted to encourage experimentation!, as if having to figure out the API by yourself was actually a great thing.
Anyway, I haven't actually used Humbler for anything myself, but some people seem to be using it and that's good enough for me.
Returning to Radiance problems, one of the recurring issues was validating user input. There didn't seem to be a library that did this in any way, and so the same old game of 'write it yourself' began. Ratify's development mostly consisted of reading RFCs, trying to translate them into tests, and finally bundling it all together in some easy-to-use macros.
On twitter I encountered a really nice screenshot of an error page on some Clojure project. I couldn't find the tweet again later, so I don't know what features it had exactly, but suffice to say it was better than anything I'd seen up to that point.
That led me to wonder how I could actually get the stack trace myself if an error occurred. There was already a library that provided rudimentary support for that, trivial-backtrace. Taking a look at its source code filled me with anything but esteem though, so I headed out to write something that would allow people to inspect the stack, restarts, and accompanying source code easily.
A quick question by eudoxia on twitter inspired me to write a very quick toolkit to extract and infuse CSS from/into HTML. The main use case for the former would be to turn HTML into HTML+CSS and the latter to reverse the process (for, say, emails). Using lQuery and LASS this turned out to be a super easy thing to do and I had it done in no time.
Hooray for great code re-use!
Aside from the blog, the only really actively used component of TyNET was the imageboard, Stevenchan. Stevenchan ran on my own software called Purplish. In order to be able to dump everything of the old code base, I was driven to re-write Purplish for Radiance.
However, Purplish now takes a much different approach. A lot of traditional imageboard features are missing and a couple of unconventional features were added. Plus, having it written in CL has the advantage of being much easier to maintain, so if anything ever does crop up I'll tend much more towards wanting to fix it than I did before with PHP.
I like language a lot. I also like to try and reduce things to their minimum. So, the idea came to me of a site that allowed people to review things with only a single keyword. The idea behind that was to, with sufficient data, see what kind of patterns emerge and find out what people think the essence of an experience is.
Certainly it wouldn't be useful for an actual ‘review’ of anything, but it's nevertheless an interesting experiment. I don't know if I'll ever get enough data to find patterns in this or anything that could lead to scientifically significant correlations, but it's a fun enough thing on its own.
Having completed pretty much everything that I wanted to work on, and stalling on some major issues with Radiance, I was on the lookout for things to do. Parasol was still on hold and nothing else really piqued my interest. In an attempt to start out right and not dive head over heels into it again, I first considered ways in which to make the C++-ness of Qt more lispy.
Born out of this was Qtools, a collection of tools to aid development with CommonQt and make it all feel a bit more homely. Of course, some major issues still remain; you still need to pay attention to garbage and there's still C++ calls lingering about, but all in all the heavy hackery of Qtools does make it more pleasing to the eye.
Qtools forced me to go deeper into the guts of CLOS and MOP than I've ever gone before and I had to spend a lot of time in implementation code to figure out how to make the things I needed work. I wouldn't advise Qtools as a good use of MOP, but it could be considered an impressive work in exercising the flexibility Lisp offers.
So, that's it then for now. I'd like to add here that during the most part of all these projects I should've been studying for university. I'm not sure if working on these projects was the right choice, but I have learned a huge bunch and I hope that the product of my efforts has been of use to other people. If not, then it certainly was not the right choice to indulge myself this much in programming.
Before I go into another long rant about my questionable situation in university I'll cap this here. Until another time!
Post scriptum: If you have ideas for features or new projects for me to work on, please let me know! More ideas are always better.
I have now released the code that I mentioned in my previous post Code That Tells You Why which lets one keep multiple implementations around in code and switch between them manually without much trouble.
A link to the source code is here: nklein.com/software/alternatives/.
It makes a good point. However, it got me thinking that for cases like the binary-search example in the article, it might be nice to see all of the alternatives in the code and easily be able to switch between them.
One way to accomplish this in Lisp is to abuse the #+ and #- reader macros:
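The original snippet was lost in formatting; a reconstruction of the general shape, using a hypothetical feature name, might be:

```lisp
;; FIRST-ATTEMPT is a hypothetical feature name that is never pushed
;; onto *FEATURES*, so the reader drops the #+ form and keeps the
;; #- form. Both alternatives stay visible in the source.
(defun sum-i^2 (n)
  #+first-attempt
  (loop :for i :to n :summing (* i i))
  #-first-attempt
  (/ (* n (1+ n) (1+ (* 2 n))) 6))
```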
This is less than ideal for a number of reasons, including: one needs to make sure to pick "feature" names that won't actually ever get turned on, the sense of #+ and #- seems backwards here, and switching to a different alternative requires editing two places.
Another Lisp alternative is to abuse the
This is better. No one can doubt which alternative is in use. It is only one edit to switch which alternative is used. It still feels pretty hackish to me though.
One can clean it up a bit with some macrology.
With this macro, one can now rewrite the sum-i^2 function quite readably:
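The macro and the rewritten function were lost in formatting; a minimal last-clause-wins sketch (the macro described in the post also honours a *** marker and a blessed name, which this sketch omits) might look like:

```lisp
;; Sketch of an ALTERNATIVES macro: each clause is (NAME FORMS...).
;; Only the last clause is compiled; earlier clauses remain visible
;; in the source as a record of the "how"s that were tried.
(defmacro alternatives (&body clauses)
  (destructuring-bind (name &rest body) (first (last clauses))
    (declare (ignore name))
    `(progn ,@body)))

(defun sum-i^2 (n)
  (alternatives
    (my-first-attempt-was-something-like-this
     (loop :for i :to n :summing (* i i)))
    (closed-form
     (/ (* n (1+ n) (1+ (* 2 n))) 6))))
```

Switching alternatives is then a matter of reordering (or marking) clauses, a single edit in a single place.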
If I wanted to try the my-first-attempt-was-something-like-this clause, I could stick a *** before that clause, or change its name to blessed, or I could move that clause into the last spot.
There is still an onus on the developer to chose useful alternative names. In most production code, one wants to clean out all of the dead code. On the other hand, during development or for more interactive code bodies, one might prefer to be able to see the exact “How” that goes with the “Why” and easily be able to swap between them.
(Above macro coming in well-documented library form, hopefully this weekend.)
I think this might be my last blog entry on the subject of building SBCL for a while.
One of the premises behind SBCL as a separate entity from CMUCL, its parent, was to make the result of its build be independent of the compiler used to build it. To a world where separate compilation is the norm, the very idea that building some software should persistently modify the state of the compiler probably seems bizarre, but the Lisp world evolved in that way and Lisp environments (at least those written in themselves) developed build recipes where the steps to construct a new Lisp system from an old one and the source code would depend critically on internal details of both the old and the new one: substantial amounts of introspection on the build host were used to bootstrap the target, so if the details revealed by introspection were no longer valid for the new system, there would need to be some patching in the middle of the build process. (How would you know whether that was necessary? Typically, because the build would fail with a more-or-less - usually more - cryptic error.)
Enter SBCL, whose strategy is essentially to use the source files first to build an SBCL!Compiler running in a host Common Lisp implementation, and then to use that SBCL!Compiler to compile the source files again to produce the target system. This requires some contortions in the source files: we must write enough of the system in portable Common Lisp so that an arbitrary host can execute SBCL!Compiler to compile SBCL-flavoured sources (including the likes of (defun car (list) (car list)) and similar, which works because SBCL!Compiler knows how to compile calls to car).
How much is "enough" of the system? Well, one answer might be when the build output actually works, at least to the point of running and executing some Lisp code. We got there about twelve years ago, when OpenMCL (as it was then called) compiled SBCL. And yet... how do we know there aren't odd differences that depend on the host compiler lurking, which will not obviously affect normal operation but will cause hard-to-debug trouble later? (In fact there were plenty of those, popping up at inopportune moments).
I've been working intermittently on dealing with this, by attempting to make the Common Lisp code that SBCL!Compiler is written in sufficiently portable that executing it on different implementations generates bitwise-identical output. Because then, and only then, can we be confident that we are not depending in some unforeseen way on a particular implementation-specific detail; if output files are different, it might be a harmless divergence, for example a difference in ordering of steps where neither depends on the other, or it might in fact indicate a leak from the host environment into the target. Before this latest attack on the problem, I last worked on it seriously in 2009, getting most of the way there but with some problems remaining, as measured by the number of output files (out of some 330 or so) whose contents differed depending on which host Common Lisp implementation SBCL!Compiler was running on.
Over the last month, then, I have been slowly solving these problems, one by one. This has involved refining what is probably my second most useless skill, working out what SBCL fasl files are doing by looking at their contents in a text editor, and from that intuiting the differences in the implementations that give rise to the differences in the output files. The final pieces of the puzzle fell into place earlier this week, and the triumphant commit announces that as of Wednesday all 335 target source files get compiled identically by SBCL!Compiler, whether that is running under Clozure Common Lisp (32- or 64-bit versions), CLISP, or a different version of SBCL itself.
Oh but wait. There is another component to the build: as well as SBCL!Compiler, we have SBCL!Loader, which is responsible for taking those 335 output files and constructing from them a Lisp image file which the platform executable can use to start a Lisp session. (SBCL!Loader is maybe better known as "genesis"; it is to load what SBCL!Compiler is to compile-file.) And it was slightly disheartening to find that despite having 335 identical output files, the resulting cold-sbcl.core file differed between builds on different host compilers, even after I had remembered to discount the build fingerprint constructed to be different for every build.
Fortunately, the actual problem that needed fixing was relatively small: a call to a standard function which (understandably) makes no guarantees about ordering was used to affect the Lisp image data directly. I then spent a certain amount of time being thoroughly confused, having managed to construct for myself a Lisp image where the following forms executed with ... odd results:
(loop for x being the external-symbols of "CL" count 1) ; => 1032

(length (delete-duplicates
         (loop for x being the external-symbols of "CL" collect x))) ; => 978
Realizing that

(unless (member (package-name package) '("COMMON-LISP" "KEYWORD" :test #'string=)) ...)
was not the same as
(unless (member (package-name package) '("COMMON-LISP" "KEYWORD") :test #'string=) ...)
and all was well again; the cold-sbcl.core output file is now identical no matter the build host.
It might be interesting to survey the various implementation-specific behaviours that we have run into during the process of making this build completely repeatable. The following is a probably non-exhaustive list - it has been twelve years, after all - but maybe is some food for thought, or (if you have a particularly demonic turn of mind) an ingredients list for a maximally-irritating CL implementation...
- The host's values of constants such as most-positive-fixnum and most-negative-fixnum can leak into the target, particularly since they could end up being used in ways where their presence wasn't obvious. For example, (deftype fd () `(integer 0 ,most-positive-fixnum)) has, in the SBCL build process, a subtly different meaning from (deftype fd () '(and fixnum unsigned-byte)): in the second case, the fd type will have the intended meaning in the target system, using the target's fixnum range, while in the first case we have no way of intercepting or translating the host's value of most-positive-fixnum. Special mentions go to array-dimension-limit, which caused Bill Newman to be cross on the Internet, and to internal-time-units-per-second; I ended up tracking down one difference in output machine code from a leak of the host's value of that constant into target code.
- Hash values, such as those from sxhash, quite justifiably differ between implementations. The practical upshot of that is that these functions can't be used to implement a cache in SBCL!Compiler, because the access patterns, and hence the patterns of cache hits and misses, will be different depending on the host implementation.
- maphash does not iterate over hash-table contents in a specified order, and in fact that order need not be deterministic; similarly, with-package-iterator can generate symbols in arbitrary orders, and set operations (set-difference and friends) will return the set as a list whose elements are in an arbitrary order. Incautious use of these functions tended to give rise to harmless but sometimes hard-to-diagnose differences in output; the solution was typically to sort the iteration output before operating on any of it, to introduce determinism.
- sort is not specified to be stable. In some implementations it actually is a stable sort under some conditions, but for cases where it's important to preserve an already-existing partial order, stable-sort is the tool for the job.
- In most cases, (make-array 8 :element-type '(unsigned-byte 8)) will give a zero-filled array, but there are circumstances in some implementations where the returned array will have arbitrary data.
- *gensym-counter* is affected by macroexpansion if the macro function calls gensym, and implementations are permitted to macroexpand macros an arbitrary number of times. That means that our use of gensym needs to be immune to whatever the host implementation's macroexpansion and evaluation strategy is.
- The value used by byte to represent a bitfield with size and position is implementation-defined. Implementations (variously) return bitmasks, conses, structures, vectors; host return values of byte must not be used during the execution of SBCL!Compiler. More subtly, the various boole-related constants (boole-and and friends) also need special treatment; at one point, their host values were used when SBCL!Compiler compiled the boole function itself, and it so happens that CLISP and SBCL both represent the constants as integers between 0 and 15... but with a different mapping between operation and integer.
- The printed representation of forms such as (quote foo) differs between implementations. In fact printing in general has been a pain, and there are still significant differences in interpretation or at least in implementation of pretty-printing: to the extent that at one point we had to minimize printing at all in order for the build to complete under some implementations.
- Some implementations compute (log 2d0 10d0) more accurately than others, including SBCL itself, do. The behaviour of the host implementation on legal but dubious code is also potentially tricky: SBCL's build treats full warnings as worthy of stopping, but some hosts emit full warnings for constructs that are tricky to write in other ways. For example, to write portable code to handle multiple kinds of string, one might write (typecase string (simple-base-string ...) ((simple-array character (*)) ...) (string ...)), but some implementations emit full warnings if a clause in a typecase is completely shadowed by other clauses, and if base-char and character are identical in that implementation the typecase above will signal.
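To make the sort-stability point above concrete, a small illustrative snippet:

```lisp
;; CL:SORT may reorder elements whose keys compare equal;
;; CL:STABLE-SORT must preserve their original relative order,
;; which matters when a pre-existing partial order must survive.
(stable-sort (copy-seq #((1 . a) (0 . b) (1 . c))) #'< :key #'car)
;; => #((0 . B) (1 . A) (1 . C))  -- the two 1-keyed entries keep
;;    their source order; plain SORT is allowed to swap them.
```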
There were probably other, more minor differences between implementations, but the above list gives a flavour of the things that needed doing in order to get to this point, where we have some assurance that our code is behaving as intended. And all of this is a month ahead of my self-imposed deadline of SBCL's 15th birthday: SBCL was announced to the world on December 14th, 1999. (I'm hoping to be able to put on an sbcl15 workshop in conjunction with the European Lisp Symposium around April 20th/21st/22nd - if that sounds interesting, please pencil the dates in the diary and let me know...)
SparX is a small engineering team focused on applying online machine learning and predictive modeling to eCommerce (impacting a 24 billion dollar business).
Our stack is 100% Clojure, service oriented, targeting 50 million users with 1ms SLAs. We apply engineering and data science to tough problems such as dynamic pricing, shipping estimations, personalized emails, and multi-variate testing.
We are always looking for talent in data-science, engineering and devops. Bonus points if you can bridge 2 of these together. We love people with strong fundamentals who can dive deep.
In my previous posts (part 1, part 2, part 3) I described the development process of a romanization algorithm for texts in the Japanese language. However the ultimate goal was always to make a simple one-purpose web application that makes use of this algorithm. It took quite a while, but it's finally here. In this post I will describe the technical details behind the development of this website.
I decided to build it with bare Hunchentoot; while there are some nice Lisp web frameworks developed lately like Restas or Caveman, my app would be too simple to need them. There would be a single handler that takes a query and various options as GET parameters, and returns a nicely formatted result.
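A single-handler setup like the one described might be sketched with Hunchentoot's easy-handler machinery roughly as follows. The handler and rendering names are hypothetical, and ichiran:romanize stands in for the actual entry point:

```lisp
;; Hypothetical sketch of the single GET handler: Q arrives as a GET
;; parameter, gets romanized, and the result is rendered as HTML.
(hunchentoot:define-easy-handler (romanize-page :uri "/cl/romanize") (q)
  (setf (hunchentoot:content-type*) "text/html; charset=utf-8")
  ;; RENDER-PAGE is a stand-in for the closure-template rendering step.
  (render-page (ichiran:romanize q)))
```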
One thing I didn’t concern myself about when writing the backend Ichiran algorithm was thread safety. However, as Hunchentoot uses threads to process requests, this matter becomes very important. Fortunately writing thread-safe code in Lisp is not that hard. Mostly you should just avoid modifying global special variables (binding them with let is okay) and be careful with writing persistent data. Since my app is pretty much read-only, there was only one such issue. I am storing a cache of word suffixes in a special variable. Generating this cache takes several seconds, but is only done once per session. As you can guess, this creates problems with thread safety, so I put a lock around this procedure and called it when the server is launched. Each server launch would therefore take several seconds, which is suboptimal. Later I would make the lock non-blocking and display a warning if the init-suffixes procedure is in progress.
(defmethod closure-template:fetch-property :around ((map list) key)
  "Support for jsown dict objects"
  (if (and (not (integerp key))
           (eql (car map) :obj)
           (every #'listp (cdr map)))
      (call-next-method (cdr map) key)
      (call-next-method)))

(defmethod closure-template:fetch-keys :around ((map list))
  (if (and (eql (car map) :obj)
           (every #'listp (cdr map)))
      (call-next-method (cdr map))
      (call-next-method)))
Theoretically this would fail if passed a valid plist like ‘(:obj (1 2)), but this cannot possibly happen in my application.
Now, at some point I had to actually put my app online. I needed a server and a domain name, and I needed them cheap (because I'm currently unemployed (pls hire me)). For the server I chose a Linode VPS, and I bought the ichi.moe domain from Name.com. I still think these new TLDs are a pretty stupid idea, but at least they give us all an opportunity to buy a short and memorable domain name. I spent the rest of the day configuring my Linode server, which I had never done before. Thankfully the documentation they provide is really good.
Because I wanted to get the most juice out of my cheap-ass server, the plan was to put the hunchentoot server behind Nginx and to cache everything. There are existing guides on how to do this setup, which were very helpful. In my setup everything is served by Nginx except for URLs that start with /cl/, which are passed to Hunchentoot. The static pages (including error pages) are also generated by closure-template (so that the design is consistent), but they are just dumped into .html files served by Nginx. Nginx also caches dynamic content, which might help if some high-traffic site links to a certain query. This, and the fact that Linodes are hosted on SSD, made the site run pretty smoothly.
Now let’s talk about my infrastructure. As described in the guides above, I have a special hunchentoot user in addition to the main user. The main user’s quicklisp directory is symlinked to hunchentoot’s so the server can load the code but cannot write there. The code is stored in 2 repositories. One is the open-source core of the project (ichiran) and the other one is a private bitbucket repository ichiran-web which holds web-related code. However a simple git pull doesn’t update the code running on the server. If I’m lazy, I do “sudo service hunchentoot restart”, which restarts everything and reloads the code. This might of course create service interruptions for the users. Another option is hot swapping all the changes. For this purpose my hunchentoot server also starts a swank server like this:
(defun start-app (&optional (port 8080))
  (handler-case (swank:create-server :dont-close t)
    (error ()))
  (ichiran/dict:init-suffixes)
  (refresh-handlers)
  (let ((acceptor (make-instance 'easy-acceptor
                                 :port port
                                 :access-log-destination *access-log*
                                 :message-log-destination *message-log*)))
    (setf *ichiran-web-server* (start acceptor))))
Swank is, of course, the server-side component of SLIME. It runs on a port that is not accessible remotely and can only be connected to locally or via SSH tunnel. I use the latter to connect SLIME on my PC to Swank running on my server, which allows me to apply various fixes without restarting, either from the REPL or by using C-c C-c to recompile some function.
Anyway, I’m pretty happy with the way things turned out, and I got some positive feedback already. The biggest thing left is tightening up the web design, which is my least favorite part of web development. The other thing is attracting enough traffic so that I can analyze the performance (I’m only getting a few people a day right now, which barely makes a blip on my server’s CPU graph).
In retrospect, getting this website up and running was pretty easy. I spent much more time trying to tweak the ichiran library to split the sentences in a correct way (and I'm still working on it). It's not much harder than, say, building a Django-based site. The tools are all there, the documentation is out there (kind of). VPSes are cheap. And it spreads awareness of Common Lisp. No reason not to try!
It's been too long since my last entry. I just haven't had much that I felt safe talking about. But, now that I'm mostly done with everything that occupied me for a while (Radiance and Purplish), I have more time available for other things. One of these things happens to be Parasol.
As you may or may not know, Parasol is a Common Lisp painting application that was born out of curiosity and wasn't really meant to be anything big, but then quickly exploded in size. But as it is with these things, at some point I hit a big roadblock: It couldn't process events fast enough and was thus not getting enough feedback from the tablet pen. This happened because the drawing operations took too long and clogged up the event loop.
To solve this problem, we need to put the operations that take a long time into a separate thread and queue up events while it runs. I tried to hack this in, but the results were less than fantastic. It could go either one of two ways; either the entire CL environment had a segmentation fault at a random time, or the painting would be littered with strange drawing artefacts.
The only thing that I could guess from this was that Qt wasn't happy with me using CL threads. But that wasn't the only issue. I really didn't like what I had built over the weeks of working on Parasol, as it reeked of patchwork design, without much of an underlying architecture. So I put the project to rest for the time being, hoping to return to it some day and rewrite it anew.
In preparation for this day I recently wrote Qtools, a collection of utilities that should make working with Qt easier. Writing that library alone caused me quite some amount of grief, so I'm not very enthusiastic about diving deeper into the sea of tears that is C++ interaction.
Regardless, I couldn't keep my mind off of it, and used some lecture time yesterday to put together a crude design document that should lay out a basic plan of action and architecture for the new Parasol version. I have started to set this plan into motion today.
First in line is building a skeleton main window with a pane for “gizmos” and a main area that can hold a range of documents or other forms of tabs. With that I'll have a minimal environment to test things out on. After that there's a large chunk of internal structure that needs to be completed: the document representation.
As I've laid it out so far, a document is composed of a set of layer-type objects, a history stack, and metadata such as the size and file associated with it. Layer objects themselves are composed of a position, size, drawables list, and an image buffer. A drawable itself is not specified to be anything fixed. Unlike before, it does not have to be a stroke, but simply an object that has the general methods a drawable has. This allows me to later add things that behave differently, such as images and 3D objects.
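As a sketch, the document model laid out above might translate into CLOS along these lines (all names and defaults are hypothetical, not Parasol's actual code):

```lisp
;; Hypothetical CLOS rendering of the described model: a document
;; owns layers, a history stack, and metadata; a layer owns a
;; position, size, list of drawables, and an image buffer.
(defclass document ()
  ((layers   :initform () :accessor layers)
   (history  :initform () :accessor history)
   (metadata :initarg :metadata :initform nil :accessor metadata)))

(defclass layer ()
  ((offset    :initarg :offset :initform #(0 0) :accessor offset)
   (size      :initarg :size :initform #(0 0) :accessor size)
   (drawables :initform () :accessor drawables)
   (buffer    :initform nil :accessor buffer)))
```

Since a drawable is "not specified to be anything fixed", it would simply be any object with the right generic methods defined on it, which is what makes later additions like images or 3D objects possible.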
This change away from Parasol's original model necessitates two further, drastic alterations: First we need to have tools aside from a general brush that allow manipulating other kinds of objects, so we need to have a way to define ‘tools’ and their effects when used, in a uniform fashion. Secondly, we need a proper, generalised history that allows us to un/re-do any kind of operation on the document. Both of those things offer significant design challenges and I really hope I can pull it off.
Fortunately however, integrating these is a long while off, as after implementing a basic document I first need to add a way to publish these –so far purely virtual– documents to the UI. To tie these two worlds together, we need a view object that defines where and how we're currently looking at the document. The view's job will also be to relay input events, such as tablet or mouse movement, to the actual document (and preserve the correct coordinate translation while doing so).
At this point I'd like to quickly talk about how I intend to solve the issue that initially brought Parasol to a halt. After thinking things over I came to the realisation that my previous attempt of adding complex splines to the strokes to make smooth lines possible was an artefact of never getting enough tablet input to begin with. With enough events, simple linear interpolation is a feasible approach. Knowing this, it becomes apparent that we do not need to precalculate the spline's effective points (as linear interpolation is dirt cheap). From this it follows that I do not need to put the actual stroke updates into a separate thread, but can simply add points to the stroke in the event loop. The only thing that does have to be outsourced is the drawing of the buffers in the document.
In order to solve the drawing-artefact problem that arose in my previous attempt, I thought I'd see whether Qt itself could provide the threading. After all, it might just be that Qt isn't happy with being called from threads it doesn't know about. After looking around I found out that the library CommonQt uses to bind to Qt (smokeqt) does not include the threading classes by default, even though they could be parsed.
Adding these classes to the library was easy enough: a simple XML config change and a recompilation made them available. But whether it actually worked was a different question. At first it seemed that it would not work at all: a callback from a QThread back into Lisp would always cause SBCL to segfault, which was a rather devastating sign. Fortunately, it seems that CCL works just fine with foreign threads. Hallelujah!
I haven't tested whether everything works just fine and dandy with this idea yet, but hopefully I'll get to that soon enough. However, threading always comes at the price of an immense complexity increase. Object access needs to be carefully synchronised with mutexes or other techniques. Thanks to the fact that interaction between the two threads is minimal, this doesn't pose too big of an issue. Once the drawing thread is done it only needs a single atomic call to set the buffer of the object to its finished value and otherwise it only needs to read things from vectors, where it won't make a big difference if it misses an element or two ahead.
Or at least so I hope. I'm sure that I'll get plenty of headaches with threading once I get to it. For now I'll be content with imagining that it'll all work out perfectly fine.
So, when this is all said and done I can move on to writing the architecture for the tools and history operations, and with that the general brush tool, which I hope I can mostly copy over from the previous version.
With image operations, threaded drawing, history and document presentation all sorted out the final step will then be to make a front-end for it all: Layer management, colour picking, brush configuration toolbar, a pluggable gizmos panel and so on and so forth.
If all goes according to my dreams I'll end up with a painting application that is extensible in every aspect imaginable, exhibits the most powerful user-configurable brush-engine I've seen, offers support for integrating and rendering 3D models as well as plenty of other constructs, and allows rudimentary image manipulation.
All of this is left up to your and my imagination for the time being, but I certainly hope to make it a reality at some point.
I might talk more about my progress as I advance, but maybe I'll keep on thinking I have nothing to talk about, regardless of how much truth there is to that in actuality.
Until the sun shines again.
If you can't attend, or want to review my talk before/after I give it, you can find it here. Feel free to offer any feedback. It's much too late to make major changes at this point, but feedback is welcome regardless.
(Technically the talk is about PHP, but tagged Lisp because I blame Common Lisp for inspiring a bunch of it.)
DiligenceEngine is a Toronto-based startup using machine learning to automate legal work. We're looking for a DevOps engineer to help us manage and automate our technology stack. Our team is small, pragmatic, and inquisitive; we love learning new technologies and balance adoption with good analysis. We prefer to hire in the Toronto area, but also welcome remote work in a time zone within North America.
Full job listing at their blog: We’re hiring a Clojure engineer!
D-Wave is looking for exceptionally motivated people who love to see the impact of their work on a daily basis, who will do whatever it takes to ensure success of the company, and who want to be a part of something special.
D-Wave is working to radically change what it is possible to do with computers. Our mission is to integrate new discoveries in physics and computer science into new breakthrough approaches to computation. We are committed to commercializing quantum computers. The company’s flagship product, the D-Wave Two, is built around a novel type of superconducting quantum processor. D-Wave Two systems are currently in use by on-line customers and by customers in the field such as NASA & Google.
D-Wave is seeking an experienced Software Developer to join the Processor Development group. The successful candidate will work closely with physicists to develop and optimize measurement routines used to calibrate D-Wave’s quantum processor. You will be self-driven, but comfortable working closely with others. You will share responsibility for designing, implementing, testing and maintaining the suite of software necessary to support the testing and operation of D-Wave's quantum computing hardware. The software is implemented in Common Lisp (SBCL) and is an integral part of the quantum computing system. It is used for a variety of purposes including calibration, operation, testing and benchmarking.
We thank all applicants for their interest, however, only those who are selected for interviews will be contacted. It is D-Wave Systems Inc policy to provide equal employment opportunity (EEO) to all persons regardless of race, color, religion, sex, national origin, age, sexual orientation, genetic information, physical or mental disability, protected veteran status, or any other characteristic protected by federal, state/provincial, local law.
Lindsay Andrea <email@example.com>
Talent Acquisition Specialist, Human Resources
D-Wave Systems Inc.
604.630.1428 Ext. 119
My work at AppNexus mostly involves performance optimisation, at any level from microarchitecture-driven improvements to data layout and assembly code to improving the responsiveness of our distributed system under load. Technically, this is similar to what I was doing as a lone developer on research-grade programs. However, the scale of our (constantly changing) code base and collaboration with a dozen other coders mean that I approach the task differently: e.g., rather than single-mindedly improving throughput now, I aim to pick an evolution path that improves throughput today without imposing too much of a burden on future development or fossilising ourselves in a design dead-end. So, although numbers still don’t lie (hah), my current approach also calls for something like judgment and taste, as well as a fair bit of empathy for others. Rare are the obviously correct choices, and, in that regard, determining what changes to make and which to discard as over-the-top ricing feels like I’m drafting a literary essay.
This view is probably tainted by the fact that, between English and French classes, I spent something like half of my time in High School critiquing essays, writing essays, or preparing to write one. Initially, there was a striking difference between the two languages: English teachers had us begin with the five paragraph format where one presents multiple arguments for the same thesis, while French teachers imposed a thesis/antithesis/synthesis triad (and never really let it go until CÉGEP, but that’s another topic). When I write that performance optimisation feels like drafting essays, I’m referring to the latter “Hegelian” process, where one exposes arguments and counterarguments alike in order to finally make a stronger case.
I’ll stretch the analogy further. Reading between the lines gives us access to more arguments, but it’s also easy to get the context wrong and come up with hilariously far-fetched interpretations. When I try to understand a system’s performance, the most robust metrics treat the system as a black box: it’s hard to get throughput under production data wrong. However, I need finer grained information (e.g., performance counters, instruction-level profiling, or application-specific metrics) to guide my work, and, the more useful that information can be – like domain specific metrics that highlight what we could do differently rather than how to do the same thing more efficiently – the easier it is to measure incorrectly. That’s not a cause for despair, but rather a fruitful line of skepticism that helps me find more opportunities.
Just two weeks ago, questioning our application-specific metrics led to an easy 10% improvement in throughput for our biggest consumer of CPU cycles. The consumer is an application that determines whether internet advertising campaigns are eligible to bid on an ad slot, and if so, which creative (ad) to show and at what bid price. For the longest time, the most time-consuming part of that process was the first step, testing for campaign eligibility. Consequently, we tracked the execution of that step precisely and worked hard to minimise the time spent on ineligible campaigns, without paying much attention to the rest of the pipeline. However, we were clearly hitting diminishing returns in that area, so I asked myself how an adversary could use our statistics to mislead us. The easiest way I could think of was to have campaigns that are eligible to bid, but without any creative compatible with the ad slot (e.g., because it’s the wrong size or because the website forbids Flash ads): although the campaigns are technically eligible, they are unable to bid on the ad slot. We added code to track these cases and found that almost half of our “eligible” campaigns simply had no creative in the right size. Filtering these campaigns early proved to be a low-hanging fruit with an ideal ratio of code complexity to performance improvement.
I recently learned that we also had to second-guess instruction level profiles. Contemporary x86oids are out of order, superscalar, and speculative machines, so profiles are always messy: “blame” is scattered around the real culprit, and some instructions (pipeline hazards like conditional jumps and uncached memory accesses, mostly) seem to account for more than their actual share. What I never realised is that, in effect, some instructions systematically mislead and push their cycles to others.
Some of our internal spinlocks use mfence. I expected that to be suboptimal, since it's well known that locked instructions are more efficient barriers: serialising mfences have to affect streaming stores and other weakly ordered memory accesses, and that's a lot more work than just preventing store/load reordering. However, our profiles showed that we spent very little time on locking, so I never gave it much thought... until eliminating a set of locks had a much better impact on performance than I would have expected from the profile. Faced with this puzzle, I had to take a closer look at the way mfence and locked instructions affect hardware-assisted instruction profiles on our production Xeon E5s.
I came up with a simple synthetic microbenchmark to simulate locking on my E5-4617: the loop body is an adjustable set of memory accesses (reads and writes of out-of-TLB or uncached locations) or computations (divisions), bracketed by pairs of normal stores, mfences, or lock inc/dec to cached memory (I replace the fences with an increment/decrement pair, and it looks like all read-modify-write instructions are implemented similarly on Intel). Comparing runtimes for normal stores with the other instructions helps us gauge their overhead. I can then execute each version under perf and estimate the overhead from the instruction-level profile. If mfence is indeed extra misleading, there should be a greater discrepancy between the empirical impact of the mfence pair and my estimate from the profile. For example, with locked instructions and random reads that miss the L3 cache, the (cycle) profile for the microbenchmark loop is:
[annotated perf listing elided]
Looking at that profile, I’d estimate that the two random reads
account for ~50% of runtime, and the pair of
lock inc/dec for ~40%.
The picture is completely different for the mfence version of the loop:

[annotated perf listing elided]
It looks like the loads from uncached memory represent ~85% of the
runtime, while the
mfence pair might account for at most ~15%, if
I include all the noise from surrounding instructions.
If I trusted the profile, I would worry about eliminating the locked instructions, but not so much about mfence. However, runtimes (in cycles), which is what I'm ultimately interested in, tell a different story. The same loop of LLC load misses takes 2.81e9 cycles for 32M iterations without any atomic or fence, versus 3.66e9 for the lock inc/dec pair and 19.60e9 cycles for mfence. So, while the profile for the mfence loop would let me believe that only ~15% of the time is spent on synchronisation, the mfence pair really represents 86% ((19.6 - 2.81) / 19.6) of the runtime for that loop! Inversely, the profile for the locked pair would make me guess that we spend about 40% of the time there, but, according to the timings, the real figure is around 23%.
The other tests all point to the same conclusion: the overhead of mfence is strongly underestimated by instruction-level profiling, while that of locked instructions is exaggerated, especially when adjacent instructions write to memory.
setup                            cycles         (est. overhead)  ~actual overhead

div [ALU] (100Mi iterations)
  atomic:   20153782848          (20%)            ~ 3.8%
  mfence:   28202315112          (25%)            ~31.3%
  vanilla:  19385020088

Reads:
 TLB misses (64Mi iterations)
  atomic:    3776164048          (80%)            ~39.3%
  mfence:   12108883816          (50%)            ~81.1%
  vanilla:   2293219400
 LLC misses (32Mi iterations)
  atomic:    3661686632          (40%)            ~23.3%
  mfence:   19596840824          (15%)            ~85.7%
  vanilla:   2807258536

Writes:
 TLB (64Mi iterations)
  atomic:    3864497496          (80%)            ~10.4%
  mfence:   13860666388          (50%)            ~75.0%
  vanilla:   3461354848
 LLC (32Mi iterations)
  atomic:    4023626584          (60%)            ~16.9%
  mfence:   21425039912          (20%)            ~84.4%
  vanilla:   3345564432
I can guess why we observe this effect; it’s not like Intel is
intentionally messing with us.
mfence is a full pipeline flush: it
slows code down because it waits for all in-flight instructions to
complete their execution. Thus, while it’s flushing that slows us
down, the profiling machinery will assign these cycles to any of the
instructions that are being flushed. Locked instructions instead
affect stores that are still queued. By forcing such stores to
retire, locked instructions become responsible for the extra cycles
and end up “paying” for writes that would have taken up time anyway.
Losing faith in hardware profiling being remotely representative of reality makes me a sad panda; I now have to double check profiles when hunting for misleading metrics. At least I can tell myself that knowing about this phenomenon helps us make better informed – if less definite – decisions and ferret out more easy wins.
P.S., if you find this stuff interesting, feel free to send an email (pkhuong at $WORK.com). My team is hiring both experienced developers and recent graduates (:
I've just committed a major feature to MGL-PAX: the ability to include code examples in docstrings. Printed output and return values are marked up with ".." and "=>", respectively.
(values (princ :hello) (list 1 2))
.. HELLO
=> :HELLO
=> (1 2)
There are some extras beyond that; the documentation provides a tutorialish treatment. I hope you'll find it useful.
It's been nearly fifteen years, and SBCL still can't be reliably built by other Lisp compilers.
Of course, other peoples' definition of "reliably" might differ. We did achieve successful building under unrelated Lisp compilers twelve years ago; there were a couple of nasty bugs along the way, found both before and after that triumphant announcement, but at least with a set of compilers whose interpretation of the standard was sufficiently similar to SBCL's own, and with certain non-mandated but expected features (such as the type (array (unsigned-byte 8) (*)) being supported as a distinct specialized array type, and single-float being distinct from double-float), SBCL achieved its aim of being buildable on a system without an SBCL binary installed (indeed, using CLISP or XCL as a build host, SBCL could in theory be bootstrapped starting with only a C compiler).
For true "reliability", though, we should not be depending on any
particular implementation-defined features other than ones we actually
require - or if we are, then the presence or absence of them should
not cause a visible difference in the resulting SBCL. The most common
kind of leak from the host lisp to the SBCL binary was the host's behaviour influencing the target, causing problems from documentation errors all
the way up to type errors in the assembler. Those leaks were mostly
plugged a while ago, though they do recur every so often; there are
other problems, and over the last week I spent some time tracking down
three of them.
The first: if you've ever done
(apropos "PRINT") or something
similar at the SBCL prompt, you might wonder at the existence of
functions named something like
SB-VM::|CACHED-FUN--PINSRB[(EXT-2BYTE-XMM-REG/MEM ((PREFIX (QUOTE (102))) (OP1 (QUOTE (58))) (OP2 (QUOTE (32))) (IMM NIL TYPE (QUOTE IMM-BYTE))) (QUOTE (NAME TAB REG , REG/MEM ...)))]-EXT-2BYTE-XMM-REG/MEM-PRINTER|.
What is going on there? Well, these functions are a part of the
disassembler machinery; they are responsible for taking a certain
amount of the machine code stream and generating a printed
representation of the corresponding assembly: in this case, for the PINSRB instruction. Ah, but (in most instruction sets) related instructions share a fair amount of structure, and decoding and printing a PINSRD instruction is basically the same as for PINSRB, with just one #x20 changed to a #x22 - in both cases we want the name of the instruction, then a tab, then the destination register, a comma, the source, another comma, and the offset in the destination register. So SBCL arranges to reuse the PINSRB instruction printer for PINSRD: it maintains a cache of printer functions, looked up by printer specification, and reuses them when appropriate. So far, so normal; the ugly name above is the generated name for such a function, constructed by interning a printed, string representation of some of the printer parameters.
Hm, but wait. See those
(QUOTE (58)) fragments inside the name?
They result from printing the list
(quote (58)). Is there a
consensus on how to print that list? Note that *print-pretty* is bound to nil for this printing; prior experience has shown that there are strong divergences between implementations, as well as long-standing individual bugs, in pretty-printer support. So, what happens if I do (write-to-string '(quote foo) :pretty nil)?

SBCL: "(QUOTE FOO)", unconditionally.
CCL: "'FOO" if ccl:*print-abbreviate-quote* is set to a true value.
CLISP: "'FOO", unconditionally (I read the .d code with comments in half-German to establish this).
So, if SBCL was compiled using CLISP, the name of the same function in
the final image would be
SB-VM::|CACHED-FUN--PINSRB[(EXT-2BYTE-XMM-REG/MEM ((PREFIX '(102)) (OP1 '(58)) (OP2 '(32)) (IMM NIL TYPE 'IMM-BYTE)) '(NAME TAB REG , REG/MEM ...))]-EXT-2BYTE-XMM-REG/MEM-PRINTER|.
Which is shorter, and maybe marginally easier to read, but importantly
for my purposes is not bitwise-identical.
Thus, here we have a difference between host Common Lisp compilers
which leaks over into the final image, and it must be eliminated.
Fortunately, this was fairly straightforward to eliminate; those names
are never in fact used to find the function object, so generating a
unique name for functions based on a counter makes the generated
object file bitwise identical, no matter how the implementation prints
two-element lists beginning with quote.
The second host leak is also related to
quote, and to our old friend
backquote - though not related in any way to the new backquote implementation. Consider this
apparently innocuous fragment, which is a simplified version of some
code to implement the
:type option to defstruct:
(macrolet ((def (name type n)
             `(progn
                (declaim (inline ,name (setf ,name)))
                (defun ,name (thing)
                  (declare (type simple-vector thing))
                  (the ,type (elt thing ,n)))
                (defun (setf ,name) (value thing)
                  (declare (type simple-vector thing))
                  (declare (type ,type value))
                  (setf (elt thing ,n) value)))))
  (def foo fixnum 0)
  (def bar string 1))
What's the problem here? Well, the functions are declaimed to be inline, so SBCL records their source code. Their source code is generated by
a macroexpander, and so is made up of conses that are generated
programmatically (as opposed to freshly consed by the reader). That
source code is then stored as a literal object in an object file,
which means in practice that instructions for reconstructing a similar
object are dumped, to be executed when the object file is processed by load. Backquote is a reader macro that expands into code that, when evaluated,
generates list structure with appropriate evaluation and splicing of
unquoted fragments. What does this mean in practice? Well, one
reasonable implementation of reading `(type ,type value) might produce
(cons 'type (cons type '(value)))
and indeed you might (no guarantees) see something like that if you do
(macroexpand '`(type ,type value))
in the implementation of your choice. Similarly, reading `(setf (elt thing ,n) value) will eventually generate code like
(cons 'setf (cons (cons 'elt (list 'thing n)) '(value)))
Now, what is "similar"? In this context, it has a technical definition: it relates two objects in possibly-unrelated Lisp images, such that they can be considered to be equivalent despite the fact that they can't be compared:
similar adj. (of two objects) defined to be equivalent under the similarity relationship.
similarity n. a two-place conceptual equivalence predicate, which is independent of the Lisp image so that two objects in different Lisp images can be understood to be equivalent under this predicate. See Section 3.2.4 (Literal Objects in Compiled Files).
Following that link, we discover that similarity for conses is defined in the obvious way:
Two conses, S and C, are similar if the car of S is similar to the car of C, and the cdr of S is similar to the cdr of C.
and also that implementations have some obligations:
Objects containing circular references can be externalizable objects. The file compiler is required to preserve eqlness of substructures within a file.
and some freedom:
With the exception of symbols and packages, any two literal objects in code being processed by the file compiler may be coalesced if and only if they are similar [...]
Put this all together, and what do we have? The def macro above generates code with similar literal objects: there are two instances of '(value) in it. A host compiler may, or may not, choose to coalesce those two literal '(value)s into a single literal object; if it does, the inline expansion of foo (or bar) will have a
circular reference, which must be preserved, showing up as a
difference in the object files produced during the SBCL build. The
fix? It's ugly, but portable: since we can't stop an aggressive
compiler from coalescing constants which are similar but not
identical, we must make sure that any similar substructure is in fact identical:
(macrolet ((def (name type n)
             (let ((value '(value)))
               `(progn
                  (declaim (inline ,name (setf ,name)))
                  (defun ,name (thing)
                    (declare (type simple-vector thing))
                    (the ,type (elt thing ,n)))
                  (defun (setf ,name) (value thing)
                    (declare (type simple-vector thing))
                    (declare (type ,type . ,value))
                    (setf (elt thing ,n) . ,value))))))
  (def foo fixnum 0)
  (def bar string 1))
Having dealt with a problem with quote and a problem with backquote, what might the Universe serve up for my third problem? Naturally, it would be a problem with a code walker. This code walker is somewhat naïve, assuming as it does that its body is made up of forms or tags; it is the walker for the assemble macro, which is used implicitly in the definition of VOPs (reusable assembly units); for example, something like
(assemble ()
  (move ptr object)
  (zeroize count)
  (inst cmp ptr nil-value)
  (inst jmp :e DONE)
 LOOP
  (loadw ptr ptr cons-cdr-slot list-pointer-lowtag)
  (inst add count (fixnumize 1))
  (inst cmp ptr nil-value)
  (inst jmp :e DONE)
  (%test-lowtag ptr LOOP nil list-pointer-lowtag)
  (error-call vop 'object-not-list-error ptr)
  DONE)
which generates code to compute the length of a list. The expander for assemble scans its body for any atoms, and generates binding
forms for those atoms to labels:
(let ((new-labels (append labels
                          (set-difference visible-labels
                                          inherited-labels))))
  ...
  `(let (,@(mapcar (lambda (name)
                     `(,name (gen-label)))
                   new-labels))
     ...))
The problem with this, from a reproducibility point of view, is that set-difference (and the other set-related functions: union, intersection, set-exclusive-or, and their destructive n-prefixed variants) do not return the sets with a specified order - which is fine when the objects are truly treated as sets, but in this case the LOOP and DONE label objects ended up in different stack locations depending on the order of their binding. Consequently the machine code for the function emitting code for computing a list's length - though not the machine code emitted by that function - would vary depending on the host's implementation of set-difference.
The fix here was to sort the result of the set operations, knowing that all the labels would be symbols and that they could be treated as string designators.
And after all this? We're still not quite there: there are three to four files (out of 330 or so) which are not bitwise-identical for differing host compilers. I hope to be able to rectify this situation in time for SBCL's 15th birthday...
For older items, see the Planet Lisp Archives.
Last updated: 2014-12-19 18:27