Planet Lisp

Patrick Stein: Fog of Light - Starting to Add Star-Fields

· 2 days ago

I have finally written my first OpenGL code using GLSL. Whew. That took way too long to get all working correctly. I promise, soon, I will upload some sample code so that others may not have to stumble as long as I did.

For the star-field, I generate a few thousand 2-D points. Each point has its own radius, its own opacity, and its own color.

I put these all into an OpenGL array buffer. The vertex shader copies data out of my struct to set the color and the point size; the fragment shader then turns the color into two overlapping radial gradients (one half the radius of the other) by modulating the color’s opacity.
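
Schematically, the generation step looks something like this (a minimal sketch of the idea, not my actual code; the STAR struct and the value ranges are invented for illustration):

(defstruct star
  (x 0.0) (y 0.0)          ; 2-D position
  (radius 1.0)             ; point size
  (opacity 1.0)            ; alpha used by the gradients
  (color #(1.0 1.0 1.0)))  ; RGB

(defun make-star-field (n)
  "Generate N random stars, ready to pack into an array buffer."
  (loop repeat n
        collect (make-star :x (random 1.0)
                           :y (random 1.0)
                           :radius (+ 0.5 (random 2.5))
                           :opacity (+ 0.25 (random 0.75))
                           :color (vector (random 1.0)
                                          (random 1.0)
                                          (random 1.0)))))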

[screenshot of a sample star-field]

Next up will be nebulae, then planets/asteroids in the local system.

Zach Beane: Roger Corman talk in the Bay Area

· 2 days ago

Quicklisp news: May 2017 Quicklisp dist update now available

· 9 days ago
New projects:
  • cepl.glop — glop host for cepl — BSD 2 Clause
  • cepl.sdl2-image — Some helper methods for using sdl2-image to load images to CEPL types — BSD 2 Clause
  • cepl.sdl2-ttf — A few additional helpers for making working with sdl2-ttf even easier from CEPL — BSD 2 Clause
  • cl-clblas — clBLAS binding — Apache License, Version 2.0
  • cl-emoji — cl-emoji provides the Unicode emoji characters — MIT
  • cl-fond — Bindings to libfond, a simple text rendering engine for OpenGL — Artistic
  • cl-hamcrest — This library makes your CL unittests more readable. — New BSD
  • cl-ntp-client — A simple NTP (Network Time Protocol) client in Common Lisp — BSD
  • cl-pcg — A bare-bones Permuted Congruential Generator implementation in pure Common Lisp. — MIT
  • cl-sdl2-mixer — Bindings for SDL2_mixer — MIT
  • cl-trie — Common Lisp implementation of Trie data structure. — MIT
  • cl-why — (X)HTML generation macros — BSD
  • doubly-linked-list — An implementation of the doubly linked list data structure. — MIT
  • flac-parser — A parser for FLAC audio files. — MIT
  • fs-utils — Utilities for working with files and file paths. — MIT
  • gamebox-dgen — A procedural dungeon generator. — MIT
  • gamebox-ecs — An implementation of the Entity-Component System (ECS) pattern, popular with game development. — MIT
  • gamebox-frame-manager — A manager for frames within a game loop. — MIT
  • gamebox-grids — Create and manipulate tiles in a two-dimensional grid layout. — MIT
  • gamebox-math — A high performance math library useful for making games. — MIT
  • genie — A simple wrapper to generate portably seedable pseudo-random numbers. — MIT
  • markdown.cl — A markdown parser for Common Lisp — MIT
  • narrowed-types — Type definitions narrowed with predicates — BSD
  • simple-logger — A simple message logging system. — MIT
  • simple-routes — Facility for straightforward http routing on top of Hunchentoot. — 2 clause BSD
  • stealth-mixin — Library for creating stealth mixin classes. — FreeBSD, see file LICENSE.text
  • the-cost-of-nothing — Determine the cost of things in Common Lisp. — GPLv3
  • trivial-clipboard — trivial-clipboard gives access to the system clipboard. — MIT
Updated projects: 3d-matrices, 3d-vectors, alexandria, architecture.builder-protocol, architecture.service-provider, array-utils, babel, beast, caveman2-widgets, cepl, cepl.camera, cepl.devil, cepl.sdl2, cepl.skitter, chirp, cl-ana, cl-ascii-art, cl-bencode, cl-cache-tables, cl-cudd, cl-custom-hash-table, cl-digraph, cl-enumeration, cl-gamepad, cl-gpio, cl-html5-parser, cl-ixf, cl-jpeg, cl-json-template, cl-k8055, cl-monitors, cl-mpg123, cl-oclapi, cl-opengl, cl-out123, cl-pass, cl-pslib, cl-python, cl-sandbox, cl-sdl2, cl-sdl2-image, cl-sdl2-ttf, cl-slug, cl-soil, cl-spidev, cl-str, cl-tasukete, cl-unification, cl-vectors, cl-video, cl-xkb, clack, classimp, clazy, clinch, clip, clml, closer-mop, clss, clx, coleslaw, colleen, colorize, croatoan, crypto-shortcuts, deeds, defenum, deferred, dendrite, dexador, dirt, dissect, documentation-utils, dyna, esrap, fare-memoization, fast-io, flare, fn, for, form-fiddle, glsl-spec, glsl-toolkit, hu.dwim.asdf, hu.dwim.debug, hu.dwim.def, hu.dwim.perec, hu.dwim.presentation, hu.dwim.rdbms, hu.dwim.reiterate, hu.dwim.uri, hu.dwim.util, hu.dwim.web-server, humbler, inquisitor, iolib, ironclad, jonathan, json-streams, jsonrpc, kenzo, lambda-fiddle, lass, legit, lichat-protocol, lichat-serverlib, lichat-tcp-client, lichat-tcp-server, lichat-ws-server, lisp-namespace, local-time, lquery, maiden, mcclim, md5, mel-base, modularize, modularize-hooks, modularize-interfaces, monkeylib-html, monkeylib-json, neo4cl, new-op, ningle, north, oclcl, omer-count, parachute, parser.common-rules, pathname-utils, piping, plump, plump-bundle, plump-sexp, qlot, qt-libs, qtools, qtools-ui, random-state, ratify, read-csv, redirect-stream, rtg-math, rutils, serapeum, simple-inferiors, simple-tasks, skitter, softdrink, south, spinneret, staple, structy-defclass, stumpwm, temporal-functions, tm, translate, trivia, trivial-arguments, trivial-benchmark, trivial-indent, trivial-main-thread, trivial-mimes, trivial-thumbnail, ubiquitous, uiop, varjo, verbose, weblocks, xhtmlambda.

Removed projects: cl-geo, cl-wkb, cl4l, clim-pkg-doc, gsharp, lifoo, lisp-binary.

To get this update, use (ql:update-dist "quicklisp"). Enjoy!

Patrick Stein: Fog of Light - Getting Underway

· 14 days ago

Dauntless (The Lost Fleet, Book 1) was the first science-fiction book I read that tried to deal with space combat with the real-world constraint that light only travels so fast. It takes light eight minutes to get from the Sun to Earth. It takes light more than a second to get from the Earth to the Moon. Depending on where they are in their orbits, it takes between three minutes and twenty-two minutes to get light from Mars to Earth.

Imagine that you’re a star-ship. You and your companions have just warped into a new star system. You see a flotilla of enemy ships about 45 light-minutes away. That means you’ve got 45 minutes before that flotilla can possibly even know that you’re in their star system. How much can you get done in that time? Once they can see you, how much can you mislead them about your target if they’re going to be operating on data about where you were heading more than half an hour ago?

For years, I have been batting around this concept, hammering it into a game. I have finally gotten started on it.

Armed with some functions like these, I am constructing values which change at points in space-time and querying the value visible from other points in space-time.

(defgeneric get-nearest-value (space-time-value space-time-point)
  ...
  (:documentation "Find the observable value of a quantity
SPACE-TIME-VALUE when observed from a given location
SPACE-TIME-POINT. This method finds the most-recent
value V0 (at location P0) for this data when viewed from
the given location. This method returns (VALUES V0 P0).
This method makes no effort to interpolate the results."
))
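
To make the idea concrete, here is a minimal sketch of such a query (my own illustration, not the actual implementation; it assumes a history of (time position value) entries, newest first, and a speed of light C in world units):

(defun distance (a b)
  "Euclidean distance between two points given as coordinate sequences."
  (sqrt (reduce #'+ (map 'list (lambda (x y) (expt (- x y) 2)) a b))))

(defun get-nearest-value-sketch (history obs-pos obs-time &key (c 1.0))
  "Return (VALUES V0 P0) for the newest entry whose light has reached
the observer at OBS-POS by OBS-TIME, or NIL if none is visible yet."
  (loop for (time pos value) in history
        when (<= (+ time (/ (distance pos obs-pos) c)) obs-time)
          do (return (values value pos))))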

Here are my first, visually-demonstrable results:

Hopefully, there will be plenty more coming in the near future.

Lispjobs: Linux systems engineer with Common Lisp experience, m-creations, Mainz, Germany

· 27 days ago

Full-time position for a German-speaking Linux admin with Docker and
Common Lisp experience near Frankfurt, Germany

We are a small German software shop based in Mainz, Germany, founded in
2000. We create custom software solutions for mid-size to big companies
in finance/payment, health care, and media research.

For some of our customers, we also cover operational aspects of the
software lifecycle by creating and running Docker containers in
development, test, and production environments on clusters of servers
running Apache Mesos.

Missing pieces of infrastructure are written in Common Lisp (CL) and
interact with existing software components of the cluster (DNS, load
balancer etc.). Docker images are based on the embedded Linux
distribution OpenWrt.

We are looking for new colleagues who ideally

– have 3+ years of Linux experience (e.g. are fluent in shell scripting
and have a good overview of the GNU/Linux tools)

– have a working knowledge of Docker, its interaction with the host, and
the role of the container image

– have experience in Common Lisp (not necessarily professional)

– want to use CL to solve systems engineering problems (e.g. dynamic
load balancing, DNS re-configuration)

– are interested in mastering the OpenWrt build system (buildroot +
make/cmake) to create a secure in-container distribution

Experience in the mentioned areas is not as important as curiosity,
intelligence and open-mindedness. You will get the necessary time to
learn the missing skills. We are interested in a long-term relationship
rather than just staffing a project with ‘resources’.

Being a small company, we care about each one of our colleagues and
react flexibly to the (sometimes changing) necessities of their lives.
Together we try to develop a plan for your personal career, depending
on your own goals.

Curious? Please contact Kambiz Darabi at information@m-creations.com
He’ll be happy to give you more information and answer all your
questions.

m-creations gmbh
Acker 2
55116 Mainz


Zach Beane: Common Lisp Standard Draft

· 28 days ago
Common Lisp Standard Draft:

This is a nice PDF version of the CL spec built from the final draft TeX sources. There's also a GitLab repo that can be used to reproduce the PDF locally. (Thanks to Rainer Joswig for sharing this on Twitter.)

François-René Rideau: Design at the confluence of programming languages and build systems

· 29 days ago

This short article discusses upcoming changes and future challenges for ASDF, the Common Lisp build system. It also draws lessons for a hypothetical successor to ASDF, for build systems in general, for languages in which to write them, and for languages that would have an internal build system able to rival modern build systems.

ASDF, "Another System Definition Facility", is the de facto standard build system for Common Lisp (CL). It is relatively lightweight (13 kloc, over half of which is for the portability layer UIOP, the "Utilities for Implementation- and OS- Portability"), quite portable (17 supported implementations), configurable (though importantly it "just works" by default), well-featured (it can create standalone executables), and extensible (e.g. with support for linking C code, or for compiling FORTRAN through Lisp). But it lacks many features of modern build systems such as Bazel: it does not support determinism and reproducibility, distribution and caching, cross-compilation to other platforms, building software written in languages other than CL, integration with non-CL build systems, management of multiple versions of the same software, or scaling to millions of files. Historically, these limitations are due to ASDF being at heart an in-image build system in a direct line from the original Lisp Machine DEFSYSTEM: it is designed to build and load software into the current Lisp image. But the challenges in possibly transforming ASDF into a modern build system touch limitations of Common Lisp itself, and tell us something about language design in general.

I have essentially two development branches more or less ready to merge into the upcoming ASDF 3.3: the "plan" branch that provides proper phase separation (briefly discussed in my ELS 2017 demo), and the "syntax-control" branch that provides bindings for syntax variables around ASDF evaluation (briefly discussed in my ELS 2014 extended article, section 3.5 "Safety before Ubiquity").

Phase Separation

The first branch solves the problem of phase separation. The branch is called "plan" because I started with the belief that most of the changes would be centered around how ASDF computes its plan. But the changes run deeper than that: 970 lines were added or modified all over the source code, not counting hundreds more that were moved around as the code got reorganized. That's double the number of lines of the original ASDF, and it took me several months (part time, off hours) to get just right. Still, it is up-to-date, passes all tests, and works fine for me.

To understand what this is about, consider that a basic design point of ASDF 1.0 to 3.2 is that it first plans your entire build, then it performs the plan. The plan is a list of actions (pairs of an OPERATION and a COMPONENT), obtained by walking the action dependency graph implicitly defined by the COMPONENT-DEPENDS-ON methods. Performing the plan is achieved by calling the PERFORM generic function on each action, which in turn will call INPUT-FILES and OUTPUT-FILES to locate its inputs and outputs.
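
For illustration, a toy extension of this protocol might look like the following (a sketch: the COPY-FILE component type and its ".copy" output are invented, but the generic functions are ASDF's):

;; A toy component type whose "build" action copies the source file.
(defclass copy-file (asdf:source-file) ())

;; OUTPUT-FILES tells the planner what the action produces.
(defmethod asdf:output-files ((op asdf:compile-op) (c copy-file))
  (list (make-pathname :type "copy"
                       :defaults (asdf:component-pathname c))))

;; PERFORM does the actual work once the plan is executed.
(defmethod asdf:perform ((op asdf:compile-op) (c copy-file))
  (uiop:copy-file (asdf:component-pathname c)
                  (first (asdf:output-files op c))))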

This plan-then-perform strategy works perfectly fine as long as you don't need ASDF extensions (such as cffi-grovel or f2cl). However, if you need an extension, there is a problem: how do you load it? Well, it's written in Lisp, so you could use a Lisp build system to load it, for instance, ASDF! And so people either use load-system (or an older equivalent) from their .asd files, or more declaratively use :defsystem-depends-on in their (defsystem ...) form, which in practice is about the same (see the sketch below). Now, since ASDF up until 3.2 has no notion of multiple loading phases, what happens is that a brand new separate plan is computed then performed every time you use this feature. This works well enough in simple cases: some actions may be planned then performed in multiple phases, but performing should be idempotent (or else you deserve to lose), so ASDF merely wastes some time rebuilding a few actions that were planned before an extension was loaded that also depended on them. However, the real problems arise when something causes an extension to be invalidated: then the behavior of the extension may change (even subtly) due to its modified dependency, and the extension and all the systems that directly or indirectly depend on it should be invalidated and recomputed. But ASDF up until 3.2 fails to do so, and the resulting build can thus be incorrect.
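
Schematically, the declarative style looks like this (following cffi-grovel's documented usage; the system and file names are placeholders):

(asdf:defsystem "my-system"
  ;; The extension is itself built and loaded by ASDF, implicitly,
  ;; before this system can even be planned.
  :defsystem-depends-on ("cffi-grovel")
  :components ((:cffi-grovel-file "grovel")
               (:file "main")))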

The bug is quite subtle: to experience it, you must be attempting an incremental build, while meaningful changes were made that affect the behavior of an ASDF extension. This kind of situation is rare enough in the small. And it is easily remedied by manually building from scratch. In the small, you can afford to always build from scratch the few systems that you modify, anyway. But when programming in the large, the bug may become very serious. What is more, it is a hurdle on the road to making a future ASDF a robust system with deterministic builds.

Addressing the issue was not a simple fix; it required deep and subtle changes that introduce notions neglected in the previous, simpler build models: having a session that spans multiple plan-then-perform phases and caches the proper information, not too little, not too much; having a notion that loading a .asd file is itself an action that must be taken into account in the plan; having a notion of dynamically detecting the dependencies of loading a .asd file; being able to check cross-phase dependencies before deciding to keep or invalidate a previously loaded version of a .asd file, without causing anything to be loaded in the process; and expanding the state space associated with actions as they are traversed potentially many times while building the now multi-phase dependency graph. And all these things interfere with each other and have to be gotten just right.

Now, while my implemented solution is obviously very specific to ASDF, the issue of properly staging build extensions is a common user need; addressing it would require the introduction of similar notions in any build system. Yet most build systems, like ASDF up until 3.2, fail to offer proper dependency tracking when extensions change: e.g. with GNU Make you can include the result of a target into the Makefile, but there is no attempt to invalidate targets if recipes have changed or the Makefile or some included file was modified. Those build systems that do implement proper phase separation to track these dependencies are usually language-specific build systems (like ASDF); but most of them (unlike ASDF) only deal with staging macros or extensions inside the language (e.g. Racket), not with building arbitrary code outside the language. An interesting case is Bazel, which does maintain a strict plan-then-perform model yet allows user-provided extensions (e.g. to support Lisp). However, its extensions, written in a safe restricted DSL (which runs in the planning phase only, with two subphases, "load" and "analysis"), are not themselves subject to extension using the build system (though the DSL being a universal language, you could implement extensibility the hard way).

Fixing the build model in ASDF 3.3 led to subtle backward-incompatible changes. Libraries available on Quicklisp were inspected, and their authors contacted if they depended on modified functionality or abandoned internals. Those libraries that are still maintained were fixed. Still, I'd just like to see how compatible it is with next month's Quicklisp before I can recommend releasing these changes upon the masses.

Syntax Control

The current ASDF has no notion of syntax, and uses whatever *readtable*, *print-pprint-dispatch*, *read-default-float-format* or other syntax variables are ambient at the time ASDF is called. This means that if you ever side-effect those variables and/or the tables that underlie the first two (e.g. to enable fare-quasiquote for the sake of matching with optima or trivia), then call ASDF, the code will be compiled with those modified tables, which will produce fasls that are unloadable unless the same side-effects are present. If systems are modified and compiled that do not have explicit dependencies on those side-effects, or worse, that those side-effects depend on (e.g. fare-utils, which fare-quasiquote depends on), then your fasl cache will be polluted, and the only way out will be to rm -rf the contaminated parts of the fasl cache and/or to build with :force :all until all parts are overwritten. Which is surprising and painful. In practice, this means that using ASDF is not compatible with making non-additive modifications to the syntax.

Back in the 3.1 days, I wrote a branch whereby each system has its own bindings for the syntax variables, while the default tables are made read-only (where possible, which it is in many implementations). With that branch, the convention is that each system can modify the syntax in whatever way it wants, and that will only affect that system; however, changes to syntax tables must be made after explicitly creating new tables, and any attempt to side-effect the default global tables will result in an error.

This was the cleanest solution, but alas it is not compatible with a few legacy systems that explicitly depend on modifying the syntax tables (and/or variables?) for the next system to use, as ugly as that is. My initial opinion was that this should be forbidden, and that these legacy systems should be fixed; however, these were legacy systems at a notable Lisp company, with no one willing to fix them; also, I had resigned from maintainership and the new maintainer is more conservative than I am, so in the end the branch was delayed until after said Lisp company would investigate, which never happened, and the branch was never merged.

A simpler and more backward-compatible change to ASDF would have been to have global settings for the variables that are bound around any ASDF session. Then, the convention would be that you are not allowed to use ASDF again to load regular CL systems after you modify these variables in a non-additive way; and the only additive changes you can make are to add new entries to the shared *readtable* and *print-pprint-dispatch* tables that do not conflict with any default entry or earlier entry (and that includes default entries on any implementation that you may want to support, so e.g. no getting #_ or #/ if you want to support CCL). Even additive changes, if made, must somehow not clash with each other, or they become non-additive; but there is no way to automatically check that this is the case and issue a warning. After you make non-additive changes (if you do), then ASDF can't be used anymore to build normal systems that may conflict with those changes, and if they are modified and you call ASDF on a system that depends on them, you lose (or you must first make all those systems immutable).

Note that because ASDF would already break in those cases, most of these constraints de facto exist, are enforced, and are respected by all ASDF users. There remains the question of binding the variables around the build, which allows normal systems to be built even if a user changes the variables; or of not binding them, which puts the onus on most users of keeping these variables bound to reasonable values around calls to ASDF, for the benefit of the few users who want their own breaking changes to persist after the build. I believe the first option (bind the variables) is cleaner, though the second (basically, do nothing) is more backward-compatible.

In all cases, you can always make non-additive changes to a readtable (such as enabling fare-quasiquote) by locally binding *readtable* to a different value, e.g. using named-readtables:in-readtable. A local binding won't adversely affect the ASDF build; but unless ASDF is changed to enforce its own bindings, you'll have to make sure to manually undo your local bindings before you call ASDF again.
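
For instance, something along these lines keeps the modification local to the files that opt in (a sketch; the readtable name and bracket syntax are invented):

(named-readtables:defreadtable :my-syntax
  (:merge :standard)
  ;; read [a b c] as (list a b c)
  (:macro-char #\[ (lambda (stream char)
                     (declare (ignore char))
                     (cons 'list (read-delimited-list #\] stream t))))
  (:macro-char #\] (lambda (stream char)
                     (declare (ignore stream char))
                     (error "Unmatched ]"))))

(named-readtables:in-readtable :my-syntax)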

The problem with not adding any syntax control to ASDF is that it forces Lispers to always be conservative about modifying the readtable and calling ASDF (or having it called indirectly by any function whatsoever that they call, which they can't always predict). In practice this makes hacking CL code hostile to interactive development with non-additive syntax modification, which defeats by social convention a technical feature of the language often touted as cool by its zealots. If syntax control is added to ASDF, then you can freely make your syntax modifications and be confident that building code won't be adversely affected.

The current branch implements the simpler option of binding variables around ASDF sessions, and using a mutable shared readtable that should only be modified additively. It has probably bitrotten, and should be updated or rewritten. The current maintainer, Robert Goldman, should probably opine on which change to adopt with what schedule (3.3.0? 3.2.2? 3.3.1? 3.4.0?) and sign off the API.

Vanquishing Language Limitations

These two modifications are ((now) low)-hanging fruit in making ASDF a more robust build tool, one that supports working with non-trivial extensions to the build system or the Lisp syntax. And in both cases, the limit reached by ASDF is ultimately that CL is a hippie language that allows unrestricted global side-effects and disallows disallowing. Therefore extensions necessarily introduce potential conflicts with each other that have to be resolved in wetware via convention, whereby all users are to be trusted not to go wild with side-effects. The system cannot even detect violations and warn users of a potential mistake; users will have to experience subtle or catastrophic failure and figure out what went wrong.

A better language for a build system would be purer: inasmuch as it has "global" side-effects, it should allow one to "fork" the "global" state in an efficient incremental way. Or even better, it should make it easy to catch side-effects and write this forking support in userland. At the very least, that would make it possible to detect violations and warn the user. Bazel is an example build system whose extension language has local side-effects, but globally has pure forked environments. A successor to ASDF could similarly provide a suitably pure dialect of Lisp for extensions.

Happily, adding better syntax control to ASDF suggests an obvious solution: ASDF extensions could be written in an enforceable subset of a suitable extension of Common Lisp. Thus, ASDF extensions, if not random Common Lisp programs, can be made to follow a discipline compatible with a deterministic, reproducible build.

What would be an ideal language in which to write an extensible build system? Well, I tackled that question in another article, Chapter 9: "Build Systems" of my blog "Ngnghm". That's probably too far from CL to be in the future of ASDF as such, though: the CL extension would be too large to fit ASDF's requirement of minimalism. On the other hand, if such a language and build system is ever written, interest in CL and ASDF might wane in favor of said latter build system.

In any case, in addition to not being a blub language, a great programming language for an integrated build system would offer features such as: the ability to directly express functional reactive programming, determinism as well as system I/O, laziness as well as strictness, reflection to map variables to the filesystem and/or version control as well as to stage computations in general (including dynamic build plans), hygiene in syntax extension and file reference, modularity in the large as well as in the small, programmable namespace management, and the ability to virtualize computations at all sizes and levels of abstraction, to instrument code, etc.

Towards cross-compilation

Now, before we get reproducible builds, we also need to enable cross-compilation for ASDF systems, so the necessarily unrestricted side-effects of compiling Common Lisp code cannot interfere with the rest of the build. Cross-compilation also allows building on a different platform, which would be important to properly support MOCL, but would probably also mesh well with support for building software in arbitrary other languages.

Importantly, instead of the (perform operation component) protocol that specifies how to build software in the current image, a (perform-form target operation component) protocol (or maybe one where the target information has been made part of the operation object) would return forms specifying how to build the software; the building could then happen in a separate Lisp or non-Lisp process, on the same machine or on another worker of a distributed build farm.
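
Such a protocol might look like this (purely hypothetical; neither the generic function nor the method below exists in ASDF today):

(defgeneric perform-form (target operation component)
  (:documentation "Return a form that performs OPERATION on COMPONENT
for the given TARGET, to be evaluated in a separate process."))

(defmethod perform-form (target (op asdf:compile-op) (c asdf:cl-source-file))
  (declare (ignore target))
  `(compile-file ,(asdf:component-pathname c)
                 :output-file ,(first (asdf:output-files op c))))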

Note however that one essential constraint of ASDF is that it should keep working in-image in the small and not depend on external processes or additional libraries. Any serious effort towards a "deterministic build" should therefore remain an extension indeed (though one that users would load early).

Still, if this extension is to remain compatible with ASDF and its .asd files, providing a backward-compatible path forward, then modifications and cleanups may have to be done to ASDF itself so it behaves well. Even keeping that hypothetical deterministic build separate, I expect non-trivial changes to the ASDF API to enable it, such as the perform-form protocol mentioned above. But backward-compatibility and smooth transition paths have always been the name of the game for ASDF; they are what make possible an ecosystem with thousands of packages.

There is a precedent for an ASDF extension leading to (most positive) changes in ASDF: POIU, the "Parallel Operators on Independent Units", Andreas Fuchs' extension to compile files in forks (but still load them in-image). Making sure that POIU could be expressed as an extension of ASDF without redefining or breaking the provided abstractions was instrumental in the evolution of ASDF: it led to many cleanups in ASDF 2, it inspired several of the breakthroughs that informed what became ASDF 3, and it kept influencing ASDF 3.3.

Thus, even though ASDF will stay forever an in-image build system, and even though a deterministic build extension (let's call it FDSA, the Federated Deterministic System Assembler) may ultimately remain as little used as POIU (i.e. because it lacks sufficient benefits to justify the transition costs), I expect the design of the base ASDF to be deeply influenced by the development of such a tool (if it happens).

Looking for new developers

Robert Goldman and I are not getting younger, not getting more interested in ASDF, and we're not getting paid to hack on it. We are looking for young Common Lisp hackers to join us as developers, and maybe some day become maintainers, while we're still there to guide them through the code base. Even without the ambition (and resources) to actually work towards a hypothetical FDSA, our TODO file is full of items of all sizes and difficulties that could use some love. So, whatever your level of proficiency, if you feel like hacking on a build system both quite practical and full of potentiality, there are plenty of opportunities for you to work on ASDF (or a successor?) and do great, impactful work.

McCLIM: Progress report #7

· 34 days ago

Dear Community,

During this iteration I have worked on the Debugger (system clim-debugger) to bring it closer to sldb:

[Debugger screenshot]

More work on the module is planned, with the final goal of integrating it with the Listener and having it as the default debugger for McCLIM applications. Suggestions on how to improve the interface, testing, and help with coding are appreciated. Preliminary documentation has been written and put in the McCLIM manual draft.

I've started working on a library called Slim[1], the goal of which is to provide some of the CLIM interfaces in an easy-to-learn and easy-to-write manner (i.e. more context-dependent, less verbose names, unified abstractions, etc.). For now it is only a skeleton with barely four macros, but I'd love to hear suggestions as to what it should contain, and in what form. A sketch may be found in the source code under Libraries/Slim/. If you think it is a bad idea to have such a library shipped with McCLIM - let me know about that too!

The documentation was extended in some places. Building the info document also works now (a long-standing issue). An updated version of the manual draft may be found on the McCLIM website. The Drei documentation has been put in a separate document due to its size and its scope being independent from the rest of McCLIM.

Nisar Ahmad has solved one of the bounties related to menus and command tables. He also submitted documentation for the menu functionality, thereby earning $150. Congratulations!

Speaking of finances - all money is now accumulated solely for bounties and development tasks; none has been withdrawn by me since the beginning of 2017.

Currently we have $1226 at our disposal and active bounties worth $700. Declared monthly donations at the moment equal $297. Additionally one-time contributions come every month. That means that we can post two or three bounties a month without draining the current resources, or spawn a bunch of worthwhile tasks and keep going as money comes. This is fantastic news. Thank you all for your support to the project!

At the moment we have five active bounties worth $700 which may be found here:

https://www.bountysource.com/teams/mcclim/bounties

New bounties have a time limit assigned to them (six months) - thanks to that we are able to reclaim money from unresolved issues and propose it somewhere else (or repost the same bounty).

To improve the visibility of the issues which have bounties on them, I've added a label to the GitHub issue tracker: bounty.

Suggestions as to which other issues should have a bounty on them are appreciated and welcome. Please note that Bountysource has a functionality "Suggest an Issue" which may be found on the bounties page. If you feel that you may solve some problem, but there is no bounty on it, feel free to suggest it too.

If you have any questions, doubts or suggestions - please contact me either by email (daniel@turtleware.eu) or on IRC (my nick is jackdaniel).

We are very happy that the number of McCLIM users is growing, as may be inferred from the number of questions on the IRC channel, bug reports and pull requests.

Sincerely yours,
Daniel Kochmański

[1] Simplified Lisp Interface Manager.

Vsevolod Dyomkin: Pretty-Printing Trees

· 42 days ago
  (or The Ugliest Code I've Ever Written)

In the last couple of days, I was ill and had to stay in bed, so I used this time to tidy up the work that has accumulated over the past year in cl-nlp. That was especially timely, considering the interest in using it expressed by some people I met at recent Lisp-related events.

I've even assembled a rough checklist of the things that need to be finished to get it to v.1.0 and beyond.

Besides, after finishing the basic cleaning, I returned to one of the programming tasks that has been racking my brain for a long time: tree pretty-printing. In NLP, we constantly have to deal with various kinds of parse trees, like constituency or dependency ones, but the problem is that they are not easily visualized. And good visualization plays, at least for me, a critical role in effective debugging, ideation and programming. It's an essential part of the solid interactive experience that is one of the fundamental traits of Lisp development.

For instance, a constituency tree is usually presented as a Lisp list. Here's an infamous example from the Penn Treebank:


(S
  (NP-SBJ
    (NP (NNP Pierre) (NNP Vinken) )
    (, ,)
    (ADJP
      (NP (CD 61) (NNS years) )
      (JJ old) )
    (, ,) )
  (VP (MD will)
    (VP (VB join)
      (NP (DT the) (NN board) )
      (PP-CLR (IN as)
        (NP (DT a) (JJ nonexecutive) (NN director) ))
      (NP-TMP (NNP Nov.) (CD 29) )))
  (. .) )

A dependency tree has several representations, none of which is really intuitive to grasp. This is the Stanford format:


amod(ideas-2, Colorless-0)
amod(ideas-2, green-1)
nsubj(sleep-3, ideas-2)
root(sleep-3, sleep-3)
advmod(sleep-3, furiously-4)
punct(sleep-3, .-5)

And here's the CoNLL one:


0 Colorless _ _ ADJ 2
1 green _ _ ADJ 2
2 ideas _ _ NOUN 3
3 sleep _ _ NOUN 3
4 furiously _ _ ADV 3
5 . _ _ PUNCT 3

Also, Google's Parsey McParseface offers another - presumably, more visual - representation (using the asciitree lib). Still, it is not good enough, as it messes with the order of words in a sentence.


Input: Bob brought the pizza to Alice .
Parse:
brought VBD ROOT
+-- Bob NNP nsubj
+-- pizza NN dobj
| +-- the DT det
+-- to IN prep
| +-- Alice NNP pobj
+-- . . punct

As you see, dependency trees are not trivial to visualize (or pretty-print) in ASCII. The authors of Spacy creatively approached solving this problem by using CSS in their displaCy tool:

However, it seems like overkill to bring a browser with you for such a small task. And it's also not very scalable:

I, in fact, was always interested in creative ways of text-based visualization. So, I thought of ways to represent parse trees in ASCII.

With constituency ones, it's rather trivial:


> (pprint-tree '(TOP (S (NP (NN <This:0 0..4>))
                        (VP (VBZ <is:1 5..7>)
                            (NP (DT <a:2 8..9>)
                                (JJ <simple:3 10..16>)
                                (NN <test:4 17..21>)))
                        (|.| <.:5 22..23>))))
TOP
:
S
.-----------:---------.
: VP :
: .---------. :
NP : NP :
: : .----:-----. :
NN VBZ DT JJ NN .
: : : : : :
This is a simple test .

The dependencies are trickier, but I managed to find a way to show them without compromising the sentence word order:


> (pprint-deps '(<Colorless:0 0..9> <green:1 10..15> <ideas:2 16..21> <sleep:3 22..27> <furiously:4 28..37> <.:5 38..39>)
               '(amod(ideas-2, Colorless-0) amod(ideas-2, green-1) nsubj(sleep-3, ideas-2) root(sleep-3, sleep-3) advmod(sleep-3, furiously-4) punct(sleep-3, .-5)))
Colorless green ideas sleep furiously .
^ ^ .^ .^. ^ ^
: `. amod .´: ::: : :
`..... amod .....´: ::: : :
`. nsubj .´:: : :
:`. advmod .´ :
:`.... punct .....´
root

And it looks pretty neat even for longer sentences:


We hold these truths to be self - evident , that all men are created equal , that they are endowed by their Creator with certain unalienable Rights , that among these are Life , Liberty and the pursuit of Happiness .
^ .^. ^ .^ ^ .^. ^ ^ .^ ^ ^ ^ .^ ^ .^. ^ ^ ^ ^ ^ .^. ^. ^ .^. ^. ^ ^ .^ ^ ^ ^. ^ .^. ^. ^ ^. ^ ^ .^. ^. ^ ^
`. nsubj .´:: `. det .´: `. aux .´:: : `. punct .´: : : `. det .´: `. auxpass .´:: : : : : `. auxpass .´:: :: `. poss .´:: :: : `. amod .´: : : :`. pobj .´ ::: :`. punct .´ :`. cc .´ `. det .´:: :`. pobj .´ :
:`... dobj ...´ :: `. npadvmod .´: : : : ::`. advcl .´ : : : ::: :: :: :: `...... amod ......´: : : : ::: :: :: :`. prep .´ :
:: :`..... acomp .....´ : : `.. nsubjpass ..´:: : : : ::: :: :: :`......... pobj .........´ : : : ::: :: :`...... conj .......´ :
:`......... advcl ..........´ : : ::`... punct ...´ : : ::: :: :`. prep .´ : : : ::: :`.... conj ....´ :
:`..................... punct ......................´ `........... mark ...........´:: : : ::: :`... pobj ....´ : : : ::`. attr .´ :
:: :: : : ::`. agent .´ : : `... prep ....´: :
:: :: : `.. nsubjpass ..´:: : `...... mark ......´: :
:: :: `....... mark .......´:: : : :
:: :: :`............................ punct .............................´ : :
:: :: :`........................................ advcl .........................................´ :
:: :`................ advcl ................´ :
:`...................................... ccomp .......................................´ :
:`............................................................................................................................................ punct .............................................................................................................................................´
root

However, writing the visualization code was one of the most intimidating programming tasks I've ever encountered. One explanation is that trees are most naturally processed in depth-first order, top-down, while the visualization requires a bottom-up BFS approach. Another may be that pixel-perfect (or, in this case, character-perfect) display is always tedious. As far as I'm concerned, that is not a sufficient explanation, but I couldn't find any other. The ugliest part of this machinery is the deps->levels function, which prints the dependency relations in a layered fashion. The problem is to properly calculate the minimal space necessary to accommodate both tokens and dependency labels, and to account for the different cases where the token does or doesn't have outgoing dependency arcs. In theory it sounds pretty easy, but in practice it turned out to be a nightmare.

And all of this assumes projective trees (non-intersecting arcs), and doesn't know how to show, on one level, two arcs going from one token in two directions. Finally, I still couldn't align the two trees (constituency and dependency) above and under the sentence. Here's the target:


TOP
:
S
.----------------:--------------.
: VP :
: .---------. :
NP : NP :
: : .----:---------. :
NN VBZ DT JJ NN .
This is a simple test .
^ .^. ^ ^ .^ ^
`. nsubj .´:: : `. amod .´: :
:: `.... det ....´: :
:`..... dobj .....´ :
:`...... punct ......´
root

and this is how it prints for now (one more challenge was to transfer additional offsets from dependencies into the constituency tree):


TOP
:
S
.-----------:---------.
: VP :
: .---------. :
NP : NP :
: : .----:-----. :
NN VBZ DT JJ NN .
This is a simple test .
^ .^. ^ ^ .^ ^
`. nsubj .´:: : `. amod .´: :
:: `.... det ....´: :
:`..... dobj .....´ :
:`...... punct ......´
root

Well, the good news is that it is usable, but it still needs more work to be feature-complete. I wonder what I was doing wrong: maybe someone can come up with a clean and simple implementation of this functionality (in any language)? I consider it a great coding challenge, although it may require a week of your free time and a bunch of dead neurons to accomplish. But if you're willing to take it, I'd be glad to see the results... :D

Eugene Zaikonnikov: About Time

· 45 days ago

This week I put together a small NTP client. To keep dependencies to a minimum and to avoid forcing a permanently running process onto users, it does not attempt to adjust the system RTC clock, compensate for jitter, or evaluate time server quality. As I see it, much of that behaviour is easy enough to add via mixins with the defined NTP class.

An NTP timestamp is two 32-bit values: seconds and fraction of a second. NTP conveniently counts seconds from Jan 1 1900, just like universal time in Common Lisp. There is however no portable Common Lisp representation for fractions of a second. Thus the client sticks to using the NTP-formatted fraction for that. It is far more precision than any existing CL implementation offers in INTERNAL-TIME-UNITS-PER-SECOND, but it makes the value comparable across implementations. The new GET-ADJUSTED-UNIVERSAL-TIME method then returns a pair of values: universal time and the NTP fraction. The fraction can be converted to the implementation's internal time scale with FRACTION-TO-INTERNAL.

Internally we define no special arithmetic on NTP timestamps but provide two conversion macros for single-integer space. BIG-TIME converts an NTP stamp into a large integer. We then do all calculations in that domain, and convert back to an NTP timestamp using SMALL-TIME when it's time to send it over the wire. An NTP instance stores adjusted time as an offset from internal real time. The offset is roughly initialized with universal time and then adjusted after each server request.
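
The conversions are straightforward bit-twiddling; schematically (my own sketch of the idea, written as functions rather than the macros the library provides):

(defun big-time (seconds fraction)
  "Merge an NTP SECONDS/FRACTION pair into a single integer."
  (logior (ash seconds 32) fraction))

(defun small-time (big)
  "Split a merged integer back into (VALUES SECONDS FRACTION)."
  (values (ash big -32) (ldb (byte 32 0) big)))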

Nicolas Hafner: Radiance Release - Confession 73

· 52 days ago

[header image]
Right now I'm at Brussels Airport, waiting for my departing flight back to Zürich. The 10th European Lisp Symposium is over, and I got to give my first "real" talk at it. It was, as you might guess, about Radiance and some of the core concepts behind the project. With that, I think it is finally time to announce Radiance's proper release into the wild!

It's been a long time coming, starting back when I made my first steps in Lisp in June of 2013. Radiance's full story started much earlier though, back when I was still dabbling in PHP and Java for most of my projects. The changes that this project has undergone since then are massive, to the point where hardly a single aspect of it now has any connection to its initial beginnings. One thing has always stayed the same, though: the intention to make Radiance a framework that eases deployment and housing of multiple, different web services within the same instance.

Sidestepping a long talk about the history of how everything came together, I'll instead try to say a bit about what Radiance's goals are right now, so that you may judge whether it might be a good fit for your next web project. First, it is important to mention that Radiance is not like Weblocks and similar projects that try to present new and interesting ways to develop web applications. Its strengths lie elsewhere. On the surface, it is very classic in approach: you write a program that has "handlers" to which the framework dispatches for each request. The handler then returns the data that should be sent back to the user. And that's it. There's no extra support for JavaScript/AJAX interaction, no continuations, no widgets, no presentations, not even a template system. All of those other choices are up to you to decide and settle on, depending on your needs.

So what is Radiance good for then? Why not just use Hunchentoot? Well, depending on your project's size and intentions, Hunchentoot may well be a viable alternative. What Radiance does offer over Hunchentoot, however, is a layer around the web server that allows you to exchange it later, similar to Clack. It also offers many more layers around various other features that are useful for developing web applications. Radiance is intended to be an adaptable intermediate layer between an application and the features it depends on. It provides these features in such a way that it is still possible for the administrator of an installation of your application to decide what the implementations of those features are, leaving them a choice to select their priorities.

Now, this probably sounds rather abstract and confusing, so let me try and illustrate what I mean a bit more clearly. Central to this aspect of Radiance is the standard-interfaces.lisp file and section 2 of the documentation. A quick look at them should make a few things clear: rather than implementing all sorts of features like a database layer, sessions, user accounts, authentication, and so forth, Radiance provides them through interface definitions. These definitions outline the signatures of functions, macros, and so forth that the interface provides. It does not, however, actually implement the features. Your application can make use of these features by depending on the interfaces it needs, without having to specify a particular underlying implementation. In the end, the administrator decides which implementing system to use for each interface, and Radiance takes care of loading the appropriate one whenever your application is loaded.
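
Schematically, an application declares its needs like this (a sketch loosely following Radiance's documented conventions; the system name and components are invented):

(asdf:defsystem #:my-app
  :defsystem-depends-on (:radiance)
  :class "radiance:virtual-module"
  :components ((:file "my-app"))
  ;; Depend on interfaces, not on particular implementations; the
  ;; administrator's configuration picks the implementing systems.
  :depends-on ((:interface :database)
               (:interface :auth)))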

I won't go into a discrete example here, as I've already described how to use interfaces and what they can do for you in increasing levels of detail in the conference paper, the documentation, and the tutorial. If you're still with me and do intend on jumping in or having a more in-depth look, I really recommend starting with the tutorial. It's lengthy and touches on pretty much every aspect involved in writing a fully-fledged web application from the ground up. It doesn't touch on every single piece Radiance gives to you, but it will show you where to look and how to proceed should you still need more.


Outside of the interfaces and pluggable features, Radiance also offers a powerful and flexible routing system. Unlike other frameworks that associate pages with tags or directly hard-code the URL into the source code, Radiance uses an "internal URL representation" and an "external URL representation". The former is what your application and templates speak in, and the latter is what the user and web server deal in. The translation between the two is handled by regular functions, called routes, which rewrite and transform URLs in order to achieve the URL namespace setup that is desired on a particular installation. This allows the administrator quick and easy control over the setup of an application.

Finally, Radiance has a built in configuration and file management system that is responsible for keeping all the run-time data of an installation in one place that is easy to track. It offers you easy access to application parameters that are configurable by the administrator, and bundles everything together in such a way that multiple configuration sets can be kept on the same machine easily, thus allowing you to switch between different setups quickly. For example, you might have a "development" and "production" setup on your machine that pick different settings and implementations for the interfaces.

Aside from these three major features of interfaces, routing, and configuration, Radiance offers a variety of tools and related functionality to help you with your development. In the end it is all constructed and written with the intention of making your specific web application work in such a way that it can be deployed on systems other than your own without much further work, and that it can be deployed alongside other web applications within the same Radiance instance. This allows the applications to share data like users, database, sessions, and so forth, without tripping over each other's toes.

While it is of course possible to use Radiance for an application that is just for you and you alone, this is not where its strengths lie. It's intended for people that want to write web applications that can be redistributed and used by other people, and focuses on allowing someone to gather together the services they want and run them all together in a common environment, leaving them as much control over the system as possible without having to touch the applications' code.

Now, mind you, this does have a price associated with it. You will need to give in to certain conventions that Radiance follows and give up certain amounts of control and freedom in order to really make use of the features. That's how things go for everything. However, I dare say that the price is, in most cases, not very high. Most applications can be written with the tools the interfaces provide to you. And even if they do not, Radiance in no way forces you to use the interfaces. You can always break out of the layers and directly make use of whatever library you might need, at the cost of making your application share less with others in the system, or constraining the administrator further.

Because almost everything in Radiance is optional, it becomes rather hard to advertise it fully. I'm aware of the drawbacks and the strengths, so I can't in good conscience just praise it for all of its aspects. The only thing I can say with certainty is that it's a system I've worked with for many years now, and a system I've already written a bunch of applications with. I've also been running these applications on public servers for a good many years, so it isn't without testing either. You're actually reading this on one of my services right now.

In the end, it's unfortunately still your choice which framework you're going to use for your next project. I can't make that choice for you. In the very least, though, I can now recommend Radiance without having to list a bunch of "but"s. Radiance is documented, it works, and it is now, finally, officially released!

[footer image]

I'd like to thank everyone who helped me along the way, by reading through my documentation and tutorial, testing things out, and giving me advice all around the project. I'd particularly like to thank Janne Pakarinen, Joram Schrijver, and Till Ehrengruber for their invaluable input.

Marco Antoniotti: CLAST: a Common Lisp AST and Code Walking library

· 55 days ago
I guess this is a good time to publicize the CLAST library I have been working on with Matteo Crespi. CLAST is a Common Lisp AST and Code Walking library that stands apart because it is geared at producing a source-level Abstract Syntax Tree (AST) of Common Lisp code.

Of course the usual issues with MACROEXPAND are all there, but I believe the choices made to handle it are quite sensible.

The library is still "in fieri", but most of the heavy lifting is done. The development branch is the most up-to-date one.

Cheers

Marco Antoniotti: ELS 2017: thank you!

· 55 days ago
Dear all,
just got back from ELS 2017 in Brussels, which went very well; thanks especially to Didier Verna, Irene Durand and Alberto Riva. It was a particularly good edition of the event.


Cheers

Quicklisp news: April 2017 Quicklisp dist update now available

· 56 days ago
New projects:
  • cl-cudd — A two-layered binding to the CUDD binary decision diagram library. See README.md for more details. — BSD Style (see LICENSE)
  • cl-marklogic — Common Lisp library for accessing MarkLogic Server. — LGPL3
  • cl-sandbox — Utility package for creating safe experimental environment. — MIT
  • esrap-peg — A wrapper around Esrap to allow generating Esrap grammars from PEG definitions — MIT
  • glsl-toolkit — A library to parse and modify OpenGL Shader Language (GLSL) source code — Artistic
  • horner — Inline polynomial evaluation using Horner's rule. — MIT
  • http-get-cache — Common Lisp library for caching HTTP GET responses — MIT
  • random-sample — Random sample of a sequence with uniform distribution. — MIT
Updated projects: 3d-matrices, 3d-vectors, agnostic-lizard, architecture.builder-protocol, architecture.service-provider, arnesi, asdf-dependency-grovel, asdf-finalizers, assoc-utils, beast, buildnode, cambl, caveman2-widgets, caveman2-widgets-bootstrap, cl+ssl, cl-ana, cl-arxiv-api, cl-ascii-art, cl-association-rules, cl-autorepo, cl-autowrap, cl-bson, cl-conspack, cl-containers, cl-csv, cl-cuda, cl-custom-hash-table, cl-digraph, cl-feedparser, cl-freeimage, cl-html5-parser, cl-influxdb, cl-jpeg, cl-llvm, cl-online-learning, cl-opsresearch, cl-pango, cl-protobufs, cl-python, cl-scripting, cl-sdl2, cl-secure-read, cl-str, cl-tcod, cl-video, cl4l, clack, clinch, clip, clml, closer-mop, coleslaw, croatoan, cserial-port, daemon, declt, defmacro-enhance, dexador, easy-audio, exit-hooks, exscribe, f2cl, fare-scripts, femlisp, focus, folio2, fxml, glisph, hermetic, hu.dwim.asdf, hu.dwim.perec, hu.dwim.presentation, hu.dwim.rdbms, hu.dwim.reiterate, hu.dwim.stefil, hu.dwim.util, hu.dwim.web-server, inlined-generic-function, jsonrpc, kenzo, legit, lifoo, lispbuilder, lmdb, lol-re, maiden, mcclim, media-types, metacopy, metatilities-base, mito, modularize-interfaces, monkeylib-utilities, moptilities, nibbles, omer-count, opticl, postmodern, prove, pzmq, qlot, retrospectiff, rtg-math, rutils, scriptl, serapeum, sketch, spinneret, staple, stumpwm, trivia, trivial-arguments, trivial-ldap, trivial-main-thread, websocket-driver, workout-timer, xhtmlambda, zenekindarl, zlib.

To get this update, use (ql:update-dist "quicklisp").

Enjoy!

Paul Khuong: Three-universal Hashing in Four Instructions

· 57 days ago

... with one caveat: the hash functions only generate one bit.

“hash.c”

#include <stdbool.h>
#include <stdint.h>

bool
bit_hash(uint64_t x, uint64_t table, uint64_t bit)
{
    /* table is a random uniform uint64_t, bit is a random bit. */
    return __builtin_parityll((x & table) ^ bit);
}

With hardware popcount, this compiles to something like the following.

“hash.s”

        andq    %rsi, %rdi # x & table
        xorl    %eax, %eax # work around a hardware perf bug in popcnt
        xorq    %rdi, %rdx # (x & table) ^ bit
        popcntq %rdx, %rax # get the popcount
        andl    $1, %eax   # isolate parity

This should raise a few questions:

  1. Why?
  2. Why does it work?
  3. Is it useful?

Someone with a passing familiarity with x86 would also ask why we use popcnt instead of checking the parity flag after xor. Unfortunately, the parity flag only considers the least significant byte of the result (:

One-bit hash functions: but why?

When implementing something like the hashing trick or count sketches (PDF), you need two sets of provably strong hash functions: one to pick the destination bucket, and another to decide whether to increment or decrement by the sketched value.

One-bit hash functions are ideal for the latter use case.

How does that even work?

The bitwise operations in bit_hash implement a degenerate form of tabulation hashing. It considers the 64-bit input value x as a vector of 64 bits, and associates two intermediate output values with each bit index. The naïve implementation would be something like the following.

“hash_slow.c”

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

bool
bit_hash_slow(uint64_t x, bool random_table[64][2])
{
    bool acc = 0;

    for (size_t i = 0; i < 64; i++, x >>= 1) {
        acc ^= random_table[i][x & 1];
    }

    return acc;
}

Of course, the representation of random_table is inefficient, and we should hand-roll a bitmap. However, the loop itself is a problem.

The trick is to notice that we can normalise the table so that the value for random_table[i][0] is always 0: in order to do so, we have to fix the initial value for acc to a random bit. That initial value is the hash value for 0, and the values in random_table[i][1] now encode whether a non-zero bit i in x flips the hash value or leaves it as is.

The table argument for bit_hash is simply the 64 bits in random_table[i][1], and bit is the hash value for 0. If bit i in table is 0, bit i is irrelevant to the hash. If bit i in table is 1, the hash flips when bit i in x is 1. Finally, the parity counts how many times the hash was flipped.
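
Since this is Planet Lisp, here is the same trick in Common Lisp (a sketch; LOGCOUNT plays the role of popcount, and the low bit of the count is the parity):

(defun bit-hash (x table bit)
  "Hash the 64-bit integer X to a single bit, as in the C version:
TABLE is a random uniform 64-bit integer, BIT a random bit."
  (ldb (byte 1 0) (logcount (logxor (logand x table) bit))))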

Is it even useful?

I don’t think so. Whenever we need a hash bit, we also want a hash bucket; we might as well steal one bit from the latter wider hash. Worse, we usually want a few such bucket/bit pairs, so we could also compute a wider hash and carve out individual bits.

I only thought about this trick because I've been reading a few empirical evaluations of sketching techniques, and a few authors find it normal that computing a hash bit doubles the CPU time spent on hashing. It seems to me the right way to do this is to map columns/features to not-too-small integers (e.g., universal hashing to [0, n^2) if we have n features), and apply strong hashing to these integers. Hashing machine integers is fast, and we can always split strong hashes into multiple values.

In the end, this family of one-bit hash functions seems like a good solution to a problem no one should ever have. But it’s still a cute trick!

Christophe Rhodes: karatsuba multiplication in sbcl

· 57 days ago

Possible alternative title: I'm on a train!

In particular, I'm on the train heading to the European Lisp Symposium, and for the first time since December I don't have a critically urgent piece of teaching to construct. (For the last term, I've been under the cosh of attempting to teach Algorithms & Data Structures to a class, having never learnt Algorithms & Data Structures formally, let alone properly, myself).

I have been giving the students some structure to help them in their learning by constructing multiple-choice quizzes. "But multiple-choice quizzes are easy!", I hear you cry! Well, they might be in general, but these quizzes were designed to probe some understanding, and to help students recognize what they did not know; of the ten quizzes I ran this term, several had a period where the modal mark in the quiz was zero. (The students were allowed to take the quizzes more than once; the idea behind that being that they can learn from their mistakes and improve their score; the score is correlated with a mark in some minor way to act as a tiny carrot-bite of motivation; this means I have to write lots of questions so that multiple attempts aren't merely an exercise in memory or screenshot navigation).

The last time I was on a train, a few weeks ago, I was travelling to and from Warwick to sing Haydn's Nelson Mass ("Missa in angustiis"; troubled times, indeed), and had to write a quiz on numbers. I'd already decided that I would show the students the clever Karatsuba trick for big integer multiplication, and I wanted to write some questions to see if they'd understood it, or at least could apply some of the results of it.

Standard multiplication as learnt in school is, fairly clearly, an Ω(d^2) algorithm. My children learn to multiply using the "grid method", where: the digit values of the two numbers are written out along the edges of a table; the cells of the table are the products of the digit values; and the result is found by adding the cells together. Something like:

       400     20      7
300 120000   6000   2100
 90  36000   1800    630
  3   1200     60     21

for 427×393 = 167811. Similar diagrammatic ways of multiplying (like [link]) duplicate this table structure, and traditional long multiplication (or even the online multiplication trick where you can basically do everything in your head) likewise multiplies each digit of one of the multiplicands with each digit of the other.

But wait! This is an Algorithms & Data Structures class, so there must be some recursive way of decomposing the problem into smaller problems; divide-and-conquer is classic fodder for Computer Scientists. So, write a×b as (a_hi×2^k + a_lo)×(b_hi×2^k + b_lo), multiply out the brackets, and hi and lo and behold we have a_hi×b_hi×2^2k + (a_hi×b_lo + a_lo×b_hi)×2^k + a_lo×b_lo, and we've turned our big multiplication into four multiplications of half the size, with some additional simpler work to combine the results, and big-O dear! that's still quadratic in the number of digits to multiply. Surely there is a better way?

Yes there is. Karatsuba multiplication is a better (asymptotically at least) divide-and-conquer algorithm. It gets its efficiency from a clever observation[1]: that middle term in the expansion is expensive, and in fact we can compute it more cheaply. We have to calculate c_hi = a_hi×b_hi and c_lo = a_lo×b_lo, there's no getting around that, but to get the cross term we can compute (a_hi+a_lo)×(b_hi+b_lo) and subtract off c_hi and c_lo: and that's then one multiply for the result of two. With that trick, Karatsuba multiplication lets us turn our big multiplication into three multiplications of half the size, and that eventually boils down to an algorithm with complexity Θ(d^1.58) or thereabouts. Hooray!
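(For the record, the recurrences behind those complexities: the naive four-way split gives T(d) = 4·T(d/2) + Θ(d), which solves to Θ(d^2), while Karatsuba's T(d) = 3·T(d/2) + Θ(d) solves to Θ(d^log2(3)) = Θ(d^1.585).)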

Some of the questions I was writing for the quiz were for the students to compute the complexity of variants of Karatsuba's trick: generalize the trick to cross-terms when the numbers are divided into thirds rather than halves, or quarters, and so on. You can multiply numbers by doing six multiplies of one-third the size, or ten multiplies of one-quarter the size, or... wait a minute! Those generalizations of Karatsuba's trick are worse, not better! That was completely counter to my intuition that a generalization of Karatsuba's trick should be asymptotically better, and that there was probably some sense in which the limit of doing divide-bigly-and-conquer-muchly would turn into an equivalent of FFT-based multiplication with Θ(d×log(d)) complexity. But this generalization was pulling me back towards Θ(d^2)! What was I doing wrong?

Well what I was doing wrong was applying the wrong generalization. I don't feel too much shame; it appears that Karatsuba did the same. If you're Toom or Cook, you probably see straight away that the right generalization is not to be clever about how to calculate cross terms, but to be clever about how to multiply polynomials: treat the divided numbers as polynomials in 2^k, and use the fact that you need one more value than the polynomial's degree to determine all its coefficients. This gets you a product in five multiplications of one-third the size, or seven multiplications of one-quarter, and this is much better and fits with my intuition as to what the result should be. (I decided that the students could do without being explicitly taught about all this).
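(In recurrence terms again: six multiplies of one-third the size gives Θ(d^log3(6)) ≈ Θ(d^1.63) and ten of one-quarter gives Θ(d^log4(10)) ≈ Θ(d^1.66), both worse than Karatsuba's exponent; the Toom-Cook counts of five multiplies of one-third and seven of one-quarter give Θ(d^log3(5)) ≈ Θ(d^1.465) and Θ(d^log4(7)) ≈ Θ(d^1.404), and the exponent keeps falling as the split gets finer.)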

Meanwhile, here I am on my train journey of relative freedom, and I thought it might be interesting to see whether and where there was any benefit to implementing Karatsuba multiplication in SBCL. (This isn't a pedagogy blog post, or an Algorithms & Data Structures blog post, after all; I'm on my way to a Lisp conference!). I had a go, and have a half-baked implementation: enough to give an idea, and a sketch of the basic shape follows below. It only works on positive bignums, and bails if the numbers aren't of similar enough sizes; on the other hand, it is substantially consier than it needs to be, and there's probably still some room for micro-optimization. The results?
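A minimal sketch of the shape of the thing, for nonnegative integers only (this is not the half-baked implementation itself, and the 512-bit cutoff is an arbitrary placeholder a real version would tune):

(defun karatsuba (a b)
  "Multiply nonnegative integers A and B using Karatsuba's trick."
  (if (or (< (integer-length a) 512)
          (< (integer-length b) 512))
      (* a b)   ; small enough: let the built-in multiply do the work
      (let* ((k (ceiling (max (integer-length a) (integer-length b)) 2))
             (a-hi (ash a (- k))) (a-lo (ldb (byte k 0) a))
             (b-hi (ash b (- k))) (b-lo (ldb (byte k 0) b))
             (c-hi (karatsuba a-hi b-hi))
             (c-lo (karatsuba a-lo b-lo))
             ;; one multiplication instead of two for the cross term
             (cross (- (karatsuba (+ a-hi a-lo) (+ b-hi b-lo))
                       c-hi c-lo)))
        (+ (ash c-hi (* 2 k)) (ash cross k) c-lo))))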

Linear model fit for built-in and Karatsuba multiply

The slopes on the built-in and Karatsuba multiply (according to the linear model fit) are 1.85±0.04 and 1.52±0.1 respectively, so even million-(binary)-digit bignums aren't clearly in the asymptotic régimes for the two multiplication methods: but at that size (and even at substantially smaller sizes, though not quite yet at Juho's 1000 bits) my simple Karatsuba implementation is clearly faster than the built-in multiply. I should mention that at least part of the reason that I have even heard of Karatsuba multiplication is Raymond Toy's implementation of it in CMUCL.

Does anyone out there use SBCL for million-digit multiplications?

Christophe Rhodesgoing to els2017

· 57 days ago

I'm going to the European Lisp Symposium this year! In fact, I have to catch a train in two and a half hours, so I should start thinking about packing.

I don't have a paper to present, or indeed any agenda beyond catching up with old and new friends and having a little bit of space to think about what might be fun to try. If you're there and you want to make suggestions, or just tell me what you're up to: I'd love to hear. (And if you're not there: bad luck, and see you another time!)

Eugene ZaikonnikovSifting through ImageNet dataset in Lisp

· 60 days ago

...there is no shortage of rainy evenings in the rain capital of the world, so I used a few of them to put together this small application that I called (perhaps overly ambitiously) cl-imagenet. It uses a bunch of Lisp libraries: opticl and cl-jpeg for image processing, cxml for extracting bounding boxes from the annotations, cl-fad for filesystem operations, and trivial-channels in combination with clx for streaming to the display.

The code tries to detect how many cores the host machine has, then creates the corresponding number of worker units. A list of the ImageNet subunits in the workset is built up, and these are then assigned to the workunits. Each workunit fetches an annotation file, extracts the bounding boxes and the image file reference, decodes the corresponding JPEG file, handles processing with OptiCL, and sends the result via a shared channel to the display thread. It is impressive how compact the code can be when leveraging random bits of the ecosystem available through Quicklisp.

In this setup only the luminance component of the JPEG is extracted and then thresholded against medium gray. The video was filmed on an old quad-core i5-2500. On my 8-core i7-6700 box with visualisation off, it averages some 200K processed images per hour.

Tested lightly with SBCL on Linux. One known trouble spot is the message channel gradually eating up memory with visualization on, but as that path is only used in tests, fixing it hasn't been pressing yet.

Zach BeanePlanet Lisp Archives - March, 2007

· 62 days ago
Planet Lisp Archives - March, 2007:

Ten years ago, headlines from Planet Lisp. (Many links are broken, but fewer than I expected. And I still have the article text saved in my Planet Lisp database...)

Eugene ZaikonnikovSymbolics-inspired WASD keyboard layout

· 62 days ago

I'm enjoying my customized mechanical WASD keyboard since last summer:

Symbolics tribute keyboard

Its signage and cap colors are inspired by the early Lisp machine 'space cadet' keyboards. Certainly not anywhere close to a functional match, but I do like the looks. And unlike the original, it comes in a short, messy-desk-space-preserving 87-key version.

The inscriptions are changed to oldskool: Alt is META, AltGr is GREEK, Caps is HYPER, and left-Win is SUPER. SUPER is bound to the X11 layout switch in my system, GREEK is just the normal European 2nd-level shift, and HYPER is remapped to... well, actual Hyper. This adds a bunch of new keybinding space in Emacs:

(when (eq window-system 'x)
  ;; Detach Hyper_L from Mod4 and give it a modifier of its own (Mod3).
  (shell-command "xmodmap -e 'remove Mod4 = Hyper_L' -e 'add Mod3 = Hyper_L'")
  ;; Turn Caps Lock (keycode 66) into another Hyper_L.
  (shell-command "xmodmap -e 'clear Lock' -e 'keycode 66 = Hyper_L'"))

(global-set-key (kbd "H-k") 'kill-this-buffer)
(global-set-key (kbd "H-f") 'find-file)
(global-set-key (kbd "H-h") 'hyperspec-lookup)
(global-set-key (kbd "H-3") 'split-window-right)
(global-set-key (kbd "H-2") 'split-window-below)
(global-set-key (kbd "H-1") 'delete-other-windows)
(global-set-key (kbd "H-0") 'delete-window)
(global-set-key (kbd "H-s") 'save-buffer)
(global-set-key (kbd "H-a") 'mark-whole-buffer)
(global-set-key (kbd "H-r") 'raise-sexp)
(global-set-key (kbd "H-c") 'slime-selector)

Norwegian-based layout is available here.

François-René RideauWhy I haven't jumped ship from Common Lisp to Racket (just yet)

· 65 days ago

Matthias Felleisen jested, "Why are you still using CL when Scrbl/Racket is so much better :-)"? My response was as follows:

Dear Matthias,

you are right: Racket is so much better in so many dimensions. I use Lisp because I just can't bear programming in a language without proper syntactic abstraction, and that is a dimension where Racket is far ahead of Common Lisp (CL), which sadly also remains far ahead of the rest of the competition. Racket also has managed to grow a remarkable way to mix typed and untyped program fragments, which sets it ahead of most. But I am under the impression that there are still many dimensions in which Racket lags behind other languages and Common Lisp (CL) in particular.

  1. The Common Lisp Object System (CLOS) has multiple-inheritance, multi-methods, method combinations, introspection and extensibility via the MOP, generic functions that work on builtin classes, support for dynamic instance class change (change-class, update-instance-for-changed-class) and class redefinition (defclass, update-instance-for-redefined-class), a semi-decent story for combining parametric polymorphism and ad hoc polymorphism (my own lisp-interface-library), etc. Racket seems to still be playing catch-up with respect to ad hoc polymorphism, and is lacking a set of good data structure libraries that take advantage of both functional and object-oriented programming (a good target is Scala's scalaz and its rival cats).
  2. While the ubiquity of global side-effects in CL is very bad, the facts that all objects that matter are addressable by a path from some global namespace and that live redefinition is actively supported make debugging and maintaining long-lived systems with in-image persistent data more doable (see again CLOS's update-instance-for-redefined-class). This is in contrast with the Racket IDE, which drops live data when you recompile the code, which is fine for student exercises, but probably wrong for live systems. CL is one of the few languages that takes long-term data seriously (though not quite as seriously as Erlang).
  3. Libraries. CL seems to have many more libraries than Racket, and though the quality varies, these libraries seem to often have more feature coverage and more focus on production quality. From a cursory look, Racket libraries seem to often stop at "good enough for demo". An effort on curating libraries, homogenizing namespaces, etc., could also help Racket (I consider CL rather bad in this respect, yet Racket seems worse). My recent experience with acmart, my first maintained Racket library, makes me think that writing libraries is even higher overhead in Racket than in CL, which is already mediocre.
  4. Speedwise, SBCL still produces code that runs noticeably faster than Racket (as long as you don't need full delimited control, which would require a much slower CL-to-CL compiler). This difference may be reduced as Racket adopts the notoriously fast Chez Scheme as a backend (or not). Actually, the announcement of the new Racket backend really makes me eager to jump ship.
  5. As for startup latency, Common Lisp is also pretty good with its saved images (they start in tens of milliseconds on my laptop), making it practical to write trivial utilities for interactive use from the shell command-line with an "instantaneous" feel. Racket takes hundreds of milliseconds at startup which puts it (barely) in the "noticeable delay" category.

All these reasons, in addition to inertia (and a non-negligible code-base and mind-base), have made me stick to CL — for now. I think Racket is the future of Lisp (at least for me), I just haven't jumped ship right yet. If and when I do, I'll probably be working on some of these issues.

PS: Here are ways that Racket is indeed vastly superior to CL, that make me believe it's the future of Lisp:

  • First and foremost, Racket keeps evolving, and not just "above" the base language, but importantly *below*. This alone makes it vastly superior to CL (which has evolved tremendously "above" its base abstractions, but hasn't evolved "below", except for FFI purposes, in the last 20 years), and superior to most languages (which tend to not evolve much "above", and not at all "below" their base abstractions).
  • Racket is by far ahead of the pack in terms of syntactic abstraction.
  • Racket has a decent module system, including build and phase separation, and symbol selection and renaming.
  • Racket has typed modules, and a good interface between typed and untyped modules.
  • Racket has lots of great teaching material.
  • Racket has a one-stop shop for documentation, though it isn't always easy to navigate and often lacks examples. That still puts it far ahead of CL and a lot of languages.
  • Racket provides purity by default and a decent set of data structures.
  • Racket has many primitives for concurrency and virtualization.
  • Racket has standard primitives for laziness, pattern-matching, etc.
  • Racket has a standard, portable GUI.
  • Racket has a lively, healthy, user and developer community.

I probably forget more.

McCLIMProgress report #6

· 77 days ago

Dear Community,

I owe you apologies for skipping reports in the meantime. Since January I haven't been withdrawing money from our fundraiser, thanks to other paid work, which means we have budget for more bounties. Keep in mind that this doesn't mean any work has ceased on my side.

During this iteration I was preparing a paper for the upcoming European Lisp Symposium. Unfortunately it wasn't good enough to be accepted.

I'm still working on the tutorial and demo application mentioned in the paper proposal. Along the way, bugs and incompatibilities get fixed and pull requests are accepted. When questions arise on IRC or the mailing list, I try to answer them.

As a reminder: we have two active bounties at the moment:

Suggestions as to which other issues should have a bounty are appreciated and welcome.

If you have any questions, doubts or suggestions - please contact me either by email (daniel@turtleware.eu) or on IRC (my nick is jackdaniel).

Sincerely yours,
Daniel Kochmański

Nick Levineenlivend @ 2017-03-09T08:52:00

· 82 days ago
My domain lisp-book.org expires in two months (on 2017-05-07). I do not intend to renew it. The material which it serves will remain available via http://nicklevine.org/lisp-book/

If anyone wishes to inherit the domain from me and put it to better use, they should get in touch.

Zach BeaneBALisp - YouTube

· 82 days ago
BALisp - YouTube:

There are a bunch of new videos as of yesterday.

Nathan Froydnibbles and ironclad releases

· 83 days ago

I have released new versions of nibbles (0.13) and ironclad (0.34). They are available from their respective tags in their github repositories; I have not created tarballs for them. Ironclad, in particular, has many new features; please see the NEWS files for both packages for some of the changes.

This is also an appropriate time to announce that I will no longer be maintaining nibbles, ironclad, or any of my other Common Lisp packages. This has been the de facto state of affairs for several years now; we might as well make it official.

Quicklisp newsQuicklisp client update: bundling local-projects

· 84 days ago
The Quicklisp library bundle feature has been around for a while.  It creates a "bundle" of libraries from Quicklisp that can be used standalone, without loading or using Quicklisp at all.

Today, I published an updated client with a new bundle feature: if :include-local-projects is true, everything in Quicklisp's ql:*local-project-directories* is copied into the bundle and made available when the bundle is loaded.
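A hypothetical session might look like this (the system names and paths are made-up examples; ql:bundle-systems is the existing bundle entry point):

;; Build a standalone bundle that also includes everything in
;; ql:*local-project-directories*:
(ql:bundle-systems '("alexandria" "cl-ppcre")
                   :to "~/my-bundle/"
                   :include-local-projects t)

;; Later, in a fresh Lisp with no Quicklisp loaded:
(load "~/my-bundle/bundle.lisp")
(asdf:load-system "my-local-project")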

To get this update, use (ql:update-client). The new code will be loaded when Lisp is restarted.

This work was commissioned by Rigetti Computing.

If you have Quicklisp feature needs, feel free to get in touch with me!


LispjobsClojure Web Developer, Kira Systems, Toronto, Ontario, Canada

· 88 days ago

Full list: https://kirasystems.com/careers#op-162601-clojure-web-developer

We're looking for a developer to be a part of the core team for our Clojure and ClojureScript web application. Our stack includes reactive single-page web client code and a distributed backend to handle internal computations.

You will be responsible for designing, implementing, and testing functionality in our web application, collaborating closely with other developers to improve your and other's code, and working with our UI/UX team to make it slick. Most of all, we are looking for team members that are interested in learning and knowledge sharing.


Zach BeaneThe slime-selector

· 89 days ago

I use and love the slime-selector. It’s a very quick way to jump around between various SLIME buffers.

In my .emacs, C-c s is bound to slime-selector with

(global-set-key "\C-cs" 'slime-selector)

Then, in any buffer, I can use C-c s to pop up a minibuffer selector that takes a single additional key. Here’s the list of keys and what they do:

4:      Select in other window
?:      Selector help buffer.
c:      SLIME connections buffer.
d:      *sldb* buffer for the current connection.
e:      most recently visited emacs-lisp-mode buffer.
i:      *inferior-lisp* buffer.
l:      most recently visited lisp-mode buffer.
n:      Cycle to the next Lisp connection.
p:      Cycle to the previous Lisp connection.
q:      Abort.
r:      SLIME Read-Eval-Print-Loop.
s:      *slime-scratch* buffer.
t:      SLIME threads buffer.
v:      *slime-events* buffer.

I use r the most to instantly get into the repl. But I also use l to jump to the last Lisp file I was working on, or d to find a debugger buffer I might have buried.

It’s also quite helpful with c, to bring up a list of active Lisp connections. For Quicklisp work, I sometimes want to switch between four or five active SLIME connections, and C-c s c makes it quick and easy to choose.

Enjoy!

Didier VernaDeclt 2.1 "Jonathan Archer" is out

· 91 days ago

I'm happy to announce the release of Declt 2.1 "Jonathan Archer".

New in this release:

  • Handle recent change in SBCL's SB-INT:INFO API.
  • Handle list of contacts (strings) in ASDF systems author and maintainer slots.
  • Some backward-incompatible changes in the keyword arguments to the DECLT function.
  • More hyperlinks between systems and source files.
  • More robust collection of each system's packages (no more code walking).
  • More robust handling of unavailable ASDF components.
  • More robust naming of cross-references.

Find it at the usual place...

Quicklisp newsFebruary 2017 Quicklisp dist update now available

· 91 days ago
New projects:
  • agnostic-lizard — A portable code walker that makes a best effort to be correct in most cases — GPLv3+
  • bit-ops — Optimized bit-vector operations — LLGPL
  • cl-forest — Unofficial Common Lisp bindings to Rigetti Forest. — Apache 2.0
  • cl-httpsqs — A client lib for accessing HTTPSQS written in Common Lisp — MIT
  • cl-ledger — Double-entry accounting system. — BSD-3
  • cl-password-store — Password management for Common Lisp (web) applications. — LLGPL
  • cl-str — Modern, simple and consistent Common Lisp string manipulation library. — MIT
  • cl-video — Video decoder implemented in Common Lisp — BSD
  • cmu-infix — Mathematical infix notation for Common Lisp. — Custom (See LICENSE.txt)
  • cserial-port — library for serial communication inspired by lispworks' serial-port — MIT
  • cue-parser — A library for parsing audio CUE files — 2-clause BSD
  • freebsd-sysctl — Sysctl kernel control mechanism for common lisp — 2-clause BSD
  • jsonrpc — JSON-RPC 2.0 server/client implementation — BSD 2-Clause
  • lifoo — a fresh take on Forth in the spirit of Common Lisp — MIT
  • maiden — A modern and extensible chat bot framework. — Artistic
  • named-read-macros — Make read macros more Lispy. Attach read macros to symbols. — BSD-3
  • weblocks-examples — Example applications for Weblocks framework — Public Domain
Updated projectsa-cl-loggeralexandriaanaphoraarchitecture.service-providerbeastbinfixbit-smasherbt-semaphoreceplchirpcl-anacl-autowrapcl-charmscl-custom-hash-tablecl-decimalscl-embcl-enchantcl-general-accumulatorcl-gsscl-jpegcl-k8055cl-mpg123cl-online-learningcl-openglcl-out123cl-sdl2cl-slugcl-spidevcl4lclackclmlcloser-mopclxcroatoandbusdeedsdexadordjulaeasy-audioexit-hooksf2clfare-scriptsfast-httpfemlispfnglisphglsl-specgrovel-locallyhu.dwim.perechu.dwim.presentationhu.dwim.sdlhu.dwim.web-serverinquisitorjson-mopkenzol-systemlambda-fiddlelassliblmdblichat-protocollichat-serverliblichat-tcp-serverlichat-ws-serverlionchatlisp-invocationlivesupportlmdblocal-time-durationmaxpcmcclimmoiramontezumamore-conditionsopticlopticl-coreparser.iniqlotqt-libsrtg-mathrutilsserapeumslimespinneretstaplestructy-defclassstumpwmtalcltemporal-functionstimer-wheeltrivial-dump-coretrivial-escapestrivial-wsubiquitousvarjowebsocket-driverweftwoowookiewuweixml.locationzlib.

To get this update, use (ql:update-dist "quicklisp").

Enjoy!


For older items, see the Planet Lisp Archives.


Last updated: 2017-05-28 05:49