During this iteration I have worked on the Debugger (system clim-debugger) to bring it closer to a usable state. More work on the module is planned, with the final goal of integrating it with the Listener and having it as the default debugger for McCLIM applications. Suggestions on how to improve the interface, testing and help with coding are appreciated. Preliminary documentation has been written and put in the McCLIM manual draft.
I've started working on a library called Slim, the goal of which is to provide some of the CLIM interfaces in an easy-to-learn and easy-to-write manner (i.e., more context-dependent, less verbose names, unified abstractions, etc.). For now it is only a skeleton with barely four macros, but I'd love to hear suggestions about what it should contain and in what form. A sketch may be found in the source code under Libraries/Slim/. If you think it is a bad idea to have such a library shipped with McCLIM - let me know about that too!
The documentation was extended in some places. Building the info document also works now (a long-standing issue). An updated version of the manual draft may be found on the McCLIM website. Part of the documentation has been put in a separate document due to its size and its scope being independent from the rest of McCLIM.
Nisar Ahmad has solved one of the bounties related to menus and command tables. He also submitted documentation for the menu functionality, thereby earning $150. Congratulations!
Speaking of finances - all money is now accumulated solely for bounties and development tasks; none has been withdrawn by me since the beginning of 2017.
Currently we have $1226 at our disposal and active bounties worth $700. Declared monthly donations at the moment equal $297. Additionally, one-time contributions come in every month. That means that we can post two or three bounties a month without draining the current resources, or spawn a bunch of worthwhile tasks and keep going as money comes in. This is fantastic news. Thank you all for your support for the project!
At the moment we have five active bounties worth $700 which may be found here:
New bounties have a time limit assigned to them (six months) - thanks to that we are able to reclaim money from unresolved issues and propose it somewhere else (or repost the same bounty).
To improve the visibility of the issues which have bounties on them, I've added a label to the GitHub issue tracker.
Suggestions as to which other issues should have a bounty on them are appreciated and welcome. Please note that Bountysource has a functionality "Suggest an Issue" which may be found on the bounties page. If you feel that you may solve some problem, but there is no bounty on it, feel free to suggest it too.
If you have any questions, doubts or suggestions - please contact me either by email (email@example.com) or on IRC.
We are very happy that the number of McCLIM users is growing, which may be inferred from the number of questions on the IRC channel, bug reports and pull requests.
Slim stands for Simplified Lisp Interface Manager.
In the last couple of days, I was ill and had to stay in bed, so I've used this time also to tidy up the work that accumulated over the past year in cl-nlp. That was especially timely, considering the interest that was expressed in using it by some people who I've met at the recent Lisp-related events.
I've even assembled a rough checklist of the things that need to be finished to get it to v.1.0 and beyond.
Besides, after finishing the basic cleaning, I've returned to one of the programming tasks that has racked my brain for a long time: tree pretty-printing. In NLP, we constantly have to deal with various versions of parse trees, like the constituency or dependency ones, but the problem is that they are not easily visualized. And good visualization plays, at least for me, a critical role in effective debugging, ideation and programming. It's an essential part of the solid interactive experience that is one of the fundamental traits of Lisp development.
For instance, a constituency tree is usually presented as a Lisp list. Here's an infamous example from the Penn Treebank:
( (S
    (NP-SBJ
      (NP (NNP Pierre) (NNP Vinken) )
      (, ,)
      (ADJP
        (NP (CD 61) (NNS years) )
        (JJ old) )
      (, ,) )
    (VP (MD will)
      (VP (VB join)
        (NP (DT the) (NN board) )
        (PP-CLR (IN as)
          (NP (DT a) (JJ nonexecutive) (NN director) ))
        (NP-TMP (NNP Nov.) (CD 29) )))
    (. .) ))
A dependency tree has several representations, none of which is really intuitive to grasp. This is the Stanford format:
amod(ideas-2, Colorless-0)
amod(ideas-2, green-1)
nsubj(sleep-3, ideas-2)
advmod(sleep-3, furiously-4)
punct(sleep-3, .-5)
And here's the CoNLL one:
0 Colorless _ _ ADJ 2
1 green _ _ ADJ 2
2 ideas _ _ NOUN 3
3 sleep _ _ NOUN 3
4 furiously _ _ ADV 3
5 . _ _ PUNCT 3
Also, Google's Parsey McParseface offers another - presumably, more visual - representation (using the asciitree lib). Still, it is not good enough, as it messes with the order of words in a sentence.
Input: Bob brought the pizza to Alice .
brought VBD ROOT
+-- Bob NNP nsubj
+-- pizza NN dobj
| +-- the DT det
+-- to IN prep
| +-- Alice NNP pobj
+-- . . punct
As you see, dependency trees are not trivial to visualize (or pretty-print) in ASCII. The authors of Spacy creatively approached solving this problem by using CSS in their displaCy tool:
However, it seems like overkill to bring a browser with you for such a small task. And it's also not very scalable:
In fact, I have always been interested in creative ways of doing text-based visualization. So, I thought about ways to represent parse trees in ASCII.
With constituency ones, it's rather trivial:
> (pprint-tree '(TOP (S (NP (NN <This:0 0..4>))
                        (VP (VBZ <is:1 5..7>)
                            (NP (DT <a:2 8..9>) (JJ <simple:3 10..16>) (NN <test:4 17..21>)))
                        (|.| <.:5 22..23>))))
: VP :
: .---------. :
NP : NP :
: : .----:-----. :
NN VBZ DT JJ NN .
: : : : : :
This is a simple test .
The dependencies are trickier, but I managed to find a way to show them without compromising the sentence word order:
> (pprint-deps '(<This:0 0..4> <is:1 5..7> <a:2 8..9> <simple:3 10..16> <test:4 17..21> <.:5 22..23>)
'(root(_ROOT_-0, is-1) nsubj(is-1, This-0) dobj(is-1, test-4) det(test-4, a-2) amod(test-4, simple-3) punct(is-1, .-5)))
Colorless green ideas sleep furiously .
^ ^ .^ .^. ^ ^
: `. amod .´: ::: : :
`..... amod .....´: ::: : :
`. nsubj .´:: : :
:`. advmod .´ :
:`.... punct .....´
And it looks pretty neat even for longer sentences:
We hold these truths to be self - evident , that all men are created equal , that they are endowed by their Creator with certain unalienable Rights , that among these are Life , Liberty and the pursuit of Happiness .
^ .^. ^ .^ ^ .^. ^ ^ .^ ^ ^ ^ .^ ^ .^. ^ ^ ^ ^ ^ .^. ^. ^ .^. ^. ^ ^ .^ ^ ^ ^. ^ .^. ^. ^ ^. ^ ^ .^. ^. ^ ^
`. nsubj .´:: `. det .´: `. aux .´:: : `. punct .´: : : `. det .´: `. auxpass .´:: : : : : `. auxpass .´:: :: `. poss .´:: :: : `. amod .´: : : :`. pobj .´ ::: :`. punct .´ :`. cc .´ `. det .´:: :`. pobj .´ :
:`... dobj ...´ :: `. npadvmod .´: : : : ::`. advcl .´ : : : ::: :: :: :: `...... amod ......´: : : : ::: :: :: :`. prep .´ :
:: :`..... acomp .....´ : : `.. nsubjpass ..´:: : : : ::: :: :: :`......... pobj .........´ : : : ::: :: :`...... conj .......´ :
:`......... advcl ..........´ : : ::`... punct ...´ : : ::: :: :`. prep .´ : : : ::: :`.... conj ....´ :
:`..................... punct ......................´ `........... mark ...........´:: : : ::: :`... pobj ....´ : : : ::`. attr .´ :
:: :: : : ::`. agent .´ : : `... prep ....´: :
:: :: : `.. nsubjpass ..´:: : `...... mark ......´: :
:: :: `....... mark .......´:: : : :
:: :: :`............................ punct .............................´ : :
:: :: :`........................................ advcl .........................................´ :
:: :`................ advcl ................´ :
:`...................................... ccomp .......................................´ :
:`............................................................................................................................................ punct .............................................................................................................................................´
However, writing the visualization code was one of the most intimidating programming tasks I've ever encountered. One explanation is that trees are most naturally processed top-down in depth-first order, while the visualization requires a bottom-up BFS approach. Another may be that pixel-perfect (or, in this case, character-perfect) display is always tedious. As far as I'm concerned, this is not a sufficient explanation, but I couldn't find any other. The ugliest part of this machinery is the deps->levels function that prints the dependency relations in a layered fashion. The problem is to properly calculate the minimal space necessary to accommodate both tokens and dependency labels, and to account for the different cases when a token does or doesn't have outgoing dependency arcs. In theory it sounds pretty easy, but in practice it turned out to be a nightmare.
And all of this assumes projective trees (non-intersecting arcs), and the code doesn't know how to show, on one level, two arcs going from one token in two directions. Finally, I still couldn't align the two trees (constituency and dependency) above and under the sentence. Here's the target:
: VP :
: .---------. :
NP : NP :
: : .----:---------. :
NN VBZ DT JJ NN .
This is a simple test .
^ .^. ^ ^ .^ ^
`. nsubj .´:: : `. amod .´: :
:: `.... det ....´: :
:`..... dobj .....´ :
:`...... punct ......´
and this is how it prints for now (one more challenge was to transfer additional offsets from dependencies into the constituency tree):
: VP :
: .---------. :
NP : NP :
: : .----:-----. :
NN VBZ DT JJ NN .
This is a simple test .
^ .^. ^ ^ .^ ^
`. nsubj .´:: : `. amod .´: :
:: `.... det ....´: :
:`..... dobj .....´ :
:`...... punct ......´
Well, the good news is that it is usable, but it still needs more work to be feature complete. I wonder what I was doing wrong: maybe someone can come up with a clean and simple implementation of this functionality (in any language)? I consider it a great coding challenge, although it may require a week of your free time and a bunch of dead neurons to accomplish. But if you're willing to take it, I'd be glad to see the results... :D
This week I put together a small NTP client. To keep dependencies at a minimum and to avoid forcing a permanently running process onto users, it does not attempt to adjust the system RTC clock, compensate for jitter or evaluate time server quality. As I see it, much of that behaviour is easy enough to add via mixins on the defined classes.
An NTP timestamp is two 32-bit values: seconds and fraction of a second. NTP conveniently counts seconds from Jan 1 1900, just like universal time in Common Lisp. There is, however, no portable Common Lisp representation for fractions of a second. Thus the client sticks to using the NTP-formatted fraction. It is more precision than any existing CL implementation offers in INTERNAL-TIME-UNITS-PER-SECOND, but it makes the value comparable across implementations. The new GET-ADJUSTED-UNIVERSAL-TIME method then returns a pair of values: universal time and the NTP fraction. The fraction can be converted to the implementation's internal time scale when needed.
Internally we define no special arithmetic on NTP timestamps, but provide two macros for converting to and from a single integer space. BIG-TIME converts an NTP stamp into a large integer. We then do all calculations in that domain, and convert back to an NTP timestamp using SMALL-TIME when it's time to send it over the wire. An NTP instance stores adjusted time as an offset from internal real time. The offset is roughly initialized with universal time and then adjusted after each server request.
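As a sketch of that single-integer domain (BIG-TIME and SMALL-TIME are the post's Lisp macros; the C names and struct below are my own assumptions), the conversions amount to shifts and masks:

```c
#include <stdint.h>

/* An NTP timestamp: 32-bit seconds since Jan 1 1900 plus a 32-bit
   fraction of a second (hypothetical struct name). */
typedef struct { uint32_t seconds, fraction; } ntp_time;

/* BIG-TIME analogue: pack an NTP stamp into one 64-bit integer so
   that ordinary arithmetic works on it. */
uint64_t big_time(ntp_time t) {
    return ((uint64_t)t.seconds << 32) | t.fraction;
}

/* SMALL-TIME analogue: split the integer back into wire format. */
ntp_time small_time(uint64_t big) {
    ntp_time t = { (uint32_t)(big >> 32), (uint32_t)big };
    return t;
}

/* Scale an NTP fraction to an implementation's internal time units
   (the role INTERNAL-TIME-UNITS-PER-SECOND plays in Common Lisp). */
uint32_t fraction_to_units(uint32_t fraction, uint32_t units_per_second) {
    return (uint32_t)(((uint64_t)fraction * units_per_second) >> 32);
}
```

Since the fraction occupies the low 32 bits, adding or subtracting two packed timestamps carries correctly between the fraction and seconds fields.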
Right now I'm at Brussels Airport, waiting for my departing flight back to Zürich. The 10th European Lisp Symposium is over, and I got to have my first "real" talk at it. It was, as you might guess, about Radiance and some of the core concepts behind the project. With that, I think it is finally time to announce Radiance's proper release into the wild!
It's been a long time coming, starting back when I made my first steps in Lisp in June of 2013. Radiance's full story started much earlier though, back when I was still dabbling in PHP and Java for most of my projects. The changes that this project has undergone since then are massive, to the point where hardly a single aspect of it now has any connection to its initial beginnings. One thing has always stayed the same, though: the intention to make Radiance a framework that eases deployment and housing of multiple, different web services within the same instance.
So what is Radiance good for then? Why not just use Hunchentoot? Well, depending on your project's size and intentions, Hunchentoot may well be a viable alternative. What Radiance does over Hunchentoot, however, is offer you a layer around the webserver, allowing you to exchange it later, similar to Clack. It also offers many more layers around various other features that are useful for developing web applications. Radiance is intended to be an adaptable intermediate layer between an application and the features that it depends on. It provides these features in such a way that it is still possible for the administrator of an installation of your application to decide what the implementations of those features are, leaving them a choice to select their priorities.
Now, this probably sounds rather abstract and confusing, so let me try and illustrate what I mean a bit more clearly. Central to this aspect of Radiance is the standard-interfaces.lisp file and section 2 of the documentation. A quick look at them should make a few things clear: rather than implementing all sorts of features like a database layer, sessions, user accounts, authentication, and so forth, Radiance provides them through interface definitions. These definitions outline the signatures of functions, macros, and so forth that the interface provides. It does not, however, actually implement the features. Your application can make use of these features by depending on the interfaces it needs, without having to specify a particular underlying implementation. In the end, the administrator decides which implementing system to use for each interface, and Radiance takes care of loading the appropriate one whenever your application is loaded.
I won't go into a discrete example here, as I've already described how to use interfaces and what they can do for you in increasing levels of detail in the conference paper, the documentation, and the tutorial. If you're still with me and do intend on jumping in or having a more in-depth look, I really recommend starting with the tutorial. It's lengthy and touches on pretty much every aspect involved in writing a fully-fledged web application from the ground up. It doesn't touch on every single piece Radiance gives to you, but it will show you where to look and how to proceed should you still need more.
Outside of the interfaces and pluggable features, Radiance also offers a powerful and flexible routing system. Unlike other frameworks that associate pages with tags or directly hard-code the URL into the source code, Radiance uses an "internal URL representation" and an "external URL representation". The former is what your application and templates speak in, and the latter is what the user and web server deal in. The translation between the two is handled by regular functions, called routes, which rewrite and transform URLs in order to achieve the URL namespace setup that is desired on a particular installation. This allows the administrator quick and easy control over the setup of an application.
Finally, Radiance has a built-in configuration and file management system that is responsible for keeping all the run-time data of an installation in one place that is easy to track. It offers you easy access to application parameters that are configurable by the administrator, and bundles everything together in such a way that multiple configuration sets can be kept on the same machine easily, thus allowing you to switch between different setups quickly. For example, you might have a "development" and a "production" setup on your machine that pick different settings and implementations for the interfaces.
Aside from these three major features of interfaces, routing, and configuration, Radiance offers a variety of tools and related functionality to help you with your development. In the end it is all constructed and written with the intention of making your specific web application work in such a way that it can be deployed on systems other than your own without much further work, and that it can be deployed alongside other web applications within the same Radiance instance. This allows the applications to share data like users, database, sessions, and so forth between each other, without tripping over each others' toes.
While it is of course possible to use Radiance for an application that is just for you and you alone, this is not where its strengths lie. It's intended for people that want to write web applications that can be redistributed and used by other people, and focuses on allowing someone to gather together the services they want and run them all together in a common environment, leaving them as much control over the system as possible without having to touch the applications' code.
Now, mind you, this does have a price associated with it. You will need to give in to certain conventions that Radiance follows and give up certain amounts of control and freedom in order to really make use of the features. That's how things go for everything. However, I dare say that the price is, in most cases, not very high. Most applications can be written with the tools the interfaces provide to you. And even if they do not, Radiance in no way forces you to use the interfaces. You can always break out of the layers and directly make use of whatever library you might need, at the cost of making your application share less with others in the system, or constraining the administrator further.
Because almost everything in Radiance is optional, it becomes rather hard to advertise it fully. I'm aware of the drawbacks and the strengths, so I can't in good conscience just praise it for all of its aspects. The only thing I can say with certainty is that it's a system I've worked with for many years now, and a system I've already written a bunch of applications with. I've also been running these applications on public servers for a good many years, so it isn't without testing either. You're actually reading this on one of my services right now.
In the end, it's unfortunately still your choice which framework you're going to use for your next project. I can't make that choice for you. In the very least, though, I can now recommend Radiance without having to list a bunch of "but"s. Radiance is documented, it works, and it is now, finally, officially released!
I'd like to thank everyone who helped me along the way, by reading through my documentation and tutorial, testing things out, and giving me advice all around the project. I'd particularly like to thank Janne Pakarinen, Joram Schrijver, and Till Ehrengruber for their invaluable input.
Of course the usual issues with MACROEXPAND are all there, but I believe the choices made to handle it are quite sensible.
The library is still in fieri, but most of the heavy lifting is done. The development branch is the most up-to-date one.
I just got back from ELS 2017 in Brussels, which went very well; thanks especially to Didier Verna, Irene Durand and Alberto Riva. It was a particularly good edition of the event.
- cl-cudd — A two-layered binding to the CUDD binary decision diagram library. See README.md for more details. — BSD Style (see LICENSE)
- cl-marklogic — Common Lisp library for accessing MarkLogic Server. — LGPL3
- cl-sandbox — Utility package for creating safe experimental environment. — MIT
- esrap-peg — A wrapper around Esrap to allow generating Esrap grammars from PEG definitions — MIT
- glsl-toolkit — A library to parse and modify OpenGL Shader Language (GLSL) source code — Artistic
- horner — Inline polynomial evaluation using Horner's rule. — MIT
- http-get-cache — Common Lisp library for caching HTTP GET responses — MIT
- random-sample — Random sample of a sequence with uniform distribution. — MIT
To get this update, use (ql:update-dist "quicklisp").
... with one caveat: the hash functions only generate one bit.
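The original hash.c listing did not survive here, but from the description below it can be reconstructed roughly as follows (the exact signature is an assumption):

```c
#include <stdint.h>
#include <stdbool.h>

/* One-bit hash: take the parity of popcount(x & table), i.e. XOR
   together the bits of x selected by table, then flip the result
   by a random initial bit.  table and bit are the random hash
   parameters. */
bool bit_hash(uint64_t x, uint64_t table, bool bit) {
    return bit ^ (__builtin_popcountll(x & table) & 1);
}
```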
With hardware popcount, this compiles down to just a few instructions: an and to mask, a popcnt, and a final xor with the initial bit.
This should raise a few questions:
- Why does it work?
- Is it useful?
Someone with a passing familiarity with x86 would also ask why we use popcnt instead of checking the parity flag. Unfortunately, the parity flag only considers the least significant byte of the result.
One-bit hash functions: but why?
When implementing something like the hashing trick or count sketches (PDF), you need two sets of provably strong hash functions: one to pick the destination bucket, and another to decide whether to increment or decrement by the sketched value.
One-bit hash functions are ideal for the latter use case.
How does that even work?
The bitwise operations in bit_hash implement a degenerate form of tabulation hashing. It considers the 64-bit input value x as a vector of 64 bits, and associates two intermediate output values with each index. The naïve implementation would be something like the following.
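A sketch of that naïve loop, reconstructed from the prose (the names random_table and acc come from the surrounding text; the signature is an assumption):

```c
#include <stdint.h>
#include <stdbool.h>

/* Degenerate tabulation hashing: for each of the 64 input bits,
   look up one of two random output bits and XOR them all together. */
bool bit_hash_naive(uint64_t x, bool random_table[64][2]) {
    bool acc = false;
    for (int i = 0; i < 64; i++)
        acc ^= random_table[i][(x >> i) & 1];
    return acc;
}
```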
Of course, the representation of
random_table is inefficient, and we
should hand-roll a bitmap. However, the loop itself is a problem.
The trick is to notice that we can normalise the table so that random_table[i][0] is always 0: in order to do so, we have to fix the initial value of acc to a random bit. That initial value is the hash value for 0, and the values in random_table[i][1] now encode whether a non-zero bit flips the hash value or leaves it as is.
The table argument for bit_hash is simply those 64 bits of random_table, and bit is the hash value for 0. If bit i of table is 0, bit i of x is irrelevant to the hash. If bit i of table is 1, the hash flips when bit i of x is 1. Finally, the parity of the popcount counts how many times the hash was flipped.
Is it even useful?
I don't think so. Whenever we need a hash bit, we also want a hash bucket; we might as well steal one bit from the latter, wider hash. Worse, we usually want a few such bucket/bit pairs, so we could also compute a wider hash and carve out individual bits.
I only thought about this trick because I've been reading a few empirical evaluations of sketching techniques, and a few authors find it normal that computing a hash bit doubles the CPU time spent on hashing. It seems to me the right way to do this is to map columns/features to not-too-small integers (e.g., universal hashing to [0, n^2) if we have n features), and apply strong hashing to these integers. Hashing machine integers is fast, and we can always split strong hashes into multiple values.
In the end, this family of one-bit hash functions seems like a good solution to a problem no one should ever have. But it’s still a cute trick!
Possible alternative title: I'm on a train!
In particular, I'm on the train heading to the European Lisp Symposium, and for the first time since December I don't have a critically urgent piece of teaching to construct. (For the last term, I've been under the cosh of attempting to teach Algorithms & Data Structures to a class, having never learnt Algorithms & Data Structures formally, let alone properly, myself.)
I have been giving the students some structure to help them in their learning by constructing multiple-choice quizzes. "But multiple-choice quizzes are easy!", I hear you cry! Well, they might be in general, but these quizzes were designed to probe understanding and to help students recognize what they did not know; of the ten quizzes I ran this term, several had a period where the modal mark in the quiz was zero. (The students were allowed to take the quizzes more than once, the idea being that they can learn from their mistakes and improve their score; the score is correlated to a mark in some minor way to act as a tiny carrot-bite of motivation; this means I have to write lots of questions so that multiple attempts aren't merely an exercise in memory or screenshot navigation.)
The last time I was on a train, a few weeks ago, I was travelling to and from Warwick to sing Haydn's Nelson Mass ("Missa in angustiis"; troubled times, indeed), and had to write a quiz on numbers. I'd already decided that I would show the students the clever Karatsuba trick for big integer multiplication, and I wanted to write some questions to see if they'd understood it, or at least could apply some of the results of it.
Standard multiplication as learnt in school is, fairly clearly, an Ω(d^2) algorithm in the number of digits d. My children learn to multiply using the "grid method", where: each digit value of the number is written out along the edges of a table; the cells of the table are the products of the digit values; and the result is found by adding the cells together. Something like:
        400     20      7
300  120000   6000   2100
 90   36000   1800    630
  3    1200     60     21
for 427×393 = 167811. Similar diagrammatic ways of multiplying (like [link]) duplicate this table structure, and traditional long multiplication or even the online multiplication trick where you can basically do everything in your head all multiply each digit of one of the multiplicands with each digit of the other.
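The grid method above is just a sum of pairwise digit-value products; a minimal sketch (the function name is mine):

```c
/* Grid-method multiplication: split each factor into digit values
   (427 = 400 + 20 + 7, 393 = 300 + 90 + 3) and add up the table of
   pairwise products. */
long grid_multiply(const long *a, int na, const long *b, int nb) {
    long sum = 0;
    for (int i = 0; i < na; i++)
        for (int j = 0; j < nb; j++)
            sum += a[i] * b[j];  /* one cell of the grid */
    return sum;
}
```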
But wait! This is an Algorithms & Data Structures class, so there must be some recursive way of decomposing the problem into smaller problems; divide-and-conquer is classic fodder for Computer Scientists. So, write a×b as (a_hi×2^k + a_lo)×(b_hi×2^k + b_lo), multiply out the brackets, and hi and lo and behold we have a_hi×b_hi×2^2k + (a_hi×b_lo + a_lo×b_hi)×2^k + a_lo×b_lo, and we've turned our big multiplication into four multiplications of half the size, with some additional simpler work to combine the results, and big-O dear! that's still quadratic in the number of digits to multiply. Surely there is a better way?
Yes there is. Karatsuba multiplication is a better (asymptotically at least) divide-and-conquer algorithm. It gets its efficiency from a clever observation: the middle term in the expansion is expensive, and in fact we can compute it more cheaply. We have to calculate c_hi = a_hi×b_hi and c_lo = a_lo×b_lo, there's no getting around that, but to get the cross term we can compute (a_hi+a_lo)×(b_hi+b_lo) and subtract off c_hi and c_lo: and that's then one multiply in place of two. With that trick, Karatsuba multiplication lets us turn our big multiplication into three multiplications of half the size, and that eventually boils down to an algorithm with complexity Θ(d^1.58) (that is, Θ(d^log2(3))) or thereabouts. Hooray!
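As a fixed-size illustration of the trick (real implementations recurse on bignum halves; this sketch just performs one Karatsuba step on 32-bit operands split at k = 16):

```c
#include <stdint.h>

/* One Karatsuba step: three half-width multiplications instead of
   four, plus shifts and additions to combine the results. */
uint64_t karatsuba32(uint32_t a, uint32_t b) {
    uint32_t a_hi = a >> 16, a_lo = a & 0xffff;
    uint32_t b_hi = b >> 16, b_lo = b & 0xffff;
    uint64_t c_hi = (uint64_t)a_hi * b_hi;
    uint64_t c_lo = (uint64_t)a_lo * b_lo;
    /* cross term: (a_hi + a_lo)(b_hi + b_lo) - c_hi - c_lo */
    uint64_t c_mid = (uint64_t)(a_hi + a_lo) * (b_hi + b_lo) - c_hi - c_lo;
    return (c_hi << 32) + (c_mid << 16) + c_lo;
}
```

Recursing into each of the three sub-multiplications instead of using the machine multiply is what yields the Θ(d^log2(3)) running time.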
Some of the questions I was writing for the quiz asked the students to compute the complexity of variants of Karatsuba's trick: generalize the trick to cross-terms when the numbers are divided into thirds rather than halves, or quarters, and so on. You can multiply numbers by doing six multiplies of one-third the size, or ten multiplies of one-quarter the size, or... wait a minute! Those generalizations of Karatsuba's trick are worse, not better! That was completely counter to my intuition that a generalization of Karatsuba's trick should be asymptotically better, and that there was probably some sense in which the limit of doing divide-bigly-and-conquer-muchly would turn into an equivalent of FFT-based multiplication with Θ(d×log(d)) complexity. But this generalization was pulling me back towards Θ(d^2)! What was I doing wrong?
Well, what I was doing wrong was applying the wrong generalization. I don't feel too much shame; it appears that Karatsuba did the same. If you're Toom or Cook, you probably see straight away that the right generalization is not to be clever about how to calculate cross terms, but to be clever about how to multiply polynomials: treat the divided numbers as polynomials in 2^k, and use the fact that you need one more value than the polynomial's degree to determine all its coefficients. This gets you a product in five multiplications of one-third the size, or seven multiplications of one-quarter, which is much better and fits with my intuition as to what the result should be. (I decided that the students could do without being explicitly taught about all this.)
Meanwhile, here I am on my train journey of relative freedom, and I thought it might be interesting to see whether and where there was any benefit to implement Karatsuba multiplication in SBCL. (This isn't a pedagogy blog post, or an Algorithms & Data Structures blog post, after all; I'm on my way to a Lisp conference!). I had a go, and have a half-baked implementation: enough to give an idea. It only works on positive bignums, and bails if the numbers aren't of similar enough sizes; on the other hand, it is substantially consier than it needs to be, and there's probably still some room for micro-optimization. The results?
The slopes on the built-in and Karatsuba multiply (according to the linear model fit) are 1.85±0.04 and 1.52±0.1 respectively, so even million-(binary)-digit bignums aren't clearly in the asymptotic régimes for the two multiplication methods: but at that size (and even at substantially smaller sizes, though not quite yet at Juho's 1000 bits) my simple Karatsuba implementation is clearly faster than the built-in multiply. I should mention that at least part of the reason that I have even heard of Karatsuba multiplication is Raymond Toy's implementation of it in CMUCL.
Does anyone out there use SBCL for million-digit multiplications?
I'm going to the European Lisp Symposium this year! In fact, I have to catch a train in two and a half hours, so I should start thinking about packing.
I don't have a paper to present, or indeed any agenda beyond catching up with old and new friends and having a little bit of space to think about what might be fun to try. If you're there and you want to make suggestions, or just tell me what you're up to: I'd love to hear. (And if you're not there: bad luck, and see you another time!)
...there is no shortage of rainy evenings in the rain capital of the world, so I used a few of them to put together this small application that I called (perhaps overly ambitiously) cl-imagenet. It uses a bunch of Lisp libraries: opticl and cl-jpeg for image processing, cxml for extracting bounding boxes from the annotations, cl-fad for filesystem operations, and trivial-channels in combination with clx for streaming to display.
The code tries to detect how many cores the host machine has, then creates the corresponding number of worker units. A work list of ImageNet subunits is built up, and its entries are assigned to the workunits. Each workunit fetches an annotation file, extracts the bounding boxes and the image file reference, decodes the corresponding JPEG file, handles processing with OptiCL and sends the result via a shared channel to the display thread. It is impressive how compact the code can be when leveraging random bits of the ecosystem available through Quicklisp.
In this setup only the luminance component of the JPEG is extracted and then thresholded against medium gray. The video was filmed on an old quad-core i5-2500. On my 8-core i7-6700 box with visualisation off, it averages some 200K processed images per hour.
Tested lightly with SBCL on Linux. One known problem is that the message channel gradually eats up memory with visualization on, but as it's only used in tests it hasn't been a pressing need to fix yet.
Ten years ago, headlines from Planet Lisp. (Many links are broken, but fewer than I expected. And I still have the article text saved in my Planet Lisp database...)
I've been enjoying my customized mechanical WASD keyboard since last summer:
Its legends and cap colors are inspired by the early Lisp machine 'space cadet' keyboards. It's not anywhere near a close functional match, certainly, but I do like the looks. And unlike the original it comes in a short, messy-desk-space-preserving 87-key version.
The inscriptions are changed to oldskool: Alt is META, AltGr is GREEK, Caps is HYPER and left-Win is SUPER. SUPER is bound to X11 layout switch in my system, GREEK is just the normal European 2nd-level shift, and HYPER is remapped to... well, actual Hyper. This adds a bunch of new keybinding space in Emacs:
(when (eq window-system 'x)
  (shell-command "xmodmap -e 'remove Mod4 = Hyper_L' -e 'add Mod3 = Hyper_L'")
  (shell-command "xmodmap -e 'clear Lock' -e 'keycode 66 = Hyper_L'"))

(global-set-key (kbd "H-k") 'kill-this-buffer)
(global-set-key (kbd "H-f") 'find-file)
(global-set-key (kbd "H-h") 'hyperspec-lookup)
(global-set-key (kbd "H-3") 'split-window-right)
(global-set-key (kbd "H-2") 'split-window-below)
(global-set-key (kbd "H-1") 'delete-other-windows)
(global-set-key (kbd "H-0") 'delete-window)
(global-set-key (kbd "H-s") 'save-buffer)
(global-set-key (kbd "H-a") 'mark-whole-buffer)
(global-set-key (kbd "H-r") 'raise-sexp)
(global-set-key (kbd "H-c") 'slime-selector)
Norwegian-based layout is available here.
Matthias Felleisen jested, "Why are you still using CL when Scrbl/Racket is so much better? :-)" My response was as follows:
You are right: Racket is so much better in so many dimensions. I use Lisp because I just can't bear programming in a language without proper syntactic abstraction, and that is a dimension where Racket is far ahead of Common Lisp (CL), which in turn sadly remains far ahead of the rest of the competition. Racket has also managed to grow a remarkable way to mix typed and untyped program fragments, which sets it ahead of most. But I am under the impression that there are still many dimensions in which Racket lags behind other languages, and Common Lisp in particular.
- The Common Lisp Object System (CLOS) has multiple-inheritance, multi-methods, method combinations, introspection and extensibility via the MOP, generic functions that work on builtin classes, support for dynamic instance class change (change-class, update-instance-for-changed-class) and class redefinition (defclass, update-instance-for-redefined-class), a semi-decent story for combining parametric polymorphism and ad hoc polymorphism (my own lisp-interface-library), etc. Racket seems to still be playing catch-up with respect to ad hoc polymorphism, and is lacking a set of good data structure libraries that take advantage of both functional and object-oriented programming (a good target is Scala's scalaz and its rival cats).
- While the ubiquity of global side-effects in CL is very bad, the facts that all objects that matter are addressable by a path from some global namespace and that live redefinition is actively supported makes debugging and maintaining long-lived systems with in-image persistent data more doable (see again CLOS's update-instance-for-redefined-class). This is in contrast with the Racket IDE which drops live data when you recompile the code, which is fine for student exercises, but probably wrong for live systems. CL is one of the few languages that takes long-term data seriously (though not quite as seriously as Erlang).
- Libraries. CL seems to have many more libraries than Racket, and though the quality varies, these libraries often have more feature coverage and more focus on production quality. From a cursory look, Racket libraries seem to often stop at "good enough for demo". An effort on curating libraries, homogenizing namespaces, etc., could also help Racket (I consider CL rather bad in this respect, yet Racket seems worse). My recent experience with acmart, my first maintained Racket library, makes me think that writing libraries carries even higher overhead in Racket than in CL, which is already mediocre.
- Speedwise, SBCL still produces code that runs noticeably faster than Racket (as long as you don't need full delimited control, which would require a much slower CL-to-CL compiler). This difference may be reduced as Racket adopts the notoriously fast Chez Scheme as a backend (or not). Actually, the announcement of the new Racket backend really makes me eager to jump ship.
- As for startup latency, Common Lisp is also pretty good with its saved images (they start in tens of milliseconds on my laptop), making it practical to write trivial utilities for interactive use from the shell command-line with an "instantaneous" feel. Racket takes hundreds of milliseconds at startup which puts it (barely) in the "noticeable delay" category.
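To illustrate the saved-image workflow, here is a minimal SBCL sketch (the tool name and behaviour are made up): dump an executable image once, then run it from the shell with near-instant startup.

```lisp
;; Dump an executable image; starting it later skips all load/compile time.
;; Run afterwards from the shell as: ./greet
(sb-ext:save-lisp-and-die "greet"
                          :executable t
                          :toplevel (lambda ()
                                      (format t "hello~%")
                                      (sb-ext:exit)))
```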
All these reasons, in addition to inertia (and a non-negligible code-base and mind-base), have made me stick to CL — for now. I think Racket is the future of Lisp (at least for me), I just haven't jumped ship right yet. If and when I do, I'll probably be working on some of these issues.
PS: Here are ways that Racket is indeed vastly superior to CL, that make me believe it's the future of Lisp:
- First and foremost, Racket keeps evolving, and not just "above" the base language, but importantly *below*. This alone makes it vastly superior to CL (that has evolved tremendously "above" its base abstractions, but hasn't evolved "below", except for FFI purpose, in the last 20 years), and superior to most languages (that tend to not evolve much "above", and not at all "below" their base abstractions).
- Racket is by far ahead of the pack in terms of syntactic abstraction.
- Racket has a decent module system, including build and phase separation, and symbol selection and renaming.
- Racket has typed modules, and a good interface between typed and untyped modules.
- Racket has lots of great teaching material.
- Racket has a one-stop shop for documentation, though it isn't always easy to navigate and often lacks examples. That still puts it far ahead of CL and a lot of other languages.
- Racket provides purity by default and a decent set of data structures.
- Racket has many primitives for concurrency and virtualization.
- Racket has standard primitives for laziness, pattern-matching, etc.
- Racket has a standard, portable GUI.
- Racket has a lively, healthy, user and developer community.
I have probably forgotten more.
I owe you an apology for skipping reports in the meantime. Since January I haven't been withdrawing money from our fundraiser, thanks to other paid work, which means we have budget for more bounties. Keep in mind that this doesn't mean work has ceased on my side.
I'm still working on a tutorial and demo application mentioned in the paper proposal. During that bugs and incompatibilities get fixed and pull requests are accepted. When questions arise on IRC or mailing list I try to answer them.
As a reminder: we have two active bounties at the moment:
Suggestions which other issues should have a bounty on them are appreciated and welcome.
If you have any questions, doubts or suggestions - please contact me
either by email (firstname.lastname@example.org) or on IRC (my nick is
If anyone wishes to inherit the domain from me and put it to better use, they should get in touch.
There are a bunch of new videos as of yesterday.
I have released new versions of nibbles (0.13) and ironclad (0.34). They are available from their respective tags in their github repositories; I have not created tarballs for them. Ironclad, in particular, has many new features; please see the NEWS files for both packages for some of the changes.
This is also an appropriate time to announce that I will no longer be maintaining nibbles, ironclad, or any of my other Common Lisp packages. This has been the de facto state of affairs for several years now; we might as well make it official.
Today, I published an updated client with a new bundle feature: if :include-local-projects is true, everything in Quicklisp's ql:*local-project-directories* is copied into the bundle and made available when the bundle is loaded.
To get this update, use (ql:update-client). The new code will be loaded when Lisp is restarted.
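For example, a bundle that also includes local projects might be created like this (the system name and target directory are illustrative):

```lisp
;; Create a bundle of the named systems plus everything found in
;; ql:*local-project-directories*.
(ql:bundle-systems '("alexandria")
                   :to #p"my-bundle/"
                   :include-local-projects t)
```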
This work was commissioned by Rigetti Computing.
If you have Quicklisp feature needs, feel free to get in touch with me!
We're looking for a developer to be a part of the core team for our Clojure and ClojureScript web application. Our stack includes reactive single-page web client code and a distributed backend to handle internal computations.
You will be responsible for designing, implementing, and testing functionality in our web application, collaborating closely with other developers to improve your own and others' code, and working with our UI/UX team to make it slick. Most of all, we are looking for team members who are interested in learning and knowledge sharing.
I use and love the slime-selector. It’s a very quick way to jump around between various SLIME buffers.
In my .emacs,
C-c s is bound to
(global-set-key "\C-cs" 'slime-selector)
Then, in any buffer, I can use
\C-c s to pop up a minibuffer selector that takes a single additional key. Here’s the list of keys and what they do:
4: Select in other window.
?: Selector help buffer.
c: SLIME connections buffer.
d: *sldb* buffer for the current connection.
e: most recently visited emacs-lisp-mode buffer.
i: *inferior-lisp* buffer.
l: most recently visited lisp-mode buffer.
n: Cycle to the next Lisp connection.
p: Cycle to the previous Lisp connection.
q: Abort.
r: SLIME Read-Eval-Print-Loop.
s: *slime-scratch* buffer.
t: SLIME threads buffer.
v: *slime-events* buffer.
I use r the most, to instantly get into the REPL. But I also use
l to jump to the last Lisp file I was working on, or
d to find a debugger buffer I might have buried.
It’s also quite helpful with
c, to bring up a list of active Lisp connections. For Quicklisp work, I sometimes want to switch between four or five active slime connections, and
\C-c s c makes it quick and easy to choose.
I'm happy to announce the release of Declt 2.1 "Jonathan Archer".
New in this release:
- Handle recent change in SBCL's
- Handle list of contacts (strings) in ASDF systems author and maintainer slots.
- Some backward-incompatible changes in the keyword arguments to the
- More hyperlinks between systems and source files.
- More robust collection of a system's packages (no more code walking).
- More robust handling of unavailable ASDF components.
- More robust naming of cross-references.
Find it at the usual place...
- agnostic-lizard — A portable code walker that makes a best effort to be correct in most cases — GPLv3+
- bit-ops — Optimized bit-vector operations — LLGPL
- cl-forest — Unofficial Common Lisp bindings to Rigetti Forest. — Apache 2.0
- cl-httpsqs — A client lib for accessing HTTPSQS written in Common Lisp — MIT
- cl-ledger — Double-entry accounting system. — BSD-3
- cl-password-store — Password management for Common Lisp (web) applications. — LLGPL
- cl-str — Modern, simple and consistent Common Lisp string manipulation library. — MIT
- cl-video — Video decoder implemented in Common Lisp — BSD
- cmu-infix — Mathematical infix notation for Common Lisp. — Custom (See LICENSE.txt)
- cserial-port — library for serial communication inspired by lispworks' serial-port — MIT
- cue-parser — A library for parsing audio CUE files — 2-clause BSD
- freebsd-sysctl — Sysctl kernel control mechanism for common lisp — 2-clause BSD
- jsonrpc — JSON-RPC 2.0 server/client implementation — BSD 2-Clause
- lifoo — a fresh take on Forth in the spirit of Common Lisp — MIT
- maiden — A modern and extensible chat bot framework. — Artistic
- named-read-macros — Make read macros more Lispy. Attach read macros to symbols. — BSD-3
- weblocks-examples — Example applications for Weblocks framework — Public Domain
To get this update, use (ql:update-dist "quicklisp").
As announced in my previous post, I've decided to do a write-up that illustrates my adventures in developing Portacle, a cross-platform, portable, installer-less, self-contained Common Lisp development environment.
The reason why I started with portacle in the first place is that, some day, probably in the distant future, I want to write a long series of long articles that should eventually accumulate into a book that introduces complete novices and interested people to programming in Common Lisp. As a prerequisite for that, I deem it absolutely necessary that the readers have an easy way to try out examples. Forcing them to perform a strange setup process that has absolutely nothing to do with the rest of the book and is likely to be error-prone in multiple ways is completely out of the question to me. They should be able to download an archive, extract it, and then just click on an icon to get a fully-featured IDE with which they can start out and play around with.
So, right off the bat this sets a few constraints that are rather hard to deal with.
- It needs to be cross-platform. I can't constrain the users to a certain operating system. Requiring them to set up a virtual machine would be madness.
- It needs to be cross-distribution. I can't know which distribution of Linux, or which version of it users are going to have. Requiring a specific one or a specific version would, too, be madness.
- It needs to be self-contained and should not poison the rest of the environment unless necessary or intended. This is necessary for it to be truly portable and work from a single folder.
- It needs to be reproducible and upgradable in order to be future-proof. This means I cannot simply assemble a package by hand once and call it a day once it works. It needs to be automatically buildable.
- It needs to be able to run on all platforms simultaneously as a single distribution. Otherwise, you would not be able to put it onto a USB stick and use it from any machine.
- It needs to be fully-featured enough to provide for both quick experiments and larger projects. As such, it will need to package a few different components and make them all work alongside, with the above constraints on top.
Since I want the eventual article series to be for absolute beginners as well, I also had some serious concerns about the usability aspects. It shouldn't be too confusing or hindering to use. Despite this, I still settled for Emacs+SLIME for the IDE, simply because there isn't really any alternative out there that is free and supports Lisp to a sufficient degree. I'm still not entirely set on the way the Emacs customisation is currently laid out, though. I might have to make more or less severe changes to make it easier for people to get started with. But, that's something for another time.
Now, considering the last constraint listed above, I decided on the following packages to be included in Portacle:
- Emacs, for the primary editor.
- Emacs customisation files, since the default is horrid.
- SBCL, for a capable Lisp implementation.
- ASDF, for building and system management.
- Quicklisp, for package management and library access.
- Git, for version control and project sharing.
ASDF was included specifically so that I would not have to rely on the implementation updating its internally shipped version and could instead deliver one that is possibly ahead of it, and thus less buggy/more feature-rich.
The first goal was to figure out how to build everything on OS X, Linux, and Windows. This alone took its fair share of experimentation and scouring as I had to look for the right combination of feature flags and build flags to make things work minimally.
In order to make this process work, I've developed a series of bash scripts that take care of downloading the source files of the various parts, configuring them, building them, and installing them in the proper locations in the resulting Portacle distribution tree. I settled on the following layout:
portacle        -- Base Portacle directory
  build         -- Build scripts and artefacts
    $package    -- Build artefacts for the package
  config        -- Configuration files and user files
  projects      -- User projects directory
  all           -- Cross-platform packages and resources
    emacsd      -- Emacs configuration package
    quicklisp   -- Quicklisp installation
  $platform     -- Platform-specific packages and resources
    $package    -- Installed package files
    lib         -- Shared library files
    bin         -- Shared executable files
The layout used to be different at first, namely the
$package directories used to be flipped around in the hierarchy. That proved to be less than ideal, however. I won't go into the details as to why. Suffice to say that complications that cropped up along the way ended up poisoning the hierarchy. Having a platform directory for all the specific files means you can create a Portacle distribution that works on all platforms simultaneously too.
Now, in order to properly illustrate the problems that cropped up, I'm going to talk about the development process for each platform separately. Pretty much every one of them had unique problems and complications that led to a lot of wasted time and questions about the universe.
Linux was the first platform I started developing for, and ironically enough the last one I finished for good. The first phase of just getting everything built was comparatively painless. Things just worked.
However, as soon as I tried to run a complete installation on another Linux setup, things just segfaulted left and right. No wonder, too. After all, every component has shared library dependencies. Those are going to have different versions, and possibly even different names on different distributions. The first idea to deal with this mess was to simply copy every shared library the applications touched to a directory and set
LD_LIBRARY_PATH in a wrapper script before the respective application was launched.
In order to do this, I extended the build scripts with tools that would statically analyse all the executables that were shipped and recursively trace a full list of shared library dependencies. It would then copy them all over to the shared library files directory.
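A minimal version of that tracing step can be done with ldd alone; the real build scripts recurse over every shipped executable, and the paths here are illustrative:

```shell
# Collect the shared-library dependencies of a binary into a bundle
# directory, as a starting point for a self-contained distribution.
BIN=/bin/sh
DEST=bundle-libs
mkdir -p "$DEST"
ldd "$BIN" | awk '/=> \//{print $3}' | while read -r lib; do
  cp -n "$lib" "$DEST"/   # -n: don't overwrite already-copied libraries
done
ls "$DEST"
```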
After doing this, things worked a bit better. But only a tiny bit. SBCL worked now, at least. Everything else still crashed. Unfortunately however, SBCL also only worked on my particular system. Much later I would find out that on others it would still fail. The root of the problem lay in the fact that
LD_LIBRARY_PATH does not cover everything. The system will still pick some shared libraries from elsewhere. Particularly,
libc and so forth.
So I scoured the web in search of a solution that would somehow let me constrain where linux looked for shared libraries. After a long while, I finally found something:
ld-linux.so. This file, which lives on every Linux system, is responsible for setting up the shared libraries an application depends on, and then running the application. It can be called directly, and it accepts a library-path argument, with which I can force it to look in a particular place first. So I changed the wrappers around to also start things with
ld-linux.so. Doing this allowed me to get Emacs working.
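Concretely, a wrapper invocation looks roughly like this; the loader path and the library directory are illustrative and differ per system (glibc and musl name the loader differently):

```shell
# Locate the system's dynamic loader (glibc or musl naming).
LOADER=$(ls /lib64/ld-linux-*.so.* /lib/ld-linux-*.so.* /lib/ld-musl-*.so.* 2>/dev/null | head -n 1)
# --library-path is consulted before the default search directories,
# so bundled libraries win over system ones.
"$LOADER" --library-path /usr/lib /bin/true && echo "loader run ok"
```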
However, now SBCL didn't work anymore. For some reason that is still completely beyond me, if I try to run SBCL under
ld-linux.so, it acts as if the argument that is the SBCL binary path was simply not there. You can try it for yourself:
/lib64/ld-linux.so $(which sbcl) will show you the help text, as if that argument was just not there. If you pass it the path twice it does something, but it won't work right either. Fortunately, since SBCL doesn't have many dependencies to speak of, I've been able to get by without wrapping it in
ld-linux.so. I can't wait for the day for that to break as well, though.
Anyway, back to where I was. Emacs now worked. Kind of. Running other binaries with it did not, because the
LD_LIBRARY_PATH would poison their loading. Regular binaries on the system somewhere would not execute. So I had to do away with that. Thanks to the
ld-linux.so trick though, it wasn't really needed, at least for Emacs. For Git, the situation was worse. Git does some weird stuff where it has a multitude of
git-* binaries that each perform certain tasks. Some of those tasks call other Git binaries. Some of them call bash or things like that. I couldn't set
LD_LIBRARY_PATH because that would crash bash and others, and I couldn't use
ld-linux.so because that doesn't work when a new process is created. Back to Google, then.
After even more endless searching I finally found a solution. Using
LD_PRELOAD I could inject a shared library to be loaded before any other. This library could then replace the
exec* family of libc system functions and ensure that the process you wanted to call was actually called under
ld-linux.so. You can do this kind of symbol replacement by using
libdl, fetching the function locations of the functions you want to replace using
dlsym in the library's
init function, and then defining your own versions that call out to the original functions. You can find a few interesting blog articles out there that use
LD_PRELOAD and this kind of technique for malicious purposes. This culminated in something I called
ld-wrap.so. Getting all the
exec* calls to work again was its own adventure. On that path I discovered that
execv will actually call out to a shell if it finds a shell file, and, even worse than that,
execvp* will actually call a shell any time if they can't seem to execute the file regularly. I think it's pretty insane for a "low-level system call" to potentially invoke a shell to run a script file, rather than "just" executing an ELF binary. In fact, there is no way to "just" execute a binary without implementing your own system calls directly. That's bonkers.
Getting ld-wrap.so to work right involved multiple iterations, the last of which finally got Git working properly. I had to incorporate tests to check whether the binary was in the Portacle distribution, as well as checks to see whether the binary was static or not. Apparently launching a static binary under
ld-linux.so just results in a segfault. How very C-like. Anyway, it has not been the least bit of fun to go through this and pretty much every step of getting all of this worked out cost me weeks.
Aside from the SBCL problem mentioned above, there's an outstanding issue regarding the usage of fonts in Emacs. Since there's absolutely no guarantee for any particular font to exist on the system, and since I'd like to provide some nice-looking default, I need to ship and install fonts automatically. I haven't gotten that to work quite yet, but I've been very disappointed to find that Emacs apparently has no support for loading a font from a file, and that Linux and OS X don't really support "temporarily" loading a font either. You have to install it into a user directory for it to be visible, so I have to touch things outside of the Portacle distribution folder.
Finally, apparently if you try to compile Emacs on some Linux systems, it will fail to run under
ld-linux.so and just crashes immediately with a "Memory Exhausted" warning. I have no clue what that is about and haven't received any feedback at all from the Emacs mailing lists either. So far this is not a problem, since the virtual system I build on works, but it is a major hassle because it means that building a distribution is out of the question for a lot of people. You can find out more about this bug on the issue tracker.
Windows has been an interesting system to work with. On one hand it has given me a lot of problems, but on the other it has also avoided a lot of them. The biggest pleasure was that shared library deployment "just worked". No need for complicated
ld-linux shenanigans, Windows is just consistent and faithful to loading the first library that it can find on its
PATH. Given that most of the libraries I need are already either provided by the Windows system or from the self-contained MSYS installation, there really haven't been any issues with getting things running on a deployed system.
However, it makes up for this in other areas.
While Emacs and SBCL have been very easy to get compiled and running, Git has once again proven to be a problem child. First, Git insists on keeping around
git-* prefixed hard links to its binary, because apparently a lot of things both internal and external still directly reference those. Hard links are difficult to ship because most archiving systems don't support them, making the resulting archive humongous. The current size of ~80 megabytes is already a lot by my measures, but the hard link issue exploded it to well over 100. Thus I had to go hunt for an archiver that allowed self-extracting SFX executables (after all, I couldn't ask people to install an archiver first) and was able to compress well and handle hard links. 7zip was up to the task, but it required complicating the deployment quite a bit in order to get the SFX working.
Next, Git is primarily a system written for Linux. As such, it has some "interesting" ideas about what the filesystem really looks like. I'm still not entirely sure how the path translation happens, but apparently Git interprets the root of the filesystem somewhere relative to its application location. This is fortunate for me, but took a long while to figure out. It is fortunate, because Git needs CURL to work, and CURL needs a certificate file to be able to do HTTPS. Naturally, Windows does not provide this by itself, so I have to do it. Git does allow you to set the path of the certificate file through a configuration variable, but it was one hell of a journey to get the entire setup working right. I'll try to explain why.
Because of (or rather, thanks to) Git's weird interpretation of the filesystem root, I can create a "fake" Git system configuration. Usually, Git looks up
/etc/gitconfig as part of the configuration files. Now, since the root is sort of relative to the Git binary, I can create an
etc subdirectory with the configuration file in the Git platform directory. This allows me to specify the certificate file path without having to disturb any of the other platforms. Then, thanks again to this root interpretation I can specify the path to the certificate file as an absolute path within the configuration file. Since the Git interpreted root moves with the Git binary, it becomes effectively relative. Naturally this would not work on Linux or OS X, but thankfully there I don't need to resort to such tricks.
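The resulting fake system configuration is tiny; something along these lines, with an illustrative certificate path that Git resolves relative to its own fake root:

```
# <git-platform-dir>/etc/gitconfig
[http]
	sslCAInfo = /ssl/cacert.pem
```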
Finally, if I try to compile Git without gettext on Windows, it fails to run properly and just exits with
vsnprintf is broken. I did find some issues related to
vsnprintf on Google, but nothing conclusive. Either way, if I do compile with gettext it seems to work fine. It doesn't work with gettext on OS X though, so I can't just enable it everywhere either.
Last but not least, Windows is primarily responsible for me not shipping a spell checker system. I really wanted to include that, as a spell checker is a really great thing to have when you're writing lots of comments and documentation, or just using Emacs for prose. However, I was simply not able to get
hunspell to compile and run, no matter how much I tried. I was also unable to find any compatible alternative that could be used as a replacement. If anyone knows of a solution that I can compile successfully, and for which I can generate working dictionaries, I'd be very grateful.
Actually, I suppose it's also worth mentioning that for a while, before I wrote the launcher, I was trying to use batch scripts to launch things. Please, don't ever try to do that. Batch files are unbelievably cryptic to read and a huge pain to work with. There's no easy way to pass the arglist to another program for example, as it'll just reinterpret spaces within an argument as an argument separator. If you can, use PowerShell, which is supposedly much better, or just write a proper launcher application that does that kind of logic.
Mac OS X
Finally, OS X. This is a bit of a problematic system for me because Apple has, for reasons beyond me, decided to make it as difficult as possible to test for. Since I needed Portacle to work on multiple versions of OS X if possible, and beyond that to ensure that it works outside of a development environment, I had to get my hands on virtual machines somehow. I first bought a MacBook Air just for this purpose (that's a thousand dollars thrown to the wind), only to realise that trying to run any kind of virtual machine on it is futile because it's just unbearably slow. Fortunately, there are ways to get VMWare Workstation on Linux to run OS X as well, if you use specially prepared images and some nefarious tools to unlock OS X as an option. However, only versions 10.11+ seem to run at any usable speed. 10.10 is so slow that you can pretty much forget trying to use it for anything.
Anyway, even just setting up a suitable build and test environment proved to be a major hassle. Thanks to Homebrew and MacPorts the building aspect wasn't too bad, though there too I've found weird inconsistencies like the aforementioned gettext problem. Aside from the compilation though, it really bothers me a lot that the OS X coreutils lack so many useful features. The build scripts have several special cases just for OS X because some Unix utility lacks some option that I need to get it done succinctly or efficiently.
Aside from the virtualisation issue, Apple also seems to try their hardest to prevent anyone from developing software for their system without also paying them a substantial fee every year. If you want to launch Portacle as a normal user, you just get a "security" error message that blocks the application from running. To get it to launch, you have to start the system settings application, navigate to the security tab, unlock it, then click on "run this application" in order to finally run it. They completely removed the option to allow foreign apps by default, too, so there's no option visible to me to make it shut up. Windows has a similar security popup, but at least there you can immediately tell it to launch the application, rather than having to waste several minutes in menus to do so.
In addition to the "security" popup, Apple has also recently decided to activate a feature that makes it virtually impossible for me to properly ship additional libraries, or versions of the libraries that Portacle requires, thus forever constraining the possible versions it can run on. OS X will now automatically clear out
DYLD_LIBRARY_PATH whenever you execute a new application. You can only disable this lunatic behaviour by booting into recovery mode and changing a security option, which is definitely not something I can ask people to do just to use Portacle. Thus it seems impossible for me, in the long term, to cover more than a single version. This is very annoying, especially given that lots of people seem to have stopped upgrading now that Apple screws up ever more with every new version.
Now that most of the platform problems are finished, Portacle is almost done. There are a few more things left to do, however. The Emacs configuration as it is right now is somewhat passable, but I'd like to add and change a few more things to make it more comfortable, especially for beginners. I don't want to change too much either though, as that would make it hard for beginners to find help to a specific issue online.
Otherwise, I'd also like to work a bunch more on the included documentation and help file. As it stands it gives an overview over almost everything one needs to know to get started, but I haven't had it run by anyone yet, so I can't really give any estimates as to whether it's comprehensible, readable, or even useful in the first place.
I might write about Portacle again another time, hopefully when it has finally reached something I can call "version 1.0". Until then, I'll try to write some more articles about other subjects.
I’ve seen this idiom a few times when working with a binary stream:
(let ((elements-read (read-sequence buffer stream))) ...)
While the return value does equal the number of elements read in that situation, that's a coincidence of the common case. The actual return value of read-sequence is:
the index of the first element of sequence that was not updated
Therefore, treating the return value as the number of elements read is completely wrong if you use the
start option to read-sequence.
For example, if
stream has one or more elements available to read:
(read-sequence buffer stream :start 42 :end 43) => 43
The actual number of elements read is 1, not 43.
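A sketch of the robust idiom, subtracting the start index to recover the element count:

```lisp
(let* ((start 42)
       (end-pos (read-sequence buffer stream :start start :end 43))
       (elements-read (- end-pos start)))
  ;; ELEMENTS-READ is 1 if an element was available,
  ;; 0 if the stream was already at end-of-file.
  elements-read)
```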
It's been a good while since I last wrote an entry. I didn't write anything all January for a multitude of reasons, at the forefront being that it was exam season again. That's over with now, fortunately, so I do have some more time on my hands to goof off. Unfortunately though, I only have one more week left of this precious "free" time before university strikes me in the back again. I best use it wisely.
The first week of my holidays now I've spent on a heavy reworking of Portacle, a portable, multiplatform, upgradable, install-less Common Lisp development environment project. I've only just now finally got it to work properly on all three platforms and differing versions thereof. It has been an unbelievable struggle, as every platform posed unique problems and nasty edge cases that were hard to catch without testing everything on a multitude of virtual machines all the time. If you look through the commit history you can find plentiful examples of the borderline insanity that I reached at points. I might write an in-depth article about my adventures another time.
During the exam session itself I was mostly occupied with working on Lichat, my own light-weight chat protocol. It has the nice properties that it is very uniform and rather simple to implement a client for. Among the nice aspects of the protocol are that everything goes over a channel, even private messages, that each update is associated with an ID and failures or responses can be tracked even if they happen to be out of order, and that connection multiplexing is provided directly by the protocol, so you can log into the same user account from multiple machines or instances simultaneously. I've written both a TCP and a WebSockets server, and a CL and JS client for it as well, so you can easily set up your own distributions of it. These CL and JS clients also have respective user interfaces as well, namely LionChat and Lichat-JS. Lionchat might turn into a generic chat client at some point in the future.
I've also continued work on Maiden, and finished a Lichat client for it as well. It's now running as a bot alongside the previous Colleen, and I intend to switch over fully soon. The only thing I haven't switched over yet is the chat logging, as I haven't had time to test that extensively. I've also been using its Text To Speech module to provide some fun interactivity for my video game streams. Before I can unleash Maiden onto Quicklisp I'll need to document everything heavily, though, as it is quite a lot more complex (and capable) than Colleen was.
Then, I've also taken the time to switch over TyNET to the new Radiance 1.0. Radiance is now complete and ready for release as far as I'm aware. I've been holding off on releasing it for good as I've been waiting for some feedback on its tutorial from a couple of friends of mine. Once they get ahead with that I'll type up a good, long entry to announce everything. If my paper for ELS 2017 gets accepted, I will even be presenting it at the symposium in April. Speaking of the symposium- I've been working on a new website for it on and off, to which it will be switching in a while. You can get a preview peek at it here.
For Christmas I gave my father a Raspberry Pi 3 and installed Emacs and CCL onto it. He's been using it, and a DAQ board, to interface with a Korg synthesizer, since he's interested in making music at a lower, more analog level. To help with this I've been trying to teach him Lisp since Christmas. While it's been a hard road so far, he's slowly getting into it and making advances. Given that he's been a Fortran programmer for almost all his life, with probably close to forty years of experience or more in it, I'm actually surprised at how well he's been doing with something that probably seems very different and alien to what he's used to. As part of this, though, I've had to write a few libraries, namely cl-gpio to interface with the Pi's GPIO pins, cl-spidev to use the serial port interface, cl-k8055 for the DAQ kit he's been using, and pi-plates for the pi-plates extension boards. The last library isn't done yet as I haven't had the time to test it, but the other three should work just fine.
So, I've been slowly, but surely, working off my long stack of projects that need to be completed someday. With Maiden out of the way soon too, I'll finally be able to get back to two of the remaining major projects: Trial, my game engine, and Parasol, my painting application. I'm still undecided as to which I'll tackle first, though. If you have a preference, let me know!
Finally, aside from programming work, I've gotten back into the swing of drawing. Before I was constantly feeling down and unmotivated about it. Everything just seemed to suck and I could never come up with ideas for things that I actually wanted to draw. I'm still not exploding with ideas now, but at least I've found something that inspires me a lot. Specifically, I was reading through Yokohama Shopping Trip and it was filled with exactly the kind of scenery-heavy, romantic drawings that I always wanted to make. "Romantic" referring to the Romanticism art movement. For the past few days then I've been trying to get a scenery drawing done every evening. You can see them on my tumblr. I still have a very long way to go and I'm painfully aware of all the things I do badly at or can't figure out how to draw properly, but I just have to keep on keeping on. It's too late to throw it all to the wind now.
In the near future I'd like to focus some more on writing things again, and perhaps also on actually reading some of the books that I've started accumulating over the years. But then, I'd also like to continue streaming video games, as I've been enjoying that quite a lot. Ah, there's just so much I'd like to do. If only someone were to figure out the age-old problem of finite capacity. Or in the very least: if only I didn't have to worry about earning any money to make a living, I could just merrily focus on being as productive as I can be instead of having to focus on what's marketable. Ah well, you can't have everything, eh?
So, in the more immediate future you can, if you want to, look forward to a full release announcement of Portacle, Radiance, and Maiden, more drawings, and hopefully more articles and stories as well. No promises, though. I'm not very good at keeping to deadlines for projects.
Here’s a message from Andrew Shalit to openmcl-devel:
To the CCL Community -
As many of you know, Gary Byers has been on a leave of absence from his Clozure CL work to attend to his health. I'm writing to let you know that, unfortunately, this leave will be permanent. Gary has resigned from his position at Clozure and will no longer be the lead developer of Clozure CL.
Over the last 30+ years Gary has made a tremendous contribution to the Common Lisp programming community, first through his work on Macintosh Common Lisp, and then through the creation of CCL. Gary has always been ready to share his deep knowledge of the Common Lisp standard and the issues attending its implementation. I don't know anyone whose understanding of the details of Common Lisp could match Gary's. His many contributions will be missed.
The work on CCL will continue, though. In Gary's absence, Matthew Emerson will be stepping up as the primary maintainer of CCL. Matthew is moving CCL to GitHub, and he hopes that others will pitch in to continue the development of this great software platform. Clozure Associates will also continue to support CCL, through direct contributions and through our consulting work. I'll be supporting those efforts, and I hope you'll consider supporting them as well.
To run it, CL-JPEG version 2.7 is required. Since it's not in Quicklisp yet as of this writing, it must be cloned into your local-projects. Then you can load the cl-video-player system and run (cl-video::play "<your filename.avi>").
The implementation is still very much naive, as expected from a weekend project. There is no support for indexing, and a number of edge cases of legitimate AVI encodings might fail. The library decodes video frames in the order they occur. There is no parsing of BITMAPINFOHEADER structures; the assumption is that they are MJPG DIBs. There is no audio playback, although the audio stream is read, and adding at least PCM/WAV playback shouldn't be too big an effort. The decoder is factored out into an independent implementation in cl-video.lisp. A primitive CLX media player is included in player.lisp. Each AVI instance contains one or more (audio or video) stream record objects, which hold ring buffers of frames with read and write cursors. The interaction between the decoder and the player frontend, running in separate threads, is done via mutexed access to this ring buffer. The player thread can be kicked off from a callback supplied to the decode method: it is called once the header part is parsed.
Since CL-JPEG doesn't use any SIMD extensions, performance is modest. On my six-year-old 3 GHz i5 (using one core, of course) it decodes a 480p 25 fps file without hiccups, but going beyond that would require implementing multicore support in the decoder. Still, it might be useful as-is in applications that don't require real-time decoding performance.
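The mutexed ring-buffer handoff described above can be sketched roughly like this. This is a simplified illustration using bordeaux-threads, not cl-video's actual code; frame-ring, push-frame, and pop-frame are hypothetical names:

```lisp
;; Sketch: a mutex-guarded frame ring buffer shared between a decoder
;; (writer) thread and a player (reader) thread.
;; NOTE: illustrative only -- not cl-video's actual implementation.
(defstruct frame-ring
  (frames (make-array 16 :initial-element nil))
  (read-cursor 0)
  (write-cursor 0)
  (lock (bt:make-lock)))

(defun push-frame (ring frame)
  "Called by the decoder thread: store FRAME at the write cursor."
  (bt:with-lock-held ((frame-ring-lock ring))
    (setf (aref (frame-ring-frames ring) (frame-ring-write-cursor ring))
          frame
          (frame-ring-write-cursor ring)
          (mod (1+ (frame-ring-write-cursor ring))
               (length (frame-ring-frames ring))))))

(defun pop-frame (ring)
  "Called by the player thread: return the next frame, or NIL if empty."
  (bt:with-lock-held ((frame-ring-lock ring))
    (unless (= (frame-ring-read-cursor ring) (frame-ring-write-cursor ring))
      (prog1 (aref (frame-ring-frames ring) (frame-ring-read-cursor ring))
        (setf (frame-ring-read-cursor ring)
              (mod (1+ (frame-ring-read-cursor ring))
                   (length (frame-ring-frames ring))))))))
```

The point of the design is that the lock is held only briefly around cursor updates, so the decoder can stay ahead of the player without either thread blocking for long.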
- ahungry-fleece — A general utility library of convenience functions and features. — GPLv3
- asd-generator — Automatic directory scanner/generator for .asd project files. — GPLv3
- cl-cache-tables — A wrapper around native hash-tables to facilitate in-process caching of common lisp data structures. — MIT
- cl-feedparser — Common Lisp universal feed parser — LLGPL
- cl-gpio — A library for the Linux GPIO kernel module as used on hobby kits such as the Raspberry Pi — Artistic
- cl-k8055 — Bindings to the k8055 DAQ hobby board. — Artistic
- cl-online-learning — Online Machine Learning for Common Lisp — MIT Licence
- cl-spidev — A library for the Linux SPIDEV kernel module as used on hobby kits such as the Raspberry Pi — Artistic
- cl-xdg — freedesktop.org standards handling — GNU General Public License
- cl4l — esoteric CL essentials — MIT
- glisph — Glyph rendering engine using OpenGL shading language — MIT
- lichat-protocol — The independent protocol part of Lichat. — Artistic
- lichat-serverlib — Tools to help build a server using the lichat protocol. — Artistic
- lichat-tcp-client — A simple TCP client implementation for lichat — Artistic
- lichat-tcp-server — A simple TCP server implementation for lichat. — Artistic
- lichat-ws-server — A simple WebSocket server implementation for lichat. — Artistic
- lionchat — A GUI client for the Lichat protocol — Artistic
- lisp-binary — Declare binary formats as structs and then read and write them. — GPLv3
- omer-count — A script to assist in counting the time period between Pesach and Shavuot. — GPL V3
- ook — A CL compiler and environment for literate Orangutans. — Public Domain
- opticl-core — A library for representing and processing images — BSD
- rate-monotonic — A periodic thread scheduler inspired by RTEMS. — GPL-v3
- timer-wheel — A timer wheel implementation with BORDEAUX-THREADS backend. — MIT
- translate-client — A client to online web-server translators, currently only google translate — MIT
- trivial-escapes — C-style escape directives for Common Lisp. — Public Domain
Removed projects: cl-ledger, weblocks-examples.
To get this update, use: (ql:update-dist "quicklisp")
Over the course of the last few months, numerous small changes were introduced to CL-JPEG. None was substantial enough to warrant its own announcement, but taken together they are perhaps due for an update. So here we go; as of version 2.6:
- Pre-allocated buffers in DECODE-STREAM are now supported. This should help reduce consing in bulk-processing applications. The buffer can be pre-allocated based on dimensions via
- CMYK JPEG support. YCCK to CMYK conversion is now performed by the decoder. To convert into 3-component RGB, use
- An option to disable colorspace conversion, supplied to the decode functions as :COLORSPACE-CONVERSION NIL, has been added. This can be useful, e.g., if one only needs the luminance component. Support for the corresponding option in the encoder, to improve performance in transcoding applications, is planned.
- Small bugfixes and performance tweaks.
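Putting the above options together, usage might look like the following. This is a hedged sketch: the jpeg package nickname and the exact lambda list of decode-stream are assumptions, not verified against the 2.6 API:

```lisp
;; Sketch: decode a JPEG with colorspace conversion disabled.
;; Package name and argument names are assumptions, not the verified API.
(with-open-file (in "image.jpg" :element-type '(unsigned-byte 8))
  (let ((buffer (jpeg:decode-stream in :colorspace-conversion nil)))
    ;; With conversion disabled, BUFFER holds the raw decoded samples
    ;; (e.g. luminance/chrominance) rather than RGB.
    buffer))
```

For bulk processing, the pre-allocated-buffer option described above would let you allocate this buffer once and pass it into each decode call instead of consing a fresh one per frame.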
For older items, see the Planet Lisp Archives.
Last updated: 2017-04-26 01:00