Planet Lisp

Lispers.de: Lisp-Meetup in Hamburg on Monday, 6th May 2019

· 21 days ago

We meet at Ristorante Opera, Dammtorstraße 7, Hamburg, starting around 19:00 CET on 6th May 2019.

Christian was at ELS and will report, and we will talk about our attempts at a little informal language benchmark.

This is an informal gathering of Lispers of all experience levels.

Update: the fine folks from stk-hamburg.de will be there and talk about their Lisp-based work!

Paul Khuong: Fractional Set Covering With Experts

· 27 days ago

Last winter break, I played with one of the annual capacitated vehicle routing problem (CVRP) “Santa Claus” contests. Real world family stuff took precedence, so, after the obvious LKH with Concorde polishing for individual tours, I only had enough time for one diversification moonshot. I decided to treat the high level problem of assembling prefabricated routes as a set covering problem: I would solve the linear programming (LP) relaxation for the min-cost set cover, and use randomised rounding to feed new starting points to LKH. Add a lot of luck, and that might just strike the right balance between solution quality and diversity.

Unsurprisingly, luck failed to show up, but I had ulterior motives: I’m much more interested in exploring first order methods for relaxations of combinatorial problems than in solving CVRPs. The routes I had accumulated after a couple days turned into a set covering LP with 1.1M decision variables, 10K constraints, and 20M nonzeros. That’s maybe denser than most combinatorial LPs (the aspect ratio is definitely atypical), but 0.2% non-zeros is in the right ballpark.

As soon as I had that fractional set cover instance, I tried to solve it with a simplex solver. Like any good Googler, I used Glop... and stared at a blank terminal for more than one hour.

Having observed that lack of progress, I implemented the toy I really wanted to try out: first order online “learning with experts” (specifically, AdaHedge) applied to LP optimisation. I let this not-particularly-optimised serial CL code run on my 1.6 GHz laptop for 21 hours, at which point the first order method had found a 4.5% infeasible solution (i.e., all the constraints were satisfied with \(\ldots \geq 0.955\) instead of \(\ldots \geq 1\)). I left Glop running long after the contest was over, and finally stopped it with no solution after more than 40 days on my 2.9 GHz E5.

Given the shape of the constraint matrix, I would have loved to try an interior point method, but all my licenses had expired, and I didn’t want to risk OOMing my workstation. Erling Andersen was later kind enough to test Mosek’s interior point solver on it. The runtime was much more reasonable: 10 minutes on 1 core, and 4 on 12 cores, with the sublinear speed-up mostly caused by the serial crossover to a simplex basis.

At 21 hours for a naïve implementation, the “learning with experts” first order method isn’t practical yet, but also not obviously uninteresting, so I’ll write it up here.

Using online learning algorithms for the “experts problem” (e.g., Freund and Schapire’s Hedge algorithm) to solve linear programming feasibility is now a classic result; Jeremy Kun has a good explanation on his blog. What’s new here is:

  1. Directly solving the optimisation problem.
  2. Confirming that the parameter-free nature of AdaHedge helps.

The first item is particularly important to me because it’s a simple modification to the LP feasibility meta-algorithm, and might make the difference between a tool that’s only suitable for theoretical analysis and a practical approach.

I’ll start by reviewing the experts problem, and how LP feasibility is usually reduced to the former problem. After that, I’ll cast the reduction as a surrogate relaxation method, rather than a Lagrangian relaxation; optimisation should flow naturally from that point of view. Finally, I’ll guess why I had more success with AdaHedge this time than with Multiplicative Weight Update eight years ago.1

The experts problem and LP feasibility

I first heard about the experts problem while researching dynamic sorted set data structures: Igal Galperin’s PhD dissertation describes scapegoat trees, but is really about online learning with experts. Arora, Hazan, and Kale’s 2012 survey of multiplicative weight update methods is probably a better introduction to the topic ;)

The experts problem comes in many variations. The simplest form sounds like the following. Assume you’re playing a binary prediction game over a predetermined number of turns, and have access to a fixed finite set of experts at each turn. At the beginning of every turn, each expert offers their binary prediction (e.g., yes it will rain today, or it will not rain today). You then have to make a prediction yourself, with no additional input. The actual result (e.g., it didn’t rain today) is revealed at the end of the turn. In general, you can’t expect to be right more often than the best expert at the end of the game. Is there a strategy that bounds the “regret,” how many more wrong predictions you’ll make compared to the expert(s) with the highest number of correct predictions, and in what circumstances?

Amazingly enough, even with an omniscient adversary that has access to your strategy and determines both the experts’ predictions and the actual result at the end of each turn, a stream of random bits (hidden from the adversary) suffices to bound our expected regret by \(\mathcal{O}(\sqrt{T \lg n})\), where \(T\) is the number of turns and \(n\) the number of experts.

I long had trouble with that claim: it just seems too good of a magic trick to be true. The key realisation for me was that we’re only comparing against individual experts. If each expert is a move in a matrix game, that’s the same as claiming you’ll never do much worse than any pure strategy. One example of a pure strategy is always playing rock in Rock-Paper-Scissors; pure strategies are really bad! The trick is actually in making that regret bound useful.

We need a more continuous version of the experts problem for LP feasibility. We’re still playing a turn-based game, but, this time, instead of outputting a prediction, we get to “play” a mixture of the experts (with non-negative weights that sum to 1). At the beginning of each turn, we describe what weight we’d like to give to each expert (e.g., 60% rock, 40% paper, 0% scissors). The cost (equivalently, payoff) for each expert is then revealed (e.g., \(\mathrm{rock} = -0.5\), \(\mathrm{paper} = 0.5\), \(\mathrm{scissors} = 0\)), and we incur the weighted average from our play (e.g., \(60\% \cdot -0.5 + 40\% \cdot 0.5 = -0.1\)) before playing the next round.2 The goal is to minimise our worst-case regret, the additive difference between the total cost incurred by our mixtures of experts and that of the a posteriori best single expert. In this case as well, online learning algorithms guarantee regret in \(\mathcal{O}(\sqrt{T \lg n})\).

This line of research is interesting because simple algorithms achieve that bound, with explicit constant factors on the order of 1,3 and those bounds are known to be non-asymptotically tight for a large class of algorithms. Like dense linear algebra or fast Fourier transforms, where algorithms are often compared by counting individual floating point operations, online learning has matured into such tight bounds that worst-case regret is routinely presented without Landau notation. Advances improve constant factors in the worst case, or adapt to easier inputs in order to achieve “better than worst case” performance.

The reduction below lets us take any learning algorithm with an additive regret bound, and convert it to an algorithm with a corresponding worst-case iteration complexity bound for \(\varepsilon\)-approximate LP feasibility. An algorithm that promises low worst-case regret in \(\mathcal{O}(\sqrt{T})\) gives us an algorithm that needs at most \(\mathcal{O}(1/\varepsilon\sp{2})\) iterations to return a solution that almost satisfies every constraint in the linear program, where each constraint is violated by \(\varepsilon\) or less (e.g., \(x \leq 1\) is actually \(x \leq 1 + \varepsilon\)).

We first split the linear program in two components, a simple domain (e.g., the non-negative orthant or the \([0, 1]\sp{d}\) box) and the actual linear constraints. We then map each of the latter constraints to an expert, and use an arbitrary algorithm that solves our continuous version of the experts problem as a black box. At each turn, the black box will output a set of non-negative weights for the constraints (experts). We will average the constraints using these weights, and attempt to find a solution in the intersection of our simple domain and the weighted average of the linear constraints.

Let’s use Stigler’s Diet Problem with three foods and two constraints as a small example, and further simplify it by disregarding the minimum value for calories, and the maximum value for vitamin A. Our simple domain here is at least the non-negative orthant: we can’t ingest negative food. We’ll make things more interesting by also making sure we don’t eat more than 10 servings of any food per day.

The first constraint says we mustn’t get too many calories

\[72 x\sb{\mathrm{corn}} + 121 x\sb{\mathrm{milk}} + 65 x\sb{\mathrm{bread}} \leq 2250,\]

and the second constraint (tweaked to improve this example) ensures we get enough vitamin A

\[107 x\sb{\mathrm{corn}} + 400 x\sb{\mathrm{milk}} \geq 5000,\]

or, equivalently,

\[-107 x\sb{\mathrm{corn}} - 400 x\sb{\mathrm{milk}} \leq -5000.\]

Given weights \([¾, ¼]\), the weighted average of the two constraints is

\[27.25 x\sb{\mathrm{corn}} - 9.25 x\sb{\mathrm{milk}} + 48.75 x\sb{\mathrm{bread}} \leq 437.5,\]

where the coefficients for each variable and for the right-hand side were averaged independently.
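Concretely, with weights \([¾, ¼]\), each coefficient is the same mixture of the two rows:

\[\tfrac{3}{4}(72) + \tfrac{1}{4}(-107) = 27.25, \quad \tfrac{3}{4}(121) + \tfrac{1}{4}(-400) = -9.25, \quad \tfrac{3}{4}(65) + \tfrac{1}{4}(0) = 48.75,\]

and the right-hand side is \(\tfrac{3}{4}(2250) + \tfrac{1}{4}(-5000) = 437.5\).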

The subproblem asks us to find a feasible point in the intersection of these two constraints: \[27.25 x\sb{\mathrm{corn}} - 9.25 x\sb{\mathrm{milk}} + 48.75 x\sb{\mathrm{bread}} \leq 437.5,\] \[0 \leq x\sb{\mathrm{corn}},\, x\sb{\mathrm{milk}},\, x\sb{\mathrm{bread}} \leq 10.\]

Classically, we claim that this is just Lagrangian relaxation, and find a solution to

\[\min 27.25 x\sb{\mathrm{corn}} - 9.25 x\sb{\mathrm{milk}} + 48.75 x\sb{\mathrm{bread}}\] subject to \[0 \leq x\sb{\mathrm{corn}},\, x\sb{\mathrm{milk}},\, x\sb{\mathrm{bread}} \leq 10.\]

In the next section, I’ll explain why I think this analogy is wrong and worse than useless. For now, we can easily find the minimum one variable at a time, and find the solution \(x\sb{\mathrm{corn}} = 0\), \(x\sb{\mathrm{milk}} = 10\), \(x\sb{\mathrm{bread}} = 0\), with objective value \(-92.5\) (which is \(530\) less than \(437.5\)).
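That one-variable-at-a-time rule is easy to code up. Here’s a minimal Common Lisp sketch (my names and representation, not the post’s actual code): any variable with a negative coefficient goes to its upper bound, every other variable stays at zero.

(defun minimise-over-box (coefficients upper)
  ;; A variable with a negative coefficient is pushed to its upper
  ;; bound; any other variable is left at zero.
  (map 'vector
       (lambda (c) (if (minusp c) upper 0))
       coefficients))

;; (minimise-over-box #(27.25 -9.25 48.75) 10) => #(0 10 0), objective -92.5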

In general, three things can happen at this point. We could discover that the subproblem is infeasible. In that case, the original non-relaxed linear program itself is infeasible: any solution to the original LP satisfies all of its constraints, and thus would also satisfy any weighted average of the same constraints. We could also be extremely lucky and find that our optimal solution to the relaxation is (\(\varepsilon\)-)feasible for the original linear program; we can stop with a solution. More commonly, we have a solution that’s feasible for the relaxation, but not for the original linear program.

Since that solution satisfies the weighted average constraint, the black box’s payoff for this turn (and for every other turn) is non-positive. In the current case, the first constraint (on calories) is satisfied by \(1040\), while the second (on vitamin A) is violated by \(1000\). On weighted average, the constraints are satisfied by \(\frac{1}{4}(3 \cdot 1040 - 1000) = 530.\) Equivalently, they’re violated by \(-530\) on average.

We’ll add that solution to an accumulator vector that will come in handy later.

The next step is the key to the reduction: we’ll derive payoffs (negative costs) for the black box from the solution to the last relaxation. Each constraint (expert) has a payoff equal to its level of violation in the relaxation’s solution. If a constraint is strictly satisfied, the payoff is negative; for example, the constraint on calories is satisfied by \(1040\), so its payoff this turn is \(-1040\). The constraint on vitamin A is violated by \(1000\), so its payoff this turn is \(1000\). Next turn, we expect the black box to decrease the weight of the constraint on calories, and to increase the weight of the one on vitamin A.
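As a small illustration (assumed names and data layout, not the post’s code), the payoff vector is just \(Ax - b\) for constraints written in \(\leq\) form:

(defun constraint-payoffs (rows right-hand-sides solution)
  ;; For each constraint a.x <= b, the payoff is a.x - b: positive when
  ;; violated, negative when strictly satisfied.
  (map 'vector
       (lambda (row b)
         (- (reduce #'+ (map 'vector #'* row solution)) b))
       rows right-hand-sides))

;; Diet example, both constraints in <= form, at x = (0, 10, 0):
;; (constraint-payoffs #(#(72 121 65) #(-107 -400 0)) #(2250 -5000) #(0 10 0))
;; => #(-1040 1000)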

After \(T\) turns, the total payoff for each constraint is equal to the sum of violations by all solutions in the accumulator. Once we divide both sides by \(T\), we find that the divided payoff for each constraint is equal to its violation by the average of the solutions in the accumulator. For example, if we have two solutions, one that violates the calories constraint by \(500\) and another that satisfies it by \(1000\) (violates it by \(-1000\)), the total payoff for the calories constraint is \(-500\), and the average of the two solutions does strictly satisfy the linear constraint by \(\frac{500}{2} = 250\)!
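In symbols, for a constraint \(a \cdot x \leq b\) and accumulated solutions \(x\sb{1}, \ldots, x\sb{T}\),

\[\frac{1}{T}\sum\sb{t=1}\sp{T} (a \cdot x\sb{t} - b) = a \cdot \left(\frac{1}{T}\sum\sb{t=1}\sp{T} x\sb{t}\right) - b,\]

so a constraint’s average payoff is exactly its violation by the averaged solution.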

We also know that we only generated feasible solutions to the relaxed subproblem (otherwise, we’d have stopped and marked the original LP as infeasible), so the black box’s total payoff is \(0\) or negative.

Finally, we assumed that the black box algorithm guarantees an additive regret in \(\mathcal{O}(\sqrt{T \lg n})\), so the black box’s payoff of (at most) \(0\) means that any constraint’s payoff is at most \(\mathcal{O}(\sqrt{T \lg n})\). After dividing by \(T\), we obtain a bound on the violation by the arithmetic mean of all solutions in the accumulator: for every constraint, that violation is in \(\mathcal{O}\left(\sqrt{\frac{\lg n}{T}}\right)\). In other words, the number of iterations \(T\) must scale with \(\mathcal{O}\left(\frac{\lg n}{\varepsilon\sp{2}}\right)\), which isn’t bad when \(n\) is in the millions but \(\varepsilon \approx 0.01\).
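Written out, the chain is short: the regret bound and the non-positive total payoff give, for every constraint \(i\),

\[\frac{1}{T}\sum\sb{t=1}\sp{T} (a\sb{i} \cdot x\sb{t} - b\sb{i}) \leq \frac{\mathcal{O}(\sqrt{T \lg n})}{T} = \mathcal{O}\left(\sqrt{\frac{\lg n}{T}}\right) \leq \varepsilon \quad\text{once}\quad T = \Omega\left(\frac{\lg n}{\varepsilon\sp{2}}\right).\]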

Theoreticians find this reduction interesting because there are concrete implementations of the black box, e.g., the multiplicative weight update (MWU) method with non-asymptotic bounds. For many problems, this makes it possible to derive the exact number of iterations necessary to find an \(\varepsilon-\)feasible fractional solution, given \(\varepsilon\) and the instance’s size (but not the instance itself).

That’s why algorithms like MWU are theoretically useful tools for fractional approximations, when we already have subgradient methods that only need \(\mathcal{O}\left(\frac{1}{\varepsilon}\right)\) iterations: state-of-the-art algorithms for learning with experts offer explicit non-asymptotic regret bounds that yield, for many problems, iteration bounds that only depend on the instance’s size, but not its data. While the iteration count when solving LP feasibility with MWU scales with \(\frac{1}{\varepsilon\sp{2}}\), it is merely proportional to \(\lg n\), the log of the number of linear constraints. That’s attractive, compared to subgradient methods for which the iteration count scales with \(\frac{1}{\varepsilon}\), but also scales linearly with respect to instance-dependent values like the distance between the initial dual solution and the optimum, or the Lipschitz constant of the Lagrangian dual function; these values are hard to bound, and are often proportional to the square root of the number of constraints. Given the choice between \(\mathcal{O}\left(\frac{\lg n}{\varepsilon\sp{2}}\right)\) iterations with explicit constants, and a looser \(\mathcal{O}\left(\frac{\sqrt{n}}{\varepsilon}\right)\), it’s obvious why MWU and online learning are powerful additions to the theory toolbox.

Theoreticians are otherwise not concerned with efficiency, so the usual answer to someone asking about optimisation is to tell them they can always reduce linear optimisation to feasibility with a binary search on the objective value. I once made the mistake of implementing that binary search strategy. Unsurprisingly, it wasn’t useful. I also tried another theoretical reduction, where I looked for a pair of primal and dual \(\varepsilon-\)feasible solutions that happened to have the same objective value. That also failed, in a more interesting manner: since the two solutions had to have almost the same value, the universe spited me by sending back solutions that were primal and dual infeasible in the worst possible way. In the end, the second reduction generated fractional solutions that were neither feasible nor superoptimal, which really isn’t helpful.

Direct linear optimisation with experts

The reduction above works for any “simple” domain, as long as it’s convex and we can solve the subproblems, i.e., find a point in the intersection of the simple domain and a single linear constraint or determine that the intersection is empty.

The set of (super)optimal points in some initial simple domain is still convex, so we could restrict our search to the subset of the domain that is superoptimal for the linear program we wish to optimise, and directly reduce optimisation to the feasibility problem solved in the last section, without binary search.

That sounds silly at first: how can we find solutions that are superoptimal when we don’t even know the optimal value?

Remember that the subproblems are always relaxations of the original linear program. We can port the objective function from the original LP over to the subproblems, and optimise the relaxations. Any solution that’s optimal for a relaxation must have an optimal or superoptimal value for the original LP.

Rather than treating the black box online solver as a generator of Lagrangian dual vectors, we’re using its weights as solutions to the surrogate relaxation dual. The latter interpretation isn’t just more powerful because it handles objective functions. It also makes more sense: the weights generated by algorithms for the experts problem are probabilities, i.e., they’re non-negative and sum to \(1\). That’s also what’s expected for surrogate dual vectors, but definitely not the case for Lagrangian dual vectors, even when restricted to \(\leq\) constraints.

We can do even better!

Unlike Lagrangian dual solvers, which only converge when fed (approximate) subgradients and thus force us to find (nearly) optimal solutions to the relaxed subproblems, our reduction to the experts problem only needs feasible solutions to the subproblems. That’s all we need to guarantee an \(\varepsilon-\)feasible solution to the initial problem in a bounded number of iterations. We also know exactly how that \(\varepsilon-\)feasible solution is generated: it’s the arithmetic mean of the solutions for relaxed subproblems.

This lets us decouple finding lower bounds from generating feasible solutions that will, on average, \(\varepsilon-\)satisfy the original LP. In practice, the search for an \(\varepsilon-\)feasible solution that is also superoptimal will tend to improve the lower bound. However, nothing forces us to evaluate lower bounds synchronously, or to only use the experts problem solver to improve our bounds.

We can find a new bound from any vector of non-negative constraint weights: they always yield a valid surrogate relaxation. We can solve that relaxation, and update our best bound when it’s improved. The Diet subproblem earlier had

\[27.25 x\sb{\mathrm{corn}} - 9.25 x\sb{\mathrm{milk}} + 48.75 x\sb{\mathrm{bread}} \leq 437.5,\] \[0 \leq x\sb{\mathrm{corn}},\, x\sb{\mathrm{milk}},\, x\sb{\mathrm{bread}} \leq 10.\]

Adding the original objective function back yields the linear program

\[\min 0.18 x\sb{\mathrm{corn}} + 0.23 x\sb{\mathrm{milk}} + 0.05 x\sb{\mathrm{bread}}\] subject to \[27.25 x\sb{\mathrm{corn}} - 9.25 x\sb{\mathrm{milk}} + 48.75 x\sb{\mathrm{bread}} \leq 437.5,\] \[0 \leq x\sb{\mathrm{corn}},\, x\sb{\mathrm{milk}},\, x\sb{\mathrm{bread}} \leq 10,\]

which has a trivial optimal solution at \([0, 0, 0]\).

When we generate a feasible solution for the same subproblem, we can use any valid bound on the objective value to find the most feasible solution that is also assuredly (super)optimal. For example, if some oracle has given us a lower bound of \(2\) for the original Diet problem, we can solve for

\[\min 27.25 x\sb{\mathrm{corn}} - 9.25 x\sb{\mathrm{milk}} + 48.75 x\sb{\mathrm{bread}}\] subject to \[0.18 x\sb{\mathrm{corn}} + 0.23 x\sb{\mathrm{milk}} + 0.05 x\sb{\mathrm{bread}}\leq 2\] \[0 \leq x\sb{\mathrm{corn}},\, x\sb{\mathrm{milk}},\, x\sb{\mathrm{bread}} \leq 10.\]

We can relax the objective value constraint further, since we know that the final \(\varepsilon-\)feasible solution is a simple arithmetic mean. Given the same best bound of \(2\), and, e.g., a current average of \(3\) solutions with a value of \(1.9\), a new solution with an objective value of \(2.3\) (more than our best bound, so not necessarily optimal!) would yield a new average solution with a value of \(2\), which is still (super)optimal. This means we can solve the more relaxed subproblem

\[\min 27.25 x\sb{\mathrm{corn}} - 9.25 x\sb{\mathrm{milk}} + 48.75 x\sb{\mathrm{bread}}\] subject to \[0.18 x\sb{\mathrm{corn}} + 0.23 x\sb{\mathrm{milk}} + 0.05 x\sb{\mathrm{bread}}\leq 2.3\] \[0 \leq x\sb{\mathrm{corn}},\, x\sb{\mathrm{milk}},\, x\sb{\mathrm{bread}} \leq 10.\]

Given a bound on the objective value, we swapped the constraint and the objective; the goal is to maximise feasibility, while generating a solution that’s “good enough” to guarantee that the average solution is still (super)optimal.
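The relaxed threshold is a one-line rearrangement: if the current average of \(k\) accumulated solutions has objective value \(\bar{v}\) and the best known lower bound is \(z\), the next solution keeps the running average (super)optimal as long as its value \(v\) satisfies

\[\frac{k \bar{v} + v}{k + 1} \leq z \quad\Longleftrightarrow\quad v \leq (k+1)\, z - k\, \bar{v}.\]

With \(k = 3\), \(z = 2\), and \(\bar{v} = 1.9\), that’s \(4 \cdot 2 - 3 \cdot 1.9 = 2.3\), the threshold used above.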

For box-constrained linear programs where the box is the convex domain, subproblems are bounded linear knapsacks, so we can simply stop the greedy algorithm as soon as the objective value constraint is satisfied, or when the knapsack constraint becomes active (we found a better bound).
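As a rough sketch of what that greedy loop looks like (illustrative names and layout, not the post’s actual code; profits and weights assumed positive), a plain bounded fractional knapsack fills variables in order of profit density and stops as soon as the capacity constraint becomes active; the same loop can bail out even earlier once a separate objective-value target is reached.

(defun fractional-knapsack (profits weights capacity)
  ;; Maximise profit.x subject to weight.x <= CAPACITY and 0 <= x_i <= 1,
  ;; greedily, in order of decreasing profit density.
  (let* ((n (length profits))
         (x (make-array n :initial-element 0))
         (order (sort (loop for i below n collect i) #'>
                      :key (lambda (i) (/ (aref profits i) (aref weights i))))))
    (dolist (i order x)
      (when (<= capacity 0)              ; knapsack constraint active: stop early
        (return x))
      (let ((take (min 1 (/ capacity (aref weights i)))))
        (setf (aref x i) take)
        (decf capacity (* take (aref weights i)))))))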

This last tweak doesn’t just accelerate convergence to \(\varepsilon-\)feasible solutions. More importantly for me, it pretty much guarantees that our \(\varepsilon-\)feasible solution matches the best known lower bound, even if that bound was provided by an outside oracle. Bundle methods and the Volume algorithm can also mix solutions to relaxed subproblems in order to generate \(\varepsilon-\)feasible solutions, but the result lacks the last guarantee: their fractional solutions are even more superoptimal than the best bound, and that can make bounding and variable fixing difficult.

The secret sauce: AdaHedge

Before last Christmas’s CVRP set covering LP, I had always used the multiplicative weight update (MWU) algorithm as my black box online learning algorithm: it wasn’t great, but I couldn’t find anything better. The two main downsides for me were that I had to know a “width” parameter ahead of time, as well as the number of iterations I wanted to run.

The width is essentially the range of the payoffs; in our case, the potential level of violation or satisfaction of each constraint by any solution to the relaxed subproblems. The dependence isn’t surprising: folklore in Lagrangian relaxation also says that’s a big factor there. The problem is that the most extreme violations and satisfactions are initialisation parameters for the MWU algorithm, and the iteration count for a given \(\varepsilon\) is quadratic in the width (\(\mathrm{max\_violation} \cdot \mathrm{max\_satisfaction}\)).

What’s even worse is that the MWU is explicitly tuned for a specific iteration count. If I estimate that, given my worst-case width estimate, one million iterations will be necessary to achieve \(\varepsilon-\)feasibility, MWU tuned for 1M iterations will need 1M iterations, even if the actual width is narrower.

de Rooij and others published AdaHedge in 2013, an algorithm that addresses both these issues by smoothly estimating its parameter over time, without using the doubling trick.4 AdaHedge’s loss (convergence rate to an \(\varepsilon-\)solution) still depends on the relaxation’s width. However, it depends on the maximum width actually observed during the solution process, and not on any explicit worst-case bound. It’s also not explicitly tuned for a specific iteration count, and simply keeps improving at a rate that roughly matches MWU. If the instance happens to be easy, we will find an \(\varepsilon-\)feasible solution more quickly. In the worst case, the iteration count is never much worse than that of an optimally tuned MWU.
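For reference, the core update in de Rooij et al.’s AdaHedge looks like this (my notation, not the post’s code): the learning rate is driven by the cumulative mixability gap \(\Delta\sb{t}\) rather than by a width estimate and a horizon,

\[\eta\sb{t} = \frac{\ln n}{\Delta\sb{t-1}}, \qquad w\sb{i,t} \propto \exp(-\eta\sb{t} L\sb{i,t-1}), \qquad \Delta\sb{t} = \Delta\sb{t-1} + w\sb{t} \cdot \ell\sb{t} + \frac{1}{\eta\sb{t}} \ln \sum\sb{i} w\sb{i,t}\, e\sp{-\eta\sb{t} \ell\sb{i,t}},\]

where \(L\) is the cumulative loss per expert, \(\ell\sb{t}\) the loss vector revealed at round \(t\), and \(\eta\sb{1} = \infty\) (follow the leader until the first non-trivial gap).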

These 400 lines of Common Lisp implement AdaHedge and use it to optimise the set covering LP. AdaHedge acts as the online black box solver for the surrogate dual problem, the relaxed set covering LP is a linear knapsack, and each subproblem attempts to improve the lower bound before maximising feasibility.

When I ran the code, I had no idea how long it would take to find a feasible enough solution: covering constraints can never be violated by more than \(1\), but some points could be covered by hundreds of tours, so the worst case satisfaction width is high. I had to rely on the way AdaHedge adapts to the actual hardness of the problem. In the end, \(34492\) iterations sufficed to find a solution that was \(4.5\%\) infeasible.5 This corresponds to a worst case with a width of less than \(2\), which is probably not what happened. It seems more likely that the surrogate dual isn’t actually an omniscient adversary, and AdaHedge was able to exploit some of that “easiness.”

The iterations themselves are also reasonable: one sparse matrix / dense vector multiplication to convert surrogate dual weights to an average constraint, one solve of the relaxed LP, and another sparse matrix / dense vector multiplication to compute violations for each constraint. The relaxed LP is a fractional \([0, 1]\) knapsack, so the bottleneck is sorting double floats. Each iteration took 1.8 seconds on my old laptop; I’m guessing that could easily be 10-20 times faster with vectorisation and parallelisation.

In another post, I’ll show how using the same surrogate dual optimisation algorithm to mimic Lagrangian decomposition instead of Lagrangian relaxation guarantees an iteration count in \(\mathcal{O}\left(\frac{\lg \#\mathrm{nonzero}}{\varepsilon\sp{2}}\right)\) independently of luck or the specific linear constraints.


  1. Yes, I have been banging my head against that wall for a while.

  2. This is equivalent to minimising expected loss with random bits, but cleans up the reduction.

  3. When was the last time you had to worry whether that log was natural or base-2?

  4. The doubling trick essentially says to start with an estimate for some parameters (e.g., width), then adjust it to at least double the expected iteration count when the parameter’s actual value exceeds the estimate. The sum telescopes and we only pay a constant multiplicative overhead for the dynamic update.

  5. I think I computed the \(\log\) of the number of decision variables instead of the number of constraints, so maybe this could have gone a bit better.

Lispers.de: Berlin Lispers Meetup, Monday, 15th April 2019

· 41 days ago

We meet again on Monday 8pm, 15th April. Our host this time is James Anderson (www.dydra.com).

Berlin Lispers is about all flavors of Lisp including Emacs Lisp, Common Lisp, Clojure, Scheme.

We will have two talks this time.

Hans Hübner will tell us about "Reanimating VAX LISP - A CLtL1 implementation for VAX/VMS".

And Ingo Mohr will continue his talk "About the Unknown East of the Ancient LISP World. History and Thoughts. Part II: Eastern Common LISP and a LISP Machine."

We meet in the Taut-Haus at Engeldamm 70 in Berlin-Mitte, the bell is "James Anderson". It is located in 10min walking distance from U Moritzplatz or U Kottbusser Tor or Ostbahnhof. In case of questions call Christian +49 1578 70 51 61 4.

Didier Verna: Quickref 2.0 "Be Quick or Be Dead" is released

· 43 days ago

Surfing on the energizing wave of ELS 2019, the 12th European Lisp Symposium, I'm happy to announce the release of Quickref 2.0, codename "Be Quick or Be Dead".

The major improvement in this release, justifying an increment of the major version number (and the very appropriate codename), is the introduction of parallel algorithms for building the documentation. I presented this work last week in Genova so I won't go into the gory details here, but for the brave and impatient, let me just say that using the parallel implementation is just a matter of calling the BUILD function with :parallel t :declt-threads x :makeinfo-threads y (adjust x and y as you see fit, depending on your architecture).

The second featured improvement is the introduction of an author index, in addition to the original one. The author index is still a bit shaky, mostly due to technical problems (calling asdf:find-system almost two thousand times simply doesn't work) and also to the very creative use that some library authors have of the ASDF author and maintainer slots in the system descriptions. It does, however, do a quite decent job for the majority of the authors and their libraries' reference manuals.

Finally, the repository now has a fully functional continuous integration infrastructure, which means that there shouldn't be any more lag between new Quicklisp (or Quickref) releases and new versions of the documentation website.

Thanks to Antoine Hacquard, Antoine Martin, and Erik Huelsmann for their contribution to this release! A lot of new features are already in the pipe. Currently documenting 1720 libraries, and counting...

Lispers.de: Lisp-Meetup in Hamburg on Monday, 1st April 2019

· 53 days ago

We meet at Ristorante Opera, Dammtorstraße 7, Hamburg, starting around 19:00 CET on 1st April 2019.

This is an informal gathering of Lispers. Svante will talk a bit about the implementation of lispers.de. You are invited to bring your own topics.

Lispers.de: Berlin Lispers Meetup, Monday, 25th March 2019

· 61 days ago

We meet again on Monday 8pm, 25th March. Our host this time is James Anderson (www.dydra.com).

Berlin Lispers is about all flavors of Lisp including Common Lisp, Scheme, Dylan, Clojure.

We will have a talk this time. Ingo Mohr will tell us "About the Unknown East of the Ancient LISP World. History and Thoughts. Part I: LISP on Punchcards".

We meet in the Taut-Haus at Engeldamm 70 in Berlin-Mitte, the bell is "James Anderson". It is located in 10min walking distance from U Moritzplatz or U Kottbusser Tor or Ostbahnhof. In case of questions call Christian +49 1578 70 51 61 4.

Quicklisp news: March 2019 Quicklisp dist update now available

· 74 days ago
New projects:
  • bobbin — Simple (word) wrapping utilities for strings. — MIT
  • cl-mango — A minimalist CouchDB 2.x database client. — BSD3
  • cl-netpbm — Common Lisp support for reading/writing the netpbm image formats (PPM, PGM, and PBM). — MIT/X11
  • cl-skkserv — skkserv with Common Lisp — GPLv3
  • cl-torrents — This is a little tool for the lisp REPL or the command line (also with a readline interactive prompt) to search for torrents and get magnet links — MIT
  • common-lisp-jupyter — A Common Lisp kernel for Jupyter along with a library for building Jupyter kernels. — MIT
  • conf — Simple configuration file manipulator for projects. — GNU General Public License v3.0
  • eventbus — An event bus in Common Lisp. — GPLv3
  • open-location-code — Open Location Code library. — Modified BSD License
  • piggyback-parameters — This is a configuration system that supports local file and database based parameter storage. — MIT
  • quilc — A CLI front-end for the Quil compiler — Apache License 2.0 (See LICENSE.txt)
  • qvm — An implementation of the Quantum Abstract Machine. — Apache License 2.0 (See LICENSE.txt)
  • restricted-functions — Reasoning about functions with restricted argument types. — MIT
  • simplet — Simple test runner in Common Lisp. — GPLv3
  • skeleton-creator — Create projects from a skeleton directory. — GPLv3
  • solid-engine — The Common Lisp stack-based application controller — MIT
  • spell — Spellchecking package for Common Lisp — BSD
  • trivial-continuation — Provides an implementation of function call continuation and combination. — MIT
  • trivial-hashtable-serialize — A simple method to serialize and deserialize hash-tables. — MIT
  • trivial-json-codec — A JSON parser able to identify class hierarchies. — MIT
  • trivial-monitored-thread — Trivial Monitored Thread offers a very simple (aka trivial) way of spawning threads and being informed when one any of them crash and die. — MIT
  • trivial-object-lock — A simple method to lock object (and slot) access. — MIT
  • trivial-pooled-database — A DB multi-threaded connection pool. — MIT
  • trivial-timer — Easy scheduling of tasks (functions). — MIT
  • trivial-variable-bindings — Offers a way to handle associations between a place-holder (aka. variable) and a value. — MIT
  • ucons — Unique conses and functions for working on them. — MIT
  • wordnet — Common Lisp interface to WordNet — CC-BY 4.0
Updated projectsagnostic-lizardaprilbig-stringbinfixceplchancerychirpcl+sslcl-abstract-classescl-allcl-asynccl-collidercl-conllucl-croncl-digraphcl-eglcl-gap-buffercl-generatorcl-generic-arithmeticcl-gracecl-hamcrestcl-lascl-ledgercl-locativescl-marklesscl-messagepackcl-ntriplescl-patternscl-prevalencecl-projcl-projectcl-qrencodecl-random-forestcl-stopwatchcl-string-completecl-string-matchcl-tcodcl-waylandcladclemclodcloser-mopclx-xembedcoleslawcommon-lisp-actorscroatoandartsclhashtreedata-lensdefrecdoplusdoubly-linked-listdynamic-collecteclectorescalatorexternal-programfiascoflac-parsergame-mathgamebox-dgengamebox-mathgendlgeneric-clgeniegolden-utilshelambdapinterfaceironcladjp-numeraljson-responsesl-mathletreclisp-chatlistopialiterate-lispmaidenmap-setmcclimmitonodguioverlordparachuteparameterized-functionpathname-utilsperiodspetalisppjlinkplumppolicy-condportable-threadspostmodernprotestqt-libsqtoolsqtools-uirecurregular-type-expressionroveserapeumshadowsimplified-typesslyspinneretstaplestumpwmsuclesynonymstaggertemplatetriviatrivial-batterytrivial-benchmarktrivial-signaltrivial-utilitiesubiquitousumbrausocketvarjovernacularwith-c-syntax.

Removed projects: mgl, mgl-mat.

To get this update, use: (ql:update-dist "quicklisp")

Enjoy!

Lispers.de: Lisp-Meetup in Hamburg on Monday, 4th March 2019

· 82 days ago

We meet at Ristorante Opera, Dammtorstraße 7, Hamburg, starting around 19:00 CET on 4th March 2019.

This is an informal gathering of Lispers. Come as you are, bring lispy topics.

Paul Khuong: The Unscalable, Deadlock-prone, Thread Pool

· 83 days ago

Epistemic Status: I’ve seen thread pools fail this way multiple times, am confident the pool-per-state approach is an improvement, and have confirmed with others they’ve also successfully used it in anger. While I’ve thought about this issue several times over ~4 years and pool-per-state seems like a good fix, I’m not convinced it’s undominated and hope to hear about better approaches.

Thread pools tend to only offer a sparse interface: pass a closure or a function and its arguments to the pool, and that function will be called, eventually.1 Functions can do anything, so this interface should offer all the expressive power one could need. Experience tells me otherwise.

The standard pool interface is so impoverished that it is nearly impossible to use correctly in complex programs, and leads us down design dead-ends. I would actually argue it’s better to work with raw threads than to even have generic amorphous thread pools: the former force us to stop and think about resource requirements (and let the OS’s real scheduler help us along), instead of making us pretend we only care about CPU usage. I claim thread pools aren’t scalable because, with the exception of CPU time, they actively hinder the development of programs that achieve high resource utilisation.

This post comes in two parts. First, the story of a simple program that’s parallelised with a thread pool, then hits a wall as a wider set of resources becomes scarce. Second, a solution I like for that kind of program: an explicit state machine, where each state gets a dedicated queue that is aware of the state’s resource requirements.

Stages of parallelisation

We start with a simple program that processes independent work units, a serial loop that pulls in work (e.g., files in a directory), or waits for requests on a socket, one work unit at a time.

At some point, there’s enough work to think about parallelisation, and we choose threads.2 To keep things simple, we simply spawn a thread per work unit. Load increases further, and we observe that we spend more time switching between threads or contending on shared data than doing actual work. We could use a semaphore to limit the number of work units we process concurrently, but we might as well just push work units to a thread pool and recycle threads instead of wasting resources on a thread-per-request model. We can even start thinking about queueing disciplines, admission control, backpressure, etc. Experienced developers will often jump directly to this stage after the serial loop.

The 80s saw a lot of research on generalising this “flat” parallelism model to nested parallelism, where work units can spawn additional requests and wait for the results (e.g., to recursively explore sub-branches of a search tree). Nested parallelism seems like a good fit for contemporary network services: we often respond to a request by sending simpler requests downstream, before merging and munging the responses and sending the result back to the original requestor. That may be why futures and promises are so popular these days.

I believe that, for most programs, the futures model is an excellent answer to the wrong question. The moment we perform I/O (be it network, disk, or even with hardware accelerators) in order to generate a result, running at scale will have to mean controlling more resources than just CPU, and both the futures and the generic thread pool models fall short.

The issue is that futures only work well when a waiter can help along the value it needs, with task stealing, while thread pools implement a trivial scheduler (dedicate a thread to a function until that function returns) that must be oblivious to resource requirements, since it handles opaque functions.

Once we have futures that might be blocked on I/O, we can’t guarantee a waiter will achieve anything by lending CPU time to its children. We could help sibling tasks, but that way stack overflows lie.

The deficiency of flat generic thread pools is more subtle. Obviously, one doesn’t want to take a tight thread pool, with one thread per core, and waste it on synchronous I/O. We’ll simply kick off I/O asynchronously, and re-enqueue the continuation on the pool upon completion!

Instead of doing

A, I/O, B

in one function, we’ll split the work in two functions and a callback

A, initiate asynchronous I/O
On I/O completion: enqueue B in thread pool
B

The problem here is that it’s easy to create too many asynchronous requests, and run out of memory, DOS the target, or delay the rest of the computation for too long. As soon as the I/O request has been initiated in A, the function returns to the thread pool, which will just execute more instances of A and initiate even more I/O.

At first, when the program doesn’t heavily utilise any resource in particular, there’s an easy solution: limit the total number of in-flight work units with a semaphore. Note that I wrote work unit, not function calls. We want to track logical requests that we started processing, but for which there is still work to do (e.g., the response hasn’t been sent back yet).

I’ve seen two ways to cap in-flight work units. One’s buggy, the other doesn’t generalise.

The buggy implementation acquires a semaphore in the first stage of request handling (A) and releases it in the last stage (B). The bug is that, by the time we’re executing A, we’re already using up a slot in the thread pool, so we might be preventing Bs from executing. We have a lock ordering problem: A acquires a thread pool slot before acquiring the in-flight semaphore, but B needs to acquire a slot before releasing the same semaphore. If you’ve seen code that deadlocks when the thread pool is too small, this was probably part of the problem.

The correct implementation acquires the semaphore before enqueueing a new work unit, before shipping a call to A to the thread pool (and releases it at the end of processing, in B). This only works because we can assume that the first thing A does is to acquire the semaphore. As our code becomes more efficient, we’ll want to more finely track the utilisation of multiple resources, and pre-acquisition won’t suffice. For example, we might want to limit network requests going to individual hosts, independently from disk reads or writes, or from database transactions.
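A minimal sketch of that correct ordering, using SBCL’s sb-thread semaphores and hypothetical ENQUEUE, INITIATE-ASYNC-IO, and SEND-REPLY stand-ins for the pool and I/O layers (none of these helpers are from a specific library):

;; ENQUEUE, INITIATE-ASYNC-IO and SEND-REPLY are placeholders for the
;; thread pool and I/O machinery, not real library calls.
(defvar *in-flight* (sb-thread:make-semaphore :count 128)) ; cap on live work units

(defun submit-work-unit (request)
  ;; Acquire the in-flight slot *before* taking up a pool thread.
  (sb-thread:wait-on-semaphore *in-flight*)
  (enqueue (lambda () (stage-a request))))

(defun stage-a (request)
  (initiate-async-io request
                     :on-completion (lambda (response)
                                      (enqueue (lambda () (stage-b response))))))

(defun stage-b (response)
  (send-reply response)
  ;; Only now is the logical work unit done: release its slot.
  (sb-thread:signal-semaphore *in-flight*))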

Resource-aware thread pools

The core issue with thread pools is that the only thing they can do is run opaque functions in a dedicated thread, so the only way to reserve resources is to already be running in a dedicated thread. However, the one resource that every function needs is a thread on which to run, thus any correct lock order must acquire the thread last.

We care about reserving resources because, as our code becomes more efficient and scales up, it will start saturating resources that used to be virtually infinite. Unfortunately, classical thread pools can only control CPU usage, and actively hinder correct resource throttling. If we can’t guarantee we won’t overwhelm the supply of a given resource (e.g., read IOPS), we must accept wasteful overprovisioning.

Once the problem has been identified, the solution becomes obvious: make sure the work we push to thread pools describes the resources to acquire before running the code in a dedicated thread.

My favourite approach assigns one global thread pool (queue) to each function or processing step. The arguments to the functions will change, but the code is always the same, so the resource requirements are also well understood. This does mean that we incur complexity to decide how many threads or cores each pool is allowed to use. However, I find that the resulting programs are easier to understand at a high level: it’s much easier to write code that traverses and describes the work waiting at different stages when each stage has a dedicated thread pool queue. They’re also easier to model as queueing systems, which helps answer “what if?” questions without actually implementing the hypothesis.
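As a data-shaped sketch of that structure (all names are illustrative), each stage pins down the code it runs, its worker budget, and the resources that code acquires, so the wiring itself documents the program’s resource topology:

(defstruct stage
  name          ; e.g. :parse, :fetch-downstream, :store
  function      ; symbol naming the fixed code for this step
  max-workers   ; how many workers may run this stage concurrently
  resources)    ; e.g. (:db-connection), acquired before FUNCTION runs

(defparameter *pipeline*
  (list (make-stage :name :parse :function 'parse-request :max-workers 4)
        (make-stage :name :fetch-downstream :function 'call-downstream
                    :max-workers 8 :resources '(:downstream-request-slot))
        (make-stage :name :store :function 'store-result
                    :max-workers 2 :resources '(:db-connection))))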

In increasing order of annoyingness, I’d divide resources to acquire in four classes.

  1. Resources that may be seamlessly3 shared or timesliced, like CPU.
  2. Resources that are acquired for the duration of a single function call or processing step, like DB connections.
  3. Resources that are acquired in one function call, then released in another thread pool invocation, like DB transactions, or asynchronous I/O semaphores.
  4. Resources that may only be released after temporarily using more of it, or by cancelling work: memory.

We don’t really have to think about the first class of resources, at least when it comes to correctness. However, repeatedly running the same code on a given core tends to improve performance, compared to running all sorts of code on all cores.

The second class of resources may be acquired once our code is running in a thread pool, so one could pretend it doesn’t exist. However, it is more efficient to batch acquisition, and execute a bunch of calls that all need a given resource (e.g., a DB connection from a connection pool) before releasing it, instead of repetitively acquiring and releasing the same resource in back-to-back function calls, or blocking multiple workers on the same bottleneck.4 More importantly, the property of always being acquired and released in the same function invocation is a global one: as soon as even one piece of code acquires a given resource and releases it in another thread pool call (e.g., acquires a DB connection, initiates an asynchronous network call, writes the result of the call to the DB, and releases the connection), we must always treat that resource as being in the third, more annoying, class. Having explicit stages with fixed resource requirements helps us confirm resources are classified correctly.

The third class of resources must be acquired in a way that preserves forward progress in the rest of the system. In particular, we must never have all workers waiting for resources of this third class. In most cases, it suffices to make sure there are at least as many workers as there are queues or stages, and to only let each stage run the initial resource acquisition code in one worker at a time. However, it can pay off to be smart when different queued items require different resources, instead of always trying to satisfy resource requirements in FIFO order.

The fourth class of resources is essentially heap memory. Memory is special because the only way to release it is often to complete the computation. However, moving the computation forward will use even more heap. In general, my only solution is to impose a hard cap on the total number of in-flight work units, and to make sure it’s easy to tweak that limit at runtime, in disaster scenarios. If we still run close to the memory capacity with that limit, the code can either crash (and perhaps restart with a lower in-flight cap), or try to cancel work that’s already in progress. Neither option is very appealing.

There are some easier cases. For example, I find that temporary bumps in heap usage can be caused by parsing large responses from idempotent (GET) requests. It would be nice if networking subsystems tracked memory usage to dynamically throttle requests, or even cancel and retry idempotent ones.

Once we’ve done the work of explicitly writing out the processing steps in our program as well as their individual resource requirements, it makes sense to let that topology drive the structure of the code.

Over time, we’ll gain more confidence in that topology and bake it in our program to improve performance. For example, rather than limiting the number of in-flight requests with a semaphore, we can have a fixed-size allocation pool of request objects. We can also selectively use bounded ring buffers once we know we wish to impose a limit on queue size. Similarly, when a sequence (or subgraph) of processing steps is fully synchronous or retires in order, we can control both the queue size and the number of in-flight work units with a disruptor, which should also improve locality and throughput under load. These transformations are easy to apply once we know what the movement of data and resource looks like. However, they also ossify the structure of the program, so I only think about such improvements if they provide a system property I know I need (e.g., a limit on the number of in-flight requests), or once the code is functional and we have load-testing data.

Complex programs are often best understood as state machines. These state machines can be implicit, or explicit. I prefer the latter. I claim that it’s also preferable to have one thread pool5 per explicit state than to dump all sorts of state transition logic in a shared pool. If writing functions that process flat tables is data-oriented programming, I suppose I’m arguing for data-oriented state machines.


  1. Convenience wrappers, like parallel map, or “run after this time,” still rely on the flexibility of opaque functions.

  2. Maybe we decided to use threads because there’s a lot of shared, read-mostly, data on the heap. It doesn’t really matter, process pools have similar problems.

  3. Up to a point, of course. No model is perfect, etc. etc.

  4. Explicit resource requirements combined with one queue per stage lets us steal ideas from SEDA.

  5. One thread pool per state in the sense that no state can fully starve out another of CPU time. The concrete implementation may definitely let a shared set of workers pull from all the queues.

Christophe Rhodes: sbcl 1 5 0

· 85 days ago

Today, I released sbcl-1.5.0 - with no particular reason for the minor version bump except that when the patch version (we don't in fact do semantic versioning) gets large it's hard to remember what I need to type in the release script. In the 17 versions (and 17 months) since sbcl-1.4.0, there have been over 2900 commits - almost all by other people - fixing user-reported, developer-identified, and random-tester-lesser-spotted bugs; providing enhancements; improving support for various platforms; and making things faster, smaller, or sometimes both.

It's sometimes hard for developers to know what their users think of all of this furious activity. It's definitely hard for me, in the case of SBCL: I throw releases over the wall, and sometimes people tell me I messed up (but usually there is a resounding silence). So I am running a user survey, where I invite you to tell me things about your use of SBCL. All questions are optional: if something is personally or commercially sensitive, please feel free not to tell me about it! But it's nine years since the last survey (that I know of), and much has changed since then - I'd be glad to hear any information SBCL users would be willing to provide. I hope to collate the information in late March, and report on any insight I can glean from the answers.

Chaitanya Gupta: LOAD-TIME-VALUE and prepared queries in Postmodern

· 99 days ago

The Common Lisp library Postmodern defines a macro called PREPARE that creates prepared statements for a PostgreSQL connection. It takes a SQL query with placeholders ($1, $2, etc.) as input and returns a function which takes one argument for every placeholder and executes the query.

The first time I used it, I did something like this:

(defun run-query (id)
  (funcall (prepare "SELECT * FROM foo WHERE id = $1") id))

Soon after, I realized that running this function every time would generate a new prepared statement instead of re-using the old one. Let's look at the macro expansion:

(macroexpand-1 '(prepare "SELECT * FROM foo WHERE id = $1"))
==>
(LET ((POSTMODERN::STATEMENT-ID (POSTMODERN::NEXT-STATEMENT-ID))
      (QUERY "SELECT * FROM foo WHERE id = $1"))
  (LAMBDA (&REST POSTMODERN::PARAMS)
    (POSTMODERN::ENSURE-PREPARED *DATABASE* POSTMODERN::STATEMENT-ID QUERY)
    (POSTMODERN::ALL-ROWS
     (CL-POSTGRES:EXEC-PREPARED *DATABASE* POSTMODERN::STATEMENT-ID
                                POSTMODERN::PARAMS
                                'CL-POSTGRES:LIST-ROW-READER))))
T

ENSURE-PREPARED checks if a statement with the given statement-id exists for the current connection. If yes, it will be re-used, else a new one is created with the given query.

The problem is that the macro generates a new statement id every time it is run. This was a bit surprising, but the fix was simple: capture the function returned by PREPARE once, and use that instead.

(defparameter *prepared* (prepare "SELECT * FROM foo WHERE id = $1"))

(defun run-query (id)
  (funcall *prepared* id))

You can also use Postmodern's DEFPREPARED instead, which similarly defines a new function at the top-level.

This works well, but now we are using top-level forms instead of the nicely encapsulated single form we used earlier.

To fix this, we can use LOAD-TIME-VALUE.

(defun run-query (id)
  (funcall (load-time-value (prepare "SELECT * FROM foo WHERE id = $1")) id))

LOAD-TIME-VALUE is a special operator that

  1. Evaluates the form in the null lexical environment
  2. Delays evaluation of the form until load time
  3. Ensures, if compiled, that the form is evaluated only once

By wrapping PREPARE inside LOAD-TIME-VALUE, we get back our encapsulation while ensuring that a new prepared statement is generated only once (per connection), until the next time RUN-QUERY is recompiled.

Convenience

To avoid the need to wrap PREPARE every time, we can create a convenience macro and use that instead:

(defmacro prepared-query (query &optional (format :rows))
  `(load-time-value (prepare ,query ,format)))

(defun run-query (id)
  (funcall (prepared-query "SELECT * FROM foo WHERE id = $1") id))

Caveats

This only works for compiled code. As mentioned earlier, the form wrapped inside LOAD-TIME-VALUE is evaluated once only if you compile it. If uncompiled, it is evaluated every time so this solution will not work there.

Another thing to remember about LOAD-TIME-VALUE is that the form is evaluated in the null lexical environment. So the form cannot use any lexically scoped variables like in the example below:

(defun run-query (table id)
  (funcall (load-time-value
            (prepare (format nil "SELECT * FROM ~A WHERE id = $1" table)))
           id))

Evaluating this will signal an error because the variable TABLE is unbound.

Wimpie Nortje: Be careful with Ironclad in multi-threaded applications.

· 103 days ago

Update

Thanks to eadmund and glv2 the issue described in this article is now fixed and documented clearly. The fixed version of Ironclad should find its way into the Quicklisp release soon.

Note that there are other objects in Ironclad which are still not thread-safe. Refer to the documentation on how to handle them.

Whenever you write a program that uses cryptographic tools you will use cryptographically secure random numbers. Since most people never write security related software they may be surprised to learn how often they are in this situation.

Cryptographically secure pseudo-random number generators (PRNGs) are a core building block in cryptographic algorithms, which include things like hashing algorithms and generation algorithms for random identifiers with a low probability of repetition. The two main uses are to securely store hashed passwords (e.g. PBKDF2, bcrypt, scrypt) and to generate random UUIDs. Most web applications with user accounts fall into this category, and so does much non-web software.

If your program falls into this group you are almost certainly using Ironclad. The library tries hard to be easy to use even for those without cryptography knowledge. To that end it uses a global PRNG instance with a sensible setting for each particular target OS and expects that most users should never bother to learn about PRNGs.

The Ironclad documentation is clear: don't change the default PRNG! First "You should very rarely need to call make-prng; the default OS-provided PRNG should be appropriate in nearly all cases." And then "You should only use the Fortuna PRNG if your OS does not provide a sufficiently-good PRNG. If you use a Unix or Unix-like OS (e.g. Linux), macOS or Windows, it does."

These two quotes are sufficient to discourage any idle experimentation with PRNG settings, especially if you only want to get the password hashed and move on.

The ease of use comes to a sudden stop if you try to use PRNGs in a threaded application on CCL. The first thread works fine but all others raise error conditions about streams being private to threads. On SBCL the problem is much worse. No error is signaled and everything appears to work but the PRNG frequently returns repeated "random" numbers.

These repeated numbers may never be detected if they are only used for password hashing. If however you use random UUIDs you may from time-to-time get duplicates which will cause havoc in any system expecting objects to have unique identifiers. It will also be extremely difficult to find the cause of the duplicate IDs.

How often do people write multi-threaded CL programs? Very often. By default Hunchentoot handles each HTTP request in its own thread.

The cause of this problem is that Ironclad's default PRNG, :OS, is not implemented to be thread safe. This is the case on Unix where it is a stream to /dev/urandom. I have not checked the thread-safety on Windows where it uses CryptGenRandom.

Solutions

There exists a bug report for Ironclad about the issue but it won't be fixed.

Two options to work around the issue are:

  1. Change the global *PRNG* to Fortuna

     (setf ironclad:*PRNG* (ironclad:make-prng :fortuna))
    
    Advantage:
    It is quick to implement and it appears to be thread safe.
    Disadvantage:
    :FORTUNA is much slower than :OS
  2. Use a thread-local instance of :OS

     (make-thread
      (lambda ()
        (let ((ironclad:*PRNG* (ironclad:make-prng :os)))
          (use-prng))))
    
    Advantage:
    :OS is significantly faster than :FORTUNA. It is also Ironclad's recommended PRNG.
    Disadvantages:
    When the PRNG is only initialized where needed it is easy to miss places where it should be initialized.
    When the PRNG is initialized in every thread it causes unnecessary processing overhead in threads where it is not used.

Summary

It is not safe to use Ironclad-dependent libraries in multi-threaded programs with the default PRNG instance. On SBCL it may appear to work but you will eventually run into hard-to-debug problems with duplicate "random" numbers. On CCL the situation is better because it will signal an error.

Quicklisp newsFebruary 2019 Quicklisp dist update now available

· 107 days ago
New projects:
  • async-process — asynchronous process execution for common lisp — MIT
  • atomics — Portability layer for atomic operations like compare-and-swap (CAS). — Artistic
  • game-math — Math for game development. — MIT
  • generic-cl — Standard Common Lisp functions implemented using generic functions. — MIT
  • simplified-types — Simplification of Common Lisp type specifiers. — MIT
  • sn.man — stub man launcher. It should be a man parser. — mit
Updated projects: agutil, also-alsa, antik, april, cerberus, chipz, chronicity, cl+ssl, cl-all, cl-async, cl-collider, cl-dbi, cl-emb, cl-environments, cl-fluent-logger, cl-glfw3, cl-json-pointer, cl-las, cl-markless, cl-patterns, cl-readline, cl-rules, cl-sat, cl-sat.glucose, cl-sat.minisat, cl-sdl2-image, cl-syslog, cl-tiled, cl-who, clack, closer-mop, clss, commonqt, cover, croatoan, dexador, easy-audio, easy-bind, eazy-project, erudite, fast-websocket, gendl, glsl-toolkit, golden-utils, graph, jonathan, jp-numeral, kenzo, lichat-tcp-server, listopia, literate-lisp, local-time, ltk, mcclim, nodgui, overlord, petalisp, petri, pgloader, phoe-toolbox, pngload, postmodern, qmynd, qt-libs, qtools, qtools-ui, query-fs, remote-js, replic, rpcq, s-xml-rpc, safety-params, sc-extensions, serapeum, shadow, should-test, sly, static-dispatch, stumpwm, sucle, time-interval, trivia, trivial-clipboard, trivial-utilities, type-r, utility, vernacular, with-c-syntax, wuwei.

To get this update, use (ql:update-dist "quicklisp")

Enjoy!

Didier VernaFinal call for papers: ELS 2019, 12th European Lisp Symposium

· 109 days ago
		ELS'19 - 12th European Lisp Symposium

			 Hotel Bristol Palace
			    Genova, Italy

			    April 1-2 2019

		   In cooperation with: ACM SIGPLAN
		In co-location with <Programming> 2019
		  Sponsored by EPITA and Franz Inc.

	       http://www.european-lisp-symposium.org/

Recent news:
- Submission deadline extended to Friday February 8.
- Keynote abstracts now available.
- <Programming> registration now open:
  https://2019.programming-conference.org/attending/Registration
- Student refund program after the conference.


The purpose of the European Lisp Symposium is to provide a forum for
the discussion and dissemination of all aspects of design,
implementation and application of any of the Lisp and Lisp-inspired
dialects, including Common Lisp, Scheme, Emacs Lisp, AutoLisp, ISLISP,
Dylan, Clojure, ACL2, ECMAScript, Racket, SKILL, Hop and so on. We
encourage everyone interested in Lisp to participate.

The 12th European Lisp Symposium invites high quality papers about
novel research results, insights and lessons learned from practical
applications and educational perspectives. We also encourage
submissions about known ideas as long as they are presented in a new
setting and/or in a highly elegant way.

Topics include but are not limited to:

- Context-, aspect-, domain-oriented and generative programming
- Macro-, reflective-, meta- and/or rule-based development approaches
- Language design and implementation
- Language integration, inter-operation and deployment
- Development methodologies, support and environments
- Educational approaches and perspectives
- Experience reports and case studies

We invite submissions in the following forms:

  Papers: Technical papers of up to 8 pages that describe original
    results or explain known ideas in new and elegant ways.

  Demonstrations: Abstracts of up to 2 pages for demonstrations of
    tools, libraries, and applications.

  Tutorials: Abstracts of up to 4 pages for in-depth presentations
    about topics of special interest for at least 90 minutes and up to
    180 minutes.

  The symposium will also provide slots for lightning talks, to be
  registered on-site every day.

All submissions should be formatted following the ACM SIGS guidelines
and include ACM Computing Classification System 2012 concepts and
terms. Submissions should be uploaded to Easy Chair, at the following
address: https://www.easychair.org/conferences/?conf=els2019

Note: to help us with the review process please indicate the type of
submission by entering either "paper", "demo", or "tutorial" in the
Keywords field.


Important dates:
 -    08 Feb 2019 Submission deadline (*** extended! ***)
 -    01 Mar 2019 Notification of acceptance
 -    18 Mar 2019 Final papers due
 - 01-02 Apr 2019 Symposium

Programme chair:
  Nicolas Neuss, FAU Erlangen-Nürnberg, Germany

Programme committee:
  Marco Antoniotti, Universita Milano Bicocca, Italy
  Marc Battyani, FractalConcept, France
  Pascal Costanza, IMEC, ExaScience Life Lab, Leuven, Belgium
  Leonie Dreschler-Fischer, University of Hamburg, Germany
  R. Matthew Emerson, thoughtstuff LLC, USA
  Marco Heisig, FAU, Erlangen-Nuremberg, Germany
  Charlotte Herzeel, IMEC, ExaScience Life Lab, Leuven, Belgium
  Pierre R. Mai, PMSF IT Consulting, Germany
  Breanndán Ó Nualláin, University of Amsterdam, Netherlands
  François-René Rideau, Google, USA
  Alberto Riva, University of Florida, USA
  Alessio Stalla, ManyDesigns Srl, Italy
  Patrick Krusenotto, Deutsche Welle, Germany
  Philipp Marek, Austria
  Sacha Chua, Living an Awesome Life, Canada

Search Keywords:

#els2019, ELS 2019, ELS '19, European Lisp Symposium 2019,
European Lisp Symposium '19, 12th ELS, 12th European Lisp Symposium,
European Lisp Conference 2019, European Lisp Conference '19

Zach BeaneWant to write Common Lisp for RavenPack? | R. Matthew Emerson

· 125 days ago

Paul KhuongPreemption Is GC for Memory Reordering

· 131 days ago

I previously noted how preemption makes lock-free programming harder in userspace than in the kernel. I now believe that preemption ought to be treated as a sunk cost, like garbage collection: we’re already paying for it, so we might as well use it. Interrupt processing (returning from an interrupt handler, actually) is fully serialising on x86, and on other platforms, no doubt: any userspace instruction either fully executes before the interrupt, or is (re-)executed from scratch some time after the return back to userspace. That’s something we can abuse to guarantee ordering between memory accesses, without explicit barriers.

This abuse of interrupts is complementary to Bounded TSO. Bounded TSO measures the hardware limit on the number of store instructions that may concurrently be in-flight (and combines that with the knowledge that instructions are retired in order) to guarantee liveness without explicit barriers, with no overhead, and usually marginal latency. However, without worst-case execution time information, it’s hard to map instruction counts to real time. Tracking interrupts lets us determine when enough real time has elapsed that earlier writes have definitely retired, albeit after a more conservative delay than Bounded TSO’s typical case.

I reached this position after working on two lock-free synchronisation primitives—event counts, and asymmetric flag flips as used in hazard pointers and epoch reclamation—that are similar in that a slow path waits for a sign of life from a fast path, but differ in the way they handle “stuck” fast paths. I’ll cover the event count and flag flip implementations that I came to on Linux/x86[-64], which both rely on interrupts for ordering. Hopefully that will convince you too that preemption is a useful source of pre-paid barriers for lock-free code in userspace.

I’m writing this for readers who are already familiar with lock-free programming, safe memory reclamation techniques in particular, and have some experience reasoning with formal memory models. For more references, Samy’s overview in the ACM Queue is a good resource. I already committed the code for event counts in Concurrency Kit, and for interrupt-based reverse barriers in my barrierd project.

Event counts with x86-TSO and futexes

An event count is essentially a version counter that lets threads wait until the current version differs from an arbitrary prior version. A trivial “wait” implementation could spin on the version counter. However, the value of event counts is that they let lock-free code integrate with OS-level blocking: waiters can grab the event count’s current version v0, do what they want with the versioned data, and wait for new data by sleeping rather than burning cycles until the event count’s version differs from v0. The event count is a common synchronisation primitive that is often reinvented and goes by many names (e.g., blockpoints); what matters is that writers can update the version counter, and waiters can read the version, run arbitrary code, then efficiently wait while the version counter is still equal to that previous version.

The explicit version counter solves the lost wake-up issue associated with misused condition variables, as in the pseudocode below.

bad condition waiter:

while True:
    atomically read data
    if need to wait:
        WaitOnConditionVariable(cv)
    else:
        break

In order to work correctly, condition variables require waiters to acquire a mutex that protects both data and the condition variable, before checking that the wait condition still holds and then waiting on the condition variable.

good condition waiter:

while True:
    with(mutex):
        read data
        if need to wait:
            WaitOnConditionVariable(cv, mutex)
        else:
            break

Waiters must prevent writers from making changes to the data, otherwise the data change (and associated condition variable wake-up) could occur between checking the wait condition, and starting to wait on the condition variable. The waiter would then have missed a wake-up and could end up sleeping forever, waiting for something that has already happened.

good condition waker:

with(mutex):
    update data
    SignalConditionVariable(cv)
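
Translated into Common Lisp with bordeaux-threads, the correct waiter/waker pattern above might look roughly like this; DATA-READY-P, READ-DATA and UPDATE-DATA are placeholders for application code:

(defvar *lock* (bt:make-lock))
(defvar *cv* (bt:make-condition-variable))

(defun waiter ()
  (bt:with-lock-held (*lock*)
    ;; Check the condition and wait under the same lock that protects
    ;; the data, so a wake-up can't slip in between the two steps.
    (loop until (data-ready-p)
          do (bt:condition-wait *cv* *lock*))
    (read-data)))

(defun waker ()
  (bt:with-lock-held (*lock*)
    (update-data)
    (bt:condition-notify *cv*)))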

The six diagrams below show the possible interleavings between the signaler (writer) making changes to the data and waking waiters, and a waiter observing the data and entering the queue to wait for changes. The two left-most diagrams don’t interleave anything; these are the only scenarios allowed by correct locking. The remaining four actually interleave the waiter and signaler, and show that, while three are accidentally correct (lucky), there is one case, WSSW, where the waiter misses its wake-up.

If any waiter can prevent writers from making progress, we don’t have a lock-free protocol. Event counts let waiters detect when they would have been woken up (the event count’s version counter has changed), and thus patch up this window where waiters can miss wake-ups for data changes they have yet to observe. Crucially, waiters detect lost wake-ups, rather than preventing them by locking writers out. Event counts thus preserve lock-freedom (and even wait-freedom!).

We could, for example, use an event count in a lock-free ring buffer: rather than making consumers spin on the write pointer, the write pointer could be encoded in an event count, and consumers would then efficiently block on that, without burning CPU cycles to wait for new messages.
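
As a sketch of that usage pattern in Common Lisp (hypothetical names throughout: EVENT-COUNT-VALUE, EVENT-COUNT-AWAIT, RING-POP and PROCESS are not Concurrency Kit's or any library's actual interface), a consumer loop might look like:

(defun consumer-loop (ring event-count)
  (loop
    (let ((version (event-count-value event-count)))  ; 1. snapshot the version
      (multiple-value-bind (item found-p) (ring-pop ring)
        (if found-p
            (process item)                            ; 2. use the versioned data
            ;; 3. block until a producer bumps the event count, i.e. until
            ;;    the version differs from the one read in step 1.
            (event-count-await event-count version))))))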

The challenging part about implementing event counts isn’t making sure to wake up sleepers, but to only do so when there are sleepers to wake. For some use cases, we don’t need to do any active wake-up, because exponential backoff is good enough: if version updates signal the arrival of a response in a request/response communication pattern, exponential backoff, e.g., with a 1.1x backoff factor, could bound the increase in response latency caused by the blind sleep during backoff, e.g., to 10%.

Unfortunately, that’s not always applicable. In general, we can’t assume that signals corresponds to responses for prior requests, and we must support the case where progress is usually fast enough that waiters only spin for a short while before grabbing more work. The latter expectation means we can’t “just” unconditionally execute a syscall to wake up sleepers whenever we increment the version counter: that would be too slow. This problem isn’t new, and has a solution similar to the one deployed in adaptive spin locks.

The solution pattern for adaptive locks relies on tight integration with an OS primitive, e.g., futexes. The control word, the machine word on which waiters spin, encodes its usual data (in our case, a version counter), as well as a new flag to denote that there are sleepers waiting to be woken up with an OS syscall. Every write to the control word uses atomic read-modify-write instructions, and before sleeping, waiters ensure the “sleepers are present” flag is set, then make a syscall to sleep only if the control word is still what they expect, with the sleepers flag set.

OpenBSD’s compatibility shim for Linux’s futexes is about as simple an implementation of the futex calls as it gets. The OS code for futex wake and wait is identical to what userspace would do with mutexes and condition variables (waitqueues). Waiters lock out wakers for the futex word or a coarser superset, check that the futex word’s value is as expected, and enters the futex’s waitqueue. Wakers acquire the futex word for writes, and wake up the waitqueue. The difference is that all of this happens in the kernel, which, unlike userspace, can force the scheduler to be helpful. Futex code can run in the kernel because, unlike arbitrary mutex/condition variable pairs, the protected data is always a single machine integer, and the wait condition an equality test. This setup is simple enough to fully implement in the kernel, yet general enough to be useful.

OS-assisted conditional blocking is straightforward enough to adapt to event counts. The control word is the event count’s version counter, with one bit stolen for the “sleepers are present” flag (sleepers flag).

Incrementing the version counter can use a regular atomic increment; we only need to make sure we can tell whether the sleepers flag might have been set before the increment. If the sleepers flag was set, we clear it (with an atomic bit reset), and wake up any OS thread blocked on the control word.

increment event count:

old <- fetch_and_add(event_count.counter, 2)  # flag is in the low bit
if (old & 1):
    atomic_and(event_count.counter, -2)
    signal waiters on event_count.counter

Waiters can spin for a while, waiting for the version counter to change. At some point, a waiter determines that it's time to stop wasting CPU time. The waiter then sets the sleepers flag with a compare-and-swap: the CAS (compare-and-swap) can only fail because the counter's value has changed or because the flag is already set. In the former failure case, it's finally time to stop waiting. In the latter failure case, or if the CAS succeeded, the flag is now set. The waiter can then make a syscall to block on the control word, but only if the control word still has the sleepers flag set and contains the same expected (old) version counter.

wait until event count differs from prev:

repeat k times:
    if (event_count.counter / 2) != prev:  # flag is in low bit.
        return
compare_and_swap(event_count.counter, prev * 2, prev * 2 + 1)
if cas_failed and cas_old_value != (prev * 2 + 1):
    return
repeat k times:
    if (event_count.counter / 2) != prev:
        return
sleep_if(event_count.counter == prev * 2 + 1)

This scheme works, and offers decent performance. In fact, it’s good enough for Facebook’s Folly.
I certainly don’t see how we can improve on that if there are concurrent writers (incrementing threads).

However, if we go back to the ring buffer example, there is often only one writer per ring. Enqueueing an item in a single-producer ring buffer incurs no atomic, only a release store: the write pointer increment only has to be visible after the data write, which is always the case under the TSO memory model (including x86). Replacing the write pointer in a single-producer ring buffer with an event count where each increment incurs an atomic operation is far from a no-brainer. Can we do better, when there is only one incrementer?

On x86 (or any of the zero other architectures with non-atomic read-modify-write instructions and TSO), we can... but we must accept some weirdness.

The operation that must really be fast is incrementing the event counter, especially when the sleepers flag is not set. Setting the sleepers flag, on the other hand, may be slower and use atomic instructions, since it only happens when the executing thread is waiting for fresh data.

I suggest that we perform the former, the increment on the fast path, with a non-atomic read-modify-write instruction, either inc mem or xadd mem, reg. If the sleepers flag is in the sign bit, we can detect it (modulo a false positive on wrap-around) in the condition codes computed by inc; otherwise, we must use xadd (fetch-and-add) and look at the flag bit in the fetched value.

The usual ordering-based arguments are no help in this kind of asymmetric synchronisation pattern. Instead, we must go directly to the x86-TSO memory model. All atomic (LOCK prefixed) instructions conceptually flush the executing core’s store buffer, grab an exclusive lock on memory, and perform the read-modify-write operation with that lock held. Thus, manipulating the sleepers flag can’t lose updates that are already visible in memory, or on their way from the store buffer. The RMW increment will also always see the latest version update (either in global memory, or in the only incrementer’s store buffer), so won’t lose version updates either. Finally, scheduling and thread migration must always guarantee that the incrementer thread sees its own writes, so that won’t lose version updates.

increment event count without atomics in the common case:

old <- non_atomic_fetch_and_add(event_count.counter, 2)
if (old & 1):
    atomic_and(event_count.counter, -2)
    signal waiters on event_count.counter

The only thing that might be silently overwritten is the sleepers flag: a waiter might set that flag in memory just after the increment’s load from memory, or while the increment reads a value with the flag unset from the local store buffer. The question is then how long waiters must spin before either observing an increment, or knowing that the flag flip will be observed by the next increment. That question can’t be answered with the memory model, and worst-case execution time bounds are a joke on contemporary x86.

I found an answer by remembering that IRET, the instruction used to return from interrupt handlers, is a full barrier.1 We also know that interrupts happen at frequent and regular intervals, if only for the preemption timer (every 4-10ms on stock Linux/x86oid).

Regardless of the bound on store visibility, a waiter can flip the sleepers-are-present flag, spin on the control word for a while, and then start sleeping for short amounts of time (e.g., a millisecond or two at first, then 10 ms, etc.): the spin time is long enough in the vast majority of cases, but could still, very rarely, be too short.

At some point, we’d like to know for sure that, since we have yet to observe a silent overwrite of the sleepers flag or any activity on the counter, the flag will always be observed and it is now safe to sleep forever. Again, I don’t think x86 offers any strict bound on this sort of thing. However, one second seems reasonable. Even if a core could stall for that long, interrupts fire on every core several times a second, and returning from interrupt handlers acts as a full barrier. No write can remain in the store buffer across interrupts, interrupts that occur at least once per second. It seems safe to assume that, once no activity has been observed on the event count for one second, the sleepers flag will be visible to the next increment.

That assumption is only safe if interrupts do fire at regular intervals. Some latency sensitive systems dedicate cores to specific userspace threads, and move all interrupt processing and preemption away from those cores. A correctly isolated core running Linux in tickless mode, with a single runnable process, might not process interrupts frequently enough. However, this kind of configuration does not happen by accident. I expect that even a half-second stall in such a system would be treated as a system error, and hopefully trigger a watchdog. When we can’t count on interrupts to get us barriers for free, we can instead rely on practical performance requirements to enforce a hard bound on execution time.

Either way, waiters set the sleepers flag, but can’t rely on it being observed until, very conservatively, one second later. Until that time has passed, waiters spin on the control word, then block for short, but growing, amounts of time. Finally, if the control word (event count version and sleepers flag) has not changed in one second, we assume the incrementer has no write in flight, and will observe the sleepers flag; it is safe to block on the control word forever.

wait until event count differs from prev:

repeat k times:
    if (event_count.counter / 2) != prev:
        return
compare_and_swap(event_count.counter, 2 * prev, 2 * prev + 1)
if cas_failed and cas_old_value != 2 * prev + 1:
    return
repeat k times:
    if event_count.counter != 2 * prev + 1:
        return
repeat for 1 second:
    sleep_if_until(event_count.counter == 2 * prev + 1,
                   $exponential_backoff)
    if event_count.counter != 2 * prev + 1:
        return
sleep_if(event_count.counter == prev * 2 + 1)

That’s the solution I implemented in this pull request for SPMC and MPMC event counts in concurrency kit. The MP (multiple producer) implementation is the regular adaptive logic, and matches Folly’s strategy. It needs about 30 cycles for an uncontended increment with no waiter, and waking up sleepers adds another 700 cycles on my E5-46xx (Linux 4.16). The single producer implementation is identical for the slow path, but only takes ~8 cycles per increment with no waiter, and, eschewing atomic instruction, does not flush the pipeline (i.e., the out-of-order execution engine is free to maximise throughput). The additional overhead for an increment without waiter, compared to a regular ring buffer pointer update, is 3-4 cycles for a single predictable conditional branch or fused test and branch, and the RMW’s load instead of a regular add/store. That’s closer to zero overhead, which makes it much easier for coders to offer OS-assisted blocking in their lock-free algorithms, without agonising over the penalty when no one needs to block.

Asymmetric flag flip with interrupts on Linux

Hazard pointers and epoch reclamation are two different memory reclamation techniques in which the fundamental complexity stems from nearly identical synchronisation requirements: rarely, a cold code path (which is allowed to be very slow) writes to memory, and must know when another, much hotter, code path is guaranteed to observe the slow path's last write.

For hazard pointers, the cold code path waits until, having overwritten an object’s last persistent reference in memory, it is safe to destroy the pointee. The hot path is the reader:

1. read pointer value *(T **)x.
2. write pointer value to hazard pointer table
3. check that pointer value *(T **)x has not changed

Similarly, for epoch reclamation, a read-side section will grab the current epoch value, mark itself as reading in that epoch, then confirm that the epoch hasn’t become stale.

1. $epoch <- current epoch
2. publish self as entering a read-side section under $epoch
3. check that $epoch is still current, otherwise retry

Under a sequentially consistent (SC) memory model, the two sequences are valid with regular (atomic) loads and stores. The slow path can always make its write, then scan every other thread’s single-writer data to see if any thread has published something that proves it executed step 2 before the slow path’s store (i.e., by publishing the old pointer or epoch value).

The diagrams below show all possible interleavings. In all cases, once there is no evidence that a thread has failed to observe the slow path’s new write, we can correctly assume that all threads will observe the write. I simplified the diagrams by not interleaving the first read in step 1: its role is to provide a guess for the value that will be re-read in step 3, so, at least with respect to correctness, that initial read might as well be generating random values. I also kept the second “scan” step in the slow path abstract. In practice, it’s a non-snapshot read of all the epoch or hazard pointer tables for threads that execute the fast path: the slow path can assume an epoch or pointer will not be resurrected once the epoch or pointer is absent from the scan.

No one implements SC in hardware. X86 and SPARC offer the strongest practical memory model, Total Store Ordering, and that's still not enough to correctly execute the read-side critical sections above without special annotations. Under TSO, reads (e.g., step 3) are allowed to execute before writes (e.g., step 2). X86-TSO models that as a buffer in which stores may be delayed, and that's what the scenarios below show, with steps 2 and 3 of the fast path reversed (the slow path can always be instrumented to recover sequential order, it's meant to be slow). The TSO interleavings only differ from the SC ones when the fast path's steps 2 and 3 are separated by one of the slow path's steps: when the two steps are adjacent, their order relative to the slow path's steps is unaffected by TSO's delayed stores. TSO is so strong that we only have to fix one case, FSSF, where the slow path executes in the middle of the fast path, with the reversal of store and load order allowed by TSO.

Simple implementations plug this hole with a store-load barrier between the second and third steps, or implement the store with an atomic read-modify-write instruction that doubles as a barrier. Both modifications are safe and recover SC semantics, but incur a non-negligible overhead (the barrier forces the out of order execution engine to flush before accepting more work) which is only necessary a minority of the time.

The pattern here is similar to the event count, where the slow path signals the fast path that the latter should do something different. However, where the slow path for event counts wants to wait forever if the fast path never makes progress, hazard pointer and epoch reclamation must detect that case and ignore sleeping threads (that are not in the middle of a read-side SMR critical section).

In this kind of asymmetric synchronisation pattern, we wish to move as much of the overhead to the slow (cold) path. Linux 4.3 gained the membarrier syscall for exactly this use case. The slow path can execute its write(s) before making a membarrier syscall. Once the syscall returns, any fast path write that has yet to be visible (hasn’t retired yet), along with every subsequent instruction in program order, started in a state where the slow path’s writes were visible. As the next diagram shows, this global barrier lets us rule out the one anomalous execution possible under TSO, without adding any special barrier to the fast path.

The problem with membarrier is that it comes in two flavours: slow, or not scalable. The initial, unexpedited, version waits for kernel RCU to run its callback, which, on my machine, takes anywhere between 25 and 50 milliseconds. The reason it's so slow is that the conditions for an RCU grace period to elapse are more demanding than a global barrier, and may even require multiple such barriers. For example, if we used the same scheme to nest epoch reclamation ten deep, the outermost reclaimer would be 1024 times slower than the innermost one. In reaction to this slowness, potential users of membarrier went back to triggering IPIs, e.g., by mprotecting a dummy page. mprotect isn't guaranteed to act as a barrier, and does not do so on AArch64, so Linux 4.16 added an "expedited" mode to membarrier. In that expedited mode, each membarrier syscall sends an IPI to every other core... when I look at machines with hundreds of cores, \(n - 1\) IPIs per call, issued a couple of times per second on each of the \(n\) cores, start to sound like a bad idea.

Let’s go back to the observation we made for event count: any interrupt acts as a barrier for us, in that any instruction that retires after the interrupt must observe writes made before the interrupt. Once the hazard pointer slow path has overwritten a pointer, or the epoch slow path advanced the current epoch, we can simply look at the current time, and wait until an interrupt has been handled at a later time on all cores. The slow path can then scan all the fast path state for evidence that they are still using the overwritten pointer or the previous epoch: any fast path that has not published that fact before the interrupt will eventually execute the second and third steps after the interrupt, and that last step will notice the slow path’s update.

There’s a lot of information in /proc that lets us conservatively determine when a new interrupt has been handled on every core. However, it’s either too granular (/proc/stat) or extremely slow to generate (/proc/schedstat). More importantly, even with ftrace, we can’t easily ask to be woken up when something interesting happens, and are forced to poll files for updates (never mind the weirdly hard to productionalise kernel interface).

What we need is a way to read, for each core, the last time it was definitely processing an interrupt. Ideally, we could also block and let the OS wake up our waiter on changes to the oldest “last interrupt” timestamp, across all cores. On x86, that’s enough to get us the asymmetric barriers we need for hazard pointers and epoch reclamation, even if only IRET is serialising, and not interrupt handler entry. Once a core’s update to its “last interrupt” timestamp is visible, any write prior to the update, and thus any write prior to the interrupt is also globally visible: we can only observe the timestamp update from a different core than the updater, in which case TSO saves us, or after the handler has returned with a serialising IRET.

We can bundle all that logic in a short eBPF program.2 The program has a map of thread-local arrays (of 1 CLOCK_MONOTONIC timestamp each), a map of perf event queues (one per CPU), and an array of 1 "watermark" timestamp. Whenever the program runs, it gets the current time. That time will go in the thread-local array of interrupt timestamps. Before storing a new value in that array, the program first reads the previous interrupt time: if that time is less than or equal to the watermark, we should wake up userspace by enqueueing an event in perf. The enqueueing is conditional because perf has more overhead than a thread-local array, and because we want to minimise spurious wake-ups. A high signal-to-noise ratio lets userspace set up the read end of the perf queue to wake up on every event and thus minimise update latency.

We now need a single global daemon to attach the eBPF program to an arbitrary set of software tracepoints triggered by interrupts (or PMU events that trigger interrupts), to hook the perf fds to epoll, and to re-read the map of interrupt timestamps whenever epoll detects a new perf event. That's what the rest of the code handles: setting up tracepoints, attaching the eBPF program, convincing perf to wake us up, and hooking it all up to epoll. On my fully loaded 24-core E5-46xx running Linux 4.18 with security patches, the daemon uses ~1-2% (much less on 4.16) of a core to read the map of timestamps each time it's woken up, roughly every ~4 milliseconds. perf shows the non-JITted eBPF program itself uses ~0.1-0.2% of every core.

Amusingly enough, while eBPF offers maps that are safe for concurrent access in eBPF programs, the same maps come with no guarantee when accessed from userspace, via the syscall interface. However, the implementation uses a hand-rolled long-by-long copy loop, and, on x86-64, our data all fit in longs. I'll hope that the kernel's compilation flags (e.g., -ffreestanding) suffice to prevent GCC from recognising memcpy or memmove, and that we thus get atomic stores and loads on x86-64. Given the quality of eBPF documentation, I'll bet that this implementation accident is actually part of the API. Every BPF map is single writer (either per-CPU in the kernel, or single-threaded in userspace), so this should work.

Once the barrierd daemon is running, any program can mmap its data file to find out the last time we definitely know each core had interrupted userspace, without making any further syscall or incurring any IPI. We can also use regular synchronisation to let the daemon wake up threads waiting for interrupts as soon as the oldest interrupt timestamp is updated. Applications don’t even need to call clock_gettime to get the current time: the daemon also works in terms of a virtual time that it updates in the mmaped data file.

The barrierd data file also includes an array of per-CPU structs with each core’s timestamps (both from CLOCK_MONOTONIC and in virtual time). A client that knows it will only execute on a subset of CPUs, e.g., cores 2-6, can compute its own “last interrupt” timestamp by only looking at entries 2 to 6 in the array. The daemon even wakes up any futex waiter on the per-CPU values whenever they change. The convenience interface is pessimistic, and assumes that client code might run on every configured core. However, anyone can mmap the same file and implement tighter logic.

Again, there’s a snag with tickless kernels. In the default configuration already, a fully idle core might not process timer interrupts. The barrierd daemon detects when a core is falling behind, and starts looking for changes to /proc/stat. This backup path is slower and coarser grained, but always works with idle cores. More generally, the daemon might be running on a system with dedicated cores. I thought about causing interrupts by re-affining RT threads, but that seems counterproductive. Instead, I think the right approach is for users of barrierd to treat dedicated cores specially. Dedicated threads can’t (shouldn’t) be interrupted, so they can regularly increment a watchdog counter with a serialising instruction. Waiters will quickly observe a change in the counters for dedicated threads, and may use barrierd to wait for barriers on preemptively shared cores. Maybe dedicated threads should be able to hook into barrierd and check-in from time to time. That would break the isolation between users of barrierd, but threads on dedicated cores are already in a privileged position.

I quickly compared the barrier latency on an unloaded 4-way E5-46xx running Linux 4.16, with a sample size of 20000 observations per method (I had to remove one outlier at 300ms). The synchronous methods mprotect (which abuses mprotect to send IPIs by removing and restoring permissions on a dummy page), or explicit IPI via expedited membarrier, are much faster than the other (unexpedited membarrier with kernel RCU, or barrierd that counts interrupts). We can zoom in on the IPI-based methods, and see that an expedited membarrier (IPI) is usually slightly faster than mprotect; IPI via expedited membarrier hits a worst-case of 0.041 ms, versus 0.046 for mprotect.

The performance of IPI-based barriers should be roughly independent of system load. However, we did observe a slowdown for expedited membarrier (between \(68.4-73.0\%\) of the time, \(p < 10\sp{-12}\) according to a binomial test3) on the same 4-way system, when all CPUs were running CPU-intensive code at low priority. In this second experiment, we have a sample size of one million observations for each method, and the worst case for IPI via expedited membarrier was 0.076 ms (0.041 ms on an unloaded system), compared to a more stable 0.047 ms for mprotect.

Now for non-IPI methods: they should be slower than methods that trigger synchronous IPIs, but hopefully have lower overhead and scale better, while offering usable latencies.

On an unloaded system, the interrupts that drive barrierd are less frequent, sometimes outright absent, so unexpedited membarrier achieves faster response times. We can even observe barrierd’s fallback logic, which scans /proc/stat for evidence of idle CPUs after 10 ms of inaction: that’s the spike at 20ms. The values for vtime show the additional slowdown we can expect if we wait on barrierd’s virtual time, rather than directly reading CLOCK_MONOTONIC. Overall, the worst case latencies for barrierd (53.7 ms) and membarrier (39.9 ms) aren’t that different, but I should add another fallback mechanism based on membarrier to improve barrierd’s performance on lightly loaded machines.

When the same 4-way, 24-core, system is under load, interrupts are fired much more frequently and reliably, so barrierd shines, but everything has a longer tail, simply because of preemption of the benchmark process. Out of the one million observations we have for each of unexpedited membarrier, barrierd, and barrierd with virtual time on this loaded system, I eliminated 54 values over 100 ms (18 for membarrier, 29 for barrierd, and 7 for virtual time). The rest is shown below. barrierd is consistently much faster than membarrier, with a geometric mean speedup of 23.8x. In fact, not only can we expect barrierd to finish before an unexpedited membarrier \(99.99\%\) of the time (\(p<10\sp{-12}\) according to a binomial test), but we can even expect barrierd to be 10 times as fast \(98.3-98.5\%\) of the time (\(p<10\sp{-12}\)). The gap is so wide that even the opportunistic virtual-time approach is faster than membarrier (geometric mean of 5.6x), but this time with a mere three 9s (as fast as membarrier \(99.91-99.96\%\) of the time, \(p<10\sp{-12}\)).

With barrierd, we get implicit barriers with worse overhead than unexpedited membarrier (which is essentially free since it piggybacks on kernel RCU, another sunk cost), but 1/10th the latency (0-4 ms instead of 25-50 ms). In addition, interrupt tracking is per-CPU, not per-thread, so it only has to happen in a global single-threaded daemon; the rest of userspace can obtain the information it needs without causing additional system overhead. More importantly, threads don’t have to block if they use barrierd to wait for a system-wide barrier. That’s useful when, e.g., a thread pool worker is waiting for a reverse barrier before sleeping on a futex. When that worker blocks in membarrier for 25ms or 50ms, there’s a potential hiccup where a work unit could sit in the worker’s queue for that amount of time before it gets processed. With barrierd (or the event count described earlier), the worker can spin and wait for work units to show up until enough time has passed to sleep on the futex.

While I believe that information about interrupt times should be made available without tracepoint hacks, I don’t know if a syscall like membarrier is really preferable to a shared daemon like barrierd. The one salient downside is that barrierd slows down when some CPUs are idle; that’s something we can fix by including a membarrier fallback, or by sacrificing power consumption and forcing kernel ticks, even for idle cores.

Preemption can be an asset

When we write lock-free code in userspace, we always have preemption in mind. In fact, the primary reason for lock-free code in userspace is to ensure consistent latency despite potentially adversarial scheduling. We spend so much effort to make our algorithms work despite interrupts and scheduling that we can fail to see how interrupts can help us. Obviously, there's a cost to making our code preemption-safe, but preemption isn't optional. Much like garbage collection in managed languages, preemption is a feature we can't turn off. Unlike GC, it's not obvious how to make use of preemption in lock-free code, but this post shows it's not impossible.

We can use preemption to get asymmetric barriers, nearly for free, with a daemon like barrierd. I see a duality between preemption-driven barriers and techniques like Bounded TSO: the former are relatively slow, but offer hard bounds, while the latter guarantee liveness, usually with negligible latency, but without any time bound.

I used preemption to make single-writer event counts faster (comparable to a regular non-atomic counter), and to provide a lower-latency alternative to membarrier’s asymmetric barrier. In a similar vein, SPeCK uses time bounds to ensure scalability, at the expense of a bit of latency, by enforcing periodic TLB reloads instead of relying on synchronous shootdowns. What else can we do with interrupts, timer or otherwise?

Thank you Samy, Gabe, and Hanes for discussions on an earlier draft. Thank you Ruchir for improving this final version.

P.S. event count without non-atomic RMW?

The single-producer event count specialisation relies on non-atomic read-modify-write instructions, which are hard to find outside x86. I think the flag flip pattern in epoch and hazard pointer reclamation shows that’s not the only option.

We need two control words, one for the version counter, and another for the sleepers flag. The version counter is only written by the incrementer, with regular non-atomic instructions, while the flag word is written to by multiple producers, always with atomic instructions.

The challenge is that OS blocking primitives like futex only let us conditionalise the sleep on a single word. We could try to pack a pair of 16-bit shorts in a 32-bit int, but that doesn’t give us a lot of room to avoid wrap-around. Otherwise, we can guarantee that the sleepers flag is only cleared immediately before incrementing the version counter. That suffices to let sleepers only conditionalise on the version counter... but we still need to trigger a wake-up if the sleepers flag was flipped between the last clearing and the increment.

On the increment side, the logic looks like

must_wake = false
if sleepers flag is set:
    must_wake = true
    clear sleepers flag
increment version
if must_wake or sleepers flag is set:
    wake up waiters

and, on the waiter side, we find

if version has changed
    return
set sleepers flag
sleep if version has not changed

The separate “sleepers flag” word doubles the space usage, compared to the single flag bit in the x86 single-producer version. Composite OS uses that two-word solution in blockpoints, and the advantages seem to be simplicity and additional flexibility in data layout. I don’t know that we can implement this scheme more efficiently in the single producer case, under other memory models than TSO. If this two-word solution is only useful for non-x86 TSO, that’s essentially SPARC, and I’m not sure that platform still warrants the maintenance burden.

But, we’ll see, maybe we can make the above work on AArch64 or POWER.


  1. I actually prefer another, more intuitive, explanation that isn't backed by official documentation. The store buffer in x86-TSO doesn't actually exist in silicon: it represents the instructions waiting to be retired in the out-of-order execution engine. Precise interrupts seem to imply that even entering the interrupt handler flushes the OOE engine's state, and thus acts as a full barrier that flushes the conceptual store buffer.

  2. I used raw eBPF instead of the C frontend because that frontend relies on a ridiculous amount of runtime code that parses an ELF file when loading the eBPF snippet to know what eBPF maps to setup and where to backpatch their fd number. I also find there’s little advantage to the C frontend for the scale of eBPF programs (at most 4096 instructions, usually much fewer). I did use clang to generate a starting point, but it’s not that hard to tighten 30 instructions in ways that a compiler can’t without knowing what part of the program’s semantics is essential. The bpf syscall can also populate a string buffer with additional information when loading a program. That’s helpful to know that something was assembled wrong, or to understand why the verifier is rejecting your program.

  3. I computed these extreme confidence intervals with my old code to test statistical SLOs.

Zach BeaneConverter of maps from Reflex Arena to QuakeWorld

· 132 days ago

Quicklisp newsJanuary 2019 Quicklisp dist update now available

· 132 days ago
New projects: 
  • cl-markless — A parser implementation for Markless — Artistic
  • data-lens — Utilities for building data transformations from composable functions, modeled on lenses and transducers — MIT
  • iso-8601-date — Miscellaneous date routines based around ISO-8601 representation. — LLGPL
  • literate-lisp — a literate programming tool to write common lisp codes in org file. — MIT
  • magicl — Matrix Algebra proGrams In Common Lisp — BSD 3-Clause (See LICENSE.txt)
  • nodgui — LTK — LLGPL
  • petri — An implementation of Petri nets — MIT
  • phoe-toolbox — A personal utility library — BSD 2-clause
  • ql-checkout — ql-checkout is a library intended to check out quicklisp-maintained libraries with vcs. — mit
  • qtools-commons — Qtools utilities and functions — Artistic License 2.0
  • replic — A framework to build readline applications out of existing code. — MIT
  • slk-581 — Generate Australian Government SLK-581 codes. — LLGPL
  • sucle — Cube Demo Game — MIT
  • water — An ES6-compatible class definition for Parenscript — MIT
  • winhttp — FFI wrapper to WINHTTP — MIT
Updated projects: 3d-matrices, 3d-vectors, april, asd-generator, chirp, chronicity, cl-async, cl-batis, cl-collider, cl-dbi, cl-dbi-connection-pool, cl-enumeration, cl-forms, cl-hamcrest, cl-hash-util, cl-las, cl-libevent2, cl-libuv, cl-mixed, cl-neovim, cl-opengl, cl-patterns, cl-punch, cl-sat, cl-sat.glucose, cl-sat.minisat, cl-syslog, cl-unification, clad, clazy, climacs, clip, closer-mop, croatoan, dbus, deeds, defenum, definitions, dufy, easy-bind, easy-routes, eclector, esrap, f2cl, flare, flexi-streams, flow, gendl, glsl-toolkit, harmony, helambdap, hu.dwim.debug, humbler, inquisitor, lake, legit, lichat-protocol, lisp-binary, lisp-chat, log4cl, lquery, ltk, mcclim, new-op, omer-count, ook, overlord, petalisp, pjlink, plump, postmodern, protest, qtools, query-fs, ratify, read-number, rpcq, safety-params, sc-extensions, serapeum, slime, sly, specialization-store, spinneret, staple, static-dispatch, stumpwm, sxql, tooter, trivial-clipboard, trivial-sockets, utilities.print-items, vernacular, websocket-driver, wild-package-inferred-system, xhtmlambda.

Removed projects: cl-llvm, cl-skkserv

The removed projects no longer build for me.

To get this update, use (ql:update-dist "quicklisp"). Enjoy!

Zach BeaneASCII Art Animations in Lisp

· 133 days ago

TurtleWareMy everlasting Common Lisp TODO list

· 136 days ago

We have minds capable of dreaming up almost infinitely ambitious plans and only time to realize a pathetic fraction of them. If God exists this is a proof of his cruelty.

This quote is a paraphrase of something I've read in the past. I couldn't find where it's from. If you do know where it comes from - please contact me!

I've hinted a few times that I have a "lengthy" list of things to do. New year is a good opportunity to publish a blog post about it. I'm going to skip some entries which seem too far-fetched (some could have slipped in anyway) and some ideas I don't want to share yet.

Please note that none of these entries or estimates are declarations or project road maps - this is my personal to-do list, which may change at any time. Most notably, I am aware that these estimates are too ambitious and it is unlikely that all will be met.

ECL improvements

In its early days ECL had both green threads and locatives. They were removed in favor of native threads. I think that both are very valuable constructs which may function independently or even work together (i.e. native threads have a pool of specialized green threads sharing data local to them). I want to add locatives too, since I'm already adding new built-in classes.

ETA: first quarter of 2019 (before 16.2.0 release).

There might be better interfaces for the same goal, but there are already libraries which benefit from APIs defined in CLtL2 which didn't get through to the ANSI standard. They mostly revolve around environment access and better control over compiler workings (querying declarations, writing a code walker without gross hacks etc).

ETA: first quarter of 2019 (before 16.2.0 release).

ECL has two major performance bottlenecks. One is compilation time (that is actually GCC's fault), the second is generic function dispatch. In a world where many libraries embrace the CLOS programming paradigm, this is a very important area of improvement.

Professor Robert Strandh paved the way by inventing a method to greatly improve generic function dispatch speed. The team of Clasp developers implemented it, proving that the idea is a practical one for ECL (Clasp was forked from ECL and they still share big chunks of architecture - it is not rare that we share bug reports and fixes across our code bases). We want to embrace this dispatch strategy.

ETA: second quarter of 2019 (after 16.2.0 release).

I'm thinking about adding optional modules for SSL and a file-based SQL database. Both libraries may be statically compiled, which makes them good candidates that could work even for ECL builds without FASL loading support.

ETA: third quarter of 2019.

  • Compiler modernization

Admittedly I already had three tries at this task (and each ended with a failure - changes were too radical to propose in ECL). I believe that four makes a charm. Currently ECL has two passes (which are tightly coupled) - the first one for Common Lisp compilation to IR and the second one for transpiling to C/C++ code and compiling it with GCC.

The idea is to decouple these passes and have: a frontend pass, numerous optimization passes (for the sake of maintainability) and a code generation pass which could have numerous backends (C, C++, bytecodes, LLVM etc).

ETA: third quarter of 2019.

CDR has many valuable proposals I want to implement (some proposals are already implemented in ECL). Functions compiled-file-p and abi-version are a very useful addition from the implementation and build system point of view. Currently ECL will "eat" any FASL it can (if it meets certain criteria; most notably it will try to load FASL files compiled with an incompatible ECL build). ABI validation should depend on a hash of the symbol table entries, the CPU architecture and the types used by the compiler.

ETA: (this task is a sub-task of compiler modernization).

  • Replacing ecl_min with EuLisp Level 0 implementation

ecl_min is a small lisp interpreter used to bootstrap the implementation (it is a binary written in C). Replacing this custom lisp with a lisp that has a standard draft would be a big step forward.

I expect some blockers along the way - most notably EuLisp has one namespace for functions and variables. Overcoming that will be a step towards a language-agnostic runtime.

ETA: fourth quarter of 2019.

McCLIM improvements

  • Thread safety and refactored event processing loop

standard-extended-input-stream has a quirk - the event queue is mixed with the input buffer. That leads to inconsistent event processing between input streams and all other panes. According to my analysis of the system and the specification, this may be fixed. This task requires some refactoring and careful documentation.

Thread safety is about using CLIM streams from other threads - for instance, drawing on a canvas that is part of a CLIM frame from inside an external REPL. This ostensibly works, but it is not thread safe - output records may get corrupted during concurrent access.

ETA: first quarter of 2019.

  • Better use of XRender extension

We already use the XRender extension for drawing fonts and rectangles with a solid background. We want to switch the clx backend to use it to its fullest. Most notably to have semi-transparency and accelerated transformations. Some proof of concept code is stashed in my source tree.

ETA: second quarter of 2019.

  • Extensive tests and documentation improvements

I've mentioned it in the last progress report. We want to spend the whole mid-release period on testing, bug fixes and documentation improvements. Most notably we want to write documentation for writing backends. This is a frequent request from users.

ETA: third quarter of 2019

This task involves writing a console-based backend (whose units are very big compared to a pixel, and are not square). That will help me identify and fix invalid assumptions in the McCLIM codebase. The idea is to have untyped coordinates be density-independent pixels which have approximately the same size on any type of screen. A natural consequence of that will be writing examples of specialized sheets with different default units.

ETA: fourth quarter of 2019

Other tasks

  • Finish writing a CLIM frontend to ansi-test (tested lisps are run as external processes).
  • Create a test suite for Common Lisp pathnames which goes beyond the standard.
  • Contribute UNIX domain sockets portability layer to usocket.
  • Explore the idea of making CLX understand (and compile) XCB xml protocol definitions.
  • Write a blog post about debugging and profiling ECL applications.
  • Resume a project with custom gadgets for McCLIM.
  • Do more research and write POC code for having animations in McCLIM.
  • Resume a project for writing Matroska build system with ASDF compatibility layer.
  • Use cl-test-grid to identify the most common library dependencies in Quicklisp which don't support ECL and contribute such support.

Conclusions

This list could be much longer, but even suggesting more entries as something scheduled for 2019 would be ridiculous - I'll stop here. A day has only 24h and I need to balance time between family, friends, duties, commercial work, free software, communities I'm part of etc. I find each and every task in it worthwhile so I will pursue them whenever I can.

Zach BeaneWriting a natural language date and time parser

· 139 days ago

Chaitanya GuptaWriting a natural language date and time parser

· 140 days ago

In the deftask blog I described how deftask lets users search for tasks easily by using natural language date queries. It accomplishes this by using a natural language date and time parser I wrote a long time ago called Chronicity.

But how exactly does Chronicity work? In this post, we'll dig into its innards and get a sense of the steps involved in writing it.

If you want to hack into Chronicity, or write your own NLP date parser, this might help.

Note: credit for Chronicity's architecture goes to the Ruby library Chronic. It served both as an inspiration and as the implementation reference.

Broadly, Chronicity follows these steps to parse date and time strings:

  1. Normalize text
  2. Tokenize
  3. Pre-process tokens
  4. Pattern matching
  5. Returning the result

Normalize text

We normalize the text before tokenizing it by doing the following:

  1. Lower case the string
  2. Convert numeric words (like "one", "ten", "third", etc.) to the corresponding numbers
  3. Replace all the common synonyms of a word or phrase so that tokenizing becomes simpler.

All of this is accomplished by the PRE-NORMALIZE function. To convert numeric words to numbers the NUMERIZE function is used. One caveat: do not immediately normalize the term "second" - it can either mean the ordinal number or the unit of time. So we wait until after tokenization (see pre-process tokens) to resolve this ambiguity.

CHRONICITY> (pre-normalize "tomorrow at seven")
"next day at 7"

CHRONICITY> (pre-normalize "20 days ago")
"20 days past"

Tokenize

Next we assign a token to each word in the normalized text.

(defclass token ()
  ((word :initarg :word
         :reader token-word)
   (tags :initarg :tags
         :initform nil
         :accessor token-tags)))

(defun create-token (word &rest tags)
  (make-instance 'token
                 :word word
                 :tags tags))

As you can see, besides the word, a token also contains a list of tags. Each tag indicates a possible way to interpret the given word or number. Take the phrase "20 days ago". The number 20 can be interpreted in many ways:

  • It might refer to the 20th day of the month
  • It might be the year 2020
  • Or maybe just the number 20 (which is what is actually meant in the given phrase)
  • It could also refer to the time 8 PM in 24-hour format (20:00 hours)

Remember, we are still in the tokenization phase so we don't know which interpretation is correct. So we will assign all four tags to the token for this number.

Each tag is a subclass of the TAG class, which is defined as follows.

(defclass tag ()
  ((type :initarg :type
         :reader tag-type)
   (now :initarg :now
        :accessor tag-now
        :initform nil)))

(defun create-tag (class type &key now)
  (make-instance class :type type :now now))

The slot TYPE is a misnomer - it actually indicates the designated value of the token for this tag. For example, the TYPE for the year 2020 above will be the integer 2020. For the time 8 PM it will be an object denoting the time.

The slot NOW has the current timestamp. It is used by some tag classes like REPEATER for date-time computations (discussed later).

The various subclasses of TAG are:

  • SEPARATOR - Things like slash "/", dash "-", "in", "at", "on", etc.
  • ORDINAL - Numbers like 1st, 2nd, 3rd, etc.
  • SCALAR - Simple numbers like 1, 5, 10, etc. It is further subclassed by SCALAR-DAY (1-31), SCALAR-MONTH (1-12) and SCALAR-YEAR. A token for any number will usually contain the SCALAR tag plus one or more of the subclassed tags as applicable.
  • POINTER - Indicates whether we are looking forwards ("hence", "after", "from") or backwards ("ago", "before"). These words are normalized to "future" and "past" before they are tagged.
  • GRABBER - The terms "this", "last" and "next" (as in this month or last month).
  • REPEATER - Most of the date and time terms are tagged using this class. This is described in more detail below.

There are a number of subclasses of REPEATER to indicate the numerous date and time terms. For example:

  • Unit names like "year", "month", "week", "day", etc., use the subclasses REPEATER-YEAR, REPEATER-MONTH, REPEATER-WEEK, REPEATER-DAY.
  • REPEATER-MONTH-NAME is used to indicate month names like "jan" or "january".
  • REPEATER-DAY-NAME indicates day names like "monday".
  • REPEATER-TIME is used to indicate time strings like 20:00.
  • Parts of the day like AM, PM, morning, evening use the subclass REPEATER-DAY-PORTION.

In addition, all the REPEATER subclasses must implement a few methods used for date-time computations.

  • R-NEXT - Given a repeater and a pointer i.e. :PAST or :FUTURE, returns a time span in the immediate past or future relative to the NOW slot. For example, assume the date in NOW is 31st December 2018.
    • (r-next repeater :past) for a REPEATER-MONTH will return a time span starting 1st November 2018 and ending at 30th November.
    • (r-next repeater :future) will return a span for all of January 2019.
    • Similarly, for a REPEATER-DAY this would have returned 30th December for :PAST and 1st January for the :FUTURE pointer.
  • R-THIS is similar to R-NEXT, except that it works in the current context. The width of the span also depends on the direction of the pointer.
    • (r-this repeater :past) for a REPEATER-DAY will return a span from the start of day until now.
    • (r-this repeater :future) will return a span from now until the end of day.
    • (r-this repeater :none) will return the whole day today.
  • R-OFFSET - Given a span, a pointer and an amount, returns a new span offset from the given span. The offset is roughly the amount multiplied by the width of the repeater.

Now we can put the whole tokenization and tagging piece together:

(defun tokenize (text)
  (mapcar #'create-token
          (cl-ppcre:split #?r"\s+" text)))

(defun tokenize-and-tag (text)
  (let ((tokens (tokenize text)))
    (loop
       for type in (list 'repeater 'grabber 'pointer 'scalar 'ordinal 'separator)
       do (scan-tokens type tokens))
    tokens))

As you can see, computing the tags for each token is accomplished by SCAN-TOKENS, a generic function specialized on the class name of the tag.

One of the methods implementing SCAN-TOKENS is shown below.

(defmethod scan-tokens ((tag (eql 'grabber)) tokens)
  (let ((scan-map '(("last" :last)
                    ("this" :this)
                    ("next" :next))))
    (dolist (token tokens tokens)
      (loop
         for (regex value) in scan-map
         when (cl-ppcre:scan regex (token-word token))
         do (tag (create-tag 'grabber value) token)))))

(defmethod tag (tag token)
  (push tag (token-tags token)))
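For comparison, here is roughly what the scanner for POINTER tags could look like, following the same shape as the GRABBER method above. This is an illustrative sketch, not copied from the Chronicity source; it relies on the words already having been normalized to "past" and "future":

(defmethod scan-tokens ((tag (eql 'pointer)) tokens)
  (let ((scan-map '(("past" :past)
                    ("future" :future))))
    (dolist (token tokens tokens)
      (loop
         for (regex value) in scan-map
         when (cl-ppcre:scan regex (token-word token))
         do (tag (create-tag 'pointer value) token)))))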

Going back to our original example, for the text "20 days ago", these are the tags set for each token (after normalization).

Token      Tags
-----      ----
20         [SCALAR-YEAR, SCALAR-DAY, SCALAR, REPEATER-TIME]
days       [REPEATER-DAY]
past       [POINTER]
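At the REPL, the state at this point can be inspected by running the normalization and tagging steps by hand. The output below is approximately what gets printed (token addresses elided):

CHRONICITY> (tokenize-and-tag (pre-normalize "20 days ago"))
(#<TOKEN 20 [SCALAR-YEAR, SCALAR-DAY, SCALAR, REPEATER-TIME] {...}>
 #<TOKEN days [REPEATER-DAY] {...}>
 #<TOKEN past [POINTER] {...}>)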

Pre-process tokens

We are almost ready to run pattern matching to figure out the input date, but first, we need to resolve the ambiguity related to the term second that we faced during normalization. At that time, we did not convert it to the number 2 since it could refer to either the unit of time or the number.

Now with tokenization done, we resolve this ambiguity with a simple hack: if the term second is followed by a repeater (e.g. month, day, year, january, etc.), we assume that it is the ordinal number 2nd and not the unit of time. See PRE-PROCESS-TOKENS for more details.

Pattern matching

The last piece of the puzzle is pattern matching. Armed with tokens and their corresponding tags, we define the various date and time patterns that we know of and try to match them against the input tokens.

First we name a few pattern classes - each pattern we define belongs to one of these classes.

  • DATE - patterns that match an absolute date and time e.g. "1st January", "January 1 at 2 PM", etc.
  • ANCHOR - patterns that typically involve a grabber, e.g. "yesterday", "tuesday", "last week", etc.
  • ARROW - patterns like "2 days from now", "3 weeks ago", etc.
  • NARROW - patterns like "1st day this month", "3rd wednesday in 2007", etc.
  • TIME - simple time patterns like "2 PM", "14:30", etc.

A pattern, at its simplest, is just a list of tag classes. A list of input tokens successfully matches a pattern if, for every token, at least one of its tags is an instance of the tag class mentioned at the corresponding position in the pattern. For example, the text "20 days ago" had these tags:

Token      Tags
-----      ----
20         [SCALAR-YEAR, SCALAR-DAY, SCALAR, REPEATER-TIME]
days       [REPEATER-DAY]
past       [POINTER]

It will match any of these patterns:

(scalar repeater pointer)
(scalar repeater-day pointer)
((? scalar) repeater pointer)

The last example shows a pattern with an optional tag - (? scalar). It will match tokens with or without the scalar e.g. both "20 days ago" and "week ago" will match.
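To make the matching rule concrete, here is a deliberately simplified matcher - my own sketch, not Chronicity's code - that handles only plain patterns (it ignores optional tags and pattern classes):

(defun token-has-tag-p (token tag-class)
  ;; A token matches a tag class if any of its tags is an instance of it.
  (some (lambda (tag) (typep tag tag-class)) (token-tags token)))

(defun simple-match-p (pattern tokens)
  ;; Every token must match the tag class at the same position in the pattern.
  (and (= (length pattern) (length tokens))
       (every #'token-has-tag-p tokens pattern)))

With the tokens shown above, (simple-match-p '(scalar repeater-day pointer) tokens) would return true.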

Our pattern matching engine also allows us to match an entire pattern class. For example,

(repeater-month-name scalar-day (? separator-at) (? p time))

(? p time) here means that any pattern that belongs to the TIME pattern class can match. So all of "January 1 at 12:30", "January 1 at 2 PM" and "January 1 at 6 in the evening" will match without us needing to duplicate all the time patterns.

Note: There's one limitation - a pattern class can only be specified at the end of a pattern in Chronicity. So a pattern like (repeater (p time) pointer) won't work. This will be fixed in the future.

Each pattern has a handler function that decides how to convert the matching tokens to a date span.

A pattern and its handler function are defined using the DEFINE-HANDLER macro. It assigns one or more patterns to a pattern class, and if any of these patterns matches, the function body is run. Its general form is:

(define-handler (pattern-class)
    (tokens-var)
    (pattern1 pattern2 ...)
  ... body ...
  )

An example handler is shown below.

(define-handler (date)
    (tokens)
    ((repeater-month-name scalar-year))
  (let* ((month-name (token-tag-type 'repeater-month-name (first tokens)))
         (month (month-index month-name))
         (year (token-tag-type 'scalar-year (second tokens)))
         (start (make-date year month)))
    (make-span start (datetime-incr start :month))))

Most handler functions will make use of the repeater methods R-NEXT, R-THIS and R-OFFSET that we described above.
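As a further illustration, here is what a handler for ANCHOR phrases like "next week" or "last month" might look like. It leans on R-NEXT and R-THIS as described earlier; FIND-TAG is a hypothetical helper for fetching a token's tag instance of a given class (the real code in handler-defs.lisp uses its own accessors and is more general):

(define-handler (anchor)
    (tokens)
    ((grabber repeater))
  (let ((grabber (token-tag-type 'grabber (first tokens)))
        (repeater (find-tag 'repeater (second tokens)))) ; hypothetical accessor
    (ecase grabber
      (:last (r-next repeater :past))
      (:this (r-this repeater :none))
      (:next (r-next repeater :future)))))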

Chronicity implements this pattern matching logic in the TOKENS-TO-SPAN function. All the patterns and their handler functions are defined inside handler-defs.lisp. Patterns defined earlier in the file get precedence over those defined later. If you add, remove or modify a handler, you should reload the whole file rather than just evaluating that handler's definition.

Returning the result

Finally, we put everything together.

(defun parse (text &key (guess t))
  (let ((tokens (tokenize-and-tag (pre-normalize text))))
    (pre-process-tokens tokens)
    (values (guess-span (tokens-to-span tokens) guess) tokens)))

By default, PARSE returns a timestamp instead of a time span. This depends on the value passed to the :GUESS keyword - see the GUESS-SPAN function to see how it is interpreted. If you want a time span instead, pass NIL.

The second value that this function returns is the list of tokens along with all their tags. This is useful for debugging Chronicity results at the REPL.

CHRONICITY> (parse "20 days ago")
@2018-12-12T12:01:53.758578+05:30
(#<TOKEN 20 [SCALAR-YEAR, SCALAR-DAY, SCALAR, REPEATER-TIME] {1007639243}>
 #<TOKEN days [REPEATER-DAY] {10076AF5D3}> #<TOKEN past [POINTER] {1007553443}>)

CHRONICITY> (parse "20 days ago" :guess nil)
#<SPAN 2018-12-12T00:00:00.000000+05:30..2018-12-13T00:00:00.000000+05:30>
(#<TOKEN 20 [SCALAR-YEAR, SCALAR-DAY, SCALAR, REPEATER-TIME] {1001B78BC3}>
 #<TOKEN days [REPEATER-DAY] {1001B78C03}> #<TOKEN past [POINTER] {1001B78C43}>)

The actual PARSE function has a few more bells and whistles than the one defined here:

  • :ENDIAN-PREFERENCE to parse ambiguous dates as dd/mm (:LITTLE) or mm/dd (:MIDDLE)
  • :AMBIGUOUS-TIME-RANGE to specify whether a time like 5:00 is in the morning (AM) or evening (PM).
  • :CONTEXT can be :PAST, :FUTURE or :NONE. This determines the time span returned for strings like "this day". See the definition of R-THIS above.

McCLIM"Yule" progress report

· 141 days ago

Dear Community,

Winter solstice is a special time of year when we gather together with people dear to us. In pagan tradition this event is called "Yule". I thought it was a good time to write a progress report and a summary of changes made since the last release. I apologise for the infrequent updates; on the other hand, we have been busy improving McCLIM and many important (and exciting!) improvements have been made in the meantime. I'd love to declare this a new release with the code name "Yule", but we still have some regressions to fix and pending improvements to apply. We hope, though, that release 0.9.8 will happen soon.

We are very excited that we have managed to resurrect interest in McCLIM among Common Lisp developers, and it is thanks to the help of you all - every contributor to the project. Some of you provide important improvement suggestions, issue reports and regression tests on our tracker. Others develop applications with McCLIM, and that helps us to identify parts which need improving. By creating pull requests you go out of your comfort zone to help improve the project, and by doing peer reviews you prevent serious regressions and make the code better than it would otherwise be. Perpetual testing and frequent discussions on #clim help to keep the project in shape. Financial supporters allow us to create bounties and thereby attract new contributors.

Finances and bounties

Speaking of finances: our fundraiser receives a steady stream of funds of approximately $300/month. We are grateful for that. Right now all money is destined for bounties. A few times a bounty was not claimed by the bounty hunter who solved the issue - in those cases I collected it and re-added it to the funds after talking with the people involved. Currently our funds available for future activities are $3,785, and active bounties on issues waiting to be solved amount to $2,850 (split between 7 issues). We've already paid $2,450 in total for solved issues.

Active bounties:

  • [$600] drawing-tests: improve and refactor (new!).
  • [$600] streams: make SEOS access thread-safe (new!).
  • [$500] Windows Backend.
  • [$450] clx: input: english layout.
  • [$300] listener: repl blocks when long process runs.
  • [$150] UPDATING-OUTPUT not usable with custom gadgets.
  • [$150] When flowing text in a FORMATTING-TABLE, the pane size is used instead of the column size.

Claimed bounties (since last time):

  • [$100] No applicable method for REGION-CONTAINS-POSITION-P -- fixed by Cyrus Harmon and re-added to the pool.
  • [$200] Text rotation is not supported -- fixed by Daniel Kochmański.
  • [$400] Fix Beagle backend -- cancelled and re-added to the pool.
  • [$100] with-room-for-graphics does not obey height for graphics not starting at 0,0 -- fixed by Nisar Ahmad.
  • [$100] Enter doesn't cause input acceptance in the Listener if you hit Alt first -- fixed by Charles Zhang.
  • [$100] Listener commands with "list" arguments, such as Run, cannot be executed from command history -- fixed by Nisar Ahmad.
  • [$200] add PDF file generation (PDF backend) -- fixed by Cyrus Harmon; This bounty will be re-added to the pool when the other backer Ingo Marks accepts the solution.

Improvements

I'm sure you've been waiting for this part the most. Current mid-release improvements and regressions are vast. I'll list only changes which I find the most highlight-able but there are more and most of them are very useful! The whole list of commits and contributors may be found in the git log. There were also many changes not listed here related to the CLX library.

  • Listener UX improvements by Nisar Ahmad.
  • Mirrored sheet implementation refactor by Daniel Kochmański.
  • New demo applications and improvements to existing ones,
  • Font rendering refactor and new features:

This part is a joint effort of many people. In effect we now have two quite performant and good looking font renderers. Elias Mårtenson resurrected the alternative FFI-based Freetype text renderer, which uses the Harfbuzz and fontconfig libraries from the foreign world. Daniel Kochmański, inspired by the Freetype features, implemented kerning, tracking, multi-line rendering and arbitrary text transformations for the native TTF renderer. That resulted in a major refactor of the font rendering abstraction. Features still missing from the TTF renderer are font shaping and bidirectional text.

  • Experiments with xrender scrolling and transformations by Elias Mårtenson,
  • Image and pattern rendering refactor and improvements by Daniel Kochmański.

Both experiments with xrender and pattern rendering were direct inspiration for work-in-progress migration to use xrender as default rendering mechanism.

Patterns now have much better support coverage than they used to. We may treat a pattern like any other design. Moreover, it is possible to transform patterns in arbitrary ways (and use other patterns as inks inside parent ones). This has been done at the expense of a performance regression which we plan to address before the release.

  • CLX-fb refactor by Daniel Kochmański:

Most of the work was related to simplifying the macrology and the class hierarchy. This caused a small performance regression in this backend (it may, however, be fixed within the abstraction currently present there).

  • Performance and clean code fixes by Jan Moringen:

Jan wrote a very useful tool called clim-flamegraph (it works right now only on SBCL). It helped us to recognize many performance bottlenecks which would have been hard to spot otherwise. His contributions to the code base were small (LOC-wise) and hard to pin-point to a specific feature, but very important from the maintenance, correctness and performance point of view.

  • Text-size example for responsiveness and UX by Jan Moringen,
  • Various gadget improvements by Jan Moringen,
  • Box adjuster gadget rewrite by Jan Moringen:

clim-extensions:box-adjuster-gadget deserves a separate mention due to its usefulness and relatively small mind share. It allows resizing adjacent panes by dragging a boundary between them.

  • New example for output recording with custom record types by Robert Strandh,
  • PostScript and PDF renderer improvements by Cyrus Harmon,
  • Scrigraph and other examples improvements by Cyrus Harmon,
  • Multiple regression tests added to drawing-tests by Cyrus Harmon,
  • Ellipse drawing testing and fixes by Cyrus Harmon,
  • Better contrasting inks support by Jan Moringen,
  • Output recording and graphics-state cleanup by Daniel Kochmański,
  • WITH-OUTPUT-TO-RASTER-IMAGE-FILE macro fixed by Jan Moringen,
  • Regions may be printed readably (with #. hack) by Cyrus Harmon,
  • event-queue processing rewrite by Nisar Ahmad and Daniel Kochmański:

This solves a long standing regression - McCLIM didn't run correctly on implementations without support for threading. This rewrite cleaned up a few input processing abstractions and provided thread-safe code. SCHEDULE-EVENT (which was bitrotten) works as expected now.

  • Extensive testing and peer reviews by Nisar Ahmad:

This role is easy to overlook when one looks at commits, but it is hard to overemphasize - that's how important testing is. The code would be much worse if Nisar hadn't put as much effort into it as he did.

Plans

Before the next release we want to refactor input processing in McCLIM and make all stream operations thread-safe. Refactoring the input processing loop will allow better support for native McCLIM gadgets and streams (right now they do not work well together) and make the model much more intuitive for new developers. We hope to get rid of various kludges thanks to that as well. Thread-safe stream operations, on the other hand, are important if we want to access a CLIM application from a REPL in a process other than the one running the application frame (right now, drawing from another process may, for instance, cause output recording corruption). This is important for interactive development from Emacs. When both tasks are finished we are going to release version 0.9.8.

After that, our efforts will focus on improving X11 backend support. Most notably, we want to increase the use of the xrender extension of clx and address a long-standing issue with non-English layouts. When both tasks are accomplished (some other features may land as well, but these two will be my main focus) we will release version 0.9.9.

That will mark a special time in McCLIM development. The next release will be 1.0.0, which is a significant number. The idea is to devote this time explicitly to testing, cleanup and documentation under a feature freeze (i.e. no new functionality will be added). What comes after that, nobody knows. Animations? New backends? Interactive documentation? If you have a specific vision of the direction in which McCLIM should move, all you have to do is take action and implement the necessary changes :-).

Merry Yule and Happy New Year 2019

This year was very fruitful for McCLIM development. We'd like to thank all contributors once more, and we wish you all (and ourselves) a next year at least as good as this one, a lot of joy, creativity, and Happy Hacking!

Sincerely yours,
McCLIM Development Team

Zach BeanePersonal Notes on Corman Lisp 3.1 Release

· 141 days ago

Wimpie NortjeImplementing Hunchentoot custom sessions.

· 153 days ago

Hunchentoot has a built-in session handling mechanism for dealing with web server sessions. The implementation appears to be quite thorough and complies with most of the OWASP recommendations for secure session handling. Many of its properties are easily customisable and it should be sufficient for most situations, but there are some properties which can only be customised by completely replacing the session mechanism.

Hunchentoot stores the session data in RAM and only sends a random token to the client. As per OWASP recommendations, the token is a meaningless number and contains no session data. It only serves as a database lookup key with some embedded security features.

The implication of keeping session information on the server and only in RAM is that a session is only accessible on the same server where it was created and that all session information will be lost when the application exits. When this happens all the users will have to log in again.

This may be an acceptable situation if the website is only served from a single server and when loss of the session data is not a problem. When sessions do contain important data it is better to store it in persistent storage.

When the website is served from multiple servers the same session information must be accessible from all the server instances otherwise a user will need to log in each time his request is handled by a different server.

Addressing these two issues requires changing Hunchentoot's session storage behaviour. This is also one of the properties which cannot be customised without replacing session handling completely.

The documentation makes some passing remarks about replacing the session mechanism and there are some comments in the Github issues too.

Hunchentoot is implemented in CLOS and getting from the documentation to your own customised session implementation requires more than a superficial knowledge of CLOS. Without that extra bit of CLOS knowledge it is far from clear how one should approach this customisation.

Below is some code describing the beginning of what such a session replacement looks like.

(defclass custom-session ()
  (;; Implement session slots here.
   )
  (:documentation "A custom session class does not have to descend from hunchentoot:session."))

(defclass custom-session-request (hunchentoot:request)
  () ; This class does not need any additional code.
  (:documentation "Subclass hunchentoot:request to allow using our own session class."))

(defmethod hunchentoot:session-cookie-value ((session custom-session))
  ;; Implementation code
  )

(defmethod hunchentoot:session-verify ((request custom-session-request))
  ;; Implementation code
  )

;; Instantiate acceptor to use the custom session
(setf *acceptor*
      (make-instance 'hunchentoot:easy-acceptor
                     :request-class 'custom-session-request))

The documentation states that the only requirements for replacing the session mechanism are implementations of hunchentoot:session-verify and hunchentoot:session-cookie-value specialised on the custom session class.

At a high level this is true but on a more practical level it omits some crucial information.

hunchentoot:session-cookie-value is specialised on the session class and it returns the value to be sent as a cookie to the user.

hunchentoot:session-verify returns the current session object associated with the current request. It specialises on the request rather than the session class, thus you also need a custom request class in addition to the session class.

Even with these two functions and classes implemented Hunchentoot will still not use your custom sessions. To achieve that you must instantiate the acceptor object with your custom request class as parameter.

These steps are sufficient to make Hunchentoot use a custom session class but it is still a long way from a working implementation, much less a secure one.
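For concreteness, here is one minimal way the pieces could be filled in. This is an illustrative sketch only: it keeps sessions in a plain in-memory hash table, defines its own SESSION-TOKEN reader, and ignores session creation, expiry, token signing and thread safety, all of which a real implementation needs.

;; Sessions live in memory, keyed on the token sent to the client.
(defvar *session-db* (make-hash-table :test 'equal))

(defclass custom-session ()
  ((token :initarg :token :reader session-token)))

(defmethod hunchentoot:session-cookie-value ((session custom-session))
  ;; The string stored in the client's cookie; it is only a lookup key.
  (session-token session))

(defmethod hunchentoot:session-verify ((request custom-session-request))
  ;; Return the session referenced by the request's session cookie, or NIL.
  (let* ((name (hunchentoot:session-cookie-name
                (hunchentoot:request-acceptor request)))
         (token (hunchentoot:cookie-in name request)))
    (and token (gethash token *session-db*))))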

Replacing the session mechanism is not a trivial project. If you are forced down this path by the limitations of the built-in mechanism the easiest approach is to copy Hunchentoot's session code and modify it to resolve your issue.

Quicklisp newsDecember 2018 Quicklisp dist update now available

· 160 days ago
New projects:
  • agutil — A collection of utility functions not found in other utility libraries. — MIT
  • aserve — AllegroServe, a web server written in Common Lisp — LLGPL 
  • cl-batis — SQL Mapping Framework for Common Lisp — MIT
  • cl-dbi-connection-pool — CL-DBI-Connection-Pool - connection pool for CL-DBI — LLGPL
  • cl-json-pointer — A JSON Pointer (RFC6901) implementation for Common Lisp. — MIT
  • cl-punch — Scala-like anonymous lambda literal — MIT
  • definitions-systems — Provides a simple unified extensible way of processing named definitions. — Public Domain
  • easy-bind — Easy-bind - easy local binding for Common Lisp — MIT
  • first-time-value — Returns the result of evaluating a form in the current lexical and dynamic context the first time it's encountered, and the cached result of that computation on subsequent evaluations. — Public Domain
  • hyperspec — A simple library for looking up common-lisp symbols in the hyperspec. — LLGPLv3+
  • its — Provides convenient access to multiple values of an object in a concise, explicit and efficient way. — Public Domain
  • mra-wavelet-plot — Plot MRA-based wavelets (scaling function and mother wavelet) with given coefficients of the dilation equation — 2-clause BSD
  • openid-key — Get OpenID keys from issuer. — MIT
  • pjlink — A library for communicating with PJLink-compatible projectors over TCP/IP. see https://pjlink.jbmia.or.jp/english/ for information on PJLink and compatible devices. — CC0 1.0 Universal
  • poler — Infix notation macro generator — LLGPL
  • rpcq — Message and RPC specifications for Rigetti Quantum Cloud Services. — Apache 2
  • shadowed-bindings — Establishes a new lexical context within which specified bindings are explicitly shadowed, making it clear that they are not referenced within, thereby reducing cognitive load. — Public Domain
  • static-dispatch — Static generic function dispatch for Common Lisp. — MIT
  • trivial-jumptables — Provides efficient O(1) jumptables on supported Common Lisp implementations and falls back to O(log(n)) on others. — Public Domain
  • trivial-sockets — trivial-sockets — MIT
  • utility — A collection of useful functions and macros. — MIT
  • wild-package-inferred-system — Introduces the wildcards `*' and `**' into package-inferred-system — MIT
Updated projects: alexandria, april, architecture.builder-protocol, architecture.hooks, asdf-viz, bst, cambl, cari3s, carrier, caveman, cffi, chronicity, cl-ana, cl-bibtex, cl-cffi-gtk, cl-charms, cl-cognito, cl-collider, cl-conllu, cl-dbi, cl-digraph, cl-environments, cl-epoch, cl-hamcrest, cl-json-helper, cl-ledger, cl-markdown, cl-patterns, cl-python, cl-quickcheck, cl-str, cl-tetris3d, cl-tiled, cl-toml, cl-unification, clazy, clip, closer-mop, clx, codex, cover, croatoan, dbus, de.setf.wilbur, definitions, docparser, dufy, eclector, event-emitter, f2cl, femlisp, fiasco, flare, float-features, function-cache, fxml, gamebox-math, gendl, genhash, glsl-toolkit, golden-utils, harmony, helambdap, http-body, hu.dwim.web-server, ip-interfaces, ironclad, jonathan, jsonrpc, lack, lisp-binary, lisp-chat, local-time, maiden, mcclim, mmap, opticl, overlord, parachute, parenscript, parser.common-rules, petalisp, pgloader, plexippus-xpath, plump, plump-sexp, postmodern, protest, protobuf, qbase64, qlot, quri, racer, regular-type-expression, safety-params, sc-extensions, serapeum, shadow, simple-tasks, sly, snakes, snooze, staple, stealth-mixin, stefil, stumpwm, the-cost-of-nothing, time-interval, trivial-benchmark, trivial-utilities, umbra, utilities.binary-dump, vgplot, websocket-driver, with-c-syntax, woo, zacl.

To get this update, use (ql:update-dist "quicklisp")

Enjoy!

Nicolas HafnerAbout Making Games in Lisp - Gamedev

· 163 days ago

header
Recently there's been a bit of a storm brewing about a rather opinionated article about game development with Lisp. After reading Chris Bagley's very well done response, I thought I'd share my perspective on what it's like to actually make games with Lisp. I'm not writing this with the intent on convincing you of any particular arguments, but rather to give some insight into my process and what the difficulties and advantages are.

I'll start this off by saying that I've been working with games in some capacity as long as I can remember. My programming career started out when I was still a young lad and played freeware games on a dinky Compaq laptop with Windows 95. Making really terrible games is almost all I did in terms of programming all throughout primary school. I branched out into other software after that, but making games is something that has always kept sticking around in my mind.

Naturally, once I had learnt a new programming language, it didn't take too long before I wanted to make games again. And of course, because I'm a stubborn idiot, I decided to build an engine from scratch - it wasn't my first one, either. This is what led to Shirakumo's Trial engine.

Since then, the team and I have built a couple of "games" with Trial:

  • LD35 Supposed to be a sort of farming game, due to massive engine problems ended up being just a test for animations and basic 3d rendering.
  • LD36 A very basic survival game that lets you build fire places and eat stuff. Based on the tech from the previous game.
  • LD38 An experiment in non-linear storytelling. The idea was to have a dialog based mystery game, but we ran out of time.
  • Rush A 2D platformer with a lighting mechanic. This one is actually a game that can be played for some time.
  • Shootman An excuse to stream some gamedev. Mostly modelled after "Enter the Gungeon," it's an isometric bullet hell shooter.

None of these are big, none of these are great. They're all more experiments to see what can be done. What I've learned most of all throughout all my time working on games is that I'm not good at making games. I'm decent at making engines, which is a very, very different thing.

If you're good at making games, you can make an engaging game with nothing more than format, read-line, and some logic thrown in. If you're bad at making games like I am, you build a large engine for all the features you imagine your game might need, and then you don't know how to proceed and the project dies. You may notice that this also has a bit of an implication, namely that for making the game part of a game, the choice of language matters very little. It matters a lot for the engine, because that's a software engineering problem.

I'm writing this because this is, to me, an important disclaimer: I don't really know how to make games. I can write code, program mechanics, make monkeys jump up and down on your screen, but that's not the meat of a game and usually not why people play games either. Thus my primary difficulty making games has absolutely nothing to do with the technology involved. Even if I were using Unity or Unreal, this problem would not go away. It was the same when I was last writing games in Java, and it was the same when I was using GameMaker.

Now, why am I not using a large, well made engine to make games? Is it because I've been tainted by Lisp and don't want to use other languages in my free time anymore? Is it because the game making problem would persist anyway so what's the point? Is it because I like making engines? Is it because I'm stupid? Well, the answers are yes, yes, yes, and yes.

Alright, so here we are: Lisp is the only choice left, I like making engines and don't know how to make games, so what are the difficulties and advantages of doing that?

As you might know, I'm currently working on a game, so I have a lot of immediate thoughts on the matter. What seems to bother me the most is that currently I don't have a built-in, usable scene editor in Trial. For every game so far we had to either build an editor from scratch, or place things manually in code. Both of these things suck, and making an editor that isn't a huge pain to use takes a long, long time. Part of the issue with that is that Trial currently does not have a UI toolkit to offer. You can use it with the Qt backend and use that to offer a UI, but I really don't want to force using Qt just for an editor. Not to mention that we need in-game UI capabilities anyway.

All of the UI toolkits I've seen out there are either a giant blob of foreign code that I really don't want to bind to, or they're McCLIM which won't work with OpenGL in what I project to be the next decade or more. So, gotta do it myself again. I have some nice and good ideas for making a design that's different and actually very amenable towards games and their unique resolution constraints, but making a UI toolkit is a daunting effort that I have so far not felt the energy to tackle.

Aside from the lack of an editor and UI toolkit, I actually have very few complaints with the current state of Trial for the purposes of my game. It handles asset management, shaders and effects pipelines, input and event delivery, and so forth. A lot of the base stuff that makes OpenGL a pain in the neck has been taken care of.

That said, there's a lot of things I had to implement myself as well that could be seen as something the engine should do for you: map save and load, save states, collision detection and resolution, efficient tile maps. Some of the implementations I intend to backport into Trial, but other things that might seem simple at first glance, like maps and save states, are actually incredibly domain-specific, and I'm currently unconvinced that I can build a good, generic system to handle them.

One thing that I think was a very good decision for Trial that I still stand by is the idea to keep things as modular and separate as possible. This is so that, as much as possible, you won't be forced to use any particular feature of the engine and can replace them if your needs demand such. If you know anything at all about architecture, this is a very difficult thing to do, and something that I believe would be a huge lot more difficult if it weren't implemented in Lisp. Modularity, re-usability, and extensibility are where Lisp truly shines.

Unfortunately for us, games tend to need a lot of non-reusable, very problem-specific solutions and implementations. Sure, there's components that are re-usable, like a rendering engine, physics simulations, and so forth. But even within those you have a tremendous effort in implementing game-specific mechanics and features that can't be ported elsewhere.

But, that's also great for me because it means I can spend a ton of time implementing engine parts without having to worry about actually making a game. It's less great for the chances of my game ever being finished, but we'll worry about that another time.

Right now I'm working on implementing a quest and dialog system in the game, which is proving to be an interesting topic on its own. Lisp gives me a lot of nifty tools here for the end-user, since I can wrap a lot of baggage up in macros that present a very clean, domain-geared interface. This very often alleviates the need to write scripting languages and parsers. Very often, but not always however. For the dialog, the expected amount of content is so vast that I fear that I can't get away with using macros, and need to implement a parser for a very tailored markup language. I've been trying to get that going, but unfortunately for reasons beyond me my motivation has been severely lacking.

Other than that, now that all the base systems for maps, saves, chunks, tiles, and player mechanics are in place the only remaining part is UI stuff, and we already discussed the issue with that. This also means that I really need to start thinking about making a game again because I've nearly run out of engine stuff to do (for now). We'll see whether I can somehow learn to shift gears and make an actual game. I really, really hope that I can. I want this to work.

I've talked a lot about my own background and the kinds of problems I'm facing at the moment, and very little about the process of making these games. Well, the process is rather simple:

  1. Decide on a core idea of the game.
  2. Figure out what the player should be able to do and the kinds of requirements this has on the engine.
  3. Implement these requirements in the engine.
  4. Use the features of the engine to build the game content. This requires the most work.
  5. As you develop content and the vision of the game becomes clearer, new ideas and requirements will crystallise. Go back to 3.
  6. Your game is now done.

Again, the bulk of the work lies in making content, which is rather orthogonal to the choice of your language, as long as the tools are mature enough to make you productive. I believe Lisp allows me to be quicker about developing these tools than other languages, but making an actual game would be even quicker if I didn't have to make most of these tools in the first place.

So if there's anything at all that I want for developing games in Lisp, it wouldn't be some magical engine on par with Unreal or whatever, it wouldn't even be more libraries and things. I'm content enough to build those myself. What I'd really like is to find the right mindset for making game content. Maybe, hopefully, I will at some point and I'll actually be able to publish a game worth a damn. If it happens to have been developed with Lisp tools, that's just a bonus.

If you've made it this far: thank you very much for reading my incoherent ramblings. If you're interested in my game project and would like to follow it, or even help working on it, hang out in the #shirakumo channel on Freenode.

Vsevolod DyomkinStructs vs Parametric Polymorphism

· 174 days ago
Recently, Tamas Papp wrote about one problem he had with Lisp in the context of scientific computing: that it's impossible to specialize methods on parametric types.
While you can tell a function that operates on arrays that these arrays have element type double-float, you cannot dispatch on this, as Common Lisp does not have parametric types.
I encountered the same issue while developing the CL-NLP Lisp toolkit for natural language processing. For instance, I needed to specialize methods on sentences, which may come in different flavors: as lists of tokens, vectors of tokens, lists of strings, or some more elaborate data structure with attached metadata. Here's some example code. There's a generic function to perform various tagging jobs (POS, NER, SRL, etc.). It takes two arguments: the first — as with all CL-NLP generic functions — is the tagger object that is used for algorithm selection and configuration, as well as for storing intermediate state when necessary. The second is the sentence being tagged. Here are two of its possible methods:

(defmethod tag ((tagger ap-dict-postagger) (sent string)) ...)
(defmethod tag ((tagger ap-dict-postagger) (sent list)) ...)
The first method processes a raw string: it invokes some pre-processing machinery that tokenizes the string and then, basically, calls the second method, which performs the actual tagging of the resulting tokens. So, list here means a list of tokens. But what if we already have the tokenization, but haven't created the token objects, i.e. a list of strings is supplied as the input to the tag method? The CLOS machinery doesn't have a way to distinguish the two cases, so we have to resort to using typecase inside the method - which is exactly what defmethod is meant to replace as a transparent and extensible alternative.

In most other languages we'd stop here and just accept that nothing can be done. After all, it's a local nuisance and not a game changer for our code (although Tamas refers to it as a game-changer for his). In Lisp, we can do better. Thinking about this problem, I see at least 3 solutions with varying levels of elegance and portability. Surely, they may seem slightly inferior to having such a capability built directly into the language, but demanding to have everything built in is unrealistic, to say the least. Instead, having a way to build it ourselves is the only future-proof and robust alternative. And this is what Lisp is known for.

The first approach was mentioned by Tamas himself:
You can of course branch on the array element types and maybe even paper over the whole mess with sufficient macrology (which is what LLA ended up doing), but this approach is not very extensible, as, eventually, you end up hardcoding a few special types for which your functions will be "fast", otherwise they have to fall back to a generic, boxed type. With multiple arguments, the number of combinations explodes very quickly.
Essentially: rely on typecase-ing, but use macros to blend it into the code in the most non-intrusive way, minimizing boilerplate. This is a straightforward path in Lisp, but it has its drawbacks for long-running projects that need to evolve over time. It remains a no-brainer for custom one-offs, though. That's why, usually, few venture further to explore other alternatives.

The other solution was mentioned in the Reddit discussion of the post:
Generics dispatching on class rather than type is an interesting topic. I've definitely sometimes wanted the latter so far in doing CL for non-scientific things. It is certainly doable to make another group of generics that do this using the MOP.
I.e. use the MOP to introduce type-based generic dispatch. I won't discuss it here, but will say that similar things were tried in the past quite successfully: ContextL and layered functions are some of the examples. Yet the MOP path is rather heavy and has portability issues (as the MOP is not in the standard, although the Closer to MOP project unifies most of the implementations). In my view, its best use is for serious and fundamental extension of the CL object system, not for solving a local problem that occurs in some contexts but is not so pervasive. Also, I'd say that the Lisp approach of (almost) not mixing objects and types is, conceptually, the right one, as these two facilities solve different sets of problems.

There's a third, much simpler, clear and portable solution that requires minimal boilerplate and, in my view, is best suited for this level of problem: use structs. Structs are somewhat underappreciated in the Lisp world; not a lot of books and study materials give them enough attention. That is understandable, as there's not a lot to explain, but structs are handy for many problems: they are a hassle-free and efficient facility that provides some fundamental capabilities.

In its basic form, the solution is obvious, although a bit heavy. We'll have to define a wrapper struct for each parametric type we'd like to dispatch upon, for example, list-of-strings and list-of-tokens. This looks a little stupid, and it is, because what's the semantic value of a list of strings? That's why I'd go for sentence/string and sentence/token, which is a clearer naming scheme. (Or, if we want to mimic Julia, sentence<string>.)

(defstruct sent/str
  toks)
Now, from the method's signature, we will already see that we're dealing with sentences in the tagging process, and we will be able to spot when some other tagging algorithm operates on paragraphs instead of words: say, tagging parts of an email with labels such as greeting, signature, and content. Yes, this can also be conveyed via the name of the tagger, but, still, it's helpful. And it's also one of the hypothetical fail cases for a parametric type-based dispatch system: if we have two different kinds of lists of strings that need to be processed differently, we'd have to resort to similar workarounds in it as well. However, if we'd like to distinguish between lists of strings, vectors of strings, and more generic sequences of strings, we'll have to resort to more elaborate names, like sent-vec/str, as a variant.

It's worth noting, though, that for the sake of producing efficient compiled code only vectors of different types of numbers really make a difference. A list of strings or a list of tokens, in Lisp, uses the same accessors, so optimization here is useless and type information may be used only for dispatch and, possibly, type checking. Actually, Lisp doesn't support type checking of homogeneous lists, so you can't say :type (list string), only :type list. (Well, you can, actually, use a type like (and list (satisfies list-of-strings-p)) with a named helper predicate - a sketch follows the next listing - but what's the gain?)

Yet using structs adds more semantic dimensions to the code than just naming. They may store additional metadata and support simple inheritance, which will come in handy when we'd like to track sentence positions in the text and so on.

(defstruct sent-vec/tok
  (toks nil :type (vector tok)))

(defstruct (corpus-sent-vec/tok (:include sent-vec/tok))
  file beg end)
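And, as promised above, here is what the homogeneous-list type could look like. This is my own illustrative sketch (the predicate name is made up), shown only to justify the "what's the gain?" remark:

;; SATISFIES only accepts a named predicate, so a helper function is needed.
(defun list-of-strings-p (x)
  (and (listp x) (every #'stringp x)))

(deftype list-of-strings ()
  '(and list (satisfies list-of-strings-p)))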
And structs are efficient in terms of both space consumption and speed of slot access.
So, now we can do the following:

(defmethod tag ((tagger ap-dict-postagger) (sent sent/str)) ...)
(defmethod tag ((tagger ap-dict-postagger) (sent sent/tok)) ...)
(defmethod tag ((tagger ap-dict-postagger) (sent sent-vec/tok)) ...)
We'll also have to defstruct each parametric type we'd like to use. As a result, with this approach, we can have the following clean and efficient dispatch:

(defgeneric tag (tagger sent)
  (:method (tagger (sent string))
    (tag tagger (tokenize *word-splitter* sent)))
  (:method (tagger (sent sent/str))
    ;; OFF tracks the character offset of each token in the original string.
    (let ((off 0))
      (tag tagger
           (make-sent/tok
            :toks (map* ^(prog1 (make-tok :word %
                                          :beg off
                                          :end (+ off (length %)))
                           (:+ off (1+ (length %))))
                        @sent.toks)))))
  (:method ((tagger pos-tagger) (sent sent/tok))
    (copy sent :toks (map* ^(copy % :pos (classify tagger
                                                   (extract-features tagger %)))
                           @sent.toks))))

CL-USER> (tag *pos-tagger* "This is a test.")
#S(SENT/TOK :TOKS (<This/DT 0..4> <is/VBZ 5..7> <a/DT 8..9>
<test/NN 10..14> <./. 14..15>))
Some of the functions used here (?, map*, and copy, as well as the @ and ^ reader macros) come from my RUTILS library, which fills in the missing pieces of the CL standard library. Another advantage of structs is that they define a lot of things in the background: type checking for slots, a readable print function, a constructor, a built-in copy-structure and more.

In my view, this solution isn't any less easy to use than the statically-typed one (Julia's). There's a little additional boilerplate (the defstructs), which may even be considered to have a positive impact on the code's overall clarity. And yes, you have to write boilerplate in Lisp sometimes, although not much of it. Here's a fun quote on the topic I saw on Twitter some days ago:
Lisp is an embarrassingly concise language. If you're writing a bunch of boilerplate in it, you need to read SICP & "Lisp: A Language for Stratified Design".
P.S. There's one more thing I wanted to address from Tamas's post:
Now I think that one of the main reasons for this is that while you can write scientific code in CL that will be (1) fast, (2) portable, and (3) convenient, you cannot do all of these at the same time.
I'd say that this choice (or rather the need to prioritize one over the others) exists in every ecosystem. At least, looking at his Julia example, there's no word about portability (citing Tamas's own words about the language: "At this stage, code that was written half a year ago is very likely to be broken with the most recent release."), while convenience may manifest well for his current use case - but what if we need to implement, in the same system, features that deal with areas outside of numeric computing? I'm not so convinced. Or take Python, which is a go-to language for scientific computing: in terms of performance, the only viable solution is to implement the critical parts in C (or Cython). Portable? No. Convenient? Likewise. Well, as a user you get convenience, speed, and portability (although pretty limited). But at what cost? I'd argue that developing the Common Lisp scientific computing ecosystem to a similar quality would have required only 10% of the effort that went into building numpy and scipy...

Wimpie NortjeHow to write test fixtures for FiveAM.

· 176 days ago

When you write a comprehensive test suite you will most likely need to repeat the same set up and tear down process multiple times because a lot of the tests will test the same basic scenario in a slightly different way.

Testing frameworks address this code repetition problem with "fixtures". FiveAM also has this concept, although slightly limited [1].

FiveAM implements fixtures as a wrapper around defmacro. The documentation states:

NB: A FiveAM fixture is nothing more than a macro. Since the term 'fixture' is so common in testing frameworks we've provided a wrapper around defmacro for this purpose.

There are no examples in the documentation of what such a fixture-macro should look like. Do you need the usual macro symbology like backticks and splicing or not? If so, how? This can be difficult to decipher if you are not fluent in reading macros. The single example in the source code makes things worse because it does include backticks and splicing.

FiveAM defines a macro, def-fixture, which allows you to write your fixtures just like normal functions, with the one exception that the implicit (&body) form marks where your test code is inserted. No fiddling with complex macros!

This is a simple example:

(def-fixture in-test-environment ()
  "Set up and tear down the test environment."
  (setup-code)
  (&body)
  (teardown-code))

(def-test a-test ()
  "Test in clean environment."
  (with-fixture in-test-environment ()
    (is-true (some-function))))
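One refinement worth considering (my own suggestion, not taken from the FiveAM documentation): wrap the body in UNWIND-PROTECT so the teardown runs even when the test body signals an error or performs a non-local exit.

(def-fixture in-test-environment ()
  "Set up and tear down the test environment, even on non-local exit."
  (setup-code)
  (unwind-protect
       (&body)
    (teardown-code)))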

The fixture implementation provides an easy-to-use definition syntax without any additional processing. If you need more complex macros than what def-fixture can handle you can write normal Lisp macros as usual without interfering with FiveAM's operation.

  1. Some frameworks can apply fixtures to the test suite (as opposed to a test) so that it executes only once before any test in the suite is run and once after all tests have completed, regardless of how many tests in the suite are actually executed. FiveAM does not have this capability.

Eugene ZaikonnikovSome documents on AM and EURISKO

· 199 days ago

Sharing here a small collection of documents by Douglas B. Lenat related to the design of AM and EURISKO that I have assembled over the years. These are among the most famous programs of the symbolic AI era. They represent so-called 'discovery systems'. Unlike expert systems, they run loosely-constrained heuristic search in a complex problem domain.

AM was Lenat's doctoral thesis and the first attempt of its kind. Unfortunately, it's all described in rather informal pseudocode, a decision that led to a number of misunderstandings in follow-up criticism. Lenat responded to that in one of his better known publications, Why AM and EURISKO appear to work.

AM was built around a concept formation process utilizing a set of pre-defined heuristics. EURISKO takes it a step further, adding a mechanism for running the discovery search on its own heuristics. Both are specimens of what we could call 'Lisp-complete' programs: designs that require Lisp, or a hypothetical similarly metacircular equivalent, to function. Their style was idiomatic to the INTERLISP of the 1970s, making heavy use of FEXPRs and self-modification of code.

There's quite a lot of thorough analysis available in the three-part The Nature of Heuristics: part one, part two. The third part contains the most insight into the workings of EURISKO. A remarkable quote describes the moment EURISKO discovered Lisp atoms, reflecting that it was written before the two-decade pause in the threat of nuclear annihilation:

Next, EURISKO analyzed the differences between EQ and EQUAL. Specifically, it defined the set of structures which can be EQUAL but not EQ, and then defined the complement of that set. This turned out to be the concept we refer to as LISP atoms. In analogy to humankind, once EURISKO discovered atoms it was able to destroy its environment (by clobbering CDR of atoms), and once that capability existed it was hard to prevent it from happening.

Lenat's eventual conclusion from all this was that "common sense" is necessary to drive autonomous heuristic search, and that it requires a critical mass of knowledge. That's where his current CYC project started off in the early 1990s.

Bonus material: The Elements of Artificial Intelligence Using Common Lisp by Steven L. Tanimoto describes a basic AM clone, Pythagoras.


For older items, see the Planet Lisp Archives.


Last updated: 2019-04-30 00:00