Planet Scheme

Friday, August 12, 2022

Scheme Requests for Implementation

SRFI 235: Combinators

SRFI 235 is now in draft status.

This SRFI contains various procedures that accept and return procedures, as well as a few others, drawn from an earlier version of Chicken. Common Lisp has a few of them too, and more come from the Standard Prelude from Programming Praxis.

by John Cowan (spec) and Arvydas Silanskas (implementation) at Friday, August 12, 2022

Wednesday, August 10, 2022

Scheme Requests for Implementation

SRFI 234: Topological Sorting

SRFI 234 is now in draft status.

Topological sorting is an algorithm that takes a graph consisting of nodes and other nodes that depend on them, forming a partial order, and returns a list representing a total ordering of the graph. If the graph is cyclic, the topological sort will fail, and an error will be signaled.
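The draft SRFI defines its own Scheme API, but the algorithm itself (including the cycle-detection failure mode described above) can be sketched with Kahn's algorithm; the Python below is purely illustrative, and the function name and graph representation are mine, not the SRFI's:

```python
from collections import deque

def topological_sort(graph):
    """graph maps each node to the nodes that depend on it.
    Returns a list giving one total ordering, or raises
    ValueError if the graph is cyclic."""
    indegree = {node: 0 for node in graph}
    for dependents in graph.values():
        for d in dependents:
            indegree[d] = indegree.get(d, 0) + 1
    # Nodes with no incoming edges can be emitted immediately.
    queue = deque(n for n, deg in indegree.items() if deg == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for d in graph.get(node, ()):
            indegree[d] -= 1
            if indegree[d] == 0:
                queue.append(d)
    if len(order) != len(indegree):
        raise ValueError("graph is cyclic; topological sort fails")
    return order
```

If the graph has a cycle, no node ever reaches indegree zero, the output comes up short, and the error is signaled, matching the behavior the SRFI abstract describes.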

by John Cowan (spec) and Arvydas Silanskas (implementation) at Wednesday, August 10, 2022

SRFI 233: INI files

SRFI 233 is now in draft status.

An INI file is a configuration file that consists of key-value pairs for properties, and sections that group the properties. The name of these configuration files comes from the filename extension INI, short for initialization. The format has become an informal standard in many contexts of configuration. This SRFI provides access to the contents of an INI file.
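The format is simple enough that many languages ship a reader for it. As a quick illustration of the structure described above (sections grouping key-value properties), and not of the SRFI's own API, here is Python's standard-library parser on a small example:

```python
import configparser

# A minimal INI document: key-value properties grouped into sections.
text = """\
[database]
host = localhost
port = 5432

[logging]
level = debug
"""

config = configparser.ConfigParser()
config.read_string(text)
print(config["database"]["port"])   # -> 5432
print(config["logging"]["level"])   # -> debug
```

The SRFI provides the analogous access for Scheme programs.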

by John Cowan (spec) and Arvydas Silanskas (implementation) at Wednesday, August 10, 2022

The Racket Blog

Racket v8.6

Racket version 8.6 is now available from

As of this release:

The following people contributed to this release:

Alex Knauth, Alexander Shopov, Alexis King, Amirouche Amazigh BOUBEKKI, Andy Keep, Ashish SHUKLA, Bob Burger, Bogdan Popa, Cameron Moy, Chung-chieh Shan, David K. Storrs, FrankHB, Fred Fu, Gustavo Massaccesi, helado de brownie, J. Ryan Stinnett, Jack Firth, Jamie Taylor, Jason Hemann, Jens Axel Søgaard, Jimmy McNutt, Joel Dueck, John Clements, José Manuel Calderón Trilla, Kevin Tew, Laurent Orseau, Matt Audesse, Matthew Flatt, Matthias Felleisen, Mike Sperber, naveen srinivasan, Niklas Larsson, Noah Ma, Oscar Waddell, Pavel Panchekha, Phil Nguyen, Philip McGrath, Philippe Meunier, rgkirch, Robby Findler, Robert Postill, Ryan Culpepper, Sam Tobin-Hochstadt, Sergiu Ivanov, Sorawee Porncharoenwase, Stephen De Gabrielle, Vincent Lee, wackbyte, and Zibing Zhang

Link to package regressions issue for the 8.6 release:

Official installers for Racket on many platforms are available from

If you are new to Racket, try our Getting started guide.

Questions and feedback about the release are welcome on Discourse.

by The Unknown Author at Wednesday, August 10, 2022

Sunday, August 7, 2022

Andy Wingo

coarse or lazy?

sweeping, coarse and lazy

One of the things that had perplexed me about the Immix collector was how to effectively defragment the heap via evacuation while keeping just 2-3% of space as free blocks for an evacuation reserve. The original Immix paper states:

To evacuate the object, the collector uses the same allocator as the mutator, continuing allocation right where the mutator left off. Once it exhausts any unused recyclable blocks, it uses any completely free blocks. By default, immix sets aside a small number of free blocks that it never returns to the global allocator and only ever uses for evacuating. This headroom eases defragmentation and is counted against immix's overall heap budget. By default immix reserves 2.5% of the heap as compaction headroom, but [...] is fairly insensitive to values ranging between 1 and 3%.

To Immix, a "recyclable" block is partially full: it contains surviving data from a previous collection, but also some holes in which to allocate. But when would you have recyclable blocks at evacuation-time? Evacuation occurs as part of collection. Collection usually occurs when there's no more memory in which to allocate. At that point any recyclable block would have been allocated into already, and won't become recyclable again until the next trace of the heap identifies the block's surviving data. Of course after the next trace they could become "empty", if no object survives, or "full", if all lines have survivor objects.

In general, after a full allocation cycle, you don't know much about the heap. If you could easily know where the live data and the holes were, a garbage collector's job would be much easier :) Any algorithm that starts from the assumption that you know where the holes are can't be used before a heap trace. So I was not sure what the Immix paper meant here about allocating into recyclable blocks.

Thinking on it again, I realized that Immix might trigger collection early sometimes, before it has exhausted the previous cycle's set of blocks in which to allocate. As we discussed earlier, there is a case in which you might want to trigger an early compaction: when a large object allocator runs out of blocks to decommission from the immix space. And if one evacuating collection didn't yield enough free blocks, you might trigger the next one early, reserving some recyclable and empty blocks as evacuation targets.

when do you know what you know: lazy and eager

Consider a basic question, such as "how many bytes in the heap are used by live objects". In general you don't know! Indeed you often never know precisely. For example, concurrent collectors often have some amount of "floating garbage" which is unreachable data but which survives across a collection. And of course you don't know the difference between floating garbage and precious data: if you did, you would have collected the garbage.

Even the idea of "when" is tricky in systems that allow parallel mutator threads. Unless the program has a total ordering of mutations of the object graph, there's no one timeline with respect to which you can measure the heap. Still, Immix is a stop-the-world collector, and since such collectors synchronously trace the heap while mutators are stopped, these are times when you can exactly compute properties about the heap.

Let's return to the question of measuring live bytes. For an evacuating semi-space, knowing the number of live bytes after a collection is trivial: all survivors are packed into to-space. But for a mark-sweep space, you would have to compute this information. You could compute it at mark-time, while tracing the graph, but doing so takes time, which means delaying the time at which mutators can start again.

Alternately, for a mark-sweep collector, you can compute free bytes at sweep-time. This is the phase in which you go through the whole heap and return any space that wasn't marked in the last collection to the allocator, allowing it to be used for fresh allocations. This is the point in the garbage collection cycle in which you can answer questions such as "what is the set of recyclable blocks": you know what is garbage and you know what is not.

Though you could sweep during the stop-the-world pause, you don't have to; sweeping only touches dead objects, so it is correct to allow mutators to continue and then sweep as the mutators run. There are two general strategies: spawn a thread that sweeps as fast as it can (concurrent sweeping), or make mutators sweep as needed, just before they allocate (lazy sweeping). But this introduces a lag between when you know and what you know—your count of total live heap bytes describes a time in the past, not the present, because mutators have moved on since then.

For most collectors with a sweep phase, deciding between eager (during the stop-the-world phase) and deferred (concurrent or lazy) sweeping is very easy. You don't immediately need the information that sweeping allows you to compute; it's quite sufficient to wait until the next cycle. Moving work out of the stop-the-world phase is a win for mutator responsiveness (latency). Usually people implement lazy sweeping, as it is naturally incremental with the mutator, naturally parallel for parallel mutators, and any sweeping overhead due to cache misses can be mitigated by immediately using swept space for allocation. The case for concurrent sweeping is less clear to me, but if you have cores that would otherwise be idle, sure.

eager coarse sweeping

Immix is interesting in that it chooses to sweep eagerly, during the stop-the-world phase. Instead of sweeping irregularly-sized objects, however, it sweeps over its "line mark" array: one byte for each 128-byte "line" in the mark space. For 32 kB blocks, there will be 256 bytes per block, and line mark bytes in each 4 MB slab of the heap are packed contiguously. Therefore you get relatively good locality, but this just mitigates a cost that other collectors don't have to pay. So what does eager sweeping over these coarse 128-byte regions buy Immix?
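A rough sketch of that coarse sweep, in illustrative Python (the sizes and the empty/recyclable/full classification follow the paper; the code itself is mine, not Immix's):

```python
LINE_SIZE = 128                             # bytes per line, as in the paper
BLOCK_SIZE = 32 * 1024                      # bytes per block
LINES_PER_BLOCK = BLOCK_SIZE // LINE_SIZE   # 256 line-mark bytes per block

def classify_block(line_marks):
    """Classify one block from its line-mark bytes (nonzero = the line
    holds a survivor). Returns 'empty', 'full', or 'recyclable'."""
    marked = sum(1 for m in line_marks if m)
    if marked == 0:
        return "empty"        # candidate for the lospace or evacuation reserve
    if marked == LINES_PER_BLOCK:
        return "full"
    return "recyclable"       # has holes to allocate or evacuate into

def eager_sweep(heap_line_marks):
    """One pass over the packed line-mark bytes classifies every block."""
    return [classify_block(heap_line_marks[i:i + LINES_PER_BLOCK])
            for i in range(0, len(heap_line_marks), LINES_PER_BLOCK)]
```

The point is that the sweep touches only the packed mark bytes, never the objects themselves, which is why it can afford to run inside the pause.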

Firstly, eager sweeping buys you eager identification of empty blocks. If your large object space needs to steal blocks from the mark space, but the mark space doesn't have enough empties, it can just trigger collection and then it knows if enough blocks are available. If no blocks are available, you can grow the heap or signal out-of-memory. If the lospace (large object space) runs out of blocks before the mark space has used all recyclable blocks, that's no problem: evacuation can move the survivors of fragmented blocks into these recyclable blocks, which have also already been identified by the eager coarse sweep.

Without eager empty block identification, if the lospace runs out of blocks, firstly you don't know how many empty blocks the mark space has. Sweeping is a kind of wavefront that moves through the whole heap; empty blocks behind the wavefront will be identified, but those ahead of the wavefront will not. Such a lospace allocation would then have to either wait for a concurrent sweeper to advance, or perform some lazy sweeping work. The expected latency of a lospace allocation would thus be higher, without eager identification of empty blocks.

Secondly, eager sweeping might reduce allocation overhead for mutators. If allocation just has to identify holes and not compute information or decide on what to do with a block, maybe it go brr? Not sure.

lines, lines, lines

The original Immix paper also notes a relative insensitivity of the collector to line size: 64 or 256 bytes could have worked just as well. This was a somewhat surprising result to me but I think I didn't appreciate all the roles that lines play in Immix.

Obviously line size affects the worst-case fragmentation, though this is mitigated by evacuation (which evacuates objects, not lines). This I got from the paper. In this case, smaller lines are better.

Line size affects allocation-time overhead for mutators, though which way I don't know: scanning for holes will be easier with fewer lines in a block, but smaller lines would contain more free space and thus result in fewer collections. I can only imagine though that with smaller line sizes, average hole size would decrease and thus medium-sized allocations would be harder to service. Something of a wash, perhaps.

However, consider the thought experiment: why not just have 16-byte lines? How crazy would that be? I think the impediment to having such a precise line size would mainly be Immix's eager sweep, as a fine-grained traversal of the heap would process much more data and incur possibly-unacceptable pause time overheads. But, in such a design you would do away with some other downsides of coarse-grained lines: a side table of mark bytes would make the line mark table redundant, and you eliminate much possible "dark matter" hidden by internal fragmentation in lines. You'd need to defer sweeping. But then you lose eager identification of empty blocks, and perhaps also the ability to evacuate into recyclable blocks. What would such a system look like?
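To put numbers on that thought experiment (my arithmetic, not the paper's): with one mark byte per line, shrinking lines from 128 bytes to 16 multiplies the metadata an eager sweep must traverse by eight.

```python
BLOCK_SIZE = 32 * 1024   # bytes per block, as in the Immix paper

def mark_bytes_per_block(line_size):
    """With one mark byte per line, how much line-mark metadata
    must an eager sweep traverse per block?"""
    return BLOCK_SIZE // line_size

assert mark_bytes_per_block(128) == 256    # Immix's default line size
assert mark_bytes_per_block(16) == 2048    # 8x the sweep data for 16-byte lines
```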

Readers that have gotten this far will be pleased to hear that I have made some investigations in this area. But, this post is already long, so let's revisit this in another dispatch. Until then, happy allocations in all regions.

by Andy Wingo at Sunday, August 7, 2022

Saturday, July 30, 2022

Joe Marshall

Named Lambda and Named Let

Suppose you want to map the "add three" function over a list. You don't need to define a name for the function, you can just use a lambda expression: (lambda (n) (+ n 3)). This creates an anonymous "add three" function.

Now suppose you want to map the "factorial" function over a list. You start with (lambda (x) (if (zerop x) 1 ... , but how can you recursively call an anonymous function? We need the function to have a local name within its own body. One option is to use the Y operator:

* (map 'list (Y (lambda (fact)
                  (lambda (x)
                    (if (zerop x)
                        1
                        (* x (funcall fact (1- x)))))))
       '(3 5 7))
(6 120 5040)
but another popular option is to provide a new special form
* (map 'list (named-lambda fact (x)
               (if (zerop x)
                   1
                   (* x (fact (1- x)))))
     '(3 5 7))
(6 120 5040)
The name fact is bound to the lambda expression only within the body of the lambda expression. You don't need to defun a factorial procedure, you can just use a named-lambda.

A little puzzle for you: write named-lambda. My answer below.

Just as a let is syntactic sugar for a lambda application, a named-let is syntactic sugar for a named-lambda application. The name is bound to the lambda expression that performs the variable binding, so you can use that name to make a recursive call. In effect, you can re-invoke the named-let expression with a fresh set of values.

Scheme hackers will be familiar with named-let, but it isn't usual in Common Lisp. It's an easy transformation:

(named-let recur ((x '(a list))
                  (y 22))    
   (body)) =>

(funcall (named-lambda recur (x y) (body)) '(a list) 22)
named-let is the bee's knees for ad hoc iteration. The iteration variables are bound to their initial values by the named-let, and the body can initiate the next iteration by tail-calling the named-let with the updated values for the iteration variables. Since there is no constraint on where or when you iterate, or how you update the iteration variables, this allows very general iteration.

I have seen a tendency for Scheme hackers to overdo it with named lets. You don't need a named-let when map will do. It's usually better to express an iteration through a higher order function than to write yet another ad hoc loop in your code. But named-let is a useful and powerful tool when the simpler methods aren't cutting it.

Here's what I came up with for named-lambda:

(defmacro named-lambda (name (&rest arglist) &body body)
  `(labels ((,name ,arglist ,@body))
     (function ,name)))

by Joe Marshall at Saturday, July 30, 2022

Let's Play Wordle

Wordle is popular these days. Let's teach the computer how to play.

As usual, I'm using the series library. Also, if you are coding along, there's a function or two I omitted that you'll have to write yourself. Note that I use a named-let, too.

To play Wordle, you try to deduce the secret word by making guesses. After each guess, you are told which letters you got exactly right and which are right, but in the wrong position. Each guess narrows down the number of possible answers until there is one left. It's a simple matter of making good guesses.

Here I define a word as a simple string of five characters and a predicate for testing that the word is all lowercase, alphabetic, ascii characters.

(deftype word () `(simple-string 5))

(defun valid-word? (thing)
  (and (typep thing 'word)
       (str:downcase? thing)
       (every #'alpha-char-p thing)
       (every #'str:ascii-char-p thing)))

I don't use a satisfies clause in the word type. satisfies can cause issues with optimization and performance because it can be hard to control where the compiler inserts type checks. I just manually call valid-word? when necessary.

To read a word file, we read each line in the file, trim it, and select only the valid words. This works on the canonical word files, but you can read words from the system dictionary or other places if you want.

(defun read-word-file (pathname)
  (collect 'bag
    (choose-if #'valid-word?
      (map-fn 'string #'str:trim
        (scan-file pathname #'read-line)))))

(defparameter +word-file+ "~/wordle/words.txt")
(defparameter +answer-file+ "~/wordle/answers.txt")

(defparameter +all-words+ (read-word-file +word-file+))
(defparameter +all-answers+ (read-word-file +answer-file+))

We need to score a guess. When you make a guess, the squares under the letters turn green if the letter is correct, yellow if the letter is incorrect, but appears in the answer, and gray if the letter does not appear in the answer. We'll just return a list of the colors (as keywords). For example, (score-guess "react" "relay") => (:green :green :yellow :gray :gray)

score-guess first needs a list of the letters in the answer that don't match the guess:

(let ((sg (scan 'string guess))
      (sa (scan 'string answer)))
  (collect 'bag
    (choose (map-fn 'boolean #'char/= sg sa) sa)))
then we walk the guess. If the guess character equals the answer character, we cons a :green on to the score. If the guess character is a member of the unmatched answer characters, we cons a :yellow on to the score and delete that character from the unmatched characters. Otherwise, we cons a :gray on to the score.
(defun score-guess (guess answer)
  (declare (type word guess answer))
  (named-let walk ((index 0)
                   (score '())
                   (unmatched-chars (let ((sg (scan 'string guess))
                                          (sa (scan 'string answer)))
                                      (collect 'bag
                                        (choose (map-fn 'boolean #'char/= sg sa) sa)))))
    (if (>= index 5)
        (nreverse score)
        (let ((guess-char (schar guess index)))
          (cond ((char= guess-char (schar answer index))
                 (walk (1+ index) (cons :green score) unmatched-chars))
                ((member guess-char unmatched-chars)
                 (walk (1+ index) (cons :yellow score) (delete guess-char unmatched-chars)))
                (t
                 (walk (1+ index) (cons :gray score) unmatched-chars)))))))

Once we've made a guess and have a score, we'll want to narrow down the possible words. We just go over the word list and keep the words that have a matching score.

(defun prune-words (guess score words)
  (declare (optimizable-series-function) (off-line-port words))
  (choose-if (lambda (word)
               (equal (score-guess guess word) score))
             words))
We'll need a strategy for picking a word to guess. Here's an easy, naive one to start with: if there is only one possible word left, guess that one, otherwise guess a completely random word and narrow down the possibility list.

(defun strategy/random-word (possibilities)
  (if (= (length possibilities) 1)
      (car possibilities)
      (random-word +all-words+)))

So let's imagine the top level. The play function will play a single round of Wordle. We'll be keeping track of the possible words as we go. We choose a guess based on our strategy, then score the guess. If we got the right answer, we're done, but otherwise we narrow down the list of possibilities to those that have the same score and play the next round.

(defun play (strategy &optional (round 1)
                        (possibilities +all-answers+)
                        (secret-word (random-word +all-answers+)))
  (let* ((guess (funcall strategy possibilities))
         (score (score-guess guess secret-word)))
    (format t "~&~d guessing ~s ~s ..." round guess score)
    (if (equal score '(:green :green :green :green :green))
        (progn (format t "correct.") round)
        (let ((new-possibilities
                (collect 'bag (prune-words guess score (scan 'list possibilities)))))
          (format t "narrowed to ~d possibilities." (length new-possibilities))
          (play strategy (+ round 1) new-possibilities secret-word)))))

WORDLE> (play #'strategy/random-word)

1 guessing "culty" (:GRAY :GRAY :GRAY :GRAY :GRAY) ...narrowed to 519 possibilities.
2 guessing "hings" (:GRAY :GRAY :GRAY :GRAY :GRAY) ...narrowed to 101 possibilities.
3 guessing "india" (:GRAY :GRAY :YELLOW :GRAY :GRAY) ...narrowed to 9 possibilities.
4 guessing "lauds" (:GRAY :GRAY :GRAY :YELLOW :GRAY) ...narrowed to 8 possibilities.
5 guessing "stedd" (:GRAY :GRAY :GRAY :GRAY :GREEN) ...narrowed to 2 possibilities.
6 guessing "khets" (:GRAY :GRAY :GRAY :GRAY :GRAY) ...narrowed to 2 possibilities.
7 guessing "bared" (:GREEN :GRAY :YELLOW :GRAY :GREEN) ...narrowed to 1 possibilities.
8 guessing "brood" (:GREEN :GREEN :GREEN :GREEN :GREEN) ...correct.

It plays Wordle. Not very well, but it plays. This strategy seems to average a bit more than seven guesses a game. A better strategy should reduce this average.

When you guess a word, you divide the space of possible answers into a set of equivalence classes by score. I picture these as a set of bins, each labeled with a different score, like (:green :gray :gray :yellow :gray). Making a guess divides the list of possible words among the bins. A bad guess will only use a few bins and have uneven bins. A good guess will use a larger set of bins and divide things more evenly.

We'll need a function to collect the counts of an item in a series:

(defun collect-counts (test items)
  (declare (optimizable-series-function))
  (collect-fn t
              (lambda () (make-hash-table :test test))
              (lambda (table item)
                (incf (gethash item table 0))
                table)
              items))
So now we go through a series of words, score the guess against each one, and count how many times we get each score.
(defun partition-words (guess words)
  (declare (optimizable-series-function))
  (collect-counts 'equal
                  (map-fn 'list #'score-guess
                          (series guess)
                          words)))
This returns a hash table that maps scores to the number of words matching that score. We need to measure how good a job this table does at narrowing down the word list.

We'll need a couple of helpers:

(defun weighted-sum (weights elements)
  (declare (optimizable-series-function))
  (collect-sum (map-fn 'real #'* weights elements)))

(defun scan-hash-values (hash-table)
  (declare (optimizable-series-function))
  (multiple-value-bind (keys values) (scan-hash hash-table)
    (declare (ignore keys))
    values))

Now we have to decide how to evaluate how well a partition (set of bins) narrows down the possible word list. Suppose our word list originally had 128 words. That's 2⁷ items, so it would take seven binary splits to single out a word. Now suppose after narrowing, we find we're down to 16 words. That's 2⁴ items, so the narrowing is equivalent to three binary splits. The value of an entire set of bins is the weighted average of the narrowing of each bin.
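That weighted average is easy to state directly; here is an illustrative Python sketch of the same calculation (the names are mine, mirroring the reasoning above):

```python
from math import log2

def partition_value(bin_sizes):
    """Expected narrowing, in binary splits, of a guess whose
    score bins have the given sizes."""
    total = sum(bin_sizes)
    original_bits = log2(total)
    return sum((size / total) * (original_bits - log2(size))
               for size in bin_sizes)

# 128 words split evenly into 8 bins of 16: every bin is a 7 - 4 = 3-bit
# narrowing, so the guess is worth exactly 3 binary splits.
assert abs(partition_value([16] * 8) - 3.0) < 1e-9
```

A guess that dumps every word into one bin scores zero; the more bins, and the more even the bins, the higher the score.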

(defun evaluate-partition (partition)
  (let* ((original-size (collect-sum (scan-hash-values partition)))
         (original-bits (log2 original-size)))

    (flet ((info-gain (bin-size)
             (- original-bits (log2 bin-size)))

           (weight (bin-size)
             (/ (coerce bin-size 'double-float)
                (coerce original-size 'double-float))))

      (let ((bin-sizes (scan-hash-values partition)))
        (weighted-sum
         (map-fn 'real #'weight bin-sizes)
         (map-fn 'real #'info-gain bin-sizes))))))

(defun evaluate-guess (guess possibilities)
  (evaluate-partition (partition-words guess (scan 'list possibilities))))

(defun best-guess (guesses possibilities)
  (best #'> guesses :key (lambda (guess) (evaluate-guess guess possibilities))))

WORDLE> (play #'strategy/best-word)

1 guessing "soare" (:GRAY :GREEN :GRAY :GRAY :GRAY) ...narrowed to 87 possibilities.
2 guessing "culty" (:GRAY :GRAY :YELLOW :GRAY :GRAY) ...narrowed to 1 possibilities.
3 guessing "login" (:GREEN :GREEN :GREEN :GREEN :GREEN) ...correct.

With this strategy, we seem to average about 3.5 guesses per game. This is much better than the bit over seven we had before.

by Joe Marshall at Saturday, July 30, 2022

Wednesday, July 27, 2022


Call-tables and Medea

Here’s a simple way to write JSON from call-tables. It’s far from optimal for now since it uses an intermediate representation instead of writing directly:

(import brev-separate medea srfi-69)

(json-unparsers
 (append
  `((,symbol? . ,(o write symbol->string))
    (,call-table*? . ,(fn (write-json
                           ((over (cons (car x)
                                        (list->vector (cdr x))))
                            (hash-table->alist (x))))))
    (,call-table? . ,(fn (write-json
                          (hash-table->alist (x))))))
  (json-unparsers)))

It makes writing them straight-forward:

(define fruit-color
  (ctq* apple red pear (green yellow)))

(write-json fruit-color)

by Idiomdrottning at Wednesday, July 27, 2022

Tuesday, July 26, 2022

Arthur A. Gleckler

Vertical Monitor


I started my computing journey on a TRS-80 Model I, whose screen had sixty-four columns of characters in sixteen rows. It was amazing how much one could get done with such a small screen. Later, I used an Ann Arbor Ambassador terminal. It could not only display much more text, but it did so in a vertical orientation. Seeing a whole function at a time made programming easier.

As display resolution marched forward, I forgot all about vertical monitors. A few months ago, something reminded me of that Ambassador terminal, and I decided to try a modern equivalent. I splurged on a 27" UltraFine UHD IPS USB-C HDR Monitor with Ergo Stand, and rotated it to vertical. I've been in nerd heaven ever since. On that monitor, my Emacs displays 321 lines of text! That's enough to see the context of even terrible spaghetti code. And when even that is insufficient, Emacs split-window-below combined with Follow Mode shows 642 lines at once.

Unless I'm sitting down.


I know many people who have had repetitive stress injuries from long hours at the keyboard, so I try hard to follow the best advice on ergonomics. One rule is that the top of your screen should be at eye level. But my new vertical monitor is so tall that it runs into the desk. I can't lower it any further, and the top quarter of it is above eye level when I'm sitting down. I have a sit-stand desk, though, so when I'm standing up, that's not a problem. It never occurred to me that a standing desk would be good for my eyesight.

But what about when I sit down? Standing up all day is sometimes too much. Software to the rescue.

Window Chord

When I'm sitting, I sacrifice the top quarter of the screen to ergonomics, shrinking my windows to fit the bottom three quarters. To make this easy, I added a few commands to my Window Chord (Github) window management package:

ctrl-alt-shift-3      geometry other-monitor   Move window to other monitor.
ctrl-alt-shift-Home   tall                     Make window take full height of monitor.
ctrl-alt-shift-End    short                    Make window take ¾ of monitor height.
ctrl-alt-shift-PgDn   geometry bottom          Move window to bottom of monitor.
ctrl-alt-shift-PgUp   geometry top             Move window to top of monitor.

Even with this adjustment, I can get 157 lines of text (or 314 with Follow mode), which is great.

by Arthur A. Gleckler at Tuesday, July 26, 2022

Saturday, July 23, 2022

Scheme Requests for Implementation

SRFI 200: Pattern Matching

SRFI 200 is now in withdrawn status.

This SRFI discusses some of the existing pattern-matching libraries for the Scheme programming language — namely, the pattern matcher presented by Andrew K. Wright and Robert Cartwright in the paper "A Soft Type System for Scheme", the pattern matcher developed by Dan Friedman, Erik Hilsdale and Kent Dybvig, the racket/match module distributed with the Racket programming environment, as well as the Bigloo and Gerbil pattern matchers distributed with their respective implementations. It then extracts a pattern syntax which is compatible with three of those implementations and provides extrinsic rationale for that syntax. It also provides a simple implementation of a pattern matcher which conforms to the specification of a pattern language provided in this document.

by Panicz Maciej Godek at Saturday, July 23, 2022

Thursday, July 21, 2022


Four short-term wishes for Fedi

One way forward, I feel, is:

  • Remove the “whole known network” tab.
  • Instead, focus on a moderated local timeline, and a home timeline of people you’ve selected to follow (from across all of Fedi).
  • Make it even easier to interact with off-site posts and accounts (including “weird Fedi” like Pixelfed and Lemmy), in addition to the “paste the URL in the search box” (maybe make something like that but more specific, faster, and more reliable), maybe a browser extension or share-sheet–accessible app?
  • Keep trying to fight the rando weird harasso instances that plague our mentions. Maybe use tech from email? The nightmare scenario for Fedi would be an app that can churn out new, uniquely named, one-off harassing instances. Maybe we need to think of something preemptive for that scenario.

I simultaneously want each instance to feel more like a tight-knit, local community (by showing the local timeline, and having each instance have their own style of moderation) while also making it even easier to use one account to follow your favorite Lemmy groups, Pixelfed posters, and fave people from all over, to like and boost posts.

Best of both worlds.

Your own “home” timeline which is a sprawling eclectic selection of all your own faves from all over, and a local timeline that’s what’s going on in your instance right now, maybe the latter doesn’t even show boosts.

These four items are my dreams for the relatively short-term horizon.

Longer term, stuff like Bonfire’s attempt to revive visibility circles is interesting. Making social media a little more social and a lot less world-readable.

by Idiomdrottning at Thursday, July 21, 2022

Talking about my generation

I was really hopeful during the “downhill battle” era for FOSS and commons.

Then came three big blows:

  1. Smartphones. We were caught up or ahead on the desktop, then the arena was moved to the pocket with these super locked down computers.
  2. Web silos. Facebook, Twitter, Instagram, MySpace, YouTube ate the web. This specifically is what I mean as a sense of “loss”. I know there is IRC and mailing lists for some projects but there used to be for all kinds of mainstream topics.
  3. DRM-laden streaming sites like Spotify and Netflix.

I feel that we who went through edit autoexec.bat, notepad index.html, ./configure && make have a different perspective on this. That’s not to say that those three were good things, they were bad, but they were markers; I’ve just noticed that more people from that era share the perspective of how messed up the current tech stacks are. They paved paradise and put up a parking lot.

The advantage of the current era is easier UI. Silents and boomers were overjoyed in the era of iPhone and Facebook since they were finally “let in”, and millennials and younger grew up in a Truman Show world where Insta and YouTube were as established and inescapable as TV, radio, roads, and grocery stores had been for us. It’s like that old story of what a fish thinks about water.

(With plenty of individual exceptions in all directions because I’m talking cohort trends here; we have our pioneer elders, our non-nerdy peers, and people of all ages who wanna see behind the curtain of how the tech world works.)

We are the generation that see the rot under the surface. That see walled garden web silos and sideload-locked devices as steps backwards. We had it all and lost it.

by Idiomdrottning at Thursday, July 21, 2022

Wednesday, July 20, 2022

Andy Wingo

unintentional concurrency

Good evening, gentle hackfolk. Last time we talked about heuristics for when you might want to compact a heap. Compacting garbage collection is nice and tidy and appeals to our orderly instincts, and it enables heap shrinking and reallocation of pages to large object spaces and it can reduce fragmentation: all very good things. But evacuation is more expensive than just marking objects in place, and so a production garbage collector will usually just mark objects in place, and only compact or evacuate when needed.

Today's post is more details!


Just because it's been, oh, a couple decades, I would like to reintroduce a term I learned from Marnanel years ago on advogato, a nerdy group blog kind of a site. As I recall, there is a word that originates in the Oxbridge social environment, "narg", from "Not A Real Gentleman", and which therefore denotes things that not-real-gentlemen do: nerd out about anything that's not, like, fox-hunting or golf; or generally spending time on something not because it will advance you in conventional hierarchies but because you just can't help it, because you love it, because it is just your thing. Anyway, in the spirit of pursuits that are really not What One Does With One's Time, this post is dedicated to the word "nargery".

side note, bis: immix-style evacuation versus mark-compact

In my last post I described Immix-style evacuation, and noted that it might take a few cycles to fully compact the heap and that it has a few pathologies: the heap might never reach full compaction, and Immix might run out of free blocks in which to evacuate.

With these disadvantages, why bother? Why not just do a single mark-compact pass and be done? I implicitly asked this question last time but didn't really answer it.

For some people the answer will be, yep, yebo: mark-compact is the right answer. And yet, there are a few reasons that one might choose to evacuate a fraction of the heap instead of compacting it all at once.

The first reason is object pinning. Mark-compact systems assume that all objects can be moved; you can't usefully relax this assumption. Most algorithms "slide" objects down to lower addresses, squeezing out the holes, and therefore every live object's address needs to be available to use when sliding down other objects with higher addresses. And yet, it would be nice sometimes to prevent an object from being moved. This is the case, for example, when you grant a foreign interface (e.g. a C function) access to a buffer: if garbage collection happens while in that foreign interface, it would be nice to be able to prevent garbage collection from moving the object out from under the C function's feet.

Another reason to want to pin an object is because of conservative root-finding. Guile currently uses the Boehm-Demers-Weiser collector, which conservatively scans the stack and data segments for anything that looks like a pointer to the heap. The garbage collector can't update such a global root in response to compaction, because you can't be sure that a given word is a pointer and not just an integer with an inconvenient value. In short, objects referenced by conservative roots need to be pinned. I would like to support precise roots at some point but part of my interest in Immix is to allow Guile to move to a better GC algorithm, without necessarily requiring precise enumeration of GC roots. Optimistic partial evacuation allows for the possibility that any given evacuation might fail, which makes it appropriate for conservative root-finding.
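To make the conservative-roots idea concrete, here is a minimal sketch (all names and the object layout are invented for illustration, not Guile's or BDW's actual representation): every stack word whose value falls inside a live object's address range might be a pointer, so the object it could point to must be pinned and excluded from evacuation.

```python
# Hypothetical sketch of conservative root scanning: a stack word whose
# value lands inside a live object's address range is treated as a possible
# pointer, and that object gets pinned (it cannot be evacuated), because
# the collector can't prove the word is merely an integer.

def conservative_scan(stack_words, heap_objects):
    """heap_objects: list of (start, size) address ranges of live objects.
    Returns the set of object start addresses that must be pinned."""
    pinned = set()
    for word in stack_words:
        for start, size in heap_objects:
            if start <= word < start + size:
                # Might be a real pointer, might be an integer with an
                # inconvenient value; we must assume the worst and pin.
                pinned.add(start)
    return pinned

objects = [(0x1000, 64), (0x2000, 32)]
stack = [0x1010,   # interior pointer into the first object: pin it
         0x2000,   # pointer to the start of the second object: pin it
         12345]    # small integer outside any range: ignored
print(sorted(hex(a) for a in conservative_scan(stack, objects)))
```

Note that the scan can only add pins, never remove them, which is why a word that merely looks like a pointer quietly costs you a compaction opportunity.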

Finally, as moving objects has a cost, it's reasonable to want to only incur that cost for the part of the heap that needs it. In any given heap, there will likely be some data that stays live across a series of collections, and which, once compacted, can't be profitably moved for many cycles. Focussing evacuation on only the part of the heap with the lowest survival rates avoids wasting time on copies that don't result in additional compaction.
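One way to picture "focus evacuation on the lowest-survival part of the heap" is a simple greedy candidate selection, in the spirit of Immix's defragmentation heuristic (this is a sketch with invented names, not the actual algorithm from the papers): evacuate the sparsest blocks first, since they reclaim the most space per byte copied, and only as many as fit in the reserved target blocks.

```python
# A sketch of choosing evacuation candidates: blocks with the least live
# data yield the most reclaimed space per byte copied, so take those first,
# bounded by the bytes available in the reserved evacuation target blocks.

def choose_evacuation_sources(blocks, reserve_bytes):
    """blocks: list of (block_id, live_bytes). Returns ids to evacuate."""
    chosen = []
    budget = reserve_bytes
    # Sparsest first: least copying, most fragmentation recovered.
    for block_id, live_bytes in sorted(blocks, key=lambda b: b[1]):
        if live_bytes <= budget:
            chosen.append(block_id)
            budget -= live_bytes
    return chosen

blocks = [("a", 900), ("b", 100), ("c", 400), ("d", 300)]
print(choose_evacuation_sources(blocks, 512))  # ['b', 'd']
```

The mostly-full block "a" never gets chosen: copying 900 live bytes to reclaim a little slack is exactly the wasted work the paragraph above is trying to avoid.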

(I should admit one thing: sliding mark-compact compaction preserves allocation order, whereas evacuation does not. The memory layout of sliding compaction is more optimal than evacuation.)

multi-cycle evacuation

Say a mutator runs out of memory, and therefore invokes the collector. The collector decides for whatever reason that we should evacuate at least part of the heap instead of marking in place. How much of the heap can we evacuate? The answer depends primarily on how many free blocks you have reserved for evacuation. These are known-empty blocks that haven't been allocated into by the last cycle. If you don't have any, you can't evacuate! So probably you should keep some around, even when performing in-place collections. The Immix papers suggest 2% and that works for me too.

Then you evacuate some blocks. Hopefully the result is that after this collection cycle, you have more free blocks. But you haven't compacted the heap, at least probably not on the first try: not into 2% of total space. Therefore you tell the mutator to put any empty blocks it finds as a result of lazy sweeping during the next cycle onto the evacuation target list, and then the next cycle you have more blocks to evacuate into, and more and more and so on until after some number of cycles you fall below some overall heap fragmentation low-watermark target, at which point you can switch back to marking in place.

I don't know how this works in practice! In my test setup, which triggers compaction at 10% fragmentation and continues until it drops below 5%, it's rare that it takes more than 3 cycles of evacuation until the heap drops to effectively 0% fragmentation. Of course I had to introduce fragmented allocation patterns into the microbenchmarks to even cause evacuation to happen at all. I look forward to some day soon testing with real applications.
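The watermark scheme above can be modeled as a toy control loop (all the numbers and names here are illustrative, not from an actual collector): start evacuating when fragmentation exceeds a high watermark, keep evacuating each cycle as freshly-swept empty blocks join the target list, and switch back to marking in place once fragmentation falls below the low watermark.

```python
# Toy model of multi-cycle evacuation: hysteresis between a high and a low
# fragmentation watermark decides whether each GC cycle evacuates or marks
# in place. Each evacuation cycle recovers some fragmentation.

HIGH_WATERMARK = 0.10   # start evacuating above 10% fragmentation
LOW_WATERMARK = 0.05    # stop once fragmentation drops below 5%

def run_cycles(fragmentation, recovered_per_cycle, cycles=10):
    evacuating = False
    log = []
    for _ in range(cycles):
        if not evacuating and fragmentation > HIGH_WATERMARK:
            evacuating = True
        if evacuating:
            log.append(("evacuate", round(fragmentation, 3)))
            # Compacting some sparse blocks reduces fragmentation...
            fragmentation = max(0.0, fragmentation - recovered_per_cycle)
            # ...and once the heap is tidy enough, fall back to in-place.
            if fragmentation < LOW_WATERMARK:
                evacuating = False
        else:
            log.append(("mark-in-place", round(fragmentation, 3)))
    return log

for mode, frag in run_cycles(fragmentation=0.12, recovered_per_cycle=0.04):
    print(mode, frag)
```

With these made-up rates the model settles after two evacuation cycles, consistent with the observation that a handful of cycles usually suffices.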


Just as a terminological note, in the world of garbage collectors, "parallel" refers to multiple threads being used by a garbage collector. Parallelism within a collector is essentially an implementation detail; when the world is stopped for collection, the mutator (the user program) generally doesn't care if the collector uses 1 thread or 15. On the other hand, "concurrent" means the collector and the mutator running at the same time.

Different parts of the collector can be concurrent with the mutator: for example, sweeping, marking, or evacuation. Concurrent sweeping is just a detail, because it only visits dead objects. Concurrent marking is interesting, because it can significantly reduce stop-the-world pauses by performing most of the computation while the mutator is running. It's tricky, as you might imagine; the collector traverses the object graph while the mutator is, you know, mutating it. But there are standard techniques to make this work. Concurrent evacuation is a nightmare. It's not that you can't implement it; you can. But it's very very hard to get an overall performance win from concurrent evacuation/copying.

So if you are looking for a good bargain in the marketplace of garbage collector algorithms, it would seem that you need to avoid concurrent copying/evacuation. It's an expensive product that would seem to not buy you very much.

All that is just a prelude to an observation that there is a funny source of concurrency even in some systems that don't see themselves as concurrent: mutator threads marking their own roots. To recall, when you stop the world for a garbage collection, all mutator threads have to somehow notice the request to stop, reach a safepoint, and then stop. Then the collector traces the roots from all mutators and everything they reference, transitively. Then you let the threads go again. Thing is, once you get more than a thread or four, stopping threads can take time. You'd be tempted to just have threads notice that they need to stop, then traverse their own stacks at their own safepoint to find their roots, then stop. But, this introduces concurrency between root-tracing and other mutators that might not have seen the request to stop. For marking, this concurrency can be fine: you are just setting mark bits, not mutating the roots. You might need to add an additional mark pattern that can be distinguished from marked-last-time and marked-the-time-before-but-dead-now, but that's a detail. Fine.
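The "additional mark pattern" trick can be sketched as follows (a toy model with invented names, not Whippet's or any real collector's code): instead of a boolean mark bit that must be cleared between cycles, rotate which value means "live this cycle", so a stale mark from a previous cycle is never mistaken for a fresh one, even while some threads are still marking their own roots.

```python
# Sketch of rotating mark patterns: three distinguishable mark states let
# the collector tell marked-this-cycle apart from marked-last-cycle and
# marked-before-but-dead-now, without clearing bits between cycles.

MARK_PATTERNS = [1, 2, 3]

class Collector:
    def __init__(self):
        self.cycle = 0
        self.marks = {}  # object -> mark pattern it was last marked with

    def current_pattern(self):
        return MARK_PATTERNS[self.cycle % len(MARK_PATTERNS)]

    def start_cycle(self):
        # Rotating the pattern implicitly invalidates all old marks.
        self.cycle += 1

    def mark(self, obj):
        # Only *sets* the current pattern; never mutates the object itself,
        # which is why concurrent root marking stays benign.
        self.marks[obj] = self.current_pattern()

    def is_live(self, obj):
        return self.marks.get(obj) == self.current_pattern()

gc = Collector()
gc.start_cycle()
gc.mark("x")
print(gc.is_live("x"))   # marked with this cycle's pattern
gc.start_cycle()
print(gc.is_live("x"))   # old mark no longer counts as live
```

This is exactly why marking tolerates the ragged-stop concurrency while evacuation does not: marking only writes mark state on the side, whereas evacuation moves the object itself out from under other threads.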

But if you instead start an evacuating collection, the gates of hell open wide and toothy maws and horns fill your vision. One thread could be stopping and evacuating the objects referenced by its roots, while another hasn't noticed the request to stop and is happily using the same objects: chaos! You are trying to make a minor optimization to move some work out of the stop-the-world phase but instead everything falls apart.

Anyway, this whole article was really to get here and note that you can't do ragged-stops with evacuation without supporting full concurrent evacuation. Otherwise, you need to postpone root traversal until all threads are stopped. Perhaps this is another argument that evacuation is expensive, relative to marking in place. In practice I haven't seen the ragged-stop effect making so much of a difference, but perhaps that is because evacuation is infrequent in my test cases.

Zokay? Zokay. Welp, this evening's nargery was indeed nargy. Happy hacking to all collectors out there, and until next time.

by Andy Wingo at Wednesday, July 20, 2022

Tuesday, July 19, 2022

Scheme Requests for Implementation

SRFI 211: Scheme Macro Libraries

SRFI 211 is now in final status.

This SRFI describes common syntactic extensions of the syntax-rules macro facility of R5RS and the base R6RS and R7RS libraries. In particular, library namespaces are defined where these extensions can be located and which can be tested against in cond-expand forms.

by Marc Nieper-Wißkirchen at Tuesday, July 19, 2022


Idiomdrottning

Re: Thoughts on RSS

Matt Rickard wrote on RSS:

RSS readers act like email clients in how they render content via HTML. Unfortunately, email content isn’t as rich as the JavaScript-powered web today. Maybe that’s OK for email (and podcasts), but not for generic blog content.

Even the negatives on his list seem like huge positives.

Technically, he’s wrong here; there’s nothing stopping an RSS feed from getting slathered in JavaScript and CSS and there are many reading apps that will display that.

But culturally, that practice isn’t very widespread, thankfully, and RSS (including Atom) has been a readers’ bastion. This is a good thing, not a bad one.

Substack has revitalized the blogging movement by giving away free hosting and email lists, and a business model for supporting writers. As email newsletters grow, RSS is a decent alternative to an increasingly cluttered email inbox.

One of the few things on his list that comes across as being meant as a positive, and I still disagree with it, since it’s possible to pipe email to RSS readers or RSS feeds to email apps.

Commercial incentives work against RSS. The protocol competes with internet advertising models (Google search ads, Facebook feed ads) and subscription models. Walled garden content aggregation is significantly more profitable than free syndication (e.g., Reddit, Facebook)

This is great for readers. Imagine if novels were littered with ads.

RSS doesn’t have a true sponsor. Netscape initially developed it. Later, Aaron Swartz led a redesign and fork. Yahoo designed the Media RSS specification. There’s also been some political strife with the RSS Advisory Board.

Because it already works. Not everything needs to be a race to the bottom. Some things can just be as they are.

Creator incentives work against RSS. The protocol does not benefit content creators because it doesn’t give them any insight into their audience (number of subscribers, emails, or other data).

Technically not true (there can be tracking pixels and such) except for the emails thing, which is a good thing. You can just read. Imagine if a book made you write down your address before opening it.

RSS is one-way publishing; there is no way for content creators and their audience to interact (e.g., through comments or replies).

I kinda don’t think of the people reading this as an “audience”; they can have their own blog.

Curation and discoverability are more difficult on RSS than on native platforms. Of course, you can build this into the reader, but that requires scale to get good signals (scale only available to the internet advertising companies).

This is actually my favorite thing about RSS! I’ve kinda stopped thinking of upsell and spamming as “discoverability”; with RSS, I can follow along with what I don’t wanna miss, but I don’t get sucked into rabbit holes.

RSS had usability issues – discovering a feed and seeing raw XML was too technical for the average user.

True. This was a li’l bit alleviated when browsers had built-in RSS readers (and I’m glad they were optional, because they sucked).

One of the best things about RSS is that the Fediverse uses it.♥ So RSS can’t die as long as the Fediverse lives.

by Idiomdrottning at Tuesday, July 19, 2022