Planet Scheme

Friday, May 1, 2026

Scheme Requests for Implementation

SRFI 270: Hexadecimal Floating-Point Constants

SRFI 270 is now in draft status.

Floating-point numbers are generally in radix 2, but are written by users in radix 10. This SRFI introduces Scheme syntax for hexadecimal floating point constants based on C99's syntax.
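For intuition, C99 writes such constants with a hexadecimal mantissa and a binary exponent. A minimal sketch, assuming the SRFI carries that notation over to Scheme's #x prefix (the exact lexical syntax is whatever the draft specifies):

```scheme
;; C99: 0x1.8p3 means (1 + 8/16) × 2^3 = 12.0 (hex mantissa, power-of-two
;; exponent).  A Scheme spelling along those lines (hypothetical; see the
;; draft for the actual reader syntax) would denote the same value:
;;   #x1.8p3   ; 1.5 × 2³ = 12.0
;; The point: such a constant maps exactly to a binary floating-point
;; value, with no decimal-to-binary rounding on the way in.
```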

by Peter McGoron at Friday, May 1, 2026

Wednesday, April 29, 2026

jointhefreeworld

Why I Still Reach for Scheme and Lisp Instead of Haskell

There is a persistent tension in software engineering between the beautiful, mathematically pure ideal of a program, and the messy, pragmatic reality of just getting things done. Over my career, I’ve explored the depths of both extremes in an attempt to find my personal sweet spot for hacking.

Before you sharpen your keyboards and start a flame war over the title, let me point out that I haven’t written this post to bad-mouth Haskell, or any other tool for that matter. In fact, I love Haskell. I taught it to myself, banged my head against the wall over the course of three years, and built several real-world projects with it (some even became a bit lucrative).

Between my time in the web development world, the Go world, the JVM world with Java, Scala and Kotlin, and my long history hacking in Lisp (Emacs Lisp, Common Lisp, Scheme), I have come to deeply appreciate functional programming.


Enlightening as it can be  #

Haskell has what is likely the most amazing, enlightening and complex type system to work with (as do other ML-family languages).

It is also the undisputed king of introducing mathematical ideas and concepts to programming, and popularizing them. Haskell circles are frequented by PhDs, computer science researchers, category theorists, and all kinds of smart people (don’t underestimate other communities, like Schemers, though).

Haskell has produced, and helped popularize, a number of amazing innovations that blew my mind several times; these are exactly the kinds of things that often feel bolted-on or missing entirely in other languages!

For all its brilliance, Haskell resists most attempts to just hack and write useful code quickly, especially for people new to functional programming (or, god forbid, new to monads and functors! A monad is just a monoid in the category of endofunctors, what’s the problem?)


When pragmatism enables actual productivity  #

Scheme (and Lisp in general) might lack Haskell’s innovations and purity, favoring a minimalistic flexibility instead, but it mixes practicality with functional beauty in a way that makes it a functional language for human beings.

Actually, in my opinion, Scheme (and Lisp) lets you express complex systems and problem domains in simpler terms than any other language can.

Take a recent adventure of mine, for example. I was spinning up a prototype for a bookmark management tool, just one of many projects I’ve come up with over the years.

I started in Haskell, as I thought the beauty of data modelling and pure, side-effect-free reasoning would work well. It’s also fast and elegant, and once you’ve used libraries like Parsec, Servant, and optparse-applicative, it’s tough to imagine writing certain things, like a parser, without them.

One of the steps in the proof of concept was transforming some data models to XML and writing them out to a file.

If I were doing this in Kotlin or Java, it would be trivial: drop a dependency into Gradle, wire up Jackson or a standard DOM parser, and ten minutes later the data is in memory and ready to manipulate.

After a frustrating hour with my Haskell project, even with years of experience in the language, I was still wrestling with the dependencies, and later with the monadic API. I ended up giving up on the whole thing once I noticed I had forgotten what I was trying to do in the first place.

This has often been my friction point with Haskell. It is beautiful, but it fights you when you just want to get your hands dirty and prototype without a big upfront design, even though type-driven development can also be nice and work well in some cases.

Scheme (GNU Guile, in my case) doesn’t have Haskell’s brutally efficient compiler, although it is quite speedy thanks to its C foundation. What it has is terseness and power, and more importantly, it makes the actual act of hacking a joy.

As elegant as Haskell’s purely functional foundation is, it can really complicate simple, crucial, impure tasks like writing to files or talking over a network.

Monads are Haskell’s answer to this, but they often feel like a heavy abstraction tax; they allow you to write useful software, but they rarely make it intuitive or fast to prototype.

These kinds of heavy-handed abstractions are, in my opinion, really beautiful, but not justifiable for most projects. Do ask yourself: do I really need a functional effect system, and is it worth the complexity and cognitive load? Do I really need the pure/impure separation enforced at compile time? Remember that later, just adding a simple print somewhere is not going to work without refactoring (welcome to the IO monad).

As a long-time Lisper, for me this is a massive barrier to usability. In many ways, you can only fix what you can observe.

Scheme happily sacrifices academic purity so you can slap a (write ...) anywhere in your code and instantly see what’s going on. I’m sure a Haskell purist is burying their face in their hands right now, citing Debug.Trace or questioning why I’d want side-effects in a lazy, well-optimized language. They aren’t technically wrong, but the friction added to quick-and-dirty debugging is a tax I am simply not willing to pay when I’m trying to move fast.
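As a tiny illustration of that workflow, here is a hypothetical probe helper (not from the post) that you can drop into the middle of any expression:

```scheme
;; Quick-and-dirty tracing: print a labelled value, then return it
;; unchanged, so wrapping an expression never alters the result.
(define (probe label x)
  (write (list label x))
  (newline)
  x)

;; No restructuring needed; just wrap the subexpression of interest:
(+ 1 (probe 'mid (* 2 3)))  ; prints (mid 6), evaluates to 7
```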


Meta-programming and DSLs  #

The second problem with monads is directly tied to their greatest strength: they are synonymous with Domain Specific Languages (DSLs).

The promise of DSLs is fantastic—don’t write a complex program to solve a problem; write a simple program in a bespoke language designed solely for that task. Parsec is the golden child here; the parsing function is practically identical to the BNF grammar.

But the success of Parsec has filled Hackage with hundreds of bespoke DSLs for everything: one for parsing, one for XML, one for generating PDFs. Each is completely different, and each demands its own learning curve.

Consider parsing XML, mutating it based on some JSON from a web API, and writing it to a PDF. In the Java ecosystem, for example, you expect a certain level of consistency: you pull in three libraries, and they generally follow familiar object-oriented or functional-lite conventions. But in Haskell, three DSLs designed for three different tasks usually mean the authors optimized strictly for the domain, completely ignoring syntactic consistency. Instead of five minutes skimming JavaDocs, you have hours of DSL documentation and tutorials ahead of you.

As we Schemers know, Scheme is intentionally simple. That simplicity isn’t a limitation; it’s what makes it endlessly flexible.

While modern JVM languages rely heavily on reflection or complex compiler plugins (like Kotlin’s KSP) to achieve this, Lisp hackers have been effortlessly reshaping the language for decades, using the powerful macro system to extend and bend the language to their will.

(define-syntax define-repo-method
  (syntax-rules ()
    ((_ method-name accessor docstring)
     (define* (method-name repo . args)
       docstring
       (apply (accessor repo) args)))))

Haskell, much like Scala’s advanced type-level programming, often requires a mountain of language extensions to achieve similar flexibility (Template Haskell and its powerful but scary API):

{-# LANGUAGE TemplateHaskell #-}
import Control.Monad
import Language.Haskell.TH

curryN :: Int -> Q Exp
curryN n = do
  f  <- newName "f"
  xs <- replicateM n (newName "x")
  let args = map VarP (f:xs)
      ntup = TupE (map (Just . VarE) xs)
  return $ LamE args (AppE (VarE f) ntup)

I’ve used Scheme for countless projects because its combination of features and philosophies hits my personal “sweet spot”. It’s also an advanced language that keeps pioneering, with a tradition of unconstrained innovation (e.g. delimited continuations). When you want to mold the syntax directly to your will, Scheme gets out of your way and helps you achieve it.

Of course, to be completely fair about my toolkit, standard Scheme can sometimes lack the heavyweight, “batteries-included” ecosystem required for massive enterprise production compared to the JVM. Also, compared to Haskell’s, Lisp compilers are modest and simple at best, but that makes them that much more approachable (and the error messages that much friendlier).

I’m not saying Scheme is objectively better than Haskell. Languages are tools, and we should choose the right tool for the job.

I will always remember all I learnt from Haskell’s functional beauty and ideas, but to me, Haskell remains a platonic ideal of a programming language: lighting the way in a certain direction, but a bit too rigid for most of what I do.


Then there is the REPL: Interactive workflow, developer power  #

A REPL (Read-Eval-Print Loop) is an interactive environment that can connect to your console, your running application, the language compiler, and more; it gives you superpowers as an engineer 🦸🏼.

Lisp dialects, more specifically Guile Scheme, have great support for this. I personally like to do it with Guix and Emacs (Arei/Ares + sesman), which gets you an ultimately extensible, powerful editor experience, miles ahead of traditional IDEs 🐂.

And no, it’s not the same kind of REPL you know from Haskell (GHCi and friends) or Python. Lisp REPLs can do so much more and integrate seamlessly with your editor. Evaluate, check, change, and debug live, seamlessly.

It fundamentally changes the development workflow by eliminating the slow edit, save, compile, run cycle. Instead of writing a whole program and then running it to see what happens, you get a fast, conversational workflow. What does this mean in practice?

  • Incremental Development: Write, test, inspect, evaluate one function or even one line at a time. Get immediate feedback without running the entire app.
  • Powerful Debugging: Forget adding print statements and restarting. You can pause, inspect objects, change values, and even redefine a broken function on the fly to test a fix in any environment (yes even in production, while running).
  • Fast Prototyping & Learning: Instantly experiment with a new library or API. Just load it and start calling functions to see how they work, which is much faster than only reading documentation.

When integrated into your code editor, you can execute any piece of code (a line, a selection, or a file) with a keyboard shortcut and see the result instantly, creating a seamless and powerful development experience.
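A minimal taste of that conversational loop at a Guile REPL, evaluated form by form with no restart between steps (a toy example, not from the post):

```scheme
;; First draft of a procedure, tried out immediately:
(define (greet name) (string-append "Hi, " name))
(greet "Ada")    ; => "Hi, Ada"

;; Found a better wording?  Redefine it live and call again;
;; everything that references greet picks up the new definition:
(define (greet name) (string-append "Hello, " name "!"))
(greet "Ada")    ; => "Hello, Ada!"
```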

Overall, Lisp languages are simply the sweet spot for me and for what I consider good developer experience. They give you superpowers and let you create beautiful systems that can last.

Wednesday, April 29, 2026

Monday, April 27, 2026

Arthur A. Gleckler

validate-email-address

I'm building a new web site in Scheme for BALISP, the Bay Area Lisp and Scheme Users Group. (The site isn't launched yet, but will replace the current Meetup.com redirect at balisp.org sometime before our next meeting.)

The BALISP site needs to validate users' email addresses to make sure that they comply with RFC 5322, but I couldn't find a complete validator written in Scheme. Everything I read said that making a correct validator is a surprising amount of work. Many people write a complicated regular expression that produces false positives and negatives, but that felt wrong.

Fortunately, Dominic Sayers had published a thorough set of tests as part of his isemail validator, written in PHP. With those tests and the help of Claude Code, I was able to implement a complete validator that works in Chibi Scheme and Gauche Scheme. My new Scheme library is called validate-email-address, and is licensed under the MIT license except for the test data, which are licensed under Dominic's original BSD 3-Clause license. I hope it's useful to other Scheme hackers.

by Arthur A. Gleckler at Monday, April 27, 2026

Sunday, April 26, 2026

Scheme Requests for Implementation

SRFI 267: Raw String Syntax

SRFI 267 is now in final status.

Raw strings are a lexical syntax for strings that do not interpret escapes inside of them and are useful in cases where the string data has a lot of characters such as \ or " that would otherwise have to be escaped. This SRFI proposes a raw string syntax that allows for a customized delimiter to enclose the character data. Importantly, for any string, there exists a delimiter such that the raw string using that delimiter can represent the string verbatim. The raw strings in this SRFI do not do any special whitespace handling.

by Peter McGoron at Sunday, April 26, 2026

Tuesday, April 21, 2026

spritely.institute

Spritely Goblins v0.18.0: Sleepy actors!

Goblins version 0.18.0 release art: a Spritely goblin takes a nap in a chair by a fireplace, tea steams on a nearby table

We’re excited to announce the release of Spritely Goblins 0.18.0! This release features a new caching layer called “sleepy actors”, OCapN protocol updates, and numerous bug fixes. So get cozy by the fire, pull out a steaming cup of tea, and let’s have a nice relaxing read about this exciting new Goblins release!

Sleepy actors

Remember when we introduced persistence back in Goblins 0.13.0? You’re not sure? Okay, as a quick refresher, Goblins’ persistence system is able to serialize a running Goblins program for you and wake it back up later! Pretty cool!

Goblins is pretty smart about saving only what has changed. But... if we can save actors to disk, do we really need them to be “awake” all at once? What if we let them take a little nap, and just woke them up when it’s time for them to do something? Then they could go back to bed when they aren’t needed anymore!

Well, that’s exactly what we’ve built! Sleepy actors are a new, optional caching layer added to the core of Goblins. Actors may now go to sleep or be woken up depending on a customizable caching algorithm known as a “sleep strategy”. When an actor goes to sleep, it is saved to the vat’s persistence store, but its reference remains live. When a sleeping actor receives a message, its state is restored from the vat’s persistence store and the message is processed as usual.

Goblins currently ships with two sleep strategies: an extremely simple strategy where your little goblins head to bed after each and every turn, and a “least recently used” algorithm, which functions as a hot cache where only the most recently activated goblins stay awake, and the rest go take a nap.

For a feature that’s so sleepy, we’re pretty wired about its potential, and we hope you are too!

OCapN protocol updates

The OCapN draft specification has changed in the time since the last Goblins release. The op:deliver-only operation has been dropped in favor of a single op:deliver operation. GC operations now accept a list of export positions instead of a single position so that GC can be done in batches; their operation names have likewise been changed to the plural form (op:gc-export is now op:gc-exports, etc.) The protocol version number has thus been bumped, which means that applications built with an earlier release of Goblins are incompatible with the OCapN shipped in this release.

Notable bug fixes

  • Fixed a race condition when restoring multiple vats from persisted data. If Alice in vat A was referenced by Bob in vat B but by no other actors in vat A, it was possible for Alice to be garbage collected before vat B was restored.

  • Fixed a signing oracle vulnerability in the WebSocket netlayer’s designator authentication code.

Getting the release

This release includes all the features detailed above as well as many bug fixes. See the NEWS for more information about all of the changes.

As usual, Guix users can upgrade to 0.18.0 by running the following:

guix pull
guix install guile-goblins

Otherwise, you can find the tarball on our release page.

If you’re making something with Goblins or want to contribute to Goblins itself, be sure to join our community at community.spritely.institute! We also host regular office hours where you can come and ask questions or discuss our projects. Information about office hours is available on the forum. Thanks for following along and hope to see you there!

by Dave Thompson and Christine Lemmer-Webber at Tuesday, April 21, 2026

Saturday, April 11, 2026

Idiomdrottning

What Delta Chat was

Being able to quickly write replies to email, real actual email, was very valuable. That was the core of what drew me to Delta Chat.

There are plenty of proprietary email apps set up around that feature but in the free world, not so much. Delta Chat was it and it was a gem because it was in many ways better than those other sparks and spikes and whatever they were called. Not to mention the incredible leap of faith it takes to go for a proprietary mail app since they can read the emails.

Delta Chat is rapidly moving away from being usable for that. If someone forks it or finds a good alternative (that’s FOSS, obvs), I would love to know.

I know I’ve worked a little on Notmuch, and I’ve talked a little bit with the people who make aerc, but for all their conveniences they’re still traditional mail apps where the threads look like files that you have to open up and enter into and work with. The few extra clicks involved with using a normal mail app might sound like no big deal but it really adds up. All the opening, searching, archiving, threads management… Whereas with Delta Chat in its prime, you just see the message right away and can reply right away. Easy peasy.

Maybe K-9 but it got bought out by Mozilla and they hate autocrypt which I don’t. I think WKD is better, sure, but I try to use both. K-9 used to be one of the best autocrypt clients out there.

by Idiomdrottning at Saturday, April 11, 2026

jointhefreeworld

Functional repository pattern in Scheme? Decoupling and abstracting the data layer in Lisp

Implementing the Repository Pattern with Hygienic Macros in Scheme

Hi everyone!

I’ve been working on a new approach for the data layer of my projects lately, and I’d love to pick your brains and get some feedback.

Coming from a background in Scala, Java and other OOP languages and a fascination for FP languages and Lisps (as well as Rust and Haskell), I’ve seen a lot of patterns come and go.

Recently, I noticed a common anti-pattern in my own Scheme projects: a tight coupling between my controller layer and the SQLite implementation. It wasn’t ideal, and I really missed the clean separation of the Repository Pattern.

So, I set out to decouple my data layer from my controller layer in the MVC architecture I love. I wanted to do this using pure functional programming, and I ended up building something really fun using Scheme’s hygienic macros.

(If you want to see this implemented in a real project, check out my example repo here: lucidplan)

I am working on adding it to byggsteg too.

I plan to bring this pattern to all my projects to reap the benefits of the eDSL, better decoupling, and easier testing. Here is how I built it.

The Macros  #

I created two main macros. define-record-with-kw magically defines a keyword-argument constructor, bypassing the need for strict parameter ordering. It’s highly ergonomic.

define-repo-method is the real superpower. It accepts any arity, plus optional or #:keyword arguments. This saves a ton of work, reduces tedious parameter passing, and gives you a very clean eDSL definition.

(define-module (lucidplan domain repo)
  #:declarative? #t
  #:use-module (srfi srfi-9)
  #:export (define-repo-method define-record-with-kw))

(define-syntax define-repo-method
  (syntax-rules ()
    ((_ method-name accessor docstring)
     (define* (method-name repo . args)
       docstring
       (apply (accessor repo) args)))))

(define-syntax define-record-with-kw
  (syntax-rules ()
    ((_ (type-name constructor-name pred) kw-constructor-name
        (field-name accessor-name) ...)
     (begin
       ;; Define the standard SRFI-9 record
       (define-record-type type-name
         (constructor-name field-name ...) pred
         (field-name accessor-name) ...)

       ;; Define the keyword-argument constructor
       (define* (kw-constructor-name #:key field-name ...)
         (constructor-name field-name ...))

       ;; Auto-export members
       (export type-name pred kw-constructor-name accessor-name ...)))))
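To see what define-record-with-kw buys you, here is a toy record built with it (the names here are hypothetical, not from lucidplan); the generated constructor accepts its fields by keyword, in any order:

```scheme
;; Assumes the define-record-with-kw macro above is in scope.
(define-record-with-kw (<point> %make-point point?)
  make-point
  (x point-x)
  (y point-y))

;; Field order no longer matters at the call site:
(point-x (make-point #:y 2 #:x 1))  ; => 1
```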

Defining the Domain eDSL  #

Here is how I use those macros to define my DSL for a “projects” entity:

(define-module (lucidplan domain project)
  #:declarative? #t
  #:use-module (srfi srfi-9)
  #:use-module (lucidplan domain repo)
  #:export (get-projects))

;; --- Record definition ---

(define-record-with-kw (<project-repository>
                        %make-project-repository
                        project-repository?)
  mk-project-repository
  (get-projects-proc repo-get-projects))

;; --- eDSL: Embedded Domain Specific Language ---

(define-repo-method get-projects repo-get-projects
  "Retrieves a list of all active projects from the given REPO.")

The SQLite Implementation  #

Finally, here is the concrete SQLite implementation using Artanis. This is completely decoupled from the rest of the application logic.

(define-module (lucidplan sqlite project)
  #:declarative? #t
  #:use-module (srfi srfi-9)
  #:use-module (kracht prelude)
  #:use-module (artanis db)
  #:use-module (lucidplan sqlite util)
  #:use-module (lucidplan domain project)
  #:export (make-sqlite-project-repository))

;; --- Artanis + SQLite implementation ---
(define (make-sqlite-project-repository rc)
  (define columns
    '(id human-id
         title
         url
         vcs-url
         description
         created-at
         updated-at
         deleted-at))

  (define (get-projects)
    (let* ((query (format #f
                          "SELECT ~a
                           FROM project WHERE deleted_at IS NULL
                           ORDER BY human_id ASC"
                          (symbols->sql-columns-list columns)))
           (_ (log-info "get-projects query:\n\t~a\n" query))
           (rows (map sql-row->scheme-alist
                      (DB-get-all-rows (DB-query (DB-open rc) query))))
           (_ (log-info "get-projects rows: ~a\n" (length rows))))
      rows))

  (mk-project-repository #:get-projects-proc get-projects))

A condensed example with keyword arguments:

;; The DSL (notice how arity is clean)
(define-repo-method get-jobs repo-get-jobs
  "Retrieves a list of active jobs from the given REPO.")

;; SQLite implementation
(define* (get-jobs #:key limit offset)
  (let* ((query (format #f
                        "SELECT ~a FROM job
                         ORDER BY created_at DESC LIMIT ~a OFFSET ~a"
                        (symbols->sql-columns-list columns) limit offset))
         (_ (log-info "get-jobs query:\n\t~a\n" query))
         (rows (map sql-row->scheme-alist
                    (DB-get-all-rows (DB-query (DB-open rc) query))))
         (_ (log-info "get-jobs rows: ~a\n" (length rows))))
    rows))

Using it can look like this:

(let* ((job-repo (make-sqlite-job-repository rc))
       (jobs (get-jobs job-repo #:limit 50 #:offset 0)))
  ...)

I believe I have something really powerful cooking here, but I know there is always room for improvement.

What do you all think? How would you go about improving this? I’m entirely open to criticism, feedback, and brainstorming!

Thanks for reading this :)

Saturday, April 11, 2026

Thursday, April 2, 2026

Scheme Requests for Implementation

SRFI 269: Portable Test Definitions

SRFI 269 is now in draft status.

This SRFI defines a portable API for test definitions that is decoupled from test execution and reporting. It provides three primitives: the universal is macro for assertions, test for grouping assertions into independently executable units, and suite for organizing tests into hierarchies. Tests and suites can carry user-provided metadata to adjust the behavior of a test runner, for example, to select tests by tags or to enforce timeout values. The API is tiny, yet capable and flexible. By focusing on the definition and leaving execution semantics to test runners, this SRFI offers a common ground that can reduce fragmentation among testing libraries.

Unlike side-effect-driven testing frameworks (e.g. SRFI-64), this API produces first-class runtime entities, making it easy to filter, schedule, wrap them in exception guards and continuation barriers, run in arbitrary order, and re-run dynamically generated test subsets. In addition to the usual CLI test runners, it enables runtime-friendly test runners that integrate well with highly interactive development workflows inside REPLs and IDEs, significantly increasing control over test execution and shortening the feedback loop.

To bridge the test definitions and test runners, the SRFI specifies a message-passing programming interface, and test loading and execution semantics recommendations for test runner implementers.
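A rough sketch of how the three primitives named above might compose; this is only to convey the shape, and the exact surface syntax is whatever the draft specifies:

```scheme
;; Hypothetical sketch, not verbatim SRFI 269 syntax: `is` asserts,
;; `test` groups assertions into an executable unit, `suite` nests tests.
(suite "arithmetic"
  (test "addition"
    (is (= 4 (+ 2 2)))
    (is (= 0 (+ -1 1)))))
```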

by Andrew Tropin and Ramin Honary at Thursday, April 2, 2026

Tuesday, March 31, 2026

Andy Wingo

wastrelly wabbits

Good day! Today (tonight), some notes on the last couple months of Wastrel, my ahead-of-time WebAssembly compiler.

Back in the beginning of February, I showed Wastrel running programs that use garbage collection, using an embedded copy of the Whippet collector, specialized to the types present in the Wasm program. But, the two synthetic GC-using programs I tested on were just ported microbenchmarks, and didn’t reflect the output of any real toolchain.

In this cycle I worked on compiling the output from the Hoot Scheme-to-Wasm compiler. There were some interesting challenges!

bignums

When I originally wrote the Hoot compiler, it targeted the browser, which already has a bignum implementation in the form of BigInt, which I worked on back in the day. Hoot-generated Wasm files use host bigints via externref (though wrapped in structs to allow for hashing and identity).

In Wastrel, then, I implemented the imports that implement bignum operations: addition, multiplication, and so on. I did so using mini-gmp, a stripped-down implementation of the workhorse GNU multi-precision library. At some point if bignums become important, this gives me the option to link to the full GMP instead.

Bignums were the first managed data type in Wastrel that wasn’t defined as part of the Wasm module itself, instead hiding behind externref, so I had to add a facility to allocate type codes to these “host” data types. More types will come in time: weak maps, ephemerons, and so on.

I think bignums would be a great proposal for the Wasm standard, similar to stringref ideally (sniff!), possibly in an attenuated form.

exception handling

Hoot used to emit a pre-standardization form of exception handling, and hadn’t gotten around to updating to the newer version that was standardized last July. I updated Hoot to emit the newer kind of exceptions, as it was easier to implement them in Wastrel that way.

Some of the problems Chris Fallin contended with in Wasmtime don’t apply in the Wastrel case: since the set of instances is known at compile-time, we can statically allocate type codes for exception tags. Also, I didn’t really have to do the back-end: I can just use setjmp and longjmp.

This whole paragraph was meant to be a bit of an aside in which I briefly mentioned why just using setjmp was fine. Indeed, because Wastrel never re-uses a temporary, relying entirely on GCC to “re-use” the register / stack slot on our behalf, I had thought that I didn’t need to worry about the “volatile problem”. From the C99 specification:

[...] values of objects of automatic storage duration that are local to the function containing the invocation of the corresponding setjmp macro that do not have volatile-qualified type and have been changed between the setjmp invocation and longjmp call are indeterminate.

My thought was, though I might set a value between setjmp and longjmp, that would only be the case for values whose lifetime did not reach the longjmp (i.e., whose last possible use was before the jump). Wastrel didn’t introduce any such cases, so I was good.

However, I forgot about local.set: mutations of locals (ahem, objects of automatic storage duration) in the source Wasm file could run afoul of this rule. So, because of writing this blog post, I went back and did an analysis pass on each function to determine the set of locals which are mutated inside the body of a try_table. Thank you, rubber duck readers!

bugs

Oh my goodness there were many bugs. Lacunae, if we are being generous; things not implemented quite right, which resulted in errors either when generating C or when compiling the C. The type-preserving translation strategy does seem to have borne fruit, in that I have spent very little time in GDB: once things compile, they work.

coevolution

Sometimes Hoot would use a browser facility where it was convenient, but for which in a better world we would just do our own thing. Such was the case for the number->string operation on floating-point numbers: we did something awful but expedient.

I didn’t have this facility in Wastrel, so instead we moved to do float-to-string conversions in Scheme. This turns out to have been a good test for bignums too; the algorithm we use is a bit dated and relies on bignums to do its thing. The move to Scheme also allows for printing floating-point numbers in other radices.

There are a few more Hoot patches that were inspired by Wastrel, about which more later; it has been good for both to work on the two at the same time.

tail calls

My plan for Wasm’s return_call and friends was to use the new musttail annotation for calls, which has been in clang for a while and was recently added to GCC. I was careful to limit the number of function parameters such that no call should require stack allocation, and therefore a compiler should have no reason to reject any particular tail call.

However, there were bugs. Funny ones, at first: attributes applying to a preceding label instead of the following call, or the need to insert if (1) before the tail call. More dire ones, in which tail callers inlined into their callees would cause the tail calls to fail, worked around with judicious application of noinline. Thanks to GCC’s Andrew Pinski for help debugging these and other issues; with GCC things are fine now.

I did have to change the code I emitted to return “top types only”: if you have a function returning type T, you can tail-call a function returning U if U is a subtype of T, but there is no nice way to encode this into the C type system. Instead, we return the top type of T (or U, it’s the same), e.g. anyref, and insert downcasts at call sites to recover the precise types. Not so nice, but it’s what we got.

Trying tail calls on clang, I ran into a funny restriction: clang not only requires that return types match, but requires that tail caller and tail callee have the same parameters as well. I can see why they did this (it requires no stack shuffling and thus such a tail call is always possible, even with 500 arguments), but it’s not the design point that I need. Fortunately there are discussions about moving to a different constraint.

scale

I spent way more time than I had planned on improving the speed of Wastrel itself. My initial idea was to just emit one big C file, and that would provide the maximum possibility for GCC to just go and do its thing: it can see everything, everything is static, there are loads of always_inline helpers that should compile away to single instructions, that sort of thing. But, this doesn’t scale, in a few ways.

In the first obvious way, consider whitequark’s llvm.wasm. This is all of LLVM in one 70 megabyte Wasm file. Wastrel made a huuuuuuge C file, then GCC chugged on it forever; 80 minutes at -O1, and I wasn’t aiming for -O1.

I realized that in many ways, GCC wasn’t designed to be a compiler target. The shape of code that one might emit from a Wasm-to-C compiler like Wastrel is different from what one would write by hand. I even ran into a segfault compiling with -Wall, because GCC accidentally recursed instead of iterating in the -Winfinite-recursion pass.

So, I dealt with this in a few ways. After many hours spent pleading and bargaining with different -O options, I bit the bullet and made Wastrel emit multiple C files. It will compute a DAG forest of all the functions in a module, where edges are direct calls, and go through that forest, greedily consuming (and possibly splitting) subtrees until we have “enough” code to split out a partition, as measured by number of Wasm instructions. They say that -flto makes this a fine approach, but one never knows when a translation unit boundary will turn out to be important. I compute needed symbol visibilities as much as I can so as to declare functions that don’t escape their compilation unit as static; who knows if this is of value. Anyway, this partitioning introduced no performance regression in my limited tests so far, and compiles are much much much faster.

scale, bis

A brief observation: Wastrel used to emit indented code, because it could, and what does it matter, anyway. However, consider Wasm’s br_table: it takes an array of n labels and an integer operand, and will branch to the nth label, or the last if the operand is out of range. To set up a label in Wasm, you make a block, of which there are a handful of kinds; the label is visible in the block, and for n labels, the br_table will be the most nested expression in the n nested blocks.

Now consider that block indentation is proportional to n. This means, the file size of an indented C file is quadratic in the number of branch targets of the br_table.

Yes, this actually bit me; there are br_table instances with tens of thousands of targets. No, wastrel does not indent any more.

scale, ter

Right now, the long pole in Wastrel is the compile-to-C phase; the C-to-native phase parallelises very well and is less of an issue. So, one might think: OK, you have partitioned the functions in this Wasm module into a number of files, why not emit the files in parallel?

I gave this a go. It did not speed up C generation. From my cursory investigations, I think this is because the bottleneck is garbage collection in Wastrel itself; Wastrel is written in Guile, and Guile still uses the Boehm-Demers-Weiser collector, which does not parallelize well for multiple mutators. It’s terrible but I ripped out parallelization and things are fine. Someone on Mastodon suggested fork; they’re not wrong, but also not Right either. I’ll just keep this as a nice test case for the Guile-on-Whippet branch I want to poke later this year.

scale, quater

Finally, I had another realization: GCC was having trouble compiling the C that Wastrel emitted, because Hoot had emitted bad WebAssembly. Not bad as in “invalid”; rather, “not good”.

There were two cases in which Hoot emitted ginormous (technical term) functions. One, for an odd debugging feature: Hoot does a CPS transform on its code, and allocates return continuations on a stack. This is a gnarly technique but it gets us delimited continuations and all that goodness even before stack switching has landed, so it’s here for now. It also gives us a reified return stack of funcref values, which lets us print Scheme-level backtraces.

Or it would, if we could associate data with a funcref. Unfortunately func is not a subtype of eq, so we can’t. Unless... we pass the funcref out to the embedder (e.g. JavaScript), and the embedder checks the funcref for equality (e.g. using ===); then we can map a funcref to an index, and use that index to map to other properties.

How to pass that funcref/index map to the host? When I initially wrote Hoot, I didn’t want to just, you know, put the funcrefs of interest into a table and let the index of a function’s slot be the value in the key-value mapping; that would be useless memory usage. Instead, we emitted functions that took an integer, and which would return a funcref. Yes, these used br_table, and yes, there could be tens of thousands of cases, depending on what you were compiling.

Then to map the integer index to, say, a function name, likewise I didn’t want a table; that would force eager allocation of all strings. Instead I emitted a function with a br_table whose branches would return string.const values.

Except, of course, stringref didn’t become a thing, and so instead we would end up lowering to allocate string constants as globals.

Except, of course, Wasm’s idea of what a “constant” is is quite restricted, so we have a pass that moves non-constant global initializers to the “start” function. This results in an enormous start function. The straightforward solution was to partition global initializations into separate functions, called by the start function.

For the funcref debugging, the solution was more intricate: firstly, we represent the funcref-to-index mapping just as a table. It’s fine. Then for the side table mapping indices to function names and sources, we emit DWARF, and attach a special attribute to each “introspectable” function. In this way, reading the DWARF sequentially, we reconstruct a mapping from index to DWARF entry, and thus to a byte range in the Wasm code section, and thus to source information in the .debug_line section. It sounds gnarly but Guile already used DWARF as its own debugging representation; switching to emit it in Hoot was not a huge deal, and as we only need to consume the DWARF that we emit, we only needed some 400 lines of JS for the web/node run-time support code.

This switch to data instead of code removed the last really long pole from the GCC part of Wastrel’s pipeline. What’s more, Wastrel can now implement the code_name and code_source imports for Hoot programs ahead of time: it can parse the DWARF at compile-time, and generate functions that look up functions by address in a sorted array to return their names and source locations. As of today, this works!

fin

There are still a few things that Hoot wants from a host that Wastrel has stubbed out: weak refs and so on. I’ll get to this soon; my goal is a proper Scheme REPL. Today’s note is a waypoint on the journey. Until next time, happy hacking!

by Andy Wingo at Tuesday, March 31, 2026

Monday, March 30, 2026

Scheme Requests for Implementation

SRFI 268: Multidimensional Array Literals

SRFI 268 is now in draft status.

This is a specification of a lexical syntax for multi-dimensional arrays. Textually it is an alteration of SRFI 163, which is an extension of the Common Lisp array reader syntax to handle non-zero lower bounds and optional uniform element types (compatibly with SRFI 4 and SRFI 160). It can be used in conjunction with SRFI 25, SRFI 122, or SRFI 231. There are recommendations for output formatting, read-array and write-array procedures, and a suggested format-array procedure.

by Per Bothner (SRFI 163), Peter McGoron (design), John Cowan (editor and steward), and Wolfgang Corcoran-Mathe (implementation) at Monday, March 30, 2026

Wednesday, March 25, 2026

Idiomdrottning

My Butlerian hypocrisy

In the Butlerian Jihad (from Dune but popularized by many smolnet posters like Alex Schroeder) we rightly hate bots and scrapers but I’m in a bit of a glass house around that, since I’ve made a few scrapers for my own personal use as a way to get RSS Atom feeds out of sites that don’t have feeds. I love scraping and mashing.♥︎ The JS-laden SPA era was a nightmare for me. I hate browsers and server-side styling. I love getting texts from URLs.

Follow-ups

An Inhabitant in Carcosa responds:

Bad in intent: it is intended to do something unethical, whether that be LLM training, denial of service, privatizing the commons, or immanentizing the eschaton. This is pretty subjective in an “I know it when I see it” kind of way. Scraping for a search index, scraping for a full-text RSS feed, and scraping for LLM training are all the same act as far as the server can tell, but only the last one is /evil/.

Having a full-text RSS feed as a way to not have to deal with ads or paywalls—even when the reasons I can’t otherwise handle ads and paywalls are 100% a11y issues—goes against the intent of the server owners.

And I’m not so sure LLMs are evil.

It may ignore robots.txt, it may lie about being another user-agent

Have done both those too!

Either bad intent or bad implementation is enough; a bot doesn’t need both to be bad.

That’s not exactly my philosophy.

I love the open readable simple web where each document has one URL and you can read it on your own terms. I can’t deal with the junk web.

by Idiomdrottning at Wednesday, March 25, 2026

Friday, March 13, 2026

crumbles.blog

HOWTO: Unlock LUKS encrypted disks over SSH on a Raspberry Pi 4 running NixOS

Follow the instructions on the NixOS wiki. For a Raspberry Pi 4 connected over Ethernet, you need:

boot.initrd.availableKernelModules = [
  "xhci_pci"
  "usbhid"
  "uas"
  "pcie-brcmstb"
  "reset-raspberrypi"
  "genet"
  "broadcom"
  "bcm_phy_lib"
];

Also note: use cipher xchacha20,aes-adiantum-plain64 on Raspberry Pi 4 due to the lack of AES hardware instructions. The default aes-xts-plain64 is slow without these instructions; xchacha20,aes-adiantum-plain64 is over twice as fast. (Raspberry Pi 5 has AES instructions, but doesn’t support NixOS very well yet.) If you forget to set the cipher when creating the encrypted device, cryptsetup reencrypt can help, but it may take multiple days once you have any real amount of data on the disk at all.

Friday, March 13, 2026

Thursday, February 26, 2026

Retropikzel's blog

Wednesday, February 25, 2026

spritely.institute

Hoot 0.8.0 released!

We are excited to announce the release of Hoot 0.8.0! Hoot is a Scheme to WebAssembly compiler backend for Guile, as well as a general purpose WebAssembly toolchain. In other words, Scheme in the browser!

This release contains new features and bug fixes since the 0.7.0 release back in October.

New features

  • New (hoot repl) module. At long last, there is now a built-in read-eval-print loop implementation! Previous releases added a macro expander, a Scheme interpreter, and a runtime module system, but now it’s possible to do live hacking from a Hoot program inside a WebAssembly runtime!

    • To use the REPL, compile your Wasm binary with the necessary debug flag during development: guild compile-wasm -g1. This will include the runtime module system in the resulting binary. Expect compilation time and binary size to increase significantly. The trade-off is that a live hacking workflow will make recompilations fewer and farther between.

  • While not shipping in Hoot directly, initial support for using the Hoot REPL from Emacs has been added in the new geiser-hoot extension. We have submitted geiser-hoot for inclusion in MELPA and Guix so it will be easy to install in the very near future.

  • Enhanced (hoot web-server) module. To support the use of REPLs running within a web browser tab, the most common development use case, the web server doubles as a REPL server, proxying TCP traffic from REPL clients (more about that below) over a WebSocket to the connected browser tab.

    • These enhancements introduce two new, optional dependencies to Hoot: Fibers and guile-websocket. If either of these dependencies is not present at build time, the (hoot web-server) module will not be built.

    • The web server can now be extended with a user-supplied request router. An example of this can be found in our hoot-slides repository.

  • New (hoot web-repl) module. This module can be imported and compiled into the Wasm binary so that it can act as a REPL server. This is complicated by the fact that a browser client cannot act as a server; it is strictly a client. Instead, it connects to the aforementioned (hoot web-server) which acts as a proxy for all connected REPL clients.

  • New hoot command-line tool. This command will be used as a place to collect handy Hoot development tools. So far, there are two subcommands:

    • hoot repl: Open a REPL running in Node. Useful for quickly trying out basic Scheme expressions in Hoot without having to compile a standalone WebAssembly program.

    • hoot server: Conveniently launch the development web server in (hoot web-server).

  • New (web request) and (web response) modules that export a sliver of the API defined in Guile’s modules of the same names.

  • New (web socket) module that provides an input/output interface to WebSocket client connections. Mimics the module of the same name in guile-websocket.

  • Added customizable module loader interface via new current-module-loader parameter. Two concrete loaders are provided: By default, modules are loaded from the file system by searching a load path. This is useful when running in a non-browser runtime such as NodeJS. When run-web-repl in (web repl) is used, connected REPLs are configured to use an HTTP-based loader. This loader makes HTTP requests to a special endpoint on the development web server to fetch source code.

    • Note that modules loaded at runtime are loaded from source and then interpreted. Unlike Guile, where modules are automatically compiled to bytecode, Hoot cannot compile individual modules to Wasm (which would require compiling the compiler to Wasm, an interesting future possibility).

Community highlights

Check out this chiptune tracker made with Hoot by Vivianne Langdon!

Additionally, check out Wastrel, a Wasm GC to C compiler developed by Andy Wingo. Wastrel notably uses Hoot’s Wasm toolchain. A Wasm program compiled with Wastrel runs faster than the same program on NodeJS!

Documentation changes

  • Updated Installation chapter to mention new optional dependencies.

  • Added Modules and REPL sections to the Scheme reference chapter.

  • Added Development chapter.

  • Updated Status section to remove mention of previously missing R7RS support that we now have.

  • Removed docs for the obsolete --emit-names flag.

  • Added documentation for the -g flag to guild compile-wasm.

  • Fixed example in the JavaScript reflection section that was using the obsolete load_main signature.

Toolchain changes

  • Split Wasm validation out of (wasm vm) and into new (wasm validation) module.

  • Keep data computed within the validation pass in <validated-wasm> records so that data can be used during instantiation rather than redundantly recomputing it.

  • Added explicit support for representing a “canonicalization”: a world in which structurally equal types are equal.

  • (wasm vm) types <wasm-func>, <wasm-struct>, <wasm-array> now refer to their types by index into a canonicalized set.

  • Added untagged <wasm-array> backing stores to (wasm vm) for all simple scalar numeric types, including i8 and i16 packed types.

  • Modified (wasm vm) to look up named heap type references in the instance’s canonicalization.

  • Added bytevector->wasm-array, wasm-array->bytevector to (wasm vm).

  • Added support for some of the “none” bottom types.

  • Packed array data is now stored signed, wrapped from i32 when set, and only unwrapped to unsigned in get_u functions.

  • Added string.from_code_point and string.concat lowerings in (wasm lower-stringrefs).

  • Renamed outdated extern.internalize and extern.externalize to their current names, any.convert_extern and extern.convert_any.

  • Added new has-wasm-header? procedure to (wasm parse).

  • Parse core reference types to <ref-type> records rather than symbol abbreviations in (wasm parse).

Miscellaneous changes

  • Modified schedule-task in (fibers scheduler) (which is implemented using inline Wasm on the target) to be a no-op when called at expansion time on the host, i.e. when used at the module top-level or from a procedural macro.

  • Added support for vector and call-with-values primitives to (hoot primitives) module so they can be used in interpreted code.

  • truncate is now exported from (guile).

  • Allow exports to clobber each other in module-declare! to support live hacking of modules where define-module forms are often re-evaluated many times.

  • Extracted JS Uint8Array bindings from internals of (fibers streams) to new (hoot typed-arrays) module.

  • Implement subset of Guile’s procedural module API for hackable programs (i.e. programs that are built with runtime module support).

  • Added (hoot config) target-side module for accessing certain build-time constants (currently just the Hoot version string).

  • Extracted (hoot library) module from (hoot library-group) so that the library parser can be used on the target for live hacking purposes.

  • Added define-module implementation to (guile) that simply throws an error if used during compilation. A separate implementation is installed for use by the interpreter in hackable programs.

  • Added #:replace? argument to module-export! to allow replacement of exports for live hacking purposes.

  • Exported module-root from (hoot modules).

  • Added module-imported-modules procedure to (hoot modules).

  • Changed file I/O host functions to return null when a file cannot be opened, so that a Scheme exception that user code can handle is raised instead of a host exception that cannot be.

  • Extracted contents of (scheme file) to new (hoot file) module for use in internal code such as the implementation of the file system module loader in (hoot hackable).

  • Moved implementation of string-join, string-concatenate, string-prefix?, and string-prefix-ci? from (guile) to (hoot strings).

  • Moved case-insensitive string procedures from (scheme char) to (hoot strings).

  • Added string-drop to (hoot strings).

  • Added every and fold-right procedures to (hoot lists).

  • Moved implementation of and-map and or-map from (guile) to (hoot lists).

  • Added symbol-append to (hoot symbols).

  • Added less verbose custom printer for <module> record type.

  • Switched from positional to keyword arguments for make-soft-port in (hoot ports).

  • Added list-index to (guile).

Bug fixes

  • Fixed format-exception not writing all of its output to the current error port.

  • Fix eof-object export in (ice-9 binary-ports).

  • Fixed off-by-one error for procedures with rest args in (hoot eval).

  • Fixed min/max to only accept real numbers, handle NaNs, and normalize exact zeroes.

  • Fixed continuation composition leaving an unwind continuation on the stack.

  • Fixed prompt unwinding in certain join continuation situations.

  • Fixed compilation of unwind primcalls at join points.

  • Fixed runtime module system ignoring replacement bindings in Guile modules.

Browser compatibility

  • Compatible with Safari 26 or later.

  • Compatible with Firefox 121 or later.

  • Compatible with Chrome 119 or later.

Get Hoot

Hoot is available in GNU Guix:

$ guix pull
$ guix install guile guile-hoot

Also, Hoot is now available in Debian, though it will take a while for this release to make it there.

Otherwise, Hoot can be built from source via our release tarball. See the Hoot homepage for a download link and GPG signature.

Documentation for Hoot 0.8.0, including build instructions, can be found here.

Get in touch

For bug reports, pull requests, or just to follow along with development, check out the Hoot project on Codeberg.

If you build something cool with Hoot, let us know on our community forum!

Thanks to our supporters

Your support makes our work possible! If you like what we do, please consider becoming a Spritely supporter today!

Diamond tier

  • Aeva Palecek
  • David Anderson
  • Holmes Wilson
  • Lassi Kiuru

Gold tier

  • Alex Sassmannshausen
  • Juan Lizarraga Cubillos

Silver tier

  • Austin Robinson
  • Brit Butler
  • Charlie McMackin
  • Dan Connolly
  • Danny OBrien
  • Deb Nicholson
  • Eric Bavier
  • Eric Schultz
  • Evangelo Stavro Prodromou
  • Evgeni Ku
  • Glenn Thompson
  • James Luke
  • Jonathan Frederickson
  • Jonathan Wright
  • Joshua Simmons
  • Justin Sheehy
  • Matt Panhans
  • Michel Lind
  • Mike Ledoux
  • Nathan TeBlunthuis
  • Nia Bickford
  • Noah Beasley
  • Steve Sprang
  • Travis Smith
  • Travis Vachon

Bronze tier

  • Alan Zimmerman
  • Aria Stewart
  • BJ Bolender
  • Ben Hamill
  • Benjamin Grimm-Lebsanft
  • Brooke Vibber
  • Brooklyn Zelenka
  • Carl A
  • Crazypedia No
  • François Joulaud
  • Gerome Bochmann
  • Grant Gould
  • Gregory Buhtz
  • Ivan Sagalaev
  • James Smith
  • Jason Wodicka
  • Jeff Forcier
  • Marty McGuire
  • Mason DeVries
  • Michael Orbinpost
  • Neil Brudnak
  • Nelson Pavlosky
  • Philipp Nassua
  • Robin Heggelund Hansen
  • Rodion Goritskov
  • Ron Welch
  • Stefan Magdalinski
  • Stephen Herrick
  • Steven De Herdt
  • Tamara Schmitz
  • Thomas Talbot
  • William Murphy
  • a b
  • r g
  • terra tauri

Until next time, happy hooting! 🦉

by Dave Thompson at Wednesday, February 25, 2026

Tuesday, February 24, 2026

The Racket Blog

Racket v9.1

posted by Stephen De Gabrielle and John Clements


We are pleased to announce Racket v9.1 is now available from https://download.racket-lang.org/.

As of this release:

  • Documentation organization and navigation can be specialized by language family, to allow users to interact with documentation in a way that is tailored to that language family. This is currently used by Rhombus.
  • The for form and its variants accept an #:on-length-mismatch specifier. 3.18 Iterations and Comprehensions: for, for/list, …
  • DrRacket improves the GUI for choosing color schemes.
  • DrRacket has curved syntax arrows. The degree of curvature indicates the relative left- or right-displacement of the arrow’s target.
  • DrRacket’s “Insert Large Letters” uses characters that match the comment syntax of the buffer’s language, making it useful (and fun!) in Rhombus.
  • The exn-classify-errno function maps network and filesystem error numbers on various platforms to POSIX-standard symbols, to enable more portable code. 10.2 Exceptions
  • The behavior of Racket BC on certain character operations (most notably eq?) is changed to match that of Racket CS, with a small performance penalty for these operations for BC programs. 19 Performance 1.5 Implementations
  • The make-struct-type procedure can inherit the current inspector using a 'current flag. This is the default behavior, but there are situations in which it’s not possible to refer to the current inspector. 5.2 Creating Structure Types
  • Bundle configurations can better control the conventions for locating shared object files with the --enable-sofind=<conv> flags.
  • The system-type function can report on platform and shared-object-library conventions with new flags. 15.8 Environment and Runtime Information
  • The openssl/legacy library makes it possible to access OpenSSL’s built-in “legacy” provider, to get access to insecure and outdated algorithms. OpenSSL: Secure Communication
  • Typed Racket improves expected type propagation for keyword argument functions.
  • There are many other repairs and documentation improvements!

Don’t forget to run raco pkg migrate 9.0

Thank you

The following people contributed to this release:

Alexander Shopov, beast-hacker, Bob Burger, Brad Lucier, Cadence Ember, David Van Horn, evan, François-René Rideau, Gustavo Massaccesi, Jacqueline Firth, Jade Sailor, Jason Hemann, Jens Axel Søgaard, John Clements, Jonas Rinke, Matthew Flatt, Matthias Felleisen, Mike Sperber, Noah Ma, Pavel Panchekha, Rob Durst, Robby Findler, Ryan Culpepper, Sam Tobin-Hochstadt, Stephen De Gabrielle, and Wing Hei Chan.

Racket is a community developed open source project and we welcome new contributors. See racket/README.md to learn how you can be a part of this amazing project.

Feedback Welcome

Questions and discussion welcome at the Racket community on Discourse or Discord.

Please share

If you can, please help get the word out to users and platform-specific repo packagers

Racket - the Language-Oriented Programming Language - version 9.1 is now available from https://download.racket-lang.org

See https://blog.racket-lang.org/2026/02/racket-v9-1.html for the release announcement and highlights.

Tuesday, February 24, 2026