Planet Scheme

Wednesday, May 13, 2026

Scheme Requests for Implementation

SRFI 272: Pretty Printing

SRFI 272 is now in draft status.

This SRFI follows the traditional Scheme model of pretty printing, which treats it as a process distinct from general controlled formatting. While general-purpose formatters often prioritize specialized presentation at the expense of machine-readability, Scheme’s pretty-printers (such as those of SLIB and MIT Scheme) have traditionally treated pretty printing as a variant of write, differing primarily in the insertion of whitespace to make the presentation more palatable to humans. Common Lisp’s pretty-printer, by contrast, fills two roles simultaneously by integrating pretty printing with both its format facility and its generalized write procedures. This unified approach offers great power, but at the cost of complexity that can make it difficult to use effectively. We propose a specialized, layered approach, specifying five libraries of increasing functionality, where all but the first are optional. The libraries are downward-compatible: more powerful libraries satisfy all requirements of the simpler ones while adding new features. Implementors may choose to support a maximum level of functionality appropriate for their systems. Integration with monadic and string-based formatting libraries is supported.
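Outside Scheme, the same write-compatible contract is easy to demonstrate. As a rough analogy (not part of the SRFI), Python's pprint also changes only whitespace, never the read-back meaning of the data:

```python
import pprint

# A nested structure that is awkward to read on one line.
data = {"name": "srfi-272", "status": "draft",
        "layers": ["base", "extended", "full"] * 3}

flat = repr(data)              # "write"-style: one line, machine-readable
pretty = pprint.pformat(data)  # pretty-printed: extra whitespace only

# Both forms read back to the same value; only the layout differs.
assert eval(flat) == data
assert eval(pretty) == data
assert "\n" in pretty  # the pretty form spans multiple lines
```

This is exactly the "variant of write" model: a pretty-printer in this tradition may insert line breaks and indentation, but the output must still read back as the same datum.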

by Sergei Egorov at Wednesday, May 13, 2026

spritely.institute

Hoot 0.9.0 released!

We are excited to announce the release of Hoot 0.9.0! Hoot is a Scheme to WebAssembly compiler backend for Guile, as well as a general purpose WebAssembly toolchain. In other words, Scheme in the browser!

This release contains new features and bug fixes since the 0.8.0 release back in February.

Use Hoot in the upcoming Lisp Game Jam!

On Friday (May 15th 2026), the Spring edition of the Lisp Game Jam will begin! It's a 10-day long game jam where participants make games using their favorite flavor of Lisp.

Does making a small web game in Scheme using Hoot sound appealing to you? Well, then we have just the thing to get you started: the Hoot game jam template! This template project has everything you need to start making an HTML5 game with Hoot quickly.

The template repository includes:

  • Bindings to the necessary web APIs to make an interactive game with HTML5 canvas

  • A Makefile for compiling, running a development web server, and generating a .zip bundle for uploading to itch.io

  • A very simple Breakout-like example game that demonstrates how to put all the pieces together

Thanks to contributor Gonzalo Delgado, the game jam template now features gamepad input support!

For more inspiration, here are some games made with Hoot for past jams:

Now, onto the release notes!

Toolchain changes

  • Added the concept of host-provided types, useful for supporting Hoot on Wastrel.

  • Switched from legacy exceptions to standard Wasm exceptions, which were officially adopted in July 2025 but have been available in browsers for much longer. The --experimental-wasm-exnref flag is passed to Node.js in case it is old enough to still require this feature flag (which is currently the case for Guix).

  • Added support for DWARF custom sections.

Compiler changes

  • Replaced function name/source metadata with DWARF.

  • Changed default debug level to 1, which now includes emitting DWARF. For a production build stripped of all debug data, use debug level 0 (-g0 in the CLI) or strip the binary afterwards using hoot strip.

  • Updated the compiler backend for the new primitive bytevector predicates introduced in Guile 3.0.11.

  • Added support for bitvector literals on big endian host systems.

Runtime changes

  • Floating point number to string conversion is now implemented in Scheme rather than relying on an import. This makes binaries slightly bigger but makes it easier to support Hoot on Wastrel and non-JavaScript runtimes generally.

  • Bignum imports have been monomorphized to ease non-JavaScript runtimes (Wastrel, again).

  • Scheme binaries now export a main function, taking no arguments and returning no values, that invokes the internal $load function. This makes it possible for Wastrel to boot a Hoot program without support for the Scheme reflection interface.

  • Added DWARF parser to reflect.js.

  • Removed fsqrt import in favor of using f64.sqrt instruction.

Scheme changes

  • Range errors now include the range in the exception irritants list.

  • Added uint8array->bytevector procedure to (hoot typed-arrays).

  • Vectors are now considered self-evaluating in (hoot expander).

  • Added internal (hoot module-syntax) module to gather runtime module macros and support code.

  • define-record-type now works in the Scheme interpreter for record types with up to 8 fields. (Supporting more than 8 fields is planned but will require larger changes to the Hoot runtime.)

CLI changes

  • guild compile-wasm has been deprecated in favor of the new hoot compile subcommand. Both commands accept the same flags.

  • Added hoot help subcommand.

  • Added hoot strip subcommand to remove debugging information from a Wasm binary.

  • Feature flags have been split out from debug options in hoot compile/guild compile-wasm. For example, -gruntime-modules is no longer valid; use -fruntime-modules instead. The -g flag now exclusively handles debugging data such as whether to emit DWARF or a names section. The -f flag handles things that change program behavior such as whether to include a runtime module system for use with the Scheme interpreter.

Documentation changes

  • Updated manual to use hoot compile instead of guild compile-wasm.

  • The web deployment section now recommends hoot compile --bundle rather than manually copying support files.

Bug fixes

  • The repl.js file missing from 0.8.0 (which broke hoot repl) is now included in the release tarball. Apologies for the oversight!

  • Fixed pathologically large function emitted by lower-globals, which Wasm engines such as Wastrel can struggle with. Instead, many smaller functions are emitted.

  • hash-set!, hashq-set!, etc. now return the passed value. This matches Guile's behavior and allows more existing Guile programs to work as expected.

  • Fixed bug in parsing zero-length custom sections in Wasm binaries.

  • Fixed validation of return_call_indirect instruction.

  • Fixed missing EOF handler in REPL meta-command reader.

Browser compatibility

  • Compatible with Safari 26 or later.

  • Compatible with Firefox 121 or later.

  • Compatible with Chrome 119 or later.

Get Hoot

Hoot is available in GNU Guix:

$ guix pull
$ guix install guile guile-hoot

Also, Hoot is now available in Debian, though it will take a while for this release to make it there.

Otherwise, Hoot can be built from source via our release tarball. See the Hoot homepage for a download link and GPG signature.

Documentation for Hoot 0.9.0, including build instructions, can be found here.

Get in touch

For bug reports, pull requests, or just to follow along with development, check out the Hoot project on Codeberg.

If you build something cool with Hoot, let us know on our community forum!

Thanks to our supporters

Your support makes our work possible! If you like what we do, please consider becoming a Spritely supporter today!

Diamond tier

  • Aeva Palecek
  • David Anderson
  • Holmes Wilson
  • Jonathan Frederickson
  • Lassi Kiuru

Gold tier

  • Alex Sassmannshausen
  • Juan Lizarraga Cubillos

Silver tier

  • Austin Robinson
  • Brit Butler
  • Charlie McMackin
  • Dan Connolly
  • Deb Nicholson
  • Eric Bavier
  • Eric Schultz
  • Evangelo Stavro Prodromou
  • Evgeni Ku
  • Glenn Thompson
  • James Luke
  • Jonathan Wright
  • Michel Lind
  • Mike Ledoux
  • Nathan TeBlunthuis
  • Nia Bickford
  • Noah Beasley
  • Steve Sprang
  • Travis Smith
  • Travis Vachon

Bronze tier

  • Alan Zimmerman
  • Aria Stewart
  • BJ Bolender
  • Ben Hamill
  • Benjamin Grimm-Lebsanft
  • Brooke Vibber
  • Brooklyn Zelenka
  • Carl A
  • Crazypedia No
  • Ellie High
  • François Joulaud
  • Gerome Bochmann
  • Grant Gould
  • Gregory Buhtz
  • Ivan Sagalaev
  • James Smith
  • Jason Wodicka
  • Jeff Forcier
  • Marty McGuire
  • Mason DeVries
  • Michael Orbinpost
  • Neil Brudnak
  • Nelson Pavlosky
  • Philipp Nassua
  • Robin Heggelund Hansen
  • Ron Welch
  • Stefan Magdalinski
  • Stephen Herrick
  • Steven De Herdt
  • Tamara Schmitz
  • Thomas Talbot
  • William Murphy
  • a b
  • chee rabbits
  • r g
  • terra tauri

Until next time, happy hooting! 🦉

by Dave Thompson at Wednesday, May 13, 2026

Saturday, May 9, 2026

Idiomdrottning

Pixel art apps on F-Droid, comparison

Here’s a comparison between the three pixel/​sprite/​tile making apps I could find on F-Droid. I know there’s stuff in that vein on Varvara, but I haven’t figured out a good way to run Varvara apps on Android yet, especially in a way where I could get files in and out. (Definitively still interested in that approach though.) So here’s PxerStudio, Pixel Artist, and PixaPencil.

PxerStudio

First I tried PxerStudio. My initial review of this was: Okay, I can use this one, but I really hope one of the other options is better. Why?

Because it’s “floaty” and “fidgety” in a way that makes my nerves knot up. This floatiness starts right away: even selecting the resolution (for all three of these apps I used 16×16) requires those aaaaawful number-sliders Android has. It feels horrible trying to get it to land on exactly sixteen.

Same goes for selecting colors: there’s no way to use palettes or enter hex digits or RGB values. There’s a color picker which you’ll have to rely on religiously. So if you reverse-engineer the project format, you could open images with a palette base layer to pick from.

Also, don’t press “back” by mistake because it might close the whole thing down (without saving, it seemed like, but I’m not sure).

Placing the actual pixels is the biggest problem. There are three options: a drawing tool (basically a one-pixel brush), a line tool, and a box tool. And a flood fill that does work well. The line tool doesn’t actually draw full lines if you go at an angle; it’s really conservative and only places the pixels it’s sure of. So it might only place two or three pixels in between long gaps that you’d have to fill in by hand. That’s fine. The biggest lack is a way to just tap in pixels. Tapping on a pixel doesn’t do anything, only dragging does. So I’m constantly adding pixels by mistake when I accidentally drag, but where I do want to add pixels, I have to drag. The box-drawing tool is almost the best tool since you can make 1×1 “boxes” to make pixels, but that still requires dragging.

There’s full undo support though, and that mitigates a lot of these issues. But wow is this app an anxiety factory for me.

The export options are great (of the three apps this one exports the best) and, unique among the three apps, you can use layers. Layers are a killer feature that might push me into choosing this one over the others.

Pixel Artist

Okay TL;DR: probably don’t use this Pixel Artist until they implement better export options.

Here we go the opposite direction with an extremely minimalist app that only has one drawing tool: tap on a square (a pixel) to make it the selected color. Long press on a pixel to color-pick from it. No other tools, this is all you’ve got, and the only available size is 16×16 which might be a showstopper for some projects but suits me perfectly.

In addition to the awesome color-pick-by-long-press super power, there’s a predefined palette along one edge (and that’s your only “toolbar”, the palette); you can’t import palettes but you can replace colors by long-pressing them. However, those new colors can only be selected by R, G, and B sliders (with no numbers shown, so you can’t pick a specific R, G, or B value, just ballpark it). If you save or reload a file, you get the default palette back and your custom colors are gone. You can still color-pick them from the image but you can’t copy picked colors into the palette. So maybe don’t bother with custom colors, except the default palette is bad, with no ramps, all colors the same brightness, a single yellow, and seven nearly indistinguishable greens.

There’s no zoom option either, so on the Retroid Pocket Classic, the image doesn’t fit without panning (although I can see the whole image by opening the file menu, where it’s embedded) and on the Paper 7 (which has fewer pixels than the RPC so apparently this is determined by display size, not pixel density), there is a huge white margin to the right and below the image.

I can’t comfortably draw with a “tap by tap” app like this (and I hope you like tapping because a 16×16 image requires 256 taps), but if I sketch on 1mm grid paper, I can then “digitalize” those paper sketches by using this Pixel Artist app as a “data entry tool”. The garish, unramped colors would’ve been fine if they had at least been distinguishable, since I can re-palette them in the game engine; I’d only be using the app to get a digital representation of a paper sketch, and I can use all kinds of colored pens when making the paper sketch versions on grid paper.

Dragging with a single finger pans the image and tapping sets a pixel to the chosen color. There’s no undo at all.

So far some pros and cons: great for quickly placing pixels with precision, but that’s all it can do. Now for what breaks it: exporting the images! Three issues with that:

  1. You need to remember to turn off the grid before exporting, otherwise the grid will be visible in the image (and it’ll have a different resolution so it can fit the grid). Not a dealbreaker, but I do want the grid on while drawing (that especially goes for this app with its dot-by-dot mental model), so having to turn it off before exporting each image is a chore.

  2. One pixel is not one pixel. It’s display-dependent. So images I made on the RPC are 1280×1280 (that’s 80×80 pixels per pixel) while images I made on the Paper 7 are 640×640 (so 40×40 pixels per pixel). That’s nothing ImageMagick can’t fix (it’s only a waste of space and bandwidth, but I can live with that).

  3. Now to the biggest problem: that big exported picture is a JPEG for some reason! What in the heck! Yeah, yeah, ImageMagick’s convert utility can probably reindex and requantize the images to hopefully dodge any artifacts, so the JPEG decision doesn’t have to ruin anything, except it also wastes space.

So okay, my lede saying not to use Pixel Artist until they fix exports isn’t literally true, because ImageMagick can rescue the image data. Just not easily.
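For what it’s worth, the rescue step is mechanical: each logical pixel becomes an N×N block, so sampling every Nth device pixel recovers the 16×16 image. A rough sketch of that idea in Python (a stand-in for the ImageMagick invocation; the numbers come from the post, the code is only illustrative):

```python
def downsample(pixels, factor):
    """Recover the logical image from an integer-upscaled one by
    taking the top-left sample of every factor x factor block."""
    return [row[::factor] for row in pixels[::factor]]

# Toy 4x4 "device" image that is really a 2x2 logical image
# upscaled by a factor of 2 (each value fills a 2x2 block).
big = [[1, 1, 2, 2],
       [1, 1, 2, 2],
       [3, 3, 4, 4],
       [3, 3, 4, 4]]

assert downsample(big, 2) == [[1, 2], [3, 4]]
# An RPC export would use factor 80 (1280/16), a Paper 7 export factor 40.
```

With JPEG artifacts in play, sampling the center of each block (or taking the most common color per block) would be safer than the top-left corner.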

PixaPencil

Uh-oh! F-Droid warns that “source code no longer available” for PixaPencil, so I was really hoping it wasn’t gonna be the best of the three. (Turns out the situation is that the app has gone proprietary. It’s “source-available” but for our DFSG purposes it might as well be in /dev/null. Except! Older versions can be forked! That’s a distinction I sometimes wish F-Droid would make but maybe that’s splitting hairs. I see that on F-Droid apps and I think oh no there’s been a drive failure and the source code is literally gone (I’ve been there, fam♥︎😭) but what happened instead is that the main dev is making no more DFSG-free updates so may the forks be with you. Looking at the source code there’s a subpackage named “dao” which is a pretty bad red flag for me.)

Unfortunately, it might be the best one.

How good is it at the basics?

It doesn’t have layers, which is the major feature PxerStudio had that this one is missing. Export options are also limited compared to PxerStudio, but it’s a good clean PNG where you can select raw (one pixel is one pixel, great for using in the game project) or scaled (one pixel is bigger, great for display and posting). That’s all I need. Other tools can take it from there. The PNG images are in indexed, 8-bit PaletteAlpha format, which is perfect for this.

Of the three apps, this is the only one that lets you actually enter any hex triplet color (or I guess hex quadruplet since there’s alpha too). You can even import entire palettes. (You need to use a specific website to do that, though: it’s called “Lospec”. You paste only the last part of the URL, the “palette identifier”.)

The only way to zoom is the zoom buttons and the only way to pan is the panning tool. Okay, I love that restriction. This means that dragging my finger (or pen, but my pen is broken right now) over the screen does only one thing: the selected tool. Tapping pixels to add them is even more reliable than in Pixel Artist (where it seemed like it sometimes did require a little effort), and as I said, PxerStudio can’t even do that.

The line tool is crisp and reliable and there are a couple of other tools like boxes, flood fill, polylines and so on.

Beyond the basics

I’ve got to remember that if anything in this section is bad, that’s fine, just don’t use it; it’s a “bonus section” anyway. (For example, the project I’m working on doesn’t benefit from dithering.) There’s mirror symmetry, dither and spray tools, other brush shapes (and your currently selected brush is used when using the line tools). There’s darkening, lightening, and color-inverting the entire image (maybe these would’ve been more useful if there had been layers and/or selections).

There’s also a “darken/lighten” tool which I love and hate. Be careful when using it because if you accidentally lift your pen, it re-darkens already darkened squares or re-lightens already lightened ones, and it also lightens any black outlines you have. (Again, no layers…)

It does not stick to the ramps in your palette, which is the major problem. Those caveats aside, I’m still thinking I might want to use this tool a lot, to crank out a bunch of sprites quickly: I’ll just make a way simpler hue-only palette for the main flats and then shade them with this darken tool, being more restrictive with lighten since that can mess up the outlines. I can re-palette the sprites in the game engine later anyway. But this darken/lighten tool comes at the expense of a more generic “only replace selected color” draw setting that MS Paintbrush had back in the early 90s. It’s both worse and better, because if you’re satisfied with what it does, it’s fewer clicks to add a li’l depth to your images and make them a li’l less NES and a li’l more TG-16.

I also like the “pixel perfect” setting. With it on, I can draw sloppily and it deletes stray pixels after I lift my finger. I love that it’s not the default because it’s pretty surprising behavior, but it’s a great option that I’ll use often if I do go with this app.

Emacs in an SSH

My original plan when I got this 1mm grid paper was to enter the pixels directly into Emacs and write a li’l something something to convert it into pixel data (and then I learned that pceas has a hex-nybble-to-pixel-index importer built in, which made this even easier).

I mean something like:

00A0
00A0
00A0
00A0

(that’s a li’l 4 by 4 toy example, the real entries would be 8 by 8 or 16 by 16.)
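As a sketch of what that “li’l something something” could look like (in Python here purely for illustration; the function name and output shape are made up, and pceas apparently handles the hex-nybble step itself):

```python
def grid_to_indices(text):
    """Turn lines of hex nybbles (one character per pixel) into
    rows of palette indices, e.g. "00A0" -> [0, 0, 10, 0]."""
    return [[int(ch, 16) for ch in line]
            for line in text.split() if line]

sketch = """
00A0
00A0
00A0
00A0
"""

rows = grid_to_indices(sketch)
assert rows == [[0, 0, 10, 0]] * 4
```

From rows of indices like these, packing into whatever binary format the assembler or engine expects is a small extra step.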

I quickly learned that as much as I actually do type stuff in here with the OSK, that might be fine for coding but not so much for this amount of “data entry”.

In other words, I can’t do it on-the-go unless I bring my keyboard. But at home or when I do have my keyboard this might still be the best method for getting paper-sketched ideas into the game! I can choose homerow glyphs or otherwise comfortably-placed glyphs while entering and fix them with a tr filter.

Kind of sucks that the grid paper has “major grid line” every ten lines instead of every eight lines but I can live with that (I’ve already made plenty of sprites with this method and it works fine).

Nostalgia for my desktop

When I still had my desktop computer I loved making pixel art in a combination of Inkscape, MyPaint and GIMP #ChangeTheName. Waaay back in the day I used Synfig also. Inkscape might sound like a bad choice for pixel art and it was, until I found some extensions that made it better. The icons I made for Heartfeed were all made in Inkscape, manually rehinted for every size. (Well, some of them started life in Blender but I did the hinting in Inkscape.) Knowing that outlined objects need to start at .5-offset pixel increments while un-outlined objects need to start at integer pixel increments, that sort of stuff is necessary to be aware of when working with Inkscape. It’s definitively not safe-and-good straight out of the box. It’s just that at the time my brain was really attuned to Inkscape. GIMP #ChangeTheName is also great for color curves, scaling options, applying gradients and so on, and there are a million formats to export and import, and it’s great for making animations easily.

CSP (non-FOSS alert)

Then, after I moved apartments and couldn’t use my desktop anymore because of lack of physical space and power outlets, I was relegated to tablets. I got CSP for the iPad but I’ve let that subscription lapse. I’m looking into maybe getting a cross-device renewal later that works on both iPad and Android.

CSP was okay for pixel art, actually. I wasn’t that happy with how the one image I made with it turned out, but I was pretty happy with the tools. The “pixel perfect” drawing mode that PixaPencil has would’ve been welcome, but in exchange it’s freeing to be able to use pencil sketching and painting tools for the first iterations and then go down to the nitty gritty for refining them. We get all the symmetry-line, skewing, molding, pushing, warping, masking, layer joy we could ever need and can then cook it down to pixel size.

That’s an approach that none of these specialized pixel art apps can do.

It’s the difference between drawing and painting. Sometimes painting feels like molding clay, I love it, I can push and pull, add and sculpt. Sketching has some of the same quality with a loose-enough pencil scribbling approach (I like normal HB pencils the best). Drawing implies laying down the lines exactly where they should be and getting them right in the first try. That’s the mentality the pixel apps require and that’s a pretty huge limitation on all of them. They’re very waterfall and not so iterative.

Conclusion

Nothing I can do on tablet, among the options I’ve found so far, comes close to what I could do on my Debian desktop with MyPaint, Krita, Blender, Inkscape, and GIMP #ChangeTheName. Except for CSP which did come pretty close.

Of the three Android options I’m gonna go with the old version of PixaPencil in the hope of forks. I’m not saying no to the Varvara stuff if I can get it to work on Android, although I’m working on a project that uses four-bit color, not two-bit.

If I ever rejoin society I might go trawling on the App Store and Play Store (including considering maybe renewing my CSP subscription).

Definitively not throwing out my 1mm grid paper either. After looking at these pixel apps I’m so glad I got it. I’m sure some of the sprites and tiles will be drawn entirely that way and hand-entered into Emacs, others will be drawn mostly that way, then hand-entered and refined in one of the pixel apps, some will be made primarily in the pixel apps based on loose paper sketches, and some won’t use the grid paper at all and that’s fine too. I’m completely overwhelmed by the amount of art I have to make so I really appreciate the multi-faceted approach.

by Idiomdrottning at Saturday, May 9, 2026

Sunday, May 3, 2026

Scheme Requests for Implementation

SRFI 271: Random port libraries

SRFI 271 is now in draft status.

This SRFI proposes a pattern of libraries for binary input ports that produce random bytes. Libraries are divided into “randomized” and “determinized” categories to address different uses of random data. The design leaves the details of random number generation to the implementer and the transformation of bytes to other types (floats, etc.) to higher-level libraries. A mechanism for saving random-port states as bytevectors and for propagating those states to new ports is also provided.
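As a loose analogy in another language (this is not the SRFI’s API), Python’s random module shows the same save-state-and-propagate pattern for a seeded, replayable byte source:

```python
import random

gen = random.Random(42)   # a "determinized" source: seeded, replayable
state = gen.getstate()    # snapshot, analogous to the SRFI's
                          # bytevector-encoded port state
first = gen.randbytes(8)

clone = random.Random()
clone.setstate(state)     # propagate the saved state to a "new port"
assert clone.randbytes(8) == first
```

In the SRFI’s design the snapshot is itself a bytevector, so a state can be stored or transmitted and later used to construct a fresh port that replays the same stream.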

by Wolfgang Corcoran-Mathe at Sunday, May 3, 2026

Friday, May 1, 2026

Scheme Requests for Implementation

SRFI 270: Hexadecimal Floating-Point Constants

SRFI 270 is now in draft status.

Floating-point numbers are usually stored in radix 2, but are written by users in radix 10. This SRFI introduces Scheme syntax for hexadecimal floating point constants based on C99's syntax, that use radix 16 for writing the integer and fractional part, and a radix 10 exponent part that raises the whole value to a power of 2.
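Since the syntax is borrowed from C99, Python’s float.fromhex (which accepts the same C99 notation) is a handy way to check what such constants denote (an illustration of the notation itself, not of the SRFI’s Scheme lexical syntax):

```python
# 0x1.8p1 means (1 + 8/16) * 2**1 = 1.5 * 2 = 3.0
assert float.fromhex("0x1.8p1") == 3.0

# A hex constant can state a binary64 value exactly; the double
# nearest to decimal 0.1 is exactly this:
assert float.fromhex("0x1.999999999999ap-4") == 0.1

# Round-tripping: hex() prints the exact stored bits back.
assert float.fromhex((3.14).hex()) == 3.14
```

The exactness is the point: every binary64 value has a finite hex-float spelling, which decimal notation cannot guarantee.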

by Peter McGoron at Friday, May 1, 2026

Wednesday, April 29, 2026

jointhefreeworld

Why I Still Reach for Scheme and Lisp Instead of Haskell

There is a persistent tension in software engineering between the beautiful, mathematically pure ideal of a program, and the messy, pragmatic reality of just getting things done. Over my career, I’ve explored the depths of both extremes in an attempt to find my personal sweet spot for hacking.

Before you sharpen your keyboards and start a flame war over the title, let me point out that I haven’t written this post to talk badly about Haskell, or any other tool for that matter. In fact, I love Haskell. I taught it to myself, banged my head against the wall over the course of three years, and built several real-world projects with it (some even became a bit lucrative).

Between my time in the web development world, the Go world, the JVM world with Java, Scala and Kotlin, and my long history hacking in Lisp (Emacs Lisp, Common Lisp, Scheme), I have come to deeply appreciate functional programming.


Enlightening as it can be  #

Haskell has what is likely the most amazing, enlightening and complex type system to work with (as do other ML-family languages).

It is also the undisputed king of introducing mathematical ideas and concepts to programming, and popularizing them. Haskell circles are frequented by PhDs, computer science researchers, category theorists and all kinds of smart people (don’t underestimate other communities, like Schemers though).

Some of the amazing innovations of Haskell, or ideas it has helped popularize, blew my mind several times:

All these kinds of things often feel bolted-on or missing entirely in other languages!

For all its brilliance, Haskell resists most of the attempts people make to just hack and write useful code quickly.

This especially goes for people new to functional programming (or, god forbid, new to monads and functors! A monad is just a monoid in the category of endofunctors, what’s the problem?).


When pragmatism enables actual productivity  #

Scheme (and Lisp in general) might lack Haskell’s innovations and purity, favoring a minimalistic flexibility instead, but it mixes practicality with functional beauty in a way that makes it a functional language for human beings.

Actually, in my opinion, Scheme (and Lisp) lets you express complex systems and problem domains in simpler terms than any other language can.

Take a recent adventure of mine, for example. I was spinning up a prototype for a bookmark management tool, just one of many projects I’ve come up with over the years.

I started in Haskell, as I thought the beauty of data modelling and pure, side-effect-free reasoning would work well: it’s also fast and elegant, and once you’ve used libraries like Parsec, Servant, and optparse-applicative, it’s tough to imagine writing certain things, like a parser, without them.

One of the steps in the proof-of-concept was transforming some data models to XML and writing them out to a file.

If I were doing this in Kotlin or Java, it would be trivial: drop a dependency into Gradle, wire up Jackson or a standard DOM parser, and ten minutes later the data is in memory and ready to manipulate.

After a frustrating hour with my Haskell project, even after years of experience with the language, I was still wrestling with the dependencies, and later with the monadic API, and I ended up giving up on the whole thing after I noticed I had even forgotten what I was doing in the first place.

This has often been my friction point with Haskell. It is beautiful, but it fights you when you just want to get your hands dirty and prototype without a big design upfront, even though type-driven development can also be nice and work well in some cases.

Scheme (GNU Guile for me) doesn’t have Haskell’s brutally efficient compiler, although it is quite speedy thanks to its C foundation. What it has is terseness, power, and, more importantly, it makes the actual act of hacking a joy.

As elegant as Haskell’s purely functional foundation is, it can really complicate simple, crucial, impure tasks like writing to files or talking over a network.

Monads are Haskell’s answer to this, but they often feel like a heavy abstraction tax; they allow you to write useful software, but they rarely make it intuitive or fast to prototype.

These kinds of heavy-handed abstractions are, in my opinion, really beautiful, but not justifiable for most projects. Please do ask yourself: do I really need a functional effect system? Is it worth the complexity and cognitive load? Do I really need pure/impure computation strictness enforced at compile time? Remember that later, just adding a simple print somewhere is not going to work without a refactor (welcome to the IO monad).

As a long-time Lisper, for me this is a massive barrier to usability. In many ways, you can only fix what you can observe.

Scheme happily sacrifices academic purity so you can slap a (write ...) anywhere in your code and instantly see what’s going on. I’m sure a Haskell purist is burying their face in their hands right now, citing Debug.Trace or questioning why I’d want side-effects in a lazy, well-optimized language. They aren’t technically wrong, but the friction added to quick-and-dirty debugging is a tax I am simply not willing to pay when I’m trying to move fast.


Meta-programming and DSLs  #

The second problem with Monads is directly tied to their greatest strength: they are synonymous with Domain Specific Languages (DSLs).

The promise of DSLs is fantastic—don’t write a complex program to solve a problem; write a simple program in a bespoke language designed solely for that task. Parsec is the golden child here; the parsing function is practically identical to the BNF grammar.

But the success of Parsec has filled Hackage with hundreds of bespoke DSLs for everything. One for parsing, one for XML, one for generating PDFs. Each is completely different, and each demands its own learning curve. Consider parsing XML, mutating it based on some JSON from a web API, and writing it to a PDF. In the Java ecosystem for example you expect a certain level of consistency. You pull in three libraries, and they generally follow familiar object-oriented or functional-lite conventions. But in Haskell, three DSLs designed for three different tasks usually mean the authors optimized strictly for the domain, completely ignoring syntax consistency. Instead of five minutes skimming JavaDocs, you have hours of DSL documentation and tutorials ahead of you.

As we Schemers know, Scheme is intentionally simple. That simplicity isn’t a limitation; it’s what makes it endlessly flexible.

While modern JVM languages rely heavily on reflection or complex compiler plugins (like Kotlin’s KSP) to achieve this, Lisp hackers have been effortlessly reshaping the language for decades using the powerful macro system and extending and bending the language to their will.

(define-syntax define-repo-method
  (syntax-rules ()
    ((_ method-name accessor docstring)
     (define* (method-name repo . args)
       docstring
       (apply (accessor repo) args)))))

Haskell, much like Scala’s advanced type-level programming, often requires a mountain of language extensions to achieve similar flexibility (Template Haskell and its powerful but scary API).

{-# LANGUAGE TemplateHaskell #-}
import Control.Monad
import Language.Haskell.TH

-- curryN n builds the lambda \f x1 ... xn -> f (x1, ..., xn) at compile time.
curryN :: Int -> Q Exp
curryN n = do
  f  <- newName "f"
  xs <- replicateM n (newName "x")
  let args = map VarP (f:xs)
      ntup = TupE (map (Just . VarE) xs)
  return $ LamE args (AppE (VarE f) ntup)

I’ve used Scheme for countless projects because its combination of features and philosophies hits my personal “sweet spot”. It’s also an advanced language that keeps pioneering unconstrained innovation (e.g. delimited continuations). When you want to mold the syntax directly to your will, Scheme gets out of your way and helps you achieve it.

Of course, to be completely fair about my toolkit, standard Scheme can sometimes lack the heavyweight, “batteries-included” ecosystem required for massive enterprise production compared to the JVM. And compared to Haskell’s, Lisp compilers are modest and simple at best, but that also makes them that much more approachable (and the error messages that much friendlier).

I’m not saying Scheme is objectively better than Haskell. Languages are tools, and we should choose the right tool for the job.

I will always remember all I learnt from Haskell’s functional beauty and ideas, but to me, Haskell remains a platonic ideal of a programming language: lighting the way in a certain direction, but a bit too rigid for most of what I do.


Then there is the REPL: Interactive workflow, developer power  #

A REPL (Read-Eval-Print Loop) is an interactive environment that can connect to your console, your running application, the language compiler, and more, giving you superpowers as an engineer 🦸🏼.

Lisp dialects, and Guile Scheme in particular, have great support for this. I personally like to do this with Guix and Emacs (Arei/Ares + sesman): you get an ultimately extensible, powerful editor experience, miles ahead of traditional IDEs 🐂.

And no, it’s not the same kind of REPL you know from Haskell (GHCi or others) or Python. Lisp REPLs can do so much more and integrate seamlessly with your editor. Evaluate, inspect, change, and debug live.

It fundamentally changes the development workflow by eliminating the slow edit, save, compile, run cycle. Instead of writing a whole program and then running it to see what happens, you get a fast, conversational workflow. What does this mean in practice?

  • Incremental Development: Write, test, inspect, evaluate one function or even one line at a time. Get immediate feedback without running the entire app.
  • Powerful Debugging: Forget adding print statements and restarting. You can pause, inspect objects, change values, and even redefine a broken function on the fly to test a fix in any environment (yes even in production, while running).
  • Fast Prototyping & Learning: Instantly experiment with a new library or API. Just load it and start calling functions to see how they work, which is much faster than only reading documentation.

When integrated into your code editor, you can execute any piece of code (a line, a selection, or a file) with a keyboard shortcut and see the result instantly, creating a seamless and powerful development experience.
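As a sketch of what this conversational workflow looks like, here is a hypothetical Guile session (the greet procedure is invented purely for illustration):

;; Hypothetical Guile REPL session (invented example, for illustration).
;; Define a procedure, spot a bug, and redefine it live, with no restart.
(define (greet name)
  (string-append "Hello, " name))      ; oops: forgot the exclamation mark

(greet "Scheme")                       ; => "Hello, Scheme"

;; Redefine on the fly; every caller now sees the fixed version.
(define (greet name)
  (string-append "Hello, " name "!"))

(greet "Scheme")                       ; => "Hello, Scheme!"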

Overall, Lisp languages are simply the sweet spot for me and for what I consider good developer experience. They also give you superpowers and let you create beautiful systems that can last.

Wednesday, April 29, 2026

Monday, April 27, 2026

Arthur A. Gleckler

validate-email-address

I'm building a new web site in Scheme for BALISP, the Bay Area Lisp and Scheme Users Group. (The site isn't launched yet, but will replace the current Meetup.com redirect at balisp.org sometime before our next meeting.)

The BALISP site needs to validate users' email addresses to make sure that they comply with RFC 5322, but I couldn't find a complete validator written in Scheme. Everything I read said that making a correct validator is a surprising amount of work. Many people write a complicated regular expression that produces false positives and negatives, but that felt wrong.
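For contrast, here is a sketch (my illustration, not code from the library) of the kind of naive check that fails in both directions; it assumes SRFI 13's string-index:

;; Illustrative only; NOT how validate-email-address works.
;; Assumes SRFI 13's (string-index s char [start]).
;; False negative: rejects the legal single-label address user@example.
;; False positive: accepts a..b@example.com, whose unquoted local part
;; violates RFC 5322's dot-atom rule (no consecutive dots).
(define (naive-email? s)
  (let ((at (string-index s #\@)))
    (and at
         (> at 0)                        ; non-empty local part
         (string-index s #\. (+ at 1))   ; insist on a dot in the domain
         #t)))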

Fortunately, Dominic Sayers had published a thorough set of tests as part of his isemail validator, written in PHP. With those tests and the help of Claude Code, I was able to implement a complete validator that works in Chibi Scheme and Gauche Scheme. My new Scheme library is called validate-email-address, and is licensed under the MIT license except for the test data, which are licensed under Dominic's original BSD 3-Clause license. I hope it's useful to other Scheme hackers.

by Arthur A. Gleckler at Monday, April 27, 2026

Sunday, April 26, 2026

Scheme Requests for Implementation

SRFI 267: Raw String Syntax

SRFI 267 is now in final status.

Raw strings are a lexical syntax for strings that do not interpret escapes inside of them and are useful in cases where the string data has a lot of characters such as \ or " that would otherwise have to be escaped. This SRFI proposes a raw string syntax that allows for a customized delimiter to enclose the character data. Importantly, for any string, there exists a delimiter such that the raw string using that delimiter can represent the string verbatim. The raw strings in this SRFI do not do any special whitespace handling.

by Peter McGoron at Sunday, April 26, 2026

Tuesday, April 21, 2026

spritely.institute

Spritely Goblins v0.18.0: Sleepy actors!

Goblins version 0.18.0 release art: a Spritely goblin takes a nap in a chair by a fireplace, tea steams on a nearby table

We’re excited to announce the release of Spritely Goblins 0.18.0! This release features a new caching layer called “sleepy actors”, OCapN protocol updates, and numerous bug fixes. So get cozy by the fire, pull out a steaming cup of tea, and let’s have a nice relaxing read about this exciting new Goblins release!

Sleepy actors

Remember when we introduced persistence back in Goblins 0.13.0? You’re not sure? Okay, as a quick refresher, Goblins’ persistence system is able to serialize a running Goblins program for you and wake it back up later! Pretty cool!

Goblins is pretty smart about only saving the changes that need to change. But... if we can save actors to disk, do we really need them to be “awake” all at once? What if we let them take a little nap, and just woke them up when it’s time for them to do something? Then they could go back to bed when they aren’t needed anymore!

Well that’s exactly what we’ve built! Sleepy actors are a new, optional caching layer added to the core of Goblins. Actors may now go to sleep or be woken up depending on a customizable caching algorithm known as a “sleep strategy”. When an actor goes to sleep, it is saved to the vat’s persistence store but its reference remains live. When a sleeping actor receives a message, its state is restored from the vat’s persistence store and the message is processed as usual.

Goblins currently ships with two sleep strategies: an extremely simple strategy where your little goblins head to bed after each and every turn, and a “least recently used” algorithm, which functions as a hot cache where only the most recently activated goblins stay awake, and the rest go take a nap.

For a feature that’s so sleepy, we’re pretty wired about its potential, and we hope you are too!

OCapN protocol updates

The OCapN draft specification has changed in the time since the last Goblins release. The op:deliver-only operation has been dropped in favor of a single op:deliver operation. GC operations now accept a list of export positions instead of a single position so that GC can be done in batches; their operation names have likewise been changed to the plural form (op:gc-export is now op:gc-exports, etc.) The protocol version number has thus been bumped, which means that applications built with an earlier release of Goblins are incompatible with the OCapN shipped in this release.

Notable bug fixes

  • Fixed a race condition when restoring multiple vats from persisted data. If Alice in vat A was referenced by Bob in vat B but by no other actors in vat A, then it was possible for Alice to be garbage collected before vat B was restored.

  • Fixed a signing oracle vulnerability in the WebSocket netlayer’s designator authentication code.

Getting the release

This release includes all the features detailed above as well as many bug fixes. See the NEWS for more information about all of the changes.

As usual, Guix users can upgrade to 0.18.0 by running the following:

guix pull
guix install guile-goblins

Otherwise, you can find the tarball on our release page.

If you’re making something with Goblins or want to contribute to Goblins itself, be sure to join our community at community.spritely.institute! We also host regular office hours where you can come and ask questions or discuss our projects. Information about office hours is available on the forum. Thanks for following along and hope to see you there!

Thanks to our supporters

Your support makes our work possible! If you like what we do, please consider becoming a Spritely supporter today!

Diamond tier

  • Aeva Palecek
  • Holmes Wilson
  • Lassi Kiuru

Gold tier

  • Juan Lizarraga Cubillos

Silver tier

  • Austin Robinson
  • Brit Butler
  • Charlie McMackin
  • Dan Connolly
  • Deb Nicholson
  • Evangelo Stavro Prodromou
  • Glenn Thompson
  • James Luke
  • Jonathan Wright
  • Michel Lind
  • Mike Ledoux
  • Nia Bickford
  • Steve Sprang
  • Travis Smith

Bronze tier

  • Alan Zimmerman
  • BJ Bolender
  • Ben Hamill
  • Benjamin Grimm-Lebsanft
  • Brooke Vibber
  • Brooklyn Zelenka
  • Crazypedia No
  • Ellie High
  • François Joulaud
  • Gerome Bochmann
  • Grant Gould
  • Gregory Buhtz
  • Ivan Sagalaev
  • Jason Wodicka
  • Jeff Forcier
  • Marty McGuire
  • Mason DeVries
  • Michael Orbinpost
  • Neil Brudnak
  • Nelson Pavlosky
  • Philipp Nassua
  • Robin Heggelund Hansen
  • Ron Welch
  • Stephen Herrick
  • Steven De Herdt
  • Tamara Schmitz
  • Thomas Talbot
  • William Murphy
  • a b
  • r g
  • terra tauri

by Dave Thompson and Christine Lemmer-Webber at Tuesday, April 21, 2026

Saturday, April 11, 2026

Idiomdrottning

What Delta Chat was

Being able to quickly write replies to email, real actual email, was very valuable. That was the core of what drew me to Delta Chat.

There are plenty of proprietary email apps set up around that feature but in the free world, not so much. Delta Chat was it and it was a gem because it was in many ways better than those other sparks and spikes and whatever they were called. Not to mention the incredible leap of faith it takes to go for a proprietary mail app since they can read the emails.

Delta Chat is rapidly moving away from being usable for that. If someone forks it or finds a good alternative (that’s FOSS, obvs), I would love to know.

I know I’ve worked a little on Notmuch, and I’ve talked a little bit with the people who make aerc, but for all their conveniences they’re still traditional mail apps where the threads look like files that you have to open up and enter into and work with. The few extra clicks involved with using a normal mail app might sound like no big deal but it really adds up. All the opening, searching, archiving, threads management… Whereas with Delta Chat in its prime, you just see the message right away and can reply right away. Easy peasy.

Maybe K-9 but it got bought out by Mozilla and they hate autocrypt which I don’t. I think WKD is better, sure, but I try to use both. K-9 used to be one of the best autocrypt clients out there.

by Idiomdrottning at Saturday, April 11, 2026

jointhefreeworld

Functional repository pattern in Scheme? Decoupling and abstracting the data layer in Lisp

Implementing the Repository Pattern with Hygienic Macros in Scheme

Hi everyone!

I’ve been working on a new approach for the data layer of my projects lately, and I’d love to poke your brains and get some feedback.

Coming from a background in Scala, Java and other OOP languages and a fascination for FP languages and Lisps (as well as Rust and Haskell), I’ve seen a lot of patterns come and go.

Recently, I noticed a common anti-pattern in my own Scheme projects: a tight coupling between my controller layer and the SQLite implementation. It wasn’t ideal, and I really missed the clean separation of the Repository Pattern.

So, I set out to decouple my data layer from my controller layer in the MVC architecture I love. I wanted to do this using pure functional programming, and I ended up building something really fun using Scheme’s hygienic macros.

(If you want to see this implemented in a real project, check out my example repo here: lucidplan)

I am working on adding it to byggsteg too.

I plan to bring this pattern to all my projects to reap the benefits of the eDSL, better decoupling, and easier testing. Here is how I built it.

The Macros  #

I created two main macros. define-record-with-kw magically defines a keyword-argument constructor, bypassing the need for strict parameter ordering. It’s highly ergonomic.

define-repo-method is the real superpower. It accepts any arity, plus optional or #:keyword arguments. This saves a ton of work, reduces tedious parameter passing, and gives you a very clean eDSL definition.

(define-module (lucidplan domain repo)
  #:declarative? #t
  #:use-module (srfi srfi-9)
  #:export (define-repo-method define-record-with-kw))

;; A Guile macro that defines a repository accessor procedure with a docstring.
(define-syntax define-repo-method
  (syntax-rules ()
    ((_ method-name accessor docstring)
     (define* (method-name repo . args)
       docstring
       (apply (accessor repo) args)))))

(define-syntax define-record-with-kw
  (syntax-rules ()
    ((_ (type-name constructor-name pred) kw-constructor-name
        (field-name accessor-name) ...)
     (begin
       ;; Define the standard SRFI-9 record
       (define-record-type type-name
         (constructor-name field-name ...) pred
         (field-name accessor-name) ...)

       ;; Define the keyword-argument constructor
       (define* (kw-constructor-name #:key field-name ...)
         (constructor-name field-name ...))

       ;; Auto-export members
       (export type-name pred kw-constructor-name accessor-name ...)))))

Defining the Domain eDSL  #

Here is how I use those macros to define my DSL for a “projects” entity:

(define-module (lucidplan domain project)
  #:declarative? #t
  #:use-module (srfi srfi-9)
  #:use-module (lucidplan domain repo)
  #:export (get-projects))

;; --- Record definition ---

(define-record-with-kw (<project-repository> %make-project-repository
                        project-repository?)
  mk-project-repository
  (get-projects-proc repo-get-projects))

 ;;  --- eDSL: Embedded Domain Specific Language ---

(define-repo-method get-projects repo-get-projects
  "Retrieves a list of all active projects from the given REPO.")

The SQLite Implementation  #

Finally, here is the concrete SQLite implementation using Artanis. This is completely decoupled from the rest of the application logic.

(define-module (lucidplan sqlite project)
  #:declarative? #t
  #:use-module (srfi srfi-9)
  #:use-module (kracht prelude)
  #:use-module (artanis db)
  #:use-module (lucidplan sqlite util)
  #:use-module (lucidplan domain project)
  #:export (make-sqlite-project-repository))

;; --- Artanis + SQLite implementation ---
(define (make-sqlite-project-repository rc)
  (define columns
    '(id human-id
         title
         url
         vcs-url
         description
         created-at
         updated-at
         deleted-at))

  (define (get-projects)
    (let* ((query (format #f
                          "SELECT ~a
                           FROM project WHERE deleted_at IS NULL
                           ORDER BY human_id ASC"
                          (symbols->sql-columns-list columns)))
           (_ (log-info "get-projects query:\n\t~a\n" query))
           (rows (map sql-row->scheme-alist
                      (DB-get-all-rows (DB-query (DB-open rc) query))))
           (_ (log-info "get-projects rows: ~a\n" (length rows))))
      rows))

  (mk-project-repository #:get-projects-proc get-projects))

A condensed example with keyword arguments:

;; The DSL (notice how arity is clean)
(define-repo-method get-jobs repo-get-jobs
  "Retrieves a list of active jobs from the given REPO.")

;; SQLite implementation
(define* (get-jobs #:key limit offset)
  (let* ((query (format #f
                        "SELECT ~a FROM job
                         ORDER BY created_at DESC LIMIT ~a OFFSET ~a"
                        (symbols->sql-columns-list columns) limit offset))
         (_ (log-info "get-jobs query:\n\t~a\n" query))
         (rows (map sql-row->scheme-alist
                    (DB-get-all-rows (DB-query (DB-open rc) query))))
         (_ (log-info "get-jobs rows: ~a\n" (length rows))))
    rows))

Using it can look like this:

(let* ((job-repo (make-sqlite-job-repository rc))
       (jobs (get-jobs job-repo #:limit 50 #:offset 0)))
  .......)

I believe I have something really powerful cooking here, but I know there is always room for improvement.

What do you all think? How would you go about improving this? I’m entirely open to criticism, feedback, and brainstorming!

Thanks for reading this :)

Saturday, April 11, 2026

Thursday, April 2, 2026

Scheme Requests for Implementation

SRFI 269: Portable Test Definitions

SRFI 269 is now in draft status.

This SRFI defines a portable API for test definitions that is decoupled from test execution and reporting. It provides three primitives: the universal is macro for assertions, test for grouping assertions into independently executable units, and suite for organizing tests into hierarchies. Tests and suites can carry user-provided metadata to adjust the behavior of a test runner, for example, to select tests by tags or to enforce timeout values. The API is tiny, yet capable and flexible. By focusing on the definition and leaving execution semantics to test runners, this SRFI offers a common ground that can reduce fragmentation among testing libraries.

Unlike side-effect-driven testing frameworks (e.g. SRFI-64), this API produces first-class runtime entities, making it easy to filter, schedule, wrap them in exception guards and continuation barriers, run in arbitrary order, and re-run dynamically generated test subsets. In addition to the usual CLI test runners, it enables runtime-friendly test runners that integrate well with highly interactive development workflows inside REPLs and IDEs, significantly increasing control over test execution and shortening the feedback loop.

To bridge the test definitions and test runners, the SRFI specifies a message-passing programming interface, and test loading and execution semantics recommendations for test runner implementers.
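To make the shape of the API concrete, here is a hypothetical sketch of definitions built from the three primitives (the exact surface syntax belongs to the draft, so treat this as an approximation, not the specified form):

;; Hypothetical sketch; consult the SRFI 269 draft for the real syntax.
;; `is` asserts, `test` groups assertions into an independently runnable
;; unit, and `suite` arranges tests into a hierarchy.
(suite "arithmetic"
  (test "addition"
    (is (= 4 (+ 2 2)))
    (is (= 0 (+ -1 1))))
  (test "division"
    (is (= 2 (/ 4 2)))))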

by Andrew Tropin and Ramin Honary at Thursday, April 2, 2026

Tuesday, March 31, 2026

Andy Wingo

wastrelly wabbits

Good day! Today (tonight), some notes on the last couple months of Wastrel, my ahead-of-time WebAssembly compiler.

Back in the beginning of February, I showed Wastrel running programs that use garbage collection, using an embedded copy of the Whippet collector, specialized to the types present in the Wasm program. But, the two synthetic GC-using programs I tested on were just ported microbenchmarks, and didn’t reflect the output of any real toolchain.

In this cycle I worked on compiling the output from the Hoot Scheme-to-Wasm compiler. There were some interesting challenges!

bignums

When I originally wrote the Hoot compiler, it targeted the browser, which already has a bignum implementation in the form of BigInt, which I worked on back in the day. Hoot-generated Wasm files use host bigints via externref (though wrapped in structs to allow for hashing and identity).

In Wastrel, then, I implemented the imports that implement bignum operations: addition, multiplication, and so on. I did so using mini-gmp, a stripped-down implementation of the workhorse GNU multi-precision library. At some point if bignums become important, this gives me the option to link to the full GMP instead.

Bignums were the first managed data type in Wastrel that wasn’t defined as part of the Wasm module itself, instead hiding behind externref, so I had to add a facility to allocate type codes to these “host” data types. More types will come in time: weak maps, ephemerons, and so on.

I think bignums would be a great proposal for the Wasm standard, similar to stringref ideally (sniff!), possibly in an attenuated form.

exception handling

Hoot used to emit a pre-standardization form of exception handling, and hadn’t gotten around to updating to the newer version that was standardized last July. I updated Hoot to emit the newer kind of exceptions, as it was easier to implement them in Wastrel that way.

Some of the problems Chris Fallin contended with in Wasmtime don’t apply in the Wastrel case: since the set of instances is known at compile-time, we can statically allocate type codes for exception tags. Also, I didn’t really have to do the back-end: I can just use setjmp and longjmp.

This whole paragraph was meant to be a bit of an aside in which I briefly mentioned why just using setjmp was fine. Indeed, because Wastrel never re-uses a temporary, relying entirely on GCC to “re-use” the register / stack slot on our behalf, I had thought that I didn’t need to worry about the “volatile problem”. From the C99 specification:

[...] values of objects of automatic storage duration that are local to the function containing the invocation of the corresponding setjmp macro that do not have volatile-qualified type and have been changed between the setjmp invocation and longjmp call are indeterminate.

My thought was, though I might set a value between setjmp and longjmp, that would only be the case for values whose lifetime did not reach the longjmp (i.e., whose last possible use was before the jump). Wastrel didn’t introduce any such cases, so I was good.

However, I forgot about local.set: mutations of locals (ahem, objects of automatic storage duration) in the source Wasm file could run afoul of this rule. So, because of writing this blog post, I went back and did an analysis pass on each function to determine the set of locals which are mutated inside the body of a try_table. Thank you, rubber duck readers!

bugs

Oh my goodness there were many bugs. Lacunae, if we are being generous; things not implemented quite right, which resulted in errors either when generating C or when compiling the C. The type-preserving translation strategy does seem to have borne fruit, in that I have spent very little time in GDB: once things compile, they work.

coevolution

Sometimes Hoot would use a browser facility where it was convenient, but for which in a better world we would just do our own thing. Such was the case for the number->string operation on floating-point numbers: we did something awful but expedient.

I didn’t have this facility in Wastrel, so instead we moved to do float-to-string conversions in Scheme. This turns out to have been a good test for bignums too; the algorithm we use is a bit dated and relies on bignums to do its thing. The move to Scheme also allows for printing floating-point numbers in other radices.

There are a few more Hoot patches that were inspired by Wastrel, about which more later; it has been good for both to work on the two at the same time.

tail calls

My plan for Wasm’s return_call and friends was to use the new musttail annotation for calls, which has been in clang for a while and was recently added to GCC. I was careful to limit the number of function parameters such that no call should require stack allocation, and therefore a compiler should have no reason to reject any particular tail call.

However, there were bugs. Funny ones, at first: attributes applying to a preceding label instead of the following call, or the need to insert if (1) before the tail call. More dire ones, in which tail callers inlined into their callees would cause the tail calls to fail, worked around with judicious application of noinline. Thanks to GCC’s Andrew Pinski for help debugging these and other issues; with GCC things are fine now.

I did have to change the code I emitted to return “top types only”: if you have a function returning type T, you can tail-call a function returning U if U is a subtype of T, but there is no nice way to encode this into the C type system. Instead, we return the top type of T (or U, it’s the same), e.g. anyref, and insert downcasts at call sites to recover the precise types. Not so nice, but it’s what we got.

Trying tail calls on clang, I ran into a funny restriction: clang not only requires that return types match, but requires that tail caller and tail callee have the same parameters as well. I can see why they did this (it requires no stack shuffling and thus such a tail call is always possible, even with 500 arguments), but it’s not the design point that I need. Fortunately there are discussions about moving to a different constraint.

scale

I spent way more time than I had planned on improving the speed of Wastrel itself. My initial idea was to just emit one big C file, and that would provide the maximum possibility for GCC to just go and do its thing: it can see everything, everything is static, there are loads of always_inline helpers that should compile away to single instructions, that sort of thing. But, this doesn’t scale, in a few ways.

In the first obvious way, consider whitequark’s llvm.wasm. This is all of LLVM in one 70 megabyte Wasm file. Wastrel made a huuuuuuge C file, then GCC chugged on it forever; 80 minutes at -O1, and I wasn’t aiming for -O1.

I realized that in many ways, GCC wasn’t designed to be a compiler target. The shape of code that one might emit from a Wasm-to-C compiler like Wastrel is different from that that one would write by hand. I even ran into a segfault compiling with -Wall, because GCC accidentally recursed instead of iterated in the -Winfinite-recursion pass.

So, I dealt with this in a few ways. After many hours spent pleading and bargaining with different -O options, I bit the bullet and made Wastrel emit multiple C files. It will compute a DAG forest of all the functions in a module, where edges are direct calls, and go through that forest, greedily consuming (and possibly splitting) subtrees until we have “enough” code to split out a partition, as measured by number of Wasm instructions. They say that -flto makes this a fine approach, but one never knows when a translation unit boundary will turn out to be important. I compute needed symbol visibilities as much as I can so as to declare functions that don’t escape their compilation unit as static; who knows if this is of value. Anyway, this partitioning introduced no performance regression in my limited tests so far, and compiles are much much much faster.

scale, bis

A brief observation: Wastrel used to emit indented code, because it could, and what does it matter, anyway. However, consider Wasm’s br_table: it takes an array of n labels and an integer operand, and will branch to the nth label, or the last if the operand is out of range. To set up a label in Wasm, you make a block, of which there are a handful of kinds; the label is visible in the block, and for n labels, the br_table will be the most nested expression in the n nested blocks.

Now consider that block indentation is proportional to n: the leading whitespace alone totals roughly 1 + 2 + … + n = n(n+1)/2 characters. This means the file size of an indented C file is quadratic in the number of branch targets of the br_table.

Yes, this actually bit me; there are br_table instances with tens of thousands of targets. No, wastrel does not indent any more.

scale, ter

Right now, the long pole in Wastrel is the compile-to-C phase; the C-to-native phase parallelises very well and is less of an issue. So, one might think: OK, you have partitioned the functions in this Wasm module into a number of files, why not emit the files in parallel?

I gave this a go. It did not speed up C generation. From my cursory investigations, I think this is because the bottleneck is garbage collection in Wastrel itself; Wastrel is written in Guile, and Guile still uses the Boehm-Demers-Weiser collector, which does not parallelize well for multiple mutators. It’s terrible but I ripped out parallelization and things are fine. Someone on Mastodon suggested fork; they’re not wrong, but also not Right either. I’ll just keep this as a nice test case for the Guile-on-Whippet branch I want to poke later this year.

scale, quator

Finally, I had another realization: GCC was having trouble compiling the C that Wastrel emitted, because Hoot had emitted bad WebAssembly. Not bad as in “invalid”; rather, “not good”.

There were two cases in which Hoot emitted ginormous (technical term) functions. One, for an odd debugging feature: Hoot does a CPS transform on its code, and allocates return continuations on a stack. This is a gnarly technique but it gets us delimited continuations and all that goodness even before stack switching has landed, so it’s here for now. It also gives us a reified return stack of funcref values, which lets us print Scheme-level backtraces.

Or it would, if we could associate data with a funcref. Unfortunately func is not a subtype of eq, so we can’t. Unless... we pass the funcref out to the embedder (e.g. JavaScript), and the embedder checks the funcref for equality (e.g. using ===); then we can map a funcref to an index, and use that index to map to other properties.

How to pass that funcref/index map to the host? When I initially wrote Hoot, I didn’t want to just, you know, put the funcrefs of interest into a table and let the index of a function’s slot be the value in the key-value mapping; that would be useless memory usage. Instead, we emitted functions that took an integer, and which would return a funcref. Yes, these used br_table, and yes, there could be tens of thousands of cases, depending on what you were compiling.

Then to map the integer index to, say, a function name, likewise I didn’t want a table; that would force eager allocation of all strings. Instead I emitted a function with a br_table whose branches would return string.const values.

Except, of course, stringref didn’t become a thing, and so instead we would end up lowering to allocate string constants as globals.

Except, of course, Wasm’s idea of what a “constant” is is quite restricted, so we have a pass that moves non-constant global initializers to the “start” function. This results in an enormous start function. The straightforward solution was to partition global initializations into separate functions, called by the start function.

For the funcref debugging, the solution was more intricate: firstly, we represent the funcref-to-index mapping just as a table. It’s fine. Then for the side table mapping indices to function names and sources, we emit DWARF, and attach a special attribute to each “introspectable” function. In this way, reading the DWARF sequentially, we reconstruct a mapping from index to DWARF entry, and thus to a byte range in the Wasm code section, and thus to source information in the .debug_line section. It sounds gnarly but Guile already used DWARF as its own debugging representation; switching to emit it in Hoot was not a huge deal, and as we only need to consume the DWARF that we emit, we only needed some 400 lines of JS for the web/node run-time support code.

This switch to data instead of code removed the last really long pole from the GCC part of Wastrel’s pipeline. What’s more, Wastrel can now implement the code_name and code_source imports for Hoot programs ahead of time: it can parse the DWARF at compile-time, and generate functions that look up functions by address in a sorted array to return their names and source locations. As of today, this works!
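The address-to-function lookup described here is essentially a binary search over entries sorted by their start offset in the code section. A hedged sketch, with invented names and a made-up entry shape, of how such a generated lookup might behave:

```javascript
// Hypothetical sketch of a code_name / code_source style lookup:
// binary search over an array of { start, end, name } records sorted
// by `start`, where start/end are byte offsets in the code section.
function lookupByAddress(entries, addr) {
  let lo = 0, hi = entries.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    const e = entries[mid];
    if (addr < e.start) hi = mid - 1;
    else if (addr >= e.end) lo = mid + 1;
    else return e;  // start <= addr < end: found the enclosing function
  }
  return null;      // address falls in no known function
}
```

Because the table is generated at compile time and never mutated, a flat sorted array is both compact and O(log n) to query.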

fin

There are still a few things that Hoot wants from a host that Wastrel has stubbed out: weak refs and so on. I’ll get to this soon; my goal is a proper Scheme REPL. Today’s note is a waypoint on the journey. Until next time, happy hacking!

by Andy Wingo at Tuesday, March 31, 2026

Monday, March 30, 2026

Scheme Requests for Implementation

SRFI 268: Multidimensional Array Literals

SRFI 268 is now in draft status.

This is a specification of a lexical syntax for multi-dimensional arrays. Textually it is an alteration of SRFI 163, which is an extension of the Common Lisp array reader syntax to handle non-zero lower bounds and optional uniform element types (compatibly with SRFI 4 and SRFI 160). It can be used in conjunction with SRFI 25, SRFI 122, or SRFI 231. There are recommendations for output formatting, read-array and write-array procedures, and a suggested format-array procedure.

by Per Bothner (SRFI 163), Peter McGoron (design), John Cowan (editor and steward), and Wolfgang Corcoran-Mathe (implementation) at Monday, March 30, 2026

Wednesday, March 25, 2026

Idiomdrottning

My Butlerian hypocrisy

In the Butlerian Jihad (from Dune, but popularized by many smolnet posters like Alex Schroeder) we rightly hate bots and scrapers, but I’m in a bit of a glass house around that, since I’ve made a few scrapers for my own personal use as a way to get RSS/Atom feeds out of sites that don’t have feeds. I love scraping and mashing.♥︎ The JS-laden SPA era was a nightmare for me. I hate browsers and server-side styling. I love getting texts from URLs.

Follow-ups

An Inhabitant in Carcosa responds:

Bad in intent: it is intended to do something unethical, whether that be LLM training, denial of service, privatizing the commons, or immanentizing the eschaton. This is pretty subjective in an “I know it when I see it” kind of way. Scraping for a search index, scraping for a full-text RSS feed, and scraping for LLM training are all the same act as far as the server can tell, but only the last one is /evil/.

Having a full-text RSS feed as a way to not have to deal with ads or paywalls—even when the reasons I can’t otherwise handle ads and paywalls are 100% a11y issues—goes against the intent of the server owners.

And I’m not so sure LLMs are evil.

It may ignore robots.txt, it may lie about being another user-agent

Have done both those too!

Either bad intent or bad implementation is enough; a bot doesn’t need both to be bad.

That’s not exactly my philosophy.

I love the open readable simple web where each document has one URL and you can read it on your own terms. I can’t deal with the junk web.

by Idiomdrottning at Wednesday, March 25, 2026