Planet Scheme

Tuesday, March 31, 2026

Andy Wingo

wastrelly wabbits

Good day! Today (tonight), some notes on the last couple months of Wastrel, my ahead-of-time WebAssembly compiler.

Back in the beginning of February, I showed Wastrel running programs that use garbage collection, using an embedded copy of the Whippet collector, specialized to the types present in the Wasm program. But, the two synthetic GC-using programs I tested on were just ported microbenchmarks, and didn’t reflect the output of any real toolchain.

In this cycle I worked on compiling the output from the Hoot Scheme-to-Wasm compiler. There were some interesting challenges!

bignums

When I originally wrote the Hoot compiler, it targeted the browser, which already has a bignum implementation in the form of BigInt, which I worked on back in the day. Hoot-generated Wasm files use host bigints via externref (though wrapped in structs to allow for hashing and identity).

In Wastrel, then, I implemented the imports that implement bignum operations: addition, multiplication, and so on. I did so using mini-gmp, a stripped-down implementation of the workhorse GNU multi-precision library. At some point if bignums become important, this gives me the option to link to the full GMP instead.

Bignums were the first managed data type in Wastrel that wasn’t defined as part of the Wasm module itself, instead hiding behind externref, so I had to add a facility to allocate type codes to these “host” data types. More types will come in time: weak maps, ephemerons, and so on.

I think bignums would be a great proposal for the Wasm standard, similar to stringref ideally (sniff!), possibly in an attenuated form.

exception handling

Hoot used to emit a pre-standardization form of exception handling, and hadn’t gotten around to updating to the newer version that was standardized last July. I updated Hoot to emit the newer kind of exceptions, as it was easier to implement them in Wastrel that way.

Some of the problems Chris Fallin contended with in Wasmtime don’t apply in the Wastrel case: since the set of instances is known at compile-time, we can statically allocate type codes for exception tags. Also, I didn’t really have to do the back-end: I can just use setjmp and longjmp.

This whole paragraph was meant to be a bit of an aside in which I briefly mentioned why just using setjmp was fine. Indeed, because Wastrel never re-uses a temporary, relying entirely on GCC to “re-use” the register / stack slot on our behalf, I had thought that I didn’t need to worry about the “volatile problem”. From the C99 specification:

[...] values of objects of automatic storage duration that are local to the function containing the invocation of the corresponding setjmp macro that do not have volatile-qualified type and have been changed between the setjmp invocation and longjmp call are indeterminate.

My thought was, though I might set a value between setjmp and longjmp, that would only be the case for values whose lifetime did not reach the longjmp (i.e., whose last possible use was before the jump). Wastrel didn’t introduce any such cases, so I was good.

However, I forgot about local.set: mutations of locals (ahem, objects of automatic storage duration) in the source Wasm file could run afoul of this rule. So, because of writing this blog post, I went back and did an analysis pass on each function to determine the set of locals which are mutated inside the body of a try_table. Thank you, rubber duck readers!

bugs

Oh my goodness there were many bugs. Lacunae, if we are being generous; things not implemented quite right, which resulted in errors either when generating C or when compiling the C. The type-preserving translation strategy does seem to have borne fruit, in that I have spent very little time in GDB: once things compile, they work.

coevolution

Sometimes Hoot would use a browser facility where it was convenient, but for which in a better world we would just do our own thing. Such was the case for the number->string operation on floating-point numbers: we did something awful but expedient.

I didn’t have this facility in Wastrel, so instead we moved to do float-to-string conversions in Scheme. This turns out to have been a good test for bignums too; the algorithm we use is a bit dated and relies on bignums to do its thing. The move to Scheme also allows for printing floating-point numbers in other radices.

There are a few more Hoot patches that were inspired by Wastrel, about which more later; it has been good for both to work on the two at the same time.

tail calls

My plan for Wasm’s return_call and friends was to use the new musttail annotation for calls, which has been in clang for a while and was recently added to GCC. I was careful to limit the number of function parameters such that no call should require stack allocation, and therefore a compiler should have no reason to reject any particular tail call.

However, there were bugs. Funny ones, at first: attributes applying to a preceding label instead of the following call, or the need to insert if (1) before the tail call. More dire ones, in which tail callers inlined into their callees would cause the tail calls to fail, worked around with judicious application of noinline. Thanks to GCC’s Andrew Pinski for help debugging these and other issues; with GCC things are fine now.

I did have to change the code I emitted to return “top types only”: if you have a function returning type T, you can tail-call a function returning U if U is a subtype of T, but there is no nice way to encode this into the C type system. Instead, we return the top type of T (or U, it’s the same), e.g. anyref, and insert downcasts at call sites to recover the precise types. Not so nice, but it’s what we got.

Trying tail calls on clang, I ran into a funny restriction: clang not only requires that return types match, but requires that tail caller and tail callee have the same parameters as well. I can see why they did this (it requires no stack shuffling and thus such a tail call is always possible, even with 500 arguments), but it’s not the design point that I need. Fortunately there are discussions about moving to a different constraint.

scale

I spent way more time than I had planned on improving the speed of Wastrel itself. My initial idea was to just emit one big C file, and that would provide the maximum possibility for GCC to just go and do its thing: it can see everything, everything is static, there are loads of always_inline helpers that should compile away to single instructions, that sort of thing. But, this doesn’t scale, in a few ways.

In the first obvious way, consider whitequark’s llvm.wasm. This is all of LLVM in one 70 megabyte Wasm file. Wastrel made a huuuuuuge C file, then GCC chugged on it forever; 80 minutes at -O1, and I wasn’t aiming for -O1.

I realized that in many ways, GCC wasn’t designed to be a compiler target. The shape of code that one might emit from a Wasm-to-C compiler like Wastrel is different from what one would write by hand. I even ran into a segfault compiling with -Wall, because GCC accidentally recursed instead of iterated in the -Winfinite-recursion pass.

So, I dealt with this in a few ways. After many hours spent pleading and bargaining with different -O options, I bit the bullet and made Wastrel emit multiple C files. It will compute a DAG forest of all the functions in a module, where edges are direct calls, and go through that forest, greedily consuming (and possibly splitting) subtrees until we have “enough” code to split out a partition, as measured by number of Wasm instructions. They say that -flto makes this a fine approach, but one never knows when a translation unit boundary will turn out to be important. I compute needed symbol visibilities as much as I can so as to declare functions that don’t escape their compilation unit as static; who knows if this is of value. Anyway, this partitioning introduced no performance regression in my limited tests so far, and compiles are much much much faster.

scale, bis

A brief observation: Wastrel used to emit indented code, because it could, and what does it matter, anyway. However, consider Wasm’s br_table: it takes an array of n labels and an integer operand, and will branch to the nth label, or the last if the operand is out of range. To set up a label in Wasm, you make a block, of which there are a handful of kinds; the label is visible in the block, and for n labels, the br_table will be the most nested expression in the n nested blocks.

Now consider that block indentation is proportional to n. This means that the file size of an indented C file is quadratic in the number of branch targets of the br_table.

Yes, this actually bit me; there are br_table instances with tens of thousands of targets. No, Wastrel does not indent any more.

scale, ter

Right now, the long pole in Wastrel is the compile-to-C phase; the C-to-native phase parallelises very well and is less of an issue. So, one might think: OK, you have partitioned the functions in this Wasm module into a number of files, why not emit the files in parallel?

I gave this a go. It did not speed up C generation. From my cursory investigations, I think this is because the bottleneck is garbage collection in Wastrel itself; Wastrel is written in Guile, and Guile still uses the Boehm-Demers-Weiser collector, which does not parallelize well for multiple mutators. It’s terrible but I ripped out parallelization and things are fine. Someone on Mastodon suggested fork; they’re not wrong, but also not Right either. I’ll just keep this as a nice test case for the Guile-on-Whippet branch I want to poke later this year.

scale, quater

Finally, I had another realization: GCC was having trouble compiling the C that Wastrel emitted, because Hoot had emitted bad WebAssembly. Not bad as in “invalid”; rather, “not good”.

There were two cases in which Hoot emitted ginormous (technical term) functions. One, for an odd debugging feature: Hoot does a CPS transform on its code, and allocates return continuations on a stack. This is a gnarly technique but it gets us delimited continuations and all that goodness even before stack switching has landed, so it’s here for now. It also gives us a reified return stack of funcref values, which lets us print Scheme-level backtraces.

Or it would, if we could associate data with a funcref. Unfortunately func is not a subtype of eq, so we can’t. Unless... we pass the funcref out to the embedder (e.g. JavaScript), and the embedder checks the funcref for equality (e.g. using ===); then we can map a funcref to an index, and use that index to map to other properties.

How to pass that funcref/index map to the host? When I initially wrote Hoot, I didn’t want to just, you know, put the funcrefs of interest into a table and let the index of a function’s slot be the value in the key-value mapping; that would be useless memory usage. Instead, we emitted functions that took an integer, and which would return a funcref. Yes, these used br_table, and yes, there could be tens of thousands of cases, depending on what you were compiling.

Then to map the integer index to, say, a function name, likewise I didn’t want a table; that would force eager allocation of all strings. Instead I emitted a function with a br_table whose branches would return string.const values.

Except, of course, stringref didn’t become a thing, and so instead we would end up lowering to allocate string constants as globals.

Except, of course, Wasm’s idea of what a “constant” is is quite restricted, so we have a pass that moves non-constant global initializers to the “start” function. This results in an enormous start function. The straightforward solution was to partition global initializations into separate functions, called by the start function.

For the funcref debugging, the solution was more intricate: firstly, we represent the funcref-to-index mapping just as a table. It’s fine. Then for the side table mapping indices to function names and sources, we emit DWARF, and attach a special attribute to each “introspectable” function. In this way, reading the DWARF sequentially, we reconstruct a mapping from index to DWARF entry, and thus to a byte range in the Wasm code section, and thus to source information in the .debug_line section. It sounds gnarly but Guile already used DWARF as its own debugging representation; switching to emit it in Hoot was not a huge deal, and as we only need to consume the DWARF that we emit, we only needed some 400 lines of JS for the web/node run-time support code.

This switch to data instead of code removed the last really long pole from the GCC part of Wastrel’s pipeline. What’s more, Wastrel can now implement the code_name and code_source imports for Hoot programs ahead of time: it can parse the DWARF at compile-time, and generate functions that look up functions by address in a sorted array to return their names and source locations. As of today, this works!

fin

There are still a few things that Hoot wants from a host that Wastrel has stubbed out: weak refs and so on. I’ll get to this soon; my goal is a proper Scheme REPL. Today’s note is a waypoint on the journey. Until next time, happy hacking!

by Andy Wingo at Tuesday, March 31, 2026

Monday, March 30, 2026

Scheme Requests for Implementation

SRFI 268: Multidimensional Array Literals

SRFI 268 is now in draft status.

This is a specification of a lexical syntax for multi-dimensional arrays. It is a modest alteration of SRFI 163, which is an extension of the Common Lisp array reader syntax to handle non-zero lower bounds and optional uniform element types (compatibly with SRFI 4 and SRFI 160). It can be used in conjunction with SRFI 25, SRFI 122, or SRFI 213. There are recommendations for output formatting and a suggested format-array procedure.

by Per Bothner (SRFI 163), Peter McGoron (design), and John Cowan (editor and steward) at Monday, March 30, 2026

Wednesday, March 25, 2026

Idiomdrottning

My Butlerian hypocrisy

In the Butlerian Jihad (from Dune but popularized by many smolnet posters like Alex Schroeder) we rightly hate bots and scrapers but I’m in a bit of a glass house around that, since I’ve made a few scrapers for my own personal use as a way to get RSS/Atom feeds out of sites that don’t have feeds. I love scraping and mashing.♥︎ The JS-laden SPA era was a nightmare for me. I hate browsers and server-side styling. I love getting texts from URLs.

Follow-ups

An Inhabitant in Carcosa responds:

Bad in intent: it is intended to do something unethical, whether that be LLM training, denial of service, privatizing the commons, or immanentizing the eschaton. This is pretty subjective in an “I know it when I see it” kind of way. Scraping for a search index, scraping for a full-text RSS feed, and scraping for LLM training are all the same act as far as the server can tell, but only the last one is /evil/.

Having a full-text RSS feed as a way to not have to deal with ads or paywalls—even when the reasons to not be able to otherwise handle ads and paywalls are 100% a11y issues—goes against the intent of the server owners.

And I’m not so sure LLMs are evil.

It may ignore robots.txt, it may lie about being another user-agent

Have done both those too!

Either bad intent or bad implementation is enough; a bot doesn’t need both to be bad.

That’s not exactly my philosophy.

I love the open readable simple web where each document has one URL and you can read it on your own terms. I can’t deal with the junk web.

by Idiomdrottning (sandra.snan@idiomdrottning.org) at Wednesday, March 25, 2026

Friday, March 13, 2026

crumbles.blog

HOWTO: Unlock LUKS encrypted disks over SSH on a Raspberry Pi 4 running NixOS

Follow the instructions on the NixOS wiki. For a Raspberry Pi 4 connected over Ethernet, you need:

boot.initrd.availableKernelModules = [
  "xhci_pci"
  "usbhid"
  "uas"
  "pcie-brcmstb"
  "reset-raspberrypi"
  "genet"
  "broadcom"
  "bcm_phy_lib"
];

Also note: use cipher xchacha20,aes-adiantum-plain64 on Raspberry Pi 4 due to the lack of AES hardware instructions. The default aes-xts-plain64 is slow without these instructions; xchacha20,aes-adiantum-plain64 is over twice as fast. (Raspberry Pi 5 has AES instructions, but doesn’t support NixOS very well yet.) If you forget to set the cipher when creating the encrypted device, cryptsetup reencrypt can help, but it may take multiple days once you have any real amount of data on the disk at all.

Friday, March 13, 2026

Wednesday, February 25, 2026

spritely.institute

Hoot 0.8.0 released!

We are excited to announce the release of Hoot 0.8.0! Hoot is a Scheme to WebAssembly compiler backend for Guile, as well as a general purpose WebAssembly toolchain. In other words, Scheme in the browser!

This release contains new features and bug fixes since the 0.7.0 release back in October.

New features

  • New (hoot repl) module. At long last, there is now a built-in read-eval-print loop implementation! Previous releases added a macro expander, a Scheme interpreter, and a runtime module system, but now it’s possible to do live hacking from a Hoot program inside a WebAssembly runtime!

    • To use the REPL, compile your Wasm binary with the necessary debug flag during development: guild compile-wasm -g1. This will include the runtime module system in the resulting binary. Expect compilation time and binary size to increase significantly. The trade-off is that a live hacking workflow will make recompilations fewer and farther between.

  • While not shipping in Hoot directly, initial support for using the Hoot REPL from Emacs has been added in the new geiser-hoot extension. We have submitted geiser-hoot for inclusion in MELPA and Guix so it will be easy to install in the very near future.

  • Enhanced (hoot web-server) module. To support the use of REPLs running within a web browser tab, the most common development use case, the web server doubles as a REPL server, proxying TCP traffic from REPL clients (more about that below) over a WebSocket to the connected browser tab.

    • These enhancements introduce two new, optional dependencies to Hoot: Fibers and guile-websocket. If either of these dependencies is not present at build time, the (hoot web-server) module will not be built.

    • The web server can now be extended with a user-supplied request router. An example of this can be found in our hoot-slides repository.

  • New (hoot web-repl) module. This module can be imported and compiled into the Wasm binary so that it can act as a REPL server. This is complicated by the fact that a browser client cannot act as a server; it is strictly a client. Instead, it connects to the aforementioned (hoot web-server) which acts as a proxy for all connected REPL clients.

  • New hoot command-line tool. This command will be used as a place to collect handy Hoot development tools. So far, there are two subcommands:

    • hoot repl: Open a REPL running in Node. Useful for quickly trying out basic Scheme expressions in Hoot without having to compile a standalone WebAssembly program.

    • hoot server: Conveniently launch the development web server in (hoot web-server).

  • New (web request) and (web response) modules that export a sliver of the API defined in Guile’s modules of the same names.

  • New (web socket) module that provides an input/output interface to WebSocket client connections. Mimics the module of the same name in guile-websocket.

  • Added customizable module loader interface via new current-module-loader parameter. Two concrete loaders are provided: By default, modules are loaded from the file system by searching a load path. This is useful when running in a non-browser runtime such as NodeJS. When run-web-repl in (web repl) is used, connected REPLs are configured to use an HTTP-based loader. This loader makes HTTP requests to a special endpoint on the development web server to fetch source code.

    • Note that modules loaded at runtime are loaded from source and then interpreted. Unlike Guile, where modules are automatically compiled to bytecode, Hoot cannot compile individual modules to Wasm (which would require compiling the compiler to Wasm which is an interesting future possibility).

Community highlights

Check out this chiptune tracker made with Hoot by Vivianne Langdon!

Additionally, check out Wastrel, a Wasm GC to C compiler developed by Andy Wingo. Wastrel notably uses Hoot’s Wasm toolchain. A Wasm program compiled with Wastrel runs faster than the same program on NodeJS!

Documentation changes

  • Updated Installation chapter to mention new optional dependencies.

  • Added Modules and REPL sections to the Scheme reference chapter.

  • Added Development chapter.

  • Updated Status section to remove mention of missing R7RS features that are now supported.

  • Removed docs for the obsolete --emit-names flag.

  • Added documentation for the -g flag to guild compile-wasm.

  • Fixed example in the JavaScript reflection section that was using the obsolete load_main signature.

Toolchain changes

  • Split Wasm validation out of (wasm vm) and into new (wasm validation) module.

  • Keep data computed within the validation pass in <validated-wasm> records so that data can be used during instantiation rather than redundantly recomputing it.

  • Added explicit support for representing a “canonicalization”: a world in which structurally equal types are equal.

  • (wasm vm) types <wasm-func>, <wasm-struct>, <wasm-array> now refer to their types by index into a canonicalized set.

  • Added untagged <wasm-array> backing stores to (wasm vm) for all simple scalar numeric types, including i8 and i16 packed types.

  • Modified (wasm vm) to look up named heap type references in the instance’s canonicalization.

  • Added bytevector->wasm-array, wasm-array->bytevector to (wasm vm).

  • Added support for some of the “none” bottom types.

  • Packed array data is now stored signed, wrapped from i32 when set, and only unwrapped to unsigned in get_u functions.

  • Added string.from_code_point and string.concat lowerings in (wasm lower-stringrefs).

  • Renamed outdated extern.internalize and extern.externalize to their current names, any.convert_extern and extern.convert_any.

  • Added new has-wasm-header? procedure to (wasm parse).

  • Parse core reference types to <ref-type> records rather than symbol abbreviations in (wasm parse).

Miscellaneous changes

  • Modified schedule-task in (fibers scheduler) (which is implemented using inline Wasm on the target) to be a no-op when called at expansion time on the host, i.e. when used at the module top-level or from a procedural macro.

  • Added support for vector and call-with-values primitives to (hoot primitives) module so they can be used in interpreted code.

  • truncate is now exported from (guile).

  • Allow exports to clobber each other in module-declare! to support live hacking of modules where define-module forms are often re-evaluated many times.

  • Extracted JS Uint8Array bindings from internals of (fibers streams) to new (hoot typed-arrays) module.

  • Implement subset of Guile’s procedural module API for hackable programs (i.e. programs that are built with runtime module support).

  • Added (hoot config) target-side module for accessing certain build-time constants (currently just the Hoot version string).

  • Extracted (hoot library) module from (hoot library-group) so that the library parser can be used on the target for live hacking purposes.

  • Added define-module implementation to (guile) that simply throws an error if used during compilation. A separate implementation is installed for use by the interpreter in hackable programs.

  • Added #:replace? argument to module-export! to allow replacement of exports for live hacking purposes.

  • Exported module-root from (hoot modules).

  • Added module-imported-modules procedure to (hoot modules).

  • Changed file I/O host functions to return null when a file cannot be opened, so that a Scheme exception is raised that can be handled by user code, rather than a host exception that cannot.

  • Extracted contents of (scheme file) to new (hoot file) module for use in internal code such as the implementation of the file system module loader in (hoot hackable).

  • Moved implementation of string-join, string-concatenate, string-prefix?, and string-prefix-ci? from (guile) to (hoot strings).

  • Moved case-insensitive string procedures from (scheme char) to (hoot strings).

  • Added string-drop to (hoot strings).

  • Added every and fold-right procedures to (hoot lists).

  • Moved implementation of and-map and or-map from (guile) to (hoot lists).

  • Added symbol-append to (hoot symbols).

  • Added less verbose custom printer for <module> record type.

  • Switched from positional to keyword arguments for make-soft-port in (hoot ports).

  • Added list-index to (guile).

Bug fixes

  • Fixed format-exception not writing all of its output to the current error port.

  • Fixed eof-object export in (ice-9 binary-ports).

  • Fixed off-by-one error for procedures with rest args in (hoot eval).

  • Fixed min/max to only accept real numbers, handle NaNs, and normalize exact zeroes.

  • Fixed continuation composition leaving an unwind continuation on the stack.

  • Fixed prompt unwinding in certain join continuation situations.

  • Fixed compilation of unwind primcalls at join points.

  • Fixed runtime module system ignoring replacement bindings in Guile modules.

Browser compatibility

  • Compatible with Safari 26 or later.

  • Compatible with Firefox 121 or later.

  • Compatible with Chrome 119 or later.

Get Hoot

Hoot is available in GNU Guix:

$ guix pull
$ guix install guile guile-hoot

Also, Hoot is now available in Debian, though it will take a while for this release to make it there.

Otherwise, Hoot can be built from source via our release tarball. See the Hoot homepage for a download link and GPG signature.

Documentation for Hoot 0.8.0, including build instructions, can be found here.

Get in touch

For bug reports, pull requests, or just to follow along with development, check out the Hoot project on Codeberg.

If you build something cool with Hoot, let us know on our community forum!

Thanks to our supporters

Your support makes our work possible! If you like what we do, please consider becoming a Spritely supporter today!

Diamond tier

  • Aeva Palecek
  • David Anderson
  • Holmes Wilson
  • Lassi Kiuru

Gold tier

  • Alex Sassmannshausen
  • Juan Lizarraga Cubillos

Silver tier

  • Austin Robinson
  • Brit Butler
  • Charlie McMackin
  • Dan Connolly
  • Danny OBrien
  • Deb Nicholson
  • Eric Bavier
  • Eric Schultz
  • Evangelo Stavro Prodromou
  • Evgeni Ku
  • Glenn Thompson
  • James Luke
  • Jonathan Frederickson
  • Jonathan Wright
  • Joshua Simmons
  • Justin Sheehy
  • Matt Panhans
  • Michel Lind
  • Mike Ledoux
  • Nathan TeBlunthuis
  • Nia Bickford
  • Noah Beasley
  • Steve Sprang
  • Travis Smith
  • Travis Vachon

Bronze tier

  • Alan Zimmerman
  • Aria Stewart
  • BJ Bolender
  • Ben Hamill
  • Benjamin Grimm-Lebsanft
  • Brooke Vibber
  • Brooklyn Zelenka
  • Carl A
  • Crazypedia No
  • François Joulaud
  • Gerome Bochmann
  • Grant Gould
  • Gregory Buhtz
  • Ivan Sagalaev
  • James Smith
  • Jason Wodicka
  • Jeff Forcier
  • Marty McGuire
  • Mason DeVries
  • Michael Orbinpost
  • Neil Brudnak
  • Nelson Pavlosky
  • Philipp Nassua
  • Robin Heggelund Hansen
  • Rodion Goritskov
  • Ron Welch
  • Stefan Magdalinski
  • Stephen Herrick
  • Steven De Herdt
  • Tamara Schmitz
  • Thomas Talbot
  • William Murphy
  • a b
  • r g
  • terra tauri

Until next time, happy hooting! 🦉

by Dave Thompson (contact@spritely.institute) at Wednesday, February 25, 2026

Tuesday, February 24, 2026

The Racket Blog

Racket v9.1

posted by Stephen De Gabrielle and John Clements


We are pleased to announce Racket v9.1 is now available from https://download.racket-lang.org/.

As of this release:

  • Documentation organization and navigation can be specialized by language family, to allow users to interact with documentation in a way that is tailored to that language family. This is currently used by Rhombus.
  • The for form and its variants accept an #:on-length-mismatch specifier. 3.18 Iterations and Comprehensions: for, for/list, …
  • DrRacket improves the GUI for choosing color schemes.
  • DrRacket has curved syntax arrows. The degree of curvature indicates the relative left- or right-displacement of the arrow’s target.
  • DrRacket’s “Insert Large Letters” uses characters that match the comment syntax of the buffer’s language, making it useful (and fun!) in Rhombus.
  • The exn-classify-errno procedure maps network and filesystem error numbers on various platforms to posix-standard symbols, to enable more portable code. 10.2 Exceptions
  • The behavior of Racket BC on certain character operations (most notably eq?) is changed to match that of Racket CS, with a small performance penalty for these operations for BC programs. 19 Performance 1.5 Implementations
  • The make-struct-type procedure can inherit the current inspector using a 'current flag. This is the default behavior, but there are situations in which it’s not possible to refer to the current inspector. 5.2 Creating Structure Types
  • Bundle configurations can better control the conventions for locating shared object files with the --enable-sofind=<conv> flags.
  • The system-type function can report on platform and shared-object-library conventions with new flags. 15.8 Environment and Runtime Information
  • The openssl/legacy library makes it possible to access OpenSSL’s built-in “legacy” provider, to get access to insecure and outdated algorithms. OpenSSL: Secure Communication
  • Typed Racket improves expected type propagation for keyword argument functions.
  • There are many other repairs and documentation improvements!

Don’t forget to run raco pkg migrate 9.0

Thank you

The following people contributed to this release:

Alexander Shopov, beast-hacker, Bob Burger, Brad Lucier, Cadence Ember, David Van Horn, evan, François-René Rideau, Gustavo Massaccesi, Jacqueline Firth, Jade Sailor, Jason Hemann, Jens Axel Søgaard, John Clements, Jonas Rinke, Matthew Flatt, Matthias Felleisen, Mike Sperber, Noah Ma, Pavel Panchekha, Rob Durst, Robby Findler, Ryan Culpepper, Sam Tobin-Hochstadt, Stephen De Gabrielle, and Wing Hei Chan.

Racket is a community developed open source project and we welcome new contributors. See racket/README.md to learn how you can be a part of this amazing project.

Feedback Welcome

Questions and discussion welcome at the Racket community on Discourse or Discord.

Please share

If you can, please help get the word out to users and platform-specific repo packagers.

Racket - the Language-Oriented Programming Language - version 9.1 is now available from https://download.racket-lang.org

See https://blog.racket-lang.org/2026/02/racket-v9-1.html for the release announcement and highlights.

by John Clements, Stephen De Gabrielle at Tuesday, February 24, 2026

Wednesday, February 18, 2026

Andy Wingo

two mechanisms for dynamic type checks

Today, a very quick note on dynamic instance type checks in virtual machines with single inheritance.

The problem is that given an object o whose type is t, you want to check if o actually is of some more specific type u. To my knowledge, there are two sensible ways to implement these type checks.

if the set of types is fixed: dfs numbering

Consider a set of types T := {t, u, ...} and a set of edges S := {<t|ε, u>, ...} indicating that t is the direct supertype of u, or ε if u is a top type. S should not contain cycles and is thus a directed acyclic graph rooted at ε.

First, compute a pre-order and post-order numbering for each t in the graph by doing a depth-first search over S from ε. Something like this:

def visit(t, counter):
    # Number t on the way in (pre-order), then number its subtree.
    t.pre_order = counter
    counter = counter + 1
    for u in S[t]:  # S[t]: the direct subtypes of t
        counter = visit(u, counter)
    # The post-order number is the counter after the whole subtree.
    t.post_order = counter
    return counter

Then at run-time, when making an object of type t, you arrange to store the type’s pre-order number (its tag) in the object itself. To test if the object is of type u, you extract the tag from the object and check whether (tag - u.pre_order) mod 2^n < u.post_order - u.pre_order, where n is the width of the tag in bits: a single unsigned comparison covers both bounds.

Two notes, probably obvious but anyway: one, you know the numbering for u at compile-time and so can embed those variables as immediates. Also, if the type has no subtypes, it can be a simple equality check.
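The numbering and the check can be sketched together in Python. This is a self-contained toy, not Wastrel's implementation; the Type class and the little hierarchy are made up for the example:

```python
class Type:
    def __init__(self, name):
        self.name = name
        self.subtypes = []          # direct subtypes (the edges in S)
        self.pre_order = self.post_order = None

def visit(t, counter=0):
    # DFS numbering as above: pre-order on the way in,
    # post-order after the whole subtree has been numbered.
    t.pre_order = counter
    counter += 1
    for u in t.subtypes:
        counter = visit(u, counter)
    t.post_order = counter
    return counter

def is_instance(tag, u, n=32):
    # One unsigned comparison: when tag < u.pre_order, the subtraction
    # wraps around mod 2^n to a huge value and the test fails.
    return (tag - u.pre_order) % (1 << n) < u.post_order - u.pre_order

# top <- animal <- {cat, dog}; top <- plant
top, animal, cat, dog, plant = map(Type, "top animal cat dog plant".split())
top.subtypes = [animal, plant]
animal.subtypes = [cat, dog]
visit(top)

assert is_instance(cat.pre_order, animal)       # a cat is an animal
assert not is_instance(plant.pre_order, animal) # a plant is not
assert is_instance(animal.pre_order, animal)    # every type passes its own test
```

Subtypes of u are exactly the types whose pre-order numbers fall in the half-open range [u.pre_order, u.post_order), which is why a single range check suffices.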

Note that this approach applies only if the set of types T is fixed. This is the case when statically compiling a WebAssembly module in a system that doesn’t allow modules to be instantiated at run-time, like Wastrel. Interestingly, it can also be the case in JIT compilers, when modeling types inside the optimizer.

if the set of types is unbounded: the display hack

If types may be added to a system at run-time, maintaining a sorted set of type tags may be too much to ask. In that case, the standard solution is something I learned of as the display hack, but whose name is apparently ungooglable. It is described in a 4-page technical note by Norman H. Cohen, from 1991: Type-Extension Type Tests Can Be Performed In Constant Time.

The basic idea is that each type t should have an associated sorted array of supertypes, starting with its top type and ending with t itself. Each t also has a depth, indicating the number of edges between it and its top type. A type u is then a subtype of t if u.depth >= t.depth and u's supertype array holds t at index t.depth.

There are some tricks one can do to optimize out the depth check, but it’s probably a wash given the check performs a memory access or two on the way. But the essence of the whole thing is in Cohen’s paper; go take a look!
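The display can be sketched in a few lines of Python. Again a toy illustration under my own field names (supers, depth), not any particular VM's object layout:

```python
class Type:
    def __init__(self, name, parent=None):
        self.name = name
        if parent is None:
            self.depth = 0
            self.supers = [self]            # top type: array is just [self]
        else:
            self.depth = parent.depth + 1
            # Copy the parent's display and append ourselves; entry i
            # is always the ancestor at depth i.
            self.supers = parent.supers + [self]

def is_subtype(u, t):
    # Constant time: one bounds check plus one indexed load.
    return u.depth >= t.depth and u.supers[t.depth] is t

top = Type("top")
animal = Type("animal", top)
cat = Type("cat", animal)
plant = Type("plant", top)

assert is_subtype(cat, animal)
assert not is_subtype(plant, animal)
assert is_subtype(animal, animal)
```

Note that adding a new type only touches that type's own array, which is what makes the scheme work when types arrive at run-time.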

Jan Vitek notes in a followup paper (Efficient Type Inclusion Tests) that Christian Queinnec discovered the technique around the same time. Vitek also mentions the DFS technique, but as prior art, apparently already deployed in DEC Modula-3 systems. The term “display” was bouncing around in the 80s to describe some uses of arrays; I learned it from Dybvig’s implementation of flat closures, who learned it from Cardelli. I don’t know though where “display hack” comes from.

That’s it! If you know of any other standard techniques for type checks with single-inheritance subtyping, do let me know in the comments. Until next time, happy hacking!

Addendum: Thanks to kind readers, I have some new references! Michael Schinz refers to Yoav Zibin’s PhD thesis as a good overview. Alex Bradbury points to a survey article by Roland Ducournau as describing the DFS technique as “Schubert numbering”. CF Bolz-Tereick unearthed the 1983 Schubert paper, and it is a weird one. Still, I can’t but think that the DFS technique was known earlier; I have a 1979 graph theory book by Shimon Even that describes a test for “separation vertices” that is precisely the same, though it does not mention the application to type tests. Many thanks also to fellow traveller Max Bernstein for related discussions.

by Andy Wingo at Wednesday, February 18, 2026

Saturday, February 7, 2026

crumbles.blog

HOWTO: Set up an IPsec VPN on NixOS and connect to it with Mac OS and iOS

There are two options for IPsec VPNs on NixOS: Libreswan and Strongswan. Since Strongswan has much better NixOS configuration, we’ll use that.

Note! In the best tradition of howto guides on blogs, I’m not an expert on IPsec, Strongswan, VPN configuration, nor even really NixOS. However, these are the settings that worked for me, derived mostly from the DigitalOcean guide to setting up Strongswan on Ubuntu and adapted for Nix. Please try every other support forum you can think of before asking me personally for help, because I probably have no idea :-)

Generating the needed certificates, etc.

For its own very special reasons, Strongswan’s command-line tools for generating keys and certificates don’t work without an /etc/strongswan.conf file present, even though they don’t need it. Fortunately, an empty one is okay, so let’s create one:

$ sudo touch /etc/strongswan.conf

Now use the Strongswan command line tools in a Nix shell to generate the necessary credentials. (Use nix-shell -p strongswan if you don’t have flakes and nix-command enabled!)

$ nix shell nixpkgs#strongswan

Create a directory to hold our keys and certificates, and set some permissions for safety:

$ mkdir -p ~/pki/{cacerts,certs,private}
$ chmod 700 ~/pki
$ pki --gen --type rsa --size 4096 --outform pem > ~/pki/private/ca-key.pem
$ pki --self --ca --lifetime 3650 --in ~/pki/private/ca-key.pem --type rsa --dn "CN=VPN root CA" --outform pem > ~/pki/cacerts/ca-cert.pem
$ pki --gen --type rsa --size 4096 --outform pem > ~/pki/private/server-key.pem

For a private VPN, I suggest connecting only through its IP address rather than messing about with DNS. In the following, replace 12.34.56.78 with the IP of your own server. If you really want to connect through a domain name, you can delete the last --san @12.34.56.78 below and just use the other two with the IP address replaced by your domain name.

$ pki --pub --in ~/pki/private/server-key.pem --type rsa \
    | pki --issue --lifetime 1825 \
          --cacert ~/pki/cacerts/ca-cert.pem \
          --cakey ~/pki/private/ca-key.pem \
          --dn "CN=12.34.56.78" --san 12.34.56.78 --san @12.34.56.78 \
          --flag serverAuth --flag ikeIntermediate --outform pem \
        > ~/pki/certs/server-cert.pem

Now you can exit the Nix shell and copy these new keys and certificates to the right place:

$ sudo cp -r ~/pki/* /etc/ipsec.d/

Finally, delete the temporary strongswan.conf file we created; NixOS will manage all further Strongswan configuration.

$ sudo rm /etc/strongswan.conf

Configuring Strongswan in configuration.nix

services.strongswan = {
  enable = true;
  # Where your user authentication information will be stored:
  secrets = [ "/etc/ipsec.d/ipsec.secrets" ];

  setup = {
    # Log daemon statuses
    charondebug = "ike 1, knl 1, cfg 0";
    # Allow multiple connections
    uniqueids = "no";
  };

  connections = {
    # You can change the name of the connection from `vpn` if you
    # like; this is only used internally
    vpn = {
      auto = "add";
      compress = "no";
      type = "tunnel";
      keyexchange = "ikev2";
      fragmentation = "yes";
      forceencaps = "yes";
      # Detect and clear any hung connections
      dpdaction = "clear";
      dpddelay = "300s";
      send_cert = "always";
      rekey = "no";
      # Accept connections on any local network interface
      left = "%any";
      # Set this to your domain name (prefixed with @) or your IP address
      leftid = "12.34.56.78";
      leftcert = "server-cert.pem";
      leftsendcert = "always";
      # Tell clients to use this VPN connection for connections to
      # all other IP addresses
      leftsubnet = "0.0.0.0/0";
      # Accept connection from any remote client
      right = "%any";
      # Accept connection from any remote client ID
      rightid = "%any";
      # Authentication method `eap-mschap-v2` works on Mac OS, iOS,
      # and allegedly on Android and Windows too
      rightauth = "eap-mschapv2";
      # Give clients local IP addresses in the 10.0.0.0/24 subnet
      rightsourceip = "10.0.0.0/24";
      # Set this to your preferred DNS server
      rightdns = "1.1.1.1";
      # Clients do not need to send certificates
      rightsendcert = "never";
      # Ask clients for identification when they connect
      eap_identity = "%identity";
      # Recommended ciphersuite settings for iOS and Mac; you may need
      # different ones on other platforms
      esp = "aes256-sha256-modp2048";
      ike = "aes256-sha256-modp2048-modpnone";
    };
  };
};

We also need to configure the kernel to allow IP forwarding and do some related hardening by setting the appropriate sysctls:

boot.kernel.sysctl."net.ipv4.ip_forward" = 1;
boot.kernel.sysctl."net.ipv6.conf.all.forwarding" = 1;
boot.kernel.sysctl."net.ipv4.ip_no_pmtu_disc" = 1;
boot.kernel.sysctl."net.ipv4.conf.all.accept_redirects" = 0;
boot.kernel.sysctl."net.ipv4.conf.all.send_redirects" = 0;
boot.kernel.sysctl."net.ipv6.conf.all.accept_redirects" = 0;
boot.kernel.sysctl."net.ipv6.conf.all.send_redirects" = 0;

Finally, we need to configure the NixOS firewall to allow connections on the IPsec ports, and also to route connections through the VPN properly. (Thanks to Erik Dombi on the Strongswan issue tracker for the information on how to set this up.)

You will need to know the name of your network interface. If you don’t use a declarative, static configuration of your IP address (which for a VPN server you probably should, unless you are using Dynamic DNS or something) you may not know it. Find it with ip route; the network interface name is the word that appears after dev. (In my case, it says default via 98.76.54.32 dev ens3 proto static, so my interface is ens3.) Here I’m using ens3. Replace ens3 everywhere in the extraCommands configuration with the name of your own interface if it’s different for you.

# UDP ports 500 and 4500 are used for IPsec connections
networking.firewall.allowedUDPPorts = [ 500 4500 ];
networking.firewall.extraCommands =
  ''
    iptables -P INPUT ACCEPT
    iptables -P FORWARD ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A FORWARD --match policy --pol ipsec --dir in  --proto esp -s 10.0.0.0/24 -j ACCEPT
    iptables -A FORWARD --match policy --pol ipsec --dir out --proto esp -d 10.0.0.0/24 -j ACCEPT
    iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o ens3 -m policy --pol ipsec --dir out -j ACCEPT
    iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o ens3 -j MASQUERADE
    iptables -t mangle -A FORWARD --match policy --pol ipsec --dir in -s 10.0.0.0/24 -o ens3 -p tcp -m tcp --tcp-flags SYN,RST SYN -m tcpmss --mss 1361:1536 -j TCPMSS --set-mss 1360
    iptables -A INPUT -j DROP
    iptables -A FORWARD -j DROP
  '';

Setting up users

Above in configuration.nix we told the Strongswan NixOS module that our secrets will come from the file /etc/ipsec.d/ipsec.secrets, so we need to create it, tell it where to find our server private key, and add some users:

: RSA "server-key.pem"
username : EAP "password"

Replace username and password by your chosen credentials. You can add more lines of this type if you want more users.

Run it!

Running nixos-rebuild switch will start the Strongswan service and reconfigure the firewall; you should then have a working VPN.

For some reason after a system reboot, the VPN seems to take a minute or two to start; nixos-rebuild manages to bring it up immediately.

Connecting on Mac OS and iOS

The advantage of an IPsec VPN is that it’s supported natively by Mac OS and iOS. The process is pretty similar on both:

  1. Download the ca-cert.pem file from /etc/ipsec.d/cacerts

  2. On Mac, open it in Keychain Access and add it to the System keychain. You then need to open the System keychain, right click on the certificate and choose ‘Get Info’, unfold the ‘Trust’ disclosure triangle, and select at least ‘Always Trust’ for IPsec.

    On an iOS device, after downloading the file, you will be prompted to install the new ‘profile’ in the Settings app. Once you install it, you still need to mark the certificate as trusted. Go to General → About → Certificate Trust Settings (right at the bottom) and enable full trust for that root certificate. (There is no fine-grained trust setting on iOS, at least for manual configuration, as far as I know.)

  3. Add a VPN in the VPN settings pane, choosing IKEv2 as the type. Set both the ‘Server address’ and ‘Remote ID’ to the IP address or domain name of your VPN server. Under ‘Authentication’, select ‘User authentication’ method ‘Username’ and enter the username and password you put in the /etc/ipsec.d/ipsec.secrets file.

  4. Create the VPN and turn it on. Hopefully it will work!

Improvements I want to make in a future version of this guide

  • Use Agenix to store secrets instead of just dumping them in /etc
  • Understand more about what those iptables incantations do and maybe pare them down

Saturday, February 7, 2026

Joe Marshall

Vibe Coded Scheme Interpreter

Mark Friedman just released his Scheme-JS interpreter which is a Scheme with transparent JavaScript interoperability. See his blog post at furious ideas.

This interpreter apparently uses the techniques of lightweight stack inspection — Mark consulted me a bit about how that hack works. I'm looking forward to seeing the vibe coded architecture.

by Joe Marshall (noreply@blogger.com) at Saturday, February 7, 2026

Wednesday, January 21, 2026

Peter Bex

FOSS for digital sovereignty in the EU

The European Commission has posted a "call for evidence" on open source for digital sovereignty. This seeks feedback from the public on how to reduce its dependency on software from non-EU companies through Free and Open Source Software (FOSS).

This is my response, with proper formatting (the web form replies all seem to have gotten their spaces collapsed) and for future reference.

The added value of FOSS

In times where international relations are tense, it is wise to invest in digital sovereignty. For example, recently there was a controversy surrounding the International Criminal Court losing access to e-mail hosted by Microsoft, a US company, for political reasons.

A year earlier, a faulty CrowdStrike update caused the largest IT outage in history. This was an accident, but it was a good reminder of the power that rests in foreign hands. We have to consider the possibility of a foreign government pressuring a company to issue a malicious update on purpose. This update could target only specific countries.

Bringing essential infrastructure into EU hands makes sense. But why does this have to be FOSS? For instance, the CrowdStrike incident could also have happened with FOSS.

With FOSS, one does not have to trust a single company to maintain high code quality and security. Independent security researchers and programmers will be looking at this code with a fresh perspective. It is also an industry truism that FOSS code tends to be of higher quality, simply because releasing bad code is too embarrassing.

FOSS also reduces vendor lock-in. One can switch vendors and keep using the same product when for example the vendor:

  • goes bankrupt,
  • drops support for the product,
  • drastically increases prices,
  • decides on a different direction for the product than the user wants,
  • or gets acquired by a foreign company.

Therefore, FOSS brings sovereignty by not being at the mercy of a single vendor.

Public sector and consultancies

The EU can set a good example by starting in the public sector: government EU organisations and those of the member states, as well as semi-government organisations like universities and libraries. Closed source software still reigns supreme there. Only "established" companies may apply to tenders. These often employ professionals certified in proprietary tech. This encourages vendor lock-in. The existing dependency ensures lock-in for future projects, as compatibility is often a key requirement.

These same vendors are ruthless and have repeatedly sabotaged FOSS migrations. Microsoft was involved in multiple bribery scandals in The Netherlands, Romania, Italy and Hungary, for example. There have also been allegations of illegal deals that were never investigated, such as with the LiMux project in Munich.

How the EU can help:

  • Fully commit to FOSS. Set a date by which all software used by the public sector must be FOSS and running on hardware within the EU, at fully EU-owned companies. No compromises, no excuses and no easy outs - those were the bane of previous efforts.
  • Map out missing requirements and pay EU consultancy firms to improve FOSS where it is lacking. This will also make said software more attractive for large private organisations that provide essential services in the EU.

Concrete examples:

  • Many EU and member state institutes rely on American services for hosting or securing their e-mail. E-mail software is a complete commodity, for which there are good European alternatives, based on FOSS. It should be easy to switch.
  • Workstations for public servants typically run on Windows and use Microsoft Office. Switch these to a proven open operating system like Linux and office suite like LibreOffice.

Education and mind share

In schools, informatics is typically taught using proprietary software. This is often cloud software. Schools do not have the expertise or funds to run their own servers. Therefore, they use the easy option that teachers are familiar with: "free" online offerings from US Big Tech. Network effects ensure deeper entrenchment. Big Tech offers steep discounts for educational licenses for these exact reasons.

Vocational schools focus on proprietary tech most used in industry. This goes beyond IT studies. For example, statistics and psychology courses use SPSS over PSPP or R. Mathematics and physics courses use MATLAB over GNU Octave. Engineering courses use AutoCAD instead of FreeCAD or LibreCAD.

A focus on the impact of tech choices in education could change the situation from the ground up. In high school, there could be a place (e.g. in civic education class) to focus on the impact of tech choices on society. This goes beyond domestic versus foreign "cloud" hosting and open versus proprietary code. For example, studies show that social media can have harmful effects on mental well-being, societal cohesion and even democracy.

How the EU can help:

  • Provide funding for course material, and/or create a certification programme for suitable course material to wean schools off of Big Tech software.
  • Start an education campaign aimed at the broader public in order to explain why closed software and the non-EU cloud are harmful. For example, it could focus on concrete issues that affect anyone like data protection, privacy and resistance against "enshittification" such as unwanted ads, price hikes and feature removal.
  • For the existing work force, the EU can fund training in open alternatives so that people feel confident with these alternatives. Such training should include a theoretical component to discuss the benefits of using open alternatives to ensure people are fully on board.

Existing FOSS companies and economic situation

The EU has plenty of FOSS businesses already. A handful of examples: SUSE was one of the first companies to provide FOSS server and desktop operating systems for the enterprise. Tuta and Proton Mail provide innovative secure e-mail solutions. Nextcloud offers cloud-based content collaboration tools. GitLab and Codeberg offer code hosting platforms.

These companies are innovative and profitable, but small in the global marketplace. Competitors from the US benefit from economies of scale. The initial US market is a large country with a single language and minimal legislation. This allows for quick domestic growth followed by global expansion. The EU market is more fragmented so it is harder to gain a foothold, requiring more up front investment to e.g. support the languages spoken in the EU.

Venture capital is also less likely to invest in the EU because of stricter legislation. Because FOSS solutions give competing companies a chance to offer the product, the returns on investment are lower than with proprietary software where a single company has a monopoly on the software.

Some EU companies have realised that this legislation is an asset: it allows for differentiation from US-based offerings. EU software can compete in the global marketplace on its own merits.

How the EU can help:

  • Promote tech sovereignty to countries across the world. Start with countries who are not formally allied to the US. This could help EU companies to expand into the global market.
  • Help EU companies become more well-known by organising trade shows exhibiting only FOSS EU companies.
  • Provide funding to organisations like the FSF Europe to run awareness campaigns about FOSS alternatives.
  • Perhaps controversial: heavily tax proprietary, non-EU software or provide tax breaks for FOSS EU software to level the playing field.
  • Even more controversially: prevent foreign-owned companies from operating data centers in the EU. Make it as hard as possible for them to offer high-speed cloud software here. These data centers are already unpopular, as they use precious water and land, and they only make foreign companies more powerful.

Conclusion

The reasons for dependency on foreign proprietary solutions are systemic. The causes are various: from inertia and ignorance to market effects and bribery. The solutions must be equally systemic: from education to policy and funding, all points must be attacked in order to succeed. This is the only way we can get rid of our dependency on non-EU software.

by Peter Bex at Wednesday, January 21, 2026

Saturday, January 17, 2026

Scheme Requests for Implementation

SRFI 267: Raw String Syntax

SRFI 267 is now in draft status.

Raw string syntax is lexical syntax for strings that do not interpret escapes inside of them. They are useful in cases where the string data has a lot of characters like \ or " that would otherwise have to be escaped. The raw string syntax in this document is derived from C++11's raw string literals.
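The same convenience exists in other languages. A quick Python illustration of what escape-free literals buy you (Python's r-prefix, shown only for the idea; SRFI 267 defines its own Scheme lexical syntax):

```python
# With an ordinary string literal, every backslash must be doubled;
# a raw string takes backslashes literally.
escaped = "C:\\temp\\new"   # escapes are interpreted
raw = r"C:\temp\new"        # no escape processing at all
assert escaped == raw       # both denote the same 11-character string
```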

by Peter McGoron at Saturday, January 17, 2026

Friday, January 16, 2026

Scheme Requests for Implementation

SRFI 266: The expr syntax

SRFI 266 is now in draft status.

The syntax expr allows one to write arithmetic expressions using a syntax near to mathematical notation, potentially improving the readability of Scheme programs.

by José Bollo at Friday, January 16, 2026

Tuesday, January 13, 2026

Gauche Devlog

Extension package registry

Extension package registry

We just renewed the Gauche homepage. It's mostly cosmetics, but one notable change is the Extension Packages page

It's been on our todo list for a very long time to create some system to track Gauche extension packages. It is trivial to create a site where users can put the info. What's not trivial is how to keep the info updated.

It's a burden to the user if we ask them to keep updating such info whenever they update their extension package.

If a user puts their website/email for the package, but then moves away from Gauche development, and eventually the site/email becomes inactive and goes away, we don't know what to do with the entry; it'd also be difficult if somebody wants to take over the project.

Should anybody be able to update the package's info? Limiting it to the original authors becomes an issue if they go inactive and out of touch. Allowing it may cause a security issue if someone replaces the distribution URL with a malicious one.

To vet the users entering info, we need some mechanism of user registration and authentication, which adds another database to maintain.

These implications kept us from implementing the official mechanism to provide the extension package registry.


Things have changed in the last decade or so.

First, distributed VCS and their hosting services have become the norm. Instead of having personal websites to serve extension package tarballs and documents, developers can put their repository on one of those services and make it public.

Recent Gauche provides a standard framework of building extensions. One important aspect of it is package.scm in the source tree to keep meta information about the package, including version number, authors, "official" repository url, dependencies, etc.
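For illustration, a package.scm might look something like the following. All field values here are hypothetical, and the exact set of define-gauche-package keywords is documented in Gauche's manual:

```scheme
;; Hypothetical package.scm for an extension named Gauche-foo.
(define-gauche-package "Gauche-foo"
  :version "1.2"
  :description "An example extension (hypothetical)"
  :require (("Gauche" (>= "0.9.14")))
  :authors ("Alice Example <alice@example.org>")
  :licenses ("MIT")
  :homepage "https://example.org/Gauche-foo"
  :repository "https://github.com/example/Gauche-foo.git"
  ;; The superseded-by slot, used when control of a package is
  ;; transferred, would point at the successor's repository:
  ;; :superseded-by "https://github.com/successor/Gauche-foo.git"
  )
```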

So, once we know the package's repository URL, we can get its meta information!

The author updates package.scm as the development proceeds, because it is a part of the source. No need to update information on the registry separately.

Anybody can create an account on those services, but the service gives each one a certain identity and a place to interact with others. Sure, people move away eventually, but it's rare that they bother to remove their repositories; and it's easy to inherit an abandoned project.

We already have an official way to state such a transfer of control in package.scm (the superseded-by slot). If the successor can contact the original author/maintainer/committer, the package.scm in the original repository can have a superseded-by field pointing to the new repository. It is not mandatory, but it can make clear where the "official" successor is.

In other words, we can use the existing public repositories as the metadata database, and merely maintain pointers to them by ourselves.


So, how do we manage those pointers? We don't have thousands of extension packages updated daily, so we don't need a sophisticated database server for it.

We decided to piggyback on the public DVCS service again. Gauche package repository index github repo maintains the list of package urls under its packages directory. If you want your packages to be listed, just fork it, add your package, and send a pull request. (If you don't want to use GitHub, just send a patch via email.)

Which repository is added when, by whose request, is recorded in the commits of that repo.

Currently, pulling metadata and reflecting it on the webpage is done in an occasional batch process. We'll adjust the frequency as we go. If we ever get very popular and receive tons of new package registration requests, we might need to upgrade the system, but until then, this will be the least-maintenance-cost solution.


To be in the registry, your extension package needs package.scm. I scanned through the existing list on the wiki (WiLiKi:Gauche:Packages) and added the packages for which I could find a public repository with package.scm.

If your extension is old enough not to have package.scm, a convenient way to create one is to run gauche-package populate in the top directory of the source tree. It gives you a template package.scm with some fields pre-filled with whatever information it can find.

Tag: Extensions

Tuesday, January 13, 2026