Planet Scheme

Sunday, September 14, 2025

LIPS Scheme Blog

How to Serialize Any Object in JavaScript?

In this article, I will explain how the serialization of objects works in LIPS (in the dump compiler).


see the rest of the article


Wednesday, September 10, 2025

spritely.institute

Shepherd × Goblins update

The Shepherd is an init system and process manager initially built for GNU Hurd and now used by Guix. It can run with either root or user privileges to launch daemons, execute tasks, and manage processes. As we’ve discussed previously, Spritely has been working to port the Shepherd to Goblins. We’ve been a bit quiet since that announcement, so what’s the buzz?

First, as a quick refresher, the Shepherd is a great project to port to Goblins because it’s already built on the actor model. By switching to Goblins, we can bring the following benefits to the project:

  • Streamline the codebase by replacing the Shepherd’s ad-hoc actor model implementation.
  • Reduce the likelihood of concurrency bugs caused by the existing actor model implementation, which exposes too much of its CSP foundation built on Fibers.
  • Transform services (and other actors) into object capabilities for fine-grained management of privileges, which will (eventually) make it possible to unify the currently separate worlds of “system” Shepherds that run as root under PID 1 and “user” Shepherds that run as an unprivileged user.
  • Enable Shepherd to use the Object Capability Network (OCapN) to open the door for distributed networks of “Communicating Shepherd Processes” in the future.

Since our last post, we’ve done the following:

  • Wrote Goblins versions of the core actors like the service controller, service registry, and process monitor.
  • Added unit tests for all of the core actors (there were none before).
  • Rewrote the public API as a compatibility layer on top of a new Goblins actor API. This new API is private for now.

What this means is that all of the extant Shepherd functionality will soon be available in the Goblins port. We’re currently working out the remaining sneaky, tricky, subtle bugs in order to have a full 1:1 port that passes the existing test suite. We’re getting very close, so we felt it was time to share this update!

What we’ve been up to

To better explain the work we've been doing, we need to discuss some Shepherd internals, particularly how actors work. Shepherd includes an ad-hoc actor model that differs notably from Goblins. Shepherd actors are implemented as an event loop running in a fiber (a lightweight thread), and they send messages to each other over channels. Goblins is also built on Fibers, but it mostly hides this behind an abstraction barrier. Rather than each actor managing its own event loop, many Goblins actors share an event loop known as a vat. Each Goblins actor is mapped to its current “behavior”, a procedure that is called when the actor receives a message. The differences between these two actor model implementations mean that porting an actor from one to the other isn’t as straightforward as it might seem.
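
As a rough illustration (a toy, not code from the Shepherd port), here is what the Goblins style looks like: the constructor receives bcom, the lambda it returns is the behavior, and calling bcom swaps in a new behavior while optionally returning a value to the caller.

```scheme
(use-modules (goblins))

;; Toy counter actor in the Goblins convention: ^counter is a
;; constructor; the lambda it returns is the actor's behavior, called
;; once per message; (bcom new-behavior return-value) "becomes" the
;; next behavior.
(define (^counter bcom count)
  (lambda ()
    (bcom (^counter bcom (1+ count)) count)))

(define vat (spawn-vat))
(define counter (with-vat vat (spawn ^counter 0)))
(with-vat vat ($ counter))  ; returns the old count, then increments
```

Note how there is no explicit event loop or channel in sight; the vat handles message delivery for every actor spawned within it.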

The central unit of abstraction is the service, which represents something managed by the Shepherd. This could be an external process, a one-off task (known as a “one-shot” service), a timer, or whatever the user wants it to be. Internally, services are represented by a record holding immutable configuration information and a service controller actor which manages the running state. The service controller (named ^service in the code) was the first target of our porting efforts as it helped elucidate the shape of the new architecture.

Going hand-in-hand with the service controller actor is the service registry (^service-registry). The service registry is responsible for mapping the names of services to their associated service controller. Porting the service registry was one of the simplest parts of the project; most of the original actor logic was copied over with minimal changes.

The Shepherd makes heavy use of dynamically scoped variables known as parameters to pass around shared state like the current registry, the current service, the current client socket, etc. This created an issue for us, however, as vats introduce a continuity barrier for parameters. In the existing actor system, actors inherit the dynamic environment in which they are spawned because each actor is a new fiber spun off the current fiber. In Goblins, actors are spawned within a vat’s event loop, which has an entirely separate dynamic state from the caller. Furthermore, Goblins discourages the use of parameters because they are inherently ambient and thus not capability-safe.

Removing these parameters would be a backward-incompatible change, so instead we capture the current state of the relevant parameters in the compatibility layer before passing those values off to Goblins actors. The most obvious use of this technique is for I/O handling. The Shepherd uses a custom soft port for logging to standard output, a client socket, and/or the system log. A little actor, tentatively named ^writer, now handles these concerns.
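
A sketch of the capture technique (the names here are hypothetical, not the Shepherd's actual internals): read the parameter in the caller's dynamic extent, then hand the value to the actor as an ordinary argument.

```scheme
(use-modules (goblins))

;; Hypothetical parameter holding the current log destination.
(define current-log-port (make-parameter (current-output-port)))

;; Hypothetical compatibility-layer wrapper: the parameter must be
;; read *here*, in the caller's dynamic environment, because the
;; vat's event loop has its own dynamic state and would not see a
;; (parameterize ...) established by the caller.
(define (start-service-compat shepherd-vat shepherd service)
  (let ((port (current-log-port)))          ; capture now...
    (with-vat shepherd-vat
      (<- shepherd 'start service port))))  ; ...pass as a plain value
```

The parameter value crosses the continuity barrier as data, which is also friendlier to the capability discipline: the actor receives exactly the port it was given, not ambient access to whatever is current.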

The most significant change is the introduction of a coordinating actor called simply ^shepherd. This actor is where we pushed all of the logic related to starting, stopping, respawning, etc. Procedures such as start-service and stop-service are now thin wrappers around calls to this actor.

Related to process orchestration, the Shepherd has a process monitor actor whose job is to watch for the termination of processes associated with services and notify other actors about it. This was also relatively simple to port to a Goblins ^process-monitor actor once all of the shared state was properly captured. Goblins promises somewhat simplified the logic involved in responding to these state changes.

Perhaps the trickiest part of the port was logging. Loggers read from an input port and write timestamped log lines to some destination, perhaps a file or the system log. Port I/O requires some understanding of how Fibers and Goblins interact. If you’re not careful, you can suspend a vat’s fiber, potentially stalling the program. Goblins provides the ^io actor to handle many common I/O needs safely, but the needs of the Shepherd were beyond what that actor could provide. A first attempt at porting this logic proved too buggy to rely upon, so we’ve recently reworked the logging actors into something more robust (and more similar to the original logging actors, too).

Finally, we have made a variety of smaller changes so existing code plays nicely with the new Goblins actors. For example, the Shepherd provides a collection of helpers for things like starting and stopping processes (make-forkexec-constructor, for example). The shift to Goblins required introducing message passing and promise handling to keep some of these working as expected. A lot of time has been spent devising ways to keep the public API the same so that existing user code continues to function as if nothing has really changed. To support all of this work, we’ve introduced a few of our own helper procedures and macros, and we’ve modified some existing ones to be Goblins-friendly.

Whew, that’s a lot! It’s the culmination of over a year of work, so it can be difficult to take in. If you’d like to try, you can see the current state of the port in our WIP pull request on Codeberg!

Demo time

Okay, but does it work? We’re so glad you asked!

In addition to using Shepherd for its init system (PID 1), Guix provides helpful facilities for running user-level Shepherd daemons through the home-shepherd-service-type in guix home. This is the same kind of user Shepherd daemon mentioned before; Guix just provides a nice, declarative interface to configure and launch the daemon when defining a user's home-environment. We used this functionality to swap in our Goblins Shepherd and manage an Emacs background daemon with it. Here's an actual session running inside a guix home container:

juli shepherd λ guix home container home-shepherd.scm
substitute: updating substitutes from 'https://substitutes.nonguix.org'… 100.0%
substitute: updating substitutes from 'https://bordeaux.guix.gnu.org'…   0.0%guix substitute: warning: bordeaux.guix.gnu.org: connection failed: Connection refused
substitute:
substitute: updating substitutes from 'https://ci.guix.gnu.org'… 100.0%
The following derivations will be built:
  /gnu/store/ixpviqjakf55j24ag523pdl5g9k8xld7-provenance.drv
  /gnu/store/ynykwmi237h4jxrgdgkwqs4sgvf1h3cc-home.drv

substitute: updating substitutes from 'https://bordeaux.guix.gnu.org'…   0.0%
building /gnu/store/ixpviqjakf55j24ag523pdl5g9k8xld7-provenance.drv...
building /gnu/store/ynykwmi237h4jxrgdgkwqs4sgvf1h3cc-home.drv...
WARNING: (guile-user): imported module (guix build utils) overrides core binding `delete'
WARNING: (guile-user): imported module (guix build utils) overrides core binding `delete'
Symlinking /home/juli/.bash_profile -> /gnu/store/9vidh7q8sp353rb1jnrndyif9wl2fjna-bash_profile... done
Symlinking /home/juli/.profile -> /gnu/store/jjvk66x9wwzxw38byk796y9b6kvi21b0-shell-profile... done
Symlinking /home/juli/.bashrc -> /gnu/store/mdp6zf77631kqr8cw26p4m3vvbr7vk01-bashrc... done
Symlinking /home/juli/.config/shepherd/init.scm -> /gnu/store/nag703p683l66s2adad719810xfrhx3w-shepherd.conf... done
Symlinking /home/juli/.config/fontconfig/fonts.conf -> /gnu/store/bqhrpq7na79bxm3sbpmnana10g6sc4d5-fonts.conf... done
 done
Finished updating symlinks.

Comparing /gnu/store/non-existing-generation/profile/share/fonts and
          /gnu/store/yyyn7zy4lx8z9qsb41imkbxb11wrrqqc-home/profile/share/fonts... done (same)
Evaluating on-change gexps.

On-change gexps evaluation finished.

juli@sordidus ~$ herd status
Started:
 + emacs
 + root
juli@sordidus ~$ herd status emacs
Status of emacs:
  It is running since 19:40:05 (9 seconds ago).
  Main PID: 40
  Command: /gnu/store/b6f34g5rsz35z40fc0myimw9zgj654xj-emacs-no-x-30.1/bin/emacs --fg-daemon
  It is enabled.
  Provides: emacs
  Will be respawned.

Recent messages (use '-n' to view more or less):
  2025-09-05 19:40:06 Starting Emacs daemon.
juli@sordidus ~$ emacsclient -c
juli@sordidus ~$ herd stop emacs
juli@sordidus ~$ herd status emacs
Status of emacs:
  It is stopped since 19:40:31 (2 seconds ago).
  Process exited with code 15.
  It is enabled.
  Provides: emacs
  Will be respawned.
juli@sordidus ~$ herd start emacs
Service emacs has been started.
juli@sordidus ~$ herd status emacs
Status of emacs:
  It is running since 19:40:38 (2 seconds ago).
  Main PID: 144
  Command: /gnu/store/b6f34g5rsz35z40fc0myimw9zgj654xj-emacs-no-x-30.1/bin/emacs --fg-daemon
  It is enabled.
  Provides: emacs
  Will be respawned.

Recent messages (use '-n' to view more or less):
  2025-09-05 19:40:38 Starting Emacs daemon.
juli@sordidus ~$ herd restart emacs
Service emacs has been started.
juli@sordidus ~$ herd status emacs
Status of emacs:
  It is running since 19:40:45 (2 seconds ago).
  Main PID: 180
  Command: /gnu/store/b6f34g5rsz35z40fc0myimw9zgj654xj-emacs-no-x-30.1/bin/emacs --fg-daemon
  It is enabled.
  Provides: emacs
  Will be respawned.

Recent messages (use '-n' to view more or less):
  2025-09-05 19:40:45 Starting Emacs daemon.
juli@sordidus ~$ ls -al $(command -v herd)
lrwxrwxrwx 1 65534 overflow 80 Jan  1  1970 /home/juli/.guix-home/profile/bin/herd -> /gnu/store/4l4b2qb91bq3djj9ldg66jx6p98hxvin-goblins-shepherd-1.0.99-git/bin/herd

As simple as this demo is, it demonstrates that the Goblins Shepherd can already handle its basic job, despite the work left to achieve parity with mainline. If you’d like to try this out for yourself, first ensure you have Guix installed and up-to-date, then run the following commands:

git clone https://codeberg.org/spritely/shepherd
cd shepherd
git checkout -b goblins-shepherd-guix-home-demo
guix home container home-shepherd.scm

You can also look on Codeberg to see the home config itself.

“That’s so exciting!” we hear you saying; “When will this be shipping in a Guix distribution near me? When can I use Goblins to boot my operating system?” As exciting as this is, we’re not quite ready for prime time, as we’ll explain below.

Remaining work

Before we deploy Shepherd onto our own systems, and especially before we try it in PID 1, we want to ensure that we reliably pass Shepherd’s existing suite of shell-based tests. Somewhere between four and seven tests fail as of this writing; some fail every time, others only intermittently, indicating there may be some subtle race conditions lurking.

One key piece that remains unsupported is system logging, the lack of which accounts for a considerable chunk of the remaining test failures. Nonetheless, our branch is passing nearly all of the existing tests, which is great progress! Once the test suite issues have been sorted out, we’ll try using our Shepherd build on a real Guix system and see how stable it is over time.

There is also plenty of code to clean up. We've left all the original actor code in place during development to make rebasing on upstream less prone to conflicts, but the time has come to start removing it. There are also numerous refactors that can be done to improve the code style and readability.

Beyond a direct port, though, this work will empower the Shepherd with everything Goblins and OCapN have to offer. So how could those powers be used? Well, we've got some ideas!

Single-system unification

On Guix systems, where Shepherd serves as the init system in PID 1, it is common to run additional Shepherd instances for unprivileged users. One option is to use guix home, as mentioned above. These Shepherd instances are entirely separate from each other. Only users with access to the root user (likely via sudo) can interact with the system Shepherd; it’s all-or-nothing. It would be nice to be able to give unprivileged users access to a subset of the system services, following the principle of least authority. The object capability security model provided by Goblins makes this possible!

herd clients communicate with shepherd daemons using a custom protocol over a Unix domain socket. If the client were to be modified to speak the OCapN protocol instead, users of herd would only be able to interact with services for which they hold a capability. Consider a shared server: the system administrator could give capabilities to other users of the machine that grant access to just a subset of the available system services — and perhaps to only a subset of the available service actions. The fine-grained nature of object capabilities means that access can be scoped to the minimum necessary for each user to do what they need.

As a first step in this direction, we’ve added a prerequisite component to Goblins, a Unix domain socket netlayer for OCapN. This is necessary to have interconnected machines running Shepherd communicate over OCapN at the system layer. Read on to see an example of what this might look like!

Fleet orchestration

Moving on from Shepherd on a single machine, an OCapN-enabled Shepherd will allow for orchestration of entire server fleets. To demonstrate, let’s walk through an example scenario. Carol, a DevOps engineer, is responsible for the web servers running on a small fleet of machines named A and B. Each machine is running Shepherd with a web-server service registered. To model this scenario on a single machine, we’ll use three Goblins vats:

(define a-vat (spawn-vat #:name "Server A"))
(define b-vat (spawn-vat #:name "Server B"))
(define c-vat (spawn-vat #:name "Carol"))

Then we’ll set up some loggers to distinguish which “machine” logged which line:

(define-actor (^prefix-logger bcom prefix)
  (lambda (str)
    (format #t "~a: ~a\n" prefix str)))

(define a-output (with-vat c-vat (spawn ^prefix-logger "A")))
(define b-output (with-vat c-vat (spawn ^prefix-logger "B")))
(define c-output (with-vat c-vat (spawn ^prefix-logger "C")))

Servers A and B have identical configuration with a web-server service that depends on the networking service:

(define (spawn-networking-service)
  (spawn ^service '(networking)
         #:start-handler (const #t)
         #:stop-handler (const #f)))

(define (spawn-web-server-service)
  (spawn ^service '(web-server)
         #:requirement '(networking)
         #:start-handler (const #t)
         #:stop-handler (const #f)))

(define a-registry (with-vat a-vat (spawn ^service-registry)))
(define a-shepherd (with-vat a-vat (spawn ^shepherd a-registry)))
(define a-networking (with-vat a-vat (spawn-networking-service)))
(define a-web-server (with-vat a-vat (spawn-web-server-service)))
(with-vat a-vat
  (all-of (<- a-shepherd 'register a-networking a-output)
          (<- a-shepherd 'register a-web-server a-output)))

(define b-registry (with-vat b-vat (spawn ^service-registry)))
(define b-shepherd (with-vat b-vat (spawn ^shepherd b-registry)))
(define b-networking (with-vat b-vat (spawn-networking-service)))
(define b-web-server (with-vat b-vat (spawn-web-server-service)))
(with-vat b-vat
  (all-of (<- b-shepherd 'register b-networking b-output)
          (<- b-shepherd 'register b-web-server b-output)))

Carol would like to issue a single command to start or stop all of the web servers. To do this, Carol first acquires references to the web-server service actors on each machine. At first glance this might seem to cause a name collision problem as both services have the same name, but fear not! Carol can assign locally meaningful names to these remote services in her local Shepherd. On Carol’s machine, the remote services are registered as web-server-a and web-server-b, respectively.

;; Naive, but enough for demo purposes.
(define-actor (^exported-service bcom writer shepherd service provision)
  (extend-methods service
    ((canonical-name) (car provision))
    ((provision) provision)
    ((requirement) '())
    (start
     (lambda args
       (let-on ((status (<- service 'status)))
         (match status
           ('stopped  `(started ,(apply <- shepherd 'start service writer args)))
           ('starting `(starting ,(<- service 'running)))
           ('stopping `(stopping ,(<- service 'running)))
           ('running  `(running ,(<- service 'running)))))))
    (stop
     (lambda args
       (apply <- shepherd 'stop service writer args)))))

(define c-registry (with-vat c-vat (spawn ^service-registry)))
(define c-shepherd (with-vat c-vat (spawn ^shepherd c-registry)))
(define c-web-server-a
  (with-vat c-vat
    (spawn ^exported-service a-output a-shepherd a-web-server '(web-server-a))))
(define c-web-server-b
  (with-vat c-vat
    (spawn ^exported-service b-output b-shepherd b-web-server '(web-server-b))))

These exported services are actually proxy objects that you can think of as micro-herd clients, each controlling a single service. In this simplistic example, Carol can only start or stop the exported services, but it would also be possible to allow other actions to be invoked.

To conveniently orchestrate all of the remote web-server services with a single command, Carol binds them together with her own local web-server-fleet service that depends on both web-server-a and web-server-b.

(define c-web-server-fleet
  (with-vat c-vat
    (spawn ^service '(web-server-fleet)
           #:requirement '(web-server-a web-server-b)
           #:start-handler (const #t)
           #:stop-handler (const #f))))
(with-vat c-vat
  (let-on ((_ (<- c-shepherd 'register c-web-server-a c-output))
           (_ (<- c-shepherd 'register c-web-server-b c-output))
           (_ (<- c-shepherd 'register c-web-server-fleet c-output)))
    (<- c-shepherd 'start c-web-server-fleet c-output)))

Now all Carol has to do is run herd start web-server-fleet (which we simulate above with the start method call) and her local Shepherd will report the success or failure of starting all the remote web servers in the fleet! Assembling the logs from all three machines, the event log would look something like this:

A: Service networking has been started.
B: Service networking has been started.
A: Service web-server has been started.
C: Service web-server-a has been started.
B: Service web-server has been started.
C: Service web-server-b has been started.
C: Service web-server-fleet has been started.

Neat, huh?

Guix deployment over OCapN

One final idea we’ll share is for a new Guix feature: a guix deploy agent. This would be a capability-safe take on the modern DevOps practice of deploying through dedicated agents instead of generic SSH. To make this work, there would be a guix-deploy Shepherd service that runs on the target machine with a special deploy action to start the deployment process. The workstation that is invoking guix deploy would receive a capability to that service, perhaps in sturdyref form, and associate it with a Guix machine declaration. That code might look something like this:

(define my-server
  (machine
    (operating-system my-os)
    (environment ocapn-environment-type)
    (configuration (machine-ocapn-configuration
                    (sturdyref "ocapn://pubkey.tcp-tls/s/swissnum?host=example.com&port=8888")
                    (system "x86_64-linux")))))

Any volunteers interested in building this?

Wrapping up

Porting Shepherd to Goblins has been a long time coming, but we’re starting to see encouraging results! If you’d like to discuss this blog post, help us make some of the ideas described above a reality, or talk about anything else Spritely related, consider joining our community forum!

by Juli Sims & David Thompson (contact@spritely.institute) at Wednesday, September 10, 2025

Wednesday, September 3, 2025

spritely.institute

Spritely Goblins v0.16.1 released!

Today we're happy to announce the release of Goblins 0.16.1. This is a small patch release for the 0.16.0 release from a few weeks ago. Unfortunately, shortly after releasing it we discovered some issues which prevented using OCapN (the peer-to-peer networking element of Goblins) with Hoot. This release resolves that along with a couple of minor bug fixes.

For more details about the changes in the release, see the NEWS file.

Bug Fixes

  • Fixed an issue where (goblins actor-lib io), which is heavily used by our netlayers, used current-scheduler from Fibers, a procedure that is not present in Hoot's Fibers API.

  • Fixed an issue where multiple connections between two OCapN peers could exist if the OCapN Locator's hints differed.

  • Fixed an issue where multiple connections would occur between two OCapN peers due to a record hashing bug. This could not actually be reached due to the above IO actor bug.

  • Fixed an issue where resizing the vat event log would lose data. This was due to a bug in ring-buffer-resize! from (goblins utils ring-buffer).

Getting the release

As usual, if you're using Guix, you can upgrade to 0.16.1 by running:

$ guix pull
$ guix install guile-goblins

Otherwise, you can find the source download links on the Goblins homepage.

Get in touch!

If you're making something with Goblins or want to contribute to Goblins itself, be sure to join our community forum! We also host regular office hours where you can come and ask questions or discuss projects. For more information, see the forum. Thanks for following along and hope to see you there!

by Jessica Tallon (contact@spritely.institute) at Wednesday, September 3, 2025

Thursday, August 21, 2025

Idiomdrottning

Your job will be replaced by AI

Every now and then, we see lists in the media of which jobs AI will be able to replace.

I don't know how they reason, but my own thinking is that it might hold in the future, if AI gets much better than it is now, because right now it's lousy.

But that future can arrive very quickly and suddenly. I think it's good to spread awareness of how the idea of a "labor market" is pretty strange in a world where we build labor-saving machines.

I would love to see us move (gradually, ideally, but as soon as possible) toward a radically different way of distributing resources and tasks than market thinking, because it feels a bit broken that when a labor market governs us, the best we can do is invent better tools, more effective "shovels" and "hammers" and "rakes" that let us dig deeper, hammer harder, and rake faster, but never cut into that hellish 40-hour week, no matter how many tools we invent.

And since we live in a society governed by the owning and growing of capital, those new tools will arrive, and not in a way that is good for the people who work. That is the biggest problem with AI: it drastically increases the concentration of ownership of the means of production. (The other big problem, of course, is that, like all template production and all automation, it erodes the awareness of the connection between climate impact and production, since the other big market bug is that things that can't be counted into transactions, such as environmental wreckage or network dependence, are rewarded more the worse they are.)

So when lists appear saying "these jobs are threatened by AI: basically all of them", that's only a good thing, because it can hopefully finally get us started on cobbling together some new ways of living that are less market-dependent, and then the cornucopia could finally come within reach. The right to laziness, the right to dream and create and share and give freely.

As long as we are set to wage labor to get food on the table and a roof over our heads, an excavator is no better than a shovel. The days are the same either way: "Get up, go to work, work, work, eat lunch. The same thing happens tomorrow. Work, ride home, and sit down and stare." That the tool lets your boss's road project get further, or your boss's mine get deeper, or your boss's software get more incomprehensible, doesn't help us in the least in such a world.

And not only do the tools fail to make our day better. They can lead to massive poverty and hardship in the form of unemployment, because jobs, however crazy and awful it is to work, have been made a precondition for eating and sleeping safely in this warped construction of a world.

When the spinning jenny, the eighteenth-century spinning machine, arrived, the world shook in a way that is still on the verge of killing it, because industrialism's environmental catastrophes are still not solved; they're still runaway. Now that we are on the verge of creating a machine that creates machines, who knows what we'll do. Simply not inventing it and carrying on as we are now isn't going to happen, because there are a handful of super-rich, super-nasty people who stand to profit enormously from its arrival, so arrive it will.

But it doesn't have to go wrong, if only we can cure the disease called ownership of the means of production and instead all get a share of la dolce vita, taking it easy while the computer does the most boring part of the job. As long as it doesn't end up the other way around, with the computer doing the fun part and forcing us to do the dreary part.

by Idiomdrottning (sandra.snan@idiomdrottning.org) at Thursday, August 21, 2025

Wednesday, August 20, 2025

The Racket Blog

Racket v8.18

posted by Stephen De Gabrielle


We are pleased to announce Racket v8.18 is now available from https://download.racket-lang.org/.

As of this release:

  • The racket-lang.org website no longer distributes Racket BC bundles, but it includes pre-built bundles for two flavors of ARM Linux: AArch64 and 32-bit ARMv6 VFP.
  • XML structures are serializable.
  • Scribble’s HTML generation conforms better to modern standards.
  • Racket uses Unicode 16.0 for character and string operations.
  • The redex-check default generation strategy always uses random generation to supplement the enumerator.
  • DrRacket supports the use of shift-tab to go backward to previous indentation positions.
  • The macro stepper supports the string-constants library, allowing internationalization of the stepper itself.
  • The struct form supports #:properties prop-list-expr, making it more convenient to attach multiple property values to a structure type.
  • Build-system improvements support containers registered at Docker Hub to build for all platforms that have downloads from the main Racket download site; improvements also support Unix-style builds for Mac OS in the style of MacPorts.
  • The expt function produces a more accurate result than in prior versions when its first argument is a flonum and its second argument is an exact integer that has no equivalent flonum representation.
  • TCP ports use SO_KEEPALIVE correctly.
  • Unsafe code can use “uninterruptible mode” instead of “atomic mode” to allow futures to run concurrently while preventing interruptions from other threads.
  • The net/imap library supports IMAP’s move operation.
  • There are many other repairs and documentation improvements!

Thank you

The following people contributed to this release:

Bob Burger, Bogdan Popa, Brad Lucier, Carl Gay, Chloé Vulquin, D. Ben Knoble, Gustavo Massaccesi, Jacqueline Firth, Jade Sailor, Jarhmander, Jason Hemann, Jens Axel Søgaard, Joel Dueck, John Clements, jyn, Jörgen Brandt, Mao Yifu, Marc Nieper-Wißkirchen, Matthew Flatt, Matthias Felleisen, Mike Sperber, Noah Ma, paralogismos, Pavel Panchekha, Philip McGrath, Robby Findler, Ryan Culpepper, Sam Tobin-Hochstadt, Shalok Shalom, Stephen De Gabrielle, Steve Byan, Vincent Lee, Wing Hei Chan, and ZC Findler.

Racket is a community developed open source project and we welcome new contributors. See racket/README.md to learn how you can be a part of this amazing project.

Feedback Welcome

Questions and discussion welcome at the Racket community on Discourse or Discord.

Please share

If you can, please help get the word out to users and platform-specific repo packagers.

Racket - the Language-Oriented Programming Language - version 8.18 is now available from https://download.racket-lang.org

See https://blog.racket-lang.org/2025/08/racket-v8-18.html for the release announcement and highlights.

by John Clements, Stephen De Gabrielle at Wednesday, August 20, 2025

Thursday, August 7, 2025

Andy Wingo

whippet hacklog: adding freelists to the no-freelist space

August greetings, comrades! Today I want to bookend some recent work on my Immix-inspired garbage collector: firstly, an idea with muddled results, then a slog through heuristics.

the big idea

My mostly-marking collector’s main space is called the “nofl space”. Its name comes from its historical evolution from mark-sweep to mark-region: instead of sweeping unused memory to freelists and allocating from those freelists, sweeping is interleaved with allocation; “nofl” means “no free-list”. As it finds holes, the collector bump-pointer allocates into those holes. If an allocation doesn’t fit into the current hole, the collector sweeps some more to find the next hole, possibly fetching another block. Space for holes that are too small is effectively wasted as fragmentation; mutators will try again after the next GC. Blocks with lots of holes will be chosen for opportunistic evacuation, which is the heap defragmentation mechanism.

Hole-too-small fragmentation has bothered me, because it presents a potential pathology. You don’t know how a GC will be used or what the user’s allocation pattern will be; if it is a mix of medium (say, a kilobyte) and small (say, 16 bytes) allocations, one could imagine a medium allocation having to sweep over lots of holes, discarding them in the process, which hastens the next collection. Seems wasteful, especially for non-moving configurations.

So I had a thought: why not collect those holes into a size-segregated freelist? We just cleared the hole, the memory is core-local, and we might as well. Then before fetching a new block, the allocator slow-path can see if it can service an allocation from the second-chance freelist of holes. This decreases locality a bit, but maybe it’s worth it.
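
In Scheme pseudocode, the second-chance idea might look something like this (a toy model with invented names; the real nofl space works on raw block memory, not Scheme lists):

```scheme
(define granule 16)   ; assumed minimum allocation unit, in bytes
(define nclasses 8)
;; One bucket of stashed holes per size class.
(define freelists (make-vector nclasses '()))

(define (size->class size)
  (min (- nclasses 1) (quotient size granule)))

;; Called while sweeping: remember a too-small hole instead of
;; discarding it as fragmentation.
(define (stash-hole! hole size)
  (let ((c (size->class size)))
    (vector-set! freelists c (cons hole (vector-ref freelists c)))))

;; Allocation slow path: try the second-chance freelist before
;; sweeping further or fetching a fresh block.
(define (second-chance-alloc size)
  (let* ((c (size->class size))
         (holes (vector-ref freelists c)))
    (if (null? holes)
        #f  ; nothing cached; fall back to the normal slow path
        (begin
          (vector-set! freelists c (cdr holes))
          (car holes)))))
```

The stash is core-local by construction, since each mutator sweeps its own blocks; the trade-off, as noted below, is that holes consumed this way are no longer available to guide evacuation decisions.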

Thing is, I implemented it, and I don’t know if it’s worth it! It seems to interfere with evacuation, in that the blocks that would otherwise be most profitable to evacuate, because they contain many holes, are instead filled up with junk due to second-chance allocation from the freelist. I need to do more measurements, but I think my big-brained idea is a bit of a wash, at least if evacuation is enabled.

heap growth

When running the new collector in Guile, we have a performance oracle in the form of BDW: it had better be faster for Guile to compile a Scheme file with the new nofl-based collector than with BDW. In this use case we have an additional degree of freedom, in that unlike the lab tests of nofl vs BDW, we don’t impose a fixed heap size, and instead allow heuristics to determine the growth.

BDW’s built-in heap growth heuristics are very opaque. You give it a heap multiplier, but as a divisor truncated to an integer. It’s very imprecise. Additionally, there are nonlinearities: BDW is relatively more generous for smaller heaps, because it attempts to model and amortize tracing cost, and there are some fixed costs (thread sizes, static data sizes) that don’t depend on live data size.

Thing is, BDW’s heuristics work pretty well. For example, I had a process that ended with a heap of about 60M, for a peak live data size of 25M or so. If I ran my collector with a fixed heap multiplier, it wouldn’t do as well as BDW, because it collected much more frequently when the heap was smaller.

I ended up switching from the primitive “size the heap as a multiple of live data” strategy to live data plus a square root factor; this is like what Racket ended up doing in its simple implementation of MemBalancer. (I do have a proper implementation of MemBalancer, with time measurement and shrinking and all, but I haven’t put it through its paces yet.) With this fix I can meet BDW’s performance for my Guile-compiling-Guile-with-growable-heap workload. It would be nice to exceed BDW of course!
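As a rough sketch of such a heuristic (the scaling constant K is a made-up placeholder, not the value Whippet or Racket uses):

```c
#include <stddef.h>

/* Integer square root by linear search; fine for a sketch. */
static size_t isqrt(size_t n) {
  size_t r = 0;
  while ((r + 1) * (r + 1) <= n)
    r++;
  return r;
}

/* "Live data plus a square-root factor" heap sizing, in the spirit of
   simplified MemBalancer.  K (bytes of slack per sqrt-byte of live
   data) is an assumed tunable. */
static size_t heap_target(size_t live_bytes) {
  const size_t K = 1024;
  return live_bytes + K * isqrt(live_bytes);
}
```

The nice property is that the effective multiplier, 1 + K/sqrt(live), shrinks as live data grows: generous for small heaps, tight for big ones, matching the BDW behavior observed above. For a 25 MB live set and K = 1024, the target comes out to about 30 MB.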

parallel worklist tweaks

Previously, in parallel configurations, trace workers would each have a Chase-Lev deque to which they could publish objects needing tracing. Any worker could steal an object from the top of a worker’s public deque. Also, each worker had a local, unsynchronized FIFO worklist, some 1000 entries in length; when this worklist filled up, the worker would publish its contents.

There is a pathology for this kind of setup, in which one worker can end up with a lot of work that it never publishes. For example, if there are 100 long singly-linked lists on the heap, and the worker happens to have them all on its local FIFO, then perhaps they never get published, because the FIFO never overflows; you end up not parallelising. This seems to be the case in one microbenchmark. I switched to not have local worklists at all; perhaps this was not the right thing, but who knows. Will poke in future.

a hilarious bug

Sometimes you need to know whether a given address is in an object managed by the garbage collector. For the nofl space it’s pretty easy, as we have big slabs of memory; bisecting over the array of slabs is fast. But for large objects whose memory comes from the kernel, we don’t have that. (Yes, you can reserve a big ol’ region with PROT_NONE and such, and then allocate into that region; I don’t do that currently.)

Previously I had a splay tree for lookup. Splay trees are great but not so amenable to concurrent access, and parallel marking is one place where we need to do this lookup. So I prepare a sorted array before marking, and then bisect over that array.

Except a funny thing happened: I switched the bisect routine to return the start address if an address is in a region. Suddenly, weird failures started happening randomly. Turns out, in some places I was testing if bisection succeeded with an int; if the region happened to be 32-bit-aligned, then the nonzero 64-bit uintptr_t got truncated to its low 32 bits, which were zero. Yes, crusty reader, Rust would have caught this!
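A hypothetical reduction of that bug class (assuming a 64-bit platform where `int`-conversion truncates to the low 32 bits, as GCC and Clang do):

```c
#include <stdint.h>

/* Pretend lookup: every address "hits" a region whose base happens to
   be 32-bit aligned, so its low 32 bits are all zero. */
static uintptr_t region_start_of(uintptr_t addr) {
  (void)addr;
  return (uintptr_t)0x700000000ULL;  /* hypothetical region base */
}

static int lookup_hit_buggy(uintptr_t addr) {
  int hit = region_start_of(addr);   /* BUG: truncates to low 32 bits */
  return hit != 0;                   /* reads as "not found" */
}

static int lookup_hit_fixed(uintptr_t addr) {
  uintptr_t hit = region_start_of(addr);
  return hit != 0;
}
```

The buggy version reports a miss for an address that is squarely inside the region; the fixed version keeps the full-width `uintptr_t`.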

fin

I want this new collector to work. Getting the growth heuristic good enough is a step forward. I am annoyed that second-chance allocation didn’t work out as well as I had hoped; perhaps I will find some time this fall to give a proper evaluation. In any case, thanks for reading, and hack at you later!

by Andy Wingo at Thursday, August 7, 2025

spritely.institute

Spritely Goblins v0.16.0 released!

We are excited to announce Spritely Goblins v0.16.0! This release of Goblins is faster than ever, with two major core speedups benefiting all Goblins-using programs! Furthermore, we have a brand new Unix Domain Socket netlayer, which means our OCapN protocol is now usable for efficient machine-local inter-process communication!

A new Unix Domain Sockets netlayer

Another new netlayer has come to Goblins, this time one based on Unix Domain Sockets! Unix domain sockets are ideal for communication between multiple processes running on the same machine. We think being able to use Goblins and OCapN to wire together a kind of efficient local inter-process communication is pretty neat!

Many users might be familiar with Unix domain sockets addressed by file paths on the system; however, since file systems on Unix(-like) systems use ACLs, this can lead to security vulnerabilities via confused deputy attacks.

Our implementation uses a feature of Unix domain sockets that allows sockets to be sent and received over other sockets. We built an introduction server which you run on your system. You can think of it as a little OCaps kernel amongst the modern ACL sea that our systems are built on today! The Unix domain socket netlayer can connect to one or more of these introduction servers, and so long as two netlayers share the same introduction server, they can securely communicate with one another.

We look forward to seeing what you all use this new netlayer for... we have several exciting uses planned ourselves we hope to show off soon!

Speeding, speeding, speeding ahead!

When we're talking speedups in this release, we're not talking mere single-digit percentage boosts. No, not even double digit... keep going! Each of our two core speed boosts improves the speed of common Goblins operations by 10-20x, benefiting all Goblins-using programs!

The short version is, spawn has gotten faster, and bcom and promises have also gotten faster, all of which are core to all Goblins programs. For the interested reader, we explain more in further detail below. (This may go into more detail than many readers care for; feel free to skip past!)

Speeding up spawn by bypassing the elfs

Once upon a time, when Goblins was being created, a decision was made: debugging programs is important, and so all objects shall carry a debug name, and that debug name shall be, by default, the name of the procedure that constructed the actor! This was a sensible decision, and we believe, generally the correct one: it has served us well.

This decision was made long ago, in early days when Goblins was a Racket library, and we began to focus on speed much more after the port to Guile. But unfortunately, in Guile, asking a procedure "what is your name?" resulted in a journey to the land of elfs.

Or rather, that is all to say, calling procedure-name in Guile on every spawn, which we did for debuggability purposes, turns out to be painfully slow. And the reason it is slow is that normally procedure-name is only called when experimenting at the REPL or when printing a backtrace. While optimizing Goblins programs, we found that tracing an ordinary spawn (using Guile's lovely ,trace tool) was printing reams and reams of pages of lines of code. Guile's internal object file format is (perhaps surprising to some readers!) the very same ELF as, yes, the Executable and Linkable Format used by Linux executables! However, Guile uses this for different purposes; it turns out this format is just very well thought through, and Guile's lead dev Andy Wingo has a nice blogpost explaining why ELF was chosen. What this effectively meant is that ELF-parsing code would be executed all the time when simply trying to grab the name of a procedure while spawning an object.

What to do? Many paths were considered: we could try to optimize this rarely-used code in Guile itself, or cache the result and attach it to the constructor somehow, or evaluate it lazily. But each of these was either slow or otherwise complicated.

We could change spawn to be a macro, and grab the name referred to by the constructor at compile time. Alas, this had its own pitfall: this would break any case where spawn was already being used with apply.

The solution is to support both cases! Here is the new code for spawn:

;; When an actor is spawned and a name is not specified, we default to
;; the name of its constructor.  However, 'procedure-name' is very
;; slow and can involve parsing ELF for compiled code.  To speed
;; things up, we take advantage of the fact that actor constructors
;; are typically specified as identifiers in the source, so we can
;; simply use that identifier as the name.  To preserve the illusion
;; that 'spawn' is just a regular ol' procedure, there is identifier
;; syntax.
(define-syntax spawn
  (lambda (stx)
    (syntax-case stx ()
      ((_ constructor arg ...)          ; fast path
       (identifier? #'constructor)
       #'(spawn-named 'constructor constructor arg ...))
      ((_ constructor arg ...)          ; slow path
       #'(%spawn constructor arg ...))
      (id                               ; identifier syntax; also slow
       (identifier? #'id)
       #'%spawn))))

What this means is that when a Goblins program is compiled, most invocations of spawn will cleverly use the name of the constructor being passed in at compile time. But if this cannot be determined at compile time, or if spawn is to be invoked via apply or passed around as if it were a function, we fall back to using spawn as an ordinary procedure (i.e., fall back to the internal %spawn procedure, which calls procedure-name as normal).

We still love our friends the elfs, and upon occasion, some Goblins programs might journey into elf land, should they need their help to provide a simple debugging name. But most of the time, we can be much faster now, by looking around where we are at compile time!

Become your new you, faster than ever

Previously we discussed how Goblins actors got much faster with spawn, but this is only part of an actor's journey. First, we are born, and then, we grow and change based upon experience. So it is too with Goblins actors!

When an actor is spawned, its constructor returns what will be its first behavior. But actors may change their behavior based upon experience: in response to a message, a Goblins actor may choose to bcom (pronounced "become") a new version of its behavior.

An actor having many experiences (receiving many messages) may experience a large amount of change, and thus may invoke bcom a lot. The way bcom was implemented used pretty much the same sealers/unsealers technique from the appendix of The Heart of Spritely, itself a technique borrowed from W7, the very security kernel from A Security Kernel Based on the Lambda Calculus!

This is a cool technique, and takes advantage of being able to construct new types at runtime. However, constructing new types at runtime turns out to have some overhead. The details are unimportant, but we moved to a new implementation of sealers which are functionally equivalent but use an encapsulated "cookie" comparison, which turns out to be dramatically faster... about as fast as two accessor calls and an identity-comparison invocation of eq?!
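To illustrate the idea (in C for concreteness; Goblins' actual Scheme implementation differs), a cookie-based sealer can be as simple as a private brand whose address serves as the identity:

```c
#include <stddef.h>

/* Illustration of a cookie-based sealer.  A brand's address is the
   secret cookie; unsealing is two field reads plus an identity
   comparison, which is what makes it so much cheaper than building a
   fresh type at runtime. */
typedef struct { char unused; } brand;  /* identity = address */
typedef struct { const brand *cookie; void *payload; } sealed;

static sealed seal(const brand *b, void *payload) {
  sealed s = { b, payload };
  return s;
}

/* Returns the payload if s was sealed by brand b, else NULL. */
static void *unseal(const brand *b, sealed s) {
  return s.cookie == b ? s.payload : NULL;
}
```

Only code holding the matching brand can unseal, preserving the sealer/unsealer capability discipline while reducing the cost to a pointer comparison.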

In other words, actors can now change their behavior with bcom quite quickly! And several other aspects of Goblins have gotten faster too with this new sealers technique, in particular several aspects of promises! Zoom zoom!

Getting the release

This release includes all the features detailed above as well as many bug fixes. See the NEWS for more information.

As usual, if you're using Guix you can upgrade to 0.16 by using the following:

guix pull
guix install guile-goblins

Otherwise, you can find the tarball on our release page.

The above features and speedups spoken about in this blogpost refer to the Guile version of Goblins, which is nowadays the primary version of Spritely Goblins. However, we do maintain our older Racket version, which has now also gotten updated to maintain OCapN compatibility with Guile Goblins. Racket users can run the following:

raco pkg install goblins

If you're making something with Goblins or want to contribute to Goblins itself, be sure to join our community at community.spritely.institute! We also host regular office hours where you can come and ask questions or discuss our projects, you can find information about those on our community forum. Thanks for following along and hope to see you there!

by Christine Lemmer-Webber (contact@spritely.institute) at Thursday, August 7, 2025

Thursday, July 31, 2025

spritely.institute

Spritely presented spirited speeches spanning the planet

Over the past 6 months, Spritely has been busy bringing our message to new audiences. I thought it might be nice to compile a list for everyone to watch our talks. Christine Lemmer-Webber, the Executive Director of Spritely, has been busy giving most of these presentations, but the entire team has helped as well. The talks cover our technology, our values, our past, and our vision.

Org mode Witchcraft at Spritely

In January, five Spritely members went to the annual FOSDEM conference to talk about the organization and how we all contribute to it. The first talk was actually by me, about how we run our organization and how we manage the many whitepapers we have put together using Org Mode. I also gave a sneak peek of the trans-bean program, which can use a Magit-style menu to edit plaintext accounting ledgers.

An update from that video is that trans-bean is now available on Codeberg if you want to try it out!

Today's fediverse: a good start, but there's more to do

Unfortunately, this talk is one of those things you had to experience in person. The camera didn't capture the performance. It is still worth a listen though! Our founder, Christine Lemmer-Webber, and the Chief Technologist at Spritely, Jessica Tallon, talk about their view of the fediverse from their perspective as two of the primary authors of the ActivityPub spec.

Object-Capability Security with Spritely Goblins for Secure Collaboration

Juliana gave a beautiful presentation on how our shared values regarding individual rights and consent led naturally to the technical choices Spritely has made. She also gives a great overview of OCaps which is worth watching even if you are already familiar with object capability security.

Minimalist web application deployment with Scheme

Dave is fighting an uphill battle against dependency hell, and he needs your help. Part of the solution is, of course, Guile Scheme! Spritely's Scheme-to-WebAssembly compiler, Hoot, is now mature enough to use in your next web application, and Dave thinks you should try it. What you get in return is true reproducibility and a good bootstrapping story, along with all the web APIs you're used to. He even goes on to show how reactive programming can work through WebAssembly and Scheme, along with plenty of other goodies.

Goblins: The framework for your next project!

Jessica is the Chief Technologist at Spritely and brings a lot of experience to the table when it comes to defining networking standards. She is a co-author of the ActivityPub spec. Now, working on Goblins at Spritely, she believes the Goblins library can be used for much more than just social media.

Shepherd with Spritely Goblins for Secure System Layer Collaboration

Juliana's second talk at FOSDEM this year was about her work on bringing the distributed networking power of Goblins to the Shepherd, which is responsible for coordinating services on a Guix system. With this project well underway, system administration, across the internet, can soon be done in a capability-secure way. This talk covers the current status of the project as well as how the Plan 9 system inspired her to start.

Spritely and a secure, collaborative, distributed future

Christine gave the last talk from Spritely at FOSDEM this year, and walked through the larger concepts of Spritely and how they come together, and why we decided to make mascots for all the different components. The Spritely project plan from 6 years ago is still the current plan, despite all the work that was done in between. As more and more characters have been coming to life, we have been getting closer to fulfilling our promise of peer-to-peer application development made easy and secure.

c-base Fireside chat

Christine had an intimate conversation at Berlin's famous c-base space station with Volker Grassmuck, ranging over her personal life, her experience working on ActivityPub, and the work she is doing now at Spritely. She ends it with a powerful and hopeful message about the future of decentralized networking.

Fediforum keynote

Christine again gave an amazing talk about the values that led to Spritely, most importantly including fun and enjoyment. She talks about the differences between the fediverse and Bluesky, how each can learn from the other, and our current battle against surveillance capitalism. Throughout all of this, she gives an optimistic view of what can be accomplished through community activism.

What the future holds

We are all putting our heads down to work on delivering the promises we talked about this year so far. In the current environment, the tools we are building are more important than ever. We hope that these talks inspire you to try out our technology and read our papers, maybe even donate! And each month, you can come listen to more of us talk at our monthly Office Hours.

Have a great rest of the Summer!

by Amy Pillow (contact@spritely.institute) at Thursday, July 31, 2025

Friday, July 25, 2025

Scheme Requests for Implementation

SRFI 264: String Syntax for Scheme Regular Expressions

SRFI 264 is now in draft status.

This SRFI proposes SSRE, an alternative string-based syntax for Scheme Regular Expressions as defined by SRFI 115. String syntax is both compact and familiar to many regexp users; it is translated directly into SRE S-expressions, providing equivalent constructs. While the proposed syntax mostly follows PCRE, it takes into account specifics of Scheme string syntax and limitations of SRE, leaving out constructs that either duplicate functionality provided by Scheme strings or have no SRE equivalents. The repertoire of named sets and boundary conditions can be extended via a parameter mechanism. Extensions to PCRE syntax allow concise expression of operations on named character sets.

by Sergei Egorov at Friday, July 25, 2025

Tuesday, July 8, 2025

Andy Wingo

guile lab notebook: on the move!

Hey, a quick update, then a little story. The big news is that I got Guile wired to a moving garbage collector!

Specifically, this is the mostly-moving collector with conservative stack scanning. Most collections will be marked in place. When the collector wants to compact, it will scan ambiguous roots in the beginning of the collection cycle, marking objects referenced by such roots in place. Then the collector will select some blocks for evacuation, and when visiting an object in those blocks, it will try to copy the object to one of the evacuation target blocks that are held in reserve. If the collector runs out of space in the evacuation reserve, it falls back to marking in place.

Given that the collector has to cope with failed evacuations, it is easy to give the it the ability to pin any object in place. This proved useful when making the needed modifications to Guile: for example, when we copy a stack slice containing ambiguous references to a heap-allocated continuation, we eagerly traverse that stack to pin the referents of those ambiguous edges. Also, whenever the address of an object is taken and exposed to Scheme, we pin that object. This happens frequently for identity hashes (hashq).

Anyway, the bulk of the work here was a pile of refactors to Guile to allow a centralized scm_trace_object function to be written, exposing some object representation details to the internal object-tracing function definition while not exposing them to the user in the form of API or ABI.

bugs

I found quite a few bugs. Not many of them were in Whippet, but some were, and a few are still there; Guile exercises a GC more than my test workbench is able to. Today I’d like to write about a funny one that I haven’t fixed yet.

So, small objects in this garbage collector are managed by a Nofl space. During a collection, each pointer-containing reachable object is traced by a global user-supplied tracing procedure. That tracing procedure should call a collector-supplied inline function on each of the object’s fields. Obviously the procedure needs a way to distinguish between different kinds of objects, to trace them appropriately; in Guile, we use the low bits of the initial word of heap objects for this purpose.

Object marks are stored in a side table in associated 4-MB aligned slabs, with one mark byte per granule (16 bytes). 4 MB is 0x400000, so for an object at address A, its slab base is at A & ~0x3fffff, and the mark byte is offset by (A & 0x3fffff) >> 4. When the tracer sees an edge into a block scheduled for evacuation, it first checks the mark byte to see if it’s already marked in place; in that case there’s nothing to do. Otherwise it will try to evacuate the object, which proceeds as follows...
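The mark-byte addressing just described can be written down directly (names are ours; the real nofl-space code differs):

```c
#include <stdint.h>

/* 4 MB aligned slabs, one mark byte per 16-byte granule. */
#define SLAB_SIZE ((uintptr_t)0x400000)  /* 4 MB */

static uintptr_t slab_base(uintptr_t addr) {
  return addr & ~(SLAB_SIZE - 1);        /* A & ~0x3fffff */
}

static uintptr_t mark_byte_index(uintptr_t addr) {
  return (addr & (SLAB_SIZE - 1)) >> 4;  /* (A & 0x3fffff) >> 4 */
}
```

So an object at address 0x1234567 lives in the slab based at 0x1000000, and its mark byte sits at offset 0x23456 in that slab's side table.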

But before you read, consider that there are a number of threads which all try to make progress on the worklist of outstanding objects needing tracing (the grey objects). The mutator threads are paused; though we will probably add concurrent tracing at some point, we are unlikely to implement concurrent evacuation. But it could be that two GC threads try to process two different edges to the same evacuatable object at the same time, and we need to do so correctly!

With that caveat out of the way, the implementation is here. The user has to supply an annoyingly-large state machine to manage the storage for the forwarding word; Guile’s is here. Basically, a thread will try to claim the object by swapping in a busy value (-1) for the initial word. If that worked, it will allocate space for the object. If that failed, it first marks the object in place, then restores the first word. Otherwise it installs a forwarding pointer in the first word of the object’s old location, which has a specific tag in its low 3 bits allowing forwarded objects to be distinguished from other kinds of object.
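A toy single-object version of that claim protocol, using C11 atomics (the busy value, forwarding tag, and helper names are our assumptions; the real code must also handle marking in place and publishing the object for further tracing):

```c
#include <stdatomic.h>
#include <stdint.h>

#define BUSY    ((uintptr_t)-1)
#define FWD_TAG ((uintptr_t)0x7)  /* tag in the low 3 bits */

typedef struct { _Atomic uintptr_t header; } obj;

/* Returns the object's new location on success (or if it was already
   forwarded); returns 0 if the object should instead be marked in
   place.  copy_to must be 8-byte aligned so the tag bits are free. */
static uintptr_t try_evacuate(obj *o, uintptr_t copy_to) {
  uintptr_t first = atomic_load(&o->header);
  if ((first & FWD_TAG) == FWD_TAG)   /* someone already forwarded it */
    return first & ~FWD_TAG;
  if (first == BUSY)                  /* another thread is mid-claim */
    return 0;
  /* claim the object by swapping in the busy value */
  if (!atomic_compare_exchange_strong(&o->header, &first, BUSY))
    return 0;
  if (copy_to == 0) {                 /* allocation failed: restore word */
    atomic_store(&o->header, first);  /* caller marks in place instead */
    return 0;
  }
  /* install the tagged forwarding pointer in the old first word */
  atomic_store(&o->header, copy_to | FWD_TAG);
  return copy_to;
}
```

Losing threads either see the forwarding pointer (and follow it) or see the busy value (and back off); only the winner writes the new location.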

I don’t know how to prove this kind of operation correct, and probably I should learn how to do so. I think it’s right, though, in the sense that either the object gets marked in place or evacuated, all edges get updated to the tospace locations, and the thread that shades the object grey (and no other thread) will enqueue the object for further tracing (via its new location if it was evacuated).

But there is an invisible bug, and one that is the reason for me writing these words :) Whichever thread manages to shade the object from white to grey will enqueue it on its grey worklist. Let’s say the object is on a block to be evacuated, but evacuation fails, and the object gets marked in place. But concurrently, another thread goes to do the same; it turns out there is a timeline in which thread A has marked the object and published it to a worklist for tracing, but thread B has briefly swapped out the object’s first word with the busy value before realizing the object was marked. The object might then be traced with its initial word stompled, which is totally invalid.

What’s the fix? I do not know. Probably I need to manage the state machine within the side array of mark bytes, and not split between the two places (mark byte and in-object). Anyway, I thought that readers of this web log might enjoy a look in the window of this clown car.

next?

The obvious question is, how does it perform? Basically I don’t know yet; I haven’t done enough testing, and some of the heuristics need tweaking. As it is, it appears to be a net improvement over the non-moving configuration and a marginal improvement over BDW, though currently with more variance. I am deliberately imprecise here because I have been more focused on correctness than performance; measuring properly takes time, and as you can see from the story above, there are still a couple correctness issues. I will be sure to let folks know when I have something. Until then, happy hacking!

by Andy Wingo at Tuesday, July 8, 2025

Wednesday, June 11, 2025

Andy Wingo

whippet in guile hacklog: evacuation

Good evening, hackfolk. A quick note this evening to record a waypoint in my efforts to improve Guile’s memory manager.

So, I got Guile running on top of the Whippet API. This API can be implemented by a number of concrete garbage collector implementations. The implementation backed by the Boehm collector is fine, as expected. The implementation that uses the bump-pointer-allocation-into-holes strategy is less good. The minor reason is heap sizing heuristics; I still get it wrong about when to grow the heap and when not to do so. But the major reason is that non-moving Immix collectors appear to have pathological fragmentation characteristics.

Fragmentation, for our purposes, is memory under the control of the GC which was free after the previous collection, but which the current cycle failed to use for allocation. I have the feeling that for the non-moving Immix-family collector implementations, fragmentation is much higher than for size-segregated freelist-based mark-sweep collectors. For an allocation of, say, 1024 bytes, the collector might have to scan over many smaller holes until it finds a hole that is big enough. This wastes free memory. Fragmentation memory is not gone—it is still available for allocation!—but it won’t be allocatable until after the current cycle when we visit all holes again. In Immix, fragmentation wastes allocatable memory during a cycle, hastening collection and causing more frequent whole-heap traversals.

The value proposition of Immix is that if there is too much fragmentation, you can just go into evacuating mode, and probably improve things. I still buy it. However I don’t think that non-moving Immix is a winner. I still need to do more science to know for sure. I need to fix Guile to support the stack-conservative, heap-precise version of the Immix-family collector which will allow for evacuation.

So that’s where I’m at: a load of gnarly Guile refactors to allow for precise tracing of the heap. I probably have another couple weeks left until I can run some tests. Fingers crossed; we’ll see!

by Andy Wingo at Wednesday, June 11, 2025

Monday, June 9, 2025

Scheme Requests for Implementation

SRFI 263: Prototype Object System

SRFI 263 is now in draft status.

This SRFI proposes a "Self"-inspired prototype object system. Such an object system works by having prototype objects that are cloned repeatedly to modify, extend, and use them, and is interacted with by passing messages.

by Daniel Ziltener at Monday, June 9, 2025

Wednesday, June 4, 2025

spritely.institute

Goblinville: A Spring Lisp Game Jam 2025 retrospective

Spritely participates in the Lisp Game Jam to make interactive artifacts demonstrating our progress building out our tech stack. The 2025 edition of the Spring Lisp Game Jam recently wrapped up and this time around we were finally able to show off using both Hoot and Goblins together to create a multiplayer virtual world demo! Now that we’ve had a moment to breathe, it’s time to share what we built and reflect on the experience.

But first, some stats about the jam overall.

Jam stats

Out of 26 total entries, 7 were made with Guile Scheme, including ours. Of those 7, all but one used Hoot, our Scheme to WebAssembly compiler. Guile tied for first place with Fennel as the most used Lisp implementation for the jam. We’re thrilled to see that Guile and Hoot have become popular choices for this jam!

Though many entries used Hoot, our entry was the only one that used Goblins, our distributed programming framework. However, David Wilson of System Crafters gets an honorable mention because he streamed several times throughout the jam while working on a MUD built with Goblins that was ultimately unsubmitted.

Our entry was Goblinville and it was rated the 7th best game in the jam overall. Not bad!

About Goblinville

Goblinville is a 2D, multiplayer, virtual world demo. During last year’s Spring Lisp Game Jam we made Cirkoban with a restricted subset of Goblins that had no network functionality. Since then, we’ve made a lot of progress porting Goblins to Hoot, culminating with the Goblins 0.15.0 release in January that featured OCapN working in the web browser using WebSockets.

Given all of this progress, we really wanted to show off a networked game this time. Making a multiplayer game for a jam is generally considered a bad idea, but Spritely is all about building networked communities so that’s what we set out to do. Our goal was to make something of a spiritual successor to the community garden demo I made when I first joined Spritely.

Screenshot of Jessica Tallon in Goblinville

What went well

First, let’s reflect on the good stuff. Here’s what went well:

  • Having participated in this jam a number of times, we have gotten pretty good at scoping projects down into something achievable.

  • Goblins made it easy to describe the game world as a collection of actors that communicate asynchronously. Initially, the entire world was hosted inside a single web browser tab. Once enough essential actors were implemented it was a simple task to push most of those actors into a separate server process. Since sending a message to a Goblins actor is the same whether it is local or remote, this change required little more than setting up an OCapN connection.

  • Communicating with actors over OCapN really helped with creating an architecture that separated server state from client-side input and rendering concerns. This was harder to think about with Cirkoban because there was no network separation.

  • The Hoot game jam template made it easy to get started quickly. It had been a year since we made our last game, so having a small template project was useful while we were refreshing our memory about the various Web APIs we needed to use.

  • The vast collection of freely licensed Liberated Pixel Cup assets (a contest our Executive Director Christine Lemmer-Webber organized back in her days at Creative Commons) allowed us to focus on the code while still having pleasing graphics that felt unified.

As a bonus, David Wilson gave Goblinville a shout out on a System Crafters stream and a bunch of people joined the server while I was online! It was a really cool moment.

Screenshot of six Goblinville players on screen at once

What didn’t go so well

Game jams are fast paced (even though the Lisp Game Jam is more relaxed than the average jam) and not everything goes according to plan. A big part of the game jam experience is to practice adjusting project scope as difficulties arise. Issues with the project included:

  • Time pressure. Unfortunately, we didn't have as much time to dedicate to this project as we would have liked. We weren't able to start work until the Monday after the jam started, so we only had 7 days instead of 10. Also, I came down with a cold at the end of the week, which didn't help my productivity. Making something that felt as polished as Cirkoban simply wasn't possible.

  • Lack of persistence for the game world. Writing actors that can persist still requires some pre-planning that we didn’t have time for. Furthermore, while our persistence system is written to support incremental updates, we don’t have a storage backend that supports it yet. Each tick of the game world would trigger a full re-serialization, and we felt that was too much of a performance penalty. We hope that by the next jam this will no longer be an issue.

  • As predicted, multiplayer increased overall complexity. What felt like a stable enough world during local testing was quickly shown to have several performance issues and bugs once it was released to the public and other people started using it. We had to restart the server once every day or so during the jam rating period (though we have resolved these issues in a post-jam update). Since we weren’t persisting the game world, each restart wiped out all registered players and the state of the map.

  • No client-side prediction to mask lag. For example, when you press an arrow key to move, you won’t see the player sprite move in the client until it receives a notification from the server that the move was valid. In other words, how responsive the controls feel is directly tied to server lag. A production game client would move the player immediately and fix things up later if it receives contradictory information from the server.
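The predict-then-reconcile approach described above could be sketched as follows. This is an illustrative Python sketch, not code from Goblinville (which is written in Scheme), and all names are hypothetical: the client applies a move immediately, remembers it as pending, and reconciles once the server confirms or rejects it.

```python
# Hypothetical sketch of client-side prediction: apply moves locally right
# away, then reconcile against the server's verdict when it arrives.

class PredictiveClient:
    def __init__(self, x, y):
        self.confirmed = (x, y)   # last server-confirmed position
        self.pending = []         # moves applied locally, awaiting confirmation

    @property
    def displayed(self):
        # What to render: the confirmed position plus all unconfirmed moves.
        x, y = self.confirmed
        for dx, dy in self.pending:
            x, y = x + dx, y + dy
        return (x, y)

    def press_arrow(self, dx, dy):
        # Apply the move immediately so the controls feel responsive.
        self.pending.append((dx, dy))

    def on_server_update(self, move, accepted):
        # The server confirms or rejects the oldest pending move.
        assert self.pending and self.pending[0] == move
        self.pending.pop(0)
        if accepted:
            x, y = self.confirmed
            self.confirmed = (x + move[0], y + move[1])
        # If rejected, the move is discarded and the displayed position
        # snaps back toward the confirmed state.
```

Without the `pending` list (i.e. the jam version), `displayed` would simply equal `confirmed`, and every keypress would wait a full round trip before the sprite moved.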

Screenshot of 3 Goblinville players online, user “djm” is saying “hi!”

Post-jam updates

We did a bit of additional work after the jam was over to sand some of the roughest edges:

  • Re-architected the server update loop to greatly reduce message volume. Because it was simple to implement, actors in the game world were being sent a tick message at 60Hz to update their internal state. Most of the time, the actors would simply do nothing. A plant that is done growing has nothing left to do, so that’s 60 wasteful messages per second per plant. Instead, a timer system was added to schedule things to happen after so many ticks of the game world and the tick method was removed from all game objects. This greatly improved server stability, especially for worlds with lots of live objects. As of writing, we’ve had a server running for six days without any noticeable increase in lag.
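The timer idea above could be sketched roughly like this. This is a Python illustration of the general technique rather than the actual Goblins implementation, and the names are made up: events are scheduled a number of ticks in the future on a priority queue, so idle objects cost nothing.

```python
import heapq

# Hypothetical sketch of tick-count scheduling: instead of sending every
# game object a tick message at 60Hz, callbacks are queued to fire after a
# given number of world ticks.

class TimerSystem:
    def __init__(self):
        self.now = 0     # current tick of the game world
        self.queue = []  # min-heap of (due_tick, sequence, callback)
        self.seq = 0     # tie-breaker so the heap never compares callbacks

    def schedule(self, ticks_from_now, callback):
        heapq.heappush(self.queue, (self.now + ticks_from_now, self.seq, callback))
        self.seq += 1

    def tick(self):
        # Advance the world by one tick and run only the timers that are due.
        self.now += 1
        while self.queue and self.queue[0][0] <= self.now:
            _, _, callback = heapq.heappop(self.queue)
            callback()
```

With this scheme, a plant that finishes growing after 600 ticks costs one queue entry and one callback, instead of 600 messages that mostly do nothing.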

  • Added a server event log. It was hard to see what was going on in the world during the jam rating period without being connected to the graphical client. Now the server process emits a timestamped log of every event to standard output.

  • Added character sprite selection. This feature just barely missed the jam submission deadline, but it’s in now! Instead of all players being the same sprite, there are now six to choose from.

  • Took down the public server. For the jam submission version, we had baked the URI of a public server we were hosting into the itch.io client so the game would “just work”. This was particularly important for the other participants who were rating the submitted games and giving feedback. Since the jam rating period is now over, we took down the public server. If you’re interested in trying out Goblinville, you can follow the instructions in the README to host your own server.

Also, Spritely co-founder Randy Farmer stopped by our updated Goblinville world!

Screenshot of current Spritely staff plus co-founder Randy Farmer in Goblinville

Wrapping up

Goblinville turned out to be more of a tech demo than a true game, but we’re quite happy with the result. We think it’s a good demonstration of what can be built with Goblins and Hoot in a short amount of time. We hope to build on this success to create even more engaging, featureful demos in the future!

by Dave Thompson (contact@spritely.institute) at Wednesday, June 4, 2025

Tuesday, May 27, 2025

Idiomdrottning

Endless scroll

The “endless scroll” debate started after it replaced pages where you’d scroll scroll scroll click, scroll scroll scroll click, scroll scroll scroll click. That was annoying while still not actually stemming addiction (at least for me). I’d still read through those megathreads on RPG.net, UI annoyances or no. The endless scroll just took the clicks out of that process, which was an improvement. But what I want instead is to take the scrolls out of the process! So it’s tap, tap, tap, tap—like an ebook!

Probably going to be just as addictive but I won’t get anxiety from all the scrolling.

Scrolling and panning is fiddly and I never get exactly the right amount of page scrolled; it’s like threading a needle repeatedly. Most page-down algorithms are no good either, since they’re paging in a text format that’s not designed for pages, so you have to read the same couple of lines twice: last on this page and first on the next. So in the future maybe we’ll render HTML as actual pages (after all, epub readers can [sorta] do it). Even less and more on Unix can do it; they show all of one page, then all of the next page separately, and so on. The weaksauce nature of page down in GUI apps like Netscape was one of the biggest letdowns when I first started using them in the nineties.

However, the addiction dark pattern has another component: the endless and often junky content which really makes the scroll endless. That part cannot stay.

That’s a secondary reason for why I don’t like discover algorithms on Mastodon, the primary reason being how it’s artificial virality.

by Idiomdrottning (sandra.snan@idiomdrottning.org) at Tuesday, May 27, 2025

Thursday, May 22, 2025

Andy Wingo

whippet lab notebook: guile, heuristics, and heap growth

Greets all! Another brief note today. I have gotten Guile working with one of the Nofl-based collectors, specifically the one that scans all edges conservatively (heap-conservative-mmc / heap-conservative-parallel-mmc). Hurrah!

It was a pleasant surprise how easy it was to switch—from the user’s point of view, you just pass --with-gc=heap-conservative-parallel-mmc to Guile’s build (on the wip-whippet branch); when developing I also pass --with-gc-debug, and I had a couple bugs to fix—but, but, there are still some issues. Today’s note thinks through the ones related to heap sizing heuristics.

growable heaps

Whippet has three heap sizing strategies: fixed, growable, and adaptive (MemBalancer). The adaptive policy is the one I would like in the long term; it will grow the heap for processes with a high allocation rate, and shrink when they go idle. However I won’t really be able to test heap shrinking until I get precise tracing of heap edges, which will allow me to evacuate sparse blocks.

So for now, Guile uses the growable policy, which attempts to size the heap so it is at least as large as the live data size, times some multiplier. The multiplier currently defaults to 1.75×, but can be set on the command line via the GUILE_GC_OPTIONS environment variable. For example to set an initial heap size of 10 megabytes and a 4× multiplier, you would set GUILE_GC_OPTIONS=heap-size-multiplier=4,heap-size=10M.

Anyway, I have run into problems! The fundamental issue is fragmentation. Consider a 10MB growable heap with a 2× multiplier, consisting of a sequence of 16-byte objects followed by 16-byte holes. You go to allocate a 32-byte object. This is a small object (8192 bytes or less), and so it goes in the Nofl space. A Nofl mutator holds on to a block from the list of sweepable blocks, and will sequentially scan that block to find holes. However, each hole is only 16 bytes, so we can’t fit our 32-byte object: we finish with the current block, grab another one, repeat until no blocks are left and we cause GC. GC runs, and after collection we have an opportunity to grow the heap: but the heap size is already twice the live object size, so the heuristics say we’re all good, no resize needed, leading to the same sweep again, leading to a livelock.

I actually ran into this case during Guile’s bootstrap, while allocating a 7072-byte vector. So it’s a thing that needs fixing!

observations

The root of the problem is fragmentation. One way to solve the problem is to remove fragmentation; using a semi-space collector comprehensively resolves the issue, modulo any block-level fragmentation.

However, let’s say you have to live with fragmentation, for example because your heap has ambiguous edges that need to be traced conservatively. What can we do? Raising the heap multiplier is an effective mitigation, as it increases the average hole size, but for it to be a comprehensive solution in e.g. the case of 16-byte live objects equally interspersed with holes, you would need a multiplier of 512× to ensure that the largest 8192-byte “small” objects will find a hole. I could live with 2× or something, but 512× is too much.
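The 512× figure can be checked with a little arithmetic. This is an illustration of the reasoning above, not Whippet code: with live 16-byte objects evenly interspersed with holes, a heap multiplier of m leaves each hole roughly (m - 1) * 16 bytes, so fitting the largest 8192-byte “small” object needs m on the order of 8192 / 16 = 512.

```python
# Illustrative arithmetic for the worst case described above: 16-byte live
# objects evenly interspersed with holes. With multiplier m, free space is
# (m - 1) times live space, so each hole is about (m - 1) * 16 bytes.

def required_multiplier(live_object_size, largest_small_object):
    # hole_size ~= (m - 1) * live_object_size
    # => m ~= largest_small_object / live_object_size + 1
    return largest_small_object // live_object_size + 1

print(required_multiplier(16, 8192))  # 513, i.e. on the order of the 512x above
```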

We could consider changing the heap organization entirely. For example, most mark-sweep collectors (BDW-GC included) partition the heap into blocks whose allocations are of the same size, so you might have some blocks that only hold 16-byte allocations. It is theoretically possible to run into the same issue, though, if each block only has one live object, and the necessary multiplier that would “allow” for more empty blocks to be allocated is of the same order (256× for 4096-byte blocks each with a single 16-byte allocation, or even 4096× if your blocks are page-sized and you have 64kB pages).

My conclusion is that practically speaking, if you can’t deal with fragmentation, then it is impossible to just rely on a heap multiplier to size your heap. It is certainly an error to live-lock the process, hoping that some other thread mutates the graph in such a way to free up a suitable hole. At the same time, if you have configured your heap to be growable at run-time, it would be bad policy to fail an allocation, just because you calculated that the heap is big enough already.

It’s a shame, because we lose a mooring on reality: “how big will my heap get” becomes an unanswerable question because the heap might grow in response to fragmentation, which is not deterministic if there are threads around, and so we can’t reliably compare performance between different configurations. Ah well. If reliability is a goal, I think one needs to allow for evacuation, one way or another.

for nofl?

In this concrete case, I am still working on a solution. It’s going to be heuristic, which is a bit of a disappointment, but here we are.

My initial thought has two parts. Firstly, if the heap is growable but cannot defragment, then we need to reserve some empty blocks after each collection, even if reserving them would grow the heap beyond the configured heap size multiplier. In that way we will always be able to allocate into the Nofl space after a collection, because there will always be some empty blocks. How many empties? Who knows. Currently Nofl blocks are 64 kB, and the largest “small object” is 8kB. I’ll probably try some constant multiplier of the heap size.
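The first idea could be sketched as a post-collection sizing rule. This is a hypothetical Python illustration, not Whippet code; the reserve fraction is an assumed placeholder for the “constant multiplier of the heap size” mentioned above, and only the 64 kB block size comes from the text.

```python
# Hypothetical sketch: after each collection, grow the heap if needed so
# that some fraction of it consists of empty blocks, guaranteeing the Nofl
# allocator can always make progress even past the configured multiplier.

BLOCK_SIZE = 64 * 1024      # Nofl blocks are 64 kB (from the text)
RESERVE_FRACTION = 0.02     # assumed value: keep ~2% of the heap empty

def post_gc_heap_size(current_size, empty_bytes):
    wanted_empty = int(current_size * RESERVE_FRACTION)
    if empty_bytes >= wanted_empty:
        return current_size
    # Round the shortfall up to whole blocks and grow by that much.
    shortfall = wanted_empty - empty_bytes
    blocks = -(-shortfall // BLOCK_SIZE)  # ceiling division
    return current_size + blocks * BLOCK_SIZE
```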

The second thought is that searching through the entire heap for a hole is a silly way for the mutator to spend its time. Immix will reserve a block for overflow allocation: if a medium-sized allocation (more than 256B and less than 8192B) fails because no hole in the current block is big enough—note that Immix’s holes have 128B granularity—then the allocation goes to a dedicated overflow block, which is taken from the empty block set. This reduces fragmentation (holes which were not used for allocation because they were too small).

Nofl should probably do the same, but given its finer granularity, it might be better to sweep over a variable number of blocks, for example based on the logarithm of the allocation size; one could instead sweep over clz(min-size)–clz(size) blocks before taking from the empty block list, which would at least bound the sweeping work of any given allocation.
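The clz-based bound above could be sketched as follows; a Python illustration of the formula, not Whippet code. Assuming 32-bit words, clz(n) is 32 - n.bit_length(), so clz(min-size) - clz(size) reduces to the difference of bit lengths: roughly the logarithm of the allocation size, giving larger allocations more blocks to sweep while keeping the per-allocation work bounded.

```python
# Illustrative sketch of the suggested bound: sweep over
# clz(min-size) - clz(size) blocks before taking from the empty block list.

def clz32(n):
    # Count of leading zeros in a 32-bit word.
    assert 0 < n < 2 ** 32
    return 32 - n.bit_length()

def blocks_to_sweep(size, min_size=256):
    # min_size=256 is an assumed lower bound, borrowed from the Immix
    # medium-object threshold mentioned above.
    return clz32(min_size) - clz32(max(size, min_size))
```

For example, an 8192-byte allocation would get to sweep 5 blocks before falling back to the empty block list, while the smallest allocations would go to the empty list almost immediately.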

fin

Welp, just wanted to get this out of my head. So far, my experience with this Nofl-based heap configuration is mostly colored by live-locks, and otherwise its implementation of a growable heap sizing policy seems to be more tight-fisted regarding memory allocation than BDW-GC’s implementation. I am optimistic though that I will be able to get precise tracing sometime soon, as measured in development time; the problem as always is fragmentation, in that I don’t have a hole in my calendar at the moment. Until then, sweep on Wayne, cons on Garth, onwards and upwards!

by Andy Wingo at Thursday, May 22, 2025