Planet Scheme

Monday, February 26, 2024

Phil Dawes

The point of macros

Pascal Costanza nails the point of macros.

(and illustrates why lispers actually like lisp's strange syntax so much)

Monday, February 26, 2024

And another new programming language

Last year I embarked on somewhat of a journey to find a better language for my home projects after getting a bit frustrated by python's lack of blocks and general cruftiness. After a couple of months of trying various different things I settled on Gambit Scheme for my spare-time data indexing project. A minimal core language with uniform syntax and macros. Lots of potential for adapting and building language features that make sense to me. Sorted.

Then last week I got round to reading Richard Jones's minimal forth code. A language compiler and runtime in a couple of pages of well documented X86 assembler; small enough to read and understand the whole system in a bus journey.

Like Scheme, Forth has the ability to construct new language features, allowing user code to get between the parser and the evaluator to modify the language itself. Also, like Scheme, it has a uniform syntax - words separated by spaces, which makes re-writing code on the fly practical. Both these features mean that a fully fledged programming environment can be bootstrapped up from a minimal core. And the hook is: that minimal core is so much smaller for Forth than it is for Scheme.

One thing led to another and I got interested in stack languages. Now I'm looking at Factor and wondering...

Factor ticks all the right boxes: minimal core, machine code compiler, macros, continuations, lightweight threads and message passing concurrency. It's also the tersest language I've seen - code to do something always seems to be a fraction of the size I expect it to be. On the other hand it's so radically different from anything else I've programmed in that it makes my head hurt. Could it be a contender to knock Scheme off the top spot?

Monday, February 26, 2024

Some hardcore Gambit-C features

Somebody asked me about gambit-c the other day, and why I was using that as opposed to some other language or runtime for my own-time coding stuff. Despite the scheme language being all cool, the thing that really made his eyes light up was the C features in gambit (it is called gambit-c for a reason). Here's some cool stuff you can do with gambit:

  1. Scheme compiles to native machine code.
     The gambit scheme compiler compiles to C, which gcc then compiles to shared libraries or executables. The gsc command wraps this whole process, so you do a 'gsc myschemefile' and a shared library drops out of the other end. The (load) procedure in gambit will import either interpreted scheme code or compiled object files into the process, so you're good to go. In addition, the gsc compiler can also be run as an interactive interpreter (gsc -i), which acts just like the normal gambit scheme repl except that you also have access to the compiler from your code. E.g. as well as dropping interpreted scheme code into the repl, I can also compile a file and load it into the running repl process without dropping to the command line - cool!
  2. You can embed C code directly into gambit scheme files.
     (c-declare) lets you paste C code into your scheme, and (c-lambda) lets you define lambdas in C. This is really sweet. I thought the python C api was good, but because you have to write your C stuff in separate files it always requires some sort of make/build system, and that's always been just too much of a barrier for me to use it day-to-day. With gambit you can just switch a few lines of C into your performance hotspot and you're good to go. This also gives you trivial access to C libraries and low-level stuff - e.g. I use it for mmapped files. Having the C in the same file as the scheme means gcc can optimize and inline C code into compiled scheme and vice-versa. The other advantage of this approach is that the gambit-c environment pretty much requires you to have a C compiler in the mix, so as a developer I can rely on it being there when distributing source to other developers, pasting code into emails etc.
  3. You can compile and load C into a running scheme process.
     Actually this is just a mix of (1) and (2), but it's really cool when you think about what's going on. Make an update to your C code and dynamically re-load it into your running process. I have an emacs keybinding which executes a 'cload' function in the repl:
    (define (cload f)
      (compile-file f)
      (load f))
     I.e. edit the C code, whack the button and it's in the repl process. This keeps the dev loop really tight even when writing C.
  4. You can compile the whole thing into a native binary.
     This is especially cool and important when you consider that gambit-c isn't currently a popular runtime. It means you can distribute native binaries of your app for windows and mac users so that they can try your app without worrying about dependencies.

(*) N.B. although uncommon outside the lisp world, these compilation features are actually pretty common in lisp/scheme implementations. E.g. I think chicken, bigloo and SBCL provide similar things.
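To give a flavour of the embedded-C feature, here's a minimal sketch (my own hedged example, not from my actual code) of defining a scheme procedure whose body is a call into libm:

    ;; paste raw C into the generated output
    (c-declare "#include <math.h>")

    ;; a scheme procedure implemented by C's hypot()
    (define c-hypot
      (c-lambda (double double) double "hypot"))

    (c-hypot 3.0 4.0)  ; => 5.

Run it through gsc and (load) the resulting object file, or just paste it into a gsc -i session.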

Monday, February 26, 2024

Getting the hang of the lisp style

Finally, it's starting to feel like I'm getting the hang of the lisp style. The key for me seems to be in re-reading the contents of srfi-1 every time I think I need a loop.
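For instance (my own illustration), the kind of hand-rolled loop I'd instinctively write collapses into a couple of srfi-1 calls:

    ;; what my python-trained fingers want to write
    (define (sum-odd-squares lst)
      (let loop ((l lst) (acc 0))
        (cond ((null? l) acc)
              ((odd? (car l)) (loop (cdr l) (+ acc (* (car l) (car l)))))
              (else (loop (cdr l) acc)))))

    ;; the srfi-1 version: filter + fold
    (define (sum-odd-squares* lst)
      (fold (lambda (x acc) (+ acc (* x x))) 0 (filter odd? lst)))

    (sum-odd-squares* '(1 2 3 4 5))  ; => 35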

Monday, February 26, 2024

Refactoring and the Repl

I'm still persevering with Gambit scheme, and progressing pretty slowly it has to be said. The first thing I've been missing is the lack of refactoring tools for scheme.

I wrote the basic python refactoring functionality in bicyclerepairman a long while ago, and having it as part of my daily toolset has strongly influenced the way I program. For example, I tend to follow the 'bash out some code and then clean it up' style of development. In particular, I have a habit of naming variables and functions badly and then renaming them later as I code.

So my initial thought is: no problem - I'll just knock up a bicyclerepairman for scheme! The problem is that I'm not quite sure how to do automated refactoring with a repl. You see Python has no real repl culture (sure it has a repl, but nobody uses it except for trying out simple expressions). People tend to run their program/unittests from scratch each iteration, which means the entire environment gets re-evaluated on each run.

The challenge with running a repl while you develop is keeping it in sync with your refactored code: E.g. if I rename a function that's used in multiple places, that results in lots of code that needs re-evaluating. Can this be done automatically (e.g. could it be made to work by just re-eval'ing files?). Hmm.. I think I need to talk to somebody with a lot more scheme experience than I have. Unfortunately I don't actually know any experienced schemers, especially not in London or Birmingham; maybe somebody from lshift can help?

Monday, February 26, 2024

Scheme is love

I've been battling again with Scheme recently. Having spent the last couple of months playing with various languages, I've come to the conclusion that scheme is the only one that has any real possibility of becoming my next 'general purpose language'. Python held that crown for many years, but its lack of blocks and concurrency caused me to start looking elsewhere and now I'm spoilt.

So, to Scheme. I've not found another language that can offer:

  • functional programming
  • message-passing concurrency (see termite)
  • macros
  • continuations
  • terse syntax
  • hardly any language kludges

...and as somebody who programs for fun in his spare time, these things really do matter to me. The biggest obstacle to full enlightenment is the s-expression aesthetic: To my algol-shaped brain that lisp syntax just looks so damn ugly!

Anyway, I'm finding that the most enjoyable and self-affirming way to develop some scheme skills is (ironically) to re-read Peter Seibel's 'Practical Common Lisp' book with scheme glasses on. Now if there's anyone going to convince me that lisp syntax isn't just a grotty heap of parentheses, it's going to be Peter. His book just radiates lisp-love, and you can't help but be hooked. It says 'Look! You fools! Just look what you're missing!'. I've been translating various examples into scheme, just to test the water.

Monday, February 26, 2024

Scheme development environment

I recently had my work laptop nicked while I was in Paris, so I've had to reconstruct my linux development environment on another laptop. That reminded me that I intended to document this stuff since I had to dig around a bit for it when I first picked up scheme a few months ago.

Things I use:

  • Gambit scheme
  • The main reason I've stuck with this is termite, but I've also found the mailing list friendly and helpful (and full of much bigger brains than mine). Gambit can also compile static C binaries that run on windows - that's important if you're going to write code in an esoteric environment like scheme. It also has a decent FFI which allows you to embed C code in with the scheme - tres cool, especially when you're writing performance critical code.
  • Emacs
  • You have to use this if you're doing lisp development. Personally I use emacs for development in any language, including Java. Steve Yegge wrote: "Emacs is to Eclipse as a light saber is to a blaster - but a blaster is a lot easier for anyone to pick up and use." Nuff said. I tend to have two frames open - one with the gambit repl in it and the other to do the actual coding. For people that aren't in the know, the lisp development experience is slightly different to most languages: you basically have a process running (called the REPL) and inject blocks of code into it. This makes the development cycle turnaround super-fast and it becomes a bit frustrating when you go back to waiting for compile cycles in other languages. The various emacs scheme modes provide key presses for sending various regions of code to the repl, the most useful being the current definition (i.e. function) and the last expression.
  • Quack.el
  • Quack has lots of features that make editing scheme much easier. My favourite is automatically converting the word 'lambda' into a single greek lambda character (à la DrScheme). In addition to making the code smaller, having greek letters all over the place makes me feel pretty hard-core (which is obviously very important).
  • E-tags
  • Emacs tags isn't a patch on what bicyclerepairman provides when I'm writing python, but it does enough to make navigating code relatively hassle free, plus it works with every language you can think of. Basically it parses files and creates an index of all the definitions so that you can jump to the definition of a symbol and back in a single key chord.
  • GNU Info
  • It's handy to have all the documentation in info format because then you can use emacs to jump to the appropriate docs when you need them without having to switch to a browser etc. E.g. this function maps [f1] to jump to the r5rs docs for whatever the cursor is currently pointing at. (with thanks to Bill Clementson for this).
    (add-hook 'scheme-mode-hook
              (lambda ()
                (define-key scheme-mode-map [f1]
                  (lambda ()
                    (interactive)
                    (let ((symbol (thing-at-point 'symbol)))
                      (info "(r5rs)")
                      (Info-index symbol))))))
    (N.B. quack has a feature to auto-open web based docs into emacs while you're coding, but I work offline so much that I don't use this much)
  • A unit testing framework.
  • I didn't really get on with any of the ones I tried, so I wrote a simple DSL myself (took about half a day after I'd figured out how syntax-case macros work). Ideally I want to end up using something like Nat's Protest system, which chucks out documentation as a side effect of testing. A Scheme DSL should be a good fit for this style, since you can name tests using strings rather than having to document with function names. For the time being though it provides just enough to get me testing (and also served to teach me a few things about macros, which is good.)

Is there anything missing?

Monday, February 26, 2024

A poor man’s scheme profiler

Gambit scheme lacks a profiler that can profile scheme with embedded C code. (There's statprof, but unfortunately it doesn't profile embedded C). I needed to do this pretty desperately for my triple indexing stuff so I've written a simple macro which takes wall clock timings of functions and accumulates them.

You replace 'define' with 'define-timed' in the functions you want profiled, and then the time spent in each function is accumulated in a global hash table. It's not pretty, but it's simple.

The macro needs some supporting code (which I keep in a separate module so that it can be compiled in order to minimise overhead):

(define *timerhash* (make-table))

;;; call this before running the code
(define (reset-timer) (set! *timerhash* (make-table)))

;;; adds the time spent in the thunk to an entry in the hashtable
;;; (and returns the thunk's result)
(define (accum-time name thunk)
  (let* ((timebefore (real-time))
         (res (thunk)))
    (table-set! *timerhash* name
                (let ((current (table-ref *timerhash* name '(0 0))))
                  (list (+ (car current) (- (real-time) timebefore))
                        (+ (cadr current) 1))))
    res))

;;; call this afterwards to get the times
(define (get-times)
  (map (lambda (e) (list (first e)
                         (* 1000 (second e))
                         (third e)))
       (sort-list (table->list *timerhash*)
                  (lambda (a b) (> (second a)
                                   (second b))))))

And here's the macro. It basically just wraps the function body in a call to 'accum-time':

(define-syntax define-timed
  (syntax-rules ()
    ((define-timed (name . args) body ...)
     (define (name . args)
       (accum-time 'name (lambda ()
                           body ...))))))
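Usage is then a drop-in replacement for define. A hypothetical sketch (not from my actual indexing code):

    (define-timed (fib n)
      (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2)))))

    (reset-timer)
    (fib 20)
    (get-times)  ; => ((fib <accumulated-ms> <call-count>))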

Here's some example output. The first number is the accumulated milliseconds spent in the function, and the second is the number of times it was called.

((handle-query-clause 2182.3294162750244 8)
 (do-substring-search-query 1678.9898872375488 1)
 (run-query 1678.9379119873047 1)
 (main 1678.929090499878 1)
 (handle-substr-search 705.5349349975586 2)
 (convert-ids-to-symbols 400.3608226776123 1)
 (handle-s_o-clause 192.81506538391113 2)
 (lookup-o->s 192.52514839172363 2)
 (lookup-2-lvl-index 192.48294830322266 2)
 (join 163.8491153717041 8)
 (hash-join 163.6500358581543 5)
 (fetch-int32-results 118.37220191955566 655)
 (project 100.7072925567627 3)
 (concat 77.42834091186523 11793)
 (handle-p->so-clause 55.348873138427734 2)
 (bounds-matching-1intersect 51.54275894165039 2)
 (lookup-p->2-lvl-index 50.98700523376465 2)
 (search-32-32-index 42.65856742858887 94)
 (cells-at-positions 35.561561584472656 11697)
 (fetch-32-32-results 1.171112060546875 94)
 (extract-column .2582073211669922 4)
 (get-join-columns .05888938903808594 5)
 (text-search-to-structured-query .032901763916015625 1)
 (columns .0324249267578125 10)
 (column-position .031232833862304688 8)
 (rows .011920928955078125 4))

Hmmm.. I really need to get something to html pretty print the scheme code.

Monday, February 26, 2024

A simple scheme unittest DSL

One of the first things I wrote when I was in the 'nesting'* phase of learning gambit scheme was a unittest DSL. Part of this was that I wanted an excuse to use r5rs syntax-rules macros, but the real motivation was that I'd been seduced by the idea of using tests for documentation à la Nat Pryce's 'Protest'. Here's an example of what I came up with:

(define-tests bus-tests

  (drive-to-next-stop        ; name of fn/class/symbol being tested
    ("takes bus to next stop"
      (drive-to-next-stop bus)
      (assert-equal 'next-stop (bus 'position)))
    ("doesn't stop off at chipshop on the way"
      ; test code to detect chipshop hasn't been stopped at

    ("picks up passengers from the bus stop"
      test code )
    ("doesn't leave passengers at the stop"
      test code )
    ("waits for old lady running to the stop"
      test code )))

The point is that it's easy to write some lisp to traverse this code and generate documentation from it.
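To illustrate (a rough sketch of my own, not the actual implementation): since the suite is just data, a few lines of scheme can walk a quoted copy of it and print a documentation outline:

    (define (print-docs suite)
      (for-each
       (lambda (section)
         (display (car section)) (newline)    ; the symbol under test
         (for-each (lambda (test)
                     (display "  - ")
                     (display (car test))     ; the descriptive string
                     (newline))
                   (cdr section)))
       (cddr suite)))                         ; skip 'define-tests and the suite name

    (print-docs '(define-tests bus-tests
                   (drive-to-next-stop
                    ("takes bus to next stop" #f)
                    ("picks up passengers from the bus stop" #f))))

    ;; prints:
    ;; drive-to-next-stop
    ;;   - takes bus to next stop
    ;;   - picks up passengers from the bus stop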

I found when using protest in python that the documentation angle reinforced some healthy habits: When you write tests you naturally think 'how would this look to another person?' 'how can I document the behaviour of this?' which encourages more complete testing. Also when you look at generated documentation it's easy to see which bits you aren't testing because the documentation is missing (which then encourages you to write more tests).

The implementation is a bit clunky and uses gambit exceptions as a way of terminating tests early on assert failures (which is a bit rubbish). What probably should happen is that the outer macro re-writes each assert as an 'if' or something to conditionally execute the rest of the test (which would be portable to other scheme implementations). To be honest I knocked this up as fast as I could so that I could move onto writing other things (I'm developing a data aggregation and indexing tool), but the point of this post is more to convey the idea than the implementation.

That said, the crufty (currently gambit specific) implementation is here - hope this is interesting/useful to somebody.

* nesting as in 'building a nest'

Monday, February 26, 2024

Gambit-C namespaces

One of the first issues I had when evaluating scheme as a possible replacement for Python as my hobby language was its apparent lack of a module/library/namespace system. How do people possibly build big programs? I wondered.

Now it turns out most (all?) scheme implementations have features to deal with modules and libraries. Gambit's is particularly nebulous in that it doesn't appear to be documented anywhere. Anyway, here's how it appears to work. I'm sure somebody will correct me if I've got something wrong:

Gambit has the 'namespace' primitive, with which you can declare that certain definitions belong in certain namespaces. Here's an example:

> (namespace ("f#" foo) ("b#" bar baz))

This means (AFAICS): "any further use/definition of the term 'foo' will reference the f# namespace and any use of bar/baz will reference the b# namespace".


> (namespace ("f#" foo) ("b#" bar baz))

> (define (foo) "I am foo")  ; defines f#foo

> (foo)
"I am foo"

> (f#foo)
"I am foo"

> (define (bar) "I am bar")

> (b#bar)
"I am bar"

This is cool because it allows you to retroactively fit namespaces to scheme code loaded from other files. E.g. If mod1.scm and mod2.scm both defined a procedure 'foo', you could use namespaces to allow both to be used in the same environment thus:

> (namespace ("m1#" foo))
> (load "mod1")    ; contains: (define (foo) "I am mod1's foo")

> (namespace ("m2#" foo))
> (load "mod2")    ; contains: (define (foo) "I am mod2's foo")

> (m1#foo)
"I am mod1's foo"

> (m2#foo)
"I am mod2's foo"

Job done. Now, I haven't really used gambit namespaces much, so I'm not in a position to provide a good comparison with other approaches; however, the feature does seem in keeping with the rest of the scheme language. By that I mean that rather than a large set of fully-blown rich language features, you get a small bunch of simple but very extensible primitives with which to build your own language.

A good example of building a big system over these small primitives is Christian Jaeger's chjmodule library, where he has used namespaces along with 'load' and 'compile-file' (and of course macros) to build a more industrial-strength module system. This includes an 'import' keyword that loads from a search path and a procedure to recursively compile and import modules. Some example code from the README:

$ gsc -i -e '(load "chjmodule")' -

> (import 'srfi-1)
> fold
> (fold + 0 '(1 2 3))
> (build-recursively/import 'regex-case)
            ; recompiles regex.scm (a dependency) if necessary,
            ; then (re)compiles regex-case.scm if necessary and imports it.
> (regex-case "" ("http://(.+)" (_ url) url) (else url))
> *module-search-path* 

Sweet. I'm guessing it'll also be possible to build the r6rs library syntax in gambit scheme the same way.

Monday, February 26, 2024

Andy Wingo

on the impossibility of composing finalizers and ffi

While poking the other day at making a Guile binding for Harfbuzz, I remembered why I don’t much do this any more: it is impossible to compose GC with explicit ownership.

Allow me to illustrate with an example. Harfbuzz has a concept of blobs, which are refcounted sequences of bytes. It uses these in a number of places, for example when loading OpenType fonts. You can get a peek at the blob’s contents back with hb_blob_get_data, which gives you a pointer and a length.

Say you are in LuaJIT. (To think that for a couple years, I wrote LuaJIT all day long; now I can hardly remember.) You get a blob from somewhere and want to get its data. You define a wrapper for hb_blob_get_data:

local ffi = require("ffi")
local hb = ffi.load("harfbuzz")
ffi.cdef [[
typedef struct hb_blob_t hb_blob_t;

const char *
hb_blob_get_data (hb_blob_t *blob, unsigned int *length);
]]

Presumably you then arrange to release LuaJIT’s reference on the blob when GC collects a Lua wrapper for a blob:

ffi.cdef [[
void hb_blob_destroy (hb_blob_t *blob);
]]

function adopt_blob(ptr)
  return ffi.gc(ptr, hb.hb_blob_destroy)
end

OK, so let’s say we get a blob from somewhere, and want to copy out its contents as a byte string.

function blob_contents(blob)
   local len_out ="unsigned int[1]")
   local contents = hb.hb_blob_get_data(blob, len_out)
   local len = len_out[0]
   return ffi.string(contents, len)
end

The thing is, this code is as correct as you can get it, but it’s not correct enough. In between the call to hb_blob_get_data and, well, anything else, GC could run, and if blob is not used in the future of the program execution (the continuation), then it could be collected, causing the hb_blob_destroy finalizer to release the last reference on the blob, freeing contents: we would then be accessing invalid memory.

Among GC implementors, it is a truth universally acknowledged that a program containing finalizers must be in want of a segfault. The semantics of LuaJIT do not prescribe when GC can happen and what values will be live, so the GC and the compiler are not constrained to extend the liveness of blob to, say, the entirety of its lexical scope. It is perfectly valid to collect blob after its last use, and so at some point a GC will evolve to do just that.

I chose LuaJIT not to pick on it, but rather because its FFI is very straightforward. All other languages with GC that I am aware of have this same issue. There are but two work-arounds, and neither are satisfactory: either develop a deep and correct knowledge of what the compiler and run-time will do for a given piece of code, and then pray that knowledge does not go out of date, or attempt to manually extend the lifetime of a finalizable object, and then pray the compiler and GC don’t learn new tricks to invalidate your trick.

This latter strategy takes the form of “remember-this” procedures that are designed to outsmart the compiler. They have mostly worked for the last few decades, but I wouldn’t bet on them in the future.

Another way to look at the problem is that once you have a system working—though, how would you know it’s correct?—then you either never update the compiler and run-time, or you become fast friends with whoever maintains your GC, and probably your compiler too.

For more on this topic, as always Hans Boehm has the first and last word; see for example the 2002 Destructors, finalizers, and synchronization. These considerations don’t really apply to destructors, which are used in languages with ownership and generally run synchronously.

Happy hacking, and be safe out there!

by Andy Wingo at Monday, February 26, 2024

Wednesday, February 14, 2024



Creating zshbrev kinda ground my releasing of apps to a halt, because now instead of making a whole app I'll just add a few lines to .zshbrev/functions.scm, usually hardcoding paths and variables too just because that's simpler.

Like right now, I wanted a simple CLI app to turn a shell glob into a .pls playlist so I just:

(define (lspls . files)
  (print "[playlist]
NumberOfEntries=" (length files))
  ((over (print "File" (add1 i) "=" x))
   (strse "/var/example/path1"
          (map realpath files))))

(Using my real file paths and URL prefixes instead of those example ones.)

Zshbrev too good! The Emacs-inspired whole “big ball of spaghetti” approach where any function can call any other function—like “realpath” in this example, originally defined for something else—turned out to be quite the ticket for code reuse.

by Idiomdrottning ( at Wednesday, February 14, 2024

Saturday, February 10, 2024

GNU Guix

Guix Days 2024 and FOSDEM recap

Guix contributors and users got together in Brussels to explore Guix's status, chat about new ideas and spend some time together enjoying Belgian beer! Here's a recap of what was discussed.

Day 1

The first day kicked off with an update on the project's health, given by Efraim Flashner representing the project's Maintainer collective. Efraim relayed that the project is doing well, with lots of exciting new features coming into the archive and new users taking part. It was really cool listening to all the new capabilities - thank-you to all our volunteer contributors who are making Guix better! Efraim noted that the introduction of Teams has improved collaboration - equally, that there's plenty of areas we can improve. For example, concern remains over the "bus factor" in key areas like infrastructure. There's also a desire to release more often as this provides an updated installer and lets us talk about new capabilities.

Christopher Baines gave a general talk about the QA infrastructure and the ongoing work to develop automated builds. Chris showed a diagram of the way the services interact which shows how complex it is. Increasing automation is very valuable for users and contributors, as it removes tedious and unpleasant drudgery!

Then, Julien Lepiller, representing the Guix Foundation, told us about the work it does. Julien also brought some great stickers! The Guix Foundation is a non-profit association that can receive donations, host activities and support the Guix project. Did you know that it's simple and easy to join? Anyone can do so by simply filling in the form and paying the 10 Euro membership fee. Contact the Guix Foundation if you'd like to know more.

The rest of the day was taken up with small groups discussing topics:

  • Goblins, Hoot and Guix: Christine Lemmer-Webber gave an introduction to the Spritely Institute's mission to create decentralized networks and community infrastructure that respects user freedom and security. There was a lot of interesting discussion about how the network capabilities could be used in Guix, for example enabling distributed build infrastructure.

  • Infrastructure: There was a working session on how the project's infrastructure works and can be improved. Christopher Baines has been putting lots of effort into the QA and build infrastructure.

  • Guix Home: Gábor Boskovits coordinated a session on Guix Home. It was exciting to think about how Guix Home introduces the "Guix way" in a completely different way from packages. This could introduce a whole new audience to the project. There was interest in improving the overall experience so it can be used with other distributions (e.g. Fedora, Arch Linux, Debian and Ubuntu).

  • Release management: Julien Lepiller led us through a discussion of release management, explaining the ways that all the parts fit together. The most important part that has to be done is testing the installation image which is a manual process.

Day 2

The second day's sessions:

  • Funding: A big group discussed funding for the project. Funding is important because it determines many aspects of what the group can achieve. Guix is a global project so there are pools of money in the United States and Europe (France). Andreas Enge and Julien Lepiller represented the group that handle finance, giving answers on the practical elements. Listening to their description of this difficult and involved work, I was struck how grateful we all are that they're willing to do it!

  • Governance: Guix is a living project that continues to grow and evolve. The governance discussion concerned how the project continues to chart a clear direction, make good decisions and bring both current and new users on the journey. There was reflection on the need for accountability and quick decision making, without onerous bureaucracy, while also acknowledging that everyone is a volunteer. There was a lot of interest in how groups can join together, perhaps using approaches like Sociocracy.

    Simon Tournier has been working on an RFC process, which the project will use to discuss major changes and make decisions. Further discussion is taking place on the development mailing-list if you'd like to take part.

  • Alternative Architectures: The Guix team continues to work on alternative architectures. Efraim had his 32-bit PowerPC (Powerbook G4) with him, and there's continued work on PowerPC64, ARM64 and RISC-V 64. The big goal is a complete source bootstrap across all architectures.

  • Hurd: Janneke Nieuwenhuizen led a discussion around GNU Hurd, which is a microkernel-based architecture. Activity has increased in the last couple of years, and there's support for SMP and 64-bit (x86) is work in progress. There's lots of ideas and excitement about getting Guix to work on Hurd.

  • Guix CLI improvements: Jonathan coordinated a discussion about the state of the Guix CLI. A consistent, self-explaining and intuitive experience is important for our users. There are 39 top-level commands that cover all the functionality, from package management through to environment and system creation! Various improvements were discussed, such as making extensions available and improving documentation about the REPL work-flow.

FOSDEM 2024 videos

Guix Days 2024 took place just before FOSDEM 2024. FOSDEM was a fantastic two days of interesting talks and conversations. If you'd like to watch the Guix-related talks, the videos are being put online.

Join Us

There's lots happening in Guix and many ways to get involved. We're a small and friendly project that values user freedom and a welcoming community. If this recap has inspired your interest, take a look at the raw notes and join us!

by Steve George at Saturday, February 10, 2024

Thursday, February 8, 2024

The Racket Blog

Racket v8.12

posted by Stephen De Gabrielle

Racket v8.12 is now available



We are pleased to announce Racket v8.12 is now available from

As of this release:

  • The “Die Macht der Abstraktion” language levels are no longer present, replaced by the “Schreibe dein Programm” language levels, which have been available for several years. (see DeinProgramm - Schreibe Dein Programm!)
  • The release fixes a problem with the binding structure of the for/fold form in the rare situation when an iteration clause identifier shadowed an accumulator identifier. This change may break code that depends on the old binding structure. (see 3.18 Iterations and Comprehensions: for, for/list, …)
  • Racket automatically sets the close-on-exec flag when opening a file, on systems where this is available. This change lowers the cost of avoiding problems that can occur when file descriptors become accidentally shared between processes. (see 13.1.5 File Ports)
  • The match form supports hash and hash* patterns. (see 9 Pattern Matching)
  • The vector-set/copy function allows creation of a new vector that differs at only one index. This change also adds vector-append and vector-copy primitives. (see 4.12 Vectors)
  • The pregexp-quote function brings the functionality of regexp-quote to pregexps. (see 4.8 Regular Expressions)
  • The C FFI convention-based converter supports PascalCase and camelCase in addition to an underscore-based convention. (see 5.6 Defining Bindings)
  • The racket/case library allows case-like forms that use different equality comparisons, such as eq? and equal-always?. (see 3.13 Dispatch: case)
  • Scribble rendering to HTML adds linking and information buttons when hovering over heading titles.
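For flavor, here is a sketch of how a few of these library additions fit together. This is an illustrative snippet pieced together from the release notes and the linked documentation, not a verified program; in particular, the bracketed hash* pattern shape and the case/equal-always name are taken on trust from the docs:

```racket
#lang racket
(require racket/case)

;; vector-set/copy: a fresh vector that differs at exactly one index;
;; the original vector is left unchanged.
(define v (vector 1 2 3))
(vector-set/copy v 0 'a)

;; pregexp-quote: escape a literal string so it can be embedded in a pregexp.
(regexp-match? (pregexp (pregexp-quote "1+1")) "1+1")

;; The new hash* pattern: match on some keys, ignore the rest.
(match (hash 'x 1 'y 2 'z 3)
  [(hash* ['x x] ['y y]) (+ x y)])

;; racket/case: case-style dispatch with a chosen equality predicate.
(case/equal-always "abc"
  [("abc") 'found]
  [else 'missing])
```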

This release also includes many other documentation improvements, optimizations, and bug fixes!

Thank you

Thank you to the people who contributed to this release:

Alex Harsányi, Alex Knauth, Alex Muscar, Alexis King, Ben Greenman, Bert De Ketelaere, Bob Burger, Bogdan Popa, Chris Payne, Fred Fu, J. Ryan Stinnett, Jamie Taylor, Jared Forsyth, Jarhmander, Jens Axel Søgaard, Joel Dueck, John Clements, Jordan Johnson, Ken Harris, Laurent Orseau, Mao Yifu, Marc Nieper-Wißkirchen, Matteo d’Addio, Matthew Flatt, Matthias Felleisen, Micah Cantor, Mike Sperber, naveen srinivasan, Oscar Waddell, Philip McGrath, Philippe Meunier, Robby Findler, Rocketnia, Sam Phillips, Sam Tobin-Hochstadt, Sarthak Shah, Shu-Hung You, Sorawee Porncharoenwase, Stephen De Gabrielle, Tom Price, ur4t, Wing Hei Chan, and ZhangHao

Feedback Welcome

Questions and discussion welcome at the Racket community Discourse or Discord

Please share

If you can, please help get the word out to users and platform-specific repo packagers.

Racket - the Language-Oriented Programming Language - version 8.12 is now available from

See for the release announcement and highlights.

Thank you to the many people who contributed to this release!


by The Unknown Author at Thursday, February 8, 2024

Tuesday, February 6, 2024


Creating PRs from the command line

Update: The upstream documentation was updated shortly after I wrote this, clarifying some of the things I was unsure about.

On Gitea instances and Forgejo instances (like Codeberg) you can create git PRs from the command line without using the web UI, and, unlike GitHub, you don’t have to install anything beyond the git you cloned with.

You do need an account on that particular instance that the repo is hosted on. It’s not truly decentralized the way git send-email or git request-pull is. Those two methods are way more awesome and I’m always glad to see more support being built for them instead of for this local-account-only PR workflow. Needing a thousand accounts all over the place is such a pain with the Gitea/Forgejo model.

Clone the repo, make and commit your changes, figure out what branch you want to push it to (this example will use the branch name “main”).

Then run

git push origin HEAD:refs/for/main -o topic=main

That prompts you for your username and password on that instance and now the awesome part is that it creates the fork and the PR and everything else for you, you don’t have to fiddle with an annoying web UI. It’s all from the comforts of your own cozily scriptable CLI.

If you wanna use a feature branch, let’s say you’re working on a feature called “hacks” that you wanna push into main, that’d be:

git push origin HEAD:refs/for/main -o topic=hacks

You can also replace the word for (which means a normal PR) with drafts or for-review and change the title or description with -o title="My important PR" -o description="This fixes the dosh-distimmer so we don't have to rely on that darn Gostak" or whatever.
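To see what the git side of this does, here is a self-contained sketch you can run against a throwaway local bare repository standing in for the forge. A real Forgejo/Gitea server intercepts the refs/for/ push and its options to open a PR; plain git just stores the ref, which is what this shows:

```shell
#!/bin/sh
# Sketch: the git side of `git push origin HEAD:refs/for/main -o topic=hacks`,
# with a local bare repo as a stand-in "forge".
set -e
tmp=$(mktemp -d)

# Stand-in for the Forgejo/Gitea-hosted repository. Push options need the
# receiving side to advertise support, which a real forge does for you.
git init --bare "$tmp/server.git"
git -C "$tmp/server.git" config receive.advertisePushOptions true

# Clone it and commit a change.
git clone "$tmp/server.git" "$tmp/work"
cd "$tmp/work"
git config user.email you@example.com
git config user.name "You"
echo hack > feature.txt
git add feature.txt
git commit -m "Add feature"

# The Forgejo/Gitea-style push: target branch after refs/for/, and the
# topic (the PR's branch name) as a push option.
git push origin HEAD:refs/for/main -o topic=hacks

# Plain git has simply created refs/for/main on the server; a real forge
# would turn this push into a pull request named after the topic instead.
git -C "$tmp/server.git" show-ref
```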

Note that pushing to origin with that weird custom refspec again will open a new PR each time.

If you instead wanna push more changes to your already-submitted-waiting-for-merge PR, I’m not sure how to do that yet, maybe that’s what the -o topic= is for.

Unlike GitHub, where every project you send PRs to litters your own account page with a useless dangling fork, here the forks seem more ephemeral: they don’t show up on the web page and you can’t even clone them down again; they only exist for the PR. So when I make drive-by commits on GH, I often just work from /tmp, because if I ever need my own version of that repo I can clone it from the web page. Forgejo/Gitea works differently in that regard.

Older repos?

I got the impression that this is on by default for repos created within the last few years but older repos need to turn on agit support.

by Idiomdrottning at Tuesday, February 6, 2024