# Planet Scheme

## Monday, November 28, 2022

### Andy Wingo

#### are ephemerons primitive?

Good evening :) A quick note, tonight: I've long thought that ephemerons are primitive and can't be implemented with mark functions and/or finalizers, but today I think I have a counterexample.

For context, one of the goals of the GC implementation I have been working on is to replace Guile's current use of the Boehm-Demers-Weiser (BDW) conservative collector. Of course, changing a garbage collector for a production language runtime is risky, and for Guile one of the mitigation strategies for this work is that the new collector is behind an abstract API whose implementation can be chosen at compile-time, without requiring changes to user code. That way we can first switch to BDW-implementing-the-new-GC-API, then switch the implementation behind that API to something else.

Abstracting GC is a tricky problem to get right, and I thank the MMTk project for showing that this is possible -- you have user-facing APIs that need to be implemented by concrete collectors, but also extension points so that the user can provide some compile-time configuration too, for example to provide field-tracing visitors that take into account how a user wants to lay out objects.

Anyway. As we discussed last time, ephemerons usually have explicit support from the GC, so we need an ephemeron abstraction as part of the abstract GC API. The question is, can BDW-GC provide an implementation of this API?

I think the answer is "yes, but it's very gnarly and will kill performance so bad that you won't want to do it."

the contenders

Consider that the primitives that you get with BDW-GC are custom mark functions, run on objects when they are found to be live by the mark workers; disappearing links, a kind of weak reference; and finalizers, which receive the object being finalized, can allocate, and indeed can resurrect the object.

BDW-GC's finalizers are a powerful primitive, but not one that is useful for implementing the "conjunction" aspect of ephemerons, as they cannot constrain the marker's idea of graph connectivity: a finalizer can only prolong the life of an object subgraph, not cut it short. So let's put finalizers aside.

Weak references have a tantalizingly close kind of conjunction property: if the weak reference itself is alive, and the referent is also otherwise reachable, then the weak reference can be dereferenced. However this primitive only involves the two objects E and K; there's no way to then condition traceability of a third object V to E and K.

We are left with mark functions. These are an extraordinarily powerful interface in BDW-GC, but also somewhat expensive: not inlined, and going against the grain of what BDW-GC is really about (heaps in which the majority of all references are conservative). But, OK. The way they work is, your program allocates a number of GC "kinds", and associates mark functions with those kinds. Then when you allocate objects, you use those kinds. BDW-GC will call your mark functions when tracing an object of those kinds.

Let's assume firstly that you have a kind for ephemerons; then when you go to mark an ephemeron E, you mark the value V only if the key K has been marked. Problem solved, right? Only halfway: you also have to handle the case in which E is marked first, then K. So you publish E to a global hash table, and... well. You would mark V when you mark a K for which there is a published E. But, for that you need a hook into marking V, and V can be any object...

So now we assume additionally that all objects are allocated with user-provided custom mark functions, and that all mark functions check whether the marked object is in the published table of pending ephemerons, and if so mark the corresponding values. This is essentially what a proper ephemeron implementation would do, though there are some optimizations one can do to avoid checking the table for each object before the mark stack runs empty for the first time. In this case, yes, you can do it! Additionally, if you register disappearing links for the K field in each E, you can know if an ephemeron E was marked dead in a previous collection. Add a pre-mark hook (something BDW-GC provides) to clear the pending ephemeron table, and you are in business.
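
To make the scheme concrete, here is a toy marker in Python. This is an illustrative model only: the class, function names, and tracing loop are invented, not BDW-GC's API, and the ephemerons themselves are assumed to be live.

```python
# A toy model of marking with per-object hooks that consult a published
# table of pending ephemerons. Invented names; not BDW-GC's actual API.
class Ephemeron:
    def __init__(self, key, value):
        self.key = key
        self.value = value

def trace(roots, edges, ephemerons):
    """Mark everything reachable from roots; an ephemeron's value V is
    traced only once its key K has itself been marked."""
    pending = {}                      # published table: key -> ephemerons
    for e in ephemerons:
        pending.setdefault(e.key, []).append(e)

    marked = set()
    stack = list(roots)
    while stack:
        obj = stack.pop()
        if obj in marked:
            continue
        marked.add(obj)
        stack.extend(edges.get(obj, []))
        # The custom "mark function" hook: if obj is the key of a pending
        # ephemeron, that ephemeron's value becomes traceable now.
        for e in pending.pop(obj, []):
            stack.append(e.value)
    return marked

print(sorted(trace(roots=['k'], edges={}, ephemerons=[Ephemeron('k', 'v')])))
# ['k', 'v']
```

Note the conjunction property: with roots that do not reach `'k'`, the value `'v'` is never pushed, which is exactly what finalizers and plain weak references cannot express.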

yes, but no

So, it is possible to implement ephemerons with just custom mark functions. I wouldn't want to do it, though: missing the mostly-avoid-pending-ephemeron-check optimization would be devastating, and really what you want is support in the GC implementation. I think that for the BDW-GC implementation in whippet I'll just implement weak-key associations, in which the value is always marked strongly unless the key was dead on a previous collection, using disappearing links on the key field. That way a (possibly indirect) reference from a value V to a key K can indeed keep K alive, but oh well: it's a conservative approximation of what should happen, and not worse than what Guile has currently.

Good night and happy hacking!

## Tuesday, November 22, 2022

### Scheme Requests for Implementation

#### SRFI 236: Evaluating expressions in an unspecified order

SRFI 236 is now in final status.

This SRFI defines the independently syntax, which can be used to combine side effects into one expression without specifying their relative order.

## Sunday, November 20, 2022

### Idiomdrottning

#### Matchable’s custom destructuring

Easily the two features I use the most in Matchable (which I mostly use through my match generics) are ? and =, the custom predicate and the custom accessor respectively.

I’ve found that the ?-> combinator is a great match for = when you conditionally want to coerce a data type into another type.

Another comfy use for = is with a custom destructurer that you can then further destructure with matchable. You can just keep on nesting = procs deep in there at every layer.

Here’s a simple example.

This is from the SVG path lib from earlier today. The context is a procedure named distance that takes two attribute alists, like ((coords (x n) (y n))), and calculates the Pythagorean diagonal distance between them.

Here is the custom destructurer, which in turn uses alist-ref in its own destructuring parameter list to grab ((x n) (y n)) and then returns a list of just those two numbers.

(define (get-coords (= (c alist-ref 'coords) atts))
  (list (car (alist-ref 'x atts))
        (car (alist-ref 'y atts))))


And here’s how that destructurer is used in practice, twice, in order to bind the values in those lists directly.

(define (distance (= get-coords (x1 y1)) (= get-coords (x2 y2)))
  (hypothenuse (- x1 x2) (- y1 y2)))


Using = first and then further destructuring the value with another pattern might not have been intended (since it was created primarily for record access) but it’s so flexible and it, along with ?, is an easy way for end users to extend Matchable.
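
For readers who don’t speak Matchable, here is the same data flow as a loose Python mirror (dicts standing in for alists; the names are mine, not the library’s):

```python
import math

# Loose mirror of get-coords / distance above: pull the coords entry out
# of each attribute alist, then take the Pythagorean diagonal distance.
def get_coords(atts):
    coords = dict(dict(atts)["coords"])   # like (alist-ref 'coords atts)
    return coords["x"][0], coords["y"][0]

def distance(a1, a2):
    (x1, y1), (x2, y2) = get_coords(a1), get_coords(a2)
    return math.hypot(x1 - x2, y1 - y2)

p1 = [("coords", [("x", [0]), ("y", [0])])]
p2 = [("coords", [("x", [3]), ("y", [4])])]
print(distance(p1, p2))   # 5.0
```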

### Scheme Requests for Implementation

#### SRFI 243: Unreadable objects

SRFI 243 is now in draft status.

This SRFI standardizes a widely used lexical syntax for writing unreadable objects.

### The Racket Blog

#### Shallow and Optional Typed Racket

With the Racket 8.7 release, Typed Racket includes two languages that weaken the run-time behavior of types: Shallow Typed Racket and Optional Typed Racket. Whereas normal Typed Racket types (Deep types) enforce guarantees that any module can depend on, Shallow types enforce only local invariants in typed code, and Optional types enforce nothing. In return, Shallow and Optional types add less overhead to gradual interactions. Code often runs faster and simpler than with Deep types.

Shallow Typed Racket and Optional Typed Racket use the same static types and typechecker as normal (Deep) Typed Racket.

# 1 Background: Typed–Untyped Interaction

A key feature of Typed Racket is that it allows typed code to interact with untyped code. An untyped module can import from a typed one with a normal require form, and a typed module can import from an untyped one by using a require/typed form:

For example, if an untyped module provides a struct and a function:

 #lang racket
 ; distance.rkt
 (provide (struct-out pt) distance)

 (struct pt (x y))

 ; distance : pt pt -> real
 (define (distance p1 p2)
   (sqrt (+ (sqr (- (pt-x p2) (pt-x p1)))
            (sqr (- (pt-y p2) (pt-y p1))))))

then a typed module can import the untyped bindings:

 #lang typed/racket
 (require/typed "distance.rkt"
   [#:struct pt ([x : Real] [y : Real])]
   [distance (-> pt pt Real)])

 (distance (pt 3 5) (pt 7 0))

So far so good. One program combines untyped and typed code.

Now, what if the declared types are wrong?

The module below gives a wrong type to the distance function. This type expects an integer result instead of a real number:

 #lang typed/racket
 (require/typed "distance.rkt"
   [#:struct pt ([x : Real] [y : Real])]
   [distance (-> pt pt Integer)])

 (distance (pt 3 5) (pt 7 0))

Even with the wrong type, the program still typechecks (!) because Typed Racket does not analyze untyped code. It assumes the require/typed types are correct and moves on.

But this program does have a type error. At run-time, the call to distance returns a float instead of an integer, contradicting the static type.

If we want to catch the error, then Typed Racket needs to enforce types at run-time when typed and untyped code interact.

# 2 Enforcing Type Boundaries

By default, Typed Racket compiles types to higher-order contracts. The function type (-> pt pt Integer), for example, compiles to a function contract that will raise an exception if it gets attached to a function that eventually returns a non-integer result.
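
As a rough analogue of such a result-checking wrapper, here is a Python sketch (invented names; real Typed Racket contracts also check arguments and track blame):

```python
import math

# A rough Python analogue of a contract for (-> pt pt Integer):
# wrap the function and check its result on every call.
def integer_result_contract(f):
    def wrapped(*args):
        result = f(*args)
        if not isinstance(result, int):
            raise TypeError(f"contract violation: expected Integer, got {result!r}")
        return result
    return wrapped

def distance(p1, p2):
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

checked = integer_result_contract(distance)
try:
    checked((3, 5), (7, 0))      # true result is sqrt(41), a non-integer
except TypeError as e:
    print(e)                     # the contract catches the bad result
```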

Contracts enforce types with strong guarantees and offer useful debugging information if an error occurs. But they can also be expensive, especially when large, mutable, or higher-order values frequently cross boundaries. These high costs inspired a long search for cheaper ways to enforce types than the standard Deep strategy.

Two promising alternatives are Shallow and Optional types, neither of which uses higher-order contracts.

Shallow types use lightweight assertions called shape checks to provide a basic soundness guarantee. Instead of putting heavy contracts at module boundaries, Shallow Typed Racket rewrites typed code to incrementally check the shape of values.

Optional types use no checks. They are completely unreliable for reasoning about typed-untyped interactions. But, they also have zero cost.

# 3 How to Select an Enforcement Strategy

The #lang of a Typed Racket module decides how its types behave at run-time. To change strategies, change the language.

For a complete list of forms that change depending on the #lang, see Forms that Depend on the Behavior of Types in the Typed Racket Reference.

# 4 Example: Fewer Run-time Checks

Deep types can be significantly more expensive than Shallow and Optional. For example, the two functions below expect a data structure and access part of the data: list-first returns the first element of a list and vec-length counts the number of elements in a vector.

 (: list-first (-> (Listof Real) Real))
 (define (list-first l)
   (car l))

 (: vec-length (-> (Vectorof Real) Index))
 (define (vec-length v)
   (vector-length v))

When these functions get called from untyped code, they have very different costs depending on the behavior of types:

• Deep types check all incoming data structures exhaustively. Lists undergo a full traversal that validates every list element, including unused ones. Vectors get wrapped in a chaperone to guard against future writes.

• Shallow types check the shape of the incoming data structures using list? and vector?. Elements of these structures get checked only when they are used by typed code.

• Optional types check nothing at all.
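
As a loose analogy (in Python, with invented helper names; this is not what Typed Racket generates), here is roughly what each strategy checks when an untyped list of reals reaches list-first:

```python
# Invented stand-ins for the three strategies guarding
# list-first : (-> (Listof Real) Real) at a typed-untyped boundary.

def deep_check(xs):
    """Deep: validate the whole structure up front (an O(n) traversal,
    even over elements the function never uses)."""
    if not isinstance(xs, list):
        raise TypeError("expected a list")
    for x in xs:
        if not isinstance(x, (int, float)):
            raise TypeError("expected real elements")
    return xs

def shallow_check(xs):
    """Shallow: an O(1) shape check; elements get checked later,
    only if and when typed code uses them."""
    if not isinstance(xs, list):
        raise TypeError("expected a list")
    return xs

def optional_check(xs):
    """Optional: no run-time check at all."""
    return xs

def list_first(xs):
    return xs[0]
```

With a million-element list crossing the boundary on every call, deep_check pays for every element each time, while shallow_check costs the same as for a singleton and optional_check costs nothing.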

Further Reading has links to larger examples where the costs of Deep types are huge compared to Shallow and Optional.

# 5 Example: Weaker Types, Simpler Behavior

Shallow and Optional types raise fewer run-time errors than Deep. In many cases, the lack of an error means that a bug goes undetected. Deep finds the bug and the other two miss it because they skipped a run-time check.

But for some programs, the Deep types are too cautious. They reject a program that could run safely.

One restrictive type in the Deep world is Procedure, the type that describes any function. Because this type says nothing about argument and return types, Deep Typed Racket never allows calls to a procedure, even after a cast:

 #lang typed/racket ; or #lang typed/racket/deep

 (: call-proc (-> Procedure Symbol))
 (define (call-proc f)
   ((cast f (-> Symbol Symbol)) 'hello))

 > (call-proc identity)
 g4: arity mismatch;
 the expected number of arguments does not match the given number
  given: 1

Shallow types do allow calls to a Procedure, after a cast:

 #lang typed/racket/shallow

 (: call-proc (-> Procedure Symbol))
 (define (call-proc f)
   ((cast f (-> Symbol Symbol)) 'hello))

 > (call-proc identity)
 - : Symbol
 'hello

Optional types also allow calls to a Procedure:

 #lang typed/racket/optional

 (: call-proc (-> Procedure Symbol))
 (define (call-proc f)
   ((cast f (-> Symbol Symbol)) 'hello))

 > (call-proc identity)
 - : Symbol
 'hello

# 6 Four-Way Interactions

Typed–untyped interactions are much more exciting now that “typed code” can be Deep, Shallow, or Optional. These three styles of typed code can all interact with untyped code (of course), and they can also interact with one another.

Types in this four-way world need to be enforced at run-time depending on how they were defined:

• Deep types get enforced with contracts at all boundaries to non-Deep code. This means that Deep–Shallow and Deep–Optional interactions can be expensive.

• Shallow types guard the boundaries to Optional and untyped code with shape checks.

• Optional types never enforce themselves. But a Deep-typed or Shallow-typed client of Optional code will insert run-time checks as a consequence of their strategies.

These checks between Deep, Shallow, and Optional may come as a surprise, especially because all typed code gets validated by the same static type checker. But the checks are necessary because run-time interactions can mix untyped values into some of these typed modules. If an Optional module were to import an untyped function and send it to Deep-typed code without a contract wrapper, it could break the Deep type guarantees.

# 7 Reflections on Deep, Shallow, and Optional

Deep types, Shallow types, and Optional types have complementary strengths. When and where does each one work best? Here are a few suggestions, based on When to Use Deep, Shallow, or Optional? from the Typed Racket Guide:

• Deep types make the most of static checking and optimizations. Use them for self-sufficient groups of typed modules. Avoid them at high-traffic boundaries to untyped or to non-Deep code.

• Shallow types provide a weak but useful soundness guarantee at low cost. Use them in typed modules that frequently communicate with the untyped world. Avoid them, however, in large typed modules because every expression in typed code potentially needs a Shallow shape check.

• Optional types use types for static analysis and nothing more. Use them when you do not want types enforced at run-time.

We are very excited to have these languages in the Typed Racket family. Learning more about where they fit in practical applications and about how developers tend to use them will be part of the adventure.

### Idiomdrottning

#### An SVG path data library for Chicken Scheme

This library can turn SVG path data into a tagged sexp format (compatible with SXML). You can switch between absolute and relative coordinates, and turn it into a minified string.

The procedures work on both this library’s own tagged format and on path data strings.

The data format uses single-letter symbols for the commands and exacts for the numbers. Every node is always a separate list, even when there are multiple subsequent nodes with the same command.

Every node can have attributes with an @ attribute marker, just like in SXML.

You can put in any attributes you want so you can keep track of node-local data. They’re stripped out when you export the path back to string.

Here’s an example of an H command with one argument, which happens to be 105, and with absolute coords added as an attribute:

(H (@ (coords (x 105) (y 200))) 105)


## Procedures

### path->string

(path->string "M100 100 L101,101 90 102")
⇒ "M100 100l1 1-11 1"


Reads a path (in sexp or string format) and returns a minified string.

### ->apath

(->apath "M100 200L200 100-100-200")
⇒ ((M 100 200) (L 200 100) (L -100 -200))


Reads a path (in sexp or string format) and returns sexp format with absolute coordinates.

### ->rpath

(->rpath "M100 200L200 100-100-200")
⇒ ((m 100 200) (l 100 -100) (l -300 -300))


Reads a path (in sexp or string format) and returns sexp format with relative coordinates.
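
For intuition, here is a hypothetical sketch in Python of the absolute-to-relative conversion on a simplified (command x y) node list; it is not the egg's actual code, and it ignores attributes and command letters other than M/L:

```python
# Hypothetical sketch of absolute->relative conversion on a simplified
# node list; attributes and other SVG command letters are ignored.
def to_relative(nodes):
    out = []
    cx = cy = 0                       # SVG's current point starts at the origin
    for cmd, x, y in nodes:
        out.append((cmd.lower(), x - cx, y - cy))
        cx, cy = x, y                 # each absolute node updates the current point
    return out

print(to_relative([('M', 100, 200), ('L', 200, 100), ('L', -100, -200)]))
# [('m', 100, 200), ('l', 100, -100), ('l', -300, -300)]
```

This reproduces the ->rpath example above: each relative step is the difference from the previous absolute position.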

### ->mpath

(->mpath "M100 200L200 100-100-200")
⇒ ((M 100 200) (L 200 100) (L -100 -200))


Reads a path (in sexp or string format) and returns sexp format with whatever coordinate format will print the shortest. This is almost never what you want when doing path programming, but path->string uses it internally.

Change a path (in sexp or string format) into sexp format with a coords attribute added that stores the absolute coordinates of the node. If the path already had coords, only the newly computed coords will be returned.

Change a path (in sexp or string format) into sexp format with a distance attribute added that stores the Pythagorean diagonal distance to the next node. If the path has a coords attribute on all of its nodes, they will be trusted; otherwise new ones will be calculated and added for all nodes. If the path already had distance attributes, only the newly computed distance attributes will be returned.

## Source code

For a repo,

git clone https://idiomdrottning.org/svgpath


## Friday, November 18, 2022

### Arthur A. Gleckler

#### NanoVG-REPL

The Stomachion, rendered using NanoVG-REPL

# NanoVG-REPL

Last year, I tried an experiment. I wanted to see how easy it would be to do standard 2D graphics from Scheme, but not through a foreign function interface. For the application I had in mind, low refresh rates would be just fine, so I decided to try communicating with a graphics library using a text protocol over a pipe. NanoVG-REPL is the result of that experiment.

Using a text protocol made porting from language to language a breeze. I even tried doing graphics from Bash.

For simple animations, NanoVG-REPL can reach more than two hundred frames per second on my laptop. But the graphics primitives are universal, so if performance becomes an issue, it should be possible to port a program to another, faster library that does use an FFI.

NanoVG-REPL is a wrapper around NanoVG, a small antialiased vector graphics rendering library for OpenGL. It provides a Read-Eval-Print Loop for running NanoVG commands. To use it, just send commands to stdout, read responses from stdin, and listen for asynchronous events (e.g. keyboard and mouse events) on a named pipe.
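
The round-trip shape of such a pipe-based text protocol can be sketched in a few lines of Python; here `cat` stands in for the graphics server and the command name is made up, so this shows only the pattern, not NanoVG-REPL's actual command set:

```python
import subprocess

# `cat` stands in for the server binary: it echoes each line back,
# which lets us see the write-command / read-response round trip.
proc = subprocess.Popen(["cat"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True)

def send(command):
    proc.stdin.write(command + "\n")  # one command per line
    proc.stdin.flush()
    return proc.stdout.readline().strip()

reply = send("begin-frame")           # made-up command; cat echoes it back
proc.stdin.close()
proc.wait()
print(reply)                          # begin-frame
```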

I may never use NanoVG-REPL for serious work, but I consider the experiment a success. I'd like to try wrapping a more modern 2D graphics library this way. After I wrote NanoVG-REPL, I found out about GTK-server, which uses the same approach to wrap the GTK GUI library, so I'm not the only one thinking along these lines.

### Idiomdrottning

#### A proposed package alert system

Maybe package managers, I’m thinking of something like go get here, could work like this:

1. Writing a package would be easy and you could just publish a random repo anywhere.
2. End users and admins could pull packages from any of those rando strangers’ repos, but updates wouldn’t be pushed out.
3. Package writers could alert the package manager app team for serious security updates.
4. The manager team would vet the fixed version, and if it was a false alarm would send out a message “don’t trust this package, they misuse the security update system” and if it was a real fix they’d send out a message telling you to update and giving you a way to optionally press “yes” right there and get their vetted version. (Or if it seems like an honest false alarm as opposed to a malicious one, just do nothing.)

In other words, when it’s business as normal, your everyday feature creep and bug fixes, it’s not centralized and there’s zero hoops to publish a package and install a package straight from the repo. When there’s danger, it is centralized and the package manager itself takes over and that team decides whether to alert or not alert.

## Wednesday, November 16, 2022

### Jeremy Kun

#### Polynomial Multiplication Using the FFT

Problem: Compute the product of two polynomials efficiently.

Solution:

import numpy
from numpy.fft import fft, ifft

def poly_mul(p1, p2):
    """Multiply two polynomials.

    p1 and p2 are arrays of coefficients in degree-increasing order.
    """
    deg1 = p1.shape[0] - 1
    deg2 = p2.shape[0] - 1
    # Would be 2*(deg1 + deg2) + 1, but the next-power-of-2 handles the +1
    total_num_pts = 2 * (deg1 + deg2)
    next_power_of_2 = 1 << (total_num_pts - 1).bit_length()

    ff_p1 = fft(numpy.pad(p1, (0, next_power_of_2 - p1.shape[0])))
    ff_p2 = fft(numpy.pad(p2, (0, next_power_of_2 - p2.shape[0])))
    product = ff_p1 * ff_p2
    inverted = ifft(product)
    rounded = numpy.round(numpy.real(inverted)).astype(numpy.int32)
    return numpy.trim_zeros(rounded, trim='b')


Discussion: The Fourier Transform has a lot of applications to science, and I’ve covered it on this blog before, see the Signal Processing section of Main Content. But it also has applications to fast computational mathematics.

The naive algorithm for multiplying two polynomials is the “grade-school” algorithm most readers will already be familiar with (see e.g., this page), but for large polynomials that algorithm is slow. It requires $O(n^2)$ arithmetic operations to multiply two polynomials of degree $n$.

This short tip shows a different approach, which is based on the idea of polynomial interpolation. As a side note, I show the basic theory of polynomial interpolation in chapter 2 of my book, A Programmer’s Introduction to Mathematics, along with an application to cryptography called “Secret Sharing.”

The core idea is that given $n+1$ distinct evaluations of a polynomial $p(x)$ (i.e., points $(x, p(x))$ with different $x$ inputs), you can reconstruct the coefficients of $p(x)$ exactly. And if you have two such point sets for two different polynomials $p(x), q(x)$, a valid point set of the product $(pq)(x)$ is the product of the points that have the same $x$ inputs.

\begin{aligned} p(x) &= \{ (x_0, p(x_0)), (x_1, p(x_1)), \dots, (x_n, p(x_n)) \} \\ q(x) &= \{ (x_0, q(x_0)), (x_1, q(x_1)), \dots, (x_n, q(x_n)) \} \\ (pq)(x) &= \{ (x_0, p(x_0)q(x_0)), (x_1, p(x_1)q(x_1)), \dots, (x_n, p(x_n)q(x_n)) \} \end{aligned}

The above uses $=$ loosely to represent that the polynomial $p$ can be represented by the point set on the right hand side.

So given two polynomials $p(x), q(x)$ in their coefficient forms, one can first convert them to their point forms, multiply the points, and then reconstruct the resulting product.
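
This pipeline can be checked on a tiny example with ordinary evaluation points and a Vandermonde solve standing in for the inverse transform:

```python
import numpy as np

# End-to-end check of "evaluate, multiply pointwise, interpolate",
# using plain evaluation points and a linear solve instead of the FFT.
p = np.array([1, 1])             # 1 + x, degree-increasing coefficients
q = np.array([2, 1])             # 2 + x
xs = np.array([0.0, 1.0, 2.0])   # deg(p*q) = 2, so 3 points suffice

pv = np.polyval(p[::-1], xs)     # polyval expects degree-decreasing order
qv = np.polyval(q[::-1], xs)
pq_vals = pv * qv                # pointwise product = evaluations of p*q

V = np.vander(xs, 3, increasing=True)   # interpolate: solve V @ c = values
coeffs = np.round(np.linalg.solve(V, pq_vals)).astype(int)
print(coeffs)                    # [2 3 1], i.e. 2 + 3x + x^2 = (1+x)(2+x)
```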

The problem is that the two conversions, both to and from the coefficient form, are inefficient for arbitrary choices of points $x_0, \dots, x_n$. The trick comes from choosing special points, in such a way that the intermediate values computed in the conversion steps can be reused. This is where the Fourier Transform comes in: choose the points to be powers of $\omega_N$, the complex $N$-th root of unity, i.e., $x_k = \omega_N^k$. $N$ is required to be large enough so that $\omega_N$’s powers take at least the $2n+1$ distinct values required for interpolating a degree-at-most-$2n$ polynomial, and because we’re doing the Fourier Transform, it will naturally be “the next largest power of 2” bigger than the degree of the product polynomial.

Then one has to observe that, by its very formula, the Fourier Transform is exactly the evaluation of a polynomial at the powers of the $N$-th root of unity! In formulas: if $a = (a_0, \dots, a_{n-1})$ is a list of real numbers, define $p_a(x) = a_0 + a_1x + \dots + a_{n-1}x^{n-1}$. Then $\mathscr{F}(a)(k)$, the Fourier Transform of $a$ at index $k$, is equal to $p_a(\omega_n^k)$. These notes by Denis Pankratov have more details showing that the Fourier Transform formula is a polynomial evaluation (see Section 3), and this YouTube video by Reducible also has a nice exposition. This interpretation of the FT as polynomial evaluation seems to inspire quite a few additional techniques for computing the Fourier Transform that I plan to write about in the future.

The last step is to reconstruct the product polynomial from the product of the two point sets, but because the Fourier Transform is an invertible function (and linear, too), the inverse Fourier Transform does exactly that: given a list of the $n$ evaluations of a polynomial at $\omega_n^k, k=0, \dots, n-1$, return the coefficients of the polynomial.

This all fits together into the code above:

1. Pad the input coefficient lists with zeros, so that the lists are a power of 2 and at least 1 more than the degree of the output product polynomial.
2. Compute the FFT of the two padded polynomials.
3. Multiply the result pointwise.
4. Compute the IFFT of the product.
5. Round the resulting (complex) values back to integers.

Hey, wait a minute! What about precision issues?

They are a problem when you have large numbers or large polynomials, because the intermediate values in steps 2-4 can lose precision due to the floating point math involved. This short note of Richard Fateman discusses some of those issues, and two paths forward include: deal with it somehow, or use an integer-exact analogue called the Number Theoretic Transform (which itself has issues I’ll discuss in a future, longer article).

Postscript: I’m not sure how widely this technique is used. It appears the NTL library uses the polynomial version of Karatsuba multiplication instead (though it implements FFT elsewhere). However, I know for sure that much of the software involved in doing fully homomorphic encryption relies on the FFT for performance reasons, and the software that doesn’t instead uses the NTT.

## Monday, November 14, 2022

### The Racket Blog

#### Racket v8.7

As of this release:

The following people contributed to this release:

Adit Cahya Ramadhan, Alex Harsányi, Bart van Strien, Ben Greenman, Bob Burger, Bogdan Popa, Cameron Moy, cheeze2000, D. Ben Knoble, Dan Anderson, Fred Fu, Geoffrey Knauth, Gustavo Massaccesi, J. Ryan Stinnett, Jack Firth, Jason Hemann, Jimmy McNutt, John Clements, Lîm Tsú-thuàn, M. Taimoor Zaeem, Mao Yifu, Matthew Flatt, Matthias Felleisen, Mike Sperber, Noah Ma, Oliver Flatt, Paulo Matos, Philip McGrath, Reuben Thomas, Robby Findler, Ryan Culpepper, Sam Phillips, Sam Tobin-Hochstadt, Samuel Bronson, Shu-Hung You, Sorawee Porncharoenwase, Sorin Muntean, Stephen Chang, William J. Bowman, and Winston Weinert

If you are new to Racket try our Getting started guide.

Questions and feedback about the release are welcome on Discourse.

## Saturday, November 12, 2022

### Jérémy Korwin-Zmijowski

#### First steps toward Guix Home

In this article, I will show you, step by step, how one can simply install and configure software they use with Guix Home. For the demonstration, I will focus on a single package. Disclaimer: I'm not an experienced Guix Home user. So don't take my words for granted. Read manuals, ask questions.

So far, to install a piece of software, say Emacs, for my user, I could simply run:

$ guix install emacs

Or I could write a manifest and use it to populate my default user profile (or a custom one):

$ guix package --manifest=$HOME/manifest.scm

where manifest.scm contains the following:

(specifications->manifest (list "emacs"))

Then the configuration for the new software had to be edited separately. Here is a dummy ~/.config/emacs/init.el:

(setq initial-scratch-message nil)

Now, let's use Guix Home to start managing the whole thing!

## The blank home

Guix Home asks you for two things: a list of packages and a list of home services. So a blank Guix Home configuration file (which installs and configures nothing) would look like this:

(use-modules (gnu home))

(home-environment
 (packages (list))
 (services (list)))

You can save these expressions in a file named home-configuration.scm. You can now invoke Guix Home to generate a home environment from this blank configuration:

$ guix home container home-configuration.scm


No worries, this won't affect your current environment. Take a few seconds to see how empty this shell is and move on !

Oops, you need to leave the empty shell.

$ exit

## Installing the software

To tell Guix Home to add a software package to the generated home environment, edit home-configuration.scm and add its name to the package list:

(use-modules (gnu home)
             (gnu packages emacs))

(home-environment
 (packages (list emacs)))

You can now try it out:

$ guix home container home-configuration.scm


In this shell, you can run Emacs, proof that Emacs has been added to the generated home environment.

$ emacs -nw

Then you can quit the shell.

## Install the software's configuration

From the Guix Home perspective, things are packages or services. It's time to look at services, especially one that can handle the process of configuring Emacs. Simply put, a service capable of installing the init.el file in the right place: home-xdg-configuration-files-service-type. Edit home-configuration.scm to be like:

(use-modules (gnu home)
             (gnu home services)
             (gnu packages emacs)
             (gnu services)
             (guix gexp))

(home-environment
 (packages (list emacs))
 (services
  (list (service home-xdg-configuration-files-service-type
                 `(("emacs/init.el" ,(local-file "init.el")))))))

Then create an init.el file, with your Emacs configuration, next to home-configuration.scm:

(setq initial-scratch-message nil)

Try it with:

$ guix home container home-configuration.scm


Here you can see that the init.el file is at ~/.config/emacs/init.el! So the Emacs in this environment will use this configuration file at startup… OK, maybe not in the container, but trust me, it will after you run (careful, this one does affect your current environment):

$ guix home reconfigure home-configuration.scm


Warning: because there is no shell configuration in home-configuration.scm (yet), you will need to configure your shell manually to make it benefit from the generated home environment.

Don't hesitate to give me your opinion, suggest an idea for improvement, report an error, or ask a question! I would be so glad to discuss the topics covered here with you! You can reach me here.

Don't miss out on the next ones ! Either via RSS or via e-mail !

GPG: 036B 4D54 B7B4 D6C8 DA62 2746 700F 5E0C CBB2 E2D1

## Friday, November 11, 2022

### Scheme Requests for Implementation

#### SRFI 242: The CFG Language

SRFI 242 is now in draft status.

This SRFI defines a language to describe control-flow graphs (CFGs) suitable for formulating iterative and recursive algorithms. Using the notion of a CFG term, this language can be seamlessly embedded in the Scheme language. Complex CFG terms can be composed from simple CFG terms.

ICFP 2022 logo

# Scheme Workshop 2022

I had the honor to be co-chair, with Andy Keep, of this year's Scheme Workshop, held as part of ICFP 2022 in Ljubljana, Slovenia.

Videos of the talks are now on YouTube. We're still working on publishing the proceedings. Once they're available, I'll update this blog entry. I plan to make an announcement on /r/scheme, too.

Here was the program:

• Why Functional Programming Matters in CS Education, by Marco T Morazan (YouTube)
• Scheme Pearl: Quantum Continuations, by Borislav Agapiev, Vikraman Choudhury, and Amr Sabry (YouTube)
• Macro-embedding Compiler Intermediate Languages in Racket, by William J. Bowman (YouTube)
• Scheme Requests for Implementation Status Report, by Arthur Gleckler (YouTube)
• Automating the Design Recipe, by Hazel Levine and Sam Tobin-Hochstadt (YouTube)
• Introducing Visual Scheme for Applications: Modernizing Office Solutions on the CLR, by Bob Calco (YouTube)
• An FFI between Gambit Scheme and CPython, by Marc-André Bélanger and Marc Feeley (YouTube)
• R7RS Large Status Report, by John Cowan (YouTube)
• Programming is (should be) fun!, by Gerald Jay Sussman (YouTube)

Many thanks to all the people who presented at the workshop, to the Program Committee, and to the many ICFP volunteers who made it all happen.

By the way, next year's ICFP will be held in Seattle.

## Thursday, November 10, 2022

### Scheme Requests for Implementation

#### SRFI 241: Match — Simple Pattern-Matching Syntax to Express Catamorphisms on Scheme Data

SRFI 241 is now in draft status.

This SRFI describes a simple pattern matcher based on one originally devised by Kent Dybvig, Dan Friedman, and Eric Hilsdale, which has a catamorphism feature to perform recursion automatically.

## Monday, November 7, 2022

### Idiomdrottning

#### fix-me-now

If you have a tree that’s almost perfect, but you just need to, uh, “fix it up” a little, that’s where this fix-me-now macro can be your friend.

It’s a combination of strse* from the strse egg and pre-post-order-splice* from sxml-transforms.

The first argument is the tree you wanna fix, followed by zero or more matches and replacements (so an even number), followed by zero or one alist of tags and bindings.

The match and replacement works like strse* while the bindings use pre-post-order-splice* but has sane defaults for *default* and *text* (which you can still override).

(fix-me-now
 '(and (she buying) 1 stairway 2 heaven)
 "2 h"
 "to h"
 `((she . ,(fn (cons* x 'is y)))))


⇒ (and (she is buying) 1 stairway to heaven)

(fix-me-now '(a (b c (d e f))) "e f" "(e f)")


⇒ (a (b c (d (e f))))

Note that if you use any strse* operators, the tree will be written and re-read using write semantics, which will mess up any procedures and stuff you have in there. If you only use the pre-post-order-splice* binding alist, you don’t need to worry about that.

## Source code

git clone https://idiomdrottning.org/fix-me-now