Planet Scheme

Friday, May 20, 2022

Idiomdrottning

Trivial Patents

I opposed and ended a trivial patent once (I found prior art), but the lesson I learned from that is that it’s important to oppose even non-trivial patents. If I identify and stop only the stupid patents, I’m just doing the patent office’s work for them.

In the case of something like “Bionic Reading” (a proprietary system that bolds part of each word), the claimed part is not the implementation; it’s the idea and the research to verify the idea. That’s exactly the difference between patents and copyright. Copyright restricts an implementation, which, for something like code or a movie, can be a very non-trivial implementation of a trivial idea. Take the movie “The Matrix”: the idea is simple (“guy gets chosen to fight in modem land”) but the implementation cost $63 million. Patents restrict an idea (which can be an idea that’s unintuitive and difficult to come up with and verify), even when the implementation is like two lines of JS.

It’s patents and patenting as a whole that’s the problem, not just trivial patents.

(I personally would want fewer—maybe half—fixations than they’re using. It’s very frustrating for me to read “bionic reading” text because it forces me to fixate much more than I otherwise would. It feels like getting a slap after every word.)

by Idiomdrottning (sandra.snan@idiomdrottning.org) at Friday, May 20, 2022

Saturday, May 14, 2022

Jeremy Kun

“Practical Math” Preview: Collect Sensitive Survey Responses Privately

This is a draft of a chapter from my in-progress book, Practical Math for Programmers: A Tour of Mathematics in Production Software.

Tip: Determine an aggregate statistic about a sensitive question, when survey respondents do not trust that their responses will be kept secret.

Solution:

import random
from typing import List, Tuple

def respond_privately(true_answer: bool) -> bool:
    '''Respond to a survey with plausible deniability about your answer.'''
    be_honest = random.random() < 0.5
    random_answer = random.random() < 0.5
    return true_answer if be_honest else random_answer

def aggregate_responses(responses: List[bool]) -> Tuple[float, float]:
    '''Return the estimated fraction of survey respondents that have a truthful
    Yes answer to the survey question, along with the variance of that estimate.
    '''
    yes_response_count = sum(responses)
    n = len(responses)
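    # Invert the mechanism's bias: E[yes fraction] = p/2 + 1/4, so p is estimated by 2 * (yes fraction) - 1/2.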
    mean = 2 * yes_response_count / n - 0.5
    # Use n-1 when estimating variance, as per Bessel's correction.
    variance = 3 / (4 * (n - 1))
    return (mean, variance)
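
Before moving on, it may help to see these two functions used together. The following simulation is my own sketch, not part of the book draft; it assumes respond_privately and aggregate_responses from above are in scope:

import random
from typing import Tuple

def simulate_survey(true_fraction: float, n: int) -> Tuple[float, float]:
    '''Simulate a survey of n respondents whose true Yes rate is true_fraction.'''
    responses = [respond_privately(random.random() < true_fraction)
                 for _ in range(n)]
    return aggregate_responses(responses)

# With 10,000 respondents and a true rate of 0.22, the estimate typically
# lands within a percentage point or two of 0.22.
print(simulate_survey(0.22, 10000))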

In the late 1960’s, most abortions were illegal in the United States. Daniel G. Horvitz, a statistician at The Research Triangle Institute in North Carolina and a leader in survey design for social sciences, was tasked with estimating how many women in North Carolina were receiving illegal abortions. The goal was to inform state and federal policymakers about the statistics around abortions, many of which were unreported, even when done legally.

The obstacles were obvious. As Horvitz put it, “a prudent woman would not divulge to a stranger the fact that she was party to a crime for which she could be prosecuted.” [Abernathy70] This resulted in a strong bias in survey responses. Similar issues had plagued surveys of illegal activity of all kinds, including drug abuse and violent crime. The lack of insight into basic statistics about illegal behavior led to a variety of misconceptions, such as the belief that abortions were not frequently sought out.

Horvitz worked with biostatisticians James Abernathy and Bernard Greenberg to test out a new method to overcome this obstacle, without violating the respondent’s privacy or ability to plausibly deny illegal behavior. The method, called randomized response, was invented by Stanley Warner in 1965, just a few years earlier. [Warner65] Warner’s method was a bit different from what we present in this Tip, but both Warner’s method and the code sample above use the same strategy of adding randomization to the survey.

The mechanism, as presented in the code above, requires respondents to start by flipping a coin. If heads, they answer the sensitive question truthfully. If tails, they flip a second coin to determine how to answer the question—heads resulting in a “yes” answer, tails in a “no” answer. Naturally, the coin flips are private and controlled by the respondent. And so if a respondent answers “Yes” to the question, they may plausibly claim the “Yes” was determined by the coin, preserving their privacy. The figure below describes this process as a diagram.

[Figure: a branching diagram showing the process a survey respondent takes to record their response.]

Another way to describe the outcome is to say that each respondent’s answer is a single bit of information that is flipped with probability 1/4: the answer changes only when the first coin lands tails (probability 1/2) and the second coin disagrees with the true answer (again probability 1/2). This is halfway between two extremes on the privacy/accuracy tradeoff curve. The first extreme is a “perfectly honest” response, where the bit is never flipped and all information is preserved. The second extreme has the bit flipped with probability 1/2, which is equivalent to ignoring the question and choosing your answer completely at random, losing all information in the aggregate responses. In this perspective, the aggregate survey responses can be thought of as a digital signal, and the privacy mechanism adds noise to that signal.

It remains to determine how to recover the aggregate signal from these noisy responses. In other words, the surveyor cannot know any individual’s true answer, but they can, with some extra work, estimate statistics about the underlying population by correcting for the statistical bias. This is possible because the randomization is well understood. The expected fraction of “Yes” answers can be written as a function of the true fraction of “Yes” answers, and hence the true fraction can be solved for. In this case, where the random coin is fair, that formula is as follows (where \mathbf{P} stands for “the probability of”).

\displaystyle \mathbf{P}(\textup{Yes answer}) = \frac{1}{2} \mathbf{P}(\textup{Truthful yes answer}) + \frac{1}{4}

And so we solve for \mathbf{P}(\textup{Truthful yes answer}):

\displaystyle \mathbf{P}(\textup{Truthful yes answer}) = 2 \mathbf{P}(\textup{Yes answer}) - \frac{1}{2}

We can replace the true probability \mathbf{P}(\textup{Yes answer}) above with our fraction of “Yes” responses from the survey, and the result is an estimate \hat{p} of \mathbf{P}(\textup{Truthful yes answer}). This estimate is unbiased, but has additional variance—beyond the usual variance caused by picking a finite random sample from the population of interest—introduced by the randomization mechanism.

With a bit of effort, one can calculate that the variance of the estimate is the quantity below. (Holding the respondents’ true answers fixed, each randomized response is a Bernoulli random variable with variance (3/4)(1/4) = 3/16 regardless of the true answer, and \hat{p} = 2 \bar{Y} - 1/2 scales the variance of the mean response \bar{Y} by a factor of four, giving 4 \cdot (3/16)/n = 3/(4n).)

\displaystyle \textup{Var}(\hat{p}) = \frac{3}{4n}

And via Chebyshev’s inequality, which bounds the likelihood that an estimator is far away from its expectation, we can craft a confidence interval and determine the needed sample sizes. Specifically, the estimate \hat{p} has additive error exceeding q with probability at most \textup{Var}(\hat{p}) / q^2. This implies that for a confidence of 1-c, one requires at least n \geq 3 / (4 c q^2) samples. For example, to achieve error 0.01 with 90 percent confidence (c=0.1), one requires at least 75,000 responses.
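
As a sanity check on that arithmetic, here is a small helper that inverts the Chebyshev bound to compute a required sample size. This snippet is my own sketch, not part of the book draft, and the function name is hypothetical:

import math

def chebyshev_sample_size(confidence: float, error: float) -> int:
    '''Smallest n for which Var(p_hat) / error^2 = 3 / (4 * n * error^2)
    is at most 1 - confidence.'''
    c = 1 - confidence
    return math.ceil(3 / (4 * c * error ** 2))

print(chebyshev_sample_size(0.9, 0.01))  # 75000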

Horvitz’s randomization mechanism didn’t use coin flips. Instead, they used an opaque box containing red and blue balls, which the respondent, who was in the same room as the surveyor, would shake and privately reveal a random color through a small window facing away from the surveyor. The statistical principle is the same. Horvitz and his associates surveyed the women about their opinions of the privacy protections of this mechanism. When asked whether their friends would answer a direct question about abortion honestly, over 80% either believed their friends would lie, or were unsure. [footnote: A common trick in survey methodology when asking someone if they would be dishonest is to instead ask if their friends would be dishonest. This tends to elicit more honesty, because people are less likely to uphold a false perception of the moral integrity of others, and people also don’t realize that their opinion of their friends correlates with their own personal behavior and attitudes. In other words, liars don’t admit to lying, but they think lying is much more common than it really is.] But 60% were convinced there was no trick involved in the randomization, while 20% were unsure and 20% thought there was a trick. This suggests many people were convinced that Horvitz’s randomization mechanism provided the needed safety guarantees to answer honestly.

Horvitz’s survey was a resounding success, both for randomized response as a method and for measuring abortion prevalence. [Abernathy70] They estimated the abortion rate at about 22 per 100 conceptions, with a distinct racial bias—minorities were twice as likely as whites to receive an abortion. Comparing their findings to a prior nationwide study from 1955—the so-called Arden House estimate—which gave a range of 200,000 to 1.2 million abortions per year, Horvitz’s team estimated more precisely that there were 699,000 abortions in 1955 in the United States, with a reported standard deviation of about 6,000, less than one percent of the estimate. For 1967, the year of their study, they estimated 829,000.

Their estimate was referenced widely in the flurry of abortion legislation and court cases that followed, due to surging public interest in the topic. For example, it is cited in the 1971 California Supreme Court opinion for the case Ballard v. Anderson, which concerned whether a minor needs parental consent to receive an otherwise legal abortion. [Ballard71, Roemer71] It was also cited in amici curiae briefs submitted to the United States Supreme Court in 1971 for Roe v. Wade, the famous case that invalidated most U.S. laws making abortion illegal. One such brief was filed jointly by the country’s leading women’s rights organizations, including the National Organization for Women. Citing Horvitz for the following paragraph, the brief reads: [Womens71]

While the realities of law enforcement, social and public health problems posed by abortion laws have been openly discussed […] only within a period of not more than the last ten years, one fact appears undeniable, although unverifiable statistically. There are at least one million illegal abortions in the United States each year. Indeed, studies indicate that, if the local law still has qualifying requirements, the relaxation in the law has not diminished to any substantial extent the numbers in which women procure illegal abortions.

It’s unclear how the authors arrived at this one million figure (Horvitz’s estimate was about 20% lower for 1967), or what they meant by “unverifiable statistically.” It may have been a misinterpretation of the randomized response technique. In any event, randomized response played a crucial role in providing a foundation for political debate.

Despite Horvitz’s success, and decades of additional research on crime, drug use, and other sensitive topics, randomized response mechanisms have often been applied poorly. In some cases, the desired randomization is irreducibly complex, such as when it requires a continuous random number. In these cases, a manual randomization mechanism is too complex for a respondent to use accurately. Software-assisted devices can help, but can also breed mistrust in the interviewee. See [Rueda16] for additional discussion of these pitfalls and of the software packages that exist to assist in using randomized response. See [Fox16] for an analysis of the statistical differences among the variety of methods used between 1970 and 2010.

In other contexts, analogues to randomized response may not elicit the intended effect. In the 1950s, Utah used death by firing squad as capital punishment. To spare the shooters a guilty conscience, one of five marksmen was randomly given a blank, providing each some plausible deniability about having delivered the killing shot. However, this approach failed on two counts. First, once a shot was fired, the marksman could tell whether his bullet was real from the recoil. Second, a 20% chance of a blank was not enough to dissuade a guilty marksman from purposely missing. In the 1951 execution of Elisio Mares, all four real bullets missed the condemned man’s heart, hitting his chest, stomach, and hip. He died, but his death was neither painless nor instant.

Of the many lessons one might draw from the botched execution, one is that randomization mechanisms must account for both the psychology of the participants and the severity of a failed outcome.

References

@book{Fox16,
  title = {{Randomized Response and Related Methods: Surveying Sensitive Data}},
  author = {James Alan Fox},
  edition = {2nd},
  year = {2016},
  doi = {10.4135/9781506300122},
}

@article{Abernathy70,
  author = {Abernathy, James R. and Greenberg, Bernard G. and Horvitz, Daniel G.},
  title = {{Estimates of induced abortion in urban North Carolina}},
  journal = {Demography},
  volume = {7},
  number = {1},
  pages = {19-29},
  year = {1970},
  month = {02},
  issn = {0070-3370},
  doi = {10.2307/2060019},
  url = {https://doi.org/10.2307/2060019},
}

@article{Warner65,
  author = {Stanley L. Warner},
  journal = {Journal of the American Statistical Association},
  number = {309},
  pages = {63--69},
  publisher = {{American Statistical Association, Taylor \& Francis, Ltd.}},
  title = {Randomized Response: A Survey Technique for Eliminating Evasive
           Answer Bias},
  volume = {60},
  year = {1965},
}

@article{Ballard71,
  title = {{Ballard v. Anderson}},
  journal = {California Supreme Court L.A. 29834},
  year = {1971},
  url = {https://caselaw.findlaw.com/ca-supreme-court/1826726.html},
}

@misc{Womens71,
  title = {{Motion for Leave to File Brief Amici Curiae on Behalf of Women’s
           Organizations and Named Women in Support of Appellants in Each Case,
           and Brief Amici Curiae.}},
  booktitle = {{Appellate Briefs for the case of Roe v. Wade}},
  number = {WL 128048},
  year = {1971},
  publisher = {Supreme Court of the United States},
}

@article{Roemer71,
  author = {R. Roemer},
  journal = {Am J Public Health},
  pages = {500--509},
  title = {Abortion law reform and repeal: legislative and judicial developments},
  volume = {61},
  number = {3},
  year = {1971},
}

@incollection{Rueda16,
  title = {Chapter 10 - Software for Randomized Response Techniques},
  editor = {Arijit Chaudhuri and Tasos C. Christofides and C.R. Rao},
  series = {Handbook of Statistics},
  publisher = {Elsevier},
  volume = {34},
  pages = {155-167},
  year = {2016},
  booktitle = {Data Gathering, Analysis and Protection of Privacy Through
               Randomized Response Techniques: Qualitative and Quantitative Human
               Traits},
  doi = {10.1016/bs.host.2016.01.009},
  author = {M. Rueda and B. Cobo and A. Arcos and R. Arnab},
}

by j2kun at Saturday, May 14, 2022

Saturday, April 30, 2022

The Racket Blog

Racket v8.5

Racket version 8.5 is now available from https://download.racket-lang.org.

As of this release:

  • Racket’s new -y flag automatically keeps compiled files up to date, reducing subsequent load times.

  • Error-message realms allow Racket-hosted languages to adapt and rewrite error messages to make sense in a particular context.

  • Nonprivileged users can control package installation scope using an “other-version” directory in the addon-dir.

  • Racket CS runs on platforms where native-code generation is not currently supported (e.g., s390x or ppc64). See “README.txt” in the source distribution for more information on the --enable-pb flag for configure.

  • DrRacket’s new ‘Reopen Closed Tab’ file menu item will open previously closed tabs.

  • Typed Racket has support for the xml library; use typed/xml.

  • Rackunit reports source locations for failed test cases in the Typed Racket language.

  • Plot has violin plots and improved box-and-whisker plots.

  • Boxes are supported alongside lists, vectors etc. in place-channel messages.

  • Those who manually configure Racket CS to use Zlib compression for compiled code should be aware of CVE-2018-25032; the next release and the current snapshot builds use a newer, safer version of zlib.

  • The release includes many other repairs and changes!

The following people contributed to this release:

Alex Harsányi, Alexander Shopov, Alexis King, Andrew Mauer-Oats, Ben Greenman, Benedek Szilvasy, Bert De Ketelaere, Bogdan Popa, Cameron Moy, Chung-chieh Shan, Fred Fu, Gustavo Massaccesi, J. Ryan Stinnett, Jamie Taylor, Joel Dueck, John Clements, Joseph Griego, Khadija Sidhpuri, Laurent Orseau, Maciej Barć, Matthew Flatt, Matthias Felleisen, Mike Sperber, Noah Ma, Philip McGrath, Robby Findler, Ryan Culpepper, Sam Tobin-Hochstadt, Sorawee Porncharoenwase, Stephen De Gabrielle, Tim Jervis, and Trevor Paley

Link to package regressions issue for the 8.5 release: https://github.com/racket/racket/issues/4202

by The Unknown Author at Saturday, April 30, 2022

Wednesday, April 27, 2022

Idiomdrottning

preserve

preserve is a combinator that caches the result of a procedure for a given number of seconds.

Example

(define slow-plus (preserve 5 +))

(list (slow-plus 1 2) (slow-plus 3 4))

This returns (3 3). Calling (slow-plus 3 4) five seconds later returns 7.

Code

sudo chicken-install preserve

For a repo,

git clone https://idiomdrottning.org/preserve

by Idiomdrottning (sandra.snan@idiomdrottning.org) at Wednesday, April 27, 2022

Sunday, April 24, 2022

Idiomdrottning

You can tell I really really wanna complain about it but I’m not gonna

There is this popular CSS framework that I wanna complain about (not attack or contact them, of course, just kvetch about it here) but I simultaneously feel like I don’t wanna, because they don’t break any web standards and there are no accessibility issues (unlike SPA frameworks). It’s a little bandwidth-wasteful but I don’t really care, that’s fine.

The framework I have in mind is only bad (just a really bad, self-foot-shooting way to make web) for the devs that use it, and if they can stomach it, it doesn’t hurt anyone else. I mean, if you use it you are fucking up royally and there are probably a lot of things wrong with your entire pipeline and it’s a sign that something has gone seriously wrong with how we teach web and make tools for web, but those things only hurt yourself.

So I would only be tearing down someone else’s work (the creators of this particular framework) from my own armchair, which I feel is only legit when there’s a standards and/or accessibility and/or political issue. My instincts are so often to tear down bad things when it’s so much better to build up good things.

If I want good CSS, I should instead lead by example, write good CSS, make tools for writing good CSS, and write about how to write good CSS, instead of just tearing into a bad CSS framework without having good alternatives readily available.

In other words, this is a rare case where the “no big deal if someone is wrong on the Internet” xkcd actually applies: while they are very, painfully wrong, it doesn’t really hurt me or anyone else except sympathetically, so I should just let it be.

The analogy is seeing someone build a bicycle painstakingly with a toothpick instead of with good tools. If the bike still works, all else is moot.

I could be warning my dev friends about the framework but if they’ve painted themselves into a corner so deep that they’re actually considering it, I don’t wanna restrict their options further with armchair advice. Again, what I should be doing is STFU and write constructive posts instead of this, and the same goes for my previous post on it.

by Idiomdrottning (sandra.snan@idiomdrottning.org) at Sunday, April 24, 2022

Monday, April 18, 2022

GNU Guix

10 years of stories behind Guix

It’s been ten years today since the very first commit to what was already called Guix—the unimaginative name is a homage to Guile and Nix, which Guix started by blending together. On April 18th, 2012, there was very little to see and no actual “project”. The project formed in the following months and became a collective adventure around a shared vision.

Ten years later, it’s amazing to see what more than 600 people achieved, with 94K commits, countless hours of translation, system administration, web design work, and no less than 175 blog posts to share our enthusiasm at each major milestone. It’s been quite a ride!

What follows is a series of personal accounts by some of the contributors who offered their time and energy and made it all possible. Read their stories and perhaps you too will be inspired to join all the nice folks on this journey?

[Image: 10-year anniversary artwork]

Alice Brenon

As a conclusion, Guix is a really neat system and I hope you enjoy it as much as I do!

My story with Guix is a bit topsy-turvy so I thought I might as well start by the end :) I first ranked it last among systems I wanted to test, then was a bit puzzled by it when I had the chance to test it after all the others had disappointed me, and we finally got to know each other once I installed it on my work laptop, because when you need a good, stable system you know you can rely on, why not use the most surprising one? Strangely, the alchemy worked and it has never let me down so far.

Like all good computer things, it looked way scarier from a distance than it really was, and seemed to be very much about ethics and theory while it's actually very pragmatic. I had struggled for years with the myriad of incompatible package formats, first for systems, then for each specific language, and was thrilled to discover at last what seemed to be a reasonable universal format. That's probably what I like best about it: the ability to use potentially any software I want without trashing my system. The welcoming community, eager to help and valuing my contributions, made it even better, and submitting patches came naturally. I mostly use it for development, and to keep my sanity in spite of all the data science tools I have to use for work. I sometimes wish it were easier to tweak the core of the system, but I blame my lack of free time at least as much as its design. I would absolutely love to see my Guix system using the runit init system one day, but it just works, and I finally learnt that that was all that mattered if you wanted to get things done in the end.

Andreas Enge

When I think of Guix, I always kid myself into believing that I had the idea — I remember chatting with Ludovic around a GNU hackers' meeting about Nix; I joked that since Guile is the GNU language, Nix should be rewritten in Guile. But it turned out that Ludovic had already started such a project in earnest... Luckily there is the git history to refresh our memories. Apparently I installed Guix, at the time only the package manager, with Ludovic's help in December 2012, and immediately reported a few bugs. My next action was to update the package for my own software. Learning Scheme was quite difficult, but I fondly remember discussing quotes and quasiquotes with Ludovic. After that, I mostly added packages to Guix, which was possible without knowing much of functional programming; the trickiest packages, the ones that stayed in my mind, were ImageMagick and TeX Live. I came to appreciate the GNU Autotools — with all their shortcomings, having a uniform (and usually reliable) way of compiling and installing software makes creating a Guix package almost trivial.

The most compelling feature of Guix was (and still is, I think) the ability to roll back package installations, and now complete system installations — no more fear of updating a package to a non-working state! And on a completely different level, the nice and welcoming atmosphere in the community, in no small part thanks to Ludovic's initial efforts of creating an inclusive environment.

Many formidable adventures are attached to working on Guix. Buying our first server for the build farm was difficult, since we wanted to use a machine with Libreboot that would work well with the GNU system. Eventually we succeeded, and it is still hosted with the non-profit Aquilenet in Bordeaux, so we managed to start our own infrastructure in accordance with our values.

Writing the bylaws of the Guix Europe non-profit was another exciting adventure; again we tried to create a structure in line with our values, where the decisions were taken as collectively as possible.

And personally I have fond memories of taking part in Guix meetings at FOSDEM and co-organising the Guix Days in Brussels; these are such nice occasions to meet people passionate about Guix and free software! I will never forget the Guix Days 2020 when I had just returned from a Reproducible Build Summit where I admired their way of facilitating the workshop, which I then tried to copy for our own meeting.

The Guix system meets all my daily needs now, so I have no technical wishes for the future — but I trust the many creative minds working on advancing the project to come up with nice new ideas. And I wish the human adventure and community building around Guix to continue!

Andrew Tropin

It's all lisp, no distraction, pure consistency! Every few years I migrate to a different workhorse, and it has always been a pain to bring my software setup with me: forget a pam/udev rule here, a package there, some tiny hack somewhere else, and everything is messed up; it's easier to reinstall everything from the ground up. With the declarative and reproducible nature of Guix System, Guix Home, and the rde project, it's just pure pleasure: write the configuration once, use it everywhere! Daemon configurations, package build phases, cron tasks, everything described in Guile Scheme. The one language to rule them all! I look forward to a little more: wider adoption of Guix for managing development environments and infrastructures.

GNU Guix respects you and your freedoms: anyone can explore and study, tweak and adjust, and share everything they want, every program available. Moreover, every package is bootstrapped, what a miracle! Yes, some hardware isn't supported, but for a good reason, and for this price you get a lot. I look forward to a little more: decentralized substitutes and RISC-V laptops running GNU Guix.

Thank you to the community for all the hard work. I have already enjoyed Guix for more than a year and look forward to more exciting things to come!

Arun Isaac

I was introduced to Guix in 2016. That was around the time I was learning lisp and having my mind blown. As soon as I heard that Guix was written in a lisp and that it was an FSDG-compliant distro, I knew that this was going to be the best distro ever and immediately jumped ship.

While I was immediately smitten by the perfection and elegance of Guix, I have stayed this long not for the technical excellence but for the people. The Guix community is the warmest and most caring free software community I have ever been a part of. I honestly did not believe such people could exist online until I saw Guix. In the future, I would love for this community to grow and thrive with a smoother issue tracking and contribution process so that potential contributors, especially newcomers, don't turn away frustrated.

Björn Höfling

In 2016, I was searching for a GNU/Linux distribution for a friend's laptop and chose GuixSD (now Guix System), inspired by a LibrePlanet video from MediaGoblin. The laptop failed to digest this OS, as it demanded binary, non-free drivers. In contrast, my wetware is 100% GNU FSDG-compatible, and so generation 1 of Guix burned successfully into my brain, without any headaches. Or at least, they went away with help from the very friendly, supportive, tolerant (only to the tolerant, thanks to the code of conduct) community.

My contributions started with not understanding Guix and asking stupid questions on the mailing lists. Sometimes during that process I found bugs and analyzed them further, which helped Ludovic and other developers fix them. I reviewed mostly Java packages and added some on my own. I very much enjoyed co-mentoring for Outreachy, and learned more about automated video/screencast generation. I should really be more active in the community again!

For the future of Guix, I would like to see more Java and Go packages (ahem, this won't happen on its own; I should review and contribute more). Internally, I wish for a more intuitive bug and patch tracker instead of Debbugs+Mumi. Externally, I wish for much greater awareness in the commercial "Open Source" world of software freedom, and of bootstrapping your environment and dependencies from free source code in a truly reproducible way, instead of relying on opaque, binary containers. I wish people would take much more care of their dependencies (with Guix, of course!), put much more thought into their usage of dependencies, break up dependency hell, and support third parties in building their sources with free software (instead of relying on binary dependencies and opaque containerized dev environments).

Blake Shaw

New media artists and designers suffer from the following dilemma: our work, with its primary medium being code, is at once perhaps the simplest medium to distribute — requiring little more than copying a directory of text files from one hard drive to another — yet the works themselves remain a total nightmare to faithfully reproduce across varying machines at different points in time. Among other reasons, this is because our works are often composed of disparate parts with accompanying technical debt: an audio-visual installation may use the C++ library openFrameworks for high-performance interactive graphics, Haskell's TidalCycles for realtime sequencing, the fantastic FAUST signal processing language for zero-delay audio DSP, plus the usual dependencies: openCV, libfreenect, cairo, gstreamer, ffmpeg and so on. Time and its corresponding ABI changes intensify our predicament; not only is it often an error-prone and laborious task to get all of this running correctly across many machines, the nature of technical debt means that getting an installation from 2014 up and running in 2022 is often more trouble than it's worth. Sadly, these seemingly immaterial works of art that are the simplest to copy are simultaneously some of the most difficult to reproduce and the quickest to depreciate.

Guix, on the other hand, offers its users provenance-preserving, bit-reproducible builds of their entire operating systems: using Guix's implementation of the functional software deployment model, I should be able to reproduce, bit for bit, the exact same results across equivalent hardware. Suddenly our artworks can be deterministically produced not only at the level of the source code's changelog but also at the level of the build, offering the guarantee that the usually glued-together towers of systems that power our installations can be revisited and reproduced in the future, and the deployment process becomes as simple as packing your system into a container to be deployed to a remote target. These guarantees mean that scaling our works becomes simpler: if you want to do an installation that involves 100 Raspberry Pis communicating with one another in a dynamic system, you can focus on just a small parameterized subset, and then generate the varying configurations using the infrastructure that Guix provides. I'm currently in the early phases of developing re::producer, a "creative plumber's toolkit" that seeks to simplify this process for artists, tailoring the tools provided by Guix to the domain-specific needs of media art and allowing artists to declaratively define complex media art systems using one of the greatest programming languages of all time, Scheme.

This is still new terrain, so there is plenty of work to do. But that's no excuse to stick with your old habits, so roll up your sleeves and come hack the good hack!

Cyril Roelandt

Back in early 2013, I was lucky enough to be unemployed for a few months. This gave me a lot of time to try out GNU Guix. Ludovic had told me about it a few months earlier, while we were still working at the same place. It was so easy to add new packages that I naturally ended up submitting a few patches and quickly trolled the whole project by adding my editor of choice, vim. Debugging package definitions was also very simple since the builds were reproducible by default.

I also had some fun writing the first version of the linter and improving the importer/updater for Python packages. I even hacked tox to make it use Guix instead of virtualenv and gave a talk about this at FOSDEM. Even though I left the project a few years ago, I'm glad to see it's doing well and is used in science and has joined forces with Software Heritage.

Efraim Flashner

Back in 2015 or so, I had been using GNU/Linux on the desktop for a number of years and I wanted to contribute somehow. I had just finished a university course using Lisp and Prolog, and then I heard about Guix having its 0.8.3 (or so) release, and it looked like something that I could try to contribute to. I certainly made a number of mistakes in the beginning; I didn't know that git revert was an actual command, and I tried to revert a commit by hand, leaving a dangling parenthesis and breaking the repo. Another time I added Java as a dependency to an image library and broke the graphics stack for half the architectures until I reverted that! I even had a stint as a failed GSoC student. I was working on Bournish, a Gash/Gash-utils-like utility, to make debugging in the early boot process far easier by providing common CLI utilities. I had some issues with time management and ended up spending more time than I should have updating packages in the repository; as a result I didn't spend enough time working on Bournish, and it has languished since then.

Currently, I enjoy working on troublesome packages and expanding the number of packages available on non-popular architectures. Sometimes it's removing compiler flags or ‘ifdef gating’ architecture-specific includes, and other times certain parts of programs need to be disabled. Then everything needs to be double-checked for cross-compiling. Right now I'm working on riscv64-linux support in Guix; it has a lot of potential, but powerful boards are hard to come by. There are also some lingering bugs with guix show showing different supported-systems for packages depending on which architecture you run it from: on x86_64-linux only two are shown, while from aarch64-linux all 9 architectures are shown.

Ekaitz Zarraga

A friend of mine introduced me to Nix and Guix a while ago, but I was hesitant to try them because I hate configuring stuff and Guix didn't look like an easy distribution to use. Once I discovered we could have separate environments and how easy it is to write a package (despite all the other difficulties Guix has), I was completely into it. I installed Guix on my laptop and never looked back. In the 10 years I've been using a GNU/Linux distribution I had never interacted that directly with my packages: creating custom ones, sending them upstream, making fixes… That's also freedom! Now, a couple of years later, I am working on improving the bootstrap process for RISC-V and using Guix as a mechanism that provides reproducible builds and an easy way to manage all the actors I have to deal with: very old software with old dependencies, colliding libraries, environment variables, custom patches in source code… This would be a pain to build in any other environment, but Guix makes hard things easy. Guix also makes easy things hard sometimes, but we are working on that!

Eric Bavier

As a young(ish) computer programmer, I had been running GNU/Linux systems for about 7 years but wanted to find a project I could contribute back to. Fortunately, I came upon a release announcement for Guix after having poked around the GNU Hurd and Guile spheres. To me at the time Guix had the exact mix of upstart energy, optimism, and long-term vision that I was hoping to find. Over the years I've been able to contribute packages I use in both my personal and work lives, and I'm proud to have implemented the first version of guix refresh --list-dependents. I've really loved how Guix allows me to easily move my environments around to different systems, and "rollback" gives me much peace of mind knowing that I can tinker with the system and recover should something go wrong. But probably my favorite part of Guix is the fantastic community I've seen grow around the project. It exemplifies the sort of caring, kind, supportive group I wish many other projects had. Together I know we'll be able to make advances on many fronts. In particular, I'd like to see further work on peer-to-peer substitutes delivery, a native build daemon, additional tools for managing relocatable pack collections, and continued leadership in bootstrapping.

Florian Pelz (pelzflorian)

GNU Guix to me is a group that cares about its software and about the people involved. I’ve got to know Guix by reading discussions on how to do sandboxing properly. But actually, Guix convinced me with its clarity and approachability and principles. Guix opened my eyes on how parts of GNU fit together. Thanks to all who give us this freedom to understand and decide. By contributing to the translation, I hope to make it reach more users and developers.

Guillaume Le Vaillant

Before I started using Guix in 2019, I was using Gentoo because I liked how easily I could package software and make package variants with some custom patches. However, one day an upgrade didn't go well and many packages ended up in a bad state. I realized I would have to reinstall the whole system to get things to work again. Before recompiling the whole system, I tried Nix and Guix, because I had read somewhere that they used functional package management, which gives the possibility to roll back to a working state when an upgrade causes problems. I chose Guix because I thought it was going in the right direction by using only free software and trying to get reproducible builds. The fact that package definitions use the Scheme language was a bonus point, as I like Lisp languages. And there was even a build system for Common Lisp packages, which is rarely the case in the GNU/Linux distributions I have tried over time. So I started using Guix, and packaging software I wanted that was not yet in Guix. One day someone asked me if I would be interested in having commit access, and I accepted. I also found a way to improve the build system for Common Lisp packages that simplified package definitions. In the future, I think it would be nice to add an importer fetching information from Quicklisp, as it would make packaging Common Lisp software even easier.

Hartmut Goebel

Christian Grothoff (GNU Taler) pointed me to Guix in early 2016, saying “This will become the new Debian!” and asking me to look at it for GNU Taler. Well, I was quickly attracted by the ideas of reproducible builds and the ease of packaging software. I also love the one-time usage of programs without littering my system.

Curiously, even though I'm a Python developer, my first contributions were about Java packaging. And I spent quite some time trying to build Maven, a challenge I gave up on after two (or three? I can't remember) attempts. I'm glad Julien Lepiller continued the endeavor and created the Maven build system.

Nowadays I still use Guix only on a foreign distro, as the KDE desktop and some of my main applications are not available yet. Guix keeps my main system tidy, while I can have different development environments without dependency conflicts.

As you can imagine, I'd like to see the KDE desktop in Guix, as well as some kind of guix compose for managing compound containers.

Jan (janneke) Nieuwenhuizen

At FOSDEM 2016 there were seven talks about GNU Guix: among them a talk about the Hurd by Manolis Ragkousis and one about functional package management by Ricardo Wurmus, and that was just what I needed to hear: finally a viable promise for the GNU System, and much more innovative than I could have hoped for. At the time I also worked on a project where building binary releases was becoming more unmanageable with every release because of conflicting requirements. We were slowly migrating away from C++ to GNU Guile, so while not directly applicable, the talk “Your distro is a Scheme library” by Ludovic Courtès also made me feel: Go Guix!

Using Guix, my intricate dependency problems building binary packages quickly and easily disappeared. That gave me the confidence that I needed, and I wanted to get involved. My first contributions were a programming game called Laby and its dependencies, and a few more packages that I missed. After running Guix on top of Debian GNU/Linux for three months I switched to what we now call Guix System. Guix did not have log rotation yet in those days, so I created a package.

This is how I found out how amazingly helpful and friendly the community was. I created the MinGW cross build for Guile 2.0 and then "found out" about the bootstrap binaries: the only packages in Guix that are not built from source. Something just did not feel right. The manual said: “These big chunks of binary code are practically non-auditable which breaks the source to binary transparency that we get in the rest of the package dependency graph.” So I wrote GNU Mes and started working on solving this problem. Twice we halved the size of the bootstrap binaries, and the work is still ongoing.

What possibly started somewhat as an April fools' joke in 2020 about the Hurd—this is still unclear—was (mis?)taken by some as a real project and led to a fun hacking frenzy of several months, finally producing the "Childhurd": a Guix Shepherd service that gives access to the GNU/Hurd in a VM. My wish for the near future would be to see an up-to-date Hurd, including the Debian rumpkernel patches, which may finally enable running the Hurd on real hardware again.

John Kehayias

All I wanted to do was to try out a new status bar, but the author only officially supported Nix for building. That started me to finally look at Nix after hearing about it in passing before. I was intrigued by the focus on reproducible and declarative builds. The language, not so much. My brother mentioned another project in the same vein but built on a Lisp. As a lover of all things Lisp, that was basically enough for me to dive right in. Beyond the banner features of the powerful package and system management, reproducible builds, system configuration, and, of course, Guile, I quickly found perhaps the biggest and most important: the GNU Guix community. They have been nothing short of amazing: helpful, intelligent, supportive, and fun to hang out with on the #guix channel. In less than a year, my journey so far has taken me through the (is it infamous yet?) recent big core-updates branch and merge, submitting patches for random libraries and key desktop features I use, and participating in the motivating Guix Days 2022. Looking to the future, I hope we can better harness the energy and resources of the growing Guix community. It is already a great project to get involved with and make your own, but with better and quicker patch review, further building out and using our development tools and ecosystem, and continuing to smooth out the rough edges for new users/contributors, I'm sure the next 10 years of GNU Guix will be very bright indeed.

Konrad Hinsen

In my work as a computational scientist, my first encounter with reproducibility issues happened in 1995, when a quantum chemistry package produced different results on two almost identical Silicon Graphics workstations. This was the beginning of a long quest for better computational reproducibility, in the course of which I discovered in 2014 Nix and Guix as two implementations of the same promising idea: the fully automated construction of a complete reproducible software stack. Of the two, Guix was more aligned with my lispy past, and already had a burgeoning computational science user community. I started playing with Guix in 2016, in a virtual machine under macOS, but only fully adopted Guix for my everyday work in 2021, when I left macOS for Linux. During those five years, I also learned to appreciate the Guix community, which is friendly, competent, and refreshingly low-ceremony in spite of continuous growth. That makes for an easy transition from newbie to contributor (mostly contributing packages, but also the time-machine command that matters for reproducibility). The anniversary is a good occasion to express my thanks to all those who answered my many questions, ranging from conceptual to technical, and to the maintainer team that does an important but not very visible work by critically examining all submitted packages and code enhancements. My main wish for the future is a lower barrier to adoption for my colleagues in computational science, and I hope to contribute to making this happen.

Lars-Dominik Braun

Around the end of 2019 we were looking for a way to provide reproducible software environments to researchers in(?) psychology and I was researching software to accomplish that. Binder/repo2docker was the obvious and most mature solution at that time and a colleague of mine had set up a proof of concept server already. But it could only handle public projects out-of-the-box and setting up an entire Kubernetes cluster didn’t seem particularly appealing at that time, because no other project was moving in that direction yet. So I set out to look for alternatives. Another idea was based around OpenStack and one virtual machine per project with preinstalled software, which we would keep for eternity. Also not ideal and OpenStack is very hard to master too. So I looked further at Nix, which – at that time – lacked an obvious way to spawn ad-hoc environments with a certain set of packages. Thankfully I stumbled upon GNU Guix by mere accident, which had exactly that feature. And so in December 2019 my first code contribution was merged.

Prior to that I had never written a single line of Scheme or Lisp, and even now it's still a steep hill. GNU Guix still powers our project and allows us to easily share software environments while providing excellent application startup times. I have also started contributing software that I run on my own machines, but I'm not running Guix System, because, compared to systemd, Shepherd is quite limited on the desktop, and Guix lacks first-class support for the non-free drivers/firmware that I need to even boot my machine.

Ludovic Courtès

It all started as a geeky itch-scratching experiment: a tiny bit of Guile code to make remote procedure calls (RPCs) to the Nix build daemon. Why? As I was involved in and excited about Guile and Nix, it felt natural to try and bridge them. Guile had just had its 2.0.0 release, which broadened its scope, and I wanted to take advantage of it. Whether to go beyond the mere experiment is a decision I made sometime after a presentation at the 2012 GNU Hackers Meeting.

It was far from obvious that this would lead us anywhere—did the world really need another package manager? The decisive turn of events, for me, was seeing that, by the time Guix officially became part of GNU in November 2012, it had already become a group effort; there was, it seems, a shared vision of why such a crazy-looking project made sense not just technically but also socially—for GNU, for user freedom. I remember Nikita Karetnikov as the first heroic contributor, at a time when Guix could barely install packages.

One of my “ah ha!” moments was when I built the first bootable image a year later. G-expressions, the service framework, and checkout authentication are among my favorite hacks. What’s mind-blowing to me though is what others have achieved over the years: the incredible bootstrapping work, Disarchive, Emacs-Guix, the installer, Hurd support, Guix Home, supporting tools like Cuirass, the Data Service, and mumi. There’s also the less visible but crucial work: Outreachy and GSoC mentoring, support on IRC and the mailing lists, build farm administration, translation, dealing with the occasional incident on communication channels, organizing events such as the Guix Days or FOSDEM, and more.

As much as I love hacking the good hack, I think Guix's main asset is its community: a friendly, productive, and creative group with a sense of attention to others. I started out clueless about what it means “to build a community” and learned a lot from everyone I met along the way. We did it, we built this! Thumbs up, Guix!

Luis Felipe

When I found that Guix existed, I saw it could make it easier for GNU to release its Operating System and reach a wider audience. I intended to propose some graphic designs related to this, and sent a message to GNU in order to test the waters. Things didn't go as I expected, so, instead, I decided to direct my contributions towards GNU Guix and its distribution of GNU.

Since then, I've contributed with graphics (Guix and Guile logos and website designs, desktop backgrounds, release and promotional artwork), testing, bug reporting, packaging, and Spanish translations.

It's been about 8 years of Guix for me (the heck!). I started using the package manager on Debian, gradually switched the provenance of my software from Debian to Guix, and, once GNOME became available, I moved to Guix’s distribution of the GNU operating system, which I've been using as my main system for about 3 years now (and I don't see that changing anytime soon).

Right now, I'm enjoying developing software using Guix's reproducible environments and containers, and using one single package manager for every dependency.

I hope this system reaches a wider audience and brings science to home computing along the way. Homes should be capable of producing scientific work too.

Manolis Ragkousis

When I think of how I started with Guix, I use one word to describe it: luck! It was early 2014 when I encountered Guix, by luck, while I was still a student in Crete, Greece. I remember there was a strike during that time and I had plenty of free time for a week, so I decided to start working on this. Then an idea came to mind: why not try porting Guix to GNU/Hurd and building a system with it? One thing led to another, and it also became a GSoC project in 2015 and 2016. In 2016 I also gave a FOSDEM talk about this, which somehow ended up being the start of me helping out with the GNU Guile devroom in 2017 and 2018, and then with what became the Minimalistic Languages devroom, until today. Thinking about Guix is like thinking about the story of me growing up, and the people I met through all these years I consider family! Guix is a big part of my life; I use it everywhere, and even though I am not able to help much nowadays I follow the project as much as I can. Here's hoping for another 10 years!

Marius Bakke

I originally got interested in Guix after facing shortcomings in traditional configuration management tools. A fully declarative and immutable system that cleans out old user accounts and packages, and that also offers reproducibility, rollbacks, and the ability to generate virtual machines and containers from the same code. Where do I sign up?

It turns out, signing up was easy, and I soon found myself contributing the pieces I needed to make it a daily driver. Watching the community grow from a handful of contributors to 100 monthly has been astonishing. I have learned a lot from this community and am proud to be a part of it. Can't wait to see what the next decade brings. Happy birthday Guix!

Mathieu Othacehe

I was introduced to GNU Guix by a colleague, Clément Lassieur, in 2016. At first I found the concept overwhelming: writing Guile wrappers for each and every Linux service out there and keeping them up to date seemed impossible. However, I quickly fell in love with the package manager, the distribution, and the community behind them. A few months later, GNU Guix was running on all my machines and I started hacking on the continuous integration tool: Cuirass.

Since then GNU Guix has been an important part of my life. I wrote most of the Guix System installer while traveling by bike to China in 2018. During the 2020 lockdown, I worked with janneke on the new image API and the Hurd port. At that time, I was offered a co-maintainer position on the project. In 2021, thanks to an NGI sponsorship, I dedicated 6 months to improving our continuous integration process and overall substitute coverage.

Recently it has been harder to dedicate as much effort to the project, but I'm sure this is a transient phase. I can't wait to start working again with the incredibly talented people who make this piece of software so special to me.

Paul Garlick

I began using and contributing to the Guix project in 2016. I had been searching for a way to preserve software environments that are used for numerical simulation. The applications that run in these environments often comprise a combination of specialised code and building blocks drawn from an underlying framework. There are many moving parts, and changes to a low-level library can block the operation of the high-level application. How much better things would be if one could specify the exact components of the environment and re-create it whenever it is needed. I discovered that Guix provides the machinery to do just that. Scheme was new to me then, so I had some learning to do before contributing. This included a detour via Vonnegut/Cat's Cradle, of course, to discover the meaning of ice-9. Suitably informed, I returned to add a number of finite volume and finite element frameworks to the Guix package collection. Keeping these packages up-to-date and welcoming new simulation-related packages is the next target. Looking ahead to the next ten years, an important task is to popularise the use of the Guix tools. Many more engineers and scientists stand to benefit from the dependable software environments that are now made possible.

Ricardo Wurmus

In 2014 I became responsible for building and installing scientific software at the Max Delbrück Centre, a research institute in Berlin. We used CentOS, so I built custom RPMs, installing applications to ad-hoc prefix directories. After a few weeks I took a minute to consider the horrific implications of maintaining a growing collection of custom software with RPM. As I tried to remember what life choices had led me to this moment, I recalled an announcement email of a quirky GNU package manager written in Scheme. A short web search later I was playing around with Guix.

After an encouraging chat on IRC I realized that I could probably replace our custom RPM repository and build different variants of scientific software on much more solid ground—all the while contributing to a project that felt like a new and exciting take on GNU. We're building the GNU system!

Guix only had very few of the packages I needed, so I got busy. I packaged and bootstrapped the JDK because I was under the mistaken assumption that I would need it for R (turns out Java is optional). Many more foolish adventures followed, and some of them have actually been useful for others.

I had found my tribe of fellow hackers who cared about the vision of the GNU system, encouraged playful experimentation, and were rooting for each other to succeed in building a better system that made software freedom a practical reality, blurring the lines between developers and users. In the decades to come I hope many more people will get to experience what I did and end up calling this community their home.

Simon Tournier

Back in 2014, I watched the video “Growing a GNU with Guix” at FOSDEM, but the real revelation came in 2015 with “GNU Guix: The Emacs of Distros”, again at FOSDEM. After that I was following the development but not yet using Guix. In 2016 I started a new job where I was spending my time fighting against dependencies and Modulefiles. Then I jumped fully into Guix in December 2018. My first interaction with the project — while not yet running Guix — was an in-person event in Paris before the Reproducible Builds workshop. Back home, I proofread the French manual cover to cover — my first contribution — and installed Guix on top of my Debian GNU/Linux system. So amazing! Guix fixes many issues I had at work — and introduces new ones^W challenges. Plus, thanks to the people around me, I am learning a lot, both about technical details and about inter-personal interactions. My wish for the near future is a more structured community: more events and meetups, more process for smoothing contributions (“teams” to share the review load, RFCs for discussing new features, regular releases, etc.), and more materials for using Guix in various configurations.

In a scientific context, transparency — being able to audit the whole computational environment, from the source code to the production of binaries — is one of the keys to truly reproducible research. Since Guix is transparent by design, it appears to me to be one part of a solution for tackling the computational side of the replication crisis. For the near future, I hope more scientific practitioners will adopt Guix.

Thiago Jung Bauermann

I learned about Guix when I was looking for alternative, safe ways of installing an up-to-date Rust toolchain on my machine (at the time rustup didn't verify signatures of downloaded binaries, and it still doesn't do the full job). Guix is a great way to have the latest and greatest software on top of your slower-moving Linux distribution. I love how easy it makes it to create instant, ad hoc environments with the packages you need for a specific task. Or to temporarily try out some new app or tool, leaving Guix to garbage-collect it and its dependencies. The Guix community is amazing as well! It's a pleasure to participate on the mailing lists. And I've been enjoying learning Scheme! For the future, I hope Guix can get even better test coverage so that every update of the master branch is guaranteed not to introduce regressions. And that the project gets more committers, to help with the constant influx of patches.

raingloom

There are multiple reasons I started using Guix. On the tech side, I'd been playing around with 9front for a while at that time, but kept running into issues where the imperative structure of namespaces was getting in my way. I like Haskell a lot and heard about the many benefits of a pure functional approach to build systems, et cetera. I ran Guix on top of Arch for a while and liked it a lot. Using package transformations still feels magical. But yall already know about this cool stuff from Ambrevar's blog post. On the social side, I saw that one of my favorite compsci people — Christine Lemmer Webber — was involved with the project, so I knew it probably has a nice community, which turned out to be very true. This is one of the best communities centered around a piece of tech that I've been in and yall inspire me to be better with each interaction. Huge thank you for that. My favorite Guix memory is when someone CC'd me in a patch for the egg importer, which was built on top of the Chicken Scheme build system I contributed. Seeing others build on top of my work is an amazing feeling. For the future, I hope the service management improvements will keep coming, but what I'd like to see the most is Guix running on old and slow devices. There is a lot of work to be done to make it more bandwidth and space efficient and to support development on systems with little RAM. If I could use it instead of PostmarketOS/Alpine, I'd be elated. On the human side, I hope we can keep contributors from burnout, while increasing the software's quality. I think the way Blender development is structured could be a source of inspiration. On that note, using Guix for reproducible art workflows would be rad. Okay that's it, bye yall lovely people.

Vagrant Cascadian

I think I first heard of Guix in 2016, triggering a late-night session trying to wrap my head around the crazy symlink farms at the heart of Guix. By late 2017 I was filing bug reports and eventually patches!

I am deeply fascinated that Guix has Reproducible Builds built right in, with normalized, containerized build environments and the "guix challenge" tool to verify reproducibility. I had heard of Nix as an interesting model, but valued the strong commitment to Free Software with Guix.

Eventually I even grew crazy enough to package Guix in Debian... which indirectly led to one of my most creative contributions to a Free Software project, a typo poem embedded in!

I really appreciate the community around Guix and the process, values and thoughtfulness that work proactively to maintain a healthy community, even in the face of inevitable and occasional conflict. Guix balances formal and informal in a way that works for me.

I look forward to the day when Guix has a full source bootstrap!

10 Years of Guix artwork by Luis Felipe.

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the Hurd or the Linux kernel, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, AArch64 and POWER9 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

by Guix Hackers at Monday, April 18, 2022

Mark Damon Hughes

A Schemer in Common Lisp Land

I got my Land of Lisp tshirt (shop is now closed), and thought for my weekend goof-off I'd read thru the book again and try actually using the Common Lisp examples; before I'd just converted it on the fly to Scheme, as I usually do with any Lisp stuff. But doing this exposed me to a fairly annoying environment, so I've been writing up my notes, function equivalents, workarounds. And I'll keep updating this page as I do more with it:

I had thought going in, maybe CL would be usable for some projects. But now, I know there's no way I would try to use this in production.

by mdhughes at Monday, April 18, 2022

Wednesday, April 6, 2022

Scheme Requests for Implementation

SRFI 232: Flexible curried procedures

SRFI 232 is now in final status.

Scheme lacks a flexible way to create and apply curried procedures. This SRFI describes curried, a variant of lambda that creates true curried procedures which also behave just like ordinary Scheme procedures. They can be applied to their arguments one by one, all at once, or anywhere in between, without any novel syntax. curried also supports nullary and variadic procedures, and procedures created with it have predictable behavior when applied to surplus arguments.

by Wolfgang Corcoran-Mathe at Wednesday, April 6, 2022

Saturday, April 2, 2022

Gauche Devlog

Running gosh without installing

Recently I wrote some test scripts in Gauche for a project that didn't use Gauche in particular. I could've kicked the get-gauche script during make check to install Gauche locally as needed, but that seemed like overkill, especially since it was just for small test scripts.

Then I thought, well, I already have a Docker image. If I can feed a local script to it...

So here it is. I included in a Docker image a small script, gosh-script, which chdirs into /home/app and runs gosh. If you mount the local cwd on /home/app, the scripts, libraries and data in it are all visible to gosh in the Docker container:

docker run --rm -ti -v `pwd`:/home/app practicalscheme/gauche gosh-script TEST-SCRIPT

Or, you can use run-gosh-in-docker.sh script.

You can't access local resources other than the filesystem below the current directory, and you can't use extra libraries. But for simple tasks this is enough.

See README in Gauche-docker-image for the details.

Tag: Docker

Saturday, April 2, 2022

Friday, April 1, 2022

Jeremy Kun

Cocktails

It’s April Cools! We’re taking back April Fools.

When I was younger I had a strange relationship with alcohol, not because of any sort of trauma, but because I found it decidedly boring and disgusting to the taste. I didn’t drink in high school, didn’t enjoy parties in college, and didn’t care for tailgating or other sports-based events where drinking was common. I also never enjoyed wine—red is too tannic, white is just meh—and almost all beer tastes awful to me (lambic is a delightful exception).

This attitude may have been largely because (in America) young drinkers are idiots and the goal is to get plastered quickly and cheaply. Underage drinkers in the US only have access to cheap liquor, and sugary sodas are the standard mixer. People who grow up in these circumstances largely seem to acclimate to the idea that alcoholic drinks should taste awful.

With this attitude, I was surprised to find myself drawn to making cocktails.

From what I can remember, the first good cocktail I had was some variation on an old fashioned at a fancy restaurant. It was the kind of drink that is only good if it’s well made. This piqued my interest—and I still love a good old fashioned—but what convinced me to start making cocktails was the variety of flavors and surprising combinations that could be made into a drink.

For example, one of my favorite cocktails to make when trying to impress a friend mixes snap peas, mint, caraway (Brennivin), and yuzu lemon—a milder, Japanese variety of lemon that almost tastes like a mix between a Eureka lemon and a pear—into a bright green drink with a naturally foamy top layer (due to shaking the citrus).

“The Mendel,” my name for this cocktail whose central flavor is snap peas and mint.

But literally any flavor you want can be the star of a cocktail. I make a pecan-pie flavored drink with Rivulet’s amazing liqueur, some aged rum, egg white, and pumpkin-pie spice. I made a drink showcasing hibiscus with a syrup made from hibiscus flowers, cinnamon, allspice, vanilla beans, ginger, and sugar water. I’ve made a smokey carrot cocktail (Mezcal, carrot juice, balsamic vinegar, black pepper), and a mango lassi cocktail (gin, mango liqueur, plain yogurt, simple syrup).

The flavors, in my opinion, largely come from two sources: liqueurs and syrups. Liqueurs often start as purified ethanol spirit that is sweetened and infused with various ingredients, and then watered down to a low-alcohol content, often 15%, not much more than a glass of wine. Many world-famous liqueurs originated as bitter herbal remedies as far back as the 1600’s. This was the case for Chartreuse, named after the Grande Chartreuse monastery in the Chartreuse mountains north of Grenoble, France. These monks originally developed Chartreuse because they were trying to make the “elixir of long life.” Elon Musk should try it. Because Yellow Chartreuse is my absolute favorite liqueur, I made a little pilgrimage to their distillery in Voiron, France.

Elixir Vegetal, the herbal remedy still marketed as a cure-all tonic. I have a bottle as a novelty. They’re kept in a wooden outer bottle to prevent sunlight from making the liquid lose its color.

Anyway, point is liqueurs are delicious, even on their own or with tonic water. And there are liqueurs made from every conceivable fruit, peppers, birch bark, nuts, violet, you name it.

Syrups, on the other hand, allow you to infuse the flavor of anything by boiling it with equal parts water and sugar (or honey, or maple syrup). Tea, peanut butter, port, cardamom, whatever you want to experiment with, you can basically make the flavoring parts yourself and add them to a drink.

Beyond those two, texture is an interesting component for me personally. If you’ve ever had a whiskey sour or a pisco sour, you may know the nice foaminess in those drinks comes from “dry shaking” (shaking without ice) an egg white to emulsify and create a nice foam. It also tends to tone down harsher tannic flavors. You can achieve the same sort of effect with plain yogurt or the brine from a can of chickpeas. And then there are these techniques like “milk washing” and other forms of clarifying spirits, where you use liquids with special proteins to “wash off” the unwanted solids in another drink (like tannic solids in wine).

The rabbit hole goes a whole lot deeper. If you want to get into this and geek out a bit more on the science, check out Dave Arnold’s book Liquid Intelligence. It basically taught me all the advanced techniques I know, and many I have yet to muster the courage to try, like vacuum-infusing solids with liquor, using liquid nitrogen to freeze solids before muddling them, and plunging a RED HOT COPPER POKER into a drink to add flavor.

A page from Liquid Intelligence showing the red-hot copper poker technique.

I also enjoyed The Aviary restaurant’s cocktail book, which included a ton of interesting recipes, along with new ideas like spherification (which I didn’t realize I would hate until after I made it, mainly because I hate boba and it has a similar texture).

At 22 I hadn’t yet figured out why alcohol and drinking culture was unappealing. At 32, I think I finally understand that an activity, hobby, or subject appeals to me principally through its sense of quality and craftsmanship. For the quality of the finished work, yes, but also in the attention to detail, and stewardship of the workspace and tools with which that work is done. Like the apocryphal 30-second Picasso napkin art, I strive to practice my crafts (mostly programming, writing, math) so that quality becomes second nature. Focusing on tools, methods, and patterns of work is critical to that.

A bonus of all this is that cocktails become an easy point of conversation when I am stuck in a drinking-culture setting. I offer to “bartend” small parties of my friends so that I can have something to do besides drink. It gives me an excuse to leave an awkward conversation and a nice excuse to join a new conversation. It usually leaves an impression and people remember me. And it’s the best excuse to get my friends to visit me (I have all the ingredients!) so that I don’t have to go anywhere to enjoy myself.

If you want to get into cocktails but aren’t sure where to get started, I’d recommend getting the minimal tools to make a shaken cocktail (Boston shaker tins, Hawthorne strainer, and a measuring jig), watching a video on how to use the tins to shake a cocktail with ice, then going to fancy restaurants, trying cocktails, writing down the ingredients, and experimenting with recreating them at home. A good default recipe uses 2oz liquor, 3/4 oz – 1 oz sweet stuff (syrups and/or liqueurs), and 1oz citrus juice, shaken over ice. Personally I default to a 1-1 honey-water simple syrup, and add a small pinch of salt to mixtures before I shake them. That’ll give you a solid but flexible base for which choices of particular ingredients cover the gamut of gimlets, daiquiris, sours (all that + egg), margaritas, and more.

Anyhow. The math content will return shortly. Enjoy!

by j2kun at Friday, April 1, 2022

Thursday, March 24, 2022

Jeremy Kun

Silent Duels—Constructing the Solution part 2

Previous posts in this series:

Silent Duels and an Old Paper of Restrepo
Silent Duels—Parsing the Construction
Silent Duels—Constructing the Solution part 1

It’s been three years since the last post in this series, and the reason for the delay is that I got totally stuck on the implementation. I’m publishing this draft article as partial progress until I can find time to work on it again.

If you haven’t read the last post, please do. Code repo. Paper I’m working through. In this post I make progress on implementing a general construction of the optimal strategies from the paper, but the first serious test case fails. At the end I’m stuck, and I’m not sure what to do next to fix it.

Refactoring

Let’s start by refactoring the mess of code from the last post. Now is a good time for that, because our tinkering in the last post gave us confidence that we understand the basic mechanics. The central functions for the generic solution construction are in this file, as of this commit.

We’re using Sympy to represent functions and evaluate integrals, and it helps to name types appropriately. Note: I’ll be using Python type annotations in the code, but the types don’t validate because of missing stubs in sympy. Call it room for improvement.

from dataclasses import dataclass
from typing import NewType
from sympy import Lambda

SuccessFn = NewType('SuccessFn', Lambda)

@dataclass
class SilentDuelInput:
    '''Class containing the static input data to the silent duel problem.'''
    player_1_action_count: int
    player_2_action_count: int
    player_1_action_success: SuccessFn
    player_2_action_success: SuccessFn

Now we turn to the output. Recall, a strategy is a partition of [0,1] into n intervals—where n is the number of actions specified in the input—with a probability distribution for each piece of the partition. This suggests a natural breakdown:

@dataclass
class Strategy:
    '''
    A strategy is a list of action distribution functions, each of which
    describes the probability of taking an action on the interval of its
    support.
    '''
    action_distributions: List[ActionDistribution]

And ActionDistribution is defined as:

@dataclass
class ActionDistribution:
    '''The interval on which this distribution occurs.'''
    support_start: float
    support_end: float

    '''
    The cumulative density function for the distribution.

    May be improper if point_mass > 0.
    '''
    cumulative_density_function: Lambda

    '''
    If nonzero, corresponds to an extra point mass at the support_end.
    Only used in the last action in the optimal strategy.
    '''
    point_mass: float = 0

    t = Symbol('t', nonnegative=True)

    def draw(self, uniform_random_01=DEFAULT_RNG):
        '''Return a random draw from this distribution.

        Args:
         - uniform_random_01: a callable that accepts zero arguments
           and returns a uniform random float between 0 and 1. Defaults
           to using python's standard random.random
        '''
        if self.support_start >= self.support_end:  # treat as a point mass
            return self.support_start

        uniform_random_number = uniform_random_01()

        if uniform_random_number > 1 - self.point_mass - EPSILON:
            return self.support_end

        return solve_unique_real(
            self.cumulative_density_function(self.t) - uniform_random_number,
            self.t,
            solution_min=self.support_start,
            solution_max=self.support_end,
        )

We represent the probability distribution as a cumulative density function F(t), which for a given input a outputs the total probability weight of values in the distribution that are less than a. One reason cumulative density functions are convenient is that they make sampling easy: pick a uniform random a \in [0,1], and then solve F(t) = a. The resulting t is the sampled value.

Cumulative density functions are related to their more often-used (in mathematics) counterpart, the probability density function f(t), which measures the “instantaneous” probability. I.e., the probability of a range of values (a,b) is given by \int_a^b f(t) dt. The probability density and cumulative density are related via F(a) = \int_{-\infty}^a f(t) dt. In our case, the lower bound of integration is finite since we’re working on a fixed subinterval of [0,1].
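
To make that concrete, here is a small self-contained illustration (my own, not from the post): take the density f(t) = 2t on [0,1], so that F(a) = \int_0^a 2t dt = a^2. Solving F(t) = a gives t = \sqrt{a}.

import math
import random

def sample_t_squared_cdf() -> float:
    '''Draw a sample from the distribution with CDF F(t) = t**2 on [0, 1].'''
    a = random.random()   # uniform random in [0, 1)
    return math.sqrt(a)   # the unique t in [0, 1] with F(t) = a

The draw method above does exactly this, except that it uses sympy to solve F(t) = a, because our CDFs are symbolic.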

The helper function solve_unique_real abstracts the work of using sympy to solve an equation that the caller guarantees has a unique real solution in a given range. Its source can be found here. Uniqueness is guaranteed for our cumulative density functions, because they are strictly increasing on the support of the distribution.
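
For reference, here is a minimal sketch of the idea behind solve_unique_real (the real version is in the repo; this sketch assumes sympy’s solve returns all candidate roots and simply filters them to the given range):

from sympy import solve

def solve_unique_real(expr, variable, solution_min, solution_max):
    '''Sketch: solve expr == 0 for variable, expecting exactly one real
    solution in [solution_min, solution_max].'''
    roots = [
        s for s in solve(expr, variable)
        if s.is_real and solution_min <= s <= solution_max
    ]
    assert len(roots) == 1, "expected a unique real solution, got %s" % roots
    return roots[0]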

Finally, the output is a strategy for each player.

@dataclass
class SilentDuelOutput:
    p1_strategy: Strategy
    p2_strategy: Strategy

I also chose to make a little data structure to help maintain the joint progress of building up the partition and normalizing constants for each player.

@dataclass
class IntermediateState:
    '''
    A list of the transition times computed so far. This field
    maintains the invariant of being sorted. Thus, the first element
    in the list is a_{i + 1}, the most recently computed value of
    player 1's transition times, and the last element is a_{n + 1} = 1.
    This value is set on initialization with `new`.
    '''
    player_1_transition_times: Deque[Expr]

    '''
    Same as player_1_transition_times, but for player 2 with b_j and b_m.
    '''
    player_2_transition_times: Deque[Expr]

    '''
    The values of h_i so far, the normalizing constants for the action
    probability distributions for player 1. Has the same sorting
    invariant as the transition time lists.
    '''
    player_1_normalizing_constants: Deque[Expr]

    '''
    Same as player_1_normalizing_constants, but for player 2,
    i.e., the k_j normalizing constants.
    '''
    player_2_normalizing_constants: Deque[Expr]

    @staticmethod
    def new():
        '''Create a new state object, and set a_{n+1} = 1, b_{m+1}=1.'''
        return IntermediateState(
            player_1_transition_times=deque([1]),
            player_2_transition_times=deque([1]),
            player_1_normalizing_constants=deque([]),
            player_2_normalizing_constants=deque([]),
        )

    def add_p1(self, transition_time, normalizing_constant):
        self.player_1_transition_times.appendleft(transition_time)
        self.player_1_normalizing_constants.appendleft(normalizing_constant)

    def add_p2(self, transition_time, normalizing_constant):
        self.player_2_transition_times.appendleft(transition_time)
        self.player_2_normalizing_constants.appendleft(normalizing_constant)

Now that we’ve established the types, we move on to the construction.

Construction

There are three parts to the construction:

  1. Computing the correct \alpha and \beta.
  2. Finding the transition times and normalizing constants for each player.
  3. Using the above to build the output strategies.

Building the output strategy

We’ll start with the last one: computing the output given the right values. The construction is symmetric for each player, so we can have a single function called with different inputs.

def compute_strategy(
        player_action_success: SuccessFn,
        player_transition_times: List[float],
        player_normalizing_constants: List[float],
        opponent_action_success: SuccessFn,
        opponent_transition_times: List[float],
        time_1_point_mass: float = 0) -> Strategy:

This function computes the construction Restrepo gives.

One difficulty is in expressing discontinuous breaks. The definition of f^* is a product over b_j > t, but if a b_j lies inside the action interval, that will cause a discontinuity. I don’t know of an easy way to express the f^* product literally as written in sympy (let me know if you know better), so instead I opted to construct a piecewise function manually, which sympy supports nicely.

This results in the following function for building the final f^* for a single action for one player. The piecewise function is to break the action interval into pieces according to the discontinuities introduced by the opponent’s transition times that lie in the same interval.

def f_star(player_action_success: SuccessFn,
           opponent_action_success: SuccessFn,
           variable: Symbol,
           opponent_transition_times: Iterable[float]) -> Expr:
    '''Compute f^* as in Restrepo '57.
    The inputs can be chosen so that the appropriate f^* is built
    for either player. I.e., if we want to compute f^* for player 1,
    player_action_success should correspond to P, opponent_action_success
    to Q, and larger_transition_times to the b_j.
    If the inputs are switched appropriately, f^* is computed for player 2.
    '''
    P: SuccessFn = player_action_success
    Q: SuccessFn = opponent_action_success

    '''
    We compute f^* as a Piecewise function of the following form:
    [prod_{i=1}^m (1-Q(b_i))] * Q'(t) / Q^2(t)P(t)    if t < b_1
    [prod_{i=2}^m (1-Q(b_i))] * Q'(t) / Q^2(t)P(t)    if t < b_2
    [prod_{i=3}^m (1-Q(b_i))] * Q'(t) / Q^2(t)P(t)    if t < b_3
             .
             .
             .
    [1] *  Q'(t) / Q^2(t) P(t)                        if t >= b_m
    '''
    non_product_term = diff(Q(variable), variable) / (Q(variable)**2 * P(variable))
    piecewise_components = []
    for i, b_j in enumerate(opponent_transition_times):
        larger_transition_times = opponent_transition_times[i:]

        product = 1
        for b in larger_transition_times:
            product *= (1 - Q(b))

        term = product * non_product_term
        piecewise_components.append((term, variable < b_j))

    # last term is when there are no larger transition times.
    piecewise_components.append((non_product_term, True))

    return Piecewise(*piecewise_components)
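
As a quick sanity check of the shape of this function (my own example, not from the post): with P(x) = Q(x) = x and a single opponent transition time b_1 = 1/3, the non-product term is 1/x^3, and the product contributes a factor of (1 - Q(1/3)) = 2/3 below the discontinuity.

from sympy import Lambda, Rational, Symbol

x = Symbol('x', positive=True)
P = Lambda((x,), x)
Q = Lambda((x,), x)

print(f_star(P, Q, x, [Rational(1, 3)]))
# Piecewise((2/(3*x**3), x < 1/3), (x**(-3), True)), up to sympy's printing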

The piecewise components are probability densities, so we can integrate them piecewise to get the cumulative densities. There are a few helpers here to deal with piecewise functions. First, we define a mask_piecewise to modify the sympy-internal representation of a piecewise function to have specified values outside of a fixed interval (the interval on which the action occurs). For a pdf this should be zero, and for a cdf it should be 0 before the action interval and 1 after. There are also some nuanced bits where hidden calls to expr.simplify() allow the integration to happen much faster than otherwise.
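
The real mask_piecewise manipulates the args of the Piecewise expression directly; a behaviorally similar sketch (my own simplification, using a nested Piecewise, which sympy’s piecewise_fold can flatten) looks like this:

from sympy import Piecewise

def mask_piecewise(expr, variable, domain_start, domain_end,
                   before_domain_val=0, after_domain_val=0):
    '''Sketch: clamp expr to fixed values outside [domain_start, domain_end].'''
    return Piecewise(
        (before_domain_val, variable < domain_start),
        (after_domain_val, variable > domain_end),
        (expr, True),
    )

With that, the full computation of a player’s strategy: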

def compute_strategy(
        player_action_success: SuccessFn,
        player_transition_times: List[float],
        player_normalizing_constants: List[float],
        opponent_action_success: SuccessFn,
        opponent_transition_times: List[float],
        time_1_point_mass: float = 0) -> Strategy:
    '''
    Given the transition times for a player, compute the action cumulative
    density functions for the optimal strategy of the player.
    '''
    action_distributions = []
    x = Symbol('x', real=True)
    t = Symbol('t', real=True)

    # chop off the last transition time, which is always 1
    opponent_transition_times = [
        x for x in opponent_transition_times if x < 1
    ]

    pairs = subsequent_pairs(player_transition_times)
    for (i, (action_start, action_end)) in enumerate(pairs):
        normalizing_constant = player_normalizing_constants[i]

        dF = normalizing_constant * f_star(
            player_action_success,
            opponent_action_success,
            x,
            opponent_transition_times,
        )
        piece_pdf = mask_piecewise(dF, x, action_start, action_end)
        piece_cdf = integrate(piece_pdf, (x, action_start, t))
        piece_cdf = mask_piecewise(
            piece_cdf,
            t,
            action_start,
            action_end,
            before_domain_val=0,
            after_domain_val=1
        )

        action_distributions.append(ActionDistribution(
            support_start=action_start,
            support_end=action_end,
            cumulative_density_function=Lambda((t,), piece_cdf),
        ))

    action_distributions[-1].point_mass = time_1_point_mass
    return Strategy(action_distributions=action_distributions)

def compute_player_strategies(silent_duel_input, intermediate_state, alpha, beta):
    p1_strategy = compute_strategy(
        player_action_success=silent_duel_input.player_1_action_success,
        player_transition_times=intermediate_state.player_1_transition_times,
        player_normalizing_constants=intermediate_state.player_1_normalizing_constants,
        opponent_action_success=silent_duel_input.player_2_action_success,
        opponent_transition_times=intermediate_state.player_2_transition_times,
        time_1_point_mass=alpha,
    )
    p2_strategy = compute_strategy(
        player_action_success=silent_duel_input.player_2_action_success,
        player_transition_times=intermediate_state.player_2_transition_times,
        player_normalizing_constants=intermediate_state.player_2_normalizing_constants,
        opponent_action_success=silent_duel_input.player_1_action_success,
        opponent_transition_times=intermediate_state.player_1_transition_times,
        time_1_point_mass=beta,
    )
    return SilentDuelOutput(p1_strategy=p1_strategy, p2_strategy=p2_strategy)

Note that “Lambda” is a sympy-internal anonymous function declaration, which we use here to represent the cumulative density functions as values. A Lambda supports function call notation by overloading __call__, which allows an action distribution’s CDF to be treated as if it were an ordinary function.
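
For example:

from sympy import Lambda, Symbol

t = Symbol('t')
F = Lambda((t,), t**2)
F(0.5)  # evaluates to 0.25, just like an ordinary function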

Finding the transition times

Next we move on to computing the transition times for a given \alpha, \beta. This step is largely the same as in the previous post in this series, but we’re refactoring the code to be simpler and more generic.

The outer loop will construct an intermediate state object, and handle the process that Restrepo describes of “taking the larger parameter” and then computing the next a_i, b_j using the previously saved parameters. There is one caveat that Restrepo misses when he writes, “for definiteness, we assume that a_n > b_m…in the next step…a new b_m is computed…using the single parameter a_n^*.” As best I can tell (and from experimenting with symmetric examples), when the computed transition times are equal, keeping only one and following Restrepo’s algorithm faithfully will result in an inconsistent output. This appears to be a minor mistake: if the transition times are equal, then you must keep them both, and not recompute one using the other as a parameter.

def compute_as_and_bs(duel_input: SilentDuelInput,
                      alpha: float = 0,
                      beta: float = 0) -> IntermediateState:
    '''
    Compute the a's and b's for the silent duel, given a fixed
    alpha and beta as input.
    '''
    t = Symbol('t0', nonnegative=True, real=True)

    p1_index = duel_input.player_1_action_count
    p2_index = duel_input.player_2_action_count
    intermediate_state = IntermediateState.new()

    while p1_index > 0 or p2_index > 0:
        # the larger of a_i, b_j is kept as a parameter, then the other will be repeated
        # in the next iteration; e.g., a_{i-1} and b_j (the latter using a_i in its f^*)
        (a_i, b_j, h_i, k_j) = compute_ai_and_bj(
            duel_input, intermediate_state, alpha=alpha, beta=beta
        )

        # there is one exception, if a_i == b_j, then the computation of f^* in the next
        # iteration (I believe) should not include the previously kept parameter. I.e.,
        # in the symmetric version, if a_n is kept and the next computation of b_m uses
        # the previous a_n, then it will produce the wrong value.
        #
        # I resolve this by keeping both parameters when a_i == b_j.
        if abs(a_i - b_j) < EPSILON and p1_index > 0 and p2_index > 0:
            # use the average of the two to avoid roundoff errors
            transition = (a_i + b_j) / 2
            intermediate_state.add_p1(float(transition), float(h_i))
            intermediate_state.add_p2(float(transition), float(k_j))
            p1_index -= 1
            p2_index -= 1
        elif (a_i > b_j and p1_index > 0) or p2_index == 0:
            intermediate_state.add_p1(float(a_i), float(h_i))
            p1_index -= 1
        elif (b_j > a_i and p2_index > 0) or p1_index == 0:
            intermediate_state.add_p2(float(b_j), float(k_j))
            p2_index -= 1

    return intermediate_state

It remains to compute an individual a_i, b_j pair given an intermediate state. We refactored the loop body from the last post into a generic function that works for both a_n, b_m (which need access to \alpha, \beta) and lower a_i, b_j. With the exception of “simple_f_star” there is nothing new happening here.

def simple_f_star(player_action_success: SuccessFn,
                  opponent_action_success: SuccessFn,
                  variable: Symbol,
                  larger_transition_times: Iterable[float]) -> Expr:
    P: SuccessFn = player_action_success
    Q: SuccessFn = opponent_action_success

    non_product_term = diff(Q(variable), variable) / (Q(variable)**2 * P(variable))

    product = 1
    for b in larger_transition_times:
        product *= (1 - Q(b))

    return product * non_product_term

def compute_ai_and_bj(duel_input: SilentDuelInput,
                      intermediate_state: IntermediateState,
                      alpha: float = 0,
                      beta: float = 0):
    '''
    Compute a pair of a_i and b_j transition times for both players,
    using the intermediate state of the algorithm computed so far.
    This function also computes a_n and b_m when the intermediate_state
    input has no larger transition times for the opposite player. In
    those cases, the integrals and equations being solved are slightly
    different; they include some terms involving alpha and beta. In all
    other cases, the alpha and beta parameters are unused.
    '''
    P: SuccessFn = duel_input.player_1_action_success
    Q: SuccessFn = duel_input.player_2_action_success
    t = Symbol('t0', nonnegative=True, real=True)
    a_i = Symbol('a_i', positive=True, real=True)
    b_j = Symbol('b_j', positive=True, real=True)

    p1_transitions = intermediate_state.player_1_transition_times
    p2_transitions = intermediate_state.player_2_transition_times

    # the left end of the transitions arrays contain the smallest
    # (latest computed) transition time for each player.
    # these are set to 1 for an empty intermediate state, i.e. for a_n, b_m
    a_i_plus_one = p1_transitions[0]
    b_j_plus_one = p2_transitions[0]
    computing_a_n = a_i_plus_one == 1
    computing_b_m = b_j_plus_one == 1

    p1_fstar_parameters = list(p2_transitions)[:-1]  # ignore b_{m+1} = 1
    p1_fstar = simple_f_star(P, Q, t, p1_fstar_parameters)
    # the a_i part
    if computing_a_n:
        p1_integrand = ((1 + alpha) - (1 - alpha) * P(t)) * p1_fstar
        p1_integral_target = 2 * (1 - alpha)
    else:
        p1_integrand = (1 - P(t)) * p1_fstar
        p1_integral_target = 1 / intermediate_state.player_1_normalizing_constants[0]

    a_i_integrated = integrate(p1_integrand, (t, a_i, a_i_plus_one))
    a_i = solve_unique_real(
        a_i_integrated - p1_integral_target,
        a_i,
        solution_min=0,
        solution_max=a_i_plus_one
    )

    # the b_j part
    p2_fstar_parameters = list(p1_transitions)[:-1]  # ignore a_{n+1} = 1
    p2_fstar = simple_f_star(Q, P, t, p2_fstar_parameters)
    if computing_b_m:
        p2_integrand = ((1 + beta) - (1 - beta) * Q(t)) * p2_fstar
        p2_integral_target = 2 * (1 - beta)
    else:
        p2_integrand = (1 - Q(t)) * p2_fstar
        p2_integral_target = 1 / intermediate_state.player_2_normalizing_constants[0]

    b_j_integrated = integrate(p2_integrand, (t, b_j, b_j_plus_one))
    b_j = solve_unique_real(
        b_j_integrated - p2_integral_target,
        b_j,
        solution_min=0,
        solution_max=b_j_plus_one
    )

    # the h_i part
    h_i_integrated = integrate(p1_fstar, (t, a_i, a_i_plus_one))
    h_i_numerator = (1 - alpha) if computing_a_n else 1
    h_i = h_i_numerator / h_i_integrated

    # the k_j part
    k_j_integrated = integrate(p2_fstar, (t, b_j, b_j_plus_one))
    k_j_numerator = (1 - beta) if computing_b_m else 1
    k_j = k_j_numerator / k_j_integrated

    return (a_i, b_j, h_i, k_j)

You might be wondering what is going on with “simple_f_star”. Let me save that for the end of the post, as it’s related to how I’m currently stuck in understanding the construction.

Binary searching for alpha and beta

Finally, we need to binary search for the desired output a_1 = b_1 as a function of \alpha, \beta. Following Restrepo’s claim, we first compute a_1, b_1 using \alpha=0, \beta=0. If a_1 > b_1, then we are looking for a \beta > 0, and vice versa for \alpha.

To facilitate the binary search, I decided to implement an abstract binary search routine (that only works for floating point domains). You can see the code here. The important part is that it abstracts the test (did you find it?) and the response (no, too low), so that we can have the core of this test be a computation of a_1, b_1.
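
The actual routine lives in the linked code; a minimal sketch of its shape (the field names found, tooLow, and value match how it’s used below, while the iteration cap is my own addition) might look like:

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class BinarySearchHint:
    found: bool
    tooLow: bool

@dataclass
class BinarySearchResult:
    found: bool
    value: float

def binary_search(test: Callable[[float], BinarySearchHint],
                  param_min: float,
                  param_max: float,
                  callback: Optional[Callable] = None,
                  max_iterations: int = 100) -> BinarySearchResult:
    '''Bisect [param_min, param_max] until test reports success.'''
    current_min, current_max = param_min, param_max
    tested_value = (current_min + current_max) / 2
    for _ in range(max_iterations):
        tested_value = (current_min + current_max) / 2
        if callback:
            callback({'current_min': current_min,
                      'current_max': current_max,
                      'tested_value': tested_value})
        hint = test(tested_value)
        if hint.found:
            return BinarySearchResult(found=True, value=tested_value)
        if hint.tooLow:
            current_min = tested_value
        else:
            current_max = tested_value
    return BinarySearchResult(found=False, value=tested_value)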

First the non-binary-search bits:

def optimal_strategies(silent_duel_input: SilentDuelInput) -> SilentDuelOutput:
    '''Compute an optimal pair of corresponding strategies for the silent duel problem.'''
    # First compute a's and b's, and check to see if a_1 == b_1, in which case quit.
    intermediate_state = compute_as_and_bs(silent_duel_input, alpha=0, beta=0)
    a1 = intermediate_state.player_1_transition_times[0]
    b1 = intermediate_state.player_2_transition_times[0]

    if abs(a1 - b1) < EPSILON:
        return compute_player_strategies(
            silent_duel_input, intermediate_state, alpha=0, beta=0,
        )

    # Otherwise, binary search for an alpha/beta
    searching_for_beta = b1 < a1

    <snip>

    intermediate_state = compute_as_and_bs(
        silent_duel_input, alpha=final_alpha, beta=final_beta
    )
    player_strategies = compute_player_strategies(
        silent_duel_input, intermediate_state, final_alpha, final_beta
    )

    return player_strategies

The above first checks to see if a search is needed, and in either case uses our previously defined functions to compute the output strategies. The binary search part looks like this:

    if searching_for_beta:
        def test(beta_value):
            new_state = compute_as_and_bs(
                silent_duel_input, alpha=0, beta=beta_value
            )
            new_a1 = new_state.player_1_transition_times[0]
            new_b1 = new_state.player_2_transition_times[0]
            found = abs(new_a1 - new_b1) < EPSILON
            return BinarySearchHint(found=found, tooLow=new_b1 < new_a1)
    else:  # searching for alpha
        def test(alpha_value):
            new_state = compute_as_and_bs(
                silent_duel_input, alpha=alpha_value, beta=0
            )
            new_a1 = new_state.player_1_transition_times[0]
            new_b1 = new_state.player_2_transition_times[0]
            found = abs(new_a1 - new_b1) < EPSILON
            return BinarySearchHint(found=found, tooLow=new_a1 < new_b1)

    search_result = binary_search(
        test, param_min=0, param_max=1, callback=print
    )
    assert search_result.found

    # the optimal (alpha, beta) pair has product zero.
    final_alpha = 0 if searching_for_beta else search_result.value
    final_beta = search_result.value if searching_for_beta else 0

Put it all together

Let’s run the complete construction on some examples. I added a couple of print statements in the code and overloaded __str__ on the dataclasses to help. First we can verify we get the same result as our previous symmetric example:

x = Symbol('x')
P = Lambda((x,), x)
Q = Lambda((x,), x)

duel_input = SilentDuelInput(
    player_1_action_count=3,
    player_2_action_count=3,
    player_1_action_success=P,
    player_2_action_success=Q,
)

print("Input: {}".format(duel_input))
output = optimal_strategies(duel_input)
print(output)
output.validate()

# output is:

Input: SilentDuelInput(player_1_action_count=3, player_2_action_count=3, player_1_action_success=Lambda(_x, _x), player_2_action_success=Lambda(_x, _x))
a_1 = 0.143 b_1 = 0.143
P1:
(0.143, 0.200): dF/dt = Piecewise((0, (t > 0.2) | (t < 0.142857142857143)), (0.083/t**3, t < 0.2))
(0.200, 0.333): dF/dt = Piecewise((0, (t > 0.333333333333333) | (t < 0.2)), (0.13/t**3, t < 0.333333333333333))
(0.333, 1.000): dF/dt = Piecewise((0, (t > 1) | (t < 0.333333333333333)), (0.25/t**3, t < 1))

P2:
(0.143, 0.200): dF/dt = Piecewise((0, (t > 0.2) | (t < 0.142857142857143)), (0.083/t**3, t < 0.2))
(0.200, 0.333): dF/dt = Piecewise((0, (t > 0.333333333333333) | (t < 0.2)), (0.13/t**3, t < 0.333333333333333))
(0.333, 1.000): dF/dt = Piecewise((0, (t > 1) | (t < 0.333333333333333)), (0.25/t**3, t < 1))
Validating P1
Validating. prob_mass=1.00000000000000 point_mass=0
Validating. prob_mass=1.00000000000000 point_mass=0
Validating. prob_mass=1.00000000000000 point_mass=0
Validating P2
Validating. prob_mass=1.00000000000000 point_mass=0
Validating. prob_mass=1.00000000000000 point_mass=0
Validating. prob_mass=1.00000000000000 point_mass=0

This lines up: the players have the same strategy, and the transition times are 1/7, 1/5, and 1/3.

Next up, replace Q = Lambda((x,), x**2), and only have a single action for each player. This should require a binary search, but it will be straightforward to verify manually. I added a callback that prints the bounds during each iteration of the binary search to observe. Also note that sympy integration is quite slow, so this binary search takes a minute or two.

Input: SilentDuelInput(player_1_action_count=1, player_2_action_count=1, player_1_action_success=Lambda(_x, _x), player_2_action_success=Lambda(x, x**2))
a_1 = 0.48109 b_1 = 0.42716
Binary searching for beta
{'current_min': 0, 'current_max': 1, 'tested_value': 0.5}
a_1 = 0.37545 b_1 = 0.65730
{'current_min': 0, 'current_max': 0.5, 'tested_value': 0.25}
a_1 = 0.40168 b_1 = 0.54770
{'current_min': 0, 'current_max': 0.25, 'tested_value': 0.125}
a_1 = 0.41139 b_1 = 0.50000
{'current_min': 0, 'current_max': 0.125, 'tested_value': 0.0625}
a_1 = 0.48109 b_1 = 0.44754
{'current_min': 0.0625, 'current_max': 0.125, 'tested_value': 0.09375}
a_1 = 0.41358 b_1 = 0.48860
{'current_min': 0.0625, 'current_max': 0.09375, 'tested_value': 0.078125}
a_1 = 0.41465 b_1 = 0.48297
{'current_min': 0.0625, 'current_max': 0.078125, 'tested_value': 0.0703125}
a_1 = 0.48109 b_1 = 0.45013
{'current_min': 0.0703125, 'current_max': 0.078125, 'tested_value': 0.07421875}
a_1 = 0.41492 b_1 = 0.48157
{'current_min': 0.0703125, 'current_max': 0.07421875, 'tested_value': 0.072265625}
a_1 = 0.48109 b_1 = 0.45078
{'current_min': 0.072265625, 'current_max': 0.07421875, 'tested_value': 0.0732421875}
a_1 = 0.41498 b_1 = 0.48122
{'current_min': 0.072265625, 'current_max': 0.0732421875, 'tested_value': 0.07275390625}
a_1 = 0.48109 b_1 = 0.45094
{'current_min': 0.07275390625, 'current_max': 0.0732421875, 'tested_value': 0.072998046875}
a_1 = 0.41500 b_1 = 0.48113
{'current_min': 0.07275390625, 'current_max': 0.072998046875, 'tested_value': 0.0728759765625}
a_1 = 0.48109 b_1 = 0.45098
{'current_min': 0.0728759765625, 'current_max': 0.072998046875, 'tested_value': 0.07293701171875}
a_1 = 0.41500 b_1 = 0.48111
{'current_min': 0.0728759765625, 'current_max': 0.07293701171875, 'tested_value': 0.072906494140625}
a_1 = 0.41500 b_1 = 0.48110
{'current_min': 0.0728759765625, 'current_max': 0.072906494140625, 'tested_value': 0.0728912353515625}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.0728759765625, 'current_max': 0.0728912353515625, 'tested_value': 0.07288360595703125}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288360595703125, 'current_max': 0.0728912353515625, 'tested_value': 0.07288742065429688}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288360595703125, 'current_max': 0.07288742065429688, 'tested_value': 0.07288551330566406}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288551330566406, 'current_max': 0.07288742065429688, 'tested_value': 0.07288646697998047}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288646697998047, 'current_max': 0.07288742065429688, 'tested_value': 0.07288694381713867}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288646697998047, 'current_max': 0.07288694381713867, 'tested_value': 0.07288670539855957}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288670539855957, 'current_max': 0.07288694381713867, 'tested_value': 0.07288682460784912}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288670539855957, 'current_max': 0.07288682460784912, 'tested_value': 0.07288676500320435}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288670539855957, 'current_max': 0.07288676500320435, 'tested_value': 0.07288673520088196}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288673520088196, 'current_max': 0.07288676500320435, 'tested_value': 0.07288675010204315}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288675010204315, 'current_max': 0.07288676500320435, 'tested_value': 0.07288675755262375}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288675755262375, 'current_max': 0.07288676500320435, 'tested_value': 0.07288676127791405}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288675755262375, 'current_max': 0.07288676127791405, 'tested_value': 0.0728867594152689}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288675755262375, 'current_max': 0.0728867594152689, 'tested_value': 0.07288675848394632}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288675755262375, 'current_max': 0.07288675848394632, 'tested_value': 0.07288675801828504}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288675801828504, 'current_max': 0.07288675848394632, 'tested_value': 0.07288675825111568}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288675825111568, 'current_max': 0.07288675848394632, 'tested_value': 0.072886758367531}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288675825111568, 'current_max': 0.072886758367531, 'tested_value': 0.07288675830932334}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288675830932334, 'current_max': 0.072886758367531, 'tested_value': 0.07288675833842717}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288675833842717, 'current_max': 0.072886758367531, 'tested_value': 0.07288675835297909}
a_1 = 0.48109 b_1 = 0.48109
a_1 = 0.48109 b_1 = 0.48109
P1:
(0.481, 1.000): dF/dt = Piecewise((0, (t > 1) | (t < 0.481089134572278)), (0.38/t**4, t < 1))

P2:
(0.481, 1.000): dF/dt = Piecewise((0, (t > 1) | (t < 0.481089134572086)), (0.35/t**4, t < 1)); Point mass of 0.0728868 at 1.000
Validating P1
Validating. prob_mass=1.00000000000000 point_mass=0
Validating P2
Validating. prob_mass=0.927113241647021 point_mass=0.07288675835297909

This passes the sanity check of the output distributions having probability mass 1. P2 should also have a point mass at the end, because P2’s action success function is Q(x) = x^2, which has less weight at the beginning and increases sharply at the end. This gives P2 a disadvantage, and pushes their action probability toward the end. According to Restrepo’s theorem, it is optimal for P2 to wait until the end about 7% of the time to guarantee a perfect shot. We can work through the example by hand, and turn the result into a unit test.

Note that we haven’t verified this example is correct by hand. We’re just looking at some sanity checks at this point.
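
As a placeholder, a regression test that merely pins the current output (my sketch; the numbers come from the log above, not from a hand verification) might look like:

from sympy import Lambda, Symbol

def test_single_action_asymmetric_duel():
    x = Symbol('x')
    duel_input = SilentDuelInput(
        player_1_action_count=1,
        player_2_action_count=1,
        player_1_action_success=Lambda((x,), x),
        player_2_action_success=Lambda((x,), x**2),
    )
    output = optimal_strategies(duel_input)
    p1_action = output.p1_strategy.action_distributions[0]
    p2_action = output.p2_strategy.action_distributions[0]
    # both players start acting at the same transition time
    assert abs(p1_action.support_start - 0.48109) < 1e-4
    assert abs(p2_action.support_start - 0.48109) < 1e-4
    # P2 waits until t = 1 about 7% of the time
    assert abs(p2_action.point_mass - 0.0728868) < 1e-5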

Where it falls apart

At this point I was feeling pretty good, and then the following example shows my implementation is broken:

x = Symbol('x')
P = Lambda((x,), x)
Q = Lambda((x,), x**2)

duel_input = SilentDuelInput(
    player_1_action_count=2,
    player_2_action_count=2,
    player_1_action_success=P,
    player_2_action_success=Q,
)

print("Input: {}".format(duel_input))
output = optimal_strategies(duel_input)
print(output)
output.validate(err_on_fail=False)

The output shows that the resulting probability distribution does not have a total probability mass of 1. I.e., it’s not a distribution. Uh oh.

Input: SilentDuelInput(player_1_action_count=2, player_2_action_count=2, player_1_action_success=Lambda(_x, _x), player_2_action_success=Lambda(x, x**2))
a_1 = 0.34405 b_1 = 0.28087
Binary searching for beta
{'current_min': 0, 'current_max': 1, 'tested_value': 0.5}
a_1 = 0.29894 b_1 = 0.45541
{'current_min': 0, 'current_max': 0.5, 'tested_value': 0.25}
a_1 = 0.32078 b_1 = 0.36181
{'current_min': 0, 'current_max': 0.25, 'tested_value': 0.125}
a_1 = 0.32660 b_1 = 0.34015
{'current_min': 0, 'current_max': 0.125, 'tested_value': 0.0625}
a_1 = 0.34292 b_1 = 0.29023
{'current_min': 0.0625, 'current_max': 0.125, 'tested_value': 0.09375}
a_1 = 0.33530 b_1 = 0.30495
{'current_min': 0.09375, 'current_max': 0.125, 'tested_value': 0.109375}
a_1 = 0.32726 b_1 = 0.33741
{'current_min': 0.09375, 'current_max': 0.109375, 'tested_value': 0.1015625}
a_1 = 0.32758 b_1 = 0.33604
{'current_min': 0.09375, 'current_max': 0.1015625, 'tested_value': 0.09765625}
a_1 = 0.32774 b_1 = 0.33535
{'current_min': 0.09375, 'current_max': 0.09765625, 'tested_value': 0.095703125}
a_1 = 0.33524 b_1 = 0.30526
{'current_min': 0.095703125, 'current_max': 0.09765625, 'tested_value': 0.0966796875}
a_1 = 0.33520 b_1 = 0.30542
{'current_min': 0.0966796875, 'current_max': 0.09765625, 'tested_value': 0.09716796875}
a_1 = 0.32776 b_1 = 0.33526
{'current_min': 0.0966796875, 'current_max': 0.09716796875, 'tested_value': 0.096923828125}
a_1 = 0.32777 b_1 = 0.33522
{'current_min': 0.0966796875, 'current_max': 0.096923828125, 'tested_value': 0.0968017578125}
a_1 = 0.33520 b_1 = 0.30544
{'current_min': 0.0968017578125, 'current_max': 0.096923828125, 'tested_value': 0.09686279296875}
a_1 = 0.32777 b_1 = 0.33521
{'current_min': 0.0968017578125, 'current_max': 0.09686279296875, 'tested_value': 0.096832275390625}
a_1 = 0.32777 b_1 = 0.33521
{'current_min': 0.0968017578125, 'current_max': 0.096832275390625, 'tested_value': 0.0968170166015625}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.0968017578125, 'current_max': 0.0968170166015625, 'tested_value': 0.09680938720703125}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.0968017578125, 'current_max': 0.09680938720703125, 'tested_value': 0.09680557250976562}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.0968017578125, 'current_max': 0.09680557250976562, 'tested_value': 0.09680366516113281}
a_1 = 0.33520 b_1 = 0.30544
{'current_min': 0.09680366516113281, 'current_max': 0.09680557250976562, 'tested_value': 0.09680461883544922}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680366516113281, 'current_max': 0.09680461883544922, 'tested_value': 0.09680414199829102}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680366516113281, 'current_max': 0.09680414199829102, 'tested_value': 0.09680390357971191}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680366516113281, 'current_max': 0.09680390357971191, 'tested_value': 0.09680378437042236}
a_1 = 0.33520 b_1 = 0.30544
{'current_min': 0.09680378437042236, 'current_max': 0.09680390357971191, 'tested_value': 0.09680384397506714}
a_1 = 0.33520 b_1 = 0.30544
{'current_min': 0.09680384397506714, 'current_max': 0.09680390357971191, 'tested_value': 0.09680387377738953}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384397506714, 'current_max': 0.09680387377738953, 'tested_value': 0.09680385887622833}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384397506714, 'current_max': 0.09680385887622833, 'tested_value': 0.09680385142564774}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384397506714, 'current_max': 0.09680385142564774, 'tested_value': 0.09680384770035744}
a_1 = 0.33520 b_1 = 0.30544
{'current_min': 0.09680384770035744, 'current_max': 0.09680385142564774, 'tested_value': 0.09680384956300259}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384770035744, 'current_max': 0.09680384956300259, 'tested_value': 0.09680384863168001}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384770035744, 'current_max': 0.09680384863168001, 'tested_value': 0.09680384816601872}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384770035744, 'current_max': 0.09680384816601872, 'tested_value': 0.09680384793318808}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384770035744, 'current_max': 0.09680384793318808, 'tested_value': 0.09680384781677276}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384770035744, 'current_max': 0.09680384781677276, 'tested_value': 0.0968038477585651}
a_1 = 0.33520 b_1 = 0.30544
{'current_min': 0.0968038477585651, 'current_max': 0.09680384781677276, 'tested_value': 0.09680384778766893}
a_1 = 0.33520 b_1 = 0.33520
a_1 = 0.33520 b_1 = 0.33520
deque([0.33520049631043414, 0.45303439059299566, 1])
deque([0.33520049631043414, 0.4897058985296734, 1])
P1:
(0.335, 0.453): dF/dt = Piecewise((0, t < 0.335200496310434), (0.19/t**4, t < 0.453034390592996), (0, t > 0.453034390592996))
(0.453, 1.000): dF/dt = Piecewise((0, t < 0.453034390592996), (0.31/t**4, t < 0.489705898529673), (0.4/t**4, t < 1), (0, t > 1))

P2:
(0.335, 0.490): dF/dt = Piecewise((0, t < 0.335200496310434), (0.17/t**4, t < 0.453034390592996), (0.3/t**4, t < 0.489705898529673), (0, t > 0.489705898529673))
(0.490, 1.000): dF/dt = Piecewise((0, t < 0.489705898529673), (0.36/t**4, t < 1), (0, t > 1)); Point mass of 0.0968038 at 1.000
Validating P1
Validating. prob_mass=0.999999999999130 point_mass=0
Validating. prob_mass=1.24303353980824 point_mass=0   INVALID
Probability distribution does not have mass 1: (0.453, 1.000): dF/dt = Piecewise((0, t < 0.453034390592996), (0.31/t**4, t < 0.489705898529673), (0.4/t**4, t < 1), (0, t > 1))
Validating P2
Validating. prob_mass=1.10285404591706 point_mass=0   INVALID
Probability distribution does not have mass 1: (0.335, 0.490): dF/dt = Piecewise((0, t < 0.335200496310434), (0.17/t**4, t < 0.453034390592996), (0.3/t**4, t < 0.489705898529673), (0, t > 0.489705898529673))
Validating. prob_mass=0.903196152212331 point_mass=0.09680384778766893

What’s fundamentally different about this example? The clearest difference I can identify is that this is the simplest example in which one of player 1’s action intervals contains a player 2 transition time in its interior. It’s this action:

(0.453, 1.000): dF/dt = Piecewise((0, t < 0.453034390592996), (0.31/t**4, t < 0.489705898529673), (0.4/t**4, t < 1), (0, t > 1))
...
Validating. prob_mass=1.24303353980824 point_mass=0   INVALID

This is where the discontinuity of f^* actually matters. In the previous examples, either there was only one action and by design the starting times of the action ranges were equal, or the game was symmetric, so the players had the same action range endpoints.

In my first implementation, I had actually ignored the discontinuities entirely, and because the game was symmetric it didn’t impact the output distributions. This is what’s currently in the code as “simple_f_star.” Neither player’s action transitions fell inside the bounds of any of the other player’s action ranges, and so I missed that the discontinuity was important.

In any event, in the three years since I first worked on this, I haven’t been able to figure out what I did wrong. I probably won’t come back to this problem for a while, and in the meantime perhaps some nice reader has the patience to figure it out. You can see a log of my confusion in this GitHub issue, along with some of the strange workarounds I tried to get it to work.

by j2kun at Thursday, March 24, 2022

Monday, March 21, 2022

Idiomdrottning

Types are useful

I tried to design a dynamically typed lisp with maximal applicability of procedures—as in, “reverse” should be able to reverse a string, a vector, or a list, as opposed to having “string-reverse” or “vector-reverse” defined separately. I found that in order to implement that, I needed an isa-hierarchy of types (or, rather, derive an isa-graph from a tag cluster, kinda duck typing style). For example, strings, vectors, and lists are all “sequences”. So, in that sense, types are great.

In order to not have to deal with types (as much) on the app level, I really do need to do them carefully (at the primitives level). So types aren’t useless. I still hate them though, and languages that make it a point to do everything through types. Dependent types are the worst.

I don’t wanna type annotate and I don’t wanna limit procedures to only compile on certain types. I don’t want the compiler to prevent you from taking your car to the hair salon.

by Idiomdrottning (sandra.snan@idiomdrottning.org) at Monday, March 21, 2022

Scheme Requests for Implementation

SRFI 205: POSIX Terminal Fundamentals

SRFI 205 is now in withdrawn status.

This SRFI describes procedures for command-line and terminal interface programs to safely change and reset terminal modes, for example from cooked to raw and back, and for serial-line device manipulation for interfacing with embedded hardware and the like.

It is intended to provide all the termios structure functionality a modern Scheme programmer might desire by supplying a stty procedure, and simple abstractions on top of it.

by John Cowan and Harold Ancell at Monday, March 21, 2022

GNU Guix

Keeping one’s home tidy

How much effort does it take to recreate your work environment when you switch to a new machine? What would it take to roll back to your previous environment once you've noticed a program no longer behaves as expected? What about sharing your environment with friends of yours? These are some of the things that Guix Home, which landed in Guix as a “technology preview” in September 2021, aims to make effortless, reliable, and fun.

In a nutshell, Guix Home brings the fully declarative configuration of Guix System to home directories. With Guix System, users and administrators provide a configuration file that defines the operating system configuration; with Guix Home, users provide a configuration file that defines the configuration of their work environment in their home directory—their home environment. That configuration is meant to stand alone, to describe all the relevant aspects of your work environment. But what exactly goes in a home environment?

“Dot files” don’t live in a vacuum

Among seasoned Unix-style users, we often equate “home environment” with “dot files”—configuration files in our home directory, from ~/.bashrc and ~/.ssh/config to ~/.emacs and everything under ~/.config. These files are precious and many store them under version control, to keep track of changes made to their configuration. That’s a good idea, but is that all it takes to describe the home environment? To roll back to a previous version?

Of course not. Dot files don’t exist in a vacuum; at the very least, your home environment is not just a set of dot files, but also a set of installed packages. They work together: if you upgrade a package, the corresponding dot file might need to be adjusted; if a package is missing, its dot file is not of any use. Sometimes a home environment contains additional things: daemons (programs that run in the background), or periodically executed jobs.

Guix Home goes beyond dot files: it lets you declare and instantiate all these aspects that make up your home environment.

Genesis

Guix Home was initially developed by Andrew Tropin as part of the rde project; it was integrated into Guix proper six months ago. I am writing this as an adopter and contributor, but there were a number of earlier adopters and contributors. In fact, despite still being very much under development, the tool has already attracted a number of excited users eager to find a way to keep their home tidy!

The idea of writing down a declaration of your home environment that you can reproduce anytime is a natural followup to everything Guix does—you could already declare a package set in a manifest or even a complete operating system. It had been floating around, in Nix land with Home Manager and in Guix land with the now-defunct Guix Home Manager by Julien Lepiller. The latter was similar to today’s Guix Home, but went one step further by making your home directory read-only—yes, read-only! The main advantage is that it would ensure statelessness—you’d be sure that absolutely all your home configuration is under Guix Home Manager’s control; sub-directories containing mutable data would have to be explicitly declared. The downside is that it raised the barrier to entry: you’d have to either switch entirely, or not use it at all. Guix Home takes a more pragmatic approach and happily coexists with configuration managed “the old way”.

Getting started

To get started, you need a Home configuration file. There’s documentation, but as always, starting from a blank page is a bit intimidating. Instead, you can let guix home import generate an initial config for you:

guix home import ~/src/guix-config

This will create the ~/src/guix-config directory and populate it with a bunch of files, among them home-configuration.scm, which looks something like this:

(use-modules (gnu home)
             (gnu packages)
             (gnu services)
             (guix gexp)
             (gnu home services shells))

(home-environment
 (packages
  (map (compose list specification->package+output)
       (list "emacs-geiser-guile"
             "emacs-geiser"
             "pinentry-emacs"
             "emacs-exwm"
             "gnome-maps"
             "pipe-viewer"
             "emacs"
             "pavucontrol"
             "git"
             "xterm"
             "qemu"
             "openssh")))
 (services
  (list (service home-bash-service-type
                 (home-bash-configuration
                  (aliases
                   '(("grep" . "grep --color=auto")
                     ("ll" . "ls -l")
                     ("ls" . "ls -p --color=auto")
                     ("qemu" . "qemu-system-x86_64 -enable-kvm -m 512")
                     ("rm" . "rm --one-file-system")))
                  (bashrc
                   (list (local-file "/home/charlie/src/guix-config/.bashrc" 
                                     "bashrc")))
                  (bash-profile
                   (list (local-file
                          "/home/charlie/src/guix-config/.bash_profile"
                          "bash_profile"))))))))

guix home import automatically added the packages of ~/.guix-profile to the packages field. Because I’m using Bash, it also added an instance of home-bash-service-type with aliases extracted from my ~/.bashrc; it also made copies of ~/.bashrc and ~/.bash_profile and refers to them.

Now that I have an initial configuration, I can first test it in an isolated container:

guix home container ~/src/guix-config/home-configuration.scm

This command gives an interactive shell in a container where my home environment, as declared in home-configuration.scm, is deployed. There I can see my home directory as it would look if I deployed that home environment “for real”: I can see my ~/.bashrc and co., check that all the declared packages are in $PATH and visible in ~/.guix-home, and so on. And all this is safe: my actual home directory is left unchanged!
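
For instance, a quick sanity check inside that container shell might look like this (an illustrative session, not actual output):

$ ls -l ~/.bashrc        # a symlink into /gnu/store
$ type emacs             # resolves under ~/.guix-home/profile/bin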

Once satisfied with my configuration, I can instantiate it:

guix home reconfigure ~/src/guix-config/home-configuration.scm

At that point, my actual home directory corresponds to that configuration. Some of my dot files are now provided by Guix Home, and thus they’re symbolic links (“symlinks”) to their read-only copy in /gnu/store:

$ ls -l ~/.bashrc ~/.bash_profile
lrwxrwxrwx 1 charlie users 56 Mar  7 15:46 /home/charlie/.bash_profile -> /gnu/store/lpdydssyyxx9n0xvp2jmv7yqgyr2pcg3-bash_profile
lrwxrwxrwx 1 charlie users 50 Mar  7 15:46 /home/charlie/.bashrc -> /gnu/store/kxc0j4i05sib04vf92nr8xxkb8isdfn7-bashrc

But don’t worry: before creating those symlinks, guix home reconfigure created backups of existing files under ~/TIMESTAMP-guix-home-legacy-configs-backup, where TIMESTAMP is a Unix-style timestamp.

And voilà, I have my first Guix Home generation!

$ guix home describe
Generation 1    Mar 07 2022 15:46:20   (current)
  file name: /var/guix/profiles/per-user/charlie/guix-home-1-link
  canonical file name: /gnu/store/qr1c5jpfrj815ncv6yr2lfdgs8nq8kkn-home
  channels:
    guix:
      repository URL: https://git.savannah.gnu.org/git/guix.git
      branch: master
      commit: 3ac1366648f933f7244c2d0b9926f7ba5d92a113
  configuration file: /gnu/store/xfgasfms9rhhigyj7i8za77zpqx6zbhn-configuration.scm

guix home describe shows the provenance tracking we know and love from Guix System: all the info we need to redeploy the same home environment elsewhere, or at a different point in time. It’s also the information guix home reconfigure relies on to make sure you never accidentally downgrade your home environment to an older Guix revision.
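
Keeping generations around also means you can go back. guix home provides generation-management subcommands analogous to those of guix system; for instance:

guix home roll-back

… returns to the previous generation, and guix home switch-generation jumps to a specific one.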

Going further

Alright, at this point, you might be thinking that it’s a lot of fuss and that the “only” benefit over dot files under version control is that guix home also takes care of installing packages. Guix Home really shines once you use higher-level services, and when you start composing services together.

To the example above, in the services field, we can add a service declaration that runs Redshift, a program that adjusts the display color temperature according to the time of day:

(service home-redshift-service-type
         (home-redshift-configuration
          (location-provider 'manual)
          (latitude 35.81)    ;northern hemisphere
          (longitude -0.80))) ;west of Greenwich

The effect is that, as soon as we log in, under Xorg, Redshift will be started in the background as a Shepherd service. The Home-generated ~/.profile takes care of spawning shepherd, which in turn spawns the redshift service:

$ herd status
Started:
 + root
 + redshift

We gained another thing here: a consistent, unified configuration language. Instead of learning Redshift’s configuration file format, we define a home-redshift-configuration record, right in Scheme. Under the hood, that configuration is converted into Redshift’s file format; any error is caught at configuration time, when running guix home reconfigure, and we can be sure that Redshift is passed a valid configuration file.

We can similarly define a periodic mcron job, for example one that updates a GNU Idutils search database (that’s a pretty convenient and speedy way to look for code or documents!):

(simple-service 'idutils home-mcron-service-type
                ;; Every day at 12:15 and 19:15.
                (list #~(job '(next-minute-from (next-hour '(12 19)) '(15))
                             (string-append #$idutils "/bin/mkid \
-o $HOME/.idutils/src.db $HOME/src"))))

Again, guix home creates a Shepherd service that starts mcron with a configuration file containing definitions for periodic jobs, which we can inspect via herd:

$ herd schedule mcron | head -5
Sun Mar 20 19:15:00 2022 +0000
/gnu/store/2d026nan309qkci968k8gpa8fcv9q4mv-idutils-4.6/bin/mkid -o $HOME/.idutils/src.db $HOME/src

Mon Mar 21 12:15:00 2022 +0000
/gnu/store/2d026nan309qkci968k8gpa8fcv9q4mv-idutils-4.6/bin/mkid -o $HOME/.idutils/src.db $HOME/src

Services, composed

If you already use Guix System, all the above certainly looks familiar: Guix Home builds upon the service framework that powers Guix System; Home services are defined in the (gnu home services …) module tree.

That framework lets us define relations among “services”, in a broad sense, and how services extend each other—in the example above, redshift and mcron both extend shepherd by giving it a daemon to take care of. We can see those relations at play by running:

guix home extension-graph home-configuration.scm

… which, for the configuration described above, gives a graph that looks like this:

[Image: extension graph for the home services declared above]

We see redshift, mcron, and shepherd, but we also see lower-level services that guix home instantiates for us, such as the profile service which takes care of deploying packages listed in the packages field under ~/.guix-home/profile. Each arrow denotes a service extension. You can read more (and view more!) about service composition. To satisfy our math and functional-programming geek audience, we should mention that service types and their extension operation form a monoid.
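
To make that last remark concrete, here’s the monoid in miniature (an illustrative sketch, not Guix’s actual implementation; real service extensions are folded over the graph above, but composition by list append obeys the same laws):

;; Extensions compose associatively, and the empty extension
;; is the identity element -- the two monoid laws.
(define (compose-extensions a b) (append a b))
(define empty-extension '())

;; (compose-extensions x empty-extension) = x
;; (compose-extensions empty-extension x) = x
;; (compose-extensions (compose-extensions x y) z)
;;   = (compose-extensions x (compose-extensions y z))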

What’s next?

Let’s be clear: Guix Home is pretty new and chances are that guix home search—the command to search for services by keyword—won’t give you the service you’re looking for. There’s also a bunch of open questions left, such as how to reuse services initially defined for Guix System in cases where they could be equally useful in Guix Home—Syncthing, for example.

But while it’s still a “technology preview”, it’s already a tool that tinkerers can play with and benefit from. Patches adding new services have already been proposed; maybe your favorite service is next? Consider contributing.

With a new release and ten-year anniversary coming up, we’re happy to celebrate with a tool that extends the reach of declarative and reproducible deployment!

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the Hurd or the Linux kernel, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, AArch64 and POWER9 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

by Ludovic Courtès at Monday, March 21, 2022

Idiomdrottning

The worst things about iPad OS

Last September, after 22 years using only FOSS except for driver firmware and video games, I got an iPad. Here are the three worst things about its OS. (The sustainability issue of the difficult-to-repair hardware is a topic for another day.)

Why did I get this? A combination of a few things:

  1. I got frog-boiled by Nintendo. Some NES and GB ROMs seemed harmless, but soon enough I found myself in the boiling pot of the Switch eshop and its crappy OS. The iPad isn’t a far leap from that.
  2. I got sick of the dead-end, open-washed, un-upgradeable world of Android. I had an Android tablet with better specs than the PineTab but with an obsoleted OS. (Also Samsung uses a material I am obviously allergic to. Maybe something in the plastic…?)
  3. Jitsi. I need to use it and it’s too heavy for the DevTerms and Frameworks and Cyberdecks of the world.
  4. I had a pretty bad interaction with a FOSS advocate online, who (unclear why) gave me a very hurtful rant after I mentioned I’ve been on FOSS for two decades. It kinda made me lose heart. I still love the licenses. It’ll take me a while to get on board with the community.

I hate laptops. Desktops were fine when I could be up but since I’ve been stuck in bed for a while, tablets seemed pretty good.

Notifications

This is the number one bad thing.

Unix has had biff since 1980, but on this thing I can’t get a reliable mail notification unless I register every message with Apple? I can’t even. This is the number one cause of stress with this thing. I could relax and focus on my work if I knew that I’d get notified if the world blows up; instead I keep check-check-checking and stress-stress-stressing.

Sideloading

Android sucks but at least it has F-droid, a repo of somewhat crappy but trustworthy software, centrally compiled from cryptographically verified source code (Debian has a similar setup). On iOS, it feels like everyone in the App Store is out to scam you. I still haven’t found a good PDF viewer or text editor, and it’s been six months. This is only the second worst thing because I have a friend who’s a dev and can compile things for me.

In this category, I’ll also put how frustrating it is that some first-party apps are “first class”. On Android, you can change the launcher. Here, I can’t even swap in a better notes or reminders app, because no other notes app can have lock-screen editing, and other reminder apps can’t have this degree of Siri integration.

Closed source

I would’ve thought that this would be the worst thing, but I underestimated how constantly stressful not having notifications or a trustworthy app repo would be. Still, it’s not great. I’ve run into so many crashing bugs and annoyances that I can’t fix. For example, I’m writing the first draft of this essay in the Notes app using the pen and the “swipe to type” mini on-screen keyboard. There’s a neat system-wide feature in iOS where you can use the pen to make edit marks to easily insert or remove space between words, or scribble out unwanted text. But the one place it doesn’t work is where I’d want it the most: in Notes!

Speaking of the swipe-to-type keyboard, it has a very frustrating bug. It uses a bigram-based autocorrect that sometimes changes a word based on the following word, and when it gets this wrong and you delete the result, it doesn’t let you rewrite the original, correct word. The idea seems to be that if you deleted a word, you must mean something else, so it keeps giving you new suggestions for similar gestures; it doesn’t understand that the word I deleted wasn’t my original take (which was correct) but the one autocorrect mistakenly changed it to, post hoc.

Some good things / “Uses this”

I haven’t bought into the iMessage and Apple Mail ecosystem. I spend most of my time in blink, Delta Chat, Notes, and Reminders.

I also miss a really good painting program. CSP is good for line art, but if I could get Krita or MyPaint with this pen and screen… wow ♥

by Idiomdrottning (sandra.snan@idiomdrottning.org) at Monday, March 21, 2022