
Julien Malka
PhD Student in Software Supply Chain Security
PhD student at the Polytechnic Institute of Paris researching software supply chain security. Former theoretical CS student at École Normale Supérieure. NixOS Steering Committee member and contributor, working on unattended boot security and maintaining a Proxmox port on NixOS. FOSS enthusiast running a hyper-converged, highly available home cluster. Interested in putting digital skills in the service of the public good.
Available for freelance work in software security, NixOS, or infrastructure projects.
My hopes and plans for the NixOS community
About me
I'm excited to share that I'm running for a seat on the NixOS Steering Committee. Before diving into my motivations, I'd like to take a moment to introduce myself --- especially for those in the community I haven't had the chance to work with yet.
I am 28 years old and currently based in Paris, France. I have been involved in the NixOS community for about four years, with contributions spanning technical, organizational, and academic domains:
Technical work: I am a committer to nixpkgs, where I contribute to maintaining the systemd-boot backend; I am also the author of a port of the Proxmox hypervisor to NixOS.
Community involvement: I have helped organize NixCon 2022, participated in the coordination of the NixOS devroom at FOSDEM in 2023 and 2024, and organized the Paris NixOS meetup;
Academic research: In the context of my PhD, I study the impact of functional package management on the software supply chain and explore possible avenues for improvement.
In addition, over the past five years I have served on the boards of several nonprofit organizations, which has given me experience in governance, collective decision-making, and community leadership.
Does Functional Package Management Enable Reproducible Builds at Scale? Yes.
MSR 2025
ACM SIGSOFT Distinguished Paper Award
Increasing trust in the open source supply chain with reproducible builds and functional package management
ICSE Doctoral Symposium 24
(Updated) Proposal to rebuild the moderation team
This contains a rework of my initial proposal to rebuild the moderation team, informed by the comments and remarks of the other members of the Steering Committee.
The proposal keeps the same spirit: putting all the stakeholders of moderation in the same room to discuss, collaborate, and negotiate on how moderation should be done for the community in the future. The goal was also to strike a balance between the power the bootstrap team has over the SC and vice versa, so that there is a clear incentive for everyone to discuss and seek compromise with the other party.
Here is the text of the proposal, which was approved by the Steering Committee on DATE:
Steering Committee Report - Jan 31 2026
In this report, I'd like to start by wishing everyone a great 2026. At the project level, of course, I hope that this year we will be able to solve some of the conflicts within our community and continue developing great tech, but I also wish every contributor a great year on a personal level!
Work on rebuilding the moderation team
It has been a while since I made a report, and part of the reason for this (besides chronic overwork) is that the matter has progressed very slowly. In my last note, the Steering Committee was on the verge of approving my (updated) proposal to rebuild the moderation team. The proposal was indeed accepted shortly after that note, but the process to move forward after that milestone has since been very slow, for multiple reasons:
end of year holidays have kept all of us pretty occupied;
the communication rate within the Steering Committee is too low.
Steering Committee Report - Dec 5 2025
This part of the report was written before the Steering Committee meeting on Dec 4:
You may have noticed that I have not published an SC report in the last 2 weeks. It's not for lack of personal motivation, or even personal activity. The fact is that the SC has been idle. No meeting happened, nor any particular async activity or discussion around our core focus points, including how to rebuild the moderation team. It has all been… very calm. Which I regret deeply, because I am concerned (like a lot of community members who reached out to me) that failing to resolve the moderation situation effectively enough risks creating deeper issues in the community, and puts strain on Lassulus and all the other contributors affected by the minimal level of moderation that Lassulus can deliver right now.
On a personal level, while I am a bit discouraged by the lack of active discussion, I did try to have informal discussions with some SC members and with the wider community to build a sensible, concrete proposal that I could submit to the other members in order to spark discussion. I submitted it to the SC on Dec 2, but for now it's unclear whether other members are interested in going in that general direction. I plan to call a vote on it after I have given people enough time to submit amendments. As far as I know, there are no other concrete proposals on the table.
This part of the report was written after the Steering Committee meeting on Dec 4:
We had a meeting on Dec 4 that was, in my opinion, very productive. We were able to review my proposal in depth, and it appears to me that, apart from some minor points that I could rework before submitting it for a vote, this way forward could be adopted by the SC. Nobody in the meeting actually blocked the proposal, and I think we are making progress.
As a correction to the last sentence of the first part of this report: cafkafk submitted a formal vote for her vision of how to rebuild the moderation team, involving the SC directly recruiting the new moderators with the guidance of a mediator, which I oppose for the same reasons I laid out in the motivation part of my proposal. Both proposals (cafkafk's and a revised version of mine) are currently being voted on, so there is a good chance meaningful progress will be achieved and published in the next few days.
(Initial) Proposal to rebuild the moderation team
I designed this after discussions with K900, Philip, and other members of the community. My main driver for writing this proposal is that I don't think the SC should directly select the members of the moderation team, for numerous reasons. The most obvious, of course, is that the selection would with high probability become an evaluation of each candidate's politics, with everyone trying to steer the composition towards their own conception of a politically diverse moderation team. This would end up in endless bikeshedding over candidate selection, which I don't want. The other (and no less important) reason is that I feel we are not fit to evaluate the qualities necessary to be a good moderator, since most of us have very little experience with moderation work. Additionally, my intention behind this proposal is to bring all the stakeholders of the "moderation crisis" back to the discussion table and to give them incentives to find common ground for the long-term stability of the SC<>moderation relationship.
Version 1
We create an interim bootstrap moderation team composed of old moderators (from the team that resigned, but also older ones). I was told that finding such old moderators willing to do the job would be possible.
The bootstrap moderation team is in charge of the moderation during its existence.
The bootstrap moderation team has an existence limited in time (4-6 months is a good baseline, I think).
The main goal of the bootstrap moderation team is to perform the recruitment of moderators for the new moderation team.
The newly created moderation team may contain at most one member of the bootstrap moderation team for transmission of knowledge.
Recruitment can be constrained by some guidelines that we decide beforehand: I think for example asking for geographic representativity is reasonable.
The SC may not impose candidates on the new moderation team, but the SC must approve the final moderation team as a whole.
The bootstrap moderation team is also tasked with the creation of a document summarizing the « moderation philosophy », which explains how moderation is carried out in the community, with which goals, which parts of it are human judgement, which are objective, how decisions are taken, etc. (essentially similar to the slides from K900 in terms of content), in order to be transparent about what moderation is and what its goals are. The document will be approved by the SC, and may be updated in the future by agreement of the SC + moderation team. The document is not a new CoC, nor is it a « rulebook ».
Inspired by the Coq/Rocq community, we put in place a regular (every 6 months) check-in meeting between the moderation team and the SC, where the moderation team can present major action points and the estimated state of the community in terms of conflicts, without going into the specifics of decisions. This allows the SC to follow long-term trends and maintain oversight of the moderation team, and eventually adjust policies if needed over the long run.
Steering Committee Report - Nov 16 2025
Frustrations
This second week was filled with a mix of frustration and hope. Frustration because, of course, when I decided to run for SC I was ready to dive into a conflicted entity (which it is), but I didn't expect to encounter so much operational dysfunction. I've been surprised several times by the lack of clear processes, which slows down our work and makes coordination harder than it needs to be. For instance, when I first tried to submit motions for a vote, it wasn't clear how to formally trigger one. Should I wait for a meeting? Could I post in the chat and count 👍/👎 reactions as votes? I asked, but didn't get a clear answer at the time. More generally, it wasn't obvious how the SC conducts its day-to-day operations, which left me confused. I imagine it was also the case for the other members who were elected at the same time as me, because to my knowledge there is not really any internal documentation we can read that helps understand day-to-day operations.
Along the same lines, we also had our first real meeting. Two hours before the meeting, we still had no agenda, and even the meeting time was unclear to some of us. The meeting ended up quite chaotic, and I felt we struggled (myself included) to take turns, follow the agenda, or make meaningful progress on the topics we care about (for example, concrete steps to rebuild the moderation team).
When a governance entity has internal disagreements like ours, every action, every decision, every communication is the outcome of a compromise. The weight of this can only be made bearable by extremely efficient collaboration processes, which I cannot say really exist right now.
Hope
On the other hand, I feel hope is not lost. Given these observations, I have given myself the mission of creating a formal structure (a set of rules) for the operations of the SC. Giving ourselves a set of rules on how to work together has a double benefit: it will help us be more efficient and use our time more respectfully, and it will also be a basis we can pass on to the next SCs! Instead of being confused about how work gets done, as I was when I got elected, they can use these agreed-upon procedures to start getting stuff done. My first proposal was about async voting. Now we have a formal way to trigger async votes and publish results publicly, which I think is a tremendous step forward!
NixOS Steering Committee
Governance body of the NixOS community, created by the NixOS Constitutional Assembly after a community crisis. It has 7 elected members.
Steering Committee Report - Nov 10 2025
One week has passed since I started my 1-year mandate in the NixOS Steering Committee. My most essential concern for this mandate is restoring trust in the project governance and ensuring its long-term sustainability. Towards these goals, I have decided to regularly publish (tentatively on a weekly or bi-weekly basis) an informal report on my actions on the SC, but also on the current state of business: what we are working on, what the blockages are, what my positions on agenda items are, etc. The goal is to help the community understand what exactly the SC is doing, and to do my part in terms of accountability. Community members are of course free to question my actions and positions and to contact me publicly or privately. I'll strive to strike a balance between these reports being able to penetrate the layer of secrecy that SC discussions can have, and maintaining the confidentiality I owe to my fellow SC members by not leaking private conversations. I'll also keep these notes informal, with a lesser degree of preparation than e.g. a blog post. The idea is to reduce the barrier for me to communicate with the community as much as possible.
Voting no-confidence
My first order of business was to decide where to position myself on a potential vote of no-confidence. The matter was raised to me privately by several community members, but also came naturally: the presence on the committee of several members whose resignation has been requested by a substantial number of prominent community members could be a reason to discredit the future actions of this SC. For that reason, my first action once elected was to probe the other newly elected SC members to understand whether this new SC was in a position to achieve productive work or would be deadlocked. My conclusion is that at this point the work can go forward, and that a vote of no-confidence would not pass even if I were to vote in favor, settling the matter.
Transition and access to SC resources
New SC members have been given appropriate access, and discussion has now started on business items on Zulip. We have been given full access to the historical Zulip messages, which I think is a very good thing for the memory of the institution, and I have been able to verify that messages related to the moderation team conflict are present, giving me a high level of confidence that no messages have been deleted prior to the transition.
Starting to work
This is how I approach the beginning of this SC term:
The first priority: finding an exit to the moderation crisis
Artiflakery, an easy way to distribute static Nix flake artifacts
The problem of slides distribution...
As a PhD student, I often give presentations. A lot of presentations. In 2024 alone, I gave about 15 talks in different public venues, each with a different slide deck. As often as I can, I try to make those slides available to the audience in case they want to re-read them after my presentation, but it frequently happens that I forget to do so, for multiple recurring reasons:
I want to hold onto control of my slides a bit longer: they were not perfectly polished before the presentation, or somebody pointed out a typo that I want to correct; in other words, I am fine with sharing the slides, but I'd still like to be able to edit them;
I don't want to just send the files via email because the next time someone else will ask me for those slides, I'll have to duplicate that work again.
The solution to those (procrastination-inducing) problems is extremely simple. I should upload my slides to my website, point people to that link, and if I ever need (or have time) to update the slides, I can simply overwrite the file. So why am I not doing just that? Well, having to re-upload files each time I produce a new version is taxing, and I also like sharing "living" documents that get updated pretty often, so I needed something more automated. A natural solution to this set of constraints is to use continuous integration: let CI build my slides or documents when their source changes and push them somewhere they can be served by a webserver, right? I used this approach successfully for a while, but it was not that satisfying:
The artifacts I want to share are defined in multiple repositories: some public, some private, some where I control CI, some where I don't, hosted on multiple forges, etc. Finding a sufficiently general solution to the problem just became too painful after a while.
I also wanted to add some kind of authentication layer on top of all that. There are some artifacts I want to share easily, but not with everyone. Ideally, I wanted some kind of "per-artifact" authentication.
Time to invent Artiflakery!
This prompted the absolute need for a homegrown solution! Introducing Artiflakery, a webserver for on-the-fly delivery of Nix flake artifacts! The idea is very simple: you define pairs of the form (route1, flakeref1), and upon loading of route1, Artiflakery will serve the artifacts associated with flakeref1. For example, if you define:
/foo/bar/ github:foo/bar
then when a user loads https://artiflakery-domain/foo/bar/, the artifacts corresponding to the default package of the flake in the GitHub repository foo/bar will be served.
Of course, the served artifacts are not really built "on the fly", but rather asynchronously updated when a request comes in.
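The core of that idea fits in a few lines. The following Python snippet is an illustrative sketch, not Artiflakery's actual code; the routes and flake references are made up. It shows route-to-flakeref resolution by longest-prefix match:

```python
# Illustrative route table: URL prefix -> flake reference (hypothetical entries).
ROUTES = {
    "/foo/bar/": "github:foo/bar",
    "/slides/": "github:example/slides",
}

def resolve(path):
    """Return the flakeref whose route is the longest prefix of `path`, or None."""
    matches = [route for route in ROUTES if path.startswith(route)]
    if not matches:
        return None
    # Prefer the most specific (longest) matching route.
    return ROUTES[max(matches, key=len)]

# The server would then (asynchronously) build the flake and serve the
# resulting store path for any request under that route.
print(resolve("/foo/bar/index.html"))
```

In the real webserver, the resolved flakeref would be handed to a builder and the request served from the last successful build output.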
Automatic reloading
How NixOS and reproducible builds could have detected the xz backdoor for the benefit of all
Introduction
In March 2024, a backdoor was discovered in xz, a (de)compression tool that is regularly used at the core of Linux distributions to unpack the source tarballs of packaged software. The backdoor had been covertly inserted by a malicious maintainer under the pseudonym of Jia Tan over a period of three years. This event deeply stunned the open source community, as the attack was both massive in impact (it allowed remote code execution on all affected machines that had ssh installed) and extremely difficult to detect. In fact, it was only thanks to the diligence (and maybe luck) of Andres Freund -- a Postgres developer working at Microsoft -- that catastrophe was avoided: while investigating a seemingly unrelated 500ms performance regression in ssh that he was experiencing on several Debian unstable machines, he was able to trace it back to the liblzma library, identify the backdoor, and document it.
While it was already established that the open source supply chain is often the target of malicious actors, what is stunning is the amount of energy Jia Tan invested to gain the trust of the maintainer of the xz project, acquire push access to the repository, and then, among otherwise perfectly legitimate contributions, insert -- piece by piece -- the code for a very sophisticated and obfuscated backdoor. This should be a wake-up call for the OSS community: we should consider the open source supply chain a high-value target for powerful threat actors, and collectively find countermeasures against such attacks.
In this article, I'll discuss the inner workings of the xz backdoor and how I think we could have mechanically detected it thanks to build reproducibility.
How does the attack work?
The main intent of the backdoor is to allow remote code execution on the target by hijacking the ssh program. To do that, it replaces the behavior of some of ssh's functions (most importantly RSA_public_decrypt) in order to allow an attacker to execute arbitrary commands on a victim's machine when a specific RSA key is used to log in. Two main pieces are combined to install and activate the backdoor:
A script to de-obfuscate and install a malicious object file as part of the xz build process. Interestingly, the backdoor was not comprehensively contained in the source code of xz. Instead, the malicious components were only contained in tarballs built and signed by the malicious maintainer Jia Tan and published alongside releases 5.6.0 and 5.6.1 of xz. This time the additional release tarball contained slight, disguised modifications to extract a malicious object file from the .xz files used as data for some tests contained in the repository.
How reproducible is NixOS?
Venue: FOSDEM 2025
Recording: https://fosdem.org/2025/schedule/event/fosdem-2025-4430-how-reproducible-is-nixos-/
How reproducible is NixOS? In this talk at FOSDEM 2025, I present the results of our large-scale study of bitwise reproducibility in the NixOS distribution, covering 709,816 packages rebuilt from historical snapshots of nixpkgs between 2017 and 2023.
Is NixOS truly reproducible?
Build reproducibility is often considered a de facto feature of functional package managers like Nix. Although the functional package manager model has important assets in the quest for build reproducibility (like the reproducibility of build environments, for example[fn1]), it is clear among practitioners that Nix does not guarantee that all its builds achieve bitwise reproducibility. In fact, it is not complicated to write a Nix package that builds an artifact non-deterministically:
let
  pkgs = import <nixpkgs> { };
in
pkgs.runCommand "random" { } ''
  echo $RANDOM > $out
''
Despite this, build reproducibility has historically been used as a marketing argument by the NixOS community, with the catchphrase "Reproducible builds and deployments" appearing as a headline of the nixos.org page until 2023[fn2]. This situation has even occasionally created tensions with members of the reproducible-builds group, who dedicate a lot of time to contributing patches to compilers and downstream projects to make them bitwise reproducible for everyone, and has prompted blog posts such as "NixOS is not reproducible" by Foxboron.
Furthermore, an objective answer to the question "How good is NixOS at bitwise reproducibility?" is difficult to give, as there exists no reproducibility monitoring at the scale of the Nix package set (nixpkgs), contrary to other Linux distributions like Debian. One of the reasons is that nixpkgs is such a big package set (about 100k packages at the time of writing) that systematically testing for bitwise reproducibility demands huge resources[fn3].
Why is build reproducibility important?
One direct application of reproducible builds is increasing trust in the software supply chain by allowing users to independently verify the trustworthiness of the binaries they download. Indeed, in most typical scenarios, users of Linux distributions will not compile their software directly on their machine, but rather download a pre-compiled version supplied by their distribution. The problem is that the user has to trust that the artifacts they acquire have not been tampered with (for example, if the compilation server is compromised).
When software is reproducible, on the other hand, it becomes possible to compile it locally and verify that the exact same artifacts are obtained, allowing users to build trust in the artifacts distributed by the Linux distribution. It is also possible to delegate this verification to one or several third parties, thereby "distributing" the trust one has in a given artifact.
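With Nix, this verification loop can be sketched using nix-build's --check flag, which rebuilds a derivation locally and fails if the result differs from the store path already present. The commands below are illustrative (the hello attribute is just an example package):

```shell
# Obtain the artifact, typically substituted from the binary cache
nix-build '<nixpkgs>' -A hello

# Rebuild it locally and compare against the existing output;
# the build fails if the two results differ bitwise
nix-build '<nixpkgs>' -A hello --check
```

A third party can run the same check on their own machine and publish the result, which is the basis of the "distributed trust" idea above.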
Science to the rescue!
Portable software bills of materials with Nix and systemd portable services
Venue: All Systems Go 2024
Recording: https://app.media.ccc.de/v/all-systems-go-2024-315-portable-software-bills-of-materials-with-nix-and-systemd-portable-services
A talk about leveraging Nix and systemd portable services to create portable software bills of materials, presented at All Systems Go 2024.
Reproducibility of Build Environments through Space and Time
Authors: J. Malka, S. Zacchiroli, T. Zimmermann
Venue: ICSE-NIER'24
Preprint: https://hal.science/hal-04430009v1
Modern software engineering builds up on the composability of software components, that rely on more and more direct and transitive dependencies to build their functionalities. This principle of reusability however makes it harder to reproduce projects' build environments, even though reproducibility of build environments is essential for collaboration, maintenance and component lifetime. In this work, we argue that functional package managers provide the tooling to make build environments reproducible in space and time, and we produce a preliminary evaluation to justify this claim. Using historical data, we show that we are able to reproduce build environments of about 7 million Nix packages, and to rebuild 99.94% of the 14 thousand packages from a 6-year-old Nixpkgs revision.
Reproducibility of Build Environments through Space and Time
Venue: ICSE 2024
Presentation of our ICSE-NIER'24 paper on how functional package managers provide the tooling to make build environments reproducible in space and time, with a preliminary evaluation using historical Nix data.
Public calendars aggregation using Linkal
Venue: FOSDEM 2024
Recording: https://archive.fosdem.org/2024/schedule/event/fosdem-2024-3443-public-calendars-aggregation-using-linkal/
A talk about public calendar aggregation using Linkal, presented at FOSDEM 2024.
Debug your stage-1 systemd with GDB and the NixOS test framework
Venue: FOSDEM 2024
Recording: https://archive.fosdem.org/2024/schedule/event/fosdem-2024-2784-debug-your-stage-1-systemd-with-gdb-and-the-nixos-test-framework/
A talk about debugging stage-1 systemd using GDB and the NixOS test framework, presented at FOSDEM 2024.