Building kernel modules on NixOS
NixOS is a favorite of mine. I waste little opportunity to mention it when coworkers lament, or when ideas come up that are not only deftly solved by Nix but solved in a way that's remarkably similar to their own lofty goals or vision. I'm not even going to bother promising more posts about #Nix, #NixOS, and Nixpkgs – they're coming. Whether I want them to or not. Be prepared.
NixOS is a favorite, but it's also quirky as hell.
The #linux distribution isn't your typical one: everything is content addressed, wrapped to deal with that fact, and immutable in nature (ostensibly so). Along with the advantages it brings (more to come, or read the NixOS website), it also brings deficiencies and disadvantages. In my case, and for the focus of this post, this recently bit me when I tried to make a proprietary driver work for the video capture card that I bought: a Blackmagic Design DeckLink 4K Mini Recorder (whoa, that's a stupidly long name).
The driver from the card's company is distributed behind a “click to accept the terms of this license” button and, more disappointingly, with proprietary binary blobs. Yay.
For most [^1] Linux users, this isn't an issue. Companies that “support Linux users” will usually [^1] build their blobs with the same toolchain and shared libraries used by popular distributions. When they don't, I've seen them drop in their own shared libraries and plan on the user having at least a compatible linker and kernel ABI [^2]. There's a fun story here about some ancient IBM TTS system.. for another time. In my case, NixOS being a not-popular nor “typical” distribution, I had issues.
NixOS' kernel sources don't land in `/usr/src`. Its kernel modules don't land in `/lib/modules/$(uname -r)`; they're in `/run/current-system/kernel-modules`. Ugh. The differences don't end there, and they're nuanced. If a linker path was hardcoded into an executable – as proprietary binary blobs tend to have – well, on my system those paths are wrong. I don't install the `.deb`, or the `.rpm`. Typically, I install software my distribution packages with commands like `nix-env -iA nixos.ripgrep`. These commands wind up pulling in runtime dependencies (or build dependencies for a fallback build) and place the “package” in `/nix/store`. This works very well for appropriately licensed and distributed software – but the closed source ones.. they take the cake for adventures to places I didn't contemplate going.
These drivers, admittedly, were not entirely closed source. So? Was it easy? Not a walk in the park, but not too bad. I've dealt with worse on NixOS.
The Linux solution published by the fine folks at Blackmagic Design included a few utilities, helper executables, and, of course, the kernel module for the PCIe card itself. There's a separate SDK vended too – it's needed for building `ffmpeg` with `decklink` support, but that's very out of scope.
Back to the kernel module.
This expression handles a couple things:
- unpacking the presumptuous tarball (it's basically an rpm, but not)
- patching the upstream sources (to actually work.. thanks to Arch Linux maintainers here!)
- building the kernel module (it's a more-or-less standard process)
- ripping out “impure” [^3] or unneeded references (using `nuke-refs`)
I'm not explaining Nix expressions this time around – maybe another go. I'm continuing on through anyway.
```nix
{ stdenv
, fetchpatch
, nukeReferences
, linuxPackages
, kernel ? linuxPackages.kernel
, version
, src
}:

stdenv.mkDerivation {
  name = "blackmagic-${version}-module-${kernel.modDirVersion}";
  inherit version;

  buildInputs = [ nukeReferences ];

  kernel = kernel.dev;
  kernelVersion = kernel.modDirVersion;

  inherit src;

  patches = [
    (fetchpatch {
      name = "fix-get_user_pages-and-mmap_lock.patch";
      url = "https://aur.archlinux.org/cgit/aur.git/plain/02-fix-get_user_pages-and-mmap_lock.patch?h=decklink&id=8f19ef584c0603105415160d2ba4e8dfa47495ce";
      sha256 = "08m4qwrk0vg8rix59y591bjih95d2wp6bmm1p37nyfvhi2n9jw2m";
    })
    (fetchpatch {
      name = "fix-have_unlocked_ioctl.patch";
      url = "https://aur.archlinux.org/cgit/aur.git/plain/03-fix-have_unlocked_ioctl.patch?h=decklink&id=8f19ef584c0603105415160d2ba4e8dfa47495ce";
      sha256 = "0j9p62qa4mc6ir2v4fzrdapdrvi1dabrjrx1c295pwa3vmsi1x4f";
    })
  ];

  postUnpack = ''
    cd */usr/src
    sourceRoot="$(pwd -P)"
  '';

  buildPhase = ''
    cd $sourceRoot/blackmagic-''${version}*/
    # missing some "touch"ed files; make sure they exist for the build.
    touch .bmd-support.o.cmd
    make -C $kernel/lib/modules/$kernelVersion/build modules "M=$(pwd -P)"

    cd $sourceRoot/blackmagic-io-''${version}*/
    # missing some "touch"ed files; make sure they exist for the build.
    touch .blackmagic.o.cmd
    make -C $kernel/lib/modules/$kernelVersion/build modules "M=$(pwd -P)"

    cd $sourceRoot
  '';

  installPhase = ''
    mkdir -p $out/lib/modules/$kernelVersion/misc
    for x in $(find . -name '*.ko'); do
      nuke-refs $x
      cp $x $out/lib/modules/$kernelVersion/misc/
    done
  '';

  meta.platforms = [ "x86_64-linux" ];
}
```
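The `installPhase` above is ordinary shell, so the pattern is easy to try outside a Nix build. Here's a standalone sketch against throwaway paths – the module name and directories are made up for illustration, and `nuke-refs` is left commented out since it only exists inside the build sandbox:

```shell
#!/bin/sh
# Sketch of the installPhase pattern: collect built .ko files into
# $out/lib/modules/$kernelVersion/misc, where NixOS's modprobe will look.
# The fake module and paths below are illustrative, not from the real build.
set -eu

work=$(mktemp -d)
out=$work/out
kernelVersion=5.9.10

# Pretend the kernel build dropped a module somewhere under the source tree.
mkdir -p "$work/src/blackmagic-io"
: > "$work/src/blackmagic-io/blackmagic-io.ko"

mkdir -p "$out/lib/modules/$kernelVersion/misc"
cd "$work/src"
for x in $(find . -name '*.ko'); do
  # nuke-refs "$x"   # strips /nix/store references; only available in-sandbox
  cp "$x" "$out/lib/modules/$kernelVersion/misc/"
done

ls "$out/lib/modules/$kernelVersion/misc"
```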
Building the kernel module was a piece of cake, really. The meat of the “interesting parts” here was a snippet I kept from another round of kernel module hackery (oh, FireWire..):
```shell
make -C $kernel/lib/modules/$kernelVersion/build modules "M=$(pwd -P)"
```
The above line builds the appropriate kernel module with, and for, the provided kernel. That's pretty typical of any given kernel module – in-tree and out-of-tree alike. When the above derivation is realised (ie: built), the derivation (ie: a package, but not in the traditional sense) outputs a kernel module – the `.ko` – such that it'll appear at `/run/current-system/kernel-modules/lib/modules/5.9.10/misc`. `modprobe` is configured on NixOS hosts to use this directory, so all is well!
But, that's not all. I mentioned that there were helper executables. They're needed in order to set up a capture device when the PCI probe matches one, by way of udev. Without the helpers, the device is only halfway initialized – software that's capable of using the correct IOCTLs can't even see the devices until the helpers have a chat with the kernel module.
In full glory, here are the tools. Just the tools. There's still the SDK too..
```nix
{ stdenv
, autoPatchelfHook
, makeWrapper
, libGL, libGLU
, libuuid
, dbus_libs
, alsaLib
, xorg
, fontconfig
, freetype
, glib
, version
, src
, mediaexpress ? null
}:

stdenv.mkDerivation {
  pname = "blackmagic-tools";
  inherit version;
  inherit src mediaexpress;

  dontStrip = true;

  nativeBuildInputs = [ makeWrapper autoPatchelfHook ];
  buildInputs = [ libGL libGLU libuuid dbus_libs freetype fontconfig glib alsaLib ]
    ++ (with xorg; [ libxcb libXrender libICE libX11 libXinerama libXrandr libSM ]);

  postUnpack = ''
    if [[ -s "$mediaexpress" ]]; then
      tar -C $sourceRoot --strip-components=1 -xf $mediaexpress
    fi
  '';

  doBuild = false;

  installPhase = ''
    bins=( $(cd usr/bin; ls) )

    # Add the helpers to bin. They're needed by udev rules and the supporting
    # systemd service.
    bins+=( DesktopVideoNotifier DesktopVideoHelper )

    # Prune vended systemd units and the symlinked bin stubs.
    rm -rf usr/lib/systemd usr/bin

    cp -R usr $out

    # Replace symlinks with wrappers to include dynamically dlopen'd libraries.
    # patchelf may corrupt the executables when adding a static entry that would
    # normally influence the RPATH.
    libBin=$out/lib/blackmagic/DesktopVideo
    for x in "''${bins[@]}"; do
      towrap="$out/lib/blackmagic/MediaExpress/$x"
      if ! [[ -x "$towrap" ]]; then
        towrap="$libBin/$x"
      fi
      makeWrapper $towrap $out/bin/$x \
        --prefix LD_LIBRARY_PATH ':' $out/lib \
        --prefix QT_PLUGIN_PATH ':' $libBin/plugins
    done

    # Need to substitute the executable path in these rules to use the $out/bin
    # path.
    mkdir -p $out/lib/udev/rules.d
    substitute \
      etc/udev/rules.d/55-blackmagic.rules \
      $out/lib/udev/rules.d/55-blackmagic.rules \
      --replace /usr/lib/blackmagic/DesktopVideo/DesktopVideoNotifier \
        $out/lib/blackmagic/DesktopVideo/DesktopVideoNotifier
  '';

  preFixup = ''
    # add $out/lib to the RPATH set on executables to use the bundled version of
    # Qt5.
    runtimeDependencies+=" $out"
  '';
}
```
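One detail worth calling out: `substitute` is a nixpkgs stdenv helper, not a standard tool. Outside a Nix build, the same rewrite is a plain search-and-replace. Here's a sketch with `sed`, using an abridged stand-in for the vendor's rules file and a made-up store path – both are illustrative, not the verbatim contents:

```shell
#!/bin/sh
# Rewrite the hardcoded /usr/lib helper path in a udev rule to the Nix store
# output path, mirroring what the derivation's `substitute` call does.
set -eu
work=$(mktemp -d)
out=/nix/store/example-blackmagic-tools   # illustrative store path

# Abridged stand-in for the vendor's 55-blackmagic.rules.
cat > "$work/55-blackmagic.rules" <<'EOF'
ACTION=="add", RUN+="/usr/lib/blackmagic/DesktopVideo/DesktopVideoNotifier"
EOF

sed "s|/usr/lib/blackmagic/DesktopVideo/DesktopVideoNotifier|$out/lib/blackmagic/DesktopVideo/DesktopVideoNotifier|" \
  "$work/55-blackmagic.rules" > "$work/55-blackmagic.rules.new"

cat "$work/55-blackmagic.rules.new"
```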
The tools provided the helpful little viewer and card-setting utilities, right next to `DesktopVideoNotifier` and `DesktopVideoHelper`, which provide the other half of capture card initialization. So, because I was being pragmatic, I lumped these tools together and did “brain surgery” on the executables using `patchelf` (by way of `autoPatchelf`): the bundled shared libraries and expected runtime libraries – which would normally just be present on the popular distributions – were all shoehorned into the executables' RPATH. And then wrapped.
As it turns out, wrappers are a good thing. Nixpkgs uses them liberally to improve the flexibility and reusability of its more build-time-consuming derivations (not as a goal per se). This saves on build time and also disk space – if you wrap the thing you want to customize in a way that permits stubbing or swapping out dependencies, runtime configuration, or plain ol' dependency resolution, then you can easily change your mind later on. If you read through packages in the Nixpkgs repository, you'll come to see that it's not uncommon for packages to be several layers of derivations deep. Oftentimes they're layered specifically to allow users to override some characteristic, or to improve composition of multiple derivations into a final useful environment.
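For the curious, a wrapper produced by `makeWrapper` is just a small shell script. Here's a rough sketch of its shape – simplified, with a fake store path and a stub standing in for the real helper (the real wrapper handles quoting and more flags):

```shell
#!/bin/sh
# Build a makeWrapper-style wrapper by hand: it prefixes LD_LIBRARY_PATH,
# then execs the wrapped program. Paths are illustrative.
work=$(mktemp -d)

# Stand-in for the real executable that the wrapper delegates to.
cat > "$work/DesktopVideoHelper.real" <<'EOF'
#!/bin/sh
echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
EOF
chmod +x "$work/DesktopVideoHelper.real"

# The wrapper itself: prefix the search path, then exec.
cat > "$work/DesktopVideoHelper" <<EOF
#!/bin/sh
export LD_LIBRARY_PATH="/nix/store/example-blackmagic-tools/lib\${LD_LIBRARY_PATH:+:\$LD_LIBRARY_PATH}"
exec "$work/DesktopVideoHelper.real" "\$@"
EOF
chmod +x "$work/DesktopVideoHelper"

# Prints LD_LIBRARY_PATH with the store lib path prefixed.
"$work/DesktopVideoHelper"
```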
You might have noticed that everything that should have connected these pieces up into something usable is missing. In this case, you'll have to deal with it. I returned the capture card after struggling to keep it and the devices plugged into it on the correct output/input formats – on top of the whole “we're NVIDIA and what is a GPL?” thing here in November of 2020, which made CUDA/NVENC workloads non-functional with Linux 5.9.
I've got the code hanging around and arguably should flesh this out more. I used more words than planned anyhow, so this'll be it.
[^1]: I'm not going to even try to quantify this.
[^2]: This is cool stuff – there's lots to read on this topic.
[^3]: Just search for “impure” in the Nix manual and Nixpkgs; impurity is a nuisance to the solution that Nix proposes.