<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Practicing Pragmatism</title>
    <link>https://blog.prag.dev/</link>
    <description></description>
    <pubDate>Sat, 25 Apr 2026 15:33:47 +0000</pubDate>
    <item>
      <title>enable SNMP under rkscli for Ruckus access points</title>
      <link>https://blog.prag.dev/enable-snmp-under-rkscli-for-ruckus-access-points?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Editor&#39;s note: I, the author and editor, was mistaken about the granularity at which the SNMP service is configured in an Unleashed set of APs - a summary is accessible at the primary controller AP IP via SNMP. See posted SNMP guide for your version (eg: https://support.ruckuswireless.com/documents/3435-ruckus-unleashed-200-9-ga-snmp-reference-guide/download)&#xA;&#xA;  The following is here for posterity - it does fiddle the right bits, but not in Unleashed&#xA;&#xA;Found this while trying to get snmp enabled on all my APs rather than just the active controller AP at a given time (the web UI seems to configure the local to it snmp service).&#xA;&#xA;Thanks to: &#xA;&#xA;https://community.ruckuswireless.com/t5/Access-Points-Indoor-and-Outdoor/is-it-possible-to-enable-SNMP-on-Wireless-AP-and-get-it-via-the/td-p/44004&#xA;&#xA;rkscli: get snmp&#xA;SNMP   enable             : disable&#xA;...&#xA;OK&#xA;&#xA;Set up your SNMP details (check help set snmp) and then run the following to enable the service:&#xA;&#xA;rkscli: set remote-mgmt snmp&#xA;OK&#xA;&#xA;Now you&#39;re cooking with snmp.&#xA;&#xA;rkscli: get snmp&#xA;SNMP   enable             : enable&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>Editor&#39;s note: I, the author and editor, was mistaken about the granularity at which the SNMP service is configured in an Unleashed set of APs – a summary is accessible at the primary controller AP&#39;s IP via SNMP. See the posted SNMP guide for your version (e.g. <a href="https://support.ruckuswireless.com/documents/3435-ruckus-unleashed-200-9-ga-snmp-reference-guide/download" rel="nofollow">https://support.ruckuswireless.com/documents/3435-ruckus-unleashed-200-9-ga-snmp-reference-guide/download</a>)</p>

<blockquote><p>The following is here for posterity – it does fiddle the right bits, but not in Unleashed</p></blockquote>

<p>I found this while trying to get SNMP enabled on all my APs rather than just whichever AP happens to be the active controller at a given time (the web UI seems to configure only the SNMP service <em>local to that AP</em>).</p>

<p>Thanks to:</p>

<p><a href="https://community.ruckuswireless.com/t5/Access-Points-Indoor-and-Outdoor/is-it-possible-to-enable-SNMP-on-Wireless-AP-and-get-it-via-the/td-p/44004" rel="nofollow">https://community.ruckuswireless.com/t5/Access-Points-Indoor-and-Outdoor/is-it-possible-to-enable-SNMP-on-Wireless-AP-and-get-it-via-the/td-p/44004</a></p>

<pre><code>rkscli: get snmp
SNMP   enable             : disable
...
OK
</code></pre>

<p>Set up your SNMP details (check <code>help set snmp</code>) and then run the following to enable the service:</p>

<pre><code>rkscli: set remote-mgmt snmp
OK
</code></pre>

<p>Now you&#39;re cooking with SNMP.</p>

<pre><code>rkscli: get snmp
SNMP   enable             : enable
</code></pre>
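
<p>Once the service is up, a quick <code>snmpwalk</code> from another host confirms it end to end. This is just a sketch – the community string and AP address below are placeholders for whatever you configured via <code>set snmp</code>:</p>

<pre><code># assumes the net-snmp tools; replace "public" and the IP with your own values
snmpwalk -v 2c -c public 192.168.0.2 system
</code></pre>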
]]></content:encoded>
      <guid>https://blog.prag.dev/enable-snmp-under-rkscli-for-ruckus-access-points</guid>
      <pubDate>Wed, 01 Nov 2023 23:42:23 +0000</pubDate>
    </item>
    <item>
      <title>ZSH: whence is helpful</title>
      <link>https://blog.prag.dev/zsh-whence-is-helpful?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[There&#39;s been times where checking all elements of $PATH for something is necessary - perhaps for debugging, overriding purposes, or otherwise.&#xA;&#xA;One liners are handy for this - I&#39;m certainly not one to shy away from a neato shell one-liner to accomplish the job (or use find with some shell replacements). However! Code that you don&#39;t have to write is great in flow state (even if its a fun thought exercise that scratches an itch..) - ZSH lends itself to the task with a built in: whence (its the same builtin behind which). Check the manpage for the details, my copy (edited to fit nicely) says:&#xA;&#xA;whence [ -vcwfpamsS ] [ -x num ] name ...&#xA;&#xA;  For each name, indicate how it would be interpreted if used as a command&#xA;  name.&#xA;&#xA;  If name is not an alias, built-in command, external command, shell&#xA;  function, hashed command, or a reserved word, the exit status shall be&#xA;  non-zero, and -- if -v, -c, or -w was passed -- a message will be written&#xA;  to standard output. (This is different from other shells that write that&#xA;  message to standard error.)&#xA;&#xA;  whence is most useful when name is only the last path component of a&#xA;  command, i.e. does not include a `/&#39;; in particular, pattern matching&#xA;  only succeeds if just the non-directory component of the command is&#xA;  passed.&#xA;&#xA;  -v     Produce a more verbose report.&#xA;&#xA;  -c     Print the results in a csh-like format. This takes precedence over -v.&#xA;&#xA;  -w     For each name, print `name: word&#39; where word is one of alias,&#xA;         builtin, command, function, hashed, reserved or none, according as&#xA;         name corresponds to an alias, a built-in command, an external&#xA;         command, a shell function, a command defined with the hash&#xA;         builtin, a reserved word, or is not recognised. 
This takes&#xA;         precedence over -v and -c.&#xA;&#xA;  -f     Causes the contents of a shell function to be displayed, which&#xA;         would otherwise not happen unless the -c flag were used.&#xA;&#xA;  -p     Do a path search for name even if it is an alias, reserved word,&#xA;         shell function or builtin.&#xA;&#xA;  -a     Do a search for all occurrences of name throughout the command&#xA;         path. Normally only the first occurrence is printed.&#xA;&#xA;  -m     The arguments are taken as patterns (pattern characters should be&#xA;         quoted), and the information is displayed for each command&#xA;         matching one of these patterns.&#xA;&#xA;  -s     If a pathname contains symlinks, print the symlink-free pathname&#xA;         as well.&#xA;&#xA;  -S     As -s, but if the pathname had to be resolved by following&#xA;         multiple symlinks, the intermediate steps are printed, too. The&#xA;         symlink re‐ solved at each step might be anywhere in the path.&#xA;&#xA;  -x num Expand tabs when outputting shell functions using the -c option.&#xA;         This has the same effect as the -x option to the functions builtin.&#xA;&#xA;So the typical usage I&#39;m looking for is:&#xA;&#xA;show builtin &amp; the command&#xA;whence -a time&#xA;giving this on NixOS (first being a builtin)&#xA;time&#xA;/run/current-system/sw/bin/time&#xA;&#xA;or checking which git exectuables exist in (current, semi-contrived) PATH&#xA;whence -ap git&#xA;giving this on Nix-on-NonNixOS host&#xA;/home/jake/.nix-profile/bin/git                                                                                                                                                                                  &#xA;/nix/store/50lch2g9xn0sw32b2r508d3hr6mfq07f-git-with-svn-2.41.0/bin/git                                                                                                                                            
&#xA;/usr/bin/git&#xA;&#xA;Nice!]]&gt;</description>
      <content:encoded><![CDATA[<p>There have been times when checking all elements of <code>$PATH</code> for something is necessary – perhaps for debugging, overriding purposes, or otherwise.</p>

<p>One-liners are handy for this – I&#39;m certainly not one to shy away from a neato shell one-liner to accomplish the job (or use <code>find</code> with some shell replacements). <em>However!</em> Code that you don&#39;t have to write is great in flow state (even if it&#39;s a fun thought exercise that scratches an itch..) – ZSH lends itself to the task with a builtin: <code>whence</code> (it&#39;s the same builtin behind <code>which</code>). Check the <a href="https://linux.die.net/man/1/zshbuiltins" rel="nofollow">manpage</a> for the details; my copy (edited to fit nicely) says:</p>

<pre><code>whence [ -vcwfpamsS ] [ -x num ] name ...

  For each name, indicate how it would be interpreted if used as a command
  name.

  If name is not an alias, built-in command, external command, shell
  function, hashed command, or a reserved word, the exit status shall be
  non-zero, and -- if -v, -c, or -w was passed -- a message will be written
  to standard output. (This is different from other shells that write that
  message to standard error.)

  whence is most useful when name is only the last path component of a
  command, i.e. does not include a `/&#39;; in particular, pattern matching
  only succeeds if just the non-directory component of the command is
  passed.

  -v     Produce a more verbose report.

  -c     Print the results in a csh-like format. This takes precedence over -v.

  -w     For each name, print `name: word&#39; where word is one of alias,
         builtin, command, function, hashed, reserved or none, according as
         name corresponds to an alias, a built-in command, an external
         command, a shell function, a command defined with the hash
         builtin, a reserved word, or is not recognised. This takes
         precedence over -v and -c.

  -f     Causes the contents of a shell function to be displayed, which
         would otherwise not happen unless the -c flag were used.

  -p     Do a path search for name even if it is an alias, reserved word,
         shell function or builtin.

  -a     Do a search for all occurrences of name throughout the command
         path. Normally only the first occurrence is printed.

  -m     The arguments are taken as patterns (pattern characters should be
         quoted), and the information is displayed for each command
         matching one of these patterns.

  -s     If a pathname contains symlinks, print the symlink-free pathname
         as well.

  -S     As -s, but if the pathname had to be resolved by following
         multiple symlinks, the intermediate steps are printed, too. The
         symlink resolved at each step might be anywhere in the path.

  -x num Expand tabs when outputting shell functions using the -c option.
         This has the same effect as the -x option to the functions builtin.

</code></pre>

<p>So the typical usage I&#39;m looking for is:</p>

<pre><code># show builtin &amp; the command
whence -a time
# giving this on NixOS (first being a builtin)
time
/run/current-system/sw/bin/time

# or checking which `git` executables exist in (current, semi-contrived) PATH
whence -ap git
# giving this on Nix-on-NonNixOS host
/home/jake/.nix-profile/bin/git
/nix/store/50lch2g9xn0sw32b2r508d3hr6mfq07f-git-with-svn-2.41.0/bin/git
/usr/bin/git
</code></pre>

<p>Nice!</p>
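
<p>A related trick: <code>whence -w</code> classifies each name, which makes it quick to spot when an alias or shell function is shadowing a real command. A sketch – exact output depends on your aliases and setup:</p>

<pre><code>whence -w time whence
# prints one `name: word` line each; e.g. on my setup:
# time: reserved
# whence: builtin
</code></pre>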
]]></content:encoded>
      <guid>https://blog.prag.dev/zsh-whence-is-helpful</guid>
      <pubDate>Wed, 23 Aug 2023 21:38:30 +0000</pubDate>
    </item>
    <item>
      <title>Repairing Nix store</title>
      <link>https://blog.prag.dev/repairing-nix-store?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I have seen Nix break itself (its DB) a few times. Once I even managed to wipe 90% of my /nix/store after not noticing the error and running nix-collect-garbage -d (to clear space). &#xA;&#xA;Nix uses a database to track added paths and facilitate untrusted user&#39;s added paths with respect to the derivations used to create them. If this database is corrupted, then Nix has no idea about the paths that already exist on disk. In the case where I wiped a bunch of paths off my disk (deleting basically the entire system, the equivalent to rm -rf /usr on many other Linux distributions) the paths aren&#39;t at all registered, tracked, or considered valid.&#xA;&#xA;So, Nix says &#34;dunno what these are&#34; and carries on. If you garbage collect during this time, well, those paths aren&#39;t registered and suspect to deletion.&#xA;&#xA;You probably came for the repair. On to that bit now.&#xA;&#xA;When needing to repair the database you&#39;ll need to either do one of two things:&#xA;&#xA;mangle whatever you have left of the database&#xA;call it a loss and create a new database&#xA;&#xA;I&#39;ve done 1. It isn&#39;t fun. You&#39;ll need to check that the SQLite3 structures are valid and that constraints are still adhered to - this amounts to first running  sqlite3 /nix/var/nix/db/db.sqlite &#34;pragma integritycheck&#34; to verify, and then dumping the database to individual insertions to remove the culprit (if you can find them all, which you can do, but holy heck is that tedious).&#xA;&#xA;dump whatever the database contains, to edit and create a new db with&#xA;sqlite3 -readonly /nix/var/nix/db/db.sqlite .dump   /nix/var/nix/db/dump-$EPOCHSECONDS.sql&#xA;&#xA;after editing, load it into a new database.. and cross your fingers&#xA;mv /nix/var/nix/db/{db.sqlite,db.sqlite.$EPOCHSECONDS}&#xA;sqlite3 /nix/var/nix/db/db.sqlite &lt; /nix/var/nix/db/edited-dump.sql&#xA;&#xA;Note: this is not guaranteed to work. 
Invalid paths and database constraints may plague the process over many iterations. You&#39;ve been warned :)&#xA;&#xA;After having done 1 more than once (yes, more than once), I&#39;ve found its not worthwhile for my workstation purposes. In practice, the actual content is still on disk and won&#39;t be deleted if you&#39;re being cautious. Route 2 keeps these contents around - during which time I&#39;d even bet the store may even still serve its purposes for you - and you instead import the paths into a fresh database. This does mean you&#39;ll lose some amount of information. I suspect_ that this means you&#39;ll lose the association between a store path, the producing derivation, and possibly compressed logs. I haven&#39;t confirmed this myself because I&#39;m content gaining operability rather than worry about lost logs for successfully built derivations.&#xA;&#xA;How do you go about taking route 2?&#xA;&#xA;I&#39;ll show you, but first there&#39;s some amount of care to exercise:&#xA;&#xA;stash the bad db away. Just in case.&#xA;mkdir -p /nix/var/nix/db/prev.$$&#xA;mv /nix/var/nix/db/{schema,reserved,db.sqlite} /nix/var/nix/db/prev.$$&#xA;&#xA;If you don&#39;t have nix on PATH still, then you might find it in /nix/store (or get a new copy from https://nixos.org/nix):&#xA;&#xA;ls -1 /nix/store/nix/bin/nix&#xA;&#xA;Pick one. If you can recall which version you were using - use that! - if not, then use one of them (preferably not a pre-release version - which might be why you landed here in the first place).&#xA;&#xA;/nix/store/n8x6ig1yf8ffpa07mwvxg6b7ilrrvfy1-nix-2.4/bin/nix-store --init&#xA;&#xA;And then, again, there&#39;s tedium or there&#39;s pragmatic ignorance (reasonable laziness perhaps?), either to live with unregistered paths (that are registered on demand as you use Nix) or to add the paths back to the database.&#xA;&#xA;Again, I&#39;ve done both. 
On a big builder host I&#39;d take the time to import the paths, but on my workstation I&#39;d rather just move on with my life. It isn&#39;t worth the time in that case because you will grow the database back to the learn about what you still have on disk. If the derivation continues to produce the same hash then Nix can and will import paths. Not in all cases, mind you, but still. It&#39;s a cost that some regions of the world with limited internet can&#39;t accept so I won&#39;t say its the right choice for everyone.&#xA;&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>I have seen Nix break itself (its DB) a few times. Once I even managed to wipe 90% of my <code>/nix/store</code> after not noticing the error and running <code>nix-collect-garbage -d</code> (to clear space).</p>

<p>Nix uses a database to track added paths and to validate paths added by untrusted users against the derivations used to create them. If this database is corrupted, then Nix has no idea about the paths that already exist on disk. In the case where I wiped a bunch of paths off my disk (deleting basically the entire system, the equivalent of <code>rm -rf /usr</code> on many other Linux distributions), the paths weren&#39;t registered, tracked, or considered valid at all.</p>

<p>So, Nix says “dunno what these are” and carries on. If you garbage collect during this time, well, those paths aren&#39;t registered and are subject to deletion.</p>

<p>You probably came for the repair. On to that bit now.</p>

<p>When needing to repair the database you&#39;ll need to either do one of two things:</p>
<ol><li>mangle whatever you have left of the database</li>
<li>call it a loss and create a new database</li></ol>

<p>I&#39;ve done 1. It isn&#39;t fun. You&#39;ll need to check that the SQLite3 structures are valid and that constraints are still adhered to – this amounts to first running <code>sqlite3 /nix/var/nix/db/db.sqlite &#34;pragma integrity_check&#34;</code> to verify, and then dumping the database to individual INSERT statements to remove the culprits (if you can find them all, which you <em>can</em> do, but holy heck is that tedious).</p>

<pre><code class="language-bash"># dump whatever the database contains, to edit and create a new db with
sqlite3 -readonly /nix/var/nix/db/db.sqlite .dump &gt; /nix/var/nix/db/dump-$EPOCHSECONDS.sql
</code></pre>

<pre><code class="language-bash"># after editing, load it into a new database.. and cross your fingers
mv /nix/var/nix/db/{db.sqlite,db.sqlite.$EPOCHSECONDS}
sqlite3 /nix/var/nix/db/db.sqlite &lt; /nix/var/nix/db/edited-dump.sql
</code></pre>
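
<p>While editing the dump, a read-only peek at the live tables can help you spot suspicious rows first. The table and column names here are from Nix&#39;s schema as I know it (<code>ValidPaths</code>, <code>Refs</code>) – double-check with <code>.schema</code> before trusting this sketch:</p>

<pre><code class="language-bash"># the most recently registered paths are the usual suspects after a crash
sqlite3 -readonly /nix/var/nix/db/db.sqlite \
  &#39;select path from ValidPaths order by registrationTime desc limit 20;&#39;
</code></pre>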

<p>Note: this is not guaranteed to work. Invalid paths and database constraints may plague the process over many iterations. You&#39;ve been warned :)</p>

<p>After having done 1 more than once (yes, more than once), I&#39;ve found it&#39;s not worthwhile for my workstation purposes. In practice, the actual content <em>is still on disk and won&#39;t be deleted if you&#39;re being cautious</em>. Route 2 keeps these contents around – during which time I&#39;d even bet the store may still serve its purposes for you – and you instead import the paths into a fresh database. This does mean you&#39;ll lose some amount of information. I <em>suspect</em> that this means you&#39;ll lose the association between a store path, the producing derivation, and possibly compressed logs. I haven&#39;t confirmed this myself because I&#39;m content regaining operability rather than worrying about lost logs for successfully built derivations.</p>

<p>How do you go about taking route 2?</p>

<p>I&#39;ll show you, but first there&#39;s <em>some amount of care to exercise</em>:</p>

<pre><code class="language-bash"># stash the bad db away. Just in case.
mkdir -p /nix/var/nix/db/prev.$$
mv /nix/var/nix/db/{schema,reserved,db.sqlite} /nix/var/nix/db/prev.$$
</code></pre>

<p>If you don&#39;t have <code>nix</code> on <code>PATH</code> still, then you might find it in <code>/nix/store</code> (or get a new copy from <a href="https://nixos.org/nix" rel="nofollow">https://nixos.org/nix</a>):</p>

<pre><code class="language-bash">ls -1 /nix/store/*nix*/bin/nix
</code></pre>

<p>Pick one. If you can recall which version you were using – use that! – if not, then use one of them (preferably not a pre-release version – which might be why you landed here in the first place).</p>

<pre><code class="language-bash">/nix/store/n8x6ig1yf8ffpa07mwvxg6b7ilrrvfy1-nix-2.4/bin/nix-store --init
</code></pre>
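
<p>Whichever way you go, it&#39;s worth letting Nix cross-check the fresh database against what&#39;s actually on disk. These are long-standing <code>nix-store</code> flags, though behavior varies a bit across Nix versions:</p>

<pre><code class="language-bash"># compare the db against /nix/store; --check-contents re-hashes every path
nix-store --verify --check-contents
# optionally try to re-fetch anything broken from your substituters
nix-store --verify --check-contents --repair
</code></pre>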

<p>And then, again, there&#39;s tedium or there&#39;s pragmatic ignorance (reasonable laziness perhaps?), either to live with unregistered paths (that are registered on demand as you use Nix) or to add the paths back to the database.</p>

<p>Again, I&#39;ve done both. On a big builder host I&#39;d take the time to import the paths, but on my workstation I&#39;d rather just move on with my life. It isn&#39;t worth the time in that case because the database grows back as Nix relearns what you still have on disk. If a derivation continues to produce the same hash then Nix can and will import the paths. Not in all cases, mind you, but still. Re-downloading and rebuilding is a cost that some regions of the world with limited internet can&#39;t accept, so I won&#39;t say it&#39;s the right choice for everyone.</p>
]]></content:encoded>
      <guid>https://blog.prag.dev/repairing-nix-store</guid>
      <pubDate>Wed, 23 Aug 2023 21:27:04 +0000</pubDate>
    </item>
    <item>
      <title>Statistics on data with GNU datamash</title>
      <link>https://blog.prag.dev/statistics-on-data-with-gnu-datamash?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Today I needed to get some statistics for memory and storage analysis. There are a significant number of records that we&#39;ll be processing and an even larger set that represents every record ever observed!&#xA;&#xA;The design is coming along well and we have a path to tackle this. However, I wanted to establish some expectation with regard to the memory required at runtime and the storage required over time.&#xA;&#xA;Tentatively, the plan is to write out records not unlike the OCI Content descriptor, so these records are actually line delimited serialized JSON. Each entry is a file read in from a filesystem and records its path, size, and content digest.&#xA;&#xA;That&#39;s the background.&#xA;&#xA;Now, I needed to process a set of these records to determine what a typical record size is to scale up to a larger theoretical set size. I was going to hack it out with awk (because I have a tendency to do that) but found myself instead looking into GNU datamash.&#xA;&#xA;It does everything I want to and gives my my quantiles as well!&#xA;&#xA;datamash --headers count 1 min 1 max 1 median 1 perc:99 1 &lt; records.sizes | column -t&#xA;You specify which stat and which field you want to display that stat for and voilà:&#xA;&#xA;count(101)  min(101)  max(101)  median(101)  perc:99(101)&#xA;6329640     69        539       260          452&#xA;&#xA;Now I have readily accessible and usable data to work with! Neat.&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>Today I needed to get some statistics for memory and storage analysis. There are a significant number of records that we&#39;ll be processing and an even larger set that represents every record ever observed!</p>

<p>The design is coming along well and we have a path to tackle this. However, I wanted to establish some expectation with regard to the memory required at runtime and the storage required over time.</p>

<p>Tentatively, the plan is to write out records not unlike the <a href="https://github.com/opencontainers/image-spec/blob/d265d74f4fad249d39fe092122f53c7998afbfe9/descriptor.md#oci-content-descriptors" rel="nofollow">OCI Content descriptor</a>, so these records are actually line delimited serialized JSON. Each entry is a file read in from a filesystem and records its path, size, and content digest.</p>
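
<p>For what it&#39;s worth, getting from those JSON lines to a one-number-per-line file is itself a one-liner. The input filename and the <code>size</code> field here are assumptions based on the descriptor shape above:</p>

<pre><code>jq &#39;.size&#39; records.ndjson &gt; records.sizes
</code></pre>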

<p>That&#39;s the background.</p>

<p>Now, I needed to process a set of these records to determine what a typical record size is to scale up to a larger theoretical set size. I was going to hack it out with <code>awk</code> (because I have a tendency to do that) but found myself instead looking into <a href="https://www.gnu.org/software/datamash/" rel="nofollow">GNU <code>datamash</code></a>.</p>

<p>It does everything I want and gives me my quantiles as well!</p>

<pre><code>datamash --headers count 1 min 1 max 1 median 1 perc:99 1 &lt; records.sizes | column -t
</code></pre>

<p>You specify which stat and which field you want to display that stat for and voilà:</p>

<pre><code>count(101)  min(101)  max(101)  median(101)  perc:99(101)
6329640     69        539       260          452
</code></pre>
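
<p>For comparison, here&#39;s roughly the <code>awk</code> route I almost took – count, min, and max are easy, but the median and percentiles are where <code>datamash</code> earns its keep:</p>

<pre><code># count/min/max of a column of numbers; medians would mean sorting by hand
printf &#39;69\n539\n260\n&#39; | awk &#39;NR==1{min=max=$1} {n++; if($1&lt;min)min=$1; if($1&gt;max)max=$1} END{print n, min, max}&#39;
# 3 69 539
</code></pre>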

<p>Now I have readily accessible and usable data to work with! Neat.</p>
]]></content:encoded>
      <guid>https://blog.prag.dev/statistics-on-data-with-gnu-datamash</guid>
      <pubDate>Wed, 17 Aug 2022 19:37:35 +0000</pubDate>
    </item>
    <item>
      <title>Code reviews: an invitation to grow collaboratively</title>
      <link>https://blog.prag.dev/code-reviews-an-invitation-to-grow-collaboratively?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I can&#39;t say I know everything about code reviews, but I have learned a thing or two doing them at a pretty large company over the last half-decade with a variety of peers. The biggest takeaways, if you find nothing more, that I encourage you to apply to your code reviews:&#xA;&#xA;ask questions&#xA;assume the best of others&#xA;be honest, be respectful&#xA;&#xA;ask questions&#xA;&#xA;Asking questions is a great way to acknowledge &amp; explore the solution/change that an author is presenting in the code review. What&#39;s unfortunate is that I&#39;m (anecdotally) seeing less conversation on code reviews in the past few years in favor of moving quickly and keeping the velocity up up up. Questions that could have been asked during code review - ones that might even ask &#34;the obvious&#34; - can eliminate frustrating late night conversations had after being paged! Avoiding a misstep isn&#39;t the only benefit to asking questions.&#xA;&#xA;The biggest benefit to asking questions - almost any question during code review - is that not only do the participants get their questions answered, but it also disperses the knowledge being discussed. Think &#34;ideal university lecturer with lots of engagement&#34;: everyone wins (if they&#39;re paying attention).&#xA;&#xA;Reflecting on this a bit right now, I think we&#39;re beginning to focus too heavily on the &#34;do it now&#34; part of software development to even consider encouraging questions that folks new to a team, community, or company might ask. I&#39;m biased towards wanting to, myself, understand and to share what I know with others - I enjoy teaching and helping others grow - so I&#39;m nearly always inclined to keep folks on the same page, even at the cost of velocity. Why? I like folks on the same page because then that investment pays off in dividends in being able to take on more and more complex projects. It builds a culture of understanding and mentorship between engineers. 
Most of all, investing time to establish a shared understanding means that any/all of the engineers can tackle challenges with skill, expertise, and curiosity.  &#xA;&#xA;assume the best of others&#xA;&#xA;You know who (probably|most-likely|absolutely) isn&#39;t intentionally throwing away an error that they should be handling? That person on the other end of the code review.&#xA;&#xA;Personally speaking, I&#39;ve seen a few different faces on the receiving end of code reviews: pedantry, unhealthy/stubborn skepticism, blatant disregard for shared goals, ambivalence, and total engagement. They all have their drawbacks and not all of them can be constructively communicated with.&#xA;&#xA;Can you imagine folks thinking the worst of you, your code, and decisions that led you there? If you&#39;ve been in the industry and seen this: I&#39;m sorry, that sucks for you and sucks for the folks involved (whether they know it or not, its their loss as well). When it happens in a review, you can feel it. To do anything other than assume the best is to allow communication to be suspect to negative biases, inter-personal challenges, or - more innocently - plain &amp; simple disengagement.&#xA;&#xA;I implore every reviewer to assume the best of the code review&#39;s author &amp; their intentions. Point out flaws, but do not assume anything about those flaws without starting a conversation to clear up a misunderstanding or resolve a latent bug! &#xA;&#xA;Existing hang ups are difficult or impossible to avoid in some cases, and even then both the author and reviewers can balance out if they&#39;re all (at a minimum) trying to be positive and assume the best of the author.  An entire team working to keep that &#34;good energy&#34; going is bound to pour that back into themselves &amp; new engineers.  
In the same way that companies like Amazon have found success modeling their strategy around a virtuous cycle, so can code reviews invest into their participants by leaving every person free (free of the burden of assumptions) to ask questions, to be wrong (and to be right!), and teach each other in the process.&#xA;&#xA;be honest, be respectful&#xA;&#xA;Honestly, I have no credentials to speak to the psychology of this all. Take this with a grain of salt.&#xA;&#xA;In my experience, balancing both honesty and respect in your words is the hardest part to practice and use in the code review.&#xA;&#xA;To be honest with yourself (in that you speak your mind, your thoughts, and voice your intuition as constructive feedback in the review) might mean writing  Typing this function&#39;s error handling isn&#39;t great (which you might honestly think and judge the code as such) is still a ways from I think we need to handle a few more edge cases in this function. In this case .... Granted: yes, there are more words. That&#39;s not a bad thing, but it can be if you wind up saying nothing. The tradeoff you are making is to trade time for consciously written words - the return is great: you&#39;re not starting an argument about the code and, ideally, there&#39;s no mistaking your words as making a slight against the author.&#xA;&#xA;Writing positive criticisms is a wordsmithing artform - it&#39;ll take practice and time to get the feel for. I recommend adopting the principals (and framework) of Nonviolent Communication - it&#39;s largely a superset of the above points in that its goals are specifically intended to foster empathy and explicitly not to avoid disagreements. The psychology behind the concept, to me anyway, seems to be well in line with code review! You need to be able to disagree, but also have constructive, productive, and collaborative conversation immediately follow. 
For those still not sold, here&#39;s an excerpt from the above linked Wikipedia article on Nonviolent Communication (NVC):&#xA;&#xA;  It is not a technique to end disagreements, but rather a method designed to increase empathy and improve the quality of life of those who utilize the method and the people around them. &#xA;&#xA;I&#39;ve personally seen conversation thrive in a setting of NVC, much more than one without (or with a bias towards terse corrections) - I can attest that not only does it improve (my, an engineer using NVC) quality of life but also the code I and my coworkers are working on. The technique is certainly not for everyone and every situation (or every code review for that matter) but it is a helpful tool to kick things off. &#xA;&#xA;Regardless of what communication tools and patterns you reach for, I encourage you to remain honest - freely providing feedback and your thoughts - while respecting others with your delivery. Internalizing communication habits take time and will be a conscious struggle until its not - but the effort is worth it. For everyone.&#xA;&#xA;summary&#xA;&#xA;So, this is a bit of a rant, but I needed to get my own thoughts collected. In the last 2 years, I&#39;ve switched teams, had engineers come and go on the team. In one month I even saw a NOTABLE improvement to the team&#39;s code review habits and feedback when a frequently argumentative engineer left the team. &#xA;&#xA;Not to toot my own horn, but the folks on my current team have privately messaged me and thanking me for cracking the door on code reviews for them. 
I&#39;ve had discussions and brief exchanges with folks that largely find the same: having positive interactions that leave room for questions, for discussion, that also do not belittle (whether by strongarming or words) is a damn sight better than one with no discussion and certainly better than one with a one sided or negative review.&#xA;&#xA;I&#39;m convinced that engineers teach other engineers how to talk. So here&#39;s me throwing my part in for that effort!]]&gt;</description>
      <content:encoded><![CDATA[<p>I can&#39;t say I know everything about code reviews, but I have learned a thing or two doing them at a pretty large company over the last half-decade with a variety of peers. These are the biggest takeaways – if you take nothing else away – that I encourage you to apply to your code reviews:</p>
<ul><li>ask questions</li>
<li>assume the best of others</li>
<li>be honest, be respectful</li></ul>

<h2 id="ask-questions">ask questions</h2>

<p>Asking questions is a great way to acknowledge &amp; explore the solution/change that an author is presenting in the code review. What&#39;s unfortunate is that I&#39;m (anecdotally) seeing less conversation on code reviews in the past few years in favor of moving quickly and keeping the velocity up up up. Questions that could have been asked during code review – ones that might even ask “the obvious” – can eliminate frustrating late-night conversations after being paged! Avoiding a misstep isn&#39;t the only benefit to asking questions.</p>

<p>The biggest benefit to asking questions – <em>almost any question</em> during code review – is that not only do the participants get their questions answered, but it also disperses the knowledge being discussed. Think “ideal university lecturer with lots of engagement”: <em>everyone</em> wins (if they&#39;re paying attention).</p>

<p>Reflecting on this a bit right now, <em>I think</em> we&#39;re beginning to focus too heavily on the “do it now” part of software development to even consider <em>encouraging</em> questions that folks new to a team, community, or company might ask. I&#39;m biased towards wanting to understand things myself and to share what I know with others – I enjoy teaching and helping others grow – so I&#39;m nearly always inclined to keep folks on the same page, even at the cost of velocity. Why? Because that investment pays dividends in being able to take on more and more complex projects. It builds a culture of understanding and mentorship between engineers. Most of all, investing time to establish a shared understanding means that <em>any</em>/<em>all</em> of the engineers can tackle challenges with <em>skill</em>, <em>expertise</em>, and <em>curiosity</em>.</p>

<h2 id="assume-the-best-of-others">assume the best of others</h2>

<p>You know who (probably|most-likely|absolutely) isn&#39;t intentionally throwing away an error that they should be handling? That person on the other end of the code review.</p>

<p>Personally speaking, I&#39;ve seen a few different faces on the receiving end of code reviews: pedantry, unhealthy/stubborn skepticism, blatant disregard for shared goals, ambivalence, and total engagement. They all have their drawbacks and not all of them can be constructively communicated with.</p>

<p>Can you imagine folks thinking the worst of you, your code, and the decisions that led you there? If you&#39;ve been in the industry and seen this: I&#39;m sorry – that sucks for you and for the folks involved (whether they know it or not, it&#39;s <em>their</em> loss as well). When it happens in a review, you can <em>feel</em> it. To do anything other than assume the best is to leave communication susceptible to negative biases, inter-personal challenges, or – more innocently – plain &amp; simple disengagement.</p>

<p>I implore every reviewer to assume the best of the code review&#39;s author &amp; their intentions. Point out flaws, but <em>do not assume</em> anything about those flaws without starting a conversation to clear up a misunderstanding or resolve a latent bug!</p>

<p>Existing hang ups are difficult or impossible to avoid in some cases, and even then both the author and reviewers can balance out if they&#39;re all (at a minimum) <em>trying to be positive and assume the best of the author</em>.  An entire team working to keep that “good energy” going is bound to pour that back into themselves <em>&amp; new engineers</em>.  In the same way that companies like <a href="https://fourweekmba.com/amazon-flywheel/" rel="nofollow">Amazon have found success modeling their strategy around a virtuous cycle</a>, so can code reviews invest into their participants by leaving every person free (free of the burden of assumptions) to ask questions, to be wrong (and to be right!), and teach each other in the process.</p>

<h2 id="be-honest-be-respectful">be honest, be respectful</h2>

<p>Honestly, I have no credentials to speak to the psychology of this all. Take this with a grain of salt.</p>

<p>In my experience, balancing both honesty and respect in your words is the hardest part to practice and use in the code review.</p>

<p>Being honest with yourself (in that you speak your mind and voice your intuition as constructive feedback in the review) doesn&#39;t mean being blunt. Typing <code>this function&#39;s error handling isn&#39;t great</code> (which you might honestly think, judging the code as such) is still a ways from <code>I think we need to handle a few more edge cases in this function. In this case ...</code>. Granted: yes, there are more words. That&#39;s not a bad thing, but it <em>can be</em> if you wind up saying nothing. The tradeoff is time for consciously written words – the return is great: you&#39;re not starting an argument about the code and, ideally, there&#39;s no mistaking your words as a slight against the author.</p>

<p>Writing positive criticisms is a wordsmithing art form – it&#39;ll take practice and time to get a feel for. I recommend adopting the principles (and framework) of <a href="https://en.wikipedia.org/wiki/Nonviolent_Communication" rel="nofollow">Nonviolent Communication</a> – it&#39;s largely a superset of the above points in that its goals are specifically intended to foster empathy and <em>explicitly not to avoid disagreements</em>. The psychology behind the concept, to me anyway, seems to be well in line with code review! You <em>need</em> to be able to disagree, but also have <em>constructive, productive, and collaborative conversation immediately follow</em>. For those still not sold, here&#39;s an excerpt from the above linked Wikipedia article on Nonviolent Communication (NVC):</p>

<blockquote><p> It is not a technique to end disagreements, but rather a method designed to increase empathy and improve the quality of life of those who utilize the method and the people around them.</p></blockquote>

<p>I&#39;ve personally seen conversation thrive in a setting of NVC, much more than in one without (or with a bias towards terse corrections) – I can attest that it improves not only my quality of life (as an engineer using NVC) but also the code my coworkers and I are working on. The technique is certainly not for everyone and every situation (or every code review for that matter) but it is a helpful tool to kick things off.</p>

<p>Regardless of what communication tools and patterns you reach for, I encourage you to remain honest – freely providing feedback and your thoughts – while respecting others with your delivery. Internalizing communication habits takes time and will be a conscious struggle until it&#39;s not – but the effort is worth it. For everyone.</p>

<h2 id="summary">summary</h2>

<p>So, this is a bit of a rant, but I needed to get my own thoughts collected. In the last two years, I&#39;ve switched teams and had engineers come and go on the team. Within a single month, I even saw a NOTABLE improvement in the team&#39;s code review habits and feedback when a frequently argumentative engineer left the team.</p>

<p>Not to toot my own horn, but folks on my current team have privately messaged me to thank me for cracking the door open on code reviews for them. I&#39;ve had discussions and brief exchanges with folks who largely find the same: positive interactions that leave room for questions and discussion, and that don&#39;t belittle (whether by strongarming or words), are a damn sight better than no discussion at all – and certainly better than a one-sided or negative review.</p>

<p>I&#39;m convinced that engineers teach other engineers how to talk. So here&#39;s me throwing my part in for that effort!</p>
]]></content:encoded>
      <guid>https://blog.prag.dev/code-reviews-an-invitation-to-grow-collaboratively</guid>
      <pubDate>Thu, 14 Apr 2022 00:48:01 +0000</pubDate>
    </item>
    <item>
      <title>Telling Git how to use SSH, nicely</title>
      <link>https://blog.prag.dev/telling-git-how-to-use-ssh-nicely?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[It seems like when you want to do something unusual that one finds another human&#39;s interesting configuration and workflows that they found worked for them in their time of need.&#xA;&#xA;The best part is that their pain is shared and skipped by others. In my case, there was this StackOverflow Q/A (How to tell git which private key to use?) that shared the (at the time) new configuration item to set Git&#39;s SSH command that&#39;s used for pushes and pulls.&#xA;&#xA;However, their command wasn&#39;t 100% there for me. I generally use a PGP hardware key with an authentication subkey when using SSH to log into my own hosts and also for VCS forges, like GitHub. This doesn&#39;t work very well, at all, for mechanical processes where automation doesn&#39;t have access to my hardware key or pin to unlock the hardware token. &#xA;&#xA;Ideally, I&#39;d have a scheduled job that would be able to authenticate and push a snapshot of config to GitHub periodically. Hence finding the aforementioned SO Q/A and wanting to apply this for myself. 
The wrench was that when I was sudo-ing as the user to test things out is that it &#34;sees&#34; my hardware token (because it uses an SSH Agent process to handle cryptographic operations) and gracefully falls back to using an on-disk machine-specific SSH key.&#xA;&#xA;So, I got to take advantage of the SO&#39;s answerer&#39;s suggestion to use core.sshCommand by setting further options for my use case:&#xA;&#xA;Where they suggested:&#xA;&#xA;git config core.sshCommand &#34;ssh $HOME/.ssh/idrsaexample -F /dev/null&#34; &#xA;&#xA;I needed to add an option to disable the use of any SSH agent process:&#xA;&#xA;git config core.sshCommand &#34;ssh -o IdentityAgent=none -i $HOME/.ssh/idrsaexample -F /dev/null&#34; &#xA;&#xA;Flags used here (see man page):&#xA;&#xA;-o OptionKey[=]OptionValue : override/set config value&#xA;-F config-file : set to /dev/null to avoid loading any config files (and avoid loading conflicting or unexpected config)&#xA;&#xA;With that, I can run my SSH commands without having to see a prompt for my hardware token and/or failure to communicate with my user&#39;s private SSH agent socket.&#xA;&#xA;Nice. And this allows simple per-repository tuning because its &#34;just&#34; git-config. Astute readers will also note that this means you can use further conditional configuration to set per-remote and per-ref configuration even!&#xA;&#xA;These are pragmatic bits I found this morning. I hope you find a practical and pragmatic use for them as well. Cheers! ]]&gt;</description>
      <content:encoded><![CDATA[<p>It seems like whenever you want to do something unusual, you find another human&#39;s interesting configuration and workflow that worked for them in their time of need.</p>

<p>The best part is that their pain is shared and skipped by others. In my case, there was <a href="https://superuser.com/a/912281" rel="nofollow">this StackOverflow Q/A (How to tell git which private key to use?)</a> that shared the (at the time) new configuration item to set Git&#39;s SSH command that&#39;s used for pushes and pulls.</p>

<p>However, their command wasn&#39;t 100% there for me. I generally use a PGP hardware key with an authentication subkey when using SSH to log into my own hosts and also for VCS forges, like GitHub. This doesn&#39;t work very well, at all, for mechanical processes where automation doesn&#39;t have access to my hardware key or pin to unlock the hardware token.</p>

<p>Ideally, I&#39;d have a scheduled job that would be able to authenticate and push a snapshot of config to GitHub periodically. Hence finding the <a href="https://superuser.com/a/912281" rel="nofollow">aforementioned SO Q/A</a> and wanting to apply this for myself. The wrench was that when I was <code>sudo</code>-ing as the user to test things out, SSH “sees” my hardware token (because it uses an SSH Agent process to handle cryptographic operations) before gracefully falling back to using an on-disk, machine-specific SSH key.</p>

<p>So, I got to take advantage of the SO&#39;s answerer&#39;s suggestion to use <code>core.sshCommand</code> by setting further options for my use case:</p>

<p>Where they suggested:</p>

<pre><code class="language-bash">git config core.sshCommand &#34;ssh -i $HOME/.ssh/id_rsa_example -F /dev/null&#34;
</code></pre>

<p>I needed to add an option to disable the use of <em>any</em> SSH agent process:</p>

<pre><code class="language-bash">git config core.sshCommand &#34;ssh -o IdentityAgent=none -i $HOME/.ssh/id_rsa_example -F /dev/null&#34; 
</code></pre>

<p>Flags used here (see <a href="https://man.openbsd.org/ssh" rel="nofollow">man page</a>):</p>
<ul><li><a href="https://man.openbsd.org/ssh#o" rel="nofollow"><code>-o OptionKey[=]OptionValue</code></a> : override/set config value</li>
<li><a href="https://man.openbsd.org/ssh#F" rel="nofollow"><code>-F &lt;config-file&gt;</code></a> : set to <code>/dev/null</code> to avoid loading any config files (and avoid loading conflicting or unexpected config)</li></ul>

<p>With that, I can run my SSH commands without having to see a prompt for my hardware token and/or failure to communicate with my user&#39;s private SSH agent socket.</p>

<p>Nice. And this allows simple per-repository tuning because it&#39;s “just” <code>git-config</code>. Astute readers will also note that this means you can use further conditional configuration to set even per-remote and per-ref configuration!</p>
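As a sketch of that conditional route (hypothetical paths throughout; <code>includeIf</code> needs Git 2.13 or newer), here is a sandboxed demo where every repository under <code>~/automation/</code> picks up the agent-free <code>core.sshCommand</code> automatically:

```shell
# Hypothetical layout: repos under ~/automation/ get the agent-free key
# via a conditional include, with no per-repo "git config" needed.
HOME="$(mktemp -d)"; HOME="$(cd "$HOME" && pwd -P)"; export HOME   # sandbox HOME
mkdir -p "$HOME/automation/repo"

cat > "$HOME/.gitconfig" <<'EOF'
[includeIf "gitdir:~/automation/"]
    path = ~/.gitconfig-automation
EOF

cat > "$HOME/.gitconfig-automation" <<'EOF'
[core]
    sshCommand = ssh -o IdentityAgent=none -i ~/.ssh/id_rsa_example -F /dev/null
EOF

cd "$HOME/automation/repo" && git init -q
git config core.sshCommand   # resolves through the conditional include
```

Outside of <code>~/automation/</code>, <code>git config core.sshCommand</code> comes back empty, so interactive repositories keep using the hardware token.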

<p>These are pragmatic bits I found this morning. I hope you find a practical and pragmatic use for them as well. Cheers!</p>
]]></content:encoded>
      <guid>https://blog.prag.dev/telling-git-how-to-use-ssh-nicely</guid>
      <pubDate>Mon, 08 Nov 2021 19:05:39 +0000</pubDate>
    </item>
    <item>
      <title>LD: always magical</title>
      <link>https://blog.prag.dev/ld-always-magical?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I was reading lwn articles recently and saw a post pointing to and quoting a well versed walk through of processes&#39; lifecycle on Linux.&#xA;&#xA;Their excerpt included a bit about setting LDSHOWAUXV to make ld (I think? I opened the tab in the background to read...). This was immediately interesting to me: any time that someone is sharing their own hard earned understanding LD.. I listen well! There&#39;s a lot of engineering behind the likes of ld and linux (naturally!) - in this case, the tip to use this environment variable was immediately at hand:&#xA;&#xA;I ran sh with the variable exported on my system:&#xA;&#xA;❯ LDSHOWAUXV=1 /bin/sh &#xA;ATSYSINFOEHDR:      0x7ffed19db000&#xA;ATHWCAP:             bfebfbff&#xA;ATPAGESZ:            4096&#xA;ATCLKTCK:            100&#xA;ATPHDR:              0x400040&#xA;ATPHENT:             56&#xA;ATPHNUM:             11&#xA;ATBASE:              0x7f5004b68000&#xA;ATFLAGS:             0x0&#xA;ATENTRY:             0x41db30&#xA;ATUID:               1000&#xA;ATEUID:              1000&#xA;ATGID:               100&#xA;ATEGID:              100&#xA;ATSECURE:            0&#xA;ATRANDOM:            0x7ffed19c5f89&#xA;ATHWCAP2:            0x2&#xA;ATEXECFN:            /bin/sh&#xA;ATPLATFORM:          x8664&#xA;&#xA;Neat! I know most of these short and terse identifiers with others that are new to me. These are the edges of the engineering world I like to find, avenues to explore further to integrate and apply to my own work.&#xA;&#xA;Well, enough of that! What else? Let&#39;s try another command, after all the environment variable is exported. How about true - that&#39;s pretty trivial.&#xA;&#xA;sh-4.4$ true&#xA;&#xA;But also boring. There&#39;s no output. Well, that&#39;s because we&#39;re probably using bash&#39;s builtin true &#34;function&#34;. 
&#xA;&#xA;sh-4.4$ /run/current-system/sw/bin/true&#xA;ATSYSINFOEHDR:      0x7ffd951e6000&#xA;ATHWCAP:             bfebfbff&#xA;ATPAGESZ:            4096&#xA;ATCLKTCK:            100&#xA;ATPHDR:              0x400040&#xA;ATPHENT:             56&#xA;ATPHNUM:             11&#xA;ATBASE:              0x7eff8120f000&#xA;ATFLAGS:             0x0&#xA;ATENTRY:             0x4089c0&#xA;ATUID:               1000&#xA;ATEUID:              1000&#xA;ATGID:               100&#xA;ATEGID:              100&#xA;ATSECURE:            0&#xA;ATRANDOM:            0x7ffd95067819&#xA;ATHWCAP2:            0x2&#xA;ATEXECFN:            /run/current-system/sw/bin/true&#xA;ATPLATFORM:          x8664&#xA;&#xA;  Note: my output says things like /run/current-system/sw/bin/true with that long path because I use NixOS and that&#39;s just where its located. Pretend lines like those say /bin/true or /usr/bin/true if that tickles your fancy.&#xA;&#xA;I want to see the opposite now, where nothing is printed out with the environment still configured, because I suspect that this is implemented in glibc based on the little I&#39;ve actually read of the article (I will read it, eventually).&#xA;&#xA;To do this, I wrote a quick Go program to say hi that I can use static compilation to avoid linking to a libc:&#xA;&#xA;package main&#xA;&#xA;func main() { print(&#34;hi there\n&#34;) }&#xA;&#xA;I built it:&#xA;&#xA;CGOENABLED=0 go build static-go-bin.go&#xA;&#xA;Checked it for riddles:&#xA;&#xA;sh-4.4$ ldd ./static-go-bin&#xA;&#x9;not a dynamic executable&#xA;&#xA;And then ran it with the environment variable set:&#xA;&#xA;❯ LDSHOW_AUXV=1 ./static-go-bin &#xA;hi there&#xA;&#xA;Hrm. Okay, nothing printed, but that&#39;s not conclusive. We still need to know if its ld or something that ld loaded that makes something print.&#xA;&#xA;You know what, I&#39;m going to stop here and just go read the article! That seems most pragmatic at this point.]]&gt;</description>
      <content:encoded><![CDATA[<p>I was reading <a href="https://lwn.net" rel="nofollow">lwn</a> articles recently and saw <a href="https://lwn.net/Articles/875108" rel="nofollow">a post</a> pointing to and quoting <a href="http://dbp-consulting.com/tutorials/debugging/linuxProgramStartup.html" rel="nofollow">a well versed walk through of processes&#39; lifecycle on Linux</a>.</p>

<p>Their excerpt included a bit about setting <code>LD_SHOW_AUXV</code> to make <code>ld</code> dump the auxiliary vector (I think? I opened the tab in the background to read...). This was immediately interesting to me: any time someone shares their own hard-earned understanding of LD.. I listen well! There&#39;s a lot of engineering behind the likes of <code>ld</code> and <code>linux</code> (naturally!) – in this case, the tip to use this environment variable was immediately at hand:</p>

<p>I ran <code>sh</code> with the variable exported on my system:</p>

<pre><code>❯ LD_SHOW_AUXV=1 /bin/sh 
AT_SYSINFO_EHDR:      0x7ffed19db000
AT_HWCAP:             bfebfbff
AT_PAGESZ:            4096
AT_CLKTCK:            100
AT_PHDR:              0x400040
AT_PHENT:             56
AT_PHNUM:             11
AT_BASE:              0x7f5004b68000
AT_FLAGS:             0x0
AT_ENTRY:             0x41db30
AT_UID:               1000
AT_EUID:              1000
AT_GID:               100
AT_EGID:              100
AT_SECURE:            0
AT_RANDOM:            0x7ffed19c5f89
AT_HWCAP2:            0x2
AT_EXECFN:            /bin/sh
AT_PLATFORM:          x86_64
</code></pre>

<p>Neat! I recognize most of these short, terse identifiers, though others are new to me. These are the edges of the engineering world I like to find – avenues to explore further, to integrate and apply to my own work.</p>

<p>Well, enough of that! What else? Let&#39;s try another command, after all the environment variable is exported. How about <code>true</code> – that&#39;s pretty trivial.</p>

<pre><code>sh-4.4$ true
</code></pre>

<p>But also boring. There&#39;s no output. Well, that&#39;s because we&#39;re probably using <code>bash</code>&#39;s builtin <code>true</code> “function”.</p>

<pre><code>sh-4.4$ /run/current-system/sw/bin/true
AT_SYSINFO_EHDR:      0x7ffd951e6000
AT_HWCAP:             bfebfbff
AT_PAGESZ:            4096
AT_CLKTCK:            100
AT_PHDR:              0x400040
AT_PHENT:             56
AT_PHNUM:             11
AT_BASE:              0x7eff8120f000
AT_FLAGS:             0x0
AT_ENTRY:             0x4089c0
AT_UID:               1000
AT_EUID:              1000
AT_GID:               100
AT_EGID:              100
AT_SECURE:            0
AT_RANDOM:            0x7ffd95067819
AT_HWCAP2:            0x2
AT_EXECFN:            /run/current-system/sw/bin/true
AT_PLATFORM:          x86_64
</code></pre>

<blockquote><p>Note: my output says things like <code>/run/current-system/sw/bin/true</code> with that long path because I use <a href="https://nixos.org" rel="nofollow">NixOS</a> and that&#39;s just where its located. Pretend lines like those say <code>/bin/true</code> or <code>/usr/bin/true</code> if that tickles your fancy.</p></blockquote>

<p>I want to see the opposite now, where nothing is printed out with the environment still configured, because I suspect that this is implemented in <code>glibc</code> based on the little I&#39;ve actually read of the article (I <em>will</em> read it, eventually).</p>

<p>To do this, I wrote a quick Go program to say <code>hi</code> that I can statically compile to avoid linking to a <code>libc</code>:</p>

<pre><code class="language-go">package main

func main() { print(&#34;hi there\n&#34;) }
</code></pre>

<p>I built it:</p>

<pre><code>CGO_ENABLED=0 go build static-go-bin.go
</code></pre>

<p>Checked it for riddles:</p>

<pre><code>sh-4.4$ ldd ./static-go-bin
	not a dynamic executable
</code></pre>

<p>And then ran it with the environment variable set:</p>

<pre><code>❯ LD_SHOW_AUXV=1 ./static-go-bin 
hi there
</code></pre>

<p>Hrm. Okay, nothing printed, but that&#39;s not conclusive. We still need to know if it&#39;s <code>ld</code> or something that <code>ld</code> loaded that makes something print.</p>
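One hedged way to chase that down (using <code>readelf</code> from binutils): glibc&#39;s <code>LD_SHOW_AUXV</code> is honored by the dynamic loader named in a binary&#39;s <code>PT_INTERP</code> header, and a statically linked binary has no such header, so no loader ever runs for it:

```shell
# /bin/sh requests a program interpreter (the dynamic loader); that loader
# is what reads LD_SHOW_AUXV and prints the AT_* table on startup.
readelf -l /bin/sh | grep INTERP
```

Running the same <code>readelf</code> against <code>./static-go-bin</code> prints no <code>INTERP</code> line at all – no loader, no AUXV dump.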

<p>You know what, I&#39;m going to stop here and just go read the article! That seems most pragmatic at this point.</p>
]]></content:encoded>
      <guid>https://blog.prag.dev/ld-always-magical</guid>
      <pubDate>Thu, 04 Nov 2021 23:47:11 +0000</pubDate>
    </item>
    <item>
      <title>Ruby: lightweight templating</title>
      <link>https://blog.prag.dev/ruby-lightweight-templating?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I had a TON of configuration files that I needed to deal with at work. After writing myself a handy tool to produce the &#34;input&#34; configuration details, I figured I&#39;d just use some quick and dirty Ruby (and ERB) to write the necessary files:&#xA;&#xA;!/usr/bin/env ruby&#xA;require &#39;json&#39;&#xA;require &#39;erb&#39;&#xA;&#xA;templatefile = &#34;./template.toml.erb&#34;&#xA;outputfilename = &#34;./configuration/foo/conf-%{idtoken}.toml&#34;&#xA;&#xA;given a JSON array through STDIN of [ {conf}, {conf} ]&#xA;confs = JSON.parse($stdin.read, symbolizenames: true).map{|c| OpenStruct.new c }&#xA;&#xA;we can prepare the ERB template&#xA;template = ERB.new(File.read(templatefile), 0, &#34;%-&#34;))&#xA;&#xA;finally, write out files for the conf objects&#xA;confs.each do |c|&#xA;  result = template.result(c.instanceeval(&#34;binding&#34;))&#xA;  File.write(outputfile_name % c, result)&#xA;end&#xA;&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>I had a TON of configuration files that I needed to deal with at work. After writing myself a handy tool to produce the “input” configuration details, I figured I&#39;d just use some quick and dirty Ruby (and <a href="https://ruby-doc.org/stdlib-2.7.4/libdoc/erb/rdoc/ERB.html" rel="nofollow"><code>ERB</code></a>) to write the necessary files:</p>

<pre><code class="language-ruby">#!/usr/bin/env ruby
require &#39;json&#39;
require &#39;erb&#39;
require &#39;ostruct&#39; # for OpenStruct below

template_file = &#34;./template.toml.erb&#34;
output_file_name = &#34;./configuration/foo/conf-%{id_token}.toml&#34;

# given a JSON array through STDIN of [ {conf}, {conf} ]
confs = JSON.parse($stdin.read, symbolize_names: true).map{|c| OpenStruct.new c }

# we can prepare the ERB template
template = ERB.new(File.read(template_file), trim_mode: &#34;%-&#34;)

# finally, write out files for the conf objects
confs.each do |c|
  result = template.result(c.instance_eval(&#34;binding&#34;))
  File.write(output_file_name % c.to_h, result)
end
</code></pre>
]]></content:encoded>
      <guid>https://blog.prag.dev/ruby-lightweight-templating</guid>
      <pubDate>Tue, 21 Sep 2021 22:04:16 +0000</pubDate>
    </item>
    <item>
      <title>Emacs 28.0.50: turning commands into new frames</title>
      <link>https://blog.prag.dev/emacs-28-0-50-turning-commands-into-new-frames?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[tl;dr: try out C-x 5 5! (will be available in 28.1)&#xA;&#xA;Emacs&#39; frames are mighty powerful, though they can easily turn into a mess of windows.. but that&#39;s not what I wanted to share here.&#xA;&#xA;Today, I wanted to open up an Info page in its own frame. That way I could keep it up and try out some command that typically mucks about with the current window setup. So, to avoid that a new frame would be handy - I thought - and so I went looking into the C-x 5 prefix&#39;s bindings:&#xA;&#xA;C-x 5 C-h:&#xA;&#xA;Global Bindings Starting With C-x 5:&#xA;key             binding&#xA;---             -------&#xA;&#xA;C-x 5 C-f       find-file-other-frame&#xA;C-x 5 C-o       display-buffer-other-frame&#xA;C-x 5 .         xref-find-definitions-other-frame&#xA;C-x 5 0         delete-frame&#xA;C-x 5 1         delete-other-frames&#xA;C-x 5 2         make-frame-command&#xA;C-x 5 5         other-frame-prefix&#xA;C-x 5 b         switch-to-buffer-other-frame&#xA;C-x 5 d         dired-other-frame&#xA;C-x 5 f         find-file-other-frame&#xA;C-x 5 m         compose-mail-other-frame&#xA;C-x 5 o         other-frame&#xA;C-x 5 p         project-other-frame-command&#xA;C-x 5 r         find-file-read-only-other-frame&#xA;&#xA;This one caught my eye - its new and coming in Emacs 28.1! &#xA;&#xA;C-x 5 5         other-frame-prefix&#xA;&#xA;So, to get my new frame with my desired Info page up, I hit the following keys:&#xA;&#xA;C-x 5 5 M-x i n f o RET&#xA;&#xA;That&#39;s it.&#xA;&#xA;---&#xA;&#xA;I keep up with the development branch of Emacs (mirrored on GitHub) on all my machines - and its coming features like this that remind me how much folks still care about Emacs and why I ought to keep up with the latest development efforts. 
Between general API improvements and the performance gains of Native Compilation, Emacs is a worthy tool to belong to every and any pragmatic fool (I dare you to be pragmatic when elisp is at your fingertips).]]&gt;</description>
      <content:encoded><![CDATA[<p>tl;dr: try out <code>C-x 5 5</code>! (will be available in <a href="https://github.com/emacs-mirror/emacs/commit/ba8370bc38ace70149f0af9a88fcdb35e33fe31e" rel="nofollow">28.1</a>)</p>

<p>Emacs&#39; <a href="https://www.gnu.org/software/emacs/manual/html_node/emacs/Frames.html" rel="nofollow">frames</a> are mighty powerful, though they can easily turn into a mess of windows.. but that&#39;s not what I wanted to share here.</p>

<p>Today, I wanted to open up an <a href="https://www.gnu.org/software/emacs/manual/html_mono/info.html" rel="nofollow">Info</a> page in its own frame. That way I could keep it up and try out some command that typically mucks about with the current window setup. So, to avoid that, a new frame would be handy – I thought – and I went looking into the <code>C-x 5</code> prefix&#39;s bindings:</p>

<p><code>C-x 5 C-h</code>:</p>

<pre><code>Global Bindings Starting With C-x 5:
key             binding
---             -------

C-x 5 C-f       find-file-other-frame
C-x 5 C-o       display-buffer-other-frame
C-x 5 .         xref-find-definitions-other-frame
C-x 5 0         delete-frame
C-x 5 1         delete-other-frames
C-x 5 2         make-frame-command
C-x 5 5         other-frame-prefix
C-x 5 b         switch-to-buffer-other-frame
C-x 5 d         dired-other-frame
C-x 5 f         find-file-other-frame
C-x 5 m         compose-mail-other-frame
C-x 5 o         other-frame
C-x 5 p         project-other-frame-command
C-x 5 r         find-file-read-only-other-frame
</code></pre>

<p>This one caught my eye – it&#39;s <a href="https://github.com/emacs-mirror/emacs/commit/ba8370bc38ace70149f0af9a88fcdb35e33fe31e" rel="nofollow">new</a> and coming in Emacs 28.1!</p>

<pre><code>C-x 5 5         other-frame-prefix
</code></pre>

<p>So, to get my new frame with my desired Info page up, I hit the following keys:</p>

<pre><code>C-x 5 5 M-x i n f o &lt;RET&gt;
</code></pre>

<p>That&#39;s it.</p>

<hr/>

<p>I keep up with the <a href="https://github.com/emacs-mirror/emacs/commits/master" rel="nofollow">development branch of Emacs (mirrored on GitHub)</a> on all my machines – and it&#39;s coming features like this that remind me how much folks still care about Emacs and why I ought to keep up with the latest development efforts. Between general API improvements and the performance gains of Native Compilation, Emacs is a worthy tool to belong to every and any pragmatic fool (I dare you to be pragmatic when elisp is at your fingertips).</p>
]]></content:encoded>
      <guid>https://blog.prag.dev/emacs-28-0-50-turning-commands-into-new-frames</guid>
      <pubDate>Tue, 03 Aug 2021 22:25:21 +0000</pubDate>
    </item>
    <item>
      <title>Using FreeBSD&#39;s sysrc and serving tftp resources</title>
      <link>https://blog.prag.dev/using-freebsds-sysrc-and-serving-tftp-resources?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[TIL about sysrc(8)&#xA;&#xA;The sysrc tool is very handy for administering FreeBSD hosts. I&#39;ve never shied away from vim /etc/rc.conf (though really emacs /etc/rc.conf for tidy editing, I&#39;m an Emacser) so its not a tool that would have been sought out in my case. That said, any programmatic access to the system configuration is always welcomed with open arms.&#xA;&#xA;Nevertheless, having more tools in the pragmatic belt makes for a better belt - as long as it makes things more practical and pragmatic in approach. In my case, I wanted to setup a tFTP server to allow me to install a fresh image on my network hardware. They require a tftp://$serverip/junos/$releaseblob.tgz URL to load themselves up from and this means.. well that you need a tFTP server.&#xA;&#xA;So, to set up the server we&#39;ll need have inetd start it and then start (or restart) inetd: &#xA;&#xA;Enable inetd services&#xA;sysrc inetdenable=YES&#xA;Configure tFTP to start (by inetd)&#xA;sed -i &#39;&#39; -E &#39;s/#(tftp.*udp[[:blank:]])/\1/&#39; /etc/inetd.conf&#xA;Start it up!&#xA;service inetd start&#xA;&#xA;My host uses ZFS as its root filesystem, so I also added a filesystem where /tftpboot lives:&#xA;&#xA;zfs create -o mountpoint=/tftpboot $rootfs/tftpboot&#xA;&#xA;With that, you can place files in /tftpboot that&#39;ll be retrievable by clients. Neat &amp; simple.]]&gt;</description>
      <content:encoded><![CDATA[<p>TIL about <a href="https://www.freebsd.org/cgi/man.cgi?query=sysrc&amp;apropos=0&amp;sektion=8&amp;manpath=FreeBSD+13.0-current&amp;arch=default&amp;format=html" rel="nofollow"><code>sysrc(8)</code></a></p>

<p>The <code>sysrc</code> tool is <em>very</em> handy for administering FreeBSD hosts. I&#39;ve never shied away from <code>vim /etc/rc.conf</code> (though really <code>emacs /etc/rc.conf</code> for tidy editing – I&#39;m an Emacser), so it&#39;s not a tool I would have sought out on my own. That said, any programmatic access to the system configuration is always welcomed with open arms.</p>

<p>Nevertheless, having more tools in the pragmatic belt makes for a better belt – as long as it makes things more practical <em>and</em> pragmatic in approach. In my case, I wanted to set up a tFTP server to allow me to install a fresh image on my network hardware. The devices require a <code>tftp://$serverip/junos/$release_blob.tgz</code> URL to load themselves up from, and this means.. well, that you need a tFTP server.</p>

<p>So, to set up the server, we&#39;ll need to have <code>inetd</code> start it and then start (or restart) <code>inetd</code>:</p>

<pre><code class="language-bash"># Enable inetd services
sysrc inetd_enable=YES
# Configure tFTP to start (by inetd)
sed -i &#39;&#39; -E &#39;s/#(tftp.*udp[[:blank:]])/\1/&#39; /etc/inetd.conf
# Start it up!
service inetd start
</code></pre>

<p>My host uses ZFS as its root filesystem, so I also added a filesystem where <code>/tftpboot</code> lives:</p>

<pre><code class="language-bash">zfs create -o mountpoint=/tftpboot $rootfs/tftpboot
</code></pre>

<p>With that, you can place files in <code>/tftpboot</code> that&#39;ll be retrievable by clients. Neat &amp; simple.</p>
]]></content:encoded>
      <guid>https://blog.prag.dev/using-freebsds-sysrc-and-serving-tftp-resources</guid>
      <pubDate>Mon, 24 May 2021 20:18:56 +0000</pubDate>
    </item>
  </channel>
</rss>